UPDATE:
Created the setup below:
- MCP server running in Docker
- LangChain client running on Windows
Still the same issue, and I have no idea what is happening. This time I used the Streamable HTTP transport.
Docker run output:
INFO: Started server process [7]
INFO: Waiting for application startup.
INFO | mcp.server.streamable_http_manager | streamable_http_manager.py:128 | StreamableHTTP session manager started
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO | mcp.server.streamable_http_manager | streamable_http_manager.py:255 | Created new transport with session ID: fa3cd8fb3c2941f1b455a9b90ac00ba6
INFO: 172.17.0.1:41112 - "POST /mcp HTTP/1.1" 200 OK
INFO: 172.17.0.1:41122 - "POST /mcp HTTP/1.1" 202 Accepted
INFO: 172.17.0.1:41126 - "GET /mcp HTTP/1.1" 200 OK
INFO: 172.17.0.1:41136 - "POST /mcp HTTP/1.1" 200 OK
INFO | mcp.server.lowlevel.server | server.py:727 | Processing request of type ListToolsRequest
INFO | mcp.server.streamable_http | streamable_http.py:785 | Terminating session: fa3cd8fb3c2941f1b455a9b90ac00ba6
INFO: 172.17.0.1:41150 - "DELETE /mcp HTTP/1.1" 200 OK
INFO | mcp.server.streamable_http_manager | streamable_http_manager.py:255 | Created new transport with session ID: 5f5d5c8550f94b5f8d9fcd9cb85b233c
INFO: 172.17.0.1:43074 - "POST /mcp HTTP/1.1" 200 OK
INFO: 172.17.0.1:43076 - "POST /mcp HTTP/1.1" 202 Accepted
INFO: 172.17.0.1:43084 - "GET /mcp HTTP/1.1" 200 OK
INFO: 172.17.0.1:43086 - "POST /mcp HTTP/1.1" 200 OK
INFO | mcp.server.lowlevel.server | server.py:727 | Processing request of type CallToolRequest
INFO | mcp_tools_logger | tally_mcp_tools.py:19 | [1] Received request with transport: None
INFO: 172.17.0.1:43102 - "POST /mcp HTTP/1.1" 200 OK
INFO | mcp.server.lowlevel.server | server.py:727 | Processing request of type ListToolsRequest
INFO | mcp.server.streamable_http | streamable_http.py:785 | Terminating session: 5f5d5c8550f94b5f8d9fcd9cb85b233c
INFO: 172.17.0.1:43116 - "DELETE /mcp HTTP/1.1" 200 OK
MCP Client output:
(.venv) PS D:\work\code.tallyai.backend\conversational-ai\infra\cdk\lib\docker\llm-ws> python .\tally_mcp_client_test.py
[human]: list my companies.
[tool]: [{'type': 'text', 'text': '{\n "transport": null,\n "companies": [\n "Acme Corp",\n "Globex Inc",\n "Initech Ltd"\n ],\n "note": "Fetched via None transport"\n}', 'id': 'lc_619d944a-f402-4865-9ef6-569ed76ae5d2'}]
[ai]: Here are your companies: Acme Corp, Globex Inc, Initech Ltd.
(.venv) PS D:\work\code.tallyai.backend\conversational-ai\infra\cdk\lib\docker\llm-ws>
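For readability, the escaped JSON inside the tool message parses to a plain object whose transport field is null (values copied from the output above):

```python
import json

# Tool text copied from the client output above; the \n escapes are part of
# the wire payload and json.loads handles them transparently.
tool_text = (
    '{\n  "transport": null,\n  "companies": [\n    "Acme Corp",\n'
    '    "Globex Inc",\n    "Initech Ltd"\n  ],\n'
    '  "note": "Fetched via None transport"\n}'
)
data = json.loads(tool_text)
print(data["transport"])   # None
print(data["companies"])   # ['Acme Corp', 'Globex Inc', 'Initech Ltd']
```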
Is the issue on the Windows client side? The same client works when run from a Mac, but not when run from Windows.
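One Windows-vs-Mac difference worth ruling out before blaming the client code: "localhost" can resolve to the IPv6 loopback ::1 first on Windows, while Docker Desktop publishes container ports on the IPv4 loopback, so the same URL can behave differently per OS. A quick stdlib check (port 8000 assumed from the setup above):

```python
import socket

# List every address "localhost" resolves to on this machine. If ::1 comes
# first and the Docker port is only published on 127.0.0.1, a client using
# "localhost" can fail here while identical code works on another OS.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo("localhost", 8000):
    print(family.name, sockaddr)
```

Comparing this output on the Windows and Mac machines would show whether the difference is in name resolution rather than in LangChain or the MCP server.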
Client Code:
tally_mcp_client_test.py
import os

os.environ["MCP_PORT"] = "8000"

import asyncio

from langchain.agents import create_agent
from langchain_mcp_adapters.client import MultiServerMCPClient  # missing from the original snippet

mcp_port = os.getenv("MCP_PORT", "8000")


def get_mcp_client(thread_id: str, company_id: str, mobile_number: str) -> MultiServerMCPClient:
    tally_mcp_client = MultiServerMCPClient(
        {
            "tally_mcp_server": {
                "transport": "streamable_http",
                "url": "http://localhost:" + mcp_port + "/mcp",
            }
        }
    )
    return tally_mcp_client


async def main():
    # Placeholder values; the original snippet referenced these names without defining them.
    thread_id, company_id, mobile_number = "t1", "c1", "0000000000"
    client = get_mcp_client(thread_id, company_id, mobile_number)

    # Get tools
    tools = await client.get_tools()

    agent = create_agent(
        model=...,
        tools=tools,
    )

    tally_response = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "list my companies."}]}
    )
    for msg in tally_response["messages"]:
        role = getattr(msg, "type", msg.__class__.__name__)
        content = msg.content
        if content:
            print(f"\n[{role}]: {content}")


if __name__ == "__main__":
    asyncio.run(main())
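If "localhost" resolution turns out to differ between the two machines, a workaround is to pin the client URL to the IPv4 loopback explicitly before passing it to MultiServerMCPClient. A minimal sketch (the helper name is mine, not part of the adapters API):

```python
from urllib.parse import urlsplit, urlunsplit


def force_ipv4_loopback(url: str) -> str:
    """Rewrite a localhost URL to 127.0.0.1 so the client always hits the
    IPv4 loopback that Docker publishes on (hypothetical helper)."""
    parts = urlsplit(url)
    if parts.hostname == "localhost":
        netloc = "127.0.0.1"
        if parts.port:
            netloc += f":{parts.port}"
        parts = parts._replace(netloc=netloc)
    return urlunsplit(parts)


print(force_ipv4_loopback("http://localhost:8000/mcp"))  # http://127.0.0.1:8000/mcp
```

Using "http://127.0.0.1:8000/mcp" directly in the config would test the same thing without any code change.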
Docker Directory For MCP Server:
Dockerfile:
# Step 1: Use the official Python image (LTS version)
FROM python:3.13
ARG MCP_PORT
ARG LOG_LEVEL
# Step 2: Set the working directory inside the container
WORKDIR /usr/src/app
COPY requirements.txt ./
COPY *.py ./
# Step 3: Install required Python packages
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# Step 4: Expose the application port
EXPOSE ${MCP_PORT}
# Step 5: Set environment variables
ENV MCP_PORT=${MCP_PORT}
ENV LOG_LEVEL=${LOG_LEVEL}
# Step 6: Set the command to start the application
COPY --chmod=755 <<EOT /usr/src/app/start.sh
#!/usr/bin/env bash
set -e
uvicorn --host 0.0.0.0 --port ${MCP_PORT} tally_mcp_server:mcp_app
EOT
ENTRYPOINT ["/usr/src/app/start.sh"]
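One thing worth double-checking in the Dockerfile: MCP_PORT is a build ARG, so if the image is built without --build-arg MCP_PORT=..., ENV MCP_PORT=${MCP_PORT} bakes in an empty string and uvicorn's --port gets no value. A `:-` default in start.sh guards against that; a sketch of just the defaulting step:

```shell
# Sketch of a defensive start.sh: default the port when the build ARG was
# never supplied (otherwise ENV MCP_PORT=${MCP_PORT} bakes in an empty string).
unset MCP_PORT                      # simulate the "ARG never passed" case
MCP_PORT="${MCP_PORT:-8000}"
echo "would start uvicorn on port ${MCP_PORT}"
```

The real script would then run `uvicorn --host 0.0.0.0 --port ${MCP_PORT} tally_mcp_server:mcp_app` unchanged.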
tally_mcp_server.py
from mcp.server.fastmcp import Context, FastMCP  # take Context from the same SDK as FastMCP, not from the separate fastmcp package

# Create a server instance
tally_mcp = FastMCP(
    name="tally_mcp_server"
)

# ASGI app for uvicorn: the Dockerfile runs `tally_mcp_server:mcp_app`,
# so this module must define that name.
mcp_app = tally_mcp.streamable_http_app()


@tally_mcp.tool()
async def list_companies(ctx: Context) -> dict:
    """Return transport info and list sample companies based on the transport type."""
    # Guard the attribute lookup: `transport` is not a documented field on this
    # Context, which is consistent with the None seen in the logs above.
    transport = getattr(ctx, "transport", None)
    companies = ["Acme Corp", "Globex Inc", "Initech Ltd"]
    return {
        "transport": transport,
        "companies": companies,
        "note": f"Fetched via {transport!r} transport",
    }
My client-side requirements.txt file (is this a problem?)
langchain-google-vertexai~=3.2.2
boto3~=1.42.83
fastapi~=0.124.4
pandas~=2.3.3
psutil~=7.1.3
langchain-tavily~=0.2.17
cryptography~=46.0.6
langchain-community~=0.4.1
valkey~=6.1.1
langchain_aws~=1.1.0
scipy~=1.16.3
langchain_chroma~=1.0.0
langchain_milvus~=0.3.3
tiktoken~=0.12.0
jellyfish~=1.2.1
langgraph-checkpoint-postgres~=3.0.5
uvicorn-worker~=0.4.0
uvicorn~=0.38.0
gunicorn~=23.0.0
aiohttp~=3.13.5
google-auth~=2.49.1
google-auth-oauthlib~=1.3.1
langchain~=1.2.15
langchain-core~=1.2.26
langchain-google-genai~=4.2.1
google-genai~=1.66.0
google-cloud-aiplatform~=1.140.0
vertexai~=1.43.0
langgraph~=1.1.6
langgraph-supervisor~=0.0.31
langgraph-prebuilt~=1.0.9
fastmcp~=3.2.2
langchain-mcp-adapters~=0.2.2