Hosting an agent server on Heroku

I’m trying to host an agent on Heroku as an Agent Server.

I’ve been following this guide so far: Self-host standalone servers - Docs by LangChain

According to the guide, the server needs access to a Postgres database and a Redis store.

On Heroku, I use the Container Registry, since I figure Heroku's regular build system (where you just push to their Git repo) won't work for this: Container Registry & Runtime (Docker Deploys) | Heroku Dev Center
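For reference, my deploy flow with the Container Registry looks roughly like this (the app name is a placeholder):

```shell
heroku container:login                        # authenticate Docker against Heroku's registry
heroku container:push web -a my-agent-app     # build and push the image for the web process
heroku container:release web -a my-agent-app  # release the pushed image
```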

One major problem is that the LangGraph server expects DATABASE_URI and REDIS_URI as environment variables. Heroku, however, sets these itself as DATABASE_URL and REDIS_URL, the names can't be changed, and the connection strings can be rotated by Heroku at any time.

Hence, just duplicating these variables and copying over the connection strings won’t work either (not long term, at least).

So I wondered whether there's a way to either configure LangGraph to use different variable names (looking into langgraph_api/config.py, it doesn't seem possible), or to set the variables somewhere else at runtime, e.g. when the Docker container starts up.
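What I have in mind is a small entrypoint script that remaps the variable names at container start, before the server reads its config. A sketch (the script name and wiring are my assumption, not anything Heroku or LangGraph prescribes):

```shell
#!/bin/sh
# entrypoint.sh (hypothetical): remap Heroku's variable names to the ones
# the LangGraph server expects, then hand off to the original command.
export DATABASE_URI="${DATABASE_URL}"
export REDIS_URI="${REDIS_URL}"
exec "$@"
```

This would be wired in via the Dockerfile, e.g. `ENTRYPOINT ["/entrypoint.sh"]` in front of the existing `CMD`. Since the remapping happens on every container start, a connection string rotated by Heroku would be picked up automatically at the next restart.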

I’ve also seen these here; maybe this helps? If so, where do I define these vars, and how do I set them to my DATABASE_URL from the Heroku environment?

Here’s my agent.py file. Note that it also loads another DATABASE_URL, but that’s the one containing the production data the agent is supposed to answer questions about.

import os
from datetime import datetime, timedelta

import dotenv
import numpy as np
import pandas as pd
from langchain.agents import create_agent
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.utilities import SQLDatabase
from langgraph.checkpoint.memory import MemorySaver
from langchain_experimental.tools import PythonREPLTool
from langchain_experimental.utilities import PythonREPL
from langchain_openai import ChatOpenAI

from system_prompts import SQL_AGENT_PROMPT

dotenv.load_dotenv(override=False)

# Use LM Studio for local models or OpenAI for cloud
USE_LOCAL = os.getenv("USE_LOCAL", "false").lower() == "true"
# Disable checkpointer for langgraph server (enabled by default)
DISABLE_CHECKPOINTER = os.getenv("DISABLE_CHECKPOINTER", "false").lower() == "true"

# just for local hosting
if USE_LOCAL:
    model = ChatOpenAI(
        # Model name doesn't matter for LM Studio
        model=os.getenv("LOCAL_MODEL", "local-model"),
        temperature=0,
        streaming=True,
        base_url=os.getenv("LOCAL_BASE_URL", "http://localhost:1234/v1"),
        # LM Studio doesn't require API key
        api_key="not-needed",
    )
else:
    model = ChatOpenAI(model="gpt-5.1", temperature=0, streaming=True)


# was used for former deployment, might be obsolete
def normalize_db_uri(uri: str) -> str:
    """
    Normalize database connection URIs.
    - Convert 'postgres://' (Heroku) to 'postgresql://' (SQLAlchemy)
    """
    if not uri:
        return uri
    if uri.startswith("postgres://"):
        return uri.replace("postgres://", "postgresql://", 1)
    return uri


database_url = os.getenv("DATABASE_URL")
if not database_url:
    raise RuntimeError("DATABASE_URL is not set")
database_url = normalize_db_uri(database_url)

db = SQLDatabase.from_uri(database_url)

toolkit = SQLDatabaseToolkit(db=db, llm=model)

tools = [tool for tool in toolkit.get_tools() if
         tool.name != "sql_db_query_checker"]

# Python REPL tool for code execution, with restricted globals
python_repl_instance = PythonREPL(
    _globals={
        "pd": pd,
        "pandas": pd,
        "np": np,
        "numpy": np,
        "datetime": datetime,
        "timedelta": timedelta,
    }
)

# Create the tool with custom description
python_repl = PythonREPLTool(python_repl=python_repl_instance)
python_repl.description = (
    "A Python shell for executing code. Use this to process data, "
    "perform calculations, filter results, or do multi-step analysis. "
    "Input should be valid Python code. Available libraries: pandas (pd), "
    "numpy (np), datetime. Results from SQL queries can be converted to "
    "DataFrames for processing."
)
tools.append(python_repl)

system_prompt = SQL_AGENT_PROMPT.format(
    dialect=db.dialect,
    top_k=5,
)

checkpointer = None if DISABLE_CHECKPOINTER else MemorySaver()

agent = create_agent(
    model, tools, system_prompt=system_prompt, checkpointer=checkpointer
)
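As a quick self-contained sanity check of the URI normalization above (the connection string is made up):

```python
def normalize_db_uri(uri: str) -> str:
    # Heroku hands out 'postgres://' URIs; SQLAlchemy wants 'postgresql://'
    if not uri:
        return uri
    if uri.startswith("postgres://"):
        return uri.replace("postgres://", "postgresql://", 1)
    return uri

print(normalize_db_uri("postgres://user:pw@host:5432/db"))
# → postgresql://user:pw@host:5432/db
print(normalize_db_uri("postgresql://user:pw@host:5432/db"))  # already normalized, unchanged
```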

I’ve figured it out and wrote up my findings (will add an example repo later):
