Combine LangGraph's graph-based orchestration with MultiMail's email infrastructure to build multi-step email workflows with built-in human approval checkpoints.
LangGraph is a framework for building stateful, multi-step agent workflows as directed graphs. Built by the LangChain team, it excels at complex agent patterns that require cycles, branching, persistence, and human-in-the-loop checkpoints. MultiMail provides the email infrastructure layer that LangGraph agents need to send, receive, and manage messages within these workflows.
LangGraph's checkpoint system pairs naturally with MultiMail's oversight modes. You can model email approval as a graph node that pauses execution until a human approves the draft in MultiMail's pending queue, then resumes the workflow automatically. This makes gated_send a first-class workflow primitive rather than an afterthought.
Connect LangGraph to MultiMail by defining tool nodes that call the MultiMail REST API or by integrating the @multimail/mcp-server. Both approaches let your graph nodes send, read, and reply to emails while respecting oversight boundaries.
Model email approval as an explicit node in your state graph: execution pauses until a human approves the draft in MultiMail's pending queue, then the workflow resumes.
LangGraph persists state across graph executions. Combined with MultiMail's thread tracking via get_thread, your agent maintains full context across multi-turn email conversations spanning days or weeks.
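For example, a graph node can pull the full thread into the LLM's context before drafting a reply. This is a sketch, not the official client: it assumes a `GET /threads/{id}` REST endpoint behind the get_thread tool, returning `{"messages": [...]}` with `from`, `date`, and `body` fields.

```python
import requests

MULTIMAIL_API = "https://api.multimail.dev/v1"
HEADERS = {"Authorization": "Bearer mm_live_your_api_key"}

def fetch_thread(thread_id: str) -> list[dict]:
    # Assumed REST shape for the get_thread tool: GET /threads/{id}
    resp = requests.get(f"{MULTIMAIL_API}/threads/{thread_id}", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["messages"]

def thread_to_context(messages: list[dict]) -> str:
    """Flatten a thread into a prompt block so the LLM sees the full history."""
    return "\n\n".join(
        f"From: {m['from']}\nDate: {m['date']}\n{m['body']}" for m in messages
    )
```

A drafting node would call `fetch_thread`, pass `thread_to_context(...)` into the prompt, and let LangGraph's checkpointer carry the rest of the state between runs.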
Use conditional edges to route emails based on content, sender, or intent. LangGraph's branching lets you build sophisticated triage workflows that classify inbound mail and route responses through different processing paths.
LangGraph supports cycles, enabling agents that check for replies, send follow-ups, and loop until a conversation reaches resolution — all with MultiMail handling delivery and thread management.
Start new email workflows in gated_send mode for safety, then programmatically escalate to monitored or autonomous as the workflow proves reliable. MultiMail's five oversight levels map to different trust stages in your graph.
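One way to sketch that escalation: track consecutive approved sends and promote the mailbox one level at a time. The escalation policy below is pure logic; the PATCH endpoint shape is hypothetical, so check the MultiMail API reference for the real call.

```python
import requests

# Three of the oversight levels, ordered by increasing trust.
LEVELS = ["gated_send", "monitored", "autonomous"]

def next_level(current: str, approved_streak: int, threshold: int = 20) -> str:
    """Escalate one level after `threshold` consecutive approved sends."""
    i = LEVELS.index(current)
    if approved_streak >= threshold and i < len(LEVELS) - 1:
        return LEVELS[i + 1]
    return current

def set_oversight(mailbox_id: str, level: str) -> None:
    # Hypothetical endpoint shape -- verify against the MultiMail API docs.
    requests.patch(
        f"https://api.multimail.dev/v1/mailboxes/{mailbox_id}",
        headers={"Authorization": "Bearer mm_live_your_api_key"},
        json={"oversight": level},
    )
```

A graph node can run `next_level` after each approved send and call `set_oversight` only when the level changes.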
import operator
from typing import Annotated, Sequence, TypedDict

import requests
from langchain_core.messages import BaseMessage
from langgraph.graph import StateGraph, END

MULTIMAIL_API = "https://api.multimail.dev/v1"
HEADERS = {"Authorization": "Bearer mm_live_your_api_key"}

class EmailState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]
    inbox: list
    draft: dict
    approved: bool
    classification: str  # set by the triage workflow

def check_inbox(state: EmailState) -> dict:
    """Fetch the latest inbox messages from MultiMail."""
    resp = requests.get(
        f"{MULTIMAIL_API}/mailboxes/your_mailbox_id/inbox",
        headers=HEADERS,
        params={"limit": 10},
    )
    return {"inbox": resp.json()["emails"]}

def send_email(state: EmailState) -> dict:
    """Submit the draft; in gated_send mode this queues it for approval."""
    draft = state["draft"]
    resp = requests.post(f"{MULTIMAIL_API}/send", headers=HEADERS, json=draft)
    return {"messages": [f"Email sent: {resp.json()}"]}

Set up the state schema and MultiMail API tools for a LangGraph email workflow.
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver

def draft_reply(state: EmailState) -> dict:
    """LLM drafts a reply based on inbox content."""
    email = state["inbox"][0]
    return {"draft": {
        "mailbox_id": "your_mailbox_id",
        "to": email["from"],
        "subject": f"Re: {email['subject']}",
        "body": "Thank you for your message. I'll review and respond shortly."
    }}

def check_approval(state: EmailState) -> dict:
    """Check whether the pending email was approved in the MultiMail dashboard."""
    resp = requests.get(f"{MULTIMAIL_API}/pending", headers=HEADERS)
    pending = resp.json().get("pending", [])
    return {"approved": len(pending) == 0}  # empty queue means the draft was approved

def should_send(state: EmailState) -> str:
    return "send" if state["approved"] else "wait"

graph = StateGraph(EmailState)
graph.add_node("check_inbox", check_inbox)
graph.add_node("draft", draft_reply)
graph.add_node("submit", send_email)  # in gated_send mode this only queues the draft
graph.add_node("await_approval", check_approval)
graph.set_entry_point("check_inbox")
graph.add_edge("check_inbox", "draft")
graph.add_edge("draft", "submit")
graph.add_edge("submit", "await_approval")
graph.add_conditional_edges("await_approval", should_send, {
    "send": END,
    "wait": "await_approval"  # re-check the queue; add a delay in production to avoid a busy loop
})
checkpointer = MemorySaver()
app = graph.compile(checkpointer=checkpointer)

Create a LangGraph workflow that drafts an email, waits for human approval via MultiMail's pending queue, then continues.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")

def classify_email(state: EmailState) -> dict:
    """Classify email intent using the LLM."""
    email = state["inbox"][0]
    result = llm.invoke(
        f"Classify this email as 'urgent', 'routine', or 'spam': {email['subject']}"
    )
    return {"messages": [result], "classification": result.content.strip().lower()}

def route_email(state: EmailState) -> str:
    classification = state.get("classification", "routine")
    if classification == "urgent":
        return "urgent_reply"
    elif classification == "routine":
        return "auto_reply"
    return "drop"  # spam gets dropped

graph = StateGraph(EmailState)
graph.add_node("fetch", check_inbox)
graph.add_node("classify", classify_email)
graph.add_node("urgent_reply", draft_reply)  # gated_send for review
graph.add_node("auto_reply", send_email)     # monitored mode
graph.set_entry_point("fetch")
graph.add_edge("fetch", "classify")
graph.add_conditional_edges("classify", route_email, {
    "urgent_reply": "urgent_reply",
    "auto_reply": "auto_reply",
    "drop": END
})
graph.add_edge("urgent_reply", END)
graph.add_edge("auto_reply", END)
app = graph.compile()

Build a graph that classifies inbound emails and routes them through different processing paths.
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(model="gpt-4o")

async def create_email_graph():
    async with MultiServerMCPClient({
        "multimail": {
            "command": "npx",
            "args": ["-y", "@multimail/mcp-server"],
            "env": {"MULTIMAIL_API_KEY": "mm_live_your_api_key"},
            "transport": "stdio"
        }
    }) as client:
        tools = client.get_tools()
        # create_react_agent builds a LangGraph graph with tool nodes
        agent = create_react_agent(llm, tools)
        result = await agent.ainvoke({
            "messages": [{"role": "user", "content": "Check inbox and reply to urgent emails"}]
        })
        return result

Use the MultiMail MCP server with LangGraph for automatic tool discovery in graph nodes.
Sign up at multimail.dev, create a mailbox, and generate an API key from your dashboard. Your key will start with mm_live_.
Install LangGraph, LangChain's OpenAI integration, and requests for calling the MultiMail API.
pip install langgraph langchain-openai requests

Create a TypedDict for your email workflow state and implement node functions that call MultiMail endpoints for inbox checking, email drafting, and sending.
Wire nodes together with edges and conditional routing. Add a checkpointer for state persistence across workflow executions.
graph = StateGraph(EmailState)
graph.add_node("check_inbox", check_inbox)
graph.add_node("draft", draft_reply)
graph.set_entry_point("check_inbox")
app = graph.compile(checkpointer=MemorySaver())

Execute the graph and review pending emails in the MultiMail dashboard. Approve or reject drafts, and the graph will proceed based on approval status.
config = {"configurable": {"thread_id": "email-workflow-1"}}  # required when a checkpointer is set
result = app.invoke(
    {"messages": [], "inbox": [], "draft": {}, "approved": False},
    config,
)
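Approval can also be managed outside the dashboard, for example to alert a human when drafts sit in the queue too long. The age check below is plain Python; the `created_at` field on pending entries is an assumption about the /pending payload, so verify it against the API reference.

```python
from datetime import datetime, timezone
from typing import Optional

import requests

MULTIMAIL_API = "https://api.multimail.dev/v1"
HEADERS = {"Authorization": "Bearer mm_live_your_api_key"}

def list_pending() -> list[dict]:
    # Same endpoint the graph's check_approval node polls.
    resp = requests.get(f"{MULTIMAIL_API}/pending", headers=HEADERS)
    return resp.json().get("pending", [])

def stale_drafts(pending: list[dict], max_age_hours: float = 24.0,
                 now: Optional[datetime] = None) -> list[dict]:
    """Return drafts that have been waiting longer than max_age_hours."""
    now = now or datetime.now(timezone.utc)
    out = []
    for p in pending:
        created = datetime.fromisoformat(p["created_at"])  # assumed field
        if (now - created).total_seconds() > max_age_hours * 3600:
            out.append(p)
    return out
```

Run `stale_drafts(list_pending())` on a schedule and route the result to whatever alerting you already use, so gated workflows never stall silently.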