Stateful Email Workflows for LangGraph Agents

Combine LangGraph's graph-based orchestration with MultiMail's email infrastructure to build multi-step email workflows with built-in human approval checkpoints.


LangGraph is a framework for building stateful, multi-step agent workflows as directed graphs. Built by the LangChain team, it excels at complex agent patterns that require cycles, branching, persistence, and human-in-the-loop checkpoints. MultiMail provides the email infrastructure layer that LangGraph agents need to send, receive, and manage messages within these workflows.

LangGraph's checkpoint system pairs naturally with MultiMail's oversight modes. You can model email approval as a graph node that pauses execution until a human approves the draft in MultiMail's pending queue, then resumes the workflow automatically. This makes gated_send a first-class workflow primitive rather than an afterthought.

Connect LangGraph to MultiMail by defining tool nodes that call the MultiMail REST API or by integrating the @multimail/mcp-server. Both approaches let your graph nodes send, read, and reply to emails while respecting oversight boundaries.

Built for LangGraph developers

Human-in-the-Loop as Graph Nodes

LangGraph's checkpoint system and MultiMail's gated_send mode align perfectly. Model email approval as an explicit node in your state graph, pausing execution until the human approves the draft.

Stateful Email Conversations

LangGraph persists state across graph executions. Combined with MultiMail's thread tracking via get_thread, your agent maintains full context across multi-turn email conversations spanning days or weeks.
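As a sketch of how thread context can flow into graph state: the node below fetches a full thread and flattens it into a prompt-ready transcript. The `/threads/{id}` endpoint path and the response shape (`{"messages": [{"from": ..., "body": ...}]}`) are assumptions for illustration; check them against the MultiMail API reference.

```python
import requests

MULTIMAIL_API = "https://api.multimail.dev/v1"
HEADERS = {"Authorization": "Bearer mm_live_your_api_key"}

def thread_to_context(thread: dict) -> str:
    """Flatten a thread payload into a prompt-ready transcript."""
    lines = []
    for msg in thread.get("messages", []):
        lines.append(f"{msg['from']}: {msg['body']}")
    return "\n".join(lines)

def load_thread(state: dict) -> dict:
    """Graph node: pull the full thread so the LLM sees every prior turn."""
    # Endpoint path is an assumption; see the get_thread API docs.
    resp = requests.get(
        f"{MULTIMAIL_API}/threads/{state['thread_id']}", headers=HEADERS
    )
    return {"thread_context": thread_to_context(resp.json())}
```

Because the transcript lives in graph state, a persistent checkpointer carries it across executions even when replies arrive days apart.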

Branching Email Logic

Use conditional edges to route emails based on content, sender, or intent. LangGraph's branching lets you build sophisticated triage workflows that classify inbound mail and route responses through different processing paths.

Cyclic Workflows for Follow-ups

LangGraph supports cycles, enabling agents that check for replies, send follow-ups, and loop until a conversation reaches resolution — all with MultiMail handling delivery and thread management.
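The heart of such a cycle is its exit condition. This minimal sketch shows a conditional-edge router that loops back to a follow-up node until a reply arrives or a cap is reached; the node names and the three-attempt cap are illustrative, not part of MultiMail's API.

```python
MAX_FOLLOW_UPS = 3

def route_follow_up(state: dict) -> str:
    """Conditional edge: loop until a reply arrives or we give up.

    Returns the name of the next node; "done" maps to END in the graph.
    """
    if state.get("reply_received"):
        return "done"                      # conversation resolved
    if state.get("follow_ups_sent", 0) >= MAX_FOLLOW_UPS:
        return "done"                      # stop nudging after the cap
    return "send_follow_up"                # cycle back and nudge again

# Wiring (sketch):
# graph.add_conditional_edges("check_replies", route_follow_up, {
#     "send_follow_up": "send_follow_up",  # edge back into the cycle
#     "done": END,
# })
# graph.add_edge("send_follow_up", "check_replies")  # closes the loop
```

Keeping the routing decision in a pure function makes the loop easy to unit-test without invoking the graph.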

Oversight Modes Match Workflow Stages

Start new email workflows in gated_send mode for safety, then programmatically escalate to monitored or autonomous as the workflow proves reliable. MultiMail's five oversight levels map to different trust stages in your graph.
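One way to drive that escalation programmatically: derive the target mode from the workflow's approval track record. The thresholds below are illustrative, and the `PATCH /mailboxes/{id}` endpoint and `oversight_mode` field are assumptions; verify both against the MultiMail API reference.

```python
import requests

MULTIMAIL_API = "https://api.multimail.dev/v1"
HEADERS = {"Authorization": "Bearer mm_live_your_api_key"}

def next_oversight_mode(approved: int, rejected: int) -> str:
    """Pick an oversight mode from the approval track record.

    Thresholds are illustrative; tune them to your risk tolerance.
    """
    if approved >= 50 and rejected == 0:
        return "autonomous"
    if approved >= 10 and rejected <= 1:
        return "monitored"
    return "gated_send"

def escalate(mailbox_id: str, approved: int, rejected: int) -> None:
    mode = next_oversight_mode(approved, rejected)
    # Endpoint path and payload shape are assumptions; check the API docs.
    requests.patch(
        f"{MULTIMAIL_API}/mailboxes/{mailbox_id}",
        headers=HEADERS, json={"oversight_mode": mode},
    )
```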


Get started in minutes

Define Email State and Tools
python
import requests
from typing import TypedDict, Annotated, Sequence
from langgraph.graph import StateGraph, END
from langchain_core.messages import BaseMessage
import operator

MULTIMAIL_API = "https://api.multimail.dev/v1"
HEADERS = {"Authorization": "Bearer mm_live_your_api_key"}

class EmailState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]
    inbox: list
    draft: dict
    approved: bool
    classification: str  # set by the triage example below

def check_inbox(state: EmailState) -> dict:
    resp = requests.get(
        f"{MULTIMAIL_API}/mailboxes/your_mailbox_id/inbox",
        headers=HEADERS, params={"limit": 10}
    )
    return {"inbox": resp.json()["emails"]}

def send_email(state: EmailState) -> dict:
    draft = state["draft"]
    resp = requests.post(f"{MULTIMAIL_API}/send", headers=HEADERS, json=draft)
    return {"messages": [f"Email sent: {resp.json()}"]}

Set up the state schema and MultiMail API tools for a LangGraph email workflow.

Build a Graph with Approval Node
python
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver

def draft_reply(state: EmailState) -> dict:
    """LLM drafts a reply based on inbox content."""
    email = state["inbox"][0]
    return {"draft": {
        "mailbox_id": "your_mailbox_id",
        "to": email["from"],
        "subject": f"Re: {email['subject']}",
        "body": "Thank you for your message. I'll review and respond shortly."
    }}

import time

def check_approval(state: EmailState) -> dict:
    """Check if the pending email was approved in MultiMail dashboard."""
    time.sleep(30)  # brief pause between polls of the pending queue
    resp = requests.get(f"{MULTIMAIL_API}/pending", headers=HEADERS)
    pending = resp.json().get("pending", [])
    return {"approved": len(pending) == 0}

def should_send(state: EmailState) -> str:
    return "send" if state["approved"] else "wait"

graph = StateGraph(EmailState)
graph.add_node("check_inbox", check_inbox)
graph.add_node("draft", draft_reply)
graph.add_node("submit", send_email)
graph.add_node("await_approval", check_approval)

graph.set_entry_point("check_inbox")
graph.add_edge("check_inbox", "draft")
graph.add_edge("draft", "submit")
graph.add_edge("submit", "await_approval")
graph.add_conditional_edges("await_approval", should_send, {
    "send": END,
    "wait": "await_approval"
})

checkpointer = MemorySaver()
app = graph.compile(checkpointer=checkpointer)

Create a LangGraph workflow that drafts an email, submits it to MultiMail's pending queue, then polls until a human approves it. For long waits, raise the graph's recursion_limit or use LangGraph's interrupt support so the polling loop doesn't hit the default step cap.

Email Triage with Conditional Routing
python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")

def classify_email(state: EmailState) -> dict:
    """Classify email intent using LLM."""
    email = state["inbox"][0]
    result = llm.invoke(
        f"Classify this email as 'urgent', 'routine', or 'spam': {email['subject']}"
    )
    return {"messages": [result], "classification": result.content.strip().lower()}

def route_email(state: EmailState) -> str:
    classification = state.get("classification", "routine")
    if classification == "urgent":
        return "urgent_reply"
    elif classification == "routine":
        return "auto_reply"
    return END  # spam gets dropped

graph = StateGraph(EmailState)
graph.add_node("fetch", check_inbox)
graph.add_node("classify", classify_email)
graph.add_node("urgent_reply", draft_reply)  # gated_send for review
graph.add_node("auto_reply", send_email)     # monitored mode

graph.set_entry_point("fetch")
graph.add_edge("fetch", "classify")
graph.add_conditional_edges("classify", route_email, {
    "urgent_reply": "urgent_reply",
    "auto_reply": "auto_reply",
    END: END  # spam: stop without replying
})
graph.add_edge("urgent_reply", END)
graph.add_edge("auto_reply", END)

app = graph.compile()

Build a graph that classifies inbound emails and routes them through different processing paths.

Connect via MCP Server
python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(model="gpt-4o")

async def create_email_graph():
    async with MultiServerMCPClient({
        "multimail": {
            "command": "npx",
            "args": ["-y", "@multimail/mcp-server"],
            "env": {"MULTIMAIL_API_KEY": "mm_live_your_api_key"}
        }
    }) as client:
        tools = client.get_tools()
        # create_react_agent builds a LangGraph graph with tool nodes
        agent = create_react_agent(llm, tools)
        result = await agent.ainvoke({
            "messages": [{"role": "user", "content": "Check inbox and reply to urgent emails"}]
        })
        return result

Use the MultiMail MCP server with LangGraph for automatic tool discovery in graph nodes.


Step by step

1

Create a MultiMail Account and API Key

Sign up at multimail.dev, create a mailbox, and generate an API key from your dashboard. Your key will start with mm_live_.

2

Install Dependencies

Install LangGraph, the LangChain OpenAI integration, and requests for calling the MultiMail API. Add langchain-mcp-adapters if you plan to use the MCP server.

bash
pip install langgraph langchain-openai requests langchain-mcp-adapters
3

Define Your State and Nodes

Create a TypedDict for your email workflow state and implement node functions that call MultiMail endpoints for inbox checking, email drafting, and sending.

4

Build and Compile the Graph

Wire nodes together with edges and conditional routing. Add a checkpointer for state persistence across workflow executions.

python
graph = StateGraph(EmailState)
graph.add_node("check_inbox", check_inbox)
graph.add_node("draft", draft_reply)
graph.set_entry_point("check_inbox")
app = graph.compile(checkpointer=MemorySaver())
5

Run and Monitor

Execute the graph and review pending emails in the MultiMail dashboard. Approve or reject drafts, and the graph will proceed based on approval status.

python
# thread_id is required when the graph is compiled with a checkpointer
config = {"configurable": {"thread_id": "email-workflow-1"}}
result = app.invoke(
    {"messages": [], "inbox": [], "draft": {}, "approved": False},
    config,
)

Common questions

How does LangGraph's human-in-the-loop work with MultiMail's oversight?
LangGraph's checkpoint system can pause graph execution at any node. When your graph hits a send_email node in gated_send mode, MultiMail queues the email for human approval. You can add an approval-polling node that checks the pending queue and resumes the graph once approved. This combines LangGraph's workflow control with MultiMail's email-specific oversight.
Can I persist LangGraph state across email conversations that span days?
Yes. LangGraph supports persistent checkpointers like SQLite or PostgreSQL. Your graph state — including inbox history, drafted emails, and conversation context — persists across executions. This is ideal for email workflows where responses may take hours or days to arrive.
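A practical detail when doing this: re-invoking the graph with the same `thread_id` resumes the saved checkpoint, so deriving a stable `thread_id` from the email thread ties each conversation to its own persisted state. The helper below is a sketch; the hashing scheme is one choice among many.

```python
import hashlib

def email_thread_config(mailbox_id: str, email_thread_id: str) -> dict:
    """Build a LangGraph config whose thread_id is stable per email thread.

    Re-invoking the compiled graph with the same config resumes the saved
    checkpoint, so a reply arriving days later picks up the full state.
    """
    raw = f"{mailbox_id}:{email_thread_id}".encode()
    thread_id = hashlib.sha256(raw).hexdigest()[:16]
    return {"configurable": {"thread_id": thread_id}}

# Usage (sketch, assuming a graph compiled with a persistent checkpointer
# such as SqliteSaver from the langgraph-checkpoint-sqlite package):
# config = email_thread_config("mb_123", "thr_456")
# app.invoke(initial_state, config)
```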
What's the difference between using LangGraph vs plain LangChain with MultiMail?
LangChain is best for simple tool-calling agents that react to a single prompt. LangGraph adds stateful, multi-step workflows with branching and cycles. Use LangGraph when your email workflow has multiple stages (triage, draft, approve, send, follow up) or needs to maintain state across long-running conversations.
Can I use LangGraph's prebuilt ReAct agent with MultiMail?
Yes. LangGraph's create_react_agent function builds a graph with tool-calling capabilities. You can pass MultiMail tools (either custom functions or MCP-discovered tools) and the agent will reason about when to check email, draft replies, and send messages within the ReAct loop.
How do I handle email attachments in a LangGraph workflow?
Add an attachment-processing node to your graph that encodes files as base64 and includes them in the MultiMail API request body. LangGraph's state can carry attachment data between nodes, so one node can download files and a later node can attach them to outgoing emails.
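A minimal sketch of that node's encoding step is below. The `attachments` field layout (`filename` plus base64 `content`) is an assumption; verify the exact field names against the MultiMail API reference.

```python
import base64
from pathlib import Path

def attach_files(draft: dict, paths: list) -> dict:
    """Base64-encode local files into a MultiMail send payload.

    The "attachments" field layout here is an assumption; check the
    MultiMail API reference for the real field names.
    """
    attachments = []
    for p in paths:
        data = Path(p).read_bytes()
        attachments.append({
            "filename": Path(p).name,
            "content": base64.b64encode(data).decode("ascii"),
        })
    # Return a new draft dict so graph state updates stay immutable
    return {**draft, "attachments": attachments}
```

An earlier node can populate `paths` in state (for example, after downloading files), and a later send node passes the returned draft straight to the `/send` request body.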

Explore more

The only agent email with a verifiable sender

Email infrastructure built for AI agents. Verifiable identity, graduated oversight, and a 38-tool MCP server. Formally verified in Lean 4.