Open-Model Email Agents with Together AI

Use Together AI's inference platform with MultiMail to build email agents powered by open-source models — with full email capabilities and human oversight.


Together AI provides a fast inference platform for a wide selection of open-source models with function calling support. MultiMail gives these models email infrastructure so your agents can send, receive, and manage email without being locked into a single proprietary model provider.

By integrating MultiMail with the Together AI API, teams that choose open models for cost efficiency get enterprise-grade email oversight without paying for proprietary model APIs. The default gated_send mode means your agent drafts emails, but a human approves each one before delivery.

Connect Together AI to MultiMail by defining email tools in Together's OpenAI-compatible function calling format and routing calls to the MultiMail REST API. Switch between dozens of open models without changing your email integration code.

Built for Together AI API developers

Model Flexibility

Together AI hosts dozens of open models with function calling. Switch between Llama, Mistral, Qwen, and others without changing your MultiMail integration code — only the model name changes.

Graduated Trust via Oversight Modes

Open models vary in instruction-following quality. MultiMail's oversight modes provide a safety net — start with gated_send so every email is human-reviewed, and relax oversight only after validating model quality.

Cost-Effective Email Agents

Together AI's competitive pricing for open models combined with MultiMail's tiered plans means you can build production email agents at lower cost than proprietary model APIs.
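To compare run costs across models, you can turn the `usage.total_tokens` field the API returns into a dollar estimate. The per-token rates below are hypothetical placeholders for illustration only; check the Together AI pricing page for current numbers.

```python
# Hypothetical per-million-token rates, for illustration only -- look up
# current pricing on Together AI's pricing page before relying on these.
PRICE_PER_MTOK = {
    "meta-llama/Llama-3.3-70B-Instruct-Turbo": 0.88,
    "meta-llama/Llama-3.1-8B-Instruct-Turbo": 0.18,
}

def estimated_cost_usd(model, total_tokens):
    """Rough cost of a run from the usage.total_tokens field the API returns."""
    return total_tokens / 1_000_000 * PRICE_PER_MTOK[model]
```

Pairing this with the per-model token counts from a comparison run makes it easy to see when a smaller model is good enough for a given email task.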

OpenAI-Compatible API

Together AI follows the OpenAI function calling format. Existing email agent code from OpenAI or Groq works with Together AI by changing only the base URL and model name.
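One way to see the compatibility concretely is to reduce the provider swap to a small config table: the request body (messages plus tools) stays identical, and only the base URL and model name change. The `PROVIDERS` dict and `chat_request` helper below are illustrative, not part of either SDK.

```python
# Illustrative provider table: switching providers changes only these two fields.
PROVIDERS = {
    "together": {
        "base_url": "https://api.together.xyz/v1",
        "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
    },
    "openai": {
        "base_url": "https://api.openai.com/v1",
        "model": "gpt-4o",
    },
}

def chat_request(provider, messages, tools):
    """Assemble an OpenAI-format chat completion request for any provider.
    The payload shape is the same everywhere; only url and model differ."""
    cfg = PROVIDERS[provider]
    return {
        "url": f"{cfg['base_url']}/chat/completions",
        "json": {"model": cfg["model"], "messages": messages, "tools": tools},
    }
```

Your MultiMail tool definitions and tool-execution code plug into either request unchanged.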

Fine-Tuning Potential

Together AI supports fine-tuning open models. You can fine-tune a model specifically for your email workflows and domain, then connect it to MultiMail for domain-specific email agent performance.
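A minimal sketch of preparing fine-tuning data from past emails, using the common chat-format JSONL layout. The exact schema expected by the upload endpoint should be verified against Together AI's fine-tuning documentation before use.

```python
import json

def to_training_line(user_prompt, ideal_email):
    """One JSONL line in the common chat fine-tuning format (verify the
    exact schema against Together AI's fine-tuning docs before uploading)."""
    return json.dumps({
        "messages": [
            {"role": "system", "content": "You are an email assistant."},
            {"role": "user", "content": user_prompt},
            {"role": "assistant", "content": ideal_email},
        ]
    })

def write_training_file(path, examples):
    """examples: iterable of (prompt, ideal_email) pairs -> JSONL file."""
    with open(path, "w") as f:
        for prompt, email in examples:
            f.write(to_training_line(prompt, email) + "\n")
```

Once uploaded and trained, the resulting model name drops into the same agent loop as any other Together AI model.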


Get started in minutes

Define MultiMail Tools for Together AI
python
from together import Together
import requests
import json

client = Together(api_key="your_together_api_key")
MULTIMAIL_API = "https://api.multimail.dev/v1"
MM_HEADERS = {"Authorization": "Bearer mm_live_your_api_key"}

email_tools = [
    {
        "type": "function",
        "function": {
            "name": "send_email",
            "description": "Send an email through MultiMail. In gated_send mode, queues for human approval.",
            "parameters": {
                "type": "object",
                "properties": {
                    "mailbox_id": {"type": "string", "description": "Mailbox to send from"},
                    "to": {"type": "string", "description": "Recipient email"},
                    "subject": {"type": "string", "description": "Subject line"},
                    "body": {"type": "string", "description": "Email body"}
                },
                "required": ["mailbox_id", "to", "subject", "body"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "check_inbox",
            "description": "Check inbox for recent messages.",
            "parameters": {
                "type": "object",
                "properties": {
                    "mailbox_id": {"type": "string", "description": "Mailbox to check"},
                    "limit": {"type": "integer", "description": "Max messages"}
                },
                "required": ["mailbox_id"]
            }
        }
    }
]

Create tool definitions using Together AI's OpenAI-compatible function calling format.

Build an Email Agent with Together AI
python
def execute_tool(name, args):
    if name == "send_email":
        resp = requests.post(f"{MULTIMAIL_API}/send", headers=MM_HEADERS, json=args)
    elif name == "check_inbox":
        resp = requests.get(
            f"{MULTIMAIL_API}/mailboxes/{args['mailbox_id']}/inbox",
            headers=MM_HEADERS, params={"limit": args.get("limit", 10)}
        )
    elif name == "reply_email":
        # Included for completeness: add a matching reply_email entry to
        # email_tools before the model can call this branch.
        resp = requests.post(f"{MULTIMAIL_API}/reply", headers=MM_HEADERS, json=args)
    else:
        return {"error": f"Unknown tool: {name}"}
    return resp.json()

def run_email_agent(user_message, mailbox_id):
    messages = [
        {"role": "system", "content": f"You are an email assistant for mailbox {mailbox_id}. "
         f"Emails use gated_send mode and queue for human approval."},
        {"role": "user", "content": user_message}
    ]
    while True:
        response = client.chat.completions.create(
            model="meta-llama/Llama-3.3-70B-Instruct-Turbo",
            messages=messages,
            tools=email_tools
        )
        msg = response.choices[0].message
        if msg.tool_calls:
            messages.append(msg)
            for tc in msg.tool_calls:
                result = execute_tool(tc.function.name, json.loads(tc.function.arguments))
                messages.append({
                    "role": "tool", "tool_call_id": tc.id,
                    "content": json.dumps(result)
                })
        else:
            return msg.content

print(run_email_agent("Check my inbox and summarize new messages", "mbx_abc123"))

Create an agentic loop using Together AI's chat completions with MultiMail tools.

Compare Models for Email Quality
python
MODELS = [
    "meta-llama/Llama-3.3-70B-Instruct-Turbo",
    "mistralai/Mixtral-8x22B-Instruct-v0.1",
    "Qwen/Qwen2.5-72B-Instruct-Turbo"
]

def test_email_draft(model_name, prompt):
    """Test email drafting quality across different models."""
    response = client.chat.completions.create(
        model=model_name,
        messages=[
            {"role": "system", "content": "You are an email assistant. "
             "Draft professional emails. Emails use gated_send mode."},
            {"role": "user", "content": prompt}
        ],
        tools=email_tools
    )
    return {
        "model": model_name,
        "response": response.choices[0].message,
        "usage": response.usage
    }

results = []
for model in MODELS:
    result = test_email_draft(
        model,
        "Draft a follow-up email to a client about project delays"
    )
    results.append(result)
    print(f"{model}: {result['usage'].total_tokens} tokens")

Test different open models on the same email task to find the best fit for your use case.


Step by step

1

Create a MultiMail Account and API Key

Sign up at multimail.dev, create a mailbox, and generate an API key from your dashboard. Your key will start with mm_live_.

2

Install Dependencies

Install the Together AI Python SDK and requests library for calling the MultiMail API.

bash
pip install together requests
3

Define Email Tool Schemas

Create tool definitions for send_email, check_inbox, and other MultiMail operations. Together AI uses the same OpenAI-compatible format.

4

Choose a Model and Build the Agent

Select an open model with function calling support and implement the agent loop. Start with Llama 3.3 70B for best quality.

python
response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",
    tools=email_tools,
    messages=messages
)
5

Approve Pending Emails

If your mailbox uses gated_send mode (the default), review and approve pending emails in the MultiMail dashboard before they are delivered.


Common questions

Which Together AI model is best for email agents?
Llama 3.3 70B Instruct Turbo offers the best balance of quality and cost for email tasks. For simpler triage and categorization, smaller models like Llama 3.1 8B work well. Together AI lets you experiment with different models without changing your MultiMail integration.
What happens when my agent sends an email in gated_send mode?
In gated_send mode, the MultiMail API returns a success response with a pending status. The email is queued for human review in the MultiMail dashboard. Once approved, it is delivered. Your agent can check the status of pending emails using the list_pending endpoint.
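A small sketch of tracking the queue, assuming a pending-list endpoint under each mailbox and a `status` field on send results; both are assumptions to confirm against the MultiMail API reference.

```python
import requests

MULTIMAIL_API = "https://api.multimail.dev/v1"
MM_HEADERS = {"Authorization": "Bearer mm_live_your_api_key"}

def list_pending(mailbox_id):
    """Fetch emails awaiting approval. The path and response shape here are
    assumptions -- confirm them against the MultiMail API reference."""
    resp = requests.get(
        f"{MULTIMAIL_API}/mailboxes/{mailbox_id}/pending",
        headers=MM_HEADERS,
    )
    return resp.json()

def pending_ids(send_results):
    """Pure helper: ids of send results still awaiting human approval."""
    return [r["id"] for r in send_results if r.get("status") == "pending"]
```

Your agent can surface `pending_ids(...)` back to the user so they know which drafts are waiting in the dashboard.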
Can I fine-tune a model for email tasks on Together AI?
Yes. Together AI supports fine-tuning open models. You could fine-tune a model on your organization's email style, terminology, and common workflows, then connect it to MultiMail. This produces higher-quality email drafts that require less human editing during approval.
How do I switch between models without breaking my email integration?
Change only the model parameter in your chat.completions.create call. The tool definitions and MultiMail API integration remain identical across all models. This makes A/B testing different models straightforward.
Is there rate limiting on the MultiMail API?
Rate limits depend on your plan tier. The Starter (free) plan allows 200 emails per month, while paid plans range from 5,000 to 150,000. The API returns standard 429 responses when limits are reached, which your agent can handle with retry logic.
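The retry logic can be as simple as the sketch below: honor a `Retry-After` header when the server sends one, and otherwise fall back to exponential backoff with jitter. The helper names are illustrative, not part of any SDK.

```python
import random
import time

import requests

def backoff_delay(attempt, retry_after=None):
    """Seconds to wait before retry `attempt` (0-based): honor a Retry-After
    header if present, else exponential backoff (1s, 2s, 4s, ...) plus jitter."""
    if retry_after is not None:
        return float(retry_after)
    return 2 ** attempt + random.random()

def post_with_retry(url, headers, payload, max_retries=5):
    """POST to a MultiMail endpoint, retrying on HTTP 429 rate limits."""
    for attempt in range(max_retries):
        resp = requests.post(url, headers=headers, json=payload)
        if resp.status_code != 429:
            return resp
        time.sleep(backoff_delay(attempt, resp.headers.get("Retry-After")))
    raise RuntimeError("still rate limited after retries")
```

Wrapping the `send_email` call in `post_with_retry` keeps the agent loop unchanged while making it resilient to 429s.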

Explore more

The only agent email with a verifiable sender

Email infrastructure built for AI agents. Verifiable identity, graduated oversight, and a 38-tool MCP server. Formally verified in Lean 4.