Use Together AI's inference platform with MultiMail to build email agents powered by open-source models — with full email capabilities and human oversight.
Together AI provides a fast inference platform for a wide selection of open-source models with function calling support. MultiMail gives these models email infrastructure so your agents can send, receive, and manage email without being locked into a single proprietary model provider.
By integrating MultiMail with the Together AI API, teams using open models for cost efficiency get enterprise-grade email oversight without needing expensive proprietary APIs. The default gated_send mode means your agent drafts emails but a human approves before delivery.
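As a concrete sketch of what gated_send means in code, the helper below interprets a send response. The `status` field and its `queued`/`sent` values are assumptions about the response shape, not a documented MultiMail contract — adapt it to the actual API response.

```python
# Sketch of handling a gated_send response. The "status" field and its
# "queued" / "sent" values are assumed, not a documented MultiMail
# contract -- adapt to the actual response shape.
def describe_send_result(response: dict) -> str:
    """Summarize a MultiMail send response for logging."""
    status = response.get("status", "unknown")
    email_id = response.get("id", "?")
    if status == "queued":
        return f"Email {email_id} is awaiting human approval"
    if status == "sent":
        return f"Email {email_id} was delivered"
    return f"Email {email_id} has unexpected status: {status}"
```

In gated_send mode your agent should expect the queued case as the normal path, not an error.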
Connect Together AI to MultiMail by defining email tools in Together's OpenAI-compatible function calling format and routing the resulting tool calls to the MultiMail REST API.
Together AI hosts dozens of open models with function calling. Switch between Llama, Mistral, Qwen, and others without changing your MultiMail integration code — only the model name changes.
Open models vary in instruction-following quality. MultiMail's oversight modes provide a safety net — start with gated_send so every email is human-reviewed, and relax oversight only after validating model quality.
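One way to make "relax oversight only after validating model quality" operational is a promotion policy over review outcomes. The thresholds below and the `auto_send` mode name are assumptions for illustration, not MultiMail settings.

```python
def recommended_oversight_mode(reviewed: int, approved_unchanged: int) -> str:
    # Hypothetical promotion policy: keep every email gated until at
    # least 50 drafts have been human-reviewed and 95% of them were
    # approved without edits. Thresholds and the "auto_send" mode name
    # are assumptions, not MultiMail defaults.
    if reviewed < 50:
        return "gated_send"
    if approved_unchanged / reviewed < 0.95:
        return "gated_send"
    return "auto_send"
```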
Together AI's competitive pricing for open models combined with MultiMail's tiered plans means you can build production email agents at lower cost than proprietary model APIs.
Together AI follows the OpenAI function calling format. Existing email agent code from OpenAI or Groq works with Together AI by changing only the base URL and model name.
Together AI supports fine-tuning open models. You can fine-tune a model specifically for your email workflows and domain, then connect it to MultiMail for domain-specific email agent performance.
from together import Together
import requests
import json

client = Together(api_key="your_together_api_key")

MULTIMAIL_API = "https://api.multimail.dev/v1"
MM_HEADERS = {"Authorization": "Bearer mm_live_your_api_key"}

email_tools = [
    {
        "type": "function",
        "function": {
            "name": "send_email",
            "description": "Send an email through MultiMail. In gated_send mode, queues for human approval.",
            "parameters": {
                "type": "object",
                "properties": {
                    "mailbox_id": {"type": "string", "description": "Mailbox to send from"},
                    "to": {"type": "string", "description": "Recipient email"},
                    "subject": {"type": "string", "description": "Subject line"},
                    "body": {"type": "string", "description": "Email body"}
                },
                "required": ["mailbox_id", "to", "subject", "body"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "check_inbox",
            "description": "Check inbox for recent messages.",
            "parameters": {
                "type": "object",
                "properties": {
                    "mailbox_id": {"type": "string", "description": "Mailbox to check"},
                    "limit": {"type": "integer", "description": "Max messages"}
                },
                "required": ["mailbox_id"]
            }
        }
    }
]

Create tool definitions using Together AI's OpenAI-compatible function calling format.
def execute_tool(name, args):
    if name == "send_email":
        resp = requests.post(f"{MULTIMAIL_API}/send", headers=MM_HEADERS, json=args)
    elif name == "check_inbox":
        resp = requests.get(
            f"{MULTIMAIL_API}/mailboxes/{args['mailbox_id']}/inbox",
            headers=MM_HEADERS, params={"limit": args.get("limit", 10)}
        )
    elif name == "reply_email":
        resp = requests.post(f"{MULTIMAIL_API}/reply", headers=MM_HEADERS, json=args)
    else:
        return {"error": f"Unknown tool: {name}"}
    return resp.json()
def run_email_agent(user_message, mailbox_id):
    messages = [
        {"role": "system", "content": f"You are an email assistant for mailbox {mailbox_id}. "
                                      f"Emails use gated_send mode and queue for human approval."},
        {"role": "user", "content": user_message}
    ]
    while True:
        response = client.chat.completions.create(
            model="meta-llama/Llama-3.3-70B-Instruct-Turbo",
            messages=messages,
            tools=email_tools
        )
        msg = response.choices[0].message
        if msg.tool_calls:
            messages.append(msg)
            for tc in msg.tool_calls:
                result = execute_tool(tc.function.name, json.loads(tc.function.arguments))
                messages.append({
                    "role": "tool", "tool_call_id": tc.id,
                    "content": json.dumps(result)
                })
        else:
            return msg.content

print(run_email_agent("Check my inbox and summarize new messages", "mbx_abc123"))

Create an agentic loop using Together AI's chat completions with MultiMail tools.
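The loop above has no failure handling: a transient HTTP error in a tool call raises, and a model that keeps emitting tool calls spins forever. The helpers below sketch one way to harden both; the retry policy and turn limit are choices, not MultiMail or Together requirements.

```python
import time

def with_retries(call, retries=2, backoff=1.0):
    """Run a tool call (any zero-argument callable, e.g. a lambda wrapping
    execute_tool) and retry transient failures with exponential backoff."""
    last_error = None
    for attempt in range(retries + 1):
        try:
            return call()
        except Exception as exc:  # requests.RequestException in practice
            last_error = exc
            if attempt < retries:
                time.sleep(backoff * (2 ** attempt))
    # Surface the failure as data the model can read instead of raising.
    return {"error": f"tool call failed after {retries + 1} attempts: {last_error}"}

def run_steps(step, max_turns=8):
    """Drive an agent with a hard turn limit. `step` stands in for one
    iteration of the while-loop above: it returns ("final", text) when the
    model answers, or ("tool", _) when it made more tool calls."""
    for _ in range(max_turns):
        kind, payload = step()
        if kind == "final":
            return payload
    return "Agent stopped: turn limit reached without a final answer"
```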
MODELS = [
    "meta-llama/Llama-3.3-70B-Instruct-Turbo",
    "mistralai/Mixtral-8x22B-Instruct-v0.1",
    "Qwen/Qwen2.5-72B-Instruct-Turbo"
]

def test_email_draft(model_name, prompt):
    """Test email drafting quality across different models."""
    response = client.chat.completions.create(
        model=model_name,
        messages=[
            {"role": "system", "content": "You are an email assistant. "
                                          "Draft professional emails. Emails use gated_send mode."},
            {"role": "user", "content": prompt}
        ],
        tools=email_tools
    )
    return {
        "model": model_name,
        "response": response.choices[0].message,
        "usage": response.usage
    }

results = []
for model in MODELS:
    result = test_email_draft(
        model,
        "Draft a follow-up email to a client about project delays"
    )
    results.append(result)
    print(f"{model}: {result['usage'].total_tokens} tokens")

Test different open models on the same email task to find the best fit for your use case.
Sign up at multimail.dev, create a mailbox, and generate an API key from your dashboard. Your key will start with mm_live_.
Install the Together AI Python SDK and requests library for calling the MultiMail API.
pip install together requests

Create tool definitions for send_email, check_inbox, and other MultiMail operations. Together AI uses the same OpenAI-compatible format.
Select an open model with function calling support and implement the agent loop. Start with Llama 3.3 70B for best quality.
response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",
    tools=email_tools,
    messages=messages
)

If your mailbox uses gated_send mode (the default), review and approve pending emails in the MultiMail dashboard before they are delivered.
Email infrastructure built for AI agents. Verifiable identity, graduated oversight, and a 38-tool MCP server. Formally verified in Lean 4.