Run LLMs locally in LM Studio's desktop app and connect them to MultiMail for email capabilities — keeping your data private with human oversight as a safety net.
LM Studio is a desktop application for running LLMs locally with a user-friendly interface, model discovery, and an OpenAI-compatible local server. MultiMail provides the email infrastructure layer that turns LM Studio's local models into functional email agents with full send, receive, and management capabilities.
For users running local models for privacy-sensitive email tasks, MultiMail's oversight provides a critical safety net. The default gated_send mode ensures every email drafted by a local model requires human approval before delivery, so you get the privacy benefits of local inference with the safety of human review.
Connect LM Studio to MultiMail through LM Studio's OpenAI-compatible local server. Point the OpenAI Python SDK at that server and use tool calling to invoke MultiMail's REST API, which makes integration straightforward.
LM Studio keeps your model inference completely local. Combined with MultiMail, only the final email content leaves your machine. Your prompts, reasoning, and email analysis all stay private.
Local models may produce inconsistent outputs. MultiMail's gated_send mode ensures every email is human-reviewed before delivery, catching quality issues that local models are more prone to.
LM Studio's model browser lets you discover and download models with function calling support. Try different models for email tasks without any command-line setup.
LM Studio's local server exposes an OpenAI-compatible API. Use the same email agent code you would write for OpenAI's cloud API — just point it at localhost.
Start with gated_all (human approves every action) when testing new local models, then move to gated_send once you are confident in the model's email quality.
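This progression can be sketched as a small helper. The mode names (`gated_all`, `gated_send`) come from MultiMail's oversight settings; the promotion policy and threshold below are illustrative assumptions, not part of the MultiMail API.

```python
# Illustrative helper: pick a MultiMail oversight mode based on how much
# vetting a local model has had. The mode names are MultiMail's; the
# threshold-based policy here is an example, not an API feature.
def oversight_mode(approved_sends: int, threshold: int = 20) -> str:
    """Start at gated_all; relax to gated_send once enough sends
    have been human-approved."""
    if approved_sends < threshold:
        return "gated_all"   # human approves every action
    return "gated_send"      # human approves outgoing emails only

print(oversight_mode(5))    # gated_all while a new model is unproven
print(oversight_mode(40))   # gated_send once quality is established
```

How you apply the chosen mode to a mailbox (dashboard setting or API call) depends on your MultiMail configuration.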
from openai import OpenAI
import requests
import json

# Point OpenAI client at LM Studio's local server
client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio"  # LM Studio doesn't require a real key
)

MULTIMAIL_API = "https://api.multimail.dev/v1"
MM_HEADERS = {"Authorization": "Bearer mm_live_your_api_key"}
email_tools = [
    {
        "type": "function",
        "function": {
            "name": "send_email",
            "description": "Send an email through MultiMail. In gated_send mode, queues for human approval.",
            "parameters": {
                "type": "object",
                "properties": {
                    "mailbox_id": {"type": "string", "description": "Mailbox to send from"},
                    "to": {"type": "string", "description": "Recipient email"},
                    "subject": {"type": "string", "description": "Subject line"},
                    "body": {"type": "string", "description": "Email body"}
                },
                "required": ["mailbox_id", "to", "subject", "body"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "check_inbox",
            "description": "Check inbox for recent messages.",
            "parameters": {
                "type": "object",
                "properties": {
                    "mailbox_id": {"type": "string", "description": "Mailbox to check"},
                    "limit": {"type": "integer", "description": "Max messages"}
                },
                "required": ["mailbox_id"]
            }
        }
    }
]

Use the OpenAI Python SDK pointed at LM Studio's local server with MultiMail email tools.
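Before wiring up tools, it can help to confirm the local server is reachable and see which model identifiers it exposes. The standard OpenAI SDK `models.list()` call works against LM Studio's server; the helper below is kept pure so it accepts any client object.

```python
def list_model_ids(client) -> list[str]:
    """Return the model IDs the server reports (LM Studio lists
    the models currently loaded)."""
    return [m.id for m in client.models.list().data]

if __name__ == "__main__":
    from openai import OpenAI
    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
    print(list_model_ids(client))  # e.g. ['llama-3.3-70b']
```

Use whatever ID appears here as the `model` parameter in the agent code below.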
def execute_tool(name, args):
    if name == "send_email":
        resp = requests.post(f"{MULTIMAIL_API}/send", headers=MM_HEADERS, json=args)
    elif name == "check_inbox":
        resp = requests.get(
            f"{MULTIMAIL_API}/mailboxes/{args['mailbox_id']}/inbox",
            headers=MM_HEADERS, params={"limit": args.get("limit", 10)}
        )
    elif name == "reply_email":  # used if you add a reply_email entry to email_tools
        resp = requests.post(f"{MULTIMAIL_API}/reply", headers=MM_HEADERS, json=args)
    else:
        return {"error": f"Unknown tool: {name}"}
    return resp.json()
def run_email_agent(user_message, mailbox_id):
    messages = [
        {"role": "system", "content": f"You are an email assistant for mailbox {mailbox_id}. "
                                      f"Emails use gated_send mode and queue for human approval."},
        {"role": "user", "content": user_message}
    ]
    while True:
        response = client.chat.completions.create(
            model="llama-3.3-70b",  # Use your loaded model name
            messages=messages,
            tools=email_tools
        )
        msg = response.choices[0].message
        if msg.tool_calls:
            messages.append(msg)
            for tc in msg.tool_calls:
                result = execute_tool(
                    tc.function.name,
                    json.loads(tc.function.arguments)
                )
                messages.append({
                    "role": "tool", "tool_call_id": tc.id,
                    "content": json.dumps(result)
                })
        else:
            return msg.content

print(run_email_agent("Check my inbox and summarize new messages", "mbx_abc123"))

Create an agentic loop using LM Studio's OpenAI-compatible API with MultiMail tools.
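Local models sometimes emit tool calls with malformed JSON or missing arguments. A small validation step before `execute_tool` catches these early; this is a sketch layered on the `email_tools` schema above, not a MultiMail feature.

```python
import json

def validate_tool_call(tool_schema: dict, raw_arguments: str):
    """Parse a tool call's JSON arguments and check the schema's
    required fields. Returns (args, None) on success or
    (None, error_message) on failure."""
    try:
        args = json.loads(raw_arguments)
    except json.JSONDecodeError as e:
        return None, f"Arguments are not valid JSON: {e}"
    required = tool_schema["function"]["parameters"].get("required", [])
    missing = [f for f in required if f not in args]
    if missing:
        return None, f"Missing required fields: {', '.join(missing)}"
    return args, None
```

On failure, feed the error message back to the model as the tool result so it can retry with corrected arguments instead of silently calling the API with bad data.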
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:1234/v1',
  apiKey: 'lm-studio'
});

const MULTIMAIL_API = 'https://api.multimail.dev/v1';
const MM_HEADERS = { Authorization: 'Bearer mm_live_your_api_key' };

const tools: OpenAI.Chat.Completions.ChatCompletionTool[] = [
  {
    type: 'function',
    function: {
      name: 'send_email',
      description: 'Send an email. Queues for approval in gated_send mode.',
      parameters: {
        type: 'object',
        properties: {
          mailbox_id: { type: 'string' },
          to: { type: 'string' },
          subject: { type: 'string' },
          body: { type: 'string' }
        },
        required: ['mailbox_id', 'to', 'subject', 'body']
      }
    }
  }
];

const response = await client.chat.completions.create({
  model: 'llama-3.3-70b',
  messages: [
    { role: 'system', content: 'Email assistant. gated_send mode.' },
    { role: 'user', content: 'Draft a project update email to the team' }
  ],
  tools
});

Use the OpenAI Node.js SDK with LM Studio's local server for TypeScript projects.
Sign up at multimail.dev, create a mailbox, and generate an API key from your dashboard. Your key will start with mm_live_.
Download LM Studio from lmstudio.ai. Use the built-in model browser to download a model with function calling support, such as Llama 3.3 70B.
In LM Studio, start the local server (default: localhost:1234). Enable the OpenAI-compatible API endpoint.
Install the OpenAI SDK and requests library. Point the client at LM Studio's local server and build the agent loop.
pip install openai requests

Review and approve pending emails in the MultiMail dashboard. This step is especially important with local models that may produce lower-quality outputs.
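If you prefer reviewing from the terminal rather than the dashboard, you could fetch pending items and format them for a quick scan. The endpoint path (`/pending`) and response fields (`id`, `to`, `subject`) below are assumptions for illustration; check MultiMail's API reference for the real pending-approval endpoint. The formatting helper is pure, so it is easy to adapt to the actual response shape.

```python
def format_pending(items: list[dict]) -> str:
    """Render pending-approval emails as a numbered review list.
    Field names (id, to, subject) are assumed for illustration."""
    lines = [
        f"{i}. [{item['id']}] to={item['to']} subject={item['subject']}"
        for i, item in enumerate(items, start=1)
    ]
    return "\n".join(lines) if lines else "No emails awaiting approval."

if __name__ == "__main__":
    import requests
    # Hypothetical endpoint -- consult the MultiMail docs for the real one.
    resp = requests.get(
        "https://api.multimail.dev/v1/pending",
        headers={"Authorization": "Bearer mm_live_your_api_key"},
    )
    print(format_pending(resp.json().get("emails", [])))
```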
Email infrastructure built for AI agents. Verifiable identity, graduated oversight, and a 38-tool MCP server. Formally verified in Lean 4.