Last week, I wrote about building an MCP server for DiceDB. In this article, I will show how to integrate that MCP server, or any MCP server, inside your LLM workflows, using the OpenAI Agents SDK.
The Agents SDK recently added support for MCP, strengthening MCP's place as the default standard for connecting LLM applications to external tools. This allows developers to leverage the existing features of the SDK to build more robust workflows with a growing list of MCP servers.
I found it incredibly easy to use, especially to augment my existing LLM workflows (that already use the Agents SDK) with external tools. Complementary features like handoffs and guardrails can make MCP-enabled workflows even more useful.
Install Required Libraries
Install the Agents SDK:
uv pip install openai-agents
To set your OpenAI API key and the DiceDB server URL via a .env file, also install:
uv pip install openai python-dotenv
Import the Libraries
We’ll start by importing the libraries:
1 from agents import Agent, Runner, trace
2 from agents.mcp import MCPServer, MCPServerStdio
3 from dotenv import load_dotenv
4 import os
5 import openai
6 import asyncio
7
8 load_dotenv()
Make sure your .env file contains:
OPENAI_API_KEY=your-api-key-here
DICEDB_SERVER_URL=localhost:7379
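If you want a quick sanity check that these values actually load before wiring up the agent, a tiny standalone snippet like this should print your server URL (it is optional and not part of the final script):

# Optional check: confirm the .env values are visible to Python
from dotenv import load_dotenv
import os

load_dotenv()  # Reads the .env file in the current directory
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is missing"
print(os.getenv("DICEDB_SERVER_URL"))  # Should print localhost:7379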
Create an Agent
Let’s write a function that defines our “DiceDB MCP” agent. The function takes an MCP server, a prompt, and the DiceDB server URL as input, runs the agent, and prints the response:
10 # run function runs the DiceDB MCP agent
11 async def run(mcp_server: MCPServer, prompt: str, server_url: str):
12     agent = Agent(name="DiceDB MCP",  # Name of the agent
13                   # Make sure the LLM passes the server_url to the MCP server
14                   instructions=f"""You can interact with a DiceDB
15                   database running at {server_url},
16                   USE THIS FOR URL.""",
17                   mcp_servers=[mcp_server],  # Use the MCP server with this agent
18                   handoffs=[],  # You can add handoffs
19                   input_guardrails=[],  # and guardrails like for other agents
20                   )
21     # Run the agent with the prompt
22     result = await Runner.run(starting_agent=agent, input=prompt)
23     # Print the final output after running the tool from the MCP server
24     print("Final response:\n", result.final_output)
Here’s what the code does:
- 11: Create a run function that creates and runs an agent.
- 12-20: Define the "DiceDB MCP" agent, which uses the mcp_server passed in by the calling function and instructs the LLM to pass the server_url to the MCP tool.
- 22-24: Run the agent with the provided prompt and print the result.
As shown, you can also integrate other features provided by the SDK, such as handoffs and guardrails. However, we will focus on integrating the MCP server to keep this example simple.
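For example, a guardrail could reject prompts that look destructive, and a handoff agent could take over read-only questions. Here is a rough sketch of how lines 12-20 might look with those features added; the block_deletes check and the "DiceDB Reader" agent are made up for illustration, and mcp_server and server_url are the same values passed into run:

from agents import Agent, GuardrailFunctionOutput, RunContextWrapper, input_guardrail

# Hypothetical guardrail: trip if the prompt looks like it wants to delete data
@input_guardrail
async def block_deletes(ctx: RunContextWrapper, agent: Agent, user_input) -> GuardrailFunctionOutput:
    text = user_input if isinstance(user_input, str) else str(user_input)
    destructive = any(word in text.lower() for word in ("delete", "flush", "drop"))
    return GuardrailFunctionOutput(output_info="destructive-command check",
                                   tripwire_triggered=destructive)

# Hypothetical agent that only reads from DiceDB and never writes
reader_agent = Agent(name="DiceDB Reader",
                     instructions="Only read values from DiceDB; never modify them.")

agent = Agent(name="DiceDB MCP",
              instructions=f"""You can interact with a DiceDB
              database running at {server_url},
              USE THIS FOR URL.""",
              mcp_servers=[mcp_server],
              handoffs=[reader_agent],           # Hand read-only questions to the reader
              input_guardrails=[block_deletes],  # Reject destructive prompts
              )

If the guardrail trips, Runner.run raises a tripwire exception, which the try/except in the main function below would catch and report.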
Run the Agent
Now let’s run the agent. For this example, the prompt is hard-coded, but you can follow the same pattern as your existing agentic workflows and take the prompt from a real user:
26 async def main():
27     openai.api_key = os.getenv("OPENAI_API_KEY")
28     if not openai.api_key:
29         raise RuntimeError("OPENAI_API_KEY not set in environment variables.")
30
31     server_url = os.getenv("DICEDB_SERVER_URL")
32     if not server_url:
33         raise RuntimeError(
34             "DICEDB_SERVER_URL not set in environment variables.")
35     # Hardcoded prompt, and it has been 20 years since Friends
36     prompt = """Can you update the 'name' key
37     with the value 'Rachel Green'?
38     If it's already 'Rachel Green',
39     change it to 'Chandler Bing'."""
40
41     try:
42         # The MCP server is running locally
43         # and uses stdio transport
44         async with MCPServerStdio(
45             # Cache the list of available tools from the
46             # MCP server, as the tools list won't change
47             cache_tools_list=True,
48             # Run the MCP server binary at provided path
49             params={"command": "/Users/pottekkat/go/bin/dicedb-mcp",
50                     "args": [""]},
51         ) as server:
52             print("Running the DiceDB agent...")
53             # Automatically trace the MCP operations
54             with trace(workflow_name="DiceDB MCP"):
55                 await run(server, prompt, server_url)
56
57     except Exception as e:
58         print("Failed to run the DiceDB agent:", e)
59
60
61 if __name__ == "__main__":
62     asyncio.run(main())
This is what’s happening here:
- 44: The MCP server runs locally and uses stdio transport, so the agent will use stdio to communicate with it.
- 47: The MCP server does not dynamically change its list of available tools, so it's OK to cache the tools list.
- 49: The MCP server we made last week is just a binary at /Users/pottekkat/go/bin/dicedb-mcp. You can replace it with the command that runs your own server (see the sketch after this list).
- 54: Use the built-in tracing functionality provided by the SDK.
- 55: Call the run function we defined earlier with the MCP server we just initialized, along with the prompt and the DiceDB server URL.
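To illustrate swapping out the server command, only the params passed to MCPServerStdio need to change; the package name and paths below are examples, not part of this project's setup:

# Point the agent at a different stdio MCP server by changing only the params.
# Example: the reference filesystem MCP server started via npx
params = {"command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]}

# Or run the dicedb-mcp binary from wherever you installed it
# (illustrative path, adjust for your machine)
params = {"command": "/usr/local/bin/dicedb-mcp", "args": [""]}

# And instead of the hard-coded prompt, you could take one from a real user
prompt = input("What would you like to do with DiceDB? ")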
Actually Run the Agent
Let’s actually try running this script:
uv run main.py
You will get a response like:
The 'name' key was updated to 'Chandler Bing'.
You’ll also be able to see the trace for this call in the OpenAI Platform Dashboard. The trace is generated automatically, and you can also customize it.
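For instance, trace accepts optional arguments such as group_id and metadata for tagging and grouping related runs. A small sketch, with made-up values (check the SDK docs for the exact signature):

# Tag the trace so related runs are easier to find in the dashboard
# (the group_id and metadata values are illustrative)
with trace(workflow_name="DiceDB MCP",
           group_id="dicedb-demo",
           metadata={"dicedb_url": server_url}):
    await run(server, prompt, server_url)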
MCP support has made an already popular SDK more useful, and I have little doubt that the MCP standard will become ubiquitous in the foreseeable future.
However, the Agents SDK is currently only available in Python, while MCP SDKs exist in a variety of programming languages. I expect this is temporary, though, and that SDKs for languages like JavaScript and Go will follow soon.
Thank you for reading "Provide Tools to Your LLM Agents with Model Context Protocol."