Dynamically presenting MCP clients as tools to LLMs in Go
Transform any MCP server into OpenAI tools that your agent can call without hardcoding any details
The most powerful thing about agents (and their weakest part too :)) is how dynamic they are. Depending on the scenario, the application that encapsulates the LLM must either relax or tighten its grip to get good results.
Today we will take a look at “relaxed” plumbing that enables maximum dynamic behavior. We will take in an arbitrary number of remote MCP servers, extract the available actions and convert them on the fly into OpenAI tools for our agent to use.
In this article we will look at the main components (for the full code you can check our repo):
Store MCP servers
Discover MCP capabilities (fetch tools)
Convert individual MCP actions into tools (MCP schema > OpenAI tool schema)
MCP and OpenAI Tools
I assume you already know about MCP and OpenAI tools, but I will quickly brief you on them regardless.
Tools are presented as name + description + parameters to the LLMs, and the models are trained/whipped into calling them with non-malformed payloads. They more or less work well; the trick is to keep the schemas relatively simple and provide a few examples.
⚠️ Large schemas (with lots of parameters) can quickly deteriorate even the smartest models. ⚠️
An example tool presentation to the LLM:
import OpenAI from "openai";

const client = new OpenAI();

const response = await client.responses.create({
  model: "gpt-5",
  tools: [
    { type: "web_search" },
  ],
  input: "What was a positive news story from today?",
});

console.log(response.output_text);

An example MCP presentation from the docs:
...
// List resources
const resources = await client.listResources();

// Read a resource
const resource = await client.readResource({
  uri: 'file:///example.txt'
});

// Call a tool
const result = await client.callTool({
  name: 'example-tool',
  arguments: {
    arg1: 'value'
  }
});
...

However, this is not even close to actual usage; it gets quite verbose, as you end up using a bunch of SDKs and potentially need to marry a 3rd-party framework like LangChain to prep the MCP for use within your application.
Keeping track of configured MCP servers
In order to make things work well, my minimal Go struct for MCP configuration came to be:
type ToolMCPClientConfig struct {
	Name          string
	Description   string
	Enabled       bool
	URL           string
	Headers       map[string]string
	OAuthProvider string
	Tools         []mcp.Tool
}

This ends up stored in Postgres on a slightly larger struct that contains ID, user ID and agent info, but it's not important for this example.
I chose to support only the SSE/streamable HTTP options, as stdio/socket-based servers are probably on the way out due to how limited they are, and it's just too painful to get them running on the server side. What each field is:
Name - this will be presented as the top-level tool name to the LLM
Description - when to use this tool (e.g. if it's a Postgres MCP); the user adding this MCP server should try to be descriptive here
URL - where to find the server
Headers and OAuthProvider are optional, but we can use them for authentication later on
Tools - read-only; the user is not supposed to enter them, we will grab the actions from the server during the initial handshake. Let's touch on this in the next section :)
Initial MCP handshake
When an agent is in its iteration loop you don't want to call the MCP server on every step, as that would add significant latency.
First we construct the client:
...
// Initialize the MCP session
initRequest := mcp.InitializeRequest{
	Params: mcp.InitializeParams{
		ProtocolVersion: mcp.LATEST_PROTOCOL_VERSION,
		Capabilities:    mcp.ClientCapabilities{},
		ClientInfo: mcp.Implementation{
			Name:    "helix-http-client",
			Version: data.GetHelixVersion(),
		},
	},
}

_, err = mcpClient.Initialize(ctx, initRequest)
if err != nil {
	return nil, err
}

return mcpClient, nil

Then connect, authenticate and list the tools:
import (
	...
	"github.com/mark3labs/mcp-go/mcp"
)

func InitializeMCPClientSkill(ctx context.Context, clientGetter ClientGetter, meta agent.Meta, oauthManager *oauth.Manager, cfg *types.AssistantMCP) (*types.ToolMCPClientConfig, error) {
	mcpClient, err := clientGetter.NewClient(ctx, meta, oauthManager, cfg)
	if err != nil {
		return nil, err
	}

	// List tools, server description
	toolsResp, err := mcpClient.ListTools(ctx, mcp.ListToolsRequest{})
	if err != nil {
		return nil, err
	}

	return &types.ToolMCPClientConfig{
		Name:        cfg.Name,
		Description: cfg.Description,
		Tools:       toolsResp.Tools,
	}, nil
}

Here the retrieved tools will act as a cache for all the iterations that the agent is going through.
Presenting MCP Tools as OpenAI Tools
The first thing that needs to happen for the LLM to be able to use a tool is for it to see the names, descriptions and parameters.
For each OpenAI tool within helix we define an interface where one of the methods is:
func (t *MCPClientTool) OpenAI() []openai.Tool {
	return []openai.Tool{
		{
			Type: openai.ToolTypeFunction,
			Function: &openai.FunctionDefinition{
				Name:        "mcp_" + t.mcpTool.Name, // Duplicate MCPs
				Description: t.mcpTool.Description,
				Parameters:  buildParameters(t.mcpTool.InputSchema),
			},
		},
	}
}

Where buildParameters does most of the useful work and can be found here. To get an idea, we have to recursively convert a map[string]any to a jsonschema.Definition:
func convertMapToDefinition(data map[string]any) jsonschema.Definition {
	def := jsonschema.Definition{}

	// Handle type - ensure we always have a valid type
	if typeVal, ok := data["type"].(string); ok && typeVal != "" {
		switch typeVal {
		case "string":
			def.Type = jsonschema.String
		case "integer":
			def.Type = jsonschema.Integer
		...
	...

	// Handle properties (recursive)
	if props, ok := data["properties"].(map[string]any); ok {
		properties := make(map[string]jsonschema.Definition)
		for key, prop := range props {
			if propMap, ok := prop.(map[string]any); ok {
				properties[key] = convertMapToDefinition(propMap)
			}
		}
		if len(properties) > 0 {
			def.Properties = properties
			def.Type = jsonschema.Object
		}
	}
	...

	// Handle items (for arrays)
	if items, ok := data["items"].(map[string]any); ok {
		itemsDef := convertMapToDefinition(items)
		def.Items = &itemsDef
	}

	return def
}

The goal here is to take whatever comes in from MCP and convert it to the OpenAI tools format. Both are JSON in the end, but the shapes are slightly different.
I have noticed that certain MCP tools work with some models but not with others. A good example is the HubSpot MCP not working with Google Gemini models but working fine with OpenAI ones.
Calling the MCP tools from your agent
Thankfully, the MCP tool parameter conversion is the only hard part; once that's done, the actual execution is pretty easy:
import (
	...
	"github.com/mark3labs/mcp-go/mcp"
)

func (t *MCPClientTool) Execute(ctx context.Context, meta agent.Meta, args map[string]any) (string, error) {
	client, err := t.clientGetter.NewClient(ctx, meta, t.oauthManager, &types.AssistantMCP{
		URL:           t.cfg.URL,
		Headers:       t.cfg.Headers,
		OAuthProvider: t.cfg.OAuthProvider,
		OAuthScopes:   t.cfg.OAuthScopes,
	})
	if err != nil {
		return "", err
	}

	req := mcp.CallToolRequest{}
	req.Params.Name = t.mcpTool.Name
	req.Params.Arguments = args

	res, err := client.CallTool(ctx, req)
	if err != nil {
		return "", err
	}

	var results []string
	for _, content := range res.Content {
		switch content := content.(type) {
		case mcp.TextContent:
			results = append(results, content.Text)
		...

That's it: your agent can now talk to any MCP server directly.
Ok, you are ready to face the world
The first thing to do is, of course, connecting your agent to the production database!
Good luck!


