We’re examining how FastMCP turns plain Python functions into robust, observable prompts. FastMCP is an MCP server framework that tries to keep authors in "normal Python" while still exposing rich, well‑typed capabilities to MCP clients. At the center of that effort is prompt.py, which quietly takes a callable, infers its API, and wires it into synchronous or background execution. I’m Mahmoud Zalt, an AI software engineer, and in this article we’ll walk through how this file pulls off that transformation — and what that teaches us about designing function‑first APIs.
From function to prompt: the core flow
To see how a plain function becomes a prompt, we need the dataflow. Once that is clear, the rest of the abstractions fall into place.
```text
fastmcp/
  prompts/
    prompt.py   <-- defines Message, PromptResult, Prompt, FunctionPrompt
```

```text
MCP server
    |
    | calls Prompt._render(arguments, task_meta)
    v
Prompt._render
    |-- check_background_task(...) --(may enqueue)--> Docket
    |                                     |
    |                                     v
    |                            background execution
    |
    `--(if no background)--> Prompt.render(...)  [overridden by FunctionPrompt.render]
                                  |
                                  v
                    user-defined function (wrapped)
                                  |
                                  v
                Prompt.convert_result(...) -> PromptResult
                                  |
                                  v
       PromptResult.to_mcp_prompt_result() -> GetPromptResult
```
The whole module acts as a translator: it adapts plain Python functions into MCP prompts, and MCP responses back into a canonical PromptResult shape.
The main pieces are:
- Message – a single prompt message with role and content, always normalized into MCP‑compatible types.
- PromptResult – a "mailbag" of one or more Message objects plus optional metadata.
- Prompt – an abstract component that knows how to publish itself as an MCP prompt and how to normalize raw results.
- FunctionPrompt – a Prompt that wraps a Python callable, auto‑generates argument schemas, and wires in background execution.
Translators: messages, results, and arguments
With the cast introduced, the interesting question is how the module reduces surface area for prompt authors while still enforcing strong contracts. The answer is layered translation: normalize messages, normalize results, then derive argument metadata from function signatures.
Normalizing messages and results
The simplest boundary is turning arbitrary content into safe, serializable messages. Message does that work.
```python
class Message(pydantic.BaseModel):
    role: Literal["user", "assistant"]
    content: TextContent | EmbeddedResource

    def __init__(
        self,
        content: Any,
        role: Literal["user", "assistant"] = "user",
    ):
        if isinstance(content, (TextContent, EmbeddedResource)):
            normalized_content: TextContent | EmbeddedResource = content
        elif isinstance(content, str):
            normalized_content = TextContent(type="text", text=content)
        else:
            serialized = pydantic_core.to_json(content, fallback=str).decode()
            normalized_content = TextContent(type="text", text=serialized)
        super().__init__(role=role, content=normalized_content)
```
Message hides serialization details and enforces a text‑only wire format. Anything that isn’t already TextContent or EmbeddedResource becomes text. Dicts, lists, and Pydantic models are JSON‑encoded; other values fall back to str. After this point, the rest of the system can assume a small, predictable set of content types.
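To make that rule concrete, here is a standalone sketch of the normalization behavior. It uses plain dicts in place of MCP's TextContent type, so nothing below is FastMCP's actual API, only the same decision logic:

```python
import json
from typing import Any

def normalize_content(content: Any) -> dict:
    """Sketch of Message's rule: strings pass through as text content;
    anything else is JSON-encoded, with str() as the fallback encoder."""
    if isinstance(content, str):
        return {"type": "text", "text": content}
    return {"type": "text", "text": json.dumps(content, default=str)}

normalize_content("hello")
# {'type': 'text', 'text': 'hello'}
normalize_content({"tone": "friendly", "max_words": 50})
# {'type': 'text', 'text': '{"tone": "friendly", "max_words": 50}'}
```

Whatever shape the author produces, downstream code only ever sees one content type.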
One level up, Prompt.convert_result plays the same role for whole prompt outputs:
```python
def convert_result(self, raw_value: Any) -> PromptResult:
    if isinstance(raw_value, PromptResult):
        return raw_value
    if isinstance(raw_value, str):
        return PromptResult(raw_value, description=self.description, meta=self.meta)
    if isinstance(raw_value, list | tuple):
        messages: list[Message] = []
        for i, item in enumerate(raw_value):
            if isinstance(item, Message):
                messages.append(item)
            elif isinstance(item, str):
                messages.append(Message(item))
            else:
                raise TypeError(
                    f"messages[{i}] must be Message or str, got {type(item).__name__}. "
                    f"Use Message({item!r}) to wrap the value."
                )
        return PromptResult(messages, description=self.description, meta=self.meta)
    raise TypeError(
        f"Prompt must return str, list[Message], or PromptResult, "
        f"got {type(raw_value).__name__}"
    )
```
This is a straightforward adapter: it lets authors return a friendly shape (a string, or a list of strings and messages) but guarantees that everything inside the framework is a PromptResult. Because normalization is centralized, every prompt gets the same behavior and error messages, and there’s only one place to change when you expand supported return types.
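The same acceptance rules can be sketched independently of FastMCP's types. In this simplified version, plain message dicts stand in for Message objects and the return value stands in for PromptResult:

```python
from typing import Any

def to_messages(raw: Any) -> list[dict]:
    """Sketch of the adapter: accept a string or a list of
    strings/message dicts; always return a list of message dicts."""
    if isinstance(raw, str):
        raw = [raw]
    if isinstance(raw, (list, tuple)):
        out = []
        for i, item in enumerate(raw):
            if isinstance(item, dict):
                out.append(item)
            elif isinstance(item, str):
                out.append({"role": "user", "content": item})
            else:
                raise TypeError(f"messages[{i}] must be dict or str")
        return out
    raise TypeError("prompt must return str or list")

to_messages("summarize this")
# [{'role': 'user', 'content': 'summarize this'}]
```

Authors get flexibility at the call site; everything past this function is one canonical shape.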
Deriving argument metadata from function signatures
The other half of the translator story lives in FunctionPrompt.from_function: take a Python function and turn its parameters into prompt arguments that MCP can expose.
Conceptually, this factory method does four things:
- Enforces a predictable signature: no *args/**kwargs, no anonymous lambdas.
- Normalizes background configuration through TaskConfig.
- Resolves dependency‑injected parameters via without_injected_parameters so only true user inputs show up as arguments.
- Uses Pydantic to derive JSON schema and turn each parameter into a PromptArgument.
For non‑string parameters, it enriches descriptions with JSON schema. In simplified form:
Embedding JSON schema into argument descriptions:

```python
for param_name, param in parameters["properties"].items():
    arg_description = param.get("description")
    if param_name in sig.parameters:
        sig_param = sig.parameters[param_name]
        if (
            sig_param.annotation != inspect.Parameter.empty
            and sig_param.annotation is not str
        ):
            try:
                param_adapter = get_cached_typeadapter(sig_param.annotation)
                param_schema = param_adapter.json_schema()
                schema_str = json.dumps(param_schema, separators=(",", ":"))
                schema_note = (
                    f"Provide as a JSON string matching the following schema: {schema_str}"
                )
                if arg_description:
                    arg_description = f"{arg_description}\n\n{schema_note}"
                else:
                    arg_description = schema_note
            except Exception:
                pass
```
Clients still send strings, but the schema note tells humans (and UIs) the exact JSON shape that string should obey. Type hints are doing double duty here: they power validation and also generate documentation‑quality descriptions automatically.
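The gate that decides which parameters get a schema note can be shown with the standard library alone. The function below is hypothetical, used only to illustrate the rule that an annotation must exist and must not be str:

```python
from typing import get_type_hints

def research_prompt(topic: str, filters: dict[str, str], limit: int = 5) -> str:
    """Hypothetical prompt function, for illustration only."""
    return topic

# Sketch of the gate: only parameters whose annotation is present
# and is not `str` would get a JSON-schema note appended.
hints = get_type_hints(research_prompt)
needs_schema_note = [
    name for name, tp in hints.items()
    if name != "return" and tp is not str
]
needs_schema_note  # ['filters', 'limit']
```

In the real module, Pydantic then turns each of those annotations into a JSON schema string for the description.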
One entrypoint for sync and background
Once messages, results, and arguments are standardized, the remaining question is execution: do we run this prompt now or schedule it as a background task? Prompt._render centralizes that decision.
```python
async def _render(
    self,
    arguments: dict[str, Any] | None = None,
    task_meta: TaskMeta | None = None,
) -> PromptResult | mcp.types.CreateTaskResult:
    from fastmcp.server.tasks.routing import check_background_task

    task_result = await check_background_task(
        component=self,
        task_type="prompt",
        arguments=arguments,
        task_meta=task_meta,
    )
    if task_result:
        return task_result

    result = await self.render(arguments)
    return self.convert_result(result)
```
_render orchestrates routing, execution, and normalization. This is a template method: it fixes the high‑level algorithm — check background routing, execute, normalize — while delegating the render step to subclasses like FunctionPrompt.
Two properties matter here:
- Uniform entrypoint. The MCP server always calls _render; it doesn’t need to know about Docket, TaskConfig, or any task routing details.
- Opt‑in background support. Any Prompt subclass can participate in tasks by defining a suitable task_config; FunctionPrompt builds on that by registering with Docket via helpers like register_with_docket and add_to_docket.
Because routing and normalization live in this single method, cross‑cutting concerns like tracing, metrics, and logging can also be attached once. You can measure prompt latency, task creation rates, or error counts without each implementation re‑handling those details.
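A minimal sketch of that "attach once" idea, using a synchronous decorator rather than FastMCP's async machinery (the metric sink is just print here, a stand-in for a real metrics client):

```python
import functools
import time

def with_latency_metric(render_fn):
    """Sketch: because every prompt flows through one entrypoint,
    latency measurement is attached once, not inside each subclass."""
    @functools.wraps(render_fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return render_fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{render_fn.__name__} took {elapsed_ms:.2f}ms")
    return wrapper

@with_latency_metric
def render(arguments):
    return f"rendered with {arguments}"
```

Every concrete prompt inherits the measurement for free, because there is exactly one call path to instrument.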
String arguments, real Python types
Everything we’ve seen so far sets up the API surface. The last piece is what makes authoring pleasant: calling your function with real Python types while the protocol still speaks strings. FunctionPrompt._convert_string_arguments is the bridge.
Its job is to take the MCP‑style argument dict and produce a kwargs dict that matches the wrapped function’s signature and types. In outline, it:
- Looks up the wrapped function’s signature.
- Skips conversion for unannotated or str parameters.
- For parameters annotated with other types, uses a cached Pydantic TypeAdapter to convert the string value, first attempting JSON, then direct parsing.
- Raises a PromptError with a detailed message if conversion fails.
```python
def _convert_string_arguments(self, kwargs: dict[str, Any]) -> dict[str, Any]:
    from fastmcp.server.dependencies import without_injected_parameters

    wrapper_fn = without_injected_parameters(self.fn)
    sig = inspect.signature(wrapper_fn)
    converted_kwargs = {}
    for param_name, param_value in kwargs.items():
        if param_name in sig.parameters:
            param = sig.parameters[param_name]
            if (
                param.annotation == inspect.Parameter.empty
                or param.annotation is str
            ) or not isinstance(param_value, str):
                converted_kwargs[param_name] = param_value
            else:
                try:
                    adapter = get_cached_typeadapter(param.annotation)
                    try:
                        converted_kwargs[param_name] = adapter.validate_json(
                            param_value
                        )
                    except (ValueError, TypeError, pydantic_core.ValidationError):
                        converted_kwargs[param_name] = adapter.validate_python(
                            param_value
                        )
                except (ValueError, TypeError, pydantic_core.ValidationError) as e:
                    raise PromptError(
                        f"Could not convert argument '{param_name}' with value '{param_value}' "
                        f"to expected type {param.annotation}. Error: {e}"
                    ) from e
        else:
            converted_kwargs[param_name] = param_value
    return converted_kwargs
This function sits on the hot path for prompt execution, but its cost is dominated by Pydantic’s validation and JSON parsing — typically negligible compared to model inference or external I/O. More interesting is the architecture: FunctionPrompt had already applied without_injected_parameters in from_function; repeating it here is redundant and slightly blurs the mental model. Treating self.fn as already wrapped keeps the dataflow clearer.
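The JSON-first fallback order at the heart of this method can be sketched with plain type constructors standing in for Pydantic's TypeAdapter:

```python
import json

def coerce(value: str, target: type):
    """Sketch of the JSON-first strategy: try to parse the wire string
    as JSON, then hand the result (or the raw string) to the target type."""
    try:
        return target(json.loads(value))
    except (ValueError, TypeError):
        return target(value)

coerce("3", int)           # 3
coerce("[1, 2, 3]", list)  # [1, 2, 3]
coerce("plain text", str)  # 'plain text'
```

The real implementation gets much richer validation from TypeAdapter (nested models, unions, constraints), but the two-step order is the same: structured JSON first, raw value second.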
How FunctionPrompt.render ties it together
FunctionPrompt.render combines the abstractions into one straightforward pipeline:
- Validate that all required PromptArguments have values.
- Copy the incoming arguments dict to avoid mutation.
- Run _convert_string_arguments to apply type conversions.
- Call the wrapped function (awaiting if it is async).
- Hand the result to convert_result (either directly or via _render).
- On any exception, log using logger.exception and raise a user‑facing PromptError.
Clients get a clean error like "Error rendering prompt X" instead of a stack trace; operators still see full diagnostics in logs. This separation of concerns — internal detail in logs, sanitized messages at the boundary — is essential when prompts become part of a shared platform.
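Here is a minimal sketch of that boundary, with hypothetical names (render_safely is not a FastMCP function): the full traceback goes to operator logs via logger.exception, while clients only ever see a stable, sanitized message.

```python
import logging

logger = logging.getLogger("prompts")

class PromptError(Exception):
    """Stable, client-facing error type."""

def render_safely(prompt_name: str, fn, **kwargs):
    """Sketch of the error boundary: log internals, raise a clean error."""
    try:
        return fn(**kwargs)
    except Exception as exc:
        logger.exception("Error rendering prompt %r", prompt_name)
        raise PromptError(f"Error rendering prompt {prompt_name!r}") from exc
```

The `from exc` chaining preserves the original cause for anyone debugging with the logs, without leaking it across the API boundary.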
Design lessons you can reuse
The core idea in prompt.py is simple but powerful: treat plain functions and their type hints as the contract, then build a small set of adapters around them to handle schemas, execution modes, and error boundaries. That approach gives authors the feeling of "just writing a function" while still producing a production‑ready MCP surface.
| Pattern | How it appears here | How you can reuse it |
|---|---|---|
| Centralized normalization | Message and convert_result turn many shapes into one canonical type. | Define one normalization layer per boundary and route all inputs/outputs through it instead of letting endpoints serialize ad hoc. |
| Template method orchestration | Prompt._render owns routing, execution, and normalization. | Put cross‑cutting concerns (background routing, metrics, logging) into a base method; let concrete implementations focus on business logic. |
| Type hints as UX | Pydantic schemas enrich argument descriptions with JSON schema. | Derive documentation and client hints directly from annotations, especially when rich data must pass through string‑only channels. |
| Error wrapping with logging | FunctionPrompt.render logs full exceptions and raises PromptError. | Separate what operators see (structured logs) from what clients see (stable error types) at your public API boundary. |
There are also a few concrete refactors and extensions this design suggests for similar systems:
- Extract dense blocks such as the "append JSON schema note" logic into helpers; orchestration methods should read like a story.
- Avoid redundant wrapping (like calling without_injected_parameters twice) so that the path from incoming request to function call is easy to reason about.
- Consider adding structured fields to domain errors (for example, an error_type on PromptError) so observability tools can categorize failures without parsing free‑form messages.
If you are building a framework, plugin system, or internal platform, the main takeaway is this: design around simple, type‑hinted functions, and invest in a small number of carefully designed adapters — for normalization, scheduling, and error handling. Do that well, and you give engineers the ergonomics of plain functions with the robustness of a full‑fledged prompt framework.