
Zalt Blog

Deep Dives into Code & Architecture

When Plain Functions Become Robust Prompts

By Mahmoud Zalt
Code Cracking
20m read

When plain functions become robust prompts, you stop thinking in prompt templates and start thinking in code, while still getting reliable prompt behavior.

We’re examining how FastMCP turns plain Python functions into robust, observable prompts. FastMCP is an MCP server framework that tries to keep authors in "normal Python" while still exposing rich, well‑typed capabilities to MCP clients. At the center of that effort is prompt.py, which quietly takes a callable, infers its API, and wires it into synchronous or background execution. I’m Mahmoud Zalt, an AI software engineer, and in this article we’ll walk through how this file pulls off that transformation — and what that teaches us about designing function‑first APIs.

From function to prompt: the core flow

To see how a plain function becomes a prompt, we need the dataflow. Once that is clear, the rest of the abstractions fall into place.

fastmcp/
  prompts/
    prompt.py   <-- defines Message, PromptResult, Prompt, FunctionPrompt

MCP server
   |
   | calls Prompt._render(arguments, task_meta)
   v
Prompt._render
   |-- check_background_task(...)  --(may enqueue)-->  Docket
   |                                       |
   |                                       v
   |                               background execution
   |
   `--(if no background)--> Prompt.render(...)  [overridden by FunctionPrompt.render]
                                   |
                                   v
                           user-defined function (wrapped)
                                   |
                                   v
                           Prompt.convert_result(...) -> PromptResult
                                   |
                                   v
                         PromptResult.to_mcp_prompt_result() -> GetPromptResult
From MCP call to user function and back.

The whole module acts as a translator: it adapts plain Python functions into MCP prompts, and MCP responses back into a canonical PromptResult shape.

The main pieces are:

  • Message – a single prompt message with role and content, always normalized into MCP‑compatible types.
  • PromptResult – a "mailbag" of one or more Message objects plus optional metadata.
  • Prompt – an abstract component that knows how to publish itself as an MCP prompt and how to normalize raw results.
  • FunctionPrompt – a Prompt that wraps a Python callable, auto‑generates argument schemas, and wires in background execution.

Translators: messages, results, and arguments

With the cast introduced, the interesting question is how the module reduces surface area for prompt authors while still enforcing strong contracts. The answer is layered translation: normalize messages, normalize results, then derive argument metadata from function signatures.

Normalizing messages and results

The simplest boundary is turning arbitrary content into safe, serializable messages. Message does that work.

class Message(pydantic.BaseModel):
    role: Literal["user", "assistant"]
    content: TextContent | EmbeddedResource

    def __init__(
        self,
        content: Any,
        role: Literal["user", "assistant"] = "user",
    ):
        if isinstance(content, (TextContent, EmbeddedResource)):
            normalized_content: TextContent | EmbeddedResource = content
        elif isinstance(content, str):
            normalized_content = TextContent(type="text", text=content)
        else:
            serialized = pydantic_core.to_json(content, fallback=str).decode()
            normalized_content = TextContent(type="text", text=serialized)

        super().__init__(role=role, content=normalized_content)
Message hides serialization details and enforces a text‑only wire format.

Anything that isn’t already TextContent or EmbeddedResource becomes text. Dicts, lists, and Pydantic models are JSON‑encoded; other values fall back to str. After this point, the rest of the system can assume a small, predictable set of content types.
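As an illustration, the same normalization can be sketched with only the standard library. This is a minimal sketch, not FastMCP's code: a dataclass stands in for the real Pydantic model, and `normalize_content` is a hypothetical helper name.

```python
import json
from dataclasses import dataclass
from typing import Any


@dataclass
class TextContent:
    type: str
    text: str


def normalize_content(content: Any) -> TextContent:
    """Collapse arbitrary content into a text-only wire format."""
    if isinstance(content, TextContent):
        return content  # already wire-compatible, pass through
    if isinstance(content, str):
        return TextContent(type="text", text=content)
    # Everything else is JSON-encoded; default=str plays the role of
    # the fallback=str behavior in pydantic_core.to_json.
    return TextContent(type="text", text=json.dumps(content, default=str))
```

After this helper runs, downstream code only ever sees `TextContent`, regardless of what the author returned.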

One level up, Prompt.convert_result plays the same role for whole prompt outputs:

def convert_result(self, raw_value: Any) -> PromptResult:
    if isinstance(raw_value, PromptResult):
        return raw_value

    if isinstance(raw_value, str):
        return PromptResult(raw_value, description=self.description, meta=self.meta)

    if isinstance(raw_value, list | tuple):
        messages: list[Message] = []
        for i, item in enumerate(raw_value):
            if isinstance(item, Message):
                messages.append(item)
            elif isinstance(item, str):
                messages.append(Message(item))
            else:
                raise TypeError(
                    f"messages[{i}] must be Message or str, got {type(item).__name__}. "
                    f"Use Message({item!r}) to wrap the value."
                )
        return PromptResult(messages, description=self.description, meta=self.meta)

    raise TypeError(
        f"Prompt must return str, list[Message], or PromptResult, "
        f"got {type(raw_value).__name__}"
    )
Multiple output shapes collapse into a single PromptResult type.

This is a straightforward adapter: it lets authors return a friendly shape (string, list of strings/messages) but guarantees that everything inside the framework is a PromptResult. Because normalization is centralized, every prompt gets the same behavior and error messages, and there’s only one place to change when you expand supported return types.
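To see the collapse in action, here is a behavior demo using simplified stand-ins: plain dataclasses instead of the real Message and PromptResult, with the strict error branches trimmed down.

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class Message:
    text: str
    role: str = "user"


@dataclass
class PromptResult:
    messages: list


def convert_result(raw_value: Any) -> PromptResult:
    # Pass-through, bare string, and mixed list shapes all collapse
    # into the single canonical PromptResult type.
    if isinstance(raw_value, PromptResult):
        return raw_value
    if isinstance(raw_value, str):
        return PromptResult([Message(raw_value)])
    if isinstance(raw_value, (list, tuple)):
        return PromptResult(
            [m if isinstance(m, Message) else Message(m) for m in raw_value]
        )
    raise TypeError(f"Unsupported return type: {type(raw_value).__name__}")


# Three author-friendly shapes, one framework-facing shape:
a = convert_result("hi")
b = convert_result(["hi", Message("sure", role="assistant")])
c = convert_result(PromptResult([Message("hi")]))
```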

Deriving argument metadata from function signatures

The other half of the translator story lives in FunctionPrompt.from_function: take a Python function and turn its parameters into prompt arguments that MCP can expose.

Conceptually, this factory method does four things:

  1. Enforces a predictable signature: no *args/**kwargs, no anonymous lambdas.
  2. Normalizes background configuration through TaskConfig.
  3. Resolves dependency‑injected parameters via without_injected_parameters so only true user inputs show up as arguments.
  4. Uses Pydantic to derive JSON schema and turn each parameter into a PromptArgument.
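Step 1 above is simple to sketch with `inspect` alone. The `validate_prompt_signature` helper name is hypothetical, not FastMCP's actual API:

```python
import inspect
from typing import Callable


def validate_prompt_signature(fn: Callable) -> None:
    """Reject callables that can't be mapped to named prompt arguments."""
    if getattr(fn, "__name__", "") == "<lambda>":
        raise ValueError("Prompts must be named functions, not lambdas")
    for param in inspect.signature(fn).parameters.values():
        if param.kind is inspect.Parameter.VAR_POSITIONAL:
            raise ValueError(f"*{param.name} is not allowed in prompt signatures")
        if param.kind is inspect.Parameter.VAR_KEYWORD:
            raise ValueError(f"**{param.name} is not allowed in prompt signatures")


def summarize(topic: str, style: str = "concise") -> str:
    return f"Summarize {topic} in a {style} style."


validate_prompt_signature(summarize)  # passes: every parameter is nameable
```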

For non‑string parameters, it enriches descriptions with JSON schema. In simplified form:

for param_name, param in parameters["properties"].items():
    arg_description = param.get("description")

    if param_name in sig.parameters:
        sig_param = sig.parameters[param_name]
        if (
            sig_param.annotation != inspect.Parameter.empty
            and sig_param.annotation is not str
        ):
            try:
                param_adapter = get_cached_typeadapter(sig_param.annotation)
                param_schema = param_adapter.json_schema()
                schema_str = json.dumps(param_schema, separators=(",", ":"))
                schema_note = (
                    f"Provide as a JSON string matching the following schema: {schema_str}"
                )
                if arg_description:
                    arg_description = f"{arg_description}\n\n{schema_note}"
                else:
                    arg_description = schema_note
            except Exception:
                pass
Embedding JSON schema into argument descriptions.

Clients still send strings, but the schema note tells humans (and UIs) the exact JSON shape that string should obey. Type hints are doing double duty here: they power validation and also generate documentation‑quality descriptions automatically.

One entrypoint for sync and background

Once messages, results, and arguments are standardized, the remaining question is execution: do we run this prompt now or schedule it as a background task? Prompt._render centralizes that decision.

async def _render(
    self,
    arguments: dict[str, Any] | None = None,
    task_meta: TaskMeta | None = None,
) -> PromptResult | mcp.types.CreateTaskResult:
    from fastmcp.server.tasks.routing import check_background_task

    task_result = await check_background_task(
        component=self,
        task_type="prompt",
        arguments=arguments,
        task_meta=task_meta,
    )
    if task_result:
        return task_result

    result = await self.render(arguments)
    return self.convert_result(result)
_render orchestrates routing, execution, and normalization.

This is a template method: it fixes the high‑level algorithm — check background routing, execute, normalize — while delegating the render step to subclasses like FunctionPrompt.

Two properties matter here:

  • Uniform entrypoint. The MCP server always calls _render; it doesn’t need to know about Docket, TaskConfig, or any task routing details.
  • Opt‑in background support. Any Prompt subclass can participate in tasks by defining a suitable task_config; FunctionPrompt builds on that by registering with Docket via helpers like register_with_docket and add_to_docket.

Because routing and normalization live in this single method, cross‑cutting concerns like tracing, metrics, and logging can also be attached once. You can measure prompt latency, task creation rates, or error counts without each implementation re‑handling those details.

String arguments, real Python types

Everything we’ve seen so far sets up the API surface. The last piece is what makes authoring pleasant: calling your function with real Python types while the protocol still speaks strings. FunctionPrompt._convert_string_arguments is the bridge.

Its job is to take the MCP‑style argument dict and produce a kwargs dict that matches the wrapped function’s signature and types. In outline, it:

  • Looks up the wrapped function’s signature.
  • Ignores conversion for unannotated or str parameters.
  • For parameters annotated with other types, uses a cached Pydantic TypeAdapter to convert the string value, first attempting JSON, then direct parsing.
  • Raises a PromptError with a detailed message if conversion fails.
def _convert_string_arguments(self, kwargs: dict[str, Any]) -> dict[str, Any]:
    from fastmcp.server.dependencies import without_injected_parameters

    wrapper_fn = without_injected_parameters(self.fn)
    sig = inspect.signature(wrapper_fn)
    converted_kwargs = {}

    for param_name, param_value in kwargs.items():
        if param_name in sig.parameters:
            param = sig.parameters[param_name]

            if (
                param.annotation == inspect.Parameter.empty
                or param.annotation is str
            ) or not isinstance(param_value, str):
                converted_kwargs[param_name] = param_value
            else:
                try:
                    adapter = get_cached_typeadapter(param.annotation)
                    try:
                        converted_kwargs[param_name] = adapter.validate_json(
                            param_value
                        )
                    except (ValueError, TypeError, pydantic_core.ValidationError):
                        converted_kwargs[param_name] = adapter.validate_python(
                            param_value
                        )
                except (ValueError, TypeError, pydantic_core.ValidationError) as e:
                    raise PromptError(
                        f"Could not convert argument '{param_name}' with value '{param_value}' "
                        f"to expected type {param.annotation}. Error: {e}"
                    ) from e
        else:
            converted_kwargs[param_name] = param_value

    return converted_kwargs
Arguments arrive as strings and leave as well‑typed Python values.

This function sits on the hot path for prompt execution, but its cost is dominated by Pydantic’s validation and JSON parsing — typically negligible compared to model inference or external I/O. More interesting is the architecture: FunctionPrompt had already applied without_injected_parameters in from_function; repeating it here is redundant and slightly blurs the mental model. Treating self.fn as already wrapped keeps the dataflow clearer.
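The conversion strategy generalizes beyond Pydantic. A stdlib-only sketch, with `json.loads` standing in for `TypeAdapter.validate_json` and a hypothetical helper name:

```python
import inspect
import json
from typing import Any, Callable


def convert_string_arguments(fn: Callable, kwargs: dict[str, Any]) -> dict[str, Any]:
    """Convert MCP's string arguments into the types the signature declares."""
    sig = inspect.signature(fn)
    converted: dict[str, Any] = {}
    for name, value in kwargs.items():
        param = sig.parameters.get(name)
        needs_conversion = (
            param is not None
            and param.annotation not in (inspect.Parameter.empty, str)
            and isinstance(value, str)
        )
        if not needs_conversion:
            converted[name] = value  # unannotated / str params pass through
            continue
        try:
            converted[name] = json.loads(value)  # '2' -> 2, '["a"]' -> ['a']
        except ValueError as e:
            raise ValueError(f"Could not convert argument {name!r}") from e
    return converted


def pick(items: list[str], count: int) -> list[str]:
    return items[:count]


converted = convert_string_arguments(pick, {"items": '["a", "b", "c"]', "count": "2"})
# converted == {"items": ["a", "b", "c"], "count": 2}
```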

How FunctionPrompt.render ties it together

FunctionPrompt.render combines the abstractions into one straightforward pipeline:

  1. Validate that all required PromptArguments have values.
  2. Copy the incoming arguments dict to avoid mutation.
  3. Run _convert_string_arguments to apply type conversions.
  4. Call the wrapped function (awaiting if it is async).
  5. Hand the result to convert_result (either directly or via _render).
  6. On any exception, log using logger.exception and raise a user‑facing PromptError.

Clients get a clean error like "Error rendering prompt X" instead of a stack trace; operators still see full diagnostics in logs. This separation of concerns — internal detail in logs, sanitized messages at the boundary — is essential when prompts become part of a shared platform.
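The error boundary itself fits in a few lines. A sketch of the pattern, not FastMCP's exact code:

```python
import logging

logger = logging.getLogger("prompts")


class PromptError(Exception):
    """User-facing error: stable message, no internal stack details."""


def render_safely(name: str, fn, **kwargs):
    """Run a prompt function; log full diagnostics, surface a sanitized error."""
    try:
        return fn(**kwargs)
    except Exception as exc:
        logger.exception("Error rendering prompt %s", name)  # operators: full trace
        raise PromptError(f"Error rendering prompt {name}") from exc  # clients: stable message


def broken_prompt() -> str:
    raise RuntimeError("db password rejected for host 10.0.0.7")


try:
    render_safely("daily_summary", broken_prompt)
except PromptError as e:
    assert "10.0.0.7" not in str(e)  # internals never reach the client
```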

Design lessons you can reuse

The core idea in prompt.py is simple but powerful: treat plain functions and their type hints as the contract, then build a small set of adapters around them to handle schemas, execution modes, and error boundaries. That approach gives authors the feeling of "just writing a function" while still producing a production‑ready MCP surface.

  • Centralized normalization – here, Message and convert_result turn many shapes into one canonical type. Reuse it by defining one normalization layer per boundary and routing all inputs and outputs through it instead of letting endpoints serialize ad hoc.
  • Template method orchestration – here, Prompt._render owns routing, execution, and normalization. Reuse it by putting cross‑cutting concerns (background routing, metrics, logging) into a base method and letting concrete implementations focus on business logic.
  • Type hints as UX – here, Pydantic schemas enrich argument descriptions with JSON schema. Reuse it by deriving documentation and client hints directly from annotations, especially when rich data must pass through string‑only channels.
  • Error wrapping with logging – here, FunctionPrompt.render logs full exceptions and raises PromptError. Reuse it by separating what operators see (structured logs) from what clients see (stable error types) at your public API boundary.

There are also a few concrete refactors and extensions this design suggests for similar systems:

  • Extract dense blocks such as the "append JSON schema note" logic into helpers; orchestration methods should read like a story.
  • Avoid redundant wrapping (like calling without_injected_parameters twice) so that the path from incoming request to function call is easy to reason about.
  • Consider adding structured fields to domain errors (for example, an error_type on PromptError) so observability tools can categorize failures without parsing free‑form messages.
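The third suggestion is nearly a one-line change. Here is a sketch of what a structured PromptError could look like; the error_type field is this article's proposal, not current FastMCP behavior:

```python
class PromptError(Exception):
    """Domain error carrying a machine-readable category alongside the message."""

    def __init__(self, message: str, error_type: str = "unknown"):
        super().__init__(message)
        self.error_type = error_type  # e.g. "argument_conversion", "render_failure"


err = PromptError(
    "Could not convert argument 'count' to expected type int",
    error_type="argument_conversion",
)
# Dashboards can now group on err.error_type instead of regex-parsing messages.
```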

If you are building a framework, plugin system, or internal platform, the main takeaway is this: design around simple, type‑hinted functions, and invest in a small number of carefully designed adapters — for normalization, scheduling, and error handling. Do that well, and you give engineers the ergonomics of plain functions with the robustness of a full‑fledged prompt framework.

Full Source Code

Direct source from the upstream repository: heads/main/src/fastmcp/prompts/prompt.py in jlowin/fastmcp, available on GitHub.

Thanks for reading! I hope this was useful. If you have questions or thoughts, feel free to reach out.

Content Creation Process: This article was generated via a semi-automated workflow using AI tools. I prepared the strategic framework, including specific prompts and data sources. From there, the automation system conducted the research, analysis, and writing. The content passed through automated verification steps before being finalized and published without manual intervention.

Mahmoud Zalt

About the Author

I’m Zalt, a technologist with 16+ years of experience, passionate about designing and building AI systems that move us closer to a world where machines handle everything and humans reclaim wonder.

Let's connect if you're working on interesting AI projects, looking for technical advice or want to discuss anything.
