Implementing Resource Requests in MCP Clients: A Complete Guide

Author: Anablock
AI Insights & Innovations

Resources in MCP allow your server to expose data that can be included directly in prompts, rather than requiring tool calls to access information. This creates a more efficient way to provide context to AI models like Claude.

Understanding Resource Requests

When you've defined resources on your MCP server, your client needs a way to request and use them. The client acts as a bridge between your application and the MCP server, handling the communication and data parsing automatically.

The flow is straightforward: when a user wants to reference a document (like typing @report.pdf), your application uses the MCP client to fetch that resource from the server and include its contents directly in the prompt sent to Claude.

Implementing Resource Reading

The core functionality requires a read_resource function in your MCP client. This function takes a URI parameter identifying which resource to fetch:

async def read_resource(self, uri: str) -> Any:
    result = await self.session().read_resource(AnyUrl(uri))
    resource = result.contents[0]

The response from the MCP server contains a contents list. You typically only need the first element, which contains the actual resource data along with metadata like the MIME type.

Handling Different Content Types

Resources can return different types of content, so your client needs to parse them appropriately. The MIME type tells you how to handle the data:

if isinstance(resource, types.TextResourceContents):
    if resource.mimeType == "application/json":
        return json.loads(resource.text)
    
    return resource.text

This approach ensures that JSON resources are properly parsed into Python objects, while plain text resources are returned as strings. The MIME type acts as your hint for determining the correct parsing strategy.
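The dispatch logic can be seen in isolation with a simple stand-in class (FakeTextResource below is illustrative only, standing in for the real types.TextResourceContents):

```python
import json
from dataclasses import dataclass

# Stand-in for types.TextResourceContents, for illustration only.
@dataclass
class FakeTextResource:
    mimeType: str
    text: str

def parse_resource(resource: FakeTextResource):
    """Dispatch on MIME type: JSON is decoded, everything else stays a string."""
    if resource.mimeType == "application/json":
        return json.loads(resource.text)
    return resource.text

print(parse_resource(FakeTextResource("application/json", '{"pages": 3}')))  # {'pages': 3}
print(parse_resource(FakeTextResource("text/plain", "hello")))               # hello
```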

Required Imports

To make this work properly, you'll need these imports in your MCP client:

import json
from pydantic import AnyUrl

The json module handles parsing JSON responses, while AnyUrl ensures proper type handling for the URI parameter.

Complete Implementation Example

Here's a more complete example of a resource reading implementation:

import json
from typing import Any
from pydantic import AnyUrl
from mcp import types

class MCPClient:
    async def read_resource(self, uri: str) -> Any:
        """Read a resource from the MCP server.
        
        Args:
            uri: The resource URI (e.g., "docs://documents/report.pdf")
            
        Returns:
            Parsed resource content (dict for JSON, str for text)
        """
        # Request the resource from the server
        result = await self.session().read_resource(AnyUrl(uri))
        
        # Extract the first content item
        resource = result.contents[0]
        
        # Handle text-based resources
        if isinstance(resource, types.TextResourceContents):
            # Parse JSON if applicable
            if resource.mimeType == "application/json":
                return json.loads(resource.text)
            
            # Return plain text
            return resource.text
        
        # Handle binary resources if needed
        elif isinstance(resource, types.BlobResourceContents):
            return resource.blob
        
        # Fallback for unknown types
        return resource

Testing Resource Access

Once implemented, you can test the functionality through your CLI application. When you type something like "What's in the @report.pdf document?", the system should:

  1. Show available resources in an autocomplete list
  2. Allow you to select a resource from the list
  3. Fetch the resource content automatically
  4. Include that content in the prompt to Claude

The key advantage is that Claude receives the document content directly in the prompt, eliminating the need for tool calls to access the information. This makes interactions faster and more efficient.

Building an Autocomplete Feature

To enhance user experience, you can implement autocomplete for resource mentions:

async def list_resources(self) -> list[dict]:
    """List all available resources for autocomplete."""
    result = await self.session().list_resources()
    
    return [
        {
            "uri": resource.uri,
            "name": resource.name,
            "description": resource.description,
            "mimeType": resource.mimeType
        }
        for resource in result.resources
    ]

This allows your application to show users what resources are available when they type @.
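For instance, a small helper (the function name and display format here are our own, not part of MCP) could turn that metadata into menu entries:

```python
def format_autocomplete(resources: list[dict]) -> list[str]:
    """Turn resource metadata into display strings for an @-mention menu."""
    return [
        f"@{r['name']} - {r['description'] or 'no description'}"
        for r in resources
    ]

entries = format_autocomplete([
    {"uri": "docs://documents/report.pdf", "name": "report.pdf",
     "description": "Quarterly report", "mimeType": "application/pdf"},
])
print(entries)  # ['@report.pdf - Quarterly report']
```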

Integration with Your Application

Remember that the MCP client code you write gets used by other parts of your application. The read_resource function becomes a building block that other components can call to:

  • Fetch document contents for inclusion in prompts
  • List available resources for autocomplete menus
  • Integrate resource data into conversational context
  • Cache frequently accessed resources for performance

This separation of concerns keeps your code clean: the MCP client handles communication with the server, while your application logic focuses on how to use that data effectively.

Error Handling Best Practices

Always implement robust error handling for resource requests:

async def read_resource(self, uri: str) -> Any:
    try:
        result = await self.session().read_resource(AnyUrl(uri))
        resource = result.contents[0]
        
        if isinstance(resource, types.TextResourceContents):
            if resource.mimeType == "application/json":
                return json.loads(resource.text)
            return resource.text
        
        # Binary resources and unknown types need explicit returns too,
        # otherwise the function falls through and silently returns None
        if isinstance(resource, types.BlobResourceContents):
            return resource.blob
        
        return resource
        
    except IndexError:
        raise ValueError(f"Resource '{uri}' returned no content")
    except json.JSONDecodeError as e:
        raise ValueError(f"Invalid JSON in resource '{uri}': {e}")
    except Exception as e:
        raise RuntimeError(f"Failed to read resource '{uri}': {e}")

Performance Considerations

Caching Resources

For frequently accessed resources, implement caching:

from datetime import datetime, timedelta

class MCPClient:
    def __init__(self):
        self._resource_cache = {}
        self._cache_ttl = timedelta(minutes=5)
    
    async def read_resource(self, uri: str, use_cache: bool = True) -> Any:
        # Check cache first
        if use_cache and uri in self._resource_cache:
            cached_data, cached_time = self._resource_cache[uri]
            if datetime.now() - cached_time < self._cache_ttl:
                return cached_data
        
        # Fetch from server
        result = await self.session().read_resource(AnyUrl(uri))
        resource = result.contents[0]
        
        # Parse content
        if isinstance(resource, types.TextResourceContents):
            if resource.mimeType == "application/json":
                data = json.loads(resource.text)
            else:
                data = resource.text
        else:
            data = resource
        
        # Cache the result
        if use_cache:
            self._resource_cache[uri] = (data, datetime.now())
        
        return data

Batch Resource Loading

When you need multiple resources, load them concurrently:

import asyncio

async def read_multiple_resources(self, uris: list[str]) -> dict[str, Any]:
    """Read multiple resources concurrently."""
    tasks = [self.read_resource(uri) for uri in uris]
    results = await asyncio.gather(*tasks, return_exceptions=True)
    
    return {
        uri: result if not isinstance(result, Exception) else None
        for uri, result in zip(uris, results)
    }

Real-World Example: Document Chat Application

Here's how you might use resource reading in a document chat application:

class DocumentChatApp:
    def __init__(self, mcp_client: MCPClient):
        self.mcp_client = mcp_client
    
    async def process_message(self, user_message: str) -> str:
        # Extract resource mentions (e.g., @report.pdf)
        mentions = self.extract_mentions(user_message)
        
        # Fetch all mentioned resources
        resources = await self.mcp_client.read_multiple_resources(mentions)
        
        # Build context for Claude
        context = self.build_context(resources)
        
        # Send to Claude with resource content included
        prompt = f"{context}\n\nUser question: {user_message}"
        response = await self.send_to_claude(prompt)
        
        return response
    
    def extract_mentions(self, message: str) -> list[str]:
        # Extract @mentions and convert to URIs
        import re
        mentions = re.findall(r'@([\w.-]+)', message)
        return [f"docs://documents/{m}" for m in mentions]
    
    def build_context(self, resources: dict[str, Any]) -> str:
        context_parts = []
        for uri, content in resources.items():
            if content:
                context_parts.append(f"Document {uri}:\n{content}\n")
        return "\n".join(context_parts)
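The mention-extraction regex can be exercised on its own; the module-level function below simply mirrors the extract_mentions method above:

```python
import re

def extract_mentions(message: str) -> list[str]:
    """Find @mentions like @report.pdf and map them to docs:// URIs."""
    mentions = re.findall(r'@([\w.-]+)', message)
    return [f"docs://documents/{m}" for m in mentions]

uris = extract_mentions("Compare @report.pdf with @notes.txt please")
print(uris)  # ['docs://documents/report.pdf', 'docs://documents/notes.txt']
```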

Key Takeaways

  • Resources provide direct prompt injection: no tool calls needed, faster responses
  • MIME types guide parsing: handle JSON, text, and binary content appropriately
  • The client acts as a bridge: it separates MCP communication from application logic
  • Implement caching: improve performance for frequently accessed resources
  • Handle errors gracefully: provide clear feedback when resources are unavailable
  • Support autocomplete: list resources to help users discover what's available

Conclusion

Implementing resource requests in your MCP client unlocks powerful capabilities for providing context to AI models. By fetching and including resource data directly in prompts, you create more efficient, responsive applications that can seamlessly reference documents, configurations, and other data sources.

The pattern is simple: define resources on your server, implement read_resource in your client, and let your application logic orchestrate how that data enhances AI interactions. With proper error handling, caching, and user experience features like autocomplete, you'll have a robust foundation for building context-aware AI applications.

Start implementing resource requests in your MCP client today and experience the efficiency gains of direct prompt injection!