diff --git a/units/en/_toctree.yml b/units/en/_toctree.yml
index d91214d..f916d54 100644
--- a/units/en/_toctree.yml
+++ b/units/en/_toctree.yml
@@ -15,40 +15,30 @@
title: The Communication Protocol
- local: unit1/capabilities
title: Understanding MCP Capabilities
+ - local: unit1/sdk
+ title: MCP SDK
+ - local: unit1/mcp-clients
+ title: MCP Clients
- local: unit1/gradio-mcp
title: Gradio MCP Integration
-- title: "2. Use Case: Building with MCP"
+- title: "2. Use Case: End-to-End MCP Application"
sections:
- local: unit2/introduction
- title: Introduction
- - local: unit2/environment-setup
- title: Setting Up Your Development Environment & SDKs
- - local: unit2/building-server
- title: Building Your First MCP Server
- - local: unit2/server-capabilities
- title: Implementing Server Capabilities
- - local: unit2/developing-clients
- title: Developing MCP Clients
- - local: unit2/configuration
- title: Configuration, Authentication, and Debugging
- - local: unit2/hub-mcp-servers
- title: MCP Servers on Hugging Face Hub
+ title: Introduction to Building an MCP Application
+ - local: unit2/gradio-server
+ title: Building the Gradio MCP Server
+ - local: unit2/clients
+ title: Using MCP Clients with your application
+ - local: unit2/gradio-client
+ title: Building an MCP Client with Gradio
+ - local: unit2/tiny-agents
+ title: Building a Tiny Agent with TypeScript
-- title: "3. Use Case: Deploying with MCP"
+- title: "3. Use Case: Advanced MCP Development"
sections:
- local: unit3/introduction
title: Introduction
- - local: unit3/advanced-features
- title: Exploring Advanced MCP Features
- - local: unit3/security
- title: Security Deep Dive - Threats and Mitigation Strategies
- - local: unit3/limitations
- title: Limitations, Challenges, and Comparisons
- - local: unit3/huggingface-ecosystem
- title: Hugging Face's Tiny Agents and MCP
- - local: unit3/final-project
- title: Final Project - Building a Complete MCP Application
- title: "Bonus Units"
sections:
diff --git a/units/en/unit0/introduction.mdx b/units/en/unit0/introduction.mdx
new file mode 100644
index 0000000..8a7d3a3
--- /dev/null
+++ b/units/en/unit0/introduction.mdx
@@ -0,0 +1,139 @@
+# Welcome to the 🤗 Model Context Protocol (MCP) Course
+
+
+
+Welcome to the most exciting topic in AI today: **Model Context Protocol (MCP)**!
+
+This free course will take you on a journey, **from beginner to informed**, in understanding, using, and building applications with MCP.
+
+This first unit will help you onboard:
+
+* Discover the **course's syllabus**.
+* **Get more information about the certification process and the schedule**.
+* Get to know the team behind the course.
+* Create your **account**.
+* **Sign up for our Discord server**, and meet your classmates and us.
+
+Let's get started!
+
+## What to expect from this course?
+
+In this course, you will:
+
+* 📖 Study Model Context Protocol in **theory, design, and practice.**
+* 🧑‍💻 Learn to **use established MCP SDKs and frameworks**.
+* 💾 **Share your projects** and explore applications created by the community.
+* 🏆 Participate in challenges where you will **evaluate your MCP implementations against other students'.**
+* 🎓 **Earn a certificate of completion** by completing assignments.
+
+And more!
+
+At the end of this course, you'll understand **how MCP works and how to build your own AI applications that leverage external data and tools using the latest MCP standards**.
+
+Don't forget to [**sign up to the course!**](https://huggingface.co/mcp-course)
+
+## What does the course look like?
+
+The course is composed of:
+
+* _Foundational Units_: where you learn MCP **concepts in theory**.
+* _Hands-on_: where you'll learn **to use established MCP SDKs** to build your applications. These hands-on sections will have pre-configured environments.
+* _Use case assignments_: where you'll apply the concepts you've learned to solve a real-world problem that you'll choose.
+* _Collaborations_: We're collaborating with Hugging Face's partners to give you the latest MCP implementations and tools.
+
+This **course is a living project, evolving with your feedback and contributions!** Feel free to open issues and PRs in GitHub, and engage in discussions in our Discord server.
+
+After you have gone through the course, you can also send your feedback 👉 using this form [LINK TO FEEDBACK FORM]
+
+## What's the syllabus?
+
+Here is the **general syllabus for the course**. A more detailed list of topics will be released with each unit.
+
+| Chapter | Topic | Description |
+| ------- | ------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- |
+| 0 | Onboarding | Get set up with the tools and platforms that you will use. |
+| 1 | MCP Fundamentals, Architecture and Core Concepts | Explain core concepts, architecture, and components of Model Context Protocol. Show a simple use case using MCP. |
+| 2 | End-to-end Use case: MCP in Action | Build a simple end-to-end MCP application that you can share with the community. |
+| 3 | Deployed Use case: MCP in Action | Build a deployed MCP application using the Hugging Face ecosystem and partners' services. |
+| 4 | Bonus Units | Bonus units to help you get more out of the course, working with partners' libraries and services. |
+
+## What are the prerequisites?
+
+To be able to follow this course, you should have:
+
+* Basic understanding of AI and LLM concepts
+* Familiarity with software development principles and API concepts
+* Experience with at least one programming language (Python or TypeScript examples will be shown)
+
+If you don't have any of these, don't worry! Here are some resources that can help you:
+
+* [LLM Course](https://huggingface.co/learn/llm-course/en/chapter1/10) will guide you through the basics of using and building with LLMs.
+* [Agents Course](https://huggingface.co/learn/agents-course/en/chapter1/10) will guide you through building AI agents with LLMs.
+
+
+
+The above courses are not prerequisites in themselves, so if you understand the concepts of LLMs and agents, you can start the course now!
+
+
+
+## What tools do I need?
+
+You only need 2 things:
+
+* _A computer_ with an internet connection.
+* A _Hugging Face account_: to access the course resources and create projects. If you don't have one yet, you can create an account [here](https://huggingface.co/join) (it's free).
+
+## The Certification Process
+
+You can choose to follow this course _in audit mode_, or do the activities and _get one of the two certificates we'll issue_. If you audit the course, you can participate in all the challenges and do assignments if you want, and **you don't need to notify us**.
+
+The certification process is **completely free**:
+
+* _To get a certification for fundamentals_: you need to complete Unit 1 of the course. This is intended for students who want to get up to date with the latest trends in MCP, without the need to build a full application.
+* _To get a certificate of completion_: you need to complete the use case units (2 and 3). This is intended for students who want to build a full application and share it with the community.
+
+## What is the recommended pace?
+
+Each chapter in this course is designed **to be completed in 1 week, with approximately 3-4 hours of work per week**.
+
+Since there's a deadline, we provide a recommended pace:
+
+
+
+## How to get the most out of the course?
+
+To get the most out of the course, we have some advice:
+
+1. **Join study groups in Discord**: studying in groups is always easier. To do that, you need to join our Discord server and verify your account.
+2. **Do the quizzes and assignments**: the best way to learn is through hands-on practice and self-assessment.
+3. **Define a schedule to stay in sync**: you can use our recommended pace schedule below or create yours.
+
+
+
+## Who are we
+
+About the authors:
+
+### Ben Burtenshaw
+
+Ben is a Machine Learning Engineer at Hugging Face who focuses on building LLM applications with post-training and agentic approaches.
+
+
+
+
+
+
+
+## I found a bug, or I want to improve the course
+
+Contributions are **welcome** 🤗
+
+* If you _found a bug 🐛 in a notebook_, please open an issue and **describe the problem**.
+* If you _want to improve the course_, you can open a Pull Request.
+* If you _want to add a full section or a new unit_, the best is to open an issue and **describe what content you want to add before starting to write it so that we can guide you**.
+
+## I still have questions
+
+Please ask your question in the #mcp-course-questions channel of our Discord server.
+
+Now that you have all the information, let's get on board ⛵
\ No newline at end of file
diff --git a/units/en/unit1/architectural-components.mdx b/units/en/unit1/architectural-components.mdx
new file mode 100644
index 0000000..621fc8e
--- /dev/null
+++ b/units/en/unit1/architectural-components.mdx
@@ -0,0 +1,85 @@
+# Architectural Components of MCP
+
+In the previous section, we discussed the key concepts and terminology of MCP. Now, let's dive deeper into the architectural components that make up the MCP ecosystem.
+
+## Host, Client, and Server
+
+The Model Context Protocol (MCP) is built on a client-server architecture that enables structured communication between AI models and external systems.
+
+
+
+The MCP architecture consists of three primary components, each with well-defined roles and responsibilities: Host, Client, and Server. We touched on these in the previous section, but let's dive deeper into each component and their responsibilities.
+
+### Host
+
+The **Host** is the user-facing AI application that end-users interact with directly.
+
+Examples include:
+- AI Chat apps like OpenAI's ChatGPT or Anthropic's Claude Desktop
+- AI-enhanced IDEs like Cursor, or integrations with tools like Continue.dev
+- Custom AI agents and applications built in libraries like LangChain or smolagents
+
+The Host's responsibilities include:
+- Managing user interactions and permissions
+- Initiating connections to MCP Servers via MCP Clients
+- Orchestrating the overall flow between user requests, LLM processing, and external tools
+- Rendering results back to users in a coherent format
+
+In most cases, users will select their host application based on their needs and preferences. For example, a developer may choose Cursor for its powerful code editing capabilities, while domain experts may use custom applications built in smolagents.
+
+### Client
+
+The **Client** is a component within the Host application that manages communication with a specific MCP Server. Key characteristics include:
+
+- Maintains a 1:1 connection with a single Server
+- Handles the protocol-level details of MCP communication
+- Acts as the intermediary between the Host's logic and the external Server
+
+### Server
+
+The **Server** is an external program or service that exposes capabilities to AI models via the MCP protocol. Servers:
+
+- Provide access to specific external tools, data sources, or services
+- Act as lightweight wrappers around existing functionality
+- Can run locally (on the same machine as the Host) or remotely (over a network)
+- Expose their capabilities in a standardized format that Clients can discover and use
+
+## Communication Flow
+
+Let's examine how these components interact in a typical MCP workflow:
+
+
+
+
+
+
+1. **User Interaction**: The user interacts with the **Host** application, expressing an intent or query.
+
+2. **Host Processing**: The **Host** processes the user's input, potentially using an LLM to understand the request and determine which external capabilities might be needed.
+
+3. **Client Connection**: The **Host** directs its **Client** component to connect to the appropriate Server(s).
+
+4. **Capability Discovery**: The **Client** queries the **Server** to discover what capabilities (Tools, Resources, Prompts) it offers.
+
+5. **Capability Invocation**: Based on the user's needs or the LLM's determination, the Host instructs the **Client** to invoke specific capabilities from the **Server**.
+
+6. **Server Execution**: The **Server** executes the requested functionality and returns results to the **Client**.
+
+7. **Result Integration**: The **Client** relays these results back to the **Host**, which incorporates them into the context for the LLM or presents them directly to the user.
+
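+To make these steps concrete, here's a minimal, self-contained sketch of the flow in Python. Everything in it is illustrative stand-in code rather than a real MCP SDK:
+
+```python
+class Server:
+    """Exposes capabilities and executes requests (steps 4 and 6)."""
+
+    def list_capabilities(self):
+        return {"tools": ["get_weather"]}
+
+    def invoke(self, tool, arguments):
+        # A real server would call an external API or service here
+        return {"temperature": 72, "conditions": "Sunny"}
+
+
+class Client:
+    """Protocol intermediary between the Host's logic and one Server (steps 3-6)."""
+
+    def __init__(self, server):
+        self.server = server  # step 3: connection established
+
+    def discover(self):
+        return self.server.list_capabilities()  # step 4: capability discovery
+
+    def call(self, tool, arguments):
+        return self.server.invoke(tool, arguments)  # steps 5-6: invocation and execution
+
+
+# Steps 1-2: the Host interprets the user's intent, then drives the Client;
+# step 7: the result is folded back into the LLM context or shown to the user.
+client = Client(Server())
+print(client.discover())
+print(client.call("get_weather", {"location": "San Francisco"}))
+```
+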
+A key advantage of this architecture is its modularity. A single **Host** can connect to multiple **Servers** simultaneously via different **Clients**. New **Servers** can be added to the ecosystem without requiring changes to existing **Hosts**. Capabilities can be easily composed across different **Servers**.
+
+
+
+As we discussed in the previous section, this modularity transforms the traditional M×N integration problem (M AI applications connecting to N tools/services) into a more manageable M+N problem, where each Host and Server needs to implement the MCP standard only once.
+
+
+
+The architecture might appear simple, but its power lies in the standardization of the communication protocol and the clear separation of responsibilities between components. This design allows for a cohesive ecosystem where AI models can seamlessly connect with an ever-growing array of external tools and data sources.
+
+## Conclusion
+
+These interaction patterns are guided by several key principles that shape the design and evolution of MCP. The protocol emphasizes **standardization** by providing a universal protocol for AI connectivity, while maintaining **simplicity** by keeping the core protocol straightforward yet enabling advanced features. **Safety** is prioritized by requiring explicit user approval for sensitive operations, and **discoverability** enables dynamic discovery of capabilities. The protocol is built with **extensibility** in mind, supporting evolution through versioning and capability negotiation, and ensures **interoperability** across different implementations and environments.
+
+In the next section, we'll explore the communication protocol that enables these components to work together effectively.
\ No newline at end of file
diff --git a/units/en/unit1/capabilities.mdx b/units/en/unit1/capabilities.mdx
new file mode 100644
index 0000000..487354a
--- /dev/null
+++ b/units/en/unit1/capabilities.mdx
@@ -0,0 +1,243 @@
+# Understanding MCP Capabilities
+
+MCP Servers expose a variety of capabilities to Clients through the communication protocol. These capabilities fall into four main categories, each with distinct characteristics and use cases. Let's explore these core primitives that form the foundation of MCP's functionality.
+
+
+
+In this section, we'll show examples as framework-agnostic functions in each language. This is to focus on the concepts and how they work together, rather than the complexities of any framework.
+
+In the coming units, we'll show how these concepts are implemented in MCP-specific code.
+
+
+
+## Tools
+
+Tools are executable functions or actions that the AI model can invoke through the MCP protocol.
+
+- **Control**: Tools are typically **model-controlled**, meaning that the AI model (LLM) decides when to call them based on the user's request and context.
+- **Safety**: Due to their ability to perform actions with side effects, tool execution can be dangerous. Therefore, they typically require explicit user approval.
+- **Use Cases**: Sending messages, creating tickets, querying APIs, performing calculations.
+
+**Example**: A weather tool that fetches current weather data for a given location:
+
+
+
+
+```python
+def get_weather(location: str) -> dict:
+ """Get the current weather for a specified location."""
+ # Connect to weather API and fetch data
+ return {
+ "temperature": 72,
+ "conditions": "Sunny",
+ "humidity": 45
+ }
+```
+
+
+
+
+```javascript
+function getWeather(location) {
+ // Connect to weather API and fetch data
+ return {
+ temperature: 72,
+ conditions: 'Sunny',
+ humidity: 45
+ };
+}
+```
+
+
+
+
+## Resources
+
+Resources provide read-only access to data sources, allowing the AI model to retrieve context without executing complex logic.
+
+- **Control**: Resources are **application-controlled**, meaning the Host application typically decides when to access them.
+- **Nature**: They are designed for data retrieval with minimal computation, similar to GET endpoints in REST APIs.
+- **Safety**: Since they are read-only, they typically present lower security risks than Tools.
+- **Use Cases**: Accessing file contents, retrieving database records, reading configuration information.
+
+**Example**: A resource that provides access to file contents:
+
+
+
+
+```python
+def read_file(file_path: str) -> str:
+ """Read the contents of a file at the specified path."""
+ with open(file_path, 'r') as f:
+ return f.read()
+```
+
+
+
+
+```javascript
+function readFile(filePath) {
+  // Use the promise-based fs API to read file contents
+  const fs = require('fs').promises;
+  return fs.readFile(filePath, 'utf8');
+}
+```
+
+
+
+
+## Prompts
+
+Prompts are predefined templates or workflows that guide the interaction between the user, the AI model, and the Server's capabilities.
+
+- **Control**: Prompts are **user-controlled**, often presented as options in the Host application's UI.
+- **Purpose**: They structure interactions for optimal use of available Tools and Resources.
+- **Selection**: Users typically select a prompt before the AI model begins processing, setting context for the interaction.
+- **Use Cases**: Common workflows, specialized task templates, guided interactions.
+
+**Example**: A prompt template for generating a code review:
+
+
+
+
+```python
+def code_review(code: str, language: str) -> list:
+ """Generate a code review for the provided code snippet."""
+ return [
+ {
+ "role": "system",
+ "content": f"You are a code reviewer examining {language} code. Provide a detailed review highlighting best practices, potential issues, and suggestions for improvement."
+ },
+ {
+ "role": "user",
+ "content": f"Please review this {language} code:\n\n```{language}\n{code}\n```"
+ }
+ ]
+```
+
+
+
+
+```javascript
+function codeReview(code, language) {
+ return [
+ {
+ role: 'system',
+ content: `You are a code reviewer examining ${language} code. Provide a detailed review highlighting best practices, potential issues, and suggestions for improvement.`
+ },
+ {
+ role: 'user',
+ content: `Please review this ${language} code:\n\n\`\`\`${language}\n${code}\n\`\`\``
+ }
+ ];
+}
+```
+
+
+
+
+## Sampling
+
+Sampling allows Servers to request the Client (specifically, the Host application) to perform LLM interactions.
+
+- **Control**: Sampling is **server-initiated** but requires Client/Host facilitation.
+- **Purpose**: It enables server-driven agentic behaviors and potentially recursive or multi-step interactions.
+- **Safety**: Like Tools, sampling operations typically require user approval.
+- **Use Cases**: Complex multi-step tasks, autonomous agent workflows, interactive processes.
+
+**Example**: A Server might request the Client to analyze data it has processed:
+
+
+
+
+```python
+def request_sampling(messages, system_prompt=None, include_context="none"):
+ """Request LLM sampling from the client."""
+ # In a real implementation, this would send a request to the client
+ return {
+ "role": "assistant",
+ "content": "Analysis of the provided data..."
+ }
+```
+
+
+
+
+```javascript
+function requestSampling(messages, systemPrompt = null, includeContext = 'none') {
+ // In a real implementation, this would send a request to the client
+ return {
+ role: 'assistant',
+ content: 'Analysis of the provided data...'
+ };
+}
+
+function handleSamplingRequest(request) {
+ const { messages, systemPrompt, includeContext } = request;
+ // In a real implementation, this would process the request and return a response
+ return {
+ role: 'assistant',
+ content: 'Response to the sampling request...'
+ };
+}
+```
+
+
+
+
+The sampling flow follows these steps:
+1. Server sends a `sampling/createMessage` request to the client
+2. Client reviews the request and can modify it
+3. Client samples from an LLM
+4. Client reviews the completion
+5. Client returns the result to the server
+
+
+
+This human-in-the-loop design ensures users maintain control over what the LLM sees and generates. When implementing sampling, it's important to provide clear, well-structured prompts and include relevant context.
+
+
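+For illustration, here's a sketch of what a sampling request might look like on the wire, using the `sampling/createMessage` method from the flow above (treat the exact field values as placeholders):
+
+```json
+{
+  "jsonrpc": "2.0",
+  "id": 5,
+  "method": "sampling/createMessage",
+  "params": {
+    "messages": [
+      {
+        "role": "user",
+        "content": {
+          "type": "text",
+          "text": "Summarize the processed data."
+        }
+      }
+    ],
+    "systemPrompt": "You are a helpful data analyst.",
+    "includeContext": "none",
+    "maxTokens": 200
+  }
+}
+```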
+
+## How Capabilities Work Together
+
+Let's look at how these capabilities work together to enable complex interactions. In the table below, we've outlined the capabilities, who controls them, the direction of control, and some other details.
+
+| Capability | Controlled By | Direction | Side Effects | Approval Needed | Typical Use Cases |
+|------------|---------------|-----------|--------------|-----------------|-------------------|
+| Tools | Model (LLM) | Client → Server | Yes (potentially) | Yes | Actions, API calls, data manipulation |
+| Resources | Application | Client → Server | No (read-only) | Typically no | Data retrieval, context gathering |
+| Prompts | User | Server → Client | No | No (selected by user) | Guided workflows, specialized templates |
+| Sampling | Server | Server → Client → Server | Indirectly | Yes | Multi-step tasks, agentic behaviors |
+
+These capabilities are designed to work together in complementary ways:
+
+1. A user might select a **Prompt** to start a specialized workflow
+2. The Prompt might include context from **Resources**
+3. During processing, the AI model might call **Tools** to perform specific actions
+4. For complex operations, the Server might use **Sampling** to request additional LLM processing
+
+The distinction between these primitives provides a clear structure for MCP interactions, enabling AI models to access information, perform actions, and engage in complex workflows while maintaining appropriate control boundaries.
+
+## Discovery Process
+
+One of MCP's key features is dynamic capability discovery. When a Client connects to a Server, it can query the available Tools, Resources, and Prompts through specific list methods:
+
+- `tools/list`: Discover available Tools
+- `resources/list`: Discover available Resources
+- `prompts/list`: Discover available Prompts
+
+This dynamic discovery mechanism allows Clients to adapt to the specific capabilities each Server offers without requiring hardcoded knowledge of the Server's functionality.
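+
+For example, a `tools/list` exchange is a simple pair of JSON-RPC messages (a sketch; the `get_weather` tool here is a hypothetical illustration). The Client sends:
+
+```json
+{ "jsonrpc": "2.0", "id": 2, "method": "tools/list" }
+```
+
+And the Server replies with its tool catalog:
+
+```json
+{
+  "jsonrpc": "2.0",
+  "id": 2,
+  "result": {
+    "tools": [
+      {
+        "name": "get_weather",
+        "description": "Get the current weather for a specified location.",
+        "inputSchema": {
+          "type": "object",
+          "properties": { "location": { "type": "string" } },
+          "required": ["location"]
+        }
+      }
+    ]
+  }
+}
+```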
+
+## Conclusion
+
+Understanding these core primitives is essential for working with MCP effectively. By providing distinct types of capabilities with clear control boundaries, MCP enables powerful interactions between AI models and external systems while maintaining appropriate safety and control mechanisms.
+
+In the next section, we'll explore how Gradio integrates with MCP to provide easy-to-use interfaces for these capabilities.
\ No newline at end of file
diff --git a/units/en/unit1/communication-protocol.mdx b/units/en/unit1/communication-protocol.mdx
new file mode 100644
index 0000000..4c7ec96
--- /dev/null
+++ b/units/en/unit1/communication-protocol.mdx
@@ -0,0 +1,274 @@
+# The Communication Protocol
+
+MCP defines a standardized communication protocol that enables Clients and Servers to exchange messages in a consistent, predictable way. This standardization is critical for interoperability across the community. In this section, we'll explore the protocol structure and transport mechanisms used in MCP.
+
+
+
+We're getting down to the nitty-gritty details of the MCP protocol. You won't need to know all of this to build with MCP, but it's good to know that it exists and how it works.
+
+
+
+## JSON-RPC: The Foundation
+
+At its core, MCP uses **JSON-RPC 2.0** as the message format for all communication between Clients and Servers. JSON-RPC is a lightweight remote procedure call protocol encoded in JSON, which makes it:
+
+- Human-readable and easy to debug
+- Language-agnostic, supporting implementation in any programming environment
+- Well-established, with clear specifications and widespread adoption
+
+
+
+The protocol defines three types of messages:
+
+### 1. Requests
+
+Sent from Client to Server (or, in the case of sampling, from Server to Client) to initiate an operation. A Request message includes:
+- A unique identifier (`id`)
+- The method name to invoke (e.g., `tools/call`)
+- Parameters for the method (if any)
+
+Example Request:
+
+```json
+{
+ "jsonrpc": "2.0",
+ "id": 1,
+ "method": "tools/call",
+ "params": {
+ "name": "weather",
+ "arguments": {
+ "location": "San Francisco"
+ }
+ }
+}
+```
+
+### 2. Responses
+
+Sent from Server to Client in reply to a Request. A Response message includes:
+- The same `id` as the corresponding Request
+- Either a `result` (for success) or an `error` (for failure)
+
+Example Success Response:
+```json
+{
+ "jsonrpc": "2.0",
+ "id": 1,
+ "result": {
+ "temperature": 62,
+ "conditions": "Partly cloudy"
+ }
+}
+```
+
+Example Error Response:
+```json
+{
+ "jsonrpc": "2.0",
+ "id": 1,
+ "error": {
+ "code": -32602,
+ "message": "Invalid location parameter"
+ }
+}
+```
+
+### 3. Notifications
+
+One-way messages that don't require a response. They can flow in either direction, but are typically sent from Server to Client to provide updates or notifications about events.
+
+Example Notification:
+```json
+{
+ "jsonrpc": "2.0",
+ "method": "progress",
+ "params": {
+ "message": "Processing data...",
+ "percent": 50
+ }
+}
+```
+
+## Transport Mechanisms
+
+JSON-RPC defines the message format, but MCP also specifies how these messages are transported between Clients and Servers. Two primary transport mechanisms are supported:
+
+### stdio (Standard Input/Output)
+
+The stdio transport is used for local communication, where the Client and Server run on the same machine:
+
+The Host application launches the Server as a subprocess and communicates with it by writing to its standard input (stdin) and reading from its standard output (stdout).
+
+
+
+**Use cases** for this transport are local tools like file system access or running local scripts.
+
+
+
+The main **Advantages** of this transport are that it's simple, requires no network configuration, and is securely sandboxed by the operating system.
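+
+To make the mechanics concrete, here's a minimal sketch of what a Host does under the hood. It's illustrative only: it assumes a hypothetical `my_mcp_server.py` script and newline-delimited JSON framing, details that the official SDKs handle for you:
+
+```python
+import json
+import subprocess
+
+# Launch the server as a subprocess, with pipes for its stdin and stdout
+proc = subprocess.Popen(
+    ["python", "my_mcp_server.py"],
+    stdin=subprocess.PIPE,
+    stdout=subprocess.PIPE,
+    text=True,
+)
+
+# Send a JSON-RPC request by writing one line to the server's stdin
+request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
+proc.stdin.write(json.dumps(request) + "\n")
+proc.stdin.flush()
+
+# Read the JSON-RPC response from the server's stdout
+response = json.loads(proc.stdout.readline())
+print(response)
+```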
+
+### HTTP + SSE (Server-Sent Events) / Streamable HTTP
+
+The HTTP+SSE transport is used for remote communication, where the Client and Server might be on different machines:
+
+Communication happens over HTTP, with the Server using Server-Sent Events (SSE) to push updates to the Client over a persistent connection.
+
+
+
+**Use cases** for this transport are connecting to remote APIs, cloud services, or shared resources.
+
+
+
+The main **Advantages** of this transport are that it works across networks, enables integration with web services, and is compatible with serverless environments.
+
+Recent updates to the MCP standard have introduced or refined "Streamable HTTP," which offers more flexibility by allowing servers to dynamically upgrade to SSE for streaming when needed, while maintaining compatibility with serverless environments.
+
+## The Interaction Lifecycle
+
+In the previous section, we discussed the lifecycle of a single interaction between a Client (💻) and a Server (🌐). Let's now look at the lifecycle of a complete interaction between a Client and a Server in the context of the MCP protocol.
+
+The MCP protocol defines a structured interaction lifecycle between Clients and Servers:
+
+
+
+### Initialization
+
+The Client connects to the Server, sending its protocol version and capabilities, and the Server responds with its own supported protocol version and capabilities.
+
+1. 💻 → 🌐: `initialize` request
+2. 🌐 → 💻: `initialize` response
+3. 💻 → 🌐: `initialized` notification
+
+The Client confirms the initialization is complete via a notification message.
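+
+As a sketch, the opening `initialize` request might look like this (the protocol version string and client info are placeholders; use the values your SDK targets):
+
+```json
+{
+  "jsonrpc": "2.0",
+  "id": 1,
+  "method": "initialize",
+  "params": {
+    "protocolVersion": "2024-11-05",
+    "capabilities": {},
+    "clientInfo": {
+      "name": "example-client",
+      "version": "1.0.0"
+    }
+  }
+}
+```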
+
+### Discovery
+
+The Client requests information about available capabilities and the Server responds with a list of available tools.
+
+1. 💻 → 🌐: `tools/list` request
+2. 🌐 → 💻: response with the list of available tools
+
+This process could be repeated for each tool, resource, or prompt type.
+
+### Execution
+
+The Client invokes capabilities based on the Host's needs.
+
+1. 💻 → 🌐: `tools/call` request
+2. 🌐 → 💻: notification (optional progress updates)
+3. 🌐 → 💻: `tools/call` response
+
+### Termination
+
+The connection is gracefully closed when no longer needed and the Server acknowledges the shutdown request.
+
+1. 💻 → 🌐: `shutdown` request
+2. 🌐 → 💻: `shutdown` response
+3. 💻 → 🌐: `exit` notification
+
+The Client sends the final exit message to complete the termination.
+
+## Protocol Evolution
+
+The MCP protocol is designed to be extensible and adaptable. The initialization phase includes version negotiation, allowing for backward compatibility as the protocol evolves. Additionally, capability discovery enables Clients to adapt to the specific features each Server offers, enabling a mix of basic and advanced Servers in the same ecosystem.
diff --git a/units/en/unit1/gradio-mcp.mdx b/units/en/unit1/gradio-mcp.mdx
new file mode 100644
index 0000000..f6cb71e
--- /dev/null
+++ b/units/en/unit1/gradio-mcp.mdx
@@ -0,0 +1,154 @@
+# Gradio MCP Integration
+
+We've now explored the core concepts of the MCP protocol and how to implement MCP Servers and Clients. In this section, we're going to make things slightly easier by using Gradio to create an MCP Server!
+
+
+
+Gradio is a popular Python library for quickly creating customizable web interfaces for machine learning models.
+
+
+
+## Introduction to Gradio
+
+Gradio allows developers to create UIs for their models with just a few lines of Python code. It's particularly useful for:
+
+- Creating demos and prototypes
+- Sharing models with non-technical users
+- Testing and debugging model behavior
+
+With the addition of MCP support, Gradio now offers a straightforward way to expose AI model capabilities through the standardized MCP protocol.
+
+Combining Gradio with MCP allows you to create both human-friendly interfaces and AI-accessible tools with minimal code. Best of all, Gradio is already widely used by the AI community, so you can use it to share your MCP Servers with others.
+
+## Prerequisites
+
+To use Gradio with MCP support, you'll need to install Gradio with the MCP extra:
+
+```bash
+pip install "gradio[mcp]"
+```
+
+You'll also need an LLM application that supports tool calling via the MCP protocol, such as Cursor (such applications are known as "MCP Hosts").
+
+## Creating an MCP Server with Gradio
+
+Let's walk through a basic example of creating an MCP Server using Gradio:
+
+```python
+import gradio as gr
+
+def letter_counter(word: str, letter: str) -> int:
+ """
+ Count the number of occurrences of a letter in a word or text.
+
+ Args:
+ word (str): The input text to search through
+ letter (str): The letter to search for
+
+ Returns:
+ int: The number of times the letter appears in the text
+ """
+ word = word.lower()
+ letter = letter.lower()
+ count = word.count(letter)
+ return count
+
+# Create a standard Gradio interface
+demo = gr.Interface(
+ fn=letter_counter,
+ inputs=["textbox", "textbox"],
+ outputs="number",
+ title="Letter Counter",
+ description="Enter text and a letter to count how many times the letter appears in the text."
+)
+
+# Launch both the Gradio web interface and the MCP server
+if __name__ == "__main__":
+ demo.launch(mcp_server=True)
+```
+
+With this setup, your letter counter function is now accessible through:
+
+1. A traditional Gradio web interface for direct human interaction
+2. An MCP Server that can be connected to compatible clients
+
+The MCP server will be accessible at:
+```
+http://your-server:port/gradio_api/mcp/sse
+```
+
+The application itself will still be accessible and it looks like this:
+
+
+
+## How It Works Behind the Scenes
+
+When you set `mcp_server=True` in `launch()`, several things happen:
+
+1. Gradio functions are automatically converted to MCP Tools
+2. Input components map to tool argument schemas
+3. Output components determine the response format
+4. The Gradio server now also listens for MCP protocol messages
+5. JSON-RPC over HTTP+SSE is set up for client-server communication
+
+## Key Features of the Gradio <> MCP Integration
+
+1. **Tool Conversion**: Each API endpoint in your Gradio app is automatically converted into an MCP tool with a corresponding name, description, and input schema. To view the tools and schemas, visit `http://your-server:port/gradio_api/mcp/schema` or go to the "View API" link in the footer of your Gradio app, and then click on "MCP".
+
+2. **Environment Variable Support**: There are two ways to enable the MCP server functionality:
+- Using the `mcp_server` parameter in `launch()`:
+ ```python
+ demo.launch(mcp_server=True)
+ ```
+- Using environment variables:
+ ```bash
+ export GRADIO_MCP_SERVER=True
+ ```
+
+3. **File Handling**: The server automatically handles file data conversions, including:
+ - Converting base64-encoded strings to file data
+ - Processing image files and returning them in the correct format
+ - Managing temporary file storage
+
+ It is **strongly** recommended that input images and files be passed as full URLs ("http://..." or "https://...") as MCP Clients do not always handle local files correctly.
+
+4. **Hosted MCP Servers on 🤗 Spaces**: You can publish your Gradio application for free on Hugging Face Spaces, which will allow you to have a free hosted MCP server. Here's an example of such a Space: https://huggingface.co/spaces/abidlabs/mcp-tools
+
+## Troubleshooting Tips
+
+1. **Type Hints and Docstrings**: Ensure you provide type hints and valid docstrings for your functions. The docstring should include an "Args:" block with indented parameter names.
+
+2. **String Inputs**: When in doubt, accept input arguments as `str` and convert them to the desired type inside the function.
+
+3. **SSE Support**: Some MCP Hosts don't support SSE-based MCP Servers. In those cases, you can use `mcp-remote`:
+ ```json
+ {
+ "mcpServers": {
+ "gradio": {
+ "command": "npx",
+ "args": [
+ "mcp-remote",
+ "http://your-server:port/gradio_api/mcp/sse"
+ ]
+ }
+ }
+ }
+ ```
+
+4. **Restart**: If you encounter connection issues, try restarting both your MCP Client and MCP Server.
+
+## Share your MCP Server
+
+You can share your MCP Server by publishing your Gradio app to Hugging Face Spaces. The video below shows how to create a Hugging Face Space.
+
+
+
+Now, you can share your MCP Server with others by sharing your Hugging Face Space.
+
+## Conclusion
+
+Gradio's integration with MCP provides an accessible entry point to the MCP ecosystem. By leveraging Gradio's simplicity and adding MCP's standardization, developers can quickly create both human-friendly interfaces and AI-accessible tools with minimal code.
+
+As we progress through this course, we'll explore more sophisticated MCP implementations, but Gradio offers an excellent starting point for understanding and experimenting with the protocol.
+
+In the next unit, we'll dive deeper into building MCP applications, focusing on setting up development environments, exploring SDKs, and implementing more advanced MCP Servers and Clients.
\ No newline at end of file
diff --git a/units/en/unit1/introduction.mdx b/units/en/unit1/introduction.mdx
new file mode 100644
index 0000000..abe360d
--- /dev/null
+++ b/units/en/unit1/introduction.mdx
@@ -0,0 +1,33 @@
+# Introduction to Model Context Protocol (MCP)
+
+Welcome to Unit 1 of the MCP Course! In this unit, we'll explore the fundamentals of Model Context Protocol.
+
+## What You Will Learn
+
+In this unit, you will:
+
+* Understand what Model Context Protocol is and why it's important
+* Learn the key concepts and terminology associated with MCP
+* Explore the integration challenges that MCP solves
+* Walk through the key benefits and goals of MCP
+* See a simple example of MCP integration in action
+
+By the end of this unit, you'll have a solid understanding of the foundational concepts of MCP and be ready to dive deeper into its architecture and implementation in the following sections.
+
+## Importance of MCP
+
+The AI ecosystem is evolving rapidly, with Large Language Models (LLMs) and other AI systems becoming increasingly capable. However, these models are often limited by their training data and lack access to real-time information or specialized tools. This limitation hinders the potential of AI systems to provide truly relevant, accurate, and helpful responses in many scenarios.
+
+This is where Model Context Protocol (MCP) comes in. MCP enables AI models to connect with external data sources, tools, and environments, allowing for the seamless transfer of information and capabilities between AI systems and the broader digital world. This interoperability is crucial for the growth and adoption of truly useful AI applications.
+
+## Overview of Unit 1
+
+Here's a brief overview of what we'll cover in this unit:
+
+1. **What is Model Context Protocol?** - We'll start by defining what MCP is and discussing its role in the AI ecosystem.
+2. **Key Concepts** - We'll explore the fundamental concepts and terminology associated with MCP.
+3. **Integration Challenges** - We'll examine the problems that MCP aims to solve, particularly the "M×N Integration Problem."
+4. **Benefits and Goals** - We'll discuss the key benefits and goals of MCP, including standardization, enhanced AI capabilities, and interoperability.
+5. **Simple Example** - Finally, we'll walk through a simple example of MCP integration to see how it works in practice.
+
+Let's dive in and explore the exciting world of Model Context Protocol!
\ No newline at end of file
diff --git a/units/en/unit1/key-concepts.mdx b/units/en/unit1/key-concepts.mdx
new file mode 100644
index 0000000..6ba9c3e
--- /dev/null
+++ b/units/en/unit1/key-concepts.mdx
@@ -0,0 +1,88 @@
+# Key Concepts and Terminology
+
+Before diving deeper into the Model Context Protocol, it's important to understand the key concepts and terminology that form the foundation of MCP. This section will introduce the fundamental ideas that underpin the protocol and provide a common vocabulary for discussing MCP implementations throughout the course.
+
+MCP is often described as the "USB-C for AI applications." Just as USB-C provides a standardized physical and logical interface for connecting various peripherals to computing devices, MCP offers a consistent protocol for linking AI models to external capabilities. This standardization benefits the entire ecosystem:
+
+- **users** enjoy simpler and more consistent experiences across AI applications
+- **AI application developers** gain easy integration with a growing ecosystem of tools and data sources
+- **tool and data providers** need only create a single implementation that works with multiple AI applications
+- the broader ecosystem benefits from increased interoperability, innovation, and reduced fragmentation
+
+## The Integration Problem
+
+The **M×N Integration Problem** refers to the challenge of connecting M different AI applications to N different external tools or data sources without a standardized approach.
+
+### Without MCP (M×N Problem)
+
+Without a protocol like MCP, developers would need to create M×N custom integrations: one for each possible pairing of an AI application with an external capability.
+
+
+
+Each AI application would need to integrate with each tool/data source individually. For example, 10 AI applications and 20 tools would require 200 separate integrations. This complexity creates significant friction for developers and high maintenance costs.
+
+### With MCP (M+N Solution)
+
+MCP transforms this into an M+N problem by providing a standard interface: each AI application implements the client side of MCP once, and each tool/data source implements the server side once. This dramatically reduces integration complexity and maintenance burden.
+
+
+
+
+## Core MCP Terminology
+
+Now that we understand the problem that MCP solves, let's dive into the core terminology and concepts that make up the MCP protocol.
+
+
+
+MCP is a standard like HTTP or USB-C: a protocol for connecting AI applications to external tools and data sources. Therefore, using standard terminology is crucial to making MCP work effectively.
+
+When documenting our applications and communicating with the community, we should use the following terminology.
+
+
+
+### Components
+
+Just like the client-server relationship in HTTP, MCP has a client and a server.
+
+
+
+- **Host**: The user-facing AI application that end-users interact with directly. Examples include Anthropic's Claude Desktop, AI-enhanced IDEs like Cursor, inference libraries like the Hugging Face Python SDK, or custom applications built in libraries like LangChain or smolagents. Hosts initiate connections to MCP Servers and orchestrate the overall flow between user requests, LLM processing, and external tools.
+
+- **Client**: A component within the host application that manages communication with a specific MCP Server. Each Client maintains a 1:1 connection with a single Server, handling the protocol-level details of MCP communication and acting as an intermediary between the Host's logic and the external Server.
+
+- **Server**: An external program or service that exposes capabilities (Tools, Resources, Prompts) via the MCP protocol.
+
+
+
+A lot of content uses 'Client' and 'Host' interchangeably. Technically speaking, the host is the user-facing application, and the client is the component within the host application that manages communication with a specific MCP Server.
+
+
+
+### Capabilities
+
+Your application's value is the sum of the capabilities it offers, so capabilities are the most important part of your application. MCP can connect to any software service, but there are some common capabilities that many AI applications rely on.
+
+| Capability | Description | Example |
+| ---------- | ----------- | ------- |
+| **Tools** | Executable functions that the AI model can invoke to perform actions or retrieve computed data. Typically relating to the use case of the application. | A tool for a weather application might be a function that returns the weather in a specific location. |
+| **Resources** | Read-only data sources that provide context without significant computation. | A research assistant might have a resource for scientific papers. |
+| **Prompts** | Pre-defined templates or workflows that guide interactions between users, AI models, and the available capabilities. | A summarization prompt. |
+| **Sampling** | Server-initiated requests for the Client/Host to perform LLM interactions, enabling recursive actions where the LLM can review generated content and make further decisions. | A writing application reviewing its own output and deciding to refine it further. |
+
+In the following diagram, we can see the collective capabilities applied to a use case for a code agent.
+
+
+
+This application might use its MCP entities in the following way:
+
+| Entity | Name | Description |
+| --- | --- | --- |
+| Tool | Code Interpreter | A tool that can execute code that the LLM writes. |
+| Resource | Documentation | A resource that contains the documentation of the application. |
+| Prompt | Code Style | A prompt that guides the LLM to generate code that follows the application's style conventions. |
+| Sampling | Code Review | A sampling request that lets the LLM review the generated code and decide whether to refine it further. |
+
+## Conclusion
+
+Understanding these key concepts and terminology provides the foundation for working with MCP effectively. In the following sections, we'll build on this foundation to explore the architectural components, communication protocol, and capabilities that make up the Model Context Protocol.
\ No newline at end of file
diff --git a/units/en/unit1/mcp-clients.mdx b/units/en/unit1/mcp-clients.mdx
new file mode 100644
index 0000000..3c9b08c
--- /dev/null
+++ b/units/en/unit1/mcp-clients.mdx
@@ -0,0 +1,342 @@
+# MCP Clients
+
+Now that we have a basic understanding of the Model Context Protocol, we can explore the essential role of MCP Clients in the MCP ecosystem.
+
+In this section, you will:
+
+* Understand what MCP Clients are and their role in the MCP architecture
+* Learn about the key responsibilities of MCP Clients
+* Explore the major MCP Client implementations
+* Discover how to use Hugging Face's MCP Client implementation
+* See practical examples of MCP Client usage
+
+## Understanding MCP Clients
+
+MCP Clients are crucial components that act as the bridge between AI applications (Hosts) and external capabilities provided by MCP Servers. Think of the Host as your main application (like an AI assistant or IDE) and the Client as a specialized module within that Host responsible for handling MCP communications.
+
+## User Interface Clients
+
+Let's start by exploring the user interface clients that are available for MCP.
+
+### Chat Interface Clients
+
+Anthropic's Claude Desktop stands as one of the most prominent MCP Clients, providing integration with various MCP Servers.
+
+### Interactive Development Clients
+
+Cursor's MCP Client implementation enables AI-powered coding assistance through direct integration with code editing capabilities. It supports multiple MCP Server connections and provides real-time tool invocation during coding, making it a powerful tool for developers.
+
+Continue.dev is another example of an interactive development client that supports MCP and connects to an MCP server from VS Code.
+
+## Configuring MCP Clients
+
+Now that we've covered the core of the MCP protocol, let's look at how to configure your MCP servers and clients.
+
+Effective deployment of MCP servers and clients requires proper configuration.
+
+
+
+The MCP specification is still evolving, so the configuration methods are subject to change. We'll focus on the current best practices for configuration.
+
+
+
+### MCP Configuration Files
+
+MCP hosts use configuration files to manage server connections. These files define which servers are available and how to connect to them.
+
+Fortunately, the configuration files are very simple, easy to understand, and consistent across major MCP hosts.
+
+#### `mcp.json` Structure
+
+The standard configuration file for MCP is named `mcp.json`. Here's the basic structure:
+
+```json
+{
+ "servers": [
+ {
+ "name": "Server Name",
+ "transport": {
+ "type": "stdio|sse",
+ // Transport-specific configuration
+ }
+ }
+ ]
+}
+```
+
+In this example, we have a single server with a name and a transport type. The transport type is either `stdio` or `sse`.
+
+#### Configuration for stdio Transport
+
+For local servers using stdio transport, the configuration includes the command and arguments to launch the server process:
+
+```json
+{
+ "servers": [
+ {
+ "name": "File Explorer",
+ "transport": {
+ "type": "stdio",
+ "command": "python",
+ "args": ["/path/to/file_explorer_server.py"]
+ }
+ }
+ ]
+}
+```
+
+Here, we have a server called "File Explorer" that is a local script.
+
+#### Configuration for HTTP+SSE Transport
+
+For remote servers using HTTP+SSE transport, the configuration includes the server URL:
+
+```json
+{
+ "servers": [
+ {
+ "name": "Remote API Server",
+ "transport": {
+ "type": "sse",
+ "url": "https://example.com/mcp-server"
+ }
+ }
+ ]
+}
+```
+
+#### Environment Variables in Configuration
+
+Environment variables can be passed to server processes using the `env` field. Here's how to access them in your server code:
+
+
+
+
+In Python, we use the `os` module to access environment variables:
+
+```python
+import os
+
+# Access environment variables
+github_token = os.environ.get("GITHUB_TOKEN")
+if not github_token:
+ raise ValueError("GITHUB_TOKEN environment variable is required")
+
+# Use the token in your server code
+def make_github_request():
+ headers = {"Authorization": f"Bearer {github_token}"}
+ # ... rest of your code
+```
+
+
+
+
+In JavaScript, we use the `process.env` object to access environment variables:
+
+```javascript
+// Access environment variables
+const githubToken = process.env.GITHUB_TOKEN;
+if (!githubToken) {
+ throw new Error("GITHUB_TOKEN environment variable is required");
+}
+
+// Use the token in your server code
+function makeGithubRequest() {
+ const headers = { "Authorization": `Bearer ${githubToken}` };
+ // ... rest of your code
+}
+```
+
+
+
+
+The corresponding configuration in `mcp.json` would look like this:
+
+```json
+{
+ "servers": [
+ {
+ "name": "GitHub API",
+ "transport": {
+ "type": "stdio",
+ "command": "python",
+ "args": ["/path/to/github_server.py"],
+ "env": {
+ "GITHUB_TOKEN": "your_github_token"
+ }
+ }
+ }
+ ]
+}
+```
+
+### Configuration Examples
+
+Let's look at some real-world configuration scenarios:
+
+#### Scenario 1: Local Server Configuration
+
+In this scenario, we have a local server that is a Python script which could be a file explorer or a code editor.
+
+```json
+{
+ "servers": [
+ {
+ "name": "File Explorer",
+ "transport": {
+ "type": "stdio",
+ "command": "python",
+ "args": ["/path/to/file_explorer_server.py"]
+ }
+ }
+ ]
+}
+```
+
+#### Scenario 2: Remote Server Configuration
+
+In this scenario, we have a remote server that is a weather API.
+
+```json
+{
+ "servers": [
+ {
+ "name": "Weather API",
+ "transport": {
+ "type": "sse",
+ "url": "https://example.com/mcp-server"
+ }
+ }
+ ]
+}
+```
+
+Proper configuration is essential for successfully deploying MCP integrations. By understanding these aspects, you can create robust and reliable connections between AI applications and external capabilities.
+
+In the next section, we'll look at how to use MCP Clients from within code.
+
+## Code Clients
+
+You can also use the MCP Client from within code so that the tools are available to the LLM. Let's explore some examples in `smolagents`.
+
+First, let's explore our weather server from the previous page. In `smolagents`, we can use the `ToolCollection` class to automatically discover and register tools from an MCP server. This is done by passing the `StdioServerParameters` to the `ToolCollection.from_mcp` method. We can then print the tools to the console.
+
+```python
+from smolagents import ToolCollection, CodeAgent
+from mcp.client.stdio import StdioServerParameters
+
+server_parameters = StdioServerParameters(command="uv", args=["run", "server.py"])
+
+with ToolCollection.from_mcp(
+    server_parameters, trust_remote_code=True
+) as tool_collection:
+    print("\n".join(f"{t.name}: {t.description}" for t in tool_collection.tools))
+
+```
+
+
+
+Output
+
+
+```sh
+Weather API: Get the weather in a specific location
+
+```
+
+
+
+We can also connect to an MCP server that is hosted on a remote machine. In this case, we can use the `MCPClient` class and pass it the URL of the server's SSE endpoint.
+
+```python
+from smolagents.mcp_client import MCPClient
+
+with MCPClient(
+ {"url": "https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/sse"}
+) as tools:
+ # Tools from the remote server are available
+ print("\n".join(f"{t.name}: {t.description}" for t in tools))
+```
+
+
+
+Output
+
+
+```sh
+prime_factors: Compute the prime factorization of a positive integer.
+generate_cheetah_image: Generate a cheetah image.
+image_orientation: Returns whether image is portrait or landscape.
+sepia: Apply a sepia filter to the input image.
+```
+
+
+
+Now, let's see how we can use the MCP Client in a code agent.
+
+```python
+from smolagents import CodeAgent, InferenceClientModel, ToolCollection
+from mcp.client.stdio import StdioServerParameters
+
+model = InferenceClientModel()
+
+server_parameters = StdioServerParameters(command="uv", args=["run", "server.py"])
+
+with ToolCollection.from_mcp(
+ server_parameters, trust_remote_code=True
+) as tool_collection:
+ agent = CodeAgent(tools=[*tool_collection.tools], model=model)
+ agent.run("What's the weather in Tokyo?")
+
+```
+
+
+
+Output
+
+
+```sh
+The weather in Tokyo is sunny with a temperature of 20 degrees Celsius.
+```
+
+
+
+We can also connect to MCP packages. Here's an example of connecting to the `pubmedmcp` package.
+
+```python
+import os
+
+from smolagents import ToolCollection, CodeAgent, InferenceClientModel
+from mcp import StdioServerParameters
+
+server_parameters = StdioServerParameters(
+    command="uvx",
+    args=["--quiet", "pubmedmcp@0.1.3"],
+    env={"UV_PYTHON": "3.12", **os.environ},
+)
+
+with ToolCollection.from_mcp(server_parameters, trust_remote_code=True) as tool_collection:
+    agent = CodeAgent(tools=[*tool_collection.tools], model=InferenceClientModel(), add_base_tools=True)
+ agent.run("Please find a remedy for hangover.")
+```
+
+
+
+Output
+
+
+```sh
+The remedy for hangover is to drink water.
+```
+
+
+
+## Next Steps
+
+Now that you understand MCP Clients, you're ready to:
+* Explore specific MCP Server implementations
+* Learn about creating custom MCP Clients
+* Dive into advanced MCP integration patterns
+
+Let's continue our journey into the world of Model Context Protocol!
diff --git a/units/en/unit1/sdk.mdx b/units/en/unit1/sdk.mdx
new file mode 100644
index 0000000..614d960
--- /dev/null
+++ b/units/en/unit1/sdk.mdx
@@ -0,0 +1,168 @@
+# MCP SDK
+
+The Model Context Protocol provides official SDKs for JavaScript, Python, and other languages. This makes it easy to implement MCP clients and servers in your applications. These SDKs handle the low-level protocol details, allowing you to focus on building your application's capabilities.
+
+## SDK Overview
+
+Both SDKs provide similar core functionality, following the MCP protocol specification we discussed earlier. They handle:
+
+- Protocol-level communication
+- Capability registration and discovery
+- Message serialization/deserialization
+- Connection management
+- Error handling
+
+## Core Primitives Implementation
+
+Let's explore how to implement each of the core primitives (Tools, Resources, and Prompts) using both SDKs.
+
+
+
+
+
+
+```python
+from mcp.server.fastmcp import FastMCP
+
+# Create an MCP server
+mcp = FastMCP("Weather Service")
+
+
+@mcp.tool()
+def get_weather(location: str) -> str:
+ """Get the current weather for a specified location."""
+ return f"Weather in {location}: Sunny, 72Β°F"
+
+
+@mcp.resource("weather://{location}")
+def weather_resource(location: str) -> str:
+ """Provide weather data as a resource."""
+ return f"Weather data for {location}: Sunny, 72Β°F"
+
+
+@mcp.prompt()
+def weather_report(location: str) -> str:
+ """Create a weather report prompt."""
+ return f"""You are a weather reporter. Weather report for {location}?"""
+
+
+# Run the server
+if __name__ == "__main__":
+ mcp.run()
+
+```
+
+
+
+
+```javascript
+import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
+import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
+import { z } from "zod";
+
+// Create an MCP server
+const server = new McpServer({
+ name: "Weather Service",
+ version: "1.0.0"
+});
+
+// Tool implementation
+server.tool("get_weather",
+ { location: z.string() },
+ async ({ location }) => ({
+ content: [{
+ type: "text",
+ text: `Weather in ${location}: Sunny, 72Β°F`
+ }]
+ })
+);
+
+// Resource implementation
+server.resource(
+ "weather",
+ new ResourceTemplate("weather://{location}", { list: undefined }),
+ async (uri, { location }) => ({
+ contents: [{
+ uri: uri.href,
+ text: `Weather data for ${location}: Sunny, 72Β°F`
+ }]
+ })
+);
+
+// Prompt implementation
+server.prompt(
+ "weather_report",
+ { location: z.string() },
+ async ({ location }) => ({
+ messages: [
+ {
+ role: "assistant",
+ content: {
+ type: "text",
+ text: "You are a weather reporter."
+ }
+ },
+ {
+ role: "user",
+ content: {
+ type: "text",
+ text: `Weather report for ${location}?`
+ }
+ }
+ ]
+ })
+);
+
+// Run the server
+const transport = new StdioServerTransport();
+await server.connect(transport);
+```
+
+
+
+
+Once you have your server implemented, you can start it by running the server script.
+
+```bash
+mcp dev server.py
+```
+
+This will start a development server running the file `server.py` and log the following output:
+
+```bash
+Starting MCP inspector...
+⚙️ Proxy server listening on port 6277
+Spawned stdio transport
+Connected MCP client to backing server transport
+Created web app transport
+Set up MCP proxy
+🔍 MCP Inspector is up and running at http://127.0.0.1:6274 🚀
+```
+
+You can then open the MCP Inspector at [http://127.0.0.1:6274](http://127.0.0.1:6274) to browse the server's capabilities and call them directly from the UI.
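+
+> [!TIP]
+> If the `mcp` command is not found, you may have installed the Python SDK without its CLI extra. Assuming you are using the official `mcp` package, this should provide it:
+>
+> ```bash
+> pip install "mcp[cli]"
+> ```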
+
+
+
+## MCP SDKs
+
+MCP is designed to be language-agnostic, and there are official SDKs available for several popular programming languages:
+
+| Language | Repository | Maintainer(s) | Status |
+|----------|------------|---------------|--------|
+| TypeScript | [github.com/modelcontextprotocol/typescript-sdk](https://github.com/modelcontextprotocol/typescript-sdk) | Anthropic | Active |
+| Python | [github.com/modelcontextprotocol/python-sdk](https://github.com/modelcontextprotocol/python-sdk) | Anthropic | Active |
+| Java | [github.com/modelcontextprotocol/java-sdk](https://github.com/modelcontextprotocol/java-sdk) | Spring AI (VMware) | Active |
+| Kotlin | [github.com/modelcontextprotocol/kotlin-sdk](https://github.com/modelcontextprotocol/kotlin-sdk) | JetBrains | Active |
+| C# | [github.com/modelcontextprotocol/csharp-sdk](https://github.com/modelcontextprotocol/csharp-sdk) | Microsoft | Active (Preview) |
+| Swift | [github.com/modelcontextprotocol/swift-sdk](https://github.com/modelcontextprotocol/swift-sdk) | loopwork-ai | Active |
+| Rust | [github.com/modelcontextprotocol/rust-sdk](https://github.com/modelcontextprotocol/rust-sdk) | Anthropic/Community | Active |
+
+These SDKs provide language-specific abstractions that simplify working with the MCP protocol, allowing you to focus on implementing the core logic of your servers or clients rather than dealing with low-level protocol details.
+
+## Next Steps
+
+We've only scratched the surface of what you can do with MCP, but you've already got a basic server running. In fact, you've also connected to it from the browser using the MCP Inspector.
+
+In the next section, we'll look at how to connect to your server from an LLM.
diff --git a/units/en/unit2/clients.mdx b/units/en/unit2/clients.mdx
new file mode 100644
index 0000000..011d039
--- /dev/null
+++ b/units/en/unit2/clients.mdx
@@ -0,0 +1,80 @@
+# Building MCP Clients
+
+In this section, we'll set up clients that can interact with our MCP server from different programming languages. We'll cover both a JavaScript client using HuggingFace.js and a Python client using smolagents.
+
+## Configuring MCP Clients
+
+Effective deployment of MCP servers and clients requires proper configuration. Because the MCP specification is still evolving, configuration methods are subject to change. We'll focus on current best practices.
+
+### MCP Configuration Files
+
+MCP hosts use configuration files to manage server connections. These files define which servers are available and how to connect to them.
+
+The configuration files are very simple, easy to understand, and consistent across major MCP hosts.
+
+#### `mcp.json` Structure
+
+The standard configuration file for MCP is named `mcp.json`. Here's the basic structure:
+
+```json
+{
+ "servers": [
+ {
+ "name": "MCP Server",
+ "transport": {
+ "type": "sse",
+ "url": "http://localhost:7860/gradio_api/mcp/sse"
+ }
+ }
+ ]
+}
+```
+
+In this example, we have a single server configured to use SSE transport, connecting to a local Gradio server running on port 7860.
+
+
+
+We've connected to the Gradio app via SSE transport because we assume it is running on a remote server. However, if you want to connect to a local script, `stdio` transport is the better option.
+
+
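+For reference, a `stdio` entry points at a command to spawn rather than a URL. Here's a minimal sketch, assuming the same `mcp.json` structure as the SSE example above:
+
+```json
+{
+  "servers": [
+    {
+      "name": "Local Python Server",
+      "transport": {
+        "type": "stdio",
+        "command": "python",
+        "args": ["server.py"]
+      }
+    }
+  ]
+}
+```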
+
+#### Configuration for HTTP+SSE Transport
+
+For remote servers using HTTP+SSE transport, the configuration includes the server URL:
+
+```json
+{
+ "servers": [
+ {
+ "name": "Remote MCP Server",
+ "transport": {
+ "type": "sse",
+ "url": "https://example.com/gradio_api/mcp/sse"
+ }
+ }
+ ]
+}
+```
+
+This configuration lets your client reach an MCP server hosted remotely, such as a Gradio app deployed on Hugging Face Spaces.
+
+## Configuring a UI MCP Client
+
+When working with Gradio MCP servers, you can configure your UI client to connect to the server using the MCP protocol. Here's how to set it up:
+
+### Basic Configuration
+
+Create a new file called `config.json` with the following configuration:
+
+```json
+{
+ "mcpServers": {
+ "mcp": {
+ "url": "http://localhost:7860/gradio_api/mcp/sse"
+ }
+ }
+}
+```
+
+This configuration allows your UI client to communicate with the Gradio MCP server using the MCP protocol, enabling seamless integration between your frontend and the MCP service.
+
diff --git a/units/en/unit2/gradio-client.mdx b/units/en/unit2/gradio-client.mdx
new file mode 100644
index 0000000..395b8b4
--- /dev/null
+++ b/units/en/unit2/gradio-client.mdx
@@ -0,0 +1,147 @@
+# Gradio as an MCP Client
+
+In the previous section, we explored how to create an MCP Server using Gradio and connect to it using an MCP Client. In this section, we'll flip things around and use Gradio itself as an MCP Client that connects to an MCP Server.
+
+
+
+Gradio is best suited to creating UI clients and MCP servers, but it can also be used as an MCP Client and exposed as a UI.
+
+
+
+We'll connect to the MCP server we created in the previous section and use it to answer questions.
+
+## MCP Client in Gradio
+
+First, install the `smolagents`, `gradio`, and `mcp` libraries, if you haven't already:
+
+```bash
+pip install "smolagents[mcp]" "gradio[mcp]" mcp
+```
+
+Now, we can import the necessary libraries and create a simple Gradio interface that uses the MCP Client to connect to the MCP Server.
+
+```python
+import gradio as gr
+
+from smolagents import CodeAgent, InferenceClientModel
+from smolagents.mcp_client import MCPClient
+```
+
+Next, we'll connect to the MCP Server and get the tools that we can use to answer questions.
+
+```python
+mcp_client = MCPClient(
+ {"url": "http://localhost:7860/gradio_api/mcp/sse"}
+)
+tools = mcp_client.get_tools()
+```
+
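+To sanity-check the connection, you can list what the server exposes. This assumes each tool object carries `name` and `description` attributes, as smolagents tool wrappers do:
+
+```python
+# Quick check: print the tools advertised by the server
+for tool in tools:
+    print(f"{tool.name}: {tool.description}")
+```
+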
+Now that we have the tools, we can create a simple agent that uses them to answer questions. We'll just use a simple `InferenceClientModel` and the default model from `smolagents` for now.
+
+```python
+model = InferenceClientModel()
+agent = CodeAgent(tools=[*tools], model=model)
+```
+
+Now, we can create a simple Gradio interface that uses the agent to answer questions.
+
+```python
+demo = gr.ChatInterface(
+    fn=lambda message, history: str(agent.run(message)),
+    type="messages",
+    examples=["Prime factorization of 68"],
+    title="Agent with MCP Tools",
+    description="This is a simple agent that uses MCP tools to answer questions.",
+)
+
+demo.launch()
+```
+
+And that's it! We've created a simple Gradio interface that uses the MCP Client to connect to the MCP Server and answer questions.
+
+
+
+
+## Complete Example
+
+Here's the complete example of the MCP Client in Gradio:
+
+```python
+import gradio as gr
+
+from smolagents import CodeAgent, InferenceClientModel
+from smolagents.mcp_client import MCPClient
+
+
+try:
+ mcp_client = MCPClient(
+ # {"url": "https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/sse"}
+ {"url": "http://localhost:7860/gradio_api/mcp/sse"}
+ )
+ tools = mcp_client.get_tools()
+
+ model = InferenceClientModel()
+ agent = CodeAgent(tools=[*tools], model=model)
+
+    def call_agent(message, history):
+        return str(agent.run(message))
+
+    demo = gr.ChatInterface(
+        fn=call_agent,
+        type="messages",
+        examples=["Prime factorization of 68"],
+        title="Agent with MCP Tools",
+        description="This is a simple agent that uses MCP tools to answer questions.",
+    )
+
+ demo.launch()
+finally:
+ mcp_client.close()
+```
+
+You'll notice that we're closing the MCP Client in the `finally` block. This matters because the MCP Client holds a long-lived connection to the server that must be closed when the program exits.
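+
+Assuming your version of `smolagents` supports it, you can also use `MCPClient` as a context manager, which closes the connection automatically:
+
+```python
+# Sketch: context-manager form, connection is closed on exit
+with MCPClient({"url": "http://localhost:7860/gradio_api/mcp/sse"}) as tools:
+    agent = CodeAgent(tools=[*tools], model=InferenceClientModel())
+    print(agent.run("What is the prime factorization of 68?"))
+```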
+
+## Deploying to Hugging Face Spaces
+
+To make your client available to others, you can deploy it to Hugging Face Spaces, just like we did with the server in the previous section.
+To deploy your Gradio MCP client to Hugging Face Spaces:
+
+1. Create a new Space on Hugging Face:
+ - Go to huggingface.co/spaces
+ - Click "Create new Space"
+ - Choose "Gradio" as the SDK
+ - Name your space (e.g., "mcp-client")
+
+2. Create a `requirements.txt` file:
+```txt
+gradio[mcp]
+smolagents[mcp]
+```
+
+3. Push your code to the Space:
+```bash
+git init
+git add app.py requirements.txt  # Spaces runs app.py by default, so save your client code there
+git commit -m "Initial commit"
+git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/mcp-client
+git push -u origin main
+```
+
+## Conclusion
+
+In this section, we've explored how to use Gradio as an MCP Client to connect to an MCP Server. We've also seen how to deploy the MCP Client to Hugging Face Spaces.
+
+
diff --git a/units/en/unit2/gradio-server.mdx b/units/en/unit2/gradio-server.mdx
new file mode 100644
index 0000000..0bf6b36
--- /dev/null
+++ b/units/en/unit2/gradio-server.mdx
@@ -0,0 +1,188 @@
+# Building the Gradio MCP Server
+
+In this section, we'll create our sentiment analysis MCP server using Gradio. This server will expose a sentiment analysis tool that can be used by both human users through a web interface and AI models through the MCP protocol.
+
+## Introduction to Gradio MCP Integration
+
+Gradio provides a straightforward way to create MCP servers by automatically converting your Python functions into MCP tools. When you set `mcp_server=True` in `launch()`, Gradio:
+
+1. Automatically converts your functions into MCP Tools
+2. Maps input components to tool argument schemas
+3. Determines response formats from output components
+4. Sets up JSON-RPC over HTTP+SSE for client-server communication
+5. Creates both a web interface and an MCP server endpoint
+
+## Setting Up the Project
+
+First, let's create a new directory for our project and set up the required dependencies:
+
+```bash
+mkdir mcp-sentiment
+cd mcp-sentiment
+python -m venv venv
+source venv/bin/activate # On Windows: venv\Scripts\activate
+pip install "gradio[mcp]" textblob
+```
+
+## Creating the Server
+
+Create a new file called `server.py` with the following code:
+
+```python
+import gradio as gr
+from textblob import TextBlob
+
+def sentiment_analysis(text: str) -> dict:
+ """
+ Analyze the sentiment of the given text.
+
+ Args:
+ text (str): The text to analyze
+
+ Returns:
+ dict: A dictionary containing polarity, subjectivity, and assessment
+ """
+ blob = TextBlob(text)
+ sentiment = blob.sentiment
+
+ return {
+ "polarity": round(sentiment.polarity, 2), # -1 (negative) to 1 (positive)
+ "subjectivity": round(sentiment.subjectivity, 2), # 0 (objective) to 1 (subjective)
+ "assessment": "positive" if sentiment.polarity > 0 else "negative" if sentiment.polarity < 0 else "neutral"
+ }
+
+# Create the Gradio interface
+demo = gr.Interface(
+ fn=sentiment_analysis,
+ inputs=gr.Textbox(placeholder="Enter text to analyze..."),
+ outputs=gr.JSON(),
+ title="Text Sentiment Analysis",
+ description="Analyze the sentiment of text using TextBlob"
+)
+
+# Launch the interface and MCP server
+if __name__ == "__main__":
+ demo.launch(mcp_server=True)
+```
+
+## Understanding the Code
+
+Let's break down the key components:
+
+1. **Function Definition**:
+ - The `sentiment_analysis` function takes a text input and returns a dictionary
+ - It uses TextBlob to analyze the sentiment
+ - The docstring is crucial as it helps Gradio generate the MCP tool schema
+ - Type hints (`str` and `dict`) help define the input/output schema
+
+2. **Gradio Interface**:
+ - `gr.Interface` creates both the web UI and MCP server
+ - The function is exposed as an MCP tool automatically
+ - Input and output components define the tool's schema
+ - The JSON output component ensures proper serialization
+
+3. **MCP Server**:
+ - Setting `mcp_server=True` enables the MCP server
+ - The server will be available at `http://localhost:7860/gradio_api/mcp/sse`
+ - You can also enable it using the environment variable:
+ ```bash
+ export GRADIO_MCP_SERVER=True
+ ```
+
+## Running the Server
+
+Start the server by running:
+
+```bash
+python server.py
+```
+
+You should see output indicating that both the web interface and MCP server are running. The web interface will be available at `http://localhost:7860`, and the MCP server at `http://localhost:7860/gradio_api/mcp/sse`.
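+
+The exact wording varies across Gradio versions, but the startup log looks roughly like this:
+
+```bash
+* Running on local URL:  http://127.0.0.1:7860
+
+🔨 MCP server (using SSE) is running at: http://127.0.0.1:7860/gradio_api/mcp/sse
+```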
+
+## Testing the Server
+
+You can test the server in two ways:
+
+1. **Web Interface**:
+ - Open `http://localhost:7860` in your browser
+ - Enter some text and click "Submit"
+ - You should see the sentiment analysis results
+
+2. **MCP Schema**:
+ - Visit `http://localhost:7860/gradio_api/mcp/schema`
+ - This shows the MCP tool schema that clients will use
+ - You can also find this in the "View API" link in the footer of your Gradio app
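+
+For example, you can fetch the schema from the command line to confirm the tool is registered:
+
+```bash
+# Should return a JSON description of the sentiment_analysis tool
+curl http://localhost:7860/gradio_api/mcp/schema
+```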
+
+## Troubleshooting Tips
+
+1. **Type Hints and Docstrings**:
+ - Always provide type hints for your function parameters and return values
+ - Include a docstring with an "Args:" block for each parameter
+ - This helps Gradio generate accurate MCP tool schemas
+
+2. **String Inputs**:
+ - When in doubt, accept input arguments as `str`
+ - Convert them to the desired type inside the function
+   - This provides better compatibility with MCP clients (see the sketch after this list)
+
+3. **SSE Support**:
+ - Some MCP clients don't support SSE-based MCP Servers
+ - In those cases, use `mcp-remote`:
+ ```json
+ {
+ "mcpServers": {
+ "gradio": {
+ "command": "npx",
+ "args": [
+ "mcp-remote",
+ "http://localhost:7860/gradio_api/mcp/sse"
+ ]
+ }
+ }
+ }
+ ```
+
+4. **Connection Issues**:
+ - If you encounter connection problems, try restarting both the client and server
+ - Check that the server is running and accessible
+ - Verify that the MCP schema is available at the expected URL
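+
+To illustrate the string-input tip above, here is a minimal sketch of a hypothetical tool that accepts every argument as `str` and converts internally:
+
+```python
+def repeat_text(text: str, times: str = "2") -> str:
+    """
+    Repeat the given text a number of times.
+
+    Args:
+        text (str): The text to repeat
+        times (str): How many times to repeat it, passed as a string
+    """
+    # Accept the argument as str and convert inside the function
+    # for broader compatibility with MCP clients.
+    return text * int(times)
+```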
+
+## Deploying to Hugging Face Spaces
+
+To make your server available to others, you can deploy it to Hugging Face Spaces:
+
+1. Create a new Space on Hugging Face:
+ - Go to huggingface.co/spaces
+ - Click "Create new Space"
+ - Choose "Gradio" as the SDK
+ - Name your space (e.g., "mcp-sentiment")
+
+2. Create a `requirements.txt` file:
+```txt
+gradio[mcp]
+textblob
+```
+
+3. Push your code to the Space:
+```bash
+git init
+git add server.py requirements.txt
+git commit -m "Initial commit"
+git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/mcp-sentiment
+git push -u origin main
+```
+
+Your MCP server will now be available at:
+```
+https://YOUR_USERNAME-mcp-sentiment.hf.space/gradio_api/mcp/sse
+```
+
+## Next Steps
+
+Now that we have our MCP server running, we'll create clients to interact with it. In the next sections, we'll:
+
+1. Create a HuggingFace.js-based client inspired by Tiny Agents
+2. Implement a SmolAgents-based Python client
+3. Test both clients with our deployed server
+
+Let's move on to building our first client!
\ No newline at end of file
diff --git a/units/en/unit2/introduction.mdx b/units/en/unit2/introduction.mdx
new file mode 100644
index 0000000..9b42dac
--- /dev/null
+++ b/units/en/unit2/introduction.mdx
@@ -0,0 +1,64 @@
+# Building an End-to-End MCP Application
+
+Welcome to Unit 2 of the MCP Course!
+
+In this unit, we'll build a complete MCP application from scratch, focusing on creating a server with Gradio and connecting it with multiple clients. This hands-on approach will give you practical experience with the entire MCP ecosystem.
+
+
+
+In this unit, we're going to build a simple MCP server and client using Gradio and the Hugging Face Hub. In the next unit, we'll build a more complex server that tackles a real-world use case.
+
+
+
+## What You'll Learn
+
+In this unit, you will:
+
+- Create an MCP Server using Gradio's built-in MCP support
+- Build a sentiment analysis tool that can be used by AI models
+- Connect to the server using different client implementations:
+ - A HuggingFace.js-based client
+ - A SmolAgents-based client for Python
+- Deploy your MCP Server to Hugging Face Spaces
+- Test and debug the complete system
+
+By the end of this unit, you'll have a working MCP application that demonstrates the power and flexibility of the protocol.
+
+## Prerequisites
+
+Before proceeding with this unit, make sure you:
+
+- Have completed Unit 1 or have a basic understanding of MCP concepts
+- Are comfortable with both Python and JavaScript/TypeScript
+- Have a basic understanding of APIs and client-server architecture
+- Have a development environment with:
+ - Python 3.10+
+ - Node.js 18+
+ - A Hugging Face account (for deployment)
+
+## Our End-to-End Project
+
+We'll build a sentiment analysis application that consists of three main parts: the server, the client, and the deployment.
+
+
+
+### Server Side
+
+- Uses Gradio to create a web interface and MCP server via `gr.Interface`
+- Implements a sentiment analysis tool using TextBlob
+- Exposes the tool through both HTTP and MCP protocols
+
+### Client Side
+
+- Implements a HuggingFace.js client
+- Or, creates a smolagents Python client
+- Demonstrates how to use the same server with different client implementations
+
+### Deployment
+
+- Deploys the server to Hugging Face Spaces
+- Configures the clients to work with the deployed server
+
+## Let's Get Started!
+
+Are you ready to build your first end-to-end MCP application? Let's begin by setting up the development environment and creating our Gradio MCP server.
\ No newline at end of file
diff --git a/units/en/unit2/tiny-agents.mdx b/units/en/unit2/tiny-agents.mdx
new file mode 100644
index 0000000..5e587a8
--- /dev/null
+++ b/units/en/unit2/tiny-agents.mdx
@@ -0,0 +1,457 @@
+# Tiny Agents: an MCP-powered agent in 50 lines of code
+
+Now that we've built MCP servers in Gradio, let's explore MCP clients even further. This section builds on the experimental project [Tiny Agents](https://huggingface.co/blog/tiny-agents), which demonstrates a super simple way of deploying MCP clients that can connect to services like our Gradio sentiment analysis server.
+
+In this short exercise, we will walk you through how to implement a TypeScript (JS) MCP client that can communicate with any MCP server, including the Gradio-based sentiment analysis server we built in the previous section. You'll see how MCP standardizes the way agents interact with tools, making Agentic AI development significantly simpler.
+
+
+Image credit https://x.com/adamdotdev
+
+We will show you how to connect your tiny agent to Gradio-based MCP servers, allowing it to leverage both your custom sentiment analysis tool and other pre-built tools.
+
+## How to run the complete demo
+
+If you have NodeJS (with `pnpm` or `npm`), just run this in a terminal:
+
+```bash
+npx @huggingface/mcp-client
+```
+
+or if using `pnpm`:
+
+```bash
+pnpx @huggingface/mcp-client
+```
+
+This installs the package into a temporary folder and then executes its command.
+
+You'll see your simple Agent connect to multiple MCP servers (running locally), loading their tools (similar to how it would load your Gradio sentiment analysis tool), then prompting you for a conversation.
+
+
+
+By default, our example Agent connects to the following two MCP servers:
+
+- the "canonical" [file system server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem), which gets access to your Desktop,
+- and the [Playwright MCP](https://github.com/microsoft/playwright-mcp) server, which knows how to use a sandboxed Chromium browser for you.
+
+You can easily add your Gradio sentiment analysis server to this list, as we'll demonstrate later in this section.
+
+> [!NOTE]
+> Note: this is a bit counter-intuitive but currently, all MCP servers in tiny agents are actually local processes (though remote servers are coming soon). This includes our Gradio server: the agent reaches it through a local `mcp-remote` proxy process, even though the Gradio app itself listens on localhost:7860.
+
+Our input for this first video was:
+
+> write a haiku about the Hugging Face community and write it to a file named "hf.txt" on my Desktop
+
+Now let us try this prompt that involves some Web browsing:
+
+> do a Web Search for HF inference providers on Brave Search and open the first 3 results
+
+
+
+With our Gradio sentiment analysis tool connected, we could similarly ask:
+> analyze the sentiment of this review: "I absolutely loved the product, it exceeded all my expectations!"
+
+### Default model and provider
+
+By default, our example Agent uses the following model/provider pair:
+- ["Qwen/Qwen2.5-72B-Instruct"](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct)
+- running on [Nebius](https://huggingface.co/docs/inference-providers/providers/nebius)
+
+This is all configurable through env variables! Here, we'll also show how to add our Gradio MCP server:
+
+```ts
+const agent = new Agent({
+ provider: process.env.PROVIDER ?? "nebius",
+ model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
+ apiKey: process.env.HF_TOKEN,
+ servers: [
+    // Default servers (package names below assume the official npm distributions)
+    {
+      command: "npx",
+      args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/Desktop"]
+    },
+    {
+      command: "npx",
+      args: ["@playwright/mcp@latest"]
+    },
+ // Our Gradio sentiment analysis server
+ {
+ command: "npx",
+ args: [
+ "mcp-remote",
+ "http://localhost:7860/gradio_api/mcp/sse"
+ ]
+ }
+ ],
+});
+```
+
+
+
+We connect to our Gradio-based MCP server via the [`mcp-remote`](https://www.npmjs.com/package/mcp-remote) package.
+
+
+
+
+## The foundation for this: native tool-calling support in LLMs
+
+What makes connecting Gradio MCP servers to our Tiny Agent possible is that recent LLMs (both closed and open) have been trained for function calling, also known as tool use. This same capability powers our integration with the sentiment analysis tool we built with Gradio.
+
+A tool is defined by its name, a description, and a JSONSchema representation of its parameters - exactly how we defined our sentiment analysis function in the Gradio server. Let's look at a simple example:
+
+```ts
+const weatherTool = {
+ type: "function",
+ function: {
+ name: "get_weather",
+ description: "Get current temperature for a given location.",
+ parameters: {
+ type: "object",
+ properties: {
+ location: {
+ type: "string",
+ description: "City and country e.g. BogotΓ‘, Colombia",
+ },
+ },
+ },
+ },
+};
+```
+
+Our Gradio sentiment analysis tool would have a similar structure, with `text` as the input parameter instead of `location`.
+
+The canonical documentation I will link to here is [OpenAI's function calling doc](https://platform.openai.com/docs/guides/function-calling?api-mode=chat). (Yes... OpenAI pretty much defines the LLM standards for the whole community.)
+
+Inference engines let you pass a list of tools when calling the LLM, and the LLM is free to call zero, one or more of those tools.
+As a developer, you run the tools and feed their result back into the LLM to continue the generation.
+
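+To make this concrete, here is a sketch of the message exchange. The field shapes follow the OpenAI-style chat API, and the weather values are made up:
+
+```ts
+const messages = [
+  { role: "user", content: "What's the weather in Bogotá?" },
+  // 1. The LLM answers with a tool call instead of plain text
+  {
+    role: "assistant",
+    tool_calls: [{
+      id: "call_1",
+      type: "function",
+      function: { name: "get_weather", arguments: '{"location": "Bogotá, Colombia"}' },
+    }],
+  },
+  // 2. The developer runs the tool and appends its result
+  { role: "tool", tool_call_id: "call_1", name: "get_weather", content: "24°C, partly cloudy" },
+  // 3. The next completion call sees the result and can answer in plain text
+];
+```
+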
+> [!NOTE]
+> Note that in the backend (at the inference engine level), the tools are simply passed to the model in a specially-formatted `chat_template`, like any other message, and then parsed out of the response (using model-specific special tokens) to expose them as tool calls.
+
+## Implementing an MCP client on top of InferenceClient
+
+Now that we know what a tool is in recent LLMs, let's implement the actual MCP client that will communicate with our Gradio server and other MCP servers.
+
+The official doc at https://modelcontextprotocol.io/quickstart/client is fairly well-written. You only have to replace any mention of the Anthropic client SDK with any other OpenAI-compatible client SDK. (There is also a [llms.txt](https://modelcontextprotocol.io/llms-full.txt) you can feed into your LLM of choice to help you code along).
+
+As a reminder, we use HF's `InferenceClient` for our inference client.
+
+> [!TIP]
+> The complete `McpClient.ts` code file is [here](https://github.com/huggingface/huggingface.js/blob/main/packages/mcp-client/src/McpClient.ts) if you want to follow along using the actual code 🤗
+
+Our `McpClient` class has:
+- an Inference Client (works with any Inference Provider, and `huggingface/inference` supports both remote and local endpoints)
+- a set of MCP client sessions, one for each connected MCP server (this allows us to connect to multiple servers, including our Gradio server)
+- and a list of available tools that is going to be filled from the connected servers and just slightly re-formatted.
+
+```ts
+export class McpClient {
+  protected client: InferenceClient;
+  protected provider: string;
+  protected model: string;
+  private clients: Map<string, Client> = new Map();
+  public readonly availableTools: ChatCompletionInputTool[] = [];
+
+  constructor({ provider, model, apiKey }: { provider: InferenceProvider; model: string; apiKey: string }) {
+    this.client = new InferenceClient(apiKey);
+    this.provider = provider;
+    this.model = model;
+  }
+
+  // [...]
+}
+```
+
+To connect to an MCP server (like our Gradio sentiment analysis server), the official `@modelcontextprotocol/sdk/client` TypeScript SDK provides a `Client` class with a `listTools()` method:
+
+```ts
+async addMcpServer(server: StdioServerParameters): Promise<void> {
+ const transport = new StdioClientTransport({
+ ...server,
+ env: { ...server.env, PATH: process.env.PATH ?? "" },
+ });
+ const mcp = new Client({ name: "@huggingface/mcp-client", version: packageVersion });
+ await mcp.connect(transport);
+
+ const toolsResult = await mcp.listTools();
+ debug(
+ "Connected to server with tools:",
+ toolsResult.tools.map(({ name }) => name)
+ );
+
+ for (const tool of toolsResult.tools) {
+ this.clients.set(tool.name, mcp);
+ }
+
+ this.availableTools.push(
+ ...toolsResult.tools.map((tool) => {
+ return {
+ type: "function",
+ function: {
+ name: tool.name,
+ description: tool.description,
+ parameters: tool.inputSchema,
+ },
+ } satisfies ChatCompletionInputTool;
+ })
+ );
+}
+```
+
+`StdioServerParameters` is an interface from the MCP SDK that lets you easily spawn a local process: as we mentioned earlier, currently, all MCP servers are actually local processes, including our Gradio server, which we reach over HTTP through the local `mcp-remote` proxy.
+
+For each MCP server we connect to (including our Gradio sentiment analysis server), we slightly re-format its list of tools and add them to `this.availableTools`.
+
+### How to use the tools
+
+Using our sentiment analysis tool (or any other MCP tool) is straightforward. You just pass `this.availableTools` to your LLM chat-completion, in addition to your usual array of messages:
+
+```ts
+const stream = this.client.chatCompletionStream({
+ provider: this.provider,
+ model: this.model,
+ messages,
+ tools: this.availableTools,
+ tool_choice: "auto",
+});
+```
+
+`tool_choice: "auto"` is the parameter you pass for the LLM to generate zero, one, or multiple tool calls.
+
+When parsing or streaming the output, the LLM will generate some tool calls (i.e. a function name and some JSON-encoded arguments), which you (as a developer) need to execute. The MCP client SDK once again makes that very easy; it has a `client.callTool()` method:
+
+```ts
+const toolName = toolCall.function.name;
+const toolArgs = JSON.parse(toolCall.function.arguments);
+
+const toolMessage: ChatCompletionInputMessageTool = {
+ role: "tool",
+ tool_call_id: toolCall.id,
+ content: "",
+ name: toolName,
+};
+
+// Get the appropriate session for this tool
+const client = this.clients.get(toolName);
+if (client) {
+ const result = await client.callTool({ name: toolName, arguments: toolArgs });
+ toolMessage.content = result.content[0].text;
+} else {
+ toolMessage.content = `Error: No session found for tool: ${toolName}`;
+}
+```
+
+If the LLM chooses to use our sentiment analysis tool, this code will automatically route the call to our Gradio server, execute the analysis, and return the result back to the LLM.
+
+Finally, you will add the resulting tool message to your `messages` array and send it back to the LLM.
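+
+In code, that last step is just an append before the next completion call (a sketch using the names from the snippets above):
+
+```ts
+// The LLM sees the tool result on its next turn
+messages.push(toolMessage);
+```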
+
+## Our 50-lines-of-code Agent 🤯
+
+Now that we have an MCP client capable of connecting to arbitrary MCP servers (including our Gradio sentiment analysis server) to get lists of tools, and capable of injecting those tools into and parsing tool calls out of the LLM inference, well... what is an Agent?
+
+> Once you have an inference client with a set of tools, then an Agent is just a while loop on top of it.
+
+In more detail, an Agent is simply a combination of:
+- a system prompt
+- an LLM Inference client
+- an MCP client to hook a set of Tools into it from a bunch of MCP servers (including our Gradio server)
+- some basic control flow (see below for the while loop)
+
+> [!TIP]
+> The complete `Agent.ts` code file is [here](https://github.com/huggingface/huggingface.js/blob/main/packages/mcp-client/src/Agent.ts).
+
+Our `Agent` class simply extends `McpClient`:
+
+```ts
+export class Agent extends McpClient {
+ private readonly servers: StdioServerParameters[];
+ protected messages: ChatCompletionInputMessage[];
+
+ constructor({
+ provider,
+ model,
+ apiKey,
+ servers,
+ prompt,
+ }: {
+ provider: InferenceProvider;
+ model: string;
+ apiKey: string;
+ servers: StdioServerParameters[];
+ prompt?: string;
+ }) {
+ super({ provider, model, apiKey });
+ this.servers = servers;
+ this.messages = [
+ {
+ role: "system",
+ content: prompt ?? DEFAULT_SYSTEM_PROMPT,
+ },
+ ];
+ }
+}
+```
+
+By default, we use a very simple system prompt inspired by the one shared in the [GPT-4.1 prompting guide](https://cookbook.openai.com/examples/gpt4-1_prompting_guide).
+
+Even though this comes from OpenAI, this sentence in particular applies to more and more models, both closed and open:
+
+> We encourage developers to exclusively use the tools field to pass tools, rather than manually injecting tool descriptions into your prompt and writing a separate parser for tool calls, as some have reported doing in the past.
+
+Which is to say, we don't need to provide painstakingly formatted lists of tool use examples in the prompt. The `tools: this.availableTools` param is enough, and the LLM will know how to use both the filesystem tools and our Gradio sentiment analysis tool.
+
+Loading the tools on the Agent is literally just connecting to the MCP servers we want (in parallel because it's so easy to do in JS):
+
+```ts
+async loadTools(): Promise<void> {
+ await Promise.all(this.servers.map((s) => this.addMcpServer(s)));
+}
+```
+
+We add two extra tools (outside of MCP) that can be used by the LLM for our Agent's control flow:
+
+```ts
+const taskCompletionTool: ChatCompletionInputTool = {
+ type: "function",
+ function: {
+ name: "task_complete",
+ description: "Call this tool when the task given by the user is complete",
+ parameters: {
+ type: "object",
+ properties: {},
+ },
+ },
+};
+const askQuestionTool: ChatCompletionInputTool = {
+ type: "function",
+ function: {
+ name: "ask_question",
+ description: "Ask a question to the user to get more info required to solve or clarify their problem.",
+ parameters: {
+ type: "object",
+ properties: {},
+ },
+ },
+};
+const exitLoopTools = [taskCompletionTool, askQuestionTool];
+```
+
+When calling any of these tools, the Agent will break its loop and give control back to the user for new input.
+
+### The complete while loop
+
+Behold, our complete while loop!
+
+The gist of our Agent's main while loop is that we simply iterate with the LLM alternating between tool calling and feeding it the tool results, and we do so **until the LLM starts to respond with two non-tool messages in a row**.
+
+This is the complete while loop:
+
+```ts
+let numOfTurns = 0;
+let nextTurnShouldCallTools = true;
+while (true) {
+ try {
+ yield* this.processSingleTurnWithTools(this.messages, {
+ exitLoopTools,
+ exitIfFirstChunkNoTool: numOfTurns > 0 && nextTurnShouldCallTools,
+ abortSignal: opts.abortSignal,
+ });
+ } catch (err) {
+ if (err instanceof Error && err.message === "AbortError") {
+ return;
+ }
+ throw err;
+ }
+ numOfTurns++;
+ const currentLast = this.messages.at(-1)!;
+ if (
+ currentLast.role === "tool" &&
+ currentLast.name &&
+ exitLoopTools.map((t) => t.function.name).includes(currentLast.name)
+ ) {
+ return;
+ }
+ if (currentLast.role !== "tool" && numOfTurns > MAX_NUM_TURNS) {
+ return;
+ }
+ if (currentLast.role !== "tool" && nextTurnShouldCallTools) {
+ return;
+ }
+ if (currentLast.role === "tool") {
+ nextTurnShouldCallTools = false;
+ } else {
+ nextTurnShouldCallTools = true;
+ }
+}
+```
+
+## Connecting Tiny Agents with Gradio MCP Servers
+
+Now that we understand both Tiny Agents and Gradio MCP servers, let's see how they work together! The beauty of MCP is that it provides a standardized way for agents to interact with any MCP-compatible server, including our Gradio-based sentiment analysis server.
+
+### Using the Gradio Server with Tiny Agents
+
+To connect our Tiny Agent to the Gradio sentiment analysis server we built earlier, we just need to add it to our list of servers. Here's how we can modify our agent configuration:
+
+```ts
+const agent = new Agent({
+ provider: process.env.PROVIDER ?? "nebius",
+ model: process.env.MODEL_ID ?? "Qwen/Qwen2.5-72B-Instruct",
+ apiKey: process.env.HF_TOKEN,
+ servers: [
+ // ... existing servers ...
+ {
+ command: "npx",
+ args: [
+ "mcp-remote",
+ "http://localhost:7860/gradio_api/mcp/sse" // Your Gradio MCP server
+ ]
+ }
+ ],
+});
+```
+
+Now our agent can use the sentiment analysis tool alongside other tools! For example, it could:
+1. Read text from a file using the filesystem server
+2. Analyze its sentiment using our Gradio server
+3. Write the results back to a file
+
+### Example Interaction
+
+Here's what a conversation with our agent might look like:
+
+```
+User: Read the file "feedback.txt" from my Desktop and analyze its sentiment
+
+Agent: I'll help you analyze the sentiment of the feedback file. Let me break this down into steps:
+
+1. First, I'll read the file using the filesystem tool
+2. Then, I'll analyze its sentiment using the sentiment analysis tool
+3. Finally, I'll write the results to a new file
+
+[Agent proceeds to use the tools and provide the analysis]
+```
+
+### Deployment Considerations
+
+When deploying your Gradio MCP server to Hugging Face Spaces, you'll need to update the server URL in your agent configuration to point to your deployed space:
+
+```ts
+{
+ command: "npx",
+ args: [
+ "mcp-remote",
+ "https://YOUR_USERNAME-mcp-sentiment.hf.space/gradio_api/mcp/sse"
+ ]
+}
+```
+
+This allows your agent to use the sentiment analysis tool from anywhere, not just locally!
+
+
+
diff --git a/units/en/unit3/introduction.mdx b/units/en/unit3/introduction.mdx
new file mode 100644
index 0000000..d1526ee
--- /dev/null
+++ b/units/en/unit3/introduction.mdx
@@ -0,0 +1 @@
+# Introduction
\ No newline at end of file
diff --git a/units/en/unit4/introduction.mdx b/units/en/unit4/introduction.mdx
new file mode 100644
index 0000000..67b7e26
--- /dev/null
+++ b/units/en/unit4/introduction.mdx
@@ -0,0 +1 @@
+# Advanced Topics, Security, and the Future of MCP
\ No newline at end of file