Commit 0ce21b6

Merge pull request #60 from CrackingShells/dev
Fixing MCP tool lifecycle state management
2 parents: 4ff1c29 + 761683d

4 files changed (+76, -31 lines)

docs/CHANGELOG.md

Lines changed: 21 additions & 0 deletions
```diff
@@ -0,0 +1,21 @@
+## [0.4.3-dev.2](https://github.com/CrackingShells/Hatchling/compare/v0.4.3-dev.1...v0.4.3-dev.2) (2025-08-26)
+
+
+### Bug Fixes
+
+* **mcp:** disabled tools being called ([1d24986](https://github.com/CrackingShells/Hatchling/commit/1d249869b9d37c3b387f3197d8fc313055ac952a))
+
+## [0.4.3-dev.1](https://github.com/CrackingShells/Hatchling/compare/v0.4.2...v0.4.3-dev.1) (2025-08-26)
+
+
+### Bug Fixes
+
+* **mcp:** tools status after server reconnection ([bca9cc6](https://github.com/CrackingShells/Hatchling/commit/bca9cc6ffbadd0d3ef691b60364ea285700476fa))
+
+
+### Documentation
+
+* **dev:** remove design reports ([2dfd324](https://github.com/CrackingShells/Hatchling/commit/2dfd324689062f9622b5c9468414133aa6493a88))
+* instructions for AMD GPUs ([e2499bd](https://github.com/CrackingShells/Hatchling/commit/e2499bdaa768408df7279ed673a96092d4424155))
+* instructions to run hatchling ([9d18ac5](https://github.com/CrackingShells/Hatchling/commit/9d18ac5a03ee56942db2cf5ecf014869ca7b9261))
+* moving `CONTRIBUTING.md` in the `/docs` ([998d1b2](https://github.com/CrackingShells/Hatchling/commit/998d1b2345ebda16f566c761a5d508f43f44d0e8))
```

docs/articles/users/tutorials/Installation/docker-ollama-setup.md

Lines changed: 23 additions & 4 deletions
````diff
@@ -18,9 +18,9 @@ This document provides instructions on how to set up and run Ollama for deployin
 - On Windows, install Windows Subsystem for Linux (WSL). Latest version is v2: [Official Microsoft Documentation](https://learn.microsoft.com/en-us/windows/wsl/install)
 - GPU Support:
   - For MacOS users with Apple Silicon chips (typically M series), you can **follow the instructions for CPU and ignore the GPU-related sections**
-  - For Windows & Linux with dedicated GPUs, we strongly recommend enabling GPU support to increase LLM output speed. On the computer with the GPU, do:
+  - For Windows & Linux with dedicated GPUs, we strongly recommend enabling GPU support to increase LLM output speed. We will be using the official documentation for each GPU type:
     - NVIDIA GPUs: [NVIDIA Container Toolkit Installation](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
-    - AMD GPUs: Nothing, you can move on.
+    - AMD GPUs: [AMD ROCm Installation](https://rocm.docs.amd.com/projects/radeon/en/latest/docs/install/native_linux/install-radeon.html)
 
 ## Setup with Docker Desktop
 
@@ -33,10 +33,11 @@ This document provides instructions on how to set up and run Ollama for deployin
    - Either enable integration with your default WSL distro (arrow 4.1) OR select a specific one (arrow 4.2)
    - Click "Apply & Restart" if you make changes (arrow 5)
 
-3. **For NVIDIA GPU owners, setup GPU Support (nothing to do for AMD GPU owners at this stage)**:
+3. **For GPU owners, setup GPU Support**:
    - [Open a terminal](../../../appendices/open_a_terminal.md) on the computer with the GPU you want to use (for GPU servers, you likely connect through ssh)
    - On Windows, launch the Linux version that was installed via WSL and that Docker is using. For example, in the previous image, that would be `Ubuntu-24.04`; so, run `wsl -d Ubuntu-24.04` to start Ubuntu.
-   - For NVIDIA GPU support, run:
+
+   - **For NVIDIA GPU support**, run:
 
     ```bash
     # Add NVIDIA repository keys
@@ -55,6 +56,24 @@ This document provides instructions on how to set up and run Ollama for deployin
     sudo nvidia-ctk runtime configure --runtime=docker
     ```
 
+   - **For AMD GPU support**, run:
+
+    ```bash
+    # Install required packages
+    sudo apt install python3-setuptools python3-wheel
+
+    # Download and install AMD GPU installer script (for Ubuntu 24.04)
+    sudo apt update
+    wget https://repo.radeon.com/amdgpu-install/6.4.2.1/ubuntu/noble/amdgpu-install_6.4.60402-1_all.deb
+    sudo apt install ./amdgpu-install_6.4.60402-1_all.deb
+
+    # Install graphics and ROCm support
+    sudo amdgpu-install -y --usecase=graphics,rocm
+
+    # Add current user to render and video groups
+    sudo usermod -a -G render,video $LOGNAME
+    ```
+
    - Close the terminal
    - Restart Docker
      - For Docker Desktop, click on the three vertical dots icon (arrow 1), then `Restart` (arrow 2)
````
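The diff above installs the toolkits but stops short of verification. As a hedged sanity check, assumed from the NVIDIA Container Toolkit and ROCm documentation rather than from the changed files:

```bash
# Illustrative post-install checks; commands assumed from the toolkits'
# own docs, not from this commit.

# NVIDIA: after restarting Docker, a throwaway container should see the GPU.
docker run --rm --gpus all ubuntu nvidia-smi

# AMD: the ROCm device nodes should exist, and your user should now be in
# the render and video groups (log out and back in for usermod to apply).
ls -l /dev/kfd /dev/dri
groups "$USER"
```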

docs/articles/users/tutorials/Installation/running_hatchling.md

Lines changed: 9 additions & 0 deletions
````diff
@@ -32,6 +32,9 @@ This section assumes you have followed the [Docker & Ollama setup](./docker-olla
    docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm
    ```
 
+> [!Note]
+> Troubleshooting: If you encounter issues with the `/dev/kfd` or `/dev/dri` devices, try running the command with the `--privileged` flag: `docker run -d --privileged --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm`
+
 ### Checking that GPU support is enabled as expected
 
 - Go to the `Containers` tab in Docker Desktop (arrow 1) and select your Ollama container
@@ -72,6 +75,12 @@ At this step, you will be downloading the content of Hatchling. Currently, we ar
    cd ./Hatchling/docker
    ```
 
+### Copy the `.env.example` file to `.env`
+
+```bash
+cp .env.example .env
+```
+
 ### Install Hatchling by building the code
 
 ```bash
````
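As a follow-up to the new troubleshooting note and the `.env` step, a minimal sketch of how to confirm the container came up, assuming standard Docker CLI behavior and Ollama's HTTP API on the published port (the `/api/version` endpoint is an assumption from Ollama's API docs, not from this commit):

```bash
# Illustrative checks; assumptions, not part of the changed files.

# The Ollama container should be listed as running...
docker ps --filter name=ollama

# ...and its API should answer on port 11434 before moving on to Hatchling.
curl http://localhost:11434/api/version
```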

hatchling/mcp_utils/mcp_tool_lifecycle_subscriber.py

Lines changed: 23 additions & 27 deletions
```diff
@@ -121,39 +121,35 @@ def _handle_server_reachable_event(self, event: Event) -> None:
     def _handle_tool_enabled_event(self, event: Event) -> None:
         """Handle tool enabled event."""
         tool_name = event.data.get("tool_name", "")
-
-        # Create or update tool info from event data
-        if tool_name not in self._tool_cache:
-            tool_info = event.data.get("tool_info", {})
+        tool_info = event.data.get("tool_info", {})
+
+        if not tool_info:
+            self.logger.error(f"'Tool enabled event' missing 'tool_info' for tool '{tool_name}'")
+            return
 
-            if not tool_info:
-                self.logger.error(f"'Tool enabled event' missing 'tool_info' for tool '{tool_name}'")
-                return
-
-            # Convert tool to provider-specific format
-            # Tool info is an in/out parameter in mcp_to_provider_tool
-            # Hence, the provider_format field will be set
-            # to the converted tool format
-            self._mcp_to_provider_tool_func(tool_info)
+        # Always convert tool to provider-specific format
+        # This ensures that tools are properly formatted even during reconnection
+        # when they already exist in the cache but need their provider_format refreshed
+        self._mcp_to_provider_tool_func(tool_info)
 
-            self._tool_cache[tool_name] = tool_info
-            self.logger.debug(f"Tool enabled: {tool_name}")
+        # Update cache with the new tool info (whether new or existing)
+        self._tool_cache[tool_name] = tool_info
+        self.logger.debug(f"Tool enabled: {tool_name}")
 
     def _handle_tool_disabled_event(self, event: Event) -> None:
         """Handle tool disabled event."""
         tool_name = event.data.get("tool_name", "")
-
-        if tool_name in self._tool_cache:
-            tool_info = self._tool_cache[tool_name]
-            tool_info.status = MCPToolStatus.DISABLED
-
-            if "reason" in event.data:
-                try:
-                    tool_info.reason = MCPToolStatusReason[event.data["reason"].upper()]
-                except (KeyError, ValueError):
-                    tool_info.reason = MCPToolStatusReason.FROM_SYSTEM_ERROR
-
-            self.logger.debug(f"Tool disabled: {tool_name}")
+        tool_info = event.data.get("tool_info", {})
+
+        if not tool_info:
+            self.logger.error(f"'Tool disabled event' missing 'tool_info' for tool '{tool_name}'")
+            return
+
+        # Update cache with the fresh tool info from the event
+        # This ensures consistency with _handle_tool_enabled_event() and prevents
+        # stale cache entries that could cause disabled tools to remain in payloads
+        self._tool_cache[tool_name] = tool_info
+        self.logger.debug(f"Tool disabled: {tool_name}")
 
     def get_enabled_tools(self) -> Dict[str, MCPToolInfo]:
         """Get all currently enabled tools.
```
