Add Nix flake for packaging#6

Open
Veraticus wants to merge 1 commit into MarimerLLC:main from Veraticus:add-nix-flake

Conversation

@Veraticus

Summary

  • Adds Nix flake for building server and CLI as reproducible packages
  • Includes NixOS module for running as a systemd service
  • Provides update-nix-deps.sh helper script for regenerating dependency lockfiles
  • Documentation added to docs/nix-packaging.md

Why separate packages?

The server and CLI depend on different versions of Microsoft.Extensions.Logging (v9 vs v10). Combining them in a single build causes runtime assembly loading errors, so they're split into separate packages with separate dependency lockfiles.
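
The split can be sketched as two independent derivations, each pinned to its own lockfile (a hypothetical sketch assuming nixpkgs' `buildDotnetModule`; the `projectFile` paths are illustrative, not this PR's actual flake contents):

```nix
# Sketch: separate packages so the server and CLI never share a
# restored NuGet graph (avoiding the Logging v9/v10 conflict).
packages = {
  default = pkgs.buildDotnetModule {
    pname = "calendar-mcp-server";
    src = ./.;
    projectFile = "src/CalendarMcp.StdioServer/CalendarMcp.StdioServer.csproj";
    nugetDeps = ./deps-server.json;  # server-only lockfile
  };
  cli = pkgs.buildDotnetModule {
    pname = "calendar-mcp-cli";
    src = ./.;
    projectFile = "src/CalendarMcp.Cli/CalendarMcp.Cli.csproj";
    nugetDeps = ./deps-cli.json;     # CLI-only lockfile
  };
};
```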

Test plan

  • nix build .#default builds the MCP server
  • nix build .#cli builds the CLI
  • Server responds correctly to MCP initialize message
  • CLI shows help and subcommands
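
The "responds to MCP initialize" check can be scripted. A minimal sketch that just constructs the JSON-RPC request (the protocol version string and the result-symlink path are assumptions, not taken from this PR):

```python
import json

def make_initialize_request(request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 MCP `initialize` request as one line,
    suitable for piping to a stdio MCP server."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",  # assumed version string
            "capabilities": {},
            "clientInfo": {"name": "smoke-test", "version": "0.0.1"},
        },
    }
    return json.dumps(msg)

if __name__ == "__main__":
    # e.g.: python make_init.py | ./result/bin/CalendarMcp.StdioServer
    print(make_initialize_request())
```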

🤖 Generated with Claude Code

- Separate packages for server and CLI to avoid assembly version conflicts
- NixOS module for running as a systemd service
- Helper script for updating NuGet dependency lockfiles
- Documentation in docs/nix-packaging.md

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Copy link

Copilot AI left a comment


Pull request overview

This PR adds Nix flake support for reproducible packaging of the Calendar MCP project. It enables building the server and CLI as Nix packages, includes a NixOS module for systemd service deployment, and provides tooling for dependency management.

Changes:

  • Adds Nix flake configuration with separate packages for server and CLI to avoid assembly version conflicts
  • Includes NixOS module for running the server as a systemd service
  • Provides helper script and comprehensive documentation for Nix-based workflows

Reviewed changes

Copilot reviewed 5 out of 7 changed files in this pull request and generated 6 comments.

| File | Description |
| --- | --- |
| flake.nix | Nix flake defining server and CLI packages, dev shell, and NixOS module |
| flake.lock | Nix flake lockfile with pinned dependency versions |
| update-nix-deps.sh | Helper script to regenerate NuGet dependency lockfiles |
| deps-server.json | NuGet dependency lockfile for the MCP server package |
| deps-cli.json | NuGet dependency lockfile for the CLI package |
| docs/nix-packaging.md | Comprehensive documentation for Nix packaging workflows |
| .gitignore | Adds Nix build output directories to ignore list |

Comment on lines +14 to +15
```nix
dotnetSdk = pkgs.dotnet-sdk_9;
dotnetRuntime = pkgs.dotnet-runtime_9;
```

Copilot AI Feb 11, 2026


The flake uses dotnet-sdk_9 and dotnet-runtime_9, which matches both projects targeting net9.0. However, the PR description mentions that the server and CLI depend on different versions of Microsoft.Extensions.Logging (v9 vs v10).

Looking at the project files:

  • CalendarMcp.StdioServer.csproj uses net9.0 but doesn't explicitly reference Microsoft.Extensions.Logging
  • CalendarMcp.Cli.csproj uses net9.0 and references Microsoft.Extensions.Logging Version="10.0.0"

The mismatch is that you're using .NET SDK/runtime version 9, but the CLI references Microsoft.Extensions.Logging 10.0.0, which is part of .NET 10. This could cause compatibility issues. Consider either:

  1. Using dotnet-sdk_10 and dotnet-runtime_10 if the projects will be upgraded to net10.0
  2. Ensuring the CLI uses Microsoft.Extensions.Logging 9.0.0 to match the .NET 9 runtime

Note that the project appears to be using .NET 9 based on the TargetFramework, but referencing .NET 10 packages in the CLI.

Suggested change:

```diff
-dotnetSdk = pkgs.dotnet-sdk_9;
-dotnetRuntime = pkgs.dotnet-runtime_9;
+dotnetSdk = pkgs.dotnet-sdk_10;
+dotnetRuntime = pkgs.dotnet-runtime_10;
```

Comment on lines +81 to +82
```nix
echo "To regenerate deps.json:"
echo "  cd src && dotnet restore && nuget-to-json . > ../deps.json"
```

Copilot AI Feb 11, 2026


The shellHook instructions reference a non-existent deps.json file. The actual files are deps-server.json and deps-cli.json (as used in the update-nix-deps.sh script). This instruction would not work correctly.

The instruction should be updated to match the actual workflow, which is to use the update-nix-deps.sh script or to run the fetch-deps commands for each package separately.

Suggested change:

```diff
-echo "To regenerate deps.json:"
-echo "  cd src && dotnet restore && nuget-to-json . > ../deps.json"
+echo "To regenerate Nix NuGet deps (deps-server.json and deps-cli.json):"
+echo "  ./update-nix-deps.sh"
```

Comment on lines +74 to +82
```nix
  pkgs.nuget-to-json
];

shellHook = ''
  echo "Calendar MCP development shell"
  echo "  dotnet version: $(dotnet --version)"
  echo ""
  echo "To regenerate deps.json:"
  echo "  cd src && dotnet restore && nuget-to-json . > ../deps.json"
```

Copilot AI Feb 11, 2026


The devShell includes nuget-to-json in buildInputs, but this tool has been removed from nixpkgs (as mentioned in the documentation's "Common Issues" section). This will cause the development shell to fail to build.

The fetch-deps approach is the correct method now. Consider removing nuget-to-json from buildInputs or adding a comment explaining it's only available in older nixpkgs versions if needed for reference.

Suggested change:

```diff
-  pkgs.nuget-to-json
 ];

 shellHook = ''
   echo "Calendar MCP development shell"
   echo "  dotnet version: $(dotnet --version)"
   echo ""
-  echo "To regenerate deps.json:"
-  echo "  cd src && dotnet restore && nuget-to-json . > ../deps.json"
+  echo "NuGet dependency lock files (deps-server.json, deps-cli.json) are managed using the fetch-deps approach."
+  echo "See the project documentation for instructions on regenerating these files if needed."
```

Comment on lines +9 to +19
```shell
echo "Updating server deps..."
server_script=$(nix build .#default.fetch-deps --no-link --print-out-paths)
"$server_script" deps-server.json

echo "Updating CLI deps..."
cli_script=$(nix build .#cli.fetch-deps --no-link --print-out-paths)
"$cli_script" deps-cli.json

echo ""
echo "Done. Remember to stage the updated files:"
echo "  git add deps-server.json deps-cli.json"
```

Copilot AI Feb 11, 2026


The script builds and runs fetch-deps scripts but doesn't add error handling or verification. Consider adding checks to ensure:

  1. The build commands succeed before attempting to run the scripts
  2. The resulting JSON files are valid
  3. The files are tracked by git (as mentioned in the output message)

For example, you could check if the JSON files were created successfully and contain valid JSON before the final message.

Suggested change:

```diff
+check_json_file() {
+  local file="$1"
+  if [ ! -f "$file" ]; then
+    echo "Error: expected JSON file '$file' was not created." >&2
+    exit 1
+  fi
+  if command -v jq >/dev/null 2>&1; then
+    if ! jq empty "$file" >/dev/null 2>&1; then
+      echo "Error: '$file' does not contain valid JSON." >&2
+      exit 1
+    fi
+  else
+    echo "Warning: 'jq' not found; skipping JSON validation for '$file'." >&2
+  fi
+}
+
 echo "Updating server deps..."
 server_script=$(nix build .#default.fetch-deps --no-link --print-out-paths)
 "$server_script" deps-server.json
+check_json_file "deps-server.json"

 echo "Updating CLI deps..."
 cli_script=$(nix build .#cli.fetch-deps --no-link --print-out-paths)
 "$cli_script" deps-cli.json
+check_json_file "deps-cli.json"

-echo ""
-echo "Done. Remember to stage the updated files:"
-echo "  git add deps-server.json deps-cli.json"
+if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
+  for f in deps-server.json deps-cli.json; do
+    if ! git ls-files --error-unmatch "$f" >/dev/null 2>&1; then
+      echo "Note: '$f' is currently untracked by git and will need to be added." >&2
+    fi
+  done
+  echo ""
+  echo "Done. Remember to stage the updated files:"
+  echo "  git add deps-server.json deps-cli.json"
+else
+  echo ""
+  echo "Done. Generated deps-server.json and deps-cli.json (not in a git work tree)."
+fi
```

Comment on lines +14 to +15
```nix
dotnetSdk = pkgs.dotnet-sdk_9;
dotnetRuntime = pkgs.dotnet-runtime_9;
```

Copilot AI Feb 11, 2026


There is an inconsistency between the documented .NET version and the actual project configuration:

  • The custom-instructions/repo/.github/copilot-instructions.md states the project uses ".NET 10" (lines 3 and 38)
  • Both CalendarMcp.StdioServer.csproj and CalendarMcp.Cli.csproj specify TargetFramework as "net9.0"
  • The flake.nix uses dotnet-sdk_9 and dotnet-runtime_9

This PR should either:

  1. Update the documentation to reflect that the project uses .NET 9, or
  2. Update the project files and Nix configuration to use .NET 10 if that's the intention

Note: The CLI project references Microsoft.Extensions.Logging 10.0.0, which suggests there may be an incomplete migration to .NET 10 in progress.

Comment on lines +129 to +154
```nix
systemd.services.calendar-mcp = {
  description = "Calendar MCP Server";
  wantedBy = [ "multi-user.target" ];
  after = [ "network.target" ];

  serviceConfig = {
    Type = "simple";
    User = cfg.user;
    Group = cfg.group;
    ExecStart = "${cfg.package}/bin/CalendarMcp.StdioServer";
    Restart = "on-failure";
    RestartSec = 5;

    # Hardening
    NoNewPrivileges = true;
    ProtectSystem = "strict";
    ProtectHome = true;
    PrivateTmp = true;
    ReadWritePaths = [ cfg.dataDir ];
  };

  environment = {
    HOME = cfg.dataDir;
    XDG_DATA_HOME = "${cfg.dataDir}/.local/share";
  };
};
```

Copilot AI Feb 11, 2026


The MCP server is designed to be invoked on-demand by MCP clients (like Claude Desktop or VS Code) via stdio transport, not run as a persistent service. According to the README, clients launch the server process when needed and communicate via stdin/stdout.

The NixOS module creates a systemd service that runs continuously, which differs from the typical MCP server deployment pattern. Running the server as a persistent service may not work as expected since:

  1. The server expects to communicate via stdin/stdout with an MCP client
  2. A systemd service would have no stdin/stdout connected by default
  3. MCP clients typically manage the server lifecycle themselves

Consider either:

  1. Documenting that this service is for specialized deployment scenarios (not typical MCP usage)
  2. Removing the NixOS module if it doesn't align with the intended deployment model
  3. Adding socket activation or another IPC mechanism if persistent service deployment is desired
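
The socket-activation option could follow the classic inetd pattern, where systemd accepts each connection and attaches the socket to the server's stdin/stdout (a hypothetical sketch, not part of this PR; unit names, port, and binary path are illustrative):

```ini
# calendar-mcp.socket -- accept local connections only
[Socket]
ListenStream=127.0.0.1:8765
Accept=yes

[Install]
WantedBy=sockets.target

# calendar-mcp@.service -- one server instance per connection,
# with the accepted socket wired to stdin/stdout
[Service]
ExecStart=/path/to/CalendarMcp.StdioServer
StandardInput=socket
StandardOutput=socket
```

With `Accept=yes`, systemd spawns a templated service instance per connection, so the stdio protocol is preserved while still allowing local apps to reach the daemon over localhost.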

@rockfordlhotka (Member) Feb 11, 2026


@Veraticus It seems to me that perhaps we need to add an HTTP head to the service so it can be accessed by apps running on a machine, connecting to localhost to interact with the daemon?

I did build this with the intent of someday being able to add an HTTP head - that's why the implementation of the tools is all in the '.Core' project.

Or did you have something else in mind?

Member


FWIW, the reason I didn't add an HTTP head to start with is that on a multi-user device, a daemon running at the system level would be a potential data leak: without careful design, it would use a single set of credentials rather than per-user credentials.

Member


@Veraticus I did decide to add an HTTP head (and admin tools, etc.) because I think I want to run this in my k8s cluster so nanobot can interact with it.

You might want to revisit your PR to package the HttpServer as the daemon so it can be accessed via localhost on your device.
