Merged
4 changes: 4 additions & 0 deletions CLAUDE.md
@@ -41,6 +41,10 @@ ws graph <repo...> # Show deps/dependents
ws foreach <cmd> # Run command in every repo
ws foreach 'git add . && git commit -m "msg" || true'

# Diagnostics
ws nix-diagnose <repo> # Detect circular deps and version conflicts in flake.lock
ws check-qt <repo> # Detect Qt version conflicts in nix closure or module cache

# Other
ws update [repo...] # Update flake inputs
ws update --deep <repo...> # Update in workspace + all sub-repo flake.locks
25 changes: 25 additions & 0 deletions README.md
@@ -90,6 +90,8 @@ Because the workspace flake declares `logos-liblogos.inputs.logos-cpp-sdk.follow
| `ws worktree list` | List all worktrees |
| `ws worktree remove <name>` | Remove a worktree |
| `ws sync-graph` | Regenerate `nix/dep-graph.nix` from repo flake.nix files |
| `ws nix-diagnose <repo>` | Detect circular deps and version conflicts in flake.lock |
| `ws check-qt <repo>` | Detect Qt version conflicts in nix closure or module cache |

### Global Options

@@ -678,6 +680,29 @@ ws build logos-basecamp --local logos-cpp-sdk

If a build fails unexpectedly with overrides, check that the dirty repo is in a buildable state — `--auto-local` uses whatever is on disk, including broken or half-finished work.

## Diagnosing dependency issues

When a repo fails to build or evaluate, `ws nix-diagnose` inspects its `flake.lock` for two common problems:

1. **Circular dependencies** — repo A depends on B which depends on A (directly or transitively). Nix evaluation will hang or fail with infinite recursion. Common after refactors that move code between repos (e.g. logos-liblogos ↔ logos-capability-module).

2. **Duplicate dependencies at different versions** — the same repo (e.g. `logos-nix`) appears multiple times in `flake.lock` at different git revisions. This happens when a repo's `flake.lock` is stale and its transitive deps have moved ahead. Causes subtle build failures, hash mismatches, or multiple Qt versions in the closure.

```bash
ws nix-diagnose <repo>

# Examples
ws nix-diagnose logos-basecamp # check the app shell
ws nix-diagnose logos-chat-ui # check a module with many deps
ws nix-diagnose logos-liblogos # check core runtime
```

Output shows:
- Cycles as `A -> B -> C -> A` chains
- Version conflicts grouped by repo, with the input path from root to each conflicting copy (e.g. `root -> logos-liblogos -> logos-capability-module -> logos-nix`)
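
A hypothetical run against a repo with both problems might look like this (illustrative only; the section headers follow the command's log lines, but the repos and revs here are invented):

```
Circular Dependencies
  Cycle: logos-liblogos -> logos-capability-module -> logos-liblogos

Dependency Version Conflicts
  Found 1 conflict(s):

  logos-co/logos-nix:
    rev aaaa111:
      root -> logos-liblogos -> logos-nix
    rev bbbb222:
      root -> logos-capability-module -> logos-nix
```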

**Fixing conflicts**: run `ws update --deep <repo>` to update the repo's `flake.lock` and all its sub-repo locks, then rebuild. For cycles, the dependency must be broken at the source (usually by removing a `follows` or restructuring inputs).
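
To confirm the duplicate is actually gone after updating, one quick check is to count the distinct locked revisions in the lock file. A minimal sketch, using grep instead of jq and a simplified stand-in lock file (real flake.lock nodes carry more fields):

```shell
# Write a simplified stand-in for a flake.lock with a duplicated dep
cat > /tmp/sample-flake.lock <<'EOF'
{
  "nodes": {
    "logos-nix":   { "locked": { "rev": "aaaa111122223333" } },
    "logos-nix_2": { "locked": { "rev": "bbbb444455556666" } }
  }
}
EOF

# Count distinct locked revs; two revisions of the same dependency means
# the version conflict is still present after the update
grep -o '"rev": *"[a-f0-9]*"' /tmp/sample-flake.lock | sort -u | wc -l
```

Here the count is 2 because the stand-in still holds two revisions of logos-nix; after a successful `ws update --deep` the real lock should collapse to one rev per dependency.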

## CI

The workspace has GitHub Actions CI (`.github/workflows/ci.yml`) that runs on pull requests to `master`:
267 changes: 267 additions & 0 deletions scripts/ws
@@ -37,6 +37,9 @@ log() { $QUIET || echo -e "$@"; }
# has_flake: "yes" or "no"

REPOS=(
# Shared Nix infrastructure
Copilot AI (Mar 20, 2026)

logos-nix is added to the REPOS registry, but it isn’t present in .gitmodules yet. With this change, ws init will try to git submodule add it in user workspaces, which creates local .gitmodules/index changes. Consider adding the submodule entry to .gitmodules in this PR (or documenting why it should remain auto-added).

Suggested change:
-# Shared Nix infrastructure
+# Shared Nix infrastructure
+# NOTE: logos-nix is intentionally auto-added as a submodule by `ws init`
+# and is not predeclared in .gitmodules; this keeps it workspace-local.
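
For reference, the predeclared entry the comment asks for would look roughly like this in .gitmodules (the repos/ checkout path is an assumption; the URL is taken from the REPOS entry below):

```
[submodule "repos/logos-nix"]
	path = repos/logos-nix
	url = https://github.com/logos-co/logos-nix.git
```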
"logos-nix|logos-nix|https://github.com/logos-co/logos-nix.git|yes"

# Foundation
"logos-cpp-sdk|logos-cpp-sdk|https://github.com/logos-co/logos-cpp-sdk.git|yes"
"logos-module|logos-module|https://github.com/logos-co/logos-module.git|yes"
@@ -1994,6 +1997,268 @@ HEADER
fi
}

cmd_nix_diagnose() {
wants_help "$@" && show_help "Usage: ws nix-diagnose <repo>

Diagnose dependency issues in a repo's flake.

Checks:
1. Circular dependencies in the workspace dependency graph
2. Duplicate dependencies — same repo appearing at different versions
in the flake.lock (e.g. two copies of logos-nix at different revs)

Examples:
ws nix-diagnose logos-basecamp
ws nix-diagnose logos-liblogos"

local repo=""
while [[ $# -gt 0 ]]; do
case "$1" in
-*) shift ;;
*) repo="$1"; shift ;;
esac
done

if [[ -z "$repo" ]]; then
echo -e "${RED}Error:${NC} repo name required" >&2
exit 1
fi

local entry
entry=$(find_repo_entry "$repo") || { echo -e "${RED}Unknown repo:${NC} $repo" >&2; exit 1; }
local input_name dir_name
input_name=$(get_field "$entry" 1)
Comment on lines +2029 to +2030
Copilot AI (Mar 20, 2026)

input_name is computed but never used in this command. Remove it (and the assignment) or use it in output so the function doesn’t carry dead variables.

Suggested change:
-local input_name dir_name
-input_name=$(get_field "$entry" 1)
+local dir_name
dir_name=$(get_field "$entry" 2)

# ── 1. Circular dependency detection (from flake.lock) ─────────────────────
log "${BOLD}Circular Dependencies${NC}"

local lock_file="$REPOS_DIR/$dir_name/flake.lock"
if [[ ! -f "$lock_file" ]]; then
echo -e " ${YELLOW}No flake.lock found, skipping${NC}"
elif ! command -v jq &>/dev/null; then
echo -e " ${RED}jq is required but not found${NC}" >&2
else
# Build a repo-level dependency graph from the flake.lock.
# Each lock node maps to a repo identity (original.repo or the node key
# with _N suffixes stripped). Edges are the direct (non-follows) inputs.
# Output: "source_repo target_repo" lines (unique).
local repo_edges
repo_edges=$(jq -r '
.nodes as $nodes |
[ $nodes | to_entries[] |
select(.value.inputs != null) |
.key as $parent |
( $nodes[$parent].original.repo // ($parent | sub("_[0-9]+$"; "")) ) as $src |
.value.inputs | to_entries[] |
select(.value | type == "string") |
.value as $child |
Comment on lines +2054 to +2055
Copilot AI (Mar 20, 2026)

The jq extraction treats dependency edges as only those inputs whose value is a string, but in this repo’s flake.lock many inputs are arrays (e.g. ["nixpkgs"] or longer paths). This will drop most edges and can incorrectly report “No circular dependencies found”. Update the jq to also handle array inputs (e.g. map arrays to their first element / target node key) so the dependency graph is complete.

Suggested change:
-select(.value | type == "string") |
-.value as $child |
+.value as $input |
+(if ($input | type) == "string" then
+   $input
+ elif ($input | type) == "array" and ($input | length) > 0 then
+   $input[0]
+ else
+   empty
+ end) as $child |
( $nodes[$child].original.repo // ($child | sub("_[0-9]+$"; "")) ) as $dst |
select($src != $dst) |
$src + " " + $dst
] | unique[]
' "$lock_file" 2>/dev/null || true)

if [[ -z "$repo_edges" ]]; then
echo -e " ${GREEN}No circular dependencies found${NC}"
else
Comment on lines +2047 to +2064
Copilot AI (Mar 20, 2026)

In the cycle section, jq errors are suppressed (2>/dev/null || true), which can turn parse failures into an empty edge list and a misleading “No circular dependencies found”. Consider checking jq’s exit status and reporting a clear parse/error message instead of silently continuing.
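
The pattern the comment suggests can be sketched as follows; `parse_lock` is a hypothetical stand-in for the jq invocation above:

```shell
# Hypothetical stand-in for: jq -r '...' "$lock_file"
parse_lock() { cat "$1"; }

# Fragile: an error collapses into an empty string, later read as "no edges"
edges=$(parse_lock /nonexistent-dir/missing.lock 2>/dev/null || true)

# Stricter: branch on the exit status so a parse failure is surfaced
if ! edges=$(parse_lock /nonexistent-dir/missing.lock 2>/dev/null); then
  echo "error: could not parse flake.lock" >&2
fi
```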
# Build adjacency list as "repo|dep1,dep2,..." lines
local graph_data
graph_data=$(echo "$repo_edges" | awk '{
deps[$1] = deps[$1] ? deps[$1] "," $2 : $2
} END {
for (n in deps) print n "|" deps[n]
}')

# DFS cycle detection using temp files (bash 3.2 compatible)
local _dfs_state_file _dfs_path_file
_dfs_state_file=$(mktemp)
_dfs_path_file=$(mktemp)
local found_cycle=false

_dfs_cycle() {
local node="$1"
echo "$node visiting" >> "$_dfs_state_file"
echo "$node" >> "$_dfs_path_file"

local deps_str
deps_str=$(echo "$graph_data" | grep "^${node}|" | head -1 | cut -d'|' -f2 || true)
local dep
for dep in ${deps_str//,/ }; do
[[ -z "$dep" ]] && continue
local state
state=$(grep "^${dep} " "$_dfs_state_file" 2>/dev/null | tail -1 | cut -d' ' -f2 || true)
if [[ "$state" == "visiting" ]]; then
Comment on lines +2084 to +2091
Copilot AI (Mar 20, 2026)

These grep patterns interpolate repo/node names directly into a regex (e.g. ^${node}|). Repo IDs / node keys can contain characters like . that are meaningful in regex, which can cause incorrect matches. Prefer fixed-string matching (grep -F) or escape the variables before using them in regex patterns.
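
The false-match risk is easy to reproduce; in this sketch the node name is hypothetical, chosen to contain a regex-significant dot:

```shell
# Two state entries whose names differ only at a regex metacharacter
printf '%s\n' 'nixpkgsXlib visiting' 'nixpkgs.lib done' > /tmp/ws_demo_states

# Unescaped interpolation: "." matches any character, so both lines match
grep -c '^nixpkgs.lib ' /tmp/ws_demo_states    # prints 2

# Escaping the dot restricts the match to the intended node
grep -c '^nixpkgs\.lib ' /tmp/ws_demo_states   # prints 1
```

`grep -F` avoids the escaping entirely, but treats `^` as a literal character, so a fixed-string prefix match needs pre-escaping or awk instead.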
found_cycle=true
local cycle="" in_cycle=false
while IFS= read -r p; do
[[ "$p" == "$dep" ]] && in_cycle=true
if $in_cycle; then
[[ -n "$cycle" ]] && cycle+=" -> "
cycle+="$p"
fi
done < "$_dfs_path_file"
echo -e " ${RED}Cycle:${NC} $cycle -> $dep"
elif [[ -z "$state" ]]; then
_dfs_cycle "$dep"
fi
done

# Pop path (GNU sed uses -i, BSD sed requires -i '')
if sed -i '$ d' "$_dfs_path_file" 2>/dev/null; then :; else sed -i '' '$ d' "$_dfs_path_file"; fi
echo "$node done" >> "$_dfs_state_file"
}

# Start DFS from the repo's own identity
local start_repo
start_repo=$(jq -r '
.nodes.root.inputs | to_entries[] |
select(.value | type == "string") | .key
' "$lock_file" 2>/dev/null | head -1 || true)
Comment on lines +2113 to +2117
Copilot AI (Mar 20, 2026)

start_repo is computed but never used. This looks like leftover scaffolding; consider removing it to avoid confusion and keep the cycle detection logic focused.

Suggested change:
-local start_repo
-start_repo=$(jq -r '
-  .nodes.root.inputs | to_entries[] |
-  select(.value | type == "string") | .key
-' "$lock_file" 2>/dev/null | head -1 || true)
# Visit all repos reachable from root inputs
local root_inputs
root_inputs=$(jq -r '
.nodes.root.inputs | to_entries[] |
select(.value | type == "string") | .value
Comment on lines +2115 to +2122
Copilot AI (Mar 20, 2026)

Root inputs are filtered to type == "string", but root inputs can also be arrays in flake.lock (path-form). That can make root_inputs incomplete/empty and skip traversal entirely. Consider normalizing input refs so both string and array forms are traversed.

Suggested change:
-  .nodes.root.inputs | to_entries[] |
-  select(.value | type == "string") | .key
-' "$lock_file" 2>/dev/null | head -1 || true)
-# Visit all repos reachable from root inputs
-local root_inputs
-root_inputs=$(jq -r '
-  .nodes.root.inputs | to_entries[] |
-  select(.value | type == "string") | .value
+  .nodes.root.inputs
+  | to_entries[]
+  | .value
+  | if type == "string" then .
+    elif type == "array" and length > 0 then .[0]
+    else empty
+    end
+' "$lock_file" 2>/dev/null | head -1 || true)
+# Visit all repos reachable from root inputs
+local root_inputs
+root_inputs=$(jq -r '
+  .nodes.root.inputs
+  | to_entries[]
+  | .value
+  | if type == "string" then .
+    elif type == "array" and length > 0 then .[0]
+    else empty
+    end
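
For context on the two shapes being discussed: in flake.lock, an input value is either a string naming another lock node, or an array of input names giving a follows path resolved from root. A minimal illustration (node and input names invented):

```json
{
  "inputs": {
    "nixpkgs": "nixpkgs_2",
    "flake-utils": [ "logos-cpp-sdk", "flake-utils" ]
  }
}
```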
' "$lock_file" 2>/dev/null || true)
local repo_name
for repo_name in $root_inputs; do
local identity
identity=$(jq -r --arg n "$repo_name" '
.nodes[$n].original.repo // ($n | sub("_[0-9]+$"; ""))
' "$lock_file" 2>/dev/null || true)
[[ -z "$identity" ]] && continue
local state
state=$(grep "^${identity} " "$_dfs_state_file" 2>/dev/null | tail -1 | cut -d' ' -f2 || true)
[[ -z "$state" ]] && _dfs_cycle "$identity"
done

rm -f "$_dfs_state_file" "$_dfs_path_file"

if ! $found_cycle; then
echo -e " ${GREEN}No circular dependencies found${NC}"
fi
fi
fi

# ── 2. Dependency version conflicts ─────────────────────────────────────────
log ""
log "${BOLD}Dependency Version Conflicts${NC}"

local lock_file="$REPOS_DIR/$dir_name/flake.lock"
if [[ ! -f "$lock_file" ]]; then
echo -e " ${YELLOW}No flake.lock found${NC}"
return
fi

if ! command -v jq &>/dev/null; then
echo -e " ${RED}jq is required but not found${NC}" >&2
return 1
fi

# Find groups of nodes that map to the same repo but different revs
local conflicts_json
conflicts_json=$(jq '
.nodes as $nodes |
[
$nodes | to_entries[] |
select(.key != "root") |
select(.value.locked != null) |
select(.value.original.type == "github") |
{
key: .key,
repo_id: (.value.original.owner + "/" + .value.original.repo),
rev: .value.locked.rev
}
] | group_by(.repo_id) |
map(select([.[].rev] | unique | length > 1))
' "$lock_file" 2>/dev/null)

local conflict_count
conflict_count=$(echo "$conflicts_json" | jq 'length')

if [[ "$conflict_count" == "0" ]]; then
echo -e " ${GREEN}No version conflicts found${NC}"
return
fi

# Build adjacency list (one jq call): "parent child input_name" per line
local adj_lines
adj_lines=$(jq -r '
.nodes | to_entries[] |
.key as $parent |
select(.value.inputs != null) |
.value.inputs | to_entries[] |
select(.value | type == "string") |
$parent + " " + .value + " " + .key
Comment on lines +2192 to +2193
Copilot AI (Mar 20, 2026)

The adjacency list for _find_path only includes inputs whose value is a string. In flake.lock, many inputs are arrays, so paths to conflicting nodes will often show as “(unreachable)” even when reachable. Normalize .value so both string and array forms contribute edges.

Suggested change:
-select(.value | type == "string") |
-$parent + " " + .value + " " + .key
+.key as $input_name |
+.value as $v |
+(if ($v | type) == "string" then [$v]
+ elif ($v | type) == "array" then $v
+ else [] end)[] as $child |
+$parent + " " + $child + " " + $input_name
' "$lock_file" 2>/dev/null)

# BFS to find shortest path from root to a target node (bash 3.2 compatible)
_find_path() {
local target="$1"
[[ "$target" == "root" ]] && { echo "root"; return; }

local _bfs_queue_file _bfs_seen_file
_bfs_queue_file=$(mktemp)
_bfs_seen_file=$(mktemp)
echo "root|root" > "$_bfs_queue_file"

local result="(unreachable)"
while [[ -s "$_bfs_queue_file" ]]; do
local item
item=$(head -1 "$_bfs_queue_file")
if sed -i '1d' "$_bfs_queue_file" 2>/dev/null; then :; else sed -i '' '1d' "$_bfs_queue_file"; fi

local node="${item%%|*}"
local path="${item#*|}"

grep -qx "$node" "$_bfs_seen_file" 2>/dev/null && continue
echo "$node" >> "$_bfs_seen_file"

if [[ "$node" == "$target" ]]; then
result="$path"
break
fi

while IFS=' ' read -r p c i; do
if [[ "$p" == "$node" ]] && ! grep -qx "$c" "$_bfs_seen_file" 2>/dev/null; then
echo "$c|$path -> $i" >> "$_bfs_queue_file"
fi
done <<< "$adj_lines"
done

rm -f "$_bfs_queue_file" "$_bfs_seen_file"
echo "$result"
}

echo -e " ${YELLOW}Found $conflict_count conflict(s):${NC}"
echo ""

local i=0
while [[ $i -lt $conflict_count ]]; do
local repo_id
repo_id=$(echo "$conflicts_json" | jq -r ".[$i][0].repo_id")
echo -e " ${BOLD}$repo_id${NC}:"

# Group by rev within this conflict, output "short_rev|full_rev|key1,key2,..."
echo "$conflicts_json" | jq -r --argjson idx "$i" '
.[$idx] | group_by(.rev) | .[] |
.[0].rev[0:7] + "|" + .[0].rev + "|" + ([.[].key] | join(","))
' | while IFS='|' read -r short_rev _full_rev node_keys; do
echo -e " rev ${CYAN}$short_rev${NC}:"
IFS=',' read -ra keys <<< "$node_keys"
for key in "${keys[@]}"; do
local path
path=$(_find_path "$key")
echo " $path"
done
done

echo ""
((i++))
done
}

cmd_check_qt() {
wants_help "$@" && show_help "Usage: ws check-qt <repo>

@@ -2210,6 +2475,7 @@ COMMANDS:
worktree list List all worktrees
worktree remove <name> Remove a worktree
sync-graph Regenerate nix/dep-graph.nix from repo flake.nix files
nix-diagnose <repo> Detect circular deps and version conflicts in flake.lock
check-qt <repo> Check for Qt version conflicts in a repo's nix closure

lm <args> Module inspector (auto-builds logos-module)
@@ -2343,6 +2609,7 @@ case "${1:-help}" in
worktree|wt) shift; cmd_worktree "$@" ;;
sync-graph) shift; cmd_sync_graph "$@" ;;
check-qt) shift; cmd_check_qt "$@" ;;
nix-diagnose) shift; cmd_nix_diagnose "$@" ;;
list|ls) shift; cmd_list "$@" ;;
groups) shift; cmd_groups "$@" ;;
help|--help|-h) cmd_help ;;