Commit b1bc1fe

fix: introduced methods for accessing map key nodes for getting line numbers and improved agent testing behaviour
1 parent a30e6a1 commit b1bc1fe

File tree

11 files changed: +1362 -2 lines changed

.gitignore

Lines changed: 3 additions & 0 deletions

@@ -7,3 +7,6 @@ coverage.html
 !.vscode/settings.example.json
 # Added by goreleaser init:
 dist/
+
+# Symlinked from AGENTS.md by mise install
+CLAUDE.md

.mise.toml

Lines changed: 1 addition & 0 deletions

@@ -12,6 +12,7 @@ run = [

 [hooks]
 postinstall = [
+  "ln -sf ./AGENTS.md ./CLAUDE.md",
   "mise run setup-vscode-symlinks",
   "go install go.uber.org/nilaway/cmd/nilaway@8ad05f0",
 ]
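Pieced together from the diff, the resulting `postinstall` hook in `.mise.toml` presumably reads as follows (reconstructed; surrounding formatting is assumed). The new `.gitignore` entry exists because this hook symlinks `CLAUDE.md` from `AGENTS.md` on install:

```toml
[hooks]
postinstall = [
  "ln -sf ./AGENTS.md ./CLAUDE.md",
  "mise run setup-vscode-symlinks",
  "go install go.uber.org/nilaway/cmd/nilaway@8ad05f0",
]
```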

AGENTS.md

Lines changed: 378 additions & 0 deletions

@@ -0,0 +1,378 @@
# Agent Development Guidelines

This document provides guidelines for AI agents working on this codebase.

## Running Tests

This project uses [mise](https://mise.jdx.dev/) for running tests with enhanced output formatting via gotestsum.

### Run All Tests

```bash
mise test
```

This runs all tests in the project with race detection enabled and provides clean, organized test output.

### Run Tests for Specific Packages

The `mise test` command accepts the same arguments as `go test`, allowing you to target specific packages or use any `go test` flags:

```bash
# Run tests for a specific package
mise test ./openapi/core

# Run tests matching a pattern
mise test -run TestGetMapKeyNodeOrRoot ./openapi/core

# Run tests with verbose output
mise test -v ./marshaller

# Run tests for multiple packages
mise test ./openapi/core ./marshaller

# Use any go test flags
mise test -race -count=1 ./...
```

### Common Test Commands

```bash
# Run all tests in current directory
mise test .

# Run specific test function
mise test -run TestSecurityRequirement_GetMapKeyNodeOrRoot_Success ./openapi/core

# Run tests with coverage
mise run test-coverage

# Run tests without cache
mise test -count=1 ./...
```

### Why Use Mise for Testing?

- **Enhanced Output**: Uses gotestsum for better formatted, more readable test results
- **Consistent Environment**: Ensures correct Go version and tool versions
- **Race Detection**: Automatically enables race detection to catch concurrency issues
- **Submodule Awareness**: Checks for and warns about uninitialized test submodules

## Testing
Follow these testing conventions when writing Go tests in this project. Run newly added or modified tests immediately after making changes to confirm they work as expected before continuing with more work.
### Test File Organization

**Keep tests localized to the files they are testing.** Each source file should have a corresponding test file in the same directory.

- `responses.go` → `responses_test.go`
- `paths.go` → `paths_test.go`
- `security.go` → `security_test.go`

This makes it easy to find tests and understand what functionality is being tested.

### Test Simplicity

**Keep tests simple by avoiding branching logic.** Tests should be straightforward and easy to understand.

#### ❌ Bad: Branching in tests

```go
func TestExample(t *testing.T) {
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			var model Model
			if tt.shouldInitialize {
				model = initializeModel()
			} else {
				model = Model{}
			}
			// test logic...
		})
	}
}
```

#### ✅ Good: Separate test functions

```go
func TestExample_Initialized(t *testing.T) {
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			model := initializeModel()
			// test logic...
		})
	}
}

func TestExample_Uninitialized(t *testing.T) {
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			model := Model{}
			// test logic...
		})
	}
}
```

### Parallel Test Execution

**Always use `t.Parallel()` for parallel test execution.** This speeds up test runs and ensures tests are independent.

```go
func TestExample_Success(t *testing.T) {
	t.Parallel() // At the top level

	tests := []struct {
		name string
		// ...
	}{
		// test cases...
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			t.Parallel() // In each subtest
			// test logic...
		})
	}
}
```

### Context Usage

**Use `t.Context()` instead of `context.Background()`.** This provides better test lifecycle management and cancellation.

#### ❌ Bad

```go
ctx := context.Background()
```

#### ✅ Good

```go
ctx := t.Context()
```

### Table-Driven Tests

Use table-driven tests where possible and when they make sense (don't over-complicate the main test implementation).

```go
func TestFeature_Success(t *testing.T) {
	t.Parallel()

	tests := []struct {
		name     string
		input    InputType
		expected ExpectedType
	}{
		{
			name:     "descriptive test case name",
			input:    // test input,
			expected: // expected output,
		},
		// more test cases...
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			t.Parallel()
			ctx := t.Context()

			actual := FunctionUnderTest(ctx, tt.input)
			assert.Equal(t, tt.expected, actual, "should return expected value")
		})
	}
}
```

### Test Function Naming

Use `_Success` and `_Error` (or `_ReturnsRoot`, `_ReturnsDefault`, etc.) suffixes to denote different test scenarios.

#### Examples

- `TestGetMapKeyNodeOrRoot_Success` - Tests happy path scenarios
- `TestGetMapKeyNodeOrRoot_ReturnsRoot` - Tests when root is returned
- `TestParseConfig_Success` - Tests successful parsing
- `TestParseConfig_Error` - Tests parsing failures
### Assertions

Use the testify assert/require libraries for cleaner assertions.

```go
import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)
```

#### Usage Guidelines

- Use `assert.Equal()` for value comparisons with descriptive messages
- Use `assert.Nil()` and `assert.NotNil()` for pointer checks
- Use `require.*` when the test should stop on failure (e.g., setup operations)
- **Always include descriptive error messages**

```go
// Good: Clear assertions with messages
require.NoError(t, err, "unmarshal should succeed")
require.NotNil(t, result, "result should not be nil")
assert.Equal(t, expected, actual, "should return correct value")
```
### Exact Object Assertions

**Assert against exact objects rather than using complex setup functions.** This makes tests clearer and easier to debug.

#### ❌ Bad: Complex setup with branching

```go
tests := []struct {
	name  string
	setup func() *Model
}{
	{
		name: "test case",
		setup: func() *Model {
			if someCondition {
				return &Model{Field: "value1"}
			}
			return &Model{Field: "value2"}
		},
	},
}
```

#### ✅ Good: Direct object creation

```go
tests := []struct {
	name     string
	yaml     string
	key      string
	expected string
}{
	{
		name:     "returns key when exists",
		yaml:     `key: value`,
		key:      "key",
		expected: "key",
	},
}
```
### Leverage Existing Project Packages

**Use existing project packages for test setup instead of reinventing the wheel.** The project provides utilities for common testing needs.

#### Marshaller Package

Use `marshaller.UnmarshalCore()` to create properly initialized core models:

```go
func TestCoreModel_Success(t *testing.T) {
	t.Parallel()

	tests := []struct {
		name string
		yaml string
		key  string
	}{
		{
			name: "test case",
			yaml: `
key1: value1
key2: value2
`,
			key: "key1",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			t.Parallel()
			ctx := t.Context()

			var model CoreModel
			_, err := marshaller.UnmarshalCore(ctx, "", parseYAML(t, tt.yaml), &model)
			require.NoError(t, err, "unmarshal should succeed")

			// Test logic using the model
			result := model.SomeMethod(tt.key)
			assert.Equal(t, tt.key, result.Value, "should return correct value")
		})
	}
}

// Helper function for parsing YAML
func parseYAML(t *testing.T, yml string) *yaml.Node {
	t.Helper()
	var node yaml.Node
	err := yaml.Unmarshal([]byte(yml), &node)
	require.NoError(t, err)
	return &node
}
```
#### YML Package

Use the `yml` package for creating and manipulating YAML nodes:

```go
import "github.com/speakeasy-api/openapi/yml"

// Create scalar nodes
stringNode := yml.CreateStringNode("value")
intNode := yml.CreateIntNode(42)
boolNode := yml.CreateBoolNode(true)

// Create map nodes
ctx := t.Context()
mapNode := yml.CreateMapNode(ctx, []*yaml.Node{
	yml.CreateStringNode("key1"),
	yml.CreateStringNode("value1"),
})

// Get map elements
keyNode, valueNode, found := yml.GetMapElementNodes(ctx, mapNode, "key1")
```
#### General Principles

- **Don't recreate existing functionality** - Check if the project already has utilities for what you need
- **Use project-specific helpers** - Packages like `marshaller`, `yml`, `sequencedmap`, etc. provide tested utilities
- **Follow existing patterns** - Look at how other tests in the project construct test data
- **Reuse helper functions** - If a test file has a `parseYAML` helper, use it rather than duplicating

#### Examples of Project Packages to Leverage

- `marshaller` - For unmarshalling and working with models
- `yml` - For creating and manipulating YAML nodes
- `sequencedmap` - For creating ordered maps
- `extensions` - For working with OpenAPI extensions
- `validation` - For validation utilities

### Test Coverage

Test cases should cover:

- **Happy path scenarios** - Various valid inputs
- **Edge cases** - Empty inputs, boundary values
- **Error conditions** - Nil inputs, invalid parameters
- **Integration scenarios** - Where applicable
### Why These Conventions Matter

1. **Consistency**: All tests follow the same pattern, making them easier to read and maintain
2. **Clarity**: Clear naming and simple logic make it obvious what each test covers
3. **Maintainability**: Table tests make it easy to add new test cases
4. **Performance**: Parallel execution speeds up test runs
5. **Debugging**: testify assertions and clear structure provide helpful failure messages
6. **Reliability**: Using `t.Context()` ensures proper test lifecycle management
