
test gemini code assist with random changes #4


Open · wants to merge 1 commit into base: main from test-gemini-code-assist

Conversation

zakisk
Owner

@zakisk zakisk commented Aug 7, 2025

πŸ“ Description of the Change

πŸ”— Linked GitHub Issue

Fixes #

πŸ‘¨πŸ»β€ Linked Jira

πŸš€ Type of Change

  • πŸ› Bug fix (fix:)
  • ✨ New feature (feat:)
  • πŸ’₯ Breaking change (feat!:, fix!:)
  • πŸ“š Documentation update (docs:)
  • βš™οΈ Chore (chore:)
  • πŸ’… Refactor (refactor:)
  • πŸ”§ Enhancement (enhance:)
  • πŸ“¦ Dependency update (deps:)

πŸ§ͺ Testing Strategy

  • Unit tests
  • Integration tests
  • End-to-end tests
  • Manual testing
  • Not Applicable

βœ… Submitter Checklist

  • πŸ“ My commit messages are clear, informative, and follow the project's How to write a git commit message guide. The Gitlint linter ensures in CI it's properly validated
  • ✨ I have ensured my commit message prefix (e.g., fix:, feat:) matches the "Type of Change" I selected above.
  • ♻️ I have run make test and make lint locally to check for and fix any issues. For an efficient workflow, I have considered installing pre-commit and running pre-commit install to automate these checks.
  • πŸ“– I have added or updated documentation for any user-facing changes.
  • πŸ§ͺ I have added sufficient unit tests for my code changes.
  • 🎁 I have added end-to-end tests where feasible. See README for more details.
  • πŸ”Ž I have addressed any CI test flakiness or provided a clear reason to bypass it.
  • If adding a provider feature, I have filled in the following and updated the provider documentation:
    • GitHub App
    • GitHub Webhook
    • Gitea/Forgejo
    • GitLab
    • Bitbucket Cloud
    • Bitbucket Data Center


@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @zakisk, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've implemented a series of enhancements to the GitHub provider to improve its observability, reliability, and debugging capabilities. The primary goal is to provide better insights into GitHub API usage, track performance metrics, and proactively manage rate limits. These changes will help in identifying bottlenecks, understanding system behavior under load, and ensuring a more stable integration with GitHub.

Highlights

  • Enhanced GitHub API Performance Monitoring: I've introduced comprehensive performance monitoring and rate limit tracking for GitHub API interactions. This includes new structs to hold metrics, functions to record API call durations, track success/failure rates, and parse GitHub's rate limit headers. This data is logged to provide better visibility into the provider's health and API usage.
  • New GitHub Provider Health Check: I've added a new HealthCheck method to the GitHub provider, allowing for a quick verification of connectivity and authentication to the GitHub API. This improves the ability to diagnose issues related to the GitHub integration.
  • Improved Logging and Input Validation: I've significantly improved the logging across key GitHub provider functions like SetClient and GetTektonDir. This includes more detailed debug and info logs, input validation, and security audit logs, which will greatly assist in debugging and understanding the flow of operations.
  • Integrated API Call Tracking: API call tracking has been integrated into critical operations within the GitHub provider, such as client setup and fetching repository content. This allows for granular performance data collection on specific API calls, contributing to a more robust monitoring system.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with πŸ‘ and πŸ‘Ž on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution. ↩

@zakisk
Owner Author

zakisk commented Aug 7, 2025

/gemini help


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces performance and API usage metrics for the GitHub provider, which is a valuable addition for observability. My review focuses on ensuring the new metrics implementation is robust and free of potential runtime errors. I've identified a few critical issues, such as potential panics from division-by-zero errors, and some areas for improvement in terms of code structure and maintainability. The provided suggestions aim to address these points to make the new feature more reliable.

Comment on lines 212 to 73
percentage := float64(remaining) / float64(limit) * 100
if percentage < 10 {
	v.Logger.Warnf("GitHub API rate limit critically low: %d/%d (%.1f%%) - resets at %v",
		remaining, limit, percentage, resetTime.Format(time.RFC3339))
} else if percentage < 25 {
	v.Logger.Infof("GitHub API rate limit getting low: %d/%d (%.1f%%) - resets at %v",
		remaining, limit, percentage, resetTime.Format(time.RFC3339))
} else {
	v.Logger.Debugf("GitHub API rate limit status: %d/%d (%.1f%%) - resets at %v",
		remaining, limit, percentage, resetTime.Format(time.RFC3339))
}


Severity: high

There is a potential for a division-by-zero panic here if limit is 0. While the GitHub API is expected to return a non-zero limit, it's safer to add a guard to prevent the application from crashing.

Suggested change

Before:

percentage := float64(remaining) / float64(limit) * 100
if percentage < 10 {
	v.Logger.Warnf("GitHub API rate limit critically low: %d/%d (%.1f%%) - resets at %v",
		remaining, limit, percentage, resetTime.Format(time.RFC3339))
} else if percentage < 25 {
	v.Logger.Infof("GitHub API rate limit getting low: %d/%d (%.1f%%) - resets at %v",
		remaining, limit, percentage, resetTime.Format(time.RFC3339))
} else {
	v.Logger.Debugf("GitHub API rate limit status: %d/%d (%.1f%%) - resets at %v",
		remaining, limit, percentage, resetTime.Format(time.RFC3339))
}

After:

if limit == 0 {
	v.Logger.Warnf("GitHub API rate limit 'limit' is 0, cannot calculate percentage")
	return
}
percentage := float64(remaining) / float64(limit) * 100
if percentage < 10 {
	v.Logger.Warnf("GitHub API rate limit critically low: %d/%d (%.1f%%) - resets at %v",
		remaining, limit, percentage, resetTime.Format(time.RFC3339))
} else if percentage < 25 {
	v.Logger.Infof("GitHub API rate limit getting low: %d/%d (%.1f%%) - resets at %v",
		remaining, limit, percentage, resetTime.Format(time.RFC3339))
} else {
	v.Logger.Debugf("GitHub API rate limit status: %d/%d (%.1f%%) - resets at %v",
		remaining, limit, percentage, resetTime.Format(time.RFC3339))
}

Comment on lines 226 to 265
func (v *Provider) logPerformanceMetrics() {
	if v.performanceMetrics == nil {
		return
	}

	v.performanceMetrics.mutex.RLock()
	defer v.performanceMetrics.mutex.RUnlock()

	successRate := float64(v.performanceMetrics.TotalAPICalls-v.performanceMetrics.FailedAPICalls) /
		float64(v.performanceMetrics.TotalAPICalls) * 100

	v.Logger.Infof("GitHub Provider Performance Summary - Total calls: %d, Failed: %d, Success rate: %.1f%%, Avg response time: %v",
		v.performanceMetrics.TotalAPICalls,
		v.performanceMetrics.FailedAPICalls,
		successRate,
		v.performanceMetrics.AverageResponseTime)

	if v.performanceMetrics.LastRateLimit != nil {
		rl := v.performanceMetrics.LastRateLimit
		v.Logger.Infof("Current rate limit status: %d/%d remaining, resets at %v",
			rl.Remaining, rl.Limit, rl.ResetTime.Format(time.RFC3339))
	}

	// Log per-operation metrics
	v.clientMetrics.Range(func(key, value interface{}) bool {
		if operation, ok := key.(string); ok {
			if metrics, ok := value.(map[string]interface{}); ok {
				count := metrics["count"].(int)
				totalDuration := metrics["totalDuration"].(time.Duration)
				errors := metrics["errors"].(int)
				avgDuration := totalDuration / time.Duration(count)
				errorRate := float64(errors) / float64(count) * 100

				v.Logger.Debugf("Operation '%s': %d calls, avg duration: %v, error rate: %.1f%%",
					operation, count, avgDuration, errorRate)
			}
		}
		return true
	})
}


Severity: high

This function has a potential division-by-zero panic if v.performanceMetrics.TotalAPICalls is 0. It's important to add a check to prevent this.

Additionally, to allow calling this logging function from trackAPICall (which holds a write lock), it's better to separate the locking from the logging logic to avoid deadlocks. I'm suggesting a refactor to an unlocked version that can be safely called from other methods that already have a lock.

func (v *Provider) logPerformanceMetrics() {
	if v.performanceMetrics == nil {
		return
	}

	v.performanceMetrics.mutex.RLock()
	defer v.performanceMetrics.mutex.RUnlock()
	v.logPerformanceMetricsUnlocked()
}

// logPerformanceMetricsUnlocked outputs current performance statistics. It assumes a lock is already held on performanceMetrics.
func (v *Provider) logPerformanceMetricsUnlocked() {
	if v.performanceMetrics.TotalAPICalls == 0 {
		v.Logger.Info("GitHub Provider Performance Summary: No API calls recorded yet.")
		return
	}

	successRate := float64(v.performanceMetrics.TotalAPICalls-v.performanceMetrics.FailedAPICalls) /
		float64(v.performanceMetrics.TotalAPICalls) * 100

	v.Logger.Infof("GitHub Provider Performance Summary - Total calls: %d, Failed: %d, Success rate: %.1f%%, Avg response time: %v",
		v.performanceMetrics.TotalAPICalls,
		v.performanceMetrics.FailedAPICalls,
		successRate,
		v.performanceMetrics.AverageResponseTime)

	if v.performanceMetrics.LastRateLimit != nil {
		rl := v.performanceMetrics.LastRateLimit
		v.Logger.Infof("Current rate limit status: %d/%d remaining, resets at %v",
			rl.Remaining, rl.Limit, rl.ResetTime.Format(time.RFC3339))
	}

	// Log per-operation metrics
	v.clientMetrics.Range(func(key, value interface{}) bool {
		if operation, ok := key.(string); ok {
			if metrics, ok := value.(map[string]interface{}); ok {
				count := metrics["count"].(int)
				if count == 0 {
					return true // continue
				}
				totalDuration := metrics["totalDuration"].(time.Duration)
				errors := metrics["errors"].(int)
				avgDuration := totalDuration / time.Duration(count)
				errorRate := float64(errors) / float64(count) * 100

				v.Logger.Debugf("Operation '%s': %d calls, avg duration: %v, error rate: %.1f%%",
					operation, count, avgDuration, errorRate)
			}
		}
		return true
	})
}

Comment on lines 284 to 286
if v.performanceMetrics != nil && v.performanceMetrics.TotalAPICalls%50 == 0 && v.performanceMetrics.TotalAPICalls > 0 {
v.logPerformanceMetrics()
}


Severity: high

The periodic logging of performance metrics is misplaced. TotalAPICalls is not incremented in this function, so the check will not behave as expected. This logic should be moved to trackAPICall where TotalAPICalls is incremented, to ensure metrics are logged based on the actual number of API calls.

Owner Author


@gemini-code-assist can you please make changes and commit here? I want you to do this by rebasing the existing commit.


Thank you for your request, @zakisk. As an AI assistant, I am unable to directly make changes to the codebase or commit to branches. My role is to provide code review feedback and suggestions to help you improve the code.

Please let me know if you would like me to provide further suggestions or clarify any of my previous comments.

Comment on lines 150 to 168
if existingMetric, exists := v.clientMetrics.Load(operation); exists {
if metric, ok := existingMetric.(map[string]interface{}); ok {
metric["count"] = metric["count"].(int) + 1
metric["totalDuration"] = metric["totalDuration"].(time.Duration) + duration
if err != nil {
metric["errors"] = metric["errors"].(int) + 1
}
}
} else {
errorCount := 0
if err != nil {
errorCount = 1
}
v.clientMetrics.Store(operation, map[string]interface{}{
"count": 1,
"totalDuration": duration,
"errors": errorCount,
})
}


Severity: medium

Using map[string]interface{} for metrics is not type-safe and can lead to runtime panics if the types are not what's expected. Consider defining a dedicated struct for operation metrics to improve type safety and maintainability. For example:

type OperationMetrics struct {
    Count         int64
    TotalDuration time.Duration
    Errors        int64
}

And then use v.clientMetrics.Store(operation, &OperationMetrics{...}). The values would be updated directly since trackAPICall is already under a lock.



duration := time.Since(startTime)
v.performanceMetrics.TotalAPICalls++
v.performanceMetrics.LastAPICallTime = time.Now()
Owner Author


@gemini-code-assist can you please explain this?


Certainly, zakisk! This section of code within the New() function is responsible for initializing key fields of the Provider struct, specifically skippedRun and the newly introduced performanceMetrics.

  1. skippedRun: This field (an existing part of the Provider struct) is initialized with a sync.Mutex. This mutex is used to protect concurrent access to any state related to 'skipped runs' within the provider, ensuring thread safety when multiple goroutines might try to modify this state.

  2. performanceMetrics: This is a new addition in this pull request. It's initialized here as a pointer to a PerformanceMetrics struct, which itself contains a sync.RWMutex{}. This RWMutex (Read-Write Mutex) is essential for safely managing concurrent access to the performance data (like TotalAPICalls, AverageResponseTime, LastRateLimit, etc.) that will be collected by methods such as trackAPICall, updateRateLimitInfo, and logPerformanceMetrics.

Initializing performanceMetrics directly in the New() function ensures that every Provider instance starts with a ready-to-use metrics collector, allowing for immediate tracking of GitHub API call performance and rate limit information from the moment the provider is created.

@zakisk zakisk force-pushed the test-gemini-code-assist branch from 5671221 to 95e00bb on August 12, 2025 05:35
@zakisk
Owner Author

zakisk commented Aug 12, 2025

/test linters

@zakisk zakisk force-pushed the test-gemini-code-assist branch 3 times, most recently from 24acbb1 to 13c19e4 on August 14, 2025 12:52
…elines#2208)

* fix: Annotate PLR when started status is reported

The use of the state label (which is mutable) for deciding when to report
to the SCM that the PLR was started is flaky.

It was seen that the reconciler gets events about PLRs with unexpected
values for the state label. For example, after the status is reported
to the SCM and the label value is patched to "started", after several
reconcile iterations the label had the "queued" value again. This can
happen because of unsafe patching done by controllers
(not just the PAC controllers) which reconcile PLRs.

Introduce a new annotation for indicating that the status was reported
to the SCM. By adding an annotation which is set once, we remove the
risk that its value will get overwritten by other controllers
(since maps are merged when patched, values are not removed
unless explicitly defined in the patch
- https://datatracker.ietf.org/doc/html/rfc7386#section-2)

In addition, at the start of each reconcile loop, ensure that we
operate on the latest version of the PLR and do not use a stale value
from the cache.

Assisted-By: Cursor
Signed-off-by: Gal Ben Haim <[email protected]>
@zakisk zakisk force-pushed the test-gemini-code-assist branch from 13c19e4 to a9d1b7d on August 14, 2025 12:56