<Callout type="note">Featured templates are supported in Sourcegraph v6.4 and later.</Callout>
Site admins can mark a template as featured by clicking the star button next to the library record. Featured records automatically move to a section above the remaining library records.
### Labels
<Callout type="note">Labels are supported in Sourcegraph v6.4 and later.</Callout>
Batch Spec Library records support an optional `labels` field for categorization and filtering. Common labels include:
-`"featured"` - Marks popular or recommended batch specs that are displayed in a "Featured Templates" section above the remaining examples
- Custom labels for organizational categorization (not exposed to Batch Changes users yet)
To remove the featured status, you can update the library record with an empty list of labels (`[]`).
File: `docs/admin/updates/docker_compose.mdx`
For upgrade procedures or general info about Sourcegraph versioning see the link…

> ***If the notes indicate a patch release exists, target the highest one.***
## v6.3 patch 1
- Grafana's port 3370 is no longer open by default for unauthenticated access in Docker Compose and Pure Docker deployments. Admins can still access Grafana by logging in to their Sourcegraph instance, navigating to the Site Admin page, and then clicking Monitoring in the left navigation menu. If customers require port 3370 to be open, see [PR 1204](https://github.com/sourcegraph/deploy-sourcegraph-docker/pull/1204/files) for how to add this port to their `docker-compose.override.yaml` file; a minimal sketch follows below.
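The following is a rough sketch only: it assumes the Grafana service is named `grafana` in the Compose file and that binding to `0.0.0.0` is acceptable in your environment. The linked PR 1204 shows the authoritative change.

```sh
# Hedged sketch: append a Grafana port mapping to docker-compose.override.yaml.
# The service name ("grafana") and the 0.0.0.0 host binding are assumptions;
# consult PR 1204 in deploy-sourcegraph-docker for the exact configuration.
cat >> docker-compose.override.yaml <<'EOF'
services:
  grafana:
    ports:
      - '0.0.0.0:3370:3370'
EOF
```

After editing the override file, re-run `docker-compose up -d` so the new port mapping takes effect.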
curl -X GET "https://analytics.sourcegraph.com/api/reports/by-user-client-date?instanceURL=$INSTANCE_URL&granularity=$GRANULARITY" \
  -H "Authorization: Bearer $ACCESS_TOKEN"
```
Optional `startDate` and `endDate` values (formatted as `YYYY-MM-DD`) can be specified. If neither is specified, the report defaults to all time; if only one is specified, only that start or end date filter is applied.
Example:
```sh
export INSTANCE_URL="<INSTANCE_URL>" # e.g. example.sourcegraphcloud.com
export START_DATE="2025-01-01"
export END_DATE="2025-12-31"
curl -X GET "https://analytics.sourcegraph.com/api/reports/by-user-client-date?instanceURL=$INSTANCE_URL&startDate=$START_DATE&endDate=$END_DATE" \
  -H "Authorization: Bearer $ACCESS_TOKEN"
```
File: `docs/analytics/index.mdx`
Many of the metrics above are also available for Cody only. However, some user d…

| Daily code navigation activity | Count of code navigation operations performed each day |
| Daily code navigation users | Number of unique users utilizing code navigation features each day |
| Precise vs. search-based code navigation actions by language | Comparison of precise vs. search-based navigation success rates broken down by programming language |
| Batch changes usage funnel | Number of times various Batch Changes actions were taken each day: Batch Changes opened, specs created, specs executed, changesets published, and changes merged (from the Sourcegraph UI) |

| Total accepted completions & auto-edits | Count of completions and auto-edits accepted by users during the selected time |
| Hours saved | The number of hours saved by Cody users, assuming 2 minutes saved per completion and auto-edit |
| Completions and auto-edits by day | The number of completions and auto-edits suggested by day and by editor. |
| Completion and auto-edit acceptance rate (CAR) | The percent of completions and auto-edits presented to a user for at least 750ms that are accepted, broken down by editor, day, and month. |
| Weighted completion and auto-edit acceptance rate (wCAR) | Similar to CAR, but weighted by the number of characters presented in the completion or auto-edit, by editor, day, and month. This assigns more "weight" to accepted completions that provide more code to the user. |
| Completion persistence rate | Percent of completions that are retained or mostly retained (67%+ of inserted text) after various time intervals. Auto-edits are not included. |
| Average completion and auto-edit latency (ms) | The average milliseconds of latency before a user is presented with a completion or auto-edit suggestion by an editor. |
| Acceptance rate by language | CAR and total completion suggestions broken down by editor during the selected time. Auto-edits are not included. |
### Chat and prompt metrics
When exporting, you can group the data by:

- User and day
- User, day, client, and language
You can select the timeframe for the export using the `startDate` and `endDate` parameters (with `YYYY-MM-DD` values).
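As a hedged sketch only, assuming the export is served by the reports API documented earlier in these docs (whether that endpoint backs every grouping option here is not confirmed), a date-bounded export request could look like this:

```sh
# Hedged sketch: date-bounded analytics export using the reports API shown above.
# Whether this endpoint backs every grouping option listed here is an assumption.
export INSTANCE_URL="<INSTANCE_URL>"   # e.g. example.sourcegraphcloud.com
export ACCESS_TOKEN="<ACCESS_TOKEN>"

curl -X GET "https://analytics.sourcegraph.com/api/reports/by-user-client-date?instanceURL=$INSTANCE_URL&startDate=2025-01-01&endDate=2025-03-31" \
  -H "Authorization: Bearer $ACCESS_TOKEN"
```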
Each row in the CSV represents a user's activity for a specific combination of these groupings (e.g., a particular day, month, client, and/or language). The CSV includes metrics such as searches, code navigation actions, chat conversations, code completions, and more.
#### Important Notes
| Total Characters Written by Cody | Inserted code that Cody generates via chat, prompt responses, accepted autocompletions, or suggestions/fixes. Used as the numerator in the "Percentage of Code Written by Cody" ratio. |
| Total Characters Written | Total new code inserted into the editor (includes both user-generated and Cody-generated characters). Used as the denominator in the "Percentage of Code Written by Cody" ratio. |
| Percentage of Code Written by Cody | Measures Cody's impact: (Total Characters Written by Cody ÷ Total Characters Written) × 100. [Learn more about this metric.](/analytics/pcw) |
| Auto-edit suggested events | Number of Cody auto-edit suggestions offered. |
| Auto-edit accepted events | Number of Cody auto-edit suggestions accepted by the user. |
| Auto-edit acceptance rate | Ratio of accepted to suggested auto-edits, combined across editors. |
| Lines changed by chats and commands | Number of lines of code (LOC) changed by chat and command events. |
| Lines changed by completions and auto-edits | Number of lines of code (LOC) changed by completion and auto-edit events. |
File: `docs/batch-changes/bulk-operations-on-changesets.mdx`
Below is a list of supported bulk operations for changesets and the conditions w…

|**Commenting**| Post a comment on all selected changesets. Useful for pinging people, reminding them to take a look at the changeset, or posting your favorite emoji |
|**Detach**| Detach a selection of changesets from the batch change to remove them from the archived tab |
|**Re-enqueue**| Re-enqueues the pending changes for all selected changesets that failed |
|**Merge**| Merge the selected changesets on code hosts. Some changesets may be unmergeable due to their states, which does not impact the overall bulk operation. Failed merges are listed under the bulk operations tab. In the confirmation modal, you can opt for a squash merge strategy, available on GitHub, GitLab, and Bitbucket Cloud. For Bitbucket Server/Data Center, only regular merges are performed |
|**Close**| Close the selected changesets on the code hosts |
|**Publish**| Publishes the selected changesets, provided they don't have a [`published` field](/batch-changes/batch-spec-yaml-reference#changesettemplatepublished) in the batch spec. You can choose between draft and normal changesets in the confirmation modal |
|**Push-only**| Pushes code changes to a new branch on a code host without making a merge request. Available on GitHub and GitLab only |
|**Export**| Export selected changesets for later use |
|**Re-execute**| Users can re-execute individual changeset creation logic for selected workspaces. This allows creating new changesets for users who are using non-deterministic run steps (for example, LLMs) |
|**Enable auto-merge for GitHub (experimental)**| Enable auto-merge on selected GitHub changesets. When enabled, changesets will be automatically merged once all required status checks pass and any blocking reviews are resolved. This feature is GitHub-specific and requires [appropriate setup](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/configuring-pull-request-merges/managing-auto-merge-for-pull-requests-in-your-repository) on the target repositories. Failed actions are listed under the bulk operations tab. |