
Rebase to v2.51.0-rc1 #781


Open · wants to merge 302 commits into base: vfs-2.51.0-rc1

Conversation

@dscho (Member) commented Aug 9, 2025

The usual shtick.

dscho and others added 30 commits August 8, 2025 10:47
In `fsck_commit()`, after counting the authors of a commit, we set the
`err` variable either when there was no author, or when there were more
than two authors recorded. Then we access the `err` variable to figure
out whether we should return early. But if there was exactly one author,
that variable is still uninitialized.

Let's just initialize the variable.

This issue was pointed out by CodeQL.

Signed-off-by: Johannes Schindelin <[email protected]>
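A minimal, self-contained sketch of the bug class fixed here (illustrative code, not the actual `fsck_commit()`):

```
#include <stdio.h>

/* Sketch: `err` was assigned only on the error branches, so the
 * final check read an uninitialized value when exactly one author
 * was recorded. Initializing it closes that hole. */
static int check_authors(int author_count)
{
	int err = 0; /* was: int err; */

	if (author_count == 0)
		err = -1; /* no author recorded */
	else if (author_count > 1)
		err = -2; /* multiple authors recorded */

	if (err) /* now well-defined even for author_count == 1 */
		return err;
	return 0;
}

int main(void)
{
	printf("%d\n", check_authors(1)); /* prints 0 */
	return 0;
}
```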
When accessing an array element (or, in these instances, characters in
a string), the check whether the array index is within bounds should
always come before accessing said element.

Signed-off-by: Johannes Schindelin <[email protected]>
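A minimal sketch of the reordering (illustrative, not the actual code):

```
#include <stdio.h>
#include <string.h>

/* Sketch: the bounds check must be evaluated before the element
 * access; `i < len && s[i]` short-circuits safely, whereas
 * `s[i] && i < len` reads out of bounds first. */
static int char_at_is_digit(const char *s, size_t len, size_t i)
{
	return i < len && s[i] >= '0' && s[i] <= '9';
}

int main(void)
{
	const char *s = "abc1";

	printf("%d\n", char_at_is_digit(s, strlen(s), 3)); /* 1 */
	printf("%d\n", char_at_is_digit(s, strlen(s), 9)); /* 0, no out-of-bounds read */
	return 0;
}
```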
The `revindex_size` value is uninitialized in case the function is
erroring out, but we want to assign its value. Let's just initialize it.

Signed-off-by: Johannes Schindelin <[email protected]>
The `localtime()` function is inherently thread-unsafe and should not be
used anymore. Let's use `localtime_r()` instead.

Signed-off-by: Johannes Schindelin <[email protected]>
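A minimal sketch of the swap, assuming a POSIX system (where `localtime_r()` is declared in `<time.h>`):

```
#include <stdio.h>
#include <time.h>

int main(void)
{
	time_t now = time(NULL);
	struct tm tm_buf;

	/* localtime() returns a pointer to shared static storage and
	 * races under threads; localtime_r() fills a caller-supplied
	 * buffer instead. */
	if (localtime_r(&now, &tm_buf))
		printf("%02d:%02d\n", tm_buf.tm_hour, tm_buf.tm_min);
	return 0;
}
```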
The `mtimes_size` variable is uninitialized when the function errors out,
yet its value is assigned to another variable. Let's just initialize it.

Signed-off-by: Johannes Schindelin <[email protected]>
On the off chance that `lookup_decoration()` cannot find anything, let
`leave_one_treesame_to_parent()` return gracefully instead of crashing.

Pointed out by CodeQL.

Signed-off-by: Johannes Schindelin <[email protected]>
CodeQL points out that `lookup_commit_reference()` can return NULL
values.

Signed-off-by: Johannes Schindelin <[email protected]>
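This commit and the many similar ones that follow all apply the same defensive pattern, roughly (an outline against Git's internal API, not a standalone program):

```
	/* Outline only: check the lookup result before using it. */
	struct commit *commit = lookup_commit_reference(the_repository, &oid);
	if (!commit)
		die(_("could not resolve '%s' to a commit"), oid_to_hex(&oid));
```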
CodeQL points out that `parse_object()` can return NULL values.

Signed-off-by: Johannes Schindelin <[email protected]>
CodeQL points out that `lookup_commit()` can return NULL values.

Signed-off-by: Johannes Schindelin <[email protected]>
CodeQL points out that `lookup_commit()` can return NULL values.

Signed-off-by: Johannes Schindelin <[email protected]>
CodeQL points out that `parse_tree_indirect()` can return NULL values.

Signed-off-by: Johannes Schindelin <[email protected]>
CodeQL points out that `lookup_commit()` can return NULL values.

Signed-off-by: Johannes Schindelin <[email protected]>
CodeQL points out that `branch_get()` can return NULL values.

Signed-off-by: Johannes Schindelin <[email protected]>
CodeQL points out that `branch_get()` can return NULL values.

Signed-off-by: Johannes Schindelin <[email protected]>
CodeQL points out that `lookup_commit_reference()` can return NULL
values.

Signed-off-by: Johannes Schindelin <[email protected]>
CodeQL points out that `branch_get()` can return NULL values.

Signed-off-by: Johannes Schindelin <[email protected]>
CodeQL is GitHub's native static code analyzer, and hence integrates
with GitHub Actions better than any other static code analyzer.

By default, it comes with a large range of "queries" that test for
common code patterns that should be avoided.

For now, we only target source code written in C, via the `language:
cpp` directive. Just in case other languages should be targeted, too,
this GitHub workflow job is set up as a matrix job to make that easier
in the future.

For full documentation, see
https://docs.github.com/en/code-security/code-scanning/introduction-to-code-scanning/about-code-scanning-with-codeql

Co-authored-by: Pierre Tempel <[email protected]>
Co-authored-by: Arthur Baars <[email protected]>
Signed-off-by: Johannes Schindelin <[email protected]>
fetch: silence a CodeQL alert about a local variable's address' use after release

As pointed out by CodeQL, it is a potentially dangerous practice to
store local variables' addresses in non-local structs.

My original intention was to make sure to clear it out after it was
used, and before the function returns (which is when the address would
go stale).

However, I faced too much resistance in the Git project against such
patches; there always seemed to be the overwhelming sentiment that the
code isn't broken (even if it requires a complex and demanding analysis
to wrap one's head around _that_). Therefore, I will be pragmatic and
simply ask CodeQL to hold its peace about this issue forever.

Signed-off-by: Johannes Schindelin <[email protected]>
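The pattern CodeQL flags here looks roughly like this (a deliberately broken, minimal sketch, not the actual fetch code):

```
#include <string.h>

struct config { const char *name; };
static struct config global_config;

/* Sketch: the address of a stack buffer escapes into a struct that
 * outlives the function, and goes stale as soon as it returns. */
static void register_name(void)
{
	char buf[32];

	strcpy(buf, "temporary");
	global_config.name = buf; /* dangling once register_name() returns */
}
```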
In some instances, CodeQL's web UI on github.com leaves questions
unanswered. For example, in some alerts it is really necessary to follow
the entire "taint flow" to understand why something might be an issue.

The alerts for the `cpp/uncontrolled-allocation-size` rule, for example,
are all false positives; only when inspecting the exact flow does it
become obvious what one such alert wants to point out: the size of a
binary patch hunk, which is specified in the patch itself, is used to
determine how much memory to allocate, which may potentially run out of
memory. That is just Git doing what it is asked to do, and nothing
needs to be changed.

To help with those issues, publish the `.sarif` file as part of every
workflow run; this allows downloading that file and inspecting it e.g.
with the SARIF viewer extension in VS Code (for details, see
https://marketplace.visualstudio.com/items?itemName=MS-SarifVSCode.sarif-viewer).

Signed-off-by: Johannes Schindelin <[email protected]>
As pointed out by CodeQL, it could be NULL and we usually check for
that.

Signed-off-by: Johannes Schindelin <[email protected]>
A couple of CodeQL's queries are opinionated in a way that the current
state of Git's source code obviously does not share, apparently
intentionally so.

For example, the "For loop variable changed in body" query as well as
the "No trivial switch statements" one result in too many results that
are apparently intentional in Git's source code. Let's not worry about
those, then. Also, Git has plenty of instances where variables shadow
other variables.

Other valid yet not quite critical issues identified by CodeQL include
complex conditionals and nested switch statements spanning several
pages.

We probably want to address these issues at some stage, but they are not
as critical as other problems pointed out by CodeQL, so let's silence
those queries for now and take care of them at a later stage.

Signed-off-by: Johannes Schindelin <[email protected]>
On the off-chance that it's NULL...

Signed-off-by: Johannes Schindelin <[email protected]>
As pointed out by CodeQL, `lookup_commit()` can return NULL.

Signed-off-by: Johannes Schindelin <[email protected]>
The code is a bit too hard for CodeQL to reason about: it cannot figure
out whether the `fill_commit_graph_info()` function is called at all
after `write_commit_graph()` returns (and hence whether `topo_levels`
goes out of scope before it is used again).

The Git project insists that this is correct (and does not want to make
the code more obviously correct), so let's silence CodeQL's complaints
in this instance.

Signed-off-by: Johannes Schindelin <[email protected]>
strbuf_read: help with CodeQL misunderstanding that strbuf_read() does NUL-terminate correctly

Signed-off-by: Johannes Schindelin <[email protected]>
Let's exclude GitWeb from being scanned; it is not distributed by us.

Signed-off-by: Johannes Schindelin <[email protected]>
dscho and others added 26 commits August 8, 2025 19:08
Git v2.48.0 has become even more stringent about leaks.

Signed-off-by: Johannes Schindelin <[email protected]>
The --path-walk option in `git pack-objects` is implied by the
pack.usePathWalk=true config value. This is intended to help the
packfile generation within `git push` specifically.

While this config does enable the path-walk feature, it does not lead to
the expected levels of compression in the cases it was designed to
handle. This is due to the default implication of the --reuse-delta
option as well as auto-GC.

In the performance tests used to evaluate the --path-walk option, such
as those in p5313, the --no-reuse-delta option is used to ensure that
deltas are recomputed according to the new object walk. However, it was
assumed (I assumed this) that when the objects were loose from
client-side operations that better deltas would be computed during this
operation. This wasn't confirmed because the test process used data that
was fetched from real repositories and thus existed in packed form only.

I was able to confirm that this does not reproduce when the objects to
push are loose. Carefully making the pushed commit unreachable and
loosening the objects via `git repack -Ad` helped to confirm my
suspicions here. Independent of this change, I'm pushing for these
pipeline agents to set `gc.auto=0` before creating their Git objects. In
the current setup, the repo is adding objects and then incrementally
repacking them and ending up with bad cross-path deltas. This approach
can help scenarios where that makes sense, but will not cover all of our
users without them choosing to opt-in to background maintenance (and
even then, an incremental repack could cost them efficiency).

In order to make sure we are getting the intended compression in `git
push`, this change forces the spawned `git pack-objects` process to use
`--no-reuse-delta`.

As far as I can tell, the main motivation for implying the --reuse-delta
option by default is two-fold:

 1. The code in send-pack.c that executes 'git pack-objects' is ignorant
    of whether the current process is a client pushing to a remote or a
    remote sending a fetch or clone to a client.

 2. For servers, it is critical that they trust the previously computed
    deltas whenever possible, or they could overload their CPU
    resources.

There's also the fact that most servers use repacking logic that will
replace any bad deltas that are sent by clients (or at least, that's the
hope; we've seen that repacks can also pick bad deltas).

This commit also adds a test case that demonstrates that `git -c
pack.usePathWalk=true push` now avoids reusing deltas.

To do this, the test case constructs a pack with a horrendously
inefficient delta object, then verifies that the pack on the receiving
side of the `push` fails to have such an inefficient delta.

The test case would probably be a lot more readable if hex numbers were
used instead of octal numbers, but alas, `printf "\x<hex>"` is not
portable, only `printf "\<octal>"` is. For example, dash's built-in
`printf` function simply prints `\x` verbatim while bash's built-in
happily converts this construct to the corresponding byte.

Signed-off-by: Derrick Stolee <[email protected]>
Signed-off-by: Johannes Schindelin <[email protected]>
The --path-walk option in 'git pack-objects' is implied by the
pack.usePathWalk=true config value. This is intended to help the
packfile generation within 'git push' specifically.

While this config does enable the path-walk feature, it does not lead
to the expected levels of compression in the cases it was designed to
handle. This is due to the default implication of the --reuse-delta
option as well as auto-GC.

In the performance tests used to evaluate the --path-walk option, such
as those in p5313, the --no-reuse-delta option is used to ensure that
deltas are recomputed according to the new object walk. However, it was
assumed (I assumed this) that when the objects were loose from
client-side operations that better deltas would be computed during this
operation. This wasn't confirmed because the test process used data that
was fetched from real repositories and thus existed in packed form only.

I was able to confirm that this does not reproduce when the objects to
push are loose. Carefully making the pushed commit unreachable and
loosening the objects via 'git repack -Ad' helped to confirm my
suspicions here. Independent of this change, I'm pushing for these
pipeline agents to set 'gc.auto=0' before creating their Git objects. In
the current setup, the repo is adding objects and then incrementally
repacking them and ending up with bad cross-path deltas. This approach
can help scenarios where that makes sense, but will not cover all of our
users without them choosing to opt-in to background maintenance (and
even then, an incremental repack could cost them efficiency).

In order to make sure we are getting the intended compression in 'git
push', this change makes the --path-walk option imply --no-reuse-delta
when the --reuse-delta option is not provided.

As far as I can tell, the main motivation for implying the --reuse-delta
option by default is two-fold:

1. The code in send-pack.c that executes 'git pack-objects' is ignorant
of whether the current process is a client pushing to a remote or a
remote sending a fetch or clone to a client.

2. For servers, it is critical that they trust the previously computed
deltas whenever possible, or they could overload their CPU resources.

There's also the fact that most servers use repacking logic that will
replace any bad deltas that are sent by clients (or at least, that's the
hope; we've seen that repacks can also pick bad deltas).

The --path-walk option is at the moment not compatible with reachability
bitmaps, so it is not planned to be used by Git servers. Thus, we can
reasonably assume (for now) that the --path-walk option indicates a
client-side scenario, either a push or a repack. The repack command is
explicit about whether or not to use the --reuse-delta option.

One thing to be careful about is background maintenance, which uses a
list of objects instead of refs, so we condition this on the case where
the --path-walk option will be effective by checking that the --revs
option was provided.
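In outline, the implication described above might read like this (illustrative variable names, not the actual `pack-objects` code):

```
	/* Sketch: --path-walk implies --no-reuse-delta, but only when
	 * the user did not pass --reuse-delta explicitly and when
	 * --revs was given (i.e. path-walk will be effective). */
	if (path_walk && revs_provided && !reuse_delta_given)
		allow_reuse_delta = 0;
```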

Alternative options considered included:

* Adding _another_ config ('pack.reuseDelta=false') to opt-in to this
choice. However, we already have pack.usePathWalk=true as an opt-in to
"do the right thing to make my data small" as far as our internal users
are concerned.

* Modify the chain between builtin/push.c, transport.c, and
builtin/send-pack.c to communicate that we are in "push" mode, not
within a fetch or clone. However, this seemed like overkill. It may be
beneficial in the future to pass through a mode like this, but it does
not meet the bar for the immediate need.

Reviewers, please see git-for-windows#5171 for the baseline
implementation of this feature within Git for Windows and thus
microsoft/git. This feature is still under review upstream.
Tests in t7900 assume the state of the `maintenance.strategy`
config setting left behind (set or unset) by previous tests. Correct
this by explicitly unsetting and re-setting the config at the start of
the tests.

Signed-off-by: Matthew John Cheetham <[email protected]>
Introduce a new maintenance task, `cache-local-objects`, that operates
on Scalar or VFS for Git repositories with a per-volume, shared object
cache (specified by `gvfs.sharedCache`) to migrate packfiles and loose
objects from the repository object directory to the shared cache.

Older versions of `microsoft/git` incorrectly placed packfiles in the
repository object directory instead of the shared cache; this task will
help clean up existing clones impacted by that issue.

Migration of packfiles involves the following steps for each pack:

1. Hardlink (or copy):
   a. the .pack file
   b. the .keep file
   c. the .rev file
2. Move (or copy + delete) the .idx file
3. Delete/unlink:
   a. the .pack file
   b. the .keep file
   c. the .rev file

Moving the index file after the others ensures the pack is not read
from the new cache directory until all associated files (rev, keep)
also exist there.

Moving loose objects operates as a move, or copy + delete.

Signed-off-by: Matthew John Cheetham <[email protected]>
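A sketch of that ordering, with a hypothetical helper (hardlink-only for brevity; as described above, the real task falls back to copying):

```
#include <stdio.h>
#include <unistd.h>

/* Sketch: migrate a pack's files so that the .idx lands last. A
 * pack is only usable once its .idx is present, so readers never
 * see a pack in the shared cache with missing companion files. */
static int migrate_pack(const char *src_base, const char *dst_base)
{
	static const char *exts[] = { ".pack", ".keep", ".rev" };
	char src[4096], dst[4096];
	size_t i;

	for (i = 0; i < 3; i++) { /* step 1: hardlink pack/keep/rev */
		snprintf(src, sizeof(src), "%s%s", src_base, exts[i]);
		snprintf(dst, sizeof(dst), "%s%s", dst_base, exts[i]);
		if (link(src, dst))
			return -1;
	}

	snprintf(src, sizeof(src), "%s.idx", src_base);
	snprintf(dst, sizeof(dst), "%s.idx", dst_base);
	if (rename(src, dst)) /* step 2: move the .idx last */
		return -1;

	for (i = 0; i < 3; i++) { /* step 3: drop the originals */
		snprintf(src, sizeof(src), "%s%s", src_base, exts[i]);
		unlink(src);
	}
	return 0;
}
```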
Add the `cache-local-objects` maintenance task to the list of tasks run
by the `scalar run` command. It's often easier for users to run the
shorter `scalar run` command than the equivalent `git maintenance`
command.

Signed-off-by: Matthew John Cheetham <[email protected]>
Introduce a new maintenance task, `cache-local-objects`, that operates
on Scalar or VFS for Git repositories with a per-volume, shared object
cache (specified by `gvfs.sharedCache`) to migrate packfiles and loose
objects from the repository object directory to the shared cache.

Older versions of `microsoft/git` incorrectly placed packfiles in the
repository object directory instead of the shared cache; this task will
help clean up existing clones impacted by that issue.

Fixes #716
Add the ability to block built-in commands based on whether the
`core.gvfs` setting has the `GVFS_USE_VIRTUAL_FILESYSTEM` bit set. This
allows us to selectively block commands that use the GVFS protocol but
don't use VFS for Git (for example, repos cloned via `scalar clone`
against Azure DevOps).

Signed-off-by: Matthew John Cheetham <[email protected]>
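Roughly, with illustrative names and bit values (the fork's actual flag definitions differ):

```
#define BLOCK_ON_VFS_ENABLED        (1 << 0) /* illustrative */
#define GVFS_USE_VIRTUAL_FILESYSTEM (1 << 1) /* illustrative */

/* Sketch: block a command only when the virtual-filesystem bit is
 * set in core.gvfs, instead of blocking on any non-zero value. */
static int should_block_command(unsigned cmd_flags, unsigned core_gvfs)
{
	return (cmd_flags & BLOCK_ON_VFS_ENABLED) &&
	       (core_gvfs & GVFS_USE_VIRTUAL_FILESYSTEM);
}
```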
Loosen the blocking of the `repack` command from all "GVFS repos" (those
that have `core.gvfs` set) to only those that actually use the virtual
file system (VFS for Git only). This allows for `repack` to be used in
Scalar clones.

Signed-off-by: Matthew John Cheetham <[email protected]>
Loosen the blocking of the `fsck` command from all "GVFS repos" (those
that have `core.gvfs` set) to only those that actually use the virtual
file system (VFS for Git only). This allows for `fsck` to be used in
Scalar clones.

Signed-off-by: Matthew John Cheetham <[email protected]>
Loosen the blocking of the `prune` command from all "GVFS repos" (those
that have `core.gvfs` set) to only those that actually use the virtual
file system (VFS for Git only). This allows for `prune` to be used in
Scalar clones.

Signed-off-by: Matthew John Cheetham <[email protected]>
Replace the special casing of the `worktree` command being blocked on
VFS-enabled repos with the new `BLOCK_ON_VFS_ENABLED` flag.

Signed-off-by: Matthew John Cheetham <[email protected]>
When the `gvfs.sharedCache` option is set, emit a warning message that
the `repack` command will not perform repacking on the shared cache.

In the future we can teach `repack` to operate on the shared cache, at
which point we can drop this commit.

Signed-off-by: Matthew John Cheetham <[email protected]>
The microsoft/git fork includes pre- and post-command hooks, with the
initial intention of using these for VFS for Git. In that environment,
these are important hooks to avoid concurrent issues when the
virtualization is incomplete.

However, in the Office monorepo the post-command hook is used in a
different way. A custom hook is used to update the sparse-checkout, if
necessary. To keep this hook from being incredibly slow on every Git
command, this hook checks for the existence of a "sentinel file" that is
written by a custom post-index-change hook and no-ops if that file does
not exist.

However, even this "no-op" is 200ms due to the use of two scripts (one
simple script in .git/hooks/ does some environment checking and then
calls a script from the working directory which actually contains the
logic).

Add a new config option, 'postCommand.strategy', that will allow for
multiple possible strategies in the future. For now, the one we are
adding is 'worktree-change' which states that we should write a
sentinel file instead of running the 'post-index-change' hook and then
skip the 'post-command' hook if the proper sentinel file doesn't exist.
If it does exist, then delete it and run the hook. This behavior is
_only_ triggered, however, if a part of the index changes that is within
the sparse checkout; if only parts of the index change that are not even
checked out on disk, the hook is still skipped.

I originally planned to put this into the repo-settings, but this caused
the repo settings to load in more cases than they did previously. When
there is an invalid boolean config option, this causes failure in new
places. This was caught by t3007.

This behavior is tested in t0401-post-command-hook.sh.

Signed-off-by: Derrick Stolee <[email protected]>
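The sentinel handshake in outline (hypothetical path and helper names):

```
#include <stdio.h>
#include <unistd.h>

#define SENTINEL ".git/post-command-sentinel" /* hypothetical path */

/* Sketch: the cheap post-index-change replacement only touches a
 * sentinel file; the post-command hook runs only when the sentinel
 * exists, and consumes it. */
static void on_index_change(void)
{
	FILE *f = fopen(SENTINEL, "w");

	if (f)
		fclose(f);
}

static int should_run_post_command_hook(void)
{
	if (access(SENTINEL, F_OK))
		return 0; /* no sentinel: skip the slow hook */
	unlink(SENTINEL); /* consume it, then run the hook */
	return 1;
}
```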
This helps t0401 pass while under SANITIZE=leak.

Signed-off-by: Derrick Stolee <[email protected]>
Currently when the `core.gvfs` setting is set, several commands are
outright blocked from running. Some of these commands, namely `repack`
are actually OK to run in a Scalar clone, even if it uses the GVFS
protocol (for Azure DevOps).

Introduce a different blocking mechanism to only block commands when the
virtual filesystem is being used, rather than as a broad block on any
`core.gvfs` setting.
The microsoft/git fork includes pre- and post-command hooks, with the
initial intention of using these for VFS for Git. In that environment,
these are important hooks to avoid concurrent issues when the
virtualization is incomplete.

However, in the Office monorepo the post-command hook is used in a
different way. A custom hook is used to update the sparse-checkout, if
necessary. To keep this hook from being incredibly slow on every Git
command, this hook checks for the existence of a "sentinel file" that is
written by a custom post-index-change hook and no-ops if that file does
not exist.

However, even this "no-op" is 200ms due to the use of two scripts (one
simple script in .git/hooks/ does some environment checking and then
calls a script from the working directory which actually contains the
logic).

Add a new config option, 'postCommand.strategy', that will allow for
multiple possible strategies in the future. For now, the one we are
adding is 'post-index-change' which states that we should write a
sentinel file instead of running the 'post-index-change' hook and then
skip the 'post-command' hook if the proper sentinel file doesn't exist.
(If it does exist, then delete it and run the hook.)

--- 

This fork contains changes specific to monorepo scenarios. If you are an
external contributor, then please detail your reason for submitting to
this fork:

* [ ] This is an early version of work already under review upstream.
* [ ] This change only applies to interactions with Azure DevOps and the
      GVFS Protocol.
* [ ] This change only applies to the virtualization hook and VFS for
Git.
* [x] This change only applies to custom bits in the microsoft/git fork.
This new test demonstrates some behavior where a valid packfile is being
rejected by the Git client due to the order in which it is resolving
REF_DELTAs.

The thin packfile has a REF_DELTA chain A->B->C where C is not included
in the packfile. However, the client repository contains both C and B
already. Thus, 'git index-pack' is able to resolve A before resolving B.

When resolving B, it then attempts to resolve any other REF_DELTAs that
are pointing to B as a base. This "revisits" A and complains as if there
is a cycle, but it did not actually detect a cycle.

A fix will arrive in the next change.

Signed-off-by: Derrick Stolee <[email protected]>
This is an early version of work already under review upstream:
gitgitgadget#1906
Just as we did in the backport of my upstream contribution, let's
convert the `curl_easy_setopt()` calls in `gvfs-helper.c` that still
passed `int` constants to pass `long` instead.

Signed-off-by: Johannes Schindelin <[email protected]>
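A minimal sketch of the conversion (using an arbitrary option for illustration; the actual calls in `gvfs-helper.c` use other `CURLOPT_*` values):

```
#include <curl/curl.h>

int main(void)
{
	CURL *h = curl_easy_init();

	if (!h)
		return 1;
	/* curl_easy_setopt() is variadic: numeric CURLOPT_* options
	 * read a long, so a bare int constant is wrong wherever the
	 * two types differ in size.
	 * was: curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1); */
	curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);
	curl_easy_cleanup(h);
	return 0;
}
```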
This topic branch has backports of cURL compile fixes in the `osx-gcc`
job, plus a bonus `gvfs-helper` follow-up fix.

Signed-off-by: Johannes Schindelin <[email protected]>
On Linux, the following command would cause the terminal to be stuck
waiting:

  git fetch origin foobar

The issue would be that the fetch would fail with the error

  fatal: couldn't find remote ref foobar

but the underlying git-gvfs-helper process wouldn't die. The
`subprocess_exit_handler()` method would close its stdin and stdout, but
that wouldn't be enough to cause the process to end, even though the
`packet_read_line_gently()` call that is run in a loop in
`do_sub_cmd__server()` should return -1 and the process should then
terminate peacefully.

While it is unclear why this does not happen, there may be other
conditions where the `gvfs-helper` process would not terminate. Let's
ensure that, upon exit, the gvfs-helper-client process cleans up the
gvfs-helper server processes that it spawned.

Reported-by: Stuart Wilcox Humilde <[email protected]>
Co-authored-by: Johannes Schindelin <[email protected]>
Signed-off-by: Derrick Stolee <[email protected]>
Signed-off-by: Johannes Schindelin <[email protected]>
On Linux, the following command would cause the terminal to be stuck
waiting:

```
  git fetch origin foobar
```

The issue would be that the fetch would fail with the error

```
  fatal: couldn't find remote ref foobar
```

but the underlying `git-gvfs-helper` process wouldn't die. The
`subprocess_exit_handler()` method would close its stdin and stdout, but
that wouldn't be enough to cause the process to end.

This PR addresses that by skipping the `finish_command()` call of the
`clean_on_exit_handler` and instead lets `cleanup_children()` send a
SIGTERM to terminate those spawned child processes.
This patch series has been long in the making, ever since Johannes
Nicolai and I spiked this in November/December 2020.

Signed-off-by: Johannes Schindelin <[email protected]>
This patch series has been long in the making, ever since Johannes
Nicolai and I spiked this in November/December 2020.

Signed-off-by: Johannes Schindelin <[email protected]>
dscho requested a review from mjcheetham August 9, 2025 10:29
dscho self-assigned this Aug 9, 2025
dscho (Member, Author) commented Aug 9, 2025

Range-diff relative to -rc0
  • 3: e8419a6 = 1: 637b030 t: remove advice from some tests

  • 1: 1a8ef5a = 2: 87a0c4e sparse-index.c: fix use of index hashes in expand_index

  • 2: c1fbf55 = 3: d48e3fe t5300: confirm failure of git index-pack when non-idx suffix requested

  • 4: 153abf9 = 4: 59f9e8c t1092: add test for untracked files and directories

  • 5: 4e4ae6e = 5: a63b63f index-pack: disable rev-index if index file has non .idx suffix

  • 6: 8098da6 = 6: 79e6761 trace2: prefetch value of GIT_TRACE2_DST_DEBUG at startup

  • 7: 4850ac5 = 7: f4119a2 survey: calculate more stats on refs

  • 8: 5bd1994 = 8: 549e0b2 survey: show some commits/trees/blobs histograms

  • 9: 7b71948 = 9: fbd0935 survey: add vector of largest objects for various scaling dimensions

  • 10: d6aa7e5 = 10: a31fa1f survey: add pathname of blob or tree to large_item_vec

  • 11: 7d9fb7b = 11: 64e86d2 survey: add commit-oid to large_item detail

  • 12: 5f04ffd = 12: 3703b8a survey: add commit name-rev lookup to each large_item

  • 13: 69a365c = 13: 21e02a8 survey: add --no-name-rev option

  • 14: 42f8558 = 14: 86befb5 survey: started TODO list at bottom of source file

  • 15: 1339c64 = 15: 7bfc010 survey: expanded TODO list at the bottom of the source file

  • 16: 0ba098f = 16: c7bb46b survey: expanded TODO with more notes

  • 17: a47be46 = 17: ea29c05 reset --stdin: trim carriage return from the paths

  • 18: 1187191 ! 18: 72523b3 Identify microsoft/git via a distinct version suffix

    @@ Commit message
      ## GIT-VERSION-GEN ##
     @@
      
    - DEF_VER=v2.50.GIT
    + DEF_VER=v2.51.0-rc1
      
     +# Identify microsoft/git via a distinct version suffix
     +DEF_VER=$DEF_VER.vfs.0.0
  • 19: 196a877 = 19: 1fd4bbc gvfs: ensure that the version is based on a GVFS tag

  • 20: 7befb92 = 20: f4dd260 gvfs: add a GVFS-specific header file

  • 54: dc85f0a = 21: 9d299a0 gvfs: add the core.gvfs config setting

  • 55: feede92 = 22: 85df70d gvfs: add the feature to skip writing the index' SHA-1

  • 56: 7f2f815 = 23: 1d54ae4 gvfs: add the feature that blobs may be missing

  • 57: dc30755 = 24: bbd201c gvfs: prevent files to be deleted outside the sparse checkout

  • 58: 07cd48f = 25: e106776 gvfs: optionally skip reachability checks/upload pack during fetch

  • 59: 512babe = 26: 4767665 gvfs: ensure all filters and EOL conversions are blocked

  • 60: 84dbe95 = 27: 173d324 gvfs: allow "virtualizing" objects

  • 61: 4d97f27 ! 28: 78eeb63 Hydrate missing loose objects in check_and_freshen()

    @@ object-file.c
      #include "hex.h"
      #include "loose.h"
      #include "object-file-convert.h"
    -@@ object-file.c: static int check_and_freshen_nonlocal(const struct object_id *oid, int freshen)
    - 
    - static int check_and_freshen(const struct object_id *oid, int freshen)
    +@@ object-file.c: static int check_and_freshen_source(struct odb_source *source,
    + 				    int freshen)
      {
    --	return check_and_freshen_local(oid, freshen) ||
    -+	int ret;
    -+	int tried_hook = 0;
    + 	static struct strbuf path = STRBUF_INIT;
    ++	int ret, tried_hook = 0;
     +
    + 	odb_loose_path(source, &path, oid);
    +-	return check_and_freshen_file(path.buf, freshen);
     +retry:
    -+	ret = check_and_freshen_local(oid, freshen) ||
    - 	       check_and_freshen_nonlocal(oid, freshen);
    -+	if (!ret && gvfs_virtualize_objects(the_repository) && !tried_hook) {
    ++	ret = check_and_freshen_file(path.buf, freshen);
    ++	if (!ret && gvfs_virtualize_objects(source->odb->repo) && !tried_hook) {
     +		tried_hook = 1;
    -+		if (!read_object_process(the_repository, oid))
    ++		if (!read_object_process(source->odb->repo, oid))
     +			goto retry;
     +	}
    -+
     +	return ret;
      }
      
    - int has_loose_object_nonlocal(const struct object_id *oid)
    + int has_loose_object(struct odb_source *source,
     
      ## odb.c ##
     @@
    @@ odb.c: static int do_oid_object_info_extended(struct object_database *odb,
      		}
     
      ## odb.h ##
    -@@ odb.h: enum for_each_object_flags {
    - 	FOR_EACH_OBJECT_SKIP_ON_DISK_KEPT_PACKS = (1<<4),
    - };
    +@@ odb.h: static inline int odb_write_object(struct object_database *odb,
    + 	return odb_write_object_ext(odb, buf, len, type, oid, NULL, 0);
    + }
      
     +int read_object_process(struct repository *r, const struct object_id *oid);
     +
  • 62: e03d905 ! 29: c443758 sha1_file: when writing objects, skip the read_object_hook

    @@ Commit message
         Signed-off-by: Johannes Schindelin <[email protected]>
     
      ## object-file.c ##
    -@@ object-file.c: static int check_and_freshen_nonlocal(const struct object_id *oid, int freshen)
    - 	return 0;
    - }
    +@@ object-file.c: int check_and_freshen_file(const char *fn, int freshen)
      
    --static int check_and_freshen(const struct object_id *oid, int freshen)
    -+static int check_and_freshen(const struct object_id *oid, int freshen,
    -+			     int skip_virtualized_objects)
    + static int check_and_freshen_source(struct odb_source *source,
    + 				    const struct object_id *oid,
    +-				    int freshen)
    ++				    int freshen, int skip_virtualized_objects)
      {
    - 	int ret;
    - 	int tried_hook = 0;
    -@@ object-file.c: static int check_and_freshen(const struct object_id *oid, int freshen)
    + 	static struct strbuf path = STRBUF_INIT;
    + 	int ret, tried_hook = 0;
    +@@ object-file.c: static int check_and_freshen_source(struct odb_source *source,
    + 	odb_loose_path(source, &path, oid);
      retry:
    - 	ret = check_and_freshen_local(oid, freshen) ||
    - 	       check_and_freshen_nonlocal(oid, freshen);
    --	if (!ret && gvfs_virtualize_objects(the_repository) && !tried_hook) {
    -+	if (!ret && gvfs_virtualize_objects(the_repository) &&
    + 	ret = check_and_freshen_file(path.buf, freshen);
    +-	if (!ret && gvfs_virtualize_objects(source->odb->repo) && !tried_hook) {
    ++	if (!ret && gvfs_virtualize_objects(source->odb->repo) &&
     +	    !skip_virtualized_objects && !tried_hook) {
      		tried_hook = 1;
    - 		if (!read_object_process(the_repository, oid))
    + 		if (!read_object_process(source->odb->repo, oid))
      			goto retry;
    -@@ object-file.c: int has_loose_object_nonlocal(const struct object_id *oid)
    - 
    - int has_loose_object(const struct object_id *oid)
    +@@ object-file.c: static int check_and_freshen_source(struct odb_source *source,
    + int has_loose_object(struct odb_source *source,
    + 		     const struct object_id *oid)
      {
    --	return check_and_freshen(oid, 0);
    -+	return check_and_freshen(oid, 0, 0);
    +-	return check_and_freshen_source(source, oid, 0);
    ++	return check_and_freshen_source(source, oid, 0, 0);
      }
      
      int format_object_header(char *str, size_t size, enum object_type type,
    -@@ object-file.c: static int write_loose_object(const struct object_id *oid, char *hdr,
    - 					  FOF_SKIP_COLLISION_CHECK);
    +@@ object-file.c: static int write_loose_object(struct odb_source *source,
      }
      
    --static int freshen_loose_object(const struct object_id *oid)
    -+static int freshen_loose_object(const struct object_id *oid,
    + static int freshen_loose_object(struct object_database *odb,
    +-				const struct object_id *oid)
    ++				const struct object_id *oid,
     +				int skip_virtualized_objects)
      {
    --	return check_and_freshen(oid, 1);
    -+	return check_and_freshen(oid, 1, skip_virtualized_objects);
    + 	odb_prepare_alternates(odb);
    + 	for (struct odb_source *source = odb->sources; source; source = source->next)
    +-		if (check_and_freshen_source(source, oid, 1))
    ++		if (check_and_freshen_source(source, oid, 1, skip_virtualized_objects))
    + 			return 1;
    + 	return 0;
      }
    +@@ object-file.c: int stream_loose_object(struct odb_source *source,
    + 	close_loose_object(source, fd, tmp_file.buf);
      
    - static int freshen_packed_object(const struct object_id *oid)
    -@@ object-file.c: int stream_loose_object(struct input_stream *in_stream, size_t len,
    - 		die(_("deflateEnd on stream object failed (%d)"), ret);
    - 	close_loose_object(fd, tmp_file.buf);
    - 
    --	if (freshen_packed_object(oid) || freshen_loose_object(oid)) {
    -+	if (freshen_packed_object(oid) || freshen_loose_object(oid, 1)) {
    + 	if (freshen_packed_object(source->odb, oid) ||
    +-	    freshen_loose_object(source->odb, oid)) {
    ++	    freshen_loose_object(source->odb, oid, 1)) {
      		unlink_or_warn(tmp_file.buf);
      		goto cleanup;
      	}
    -@@ object-file.c: int write_object_file_flags(const void *buf, size_t len,
    - 	 * it out into .git/objects/??/?{38} file.
    +@@ object-file.c: int write_object_file(struct odb_source *source,
      	 */
      	write_object_file_prepare(algo, buf, len, type, oid, hdr, &hdrlen);
    --	if (freshen_packed_object(oid) || freshen_loose_object(oid))
    -+	if (freshen_packed_object(oid) || freshen_loose_object(oid, 1))
    + 	if (freshen_packed_object(source->odb, oid) ||
    +-	    freshen_loose_object(source->odb, oid))
    ++	    freshen_loose_object(source->odb, oid, 1))
      		return 0;
    - 	if (write_loose_object(oid, hdr, hdrlen, buf, len, 0, flags))
    + 	if (write_loose_object(source, oid, hdr, hdrlen, buf, len, 0, flags))
      		return -1;
     
      ## t/t0410/read-object ##
  • 63: 417c6d5 = 30: c42bbeb gvfs: add global command pre and post hook procs

  • 64: 912d651 = 31: 69d7481 t0400: verify that the hook is called correctly from a subdirectory

  • 65: 125f58e = 32: 5c6812a t0400: verify core.hooksPath is respected by pre-command

  • 66: 9f63cd8 = 33: f49c4c3 Pass PID of git process to hooks.

  • 67: 3de0347 = 34: cb532d3 sparse-checkout: make sure to update files with a modify/delete conflict

  • 68: bd2f4fa = 35: 7ee77e2 git_config_set_multivar_in_file_gently(): add a lock timeout

  • 69: 2f64226 = 36: 550ef67 scalar: set the config write-lock timeout to 150ms

  • 70: 5f6ca48 = 37: 8e7d18c scalar: add docs from microsoft/scalar

  • 71: bc0a618 = 38: 51d5384 scalar (Windows): use forward slashes as directory separators

  • 72: 8e7a5b9 = 39: 3667669 scalar: add retry logic to run_git()

  • 73: 46a0c3f = 40: 2386887 scalar: support the config command for backwards compatibility

  • 21: d764e86 = 41: 5831601 sequencer: avoid progress when stderr is redirected

  • 23: 9d7740d = 42: 830ecd3 cat_one_file(): make it easy to see that the size variable is initialized

  • 25: a2e7859 = 43: 7130e62 clar: pass a string for a %s format placeholder

  • 26: 97990bf = 44: a43bde2 fsck: avoid using an uninitialized variable

  • 28: 394fa06 = 45: 7cb8b3c clar(clar__assert_equal): do in-bounds check before accessing element

  • 29: 0d62570 = 46: 1c35a3b load_revindex_from_disk(): avoid accessing uninitialized data

  • 31: 5d71939 = 47: bfffcac clar(clar_summary_testsuite): avoid thread-unsafe localtime()

  • 32: da875fb = 48: 8c16ef8 load_pack_mtimes_file(): avoid accessing uninitialized data

  • 22: b7b8be8 = 49: 0ea9dcd revision: defensive programming

  • 24: 86bcde0 = 50: aba0c0f get_parent(): defensive programming

  • 27: a0b3079 = 51: 4cb0912 fetch-pack: defensive programming

  • 30: 6d7e4a6 = 52: 765a31d unparse_commit(): defensive programming

  • 33: 3f28e40 = 53: b5bb46b verify_commit_graph(): defensive programming

  • 34: 231c371 = 54: e9ad531 stash: defensive programming

  • 35: 9e7a10a = 55: 7c690c0 stash: defensive programming

  • 37: 8911bae = 56: 8a8c958 push: defensive programming

  • 39: ca0d0da = 57: 15b818a fetch: defensive programming

  • 41: ea27328 = 58: 3c874f7 describe: defensive programming

  • 43: c4ff862 = 59: e6987d3 inherit_tracking(): defensive programming

  • 45: de6b87a = 60: 80fa88e codeql: run static analysis as part of CI builds

  • 36: 77506db = 61: 0eb8723 fetch: silence a CodeQL alert about a local variable's address' use after release

  • 46: 6f4dfbd = 62: dc89f56 codeql: publish the sarif file as build artifact

  • 38: 6958d7b = 63: be49bd8 submodule: check return value of submodule_from_path()

  • 47: 197c642 = 64: 2be22c6 codeql: disable a couple of non-critical queries for now

  • 40: 7796464 = 65: a85e41e test-tool repository: check return value of lookup_commit()

  • 48: 2cb2d17 = 66: 28b12cf date: help CodeQL understand that there are no leap-year issues here

  • 42: 3557d6c = 67: aa32ccd shallow: handle missing shallow commits gracefully

  • 49: eae2b62 = 68: 353f699 help: help CodeQL understand that consuming envvars is okay here

  • 44: 9828446 = 69: c25505f commit-graph: suppress warning about using a stale stack addresses

  • 50: 92aa9b8 = 70: be75e7f ctype: help CodeQL understand that sane_istest() does not access array past end

  • 51: 7321f92 = 71: 021ddf8 ctype: accommodate for CodeQL misinterpreting the z in mallocz()

  • 52: 3eb46bb = 72: 0bb7d99 strbuf_read: help with CodeQL misunderstanding that strbuf_read() does NUL-terminate correctly

  • 53: 62f540c = 73: e55ca89 codeql: also check JavaScript code

  • 78: 137a953 ! 74: 78e7b66 worktree: allow in Scalar repositories

    @@ builtin/worktree.c: int cmd_worktree(int ac,
      		prefix = "";
      
     
    - ## git.c ##
    -@@ git.c: static struct cmd_struct commands[] = {
    - #ifndef WITH_BREAKING_CHANGES
    - 	{ "whatchanged", cmd_whatchanged, RUN_SETUP },
    - #endif
    --	{ "worktree", cmd_worktree, RUN_SETUP | BLOCK_ON_GVFS_REPO },
    -+	{ "worktree", cmd_worktree, RUN_SETUP },
    - 	{ "write-tree", cmd_write_tree, RUN_SETUP },
    - };
    - 
    -
      ## gvfs.h ##
     @@ gvfs.h: struct repository;
    +  */
      #define GVFS_SKIP_SHA_ON_INDEX                      (1 << 0)
    - #define GVFS_BLOCK_COMMANDS                         (1 << 1)
      #define GVFS_MISSING_OK                             (1 << 2)
     +
     +/*
  • 79: 0aeffe3 = 75: 7448601 sparse-checkout: avoid writing entries with the skip-worktree bit

  • 74: 87eed03 = 76: 9c5dcef Do not remove files outside the sparse-checkout

  • 75: 680f447 = 77: f66dc27 send-pack: do not check for sha1 file when GVFS_MISSING_OK set

  • 76: d753b59 = 78: 6699fe8 cache-tree: remove use of strbuf_addf in update_one

  • 77: b327335 ! 79: d13f97b gvfs: block unsupported commands when running in a GVFS repo

    @@ gvfs.h: struct repository;
      #define GVFS_SKIP_SHA_ON_INDEX                      (1 << 0)
     +#define GVFS_BLOCK_COMMANDS                         (1 << 1)
      #define GVFS_MISSING_OK                             (1 << 2)
    - #define GVFS_NO_DELETE_OUTSIDE_SPARSECHECKOUT       (1 << 3)
    - #define GVFS_FETCH_SKIP_REACHABILITY_AND_UPLOADPACK (1 << 4)
    + 
    + /*
     
      ## t/meson.build ##
     @@ t/meson.build: integration_tests = [
  • 80: 780c99a = 80: f6ff978 gvfs: allow overriding core.gvfs

  • 81: b41cc38 = 81: 3a93fab BRANCHES.md: Add explanation of branches and using forks

  • 82: fc00d20 = 82: d991bc6 Add virtual file system settings and hook proc

  • 83: fe37d98 = 83: f236513 virtualfilesystem: don't run the virtual file system hook if the index has been redirected

  • 84: f5d1201 = 84: 585b118 virtualfilesystem: check if directory is included

  • 85: c8d9cfd = 85: 4c442fc backwards-compatibility: support the post-indexchanged hook

  • 86: 09c7512 = 86: b063dec gvfs: verify that the built-in FSMonitor is disabled

  • 87: 264b6d8 = 87: 3019a43 wt-status: add trace2 data for sparse-checkout percentage

  • 88: a884bdb = 88: 6f6c6ca wt-status: add VFS hydration percentage to normal git status output

  • 89: a63a215 = 89: 1605ecb status: add status serialization mechanism

  • 90: bed685b = 90: 9bccb63 Teach ahead-behind and serialized status to play nicely together

  • 91: 4c1b2c6 = 91: c3e3764 status: serialize to path

  • 92: 40be547 = 92: 77b841a status: reject deserialize in V2 and conflicts

  • 93: ed58304 = 93: d8a6dd0 serialize-status: serialize global and repo-local exclude file metadata

  • 94: 5a6faf2 = 94: 45a339d status: deserialization wait

  • 95: 8717dff = 95: 9e1cc29 status: deserialize with -uno does not print correct hint

  • 96: 93b0c24 = 96: 689b4ec fsmonitor: check CE_FSMONITOR_VALID in ce_uptodate

  • 97: 9eca474 = 97: be47d60 fsmonitor: add script for debugging and update script for tests

  • 98: 2bfabb6 = 98: 46fe840 status: disable deserialize when verbose output requested.

  • 99: e0b0143 = 99: a17803b t7524: add test for verbose status deserialzation

  • 100: b2b42e3 = 100: e953eb7 deserialize-status: silently fallback if we cannot read cache file

  • 101: f7615eb = 101: 641b94d gvfs:trace2:data: add trace2 tracing around read_object_process

  • 102: b47ac12 = 102: 54ba937 gvfs:trace2:data: status deserialization information

  • 103: ec3fa0f = 103: eeeec93 gvfs:trace2:data: status serialization

  • 104: 1418e58 = 104: 886b4b8 gvfs:trace2:data: add vfs stats

  • 105: 5e65478 = 105: a49fbf8 trace2: refactor setting process starting time

  • 106: 30c19f5 = 106: 474de59 trace2:gvfs:experiment: clear_ce_flags_1

  • 107: b0f7428 = 107: 0333672 trace2:gvfs:experiment: report_tracking

  • 108: f7f6d49 = 108: b386bba trace2:gvfs:experiment: read_cache: annotate thread usage in read-cache

  • 109: c478cd3 = 109: cfeec8e trace2:gvfs:experiment: read-cache: time read/write of cache-tree extension

  • 110: ead3631 = 110: 7ebf901 trace2:gvfs:experiment: add region to apply_virtualfilesystem()

  • 111: 57bdbe1 = 111: 46e79dd trace2:gvfs:experiment: add region around unpack_trees()

  • 112: e738fd8 = 112: 78a860f trace2:gvfs:experiment: add region to cache_tree_fully_valid()

  • 113: fa98735 = 113: fccc45d trace2:gvfs:experiment: add unpack_entry() counter to unpack_trees() and report_tracking()

  • 114: 54f91c6 = 114: 3f9a101 trace2:gvfs:experiment: increase default event depth for unpack-tree data

  • 115: 4c9459d = 115: 7def48c trace2:gvfs:experiment: add data for check_updates() in unpack_trees()

  • 116: 044bdda = 116: a02e9d9 Trace2:gvfs:experiment: capture more 'tracking' details

  • 117: 25caf6a = 117: 5dafe37 credential: set trace2_child_class for credential manager children

  • 118: 30a937b = 118: 1a48c17 sub-process: do not borrow cmd pointer from caller

  • 119: fb01211 = 119: d017045 sub-process: add subprocess_start_argv()

  • 120: 500e76d = 120: 7d12b6e sha1-file: add function to update existing loose object cache

  • 121: 6d31645 = 121: db8f5b2 packfile: add install_packed_git_and_mru()

  • 122: 03155e0 = 122: 4675e0b index-pack: avoid immediate object fetch while parsing packfile

  • 123: c90f3a2 ! 123: e6c52ec gvfs-helper: create tool to fetch objects using the GVFS Protocol

    @@ gvfs-helper.c (new)
     +		goto cleanup;
     +	}
     +
    -+	if (finalize_object_file(pack_name_tmp.buf, pack_name_dst.buf) ||
    -+	    finalize_object_file(idx_name_tmp.buf, idx_name_dst.buf)) {
    ++	if (finalize_object_file(the_repository, pack_name_tmp.buf, pack_name_dst.buf) ||
    ++	    finalize_object_file(the_repository, idx_name_tmp.buf, idx_name_dst.buf)) {
     +		unlink(pack_name_tmp.buf);
     +		unlink(pack_name_dst.buf);
     +		unlink(idx_name_tmp.buf);
    @@ gvfs-helper.c (new)
     +	 * collision we have to assume something else is happening in
     +	 * parallel and we lost the race.  And that's OK.
     +	 */
    -+	if (finalize_object_file(tmp_path.buf, params->loose_path.buf)) {
    ++	if (finalize_object_file(the_repository, tmp_path.buf, params->loose_path.buf)) {
     +		unlink(tmp_path.buf);
     +		strbuf_addf(&status->error_message,
     +			    "could not install loose object '%s'",
  • 124: b46a7f7 ! 124: ff9156d sha1-file: create shared-cache directory if it doesn't exist

    @@ odb.c: int odb_for_each_alternate(struct object_database *odb,
      void odb_prepare_alternates(struct object_database *odb)
      {
     +	extern struct strbuf gvfs_shared_cache_pathname;
    -+
      	if (odb->loaded_alternates)
      		return;
      
      	link_alt_odb_entries(odb, odb->alternate_db, PATH_SEP, NULL, 0);
      
      	read_info_alternates(odb, odb->sources->path, 0);
    --	odb->loaded_alternates = 1;
     +
     +	if (gvfs_shared_cache_pathname.len &&
     +	    !gvfs_matched_shared_cache_to_alternate) {
    @@ odb.c: int odb_for_each_alternate(struct object_database *odb,
     +				     '\n', NULL, 0);
     +	}
     +
    -+	odb->repo->objects->loaded_alternates = 1;
    + 	odb->loaded_alternates = 1;
      }
      
    - int odb_has_alternates(struct object_database *odb)
  • 125: f13819b = 125: 22c79d1 gvfs-helper: better handling of network errors

  • 126: d6c22c8 = 126: d184263 gvfs-helper-client: properly update loose cache with fetched OID

  • 127: b85fa98 ! 127: caeaa78 gvfs-helper: V2 robust retry and throttling

    @@ gvfs-helper.c: static void install_packfile(struct gh__response_status *status,
      		goto cleanup;
      	}
      
    --	if (finalize_object_file(pack_name_tmp.buf, pack_name_dst.buf) ||
    --	    finalize_object_file(idx_name_tmp.buf, idx_name_dst.buf)) {
    +-	if (finalize_object_file(the_repository, pack_name_tmp.buf, pack_name_dst.buf) ||
    +-	    finalize_object_file(the_repository, idx_name_tmp.buf, idx_name_dst.buf)) {
     -		unlink(pack_name_tmp.buf);
     -		unlink(pack_name_dst.buf);
     -		unlink(idx_name_tmp.buf);
     -		unlink(idx_name_dst.buf);
    -+	if (finalize_object_file(params->temp_path_pack.buf,
    ++	if (finalize_object_file(the_repository, params->temp_path_pack.buf,
     +				 params->final_path_pack.buf) ||
    -+	    finalize_object_file(params->temp_path_idx.buf,
    ++	    finalize_object_file(the_repository, params->temp_path_idx.buf,
     +				 params->final_path_idx.buf)) {
     +		unlink(params->temp_path_pack.buf);
     +		unlink(params->temp_path_idx.buf);
  • 128: 4914079 = 128: dacb6d2 gvfs-helper: expose gvfs/objects GET and POST semantics

  • 129: cecdfd5 = 129: 25c4456 gvfs-helper: dramatically reduce progress noise

  • 130: 5945425 = 130: a527370 gvfs-helper: handle pack-file after single POST request

  • 131: 8c99b85 = 131: cf6213e test-gvfs-prococol, t5799: tests for gvfs-helper

  • 132: e872d8a = 132: 6797785 gvfs-helper: move result-list construction into install functions

  • 133: e8007d9 = 133: 930ef88 t5799: add support for POST to return either a loose object or packfile

  • 134: 9fab94c = 134: e10b4c8 t5799: cleanup wc-l and grep-c lines

  • 135: 9bc78b9 ! 135: 366fcbc gvfs-helper: verify loose objects after write

    @@ gvfs-helper.c: static void install_packfile(struct gh__request_params *params,
     +	oi.typep = &type;
     +	oi.sizep = &size;
     +
    -+	ret = read_loose_object(path, expected_oid, &real_oid, &contents, &oi);
    ++	ret = read_loose_object(the_repository, path, expected_oid, &real_oid, &contents, &oi);
     +	free(contents);
     +
     +	return ret;
  • 136: 2db49cb = 136: 56d551e t7599: create corrupt blob test

  • 137: 1014f65 ! 137: 4d6957d gvfs-helper: add prefetch support

    @@ gvfs-helper.c: static int create_loose_pathname_in_odb(struct strbuf *buf_path,
     +				 struct strbuf *final_path_idx,
     +				 struct strbuf *final_filename)
     +{
    -+	if (finalize_object_file(temp_path_pack->buf, final_path_pack->buf) ||
    -+	    finalize_object_file(temp_path_idx->buf, final_path_idx->buf)) {
    ++	if (finalize_object_file(the_repository, temp_path_pack->buf, final_path_pack->buf) ||
    ++	    finalize_object_file(the_repository, temp_path_idx->buf, final_path_idx->buf)) {
     +		unlink(temp_path_pack->buf);
     +		unlink(temp_path_idx->buf);
     +		unlink(final_path_pack->buf);
    @@ gvfs-helper.c: static void create_tempfile_for_loose(
      		goto cleanup;
      	}
      
    --	if (finalize_object_file(params->temp_path_pack.buf,
    +-	if (finalize_object_file(the_repository, params->temp_path_pack.buf,
     -				 params->final_path_pack.buf) ||
    --	    finalize_object_file(params->temp_path_idx.buf,
    +-	    finalize_object_file(the_repository, params->temp_path_idx.buf,
     -				 params->final_path_idx.buf)) {
     -		unlink(params->temp_path_pack.buf);
     -		unlink(params->temp_path_idx.buf);
    @@ gvfs-helper.c: static void install_loose(struct gh__request_params *params,
      	 * collision we have to assume something else is happening in
      	 * parallel and we lost the race.  And that's OK.
      	 */
    --	if (finalize_object_file(tmp_path.buf, params->loose_path.buf)) {
    +-	if (finalize_object_file(the_repository, tmp_path.buf, params->loose_path.buf)) {
     +	if (create_loose_pathname_in_odb(&loose_path, &params->loose_oid)) {
     +		strbuf_addf(&status->error_message,
     +			    "cannot create directory for loose object '%s'",
    @@ gvfs-helper.c: static void install_loose(struct gh__request_params *params,
     +		goto cleanup;
     +	}
     +
    -+	if (finalize_object_file(tmp_path.buf, loose_path.buf)) {
    ++	if (finalize_object_file(the_repository, tmp_path.buf, loose_path.buf)) {
      		unlink(tmp_path.buf);
      		strbuf_addf(&status->error_message,
      			    "could not install loose object '%s'",
  • 138: 4c0d982 = 138: 031b41c gvfs-helper: add prefetch .keep file for last packfile

  • 139: 5fd41c3 = 139: 04a0d69 gvfs-helper: do one read in my_copy_fd_len_tail()

  • 140: ceecb71 = 140: 73c4abc gvfs-helper: move content-type warning for prefetch packs

  • 141: d4fccbe = 141: 3f134ac fetch: use gvfs-helper prefetch under config

  • 142: c4bf2b2 ! 142: 3e6c90e gvfs-helper: better support for concurrent packfile fetches

    @@ gvfs-helper.c: static void my_finalize_packfile(struct gh__request_params *param
     +	 * has already munged errno (and it has various creation
     +	 * strategies), so we don't bother looking at it.
     +	 */
    - 	if (finalize_object_file(temp_path_pack->buf, final_path_pack->buf) ||
    - 	    finalize_object_file(temp_path_idx->buf, final_path_idx->buf)) {
    + 	if (finalize_object_file(the_repository, temp_path_pack->buf, final_path_pack->buf) ||
    + 	    finalize_object_file(the_repository, temp_path_idx->buf, final_path_idx->buf)) {
      		unlink(temp_path_pack->buf);
      		unlink(temp_path_idx->buf);
     -		unlink(final_path_pack->buf);
  • 143: eb7e11b = 143: 0c3da89 remote-curl: do not call fetch-pack when using gvfs-helper

  • 144: 4a251fd = 144: 2560448 fetch: reprepare packs before checking connectivity

  • 145: 23c31cf = 145: f8166e0 gvfs-helper: retry when creating temp files

  • 146: ada684f = 146: e970c84 sparse: avoid warnings about known cURL issues in gvfs-helper.c

  • 147: 5556f4f = 147: 15cb933 gvfs-helper: add --max-retries to prefetch verb

  • 148: 1c9a79e = 148: 17ce06c t5799: add tests to detect corrupt pack/idx files in prefetch

  • 149: 77e120d = 149: 85cff8d gvfs-helper: ignore .idx files in prefetch multi-part responses

  • 150: d6269fb = 150: 2a251e3 t5799: explicitly test gvfs-helper --fallback and --no-fallback

  • 151: 4810c9e = 151: b2ac7d9 gvfs-helper: don't fallback with new config

  • 152: e389079 = 152: e1c0ed0 test-gvfs-protocol: add cache_http_503 to mayhem

  • 153: 4c7fe74 = 153: 84b52a0 t5799: add unit tests for new gvfs.fallback config setting

  • 154: ca423b6 ! 154: 42b818a maintenance: care about gvfs.sharedCache config

    @@ builtin/gc.c: static int write_loose_object_to_stdin(const struct object_id *oid
      	int result = 0;
      	struct write_loose_object_data data;
      	struct child_process pack_proc = CHILD_PROCESS_INIT;
    ++	struct odb_source *prev_source = NULL;
     +	const char *object_dir = r->objects->sources->path;
     +
     +	/* If set, use the shared object directory. */
    -+	if (shared_object_dir)
    ++	if (shared_object_dir) {
    ++		prev_source =
    ++			odb_set_temporary_primary_source(r->objects,
    ++							 shared_object_dir, 0);
     +		object_dir = shared_object_dir;
    ++	}
      
      	/*
      	 * Do not start pack-objects process
    - 	 * if there are no loose objects.
    +@@ builtin/gc.c: static int pack_loose(struct maintenance_run_opts *opts)
      	 */
    --	if (!for_each_loose_file_in_objdir(r->objects->sources->path,
    -+	if (!for_each_loose_file_in_objdir(object_dir,
    + 	if (!for_each_loose_file_in_source(r->objects->sources,
      					   bail_on_loose,
    - 					   NULL, NULL, NULL))
    +-					   NULL, NULL, NULL))
    ++					   NULL, NULL, NULL)) {
    ++		if (shared_object_dir)
    ++			odb_restore_primary_source(r->objects, prev_source,
    ++						   shared_object_dir);
      		return 0;
    ++	}
    + 
    + 	pack_proc.git_cmd = 1;
    + 
     @@ builtin/gc.c: static int pack_loose(struct maintenance_run_opts *opts)
      		strvec_push(&pack_proc.args, "--quiet");
      	else
    @@ builtin/gc.c: static int pack_loose(struct maintenance_run_opts *opts)
      	pack_proc.in = -1;
      
     @@ builtin/gc.c: static int pack_loose(struct maintenance_run_opts *opts)
    - 	else if (data.batch_size > 0)
    - 		data.batch_size--; /* Decrease for equality on limit. */
      
    --	for_each_loose_file_in_objdir(r->objects->sources->path,
    -+	for_each_loose_file_in_objdir(object_dir,
    - 				      write_loose_object_to_stdin,
    - 				      NULL,
    - 				      NULL,
    + 	if (start_command(&pack_proc)) {
    + 		error(_("failed to start 'git pack-objects' process"));
    ++		if (shared_object_dir)
    ++			odb_restore_primary_source(r->objects, prev_source,
    ++						   shared_object_dir);
    + 		return 1;
    + 	}
    + 
    +@@ builtin/gc.c: static int pack_loose(struct maintenance_run_opts *opts)
    + 		result = 1;
    + 	}
    + 
    ++	if (shared_object_dir)
    ++		odb_restore_primary_source(r->objects, prev_source,
    ++					   shared_object_dir);
    ++
    + 	return result;
    + }
    + 
     @@ builtin/gc.c: static int task_option_parse(const struct option *opt,
      }
      
  • 155: f6807a2 = 155: f924de2 unpack-trees:virtualfilesystem: Improve efficiency of clear_ce_flags

  • 156: 2c5f43c = 156: a4577d0 homebrew: add GitHub workflow to release Cask

  • 157: d93063b = 157: 277b10c Adding winget workflows

  • 158: 490cd7c = 158: 8444657 Disable the monitor-components workflow in msft-git

  • 159: 0f333e4 = 159: 3684801 .github: enable windows builds on microsoft fork

  • 160: 70c4033 = 160: 3184b3d .github/actions/akv-secret: add action to get secrets

  • 161: b89ed88 ! 161: bcfec90 release: create initial Windows installer build workflow

    @@ .github/workflows/build-git-installers.yml (new)
     +      - name: Validate tag
     +        run: |
     +          echo "$GITHUB_REF" |
    -+          grep -E '^refs/tags/v2\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.vfs\.0\.(0|[1-9][0-9]*)(\.rc[0-9])?$' || {
    -+            echo "::error::${GITHUB_REF#refs/tags/} is not of the form v2.<X>.<Y>.vfs.0.<W>[.rc<N>]" >&2
    ++            grep -E '^refs/tags/v2\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)(-rc[0-9]+)?\.vfs\.0\.(0|[1-9][0-9]*)$' || {
    ++            echo "::error::${GITHUB_REF#refs/tags/} is not of the form v2.<X>.<Y>[-rc<N>].vfs.0.<W>" >&2
     +            exit 1
     +          }
     +      - name: Determine tag to build
    @@ .github/workflows/build-git-installers.yml (new)
     +          # Verify tag follows rules in GIT-VERSION-GEN (i.e., matches the specified "DEF_VER" in
     +          # GIT-VERSION-FILE) and matches tag determined from trigger
     +          make GIT-VERSION-FILE
    -+          test "${{ steps.tag.outputs.version }}" == "$(sed -n 's/^GIT_VERSION *= *//p'< GIT-VERSION-FILE)" || die "GIT-VERSION-FILE tag ($(cat GIT-VERSION-FILE)) does not match ${{ steps.tag.outputs.name }}"
    ++          expected_version="${{ steps.tag.outputs.version }}"
    ++          # Convert -rc to .rc to match GIT-VERSION-FILE format
    ++          expected_version="${expected_version//-rc/.rc}"
    ++          test "$expected_version" == "$(sed -n 's/^GIT_VERSION *= *//p'< GIT-VERSION-FILE)" || die "GIT-VERSION-FILE tag ($(cat GIT-VERSION-FILE)) does not match ${{ steps.tag.outputs.name }}"
     +  # End check prerequisites for the workflow
     +
     +  # Build Windows installers (x86_64 & aarch64; installer & portable)
  • 162: 570fe38 = 162: 7dc327c release: create initial Windows installer build workflow

  • 163: 4233825 = 163: 175bd6e help: special-case HOST_CPU universal

  • 164: a3eab7e ! 164: 1a75f6b release: add Mac OSX installer build

    @@ .github/macos-installer/Makefile (new)
     +PREFIX := /usr/local
     +GIT_PREFIX := $(PREFIX)/git
     +
    ++# Replace -rc with .rc in the version string
    ++# This is to ensure compatibility with the format as generated by GIT-VERSION-GEN
    ++ORIGINAL_VERSION := $(VERSION)
    ++VERSION := $(shell echo $(ORIGINAL_VERSION) | sed 's/-rc/.rc/g')
    ++
     +BUILD_DIR := $(GITHUB_WORKSPACE)/payload
     +DESTDIR := $(PWD)/stage/git-$(ARCH_UNIV)-$(VERSION)
     +ARTIFACTDIR := build-artifacts
    @@ .github/macos-installer/Makefile (new)
     +	pkg_cmd += --sign "$(APPLE_INSTALLER_IDENTITY)"
     +endif
     +
    -+pkg_cmd += disk-image/git-$(VERSION)-$(ARCH_UNIV).pkg
    -+disk-image/git-$(VERSION)-$(ARCH_UNIV).pkg: disk-image/VERSION-$(VERSION)-$(ARCH_UNIV) symlinks
    ++pkg_cmd += disk-image/git-$(ORIGINAL_VERSION)-$(ARCH_UNIV).pkg
    ++
    ++disk-image/git-$(ORIGINAL_VERSION)-$(ARCH_UNIV).pkg: disk-image/VERSION-$(VERSION)-$(ARCH_UNIV) symlinks
     +	$(pkg_cmd)
     +
     +git-%-$(ARCH_UNIV).dmg:
    -+	hdiutil create git-$(VERSION)-$(ARCH_UNIV).uncompressed.dmg -fs HFS+ -srcfolder disk-image -volname "Git $(VERSION) $(ARCH_UNIV)" -ov 2>&1 | tee err || { \
    ++	hdiutil create git-$(ORIGINAL_VERSION)-$(ARCH_UNIV).uncompressed.dmg -fs HFS+ -srcfolder disk-image -volname "Git $(ORIGINAL_VERSION) $(ARCH_UNIV)" -ov 2>&1 | tee err || { \
     +		grep "Resource busy" err && \
     +		sleep 5 && \
    -+		hdiutil create git-$(VERSION)-$(ARCH_UNIV).uncompressed.dmg -fs HFS+ -srcfolder disk-image -volname "Git $(VERSION) $(ARCH_UNIV)" -ov; }
    -+	hdiutil convert -format UDZO -o $@ git-$(VERSION)-$(ARCH_UNIV).uncompressed.dmg
    -+	rm -f git-$(VERSION)-$(ARCH_UNIV).uncompressed.dmg
    ++		hdiutil create git-$(ORIGINAL_VERSION)-$(ARCH_UNIV).uncompressed.dmg -fs HFS+ -srcfolder disk-image -volname "Git $(ORIGINAL_VERSION) $(ARCH_UNIV)" -ov; }
    ++	hdiutil convert -format UDZO -o $@ git-$(ORIGINAL_VERSION)-$(ARCH_UNIV).uncompressed.dmg
    ++	rm -f git-$(ORIGINAL_VERSION)-$(ARCH_UNIV).uncompressed.dmg
     +
     +payload: $(BUILD_DIR)/git-$(VERSION)/osx-installed $(BUILD_DIR)/git-$(VERSION)/osx-built-assert-$(ARCH_UNIV)
     +
    -+pkg: disk-image/git-$(VERSION)-$(ARCH_UNIV).pkg
    ++pkg: disk-image/git-$(ORIGINAL_VERSION)-$(ARCH_UNIV).pkg
     +
    -+image: git-$(VERSION)-$(ARCH_UNIV).dmg
    ++image: git-$(ORIGINAL_VERSION)-$(ARCH_UNIV).dmg
     +
     +ifdef APPLE_APP_IDENTITY
     +codesign:
    @@ .github/workflows/build-git-installers.yml: jobs:
     +          # Trace execution, stop on error
     +          set -ex
     +
    -+          # Write to "version" file to force match with trigger payload version
    -+          echo "${{ needs.prereqs.outputs.tag_version }}" >>git/version
    ++          # Convert -rc to .rc to match GIT-VERSION-FILE behavior
    ++          BUILD_VERSION=$(echo "${VERSION}" | sed 's/-rc/.rc/g')
     +
     +          # Configure universal build
     +          cat >git/config.mak <<EOF
    @@ .github/workflows/build-git-installers.yml: jobs:
     +
     +          # Extract tarballs
     +          mkdir payload manpages
    -+          tar -xvf git/git-$VERSION.tar.gz -C payload
    -+          tar -xvf git/git-manpages-$VERSION.tar.gz -C manpages
    ++          tar -xvf git/git-$BUILD_VERSION.tar.gz -C payload
    ++          tar -xvf git/git-manpages-$BUILD_VERSION.tar.gz -C manpages
     +
     +          # Lay out payload
    -+          cp git/config.mak payload/git-$VERSION/config.mak
    ++          cp git/config.mak payload/git-$BUILD_VERSION/config.mak
     +          make -C git/.github/macos-installer V=1 payload
     +
     +          # Codesign payload
    -+          cp -R stage/git-universal-$VERSION/ \
    ++          cp -R stage/git-universal-$BUILD_VERSION/ \
     +            git/.github/macos-installer/build-artifacts
     +          make -C git/.github/macos-installer V=1 codesign \
     +            APPLE_APP_IDENTITY='${{ steps.signing-secrets.outputs.appsign-id }}' || die "Creating signed payload failed"
  • 165: 74f6cbf ! 165: 0ec2fa8 release: build unsigned Ubuntu .deb package

    @@ .github/workflows/build-git-installers.yml: jobs:
     +              exit 1
     +          }
     +
    -+          echo "${{ needs.prereqs.outputs.tag_version }}" >>git/version
    -+          make -C git GIT-VERSION-FILE
    -+
     +          VERSION="${{ needs.prereqs.outputs.tag_version }}"
    ++          make -C git GIT-VERSION-FILE
     +
     +          ARCH="$(dpkg-architecture -q DEB_HOST_ARCH)"
     +          if test -z "$ARCH"; then
  • 166: d2e36bf = 166: b2f3d93 release: add signing step for .deb package

  • 167: 05eb718 = 167: 230066d release: create draft GitHub release with packages & installers

  • 168: ce62988 = 168: dc9982b build-git-installers: publish gpg public key

  • 169: be89ec0 = 169: 3f594ac release: continue pestering until user upgrades

  • 170: d2efdbb = 170: 429c764 dist: archive HEAD instead of HEAD^{tree}

  • 171: b25ad8e ! 171: 598d081 release: include GIT_BUILT_FROM_COMMIT in MacOS build

    @@ Commit message
         Signed-off-by: Victoria Dye <[email protected]>
     
      ## .github/macos-installer/Makefile ##
    -@@ .github/macos-installer/Makefile: GIT_PREFIX := $(PREFIX)/git
    +@@ .github/macos-installer/Makefile: VERSION := $(shell echo $(ORIGINAL_VERSION) | sed 's/-rc/.rc/g')
      BUILD_DIR := $(GITHUB_WORKSPACE)/payload
      DESTDIR := $(PWD)/stage/git-$(ARCH_UNIV)-$(VERSION)
      ARTIFACTDIR := build-artifacts
    @@ .github/workflows/build-git-installers.yml: jobs:
      
                make -C git -j$(sysctl -n hw.physicalcpu) GIT-VERSION-FILE dist dist-doc
      
    -+          export GIT_BUILT_FROM_COMMIT=$(gunzip -c git/git-$VERSION.tar.gz | git get-tar-commit-id) ||
    ++          export GIT_BUILT_FROM_COMMIT=$(gunzip -c git/git-$BUILD_VERSION.tar.gz | git get-tar-commit-id) ||
     +            die "Could not determine commit for build"
     +
                # Extract tarballs
                mkdir payload manpages
    -           tar -xvf git/git-$VERSION.tar.gz -C payload
    +           tar -xvf git/git-$BUILD_VERSION.tar.gz -C payload
  • 172: d6f75ea ! 172: 3ad2e4b release: add installer validation

    @@ .github/workflows/build-git-installers.yml: jobs:
     +        shell: bash
     +        run: |
     +          "${{ matrix.component.command }}" --version | sed 's/git version //' >actual
    -+          echo ${{ needs.prereqs.outputs.tag_version }} >expect
    ++          echo "${{ needs.prereqs.outputs.tag_version }}" | sed 's/-rc/.rc/g' >expect
     +          cmp expect actual || exit 1
     +
     +      - name: Validate universal binary CPU architecture
  • 173: c4cd789 = 173: e0dff9c update-microsoft-git: create barebones builtin

  • 174: d339f7f = 174: 03ba50e update-microsoft-git: Windows implementation

  • 175: f440076 = 175: 56db279 update-microsoft-git: use brew on macOS

  • 176: a9aa7a1 = 176: 7aba314 .github: reinstate ISSUE_TEMPLATE.md for microsoft/git

  • 177: fa1b35b = 177: e13efec .github: update PULL_REQUEST_TEMPLATE.md

  • 178: 505b5af = 178: 9f417ec Adjust README.md for microsoft/git

  • 179: 6495d60 = 179: 3fab670 scalar: implement a minimal JSON parser

  • 180: 86b3390 = 180: 1822022 scalar clone: support GVFS-enabled remote repositories

  • 181: efd0448 = 181: 990c6b1 test-gvfs-protocol: also serve smart protocol

  • 182: 46e0e0c = 182: 790823e gvfs-helper: add the endpoint command

  • 183: bc11d59 = 183: d51400c dir_inside_of(): handle directory separators correctly

  • 184: 5c43d43 = 184: 40b4a20 scalar: disable authentication in unattended mode

  • 185: 1fd8f0e = 185: b0f6df4 scalar: do initialize gvfs.sharedCache

  • 186: ba09355 = 186: 8cda0eb scalar diagnose: include shared cache info

  • 187: e495cb1 = 187: d7fc26b scalar: only try GVFS protocol on https:// URLs

  • 188: c3707db = 188: dffce17 scalar: verify that we can use a GVFS-enabled repository

  • 189: 9ff0c42 = 189: 4b25bc2 scalar: add the cache-server command

  • 190: b310626 = 190: ccbd379 scalar: add a test toggle to skip accessing the vsts/info endpoint

  • 191: db1e76f = 191: c555193 scalar: adjust documentation to the microsoft/git fork

  • 192: e3c02f8 = 192: 8612828 scalar: enable untracked cache unconditionally

  • 193: 9e97dfb = 193: 38d57c4 scalar: parse clone --no-fetch-commits-and-trees for backwards compatibility

  • 194: f7f0d8f = 194: baeb576 scalar: make GVFS Protocol a forced choice

  • 195: 3b23370 = 195: 45fda45 scalar: work around GVFS Protocol HTTP/2 failures

  • 196: cdd3ce7 = 196: 9f6983b scalar diagnose: accommodate Scalar's Functional Tests

  • 197: 65acc85 = 197: 1ff70b3 ci: run Scalar's Functional Tests

  • 198: dc4a1ab = 198: 70305e1 scalar: upgrade to newest FSMonitor config setting

  • 199: e261d06 = 199: d17d6e0 abspath: make strip_last_path_component() global

  • 200: 87cf75d = 200: 9a52cd9 scalar: .scalarCache should live above enlistment

  • 201: b308163 = 201: 97b8530 add/rm: allow adding sparse entries when virtual

  • 202: 185e008 = 202: cd6b23c sparse-checkout: add config to disable deleting dirs

  • 203: c537d29 = 203: f2d63eb diff: ignore sparse paths in diffstat

  • 204: 9dbd34b = 204: 2b0ccf7 repo-settings: enable sparse index by default

  • 205: f550e3d = 205: 265f2d4 diff(sparse-index): verify with partially-sparse

  • 206: c54e14b = 206: ebea5eb stash: expand testing for git stash -u

  • 207: f30e5ea = 207: aaf3ac1 sparse: add vfs-specific precautions

  • 208: 1f607a6 = 208: 024a07a reset: fix mixed reset when using virtual filesystem

  • 209: 01ce2df = 209: a39df43 sparse-index: add ensure_full_index_with_reason()

  • 210: b7a3367 = 210: f73dfc2 treewide: add reasons for expanding index

  • 211: 403a160 = 211: ee986cf treewide: custom reasons for expanding index

  • 212: b9ea6d1 = 212: 16e5e46 sparse-index: add macro for unaudited expansions

  • 213: cea5ba4 = 213: 326d69e Docs: update sparse index plan with logging

  • 214: 5bc3f10 = 214: 5e40c29 sparse-index: log failure to clear skip-worktree

  • 215: 36fe2f4 = 215: 4a8e03b stash: use -f in checkout-index child process

  • 216: 5be73e6 = 216: bc37d08 sparse-index: do not copy hashtables during expansion

  • 236: aec2a62 = 217: e7e2be9 sparse-checkout: remove use of the_repository

  • 237: 12c90dd = 218: 648d5bb sparse-checkout: add basics of 'clean' command

  • 238: 8f2ab78 = 219: 73078bd sparse-checkout: match some 'clean' behavior

  • 239: 409d0fd = 220: 6781125 dir: add generic "walk all files" helper

  • 240: 67c46e9 = 221: 5908996 sparse-checkout: add --verbose option to 'clean'

  • 241: 6cd508b = 222: 2567852 sparse-index: point users to new 'clean' action

  • 242: 4b28845 = 223: aafdeed t: expand tests around sparse merges and clean

  • 243: 7d1927d = 224: 61b104b sparse-checkout: make 'clean' clear more files

  • 244: ce5f7e9 = 225: d7cb614 sparse-checkout: mark 'clean' as experimental

  • 217: d2aa8cb = 226: 74aac4e sub-process: avoid leaking cmd

  • 218: 1ece110 = 227: 5c94575 remote-curl: release filter options before re-setting them

  • 219: 56cf374 = 228: db67630 transport: release object filter options

  • 220: 9675174 = 229: 825cb83 push: don't reuse deltas with path walk

  • 221: 0b54529 = 230: a29e87e t7900-maintenance.sh: reset config between tests

  • 222: ac8b25d ! 231: 7502fb9 maintenance: add cache-local-objects maintenance task

    @@ builtin/gc.c: static int maintenance_task_incremental_repack(struct maintenance_
     +	for_each_file_in_pack_dir(r->objects->sources->path, move_pack_to_shared_cache,
     +				  dstdir.buf);
     +
    -+	for_each_loose_object(move_loose_object_to_shared_cache, NULL,
    ++	for_each_loose_object(r->objects, move_loose_object_to_shared_cache, NULL,
     +			      FOR_EACH_OBJECT_LOCAL_ONLY);
     +
     +cleanup:
  • 223: 55fa13e = 232: 95eb365 scalar.c: add cache-local-objects task

  • 224: 9fe41c7 = 233: 40ff696 git.c: add VFS enabled cmd blocking

  • 225: be98a92 = 234: 78a886d git.c: permit repack cmd in Scalar repos

  • 226: 60e224f = 235: 0d1e58f git.c: permit fsck cmd in Scalar repos

  • 227: 23e538c = 236: 9d038b5 git.c: permit prune cmd in Scalar repos

  • 228: a274f54 ! 237: f8208e9 worktree: remove special case GVFS cmd blocking

    @@ git.c: static struct cmd_struct commands[] = {
      #ifndef WITH_BREAKING_CHANGES
      	{ "whatchanged", cmd_whatchanged, RUN_SETUP },
      #endif
    --	{ "worktree", cmd_worktree, RUN_SETUP },
    +-	{ "worktree", cmd_worktree, RUN_SETUP | BLOCK_ON_GVFS_REPO },
     +	{ "worktree", cmd_worktree, RUN_SETUP | BLOCK_ON_VFS_ENABLED },
      	{ "write-tree", cmd_write_tree, RUN_SETUP },
      };
  • 229: a60f2bd = 238: 24ee85e builtin/repack.c: emit warning when shared cache is present

  • 230: cec3ded = 239: f815816 hooks: add custom post-command hook config

  • 231: 39ecccb = 240: 01797da Docs: fix asciidoc failures from short delimiters

  • 232: 9eefb36 = 241: 834836e hooks: make hook logic memory-leak free

  • 233: 9234140 = 242: d855545 t5309: create failing test for 'git index-pack'

  • 234: 114b9f6 = 243: e5ed366 gvfs-helper: pass long values where expected

  • 235: d044410 = 244: d750683 gvfs-helper-client: clean up server process(es)

  • 245: 54bbd4a < -: ----------- GIT-VERSION-GEN: update for 2.51.0-rc0

  • 246: 546bdc7 < -: ----------- fixup! release: create initial Windows installer build workflow

  • 247: 4ed59e7 < -: ----------- fixup! release: add Mac OSX installer build

  • 248: 730a8e5 < -: ----------- fixup! release: include GIT_BUILT_FROM_COMMIT in MacOS build

  • 249: 0e6f69a < -: ----------- fixup! release: build unsigned Ubuntu .deb package

  • 250: cede95d < -: ----------- fixup! release: add installer validation

  • 251: 40801ae < -: ----------- fixup! fixup! release: add Mac OSX installer build

  • 252: c4a7966 < -: ----------- fixup! fixup! release: build unsigned Ubuntu .deb package

  • 253: f4ca31d < -: ----------- fixup! fixup! fixup! release: add Mac OSX installer build

Basically, some non-trivial mechanics were required to accommodate more upstream refactoring that avoids `the_repository` in more places (including the introduction of `odb_source`).
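
For reference, the core of that `odb_source` dance (see the `pack_loose()` hunks in the range-diff above) boils down to the following pattern. This is a condensed sketch based on those hunks, not the verbatim result; error handling is abbreviated, and `shared_object_dir` stands for the `gvfs.sharedCache` path, if configured:

```c
struct odb_source *prev_source = NULL;
int result = 0;

if (shared_object_dir)
	/* Temporarily promote the shared cache to the primary source. */
	prev_source = odb_set_temporary_primary_source(r->objects,
						       shared_object_dir, 0);

/* ... spawn `git pack-objects` and feed it the loose objects ... */

if (shared_object_dir)
	/* Restore the original primary source on every exit path. */
	odb_restore_primary_source(r->objects, prev_source,
				   shared_object_dir);
return result;
```

The notable mechanical burden is that the restore call has to be repeated on each early-return path, which is why the same three-line stanza shows up several times in the diff.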


dscho commented Aug 9, 2025

We probably want to add a test case to verify that I transmogrified 9150f1d correctly...
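
A minimal sketch of what such a test could look like (hypothetical: it assumes a microsoft/git build in which `gvfs.sharedCache` redirects maintenance writes to the shared object directory, and it uses the stock t/ test framework):

```sh
test_expect_success 'loose-objects task packs into the shared object cache' '
	test_when_finished "rm -rf shared.git" &&
	git init --bare shared.git &&
	git config gvfs.sharedCache "$(pwd)/shared.git/objects" &&

	# create some loose objects, then run the maintenance task
	test_commit loose-me &&
	git maintenance run --task=loose-objects &&

	# the resulting pack should land in the shared cache, not locally
	ls shared.git/objects/pack/*.pack
'
```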
