filter-repo: avoid failures with LFS objects and submodules
When filtering LFS objects with --sensitive-data-removal, we look at the
sizes of all blobs and read the contents of those that are small enough.
That's fine, but we automatically assume all the FileChanges in a commit
are blobs, when some might be submodules. Since submodule oids are
unlikely to exist in the current repository, this leads to tracebacks of
the form
...
File "/path/to/git-filter-repo", line 1115, in _parse_optional_filechange
self._lfs_object_tracker.check_file_change_data(value, True)
File "/path/to/git-filter-repo", line 3026, in check_file_change_data
size = self.file_info.get_size_by_identifier(git_id)
File "/path/to/git-filter-repo", line 2956, in get_size_by_identifier
(oid, oidtype, size) = line.split()
ValueError: not enough values to unpack (expected 3, got 2)
fatal: stream ends early
fast-import: dumping crash report to .git/fast_import_crash_774517
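The unpack failure above follows directly from the output format of `git cat-file --batch-command`: for an object that exists, an `info` request prints `<oid> <type> <size>` (three fields), but for an oid not present in the repository it prints `<oid> missing` (two fields). A minimal reproduction of the parse in question, with `parse_info_line` as an illustrative stand-in for the code at `get_size_by_identifier`:

```python
def parse_info_line(line):
  """Parse one `git cat-file --batch-command` info line into
  (oid, type, size); mirrors the three-way unpack that fails."""
  (oid, oidtype, size) = line.split()
  return (oid, oidtype, int(size))

# A blob that exists: three fields, parses fine.
# (e69de29... is git's well-known empty-blob oid.)
parse_info_line("e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 blob 0")

# A submodule commit oid, absent from this repository: cat-file
# reports "<oid> missing", only two fields, so the unpack raises
# "ValueError: not enough values to unpack (expected 3, got 2)".
try:
  parse_info_line("1234567890123456789012345678901234567890 missing")
except ValueError as err:
  print(err)
```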
Add some checks to avoid these problems so that we don't query `git
cat-file --batch-command` for these oids.
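The guard boils down to inspecting the FileChange's mode before issuing a size query. In the fast-import stream a submodule entry (gitlink) carries mode 160000, and its oid names a commit in the submodule's own repository, so there is nothing to look up locally. A hedged sketch of that check, with illustrative names rather than filter-repo's actual internals:

```python
GITLINK_MODE = b"160000"  # fast-import mode for a submodule (gitlink) entry

def should_query_size(mode, oid):
  """Return True only for entries whose oid should exist in this
  repository, i.e. ones safe to send to `git cat-file --batch-command`."""
  if mode == GITLINK_MODE:
    return False  # submodule commit oid; lives in another repository
  if oid.startswith(b":"):
    return False  # a fast-import mark (e.g. b":1"), not a real oid
  return True

# Regular blob: fine to query its size.
print(should_query_size(b"100644", b"e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"))
# Submodule entry: skip the query entirely.
print(should_query_size(b"160000", b"1234567890123456789012345678901234567890"))
```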
Signed-off-by: Elijah Newren <[email protected]>