Commit af3abef

diffcore-delta.c: update the comment on the algorithm.
The comment at the top of the file described an old algorithm that was
neutral to text/binary differences (it hashed a sliding window of N-byte
sequences and counted overlaps), but a long time ago we switched to a new
heuristic that is more suitable for line-oriented (read: text) files and
is much faster.

Signed-off-by: Junio C Hamano <[email protected]>
1 parent 706098a


diffcore-delta.c

Lines changed: 9 additions & 12 deletions
@@ -5,23 +5,20 @@
 /*
  * Idea here is very simple.
  *
- * We have total of (sz-N+1) N-byte overlapping sequences in buf whose
- * size is sz. If the same N-byte sequence appears in both source and
- * destination, we say the byte that starts that sequence is shared
- * between them (i.e. copied from source to destination).
+ * Almost all data we are interested in are text, but sometimes we have
+ * to deal with binary data. So we cut them into chunks delimited by
+ * LF byte, or 64-byte sequence, whichever comes first, and hash them.
  *
- * For each possible N-byte sequence, if the source buffer has more
- * instances of it than the destination buffer, that means the
- * difference are the number of bytes not copied from source to
- * destination. If the counts are the same, everything was copied
- * from source to destination. If the destination has more,
- * everything was copied, and destination added more.
+ * For those chunks, if the source buffer has more instances of it
+ * than the destination buffer, that means the difference are the
+ * number of bytes not copied from source to destination. If the
+ * counts are the same, everything was copied from source to
+ * destination. If the destination has more, everything was copied,
+ * and destination added more.
  *
  * We are doing an approximation so we do not really have to waste
  * memory by actually storing the sequence. We just hash them into
  * somewhere around 2^16 hashbuckets and count the occurrences.
- *
- * The length of the sequence is arbitrarily set to 8 for now.
  */

 /* Wild guess at the initial hash size */
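The new comment describes the whole heuristic: cut the buffer into chunks
ended by an LF byte or a 64-byte limit, hash each chunk into roughly 2^16
buckets, and compare per-bucket counts between source and destination. The
sketch below is a minimal illustration of that idea, not the actual
diffcore-delta.c code: the function names count_chunks and count_copied,
the FNV-1a hash, and the exact bucket count are assumptions made here, and
it counts chunks per bucket where the file's comment speaks of
approximating copied bytes.

	#include <stddef.h>
	#include <stdint.h>
	#include <string.h>

	#define HASHBITS 16
	#define NBUCKETS (1u << HASHBITS)	/* "somewhere around 2^16 hashbuckets" */
	#define MAXCHUNK 64			/* cut at 64 bytes if no LF comes first */

	/*
	 * Cut buf (of size sz) into chunks delimited by an LF byte or a
	 * 64-byte limit, whichever comes first; hash each chunk and count
	 * how many chunks land in each bucket.
	 */
	void count_chunks(const unsigned char *buf, size_t sz,
			  unsigned int counts[NBUCKETS])
	{
		size_t i = 0;

		memset(counts, 0, NBUCKETS * sizeof(counts[0]));
		while (i < sz) {
			uint32_t hash = 2166136261u;	/* FNV-1a offset basis */
			size_t n = 0;

			while (i < sz && n < MAXCHUNK) {
				unsigned char ch = buf[i++];
				hash = (hash ^ ch) * 16777619u;	/* FNV-1a step */
				n++;
				if (ch == '\n')
					break;	/* LF ends the chunk early */
			}
			counts[hash & (NBUCKETS - 1)]++;
		}
	}

	/*
	 * Per bucket, min(src, dst) occurrences are considered copied
	 * from source to destination; a source-side surplus is what was
	 * not copied, a destination-side surplus is added material.
	 */
	unsigned long count_copied(const unsigned int src[NBUCKETS],
				   const unsigned int dst[NBUCKETS])
	{
		unsigned long copied = 0;
		size_t b;

		for (b = 0; b < NBUCKETS; b++)
			copied += src[b] < dst[b] ? src[b] : dst[b];
		return copied;
	}

Taking min(src, dst) per bucket captures the three cases the comment lists:
a source surplus is material not copied, equal counts mean everything was
copied, and a destination surplus means everything was copied and the
destination added more.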
