Age  Commit message  Author  Files  Lines
2024-11-15  t5332-multi-pack-reuse.sh: demonstrate duplicate packing failure  (Taylor Blau; 1 file, -0/+22)

In the multi-pack reuse code, there are two paths for reusing the on-disk representation of an object, handled by:

- builtin/pack-objects.c::write_reused_pack_one()
- builtin/pack-objects.c::write_reused_pack_verbatim()

The former is responsible for copying the bytes for a single object out of an existing source pack. The latter does the same but for a region of objects aligned at eword_t boundaries.

Demonstrate a bug whereby write_reused_pack_verbatim() can be tricked into writing out objects from some source pack, even when those objects were selected from a different source pack in the MIDX bitmap. When the caller wants at least one of the objects in that region, pack-objects will write the same object twice as a result of this bug. In the other case where the caller doesn't want any of the objects in the region of interest, we will write out objects that weren't requested.

Demonstrate this bug by creating two packs, where the preferred one of those packs contains a single object which also appears in the main (non-preferred) pack. A separate bug[^1] prevents us from triggering the main bug when the duplicated object is the last one in the main pack, but any earlier object will suffice.

We could fix that separate bug, but the following commit will simplify write_reused_pack_verbatim() and only call it on the preferred pack, so doing so would have little point.

[^1]: Because write_reused_pack_verbatim() only reuses bits in the range

    off_t pack_start_off = pack_pos_to_offset(reuse_packfile->p, 0);
    off_t pack_end_off = pack_pos_to_offset(reuse_packfile->p,
                                            pos - reuse_packfile->bitmap_pos);

    written += pos - reuse_packfile->bitmap_pos;

    /* We're recording one chunk, not one object. */
    record_reused_object(pack_start_off,
                         pack_start_off - (hashfile_total(out) - pack_start));

, or in other words excluding the object beginning at position 'pos - reuse_packfile->bitmap_pos' in the source pack. But since reuse_packfile->bitmap_pos is '1' in the non-preferred pack (accounting for the single-object pack which is preferred), we don't actually copy the bytes from the last object.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2024-11-15  refs: don't invoke reference-transaction hook for reflogs  (Karthik Nayak; 2 files, -2/+3)

The reference-transaction hook is invoked whenever there is a reference update being performed. For each state of the transaction, we iterate over the updates present and pass this information to the hook.

The `ref_update` structure is used to hold these updates within a `transaction`. We use the same structure for holding reflog updates too, which means that the reference-transaction hook also obtains information about a reflog update. This is a bug, since:

- The hook is designed to work with reference updates, and reflog updates are different.
- The hook doesn't have the required information to distinguish reference updates from reflog updates.

This is particularly evident when the default branch (pointed to by HEAD) is updated: we see that the hook also receives information about HEAD being changed. In reality, we only add a reflog update for HEAD, while HEAD's value remains the same.

Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2024-11-14  test-lib: move malloc-debug setup after $PATH setup  (Jeff King; 1 file, -50/+50)

Originally, the conditional definition of the setup/teardown functions for malloc checking could be run at any time, because they depended only on command-line options and the system getconf function.

But since 02d900361c (test-lib: check malloc debug LD_PRELOAD before using, 2024-11-11), we probe the system by running "git version". Since this code runs before we've set $PATH to point to the version of Git we intend to test, we actually run the system version of git. This mostly works, since what we really care about is whether the LD_PRELOAD works, and it should work the same with any program. But there are some corner cases:

1. You might not have a system git at all, in which case the preload will appear to fail, even though it could work with the actual built version of git.

2. Your system git could be linked in a different way. For example, if it was built statically, then it will ignore LD_PRELOAD entirely, and we might assume that the preload works, even though it might not when used with a dynamic build.

We could give a more complete path to the version of Git we intend to test, but features like GIT_TEST_INSTALLED make that not entirely trivial. So instead, let's just bump the setup until after we've set up the $PATH. There's no need for us to do it early, as long as it is done before the first test runs.

Reported-by: Toon Claes <toon@iotcl.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2024-11-13  The ninth batch  (Junio C Hamano; 1 file, -0/+15)
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-11-13  Merge branch 'ps/mingw-rename'  (Junio C Hamano; 2 files, -9/+156)

The MinGW compatibility layer has been taught to support POSIX semantics for atomic renames when other process(es) have a file opened at the destination path.

* ps/mingw-rename:
  compat/mingw: support POSIX semantics for atomic renames
  compat/mingw: allow deletion of most opened files
  compat/mingw: share file handles created via `CreateFileW()`

2024-11-13  Merge branch 'jt/commit-graph-missing'  (Junio C Hamano; 2 files, -22/+24)

A regression where commit objects missing from a commit-graph can cause an infinite loop when doing a fetch in a partial clone has been fixed.

* jt/commit-graph-missing:
  fetch-pack: die if in commit graph but not obj db
  Revert "fetch-pack: add a deref_without_lazy_fetch_extended()"

2024-11-13  Merge branch 'en/shallow-exclude-takes-a-ref-fix'  (Junio C Hamano; 8 files, -10/+21)

The "--shallow-exclude=<ref>" option to various history transfer commands takes a ref, not an arbitrary revision.

* en/shallow-exclude-takes-a-ref-fix:
  doc: correct misleading descriptions for --shallow-exclude
  upload-pack: fix ambiguous error message

2024-11-13  Merge branch 'ak/t1016-style'  (Junio C Hamano; 1 file, -132/+130)

Test modernization.

* ak/t1016-style:
  t1016: clean up style

2024-11-13  Merge branch 'ps/leakfixes-part-9'  (Junio C Hamano; 38 files, -35/+115)

More leakfixes.

* ps/leakfixes-part-9: (22 commits)
  list-objects-filter-options: work around reported leak on error
  builtin/merge: release output buffer after performing merge
  dir: fix leak when parsing "status.showUntrackedFiles"
  t/helper: fix leaking buffer in "dump-untracked-cache"
  t/helper: stop re-initialization of `the_repository`
  sparse-index: correctly free EWAH contents
  dir: release untracked cache data
  combine-diff: fix leaking lost lines
  builtin/tag: fix leaking key ID on failure to sign
  transport-helper: fix leaking import/export marks
  builtin/commit: fix leaking cleanup config
  trailer: fix leaking strbufs when formatting trailers
  trailer: fix leaking trailer values
  builtin/commit: fix leaking change data contents
  upload-pack: fix leaking URI protocols
  pretty: clear signature check
  diff-lib: fix leaking diffopts in `do_diff_cache()`
  revision: fix leaking bloom filters
  builtin/grep: fix leak with `--max-count=0`
  grep: fix leak in `grep_splice_or()`
  ...

2024-11-13  builtin/difftool: initialize some hashmap variables  (Simon Marchi; 2 files, -4/+6)

When running a dir-diff command that produces no diff, variables `wt_modified` and `tmp_modified` are used while uninitialized, causing:

    $ /home/smarchi/src/git/git-difftool --dir-diff master
    free(): invalid pointer
    [1]    334004 IOT instruction (core dumped)  /home/smarchi/src/git/git-difftool --dir-diff master

    $ valgrind --track-origins=yes /home/smarchi/src/git/git-difftool --dir-diff master
    ...
    Invalid free() / delete / delete[] / realloc()
       at 0x48478EF: free (vg_replace_malloc.c:989)
       by 0x422CAC: hashmap_clear_ (hashmap.c:208)
       by 0x283830: run_dir_diff (difftool.c:667)
       by 0x284103: cmd_difftool (difftool.c:801)
       by 0x238E0F: run_builtin (git.c:484)
       by 0x2392B9: handle_builtin (git.c:750)
       by 0x2399BC: cmd_main (git.c:921)
       by 0x356FEF: main (common-main.c:64)
     Address 0x1ffefff180 is on thread 1's stack
     in frame #2, created by run_dir_diff (difftool.c:358)
    ...

If taking any `goto finish` path before these variables are initialized, `hashmap_clear_and_free()` operates on uninitialized data, sometimes causing a crash.

This regression was introduced in 7f795a1715 (builtin/difftool: plug several trivial memory leaks, 2024-09-26).

Fix it by initializing those variables with the `HASHMAP_INIT` macro.

Add a test comparing the main branch to itself, resulting in no diff.

Signed-off-by: Simon Marchi <simon.marchi@efficios.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

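The failure mode here is generic C: running a cleanup routine over a struct whose memory was never initialized. The following is a minimal, self-contained sketch of that pattern with hypothetical names (it is not Git's actual hashmap code), showing why a designated-initializer macro like the one named above matters:

    #include <stdlib.h>
    #include <string.h>

    /* Toy stand-in for a hashmap-like container. */
    struct table {
    	void **entries;
    	size_t nr, alloc;
    };

    /* Analogous to HASHMAP_INIT: guarantees a well-defined empty state. */
    #define TABLE_INIT { NULL, 0, 0 }

    static void table_clear(struct table *t)
    {
    	free(t->entries);          /* undefined behavior if *t was never initialized */
    	memset(t, 0, sizeof(*t));
    }

    int main(void)
    {
    	struct table ok = TABLE_INIT;   /* clear() below sees NULL/0 and is a no-op */
    	struct table bad;               /* stack garbage, like the pre-fix variables */

    	table_clear(&ok);
    	/* table_clear(&bad); */        /* would free an indeterminate pointer */
    	(void)bad;
    	return 0;
    }
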
2024-11-12  refspec: store raw refspecs inside refspec_item  (Jeff King; 5 files, -31/+19)

The refspec struct keeps two matched arrays: one for the refspec_item structs and one for the original raw refspec strings. The main reason for this is that there are other users of refspec_item that do not care about the raw strings. But it does make managing the refspec struct awkward, as we must keep the two arrays in sync. This has led to bugs in the past (both leaks and double-frees).

Let's just store a copy of the raw refspec string directly in each refspec_item struct. This simplifies the handling at a small cost:

1. Direct callers of refspec_item_init() will now get an extra copy of the refspec string, even if they don't need it. This should be negligible, as the struct is already allocating two strings for the parsed src/dst values (and we tend to only do it sparingly anyway for things like the TAG_REFSPEC literal).

2. Users of refspec_appendf() will now generate a temporary string, copy it, and then free the result (versus handing off ownership of the temporary string). We could get around this by having a "nodup" variant of refspec_item_init(), but it doesn't seem worth the extra complexity for something that is not remotely a hot code path.

Code which accesses refspec->raw now needs to look at refspec->item.raw. Other callers which just use refspec_item directly can remain the same. We'll free the allocated string in refspec_item_clear(), which they should be calling anyway to free src/dst.

One subtle note: refspec_item_init() can return an error, in which case we'll still have set its "raw" field. But that is also true of the "src" and "dst" fields, so any caller which does not _clear() the failed item is already potentially leaking. In practice most code just calls die() on an error anyway, but you can see the exception in valid_fetch_refspec(), which does correctly call _clear() even on error.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

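In miniature, and with hypothetical names rather than Git's actual refspec API, the ownership model described above looks roughly like this:

    #include <stdlib.h>
    #include <string.h>

    /* Toy stand-in for refspec_item: each item owns copies of all its strings. */
    struct item {
    	char *src;
    	char *dst;
    	char *raw;   /* the original spec, now stored per item */
    };

    static int item_init(struct item *it, const char *spec)
    {
    	memset(it, 0, sizeof(*it));
    	it->raw = strdup(spec);   /* extra copy, but cheap next to src/dst */
    	/* ... parsing would fill src/dst here; on error, raw may already be set ... */
    	return it->raw ? 0 : -1;
    }

    static void item_clear(struct item *it)
    {
    	free(it->src);
    	free(it->dst);
    	free(it->raw);            /* callers must _clear() even after a failed init */
    	memset(it, 0, sizeof(*it));
    }

    int main(void)
    {
    	struct item it;
    	if (!item_init(&it, "refs/heads/*:refs/remotes/origin/*"))
    		item_clear(&it);
    	return 0;
    }
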
2024-11-12  refspec: drop separate raw_nr count  (Jeff King; 5 files, -15/+14)

A refspec struct contains zero or more refspec_item structs, along with matching "raw" strings. The items and raw strings are kept in separate arrays, but those arrays will always have the same length (because we write them only via refspec_append_nodup(), which grows both).

This can lead to bugs when manipulating the array, since the arrays and lengths must be modified in lockstep. For example, the bug fixed in the previous commit, which forgot to decrement raw_nr.

So let's get rid of "raw_nr" and have only "nr", making this kind of bug impossible (and also making it clear that the two are always matched, something that existing code already assumed but was not guaranteed by the interface).

Even though we'd expect "alloc" and "raw_alloc" to likewise move in lockstep, we still need to keep separate counts there if we want to continue to use ALLOC_GROW() for both.

Conceptually this would all be simpler if refspec_item just held onto its own raw string, and we had a single array. But there are callers which use refspec_item outside of "struct refspec" (and so don't hold on to a matching "raw" string at all), which we'd possibly need to adjust. So let's not worry about refactoring that for now, and just get rid of the redundant count variable. That is the first step on the road to combining them anyway.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2024-11-12  fetch: adjust refspec->raw_nr when filtering prefetch refspecs  (Jeff King; 2 files, -0/+5)

In filter_prefetch_refspecs(), we may remove one or more refspecs if they point into refs/tags/. When we do, we remove the item from the refspec->items array, shifting subsequent items down, and then decrement the refspec->nr count. We also remove the item from the refspec->raw array, but fail to decrement refspec->raw_nr. This leaves us with a count that is too high, and anybody looking at the "raw" array will erroneously see either:

1. The removed entry, if there were no subsequent items to shift down.

2. A duplicate of the final entry, as everything is shifted down but there was nothing to overwrite the final item.

The obvious culprit to run into this is calling refspec_clear(), which will try to free the removed entry (case 1) or double-free the final entry (case 2). But even though the bug has existed since the function was added in 2e03115d0c (fetch: add --prefetch option, 2021-04-16), we did not trigger it in the test suite.

The --prefetch option is normally only used with configured refspecs, and we never bother to call refspec_clear() on those (they are stored as part of a struct remote, which is held in a global variable). But you could trigger case 2 manually like:

    git fetch --prefetch . refs/tags/foo refs/tags/bar

Ironically you couldn't trigger case 1, because the code accidentally leaked the string in the raw array, and the two bugs (the leak and the double-free) cancelled out. But when we fixed the leak in ea4780307c (fetch: free "raw" string when shrinking refspec, 2024-09-24), it became possible to trigger that, too, with a single item:

    git fetch --prefetch . refs/tags/foo

We can fix both cases by just correctly decrementing "raw_nr" when we shrink the array. Even though we don't expect people to use --prefetch with command-line refspecs, we'll add a test to make sure it behaves well (like the test just before it, we're just confirming that the filtered prefetch succeeds at all).

Reported-by: Eric Mills <ermills@epic.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

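The underlying pattern — two parallel arrays that must shrink together — is easy to get wrong. A small self-contained sketch with hypothetical names (not the actual fetch code) shows the lockstep removal:

    #include <stdlib.h>
    #include <string.h>

    /* Toy stand-in for "struct refspec": two parallel arrays, one shared count. */
    struct specs {
    	int *items;     /* parsed entries */
    	char **raw;     /* matching raw strings */
    	size_t nr;
    };

    /* Remove entry i, keeping both arrays and the single count in lockstep;
     * the original bug was decrementing "nr" but not a separate "raw_nr". */
    static void specs_remove(struct specs *s, size_t i)
    {
    	free(s->raw[i]);
    	memmove(&s->items[i], &s->items[i + 1],
    		(s->nr - i - 1) * sizeof(*s->items));
    	memmove(&s->raw[i], &s->raw[i + 1],
    		(s->nr - i - 1) * sizeof(*s->raw));
    	s->nr--;
    }

    int main(void)
    {
    	struct specs s = { calloc(2, sizeof(int)), calloc(2, sizeof(char *)), 2 };

    	s.raw[0] = strdup("refs/tags/foo");
    	s.raw[1] = strdup("refs/heads/bar");

    	specs_remove(&s, 0);    /* drop the tag refspec, as the prefetch filter does */

    	free(s.raw[0]);
    	free(s.items);
    	free(s.raw);
    	return 0;
    }
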
2024-11-12  index-pack: repack local links into promisor packs  (Jonathan Tan; 4 files, -2/+172)

Teach index-pack to, when processing the objects in a pack with --promisor specified on the CLI, repack local objects (and the local objects that they refer to, recursively) referenced by these objects into promisor packs.

This prevents the situation in which, when fetching from a promisor remote, we end up with promisor objects (newly fetched) referring to non-promisor objects (locally created prior to the fetch). This situation may arise if the client had previously pushed objects to the remote, for example. One issue that arises in this situation is that, if the non-promisor objects become inaccessible except through promisor objects (for example, if the branch pointing to them has moved to point to the promisor object that refers to them), then GC will garbage collect them. There are other ways to solve this, but the simplest seems to be to enforce the invariant that we don't have promisor objects referring to non-promisor objects.

This repacking is done from index-pack to minimize the performance impact. During a fetch, the only time most objects are fully inflated in memory is when their object ID is computed, so we also scan the objects (to see which objects they refer to) during this time.

Also to minimize the performance impact, an object is calculated to be local if it's a loose object or present in a non-promisor pack. (If it's also in a promisor pack or referred to by an object in a promisor pack, it is technically already a promisor object. But a misidentification of a promisor object as a non-promisor object is relatively benign here - we will thus repack that promisor object into a promisor pack, duplicating it in the object store, but there is no correctness issue, just an issue of inefficiency.)

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2024-11-12  doc: git-add.txt: convert to new style convention  (Jean-Noël Avila; 2 files, -62/+68)
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-11-12  test-lib: check malloc debug LD_PRELOAD before using  (Jeff King; 1 file, -2/+5)

This fixes test failures across the suite on glibc platforms that don't have libc_malloc_debug.so.0.

We added support for glibc's malloc checking routines long ago in a731fa916e (Add MALLOC_CHECK_ and MALLOC_PERTURB_ libc env to the test suite for detecting heap corruption, 2012-09-14). Back then we didn't need to do any checks to see if the platform supported it. We were just setting some environment variables which would either enable it or not.

That changed in 131b94a10a (test-lib.sh: Use GLIBC_TUNABLES instead of MALLOC_CHECK_ on glibc >= 2.34, 2022-03-04). Now that glibc split this out into libc_malloc_debug.so, we have to add it to LD_PRELOAD. We only do that when we detect glibc, but it's possible to have glibc but not the malloc debug library. In that case LD_PRELOAD will complain to stderr, and tests which check for an empty stderr will fail.

You can work around this by setting TEST_NO_MALLOC_CHECK, which disables the feature entirely. But it's not obvious to know you need to do that. Instead, since this malloc checking is best-effort anyway, let's just automatically disable it when the LD_PRELOAD appears not to work. We can check it by running something simple that should work (and produce nothing on stderr) like "git version".

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2024-11-11  Merge https://github.com/j6t/git-gui  (Junio C Hamano; 2 files, -3/+29)

* https://github.com/j6t/git-gui:
  git gui: add directly calling merge tool from configuration
  git-gui: strip commit messages less aggressively
  git-gui: strip comments and consecutive empty lines from commit messages

2024-11-11  t5300: add test for 'show-index --object-format'  (Abhijeet Sonar; 1 file, -0/+14)

In 88a09a557c (builtin/show-index: provide options to determine hash algo), the flag --object-format was added to the show-index builtin as a way to provide a hash algorithm explicitly. However, we do not have tests in place for that functionality. Add them.

Signed-off-by: Abhijeet Sonar <abhijeet.nkt@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2024-11-11  show-index: fix uninitialized hash function  (Abhijeet Sonar; 2 files, -0/+13)

In c8aed5e8da (repository: stop setting SHA1 as the default object hash), we got rid of the default hash algorithm for the_repository. Due to this change, it is now the responsibility of the callers to set their own default when this is not present.

As stated in the docs, show-index should use SHA1 as the default hash algorithm when run outside a repository. Make sure this promise is met by falling back to SHA1 when the_hash_algo is not present (i.e. when the command is run outside a repository). Also add a test that verifies this behavior.

Signed-off-by: Abhijeet Sonar <abhijeet.nkt@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2024-11-09  Merge branch 'ob/strip-comments-on-commit'  (Johannes Sixt; 1 file, -1/+10)

* ob/strip-comments-on-commit:
  git-gui: strip commit messages less aggressively
  git-gui: strip comments and consecutive empty lines from commit messages

2024-11-09  Merge branch 'tb/mergetool-from-config'  (Johannes Sixt; 1 file, -2/+19)

* tb/mergetool-from-config:
  git gui: add directly calling merge tool from configuration

2024-11-08  The eighth batch  (Junio C Hamano; 1 file, -0/+15)
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-11-08  Merge branch 'jk/left-right-bitmap'  (Junio C Hamano; 2 files, -0/+19)

When called with '--left-right' and '--use-bitmap-index', 'rev-list' will produce output without any left/right markers, which has been corrected.

* jk/left-right-bitmap:
  rev-list: skip bitmap traversal for --left-right

2024-11-08  Merge branch 'ps/upgrade-clar'  (Junio C Hamano; 14 files, -166/+219)

Buildfix and upgrade of Clar to a newer version.

* ps/upgrade-clar:
  cmake: set up proper dependencies for generated clar headers
  cmake: fix compilation of clar-based unit tests
  Makefile: extract script to generate clar declarations
  Makefile: adjust sed command for generating "clar-decls.h"
  t/unit-tests: update clar to 206accb

2024-11-08  Merge branch 'cw/config-extensions'  (Junio C Hamano; 6 files, -64/+62)

Centralize documentation for repository extensions into a single place.

* cw/config-extensions:
  doc: consolidate extensions in git-config documentation

2024-11-08  Merge branch 'kn/ci-clang-format-tidy'  (Junio C Hamano; 1 file, -6/+10)

Updates the '.clang-format' to match project conventions.

* kn/ci-clang-format-tidy:
  clang-format: align consecutive macro definitions
  clang-format: re-adjust line break penalties

2024-11-08  Merge branch 'kn/arbitrary-suffixes'  (Junio C Hamano; 1 file, -0/+14)

Update the project's CodingGuidelines to discourage naming functions with a "_1()" suffix.

* kn/arbitrary-suffixes:
  CodingGuidelines: discourage arbitrary suffixes in function names

2024-11-07  describe: stop traversing when we run out of names  (Jeff King; 1 file, -0/+2)

When trying to describe a commit, we'll traverse from the commit, collecting candidate tags that point to its ancestors. But once we've seen all of the tags in the repo, there's no point in traversing further. There's nothing left to find!

For a default "git describe", this isn't usually a big problem. In a large repo you'll probably have multiple tags, so we'll eventually find 10 candidates (the default for max_candidates) and stop there. And in a small repo, it's quick to traverse to the root.

But you can imagine a large repo with few tags. Or, as we saw in a real world case, explicitly limiting the set of matches like this (on linux.git):

    git describe --match=v6.12-rc4 HEAD

which goes all the way to the root before realizing that no, there are no other tags under consideration besides the one we fed via --match.

If we add in "--candidates=1" there, it's much faster (at least as of the previous commit). But we should be able to speed this up without the user asking for it. After expanding all matching tags, we know the total number of names. We could just stop the traversal there, but as hinted at above we already have a mechanism for doing that: the max_candidate limit. So we can just reduce that limit to match the number of possible candidates.

Our p6100 test shows this off:

    Test                                            HEAD^             HEAD
    ---------------------------------------------------------------------------------------
    6100.2: describe HEAD                           0.71(0.65+0.06)   0.72(0.68+0.04) +1.4%
    6100.3: describe HEAD with one max candidate    0.01(0.00+0.00)   0.01(0.00+0.00) +0.0%
    6100.4: describe HEAD with one tag              0.72(0.66+0.05)   0.01(0.00+0.00) -98.6%

Now we are fast automatically, just as if --candidates=1 were supplied by the user.

Reported-by: Josh Poimboeuf <jpoimboe@kernel.org>
Helped-by: Rasmus Villemoes <ravi@prevas.dk>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

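The gist of the optimization — clamp the candidate limit to the number of names that can possibly match — in a self-contained toy form (hypothetical names and numbers, not builtin/describe.c itself):

    #include <stdio.h>

    #define DEFAULT_MAX_CANDIDATES 10

    /*
     * Toy model of the walk: we stop as soon as "limit" candidate tags have
     * been collected.  Clamping the limit to the number of names that can
     * possibly match means a --match pattern covering a single tag ends the
     * traversal at the first hit instead of walking to the root.
     */
    static long walk(long tagged_every, long history_len, int names, int limit)
    {
    	long commits = 0;
    	int found = 0;

    	if (limit > names)
    		limit = names;    /* the clamp described above */

    	while (commits < history_len && found < limit) {
    		commits++;
    		if (commits % tagged_every == 0)
    			found++;
    	}
    	return commits;
    }

    int main(void)
    {
    	/* one matching name (think --match=v6.12-rc4) in a long history */
    	printf("commits walked: %ld\n",
    	       walk(1000, 1000000, 1, DEFAULT_MAX_CANDIDATES));
    	return 0;
    }
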
2024-11-07  describe: stop digging for max_candidates+1  (Jeff King; 1 file, -7/+8)

By default, describe considers only 10 candidate matches, and stops traversing when we have enough. This makes things much faster in a large repository, where collecting all candidates requires walking all the way down to the root (or at least to the oldest tag). This goes all the way back to 8713ab3079 (Improve git-describe performance by reducing revision listing., 2007-01-13).

However, we don't stop immediately when we have enough candidates. We keep traversing and only bail when we find one more candidate that we're ignoring. Usually this is not too expensive, if the tags are sprinkled evenly throughout history. But if you are unlucky, you might hit the max candidate quickly, and then have a huge swath of history before finding the next one.

Our p6100 test has exactly this unlucky case: with a max of "1", we find a recent tag quickly and then have to go all the way to the root to find the old tag that will be discarded. A more interesting real-world case is:

    git describe --candidates=1 --match=v6.12-rc4 HEAD

in the linux.git repo. There we restrict the set of tags to a single one, so there is no older candidate to find at all! But despite --candidates=1, we keep traversing to the root only to find nothing.

So why do we keep traversing after hitting the max? There are two reasons I can see:

1. In theory the extra information that there was another candidate could be useful, and we record it in the gave_up_on variable. But we only show this information with --debug.

2. After finding the candidate, there's more processing we do in our loop. The most important of this is propagating the "within" flags to our parent commits, and putting them in the commit_list we'll use for finish_depth_computation(). That function continues the traversal until we've counted all commits reachable from the starting point but not reachable from our best candidate tag (so essentially counting "$tag..$start", but avoiding re-walking over the bits we've seen). If we break immediately without putting those commits into the list, our depth computation will be wrong (in the worst case we'll count all the way down to the root, not realizing those commits are included in our tag).

But we don't need to find a new candidate for (2). As soon as we finish the loop iteration where we hit max_candidates, we can then quit on the next iteration. This should produce the same output as the original code (which could, after all, find a candidate on the very next commit anyway) but ends the traversal with less pointless digging.

We still have to set "gave_up_on"; we've popped it off the list and it has to go back. An alternative would be to re-order the loop so that it never gets popped, but it's perhaps still useful to show in the --debug output, so we need to know it anyway. We do have to adjust the --debug output since it's now just a commit where we stopped traversing, and not the max+1th candidate.

p6100 shows the speedup using linux.git:

    Test                                            HEAD^             HEAD
    ---------------------------------------------------------------------------------------
    6100.2: describe HEAD                           0.70(0.63+0.06)   0.71(0.66+0.04) +1.4%
    6100.3: describe HEAD with one max candidate    0.70(0.64+0.05)   0.01(0.00+0.00) -98.6%
    6100.4: describe HEAD with one tag              0.70(0.67+0.03)   0.70(0.63+0.06) +0.0%

Reported-by: Josh Poimboeuf <jpoimboe@kernel.org>
Helped-by: Rasmus Villemoes <ravi@prevas.dk>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2024-11-07  t/perf: add tests for git-describe  (Jeff King; 1 file, -0/+30)

We don't have a perf script for git-describe, despite it often being accused of slowness. Let's add a few simple tests to start with.

Rather than use the existing tags from our test repo, we'll make our own so that we have a known quantity and position. We'll add a "new" tag near the tip of HEAD, and an "old" one that is at the very bottom. And then our tests are:

1. Describing HEAD naively requires walking all the way down to the old tag as we collect candidates. This gives us a baseline for what "slow" looks like.

2. Doing the same with --candidates=1 can potentially be fast, because we can quit after finding "new". But we don't, and it's also slow.

3. Likewise we should be able to quit when there are no more tags to find. This can happen naturally if a repo has few tags, but also if you restrict the set of tags with --match.

Here are the results running against linux.git. Note that I have a commit-graph built for the repo, so "slow" here is ~700ms. Without a commit graph it's more like 9s!

    Test                                            HEAD
    --------------------------------------------------------------
    6100.2: describe HEAD                           0.70(0.66+0.04)
    6100.3: describe HEAD with one max candidate    0.70(0.66+0.04)
    6100.4: describe HEAD with one tag              0.70(0.64+0.06)

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2024-11-07  t6120: demonstrate weakness in disjoint-root handling  (Jeff King; 1 file, -3/+11)

Commit 30b1c7ad9d (describe: don't abort too early when searching tags, 2020-02-26) tried to fix a problem that happens when there are disjoint histories: to accurately compare the counts for different tags, we need to keep walking the history longer in order to find a common base.

But its fix misses a case: we may still bail early if we hit the max_candidates limit, producing suboptimal output. You can see this in action by adding "--candidates=2" to the tests; we'll stop traversing as soon as we see the second tag and will produce the wrong answer.

I hit this in practice while trying to teach git-describe not to keep looking for candidates after we've seen all tags in the repo (effectively adding --candidates=2, since these toy repos have only two tags each).

This is probably fixable by continuing to walk after hitting the max-candidates limit, all the way down to a common ancestor of all candidates. But it's not clear in practice what the performance implications would be (it would depend on how long the branches that hold the candidates are). So I'm punting on that for now, but I'd like to adjust the tests to be more resilient, and to document the findings.

So this patch:

1. Adds an extra tag at the bottom of history. This shouldn't change the output, but does mean we are more resilient to low values of --candidates (e.g., if we start reducing it to the total number of tags). This is arguably closer to the real world anyway, where you're not going to have just 2 tags, but an arbitrarily long history going back in time, possibly with multiple irrelevant tags in it (I called the new tag "H" here for "history").

2. Run the same tests with --candidates=2, which shows that even with the current code they can fail if we end the traversal early. That leaves a trail for anybody interested in trying to improve the behavior.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2024-11-07  Merge branch 'ps/leakfixes-part-9' into ps/leakfixes-part-10  (Junio C Hamano; 38 files, -35/+115)

* ps/leakfixes-part-9: (22 commits)
  list-objects-filter-options: work around reported leak on error
  builtin/merge: release output buffer after performing merge
  dir: fix leak when parsing "status.showUntrackedFiles"
  t/helper: fix leaking buffer in "dump-untracked-cache"
  t/helper: stop re-initialization of `the_repository`
  sparse-index: correctly free EWAH contents
  dir: release untracked cache data
  combine-diff: fix leaking lost lines
  builtin/tag: fix leaking key ID on failure to sign
  transport-helper: fix leaking import/export marks
  builtin/commit: fix leaking cleanup config
  trailer: fix leaking strbufs when formatting trailers
  trailer: fix leaking trailer values
  builtin/commit: fix leaking change data contents
  upload-pack: fix leaking URI protocols
  pretty: clear signature check
  diff-lib: fix leaking diffopts in `do_diff_cache()`
  revision: fix leaking bloom filters
  builtin/grep: fix leak with `--max-count=0`
  grep: fix leak in `grep_splice_or()`
  ...

2024-11-06  compat/mingw: support POSIX semantics for atomic renames  (Patrick Steinhardt; 2 files, -7/+88)

By default, Windows restricts access to files when those files have been opened by another process. As explained in the preceding commits, these restrictions can be loosened such that reads, writes and/or deletes of files with open handles _are_ allowed.

While we set up those sharing flags in most relevant code paths now, we still don't properly handle POSIX-style atomic renames in case the target path is open. This failure is demonstrated by t0610, where one of our tests spawns concurrent writes in a reftable-enabled repository and expects all of them to succeed. This test fails most of the time because the process that has acquired the "tables.list" lock is unable to rename it into place while other processes are busy reading that file.

Windows 10 has introduced the `FILE_RENAME_FLAG_POSIX_SEMANTICS` flag that allows us to fix this usecase [1]. When set, it is possible to rename a file over a preexisting file even when the target file still has handles open. Those handles must have been opened with the `FILE_SHARE_DELETE` flag, which we have ensured in the preceding commits.

Careful readers might have noticed that [1] does not mention the above flag, but instead mentions `FILE_RENAME_POSIX_SEMANTICS`. This flag is not for use with `SetFileInformationByHandle()` though, which is what we use. And while the `FILE_RENAME_FLAG_POSIX_SEMANTICS` flag exists, it is not documented on [2] or anywhere else as far as I can tell.

Unfortunately, we still support Windows systems older than Windows 10 that do not yet have this new flag. Our `_WIN32_WINNT` SDK version still targets 0x0600, which is Windows Vista and later. And even though that Windows version is out-of-support, bumping the SDK version all the way to 0x0A00, which is Windows 10 and later, is not an option as it would make it impossible to compile on Windows 8.1, which is still supported. Instead, we have to manually declare the relevant infrastructure to make this feature available and have fallback logic in place in case we run on a Windows version that does not yet have this flag.

On another note: `mingw_rename()` has a retry loop that is used in case deleting a file failed because it's still open in another process. One might be pressed to not use this loop anymore when we can use POSIX semantics. But unfortunately, we have to keep it around due to our dependence on the `FILE_SHARE_DELETE` flag. While we know to set that sharing flag now, other applications may not do so and may thus still cause sharing violations when we try to rename a file.

This fixes concurrent writes in the reftable backend as demonstrated in t0610, but may also end up fixing other usecases where Git wants to perform renames.

[1]: https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/ntifs/ns-ntifs-_file_rename_information
[2]: https://learn.microsoft.com/en-us/windows/win32/api/winbase/ns-winbase-file_rename_info

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Reviewed-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

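For illustration, a rough, untested sketch of how a rename with those semantics might be issued. The commit names `CreateFileW()`, `SetFileInformationByHandle()`, `FILE_SHARE_DELETE` and `FILE_RENAME_FLAG_POSIX_SEMANTICS`; the `FILE_RENAME_INFO` structure, the `FileRenameInfoEx` information class and the literal flag values below are assumptions based on the Windows SDK and may not match what compat/mingw.c actually does:

    #include <windows.h>
    #include <stdlib.h>
    #include <string.h>

    /* Assumed values; the real code declares such infrastructure itself so it
     * keeps building against pre-Windows-10 SDK headers. */
    #ifndef FILE_RENAME_FLAG_REPLACE_IF_EXISTS
    #define FILE_RENAME_FLAG_REPLACE_IF_EXISTS 0x00000001
    #endif
    #ifndef FILE_RENAME_FLAG_POSIX_SEMANTICS
    #define FILE_RENAME_FLAG_POSIX_SEMANTICS   0x00000002
    #endif

    static int rename_posix_like(const wchar_t *from, const wchar_t *to)
    {
    	HANDLE h;
    	FILE_RENAME_INFO *info;
    	size_t len = wcslen(to);
    	DWORD size = (DWORD)(sizeof(*info) + len * sizeof(wchar_t));
    	BOOL ok;

    	h = CreateFileW(from, DELETE,
    			FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
    			NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    	if (h == INVALID_HANDLE_VALUE)
    		return -1;

    	info = calloc(1, size);
    	if (!info) {
    		CloseHandle(h);
    		return -1;
    	}
    	info->Flags = FILE_RENAME_FLAG_REPLACE_IF_EXISTS |
    		      FILE_RENAME_FLAG_POSIX_SEMANTICS;   /* needs a Win10-era SDK */
    	info->FileNameLength = (DWORD)(len * sizeof(wchar_t));
    	memcpy(info->FileName, to, (len + 1) * sizeof(wchar_t));

    	/* FileRenameInfoEx honors the Flags field on Windows 10 and newer. */
    	ok = SetFileInformationByHandle(h, FileRenameInfoEx, info, size);

    	free(info);
    	CloseHandle(h);
    	return ok ? 0 : -1;
    }
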
2024-11-05  fetch-pack: die if in commit graph but not obj db  (Jonathan Tan; 2 files, -5/+18)

When fetching, there is a step in which sought objects are first checked against the local repository; only objects that are not in the local repository are then fetched. This check first looks up the commit graph file, and returns "present" if the object is in there.

However, the action of first looking up the commit graph file is not done everywhere in Git, especially if the type of the object at the time of lookup is not known. This means that in a repo corruption situation, a user may encounter an "object missing" error, attempt to fetch it, and still encounter the same error later when they reattempt their original action, because the object is present in the commit graph file but not in the object DB.

Therefore, make it a fatal error when this occurs. (Note that we cannot proceed to include this object in the list of objects to be fetched without changing at least the fetch negotiation code: what would happen is that the client will send "want X" and "have X" and when I tested at $DAYJOB with a work server that uses JGit, the server reasonably returned an empty packfile. And changing the fetch negotiation code to only use the object DB when deciding what to report as "have" would be an unnecessary slowdown, I think.)

This was discovered when a lazy fetch of a missing commit completed with nothing actually fetched, and the writing of the commit graph file after every fetch then attempted to read said missing commit, triggering a lazy fetch of said missing commit, resulting in an infinite loop with no user-visible indication (until they check the list of processes running on their computer). With this fix, there is no infinite loop.

Note that although the repo corruption we discovered was caused by a bug in GC in a partial clone, the behavior that this patch teaches Git to warn about applies to any repo with commit graph enabled and with a missing commit, whether it is a partial clone or not.

t5330, introduced in 3a1ea94a49 (commit-graph.c: no lazy fetch in lookup_commit_in_graph(), 2022-07-01), tests that an interaction between fetch and the commit graph does not cause an infinite loop. This patch changes the exit code in that situation, so that test had to be changed.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2024-11-05Revert "fetch-pack: add a deref_without_lazy_fetch_extended()"Jonathan Tan1-18/+7
This reverts commit a6e65fb39caf18259c660c1c7910d5bf80bc15cb. This revert simplifies the next patch in this patch set. The commit message of that commit mentions that the new function "will be used for the bundle-uri client in a subsequent commit", but it seems that eventually it wasn't used. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-11-04  doc: correct misleading descriptions for --shallow-exclude  (Elijah Newren; 6 files, -9/+9)

The documentation for the --shallow-exclude option to clone/fetch/etc. claims that the option takes a revision, but it does not. As per upload-pack.c's process_deepen_not(), it passes the option to expand_ref() and dies if it does not find exactly one ref matching the name passed. Further, this has always been the case ever since these options were introduced by the commits merged in a460ea4a3cb1 (Merge branch 'nd/shallow-deepen', 2016-10-10).

Fix the documentation to match the implementation.

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2024-11-04  list-objects-filter-options: work around reported leak on error  (Patrick Steinhardt; 2 files, -10/+8)

This one is a little bit more curious. In t6112, we have a test that exercises the `git rev-list --filter` option with invalid filters. We execute git-rev-list(1) via `test_must_fail`, which means that we check for leaks even though Git exits with an error code. This causes the following leak:

    Direct leak of 27 byte(s) in 1 object(s) allocated from:
        #0 0x5555555e6946 in realloc.part.0 lsan_interceptors.cpp.o
        #1 0x5555558fb4b6 in xrealloc wrapper.c:137:8
        #2 0x5555558b6e06 in strbuf_grow strbuf.c:112:2
        #3 0x5555558b7550 in strbuf_add strbuf.c:311:2
        #4 0x5555557c1a88 in strbuf_addstr strbuf.h:310:2
        #5 0x5555557c1d4c in parse_list_objects_filter list-objects-filter-options.c:261:3
        #6 0x555555885ead in handle_revision_pseudo_opt revision.c:2899:3
        #7 0x555555884e20 in setup_revisions revision.c:3014:11
        #8 0x5555556c4b42 in cmd_rev_list builtin/rev-list.c:588:9
        #9 0x5555555ec5e3 in run_builtin git.c:483:11
        #10 0x5555555eb1e4 in handle_builtin git.c:749:13
        #11 0x5555555ec001 in run_argv git.c:819:4
        #12 0x5555555eaf94 in cmd_main git.c:954:19
        #13 0x5555556fd569 in main common-main.c:64:11
        #14 0x7ffff7ca714d in __libc_start_call_main (.../lib/libc.so.6+0x2a14d)
        #15 0x7ffff7ca7208 in __libc_start_main@GLIBC_2.2.5 (.../libc.so.6+0x2a208)
        #16 0x5555555ad064 in _start (git+0x59064)

This leak is valid, as we call `die()` and do not clean up the memory at all. But what's curious is that this is the only leak reported, because we don't clean up any other allocated memory, either, and I have no idea why the leak sanitizer treats this buffer specially.

In any case, we can work around the leak by shuffling things around a bit. Instead of calling `gently_parse_list_objects_filter()` and dying after we have modified the filter spec, we simply do so beforehand. Like this we don't allocate the buffer in the error case, which makes the reported leak go away.

It's not pretty, but it manages to make t6112 leak free.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

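The workaround boils down to a common ordering trick: validate input before allocating anything that an early exit would leave dangling. A self-contained illustration with hypothetical names (not the actual filter-options code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int spec_is_valid(const char *spec)
    {
    	return strchr(spec, ':') != NULL;   /* stand-in for real validation */
    }

    /* Before the fix: allocate first, then die() on bad input, leaking the copy.
     * After: validate first, so the error path never owns any memory. */
    static char *parse_spec_or_die(const char *spec)
    {
    	if (!spec_is_valid(spec)) {
    		fprintf(stderr, "fatal: invalid filter spec '%s'\n", spec);
    		exit(128);
    	}
    	return strdup(spec);                /* only allocated on the success path */
    }

    int main(int argc, char **argv)
    {
    	char *parsed = parse_spec_or_die(argc > 1 ? argv[1] : "blob:none");
    	puts(parsed);
    	free(parsed);
    	return 0;
    }
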
2024-11-04  builtin/merge: release output buffer after performing merge  (Patrick Steinhardt; 2 files, -0/+2)

The `obuf` member of `struct merge_options` is used to buffer output in some cases. In order to not discard its allocated memory we only release its contents in `merge_finalize()` when we're not currently recursing into a subtree. This results in some situations where we seemingly do not release the buffer reliably. We thus have calls to `strbuf_release()` for this buffer scattered across the codebase. But we're missing one callsite in git-merge(1), which causes a memory leak.

We should ideally refactor this interface so that callers don't have to know about any such internals. But for now, paper over the issue by adding one more `strbuf_release()` call.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2024-11-04  dir: fix leak when parsing "status.showUntrackedFiles"  (Patrick Steinhardt; 2 files, -2/+3)

We use `repo_config_get_string()` to read "status.showUntrackedFiles" from the config subsystem. This function allocates the result, but we never free the result after parsing it. The value never leaves the scope of the calling function, so refactor it to instead use `repo_config_get_string_tmp()`, which does not hand over ownership to the caller.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

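The interesting part is the contract difference between the two getters: one hands ownership to the caller, the other lends a pointer owned by the config subsystem. A self-contained sketch of that contract with hypothetical names (not Git's actual config API):

    #include <stdlib.h>
    #include <string.h>

    static const char *stored_value = "normal";   /* owned by the "config subsystem" */

    /* Returns an allocated copy: the caller must free() it. */
    static char *config_get_string(const char *key)
    {
    	(void)key;
    	return strdup(stored_value);
    }

    /* Returns a borrowed pointer, valid while the config data lives; the
     * caller must NOT free it -- which is what avoids the leak when the
     * value never escapes the calling function. */
    static const char *config_get_string_tmp(const char *key)
    {
    	(void)key;
    	return stored_value;
    }

    int main(void)
    {
    	char *owned = config_get_string("status.showUntrackedFiles");
    	const char *borrowed = config_get_string_tmp("status.showUntrackedFiles");

    	/* ... parse both values here ... */
    	(void)borrowed;
    	free(owned);     /* forgetting this was the original leak */
    	return 0;
    }
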
2024-11-04  t/helper: fix leaking buffer in "dump-untracked-cache"  (Patrick Steinhardt; 1 file, -0/+2)
We never release the local `struct strbuf base` buffer, thus leaking memory. Fix this leak. This leak is exposed by t7063, but plugging it alone does not make the whole test suite pass. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-11-04  t/helper: stop re-initialization of `the_repository`  (Patrick Steinhardt; 2 files, -2/+1)
While "common-main.c" already initializes `the_repository` for us, we do so a second time in the "read-cache" test helper. This causes a memory leak because the old repository's contents isn't released. Stop calling `initialize_repository()` to plug this leak. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-11-04  sparse-index: correctly free EWAH contents  (Patrick Steinhardt; 1 file, -2/+5)

While we free the `fsmonitor_dirty` member of `struct index_state`, we do not free the contents of that EWAH. Do so by using `ewah_free()` instead of `FREE_AND_NULL()`.

This leak is exposed by t7519, but plugging it alone does not make the test suite pass.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

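The difference between freeing a structure's shell and freeing what it owns is a classic leak shape. A self-contained illustration with hypothetical names (not the actual EWAH code):

    #include <stdlib.h>

    struct bitmap {
    	unsigned long *words;   /* owned allocation */
    	size_t word_count;
    };

    static struct bitmap *bitmap_new(size_t words)
    {
    	struct bitmap *b = calloc(1, sizeof(*b));
    	b->words = calloc(words, sizeof(*b->words));
    	b->word_count = words;
    	return b;
    }

    /* Analogous to ewah_free(): release the contents, then the struct itself. */
    static void bitmap_free(struct bitmap *b)
    {
    	if (!b)
    		return;
    	free(b->words);
    	free(b);
    }

    int main(void)
    {
    	struct bitmap *b = bitmap_new(16);

    	/* The bug pattern: free(b) alone (as FREE_AND_NULL would do) drops
    	 * the struct but leaks b->words; bitmap_free() releases both. */
    	bitmap_free(b);
    	return 0;
    }
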
2024-11-04  dir: release untracked cache data  (Patrick Steinhardt; 1 file, -0/+8)

There are several cases where we invalidate untracked cache directory entries where we do not free the underlying data, but reset the number of entries. This causes us to leak memory because `free_untracked()` will not iterate over any potential entries which we still had in the array.

Fix this issue by freeing old entries. The leak is exposed by t7519, but plugging it alone does not make the whole test suite pass.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2024-11-04  combine-diff: fix leaking lost lines  (Patrick Steinhardt; 2 files, -2/+4)

The `cnt` variable tracks the number of lines in a patch diff. It can happen though that there are no newlines, in which case we'd still end up allocating our array of `sline`s. In fact, we always allocate it with `cnt + 2` entries: one extra entry for the deletion hunk at the end, and another entry that we don't seem to ever populate at all but acts as a kind of sentinel value.

When we loop through the array to clear it at the end of this function we only loop until `lno < cnt`, and thus we may not end up releasing whatever the two extra `sline`s contain. While that shouldn't matter for the sentinel value, it does matter for the extra deletion hunk sline. Regardless of that, plug this memory leak by releasing both extra entries, which makes the logic a bit easier to reason about.

While at it, fix the formatting of a local comment, which incidentally also provides the necessary context for why we overallocate the `sline` array.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

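The leak shape here is an allocation of N + 2 slots paired with a cleanup loop bounded by N. A minimal self-contained sketch of the fix, with hypothetical names (not the combine-diff code itself):

    #include <stdlib.h>

    struct line {
    	char *lost;    /* per-slot allocation that must be released */
    };

    int main(void)
    {
    	size_t cnt = 3;
    	/* two extra slots: one for the trailing deletion hunk, one sentinel */
    	struct line *lines = calloc(cnt + 2, sizeof(*lines));
    	size_t i;

    	for (i = 0; i < cnt + 1; i++)
    		lines[i].lost = calloc(1, 16);

    	/* The fix: free all allocated slots, not just the first `cnt`. */
    	for (i = 0; i < cnt + 2; i++)
    		free(lines[i].lost);
    	free(lines);
    	return 0;
    }
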
2024-11-04  builtin/tag: fix leaking key ID on failure to sign  (Patrick Steinhardt; 2 files, -1/+2)
We do not free the key ID when signing a tag fails. Do so by using the common exit path. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-11-04  transport-helper: fix leaking import/export marks  (Patrick Steinhardt; 2 files, -0/+3)
Fix leaking import and export marks for transport helpers. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-11-04  builtin/commit: fix leaking cleanup config  (Patrick Steinhardt; 2 files, -5/+13)

The cleanup string set by the config is leaking when it is being overridden by an option. Fix this by tracking these via two separate variables such that we can free the old value.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

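The fix described — keep the config-owned string in its own variable so it can still be freed when a command-line option wins — looks roughly like this in a self-contained form with hypothetical names (not builtin/commit.c):

    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
    	char *cleanup_from_config = strdup("whitespace"); /* as if read from config */
    	const char *cleanup_from_option = argc > 1 ? argv[1] : NULL;
    	const char *cleanup;

    	/* Track the owned string separately from whichever value "wins",
    	 * so overriding it with the option no longer orphans the copy. */
    	cleanup = cleanup_from_option ? cleanup_from_option : cleanup_from_config;

    	/* ... use `cleanup` ... */
    	(void)cleanup;

    	free(cleanup_from_config);
    	return 0;
    }
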
2024-11-04  trailer: fix leaking strbufs when formatting trailers  (Patrick Steinhardt; 2 files, -5/+8)

When formatting trailer lines we iterate through each of the trailers and munge their respective token/value pairs according to the trailer options. When formatting a trailer that has its `item->token` pointer set we perform the munging in two local buffers. In the case where we figure out that the value is empty and `trim_empty` is set we just skip over the trailer item. But the buffers are local to the loop and we don't release their contents, leading to a memory leak.

Plug this leak by lifting the buffers outside of the loop and releasing them on function return. This fixes the memory leaks, but also optimizes the loop as we don't have to reallocate the buffers on every single iteration.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

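The shape of that fix, sketched against Git's strbuf API: this assumes Git's own headers (git-compat-util.h, strbuf.h) and is illustrative of the "lift the buffers out of the loop" pattern, not the literal trailer.c change; the function and parameter names here are made up:

    #include "git-compat-util.h"
    #include "strbuf.h"

    static void format_items(const char **tokens, const char **values, size_t nr,
    			 struct strbuf *out, int trim_empty)
    {
    	/* Lifted out of the loop: allocated once, reused, released once. */
    	struct strbuf tok = STRBUF_INIT;
    	struct strbuf val = STRBUF_INIT;
    	size_t i;

    	for (i = 0; i < nr; i++) {
    		strbuf_reset(&tok);
    		strbuf_reset(&val);
    		strbuf_addstr(&tok, tokens[i]);
    		strbuf_addstr(&val, values[i]);

    		if (trim_empty && !val.len)
    			continue;   /* previously leaked the two loop-local buffers */

    		strbuf_addf(out, "%s: %s\n", tok.buf, val.buf);
    	}

    	strbuf_release(&tok);
    	strbuf_release(&val);
    }
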
2024-11-04  trailer: fix leaking trailer values  (Patrick Steinhardt; 1 file, -2/+8)
Fix leaking trailer values when replacing the value with a command or when the token value is empty. This leak is exposed by t7513, but plugging it does not make the whole test suite pass. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-11-04  builtin/commit: fix leaking change data contents  (Patrick Steinhardt; 2 files, -1/+9)
While we free the worktree change data, we never free its contents. Fix this. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>