Commit fdb6649

vincent-mailhol authored and suryasaimadhu committed
x86/asm/bitops: Use __builtin_ctzl() to evaluate constant expressions
If x is not 0, __ffs(x) is equivalent to:
  (unsigned long)__builtin_ctzl(x)
And if x is not ~0UL, ffz(x) is equivalent to:
  (unsigned long)__builtin_ctzl(~x)

Because __builtin_ctzl() returns an int, a cast to (unsigned long) is
necessary to avoid potential warnings on implicit casts.

Concerning the edge cases, __builtin_ctzl(0) is always undefined,
whereas __ffs(0) and ffz(~0UL) may or may not be defined, depending on
the processor. Regardless, for both functions, developers are asked to
check against 0 or ~0UL, so replacing __ffs() or ffz() with
__builtin_ctzl() is safe.

For x86_64, the current __ffs() and ffz() implementations do not
produce optimized code when called with a constant expression. On the
contrary, __builtin_ctzl() folds into a single instruction. However,
for non-constant expressions, the kernel's asm versions of __ffs() and
ffz() remain slightly better than the code produced by GCC (which emits
a useless instruction to clear eax).

Use __builtin_constant_p() to select between the kernel's
__ffs()/ffz() and __builtin_ctzl() depending on whether the argument
is constant or not.

** Statistics **

On an allyesconfig, before...:

  $ objdump -d vmlinux.o | grep tzcnt | wc -l
  3607

...and after:

  $ objdump -d vmlinux.o | grep tzcnt | wc -l
  2600

So, roughly 27.9% of the calls to either __ffs() or ffz() were using
constant expressions and could be optimized out.

(tests done on linux v5.18-rc5 x86_64 using GCC 11.2.1)

Note: on x86_64, the BSF instruction produces TZCNT when used with the
REP prefix (which explains the use of `grep tzcnt' instead of `grep
bsf' in the above benchmark). c.f. [1]

[1] e26a44a ("x86: Use REP BSF unconditionally")

[ bp: Massage commit message. ]

Signed-off-by: Vincent Mailhol <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Nick Desaulniers <[email protected]>
Reviewed-by: Yury Norov <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
1 parent 146034f commit fdb6649

File tree

1 file changed

+19
-9
lines changed

arch/x86/include/asm/bitops.h

Lines changed: 19 additions & 9 deletions
@@ -247,17 +247,30 @@ arch_test_bit_acquire(unsigned long nr, const volatile unsigned long *addr)
 		variable_test_bit(nr, addr);
 }
 
+static __always_inline unsigned long variable__ffs(unsigned long word)
+{
+	asm("rep; bsf %1,%0"
+		: "=r" (word)
+		: "rm" (word));
+	return word;
+}
+
 /**
  * __ffs - find first set bit in word
  * @word: The word to search
  *
  * Undefined if no bit exists, so code should check against 0 first.
  */
-static __always_inline unsigned long __ffs(unsigned long word)
+#define __ffs(word)				\
+	(__builtin_constant_p(word) ?		\
+	 (unsigned long)__builtin_ctzl(word) :	\
+	 variable__ffs(word))
+
+static __always_inline unsigned long variable_ffz(unsigned long word)
 {
 	asm("rep; bsf %1,%0"
 		: "=r" (word)
-		: "rm" (word));
+		: "r" (~word));
 	return word;
 }
 
@@ -267,13 +280,10 @@ static __always_inline unsigned long __ffs(unsigned long word)
  *
  * Undefined if no zero exists, so code should check against ~0UL first.
  */
-static __always_inline unsigned long ffz(unsigned long word)
-{
-	asm("rep; bsf %1,%0"
-		: "=r" (word)
-		: "r" (~word));
-	return word;
-}
+#define ffz(word)				\
+	(__builtin_constant_p(word) ?		\
+	 (unsigned long)__builtin_ctzl(~word) :	\
+	 variable_ffz(word))
 
 /*
  * __fls: find last set bit in word
