
Commit 3931261

ardbiesheuvel authored and willdeacon committed
arm64: fpsimd: Bring cond_yield asm macro in line with new rules
We no longer disable softirqs or preemption when doing kernel mode SIMD,
and so for fully preemptible kernels, there is no longer a need to do any
explicit yielding (and for non-preemptible kernels, yielding is not needed
either).

That leaves voluntary preemption, where only explicit yield calls may
result in a reschedule. To retain the existing behavior for such a
configuration, we should take the new situation into account, where the
preempt count will be zero rather than one, and yielding to pending
softirqs is unnecessary.

Fixes: aefbab8 ("arm64: fpsimd: Preserve/restore kernel mode NEON at context switch")
Signed-off-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
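A note on the mechanism the patch leans on: on arm64, thread_info::preempt_count is read as a single 64-bit quantity whose low word is the preempt count proper (softirq bits included) and whose high word is TIF_NEED_RESCHED stored negated, so one load plus one compare-against-zero answers "is the count zero and a reschedule pending?" in a single step. Below is a minimal C sketch of that check, assuming this layout; the function and parameter names are illustrative, not kernel API.

#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of the test the rewritten cond_yield macro performs with a
 * single "ldr" + "cbz" on thread_info::preempt_count (layout per
 * arch/arm64/include/asm/preempt.h). Hypothetical helper, not kernel code.
 */
static bool should_yield(uint64_t preempt_count_field)
{
	/*
	 * The 64-bit value is zero only if the low word (the preempt
	 * count, including softirq bits such as BIT(SOFTIRQ_SHIFT)) is
	 * zero AND the negated TIF_NEED_RESCHED word is zero, i.e. a
	 * reschedule is actually pending.
	 */
	return preempt_count_field == 0;
}

This is also why the patch can drop the separate tbnz test on SOFTIRQ_SHIFT: a task serving a softirq has a non-zero low word, so the same zero check already refuses to yield there.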
1 parent 8c5a19c

2 files changed, 9 insertions(+), 18 deletions(-)

arch/arm64/include/asm/assembler.h

Lines changed: 9 additions & 16 deletions
@@ -760,32 +760,25 @@ alternative_endif
 	.endm
 
 /*
- * Check whether preempt/bh-disabled asm code should yield as soon as
- * it is able. This is the case if we are currently running in task
- * context, and either a softirq is pending, or the TIF_NEED_RESCHED
- * flag is set and re-enabling preemption a single time would result in
- * a preempt count of zero. (Note that the TIF_NEED_RESCHED flag is
- * stored negated in the top word of the thread_info::preempt_count
+ * Check whether asm code should yield as soon as it is able. This is
+ * the case if we are currently running in task context, and the
+ * TIF_NEED_RESCHED flag is set. (Note that the TIF_NEED_RESCHED flag
+ * is stored negated in the top word of the thread_info::preempt_count
  * field)
  */
-	.macro		cond_yield, lbl:req, tmp:req, tmp2:req
+	.macro		cond_yield, lbl:req, tmp:req, tmp2
+#ifdef CONFIG_PREEMPT_VOLUNTARY
 	get_current_task \tmp
 	ldr		\tmp, [\tmp, #TSK_TI_PREEMPT]
 	/*
 	 * If we are serving a softirq, there is no point in yielding: the
 	 * softirq will not be preempted no matter what we do, so we should
-	 * run to completion as quickly as we can.
+	 * run to completion as quickly as we can. The preempt_count field will
+	 * have BIT(SOFTIRQ_SHIFT) set in this case, so the zero check will
+	 * catch this case too.
 	 */
-	tbnz		\tmp, #SOFTIRQ_SHIFT, .Lnoyield_\@
-#ifdef CONFIG_PREEMPTION
-	sub		\tmp, \tmp, #PREEMPT_DISABLE_OFFSET
 	cbz		\tmp, \lbl
 #endif
-	adr_l		\tmp, irq_stat + IRQ_CPUSTAT_SOFTIRQ_PENDING
-	get_this_cpu_offset \tmp2
-	ldr		w\tmp, [\tmp, \tmp2]
-	cbnz		w\tmp, \lbl	// yield on pending softirq in task context
-.Lnoyield_\@:
 	.endm
 
 /*
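For reference, a sketch of the calling convention, loosely modeled on existing cond_yield users under arch/arm64/crypto (e.g. sha2-ce-core.S); the registers, labels, and loop shape are illustrative rather than quoted from any one file. Note that tmp2 is now optional (no longer :req), since the remaining PREEMPT_VOLUNTARY path needs only one scratch register.

	// Hypothetical block-processing loop using cond_yield.
0:	cond_yield	3f, x8, x9	// branch out when a reschedule is due
	/* ... load and process one block of input ... */
	subs	w2, w2, #1		// one block done; any left?
	b.ne	0b
	/* ... write back the final state ... */
	ret
3:	/* stash partial state; the caller sees this and re-invokes us */
	ret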

arch/arm64/kernel/asm-offsets.c

Lines changed: 0 additions & 2 deletions
@@ -117,8 +117,6 @@ int main(void)
   DEFINE(DMA_FROM_DEVICE,	DMA_FROM_DEVICE);
   BLANK();
   DEFINE(PREEMPT_DISABLE_OFFSET,	PREEMPT_DISABLE_OFFSET);
-  DEFINE(SOFTIRQ_SHIFT,	SOFTIRQ_SHIFT);
-  DEFINE(IRQ_CPUSTAT_SOFTIRQ_PENDING,	offsetof(irq_cpustat_t, __softirq_pending));
   BLANK();
   DEFINE(CPU_BOOT_TASK,	offsetof(struct secondary_data, task));
   BLANK();
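These two entries can go because the assembler.h hunk above removed their only asm users. For background, DEFINE() comes from include/linux/kbuild.h: asm-offsets.c is compiled to assembly only, each DEFINE() plants a "->sym value" marker string in that output, and a Kbuild sed pass rewrites the markers into #define lines in the generated include/generated/asm-offsets.h that asm code includes. A simplified sketch of the mechanism (from memory, not the verbatim kbuild.h source):

/* Each DEFINE() emits an .ascii marker into the compiler's .s output;
 * %0 is filled in with the constant's value ("i" = immediate operand),
 * and the sed pass turns "->sym <value> ..." into "#define sym <value>". */
#define DEFINE(sym, val) \
	asm volatile("\n.ascii \"->" #sym " %0 " #val "\"" : : "i" (val))

int main(void)
{
	DEFINE(ANSWER, 6 * 7);	/* yields "#define ANSWER 42" in the header */
	return 0;
}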
