Commit ecd2884

vireshk authored and rafaeljw committed
cpufreq: schedutil: Don't set next_freq to UINT_MAX
The schedutil driver sets sg_policy->next_freq to UINT_MAX on certain occasions to discard the cached value of next freq:

- In sugov_start(), when the schedutil governor is started for a group of CPUs.
- And whenever we need to force a freq update before the rate-limit duration has elapsed, which happens when:
  - there is an update in cpufreq policy limits, or
  - the utilization of the DL scheduling class increases.

In return, get_next_freq() doesn't return a cached next_freq value but recalculates the next frequency instead.

But giving special meaning to a particular frequency value makes the code less readable and error prone. We recently fixed a bug where the UINT_MAX value was treated as a valid frequency in sugov_update_single().

All we need is a flag which can be used to discard the value of sg_policy->next_freq, and we already have need_freq_update for that. Let's reuse it instead of setting next_freq to UINT_MAX.

Signed-off-by: Viresh Kumar <[email protected]>
Reviewed-by: Joel Fernandes (Google) <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
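To see the refactoring in isolation: a magic frequency value stored in the cache itself is replaced by a separate invalidation flag that the frequency-selection path consumes. Below is a minimal, self-contained sketch of that before/after pattern; struct toy_policy and the pick_freq_*() helpers are hypothetical stand-ins for illustration, not the actual kernel code.

#include <limits.h>
#include <stdbool.h>

/* Hypothetical, cut-down stand-in for struct sugov_policy (illustration only). */
struct toy_policy {
	unsigned int next_freq;       /* last frequency that was requested */
	unsigned int cached_raw_freq; /* raw value that produced next_freq */
	bool need_freq_update;        /* set when limits change or DL util grows */
};

/* Old scheme: next_freq == UINT_MAX is a magic "cache invalid" marker. */
static unsigned int pick_freq_sentinel(struct toy_policy *p, unsigned int raw)
{
	if (raw == p->cached_raw_freq && p->next_freq != UINT_MAX)
		return p->next_freq;         /* cache hit */
	p->cached_raw_freq = raw;
	return raw;                          /* stand-in for cpufreq_driver_resolve_freq() */
}

/* New scheme: a boolean flag invalidates the cache; no magic frequency value. */
static unsigned int pick_freq_flag(struct toy_policy *p, unsigned int raw)
{
	if (raw == p->cached_raw_freq && !p->need_freq_update)
		return p->next_freq;         /* cache hit */
	p->need_freq_update = false;         /* one-shot invalidation consumed here */
	p->cached_raw_freq = raw;
	return raw;                          /* stand-in for cpufreq_driver_resolve_freq() */
}

With the flag, any caller that needs a forced recalculation simply sets need_freq_update, and every value of next_freq, including 0, remains a legitimate frequency.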
1 parent 1b04722 commit ecd2884

1 file changed

kernel/sched/cpufreq_schedutil.c

Lines changed: 6 additions & 12 deletions
@@ -95,15 +95,8 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
 	if (sg_policy->work_in_progress)
 		return false;
 
-	if (unlikely(sg_policy->need_freq_update)) {
-		sg_policy->need_freq_update = false;
-		/*
-		 * This happens when limits change, so forget the previous
-		 * next_freq value and force an update.
-		 */
-		sg_policy->next_freq = UINT_MAX;
+	if (unlikely(sg_policy->need_freq_update))
 		return true;
-	}
 
 	delta_ns = time - sg_policy->last_freq_update_time;
 
@@ -165,8 +158,10 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 
 	freq = (freq + (freq >> 2)) * util / max;
 
-	if (freq == sg_policy->cached_raw_freq && sg_policy->next_freq != UINT_MAX)
+	if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
 		return sg_policy->next_freq;
+
+	sg_policy->need_freq_update = false;
 	sg_policy->cached_raw_freq = freq;
 	return cpufreq_driver_resolve_freq(policy, freq);
 }
@@ -305,8 +300,7 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 	 * Do not reduce the frequency if the CPU has not been idle
 	 * recently, as the reduction is likely to be premature then.
 	 */
-	if (busy && next_f < sg_policy->next_freq &&
-	    sg_policy->next_freq != UINT_MAX) {
+	if (busy && next_f < sg_policy->next_freq) {
 		next_f = sg_policy->next_freq;
 
 		/* Reset cached freq as next_freq has changed */
@@ -654,7 +648,7 @@ static int sugov_start(struct cpufreq_policy *policy)
 
 	sg_policy->freq_update_delay_ns = sg_policy->tunables->rate_limit_us * NSEC_PER_USEC;
 	sg_policy->last_freq_update_time	= 0;
-	sg_policy->next_freq			= UINT_MAX;
+	sg_policy->next_freq			= 0;
 	sg_policy->work_in_progress		= false;
 	sg_policy->need_freq_update		= false;
 	sg_policy->cached_raw_freq		= 0;
