sched_ext: Use sched_clock_cpu() instead of rq_clock_task() in touch_core_sched()

Since 3cf78c5d01 ("sched_ext: Unpin and repin rq lock from
balance_scx()"), sched_ext's balance path terminates rq_pin in the outermost
function. This is simpler and in line with what other balance functions are
doing but it loses control over rq->clock_update_flags which makes
assert_clock_updated() trigger when another CPU pins the rq lock.

The only place this matters is touch_core_sched() which uses the timestamp
to order tasks from sibling rq's. Switch to sched_clock_cpu(). Later, it may
be better to use per-core dispatch sequence number.

v2: Use sched_clock_cpu() instead of ktime_get_ns() per David.

Signed-off-by: Tejun Heo <tj@kernel.org>
Fixes: 3cf78c5d01 ("sched_ext: Unpin and repin rq lock from balance_scx()")
Acked-by: David Vernet <void@manifault.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Tejun Heo 2024-08-30 07:54:41 -10:00
parent 0366017e09
commit 62607d033b

@@ -1453,13 +1453,18 @@ static void schedule_deferred(struct rq *rq)
  */
 static void touch_core_sched(struct rq *rq, struct task_struct *p)
 {
+	lockdep_assert_rq_held(rq);
+
 #ifdef CONFIG_SCHED_CORE
 	/*
 	 * It's okay to update the timestamp spuriously. Use
 	 * sched_core_disabled() which is cheaper than enabled().
+	 *
+	 * As this is used to determine ordering between tasks of sibling CPUs,
+	 * it may be better to use per-core dispatch sequence instead.
 	 */
 	if (!sched_core_disabled())
-		p->scx.core_sched_at = rq_clock_task(rq);
+		p->scx.core_sched_at = sched_clock_cpu(cpu_of(rq));
 #endif
 }
@@ -1476,7 +1481,6 @@ static void touch_core_sched(struct rq *rq, struct task_struct *p)
 static void touch_core_sched_dispatch(struct rq *rq, struct task_struct *p)
 {
 	lockdep_assert_rq_held(rq);
-	assert_clock_updated(rq);
 
 #ifdef CONFIG_SCHED_CORE
 	if (SCX_HAS_OP(core_sched_before))