GIT Browse > SLE12-SP3-AZURE
author    Takashi Iwai <tiwai@suse.de>  2019-02-18 22:33:19 +0100
committer Takashi Iwai <tiwai@suse.de>  2019-02-18 22:33:19 +0100
commit    eec003c669ed8912b6acebdbebe1a08c5417191c (patch)
tree      efe8dc3298dcd19afaac0dfd53f3c3a7533c9a9a
parent    7dff7755ad35f1f80248df5bbd2ec8dcdbc489f6 (diff)
parent    4a3eab0d6593383bd47983120f17a6096c751c35 (diff)
Merge branch 'users/dbueso/SLE12-SP3/for-next' into SLE12-SP3
Pull sched fixes from Davidlohr Bueso (bsc#1050549)
-rw-r--r--  patches.fixes/0001-locking-rwsem-Fix-possible-missed-wakeup.patch              64
-rw-r--r--  patches.fixes/0001-sched-wake_q-Document-wake_q_add.patch                      81
-rw-r--r--  patches.fixes/0002-futex-Fix-possible-missed-wakeup.patch                      63
-rw-r--r--  patches.fixes/0002-sched-wake_q-Fix-wakeup-ordering-for-wake_q.patch           97
-rw-r--r--  patches.fixes/0003-sched-wake_q-Reduce-reference-counting-for-special-u.patch 181
-rw-r--r--  series.conf                                                                    5
6 files changed, 491 insertions(+), 0 deletions(-)
diff --git a/patches.fixes/0001-locking-rwsem-Fix-possible-missed-wakeup.patch b/patches.fixes/0001-locking-rwsem-Fix-possible-missed-wakeup.patch
new file mode 100644
index 0000000000..7d053c69de
--- /dev/null
+++ b/patches.fixes/0001-locking-rwsem-Fix-possible-missed-wakeup.patch
@@ -0,0 +1,64 @@
+From a9cf88b3740a83e0c3dc38f2ac41bccccbbe3332 Mon Sep 17 00:00:00 2001
+From: Xie Yongji <xieyongji@baidu.com>
+Date: Mon, 18 Feb 2019 09:49:40 -0800
+Subject: [PATCH 1/3] locking/rwsem: Fix (possible) missed wakeup
+Git-commit: e158488be27b157802753a59b336142dc0eb0380
+Patch-mainline: v5.0-rc4
+References: bsc#1050549
+
+Because wake_q_add() can imply an immediate wakeup (cmpxchg failure
+case), we must not rely on the wakeup being delayed. However, commit:
+
+ e38513905eea ("locking/rwsem: Rework zeroing reader waiter->task")
+
+relies on exactly that behaviour in that the wakeup must not happen
+until after we clear waiter->task.
+
+[ peterz: Added changelog. ]
+
+Signed-off-by: Xie Yongji <xieyongji@baidu.com>
+Signed-off-by: Zhang Yu <zhangyu31@baidu.com>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Fixes: e38513905eea ("locking/rwsem: Rework zeroing reader waiter->task")
+Link: https://lkml.kernel.org/r/1543495830-2644-1-git-send-email-xieyongji@baidu.com
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
+
+---
+ kernel/locking/rwsem-xadd.c | 11 +++++++++--
+ 1 file changed, 9 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
+index 27316d8f7d87..1dad74958b1c 100644
+--- a/kernel/locking/rwsem-xadd.c
++++ b/kernel/locking/rwsem-xadd.c
+@@ -207,14 +207,21 @@ __rwsem_mark_wake(struct rw_semaphore *sem,
+ next = waiter->list.next;
+ tsk = waiter->task;
+
+- wake_q_add(wake_q, tsk);
++ get_task_struct(tsk);
+ /*
+- * Ensure that the last operation is setting the reader
++ * Ensure calling get_task_struct() before setting the reader
+ * waiter to nil such that rwsem_down_read_failed() cannot
+ * race with do_exit() by always holding a reference count
+ * to the task to wakeup.
+ */
+ smp_store_release(&waiter->task, NULL);
++ /*
++ * Ensure issuing the wakeup (either by us or someone else)
++ * after setting the reader waiter to nil.
++ */
++ wake_q_add(wake_q, tsk);
++ /* wake_q_add() already take the task ref */
++ put_task_struct(tsk);
+ } while (--loop);
+
+ sem->wait_list.next = next;
+--
+2.16.4
+
diff --git a/patches.fixes/0001-sched-wake_q-Document-wake_q_add.patch b/patches.fixes/0001-sched-wake_q-Document-wake_q_add.patch
new file mode 100644
index 0000000000..6dd69754ce
--- /dev/null
+++ b/patches.fixes/0001-sched-wake_q-Document-wake_q_add.patch
@@ -0,0 +1,81 @@
+From 33a2c192f7e9f9869c4bfb246d5befef544bb3b9 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <peterz@infradead.org>
+Date: Mon, 18 Feb 2019 09:09:22 -0800
+Subject: [PATCH 1/2] sched/wake_q: Document wake_q_add()
+Git-commit: e6018c0f5c996e61639adce6a0697391a2861916
+Patch-mainline: v5.0-rc4
+References: bsc#1050549
+
+The only guarantee provided by wake_q_add() is that a wakeup will
+happen after it, it does _NOT_ guarantee the wakeup will be delayed
+until the matching wake_up_q().
+
+If wake_q_add() fails the cmpxchg() a concurrent wakeup is pending and
+that can happen at any time after the cmpxchg(). This means we should
+not rely on the wakeup happening at wake_up_q(), but should be ready
+for wake_q_add() to issue the wakeup.
+
+The delay, if provided (most likely), should only result in more efficient
+behaviour.
+
+Reported-by: Yongji Xie <elohimes@gmail.com>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Cc: Davidlohr Bueso <dave@stgolabs.net>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Cc: Waiman Long <longman@redhat.com>
+Cc: Will Deacon <will.deacon@arm.com>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
+
+---
+ include/linux/sched.h | 6 +++++-
+ kernel/sched/core.c | 12 ++++++++++++
+ 2 files changed, 17 insertions(+), 1 deletion(-)
+
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 690ecc765880..ebe1aa236fba 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -982,9 +982,13 @@ enum cpu_idle_type {
+ * called near the end of a function, where the fact that the queue is
+ * not used again will be easy to see by inspection.
+ *
+- * Note that this can cause spurious wakeups. schedule() callers
++ * NOTE that this can cause spurious wakeups. schedule() callers
+ * must ensure the call is done inside a loop, confirming that the
+ * wakeup condition has in fact occurred.
++ *
++ * NOTE that there is no guarantee the wakeup will happen any later than the
++ * wake_q_add() location. Therefore task must be ready to be woken at the
++ * location of the wake_q_add().
+ */
+ struct wake_q_node {
+ struct wake_q_node *next;
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index e1ddab401dc9..5f68564cf53d 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -588,6 +588,18 @@ static bool set_nr_if_polling(struct task_struct *p)
+ #endif
+ #endif
+
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
+ void wake_q_add(struct wake_q_head *head, struct task_struct *task)
+ {
+ struct wake_q_node *node = &task->wake_q;
+--
+2.16.4
+
diff --git a/patches.fixes/0002-futex-Fix-possible-missed-wakeup.patch b/patches.fixes/0002-futex-Fix-possible-missed-wakeup.patch
new file mode 100644
index 0000000000..b28fc7ce1c
--- /dev/null
+++ b/patches.fixes/0002-futex-Fix-possible-missed-wakeup.patch
@@ -0,0 +1,63 @@
+From 521fa9e2590a78f9a71c71bf2ea2a1ad3669b8c0 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <peterz@infradead.org>
+Date: Mon, 18 Feb 2019 09:50:26 -0800
+Subject: [PATCH 2/3] futex: Fix (possible) missed wakeup
+Git-commit: b061c38bef43406df8e73c5be06cbfacad5ee6ad
+Patch-mainline: v5.0-rc4
+References: bsc#1050549
+
+We must not rely on wake_q_add() to delay the wakeup; in particular
+commit:
+
+ 1d0dcb3ad9d3 ("futex: Implement lockless wakeups")
+
+moved wake_q_add() before smp_store_release(&q->lock_ptr, NULL), which
+could result in futex_wait() waking before observing ->lock_ptr ==
+NULL and going back to sleep again.
+
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Fixes: 1d0dcb3ad9d3 ("futex: Implement lockless wakeups")
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
+
+---
+ kernel/futex.c | 13 ++++++++-----
+ 1 file changed, 8 insertions(+), 5 deletions(-)
+
+diff --git a/kernel/futex.c b/kernel/futex.c
+index 643d54f987ef..c3a7a434004e 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -1275,11 +1275,7 @@ static void mark_wake_futex(struct wake_q_head *wake_q, struct futex_q *q)
+ if (WARN(q->pi_state || q->rt_waiter, "refusing to wake PI futex\n"))
+ return;
+
+- /*
+- * Queue the task for later wakeup for after we've released
+- * the hb->lock. wake_q_add() grabs reference to p.
+- */
+- wake_q_add(wake_q, p);
++ get_task_struct(p);
+ __unqueue_futex(q);
+ /*
+ * The waiting task can free the futex_q as soon as
+@@ -1289,6 +1285,13 @@ static void mark_wake_futex(struct wake_q_head *wake_q, struct futex_q *q)
+ */
+ smp_wmb();
+ q->lock_ptr = NULL;
++
++ /*
++ * Queue the task for later wakeup for after we've released
++ * the hb->lock. wake_q_add() grabs reference to p.
++ */
++ wake_q_add(wake_q, p);
++ put_task_struct(p);
+ }
+
+ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this,
+--
+2.16.4
+
diff --git a/patches.fixes/0002-sched-wake_q-Fix-wakeup-ordering-for-wake_q.patch b/patches.fixes/0002-sched-wake_q-Fix-wakeup-ordering-for-wake_q.patch
new file mode 100644
index 0000000000..4fbd93ab49
--- /dev/null
+++ b/patches.fixes/0002-sched-wake_q-Fix-wakeup-ordering-for-wake_q.patch
@@ -0,0 +1,97 @@
+From 63f8f1ba3f28d32883e39f8f6be26fb3f0658d96 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <peterz@infradead.org>
+Date: Mon, 18 Feb 2019 09:14:51 -0800
+Subject: [PATCH 2/2] sched/wake_q: Fix wakeup ordering for wake_q
+Git-commit: 4c4e3731564c8945ac5ac90fc2a1e1f21cb79c92
+Patch-mainline: v5.0-rc4
+References: bsc#1050549
+
+Notably, cmpxchg() does not provide ordering when it fails; however
+wake_q_add() requires ordering in this specific case too. Without this
+it would be possible for the concurrent wakeup to not observe our
+prior state.
+
+Andrea Parri provided:
+
+ C wake_up_q-wake_q_add
+
+ {
+ int next = 0;
+ int y = 0;
+ }
+
+ P0(int *next, int *y)
+ {
+ int r0;
+
+ /* in wake_up_q() */
+
+ WRITE_ONCE(*next, 1); /* node->next = NULL */
+ smp_mb(); /* implied by wake_up_process() */
+ r0 = READ_ONCE(*y);
+ }
+
+ P1(int *next, int *y)
+ {
+ int r1;
+
+ /* in wake_q_add() */
+
+ WRITE_ONCE(*y, 1); /* wake_cond = true */
+ smp_mb__before_atomic();
+ r1 = cmpxchg_relaxed(next, 1, 2);
+ }
+
+ exists (0:r0=0 /\ 1:r1=0)
+
+ This "exists" clause cannot be satisfied according to the LKMM:
+
+ Test wake_up_q-wake_q_add Allowed
+ States 3
+ 0:r0=0; 1:r1=1;
+ 0:r0=1; 1:r1=0;
+ 0:r0=1; 1:r1=1;
+ No
+ Witnesses
+ Positive: 0 Negative: 3
+ Condition exists (0:r0=0 /\ 1:r1=0)
+ Observation wake_up_q-wake_q_add Never 0 3
+
+Reported-by: Yongji Xie <elohimes@gmail.com>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Cc: Davidlohr Bueso <dave@stgolabs.net>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Cc: Waiman Long <longman@redhat.com>
+Cc: Will Deacon <will.deacon@arm.com>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
+
+
+---
+ kernel/sched/core.c | 7 ++++---
+ 1 file changed, 4 insertions(+), 3 deletions(-)
+
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 5f68564cf53d..b5f7fd0b5938 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -609,10 +609,11 @@ void wake_q_add(struct wake_q_head *head, struct task_struct *task)
+ * its already queued (either by us or someone else) and will get the
+ * wakeup due to that.
+ *
+- * This cmpxchg() implies a full barrier, which pairs with the write
+- * barrier implied by the wakeup in wake_up_q().
++ * In order to ensure that a pending wakeup will observe our pending
++ * state, even in the failed case, an explicit smp_mb() must be used.
+ */
+- if (cmpxchg(&node->next, NULL, WAKE_Q_TAIL))
++ smp_mb__before_atomic();
++ if (cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL))
+ return;
+
+ get_task_struct(task);
+--
+2.16.4
+
diff --git a/patches.fixes/0003-sched-wake_q-Reduce-reference-counting-for-special-u.patch b/patches.fixes/0003-sched-wake_q-Reduce-reference-counting-for-special-u.patch
new file mode 100644
index 0000000000..344af4055f
--- /dev/null
+++ b/patches.fixes/0003-sched-wake_q-Reduce-reference-counting-for-special-u.patch
@@ -0,0 +1,181 @@
+From 38c5ba7f66c90b3c0b8e68f6a070a8cb9e659855 Mon Sep 17 00:00:00 2001
+From: Davidlohr Bueso <dave@stgolabs.net>
+Date: Mon, 18 Feb 2019 10:03:21 -0800
+Subject: [PATCH 3/3] sched/wake_q: Reduce reference counting for special users
+Patch-mainline: Not yet, queued via tip for v5.1.
+References: bsc#1050549
+
+Some users, specifically futexes and rwsems, required fixes
+that allowed the callers to be safe when wakeups occur before
+they are expected by wake_up_q(). Such scenarios also play
+games and rely on reference counting, and until now were
+pivoting on wake_q doing it. With the wake_q_add() call being
+moved down, this can no longer be the case. As such we end up
+with a double task refcounting overhead; and these callers
+care enough about this (being rather core-ish).
+
+This patch introduces a wake_q_add_safe() call that serves
+for callers that have already done refcounting and therefore the
+task is 'safe' from wake_q point of view (in that it requires a
+reference throughout the entire queue/wakeup cycle). In the one
+case it has internal reference counting, in the other case it
+consumes the reference counting.
+
+Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Cc: Andrew Morton <akpm@linux-foundation.org>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Cc: Waiman Long <longman@redhat.com>
+Cc: Will Deacon <will.deacon@arm.com>
+Cc: Xie Yongji <xieyongji@baidu.com>
+Cc: Yongji Xie <elohimes@gmail.com>
+Cc: andrea.parri@amarulasolutions.com
+Cc: lilin24@baidu.com
+Cc: liuqi16@baidu.com
+Cc: nixun@baidu.com
+Cc: yuanlinsi01@baidu.com
+Cc: zhangyu31@baidu.com
+Link: https://lkml.kernel.org/r/20181218195352.7orq3upiwfdbrdne@linux-r8p5
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+
+---
+ include/linux/sched.h | 4 +--
+ kernel/futex.c | 3 +--
+ kernel/locking/rwsem-xadd.c | 4 +--
+ kernel/sched/core.c | 60 +++++++++++++++++++++++++++++++++------------
+ 4 files changed, 48 insertions(+), 23 deletions(-)
+
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index ebe1aa236fba..3e17daba39e4 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1004,8 +1004,8 @@ struct wake_q_head {
+ #define WAKE_Q(name) \
+ struct wake_q_head name = { WAKE_Q_TAIL, &name.first }
+
+-extern void wake_q_add(struct wake_q_head *head,
+- struct task_struct *task);
++extern void wake_q_add(struct wake_q_head *head, struct task_struct *task);
++extern void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task);
+ extern void wake_up_q(struct wake_q_head *head);
+
+ /*
+diff --git a/kernel/futex.c b/kernel/futex.c
+index c3a7a434004e..4323f8061654 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -1290,8 +1290,7 @@ static void mark_wake_futex(struct wake_q_head *wake_q, struct futex_q *q)
+ * Queue the task for later wakeup for after we've released
+ * the hb->lock. wake_q_add() grabs reference to p.
+ */
+- wake_q_add(wake_q, p);
+- put_task_struct(p);
++ wake_q_add_safe(wake_q, p);
+ }
+
+ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this,
+diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
+index 1dad74958b1c..32f66c0b53e7 100644
+--- a/kernel/locking/rwsem-xadd.c
++++ b/kernel/locking/rwsem-xadd.c
+@@ -219,9 +219,7 @@ __rwsem_mark_wake(struct rw_semaphore *sem,
+ * Ensure issuing the wakeup (either by us or someone else)
+ * after setting the reader waiter to nil.
+ */
+- wake_q_add(wake_q, tsk);
+- /* wake_q_add() already take the task ref */
+- put_task_struct(tsk);
++ wake_q_add_safe(wake_q, tsk);
+ } while (--loop);
+
+ sem->wait_list.next = next;
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index b5f7fd0b5938..61f80137dee6 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -588,19 +588,7 @@ static bool set_nr_if_polling(struct task_struct *p)
+ #endif
+ #endif
+
+-/**
+- * wake_q_add() - queue a wakeup for 'later' waking.
+- * @head: the wake_q_head to add @task to
+- * @task: the task to queue for 'later' wakeup
+- *
+- * Queue a task for later wakeup, most likely by the wake_up_q() call in the
+- * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
+- * instantly.
+- *
+- * This function must be used as-if it were wake_up_process(); IOW the task
+- * must be ready to be woken at this location.
+- */
+-void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
+ {
+ struct wake_q_node *node = &task->wake_q;
+
+@@ -614,15 +602,55 @@ void wake_q_add(struct wake_q_head *head, struct task_struct *task)
+ */
+ smp_mb__before_atomic();
+ if (cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL))
+- return;
+-
+- get_task_struct(task);
++ return false;
+
+ /*
+ * The head is context local, there can be no concurrency.
+ */
+ *head->lastp = node;
+ head->lastp = &node->next;
++ return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++ if (__wake_q_add(head, task))
++ get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++ if (!__wake_q_add(head, task))
++ put_task_struct(task);
+ }
+
+ void wake_up_q(struct wake_q_head *head)
+--
+2.16.4
+
diff --git a/series.conf b/series.conf
index db7adc2a0f..736f75dfb4 100644
--- a/series.conf
+++ b/series.conf
@@ -24254,6 +24254,10 @@
patches.drivers/ibmveth-Do-not-process-frames-after-calling-napi_res.patch
patches.fixes/acpi-nfit-block-function-zero-dsms.patch
patches.fixes/acpi-nfit-fix-command-supported-detection.patch
+ patches.fixes/0001-sched-wake_q-Document-wake_q_add.patch
+ patches.fixes/0002-sched-wake_q-Fix-wakeup-ordering-for-wake_q.patch
+ patches.fixes/0002-futex-Fix-possible-missed-wakeup.patch
+ patches.fixes/0001-locking-rwsem-Fix-possible-missed-wakeup.patch
patches.arch/x86-pkeys-properly-copy-pkey-state-at-fork
patches.drivers/iommu-amd-call-free_iova_fast-with-pfn-in-map_sg
patches.drivers/iommu-amd-unmap-all-mapped-pages-in-error-path-of-map_sg
@@ -24302,6 +24306,7 @@
patches.suse/0013-net-mpls-prevent-speculative-execution.patch
patches.suse/0014-udf-prevent-speculative-execution.patch
patches.fixes/sched-fair-Avoid-divide-by-zero-when-rebalancing-dom.patch
+ patches.fixes/0003-sched-wake_q-Reduce-reference-counting-for-special-u.patch
########################################################
# interrupts/core