author    Takashi Iwai <tiwai@suse.de>    2019-08-01 07:59:19 +0200
committer Takashi Iwai <tiwai@suse.de>    2019-08-01 10:42:52 +0200
commit    a2a3983fef2c0a89c8f93a28f7a09a90697dd25a (patch)
tree      76fba746a434662ab18eae828833be2cc4eb24b0
parent    5d42f4382f42c649d0c135842b526d9ccb286a66 (diff)
lib/bitmap.c: make bitmap_parselist() thread-safe and much faster (bsc#1143507).
-rw-r--r--  patches.fixes/lib-bitmap.c-make-bitmap_parselist-thread-safe-and-m.patch  96
-rw-r--r--  series.conf                                                                1
2 files changed, 97 insertions(+), 0 deletions(-)
diff --git a/patches.fixes/lib-bitmap.c-make-bitmap_parselist-thread-safe-and-m.patch b/patches.fixes/lib-bitmap.c-make-bitmap_parselist-thread-safe-and-m.patch
new file mode 100644
index 0000000000..00964020a8
--- /dev/null
+++ b/patches.fixes/lib-bitmap.c-make-bitmap_parselist-thread-safe-and-m.patch
@@ -0,0 +1,96 @@
+From 0a5ce0831d04382aa9e2420e33dff958ddade542 Mon Sep 17 00:00:00 2001
+From: Yury Norov <ynorov@caviumnetworks.com>
+Date: Fri, 8 Sep 2017 16:15:34 -0700
+Subject: [PATCH] lib/bitmap.c: make bitmap_parselist() thread-safe and much faster
+Git-commit: 0a5ce0831d04382aa9e2420e33dff958ddade542
+Patch-mainline: v4.14-rc1
+References: bsc#1143507
+
+The current implementation of bitmap_parselist() uses a static variable
+to save local state while setting bits in the bitmap. This is obviously
+wrong if we assume execution in a multiprocessor environment.
+Fortunately, it's possible to rewrite this portion of the code to avoid
+using the static variable.
+
+It is also possible to set bits in the mask per-range with bitmap_set()
+rather than per-bit with set_bit(), as is done now; the per-range
+approach is much faster.
+
+An important side effect of this change is that setting bits in this
+function is no longer per-bit atomic and is less strictly memory-ordered.
+This is because set_bit() guarantees the order of memory accesses, while
+bitmap_set() does not. I think this is an advantage of the new approach,
+because bitmap_parselist() is intended to initialise bit arrays, and the
+user should protect the whole bitmap during initialisation if needed.
+Protecting individual bits therefore looks expensive and useless. Also,
+other range-oriented functions in lib/bitmap.c don't worry much about
+atomicity.
+
+With all that, setting 2k bits in a map with a pattern like
+0-2047:128/256 becomes ~50 times faster after applying the patch in my
+testing environment (arm64 hosted on qemu).
+
+The second patch of the series adds a test for bitmap_parselist(). It is
+not intended to cover all tricky cases, just to make sure that I didn't
+screw up during the rework.
+
+Link: http://lkml.kernel.org/r/20170807225438.16161-1-ynorov@caviumnetworks.com
+Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
+Cc: Noam Camus <noamca@mellanox.com>
+Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
+Cc: Matthew Wilcox <mawilcox@microsoft.com>
+Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Acked-by: Takashi Iwai <tiwai@suse.de>
+
+---
+ lib/bitmap.c | 18 ++++++------------
+ 1 file changed, 6 insertions(+), 12 deletions(-)
+
+diff --git a/lib/bitmap.c b/lib/bitmap.c
+index 9a532805364b..c82c61b66e16 100644
+--- a/lib/bitmap.c
++++ b/lib/bitmap.c
+@@ -513,7 +513,7 @@ static int __bitmap_parselist(const char *buf, unsigned int buflen,
+ int nmaskbits)
+ {
+ unsigned int a, b, old_a, old_b;
+- unsigned int group_size, used_size;
++ unsigned int group_size, used_size, off;
+ int c, old_c, totaldigits, ndigits;
+ const char __user __force *ubuf = (const char __user __force *)buf;
+ int at_start, in_range, in_partial_range;
+@@ -599,6 +599,8 @@ static int __bitmap_parselist(const char *buf, unsigned int buflen,
+ a = old_a;
+ b = old_b;
+ old_a = old_b = 0;
++ } else {
++ used_size = group_size = b - a + 1;
+ }
+ /* if no digit is after '-', it's wrong*/
+ if (at_start && in_range)
+@@ -608,17 +610,9 @@ static int __bitmap_parselist(const char *buf, unsigned int buflen,
+ if (b >= nmaskbits)
+ return -ERANGE;
+ while (a <= b) {
+- if (in_partial_range) {
+- static int pos_in_group = 1;
+-
+- if (pos_in_group <= used_size)
+- set_bit(a, maskp);
+-
+- if (a == b || ++pos_in_group > group_size)
+- pos_in_group = 1;
+- } else
+- set_bit(a, maskp);
+- a++;
++ off = min(b - a + 1, used_size);
++ bitmap_set(maskp, a, off);
++ a += group_size;
+ }
+ } while (buflen && c == ',');
+ return 0;
+--
+2.16.4
+
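The thread-safety problem the patch fixes is easiest to see in isolation. The following is a hypothetical, reduced userspace sketch, not the kernel source: a static counter like the removed pos_in_group keeps its value across calls and is shared by every caller, so two threads parsing concurrently race on it.

#include <pthread.h>
#include <stdio.h>

/* Reduced model of the removed code path: pos_in_group survives across
 * calls and is shared by every caller of the function. */
static int step_in_group(int group_size)
{
	static int pos_in_group = 1;	/* one copy for ALL threads */

	if (++pos_in_group > group_size)
		pos_in_group = 1;
	return pos_in_group;
}

static void *worker(void *unused)
{
	/* each thread believes it is walking its own groups of 4 */
	for (int i = 0; i < 1000000; i++)
		step_in_group(4);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, worker, NULL);
	pthread_create(&t2, NULL, worker, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	/* the shared counter was updated without locking: its final value,
	 * and every intermediate position, is unpredictable */
	printf("pos after racing: %d\n", step_in_group(4));
	return 0;
}

The rewritten function instead derives everything it needs (used_size, group_size, and the new off variable) from locals, so no state leaks between calls or callers.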
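The replacement loop can also be read outside the diff. Below is a minimal sketch under stated assumptions: a, b, used_size and group_size are taken as already decoded from a string such as "0-31:2/8" (the real parser fills them in), and the kernel's bitmap_set() is modelled with a byte-per-bit fill for clarity.

#include <stdio.h>
#include <string.h>

#define NBITS 32
static unsigned char map[NBITS];	/* one byte per bit, for clarity */

/* stand-in for the kernel helper: set `len` bits starting at `start` */
static void bitmap_set(unsigned char *m, unsigned int start, unsigned int len)
{
	memset(m + start, 1, len);
}

int main(void)
{
	/* assumed to be decoded from "0-31:2/8" by the parser */
	unsigned int a = 0, b = 31;
	unsigned int used_size = 2, group_size = 8;
	unsigned int off;

	/* the patched loop: one bitmap_set() per group, stepping by
	 * group_size, instead of one set_bit() call per bit */
	while (a <= b) {
		off = b - a + 1 < used_size ? b - a + 1 : used_size;
		bitmap_set(map, a, off);
		a += group_size;
	}

	for (unsigned int i = 0; i < NBITS; i++)
		putchar(map[i] ? '1' : '0');
	putchar('\n');	/* prints 11000000110000001100000011000000 */
	return 0;
}

For a plain range with no group modifier, the else branch added by the patch sets used_size = group_size = b - a + 1, so the loop degenerates to a single bitmap_set() call covering the whole range.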
diff --git a/series.conf b/series.conf
index ca128c6708..4be7d40147 100644
--- a/series.conf
+++ b/series.conf
@@ -6706,6 +6706,7 @@
patches.suse/0014-fs-epoll-use-faster-rb_first_cached.patch
patches.suse/0015-mem-memcg-cache-rightmost-node.patch
patches.suse/0016-block-cfq-cache-rightmost-rb_node.patch
+ patches.fixes/lib-bitmap.c-make-bitmap_parselist-thread-safe-and-m.patch
patches.fixes/checkpatch-add-6-missing-types-to-list-types
patches.suse/0001-ipc-sem-drop-sem_checkid-helper.patch
patches.suse/0002-ipc-sem-play-nicer-with-large-nsops-allocations.patch
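On the ~50x figure quoted in the patch description: the win comes from replacing one atomic read-modify-write per bit with one plain store per 64 bits. A rough userspace micro-benchmark sketch of that gap follows, using C11 atomics as a stand-in for the kernel's set_bit(); absolute numbers will vary with machine and compiler.

#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define NBITS	(1u << 20)
#define NWORDS	(NBITS / 64)

static _Atomic unsigned long long amap[NWORDS];	/* per-bit atomic path */
static unsigned long long map[NWORDS];		/* per-range fill path */

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
	double t0, t1, t2;

	t0 = now_sec();
	/* old approach: one atomic read-modify-write per bit, like set_bit() */
	for (unsigned int i = 0; i < NBITS; i++)
		atomic_fetch_or(&amap[i / 64], 1ull << (i % 64));
	t1 = now_sec();
	/* new approach: one plain store per 64 bits, like bitmap_set() */
	for (unsigned int i = 0; i < NWORDS; i++)
		map[i] = ~0ull;
	t2 = now_sec();

	printf("per-bit atomic: %.3f ms, per-range fill: %.3f ms\n",
	       (t1 - t0) * 1e3, (t2 - t1) * 1e3);
	/* keep both maps observable so the fills are not optimised away */
	printf("check: %llx\n", map[0] ^ amap[0]);
	return 0;
}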