author     Takashi Iwai <tiwai@suse.de>    2016-05-30 13:07:40 +0200
committer  Takashi Iwai <tiwai@suse.de>    2016-05-30 13:07:40 +0200
commit     eb7390244ed2cad655e35a895fc65daa30bb9e5d (patch)
tree       5ff2a667771c9a8f468e118656b704a7864a466e
parent     77504d118359476eee7ccc50d0ad287fb252ad2b (diff)
parent     9f4b694395090a62fb371717c2a02299d0f8f61b (diff)
Merge branch 'users/mgorman/SLE12-SP2/for-next' into SLE12-SP2
Pull page_alloc fix and patch refreshes from Mel Gorman.
-rw-r--r--  patches.suse/cpuset-use-static-key-better-and-convert-to-new-API.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-avoid-looking-up-the-first-zone-in-a-zonelist-twice.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-avoid-unnecessary-zone-lookups-during-pageblock-operations.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-check-multiple-page-fields-with-a-single-branch.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-check-once-if-a-zone-has-isolated-pageblocks.patch  4
-rw-r--r--  patches.suse/mm-page_alloc-convert-alloc_flags-to-unsigned.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-convert-nr_fair_skipped-to-bool.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-defer-debugging-checks-of-freed-pages-until-a-PCP-drain.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-defer-debugging-checks-of-pages-allocated-from-the-PCP.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-don-t-duplicate-code-in-free_pcp_prepare.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-inline-pageblock-lookup-in-page-free-fast-paths.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-inline-the-fast-path-of-the-zonelist-iterator.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-inline-zone_statistics.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-move-__GFP_HARDWALL-modifications-out-of-the-fastpath.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-only-check-PageCompound-for-high-order-pages.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-prevent-infinite-loop-in-buffered_rmqueue.patch  44
-rw-r--r--  patches.suse/mm-page_alloc-pull-out-side-effects-from-free_pages_check.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-reduce-branches-in-zone_statistics.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-reduce-cost-of-fair-zone-allocation-policy-retry.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-remove-field-from-alloc_context.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-remove-unnecessary-initialisation-from-__alloc_pages_nodemask.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-remove-unnecessary-initialisation-in-get_page_from_freelist.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-remove-unnecessary-local-variable-in-get_page_from_freelist.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-remove-unnecessary-variable-from-free_pcppages_bulk.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-restore-the-original-nodemask-if-the-fast-path-allocation-failed.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-shortcut-watermark-checks-for-order-0-pages.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-shorten-the-page-allocator-fast-path.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-simplify-last-cpupid-reset.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-un-inline-the-bad-part-of-free_pages_check.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-uninline-the-bad-page-part-of-check_new_page.patch  3
-rw-r--r--  patches.suse/mm-page_alloc-use-__dec_zone_state-for-order-0-page-allocation.patch  3
-rw-r--r--  series.conf  1
32 files changed, 106 insertions, 30 deletions
diff --git a/patches.suse/cpuset-use-static-key-better-and-convert-to-new-API.patch b/patches.suse/cpuset-use-static-key-better-and-convert-to-new-API.patch
index 654c35263c..1dc40daa37 100644
--- a/patches.suse/cpuset-use-static-key-better-and-convert-to-new-API.patch
+++ b/patches.suse/cpuset-use-static-key-better-and-convert-to-new-API.patch
@@ -4,7 +4,8 @@ Date: Tue, 3 May 2016 09:43:24 +0100
Subject: [PATCH] cpuset: use static key better and convert to new API
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 002f290627c27068087f6204baec7a334e5a3b48
An important function for cpusets is cpuset_node_allowed(), which
optimizes on the fact if there's a single root CPU set, it must be
diff --git a/patches.suse/mm-page_alloc-avoid-looking-up-the-first-zone-in-a-zonelist-twice.patch b/patches.suse/mm-page_alloc-avoid-looking-up-the-first-zone-in-a-zonelist-twice.patch
index 246ed241f8..70f6921f95 100644
--- a/patches.suse/mm-page_alloc-avoid-looking-up-the-first-zone-in-a-zonelist-twice.patch
+++ b/patches.suse/mm-page_alloc-avoid-looking-up-the-first-zone-in-a-zonelist-twice.patch
@@ -5,7 +5,8 @@ Subject: [PATCH] mm, page_alloc: avoid looking up the first zone in a zonelist
twice
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: c33d6c06f60f710f0305ae792773e1c2560e1e51
The allocator fast path looks up the first usable zone in a zonelist and
then get_page_from_freelist does the same job in the zonelist iterator.
diff --git a/patches.suse/mm-page_alloc-avoid-unnecessary-zone-lookups-during-pageblock-operations.patch b/patches.suse/mm-page_alloc-avoid-unnecessary-zone-lookups-during-pageblock-operations.patch
index 834a96978e..b587193a3b 100644
--- a/patches.suse/mm-page_alloc-avoid-unnecessary-zone-lookups-during-pageblock-operations.patch
+++ b/patches.suse/mm-page_alloc-avoid-unnecessary-zone-lookups-during-pageblock-operations.patch
@@ -5,7 +5,8 @@ Subject: [PATCH] mm, page_alloc: avoid unnecessary zone lookups during
pageblock operations
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: f75fb889d18d362e336f8d3fba158a8636d0a063
Pageblocks have an associated bitmap to store migrate types and whether
the pageblock should be skipped during compaction. The bitmap may be
diff --git a/patches.suse/mm-page_alloc-check-multiple-page-fields-with-a-single-branch.patch b/patches.suse/mm-page_alloc-check-multiple-page-fields-with-a-single-branch.patch
index da82158430..d0fe7d7829 100644
--- a/patches.suse/mm-page_alloc-check-multiple-page-fields-with-a-single-branch.patch
+++ b/patches.suse/mm-page_alloc-check-multiple-page-fields-with-a-single-branch.patch
@@ -5,7 +5,8 @@ Subject: [PATCH] mm, page_alloc: check multiple page fields with a single
branch
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 7bfec6f47bb0ffd207c7e813e819235e6c1c0f34
Every page allocated or freed is checked for sanity to avoid corruptions
that are difficult to detect later. A bad page could be due to a number
diff --git a/patches.suse/mm-page_alloc-check-once-if-a-zone-has-isolated-pageblocks.patch b/patches.suse/mm-page_alloc-check-once-if-a-zone-has-isolated-pageblocks.patch
index 1ac59871ec..bae064c5e6 100644
--- a/patches.suse/mm-page_alloc-check-once-if-a-zone-has-isolated-pageblocks.patch
+++ b/patches.suse/mm-page_alloc-check-once-if-a-zone-has-isolated-pageblocks.patch
@@ -4,12 +4,14 @@ Date: Tue, 3 May 2016 09:43:23 +0100
Subject: [PATCH] mm, page_alloc: check once if a zone has isolated pageblocks
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 3777999dd47ec00ec34a151b1d93c0a2b721e822
When bulk freeing pages from the per-cpu lists the zone is checked for
isolated pageblocks on every release. This patch checks it once per
drain.
+[mgorman@techsingularity.net: fix locking race, per Vlastimil]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
diff --git a/patches.suse/mm-page_alloc-convert-alloc_flags-to-unsigned.patch b/patches.suse/mm-page_alloc-convert-alloc_flags-to-unsigned.patch
index 8a58b2bf97..9c278c5f8c 100644
--- a/patches.suse/mm-page_alloc-convert-alloc_flags-to-unsigned.patch
+++ b/patches.suse/mm-page_alloc-convert-alloc_flags-to-unsigned.patch
@@ -4,7 +4,8 @@ Date: Tue, 3 May 2016 09:43:22 +0100
Subject: [PATCH] mm, page_alloc: convert alloc_flags to unsigned
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: c603844bdcb5238980de8d58b393f52d7729d651
alloc_flags is a bitmask of flags but it is signed which does not
necessarily generate the best code depending on the compiler. Even
diff --git a/patches.suse/mm-page_alloc-convert-nr_fair_skipped-to-bool.patch b/patches.suse/mm-page_alloc-convert-nr_fair_skipped-to-bool.patch
index 54df48d944..abfb8690ba 100644
--- a/patches.suse/mm-page_alloc-convert-nr_fair_skipped-to-bool.patch
+++ b/patches.suse/mm-page_alloc-convert-nr_fair_skipped-to-bool.patch
@@ -4,7 +4,8 @@ Date: Tue, 3 May 2016 09:43:22 +0100
Subject: [PATCH] mm, page_alloc: convert nr_fair_skipped to bool
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: fa379b9586c7507c607d031dadf3681ed29614a9
The number of zones skipped to a zone expiring its fair zone allocation
quota is irrelevant. Convert to bool.
diff --git a/patches.suse/mm-page_alloc-defer-debugging-checks-of-freed-pages-until-a-PCP-drain.patch b/patches.suse/mm-page_alloc-defer-debugging-checks-of-freed-pages-until-a-PCP-drain.patch
index 135d3d91fb..ae37c929d6 100644
--- a/patches.suse/mm-page_alloc-defer-debugging-checks-of-freed-pages-until-a-PCP-drain.patch
+++ b/patches.suse/mm-page_alloc-defer-debugging-checks-of-freed-pages-until-a-PCP-drain.patch
@@ -5,7 +5,8 @@ Subject: [PATCH] mm, page_alloc: defer debugging checks of freed pages until a
PCP drain
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 4db7548ccbd9ec8e666f35df4a530f55904dec39
Every page free checks a number of page fields for validity. This catches
premature frees and corruptions but it is also expensive. This patch
diff --git a/patches.suse/mm-page_alloc-defer-debugging-checks-of-pages-allocated-from-the-PCP.patch b/patches.suse/mm-page_alloc-defer-debugging-checks-of-pages-allocated-from-the-PCP.patch
index ec651a8d70..fe6a9625fa 100644
--- a/patches.suse/mm-page_alloc-defer-debugging-checks-of-pages-allocated-from-the-PCP.patch
+++ b/patches.suse/mm-page_alloc-defer-debugging-checks-of-pages-allocated-from-the-PCP.patch
@@ -5,7 +5,8 @@ Subject: [PATCH] mm, page_alloc: defer debugging checks of pages allocated
from the PCP
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 479f854a207ce2b97545a0a83856778b541063d0
Every page allocated checks a number of page fields for validity. This
catches corruption bugs of pages that are already freed but it is
diff --git a/patches.suse/mm-page_alloc-don-t-duplicate-code-in-free_pcp_prepare.patch b/patches.suse/mm-page_alloc-don-t-duplicate-code-in-free_pcp_prepare.patch
index db0fb8f332..2ca78c788c 100644
--- a/patches.suse/mm-page_alloc-don-t-duplicate-code-in-free_pcp_prepare.patch
+++ b/patches.suse/mm-page_alloc-don-t-duplicate-code-in-free_pcp_prepare.patch
@@ -4,7 +4,8 @@ Date: Tue, 3 May 2016 09:43:24 +0100
Subject: [PATCH] mm, page_alloc: don't duplicate code in free_pcp_prepare
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: e2769dbdc51f1baa1908ecf6c84d50f19577e1db
The new free_pcp_prepare() function shares a lot of code with
free_pages_prepare(), which makes this a maintenance risk when some future
diff --git a/patches.suse/mm-page_alloc-inline-pageblock-lookup-in-page-free-fast-paths.patch b/patches.suse/mm-page_alloc-inline-pageblock-lookup-in-page-free-fast-paths.patch
index 0e70060388..6105646498 100644
--- a/patches.suse/mm-page_alloc-inline-pageblock-lookup-in-page-free-fast-paths.patch
+++ b/patches.suse/mm-page_alloc-inline-pageblock-lookup-in-page-free-fast-paths.patch
@@ -5,7 +5,8 @@ Subject: [PATCH] mm, page_alloc: inline pageblock lookup in page free fast
paths
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 0b423ca22f95a867f789aab1fe57ee4e378df43b
The function call overhead of get_pfnblock_flags_mask() is measurable in
the page free paths. This patch uses an inlined version that is faster.
diff --git a/patches.suse/mm-page_alloc-inline-the-fast-path-of-the-zonelist-iterator.patch b/patches.suse/mm-page_alloc-inline-the-fast-path-of-the-zonelist-iterator.patch
index 3315b923be..2ce8cd3780 100644
--- a/patches.suse/mm-page_alloc-inline-the-fast-path-of-the-zonelist-iterator.patch
+++ b/patches.suse/mm-page_alloc-inline-the-fast-path-of-the-zonelist-iterator.patch
@@ -4,7 +4,8 @@ Date: Tue, 3 May 2016 09:43:22 +0100
Subject: [PATCH] mm, page_alloc: inline the fast path of the zonelist iterator
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 682a3385e7734fa3abbd504cbeb5fe91793f1827
The page allocator iterates through a zonelist for zones that match the
addressing limitations and nodemask of the caller but many allocations
diff --git a/patches.suse/mm-page_alloc-inline-zone_statistics.patch b/patches.suse/mm-page_alloc-inline-zone_statistics.patch
index b8cb97a6a2..edbe57a596 100644
--- a/patches.suse/mm-page_alloc-inline-zone_statistics.patch
+++ b/patches.suse/mm-page_alloc-inline-zone_statistics.patch
@@ -4,7 +4,8 @@ Date: Tue, 3 May 2016 09:43:22 +0100
Subject: [PATCH] mm, page_alloc: inline zone_statistics
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 060e74173f292fb3e0398b3dca8765568d195ff1
zone_statistics has one call-site but it's a public function. Make it
static and inline.
diff --git a/patches.suse/mm-page_alloc-move-__GFP_HARDWALL-modifications-out-of-the-fastpath.patch b/patches.suse/mm-page_alloc-move-__GFP_HARDWALL-modifications-out-of-the-fastpath.patch
index ddab62ad84..7017fc990b 100644
--- a/patches.suse/mm-page_alloc-move-__GFP_HARDWALL-modifications-out-of-the-fastpath.patch
+++ b/patches.suse/mm-page_alloc-move-__GFP_HARDWALL-modifications-out-of-the-fastpath.patch
@@ -5,7 +5,8 @@ Subject: [PATCH] mm, page_alloc: move __GFP_HARDWALL modifications out of the
fastpath
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 83d4ca8148fd9092715fd8ef75b30bbfd67fd2a9
__GFP_HARDWALL only has meaning in the context of cpusets but the fast
path always applies the flag on the first attempt. Move the manipulations
diff --git a/patches.suse/mm-page_alloc-only-check-PageCompound-for-high-order-pages.patch b/patches.suse/mm-page_alloc-only-check-PageCompound-for-high-order-pages.patch
index 1f5c8a4cbf..75ab1cf62e 100644
--- a/patches.suse/mm-page_alloc-only-check-PageCompound-for-high-order-pages.patch
+++ b/patches.suse/mm-page_alloc-only-check-PageCompound-for-high-order-pages.patch
@@ -4,7 +4,8 @@ Date: Tue, 3 May 2016 09:43:21 +0100
Subject: [PATCH] mm, page_alloc: only check PageCompound for high-order pages
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: d61f8590397480981f0d3ee7a2b38b5ea990db52
Another year, another round of page allocator optimisations focusing this
time on the alloc and free fast paths. This should be of help to
diff --git a/patches.suse/mm-page_alloc-prevent-infinite-loop-in-buffered_rmqueue.patch b/patches.suse/mm-page_alloc-prevent-infinite-loop-in-buffered_rmqueue.patch
new file mode 100644
index 0000000000..23bd4c0267
--- /dev/null
+++ b/patches.suse/mm-page_alloc-prevent-infinite-loop-in-buffered_rmqueue.patch
@@ -0,0 +1,44 @@
+From 10b2b6a3041812371c8802f06426cb2209bfe8c4 Mon Sep 17 00:00:00 2001
+From: Vlastimil Babka <vbabka@suse.cz>
+Date: Wed, 18 May 2016 09:41:01 +0200
+Subject: [PATCH] mm, page_alloc: prevent infinite loop in buffered_rmqueue()
+
+References: bnc#971975 VM performance -- page allocator
+Patch-mainline: No, expected 4.7 and queued in linux-mm
+
+In DEBUG_VM kernel, we can hit infinite loop for order == 0 in
+buffered_rmqueue() when check_new_pcp() returns 1, because the bad page is
+never removed from the pcp list. Fix this by removing the page before retrying.
+Also we don't need to check if page is non-NULL, because we simply grab it from
+the list which was just tested for being non-empty.
+
+Fixes: http://www.ozlabs.org/~akpm/mmotm/broken-out/mm-page_alloc-defer-debugging-checks-of-freed-pages-until-a-pcp-drain.patch
+Reported-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
+Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
+Signed-off-by: Mel Gorman <mgorman@suse.de>
+---
+ mm/page_alloc.c | 9 +++++----
+ 1 file changed, 5 insertions(+), 4 deletions(-)
+
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index f8f3bfc435ee..bb320cde4d6d 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -2609,11 +2609,12 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
+ page = list_entry(list->prev, struct page, lru);
+ else
+ page = list_entry(list->next, struct page, lru);
+- } while (page && check_new_pcp(page));
+
+- __dec_zone_state(zone, NR_ALLOC_BATCH);
+- list_del(&page->lru);
+- pcp->count--;
++ __dec_zone_state(zone, NR_ALLOC_BATCH);
++ list_del(&page->lru);
++ pcp->count--;
++
++ } while (check_new_pcp(page));
+ } else {
+ if (unlikely(gfp_flags & __GFP_NOFAIL)) {
+ /*
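Editor's aside: the hunk above moves the unlink of the page (list_del(), pcp->count--) ahead of the check_new_pcp() loop condition, so a page that fails the check can never be selected again on the next iteration. The following is a purely illustrative, standalone C sketch of that pop-before-validate pattern; it uses made-up names and plain C types, not the kernel's code.

#include <stdio.h>

struct entry {
	int ok;			/* stands in for check_new_pcp() passing */
	struct entry *next;
};

/*
 * Pop entries until one passes the check; return NULL if the list runs
 * out.  The candidate is unlinked *before* it is validated, mirroring
 * the fix: a bad entry cannot be picked again, so the loop terminates.
 */
static struct entry *pop_checked(struct entry **head)
{
	struct entry *e;

	do {
		if (!*head)
			return NULL;	/* list exhausted */
		e = *head;
		*head = e->next;	/* unlink first, as the fix does */
	} while (!e->ok);		/* then validate, retrying on bad entries */

	return e;
}

int main(void)
{
	struct entry good = { 1, NULL };
	struct entry bad  = { 0, &good };
	struct entry *head = &bad;	/* bad entry sits in front of a good one */

	printf("first valid entry ok=%d\n", pop_checked(&head)->ok);
	return 0;
}

With the unlink left after the check, as in the pre-fix code, the bad entry would stay at the head of the list and the loop would spin forever; unlinking first guarantees forward progress.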
diff --git a/patches.suse/mm-page_alloc-pull-out-side-effects-from-free_pages_check.patch b/patches.suse/mm-page_alloc-pull-out-side-effects-from-free_pages_check.patch
index abadfcf5b0..5f1f954f75 100644
--- a/patches.suse/mm-page_alloc-pull-out-side-effects-from-free_pages_check.patch
+++ b/patches.suse/mm-page_alloc-pull-out-side-effects-from-free_pages_check.patch
@@ -4,7 +4,8 @@ Date: Tue, 3 May 2016 09:43:23 +0100
Subject: [PATCH] mm, page_alloc: pull out side effects from free_pages_check
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: da838d4fcba675cbf864f225d76f970e91220ee6
Check without side-effects should be easier to maintain. It also removes
the duplicated cpupid and flags reset done in !DEBUG_VM variant of both
diff --git a/patches.suse/mm-page_alloc-reduce-branches-in-zone_statistics.patch b/patches.suse/mm-page_alloc-reduce-branches-in-zone_statistics.patch
index 315dad0c88..fb6f5fdce3 100644
--- a/patches.suse/mm-page_alloc-reduce-branches-in-zone_statistics.patch
+++ b/patches.suse/mm-page_alloc-reduce-branches-in-zone_statistics.patch
@@ -4,7 +4,8 @@ Date: Tue, 3 May 2016 09:43:22 +0100
Subject: [PATCH] mm, page_alloc: reduce branches in zone_statistics
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: b9f00e147f27d86691f7f52a3c8126d25432477c
zone_statistics has more branches than it really needs to take an unlikely
GFP flag into account. Reduce the number and annotate the unlikely flag.
diff --git a/patches.suse/mm-page_alloc-reduce-cost-of-fair-zone-allocation-policy-retry.patch b/patches.suse/mm-page_alloc-reduce-cost-of-fair-zone-allocation-policy-retry.patch
index 883b1383ea..f466e9d96a 100644
--- a/patches.suse/mm-page_alloc-reduce-cost-of-fair-zone-allocation-policy-retry.patch
+++ b/patches.suse/mm-page_alloc-reduce-cost-of-fair-zone-allocation-policy-retry.patch
@@ -5,7 +5,8 @@ Subject: [PATCH] mm, page_alloc: reduce cost of fair zone allocation policy
retry
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 305347550becd08fdb576df32fc0767842ed71a6
The fair zone allocation policy is not without cost but it can be reduced
slightly. This patch removes an unnecessary local variable, checks the
diff --git a/patches.suse/mm-page_alloc-remove-field-from-alloc_context.patch b/patches.suse/mm-page_alloc-remove-field-from-alloc_context.patch
index ecd9574d51..158d7b03bd 100644
--- a/patches.suse/mm-page_alloc-remove-field-from-alloc_context.patch
+++ b/patches.suse/mm-page_alloc-remove-field-from-alloc_context.patch
@@ -4,7 +4,8 @@ Date: Tue, 3 May 2016 09:43:23 +0100
Subject: [PATCH] mm, page_alloc: remove field from alloc_context
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 93ea9964d14ad583492ffb9ab7543f015876aaf2
The classzone_idx can be inferred from preferred_zoneref so remove the
unnecessary field and save stack space.
diff --git a/patches.suse/mm-page_alloc-remove-unnecessary-initialisation-from-__alloc_pages_nodemask.patch b/patches.suse/mm-page_alloc-remove-unnecessary-initialisation-from-__alloc_pages_nodemask.patch
index fee517deb8..5577efb5d0 100644
--- a/patches.suse/mm-page_alloc-remove-unnecessary-initialisation-from-__alloc_pages_nodemask.patch
+++ b/patches.suse/mm-page_alloc-remove-unnecessary-initialisation-from-__alloc_pages_nodemask.patch
@@ -5,7 +5,8 @@ Subject: [PATCH] mm, page_alloc: remove unnecessary initialisation from
__alloc_pages_nodemask()
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 5bb1b169757875a72e05bfcbb76e22602cb1a760
page is guaranteed to be set before it is read with or without the
initialisation.
diff --git a/patches.suse/mm-page_alloc-remove-unnecessary-initialisation-in-get_page_from_freelist.patch b/patches.suse/mm-page_alloc-remove-unnecessary-initialisation-in-get_page_from_freelist.patch
index 7eee9d40b7..3e721fac03 100644
--- a/patches.suse/mm-page_alloc-remove-unnecessary-initialisation-in-get_page_from_freelist.patch
+++ b/patches.suse/mm-page_alloc-remove-unnecessary-initialisation-in-get_page_from_freelist.patch
@@ -5,7 +5,8 @@ Subject: [PATCH] mm, page_alloc: remove unnecessary initialisation in
get_page_from_freelist
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: be06af002f6d50de10fd602ce3a6aa5d28e88d38
See subject.
diff --git a/patches.suse/mm-page_alloc-remove-unnecessary-local-variable-in-get_page_from_freelist.patch b/patches.suse/mm-page_alloc-remove-unnecessary-local-variable-in-get_page_from_freelist.patch
index 18586fe12b..c6bb541bc1 100644
--- a/patches.suse/mm-page_alloc-remove-unnecessary-local-variable-in-get_page_from_freelist.patch
+++ b/patches.suse/mm-page_alloc-remove-unnecessary-local-variable-in-get_page_from_freelist.patch
@@ -5,7 +5,8 @@ Subject: [PATCH] mm, page_alloc: remove unnecessary local variable in
get_page_from_freelist
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 4dfa6cd8fdb1682586d463bc34888980fe98eb46
zonelist here is a copy of a struct field that is used once. Ditch it.
diff --git a/patches.suse/mm-page_alloc-remove-unnecessary-variable-from-free_pcppages_bulk.patch b/patches.suse/mm-page_alloc-remove-unnecessary-variable-from-free_pcppages_bulk.patch
index aed9464c7b..c5a8ea9aa1 100644
--- a/patches.suse/mm-page_alloc-remove-unnecessary-variable-from-free_pcppages_bulk.patch
+++ b/patches.suse/mm-page_alloc-remove-unnecessary-variable-from-free_pcppages_bulk.patch
@@ -5,7 +5,8 @@ Subject: [PATCH] mm, page_alloc: remove unnecessary variable from
free_pcppages_bulk
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: e5b31ac2ca2cd0cf6bf2fcbb708ed01466c89aaa
The original count is never reused so it can be removed.
diff --git a/patches.suse/mm-page_alloc-restore-the-original-nodemask-if-the-fast-path-allocation-failed.patch b/patches.suse/mm-page_alloc-restore-the-original-nodemask-if-the-fast-path-allocation-failed.patch
index 288cde05c0..0502d69f44 100644
--- a/patches.suse/mm-page_alloc-restore-the-original-nodemask-if-the-fast-path-allocation-failed.patch
+++ b/patches.suse/mm-page_alloc-restore-the-original-nodemask-if-the-fast-path-allocation-failed.patch
@@ -5,7 +5,8 @@ Subject: [PATCH] mm, page_alloc: restore the original nodemask if the fast
path allocation failed
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 4741526b83c5d3a3d661d1896f9e7414c5730bcb
The page allocator fast path uses either the requested nodemask or
cpuset_current_mems_allowed if cpusets are enabled. If the allocation
diff --git a/patches.suse/mm-page_alloc-shortcut-watermark-checks-for-order-0-pages.patch b/patches.suse/mm-page_alloc-shortcut-watermark-checks-for-order-0-pages.patch
index 256dbdeb97..2c94a5cf77 100644
--- a/patches.suse/mm-page_alloc-shortcut-watermark-checks-for-order-0-pages.patch
+++ b/patches.suse/mm-page_alloc-shortcut-watermark-checks-for-order-0-pages.patch
@@ -4,7 +4,8 @@ Date: Tue, 3 May 2016 09:43:23 +0100
Subject: [PATCH] mm, page_alloc: shortcut watermark checks for order-0 pages
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 48ee5f3696f62496481a8b6d852bcad9b3ebbe37
Watermarks have to be checked on every allocation including the number of
pages being allocated and whether reserves can be accessed. The reserves
diff --git a/patches.suse/mm-page_alloc-shorten-the-page-allocator-fast-path.patch b/patches.suse/mm-page_alloc-shorten-the-page-allocator-fast-path.patch
index 92dd0ac4c7..e2167a5d89 100644
--- a/patches.suse/mm-page_alloc-shorten-the-page-allocator-fast-path.patch
+++ b/patches.suse/mm-page_alloc-shorten-the-page-allocator-fast-path.patch
@@ -4,7 +4,8 @@ Date: Tue, 3 May 2016 09:43:23 +0100
Subject: [PATCH] mm, page_alloc: shorten the page allocator fast path
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 4fcb0971175f6037590d7b7772fe6619261d2165
The page allocator fast path checks page multiple times unnecessarily.
This patch avoids all the slowpath checks if the first allocation attempt
diff --git a/patches.suse/mm-page_alloc-simplify-last-cpupid-reset.patch b/patches.suse/mm-page_alloc-simplify-last-cpupid-reset.patch
index 2233357e85..43827b358a 100644
--- a/patches.suse/mm-page_alloc-simplify-last-cpupid-reset.patch
+++ b/patches.suse/mm-page_alloc-simplify-last-cpupid-reset.patch
@@ -4,7 +4,8 @@ Date: Tue, 3 May 2016 09:43:22 +0100
Subject: [PATCH] mm, page_alloc: simplify last cpupid reset
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 09940a4f1e816abe3248fa0d185fc0e7f54c8c12
The current reset unnecessarily clears flags and makes pointless calculations.
diff --git a/patches.suse/mm-page_alloc-un-inline-the-bad-part-of-free_pages_check.patch b/patches.suse/mm-page_alloc-un-inline-the-bad-part-of-free_pages_check.patch
index ea9e9f1d22..09210d5ff7 100644
--- a/patches.suse/mm-page_alloc-un-inline-the-bad-part-of-free_pages_check.patch
+++ b/patches.suse/mm-page_alloc-un-inline-the-bad-part-of-free_pages_check.patch
@@ -4,7 +4,8 @@ Date: Tue, 3 May 2016 09:43:23 +0100
Subject: [PATCH] mm, page_alloc: un-inline the bad part of free_pages_check
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: bb552ac6c6b4f24e7a7b491286f87b63f9478d42
!DEBUG_VM size and bloat-o-meter:
diff --git a/patches.suse/mm-page_alloc-uninline-the-bad-page-part-of-check_new_page.patch b/patches.suse/mm-page_alloc-uninline-the-bad-page-part-of-check_new_page.patch
index 3f4dbf16e5..149fc50295 100644
--- a/patches.suse/mm-page_alloc-uninline-the-bad-page-part-of-check_new_page.patch
+++ b/patches.suse/mm-page_alloc-uninline-the-bad-page-part-of-check_new_page.patch
@@ -5,7 +5,8 @@ Subject: [PATCH] mm, page_alloc: uninline the bad page part of
check_new_page()
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 4e6118016eb7986109ad61b00186579f384f956a
Bad pages should be rare so the code handling them doesn't need to be
inline for performance reasons. Put it to separate function which returns
diff --git a/patches.suse/mm-page_alloc-use-__dec_zone_state-for-order-0-page-allocation.patch b/patches.suse/mm-page_alloc-use-__dec_zone_state-for-order-0-page-allocation.patch
index eb0962b420..6b68ffb213 100644
--- a/patches.suse/mm-page_alloc-use-__dec_zone_state-for-order-0-page-allocation.patch
+++ b/patches.suse/mm-page_alloc-use-__dec_zone_state-for-order-0-page-allocation.patch
@@ -5,7 +5,8 @@ Subject: [PATCH] mm, page_alloc: use __dec_zone_state for order-0 page
allocation
References: bnc#971975 VM performance -- page allocator
-Patch-mainline: No, expected 4.7 and queued in linux-mm
+Patch-mainline: v4.7
+Git-commit: 754078eb45df8069f389f3371002e7e24962e1a2
__dec_zone_state is cheaper to use for removing an order-0 page as it has
fewer conditions to check.
diff --git a/series.conf b/series.conf
index 56791fba16..b755877247 100644
--- a/series.conf
+++ b/series.conf
@@ -217,6 +217,7 @@
patches.suse/mm-page_alloc-don-t-duplicate-code-in-free_pcp_prepare.patch
patches.suse/mm-page_alloc-uninline-the-bad-page-part-of-check_new_page.patch
patches.suse/mm-page_alloc-restore-the-original-nodemask-if-the-fast-path-allocation-failed.patch
+ patches.suse/mm-page_alloc-prevent-infinite-loop-in-buffered_rmqueue.patch
# -stable pending times/clock_gettime performance (bnc#981795)
# Patch slightly improves the situation but still short of expectations