author    Michal Hocko <mhocko@suse.com>  2017-12-04 08:50:50 +0100
committer Michal Hocko <mhocko@suse.com>  2017-12-04 08:50:50 +0100
commit    36319500a07cf99d643f53d7cf4f5e8a4c3da7eb (patch)
tree      10c12a11be8bbbb4f8d974b705ea2da0d2981389
parent    deef723eb671b12a5318ffa47f58c2029c3f0681 (diff)
Refresh patches.fixes/mm-thp-do-not-dirty-huge-pte-on-read-fault.patch

Merge the final version that was merged upstream. The maybe_pmd_mkwrite
parts were separated into their own upstream patch, which is broken wrt.
CoW (our tree had fixed that up). Anyway, let's wait until that sorts
itself out and then merge the final, working fix. Keep only the main fix
in the meantime.
-rw-r--r--  patches.fixes/mm-thp-do-not-dirty-huge-pte-on-read-fault.patch  62
1 file changed, 15 insertions(+), 47 deletions(-)
diff --git a/patches.fixes/mm-thp-do-not-dirty-huge-pte-on-read-fault.patch b/patches.fixes/mm-thp-do-not-dirty-huge-pte-on-read-fault.patch
index d486482680..43a2065997 100644
--- a/patches.fixes/mm-thp-do-not-dirty-huge-pte-on-read-fault.patch
+++ b/patches.fixes/mm-thp-do-not-dirty-huge-pte-on-read-fault.patch
@@ -1,18 +1,24 @@
Date: Fri, 24 Nov 2017 13:22:27 +0300
From: "Kirill A. Shutemov" <kirill@shutemov.name>
Subject: mm, thp: do not dirty huge pages on read fault
-Patch-mainline: not yet (discussed on security@kernel.org)
+Patch-mainline: 4.15-rc2
+Git-commit: a8f97366452ed491d13cf1e44241bc0b5740b1f0
References: bnc#1069496, CVE-2017-1000405
-touch_p[mu]d dirties p[mu]d even on a read fault. This breaks the CoW
-bail-out logic, which relies on the dirty bit. As a result, a read-only
-mapping can be overwritten, with all due security consequences. Fix this
-by checking FOLL_WRITE when dirtying page tables in the g-u-p path and
-also by never dirtying page tables when the underlying vma is read only.
+mm, thp: Do not make page table dirty unconditionally in touch_p[mu]d()
-Fixes: 8310d48b125d ("mm/huge_memory.c: respect FOLL_FORCE/FOLL_COW for thp")
-Reported-by: Bindecy <contact@bindecy.com>
-Signed-off-by: "Kirill A. Shutemov" <kirill@shutemov.name>
+Currently, we unconditionally make page table dirty in touch_pmd().
+It may result in false-positive can_follow_write_pmd().
+
+We may avoid the situation, if we would only make the page table entry
+dirty if caller asks for write access -- FOLL_WRITE.
+
+The patch also changes touch_pud() in the same way.
+
+Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
+Cc: Michal Hocko <mhocko@suse.com>
+Cc: Hugh Dickins <hughd@google.com>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
@@ -22,24 +28,6 @@ Signed-off-by: Michal Hocko <mhocko@suse.com>
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
-@@ -709,7 +709,7 @@ __setup("transparent_hugepage=", setup_t
- pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
- {
- if (likely(vma->vm_flags & VM_WRITE))
-- pmd = pmd_mkwrite(pmd);
-+ pmd = pmd_mkwrite(pmd_mkdirty(pmd));
- return pmd;
- }
-
-@@ -779,7 +779,7 @@ static int __do_huge_pmd_anonymous_page(
- }
-
- entry = mk_huge_pmd(page, vma->vm_page_prot);
-- entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
-+ entry = maybe_pmd_mkwrite(entry, vma);
- page_add_new_anon_rmap(page, vma, haddr);
- mem_cgroup_commit_charge(page, memcg, false);
- lru_cache_add_active_or_unevictable(page, vma);
@@ -1334,17 +1334,12 @@ struct page *follow_trans_huge_pmd(struc
VM_BUG_ON_PAGE(!PageHead(page), page);
if (flags & FOLL_TOUCH) {
@@ -63,23 +51,3 @@ Signed-off-by: Michal Hocko <mhocko@suse.com>
update_mmu_cache_pmd(vma, addr, pmd);
}
if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
-@@ -2682,7 +2677,7 @@ static void collapse_huge_page(struct mm
- pgtable = pmd_pgtable(_pmd);
-
- _pmd = mk_huge_pmd(new_page, vma->vm_page_prot);
-- _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
-+ _pmd = maybe_pmd_mkwrite(_pmd, vma);
-
- /*
- * spin_lock() below is not the equivalent of smp_wmb(), so
---- a/mm/migrate.c
-+++ b/mm/migrate.c
-@@ -1815,7 +1815,7 @@ fail_putback:
- orig_entry = *pmd;
- entry = mk_pmd(new_page, vma->vm_page_prot);
- entry = pmd_mkhuge(entry);
-- entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
-+ entry = maybe_pmd_mkwrite(entry, vma);
-
- /*
- * Clear the old entry under pagetable lock and establish the new PTE.
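
For context, a sketch of the upstream shape of the fix that this refresh
tracks, based on mainline commit a8f97366452e and on the
can_follow_write_pmd() helper introduced by 8310d48b125d ("mm/huge_memory.c:
respect FOLL_FORCE/FOLL_COW for thp"). Treat it as illustrative only; line
numbers and details in this tree may differ:

/*
 * Helper from 8310d48b125d: a FOLL_FORCE caller may write through a
 * read-only PMD only if CoW has already happened, which is inferred
 * from the dirty bit.
 */
static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
{
	return pmd_write(pmd) ||
	       ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
}

/*
 * touch_pmd() as fixed by a8f97366452e: mark the PMD accessed on every
 * FOLL_TOUCH, but dirty it only when the caller asked for write access.
 * Dirtying it unconditionally is what turned the inference above into a
 * false positive on read faults (CVE-2017-1000405).
 */
static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
		pmd_t *pmd, int flags)
{
	pmd_t _pmd;

	_pmd = pmd_mkyoung(*pmd);
	if (flags & FOLL_WRITE)
		_pmd = pmd_mkdirty(_pmd);
	if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK,
				pmd, _pmd, flags & FOLL_WRITE))
		update_mmu_cache_pmd(vma, addr, pmd);
}

touch_pud() receives the same treatment upstream. The separated
maybe_pmd_mkwrite() rework mentioned in the commit message is deliberately
not reflected here.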