From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>, Mel Gorman <mgorman@suse.de>,
	Rik van Riel <riel@redhat.com>, Vlastimil Babka <vbabka@suse.cz>,
	Christoph Lameter <cl@gentwo.org>,
	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
	Steve Capper <steve.capper@linaro.org>,
	"Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>,
	Johannes Weiner <hannes@cmpxchg.org>, Michal Hocko <mhocko@suse.cz>,
	Jerome Marchand <jmarchan@redhat.com>,
	Sasha Levin <sasha.levin@oracle.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCHv6 29/36] thp: implement split_huge_pmd()
Date: Wed, 3 Jun 2015 20:06:00 +0300
Message-ID: <1433351167-125878-30-git-send-email-kirill.shutemov@linux.intel.com>
In-Reply-To: <1433351167-125878-1-git-send-email-kirill.shutemov@linux.intel.com>

The original split_huge_page() combined two operations: splitting a PMD
into a table of PTEs and splitting the underlying compound page. This
patch implements split_huge_pmd(), which splits the given PMD without
splitting any other PMDs this page is mapped with and without splitting
the underlying compound page.

Without tail page refcounting, the implementation of split_huge_pmd() is
pretty straightforward.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Tested-by: Sasha Levin <sasha.levin@oracle.com>
---
 include/linux/huge_mm.h |  11 ++++-
 mm/huge_memory.c        | 119 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 129 insertions(+), 1 deletion(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 67b3f3760f4a..9cfce23ab9d0 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -94,7 +94,16 @@ extern unsigned long transparent_hugepage_flags;
 #define split_huge_page_to_list(page, list) BUILD_BUG()
 #define split_huge_page(page) BUILD_BUG()
-#define split_huge_pmd(__vma, __pmd, __address) BUILD_BUG()
+
+void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+		unsigned long address);
+
+#define split_huge_pmd(__vma, __pmd, __address)				\
+	do {								\
+		pmd_t *____pmd = (__pmd);				\
+		if (pmd_trans_huge(*____pmd))				\
+			__split_huge_pmd(__vma, __pmd, __address);	\
+	} while (0)
 
 #if HPAGE_PMD_ORDER >= MAX_ORDER
 #error "hugepages can't be allocated by the buddy allocator"
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5b0a13d2f28c..0f1f5731a893 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2547,6 +2547,125 @@ static int khugepaged(void *none)
 	return 0;
 }
 
+static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
+		unsigned long haddr, pmd_t *pmd)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	pgtable_t pgtable;
+	pmd_t _pmd;
+	int i;
+
+	/* leave pmd empty until pte is filled */
+	pmdp_huge_clear_flush_notify(vma, haddr, pmd);
+
+	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
+	pmd_populate(mm, &_pmd, pgtable);
+
+	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
+		pte_t *pte, entry;
+		entry = pfn_pte(my_zero_pfn(haddr), vma->vm_page_prot);
+		entry = pte_mkspecial(entry);
+		pte = pte_offset_map(&_pmd, haddr);
+		VM_BUG_ON(!pte_none(*pte));
+		set_pte_at(mm, haddr, pte, entry);
+		pte_unmap(pte);
+	}
+	smp_wmb(); /* make pte visible before pmd */
+	pmd_populate(mm, pmd, pgtable);
+	put_huge_zero_page();
+}
+
+static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
+		unsigned long haddr)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	struct page *page;
+	pgtable_t pgtable;
+	pmd_t _pmd;
+	bool young, write;
+	int i;
+
+	VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
+	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
+	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
+	VM_BUG_ON(!pmd_trans_huge(*pmd));
+
+	count_vm_event(THP_SPLIT_PMD);
+
+	if (is_huge_zero_pmd(*pmd))
+		return __split_huge_zero_page_pmd(vma, haddr, pmd);
+
+	page = pmd_page(*pmd);
+	VM_BUG_ON_PAGE(!page_count(page), page);
+	atomic_add(HPAGE_PMD_NR - 1, &page->_count);
+	write = pmd_write(*pmd);
+	young = pmd_young(*pmd);
+
+	/* leave pmd empty until pte is filled */
+	pmdp_huge_clear_flush_notify(vma, haddr, pmd);
+
+	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
+	pmd_populate(mm, &_pmd, pgtable);
+
+	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
+		pte_t entry, *pte;
+		/*
+		 * Note that NUMA hinting access restrictions are not
+		 * transferred to avoid any possibility of altering
+		 * permissions across VMAs.
+		 */
+		entry = mk_pte(page + i, vma->vm_page_prot);
+		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		if (!write)
+			entry = pte_wrprotect(entry);
+		if (!young)
+			entry = pte_mkold(entry);
+		pte = pte_offset_map(&_pmd, haddr);
+		BUG_ON(!pte_none(*pte));
+		set_pte_at(mm, haddr, pte, entry);
+		atomic_inc(&page[i]._mapcount);
+		pte_unmap(pte);
+	}
+
+	if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
+		/* Last compound_mapcount is gone. */
+		__dec_zone_page_state(page, NR_ANON_TRANSPARENT_HUGEPAGES);
+		if (PageDoubleMap(page)) {
+			/* No need in mapcount reference anymore */
+			ClearPageDoubleMap(page);
+			for (i = 0; i < HPAGE_PMD_NR; i++)
+				atomic_dec(&page[i]._mapcount);
+		}
+	} else if (!TestSetPageDoubleMap(page)) {
+		/*
+		 * The first PMD split for the compound page and we still
+		 * have other PMD mappings of the page: bump _mapcount in
+		 * every small page.
+		 * This reference will go away with last compound_mapcount.
+		 */
+		for (i = 0; i < HPAGE_PMD_NR; i++)
+			atomic_inc(&page[i]._mapcount);
+	}
+
+	smp_wmb(); /* make pte visible before pmd */
+	pmd_populate(mm, pmd, pgtable);
+}
+
+void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+		unsigned long address)
+{
+	spinlock_t *ptl;
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long haddr = address & HPAGE_PMD_MASK;
+
+	mmu_notifier_invalidate_range_start(mm, haddr, haddr + HPAGE_PMD_SIZE);
+	ptl = pmd_lock(mm, pmd);
+	if (likely(pmd_trans_huge(*pmd)))
+		__split_huge_pmd_locked(vma, pmd, haddr);
+	spin_unlock(ptl);
+	mmu_notifier_invalidate_range_end(mm, haddr, haddr + HPAGE_PMD_SIZE);
+}
+
 static void split_huge_pmd_address(struct vm_area_struct *vma,
 		unsigned long address)
 {
-- 
2.1.4
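The trickiest part of __split_huge_pmd_locked() is the DoubleMap accounting at the end: when the first PMD mapping of a page is split while other PMD mappings remain, each subpage takes one extra _mapcount on behalf of all remaining compound mappings, and that extra reference is dropped when the last compound mapping goes away. The sketch below models that bookkeeping in userspace. It is purely illustrative and not kernel code: the struct and function names (model_page, split_one_pmd, etc.) are invented, plain ints stand in for atomic_t, and there is no locking, but the counters follow the kernel's bias where a stored value of -1 means "unmapped" (mapcount == stored value + 1).

```c
#include <assert.h>
#include <stdbool.h>

#define HPAGE_PMD_NR 512	/* 2M huge page / 4k base pages on x86-64 */

/* Userspace model of a compound page's mapcount state (names invented). */
struct model_page {
	int compound_mapcount;			/* PMD mappings, biased by -1 */
	int tail_mapcount[HPAGE_PMD_NR];	/* PTE mappings per subpage, biased by -1 */
	bool double_map;			/* PageDoubleMap() analogue */
};

static void page_init(struct model_page *p)
{
	int i;

	p->compound_mapcount = -1;
	for (i = 0; i < HPAGE_PMD_NR; i++)
		p->tail_mapcount[i] = -1;
	p->double_map = false;
}

/* A new PMD mapping of the compound page (e.g. a second VMA or fork). */
static void map_pmd(struct model_page *p)
{
	p->compound_mapcount++;
}

/* Replay the accounting done while one PMD mapping is split into PTEs. */
static void split_one_pmd(struct model_page *p)
{
	int i;

	/* set_pte_at() + atomic_inc(&page[i]._mapcount) for every subpage */
	for (i = 0; i < HPAGE_PMD_NR; i++)
		p->tail_mapcount[i]++;

	if (--p->compound_mapcount < 0) {
		/* atomic_add_negative(): the last compound mapping is gone */
		if (p->double_map) {
			/* drop the extra per-subpage reference */
			p->double_map = false;
			for (i = 0; i < HPAGE_PMD_NR; i++)
				p->tail_mapcount[i]--;
		}
	} else if (!p->double_map) {
		/*
		 * First split while other PMD mappings remain: take one
		 * extra _mapcount per subpage on behalf of all of them.
		 */
		p->double_map = true;
		for (i = 0; i < HPAGE_PMD_NR; i++)
			p->tail_mapcount[i]++;
	}
}
```

Walking a page mapped by two PMDs through two splits shows the invariant: after the first split each subpage holds its PTE reference plus the DoubleMap reference, and after the second split the compound mapcount drains to "unmapped" while each subpage ends up with exactly its two PTE mappings.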