Commit graph

5919 commits

Rabin Vincent
204a535e11 mm: show migration types in show_mem
This is useful to diagnose the reason for page allocation failure for
cases where there appear to be several free pages.

For example, with this alloc_pages(GFP_ATOMIC) failure:

 swapper/0: page allocation failure: order:0, mode:0x0
 ...
 Mem-info:
 Normal per-cpu:
 CPU    0: hi:   90, btch:  15 usd:  48
 CPU    1: hi:   90, btch:  15 usd:  21
 active_anon:0 inactive_anon:0 isolated_anon:0
  active_file:0 inactive_file:84 isolated_file:0
  unevictable:0 dirty:0 writeback:0 unstable:0
  free:4026 slab_reclaimable:75 slab_unreclaimable:484
  mapped:0 shmem:0 pagetables:0 bounce:0
 Normal free:16104kB min:2296kB low:2868kB high:3444kB active_anon:0kB
 inactive_anon:0kB active_file:0kB inactive_file:336kB unevictable:0kB
 isolated(anon):0kB isolated(file):0kB present:331776kB mlocked:0kB
 dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:300kB
 slab_unreclaimable:1936kB kernel_stack:328kB pagetables:0kB unstable:0kB
 bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
 lowmem_reserve[]: 0 0

Before the patch, it's hard (for me, at least) to say why all these free
chunks weren't considered for allocation:

 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 1*256kB 1*512kB
 1*1024kB 1*2048kB 3*4096kB = 16128kB

After the patch, it's obvious that the reason is that all of these are
in the MIGRATE_CMA (C) freelist:

 Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 1*256kB (C) 1*512kB
 (C) 1*1024kB (C) 1*2048kB (C) 3*4096kB (C) = 16128kB
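
A sketch of how such a legend can be produced while walking the free
lists; the 'C' for MIGRATE_CMA matches the output above, the remaining
letters are assumptions:

    static void show_migration_types(unsigned char type)
    {
            static const char types[MIGRATE_TYPES] = {
                    [MIGRATE_UNMOVABLE]     = 'U',
                    [MIGRATE_RECLAIMABLE]   = 'E',
                    [MIGRATE_MOVABLE]       = 'M',
                    [MIGRATE_RESERVE]       = 'R',
    #ifdef CONFIG_CMA
                    [MIGRATE_CMA]           = 'C',
    #endif
                    [MIGRATE_ISOLATE]       = 'I',
            };
            char tmp[MIGRATE_TYPES + 1];
            char *p = tmp;
            int i;

            /* one letter per migratetype present in the bitmask */
            for (i = 0; i < MIGRATE_TYPES; i++)
                    if (type & (1 << i))
                            *p++ = types[i];
            *p = '\0';
            printk("(%s) ", tmp);
    }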

Change-Id: Ic5fe77d762e0c03715bfb917774e7c4f03ac43f5
Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-03-07 15:25:06 -08:00
Michal Nazarewicz
a1357d5f1b mm: cma: on movable allocations try MIGRATE_CMA first
It has been observed that the system tends to keep a lot of free CMA
pages even in very high memory pressure use cases.  The CMA fallback
for movable pages is used very rarely, only when the system is
completely out of MOVABLE pages.  This means that out-of-memory is
triggered for unmovable allocations even when there are many CMA
pages available.  This problem was not observed previously, since
movable pages were used as a fallback for unmovable allocations.

To avoid such a situation, this commit changes the allocation order
so that on movable allocations the MIGRATE_CMA pageblocks are used
first.

This change means that MIGRATE_CMA can be removed from the fallback
path of the MIGRATE_MOVABLE type, so the __rmqueue_fallback()
function will never deal with CMA pages and all the checks around
MIGRATE_CMA can be removed from that function.
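
A sketch of the resulting order in __rmqueue() (helper names exist in
page_alloc.c; the exact shape of the patch is assumed):

    static struct page *__rmqueue(struct zone *zone, unsigned int order,
                                  int migratetype)
    {
            struct page *page = NULL;

    #ifdef CONFIG_CMA
            /* movable allocations now try the CMA pageblocks first */
            if (migratetype == MIGRATE_MOVABLE)
                    page = __rmqueue_smallest(zone, order, MIGRATE_CMA);
    #endif
            if (!page)
                    page = __rmqueue_smallest(zone, order, migratetype);
            if (!page)
                    page = __rmqueue_fallback(zone, order, migratetype);
            return page;
    }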

Change-Id: Ie13312d62a6af12d7aa78b4283ed25535a6d49fd
CRs-Fixed: 435287
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-03-07 15:25:05 -08:00
Liam Mark
bd6b0d88ed android/lowmemorykiller: Selectively count free CMA pages
In certain memory configurations there can be a large number of
CMA pages which are not suitable to satisfy certain memory
requests.

This large number of unsuitable pages can cause the
lowmemorykiller to not kill any tasks because the
lowmemorykiller counts all free pages.
To ensure that the lowmemorykiller properly evaluates free memory,
only count the free CMA pages if they are suitable for satisfying
the memory request.
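
A sketch of the adjustment in the shrinker (NR_FREE_CMA_PAGES comes
from the "count free CMA pages" patch elsewhere in this series; the
exact condition is an assumption):

    int other_free = global_page_state(NR_FREE_PAGES);

    #ifdef CONFIG_CMA
    /* CMA pages can only serve movable requests, so do not count
     * them when evaluating a non-movable allocation */
    if (!(sc->gfp_mask & __GFP_MOVABLE))
            other_free -= global_page_state(NR_FREE_CMA_PAGES);
    #endif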

Change-Id: I7f06d53e2d8cfe7439e5561fe6e5209ce73b1c90
CRs-fixed: 437016
Signed-off-by: Liam Mark <lmark@codeaurora.org>
2013-03-07 15:24:17 -08:00
Laura Abbott
a5e1696551 mm: Use correct define for CMA features
CMA features may ifdef out parts of the code with
CONFIG_CMA. Older code uses CONFIG_DMA_CMA. Switch
to using the newer CONFIG_CMA to ensure the code gets
compiled when needed.

Change-Id: I3cae639797787b4926a6c5e057de973b66196707
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Neha Pandey <nehap@codeaurora.org>
2013-03-07 15:23:58 -08:00
Larry Bassel
bad999e743 mm: make counts of CMA free pages correct
Both patches are needed; the second patch (among other things) fixes
a bug in the first.

commit 2139cbe627
Author: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Date:   Mon Oct 8 16:32:00 2012 -0700

    cma: fix counting of isolated pages

    Isolated free pages shouldn't be accounted to NR_FREE_PAGES counter.  Fix
    it by properly decreasing/increasing NR_FREE_PAGES counter in
    set_migratetype_isolate()/unset_migratetype_isolate() and removing counter
    adjustment for isolated pages from free_one_page() and split_free_page().

    Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
    Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
    Cc: Marek Szyprowski <m.szyprowski@samsung.com>
    Cc: Michal Nazarewicz <mina86@mina86.com>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Mel Gorman <mgorman@suse.de>
    Cc: Hugh Dickins <hughd@google.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    [lbassel@codeaurora.org: backport from 3.7, small changes needed]
    Signed-off-by: Larry Bassel <lbassel@codeaurora.org>

commit d1ce749a0d
Author: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Date:   Mon Oct 8 16:32:02 2012 -0700

    cma: count free CMA pages

    Add NR_FREE_CMA_PAGES counter to be later used for checking watermark in
    __zone_watermark_ok().  For simplicity and to avoid #ifdef hell make this
    counter always available (not only when CONFIG_CMA=y).

    [akpm@linux-foundation.org: use conventional migratetype naming]
    Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
    Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
    Cc: Marek Szyprowski <m.szyprowski@samsung.com>
    Cc: Michal Nazarewicz <mina86@mina86.com>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Mel Gorman <mgorman@suse.de>
    Cc: Hugh Dickins <hughd@google.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    [lbassel@codeaurora.org: backport from 3.7, small changes needed]
    Signed-off-by: Larry Bassel <lbassel@codeaurora.org>

Change-Id: I7d4f5fe0b6931192706337e0b730f43e7cccd031
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
Signed-off-by: Neha Pandey <nehap@codeaurora.org>
2013-03-07 15:23:58 -08:00
Laura Abbott
3c2b534580 mm: Use aligned zone start for pfn_to_bitidx calculation
The current calculation in pfn_to_bitidx assumes that
(pfn - zone->zone_start_pfn) >> pageblock_order will return the
same bit for all pfn in a pageblock. If zone_start_pfn is not
aligned to pageblock_nr_pages, this may not always be correct.

Consider the following with pageblock order = 10, zone start 2MB:

pfn     | pfn - zone start | (pfn - zone start) >> pageblock order
------------------------------------------------------------------
0x26000 | 0x25e00          | 0x97
0x26100 | 0x25f00          | 0x97
0x26200 | 0x26000          | 0x98
0x26300 | 0x26100          | 0x98

This means that calling {get,set}_pageblock_migratetype on a single
page will not set the migratetype for the full block. Fix this by
rounding down zone_start_pfn when doing the bitidx calculation.
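
A sketch of the fix, mirroring the description (non-SPARSEMEM variant):

    static inline int pfn_to_bitidx(struct zone *zone, unsigned long pfn)
    {
            /* round the zone start down so that every pfn within a
             * pageblock maps to the same bit index */
            pfn = pfn - round_down(zone->zone_start_pfn, pageblock_nr_pages);
            return (pfn >> pageblock_order) * NR_PAGEBLOCK_BITS;
    }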

Change-Id: I13e2f53f50db294f38ec86138c17c6fe29f0ee82
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:38 -08:00
Minchan Kim
360ffdd941 cma: fix migration mode
__alloc_contig_migrate_range() calls migrate_pages() with the wrong
argument for migrate_mode. Fix it.

Change-Id: I84697cf7c6aef6253e9ee7e5b3028c946b95e253
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:37 -08:00
woojoong.lee
200d9f6ddf cma: use migrate_prep() instead of migrate_prep_local()
__alloc_contig_migrate_range() should use all possible ways to get all
the pages migrated from the given memory range, so pruning the per-cpu
LRU lists for all CPUs is required, regardless of the cost of that
operation. Otherwise some pages which got stuck on a per-cpu LRU list
might be missed by the migration procedure, causing the contiguous
allocation to fail.
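
The change itself is essentially one line in
__alloc_contig_migrate_range(); a sketch:

    /* before: drained only the calling CPU's LRU pagevecs */
    migrate_prep_local();

    /* after: schedules lru_add_drain on every CPU, so no page in the
     * range can hide on a remote per-cpu LRU list */
    migrate_prep();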

Change-Id: I70cc0864c57dd49e89f57797122a3fd0f300647a
Signed-off-by: woojoong.lee <woojoong.lee@samsung.com>
Reviewed-on: http://165.213.202.130:8080/43063
Tested-by: System S/W SCM <scm.systemsw@samsung.com>
Reviewed-by: daeho jeong <daeho.jeong@samsung.com>
Reviewed-by: Jeong-Ho Kim <jammer@samsung.com>
Tested-by: Jeong-Ho Kim <jammer@samsung.com>
[lauraa@codeaurora.org: Applied to correct file]
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:36 -08:00
Laura Abbott
9fa56006b4 mm: Add is_cma_pageblock definition
Bring back the is_cma_pageblock definition for determining if a
page is CMA or not.
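
A plausible form of the helper (the exact definition is assumed):

    #ifdef CONFIG_CMA
    #define is_cma_pageblock(page) \
            (get_pageblock_migratetype(page) == MIGRATE_CMA)
    #else
    #define is_cma_pageblock(page) false
    #endif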

Change-Id: I39fd546e22e240b752244832c79514f109c8e84b
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:36 -08:00
Liam Mark
39832ef865 mm: split_free_page ignore memory watermarks for CMA
Memory watermarks were sometimes preventing CMA allocations
in low memory.
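
A sketch of the idea in split_free_page(); the watermark check is
pre-existing, the CMA bypass is the change:

    unsigned long watermark;

    /* obey watermarks unless the page sits in a CMA pageblock */
    if (!is_cma_pageblock(page)) {
            watermark = low_wmark_pages(zone) + (1 << order);
            if (!zone_watermark_ok(zone, 0, watermark, 0, 0))
                    return 0;
    }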

Change-Id: I550ec987cbd6bc6dadd72b4a764df20cd0758479
Signed-off-by: Liam Mark <lmark@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:36 -08:00
Laura Abbott
24c697b28a mm: Don't use CMA pages for writes
If CMA pages are used for writes, the writes may not complete
fast enough for CMA to be allocated within a reasonable amount
of time. If we get a CMA page, get another one to use instead.
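
One plausible shape of the swap, at page-cache allocation time (a
sketch only; clearing __GFP_MOVABLE is the assumed way of steering
the allocation away from CMA pageblocks):

    struct page *page = __page_cache_alloc(gfp_mask);

    if (page && is_cma_pageblock(page)) {
            struct page *oldpage = page;

            /* a slow write to a CMA page can stall a pending
             * contiguous allocation; take a non-CMA page instead */
            page = __page_cache_alloc(gfp_mask & ~__GFP_MOVABLE);
            page_cache_release(oldpage);
    }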

Change-Id: I19d8ba655da7525d68d5947337d500566998971c
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:35 -08:00
Heesub Shin
d4c6e690a5 cma: fix race condition on a page
A cruel, brute-force method for letting cma/migration finish its job
without migration_entry_wait() stealing the lock and creating a
live-lock on the faulted page. This patch solves the case of
migration failure with page->_count == 2.

Change-Id: Ia94542a80e44a213831291af289bbf5ee6880bfd
Signed-off-by: Heesub Shin <heesub.shin@samsung.com>
Reviewed-on: http://165.213.202.130:8080/39341
Tested-by: System S/W SCM <scm.systemsw@samsung.com>
Tested-by: Dongjun Shin <d.j.shin@samsung.com>
Reviewed-by: Hyunju Ahn <hyunju.ahn@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:35 -08:00
Rabin Vincent
a71dc49243 mm: cma: don't replace lowmem pages with highmem
The filesystem layer expects pages in the block device's mapping to not
be in highmem (the mapping's gfp mask is set in bdget()), but CMA can
currently replace lowmem pages with highmem pages, leading to crashes in
filesystem code such as the one below:

  Unable to handle kernel NULL pointer dereference at virtual address 00000400
  pgd = c0c98000
  [00000400] *pgd=00c91831, *pte=00000000, *ppte=00000000
  Internal error: Oops: 817 [#1] PREEMPT SMP ARM
  CPU: 0    Not tainted  (3.5.0-rc5+ #80)
  PC is at __memzero+0x24/0x80
  ...
  Process fsstress (pid: 323, stack limit = 0xc0cbc2f0)
  Backtrace:
  [<c010e3f0>] (ext4_getblk+0x0/0x180) from [<c010e58c>] (ext4_bread+0x1c/0x98)
  [<c010e570>] (ext4_bread+0x0/0x98) from [<c0117944>] (ext4_mkdir+0x160/0x3bc)
   r4:c15337f0
  [<c01177e4>] (ext4_mkdir+0x0/0x3bc) from [<c00c29e0>] (vfs_mkdir+0x8c/0x98)
  [<c00c2954>] (vfs_mkdir+0x0/0x98) from [<c00c2a60>] (sys_mkdirat+0x74/0xac)
   r6:00000000 r5:c152eb40 r4:000001ff r3:c14b43f0
  [<c00c29ec>] (sys_mkdirat+0x0/0xac) from [<c00c2ab8>] (sys_mkdir+0x20/0x24)
   r6:beccdcf0 r5:00074000 r4:beccdbbc
  [<c00c2a98>] (sys_mkdir+0x0/0x24) from [<c000e3c0>] (ret_fast_syscall+0x0/0x30)

Fix this by replacing only highmem pages with highmem.
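
The fix boils down to deriving the replacement page's gfp mask from
the source page. A sketch of the migration allocation callback
(function name assumed):

    static struct page *cma_migrate_alloc(struct page *page,
                                          unsigned long private,
                                          int **resultp)
    {
            gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;

            /* only hand out a highmem replacement for a highmem page */
            if (PageHighMem(page))
                    gfp_mask |= __GFP_HIGHMEM;

            return alloc_page(gfp_mask);
    }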

Change-Id: I6af2d509af48b5a586037be14bd3593b3f269d95
Reported-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Rabin Vincent <rabin@rab.in>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:18:02 -08:00
Yong Wang
9fc2e9e0e1 bdi: use deferrable timer for sync_supers task
The sync_supers task currently wakes up periodically for superblock
writeback. This hurts power on battery-driven devices. This patch
turns this housekeeping timer into a deferrable timer so that it
does not fire when the system is really idle.
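
A sketch of the timer setup change (the exact callsite is assumed):

    /* a deferrable timer fires on the next non-idle tick instead of
     * waking an idle CPU just for superblock writeback */
    init_timer_deferrable(&sync_supers_timer);
    sync_supers_timer.function = sync_supers_timer_fn;
    sync_supers_timer.data = 0;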

CRs-Fixed: 353700
Change-Id: Idc7953b5d0580546808bc5832291ca570837ee7f
Signed-off-by: Yong Wang <yong.y.wang@intel.com>
Signed-off-by: Xia Wu <xia.wu@intel.com>
Signed-off-by: Krishna Vanka <kvanka@codeaurora.org>
2013-02-27 18:16:50 -08:00
Marek Szyprowski
3d46ca5672 mm: trigger page reclaim in alloc_contig_range() to stabilise watermarks
alloc_contig_range() performs memory allocation, so it should also
maintain the correct level of memory watermarks. This commit adds a
call to *_slowpath-style reclaim to grab enough pages to make sure
that the final collection of contiguous pages from the freelists will
not starve the system.

Change-Id: I2d68d9ac2cfcd32ca6f515fc7e44e8d9d850dff1
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
CC: Michal Nazarewicz <mina86@mina86.com>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:14:15 -08:00
Marek Szyprowski
b68303ab8d mm: extract reclaim code from __alloc_pages_direct_reclaim()
This patch extracts the common reclaim code from the
__alloc_pages_direct_reclaim() function into a separate function,
__perform_reclaim(), which can later be used by alloc_contig_range().
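
A sketch of the extracted helper, close to the existing direct-reclaim
body (details such as the lockdep annotations are omitted):

    static int __perform_reclaim(gfp_t gfp_mask, unsigned int order,
                                 struct zonelist *zonelist,
                                 nodemask_t *nodemask)
    {
            int progress;

            cond_resched();

            /* we are now in synchronous reclaim */
            current->flags |= PF_MEMALLOC;
            progress = try_to_free_pages(zonelist, order, gfp_mask,
                                         nodemask);
            current->flags &= ~PF_MEMALLOC;

            cond_resched();
            return progress;
    }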

Change-Id: Ia9d8b82018d91dc669488955b20f69f1cba43147
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:14:03 -08:00
Mel Gorman
2c063ac1c8 mm: Serialize access to min_free_kbytes
There is a race between the min_free_kbytes sysctl, memory hotplug
and transparent hugepage support enablement.  Memory hotplug uses a
zonelists_mutex to avoid a race when building zonelists. Reuse it to
serialise watermark updates.
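
A sketch of the serialisation; the split of the body into a locked
wrapper is assumed:

    void setup_per_zone_wmarks(void)
    {
            /* zonelists_mutex already serialises memory hotplug's
             * zonelist rebuilds; reuse it for watermark updates */
            mutex_lock(&zonelists_mutex);
            __setup_per_zone_wmarks();
            mutex_unlock(&zonelists_mutex);
    }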

Change-Id: I31786592a8cc03e579ee01d99d7eba76e926263f
[a.p.zijlstra@chello.nl: Older patch fixed the race with spinlock]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:14:03 -08:00
Michal Nazarewicz
944e4004e2 mm: page_isolation: MIGRATE_CMA isolation functions added
This commit changes the various functions that switch pages and
pageblocks between the MIGRATE_ISOLATE and MIGRATE_MOVABLE migrate
types so that they can also work with the MIGRATE_CMA migrate type.

Change-Id: Ib3a0b04cae49396b206a39bfced470e218ab1f90
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:14:02 -08:00
Michal Nazarewicz
269a4c9264 mm: mmzone: MIGRATE_CMA migration type added
The MIGRATE_CMA migration type has two main characteristics:
(i) only movable pages can be allocated from MIGRATE_CMA
pageblocks and (ii) page allocator will never change migration
type of MIGRATE_CMA pageblocks.

This guarantees (to some degree) that a page in a MIGRATE_CMA
pageblock can always be migrated somewhere else (unless there's no
memory left in the system).

It is designed to be used for allocating big chunks (e.g. 10MiB) of
physically contiguous memory.  Once a driver requests contiguous
memory, pages from MIGRATE_CMA pageblocks may be migrated away to
create a contiguous block.

To minimise the number of migrations, the MIGRATE_CMA migration type
is the last type tried when the page allocator falls back to other
migration types.
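
On the mmzone.h side this amounts to a new enum entry plus a small
predicate; a sketch (the #else stub is an assumption):

    #ifdef CONFIG_CMA
    #define is_migrate_cma(migratetype) unlikely((migratetype) == MIGRATE_CMA)
    #else
    #define is_migrate_cma(migratetype) false
    #endif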

Change-Id: I2bb0954de8be4f212b03dea0e5a508048684bda2
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:14:01 -08:00
Michal Nazarewicz
629b2bf987 mm: page_alloc: change fallbacks array handling
This commit adds a row for the MIGRATE_ISOLATE type to the fallbacks
array, which was missing from it.  It also changes the array
traversal logic a little, making MIGRATE_RESERVE an end marker.  The
latter change removes the implicit MIGRATE_UNMOVABLE from the end of
each row, which was read by the __rmqueue_fallback() function.
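
With MIGRATE_RESERVE acting as the end marker, the traversal in
__rmqueue_fallback() can be sketched as:

    int i, migratetype;

    for (i = 0;; i++) {
            migratetype = fallbacks[start_migratetype][i];

            /* MIGRATE_RESERVE is the end marker; reserve pages are
             * handled separately in __rmqueue() */
            if (migratetype == MIGRATE_RESERVE)
                    break;

            /* ... try to steal a page of this migratetype ... */
    }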

Change-Id: Icdbbebb9eece2468c0b963964be9a4c579cbc775
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:13:59 -08:00
Michal Nazarewicz
8b32931307 mm: page_alloc: introduce alloc_contig_range()
This commit adds the alloc_contig_range() function, which tries to
allocate a given range of pages.  It tries to migrate all already
allocated pages that fall in the range, thus freeing them.  Once all
pages in the range are freed, they are removed from the buddy system
and are thus allocated for the caller to use.
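
A usage sketch (start_pfn and the 4MiB size are illustrative only):

    /* start_pfn marks a previously reserved MIGRATE_CMA region */
    unsigned long nr_pages = (4 * SZ_1M) >> PAGE_SHIFT;
    int ret;

    ret = alloc_contig_range(start_pfn, start_pfn + nr_pages,
                             MIGRATE_CMA);
    if (ret)
            pr_err("contiguous allocation failed: %d\n", ret);
    else
            free_contig_range(start_pfn, nr_pages); /* release when done */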

Change-Id: I659b133b1c9991568bfb6bd09c7792e15f2a2bfb
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:13:59 -08:00
Michal Nazarewicz
a7d917fb4c mm: compaction: export some of the functions
This commit exports some of the functions from the compaction.c file
by adding their declarations to the internal.h header file, so that
other mm-related code can use them.

This forces compaction.c to always be compiled (as opposed to being
compiled only if CONFIG_COMPACTION is defined), but to avoid
introducing code that the user did not ask for, part of compaction.c
is now wrapped in an #ifdef.

Change-Id: Id51bc882d1befd5afef2c8d1e5dcc7993496893c
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:13:58 -08:00
Michal Nazarewicz
88a9ee7a9e mm: compaction: introduce isolate_freepages_range()
This commit introduces isolate_freepages_range() function which
generalises isolate_freepages_block() so that it can be used on
arbitrary PFN ranges.

isolate_freepages_block() is left with only minor changes.

Change-Id: I59917adce1fa71ef9f0534abb5ba07f84c6dcdc7
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:13:58 -08:00
Michal Nazarewicz
b3eba5647b mm: compaction: introduce map_pages()
This commit creates a map_pages() function which maps pages freed
using split_free_page().  This merely moves some code from
isolate_freepages() so that it can be reused in other places.

Change-Id: Ia67eaee00ca88a402f21a651de409bd93ae00312
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:13:57 -08:00
Michal Nazarewicz
78a0b54b30 mm: compaction: introduce isolate_migratepages_range()
This commit introduces isolate_migratepages_range() function which
extracts functionality from isolate_migratepages() so that it can be
used on arbitrary PFN ranges.

isolate_migratepages() function is implemented as a simple wrapper
around isolate_migratepages_range().

Change-Id: I8e82434ba75d9862bec485392eb079c9d3b70ae0
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:13:57 -08:00
Michal Nazarewicz
aa8a31b992 mm: page_alloc: remove trailing whitespace
Change-Id: I1f112fa3be958d1f9d24ebd076ef4ddcf91fe868
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-02-27 18:13:56 -08:00
Shashank Mittal
48ac59797a mm: Fix a compiler warning.
Fix a compiler warning about an uninitialized variable.

Change-Id: Ieedeb1cfb5a22eb5f671e6bfd1361315347a49af
Signed-off-by: Shashank Mittal <mittals@codeaurora.org>
2013-02-27 18:11:30 -08:00
Stephen Boyd
84d1c1a3a3 Merge branch 'goog/googly' (early part) into goog/msm-soc-3.4
Fix NR_IPI to be 7 instead of 6 because both googly and core add
an IPI.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>

Conflicts:
	arch/arm/Kconfig
	arch/arm/common/Makefile
	arch/arm/include/asm/hardware/cache-l2x0.h
	arch/arm/mm/cache-l2x0.c
	arch/arm/mm/mmu.c
	include/linux/wakelock.h
	kernel/power/Kconfig
	kernel/power/Makefile
	kernel/power/main.c
	kernel/power/power.h
2013-02-25 11:25:46 -08:00
Larry Bassel
0cdae47a9f mm: make physical memory offline work
In recent versions, the platform-specific physical
offline returns the number of bytes offlined, so
a value of 0 indicates an error, not success as in
older versions. Also make sure that the memory
for the original memory resource nodes is not
freed via kfree, as this memory was obtained
from alloc_bootmem very early in the system's life.

Change-Id: Iffcdd8be4483e043d7605fce596ed438b15f3e02
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit 2421717cb10a06814d7bdb431485aa3a5e364f36)
2013-02-20 02:44:04 -08:00
Larry Bassel
ee1f3b0c9b msm: Add low power mode for dynamic memory management.
Add Low Power mode TAG.
Add new APIs for mem lowpower modes.
Create new sys file for mem low power modes.
Set SECTION_SIZE_BITS to 28.
Change NPA_MEMORY_NODE_NAME to "/mem/apps/ddr_dpd".
Fix the NPA node create function to do atomic_inc()
in the atomic_dec_and_test() failure case.

Change-Id: Ia5cb18b99338c43165d5401e619c773cd8d6b3f6
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit b054046e708f8c5b044e76c2df6f72fd607be558)

Conflicts:

	arch/arm/include/asm/setup.h
	arch/arm/kernel/setup.c
	arch/arm/mach-msm/include/mach/memory.h
	arch/arm/mach-msm/memory.c
	drivers/base/memory.c
	include/linux/memory_hotplug.h
2013-02-20 02:44:04 -08:00
Larry Bassel
0f9c403ce0 mm: add support for putting memory in a low-power state
The file /sys/devices/system/memory/low_power now exists.

Writing a physical address into this file will put this
section of memory into a low power state (retaining contents)
if the architecture and platform supports it.

Change-Id: I70592d37f1091a1b533f2374546ba67b50ea7d30
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit 1f4d1c8e295aaf66b23309caa0d03b09b7009b99)

Conflicts:

	drivers/base/memory.c
	include/linux/memory_hotplug.h
2013-02-20 02:44:03 -08:00
Larry Bassel
c80281297a Add physical memory hotremove API
This provides the physical memory hotremove API (this is needed since our
physical memory removal is done by powering it off, which needs
to be requested by userspace, not by someone physically pulling memory
out of a machine).

Change-Id: Ic34426a91a1aac2bd4a45677ee00c2b7a3f84746
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit d651b6964bbb50d3c1fee6f76467a0f867286dfb)
2013-02-20 02:44:02 -08:00
Jack Cheung
f53e4210e3 mm: Fix zone->present_pages underflow
If offlined_pages is greater than
zone->present_pages, underflow will occur.

This change will set zone->present_pages to 0 if
offlined_pages is greater.
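
A sketch of the guard:

    /* avoid wrapping the unsigned counter when more pages are
     * offlined than the zone claims to have */
    if (offlined_pages > zone->present_pages)
            zone->present_pages = 0;
    else
            zone->present_pages -= offlined_pages;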

Change-Id: I728e90c60fb7fc391de7b9c4828ab264ca38653b
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
(cherry picked from commit 80c201e25e8dbc00427b73d90b1527c356526442)
2013-02-20 02:44:02 -08:00
Jack Cheung
a76d85b895 mm: Fix infinite loop when offlining
When page migration is unable to find free memory it
will go into an infinite loop because the
all_unreclaimable flag is never set.

This change will allow memory offlining to fail
gracefully if there is not enough memory for page
migration.

__GFP_NORETRY tells the page_alloc to not retry.
__GFP_NOWARN suppresses page fault warnings when
page allocation fails.
__GFP_NOMEMALLOC prevents it from aggressively
allocating beyond zone watermarks.
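
A sketch of the allocation of migration targets during offlining (the
callsite is assumed):

    /* fail fast rather than looping when memory is tight */
    struct page *new_page = alloc_page(GFP_HIGHUSER_MOVABLE |
                                       __GFP_NORETRY | __GFP_NOWARN |
                                       __GFP_NOMEMALLOC);
    if (!new_page)
            return -ENOMEM; /* lets offlining fail gracefully */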

Change-Id: I94dfd9059851c7b24953f44a4018a3bbac840688
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
(cherry picked from commit 1e6ec3c7e399aa84d29ced48b16a91acaba962f0)
2013-02-20 02:44:01 -08:00
Jack Cheung
ea702500f8 mm: Drain pages after onlining
When onlining, the onlined pages must be added to the kernel's
list of free pages using __free_page(). However, pages are not
immediately added but placed in a queue to be processed
when the queue size reaches a watermark. The last pages in
the queue may not be processed in time, and if you try to
offline that memory before it is processed, offlining will
always fail.

This fix calls drain_all_pages(), which will process every
free page in the queue. This ensures that all pages are
accounted for when onlining and nothing gets stuck in the queue.
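
The fix amounts to one call once onlining has finished; a sketch:

    /* flush every CPU's per-cpu free queue so that all onlined pages
     * sit on the buddy freelists before any offline attempt */
    drain_all_pages();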

Change-Id: I54dbc0749556702407090e51ce9246abc5db7d1c
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
(cherry picked from commit aa7e9dec5cfd309cb9eb6cb56a284a61607a925a)
2013-02-20 02:44:01 -08:00
Jesse Tannahill
92ab47cec0 mm: Make memory hotplug aware of memmap holes
This patch prevents memory hotplug from marking pages of the memmap that
only reference holes in the physical address space as private. Some
architectures (including ARM) attempt to free these unneeded parts of the
memmap, and attempting to free a private page will throw bad_page warnings
and tie up the memory indefinitely.

This patch also allows early_pfn_valid to be architecture specific and
defines it for ARM. The definition for ARM takes into account memory banks
and the holes in physical memory.

CRs-Fixed: 247010

Change-Id: Iad88d427b1b923a808b026c22d2899fa0483cb9e
Signed-off-by: jesset@codeaurora.org
(cherry picked from commit 0b610c773ad6281a3d217fbbe894b2476e9e71dd)

Conflicts:

	arch/arm/mm/init.c
2013-02-20 02:44:00 -08:00
KyongHo
ccf87696ca mm: fix faulty initialization in vmalloc_init()
The transfer of ->flags causes some of the static mapping virtual
addresses to be prematurely freed (before the mapping is removed) because
VM_LAZY_FREE gets "set" if tmp->flags has VM_IOREMAP set.  This might
cause subsequent vmalloc/ioremap calls to fail because it might allocate
one of the freed virtual address ranges that aren't unmapped.

va->flags has different types of flags from tmp->flags.  If a region with
VM_IOREMAP set is registered with vm_area_add_early(), it will be removed
by __purge_vmap_area_lazy().

Fix vmalloc_init() to correctly initialize vmap_area for the given
vm_struct.

Also initialise va->vm.  If it is not set, find_vm_area() for the early
vm regions will always fail.
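
A sketch of the corrected initialisation loop, following the
description above:

    struct vm_struct *tmp;
    struct vmap_area *va;

    /* import the early vm_structs into the vmap_area tree */
    for (tmp = vmlist; tmp; tmp = tmp->next) {
            va = kzalloc(sizeof(struct vmap_area), GFP_NOWAIT);
            va->flags = VM_VM_AREA;       /* not tmp->flags! */
            va->va_start = (unsigned long)tmp->addr;
            va->va_end = va->va_start + tmp->size;
            va->vm = tmp;                 /* so find_vm_area() works */
            __insert_vmap_area(va);
    }
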

Signed-off-by: KyongHo Cho <pullip.cho@samsung.com>
Cc: "Olav Haugan" <ohaugan@codeaurora.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
(cherry picked from commit fa002621c590c56e13cd86e944919a5771a6e03e)
2013-02-20 02:43:59 -08:00
David Vrabel
9988da2922 mm: sync vmalloc address space page tables in alloc_vm_area()
commit 461ae488ec upstream.

Xen backend drivers (e.g., blkback and netback) would sometimes fail to
map grant pages into the vmalloc address space allocated with
alloc_vm_area().  The GNTTABOP_map_grant_ref would fail because Xen could
not find the page (in the L2 table) containing the PTEs it needed to
update.

(XEN) mm.c:3846:d0 Could not find L1 PTE for address fbb42000

netback and blkback were making the hypercall from a kernel thread where
task->active_mm != &init_mm and alloc_vm_area() was only updating the page
tables for init_mm.  The usual method of deferring the update to the page
tables of other processes (i.e., after taking a fault) doesn't work as a
fault cannot occur during the hypercall.

This would work on some systems depending on what else was using vmalloc.

Fix this by reverting ef691947d8 ("vmalloc: remove vmalloc_sync_all()
from alloc_vm_area()") and add a comment to explain why it's needed.
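
The core of the fix is a single call at the end of alloc_vm_area();
a sketch:

    /* sync the new PTEs into every mm's page tables right away: the
     * hypercall cannot take a fault to lazily sync them later */
    vmalloc_sync_all();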

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Keir Fraser <keir.xen@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

(cherry picked from commit d63c8a029e509ad48ee9290874731789f9008537)
2013-02-20 02:43:59 -08:00
Larry Bassel
514cd9bbbc mm: use required fixed size of movable zone if FIX_MOVABLE_ZONE
If FIX_MOVABLE_ZONE is enabled, we want a specific size and
location of ZONE_MOVABLE.

Change-Id: I0b858c7310cd328e1118abc9d5fe6f364bb4ffad
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
(cherry picked from commit e6436864f939e77226c8e971610539649ed5f869)
2013-02-20 02:43:58 -08:00
Jack Cheung
f6e34be773 mm: Add total_unmovable_pages global variable
Vmalloc will exit if the amount it needs to allocate is
greater than totalram_pages. Vmalloc cannot allocate
from the movable zone, so pages in the movable zone should
not be counted.

This change adds a new global variable: total_unmovable_pages.
It is calculated in init.c, based on totalram_pages minus
the pages in the movable zone. Vmalloc now looks at this new
global instead of totalram_pages.

total_unmovable_pages can be modified during memory_hotplug.
If the zone you are offlining/onlining is unmovable, then modify it
in the same way as totalram_pages.  If the zone is movable, then no
change is needed.
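
A sketch of the vmalloc-side check (the exact callsite is assumed):

    /* the movable zone can never back vmalloc mappings */
    if (!size || (size >> PAGE_SHIFT) > total_unmovable_pages)
            return NULL;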

Change-Id: Ie55c41051e9ad4b921eb04ecbb4798a8bd2344d6
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
(cherry picked from commit 59f9f1c9ae463a3d4499cd9353619f8b1993371b)

Conflicts:

	arch/arm/mm/init.c
	mm/memory_hotplug.c
	mm/page_alloc.c
	mm/vmalloc.c
2013-02-20 02:43:58 -08:00
Jack Cheung
18e44d3eaf mm: Cast lowmem_reserve to long
z->lowmem_reserve[classzone_idx] is an unsigned long but
free_pages and min are longs. If free_pages is
negative, the function will incorrectly return true
because it will treat the negative long as a large,
positive unsigned long.

This change casts z->lowmem_reserve to a long and
fixes a typo in the comment.
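
A sketch of the corrected comparison in the watermark check:

    /* compare as signed longs: a negative free_pages must fail the
     * check instead of being promoted to a huge unsigned value */
    if (free_pages <= min + (long)z->lowmem_reserve[classzone_idx])
            return false;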

Change-Id: Icada1fa5ca650fbcdb0656f637adbb98f223eec5
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
(cherry picked from commit 9f41da81017657a194a4e145bab337f13a4d7fd9)
2013-02-20 02:43:57 -08:00
Naveen Ramaraj
4fe9e26803 msm: mm: Fix errors when turning on SPARSEMEM
Carry forward the fix from 2.6.29b kernel to handle the
NR_SECTION_ROOTS == 0 case when turning on SPARSEMEM on MSM.
Also fix the linux coding style typos by replacing
"foo* bar" with "foo *bar".

Change-Id: I77f259f3d62980a37f4ae7c4680a3617c9b4f563
Signed-off-by: Naveen Ramaraj <nramaraj@codeaurora.org>
(cherry picked from commit 1ff28b9751596dbaf55a3e40f593d3013380a3ac)
2013-02-20 02:43:57 -08:00
Stephen Boyd
321d16d801 memblock: Add memblock_overlaps_memory()
Add a new function, memblock_overlaps_memory(), to check if a
region overlaps with a memory bank. This will be used by
peripheral loader code to detect when kernel memory would be
overwritten.
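
A plausible shape for the new helper (the real body is assumed; the
overlap test is the standard interval check):

    int memblock_overlaps_memory(phys_addr_t base, phys_addr_t size)
    {
            unsigned long i;

            for (i = 0; i < memblock.memory.cnt; i++) {
                    struct memblock_region *reg =
                            &memblock.memory.regions[i];

                    if (base < reg->base + reg->size &&
                        base + size > reg->base)
                            return 1;
            }
            return 0;
    }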

Change-Id: I851f8f416a0f36e85c0e19536b5209f7d4bd431c
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
(cherry picked from commit 1aa4e5a974b3087d29510802810170c071df8546)

Conflicts:

	include/linux/memblock.h
2013-02-20 02:43:56 -08:00
Bryan Huntsman
e900f7edd3 ARM: allow memory hotplug/hotremove
Add ARM to the supported list of architectures for MEMORY_HOTPLUG.  For
ARM, the selection of MEMORY_HOTPLUG/REMOVE is specific to the sub-arch
and has to be explicitly enabled by the sub-arch.

Signed-off-by: Bryan Huntsman <bryanh@codeaurora.org>
(cherry picked from commit 45f580d9c36b93204882dffc6fb9f4a254c3d34a)
2013-02-20 01:31:58 -08:00
Colin Cross
5500e4fab2 Merge commit 'v3.4' into android-3.4 2012-05-25 13:56:28 -07:00
Hugh Dickins
62ade86ab6 memcg,thp: fix res_counter:96 regression
Occasionally, testing memcg's move_charge_at_immigrate on rc7 shows
a flurry of hundreds of warnings at kernel/res_counter.c:96, where
res_counter_uncharge_locked() does WARN_ON(counter->usage < val).

The first trace of each flurry implicates __mem_cgroup_cancel_charge()
of mc.precharge, and an audit of mc.precharge handling points to
mem_cgroup_move_charge_pte_range()'s THP handling in commit 12724850e8
("memcg: avoid THP split in task migration").

Checking !mc.precharge is good everywhere else, when a single page is to
be charged; but here the "mc.precharge -= HPAGE_PMD_NR" likely to
follow, is liable to result in underflow (a lot can change since the
precharge was estimated).

Simply check against HPAGE_PMD_NR: there's probably a better
alternative, trying precharge for more, splitting if unsuccessful; but
this one-liner is safer for now - no kernel/res_counter.c:96 warnings
seen in 26 hours.
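
The one-liner, as a sketch of the THP branch in
mem_cgroup_move_charge_pte_range():

    /* a THP move consumes HPAGE_PMD_NR precharges at once, so the
     * plain !mc.precharge test is not sufficient here */
    if (mc.precharge < HPAGE_PMD_NR) {
            spin_unlock(&vma->vm_mm->page_table_lock);
            return 0;
    }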

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-05-19 10:10:27 -07:00
majianpeng
02e1a9cd1e slub: missing test for partial pages flush work in flush_all()
I found some kernel messages such as:

    SLUB raid5-md127: kmem_cache_destroy called for cache that still has objects.
    Pid: 6143, comm: mdadm Tainted: G           O 3.4.0-rc6+        #75
    Call Trace:
    kmem_cache_destroy+0x328/0x400
    free_conf+0x2d/0xf0 [raid456]
    stop+0x41/0x60 [raid456]
    md_stop+0x1a/0x60 [md_mod]
    do_md_stop+0x74/0x470 [md_mod]
    md_ioctl+0xff/0x11f0 [md_mod]
    blkdev_ioctl+0xd8/0x7a0
    block_ioctl+0x3b/0x40
    do_vfs_ioctl+0x96/0x560
    sys_ioctl+0x91/0xa0
    system_call_fastpath+0x16/0x1b

Then using kmemleak I found these messages:

    unreferenced object 0xffff8800b6db7380 (size 112):
      comm "mdadm", pid 5783, jiffies 4294810749 (age 90.589s)
      hex dump (first 32 bytes):
        01 01 db b6 ad 4e ad de ff ff ff ff ff ff ff ff  .....N..........
        ff ff ff ff ff ff ff ff 98 40 4a 82 ff ff ff ff  .........@J.....
      backtrace:
        kmemleak_alloc+0x21/0x50
        kmem_cache_alloc+0xeb/0x1b0
        kmem_cache_open+0x2f1/0x430
        kmem_cache_create+0x158/0x320
        setup_conf+0x649/0x770 [raid456]
        run+0x68b/0x840 [raid456]
        md_run+0x529/0x940 [md_mod]
        do_md_run+0x18/0xc0 [md_mod]
        md_ioctl+0xba8/0x11f0 [md_mod]
        blkdev_ioctl+0xd8/0x7a0
        block_ioctl+0x3b/0x40
        do_vfs_ioctl+0x96/0x560
        sys_ioctl+0x91/0xa0
        system_call_fastpath+0x16/0x1b

This bug was introduced by commit a8364d5555 ("slub: only IPI CPUs that
have per cpu obj to flush"), which did not include checks for per cpu
partial pages being present on a cpu.
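
The fix is to also consider per-cpu partial pages when deciding which
CPUs to IPI; a sketch:

    static bool has_cpu_slab(int cpu, void *info)
    {
            struct kmem_cache *s = info;
            struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);

            /* flush this CPU if it holds an active slab page or any
             * per-cpu partial pages */
            return c->page || c->partial;
    }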

Signed-off-by: majianpeng <majianpeng@gmail.com>
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Tested-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-05-17 18:00:51 -07:00
Colin Cross
ec0b571c19 Merge commit 'v3.4-rc7' into android-3.4 2012-05-14 16:41:02 -07:00
Hugh Dickins
1b76b02f15 mm: raise MemFree by reverting percpu_pagelist_fraction to 0
Why is there less MemFree than there used to be?  It perturbed a test,
so I've just been bisecting linux-next, and now find the offender went
upstream yesterday.

Commit 93278814d3 "mm: fix division by 0 in percpu_pagelist_fraction()"
mistakenly initialized percpu_pagelist_fraction to the sysctl's minimum 8,
which leaves 1/8th of memory on percpu lists (on each cpu??); but most of
us expect it to be left unset at 0 (and it's not then used as a divisor).

  MemTotal: 8061476kB  8061476kB  8061476kB  8061476kB  8061476kB  8061476kB
  Repetitive test with percpu_pagelist_fraction 8:
  MemFree:  6948420kB  6237172kB  6949696kB  6840692kB  6949048kB  6862984kB
  Same test with percpu_pagelist_fraction back to 0:
  MemFree:  7945000kB  7944908kB  7948568kB  7949060kB  7948796kB  7948812kB

Signed-off-by: Hugh Dickins <hughd@google.com>
[ We really should fix the crazy sysctl interface too, but that's a
  separate thing - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-05-11 09:23:39 -07:00
Linus Torvalds
7c283324da Merge branch 'akpm' (Andrew's patch-bomb)
Merge misc fixes from Andrew Morton.

* emailed from Andrew Morton <akpm@linux-foundation.org>: (8 patches)
  MAINTAINERS: add maintainer for LED subsystem
  mm: nobootmem: fix sign extend problem in __free_pages_memory()
  drivers/leds: correct __devexit annotations
  memcg: free spare array to avoid memory leak
  namespaces, pid_ns: fix leakage on fork() failure
  hugetlb: prevent BUG_ON in hugetlb_fault() -> hugetlb_cow()
  mm: fix division by 0 in percpu_pagelist_fraction()
  proc/pid/pagemap: correctly report non-present ptes and holes between vmas
2012-05-10 15:17:24 -07:00