mm: adjust page migration heuristic

The page allocator's heuristic to decide when to migrate page blocks to
unmovable seems to have been tuned on architectures that do not have
kernel drivers that would make unmovable allocations of several
megabytes or greater (i.e., no cameras or shared-memory GPUs). The number
of allocations from these drivers may be unbounded and may occupy a
significant percentage of overall system memory (>50%). As a result,
every Android device has suffered to some extent from increasing
fragmentation due to unmovable page block migration over time.

This change adjusts the page migration heuristic to only migrate page
blocks for unmovable allocations when the order of the requested
allocation is order-5 or greater. This prevents migration due to GPU and
ion allocations so long as kernel drivers keep their runtime allocations
at order-4 or smaller.
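
For context (not part of the original change text): with the usual 4 KiB base
page, an order-n buddy allocation spans 2^n pages, so order-4 is the 64 KiB
ceiling named in the added comment and order-5 is 128 KiB. A minimal sketch of
that arithmetic, assuming 4 KiB pages (the ASSUMED_PAGE_SIZE macro and
order_to_bytes helper are illustrative, not kernel code):

    #include <stdio.h>

    /* Assumption for illustration: 4 KiB base pages. arm64 kernels can also
     * be built with 16 KiB or 64 KiB pages, which shifts these sizes. */
    #define ASSUMED_PAGE_SIZE 4096UL

    /* Bytes spanned by a buddy allocation of the given order (2^order pages). */
    static unsigned long order_to_bytes(unsigned int order)
    {
            return ASSUMED_PAGE_SIZE << order;
    }

    int main(void)
    {
            /* order-4: 16 pages = 64 KiB, the largest unmovable request that
             * no longer steals a pageblock under the new heuristic. */
            printf("order-4 = %lu KiB\n", order_to_bytes(4) / 1024);
            /* order-5: 32 pages = 128 KiB, still allowed to steal a pageblock. */
            printf("order-5 = %lu KiB\n", order_to_bytes(5) / 1024);
            return 0;
    }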

Experimental results from running the Android longevity test suite on a Nexus
5X for 10 hours:

old heuristic: 116 unmovable blocks after boot -> 281 unmovable blocks
new heuristic: 105 unmovable blocks after boot -> 101 unmovable blocks
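
The commit does not record how the block counts were gathered; one way such
per-zone pageblock counts by migratetype can be observed on a running device
is via /proc/pagetypeinfo (where procfs exposes it; newer kernels restrict it
to root). The snippet below is an illustrative reader, not the tooling behind
the numbers above:

    #include <stdio.h>
    #include <string.h>

    /* Print the "Number of blocks type" summary table from
     * /proc/pagetypeinfo; the Unmovable column of the per-zone rows is the
     * kind of figure tracked in the results above. */
    int main(void)
    {
            FILE *f = fopen("/proc/pagetypeinfo", "r");
            char line[512];
            int in_summary = 0;

            if (!f) {
                    perror("fopen /proc/pagetypeinfo");
                    return 1;
            }
            while (fgets(line, sizeof(line), f)) {
                    if (strstr(line, "Number of blocks type"))
                            in_summary = 1; /* header row of the block-count table */
                    if (in_summary)
                            fputs(line, stdout);
            }
            fclose(f);
            return 0;
    }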

bug 26916944

Change-Id: I5b7ccbbafa4049a2f47f399df4cb4779689f4c40
Authored by Tim Murray on 2016-02-29 10:10:34 -08:00; committed by syphyr
parent d8c37712a8
commit 6c39793bbe
1 changed file with 6 additions and 4 deletions

@@ -1110,7 +1110,8 @@ static void change_pageblock_range(struct page *pageblock_page,
  * as well.
  */
 static void try_to_steal_freepages(struct zone *zone, struct page *page,
-                                   int start_type, int fallback_type)
+                                   int start_type, int fallback_type,
+                                   int start_order)
 {
         int current_order = page_order(page);
@@ -1122,7 +1123,8 @@ static void try_to_steal_freepages(struct zone *zone, struct page *page,
         if (current_order >= pageblock_order / 2 ||
             start_type == MIGRATE_RECLAIMABLE ||
-            start_type == MIGRATE_UNMOVABLE ||
+            // allow unmovable allocs up to 64K without migrating blocks
+            (start_type == MIGRATE_UNMOVABLE && start_order >= 5) ||
             page_group_by_mobility_disabled) {
                 int pages;
@@ -1168,8 +1170,8 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
                 if (!is_migrate_cma(migratetype)) {
                         try_to_steal_freepages(zone, page,
-                                               start_migratetype,
-                                               migratetype);
+                                               start_migratetype,
+                                               migratetype, order);
                 } else {
                         /*
                          * When borrowing from MIGRATE_CMA, we need to