mm: vmscan: support complete shrinker reclaim

Ensure that shrinkers are given the option to drop their caches
completely, even when the number of cached objects is smaller than the
batch size.

This change helps improve memory headroom by ensuring that under
significant memory pressure shrinkers can drop all of their caches.

This change only calls the shrinkers more aggressively during
background (kswapd) memory reclaim, in order to avoid hurting the
performance of direct memory reclaim.

Change-Id: I8dbc29c054add639e4810e36fd2c8a063e5c52f3
Signed-off-by: Liam Mark <lmark@codeaurora.org>
@@ -293,6 +293,10 @@ unsigned long shrink_slab(struct shrink_control *shrink,
 		long new_nr;
 		long batch_size = shrinker->batch ? shrinker->batch
 						  : SHRINK_BATCH;
+		long min_cache_size = batch_size;
+
+		if (current_is_kswapd())
+			min_cache_size = 0;
 
 		max_pass = do_shrinker_shrink(shrinker, shrink, 0);
 		if (max_pass <= 0)
@@ -344,9 +348,12 @@ unsigned long shrink_slab(struct shrink_control *shrink,
 					nr_pages_scanned, lru_pages,
 					max_pass, delta, total_scan);
 
-		while (total_scan >= batch_size) {
+		while (total_scan > min_cache_size) {
 			int nr_before;
 
+			if (total_scan < batch_size)
+				batch_size = total_scan;
+
 			nr_before = do_shrinker_shrink(shrinker, shrink, 0);
 			shrink_ret = do_shrinker_shrink(shrinker, shrink,
 							batch_size);
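
For illustration, the following is a minimal user-space sketch of the
patched batching loop. The cache is modelled as a plain counter and
do_shrinker_shrink() is a stub that only takes a scan count, so nothing
here is kernel code; only the min_cache_size handling mirrors the change
above.

/*
 * User-space model of the patched shrink_slab() batching loop.
 * The cache is just a counter and do_shrinker_shrink() is a stub,
 * so this only illustrates the loop logic, not the kernel API.
 */
#include <stdio.h>

static long cache_objects;		/* pretend slab cache size */

/* Stand-in for do_shrinker_shrink(): scan (free) up to nr objects. */
static long do_shrinker_shrink(long nr_to_scan)
{
	if (nr_to_scan > cache_objects)
		nr_to_scan = cache_objects;
	cache_objects -= nr_to_scan;
	return cache_objects;		/* objects remaining */
}

static void run_reclaim(long total_scan, int is_kswapd)
{
	long batch_size = 128;		/* SHRINK_BATCH default */
	/*
	 * kswapd may drain the cache completely; direct reclaim keeps
	 * the old behaviour and stops once less than a batch remains.
	 */
	long min_cache_size = is_kswapd ? 0 : batch_size;

	cache_objects = 100;
	while (total_scan > min_cache_size) {
		if (total_scan < batch_size)
			batch_size = total_scan;
		do_shrinker_shrink(batch_size);
		total_scan -= batch_size;
	}
	printf("%s reclaim: %ld objects left in cache\n",
	       is_kswapd ? "kswapd" : "direct", cache_objects);
}

int main(void)
{
	run_reclaim(100, 1);	/* kswapd: cache can be emptied */
	run_reclaim(100, 0);	/* direct: 100 < batch, nothing freed */
	return 0;
}

With a 100-object cache and the default batch size of 128, the kswapd
case drains the cache to zero while the direct-reclaim case leaves it
untouched, which is the behaviour difference the patch introduces.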