Commit Graph

446909 Commits

Author SHA1 Message Date
syphyr d132d6387f ANDROID: Remove conflicting Samsung options for upstream changes
In order to bring lowmemorykiller in sync with Google sources,
the following Samsung specific changes have been removed:

SEC_TIMEOUT_LOW_MEMORY_KILLER
SEC_DEBUG_LMK_MEMINFO
SEC_DEBUG_LMK_COUNT_INFO

These options are not used upstream and conflict with the upstream changes.
2019-07-27 22:09:50 +02:00
Tim Murray 9697139d52 lowmemorykiller: account for unevictable pages
lowmemorykiller was not taking into account unevictable pages when
deciding what level to kill. If significant amounts of memory were
pinned, this caused lowmemorykiller to effectively stop at a much higher
level than it should.

bug 31255977

Change-Id: I763ecbfef8c56d65bb8f6147ae810692bd81b6e2
2019-07-27 22:09:50 +02:00
Vinayak Menon 862f4f71e0 staging: android: lowmemorykiller: neglect swap cached pages in other_file
With ZRAM enabled it is observed that the lowmemorykiller
doesn't trigger properly. Swap-cached pages are accounted in
NR_FILE, and the lowmemorykiller considers them reclaimable
and adds them to other_file. But these pages can't be
reclaimed unless the lowmemorykiller triggers. So subtract
swap-cache pages from other_file.

Signed-off-by: Vinayak Menon <vinayakm.list@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 058dbde928597e7a8bd04e28e77e5cfc4270591d)

Change-Id: I217e831bbe1db830e6d61c7943e442a32a7548a1
2019-07-27 22:09:49 +02:00
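
A minimal sketch of the other_file accounting the two lowmemorykiller commits above describe, assuming the 3.x-era counters global_page_state() and total_swapcache_pages(); it is illustrative, not the driver's exact code:

    /* Reclaimable file pages the shrinker may legitimately count:
     * exclude shmem, unevictable and swap-cache pages, since none of
     * them can be freed by shrinking the page cache alone. */
    static unsigned long lmk_other_file(void)
    {
            long other_file = global_page_state(NR_FILE_PAGES)
                            - global_page_state(NR_SHMEM)
                            - global_page_state(NR_UNEVICTABLE)
                            - total_swapcache_pages();

            return other_file > 0 ? other_file : 0;
    }
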
Thierry Strudel 8a04cc5c75 Revert "android/lowmemorykiller: Check all tasks for death pending"
This reverts commit 7ec0cf6d3f.
2019-07-27 22:09:49 +02:00
Thierry Strudel 919a123cc9 Revert "android/lowmemorykiller: Wait for memory to be freed"
This reverts commit 25f87e504b.

Change-Id: If8709a45b36fb73bee4e9c3e4e99937a8a952651
2019-07-27 22:09:48 +02:00
Thierry Strudel 6f6c955aa5 Revert "android/lowmemorykiller: Ignore tasks with freed mm"
This reverts commit 05d5ad4d0a.

Change-Id: Ib5770b5b38f123116322507646ae9bce6c3c186a
2019-07-27 22:09:48 +02:00
Thierry Strudel 44ba2354e3 Revert "android: lowmemorykiller: add lmk parameters tunning code."
This reverts commit f92471abb9.
2019-07-27 22:09:48 +02:00
Thierry Strudel 6b1475d492 Revert "android/lowmemorykiller: Selectively count free CMA pages"
This reverts commit 06e8520b10.
2019-07-27 22:09:47 +02:00
Thierry Strudel 0675816e5b Revert "lowmemorykiller: Account for highmem during kswapd reclaim"
This reverts commit e137b1a41f.
2019-07-27 22:09:47 +02:00
Thierry Strudel 70bbc4c513 Revert "lowmemorykiller: enhance debug information"
This reverts commit ba79232663.

Change-Id: I6a1b524ccdcd7c963cd0c380061b1b05c9a3fe3e
2019-07-27 22:09:46 +02:00
Thierry Strudel 6b8d46ce77 Revert "lowmemorykiller: Dump out slab state information"
This reverts commit ed1aff26c1.
2019-07-27 22:09:46 +02:00
Thierry Strudel 5cfdb9221d Revert "lowmemorykiller: Run the lowmemory notifier when killing"
This reverts commit f49905e2be.
2019-07-27 22:09:45 +02:00
Thierry Strudel da8ab3ac0e Revert "lowmemorykiller: use for_each_thread instead of buggy while_each_thread"
This reverts commit 4e352bff294dc89bbd9fc74646d9fe01cbfd6e02.
2019-07-27 22:09:45 +02:00
Thierry Strudel 200ed15225 Revert "lowmemorykiller: Don't count swap cache pages twice"
This reverts commit 52acbe414c1643066b299c1e9cdae7f4f188d419.
2019-07-27 22:09:45 +02:00
Thierry Strudel 9174c4f83f Revert "lowmemorykiller: Do proper NULL checks"
This reverts commit a7d54d72883cf7cb31c059e31125695babbf2b8d.
2019-07-27 22:09:44 +02:00
syphyr dece380b97 Revert "lowmemorykiller: adapt to vmpressure"
This reverts commit a7668cd5e2.
2019-07-27 22:09:44 +02:00
syphyr 46a47a6d0a Revert "lowmemorykiller: avoid false adaptive LMK triggers"
This reverts commit deafbd6437.
2019-07-27 22:09:43 +02:00
syphyr 688ad4c9cd Revert "lowmemorykiller: Introduce sysfs node for ALMK and PPR adj threshold"
This reverts commit b0c67828b5.
2019-07-27 22:09:43 +02:00
Thierry Strudel bfd76409e1 Revert "android/lowmemorykiller: Account for total_swapcache_pages"
This reverts commit 3a610c281c.
2019-07-27 22:09:43 +02:00
Thierry Strudel d7b96a1cf4 Revert "lowmemorykiller: Don't count reserve page twice"
This reverts commit 1fb8384f99.
2019-07-27 22:09:42 +02:00
syphyr 6ee8027b59 lowmemorykiller: Remove Samsung specific code 2019-07-27 22:09:42 +02:00
Darrick J. Wong de925a0e9d tmpfs: fix uninitialized return value in shmem_link
When we made the shmem_reserve_inode call in shmem_link conditional, we
forgot to update the declaration for ret so that it always has a known
value.  Dan Carpenter pointed out this deficiency in the original patch.

Fixes: 1062af920c07 ("tmpfs: fix link accounting when a tmpfile is linked in")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Matej Kupljen <matej.kupljen@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 29b00e609960ae0fcff382f4c7079dd0874a5311)
Change-Id: I648253008c7977fabb9d04655c11f99a536f43ca
2019-07-27 22:09:41 +02:00
Darrick J. Wong 6215edf530 tmpfs: fix link accounting when a tmpfile is linked in
tmpfs has a peculiarity of accounting hard links as if they were
separate inodes: so that when the number of inodes is limited, as it is
by default, a user cannot soak up an unlimited amount of unreclaimable
dcache memory just by repeatedly linking a file.

But when v3.11 added O_TMPFILE, and the ability to use linkat() on the
fd, we missed accommodating this new case in tmpfs: "df -i" shows that
an extra "inode" remains accounted after the file is unlinked and the fd
closed and the actual inode evicted.  If a user repeatedly links
tmpfiles into a tmpfs, the limit will be hit (ENOSPC) even after they
are deleted.

Just skip the extra reservation from shmem_link() in this case: there's
a sense in which this first link of a tmpfile is then cheaper than a
hard link of another file, but the accounting works out, and there's
still good limiting, so no need to do anything more complicated.

Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1902182134370.7035@eggly.anvils
Fixes: f4e0c30c191 ("allow the temp files created by open() to be linked to")
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: Matej Kupljen <matej.kupljen@gmail.com>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 1062af920c07f5b54cf5060fde3339da6df0cf6b)
Change-Id: I194ccdf709b1a4a385e00e07f97085521bcabdc1
2019-07-27 22:09:41 +02:00
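
A hedged sketch of the shmem_link() behaviour the two tmpfs commits above describe; the surrounding bookkeeping is elided, and the two relevant points are the initialized ret and the conditional inode reservation:

    static int shmem_link(struct dentry *old_dentry, struct inode *dir,
                          struct dentry *dentry)
    {
            struct inode *inode = old_dentry->d_inode;
            int ret = 0;    /* defined on every path (the uninitialized-return fix) */

            /* Skip the extra "inode" charge when a not-yet-linked
             * O_TMPFILE (i_nlink == 0) is linked in for the first time. */
            if (inode->i_nlink) {
                    ret = shmem_reserve_inode(inode->i_sb);
                    if (ret)
                            return ret;
            }

            /* ... bump i_nlink, update timestamps, d_instantiate() ... */
            return ret;
    }
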
syphyr b67119054a lowmemorykiller: Match case of config settings 2019-07-27 22:09:40 +02:00
Naveen Ramaraj 49b3408fe8 mm: Fix mem_init_print_info() for UML
UML uses _end instead of __bss_stop to represent the boundary.

Bug: 21631098
Change-Id: I9ad92d99d3de2ca89497b1a14d98322c43ea99fa
Signed-off-by: Naveen Ramaraj <nramaraj@codeaurora.org>
2019-07-27 22:09:40 +02:00
Tim Murray 7afbc50a96 mm: improve migration heuristic
Some users were still seeing extreme unmovable page block migration over
time due to unmovable allocations stealing mostly free movable
blocks. Reduce the likelihood of this by only allowing unmovable
allocations to aggressively steal reclaimable pageblocks.

bug 26916944

Change-Id: I87fe0b0963ea967e4edf1ef60ae3fd297bf6978c
2019-07-27 22:09:39 +02:00
Tim Murray 6c39793bbe mm: adjust page migration heuristic
The page allocator's heuristic to decide when to migrate page blocks to
unmovable seems to have been tuned on architectures that do not have
kernel drivers that would make unmovable allocations of several
megabytes or greater--i.e., no cameras or shared-memory GPUs. The number
of allocations from these drivers may be unbounded and may occupy a
significant percentage of overall system memory (>50%). As a result,
every Android device has suffered to some extent from increasing
fragmentation due to unmovable page block migration over time.

This change adjusts the page migration heuristic to only migrate page
blocks for unmovable allocations when the order of the requested
allocation is order-5 or greater. This prevents migration due to GPU and
ion allocations so long as kernel drivers allocate memory at runtime
using order-4 or smaller pages.

Experimental results running the Android longevity test suite on a Nexus
5X for 10 hours:

old heuristic: 116 unmovable blocks after boot -> 281 unmovable blocks
new heuristic: 105 unmovable blocks after boot -> 101 unmovable blocks

bug 26916944

Change-Id: I5b7ccbbafa4049a2f47f399df4cb4779689f4c40
2019-07-27 22:09:39 +02:00
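
A hedged sketch of the order threshold described above; the helper name and its exact call site in the fallback path are illustrative:

    /* Only let an unmovable allocation claim (convert) a pageblock when
     * the request itself is large; smaller runtime driver allocations
     * fall back without permanently changing the block's migratetype. */
    static bool unmovable_may_claim_pageblock(unsigned int order)
    {
            return order >= 5;      /* order-5 (32 pages) or larger only */
    }
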
Vlastimil Babka d8c37712a8 mm: more aggressive page stealing for UNMOVABLE allocations
When allocation falls back to stealing free pages of another migratetype,
it can decide to steal extra pages, or even the whole pageblock in order
to reduce fragmentation, which could happen if further allocation
fallbacks pick a different pageblock.  In try_to_steal_freepages(), one of
the situations where extra pages are stolen happens when we are trying to
allocate a MIGRATE_RECLAIMABLE page.

However, MIGRATE_UNMOVABLE allocations are not treated the same way,
although spreading such allocation over multiple fallback pageblocks is
arguably even worse than it is for RECLAIMABLE allocations.  To minimize
fragmentation, we should minimize the number of such fallbacks, and thus
steal as much as is possible from each fallback pageblock.

Note that in theory this might put more pressure on movable pageblocks and
cause movable allocations to steal back from unmovable pageblocks.
However, movable allocations are not as aggressive with stealing, and do
not cause permanent fragmentation, so the tradeoff is reasonable, and
evaluation seems to support the change.

This patch thus adds a check for MIGRATE_UNMOVABLE to the decision to
steal extra free pages.  When evaluating with stress-highalloc from
mmtests, this has reduced the number of MIGRATE_UNMOVABLE fallbacks to
roughly 1/6.  The number of these fallbacks stealing from MIGRATE_MOVABLE
block is reduced to 1/3.  There was no observation of growing number of
unmovable pageblocks over time, and also not of increased movable
allocation fallbacks.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-07-27 22:09:39 +02:00
Vlastimil Babka b67f636eef mm: always steal split buddies in fallback allocations
When allocation falls back to another migratetype, it will steal a page
with highest available order, and (depending on this order and desired
migratetype), it might also steal the rest of free pages from the same
pageblock.

Given the preference of highest available order, it is likely that it will
be higher than the desired order, and result in the stolen buddy page
being split.  The remaining pages after split are currently stolen only
when the rest of the free pages are stolen.  This can however lead to
situations where for MOVABLE allocations we split e.g.  order-4 fallback
UNMOVABLE page, but steal only order-0 page.  Then on the next MOVABLE
allocation (which may be batched to fill the pcplists) we split another
order-3 or higher page, etc.  By stealing all pages that we have split, we
can avoid further stealing.

This patch therefore adjusts the page stealing so that buddy pages created
by split are always stolen.  This has effect only on MOVABLE allocations,
as RECLAIMABLE and UNMOVABLE allocations already always do that in
addition to stealing the rest of free pages from the pageblock.  The
change also makes it possible to simplify try_to_steal_freepages() and factor out CMA
handling.

According to Mel, it has been intended since the beginning that buddy
pages after split would be stolen always, but it doesn't seem like it was
ever the case until commit 47118af076 ("mm: mmzone: MIGRATE_CMA
migration type added").  The commit has unintentionally introduced this
behavior, but was reverted by commit 0cbef29a7821 ("mm:
__rmqueue_fallback() should respect pageblock type").  Neither included
evaluation.

My evaluation with stress-highalloc from mmtests shows about 2.5x
reduction of page stealing events for MOVABLE allocations, without
affecting the page stealing events for other allocation migratetypes.

Change-Id: I2c5b1a7fd01fc080efb689da07d380abd0e030ee
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-07-27 22:09:38 +02:00
Vlastimil Babka 674e801619 mm: when stealing freepages, also take pages created by splitting buddy page
When studying page stealing, I noticed some weird looking decisions in
try_to_steal_freepages().  The first I assume is a bug (Patch 1), the
following two patches were driven by evaluation.

Testing was done with stress-highalloc of mmtests, using the
mm_page_alloc_extfrag tracepoint and postprocessing to get counts of how
often page stealing occurs for individual migratetypes, and what
migratetypes are used for fallbacks.  Arguably, the worst case of page
stealing is when UNMOVABLE allocation steals from MOVABLE pageblock.
RECLAIMABLE allocation stealing from MOVABLE allocation is also not ideal,
so the goal is to minimize these two cases.

The evaluation of v2 wasn't always a clear win and Joonsoo questioned the
results.  Here I used a different baseline which includes the RFC compaction
improvements from [1].  I found that the compaction improvements reduce
variability of stress-highalloc, so there's less noise in the data.

First, let's look at stress-highalloc configured to do sync compaction,
and how these patches reduce page stealing events during the test.  The first
column is after a fresh reboot, the other two are reiterations of the test
without reboot.  That was all accumulated over 5 re-iterations (so the benchmark
was run 5x3 times with 5 fresh restarts).

Baseline:

                                                   3.19-rc4        3.19-rc4        3.19-rc4
                                                  5-nothp-1       5-nothp-2       5-nothp-3
Page alloc extfrag event                               10264225     8702233    10244125
Extfrag fragmenting                                    10263271     8701552    10243473
Extfrag fragmenting for unmovable                         13595       17616       15960
Extfrag fragmenting unmovable placed with movable          7989       12193        8447
Extfrag fragmenting for reclaimable                         658        1840        1817
Extfrag fragmenting reclaimable placed with movable         558        1677        1679
Extfrag fragmenting for movable                        10249018     8682096    10225696

With Patch 1:
                                                   3.19-rc4        3.19-rc4        3.19-rc4
                                                  6-nothp-1       6-nothp-2       6-nothp-3
Page alloc extfrag event                               11834954     9877523     9774860
Extfrag fragmenting                                    11833993     9876880     9774245
Extfrag fragmenting for unmovable                          7342       16129       11712
Extfrag fragmenting unmovable placed with movable          4191       10547        6270
Extfrag fragmenting for reclaimable                         373        1130         923
Extfrag fragmenting reclaimable placed with movable         302         906         738
Extfrag fragmenting for movable                        11826278     9859621     9761610

With Patch 2:
                                                   3.19-rc4        3.19-rc4        3.19-rc4
                                                  7-nothp-1       7-nothp-2       7-nothp-3
Page alloc extfrag event                                4725990     3668793     3807436
Extfrag fragmenting                                     4725104     3668252     3806898
Extfrag fragmenting for unmovable                          6678        7974        7281
Extfrag fragmenting unmovable placed with movable          2051        3829        4017
Extfrag fragmenting for reclaimable                         429        1208        1278
Extfrag fragmenting reclaimable placed with movable         369         976        1034
Extfrag fragmenting for movable                         4717997     3659070     3798339

With Patch 3:
                                                   3.19-rc4        3.19-rc4        3.19-rc4
                                                  8-nothp-1       8-nothp-2       8-nothp-3
Page alloc extfrag event                                5016183     4700142     3850633
Extfrag fragmenting                                     5015325     4699613     3850072
Extfrag fragmenting for unmovable                          1312        3154        3088
Extfrag fragmenting unmovable placed with movable          1115        2777        2714
Extfrag fragmenting for reclaimable                         437        1193        1097
Extfrag fragmenting reclaimable placed with movable         330         969         879
Extfrag fragmenting for movable                         5013576     4695266     3845887

In v2 we've seen apparent regression with Patch 1 for unmovable events,
this is now gone, suggesting it was indeed noise.  Here, each patch
improves the situation for unmovable events.  Reclaimable is improved by
patch 1 and then either the same modulo noise, or perhaps slightly worse -
a small price for unmovable improvements, IMHO.  The number of movable
allocations falling back to other migratetypes is most noisy, but it's
reduced to half at Patch 2 nevertheless.  These are least critical as
compaction can move them around.

If we look at success rates, the patches don't affect them; that didn't change.

Baseline:
                             3.19-rc4              3.19-rc4              3.19-rc4
                            5-nothp-1             5-nothp-2             5-nothp-3
Success 1 Min         49.00 (  0.00%)       42.00 ( 14.29%)       41.00 ( 16.33%)
Success 1 Mean        51.00 (  0.00%)       45.00 ( 11.76%)       42.60 ( 16.47%)
Success 1 Max         55.00 (  0.00%)       51.00 (  7.27%)       46.00 ( 16.36%)
Success 2 Min         53.00 (  0.00%)       47.00 ( 11.32%)       44.00 ( 16.98%)
Success 2 Mean        59.60 (  0.00%)       50.80 ( 14.77%)       48.20 ( 19.13%)
Success 2 Max         64.00 (  0.00%)       56.00 ( 12.50%)       52.00 ( 18.75%)
Success 3 Min         84.00 (  0.00%)       82.00 (  2.38%)       78.00 (  7.14%)
Success 3 Mean        85.60 (  0.00%)       82.80 (  3.27%)       79.40 (  7.24%)
Success 3 Max         86.00 (  0.00%)       83.00 (  3.49%)       80.00 (  6.98%)

Patch 1:
                             3.19-rc4              3.19-rc4              3.19-rc4
                            6-nothp-1             6-nothp-2             6-nothp-3
Success 1 Min         49.00 (  0.00%)       44.00 ( 10.20%)       44.00 ( 10.20%)
Success 1 Mean        51.80 (  0.00%)       46.00 ( 11.20%)       45.80 ( 11.58%)
Success 1 Max         54.00 (  0.00%)       49.00 (  9.26%)       49.00 (  9.26%)
Success 2 Min         58.00 (  0.00%)       49.00 ( 15.52%)       48.00 ( 17.24%)
Success 2 Mean        60.40 (  0.00%)       51.80 ( 14.24%)       50.80 ( 15.89%)
Success 2 Max         63.00 (  0.00%)       54.00 ( 14.29%)       55.00 ( 12.70%)
Success 3 Min         84.00 (  0.00%)       81.00 (  3.57%)       79.00 (  5.95%)
Success 3 Mean        85.00 (  0.00%)       81.60 (  4.00%)       79.80 (  6.12%)
Success 3 Max         86.00 (  0.00%)       82.00 (  4.65%)       82.00 (  4.65%)

Patch 2:

                             3.19-rc4              3.19-rc4              3.19-rc4
                            7-nothp-1             7-nothp-2             7-nothp-3
Success 1 Min         50.00 (  0.00%)       44.00 ( 12.00%)       39.00 ( 22.00%)
Success 1 Mean        52.80 (  0.00%)       45.60 ( 13.64%)       42.40 ( 19.70%)
Success 1 Max         55.00 (  0.00%)       46.00 ( 16.36%)       47.00 ( 14.55%)
Success 2 Min         52.00 (  0.00%)       48.00 (  7.69%)       45.00 ( 13.46%)
Success 2 Mean        53.40 (  0.00%)       49.80 (  6.74%)       48.80 (  8.61%)
Success 2 Max         57.00 (  0.00%)       52.00 (  8.77%)       52.00 (  8.77%)
Success 3 Min         84.00 (  0.00%)       81.00 (  3.57%)       79.00 (  5.95%)
Success 3 Mean        85.00 (  0.00%)       82.40 (  3.06%)       79.60 (  6.35%)
Success 3 Max         86.00 (  0.00%)       83.00 (  3.49%)       80.00 (  6.98%)

Patch 3:
                             3.19-rc4              3.19-rc4              3.19-rc4
                            8-nothp-1             8-nothp-2             8-nothp-3
Success 1 Min         46.00 (  0.00%)       44.00 (  4.35%)       42.00 (  8.70%)
Success 1 Mean        50.20 (  0.00%)       45.60 (  9.16%)       44.00 ( 12.35%)
Success 1 Max         52.00 (  0.00%)       47.00 (  9.62%)       47.00 (  9.62%)
Success 2 Min         53.00 (  0.00%)       49.00 (  7.55%)       48.00 (  9.43%)
Success 2 Mean        55.80 (  0.00%)       50.60 (  9.32%)       49.00 ( 12.19%)
Success 2 Max         59.00 (  0.00%)       52.00 ( 11.86%)       51.00 ( 13.56%)
Success 3 Min         84.00 (  0.00%)       80.00 (  4.76%)       79.00 (  5.95%)
Success 3 Mean        85.40 (  0.00%)       81.60 (  4.45%)       80.40 (  5.85%)
Success 3 Max         87.00 (  0.00%)       83.00 (  4.60%)       82.00 (  5.75%)

While there's no improvement here, I consider the reduced fragmentation events
to be worthwhile on their own.  Patch 2 also seems to reduce scanning for free
pages, and migrations in compaction, suggesting it has somewhat less work
to do:

Patch 1:

Compaction stalls                 4153        3959        3978
Compaction success                1523        1441        1446
Compaction failures               2630        2517        2531
Page migrate success           4600827     4943120     5104348
Page migrate failure             19763       16656       17806
Compaction pages isolated      9597640    10305617    10653541
Compaction migrate scanned    77828948    86533283    87137064
Compaction free scanned      517758295   521312840   521462251
Compaction cost                   5503        5932        6110

Patch 2:

Compaction stalls                 3800        3450        3518
Compaction success                1421        1316        1317
Compaction failures               2379        2134        2201
Page migrate success           4160421     4502708     4752148
Page migrate failure             19705       14340       14911
Compaction pages isolated      8731983     9382374     9910043
Compaction migrate scanned    98362797    96349194    98609686
Compaction free scanned      496512560   469502017   480442545
Compaction cost                   5173        5526        5811

As with v2, /proc/pagetypeinfo appears unaffected with respect to numbers
of unmovable and reclaimable pageblocks.

Configuring the benchmark to allocate like THP page fault (i.e.  no sync
compaction) gives much noisier results for iterations 2 and 3 after
reboot.  This is not so surprising given how [1] offers lower improvements
in this scenario due to fewer restarts after deferred compaction, which
would change the compaction pivot.

Baseline:
                                                   3.19-rc4        3.19-rc4        3.19-rc4
                                                    5-thp-1         5-thp-2         5-thp-3
Page alloc extfrag event                                8148965     6227815     6646741
Extfrag fragmenting                                     8147872     6227130     6646117
Extfrag fragmenting for unmovable                         10324       12942       15975
Extfrag fragmenting unmovable placed with movable          5972        8495       10907
Extfrag fragmenting for reclaimable                         601        1707        2210
Extfrag fragmenting reclaimable placed with movable         520        1570        2000
Extfrag fragmenting for movable                         8136947     6212481     6627932

Patch 1:
                                                   3.19-rc4        3.19-rc4        3.19-rc4
                                                    6-thp-1         6-thp-2         6-thp-3
Page alloc extfrag event                                8345457     7574471     7020419
Extfrag fragmenting                                     8343546     7573777     7019718
Extfrag fragmenting for unmovable                         10256       18535       30716
Extfrag fragmenting unmovable placed with movable          6893       11726       22181
Extfrag fragmenting for reclaimable                         465        1208        1023
Extfrag fragmenting reclaimable placed with movable         353         996         843
Extfrag fragmenting for movable                         8332825     7554034     6987979

Patch 2:
                                                   3.19-rc4        3.19-rc4        3.19-rc4
                                                    7-thp-1         7-thp-2         7-thp-3
Page alloc extfrag event                                3512847     3020756     2891625
Extfrag fragmenting                                     3511940     3020185     2891059
Extfrag fragmenting for unmovable                          9017        6892        6191
Extfrag fragmenting unmovable placed with movable          1524        3053        2435
Extfrag fragmenting for reclaimable                         445        1081        1160
Extfrag fragmenting reclaimable placed with movable         375         918         986
Extfrag fragmenting for movable                         3502478     3012212     2883708

Patch 3:
                                                   3.19-rc4        3.19-rc4        3.19-rc4
                                                    8-thp-1         8-thp-2         8-thp-3
Page alloc extfrag event                                3181699     3082881     2674164
Extfrag fragmenting                                     3180812     3082303     2673611
Extfrag fragmenting for unmovable                          1201        4031        4040
Extfrag fragmenting unmovable placed with movable           974        3611        3645
Extfrag fragmenting for reclaimable                         478        1165        1294
Extfrag fragmenting reclaimable placed with movable         387         985        1030
Extfrag fragmenting for movable                         3179133     3077107     2668277

The improvements for the first iteration are clear; the rest is much noisier
and can appear like a regression for Patch 1.  Anyway, Patch 2 rectifies it.

Allocation success rates are again unaffected so there's no point in
making this e-mail any longer.

[1] http://marc.info/?l=linux-mm&m=142166196321125&w=2

This patch (of 3):

When __rmqueue_fallback() is called to allocate a page of order X, it will
find a page of order Y >= X of a fallback migratetype, which is different
from the desired migratetype.  With the help of try_to_steal_freepages(),
it may change the migratetype (to the desired one) also of:

1) all currently free pages in the pageblock containing the fallback page
2) the fallback pageblock itself
3) buddy pages created by splitting the fallback page (when Y > X)

These decisions take the order Y into account, as well as the desired
migratetype, with the goal of preventing multiple fallback allocations
that could e.g.  distribute UNMOVABLE allocations among multiple
pageblocks.

Originally, the decision for 1) implied the decision for 3).  Commit
47118af076 ("mm: mmzone: MIGRATE_CMA migration type added") changed that
(probably unintentionally) so that the buddy pages in case 3) are always
changed to the desired migratetype, except for CMA pageblocks.

Commit fef903efcf0c ("mm/page_allo.c: restructure free-page stealing code
and fix a bug") did some refactoring and added a comment that the case of
3) is intended.  Commit 0cbef29a7821 ("mm: __rmqueue_fallback() should
respect pageblock type") removed the comment and tried to restore the
original behavior where 1) implies 3), but due to the previous
refactoring, the result is instead that only 2) implies 3) - and the
conditions for 2) are less frequently met than conditions for 1).  This
may increase fragmentation in situations where the code decides to steal
all free pages from the pageblock (case 1)), but then gives back the buddy
pages produced by splitting.

This patch restores the original intended logic where 1) implies 3).
During testing with stress-highalloc from mmtests, this has shown to
decrease the number of events where UNMOVABLE and RECLAIMABLE allocations
steal from MOVABLE pageblocks, which can lead to permanent fragmentation.
In some cases it has increased the number of events when MOVABLE
allocations steal from UNMOVABLE or RECLAIMABLE pageblocks, but these are
fixable by sync compaction and thus less harmful.

Note that evaluation has shown that the behavior introduced by
47118af076 for buddy pages in case 3) is actually even better than the
original logic, so the following patch will introduce it properly once
again.  For stable backports of this patch it thus makes sense to only fix
versions containing 0cbef29a7821.

[iamjoonsoo.kim@lge.com: tracepoint fix]
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: <stable@vger.kernel.org>	[3.13+ containing 0cbef29a7821]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-07-27 22:09:38 +02:00
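
A hedged sketch of the stealing decision discussed across the three page allocator commits above (simplified, not the exact mainline code). Whenever this returns true, the whole pageblock is claimed, and the buddy pages produced by splitting the fallback page are retagged with the requested migratetype so the next allocation does not fall back again:

    static bool should_steal_whole_pageblock(unsigned int order, int start_mt)
    {
            if (order >= pageblock_order / 2)
                    return true;    /* large request: take the block */
            if (start_mt == MIGRATE_RECLAIMABLE || start_mt == MIGRATE_UNMOVABLE)
                    return true;    /* minimize repeated harmful fallbacks */
            if (page_group_by_mobility_disabled)
                    return true;
            return false;
    }
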
KOSAKI Motohiro c865a91570 mm: get rid of unnecessary overhead of trace_mm_page_alloc_extfrag()
In general, every tracepoint should be zero overhead if it is disabled.
However, trace_mm_page_alloc_extfrag() is one of the exceptions.  It evaluates
"new_type == start_migratetype" even if the tracepoint is disabled.

However, the code can be moved into the tracepoint's TP_fast_assign(), and
TP_fast_assign() exists for exactly this purpose.  This patch does that.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-07-27 22:09:37 +02:00
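
A hedged sketch of the pattern this commit describes, with a reduced field set for brevity: the comparison lives inside TP_fast_assign(), so it costs nothing while the tracepoint is disabled:

    TRACE_EVENT(mm_page_alloc_extfrag,

            TP_PROTO(struct page *page, int alloc_order, int fallback_order,
                     int alloc_migratetype, int fallback_migratetype,
                     int new_migratetype),

            TP_ARGS(page, alloc_order, fallback_order,
                    alloc_migratetype, fallback_migratetype, new_migratetype),

            TP_STRUCT__entry(
                    __field(int, fallback_migratetype)
                    __field(int, change_ownership)
            ),

            TP_fast_assign(
                    __entry->fallback_migratetype = fallback_migratetype;
                    /* evaluated only when the event actually fires */
                    __entry->change_ownership =
                            (new_migratetype == alloc_migratetype);
            ),

            TP_printk("fallback_migratetype=%d change_ownership=%d",
                      __entry->fallback_migratetype, __entry->change_ownership)
    );
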
Srivatsa S. Bhat d2b37143d9 mm/page_alloc.c: fix the value of fallback_migratetype in alloc_extfrag tracepoint()
In the current code, the value of fallback_migratetype that is printed
using the mm_page_alloc_extfrag tracepoint is the value of the
migratetype *after* it has been set to the preferred migratetype (if the
ownership was changed).  Obviously that wouldn't have been the original
intent.  (We already have a separate 'change_ownership' field to tell
whether the ownership of the pageblock was changed from the
fallback_migratetype to the preferred type.)

The intent of the fallback_migratetype field is to show the migratetype
from which we borrowed pages in order to satisfy the allocation request.
So fix the code to print that value correctly.

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-07-27 22:09:37 +02:00
Vlastimil Babka 16159966b6 mm/page_alloc: prevent MIGRATE_RESERVE pages from being misplaced
For the MIGRATE_RESERVE pages, it is useful that they do not get
misplaced on the free_list of another migratetype, otherwise they might get
allocated prematurely and e.g.  fragment the MIGRATE_RESERVE pageblocks.
While this cannot be avoided completely when allocating new
MIGRATE_RESERVE pageblocks in min_free_kbytes sysctl handler, we should
prevent the misplacement where possible.

Currently, it is possible for the misplacement to happen when a
MIGRATE_RESERVE page is allocated on pcplist through rmqueue_bulk() as a
fallback for other desired migratetype, and then later freed back
through free_pcppages_bulk() without being actually used.  This happens
because free_pcppages_bulk() uses get_freepage_migratetype() to choose
the free_list, and rmqueue_bulk() calls set_freepage_migratetype() with
the *desired* migratetype and not the page's original MIGRATE_RESERVE
migratetype.

This patch fixes the problem by moving the call to
set_freepage_migratetype() from rmqueue_bulk() down to
__rmqueue_smallest() and __rmqueue_fallback() where the actual page's
migratetype (i.e.  which free_list the page is taken from) is used.
Note that this migratetype might be different from the pageblock's
migratetype due to freepage stealing decisions.  This is OK, as page
stealing never uses MIGRATE_RESERVE as a fallback, and also takes care
to leave all MIGRATE_CMA pages on the correct freelist.

Therefore, as an additional benefit, the call to
get_pageblock_migratetype() from rmqueue_bulk() when CMA is enabled, can
be removed completely.  This relies on the fact that MIGRATE_CMA
pageblocks are created only during system init, and the above.  The
related is_migrate_isolate() check is also unnecessary, as memory
isolation has other ways to move pages between freelists, and drain pcp
lists containing pages that should be isolated.  The buffered_rmqueue()
can also benefit from calling get_freepage_migratetype() instead of
get_pageblock_migratetype().

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Yong-Taek Lee <ytk.lee@samsung.com>
Reported-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Suggested-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Suggested-by: Mel Gorman <mgorman@suse.de>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: "Wang, Yalin" <Yalin.Wang@sonymobile.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-07-27 22:09:36 +02:00
KOSAKI Motohiro 666f0d1fa2 mm: __rmqueue_fallback() should respect pageblock type
When __rmqueue_fallback() doesn't find a free block with the required size
it splits a larger page and puts the rest of the page onto the free list.

But it has one serious mistake.  When putting back, __rmqueue_fallback()
always uses start_migratetype if the type is not CMA.  However,
__rmqueue_fallback() is only called when all of the start_migratetype
queue is empty.  That is, __rmqueue_fallback() always puts memory back on
the wrong queue except when try_to_steal_freepages() changed the pageblock
type (i.e.  the requested size is smaller than half of the page block).  The
end result is that the antifragmentation framework increases fragmentation
instead of decreasing it.

Mel's original anti-fragmentation code does the right thing.  But commit
47118af076 ("mm: mmzone: MIGRATE_CMA migration type added") broke it.

This patch restores sane and old behavior.  It also removes an incorrect
comment which was introduced by commit fef903efcf0c ("mm/page_alloc.c:
restructure free-page stealing code and fix a bug").

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-07-27 22:09:36 +02:00
syphyr f85936494a mm: Fix compile of zone counter for cma pages
The restructuring of the free-page stealing code requires a
compile fix because of the zone counter for
cma pages.

fixes:

"mm/page_allo.c: restructure free-page stealing code and
fix a bug"

"mm: add zone counter for cma pages"
2019-07-27 22:09:35 +02:00
Srivatsa S. Bhat 8a9d0c9766 mm/page_allo.c: restructure free-page stealing code and fix a bug
The free-page stealing code in __rmqueue_fallback() is somewhat hard to
follow, and has an incredible amount of subtlety hidden inside!

First off, there is a minor bug in the reporting of change-of-ownership of
pageblocks.  Under some conditions, we try to move up to
'pageblock_nr_pages' pages to the preferred allocation list.  But
we change the ownership of that pageblock to the preferred type only if we
manage to successfully move at least half of that pageblock (or if
page_group_by_mobility_disabled is set).

However, the current code ignores the latter part and sets the
'migratetype' variable to the preferred type, irrespective of whether we
actually changed the pageblock migratetype of that block or not.  So, the
page_alloc_extfrag tracepoint can end up printing incorrect info (i.e.,
'change_ownership' might be shown as 1 when it must have been 0).

So fixing this involves moving the update of the 'migratetype' variable to
the right place.  But looking closer, we observe that the 'migratetype'
variable is used subsequently for checks such as "is_migrate_cma()".
Obviously the intent there is to check if the *fallback* type is
MIGRATE_CMA, but since we already set the 'migratetype' variable to
start_migratetype, we end up checking if the *preferred* type is
MIGRATE_CMA!!

To make things more interesting, this actually doesn't cause a bug in
practice, because we never change *anything* if the fallback type is CMA.

So, restructure the code in such a way that it is trivial to understand
what is going on, and also fix the above mentioned bug.  And while at it,
also add a comment explaining the subtlety behind the migratetype used in
the call to expand().

[akpm@linux-foundation.org: remove unneeded `inline', small coding-style fix]
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Change-Id: I2e84c3b2a45dc063402117dd74179585caa7234c
2019-07-27 22:09:35 +02:00
syphyr 6604f82576 mm/page_allo.c: Remove Samsung specific code from rmqueue_fallback
This will be added back to try_to_steal_freepages in a future
commit.
2019-07-27 22:09:35 +02:00
Peter Zijlstra 0bab0a32f6 sched/core: Fix TASK_DEAD race in finish_task_switch()
commit 95913d97914f44db2b81271c2e2ebd4d2ac2df83 upstream.

So the problem this patch is trying to address is as follows:

        CPU0                            CPU1

        context_switch(A, B)
                                        ttwu(A)
                                          LOCK A->pi_lock
                                          A->on_cpu == 0
        finish_task_switch(A)
          prev_state = A->state  <-.
          WMB                      |
          A->on_cpu = 0;           |
          UNLOCK rq0->lock         |
                                   |    context_switch(C, A)
                                   `--  A->state = TASK_DEAD
          prev_state == TASK_DEAD
            put_task_struct(A)
                                        context_switch(A, C)
                                        finish_task_switch(A)
                                          A->state == TASK_DEAD
                                            put_task_struct(A)

The argument being that the WMB will allow the load of A->state on CPU0
to cross over and observe CPU1's store of A->state, which will then
result in a double-drop and use-after-free.

Now the comment states (and this was true once upon a long time ago)
that we need to observe A->state while holding rq->lock because that
will order us against the wakeup; however the wakeup will not in fact
acquire (that) rq->lock; it takes A->pi_lock these days.

We can obviously fix this by upgrading the WMB to an MB, but that is
expensive, so we'd rather avoid that.

The alternative this patch takes is: smp_store_release(&A->on_cpu, 0),
which avoids the MB on some archs, but not on important ones like ARM.

Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Cc: manfred@colorfullife.com
Cc: will.deacon@arm.com
Fixes: e4a52bcb9a ("sched: Remove rq->lock from the first half of ttwu()")
Link: http://lkml.kernel.org/r/20150929124509.GG3816@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2019-07-27 22:09:34 +02:00
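
A hedged sketch of the core of the fix as the message above describes it: publish the on_cpu store with release semantics so the earlier load of the task's state cannot be reordered past it (the rest of finish_lock_switch() is elided):

    static inline void clear_prev_on_cpu(struct task_struct *prev)
    {
            /* Old, racy pattern:
             *      smp_wmb();
             *      prev->on_cpu = 0;
             *
             * smp_store_release() keeps the prior read of prev->state
             * ordered before the store that lets the wakeup path run. */
            smp_store_release(&prev->on_cpu, 0);
    }
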
Oleg Nesterov 5db6c98f70 exit: fix race between wait_consider_task() and wait_task_zombie()
commit 3245d6acab981a2388ffb877c7ecc97e763c59d4 upstream.

wait_consider_task() checks EXIT_ZOMBIE after EXIT_DEAD/EXIT_TRACE, and
both checks can fail if we race with an EXIT_ZOMBIE -> EXIT_DEAD/EXIT_TRACE
change in between, since gcc may reload p->exit_state after
security_task_wait().  In this case ->notask_error will be wrongly
cleared and do_wait() can hang forever if it was the last eligible
child.

Many thanks to Arne who carefully investigated the problem.

Note: this bug is very old but it was purely theoretical until commit
b3ab03160dfa ("wait: completely ignore the EXIT_DEAD tasks").  Before
this commit, "-O2" was probably enough to guarantee that the compiler won't
read ->exit_state twice.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Arne Goedeke <el@laramies.com>
Tested-by: Arne Goedeke <el@laramies.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
2019-07-27 22:09:34 +02:00
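
A hedged sketch of the reload problem (the helper and its return values are illustrative): snapshotting p->exit_state once means both checks see the same value even if the task moves from EXIT_ZOMBIE to EXIT_DEAD/EXIT_TRACE concurrently:

    static int classify_child_once(struct task_struct *p)
    {
            int exit_state = ACCESS_ONCE(p->exit_state);    /* single load */

            if (exit_state == EXIT_DEAD || exit_state == EXIT_TRACE)
                    return 0;       /* nothing to do for this child */
            if (exit_state == EXIT_ZOMBIE)
                    return 1;       /* reapable */
            return -1;              /* still running */
    }
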
Srinivas Girigowda 3879ba81de qcacld-2.0: Validate packet length, before processing PTT commands
Propagation from qcacld-3.0 to qcacld-2.0.

There is a possibility of a buffer overread while processing PTT
commands because the packet length check is missing.

While processing PTT commands, validate the packet length to make sure
there is no buffer overread.

Change-Id: I63da658605a360f51a62c18fbc9ba7c60fb19525
CRs-Fixed: 2125577
Bug: 65853393
Signed-off-by: Srinivas Girigowda <sgirigow@codeaurora.org>
2019-07-27 22:09:33 +02:00
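
A hedged sketch of the kind of length check this commit describes; the struct layout and names are illustrative, not qcacld identifiers:

    struct ptt_msg_hdr {
            uint16_t cmd_id;
            uint16_t data_len;      /* length of the payload that follows */
    };

    static int ptt_validate_pkt(const uint8_t *buf, size_t pkt_len)
    {
            const struct ptt_msg_hdr *hdr = (const struct ptt_msg_hdr *)buf;

            if (pkt_len < sizeof(*hdr))
                    return -EINVAL;         /* too short to hold the header */
            if ((size_t)hdr->data_len > pkt_len - sizeof(*hdr))
                    return -EINVAL;         /* payload overruns the packet */
            return 0;
    }
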
Mukul Sharma 68f06ef28e qcacld-2.0: Avoid OEM message overread
Propagation from qcacld-3.0 to qcacld-2.0

Currently in oem_cmd_handler() the CLD80211_ATTR_DATA is processed as
an OEM message without first verifying that the payload has a
sufficient length. This can lead to overreading the buffer. Add length
checks to make sure the payload is large enough to hold the message it
is supposed to encapsulate.

Bug: 67582682
Change-Id: Ifaa7d1cce5bd427bfeca14cab5a44c4cb72ce59f
CRs-Fixed: 2058471
Signed-off-by: Ahmed ElArabawy <arabawy@google.com>
2019-07-27 22:09:33 +02:00
Srinivas Girigowda e1eb41fef5 qcacld-2.0: Add support to use generic netlink sockets for userspace apps
Currently the user space communication functions [cnss diag, PTT socket app]
in the host driver use netlink user sockets, which is a security concern under
Linux Android SE policies.

Add support to use the netlink family cld80211, which uses generic
netlink sockets.

Change-Id: I4ea49ac6d7c9381212c93567fdc40f90e04dfba4
CRs-Fixed: 1112784
Bug: 32775496
Signed-off-by: Srinivas Girigowda <sgirigow@codeaurora.org>
2019-07-27 22:09:32 +02:00
Srinivas Girigowda 1b8116f91b qcacld-2.0: Remove BTC code to reduce driver size
BTC code is only used for the WCN chipset, where the BT COEX module was running
on the host. For the Rome solution, the BT COEX module has moved down to FW.
Remove it to reduce driver size.

Change-Id: I0548dd704a2a2b6bd36d01e3e3f4963b8c19d02b
CRs-Fixed: 1058780
Bug: 32775496
Signed-off-by: Srinivas Girigowda <sgirigow@codeaurora.org>
2019-07-27 22:09:32 +02:00
syphyr f7394791bb defconfig: Enable cnss_genl driver compilation
The cnss_genl driver creates a netlink family and multicast groups
to facilitate communication between the WLAN driver and userspace.

Define the flag CONFIG_CNSS_GENL and set it to 'y' (yes) to enable
compilation of the cnss_genl driver in order to use it.

Change-Id: I125dc51687e88e0af2ca8413b7029163e4a6ca9f
2019-07-27 22:09:31 +02:00
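
For reference, the defconfig line this commit refers to is simply:

    CONFIG_CNSS_GENL=y
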
Srinivas Girigowda e7cc63ad89 Driver to create cld80211 nl family at bootup time
Create the cnss_genl driver to register a netlink family, cld80211,
and make it available to the cld driver and applications when
they query for it.
This driver creates multicast groups to facilitate communication
from the cld driver to userspace and allows the cld driver to register
for different commands from user space.

Resolve compilation errors and tweak netlink family creation.

Change-Id: I0795dd08b6429fad60187fee724b3fd3ccfa5603
CRs-Fixed: 1100401
Bug: 32775496
Signed-off-by: Srinivas Dasari <dasaris@codeaurora.org>
Signed-off-by: Srinivas Girigowda <sgirigow@codeaurora.org>
2019-07-27 22:09:31 +02:00
Srinivas Girigowda 22691f3714 qcacld-2.0: Print cmd in hostapd_ioctl
Print cmd in hostapd_ioctl.

Change-Id: Ife96018ba27c952fe2d9c593955e150984547220
Bug: 35668243
Signed-off-by: Srinivas Girigowda <sgirigow@codeaurora.org>
2019-07-27 22:09:31 +02:00
Jingxiang Ge 3a34ea1eb5 qcacld-2.0: Avoid possible information leak in send_btc_nlink_msg
In the function send_btc_nlink_msg an skb is allocated but the allocated
memory is not initialized. NLMSG_SPACE is used in many places in this
function and does 4-byte alignment of the buffer. skb_put
adjusts the tail pointer according to this 4-byte alignment, which results
in padding some extra bytes. Since these bytes are not initialized,
this leads to an information leak.

To resolve this issue, initialize the skb with zeroes after alloc_skb.

Change-Id: I9d4d2030927c4aedf8c201bf875741b8c800ee7e
CRs-Fixed: 2288807
2019-07-27 22:09:30 +02:00
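
A hedged sketch of the fix described above (payload_len and the helper name are illustrative): zero the freshly allocated skb so the padding introduced by NLMSG_SPACE() alignment can never carry stale kernel memory to userspace:

    static struct sk_buff *btc_alloc_zeroed_skb(size_t payload_len)
    {
            struct sk_buff *skb = alloc_skb(NLMSG_SPACE(payload_len), GFP_KERNEL);

            if (!skb)
                    return NULL;
            /* Covers the message, its netlink header and the alignment padding. */
            memset(skb->data, 0, NLMSG_SPACE(payload_len));
            return skb;
    }
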
mohamed.khadri b1ab78978c usb: f_fs: set ep->driver_data on unbind
If ep->driver_data is set to NULL in ffs_func_eps_disable, this ep could
be claimed by other gadgets using usb_ep_autoconfig, which also marks
ep->desc = NULL. On the next call to ffs_func_eps_enable, an invalid ep
is referenced. Any pending io reads could use a stale ep reference, leading
to a NULL dereference while accessing desc, here ep->ep->desc->wMaxPacketSize.

Bug: 27340369

Change-Id: I80b7caa463be9fa5ae495470cf09c6c32478ad1c
Signed-off-by: Mohamed Khadri <mohamed.khadri@lge.com>
2019-07-27 22:09:30 +02:00
Mayank Rana 4e9f310ad2 f_fs: Use pr_err_ratelimited with epfile_io error case
In some cases where adbd is trying to send data but the USB bus is
suspended, all read/write requests would fail. This results in
more error logging on the console, causing a watchdog bark. Hence use
pr_err_ratelimited() to reduce error logging on the console.

CRs-Fixed: 818095
Change-Id: I25e2a0fdc53cf6f34d8e32223c46e8706f931450
Signed-off-by: Mayank Rana <mrana@codeaurora.org>
Signed-off-by: Ajay Dudani <adudani@codeaurora.org>
2019-07-27 22:09:29 +02:00
Thierry Strudel 2b6c5a9bd7 USB: f_fs: print error only when not suspending
wait_event_interruptible will report a valid -ERESTARTSYS value when
going for suspend; just don't report an error message in this case.

Change-Id: I3477f888d96ee3a52805108b4456a6863f20a6c7
Signed-off-by: Thierry Strudel <tstrudel@google.com>
2019-07-27 22:09:29 +02:00
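
A hedged sketch combining the two f_fs logging commits above (the wait queue, flag and message text are illustrative): a -ERESTARTSYS return from wait_event_interruptible() is treated as a normal interruption, and genuine failures are logged with rate limiting so a stuck bus cannot flood the console:

    static int epfile_wait_and_report(wait_queue_head_t *wq, bool *done, int io_status)
    {
            int ret = wait_event_interruptible(*wq, *done);

            if (ret == -ERESTARTSYS)
                    return ret;     /* signal or suspend: not worth a warning */

            if (io_status < 0)      /* real I/O failure: log, but rate-limited */
                    pr_err_ratelimited("f_fs: I/O request failed: %d\n", io_status);

            return io_status;
    }
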