Commit Graph

314763 Commits

Author SHA1 Message Date
followmsi 1bcfb01071 msm_rng: Fix build
Change-Id: I65b2862ab63682b16f191e4906b8ea1ef4b146da
Change-Id: I3be83394379ce1710e78515a7a8413f74d2bfebe
2023-03-24 20:53:08 +01:00
followmsi 0bd3f853ab Add include/linux/qrng.h
Change-Id: Ia43fdace113b2cdf7ae2dde69bb39347b896e549
2023-03-24 20:53:08 +01:00
Dinesh K Garg 15acebb33e msm_rng: Resolve race condition issues
Resolve a race condition between mutex initialization and hwrng
registration.
Remove the HWRNG FIFO; it is not required in software.

Change-Id: I9fa3e5c7e2e9e14feb88a4656dcfab7dec3cbd67
Signed-off-by: Dinesh K Garg <dineshg@codeaurora.org>
2023-03-24 20:53:08 +01:00
Dinesh K Garg 8df7aab686 msm_rng: Enable/Disable Bus bandwidth for every RNG read call
This patch adds calls to enable and disable bus bandwidth
for every RNG Read call.

Change-Id: Ia1ac31ffa79a8be2761c243eee9bf87f25422c24
Signed-off-by: Dinesh K Garg <dineshg@codeaurora.org>
2023-03-24 20:53:07 +01:00
Dinesh K Garg 969302f5a7 msm_rng: Change long to u32 type for buffers reading from RNG HW
Change the type of the buffer used to read data from the RNG hardware
from long to u32, to reflect the size of the HW register.

Change-Id: I54e8eeae40a66c8033637044e627023150d6c411
Acked-by: Baranidharan Muthukumaran <bmuthuku@qti.qualcomm.com>
Signed-off-by: Dinesh K Garg <dineshg@codeaurora.org>
(cherry picked from commit 7bcafc734d)
2023-03-24 20:53:07 +01:00
followmsi db92d029a4 Import new HW_RANDOM_MSM
Change-Id: If9f43dd61d7e07f8e695455d8705ba881c672cbd
2023-03-24 20:53:07 +01:00
Eric Biggers 5aba32ba26 random: properly align get_random_int_hash
commit b1132deac01c2332d234fa821a70022796b79182 upstream.

get_random_long() reads from the get_random_int_hash array using an
unsigned long pointer.  For this code to be guaranteed correct on all
architectures, the array must be aligned to an unsigned long boundary.

Signed-off-by: Eric Biggers <ebiggers3@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Change-Id: I934433b231a96a0ded6dc8c41d93b536f2f0b3a2
2023-03-24 20:53:06 +01:00
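
A minimal sketch of the alignment constraint described above (array size and names follow drivers/char/random.c, but this is an illustration rather than the literal diff):

    #include <linux/percpu.h>
    #include <linux/compiler.h>

    #define HASH_WORDS 4    /* stands in for MD5_DIGEST_WORDS */

    /*
     * Force the per-CPU hash buffer onto an unsigned long boundary so that
     * get_random_long() may safely read it through an unsigned long pointer
     * on every architecture.
     */
    static DEFINE_PER_CPU(__u32 [HASH_WORDS], get_random_int_hash)
            __aligned(sizeof(unsigned long));
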
Theodore Ts'o 095f11b227 random: mix rdrand with entropy sent in from userspace
commit 81e69df38e2911b642ec121dec319fad2a4782f3 upstream.

Fedora has integrated the jitter entropy daemon to work around slow
boot problems, especially on VMs that don't support virtio-rng:

    https://bugzilla.redhat.com/show_bug.cgi?id=1572944

It's understandable why they did this, but the Jitter entropy daemon
works fundamentally on the principle: "the CPU microarchitecture is
**so** complicated and we can't figure it out, so it *must* be
random".  Yes, it uses statistical tests to "prove" it is secure, but
AES_ENCRYPT(NSA_KEY, COUNTER++) will also pass statistical tests with
flying colors.

So if RDRAND is available, mix it into entropy submitted from
userspace.  It can't hurt, and if you believe the NSA has backdoored
RDRAND, then they probably have enough details about the Intel
microarchitecture that they can reverse engineer how the Jitter
entropy daemon affects the microarchitecture, and attack its output
stream.  And if RDRAND is in fact an honest DRNG, it will immeasurably
improve on what the Jitter entropy daemon might produce.

This also provides some protection against someone who is able to read
or set the entropy seed file.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Change-Id: I4d4ac36af58c5f3d1a5066b54afff4b8f478affd
2023-03-24 20:53:06 +01:00
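
A condensed sketch of the mixing step described above (hypothetical helper name; the upstream change lives inside write_pool()): each 32-bit word of seed data copied in from userspace gets a word of architectural RNG output XORed into it before the chunk is mixed into the pool as before.

    #include <linux/random.h>

    static void fold_in_arch_random(__u32 *buf, size_t bytes)
    {
            __u32 t;
            size_t i;

            for (i = 0; i < bytes / sizeof(__u32); i++) {
                    if (!arch_get_random_int(&t))   /* RDRAND absent or failed */
                            break;
                    buf[i] ^= t;    /* fold hardware RNG output into the seed */
            }
    }
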
Jason A. Donenfeld f44d2c1f42 random: always use batched entropy for get_random_u{32,64}
commit 69efea712f5b0489e67d07565aad5c94e09a3e52 upstream.

It turns out that RDRAND is pretty slow. Comparing these two
constructions:

  for (i = 0; i < CHACHA_BLOCK_SIZE; i += sizeof(ret))
    arch_get_random_long(&ret);

and

  long buf[CHACHA_BLOCK_SIZE / sizeof(long)];
  extract_crng((u8 *)buf);

it amortizes out to 352 cycles per long for the top one and 107 cycles
per long for the bottom one, on Coffee Lake Refresh, Intel Core i9-9880H.

And importantly, the top one has the drawback of not benefiting from the
real rng, whereas the bottom one has all the nice benefits of using our
own chacha rng. As get_random_u{32,64} gets used in more places (perhaps
beyond what it was originally intended for when it was introduced as
get_random_{int,long} back in the md5 monstrosity era), it seems like it
might be a good thing to strengthen its posture a tiny bit. Doing this
should only be stronger and not any weaker because that pool is already
initialized with a bunch of rdrand data (when available). This way, we
get the benefits of the hardware rng as well as our own rng.

Another benefit of this is that we no longer hit pitfalls of the recent
stream of AMD bugs in RDRAND. One often used code pattern for various
things is:

  do {
  	val = get_random_u32();
  } while (hash_table_contains_key(val));

That recent AMD bug rendered that pattern useless, whereas we're really
very certain that chacha20 output will give pretty distributed numbers,
no matter what.

So, this simplification seems better both from a security perspective
and from a performance perspective.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Link: https://lore.kernel.org/r/20200221201037.30231-1-Jason@zx2c4.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Change-Id: I868e7472df493f410744dd8ed210dd1c4ca5e55d
2023-03-24 20:53:06 +01:00
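
A conceptual sketch of the batching described above (locking, the generation counter, and the u64 variant are omitted; the structure is simplified): one extract_crng() call refills a whole ChaCha block, and subsequent get_random_u32() calls are served from it.

    struct batched_u32 {
            u32 entropy[CHACHA_BLOCK_SIZE / sizeof(u32)];
            unsigned int position;  /* next unread word; starts at 0 (unfilled) */
    };

    static DEFINE_PER_CPU(struct batched_u32, batched_entropy_u32);

    u32 get_random_u32(void)
    {
            struct batched_u32 *batch = this_cpu_ptr(&batched_entropy_u32);

            if (batch->position == 0 ||
                batch->position >= ARRAY_SIZE(batch->entropy)) {
                    extract_crng((u8 *)batch->entropy);     /* one bulk refill */
                    batch->position = 0;
            }
            return batch->entropy[batch->position++];
    }
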
Theodore Ts'o db136ed6aa random: fix locking dependency with the tasklist_lock
Commit 6133705494 introduced a circular lock dependency because
posix_cpu_timers_exit() is called by release_task(), which is holding
a writer lock on tasklist_lock, and this can cause a deadlock since
kill_fasync() gets called with nonblocking_pool.lock taken.

There's no reason why kill_fasync() needs to be taken while the random
pool is locked, so move it out to fix this locking dependency.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reported-by: Russ Dill <Russ.Dill@gmail.com>
Cc: stable@kernel.org
2023-03-24 20:53:05 +01:00
Jiri Kosina 89e3ba5431 random: make it possible to enable debugging without rebuild
The module parameter that turns on debugging mode (which basically means
printing a few extra lines at runtime) is inside an '#if 0' block. Forcing
everyone who would like to see how entropy is behaving on their system to
rebuild the kernel seems a little too harsh.

If we were concerned about speed, we could potentially turn 'debug' into a
static key, but I don't think it's necessary.

Drop the '#if 0' block to allow using the 'debug' parameter without rebuilding.

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2023-03-24 20:53:05 +01:00
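
An illustrative shape of the resulting knob (simplified; the driver's actual macro and parameter type differ slightly): a module parameter that can be flipped at load time or via /sys/module/.../parameters/debug, instead of an '#if 0' block.

    static bool debug;
    module_param(debug, bool, 0644);

    #define DEBUG_ENT(fmt, ...)                             \
            do {                                            \
                    if (debug)                              \
                            pr_info(fmt, ##__VA_ARGS__);    \
            } while (0)
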
Jarod Wilson a0d5d068db random: prime last_data value per fips requirements
The value stored in last_data must be primed for FIPS 140-2 purposes. Upon
first use, either on system startup or after an RNDCLEARPOOL ioctl, we
need to take an initial random sample, store it internally in last_data,
then pass along the value after that to the requester, so that consistency
checks aren't being run against stale and possibly known data.

CC: Herbert Xu <herbert@gondor.apana.org.au>
CC: "David S. Miller" <davem@davemloft.net>
CC: Matt Mackall <mpm@selenic.com>
CC: linux-crypto@vger.kernel.org
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2023-03-24 20:53:05 +01:00
Jiri Kosina 665e955212 random: fix debug format strings
Fix the following warnings in formatting debug output:

drivers/char/random.c: In function ‘xfer_secondary_pool’:
drivers/char/random.c:827: warning: format ‘%d’ expects type ‘int’, but argument 7 has type ‘size_t’
drivers/char/random.c: In function ‘account’:
drivers/char/random.c:859: warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘size_t’
drivers/char/random.c:881: warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘size_t’
drivers/char/random.c: In function ‘random_read’:
drivers/char/random.c:1141: warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘ssize_t’
drivers/char/random.c:1145: warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘ssize_t’
drivers/char/random.c:1145: warning: format ‘%d’ expects type ‘int’, but argument 6 has type ‘long unsigned int’

by using '%zd' instead of '%d' to properly denote ssize_t/size_t conversion.

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2023-03-24 20:53:04 +01:00
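
For reference, a small sketch of the corrected usage (hypothetical helper): size_t and ssize_t values take the 'z' length modifier rather than plain '%d'.

    static void report_transfer(size_t requested, ssize_t copied)
    {
            /* %zu for size_t, %zd for ssize_t */
            pr_debug("requested %zu bytes, copied %zd bytes\n",
                     requested, copied);
    }
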
Thomas Gleixner 81cbb3b1dc locking: Various static lock initializer fixes
The static lock initializers want to be fed the proper name of the
lock, not some random string. In mainline, random strings merely
obfuscate the debug output, but for RT they prevent the spinlock
substitution. Fix it up.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2023-03-24 20:53:04 +01:00
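
A small example of the pattern being fixed (hypothetical struct for illustration): the initializer macro must be handed the lock's own name so that lockdep output stays readable and the RT spinlock substitution keeps working.

    #include <linux/spinlock.h>

    struct demo_pool {
            spinlock_t lock;
            int entropy_count;
    };

    static struct demo_pool demo_pool = {
            /* wrong:  .lock = __SPIN_LOCK_UNLOCKED(&demo_pool.lock), */
            .lock = __SPIN_LOCK_UNLOCKED(demo_pool.lock),   /* proper lock name */
    };
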
Andrew Lutomirski 47d838042a hwrng: core - Don't use a stack buffer in add_early_randomness()
commit 6d4952d9d9d4dc2bb9c0255d95a09405a1e958f7 upstream.

hw_random carefully avoids using a stack buffer except in
add_early_randomness().  This causes a crash in virtio_rng if
CONFIG_VMAP_STACK=y.

Reported-by: Matt Mullins <mmullins@mmlx.us>
Tested-by: Matt Mullins <mmullins@mmlx.us>
Fixes: d3cc799647 ("hwrng: fetch randomness only after device init")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Change-Id: I4b47fc4e5a5623ffe3538c8872784e8f0ae30fd2
2023-03-24 20:53:04 +01:00
Martin Schwidefsky ffe6845842 hwrng: core - correct error check of kthread_run call
[ Upstream commit 17fb874dee093139923af8ed36061faa92cc8e79 ]

The kthread_run() function can return two different error values,
but the hwrng core only checks for -ENOMEM. If the other error
value, -EINTR, is returned, it is assigned to hwrng_fill and later
used in a kthread_stop() call, which naturally crashes.

Cc: stable@vger.kernel.org
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Change-Id: I4ad273645fdf8cb16f4a04d274e40330ab77bade
2023-03-24 20:53:03 +01:00
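
A sketch of the corrected check (abbreviated; names follow drivers/char/hw_random/core.c): kthread_run() returns an ERR_PTR on any failure, so test with IS_ERR() instead of comparing against a single errno value.

    static struct task_struct *hwrng_fill;
    static int hwrng_fillfn(void *unused);

    static void start_khwrngd(void)
    {
            hwrng_fill = kthread_run(hwrng_fillfn, NULL, "hwrng");
            if (IS_ERR(hwrng_fill)) {
                    pr_err("hwrng_fill thread creation failed\n");
                    hwrng_fill = NULL;  /* never hand an ERR_PTR to kthread_stop() */
            }
    }
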
Laurent Vivier fda3f8e888 hwrng: core - don't wait on add_early_randomness()
commit 78887832e76541f77169a24ac238fccb51059b63 upstream.

add_early_randomness() is called by hwrng_register() when the
hardware is added. If this hardware and its module are present
at boot and no data is available, the boot hangs until data
becomes available and cannot be interrupted.

For instance, in the case of virtio-rng, the host may in some cases be
unable to provide enough entropy for all the guests.

There are two easy ways to reproduce the problem, but both rely on
misconfiguration of the hypervisor or the egd daemon:

- the virtio-rng device is configured to connect to the host's egd daemon,
but the daemon is not connected when the virtio-rng driver asks for data,

- the virtio-rng device is configured to connect to the host's egd daemon,
but the egd daemon doesn't provide data.

The guest kernel will hang at boot until the virtio-rng driver provides
enough data.

To avoid that, call rng_get_data() in non-blocking mode (wait=0)
from add_early_randomness().

Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Fixes: d9e7972619 ("hwrng: add randomness to system from rng...")
Cc: <stable@vger.kernel.org>
Reviewed-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Iaa47b1d5351140ecc8137441979a88c258ef8f38
2023-03-24 20:53:03 +01:00
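
A paraphrased sketch of the non-blocking call (not the literal diff; names approximate the driver's): the last argument of rng_get_data() is the wait flag, so passing 0 lets boot continue when the device has no data yet.

    static void add_early_randomness(struct hwrng *rng)
    {
            int bytes_read;
            size_t size = min_t(size_t, 16, rng_buffer_size());

            mutex_lock(&reading_mutex);
            bytes_read = rng_get_data(rng, rng_buffer, size, 0);    /* wait=0 */
            mutex_unlock(&reading_mutex);
            if (bytes_read > 0)
                    add_device_randomness(rng_buffer, bytes_read);
    }
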
nyyu b27211a486 hwrng: fetch randomness only after device init
Commit d9e7972 "hwrng: add randomness to system from rng sources"
added a call to rng_get_data() from the hwrng_register() function.
However, some rng devices need initialization before data can be read
from them.

This commit makes the call to rng_get_data() depend on no init fn
pointer being registered by the device.  If an init function is
registered, this call is made after device init.

CC: Kees Cook <keescook@chromium.org>
CC: Jason Cooper <jason@lakedaemon.net>
CC: Herbert Xu <herbert@gondor.apana.org.au>
CC: <stable@vger.kernel.org> # For v3.15+
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-24 20:53:03 +01:00
Stephen Boyd 7740dd2d25 hwrng: Pass entropy to add_hwgenerator_randomness() in bits, not bytes
rng_get_data() returns the number of bytes read from the hardware.
The entropy argument to add_hwgenerator_randomness() is passed
directly to credit_entropy_bits() so we should be passing the
number of bits, not bytes here.

Fixes: be4000bc46 "hwrng: create filler thread"
Acked-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2023-03-24 20:53:02 +01:00
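
A simplified sketch of the filler loop after this fix (error handling trimmed; the real code also applies the derating/quality factor introduced by the filler-thread patch below): rng_get_data() reports bytes, while the entropy argument of add_hwgenerator_randomness() is in bits.

    static int hwrng_fillfn(void *unused)
    {
            long rc;

            while (!kthread_should_stop()) {
                    rc = rng_get_data(current_rng, rng_fillbuf,
                                      rng_buffer_size(), 1);
                    if (rc <= 0) {
                            msleep_interruptible(10);
                            continue;
                    }
                    /* rc is a byte count; the entropy credit is in bits */
                    add_hwgenerator_randomness((void *)rng_fillbuf, rc, rc * 8);
            }
            return 0;
    }
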
Torsten Duwe 5afcc6e08a hw_random: fix sparse warning (NULL vs 0 for pointer)
Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2023-03-24 20:53:02 +01:00
Torsten Duwe 27e9ae2cc7 hwrng: create filler thread
This can be viewed as the in-kernel equivalent of hwrngd;
as with FUSE, having the mechanism in user land is a good thing,
but for several reasons (simplicity, secrecy, integrity, speed)
it may be better to have it in kernel space.

This patch creates a thread once a hwrng registers, and uses
the previously established add_hwgenerator_randomness() to feed
its data to the input pool as long as needed. A derating factor
is used to bias the entropy estimation and to disable this
mechanism entirely when set to zero.

Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Acked-by: H. Peter Anvin <hpa@zytor.com>
2023-03-24 20:53:02 +01:00
nyyu 37320bf409 drivers/char: delete non-required instances of include <linux/init.h> 2023-03-24 20:53:01 +01:00
Kees Cook 3f674bdd8d hwrng: add randomness to system from rng sources
When bringing a new RNG source online, it seems like it would make sense
to use some of its bytes to make the system entropy pool more random,
as done with all sorts of other devices that contain per-device or
per-boot differences.

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-24 20:53:01 +01:00
Dan Carpenter a303e3006b hwrng: cleanup in hwrng_register()
My static checker complains that:

	drivers/char/hw_random/core.c:341 hwrng_register()
	warn: we tested 'old_rng' before and it was 'false'

The problem is that sometimes we test "if (!old_rng)" and sometimes we
test "if (must_register_misc)".  The static checker knows they are
equivalent but a human being reading the code could easily be confused.

I have simplified the code by removing the "must_register_misc" variable
and I have removed the redundant check on "if (!old_rng)".

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-24 20:53:01 +01:00
Satoru Takeuchi 1bb09a594b hw_random: free rng_buffer at module exit
The rng-core module has allocated rng_buffer with kmalloc() since commit
f7f154f124, but this buffer is never
freed, so memory can leak at module exit.

Signed-off-by: Satoru Takeuchi <satoru.takeuchi@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2023-03-24 20:53:01 +01:00
Torsten Duwe 65326fb405 random: add_hwgenerator_randomness() for feeding entropy from devices
This patch adds an interface to the random pool for feeding entropy
in-kernel.

Change-Id: Id9031d6b615877ea957ee2d2aba1f8150ec699b9
Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Git-commit: c84dbf61a7
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
[sboyd@codeaurora.org: Bring in ENTROPY_BITS() define,
s/random_write_wakeup_bits/random_write_wakeup_thresh/, and pass
NULL to mix_pool_bytes last parameter]
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2023-03-24 20:53:00 +01:00
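
For reference, the interface this patch adds as described above (count is the buffer length in bytes, entropy the number of bits to credit):

    /* Feed device-generated data into the input pool and credit entropy. */
    void add_hwgenerator_randomness(const char *buffer, size_t count,
                                    size_t entropy);
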
Robin Jarry 00cffbf482 kbuild: use HOSTLDFLAGS for single .c executables
When compiling executables from a single .c file, the linker is also
invoked. Pass the HOSTLDFLAGS like for other linker commands.

Signed-off-by: Robin Jarry <robin.jarry@6wind.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Change-Id: I8332132775e7cc220eb7d1e20911223bca65e1e2
2023-03-24 20:12:56 +01:00
followmsi 3a6cf41324 Merge branch 'lineage-18.1' of https://github.com/LineageOS/android_kernel_google_msm into followmsi-12 2023-03-24 15:14:21 +01:00
Xin Long 103a20d70a ipv6: check sk sk_type and protocol early in ip_mroute_set/getsockopt
[ Upstream commit 99253eb750fda6a644d5188fb26c43bad8d5a745 ]

Commit 5e1859fbcc ("ipv4: ipmr: various fixes and cleanups") fixed
the issue for ipv4 ipmr:

  ip_mroute_setsockopt() & ip_mroute_getsockopt() should not
  access/set raw_sk(sk)->ipmr_table before making sure the socket
  is a raw socket, and protocol is IGMP

The same fix should be done for ipv6 ipmr as well.

This patch fixes a panic caused by ip_mroute_setsockopt() overwriting
the offset that ipmr_table occupies in raw_sk(sk) when it is called on
a socket of a different type.

Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Change-Id: I8d48f4611a2f2d0cb7ad5146036f571f12ecb1fc
CVE-2017-18509
Signed-off-by: Kevin F. Haggerty <haggertk@lineageos.org>
2023-02-18 18:38:56 +01:00
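
A sketch of the guard described above, mirroring the ipv4 fix (surrounding option handling elided): validate the socket type and protocol before any mroute state in the raw socket is touched.

    int ip6_mroute_setsockopt(struct sock *sk, int optname,
                              char __user *optval, unsigned int optlen)
    {
            if (sk->sk_type != SOCK_RAW ||
                inet_sk(sk)->inet_num != IPPROTO_ICMPV6)
                    return -EOPNOTSUPP;

            /* ... only now is it safe to use raw6_sk(sk)->ip6mr_table ... */
            return 0;
    }
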
Eric Dumazet 8dc08ba699 ipv4: ipmr: various fixes and cleanups
1) ip_mroute_setsockopt() & ip_mroute_getsockopt() should not
   access/set raw_sk(sk)->ipmr_table before making sure the socket
   is a raw socket, and protocol is IGMP

2) MRT_INIT should return -EINVAL if optlen != sizeof(int), not
   -ENOPROTOOPT

3) MRT_ASSERT & MRT_PIM should validate optlen

4) " (v) ? 1 : 0 " can be written as " !!v "

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Kevin F. Haggerty <haggertk@lineageos.org>
Change-Id: I606493fe572b29f2c6a709833e799e20f2212882
2023-02-18 18:38:56 +01:00
Lorenzo Colitti afd1d2b38a net: inet: Support UID-based routing in IP protocols.
- Use the UID in routing lookups made by protocol connect() and
  sendmsg() functions.
- Make sure that routing lookups triggered by incoming packets
  (e.g., Path MTU discovery) take the UID of the socket into
  account.
- For packets not associated with a userspace socket, (e.g., ping
  replies) use UID 0 inside the user namespace corresponding to
  the network namespace the socket belongs to. This allows
  all namespaces to apply routing and iptables rules to
  kernel-originated traffic in that namespace by matching UID 0.
  This is better than using the UID of the kernel socket that is
  sending the traffic, because the UID of kernel sockets created
  at namespace creation time (e.g., the per-processor ICMP and
  TCP sockets) is the UID of the user that created the socket,
  which might not be mapped in the namespace.

Bug: 16355602
Change-Id: I910504b508948057912bc188fd1e8aca28294de3
Tested: compiles allnoconfig, allyesconfig, allmodconfig
Tested: https://android-review.googlesource.com/253302
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Kevin F. Haggerty <haggertk@lineageos.org>
2023-02-18 18:38:56 +01:00
Eric Biggers 1396cfea05 net: socket: don't set sk_uid to garbage value in ->setattr()
->setattr() was recently implemented for socket files to sync the socket
inode's uid to the new 'sk_uid' member of struct sock.  It does this by
copying over the ia_uid member of struct iattr.  However, ia_uid is
actually only valid when ATTR_UID is set in ia_valid, indicating that
the uid is being changed, e.g. by chown.  Other metadata operations such
as chmod or utimes leave ia_uid uninitialized.  Therefore, sk_uid could
be set to a "garbage" value from the stack.

Fix this by only copying the uid over when ATTR_UID is set.

[backport of net e1a3a60a2ebe991605acb14cd58e39c0545e174e]

Bug: 16355602
Change-Id: I20e53848e54282b72a388ce12bfa88da5e3e9efe
Fixes: 86741ec25462 ("net: core: Add a UID field to struct sock.")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Tested-by: Lorenzo Colitti <lorenzo@google.com>
Acked-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Kevin F. Haggerty <haggertk@lineageos.org>
2023-02-18 18:37:20 +01:00
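
A mainline-style sketch of the fix (dentry/inode accessors may differ in this older tree): ia_uid is only meaningful when ATTR_UID is set in ia_valid, so copy it into sk_uid only for genuine ownership changes.

    static int sockfs_setattr(struct dentry *dentry, struct iattr *iattr)
    {
            int err = simple_setattr(d_inode(dentry), iattr);

            /* chmod/utimes leave ia_uid uninitialized; only chown sets ATTR_UID */
            if (!err && (iattr->ia_valid & ATTR_UID)) {
                    struct socket *sock = SOCKET_I(d_inode(dentry));

                    sock->sk->sk_uid = iattr->ia_uid;
            }
            return err;
    }
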
Lorenzo Colitti d3cd043531 net: core: Add a UID field to struct sock.
Protocol sockets (struct sock) don't have UIDs, but most of the
time, they map 1:1 to userspace sockets (struct socket) which do.

Various operations such as the iptables xt_owner match need
access to the "UID of a socket", and do so by following the
backpointer to the struct socket. This involves taking
sk_callback_lock and doesn't work when there is no socket
because userspace has already called close().

Simplify this by adding a sk_uid field to struct sock whose value
matches the UID of the corresponding struct socket. The semantics
are as follows:

1. Whenever sk_socket is non-null: sk_uid is the same as the UID
   in sk_socket, i.e., matches the return value of sock_i_uid.
   Specifically, the UID is set when userspace calls socket(),
   fchown(), or accept().
2. When sk_socket is NULL, sk_uid is defined as follows:
   - For a socket that no longer has a sk_socket because
     userspace has called close(): the previous UID.
   - For a cloned socket (e.g., an incoming connection that is
     established but on which userspace has not yet called
     accept): the UID of the socket it was cloned from.
   - For a socket that has never had an sk_socket: UID 0 inside
     the user namespace corresponding to the network namespace
     the socket belongs to.

Kernel sockets created by sock_create_kern are a special case
of #1 and sk_uid is the user that created them. For kernel
sockets created at network namespace creation time, such as the
per-processor ICMP and TCP sockets, this is the user that created
the network namespace.

[Backport of net-next 86741ec25462e4c8cdce6df2f41ead05568c7d5e]

Bug: 16355602
Change-Id: I73e1a57dfeedf672f4c2dfc9ce6867838b55974b
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Kevin F. Haggerty <haggertk@lineageos.org>
2023-02-18 18:37:04 +01:00
Masatake YAMATO 941eda2515 net: Providing protocol type via system.sockprotoname xattr of /proc/PID/fd entries
lsof reports some socket descriptors as "can't identify protocol", like:

    [yamato@localhost]/tmp% sudo lsof | grep dbus | grep iden
    dbus-daem   652          dbus    6u     sock ... 17812 can't identify protocol
    dbus-daem   652          dbus   34u     sock ... 24689 can't identify protocol
    dbus-daem   652          dbus   42u     sock ... 24739 can't identify protocol
    dbus-daem   652          dbus   48u     sock ... 22329 can't identify protocol
    ...

lsof cannot resolve the protocol used in a socket because procfs
doesn't provide the map between inode number on sockfs and protocol
type of the socket.

For improving the situation this patch adds an extended attribute named
'system.sockprotoname' in which the protocol name for
/proc/PID/fd/SOCKET is stored. So lsof can know the protocol for a
given /proc/PID/fd/SOCKET with getxattr system call.

A few weeks ago I submitted a patch for the same purpose. That patch
introduced /proc/net/sockfs, which enumerates the inodes and protocols
of all sockets alive on a system. However, it was rejected because (1)
a global lock was needed, and (2) the patch changed the layout of
struct socket.

This patch doesn't use any global lock; and doesn't change the layout
of any structs.

In this patch, a protocol name is stored in dentry->d_name of sockfs
when a new socket is associated with a file descriptor. Before this
patch, dentry->d_name was not used; it was just filled with an empty
string. lsof may use an extended attribute named
'system.sockprotoname' to retrieve the value of dentry->d_name.

It would be nice if we could see the protocol name with ls -l
/proc/PID/fd. However, "socket:[#INODE]", the name format returned
from sockfs_dname(), is already established. To keep compatibility
between kernel and user land, the extended attribute is used to
expose the value of dentry->d_name instead.

Change-Id: I04143ee6da5c236835a897086fc0de819abb0cdc
Signed-off-by: Masatake YAMATO <yamato@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Kevin F. Haggerty <haggertk@lineageos.org>
2023-02-18 18:37:00 +01:00
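
A userspace example of the lookup this commit enables (the pid and fd are taken from the lsof output above and are only illustrative):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    int main(void)
    {
            char name[64];
            ssize_t len = getxattr("/proc/652/fd/6", "system.sockprotoname",
                                   name, sizeof(name) - 1);

            if (len > 0) {
                    name[len] = '\0';
                    printf("protocol: %s\n", name); /* e.g. "UNIX" or "TCP" */
            }
            return 0;
    }
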
Sabrina Dubroca 82794d1071 BACKPORT: tcp: fix recv with flags MSG_WAITALL | MSG_PEEK
Currently, tcp_recvmsg enters a busy loop in sk_wait_data if called
with flags = MSG_WAITALL | MSG_PEEK.

sk_wait_data waits for sk_receive_queue not empty, but in this case,
the receive queue is not empty, but does not contain any skb that we
can use.

Add a "last skb seen on receive queue" argument to sk_wait_data, so
that it sleeps until the receive queue has new skbs.

Change-Id: If58492ae474effe058541f7e9a0c03dc24155393
Link: https://bugzilla.kernel.org/show_bug.cgi?id=99461
Link: https://sourceware.org/bugzilla/show_bug.cgi?id=18493
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1205258
Reported-by: Enrico Scholz <rh-bugzilla@ensc.de>
Reported-by: Dan Searle <dan@censornet.com>
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Kevin F. Haggerty <haggertk@lineageos.org>
2023-02-18 18:36:52 +01:00
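
A condensed sketch of the new calling convention (hypothetical wrapper name): the caller remembers the last skb it has already looked at, and sk_wait_data() now sleeps until the receive queue grows past that skb rather than merely being non-empty.

    static int wait_for_new_data(struct sock *sk, long *timeo)
    {
            /* a tail that has already been peeked does not count as new data */
            struct sk_buff *last = skb_peek_tail(&sk->sk_receive_queue);

            return sk_wait_data(sk, timeo, last);
    }
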
Joe Perches 721c4f160d BACKPORT: sock.h: Remove extern from function prototypes
There are a mix of function prototypes with and without extern
in the kernel sources.  Standardize on not using extern for
function prototypes.

Function prototypes don't need to be written with extern.
extern is assumed by the compiler.  Its use is as unnecessary as
using auto to declare automatic/local variables in a block.

Change-Id: I65041ae9f8472dc9f84f5f8e09dc3ac859c7d05a
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Adrian DC <radian.dc@gmail.com>
Signed-off-by: Kevin F. Haggerty <haggertk@lineageos.org>
2023-02-18 18:36:36 +01:00
Daniel Borkmann e9ff904465 BACKPORT: net: sock: make sock_tx_timestamp void
Currently, sock_tx_timestamp() always returns 0. The comment that
describes the sock_tx_timestamp() function wrongly says that it
returns an error when an invalid argument is passed (from commit
20d4947353, ``net: socket infrastructure for SO_TIMESTAMPING'').
Make the function void, so that we can also remove all the unneeded
if conditions that check for such a _non-existent_ error case in the
output path.

Change-Id: Ibdfd5071737190371d4abec5ae76046b5aa8de23
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Kevin F. Haggerty <haggertk@lineageos.org>
2023-02-18 18:32:19 +01:00
Eric Dumazet cab65020e8 BACKPORT: net: include/net/sock.h cleanup
bool/const conversions where possible

__inline__ -> inline

space cleanups

Change-Id: I0ee0135e737edd702f753fac182b293ec5cc652a
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Kevin F. Haggerty <haggertk@lineageos.org>
2023-02-18 18:32:14 +01:00
Will Deacon 9587d567d2 BACKPORT: ipc: add COMPAT_SHMLBA support
If the SHMLBA definition for a native task differs from the definition for
a compat task, the do_shmat() function would need to handle both.

This patch introduces COMPAT_SHMLBA, which is used by the compat shmat
syscall when calling the ipc code and allows architectures such as AArch64
(where the native SHMLBA is 64k but the compat (AArch32) definition is
16k) to provide the correct semantics for compat IPC system calls.

Change-Id: I0292f1fedabaa6cbbab843611aa76a8f50f47771
Cc: David S. Miller <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Kevin F. Haggerty <haggertk@lineageos.org>
2023-02-18 18:32:08 +01:00
Roger Hu 7c56badc9a kernel: Revert "tcp: do not lock listener to process SYN packets"
This commit belongs to the patch set (https://lwn.net/Articles/659199/)
that attempts to remove the use of locks on the socket table by
relocating the SYN table to a separate hash table and adding a spin lock
to protect the SYN request queue. Adding only this commit introduces a
race condition for LineageOS kernels for TCP listens, since the TCP SYN
data structures can be corrupted.

A TCP curl bomb on a TCP listen port will corrupt the SYN accept backlog:

for i in $(seq 1 400); do curl -x localhost:443 https://myhost.com -L  --connect-timeout 30 -o /dev/null -sS & done

Run `ss -nltp` and usually the RecVQ column does not drain to 0.

This reverts commit 7d9f104f9cabe1d72a50c4816a48f64fc1da7a64.

This really needs to be reverted across all LineageOS forks:
https://gitlab.com/LineageOS/issues/android/-/issues/3916#note_669493796

Change-Id: Ia7969aeedae411677b307a8e094f9a4cc02b801d
2022-07-05 01:10:45 -04:00
followmsi 643356ada1 defconfig: Enable CONFIG_NLS_UTF8 2022-02-15 15:09:25 +01:00
followmsi bca4d60123 defconfig: Disable CONFIG_RT_GROUP_SCHED
Fix for Bluetooth on Android 12
2021-12-01 21:16:17 +01:00
followmsi 00efd82d52 regen: defconfig
- Re-Enable BFQ
- Keep CFQ as default
- Disable LOCALVERSION_AUTO
2021-12-01 21:12:19 +01:00
Rik van Riel 86a80ec9b8 mm: remove swap token code
The swap token code no longer fits in with the current VM model.  It
does not play well with cgroups or the better NUMA placement code in
development, since we have only one swap token globally.

It also has the potential to mess with scalability of the system, by
increasing the number of non-reclaimable pages on the active and
inactive anon LRU lists.

Last but not least, the swap token code has been broken for a year
without complaints, as reported by Konstantin Khlebnikov.  This suggests
we no longer have much use for it.

The days of sub-1G memory systems with heavy use of swap are over.  If
we ever need thrashing-reducing code in the future, we will have to
implement something that does scale.

Change-Id: I6d287cfc3c3206ca24da2de0c1392e5fdfcfabe8
Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Konstantin Khlebnikov <khlebnikov@openvz.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Hugh Dickins <hughd@google.com>
Acked-by: Bob Picco <bpicco@meloft.net>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: e709ffd616
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: franciscofranco <franciscofranco.1990@gmail.com>
Signed-off-by: flar2 <asegaert@gmail.com>
2021-12-01 15:26:49 +01:00
followmsi dd2a3cd7a0 regen: defconfig
- binder update
2021-11-27 11:34:33 +01:00
followmsi 9ab3bf6892 flo/deb: Update Android binder 2021-11-26 22:02:17 +01:00
Russell King 8564b84a62 mm: list_lru: fix almost infinite loop causing effective livelock
I've seen a fair number of issues with kswapd and other processes
appearing to get stuck in v3.12-rc.  Using sysrq-p many times seems to
indicate that it gets stuck somewhere in list_lru_walk_node(), called
from prune_icache_sb() and super_cache_scan().

I never seem to be able to trigger a calltrace for functions above that
point.

So I decided to add the following to super_cache_scan():

    @@ -81,10 +81,14 @@ static unsigned long super_cache_scan(struct shrinker *shrink,
            inodes = list_lru_count_node(&sb->s_inode_lru, sc->nid);
            dentries = list_lru_count_node(&sb->s_dentry_lru, sc->nid);
            total_objects = dentries + inodes + fs_objects + 1;
    +printk("%s:%u: %s: dentries %lu inodes %lu total %lu\n", current->comm, current->pid, __func__, dentries, inodes, total_objects);

            /* proportion the scan between the caches */
            dentries = mult_frac(sc->nr_to_scan, dentries, total_objects);
            inodes = mult_frac(sc->nr_to_scan, inodes, total_objects);
    +printk("%s:%u: %s: dentries %lu inodes %lu\n", current->comm, current->pid, __func__, dentries, inodes);
    +BUG_ON(dentries == 0);
    +BUG_ON(inodes == 0);

            /*
             * prune the dcache first as the icache is pinned by it, then
    @@ -99,7 +103,7 @@ static unsigned long super_cache_scan(struct shrinker *shrink,
                    freed += sb->s_op->free_cached_objects(sb, fs_objects,
                                                           sc->nid);
            }
    -
    +printk("%s:%u: %s: dentries %lu inodes %lu freed %lu\n", current->comm, current->pid, __func__, dentries, inodes, freed);
            drop_super(sb);
            return freed;
     }

and shortly thereafter, having applied some pressure, I got this:

    update-apt-xapi:1616: super_cache_scan: dentries 25632 inodes 2 total 25635
    update-apt-xapi:1616: super_cache_scan: dentries 1023 inodes 0
    ------------[ cut here ]------------
    Kernel BUG at c0101994 [verbose debug info unavailable]
    Internal error: Oops - BUG: 0 [#3] SMP ARM
    Modules linked in: fuse rfcomm bnep bluetooth hid_cypress
    CPU: 0 PID: 1616 Comm: update-apt-xapi Tainted: G      D      3.12.0-rc7+ #154
    task: daea1200 ti: c3bf8000 task.ti: c3bf8000
    PC is at super_cache_scan+0x1c0/0x278
    LR is at trace_hardirqs_on+0x14/0x18
    Process update-apt-xapi (pid: 1616, stack limit = 0xc3bf8240)
    ...
    Backtrace:
      (super_cache_scan) from [<c00cd69c>] (shrink_slab+0x254/0x4c8)
      (shrink_slab) from [<c00d09a0>] (try_to_free_pages+0x3a0/0x5e0)
      (try_to_free_pages) from [<c00c59cc>] (__alloc_pages_nodemask+0x5)
      (__alloc_pages_nodemask) from [<c00e07c0>] (__pte_alloc+0x2c/0x13)
      (__pte_alloc) from [<c00e3a70>] (handle_mm_fault+0x84c/0x914)
      (handle_mm_fault) from [<c001a4cc>] (do_page_fault+0x1f0/0x3bc)
      (do_page_fault) from [<c001a7b0>] (do_translation_fault+0xac/0xb8)
      (do_translation_fault) from [<c000840c>] (do_DataAbort+0x38/0xa0)
      (do_DataAbort) from [<c00133f8>] (__dabt_usr+0x38/0x40)

Notice that we had a very low number of inodes, which were reduced to
zero by mult_frac().

Now, prune_icache_sb() calls list_lru_walk_node() passing that number of
inodes (0) into that as the number of objects to scan:

    long prune_icache_sb(struct super_block *sb, unsigned long nr_to_scan,
                         int nid)
    {
            LIST_HEAD(freeable);
            long freed;

            freed = list_lru_walk_node(&sb->s_inode_lru, nid, inode_lru_isolate,
                                           &freeable, &nr_to_scan);

which does:

    unsigned long
    list_lru_walk_node(struct list_lru *lru, int nid, list_lru_walk_cb isolate,
                       void *cb_arg, unsigned long *nr_to_walk)
    {

            struct list_lru_node    *nlru = &lru->node[nid];
            struct list_head *item, *n;
            unsigned long isolated = 0;

            spin_lock(&nlru->lock);
    restart:
            list_for_each_safe(item, n, &nlru->list) {
                    enum lru_status ret;

                    /*
                     * decrement nr_to_walk first so that we don't livelock if we
                     * get stuck on large numbesr of LRU_RETRY items
                     */
                    if (--(*nr_to_walk) == 0)
                            break;

So, if *nr_to_walk was zero when this function was entered, that means
we're wanting to operate on (~0UL)+1 objects - which might as well be
infinite.

Clearly this is not correct behaviour.  If we think about the behaviour
of this function when *nr_to_walk is 1, then clearly it's wrong - we
decrement first and then test for zero - which results in us doing
nothing at all.  A post-decrement would give the desired behaviour -
we'd try to walk one object and one object only if *nr_to_walk were one.

It also gives the correct behaviour for zero - we exit at this point.

Fixes: 5cedf721a7 ("list_lru: fix broken LRU_RETRY behaviour")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
[ Modified to make sure we never underflow the count: this function gets
  called in a loop, so the 0 -> ~0ul transition is dangerous  - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Change-Id: I8c53bcc4c70ed978e6cf81a6f38fb06a59cc64ce
2021-11-26 22:02:17 +01:00
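
The corrected loop control, following the bracketed note above (isolate-callback handling elided): test before decrementing, so a caller passing zero walks nothing and the counter can never wrap around to ~0UL.

    list_for_each_safe(item, n, &nlru->list) {
            if (!*nr_to_walk)
                    break;
            --(*nr_to_walk);

            /* ... isolate() call and LRU_* result handling unchanged ... */
    }
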
Johannes Weiner 8c8971e883 mm: keep page cache radix tree nodes in check
Previously, page cache radix tree nodes were freed after reclaim emptied
out their page pointers.  But now reclaim stores shadow entries in their
place, which are only reclaimed when the inodes themselves are
reclaimed.  This is problematic for bigger files that are still in use
after they have a significant amount of their cache reclaimed, without
any of those pages actually refaulting.  The shadow entries will just
sit there and waste memory.  In the worst case, the shadow entries will
accumulate until the machine runs out of memory.

To get this under control, the VM will track radix tree nodes
exclusively containing shadow entries on a per-NUMA node list.  Per-NUMA
rather than global because we expect the radix tree nodes themselves to
be allocated node-locally and we want to reduce cross-node references of
otherwise independent cache workloads.  A simple shrinker will then
reclaim these nodes on memory pressure.

A few things need to be stored in the radix tree node to implement the
shadow node LRU and allow tree deletions coming from the list:

1. There is no index available that would describe the reverse path
   from the node up to the tree root, which is needed to perform a
   deletion.  To solve this, encode in each node its offset inside the
   parent.  This can be stored in the unused upper bits of the same
   member that stores the node's height at no extra space cost.

2. The number of shadow entries needs to be counted in addition to the
   regular entries, to quickly detect when the node is ready to go to
   the shadow node LRU list.  The current entry count is an unsigned
   int but the maximum number of entries is 64, so a shadow counter
   can easily be stored in the unused upper bits.

3. Tree modification needs tree lock and tree root, which are located
   in the address space, so store an address_space backpointer in the
   node.  The parent pointer of the node is in a union with the 2-word
   rcu_head, so the backpointer comes at no extra cost as well.

4. The node needs to be linked to an LRU list, which requires a list
   head inside the node.  This does increase the size of the node, but
   it does not change the number of objects that fit into a slab page.

[akpm@linux-foundation.org: export the right function]
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Bob Liu <bob.liu@oracle.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jan Kara <jack@suse.cz>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Luigi Semenzato <semenzato@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Metin Doslu <metin@citusdata.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Ozgun Erdogan <ozgun@citusdata.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Roman Gushchin <klamm@yandex-team.ru>
Cc: Ryan Mallon <rmallon@gmail.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Change-Id: I6da4642d842f91615747957e7f54a5c2d4593427
2021-11-26 22:02:17 +01:00
Glauber Costa b21bebc0f8 list_lru: dynamically adjust node arrays
We currently use a compile-time constant to size the node array for the
list_lru structure.  Due to this, we don't need to allocate any memory at
initialization time.  But as a consequence, the structures that contain
embedded list_lru lists can become way too big (the superblock for
instance contains two of them).

This patch aims at ameliorating this situation by dynamically allocating
the node arrays with the firmware-provided nr_node_ids.

Change-Id: If8f8d671d505709d22918b023ed1935b12c06c89
Signed-off-by: Glauber Costa <glommer@openvz.org>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Carlos Maiolino <cmaiolino@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Rientjes <rientjes@google.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: J. Bruce Fields <bfields@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Kent Overstreet <koverstreet@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2021-11-26 22:02:16 +01:00
Glauber Costa 9fda83a755 list_lru: remove special case function list_lru_dispose_all.
The list_lru implementation has one function, list_lru_dispose_all, with
only one user (the dentry code).  At first, such a function appears to make
sense because we are really not interested in the result of isolating each
dentry separately - all of them are going away anyway.  However, its
implementation is buggy in the following way:

When we call list_lru_dispose_all in fs/dcache.c, we scan all dentries
marking them with DCACHE_SHRINK_LIST.  However, this is done without the
nlru->lock taken.  The immediate result of that is that someone else may
add or remove the dentry from the LRU at the same time.  When list_lru_del
happens in that scenario we will see an element that is not yet marked
with DCACHE_SHRINK_LIST (even though it will be in the future) and
obviously remove it from an lru where the element no longer is.  Since
list_lru_dispose_all will in effect count down nlru's nr_items and
list_lru_del will do the same, this will lead to an imbalance.

The solution for this would not be so simple: we can obviously just keep
the lru_lock taken, but then we have no guarantees that we will be able to
acquire the dentry lock (dentry->d_lock).  To properly solve this, we need
a communication mechanism between the lru and dentry code, so they can
coordinate this with each other.

Such mechanism already exists in the form of the list_lru_walk_cb
callback.  So it is possible to construct a dcache-side prune function
that does the right thing only by calling list_lru_walk in a loop until no
more dentries are available.

With only one user, plus the fact that a sane solution for the problem
would involve bouncing between dcache and list_lru anyway, I see little
justification to keep the special case list_lru_dispose_all in tree.

Change-Id: I7cbc4646a323aae9605dac32e0a1591340493245
Signed-off-by: Glauber Costa <glommer@openvz.org>
Cc: Michal Hocko <mhocko@suse.cz>
Acked-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2021-11-26 22:02:16 +01:00
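
A sketch of the dcache-side loop described above (helper and callback names approximate what mainline ended up with in shrink_dcache_sb()): walk the LRU with an ordinary isolate callback repeatedly until it is drained, instead of keeping the special-cased list_lru_dispose_all().

    static void prune_all_dentries(struct super_block *sb)
    {
            LIST_HEAD(dispose);
            long freed;

            do {
                    freed = list_lru_walk(&sb->s_dentry_lru,
                                          dentry_lru_isolate_shrink,
                                          &dispose, 1024);
                    shrink_dentry_list(&dispose);
            } while (freed > 0);
    }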