Commit graph

314553 commits

Sasha Levin
61223bca80 decode_stacktrace: make stack dump output useful again
Right now when people try to report issues in the kernel they send stack
dumps to each other, which look something like this:

  [    6.906437]  [<ffffffff811f0e90>] ? backtrace_test_irq_callback+0x20/0x20
  [    6.907121]  [<ffffffff84388ce8>] dump_stack+0x52/0x7f
  [    6.907640]  [<ffffffff811f0ec8>] backtrace_regression_test+0x38/0x110
  [    6.908281]  [<ffffffff813596a0>] ? proc_create_data+0xa0/0xd0
  [    6.908870]  [<ffffffff870a8040>] ? proc_modules_init+0x22/0x22
  [    6.909480]  [<ffffffff810020c2>] do_one_initcall+0xc2/0x1e0
  [...]

However, most of the text you get is pure garbage.

The only useful thing above is the function name.  Due to the number of
different kernel versions and configurations in use, the kernel
address and the offset into the function are not really helpful in
determining where the problem actually occurred.

Too often, the result of someone looking at a stack dump is asking the
person who sent it for one or more 'addr2line' translations, which slows
down the entire process of debugging the issue (and is really annoying).

The decode_stacktrace script is an attempt to make the output more
useful and easier to work with by translating all kernel addresses in the
stack dump into line numbers, which means that the stack dump would look
like this:

  [  635.148361]  dump_stack (lib/dump_stack.c:52)
  [  635.149127]  warn_slowpath_common (kernel/panic.c:418)
  [  635.150214]  warn_slowpath_null (kernel/panic.c:453)
  [  635.151031]  __alloc_pages_slowpath+0x6a/0x7d0
  [  635.152171]  ? zone_watermark_ok (mm/page_alloc.c:1728)
  [  635.152988]  ? get_page_from_freelist (mm/page_alloc.c:1939)
  [  635.154766]  __alloc_pages_nodemask (mm/page_alloc.c:2766)

It's pretty obvious why this is better than the stack dump we started with.

Usage is pretty simple:

        ./decode_stacktrace.sh [vmlinux] [base path]

Where vmlinux is the vmlinux to extract line numbers from and base path
is the path that points to the root of the build tree, for example:

        ./decode_stacktrace.sh vmlinux /home/sasha/linux/ < input.log > output.log

The stack trace should be piped through it (I, for example, just pipe
the output of the serial console of my KVM test box through it).

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Change-Id: Ibd1b079d54d49b081a72f44136c3b6c478523fbb
2020-11-30 19:35:25 +03:00
Eric Dumazet
aedce03d5d tcp: gso: fix truesize tracking
[ Upstream commit 0d08c42cf9 ]

commit 6ff50cd555 ("tcp: gso: do not generate out of order packets")
had a heuristic that can trigger a warning in skb_try_coalesce(),
because skb->truesize of the gso segments was set exactly to mss.

This breaks the requirement that

skb->truesize >= skb->len + truesizeof(struct sk_buff);

It can trivially be reproduced by:

ifconfig lo mtu 1500
ethtool -K lo tso off
netperf

As the skbs are looped into the TCP networking stack, skb_try_coalesce()
warns us of these skbs under-estimating their truesize.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Change-Id: Iced7cd04a2860fcc14815c4f5564e01840ac41e3
2020-11-30 19:35:22 +03:00
Eric Dumazet
33e834ef26 tcp: tsq: restore minimal amount of queueing
[ Upstream commit 98e09386c0 ]

After commit c9eeec26e3 ("tcp: TSQ can use a dynamic limit"), several
users reported throughput regressions, notably on mvneta and wifi
adapters.

802.11 AMPDU requires a fair amount of queueing to be effective.

This patch partially reverts the change done in tcp_write_xmit()
so that the minimal amount is sysctl_tcp_limit_output_bytes.

It also removes the use of this sysctl while building skbs stored
in the write queue, as TSO autosizing does the right thing anyway.

Users with well-behaving NICs and a correct qdisc (like sch_fq)
can then lower the default sysctl_tcp_limit_output_bytes value from
128KB to 8KB.

This new usage of sysctl_tcp_limit_output_bytes permits driver
authors to check how their drivers perform when/if the value is set
to a minimum of 4KB.

Normally, line rate for a single TCP flow should be possible,
but some drivers rely on timers to perform TX completion and
too long TX completion delays prevent reaching full throughput.

Fixes: c9eeec26e3 ("tcp: TSQ can use a dynamic limit")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Sujith Manoharan <sujith@msujith.org>
Reported-by: Arnaud Ebalard <arno@natisbad.org>
Tested-by: Sujith Manoharan <sujith@msujith.org>
Cc: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Change-Id: I1fab5c795148dff3da34b42c7808476f51d99f89
2020-11-30 19:35:19 +03:00
Eric Dumazet
31089dfedf tcp: TSQ can use a dynamic limit
[ Upstream commit c9eeec26e3 ]

When TCP Small Queues was added, we used a sysctl to limit the amount of
packets queued on Qdisc/device queues for a given TCP flow.

The problem is that this limit is either too big for low rates, or too
small for high rates.

Now that the TCP stack has rate estimation in sk->sk_pacing_rate and TSO
auto sizing, it can better control the number of packets in Qdisc/device
queues.

New limit is two packets or at least 1 to 2 ms worth of packets.
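
A rough userspace model of the new limit computation (a sketch assuming
the semantics above: sk_pacing_rate in bytes per second, sample truesize
values hypothetical):

#include <stdio.h>
#include <stdint.h>

/* (rate >> 10) is roughly one millisecond worth of bytes; the floor
 * of one skb truesize keeps at least two packets (one queued, one in
 * flight) for very slow flows.
 */
static uint64_t tsq_limit(uint64_t sk_pacing_rate, uint64_t skb_truesize)
{
	uint64_t limit = sk_pacing_rate >> 10;

	return limit > skb_truesize ? limit : skb_truesize;
}

int main(void)
{
	/* ~10 Gbit/s pacing rate, 64KB TSO skb */
	printf("fast flow limit: %llu bytes\n",
	       (unsigned long long)tsq_limit(1250000000ULL, 66000));
	/* ~1 Mbit/s pacing rate, single 1500-byte frame */
	printf("slow flow limit: %llu bytes\n",
	       (unsigned long long)tsq_limit(125000ULL, 2304));
	return 0;
}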

Low-rate flows benefit from this patch by having an even smaller
number of packets in queues, allowing for faster recovery and
better RTT estimations.

High-rate flows benefit from this patch by allowing more than 2 packets
in flight, as we had reports this was a limiting factor in reaching line
rate. [ In particular if TX completion is delayed because of coalescing
parameters ]

Example for a single flow on a 10Gbit link controlled by FQ/pacing:

14 packets in flight instead of 2

$ tc -s -d qd
qdisc fq 8001: dev eth0 root refcnt 32 limit 10000p flow_limit 100p
buckets 1024 quantum 3028 initial_quantum 15140
 Sent 1168459366606 bytes 771822841 pkt (dropped 0, overlimits 0
requeues 6822476)
 rate 9346Mbit 771713pps backlog 953820b 14p requeues 6822476
  2047 flow, 2046 inactive, 1 throttled, delay 15673 ns
  2372 gc, 0 highprio, 0 retrans, 9739249 throttled, 0 flows_plimit

Note that sk_pacing_rate is currently set to twice the actual rate, but
this might be refined in the future when a flow is in congestion
avoidance.

Additional change: skb->destructor should be set to tcp_wfree().

A future patch (for Linux 3.13+) might remove tcp_limit_output_bytes.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Change-Id: I481dfb7b15ae4fc699e03e2ef846b0631c5ebb3f
2020-11-30 19:35:16 +03:00
Eric Dumazet
0cf083915c tcp: gso: do not generate out of order packets
The GSO TCP handler has the following issues:

1) ooo_okay from the original GSO packet is duplicated to all segments
2) all segments but the last are orphaned, so the transmit path cannot
get the transmit queue number from the socket. This happens if GSO
segmentation is done before a stacked device, for example.

The result is that we can send packets from a given TCP flow to different
TX queues (if using multiqueue NICs). This generates OOO problems and
spurious SACKs & retransmits.

Fix this by keeping socket pointer set for all segments.

This means that every segment must also have a destructor, and the
original gso skb truesize must be split on all segments, to keep
precise sk->sk_wmem_alloc accounting.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Maciej Żenczykowski <maze@google.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: I9d9d4216ec24ad4091535a9b9e811760cd949825
2020-11-30 19:35:13 +03:00
Eric Dumazet
1315c0d65e tcp: tcp_tso_segment() small optimization
We can move the th->check computation out of the loop, as the compiler
doesn't know that each skb initially shares the same TCP headers after
skb_segment().

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: I7f81480604c4bd58f5cdd0be289233b9e6142a7f
2020-11-30 19:35:10 +03:00
Eric Dumazet
2687196048 tcp: GSO should be TSQ friendly
I noticed that TSQ (TCP Small Queues) was less effective when TSO is
turned off and GSO is on. If BQL is not enabled, TSQ then has no
effect.

It turns out the GSO engine frees the original gso_skb at the time the
fragments are generated and queued to the NIC.

We should instead call the tcp_wfree() destructor for the last fragment,
to keep the flow control as intended in TSQ. This effectively limits
the number of queued packets on qdisc + NIC layers.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Nandita Dukkipati <nanditad@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: Idbe0206447bf979e012b6c4d5fe9e4fec10d5eb8
2020-11-30 19:35:07 +03:00
Eric Dumazet
e7e3467ab1 tcp: TCP Small Queues
This introduces TSQ (TCP Small Queues).

TSQ's goal is to reduce the number of TCP packets in xmit queues (qdisc &
device queues), to reduce RTT and cwnd bias, part of the bufferbloat
problem.

sk->sk_wmem_alloc is not allowed to grow above a given limit,
allowing no more than ~128KB [1] per TCP socket in qdisc/dev layers at a
given time.

TSO packets are sized/capped to half the limit, so that we have two
TSO packets in flight, allowing better bandwidth use.

As a side effect, setting the limit to 40000 automatically reduces the
standard gso max limit (65536) to 40000/2: it can help to reduce
latencies of high-prio packets by having smaller TSO packets.

This means we divert sock_wfree() to a tcp_wfree() handler, to
queue/send following frames when skb_orphan() [2] is called for the
already queued skbs.

Results on my dev machines (tg3/ixgbe nics) are really impressive,
using standard pfifo_fast, and with or without TSO/GSO.

Without reduction of nominal bandwidth, we have a reduction of buffering
per bulk sender:
< 1ms on Gbit (instead of 50ms with TSO)
< 8ms on 100Mbit (instead of 132 ms)

I no longer have 4 MBytes backlogged in the qdisc by a single netperf
session, and socket autotuning on both sides no longer uses 4 MBytes.

As the skb destructor cannot restart xmit itself (as the qdisc lock might be
taken at this point), we delegate the work to a tasklet. We use one
tasklet per cpu for performance reasons.

If the tasklet finds a socket owned by the user, it sets the TSQ_OWNED flag.
This flag is tested in a new protocol method called from release_sock(),
to eventually send new segments.

[1] New /proc/sys/net/ipv4/tcp_limit_output_bytes tunable
[2] skb_orphan() is usually called at TX completion time,
  but some drivers call it in their start_xmit() handler.
  These drivers should at least use BQL, or else a single TCP
  session can still fill the whole NIC TX ring, since TSQ will
  have no effect.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Dave Taht <dave.taht@bufferbloat.net>
Cc: Tom Herbert <therbert@google.com>
Cc: Matt Mathis <mattmathis@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: I37d5e4d7c9ced1846385b6a04ae3ad134763a949
2020-11-30 19:35:00 +03:00
Eric Dumazet
05fe7848aa tcp: fix a potential deadlock in tcp_get_info()
Taking socket spinlock in tcp_get_info() can deadlock, as
inet_diag_dump_icsk() holds the &hashinfo->ehash_locks[i],
while packet processing can use the reverse locking order.

We could avoid this locking for TCP_LISTEN states, but lockdep would
certainly get confused as all TCP sockets share the same lockdep classes.

[  523.722504] ======================================================
[  523.728706] [ INFO: possible circular locking dependency detected ]
[  523.734990] 4.1.0-dbg-DEV #1676 Not tainted
[  523.739202] -------------------------------------------------------
[  523.745474] ss/18032 is trying to acquire lock:
[  523.750002]  (slock-AF_INET){+.-...}, at: [<ffffffff81669d44>] tcp_get_info+0x2c4/0x360
[  523.758129]
[  523.758129] but task is already holding lock:
[  523.763968]  (&(&hashinfo->ehash_locks[i])->rlock){+.-...}, at: [<ffffffff816bcb75>] inet_diag_dump_icsk+0x1d5/0x6c0
[  523.774661]
[  523.774661] which lock already depends on the new lock.
[  523.774661]
[  523.782850]
[  523.782850] the existing dependency chain (in reverse order) is:
[  523.790326]
-> #1 (&(&hashinfo->ehash_locks[i])->rlock){+.-...}:
[  523.796599]        [<ffffffff811126bb>] lock_acquire+0xbb/0x270
[  523.802565]        [<ffffffff816f5868>] _raw_spin_lock+0x38/0x50
[  523.808628]        [<ffffffff81665af8>] __inet_hash_nolisten+0x78/0x110
[  523.815273]        [<ffffffff816819db>] tcp_v4_syn_recv_sock+0x24b/0x350
[  523.822067]        [<ffffffff81684d41>] tcp_check_req+0x3c1/0x500
[  523.828199]        [<ffffffff81682d09>] tcp_v4_do_rcv+0x239/0x3d0
[  523.834331]        [<ffffffff816842fe>] tcp_v4_rcv+0xa8e/0xc10
[  523.840202]        [<ffffffff81658fa3>] ip_local_deliver_finish+0x133/0x3e0
[  523.847214]        [<ffffffff81659a9a>] ip_local_deliver+0xaa/0xc0
[  523.853440]        [<ffffffff816593b8>] ip_rcv_finish+0x168/0x5c0
[  523.859624]        [<ffffffff81659db7>] ip_rcv+0x307/0x420

Let's use the u64_sync infrastructure instead. As a bonus, 64bit
arches get optimized, as these are a nop for them.

Fixes: 0df48c26d841 ("tcp: add tcpi_bytes_acked to tcp_info")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: I50c833b77f41111a6fa3a57834c6311f95573754
2020-11-30 19:31:51 +03:00
Marcelo Ricardo Leitner
461a9cb3c4 tcp: add tcpi_segs_in and tcpi_segs_out to tcp_info
This patch tracks the total number of inbound and outbound segments on a
TCP socket. One may use this number to get an idea of connection
quality when compared against the retransmissions.

RFC4898 named these: tcpEStatsPerfSegsIn and tcpEStatsPerfSegsOut

These are 32bit fields and can be fetched both from TCP_INFO
getsockopt(), if one has a handle on a TCP socket, or from the inet_diag
netlink facility (an iproute2/ss patch will follow).

Note that tp->segs_out was placed near tp->snd_nxt for good data
locality and minimal performance impact, while tp->segs_in was placed
near tp->bytes_received for the same reason.

Join work with Eric Dumazet.

Note that received SYNs are accounted on the listener, but sent SYNACKs
are not accounted.

Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: Ic3e5daea102e13fb24c5fb1ce6913d5ece13c521
2020-11-30 19:31:48 +03:00
Eric Dumazet
2afbb021d9 tcp: add tcpi_bytes_received to tcp_info
This patch tracks the total number of payload bytes received on a TCP socket.
This is the sum of all changes done to tp->rcv_nxt.

RFC4898 named this: tcpEStatsAppHCThruOctetsReceived

This is a 64bit field, and can be fetched both from TCP_INFO
getsockopt(), if one has a handle on a TCP socket, or from the inet_diag
netlink facility (an iproute2/ss patch will follow).

Note that tp->bytes_received was placed near tp->rcv_nxt for
best data locality and minimal performance impact.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Matt Mathis <mattmathis@google.com>
Cc: Eric Salo <salo@google.com>
Cc: Martin Lau <kafai@fb.com>
Cc: Chris Rapier <rapier@psc.edu>
Acked-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: Ie1fec6455c8c6cf9ae3e2df09bf55abb56b57286
2020-11-30 19:31:44 +03:00
Eric Dumazet
45970b0b74 tcp: add tcpi_bytes_acked to tcp_info
This patch tracks the total number of bytes acked for a TCP socket.
This is the sum of all changes done to tp->snd_una, and allows
for precise tracking of delivered data.

RFC4898 named this: tcpEStatsAppHCThruOctetsAcked

This is a 64bit field, and can be fetched both from TCP_INFO
getsockopt(), if one has a handle on a TCP socket, or from the inet_diag
netlink facility (an iproute2/ss patch will follow).

Note that tp->bytes_acked was placed near tp->snd_una for
best data locality and minimal performance impact.
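
As a usage sketch, the counter can be read with getsockopt(TCP_INFO),
assuming kernel uapi headers that already carry the field (fd stands
for any connected TCP socket):

#include <linux/tcp.h>      /* struct tcp_info with the new field */
#include <netinet/in.h>     /* IPPROTO_TCP */
#include <stdio.h>
#include <sys/socket.h>

/* Print the new 64bit counter for an already-connected TCP socket. */
static void print_bytes_acked(int fd)
{
	struct tcp_info info;
	socklen_t len = sizeof(info);

	if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) == 0)
		printf("tcpi_bytes_acked: %llu\n",
		       (unsigned long long)info.tcpi_bytes_acked);
	else
		perror("getsockopt");
}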

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Cc: Matt Mathis <mattmathis@google.com>
Cc: Eric Salo <salo@google.com>
Cc: Martin Lau <kafai@fb.com>
Cc: Chris Rapier <rapier@psc.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: I975b11b049d6bde1b6802dc4331a9da93c371c9b
2020-11-30 19:31:41 +03:00
Eric Dumazet
3e12c5a9d4 tcp: add pacing_rate information into tcp_info
Add two new fields to struct tcp_info, to report sk_pacing_rate
and sk_max_pacing_rate to monitoring applications, such as ss from iproute2.

User-exported fields are 64bit, even if the kernel is currently using 32bit
fields.

lpaa5:~# ss -i
..
	 skmem:(r0,rb357120,t0,tb2097152,f1584,w1980880,o0,bl0) ts sack cubic
wscale:6,6 rto:400 rtt:0.875/0.75 mss:1448 cwnd:1 ssthresh:12 send
13.2Mbps pacing_rate 3336.2Mbps unacked:15 retrans:1/5448 lost:15
rcv_space:29200

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: I31149707ef565448d5f2f1b43f5759f308f824f5
2020-11-30 19:31:38 +03:00
Eric Dumazet
27cf7ce29b net: add missing sk_max_pacing_rate doc
Warning(include/net/sock.h:411): No description found for parameter
'sk_max_pacing_rate'

Let's please "make htmldocs" and the kbuild bot.

Reported-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: I0a7952b50cf3df1dc80011ebef685d4dfd0f6dc6
2020-11-30 19:31:35 +03:00
Eric Dumazet
adc3caedd0 net: introduce SO_MAX_PACING_RATE
As mentioned in commit afe4fd0624 ("pkt_sched: fq: Fair Queue packet
scheduler"), this patch adds a new socket option.

SO_MAX_PACING_RATE offers the application the ability to cap the
rate computed by the transport layer. The value is in bytes per second.

u32 val = 1000000;
setsockopt(sockfd, SOL_SOCKET, SO_MAX_PACING_RATE, &val, sizeof(val));

To be effectively paced, a flow must use the FQ packet scheduler.

Note that a packet scheduler takes into account the headers for its
computations. The effective payload rate depends on MSS and retransmits
if any.

I chose to make this pacing rate a SOL_SOCKET option instead of a
TCP one because this can be used by other protocols.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Steinar H. Gunderson <sesse@google.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: Iea51b6104a5420d5c9ed1ea7382cbd53bdb8f4be
2020-11-30 19:31:32 +03:00
Eric Dumazet
e5a9f4f1cc tcp: TSO packets automatic sizing
After hearing many people over the past years complaining about TSO being
bursty or even buggy, we are proud to present automatic sizing of TSO
packets.

One part of the problem is that tcp_tso_should_defer() uses a heuristic
relying on upcoming ACKs instead of a timer, but more generally, having
big TSO packets makes little sense for low rates, as it tends to create
micro bursts on the network, and the general consensus is to reduce the
buffering amount.

This patch introduces a per-socket sk_pacing_rate that approximates
the current sending rate and allows us to size the TSO packets so
that we try to send one packet every ms.

This field could be set by other transports.

The patch has no impact for high-speed flows, where having large TSO packets
makes sense to reach line rate.

For other flows, this helps better packet scheduling and ACK clocking.

This patch increases performance of TCP flows in lossy environments.

A new sysctl (tcp_min_tso_segs) is added, to specify the
minimal size of a TSO packet (default being 2).

A follow-up patch will provide a new packet scheduler (FQ), using
sk_pacing_rate as an input to perform optional per flow pacing.

This explains why we chose to set sk_pacing_rate to twice the current
rate, allowing 'slow start' ramp up.

sk_pacing_rate = 2 * cwnd * mss / srtt
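
A worked example of the formula (the numbers are hypothetical):

#include <stdio.h>

int main(void)
{
	/* cwnd = 10 segments, mss = 1448 bytes, srtt = 100 ms;
	 * the factor of 2 allows the 'slow start' ramp up.
	 */
	unsigned int cwnd = 10, mss = 1448;
	double srtt = 0.100;                   /* seconds */
	double rate = 2.0 * cwnd * mss / srtt; /* bytes per second */

	printf("sk_pacing_rate ~= %.0f bytes/sec (%.2f Mbit/s)\n",
	       rate, rate * 8.0 / 1e6);
	return 0;
}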

v2: Neal Cardwell reported a suspect deferring of the last two segments on
an initial write of 10 MSS; I had to change tcp_tso_should_defer() to take
tp->xmit_size_goal_segs into account.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Van Jacobson <vanj@google.com>
Cc: Tom Herbert <therbert@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: If293e2c6a0f52fe0e2b4ee149c9242f8c5e46ef5
2020-11-30 19:30:45 +03:00
Eric W. Biederman
ec0a45cd4a net: Replace u64_stats_fetch_begin_bh to u64_stats_fetch_begin_irq
Replace the bh safe variant with the hard irq safe variant.

We need a hard irq safe variant to deal with netpoll transmitting
packets from hard irq context, and we need it in most if not all of
the places using the bh safe variant.

Except on 32bit uni-processor systems the code is exactly the same, so don't
bother with a bh variant; just have a hard irq safe variant that everyone
can use.

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
[javelinanddart]: Merge conflicts were resolved for drivers that exist in 3.4,
and additionally a treewide find and replace was run for these functions.
Signed-off-by: Paul Keith <javelinanddart@gmail.com>
Change-Id: Ib74db793de5e546414a0599f23095f82f0e20c86
2020-11-30 19:26:49 +03:00
Li RongQing
a559d1f125 ipv6: fix the use of pcpu_tstats in ip6_tunnel
When reading/writing the 64bit data, the correct lock should be held.

Fixes: 87b6d218f3 ("tunnel: implement 64 bits statistics")

Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Paul Keith <javelinanddart@gmail.com>
Change-Id: Ib90ecb495739eafb1563d5cfd3d2dc275e4944ea
2020-11-30 19:26:46 +03:00
Li RongQing
2e641339f3 ipv6: fix the use of pcpu_tstats in sit
When reading/writing the 64bit data, the correct lock should be held.

Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Paul Keith <javelinanddart@gmail.com>
Change-Id: Ie9c73149b86eb028e1646d1001bdb4310a72fe14
2020-11-30 19:26:43 +03:00
John Stultz
ad74346428 net: Explicitly initialize u64_stats_sync structures for lockdep
In order to enable lockdep on seqcount/seqlock structures, we
must explicitly initialize any locks.

The u64_stats_sync structure uses a seqcount, and thus we need
to introduce a u64_stats_init() function and use it to initialize
the structure.

This unfortunately adds a lot of fairly trivial initialization code
to a number of drivers. But the benefit of ensuring correctness makes
this worthwhile.

Because these changes are required for lockdep to be enabled, and the
changes are quite trivial, I've not yet split this patch out into 30-some
separate patches, as I figured it would be better to get the various
maintainers thoughts on how to best merge this change along with
the seqcount lockdep enablement.

Feedback would be appreciated!
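
For reference, a minimal sketch of the resulting pattern (the struct and
function names are illustrative, not taken from any one driver):

#include <linux/u64_stats_sync.h>

struct demo_stats {
	u64 rx_bytes;
	struct u64_stats_sync syncp;
};

static void demo_stats_setup(struct demo_stats *stats)
{
	u64_stats_init(&stats->syncp);	/* the new explicit init */
}

static void demo_stats_add(struct demo_stats *stats, u64 len)
{
	u64_stats_update_begin(&stats->syncp);
	stats->rx_bytes += len;
	u64_stats_update_end(&stats->syncp);
}

static u64 demo_stats_read(struct demo_stats *stats)
{
	unsigned int start;
	u64 val;

	do {
		start = u64_stats_fetch_begin(&stats->syncp);
		val = stats->rx_bytes;
	} while (u64_stats_fetch_retry(&stats->syncp, start));

	return val;
}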

Signed-off-by: John Stultz <john.stultz@linaro.org>
Acked-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
Cc: James Morris <jmorris@namei.org>
Cc: Jesse Gross <jesse@nicira.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Mirko Lindner <mlindner@marvell.com>
Cc: Patrick McHardy <kaber@trash.net>
Cc: Roger Luethi <rl@hellgate.ch>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Simon Horman <horms@verge.net.au>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Wensong Zhang <wensong@linux-vs.org>
Cc: netdev@vger.kernel.org
Link: http://lkml.kernel.org/r/1381186321-4906-2-git-send-email-john.stultz@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Change-Id: Ieda06b95f0ec302dbef8576ef4c8fb4bd28ffb2f
2020-11-30 19:26:40 +03:00
Stephen Hemminger
39b326bec0 tcp: remove Appropriate Byte Count support
TCP Appropriate Byte Count was added by me, but later disabled.
There is no point in maintaining it since it is a potential source
of bugs and Linux already implements other better window protection
heuristics.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Change-Id: Ifc93fa043256c275580a9f93611ad1d013889bc4
2020-11-30 19:26:36 +03:00
stephen hemminger
a9d180c942 tunnel: implement 64 bits statistics
Convert the per-cpu statistics kept for GRE, IPIP, and SIT tunnels
to use 64 bit statistics.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Paul Keith <javelinanddart@gmail.com>
Change-Id: I317ad78a7f3a31326a1bc6f3069712f348b38a6c
2020-11-30 19:26:33 +03:00
Theodore Ts'o
c51e428eea fs: push sync_filesystem() down to the file system's remount_fs()
Previously, the no-op "mount -o remount /dev/xxx" operation when the
file system is already mounted read-write caused an implied,
unconditional syncfs().  This seems pretty stupid, and it's certainly not
documented or guaranteed to do this, nor is it particularly useful,
except in the case where the file system was mounted rw and is getting
remounted read-only.

However, it's possible that there might be some file systems that are
actually depending on this behavior.  In most file systems, it's
probably fine to only call sync_filesystem() when transitioning from
read-write to read-only, and there are some file systems where this is
not needed at all (for example, for a pseudo-filesystem or something
like romfs).

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: linux-fsdevel@vger.kernel.org
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Artem Bityutskiy <dedekind1@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Evgeniy Dushistov <dushistov@mail.ru>
Cc: Jan Kara <jack@suse.cz>
Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Anders Larsen <al@alarsen.net>
Cc: Phillip Lougher <phillip@squashfs.org.uk>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
Cc: Petr Vandrovec <petr@vandrovec.name>
Cc: xfs@oss.sgi.com
Cc: linux-btrfs@vger.kernel.org
Cc: linux-cifs@vger.kernel.org
Cc: samba-technical@lists.samba.org
Cc: codalist@coda.cs.cmu.edu
Cc: linux-ext4@vger.kernel.org
Cc: linux-f2fs-devel@lists.sourceforge.net
Cc: fuse-devel@lists.sourceforge.net
Cc: cluster-devel@redhat.com
Cc: linux-mtd@lists.infradead.org
Cc: jfs-discussion@lists.sourceforge.net
Cc: linux-nfs@vger.kernel.org
Cc: linux-nilfs@vger.kernel.org
Cc: linux-ntfs-dev@lists.sourceforge.net
Cc: ocfs2-devel@oss.oracle.com
Cc: reiserfs-devel@vger.kernel.org
Change-Id: I03b43c745f82fce2cd3e0856c42eda70d94a45f8
2020-11-29 16:11:45 +03:00
Konstantin Khlebnikov
7c58c4e397 mm: kill vma flag VM_CAN_NONLINEAR
Move the actual pte filling for non-linear file mappings into the new special
vma operation: ->remap_pages().

Filesystems must implement this method to get non-linear mapping support;
if a filesystem uses filemap_fault(), then generic_file_remap_pages() can be
used.

Now device drivers can implement this method and obtain nonlinear vma support.

Change-Id: Ifbbbdfcdf871a8173856a13087400885357f95ee
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Carsten Otte <cotte@de.ibm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>	#arch/tile
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Eric Paris <eparis@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Morris <james.l.morris@oracle.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Kentaro Takeda <takedakn@nttdata.co.jp>
Cc: Matt Helsley <matthltc@us.ibm.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Venkatesh Pallipadi <venki@google.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-11-29 16:11:40 +03:00
Ritesh Harjani
5063bb6474 ANDROID: fuse: Add null terminator to path in canonical path to avoid issue
The page allocated in fuse_dentry_canonical_path, to be handled in
fuse_dev_do_write, is allocated using __get_free_pages(GFP_KERNEL).
This may not return a zero-filled page, so the page may not have a
null terminator at all. If this happens and the userspace fuse daemon
screws up by passing a string to the kernel which is not NULL
terminated (or did not fill anything at all), then when the fuse
driver in the kernel does strlen() on that page data (via
fuse_dev_do_write -> kern_path -> getname_kernel), it may trigger a
kernel paging request failure.

Unable to handle kernel paging request at virtual address
------------[ cut here ]------------
<..>
PC is at strlen+0x10/0x90
LR is at getname_kernel+0x2c/0xf4
<..>
strlen+0x10/0x90
kern_path+0x28/0x4c
fuse_dev_do_write+0x5b8/0x694
fuse_dev_write+0x74/0x94
do_iter_readv_writev+0x80/0xb8
do_readv_writev+0xec/0x1cc
vfs_writev+0x54/0x64
SyS_writev+0x64/0xe4
el0_svc_naked+0x24/0x28

To avoid this, we should ensure that in the case of FUSE_CANONICAL_PATH
the page is null terminated.

Change-Id: I33ca7cc76b4472eaa982c67bb20685df451121f5
Signed-off-by: Ritesh Harjani <riteshh@codeaurora.org>
Bug: 75984715
[Daniel - small edit, using args size ]
Signed-off-by: Daniel Rosenberg <drosen@google.com>
2020-11-29 16:11:35 +03:00
Dave Chinner
4e1be4a42c mm: new shrinker API
The current shrinker callout API uses a single shrinker call for
multiple functions.  To determine the function, a special magical value is
passed in a parameter to change the behaviour.  This complicates the
implementation and return value specification for the different
behaviours.

Separate the two different behaviours into separate operations, one to
return a count of freeable objects in the cache, and another to scan a
certain number of objects in the cache for freeing.  In defining these new
operations, ensure the return values and resultant behaviours are clearly
defined and documented.

Modify shrink_slab() to use the new API and implement the callouts for all
the existing shrinkers.
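
A minimal sketch of the resulting two-method API (the demo_cache_*
helpers are hypothetical stand-ins for a real cache):

#include <linux/shrinker.h>

static unsigned long demo_cache_count(void);
static unsigned long demo_cache_free(unsigned long nr);

/* Report how many objects could be freed; no freeing happens here. */
static unsigned long demo_count_objects(struct shrinker *shrink,
					struct shrink_control *sc)
{
	return demo_cache_count();
}

/* Free up to sc->nr_to_scan objects, returning how many were freed. */
static unsigned long demo_scan_objects(struct shrinker *shrink,
				       struct shrink_control *sc)
{
	return demo_cache_free(sc->nr_to_scan);
}

static struct shrinker demo_shrinker = {
	.count_objects = demo_count_objects,
	.scan_objects  = demo_scan_objects,
	.seeks         = DEFAULT_SEEKS,
};

Registration itself is unchanged: register_shrinker(&demo_shrinker).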

Change-Id: Id673f15f32d0497cfd398805d550907b68a06531
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Glauber Costa <glommer@parallels.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Carlos Maiolino <cmaiolino@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Rientjes <rientjes@google.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: J. Bruce Fields <bfields@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Kent Overstreet <koverstreet@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-11-29 16:11:30 +03:00
Minchan Kim
96cd891619 vmscan: remove obsolete shrink_control comment
09f363c7 ("vmscan: fix shrinker callback bug in fs/super.c") fixed a
shrinker callback which was returning -1 when nr_to_scan is zero, which
caused excessive slab scanning.  But 635697c6 ("vmscan: fix initial
shrinker size handling") fixed the problem, again so we can freely return
-1 although nr_to_scan is zero.  So let's revert 09f363c7 because the
comment added in 09f363c7 made an unnecessary rule.

Change-Id: Ic57b698b97406b980e06bd213afa283868a779a2
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Mikulas Patocka <mpatocka@redhat.com>
Cc: Konstantin Khlebnikov <khlebnikov@openvz.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-11-29 16:11:26 +03:00
Linus Torvalds
9fc39ebcf6 vfs: make O_PATH file descriptors usable for 'fstat()'
We already use them for openat() and friends, but fstat() also wants to
be able to use O_PATH file descriptors.  This should make it more
directly comparable to the O_SEARCH of Solaris.

Note that you could already do the same thing with "fstatat()" and an
empty path, but just doing "fstat()" directly is simpler and faster, so
there is no reason not to just allow it directly.

See also commit 332a2e1244, which did the same thing for fchdir, for
the same reasons.

Reported-by: ольга крыжановская <olga.kryzhanovska@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: stable@kernel.org    # O_PATH introduced in 3.0+
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-11-22 01:21:34 +03:00
Stephen Smalley
9ee1b3ecbe selinux: Remove unused permission definitions
Remove unused permission definitions from SELinux.
Many of these were only ever used in pre-mainline
versions of SELinux, prior to Linux 2.6.0.  Some of them
were used in the legacy network or compat_net=1 checks
that were disabled by default in Linux 2.6.18 and
fully removed in Linux 2.6.30.

Permissions never used in mainline Linux:
file swapon
filesystem transition
tcp_socket { connectto newconn acceptfrom }
node enforce_dest
unix_stream_socket { newconn acceptfrom }

Legacy network checks, removed in 2.6.30:
socket { recv_msg send_msg }
node { tcp_recv tcp_send udp_recv udp_send rawip_recv rawip_send dccp_recv dccp_send }
netif { tcp_recv tcp_send udp_recv udp_send rawip_recv rawip_send dccp_recv dccp_send }

Change-Id: I976d81760be7a800d696afb9ffc6c7a5dafa5c69
Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Paul Moore <pmoore@redhat.com>
2020-11-14 17:06:52 +01:00
Stephen Boyd
09553d5278 msm: SSR: Remove useless warning
This is a warning so that developers know to add more restart
order lists to SSR when a new chip is added. This is mostly
irrelevant now because we assume either entire-SoC restart on SSR
or independent restart on SSR, not group restarts. Remove this
warning, as it is mostly a reminder that nobody is listening to.

Change-Id: Icbf955cb18395d8d5d086b2167c5c329588b9256
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2020-11-13 15:50:48 +01:00
Elektroschmock
bbafa74230 flo: defconfig: Remove unused USB storage drivers
reference:
https://source.android.com/devices/tech/perf/boot-times

Change-Id: Ie617f5e00587769a3e419ec6c8e1fb4678688842
2020-11-10 20:56:21 +01:00
elektroschmock
ed06b19dcc flo: defconfig: Disable CONFIG_USB_EHSET_TEST_FIXTURE
We're not going to do electrical tests on flo's USB ports.

Change-Id: I4878b329a10ed0531beecec011fc7b578c15acb2
2020-11-09 21:31:14 +01:00
bohu
ff10168f46 Disable /dev/port
Bug: 36604779
Bug: 37646833

Signed-off-by: Bo Hu <bohu@google.com>
Test: run cts -m  CtsPermissionTestCases
-t android.permission.cts.FileSystemPermissionTest#testDevPortSane
Change-Id: Ie3155cb577e3f5e9c565129e3f007daded1a6328
2020-11-09 21:31:14 +01:00
Max Bires
795cd31308 Fixing an issue that caused DEVPORT to always be set.
Without a bool string present, using "# CONFIG_DEVPORT is not set" in
defconfig files would not actually unset devport. This ensured that
/dev/port was always on, but there are reasons a user may wish to
disable it (smaller kernel, attack surface reduction) if it's not being
used. Adding a message here in order to make this user-visible.

Bug: 36604779
Change-Id: Iab41b5c1ba44e9e52361fbfd8b1863b88eee417b
Signed-off-by: Max Bires <jbires@google.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Bug: 33301618
2020-11-09 21:31:14 +01:00
Elektroschmock
a228ac4a97 flo: defconfig: disable some debug and logging options
Change-Id: I972aadc94d6e4e79b3ffa92f19b3d0f907bb5e13
2020-11-09 22:31:03 +02:00
Alistair Strachan
77157df731 staging: android: ashmem: Fix mmap size validation
[ Upstream commit 8632c614565d0c5fdde527889601c018e97b6384 ]

The ashmem driver did not check that the size/offset of the vma passed
to its .mmap() function was not larger than the ashmem object being
mapped. This could cause mmap() to succeed, even though accessing parts
of the mapping would later fail with a segmentation fault.

Ensure an error is returned by the ashmem_mmap() function if the vma
size is larger than the ashmem object size. This enables safer handling
of the problem in userspace.
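
The added check amounts to something like this sketch of the upstream
fix, reusing ashmem's existing asma->size bookkeeping:

/* In ashmem_mmap(), with ashmem_mutex held: reject mappings larger
 * than the (page-aligned) size set via ASHMEM_SET_SIZE.
 */
if (vma->vm_end - vma->vm_start > PAGE_ALIGN(asma->size)) {
	ret = -EINVAL;
	goto out;
}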

Change-Id: I15033469c256aff805e737698de7db3903acd37c
Cc: Todd Kjos <tkjos@android.com>
Cc: devel@driverdev.osuosl.org
Cc: linux-kernel@vger.kernel.org
Cc: kernel-team@android.com
Cc: Joel Fernandes <joel@joelfernandes.org>
Signed-off-by: Alistair Strachan <astrachan@google.com>
Acked-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Reviewed-by: Martijn Coenen <maco@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-10-25 02:45:30 -04:00
zhangshuxiao
d4e989cbc1 staging: android: ashmem: lseek failed due to no FMODE_LSEEK.
vfs_llseek() will check whether the file mode has FMODE_LSEEK and, if
not, return failure. But ashmem can be lseeked, so add FMODE_LSEEK to
the ashmem file.

Change-Id: Ia78ef4c7c96adb89d52e70b63f7c00636fe60d01
Signed-off-by: zhangshuxiao <zhangshuxiao@xiaomi.com>
(cherry picked from commit 6c8d409129bbebe36cde9f8e511011756216163a)
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
2020-10-25 02:45:30 -04:00
Al Viro
4e29842b2e ashmem: use vfs_llseek()
Change-Id: I6747ced59472320c537e35a8fa791e9f8990915e
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2020-10-25 02:45:19 -04:00
Joel Fernandes
89bdbdbc7e staging: android: ashmem: Fix lockdep issue during llseek
commit cb57469c9573f6018cd1302953dd45d6e05aba7b upstream.

ashmem_mutex creates a chain of dependencies like so:

(1)
mmap syscall ->
  mmap_sem ->  (acquired)
  ashmem_mmap
  ashmem_mutex (try to acquire)
  (block)

(2)
llseek syscall ->
  ashmem_llseek ->
  ashmem_mutex ->  (acquired)
  inode_lock ->
  inode->i_rwsem (try to acquire)
  (block)

(3)
getdents ->
  iterate_dir ->
  inode_lock ->
  inode->i_rwsem   (acquired)
  copy_to_user ->
  mmap_sem         (try to acquire)

There is a lock ordering created between mmap_sem and inode->i_rwsem,
causing a lockdep splat [2] during a syzcaller test. This patch fixes
the issue by unlocking the mutex earlier. Functionally that's OK, since
we don't need to protect vfs_llseek.

[1] https://patchwork.kernel.org/patch/10185031/
[2] https://lkml.org/lkml/2018/1/10/48
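
The shape of the fix is roughly the following (an abbreviated sketch,
not the verbatim patch):

static loff_t ashmem_llseek(struct file *file, loff_t offset, int origin)
{
	struct ashmem_area *asma = file->private_data;

	mutex_lock(&ashmem_mutex);
	/* ... validate asma->size and asma->file ... */
	/* Drop the mutex before vfs_llseek(), so ashmem_mutex is never
	 * held while inode_lock() takes inode->i_rwsem.
	 */
	mutex_unlock(&ashmem_mutex);

	return vfs_llseek(asma->file, offset, origin);
}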

Change-Id: Ifb68925084a3e7944cef8144e783f4bd2e573782
Acked-by: Todd Kjos <tkjos@google.com>
Cc: Arve Hjonnevag <arve@android.com>
Cc: stable@vger.kernel.org
Reported-by: syzbot+8ec30bb7bf1a981a2012@syzkaller.appspotmail.com
Signed-off-by: Joel Fernandes <joelaf@google.com>
Acked-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-10-25 02:37:54 -04:00
Vinayak Menon
15f07d2c4d lowmemorykiller: use for_each_thread instead of buggy while_each_thread
A couple of cases were reported a few months ago where the CPU was blocked
on the following call stack for /seconds/, after which the watchdog fires.

test_task_flag(p = 0xE14ABF00, ?)
lowmem_shrink(?, sc = 0xD7A03C04)
shrink_slab(shrink = 0xD7A03C04, nr_pages_scanned = 0, lru_pages = 120)
try_to_free_pages(zonelist = 0xC1116440, ?, ?, ?)
__alloc_pages_nodemask(?, order = 0, ?, nodemask = 0x0)
__do_page_cache_readahead(mapping = 0xEB819364, filp = 0xCC16DC00, offset =
ra_submit(?, ?, ?)
filemap_fault(vma = 0xC105D240, vmf = 0xD7A03DC8)

There weren't any dumps to analyse the case, but this can be a possible
reason. while_each_thread is known to be buggy and can result in the
function looping forever if the task exits, even when protected with
rcu_read_lock. Use for_each_thread instead.

More details on the problems with while_each_thread can be found
at https://lkml.org/lkml/2013/12/2/320
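
Sketched, the change is from the unsafe pattern to the safe one (loop
bodies elided):

/* Old, buggy pattern: if tasks exit mid-walk, while_each_thread()
 * can loop forever, even under rcu_read_lock().
 */
rcu_read_lock();
p = tsk;
do {
	/* ... inspect p ... */
} while_each_thread(tsk, p);
rcu_read_unlock();

/* Replacement: for_each_thread() terminates safely against exit. */
rcu_read_lock();
for_each_thread(tsk, p) {
	/* ... inspect p ... */
}
rcu_read_unlock();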

Change-Id: I5eb6e4b463f81142a2a7824db389201357432ec7
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
2020-10-25 02:37:54 -04:00
Hu Wang
9bde4a0965 wlan: Fix null mac address check in WDA
The driver failed to join an AP with a specific BSSID (e.g. 00:00:00:00:00:03).
The reason is that WDA_IS_NULL_MAC_ADDRESS only checks the first 4 bytes
of the mac address, due to which the AP's BSSID failed the check, and hence
WDA returned the join failure.

Fix WDA_IS_NULL_MAC_ADDRESS to check all 6 bytes of mac address.
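
Illustratively, the corrected check behaves like this standalone model
(the real macro lives in the prima WDA code):

#include <stdbool.h>
#include <string.h>

/* A null MAC address must be all-zero in every one of its 6 octets. */
static bool is_null_mac_addr(const unsigned char mac[6])
{
	static const unsigned char zero[6];

	return memcmp(mac, zero, sizeof(zero)) == 0;
}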

Change-Id: Ifda6d6ada80a5197e56893e30061f48e418ba041
CRs-Fixed: 1029543
2020-10-25 02:37:54 -04:00
Hanumanth Reddy Pothula
cbc1ee52d1 wlan: Can't scan the hidden external SSID when the 1st SSID is empty
Propagation from qcacld-2.0 to prima

Because of a previous issue with the supplicant setting n_ssids to 1 when
there is no SSID provided, wlan_hdd_cfg80211.c simply ignores the
case when the first SSID is empty. However, this fails when the
1st SSID is empty but the one after it is not.

Change-Id: I8b25cab6335b59db587fb90d04a31682afa48d06
CRs-Fixed: 2148403
2020-10-25 02:37:54 -04:00
Jianmin Zhu
66d4bfb7da cfg80211: Fix use after free when process wdev events
"bssid" is only initialized out of the while loop, in case of two
events with same type: EVENT_CONNECT_RESULT, but one has zero
ether addr, the other is non-zero, the bssid pointer will be
referenced twice, which lead to use-after-free issue

Change-Id: Ie8a24275f7ec5c2f936ef0a802a42e5f63be9c71
CRs-Fixed: 2254305
Signed-off-by: Zhu Jianmin <jianminz@codeaurora.org>
2020-10-25 02:37:54 -04:00
Luca Weiss
e4cede11f4 ipv4: Pass struct flowi4 directly to rt_fill_info
This is partly a backport of d6c0a4f609
  (ipv4: Kill 'rt_src' from 'struct rtable').

skb->sk can be null, and in fact it is when creating the buffer
in inet_rtm_getroute. There is no other way of accessing the flow,
so pass it directly.

Fixes an invalid memory access when running 'ip route get $IPADDR'.

Change-Id: I7b9e5499614b96360c9c8420907e82e145bb97f3
2020-10-25 02:37:54 -04:00
Will Deacon
5e0f6dfb91 asm-generic: add memfd_create system call to unistd.h
Commit 9183df25fe ("shm: add memfd_create() syscall") added a new
system call (memfd_create) but didn't update the asm-generic unistd
header.

This patch adds the new system call to the asm-generic version of
unistd.h so that it can be used by architectures such as arm64.

Change-Id: I173b1e5b6087fcea7d226a9f55f792432515897d
Cc: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2020-10-25 02:37:54 -04:00
David Herrmann
0309fda2fe shm: add memfd_create() syscall
memfd_create() is similar to mmap(MAP_ANON), but returns a file-descriptor
that you can pass to mmap().  It can support sealing and avoids any
connection to user-visible mount-points.  Thus, it's not subject to quotas
on mounted file-systems, and can be used like malloc()'ed memory, but with
a file-descriptor to it.

memfd_create() returns the raw shmem file, so calls like ftruncate() can
be used to modify the underlying inode.  Also calls like fstat() will
return proper information and mark the file as a regular file.  If you want
sealing, you can specify MFD_ALLOW_SEALING.  Otherwise, sealing is not
supported (like on all other regular files).

Compared to O_TMPFILE, it does not require a tmpfs mount-point and is not
subject to a filesystem size limit.  It is still properly accounted to
memcg limits, though, and to the same overcommit or no-overcommit
accounting as all user memory.
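
A minimal usage sketch (assuming a libc that exposes the memfd_create()
wrapper; older systems go through syscall(__NR_memfd_create, ...)):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* The name is only a debugging label visible in /proc/pid/fd. */
	int fd = memfd_create("demo", MFD_ALLOW_SEALING);
	if (fd < 0) { perror("memfd_create"); return 1; }

	/* Regular file calls work on the descriptor. */
	if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

	/* Forbid any future resizing before handing the fd around. */
	if (fcntl(fd, F_ADD_SEALS, F_SEAL_GROW | F_SEAL_SHRINK) < 0)
		perror("fcntl");

	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) { perror("mmap"); return 1; }
	p[0] = 'x';

	return 0;
}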

Change-Id: Iaf959293e2c490523aeb46d56cc45b0e7bbe7bf5
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Ryan Lortie <desrt@desrt.ca>
Cc: Lennart Poettering <lennart@poettering.net>
Cc: Daniel Mack <zonque@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Angelo G. Del Regno <kholk11@gmail.com>
2020-10-25 02:37:54 -04:00
Russell King
0fbdad1f0f ARM: wire up memfd_create syscall
Add the memfd_create syscall to ARM.

Change-Id: I857960ac11d1e574a7957325d2b754bcc31b902d
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2020-10-25 02:37:54 -04:00
Riley Andrews
b010175dd4 android: binder: Use wake up hint for synchronous transactions.
Use wake_up_interruptible_sync() to hint to the scheduler that binder
transactions are synchronous wakeups. Disable preemption while waking
to avoid ping-ponging on the binder lock.

Change-Id: Ic406a232d0873662f80148e37acefe5243d912a0
2020-10-25 02:37:54 -04:00
Arne Coucheron
31a1482703 msm8960: Use tuned options when compiling
Change-Id: I52591c2eb5b6831e7302acf71e2c6c173d811c5e
2020-10-25 02:37:54 -04:00
Yatto
f24ec3f684 defconfig: flo: Enable CONFIG_NETFILTER_XT_TARGET_CT
* Fixes hotspot in many cases.

Change-Id: I30e3a58f91cb061ca6f4590e327ef91aeb44c73a
2020-10-25 02:36:11 -04:00