Originally from Peter Zijlstra. The helper migrates perf events
from one CPU to another.
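As a sketch (the notifier and target selection below are illustrative, not
part of this patch), a package-wide PMU driver could use the helper from its
CPU-offline path:

/* Sketch: move a package-wide PMU's events off a CPU that is going offline. */
static void example_pmu_cpu_offline(struct pmu *pmu, unsigned int cpu)
{
        unsigned int target = cpumask_any_but(cpu_online_mask, cpu);

        if (target < nr_cpu_ids)
                perf_pmu_migrate_context(pmu, cpu, target);
}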
Change-Id: I4d3c45b4594f3d26bbe7cc9e3fb79675ffac8b5e
Signed-off-by: Zheng Yan <zheng.z.yan@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1339741902-8449-5-git-send-email-zheng.z.yan@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Allow the pmu->event_init callback to change event->cpu, so the PMU driver
can choose the CPU on which to install events.
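A hedged sketch of what a driver's event_init can now do (the cpumask name is
made up for illustration):

static cpumask_t example_managed_cpus;  /* illustrative; populated elsewhere */

static int example_uncore_event_init(struct perf_event *event)
{
        if (event->attr.type != event->pmu->type)
                return -ENOENT;

        /* Steer every event for this PMU onto one managing CPU. */
        event->cpu = cpumask_first(&example_managed_cpus);
        return 0;
}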
Change-Id: Ie1f67c8b9fac650002f059081fe325eb799690c1
Signed-off-by: Zheng Yan <zheng.z.yan@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1339741902-8449-4-git-send-email-zheng.z.yan@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 12ca6ad2e3a896256f086497a7c7406a547ee373)
There's a race on CPU unplug where we free the swevent hash array
while it can still have events on. This will result in a
use-after-free which is BAD.
Simply do not free the hash array on unplug. This leaves the thing
around and no use-after-free takes place.
When the last swevent dies, we do a for_each_possible_cpu() iteration
anyway to clean these up, at which time we'll free it, so no leakage
will occur.
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Tested-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Change-Id: I14c0679a2934dccdbb052805e6430fe54b5978f0
Bug: 30952077
Buffer overread may happen as nl80211_set_station() reads 4 bytes
from the attribute NL80211_ATTR_LOCAL_MESH_POWER_MODE without
validating the size of data received when userspace sends less
than 4 bytes of data with NL80211_ATTR_LOCAL_MESH_POWER_MODE.
Define nla_policy for NL80211_ATTR_LOCAL_MESH_POWER_MODE to avoid
the buffer overread.
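The fix amounts to a policy entry along these lines (a sketch; the surrounding
table lives in nl80211.c), which makes the netlink attribute parser reject
anything shorter than a u32:

static const struct nla_policy nl80211_policy[NL80211_ATTR_MAX + 1] = {
        /* ... */
        [NL80211_ATTR_LOCAL_MESH_POWER_MODE] = { .type = NLA_U32 },
};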
Fixes: 3b1c5a5307 ("{cfg,nl}80211: mesh power mode primitives and userspace access")
Cc: stable@vger.kernel.org
Bug: 36819059
Signed-off-by: Srinivas Dasari <dasaris@qti.qualcomm.com>
Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
Git-commit: 8feb69c7bd89513be80eb19198d48f154b254021
Change-Id: Ie20993309501fd242782311b9fe787931f716116
CRs-Fixed: 2055013
Signed-off-by: Srinivas Dasari <dasaris@codeaurora.org>
nla policy checks only the maximum length of the attribute data
when the attribute type is NLA_BINARY. If userspace sends less
data than specified, the wireless drivers may access illegal
memory. When the type is NLA_UNSPEC, the nla policy check ensures
that userspace sends at least the specified number of bytes.
Remove type assignment to NLA_BINARY from nla_policy of
NL80211_ATTR_PMKID to make this NLA_UNSPEC and to make sure minimum
WLAN_PMKID_LEN bytes are received from userspace with
NL80211_ATTR_PMKID.
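The resulting entry looks roughly like this (a sketch): with no .type the
attribute is treated as NLA_UNSPEC, so .len is enforced as a minimum length
rather than a maximum:

static const struct nla_policy nl80211_policy[NL80211_ATTR_MAX + 1] = {
        /* ... */
        [NL80211_ATTR_PMKID] = { .len = WLAN_PMKID_LEN },
};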
Fixes: 67fbb16be6 ("nl80211: PMKSA caching support")
Cc: stable@vger.kernel.org
Bug: 36818836
Signed-off-by: Srinivas Dasari <dasaris@qti.qualcomm.com>
Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
Git-commit: 9361df14d1cbf966409d5d6f48bb334384fbe138
Change-Id: I5feb729a9ef48f67c4ee460e7e133d5fc8cecd4f
CRs-Fixed: 2061676
Signed-off-by: Srinivas Dasari <dasaris@codeaurora.org>
When the device descriptor is closed, the `substream->runtime` pointer
is freed. But another thread may be in the ioctl handler, case
SNDRV_CTL_IOCTL_PCM_INFO. This case calls snd_pcm_info_user() which
calls snd_pcm_info() which accesses the now freed `substream->runtime`.
Bug: 36006981
Signed-off-by: Robb Glasser <rglasser@google.com>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Change-Id: I445d24bc21dc0af6d9522a8daabe64969042236a
Fixes a steady memory leak in the keychord release code. A close of
the keychord device will leak 1 keychord structure. Easily
reproducible by a simple program that does an open()->write()->close()
of the keychord device.
Bug: 64483974
Change-Id: I1fa402c666cffb00b8cfd6379d9fe47a0989152c
Signed-off-by: Mohan Srinivasan <srmohan@google.com>
(cherry picked from commit 72a8dae2c25d0277e48672ee85b70236268add01)
There are multiple bugs caused by threads racing in keychord_write.
1) Threads racing through this function can cause the same element to
be added to a linked list twice (multiple calls to
input_register_handler() for the same input_handler struct). The races
can also cause the attempted removal of a list element that is not
actually on the list (multiple calls to input_unregister_handler()
with the same input_handler struct).
2) The races can also cause duplicate kfree's of the keychords
struct.
Bug: 64133562
Bug: 63974334
Change-Id: I6329a4d58c665fab5d3e96ef96391e07b4941e80
Signed-off-by: Mohan Srinivasan <srmohan@google.com>
(cherry picked from commit 59584701f1e2ce8ce024570576b206bea6ac69cf)
This was reported many times, and this was even mentioned in commit
52ee2dfdd4 "pids: refactor vnr/nr_ns helpers to make them safe" but
somehow nobody bothered to fix the obvious problem: task_tgid_nr_ns()
is not safe because task->group_leader points to nowhere after the
exiting task passes exit_notify(), rcu_read_lock() can not help.
We really need to change __unhash_process() to nullify group_leader,
parent, and real_parent, but this needs some cleanups. Until then we
can turn task_tgid_nr_ns() into another user of __task_pid_nr_ns() and
fix the problem.
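Roughly, the fix routes task_tgid_nr_ns() through the common helper instead of
dereferencing group_leader (a sketch; the helper's exact arguments depend on
the tree):

pid_t task_tgid_nr_ns(struct task_struct *tsk, struct pid_namespace *ns)
{
        /* Do not touch tsk->group_leader; it is stale after exit_notify(). */
        return __task_pid_nr_ns(tsk, __PIDTYPE_TGID, ns);
}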
Reported-by: Troy Kensinger <tkensinger@google.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
(url: https://patchwork.kernel.org/patch/9913055/)
Bug: 31495866
Change-Id: I5e67b02a77e805f71fa3a787249f13c1310f02e2
commit 4f0414e54e4d1893c6f08260693f8ef84c929293 upstream.
We need to load the TX SG list in sendmsg(2) after waiting for
incoming data, not before.
Bug: 64386293
Change-Id: Ibb0b7969ee1df314b49462ecd65ce381118d915d
Cc: stable@vger.kernel.org
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Siqi Lin <siqilin@google.com>
These configs are only used for BlueZ, so there is no reason to keep
them given that they cause a security vulnerability.
Bug: 63527053
Change-Id: Ia25a8268412ce58c6a162953af3602634b219669
Signed-off-by: Ecco Park <eccopark@google.com>
The 'move_pages()' system call was introduced long long ago with the
same permission checks as for sending a signal (except using
CAP_SYS_NICE instead of CAP_SYS_KILL for the overriding capability).
That turns out to not be a great choice - while the system call really
only moves physical page allocations around (and you need other
capabilities to do a lot of it), you can check the return value to map
out some of the virtual address choices and defeat ASLR of a binary that
still shares your uid.
So change the access checks to the more common 'ptrace_may_access()'
model instead.
This tightens the access checks for the uid, and also effectively
changes the CAP_SYS_NICE check to CAP_SYS_PTRACE, but it's unlikely that
anybody really _uses_ this legacy system call any more (we have better
NUMA placement models these days), so I expect nobody to notice.
Famous last words.
Reported-by: Otto Ebeling <otto.ebeling@iki.fi>
Acked-by: Eric W. Biederman <ebiederm@xmission.com>
Cc: Willy Tarreau <w@1wt.eu>
Cc: stable@kernel.org
Bug: 65468230
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
cherry-picked from: 197e7e521384a23b9e585178f3f11c9fa08274b9
This branch does not have the PTRACE_MODE_REALCREDS flag but its
default behavior is the same as PTRACE_MODE_REALCREDS. So use
PTRACE_MODE_READ instead of PTRACE_MODE_READ_REALCREDS.
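A sketch of the resulting check in the move_pages() permission path on this
branch (a fragment; error handling abbreviated):

/* Sketch: use the regular ptrace access check instead of raw uid checks. */
if (!ptrace_may_access(task, PTRACE_MODE_READ)) {
        err = -EPERM;
        goto out;
}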
Change-Id: I75364561d91155c01f78dd62cdd41c5f0f418854
A lock-unlock is missing in ASHMEM_SET_SIZE ioctl which can result in a
race condition when mmap is called. After the !asma->file check, before
setting asma->size, asma->file can be set in mmap. That would result in
having different asma->size than the mapped memory size. Combined with
ASHMEM_UNPIN ioctl and shrinker invocation, this can result in memory
corruption.
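A minimal sketch of the fix, assuming the existing ashmem_mutex is held across
the whole check-and-set:

case ASHMEM_SET_SIZE:
        ret = -EINVAL;
        mutex_lock(&ashmem_mutex);
        if (!asma->file) {
                ret = 0;
                asma->size = (size_t)arg;
        }
        mutex_unlock(&ashmem_mutex);
        break;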
Bug: 66954097
Signed-off-by: Viktor Slavkovic <viktors@google.com>
Change-Id: Ia52312a75ade30bc94be6b94420f17f34e0c1f86
Remove read permission on the debugfs register dump node for group
and other users so that wcd9xxx registers cannot be read.
CRs-fixed: 2113240
Bug: 62464339
Change-Id: I73a22e140446828e694fdc95fde7ac4e051c9548
Signed-off-by: Karthikeyan Mani <kmani@codeaurora.org>
Complete removal of the gud mobicore driver.
The driver author delivers an updated version of this driver directly
to interested parties, rendering this version obsolete.
Bug: 65468975
Change-Id: I40498d3203b1d6ca04f2b5a2e65461851d84d2d4
Acked-by: Tony Hamilton <tonyh@qti.qualcomm.com>
Signed-off-by: Trudy Shearer <tshearer@codeaurora.org>
Signed-off-by: Siqi Lin <siqilin@google.com>
commit 49d31c2f389acfe83417083e1208422b4091cd9e upstream.
take_dentry_name_snapshot() takes a safe snapshot of dentry name;
if the name is a short one, it gets copied into caller-supplied
structure, otherwise an extra reference to external name is grabbed
(those are never modified). In either case the pointer to stable
string is stored into the same structure.
dentry must be held by the caller of take_dentry_name_snapshot(),
but may be freely dropped afterwards - the snapshot will stay
until destroyed by release_dentry_name_snapshot().
Intended use:
        struct name_snapshot s;
        take_dentry_name_snapshot(&s, dentry);
        ...
        access s.name
        ...
        release_dentry_name_snapshot(&s);
Replaces fsnotify_oldname_...(), gets used in fsnotify to obtain the name
to pass down with event.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
[carnil: backport 4.9: adjust context]
[bwh: Backported to 3.16:
- External names are not ref-counted, so copy them
- Adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
[ghackmann@google.com: backported to 3.10: adjust context]
Signed-off-by: Greg Hackmann <ghackmann@google.com>
Change-Id: I612e687cbffa1a03107331a6b3f00911ffbebd8e
Bug: 63689921
snd_timer_notify1() is called outside the spinlock and it retakes the
lock after the unlock. This is rather racy, and it's safer to move
the snd_timer_notify() call inside the main spinlock.
The patch also contains a slight refactoring / cleanup of the code.
Now all start/stop/continue/pause look more symmetric and a bit more
readable.
Bug: 37240993
Change-Id: Ib90099f88c8b04928a8cdd2808cd9e16da6d519c
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Siqi Lin <siqilin@google.com>
commit ed8b1d6d2c741ab26d60d499d7fbb7ac801f0f51 upstream.
A slave timer element also unlinks at snd_timer_stop() but it takes
only slave_active_lock. When a slave is assigned to a master,
however, this may become a race against the master's interrupt
handling, eventually resulting in a list corruption. The actual bug
could be seen with a syzkaller fuzzer test case in BugLink below.
As a fix, we need to take timeri->timer->lock when timer isn't NULL,
i.e. assigned to a master, while the assignment to a master itself is
protected by slave_active_lock.
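A sketch of the slave-stop path with the extra lock (field names as in
sound/core/timer.c):

spin_lock_irqsave(&slave_active_lock, flags);
timeri->flags &= ~SNDRV_TIMER_IFLG_RUNNING;
if (timeri->timer)
        spin_lock(&timeri->timer->lock);        /* serialize against the master */
list_del_init(&timeri->ack_list);
list_del_init(&timeri->active_list);
if (timeri->timer)
        spin_unlock(&timeri->timer->lock);
spin_unlock_irqrestore(&slave_active_lock, flags);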
Bug: 37240993
Change-Id: Ib6eae144d5fdc92546d2210bcd6bc56454ad3e42
BugLink: http://lkml.kernel.org/r/CACT4Y+Y_Bm+7epAb=8Wi=AaWd+DYS7qawX52qxdCfOfY49vozQ@mail.gmail.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Siqi Lin <siqilin@google.com>
commit f784beb75ce82f4136f8a0960d3ee872f7109e09 upstream.
Although ALSA timer code got hardening for races, it still causes
use-after-free error. This is however rather a corrupted linked list,
not actually the concurrent accesses. Namely, when timer start is
triggered twice, list_add_tail() is called twice, too. This ends
up with link corruption and triggers a KASAN error.
The simplest fix would be replacing list_add_tail() with
list_move_tail(), but fundamentally it's the problem that we don't
check the double start/stop correctly. So, the right fix here is to
add the proper checks to snd_timer_start() and snd_timer_stop() (and
their variants).
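The added checks are essentially of this shape (a sketch from the slave-start
path):

spin_lock_irqsave(&slave_active_lock, flags);
if (timeri->flags & SNDRV_TIMER_IFLG_RUNNING) {
        spin_unlock_irqrestore(&slave_active_lock, flags);
        return -EBUSY;          /* refuse a second start */
}
timeri->flags |= SNDRV_TIMER_IFLG_RUNNING;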
Bug: 37240993
Change-Id: I86a327c4479fecf9b502ba6122c8ae67a2326754
BugLink: http://lkml.kernel.org/r/CACT4Y+ZyPRoMQjmawbvmCEDrkBD2BQuH7R09=eOkf5ESK8kJAw@mail.gmail.com
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Siqi Lin <siqilin@google.com>
commit c3b1681375dc6e71d89a3ae00cc3ce9e775a8917 upstream.
This is a minor code cleanup without any functional changes:
- Kill the keep_flag argument from _snd_timer_stop(), as all callers
pass only false.
- Remove redundant NULL check in _snd_timer_stop().
Bug: 37240993
Change-Id: Idc3778ca1cd62b8c22e2a57b3c1130fe7b3d13f6
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Siqi Lin <siqilin@google.com>
snd_timer_user_read() has a potential race among parallel reads, as
qhead and qused are updated outside the critical section due to
copy_to_user() calls. Move them into the critical section, and also
sanitize the relevant code a bit.
Bug: 37240993
Change-Id: I7358a57638ef23eb7f97341eaee1f0dd4ba2795a
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Siqi Lin <siqilin@google.com>
(cherry picked from commit 4dff5c7b7093b19c19d3a100f8a3ad87cb7cd9e7)
[ Upstream commit 230323dac060123c340cf75997971145a42661ee ]
Currently ALSA timer device doesn't take the disconnection into
account very well; it merely unlinks the timer device at disconnection
callback but does nothing else. Because of this, when an application
accessing the timer device is disconnected, it may release the
resource before it is actually closed. In most cases, this results in a
warning message indicating a leftover timer instance like:
ALSA: timer xxxx is busy?
But basically this is an open race.
This patch tries to address it. The strategy is like other ALSA
devices: namely,
- Manage card's refcount at each open/close
- Wake up the pending tasks at disconnection
- Check the shutdown flag appropriately at each possible call
Note that this patch has one ugly hack to handle the wakeup of pending
tasks. It'd be cleaner to introduce a new disconnect op to
snd_timer_instance ops. But since it would lead to internal ABI
breakage and would eventually increase my own work when backporting to
stable kernels, I took a different path and implemented it locally in
timer.c. A cleanup patch will follow next, for the 4.5 kernel.
Bug: 37240993
Change-Id: I05c7f0e7d28b63fc343091f800ceae9ec2afe4a4
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=109431
Cc: <stable@vger.kernel.org> # v3.15+
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Siqi Lin <siqilin@google.com>
(cherry picked from commit 230323dac060123c340cf75997971145a42661ee)
commit 117159f0b9d392fb433a7871426fad50317f06f7 upstream.
In snd_timer_notify1(), the wrong timer instance was passed for the
slave callback function. This leads to access to the wrong data when
an incompatible master is handled (e.g. the master is the sequencer
timer and the slave is a user timer), as spotted by syzkaller fuzzer.
This patch fixes that wrong assignment.
Bug: 37240993
Change-Id: I7a9f258f13d500776725f2383136dabcb563a0d3
BugLink: http://lkml.kernel.org/r/CACT4Y+Y_Bm+7epAb=8Wi=AaWd+DYS7qawX52qxdCfOfY49vozQ@mail.gmail.com
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Siqi Lin <siqilin@google.com>
If the user passes in a negative file size as an int64, it will
compare as smaller than the buffer length and will be truncated to
form a read length that is larger than the buffer length.
To fix, return -EINVAL if the count argument is negative, so the
loop will never happen.
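A sketch of the check (the surrounding handler is not shown and the variable
name is illustrative):

/* The file size arrives as a signed 64-bit value from userspace. */
if (count < 0)
        return -EINVAL; /* never enter the copy loop with a bogus length */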
Bug: 37429972
Test: Test with PoC
Change-Id: I5d52e38e6fbe2c17eb8c493f9eb81df6cfd780a4
Signed-off-by: Jerry Zhang <zhangjerry@google.com>
Make the various timeout values HZ agnostic by using the proper
macros and values instead.
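For example (values are illustrative), a raw jiffies count only means what it
looks like at one particular CONFIG_HZ, while the macro form does not depend
on HZ:

/* HZ-dependent: 100 jiffies is 100ms only when HZ == 1000. */
schedule_timeout_interruptible(100);

/* HZ-agnostic: always waits ~100ms regardless of CONFIG_HZ. */
schedule_timeout_interruptible(msecs_to_jiffies(100));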
Change-Id: I906b948657c8873518042c7465272c98c5391e59
Signed-off-by: Suman Tatiraju <sumant@codeaurora.org>
Signed-off-by: Ajay Dudani <adudani@codeaurora.org>
During suspend we call sched_clock_poll() to update the epoch and
accumulated time and reprogram the sched_clock_timer to fire
before the next wrap-around time. Unfortunately,
sched_clock_poll() doesn't restart the timer, instead it relies
on the hrtimer layer to do that and during suspend we aren't
calling that function from the hrtimer layer. Instead, we're
reprogramming the expires time while the hrtimer is enqueued,
which can cause the hrtimer tree to be corrupted. Furthermore, we
restart the timer during suspend but we update the epoch during
resume which seems counter-intuitive.
Let's fix this by saving the accumulated state and canceling the
timer during suspend. On resume we can update the epoch and
restart the timer similar to what we would do if we were starting
the clock for the first time.
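A sketch of the resulting suspend/resume pair (field and function names follow
kernel/time/sched_clock.c; this is not the literal patch):

static int sched_clock_suspend(void)
{
        update_sched_clock();                   /* fold in time accumulated so far */
        hrtimer_cancel(&sched_clock_timer);     /* never reprogram an enqueued timer */
        cd.suspended = true;
        return 0;
}

static void sched_clock_resume(void)
{
        cd.epoch_cyc = read_sched_clock();      /* fresh epoch, as at first start */
        hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL);
        cd.suspended = false;
}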
Change-Id: Iee2a1cca42e5b681347ea0607e9af420a63892d7
CRs-Fixed: 696826
Fixes: a08ca5d108 "sched_clock: Use an hrtimer instead of timer"
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
The ARM architected system counter has at least 56 usable bits.
Add support for counters with more than 32 bits to the generic
sched_clock implementation so we can increase the time between
wakeups due to dealing with wrap-around on these devices while
benefiting from the irqtime accounting and suspend/resume
handling that the generic sched_clock code already has. On my
system using 56 bits over 32 bits changes the wraparound time
from a few minutes to an hour. For faster running counters (GHz
range) this is even more important because we may not be able to
execute the timer in time to deal with the wraparound if only 32
bits are used.
We choose a maxsec value of 3600 seconds because we assume no
system will go idle for more than an hour. In the future we may
need to increase this value.
Note: All users should switch over to the 64-bit read function so
we can remove setup_sched_clock() in favor of sched_clock_register().
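A hedged example of a driver registering a 56-bit counter via the 64-bit
interface (the read callback and rate below are illustrative):

static u64 notrace example_read_counter(void)
{
        return arch_counter_get_cntvct();       /* 56-bit architected counter */
}

static void __init example_sched_clock_init(void)
{
        /* 56 usable bits at 19.2 MHz (rate is illustrative). */
        sched_clock_register(example_read_counter, 56, 19200000);
}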
Change-Id: Ieea67ada51e6fce2637520bf5f60e789762d8694
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Git-commit: e7e3ff1bfe
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Signed-off-by: Ian Maund <imaund@codeaurora.org>
In the next patch we're going to increase the number of bits that
the generic sched_clock can handle to be greater than 32. With
more than 32 bits the wraparound time can be larger than what can
fit into the units that msecs_to_jiffies takes (unsigned int).
Luckily, the wraparound is initially calculated in nanoseconds
which we can easily use with hrtimers, so switch to using an
hrtimer.
Change-Id: Ia052d3d5d9c1ec7b59c440ceae852c12c61477e1
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
[jstultz: Fixup hrtimer intitialization order issue]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Git-commit: a08ca5d108
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Signed-off-by: Ian Maund <imaund@codeaurora.org>
We're going to increase the cyc value to 64 bits in the near
future. Doing that is going to break the custom seqcount
implementation in the sched_clock code because 64 bit numbers
aren't guaranteed to be atomic. Replace the cyc_copy with a
seqcount to avoid this problem.
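The resulting pattern is the standard seqcount one, roughly (a fragment
sketch):

/* Writer (update path), already serialized against other writers: */
write_seqcount_begin(&cd.seq);
cd.epoch_ns = ns;
cd.epoch_cyc = cyc;
write_seqcount_end(&cd.seq);

/* Reader (sched_clock()), retries if it raced with an update: */
do {
        seq = read_seqcount_begin(&cd.seq);
        epoch_cyc = cd.epoch_cyc;
        epoch_ns = cd.epoch_ns;
} while (read_seqcount_retry(&cd.seq, seq));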
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Git-commit: 85c3d2dd15
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
[imaund@codeaurora.org: resolve merge conflicts]
Signed-off-by: Ian Maund <imaund@codeaurora.org>
Change-Id: If7fae3228ff425c9f1763145cbd5c851141c7755
There is a small race between when the cycle count is read from
the hardware and when the epoch stabilizes. Consider this
scenario:
    CPU0                         CPU1
    ----                         ----
    cyc = read_sched_clock()
    cyc_to_sched_clock()
                                 update_sched_clock()
                                  ...
                                  cd.epoch_cyc = cyc;
     epoch_cyc = cd.epoch_cyc;
     ...
     epoch_ns + cyc_to_ns((cyc - epoch_cyc)
The cyc on cpu0 was read before the epoch changed. But we
calculate the nanoseconds based on the new epoch by subtracting
the new epoch from the old cycle count. Since epoch is most likely
larger than the old cycle count we calculate a large number that
will be converted to nanoseconds and added to epoch_ns, causing
time to jump forward too much.
Fix this problem by reading the hardware after the epoch has
stabilized.
Change-Id: I2d81b8422814f209ca1e03a45eb370c9a0696c31
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Git-commit: 336ae1180d
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Signed-off-by: Ian Maund <imaund@codeaurora.org>
Nothing about the sched_clock implementation in the ARM port is
specific to the architecture. Generalize the code so that other
architectures can use it by selecting GENERIC_SCHED_CLOCK.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
[jstultz: Merge minor collisions with other patches in my tree]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Git-commit: 38ff87f77a
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
[imaund@codeaurora.org: resolve merge conflicts]
Signed-off-by: Ian Maund <imaund@codeaurora.org>
[flex1911: backport to 3.4]
Signed-off-by: Artem Borisov <dedsa2002@gmail.com>
Change-Id: I798c7c58dc9f476b07e60958a970aff7ceb4b797
cyc_to_sched_clock() is called by sched_clock() and cyc_to_ns()
is called by cyc_to_sched_clock(). I suspect that some compilers
inline both of these functions into sched_clock() and so we've
been getting away without having a notrace marking. It seems that
my compiler isn't inlining cyc_to_sched_clock() though, so I'm
hitting a recursion bug when I enable the function graph tracer,
causing my system to crash. Marking these functions notrace fixes
it. Technically cyc_to_ns() doesn't need the notrace because it's
already marked inline, but let's just add it so that if we ever
remove inline from that function it doesn't blow up.
Change-Id: I18b0b4a41eb22fd454a9cd11cfeb73cec1327e92
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This converts sched_clock to simply a call to a function pointer in order
to allow overriding it. This will allow for use with 64-bit counters where
overflow handling is not needed.
Change-Id: Ia2b5c1382f845e28efc80365d52debd3d122b519
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Allow multiple calls to setup_sched_clock and switch to the new counter
if it is higher frequency.
Change-Id: I482ee5d2ce20f114041347110c63d140d47e7da5
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
(cherry picked from commit b277da0a8a)
Clear QUEUE_FLAG_ADD_RANDOM in all block drivers that set
QUEUE_FLAG_NONROT.
Historically, all block devices have automatically made entropy
contributions. But as previously stated in commit e2e1a148 ("block: add
sysfs knob for turning off disk entropy contributions"):
- On SSD disks, the completion times aren't as random as they
are for rotational drives. So it's questionable whether they
should contribute to the random pool in the first place.
- Calling add_disk_randomness() has a lot of overhead.
There are more reliable sources for randomness than non-rotational block
devices. From a security perspective it is better to err on the side of
caution than to allow entropy contributions from unreliable "random"
sources.
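In each driver the change is a one-liner next to the existing non-rotational
flag, roughly:

queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q);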
Change-Id: I8e9005feeedf301e486cf1aa13c804e322a3591b
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Artem Borisov <dedsa2002@gmail.com>
(cherry picked from commit 06af1c52c9ea234e0b1266cc0b52c3e0c6c8fe9f)
The current lz4 compress buffer is 16kb on 32-bit systems and 32kb on
64-bit systems, but lz4 needs only 16kb on both. On 64-bit systems this
wastes cpu cycles on an additional memset during every compression.
In the case of lz4hc, the current buffer size is (256kb + 8) on 32-bit
and (512kb + 16) on 64-bit, but lz4hc needs only (256kb + 2 * pointer)
on both.
This patch fixes these wrong compress buffer sizes for 64-bit systems.
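Expressed as the workmem size macros, the fix amounts to something like this
(a sketch; the exact expressions depend on the lz4 headers in this tree):

#define LZ4_MEM_COMPRESS        (16 << 10)                      /* 16 KB on 32- and 64-bit */
#define LZ4HC_MEM_COMPRESS      ((256 << 10) + 2 * sizeof(void *))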
Signed-off-by: Bongkyu Kim <bongkyu.kim@lge.com>
Cc: Chanho Min <chanho.min@lge.com>
Cc: Yann Collet <yann.collet.73@gmail.com>
Cc: Kyungsik Lee <kyungsik.lee@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit e02d238c9852a91b30da9ea32ce36d1416cdc683)
Do not __GFP_ZERO allocated zcomp ->private pages. We keep allocated
streams around and use them for read/write requests, so we supply a
zeroed out ->private to compression algorithm as a scratch buffer only
once -- the first time we use that stream. For the rest of the IO
requests served by this stream, ->private usually contains some
temporary data from previous requests.
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 75d8947a36d0c9aedd69118d1f14bf424005c7c2)
Each zcomp backend uses its own gfp flag, but this is pointless because
the context they are called in is driven by the upper layer (ie, the
zcomp frontend), and the zcomp frontend can call them in different
contexts. In one context (ie, the zram init path) allocation should be
guaranteed to succeed; in the other (ie, further stream allocation to
accelerate I/O speed) it is merely optional. So let's pass gfp down
from the driver (ie, the zcomp frontend) like the normal MM convention.
[sergey.senozhatsky@gmail.com: add missing __vmalloc zero and highmem gfps]
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit d913897abace843bba20249f3190167f7895e9c3)
When we're using LZ4 multi compression streams for zram swap, we
observed page allocation failure messages while running system tests,
not just once but a few times (2 - 5 per test). Some of the failures
kept occurring while trying order-3 allocations.
To build the private data for parallel compression, we call kzalloc()
with order 2/3 at runtime (lzo/lz4). If no order 2/3 sized memory is
available at that time, the page allocation fails. This patch uses
vmalloc() as a fallback for kmalloc(), which prevents the page
allocation failure warning.
After this change we never saw the warning message while running the
tests, and it reduced process startup latency by about 60-120ms in
each case.
For reference, a call trace:
Binder_1: page allocation failure: order:3, mode:0x10c0d0
CPU: 0 PID: 424 Comm: Binder_1 Tainted: GW 3.10.49-perf-g991d02b-dirty #20
Call trace:
dump_backtrace+0x0/0x270
show_stack+0x10/0x1c
dump_stack+0x1c/0x28
warn_alloc_failed+0xfc/0x11c
__alloc_pages_nodemask+0x724/0x7f0
__get_free_pages+0x14/0x5c
kmalloc_order_trace+0x38/0xd8
zcomp_lz4_create+0x2c/0x38
zcomp_strm_alloc+0x34/0x78
zcomp_strm_multi_find+0x124/0x1ec
zcomp_strm_find+0xc/0x18
zram_bvec_rw+0x2fc/0x780
zram_make_request+0x25c/0x2d4
generic_make_request+0x80/0xbc
submit_bio+0xa4/0x15c
__swap_writepage+0x218/0x230
swap_writepage+0x3c/0x4c
shrink_page_list+0x51c/0x8d0
shrink_inactive_list+0x3f8/0x60c
shrink_lruvec+0x33c/0x4cc
shrink_zone+0x3c/0x100
try_to_free_pages+0x2b8/0x54c
__alloc_pages_nodemask+0x514/0x7f0
__get_free_pages+0x14/0x5c
proc_info_read+0x50/0xe4
vfs_read+0xa0/0x12c
SyS_read+0x44/0x74
DMA: 3397*4kB (MC) 26*8kB (RC) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB
0*512kB 0*1024kB 0*2048kB 0*4096kB = 13796kB
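A minimal sketch of the fallback (function name and gfp details are
illustrative; the zero/highmem bits follow the note below):

static void *example_zcomp_create(void)
{
        void *ret;

        /* Try a physically contiguous (order 2/3) buffer first... */
        ret = kzalloc(LZ4_MEM_COMPRESS, GFP_KERNEL | __GFP_NOWARN);
        if (!ret)
                /* ...and fall back to vmalloc instead of failing with a warning. */
                ret = __vmalloc(LZ4_MEM_COMPRESS,
                                GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO | __GFP_HIGHMEM,
                                PAGE_KERNEL);
        return ret;
}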
[minchan@kernel.org: change vmalloc gfp and adding comment about gfp]
[sergey.senozhatsky@gmail.com: tweak comments and styles]
Signed-off-by: Kyeongdon Kim <kyeongdon.kim@lge.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 3d5fe03a3ea013060ebba2a811aeb0f23f56aefa)
We can end up allocating a new compression stream with GFP_KERNEL from
within the IO path, which may result in nested (recursive) IO
operations. That can introduce problems if the IO path in question is a
reclaimer, holding some locks that will deadlock nested IOs.
Allocate streams and working memory using GFP_NOIO flag, forbidding
recursive IO and FS operations.
An example:
inconsistent {IN-RECLAIM_FS-W} -> {RECLAIM_FS-ON-W} usage.
git/20158 [HC0[0]:SC0[0]:HE1:SE1] takes:
(jbd2_handle){+.+.?.}, at: start_this_handle+0x4ca/0x555
{IN-RECLAIM_FS-W} state was registered at:
__lock_acquire+0x8da/0x117b
lock_acquire+0x10c/0x1a7
start_this_handle+0x52d/0x555
jbd2__journal_start+0xb4/0x237
__ext4_journal_start_sb+0x108/0x17e
ext4_dirty_inode+0x32/0x61
__mark_inode_dirty+0x16b/0x60c
iput+0x11e/0x274
__dentry_kill+0x148/0x1b8
shrink_dentry_list+0x274/0x44a
prune_dcache_sb+0x4a/0x55
super_cache_scan+0xfc/0x176
shrink_slab.part.14.constprop.25+0x2a2/0x4d3
shrink_zone+0x74/0x140
kswapd+0x6b7/0x930
kthread+0x107/0x10f
ret_from_fork+0x3f/0x70
irq event stamp: 138297
hardirqs last enabled at (138297): debug_check_no_locks_freed+0x113/0x12f
hardirqs last disabled at (138296): debug_check_no_locks_freed+0x33/0x12f
softirqs last enabled at (137818): __do_softirq+0x2d3/0x3e9
softirqs last disabled at (137813): irq_exit+0x41/0x95
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(jbd2_handle);
<Interrupt>
lock(jbd2_handle);
*** DEADLOCK ***
5 locks held by git/20158:
#0: (sb_writers#7){.+.+.+}, at: [<ffffffff81155411>] mnt_want_write+0x24/0x4b
#1: (&type->i_mutex_dir_key#2/1){+.+.+.}, at: [<ffffffff81145087>] lock_rename+0xd9/0xe3
#2: (&sb->s_type->i_mutex_key#11){+.+.+.}, at: [<ffffffff8114f8e2>] lock_two_nondirectories+0x3f/0x6b
#3: (&sb->s_type->i_mutex_key#11/4){+.+.+.}, at: [<ffffffff8114f909>] lock_two_nondirectories+0x66/0x6b
#4: (jbd2_handle){+.+.?.}, at: [<ffffffff811e31db>] start_this_handle+0x4ca/0x555
stack backtrace:
CPU: 2 PID: 20158 Comm: git Not tainted 4.1.0-rc7-next-20150615-dbg-00016-g8bdf555-dirty #211
Call Trace:
dump_stack+0x4c/0x6e
mark_lock+0x384/0x56d
mark_held_locks+0x5f/0x76
lockdep_trace_alloc+0xb2/0xb5
kmem_cache_alloc_trace+0x32/0x1e2
zcomp_strm_alloc+0x25/0x73 [zram]
zcomp_strm_multi_find+0xe7/0x173 [zram]
zcomp_strm_find+0xc/0xe [zram]
zram_bvec_rw+0x2ca/0x7e0 [zram]
zram_make_request+0x1fa/0x301 [zram]
generic_make_request+0x9c/0xdb
submit_bio+0xf7/0x120
ext4_io_submit+0x2e/0x43
ext4_bio_write_page+0x1b7/0x300
mpage_submit_page+0x60/0x77
mpage_map_and_submit_buffers+0x10f/0x21d
ext4_writepages+0xc8c/0xe1b
do_writepages+0x23/0x2c
__filemap_fdatawrite_range+0x84/0x8b
filemap_flush+0x1c/0x1e
ext4_alloc_da_blocks+0xb8/0x117
ext4_rename+0x132/0x6dc
? mark_held_locks+0x5f/0x76
ext4_rename2+0x29/0x2b
vfs_rename+0x540/0x636
SyS_renameat2+0x359/0x44d
SyS_rename+0x1e/0x20
entry_SYSCALL_64_fastpath+0x12/0x6f
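In code, that means the stream and workmem allocations use GFP_NOIO, for
example (a fragment sketch):

/* Allocate a compression stream without allowing recursive IO/FS. */
zstrm = kmalloc(sizeof(*zstrm), GFP_NOIO);
if (!zstrm)
        return NULL;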
[minchan@kernel.org: add stable mark]
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Kyeongdon Kim <kyeongdon.kim@lge.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 058dbde928)
With ZRAM enabled it is observed that the low memory killer
doesn't trigger properly. Swap-cached pages are accounted in
NR_FILE, and lowmemorykiller considers them reclaimable and adds
them to other_file. But these pages can't be reclaimed unless
lowmemorykiller triggers.
So subtract swap cache pages from other_file.
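The adjustment is roughly the following (a sketch; the exact counters
available differ between kernel versions):

other_file = global_page_state(NR_FILE_PAGES) -
             global_page_state(NR_SHMEM) -
             total_swapcache_pages();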
Change-Id: Ibd1e1b83515517b9b80a8b48f90c4b767024397d
Signed-off-by: Vinayak Menon <vinayakm.list@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The amount of reserved memory varies between devices. Subtract it
here to reduce the amount of device-specific tuning needed for the
minfree values.
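The free-page side of the heuristic then becomes roughly (a sketch):

other_free = global_page_state(NR_FREE_PAGES) - totalreserve_pages;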
Change-Id: I481d84699f80c50384a6d1742383e81043e4ae1a
Cc: Android Kernel Team <kernel-team@android.com>
Cc: Arve Hjønnevåg <arve@android.com>
Signed-off-by: Arve Hjønnevåg <arve@android.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When I use several fast SSDs to do swap, swapper_space.tree_lock is
heavily contended. This makes each swap partition have one
address_space to reduce the lock contention. There is an array of
address_space for swap. The swap entry type is the index to the array.
In my test with 3 SSDs, this increases the swapout throughput by 20%.
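Conceptually, the core of the change is one address_space per swap type,
selected by the entry's type (a sketch):

struct address_space swapper_spaces[MAX_SWAPFILES];

/* Pick the address_space for a swap entry by its type (the array index). */
#define swap_address_space(entry) \
        (&swapper_spaces[swp_type(entry)])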
[akpm@linux-foundation.org: revert unneeded change to __add_to_swap_cache]
Signed-off-by: Shaohua Li <shli@fusionio.com>
Cc: Hugh Dickins <hughd@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Change-Id: I8503ace83342398bf7be3d2216616868cca65311
This patch changes memcg's behavior at task_move().
At task_move(), the kernel scans a task's page table and moves the
charges for mapped pages from the source cgroup to the target cgroup.
There has been a
bug at handling shared anonymous pages for a long time.
Before patch:
- The spec says 'shared anonymous pages are not moved.'
- The implementation was 'shared anonymous pages may be moved'.
If page_mapcount <= 2, the charges for shared anonymous pages were moved.
After patch:
- The spec says 'all anonymous pages are moved'.
- The implementation is 'all anonymous pages are moved'.
Considering usage of memcg, this will not affect user's experience.
'shared anonymous' pages only exist within a tree of processes which
don't do exec(). Moving one such process without exec() seems not sane.
For example, libcgroup will not be affected by this change. (Anyway, no
one noticed the implementation for a long time...)
Below is a discussion log:
- current spec/implementation are complex
- Now, shared file caches are moved
- It adds an unclear check on page_mapcount(). To do a correct check,
we should check swap users, etc.
- No one noticed this implementation behavior. So, no one gets a benefit
from the design.
- In general, once a task is moved to a cgroup for running, it will not
be moved....
- Finally, we have control knob as memory.move_charge_at_immigrate.
Here is a patch to allow moving shared pages completely. This makes
memcg simpler and fixes the current broken code.
Suggested-by: Hugh Dickins <hughd@google.com>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Glauber Costa <glommer@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 6bc2b856bb)
Set the TIF_MEMDIE thread flag before sending the kill signal to the
selected thread. This fits the usual code sequence and avoids a
potential race.
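In lowmemorykiller terms the ordering becomes (a sketch):

/* Mark the victim first so it can dip into memory reserves and exit
 * quickly, then deliver the kill signal. */
set_tsk_thread_flag(selected, TIF_MEMDIE);
send_sig(SIGKILL, selected, 0);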
Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>