Commit graph

16466 commits

Author SHA1 Message Date
Steve Muckle
ee9ddb5f3c sched: add sync wakeup recognition in select_best_cpu
If a wakeup is a sync wakeup, we need to discount the currently
running task's load from the waker's CPU as we calculate the best
CPU for the waking task to land on.
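
A minimal sketch of the idea (cpu_load_for_placement() and
task_placement_load() are assumed helper names, not taken from this tree):

    static u64 placement_load(int cpu, struct task_struct *waker, int sync)
    {
        u64 load = cpu_load_for_placement(cpu);        /* assumed helper */

        /* sync wakeup: the waker is expected to sleep shortly, so do not
         * count its load against its own CPU */
        if (sync && cpu == task_cpu(waker)) {
            u64 waker_load = task_placement_load(waker);  /* assumed helper */

            load = load > waker_load ? load - waker_load : 0;
        }
        return load;
    }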

Change-Id: I00c5df626d17868323d60fb90b4513c0dd314825
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 23:53:55 -08:00
Srivatsa Vaddagiri
6e778f0cdc sched: Provide knob to prefer mostly_idle over idle cpus
sysctl_sched_prefer_idle lets the scheduler bias selection of
idle cpus over mostly idle cpus for tasks. This knob could be
useful to control the balance between power and performance.

Change-Id: Ide6eef684ef94ac8b9927f53c220ccf94976fe67
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2014-12-10 23:53:54 -08:00
Steve Muckle
75d1c94217 sched: make sched_cpu_high_irqload a runtime tunable
It may be desirable to be able to alter the sched_cpu_high_irqload
setting easily, so make it a runtime tunable value.

Change-Id: I832030eec2aafa101f0f435a4fd2d401d447880d
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 23:53:53 -08:00
Steve Muckle
00acd0448b sched: trace: extend sched_cpu_load to print irqload
The irqload is used in determining whether CPUs are mostly idle,
so it is useful to know this value while viewing scheduler traces.

Change-Id: Icbb74fc1285be878f254ae54886bdb161b14a270
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 23:53:51 -08:00
Steve Muckle
51f0d7663b sched: avoid CPUs with high irq activity
CPUs with significant IRQ activity will not be able to serve tasks
quickly. Avoid them if possible by disqualifying such CPUs from
being recognized as mostly idle.
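
A hedged sketch of the disqualification (sched_irqload() is an assumed
helper returning the decayed irqload from the tracking patch below;
sysctl_sched_cpu_high_irqload is the runtime tunable introduced above):

    /* a CPU busy servicing interrupts is not treated as mostly idle */
    static inline int cpu_high_irqload(int cpu)
    {
        return sched_irqload(cpu) >= sysctl_sched_cpu_high_irqload;
    }

The mostly-idle check would then fail for any CPU where cpu_high_irqload()
returns true, so select_best_cpu() skips it.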

Change-Id: I2c09272a4f259f0283b272455147d288fce11982
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 23:53:47 -08:00
Steve Muckle
a14e01109a sched: refresh sched_clock() after acquiring rq lock in irq path
The wallclock time passed to sched_account_irqtime() may be stale
after we wait to acquire the runqueue lock. This could cause problems
in update_task_ravg because a different CPU may have advanced
this CPU's window_start based on a more up-to-date wallclock value,
triggering a BUG_ON(window_start > wallclock).
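
A sketch of the shape of the fix (the sched_account_irqtime() argument
list and the update_task_ravg() call shown here are assumptions):

    void sched_account_irqtime(int cpu, struct task_struct *curr,
                               u64 delta, u64 wallclock)
    {
        struct rq *rq = cpu_rq(cpu);
        unsigned long flags;

        raw_spin_lock_irqsave(&rq->lock, flags);

        /* the wallclock captured before taking rq->lock may be stale;
         * refresh it so window_start can never be ahead of it */
        wallclock = sched_clock();

        update_task_ravg(rq->curr, rq, wallclock);
        raw_spin_unlock_irqrestore(&rq->lock, flags);
    }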

Change-Id: I316af62d1716e9b59c4a2898a2d9b44d6c7a75d8
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 19:50:46 -08:00
Steve Muckle
5fdc1d3aaa sched: track soft/hard irqload per-RQ with decaying avg
The scheduler currently ignores irq activity when deciding which
CPUs to place tasks on. If a CPU is getting hammered with IRQ activity
but has no tasks, it will look attractive to the scheduler as it will
not be in a low power mode.

Track irqload with a decaying average. This quantity can be used
in the task placement logic to avoid CPUs which are under high
irqload. The decay factor is 3/4. Note that with this algorithm the
tracked irqload quantity will be higher than the actual irq time
observed in any single window. Some sample outcomes with steady
irqloads per 10ms window and the 3/4 decay factor (irqload of 10 is
used as a threshold in a subsequent patch):

irqload per window    load value asymptote    # windows to > 10
2ms                   8                       n/a
3ms                   12                      7
4ms                   16                      4
5ms                   20                      3

Of course irqload will not be constant in each window, these are just
given as simple examples.
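
The table can be reproduced with the following stand-alone arithmetic
check (illustration only, not kernel code): each window the tracked value
becomes fresh + 3/4 * previous, which converges toward 4 * fresh.

    #include <stdio.h>

    int main(void)
    {
        const int fresh_ms[] = { 2, 3, 4, 5 };

        for (int i = 0; i < 4; i++) {
            double tracked = 0.0;
            int windows = -1;

            for (int w = 1; w <= 64; w++) {
                tracked = fresh_ms[i] + tracked * 3.0 / 4.0;
                if (tracked > 10.0 && windows < 0)
                    windows = w;
            }

            if (windows < 0)
                printf("%dms/window: asymptote %dms, never > 10ms\n",
                       fresh_ms[i], fresh_ms[i] * 4);
            else
                printf("%dms/window: asymptote %dms, > 10ms after %d windows\n",
                       fresh_ms[i], fresh_ms[i] * 4, windows);
        }
        return 0;
    }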

Change-Id: I9dba049f5dfdcecc04339f727c8dd4ff554e01a5
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 19:50:45 -08:00
Steve Muckle
c5c90f6099 sched: do not set window until sched_clock is fully initialized
The system initially uses a jiffy-based sched clock. When the platform
registers a new timer for sched_clock, sched_clock can jump backwards.
Once sched_clock_postinit() runs it should be safe to rely on it.

Also sched_clock_cpu() relies on completion of sched_clock_init()
and until that happens sched_clock_cpu() returns zero. This is used
in the irq accounting path, which window-based stats rely upon.
So do not set window_start until sched_clock_cpu() is working.

Change-Id: Ided349de8f8554f80a027ace0f63ea52b1c38c68
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 19:50:44 -08:00
Linux Build Service Account
fea8806a70 Merge "sched: Fix inaccurate accounting for real-time task" 2014-12-06 14:38:08 -08:00
Linux Build Service Account
cd2d717655 Merge "Revert "sched: update_rq_clock() must skip ONE update"" 2014-12-06 14:38:07 -08:00
Linux Build Service Account
b4229d736e Merge "sched: Make RT tasks eligible for boost" 2014-12-05 00:05:48 -08:00
Syed Rameez Mustafa
fce95c9a12 sched: Make RT tasks eligible for boost
During sched boost RT tasks currently end up going to the lowest
power cluster. This can be a performance bottleneck especially if
the frequency and IPC differences between clusters are high.
Furthermore, when RT tasks go over to the little cluster during
boost, the load balancer keeps attempting to pull work over to the
big cluster. This results in pre-emption of the executing RT task
causing more delays. Finally, containing more work on a single
cluster during boost might help save some power if the little
cluster can then enter deeper low power modes.

Change-Id: I177b2e81be5657c23e7ac43889472561ce9993a9
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2014-12-03 19:50:25 -08:00
Linux Build Service Account
a5f4e12c8d Merge "sched: Limit LBF_PWR_ACTIVE_BALANCE to within cluster" 2014-12-03 16:31:51 -08:00
Linux Build Service Account
6be10b2d68 Merge "sched: Packing support until a frequency threshold" 2014-12-03 16:31:32 -08:00
Linux Build Service Account
0899fc9f81 Merge "irq: smp_affinity: Initialize work struct only once" 2014-12-03 16:31:31 -08:00
Srivatsa Vaddagiri
66b5ce9db0 sched: Limit LBF_PWR_ACTIVE_BALANCE to within cluster
When the higher-power (performance) cluster has only one online cpu, we
currently let an idle cpu in the lower-power cluster pull a running task
from the performance cluster via active balance. Active balance for
power-aware reasons is supposed to be restricted to balancing within a
cluster, but the check for this is not correctly implemented.
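
A sketch of the corrected restriction (same_cluster() is an assumed
helper; env is the usual load-balance environment in fair.c):

    /* power-aware active balance must stay within a single cluster */
    if ((env->flags & LBF_PWR_ACTIVE_BALANCE) &&
        !same_cluster(env->src_cpu, env->dst_cpu))
        return 0;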

Change-Id: I5fba7f01ad80c082a9b27e89b7f6b17a6d9cde14
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2014-12-02 15:28:14 +05:30
Srivatsa Vaddagiri
8fd5aa3bf2 sched: Fix inaccurate accounting for real-time task
It is possible that rq->clock_task was not updated in put_prev_task()
in which case we can potentially overcharge a real-time task for time
it did not run. This is because clock_task could be stale and not
represent the exact time the real-time task started running.

Fix this by forcing update of rq->clock_task when real-time task
starts running.
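
One plausible shape of the fix, shown as a sketch (whether the update
belongs in set_curr_task_rt() or another path is an assumption):

    static void set_curr_task_rt(struct rq *rq)
    {
        struct task_struct *p = rq->curr;

        /* refresh the clock so exec_start is not taken from a stale
         * rq->clock_task, which would overcharge the RT task */
        update_rq_clock(rq);

        p->se.exec_start = rq->clock_task;
        dequeue_pushable_task(rq, p);
    }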

Change-Id: I8320bb4e47924368583127b950d987925e8e6a6c
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2014-12-02 15:09:11 +05:30
Srivatsa Vaddagiri
21357f54c1 Revert "sched: update_rq_clock() must skip ONE update"
This reverts commit ab2ff007fe,
as it was found to cause some performance regressions.

Change-Id: Idd71fb04c77f5c9b0bc6bccc66b94ab5a7368471
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2014-12-02 14:46:32 +05:30
Srivatsa Vaddagiri
57da62614c sched: Packing support until a frequency threshold
Add another dimension for task packing based on frequency. This patch
adds a per-cpu tunable, rq->mostly_idle_freq, which when set will
result in tasks being packed on a single cpu in a cluster as long as
the cluster frequency is below the set threshold.

Change-Id: I318e9af6c8788ddf5dfcda407d621449ea5343c0
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2014-12-02 11:48:30 +05:30
Linux Build Service Account
72de2d6cb8 Merge "sched: update_rq_clock() must skip ONE update" 2014-11-30 16:19:16 -08:00
Linux Build Service Account
b4b0ebc5f9 Merge "sched: tighten up jiffy to sched_clock mapping" 2014-11-29 17:17:43 -08:00
Praveen Chidambaram
ee7ee8aa74 irq: smp_affinity: Initialize work struct only once
The work function to handle the irq affinity change is currently being
set from setup_affinity, which is called whenever the affinity changes.
Initialize the work function only once when the irq desc object's
defaults are set up.
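
A generic sketch of the pattern (the field and function names here are
hypothetical, not the actual msm irq code):

    /* one-time setup, run where the irq_desc defaults are established */
    static void irq_affinity_init_once(struct irq_desc *desc)
    {
        INIT_WORK(&desc->affinity_work, irq_affinity_work_fn);
    }

    /* per-change path: only queue the already-initialized work item;
     * re-running INIT_WORK here could corrupt a pending work item */
    static void irq_affinity_changed(struct irq_desc *desc)
    {
        schedule_work(&desc->affinity_work);
    }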

CRs-Fixed: 756463
Change-Id: I66732f8c01cba166c41ce89c329d313eeaea8a7d
Signed-off-by: Praveen Chidambaram <pchidamb@codeaurora.org>
2014-11-25 11:33:31 -07:00
Srivatsa Vaddagiri
ab2ff007fe sched: update_rq_clock() must skip ONE update
Prevent large wakeup latencies from being accounted to the wrong task.

Change-Id: Ie9932acb8a733989441ff2dd51c50a2626cfe5c5
Cc: <stable@vger.kernel.org>
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
CRs-Fixed: 755576
Patch-mainline: http://permalink.gmane.org/gmane.linux.kernel/1677324
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2014-11-25 12:28:35 +05:30
Linux Build Service Account
6383a10831 Merge "perf: Add queued work to remove orphaned child events" 2014-11-22 08:32:07 -08:00
Linux Build Service Account
e999189413 Merge "perf: Set owner pointer for kernel events" 2014-11-22 08:32:07 -08:00
Jiri Olsa
6cd67d5a13 perf: Add queued work to remove orphaned child events
In cases where the owner task exits before the workload and the
workload made some forks, all the events stay around until the last
workload process exits. That's because each child event holds a
parent reference.

We want to release all child events once the parent is gone,
because at that point there is no process to read them anyway, so
they are just eating resources.

This removal races with process exit, which removes all events,
and with fork, which clones events. To stay clear of those two,
add a work queue item that removes orphaned child events for a
context whenever such an event is detected.

A delayed work queue (with delay == 1) is used because this work
is queued under perf's scheduler callbacks; a normal work queue
tries to wake up the worker process, which deadlocks on rq->lock
there.

Cloning from an abandoned parent event is also prevented.
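
A minimal sketch of that queuing decision (the wrapper and the
orphans_remove field name are taken on trust from the referenced
upstream commit; treat them as assumptions here):

    static void perf_schedule_orphans_remove(struct perf_event_context *ctx)
    {
        /* called from perf's scheduler callbacks: a plain work queue
         * would try to wake the worker and deadlock on rq->lock, so
         * queue a delayed work item with a one-jiffy delay instead */
        schedule_delayed_work(&ctx->orphans_remove, 1);
    }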

Change-Id: I20b674d9b56910828444e29a9c0756daac1b4680
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1406896382-18404-4-git-send-email-jolsa@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Git-commit: fadfe7be6e50de7f03913833b33c56cd8fb66bac
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
[sheetals@codeaurora.org: fixed merge conflicts]
Signed-off-by: Sheetal Sahasrabudhe <sheetals@codeaurora.org>
2014-11-21 14:35:38 -05:00
Jiri Olsa
18bebb752b perf: Set owner pointer for kernel events
Add a fake EVENT_OWNER_KERNEL owner pointer value for kernel perf
events, so they can be distinguished from user events, which need
special care in the following patch.
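
Per the description (and the referenced upstream commit), the likely
shape is a sentinel owner pointer; treat the exact value as an assumption:

    /* sentinel owner for events created by the kernel itself */
    #define EVENT_OWNER_KERNEL ((void *) -1)

    /* in perf_event_create_kernel_counter(): */
    event->owner = EVENT_OWNER_KERNEL;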

Change-Id: I975186151644af709d7fdfc13f1ce9d2ebd4c83b
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1406896382-18404-3-git-send-email-jolsa@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Git-commit: f86977620ee4635f26befcf436700493a38ce002
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
[sheetals@codeaurora.org: fixed merge conflicts]
Signed-off-by: Sheetal Sahasrabudhe <sheetals@codeaurora.org>
2014-11-21 14:35:16 -05:00
Linux Build Service Account
aa285b577f Merge "sched: per-cpu mostly_idle threshold" 2014-11-20 15:36:30 -08:00
Linux Build Service Account
62b1d26801 Merge "sched: Add API to set task's initial task load" 2014-11-20 15:36:29 -08:00
Steve Muckle
f17fe85baf sched: tighten up jiffy to sched_clock mapping
The tick code already tracks exact time a tick is expected
to arrive. This can be used to eliminate slack in the jiffy
to sched_clock mapping that aligns windows between a caller
of sched_set_window and the scheduler itself.

Change-Id: I9d47466658d01e6857d7457405459436d504a2ca
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-11-19 15:06:33 -08:00
Ram Chandrasekar
ecd44b989f printk: Make the console flush configurable in hotplug path
Make the console flush configurable in the hotplug code path.

The thread that initiates the hotplug can get scheduled out while
trying to acquire the console lock, increasing hotplug latency.
This option allows the console flush to be selectively disabled,
which in turn reduces hotplug latency.

Change-Id: I0e6cd50a7e2ab14cab987815e47352b6b71f187a
Signed-off-by: Ram Chandrasekar <rkumbako@codeaurora.org>
2014-11-18 19:16:25 -07:00
Patrick Daly
5a52a6e15e printk: Save interrupt flags when enabling __log_oops_buf
oops_printk_start() may be called with interrupts disabled; save and
restore the interrupt flags properly.

Found after a spinlock was acquired recursively in the following backtrace:

\raw_spin_lock_irq
\run_timer_softirq
...(skipped)
\el1_irq
()
\printk\oops_printk_start
\panic\oops_enter
\traps\die
\fault\__do_kernel_fault.part.5
\fault\do_page_fault
\fault\do_translation_fault
\fault\do_mem_abort
()
\get_next_timer_interrupt
\tick-sched\__tick_nohz_idle_enter
\tick-sched\tick_nohz_idle_enter
\idle\cpu_startup_entry
\init/main\rest_init
\init/main\start_kernel

CRs-fixed: 754837
Change-Id: Ib9b5079b0177833d40b14ddf0f0458be5676509a
Signed-off-by: Patrick Daly <pdaly@codeaurora.org>
2014-11-17 12:30:52 -08:00
Linux Build Service Account
b3491934ec Merge "idle: Implement a per-cpu idle-polling mode" 2014-11-15 20:27:53 -08:00
Linux Build Service Account
2492f77873 Merge "idle: exit the cpu_idle_poll loop if cpu_idle_force_poll is cleared" 2014-11-15 20:27:53 -08:00
Linux Build Service Account
0866af874e Merge "idle: Add a memory barrier after setting cpu_idle_force_poll" 2014-11-15 20:27:52 -08:00
Syed Rameez Mustafa
a40d3ce56e sched: Avoid unnecessary load balance when tasks don't fit on dst_cpu
When considering pulling over a task that does not fit on the
destination CPU, make sure that the busiest group has exceeded its
capacity. While the change is applicable to all groups, the biggest
impact will be on migrating big tasks to little CPUs. This should
only happen when the big cluster is no longer capable of balancing
load within the cluster. This change should have no impact on single
cluster systems.

Change-Id: I6d1ef0e0d878460530f036921ce4a4a9c1e1394b
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2014-11-13 12:24:31 -08:00
Vikram Mulukutla
fd40937a44 idle: Implement a per-cpu idle-polling mode
cpu_idle_poll_ctrl provides a way of switching the
idle thread to use cpu_idle_poll instead of the arch
specific lower power mode callbacks (arch_cpu_idle).
cpu_idle_poll spins on a flag in a tight loop with
interrupts enabled.

In some cases it may be useful to enter the tight loop
polling mode only on a particular CPU. This allows
other CPUs to continue using the arch specific low
power mode callbacks. Provide an API that allows this.

Change-Id: I7c47c3590eb63345996a1c780faa79dbd1d9fdb4
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
2014-11-12 22:49:15 -08:00
Vikram Mulukutla
1142bb4b83 idle: exit the cpu_idle_poll loop if cpu_idle_force_poll is cleared
cpu_idle_poll_ctrl allows the enabling/disabling of the idle
polling mode; this mode allows a CPU to spin waiting for a
new task to be scheduled rather than having to execute the
arch specific idle code.

However, the loop that checks for a new task does not look
at the flag that enables idle polling mode. So, the CPU may
continue to spin even though the aforementioned flag has
been cleared. Since the CPU is already in idle, it may be
a while before a task is scheduled, precluding potential
power savings.

Modify the while loop conditional in question to also check
if the cpu_idle_force_poll flag is set.
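
A sketch of the adjusted loop, close to the generic cpu_idle_poll() with
tracing elided (the exact conditional in this tree is an assumption):

    static inline int cpu_idle_poll(void)
    {
        rcu_idle_enter();
        local_irq_enable();

        /* also re-check cpu_idle_force_poll so the CPU stops spinning
         * as soon as polling mode is disabled */
        while (!tif_need_resched() && cpu_idle_force_poll)
            cpu_relax();

        rcu_idle_exit();
        return 1;
    }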

Change-Id: Ia2e83af97890dc399b86e090459a41d31ce28b6c
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
2014-11-12 22:49:13 -08:00
Vikram Mulukutla
a00c5ba3d1 idle: Add a memory barrier after setting cpu_idle_force_poll
To ensure that CPUs see cpu_idle_force_poll flag
updates, add a memory barrier after writing to
the flag.
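
A sketch of the described change on top of the generic cpu_idle_poll_ctrl():

    void cpu_idle_poll_ctrl(bool enable)
    {
        if (enable) {
            cpu_idle_force_poll++;
            /* make the flag update visible to other CPUs promptly */
            smp_mb();
        } else {
            cpu_idle_force_poll--;
            smp_mb();
            WARN_ON_ONCE(cpu_idle_force_poll < 0);
        }
    }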

Change-Id: Ic3fdef7d17b673247bce5093530ce8aa08694632
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
2014-11-12 22:49:06 -08:00
Linux Build Service Account
14aa11a8c0 Merge "sched: print sched_cpu_load tracepoint for all CPUs" 2014-11-12 17:02:03 -08:00
Linux Build Service Account
9944234eb1 Merge "msm: rtb: Add timestamp to rtb logging" 2014-11-12 13:57:55 -08:00
Vignesh Radhakrishnan
786e8fd8aa msm: rtb: Add timestamp to rtb logging
RTB logging currently does not record the time
at which each entry was logged. This timestamp is
useful for comparing against dmesg during debug.
The bytes for the timestamp are obtained by reducing
the sentinel array size from eleven to three,
freeing eight bytes to store the time.
This keeps the size of the layout at 32 bytes.

Change-Id: Ifc7e4d2e89ed14d2a97467891ebefa9515983630
Signed-off-by: Vignesh Radhakrishnan <vigneshr@codeaurora.org>
2014-11-11 14:17:24 +05:30
Linux Build Service Account
bc1f02a97c Merge "irq: Allow multiple clients to register for irq affinity notification" 2014-11-10 22:03:32 -08:00
Steve Muckle
e3d8a00dab sched: print sched_cpu_load tracepoint for all CPUs
When select_best_cpu() is called because a task is on a suboptimal
CPU, certain CPUs are skipped because moving the task there would
not make things any better. For debugging purposes, though, it
is useful to always see the state of all CPUs.

Change-Id: I76965663c1feef5c4cfab9909e477b0dcf67272d
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-11-10 19:22:51 -08:00
Linux Build Service Account
9b62c0def4 Merge "sched: use C-states in non-small task wakeup placement logic" 2014-11-10 03:57:51 -08:00
Lina Iyer
1d5b600b50 irq: Allow multiple clients to register for irq affinity notification
PM QoS and other idle frameworks can do a better job of addressing power
and performance requirements for a cpu, knowing the IRQs that are
affine to that cpu. If a performance request is placed against serving
the IRQ faster and if the IRQ is affine to a set of cpus, then setting
the performance requirements only on those cpus helps save power on the
rest of the cpus. The PM QoS framework is one such framework interested in
knowing the smp_affinity of an IRQ and the change notification in this
regard. QoS requests for the CPU_DMA_LATENCY constraint currently apply
to all cpus, but when attached to an IRQ, can be applied only to the set
of cpus that IRQ's smp_affinity is set to. This allows other cpus to
enter deeper sleep states to save power. More than one framework/driver
can be interested in such information.

The current implementation allows only a single notification callback
whenever the IRQ's SMP affinity is changed. Adding a second notification
punts the existing notifier function out of registration.  Add a list of
notifiers, allowing multiple clients to register for irq affinity
notifications.

The kref object associated with the struct irq_affinity_notify was used
to prevent the notifier object from being released if there is a pending
notification. It was incremented before the work item was scheduled and
was decremented when the notification was completed. If the kref count
was zero at the end of it, the release function gets a callback allowing
the module to release the irq_affinity_notify memory. This works well
for a single notification. When multiple clients are registered, no
single kref object can be used. Hence, when the work function runs, it
increases the kref count using kref_get_unless_zero(), so if the
module has already unregistered the irq_affinity_notify object while the
work function was pending, it is not notified.
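
A hedged sketch of the per-notifier reference handling described above
(the notifier list and the affinity_work field are assumed names):

    static void irq_affinity_notify_work(struct work_struct *work)
    {
        struct irq_desc *desc =
            container_of(work, struct irq_desc, affinity_work);
        struct irq_affinity_notify *notify;

        list_for_each_entry(notify, &desc->affinity_notify_list, list) {
            /* skip clients that already unregistered: taking the
             * reference fails once their kref has dropped to zero */
            if (!kref_get_unless_zero(&notify->kref))
                continue;

            notify->notify(notify, desc->irq_data.affinity);
            kref_put(&notify->kref, notify->release);
        }
    }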

Change-Id: If2e38ce8d7c43459ba1604d5b4798d1bad966997
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
Patch-mainline: linux-pm @ Wed, 27 Aug 2014 13:18:28
https://lkml.org/lkml/2014/8/27/609
[mnalajal@codeaurora.org: resolve NON SMP target compilation issues]
Signed-off-by: Murali Nalajala <mnalajal@codeaurora.org>
2014-11-09 15:17:27 -08:00
Srivatsa Vaddagiri
ed7d7749e9 sched: per-cpu mostly_idle threshold
sched_mostly_idle_load and sched_mostly_idle_nr_run knobs help pack
tasks on cpus to some extent. In some cases, it may be desirable to
have different packing limits for different cpus. For example, pack to
a higher limit on high-performance cpus compared to power-efficient
cpus.

This patch removes the global mostly_idle tunables and makes them
per-cpu, thus letting task packing behavior be controlled in a
fine-grained manner.

Change-Id: Ifc254cda34b928eae9d6c342ce4c0f64e531e6c2
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2014-11-06 15:27:00 +05:30
Srivatsa Vaddagiri
f0e281597c sched: Add API to set task's initial task load
Add a per-task attribute, init_load_pct, that is used to initialize
newly created children's initial task load. This helps important
applications launch their child tasks on cpus with the highest capacity.

Change-Id: Ie9665fd2aeb15203f95fd7f211c50bebbaa18727
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2014-11-05 14:26:59 +05:30
Syed Rameez Mustafa
297c4ccce8 sched: use C-states in non-small task wakeup placement logic
Currently when a non-small task wakes up, the task placement logic
first tries to find the least loaded CPU before breaking any ties
via the power cost of running the task on those CPUs. When the power
cost is also the same, however, the scheduler just selects the first CPU
it comes across. Use C-states to further break ties when the power
cost is the same for multiple CPUs. The scheduler will now pick a
CPU in the shallowest C-state.

Change-Id: Ie1401b305fa02758a2f7b30cfca1afe64459fc2b
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2014-11-04 14:11:24 -08:00
Colin Cross
447d7848b4 mm: fix prctl_set_vma_anon_name
prctl_set_vma_anon_name could attempt to set the name across
two vmas at the same time due to a typo, which might corrupt
the vma list.  Fix it to use tmp instead of end to limit
the name setting to a single vma at a time.
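
A sketch of the corrected loop shape (update_vma_anon_name() is named
here as an assumption about the helper being called):

    for (;;) {
        /* here: vma->vm_start <= start < vma->vm_end */
        tmp = vma->vm_end;
        if (end < tmp)
            tmp = end;

        /* previously "end" was passed here, which could span past the
         * current vma; clamping to "tmp" touches one vma at a time */
        error = update_vma_anon_name(vma, &prev, start, tmp, name);
        if (error || tmp >= end)
            break;

        start = tmp;
        vma = prev->vm_next;
    }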

Change-Id: I8dc2353f32b5f8510986d01c5f27d450b645902a
Reported-by: Jed Davis <jld@mozilla.com>
Signed-off-by: Colin Cross <ccross@android.com>
Git-commit: 9bc0c15675840178cee1486c2a7f25faead1518e
Git-Repo: https://android.googlesource.com/kernel/common.git
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
2014-11-03 15:20:47 +05:30