Commit graph

16507 commits

Author SHA1 Message Date
Biswajit Paul
e758417e7c kernel: Restrict permissions of /proc/iomem.
The permissions of /proc/iomem are currently -r--r--r--, so everyone can
see its content. As iomem contains information about the physical memory
layout of the device, restrict the information to root only.

Change-Id: If0be35c3fac5274151bea87b738a48e6ec0ae891
CRs-Fixed: 786116
Signed-off-by: Biswajit Paul <biswajitpaul@codeaurora.org>
Signed-off-by: Avijit Kanti Das <avijitnsec@codeaurora.org>
2015-02-09 16:17:30 -08:00
Vignesh Radhakrishnan
e07ac5d19c kmemleak: Make module scanning optional using config
Currently kmemleak scans module memory as provided
in the area list. This takes up a lot of time with
irqs and preemption disabled. Provide a compile-time
config option to make this functionality optional.

Change-Id: I5117705e7e6726acdf492e7f87c0703bc1f28da0
Signed-off-by: Vignesh Radhakrishnan <vigneshr@codeaurora.org>
2015-02-04 18:42:37 +05:30
Linux Build Service Account
651997c82a Merge "sched: Remove sched_wake_to_idle for HMP scheduler" 2015-02-03 15:23:53 -08:00
Linux Build Service Account
df92d19819 Merge "sched: Support CFS_BANDWIDTH feature in HMP scheduler" 2015-02-03 15:23:51 -08:00
Linux Build Service Account
54dd18b6a1 Merge "sched: Consolidate hmp stats into their own struct" 2015-02-03 15:23:50 -08:00
Linux Build Service Account
d00c2b9d82 Merge "sched: Add userspace interface to set PF_WAKE_UP_IDLE" 2015-02-03 15:23:50 -08:00
Srivatsa Vaddagiri
930bca74d2 sched: Remove sched_wake_to_idle for HMP scheduler
The sched_wake_to_idle tunable is obsoleted by the sched_prefer_idle
tunable in the HMP scheduler. Remove it when CONFIG_SCHED_HMP is defined.

Change-Id: I7bcf12cc3c50df5ef09261f097711c9f29ec63a4
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2015-01-28 14:22:12 +05:30
Srivatsa Vaddagiri
2385d33016 sched: Support CFS_BANDWIDTH feature in HMP scheduler
CFS_BANDWIDTH feature is not currently well-supported by HMP
scheduler. Issues encountered include a kernel panic when
rq->nr_big_tasks count becomes negative. This patch fixes HMP
scheduler code to better handle CFS_BANDWIDTH feature. The most
prominent change introduced is maintenance of HMP stats (nr_big_tasks,
nr_small_tasks, cumulative_runnable_avg) per 'struct cfs_rq' in
addition to being maintained in each 'struct rq'. This allows HMP
stats to be updated easily when a group is throttled on a cpu.

Change-Id: Iad9f378b79ab5d9d76f86d1775913cc1941e266a
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2015-01-28 14:13:19 +05:30
Srivatsa Vaddagiri
bbef4c5e1b sched: Consolidate hmp stats into their own struct
Key hmp stats (nr_big_tasks, nr_small_tasks and
cumulative_runnable_average) are currently maintained per-cpu in
'struct rq'. Merge those stats in their own structure (struct
hmp_sched_stats) and modify impacted functions to deal with the newly
introduced structure. This cleanup is required for a subsequent patch
which fixes various issues with use of CFS_BANDWIDTH feature in HMP
scheduler.

Change-Id: Ieffc10a3b82a102f561331bc385d042c15a33998
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2015-01-28 14:13:14 +05:30
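The consolidation above can be sketched as follows; the struct name comes from the commit message, but the field types and the accounting helper are assumptions for illustration:

```c
#include <assert.h>

/* Per-cpu HMP stats gathered into one struct, as the commit describes
 * (field types assumed for illustration). */
struct hmp_sched_stats {
        int nr_big_tasks;
        int nr_small_tasks;
        unsigned long long cumulative_runnable_avg;
};

/* Hypothetical helper: with the stats consolidated, accounting a newly
 * runnable task touches one structure instead of scattered rq fields. */
static void inc_hmp_stats(struct hmp_sched_stats *stats, int is_big,
                          unsigned long long demand)
{
        if (is_big)
                stats->nr_big_tasks++;
        else
                stats->nr_small_tasks++;
        stats->cumulative_runnable_avg += demand;
}
```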
Linux Build Service Account
3ff6a5a197 Merge "tracing: power: Add trace events for core control" 2015-01-27 06:53:12 -08:00
Srivatsa Vaddagiri
7e767d3e45 sched: Add userspace interface to set PF_WAKE_UP_IDLE
The sched_prefer_idle flag controls whether tasks can be woken to any
available idle cpu. It may be desirable to set sched_prefer_idle to 0
so that most tasks wake up to non-idle cpus under the mostly_idle
threshold, and to have specialized tasks override this behavior through
other means. The per-task PF_WAKE_UP_IDLE flag provides exactly that. It
lets tasks with PF_WAKE_UP_IDLE flag set be woken up to any available
idle cpu independent of sched_prefer_idle flag setting. Currently
only kernel-space API exists to set PF_WAKE_UP_IDLE flag for a task.
This patch adds a user-space API (in /proc filesystem) to set
PF_WAKE_UP_IDLE flag for a given task. /proc/[pid]/sched_wake_up_idle
file can be written to set or clear PF_WAKE_UP_IDLE flag for a given
task.

Change-Id: I13a37e740195e503f457ebe291d54e83b230fbeb
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2015-01-27 19:39:57 +05:30
Junjie Wu
6c68b1215d tracing: power: Add trace events for core control
Add trace events for core control module.

Change-Id: I36da5381709f81ef1ba82025cd9cf8610edef3fc
Signed-off-by: Junjie Wu <junjiew@codeaurora.org>
2015-01-22 17:31:16 -08:00
Linux Build Service Account
4c5b2873a2 Merge "sched: add sched feature FORCE_CPU_THROTTLING_IMMINENT" 2015-01-14 17:12:07 -08:00
Linux Build Service Account
a2ada2d4c8 Merge "sched: continue to search less power efficient cpu for load balancer" 2015-01-14 17:12:06 -08:00
Linux Build Service Account
81921f5681 Merge "cpu_pm: Add level to the cluster pm notification" 2015-01-13 12:24:49 -08:00
Joonwoo Park
66bd788705 sched: add sched feature FORCE_CPU_THROTTLING_IMMINENT
Add a new sched feature FORCE_CPU_THROTTLING_IMMINENT to perform
migration due to EA without checking frequency throttling.  This option
can give us better debugging and verification capability.

Change-Id: Iba445961a7f9812528b4e3aa9c6ddf47a3aad583
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2015-01-08 11:22:33 -08:00
Joonwoo Park
2c7cc326ed sched: continue to search less power efficient cpu for load balancer
When choosing a CPU to do power-aware active balance from, the load
balancer currently selects the first eligible CPU it finds, even if
there is another eligible CPU with a higher power cost. This can lead
to suboptimal load balancing behavior and extra migrations, impacting
both power and performance.

Achieve better power and performance by continuing to search for the
least power efficient cpu as long as that cpu's load average is higher
than or equal to the busiest cpu found so far.

CRs-fixed: 777341
Change-Id: I14eb21ab725bf7dab88b2e1e169aced6f2d712ca
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2015-01-08 11:22:33 -08:00
Murali Nalajala
16323e6eef cpu_pm: Add level to the cluster pm notification
Cluster pm notifications without level information make it difficult
for the registered drivers to figure out when the last coherency level
is going into power collapse.

Send notifications with level information that allows the registered
drivers to easily determine the cluster level that is going in/out of
power collapse.

There is an issue with this implementation. The GIC driver saves and
restores the distributor registers as part of cluster notifications. On
newer platforms multiple cluster levels are defined (e.g. l2, cci), and
these cluster level notifications can happen independently.
On MSM platforms the GIC is still active while the cluster sleeps in
idle, causing the GIC state to be overwritten with an incorrect previous
state of the interrupts. This leads to a system hang. Do not save and
restore on any L2 and higher cache coherency level sleep entry and exit.

Change-Id: I31918d6383f19e80fe3b064cfaf0b55e16b97eb6
Signed-off-by: Archana Sathyakumar <asathyak@codeaurora.org>
Signed-off-by: Murali Nalajala <mnalajal@codeaurora.org>
2015-01-07 22:31:58 -08:00
Syed Rameez Mustafa
13e853e988 sched: Update cur_freq for offline CPUs in notifier callback
cpufreq governor does not send frequency change notifications for
offline CPUs. This means that a hot removed CPU's cur_freq information
can get stale if there is a frequency change while that CPU is offline.
When the offline CPU is hotplugged back in, all subsequent load
calculations are based off the stale information until another frequency
change occurs and the corresponding set of notifications are sent out.
Avoid this incorrect load tracking by updating the cur_freq for all
CPUs in the same frequency domain.

Change-Id: Ie11ad9a64e7c9b115d01a7c065f22d386eb431d5
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2015-01-05 15:36:33 -08:00
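The idea of the fix can be sketched with assumed data structures (a flat cur_freq array standing in for per-rq state): on a frequency-change notification, update cur_freq for every CPU in the same frequency domain, whether online or not.

```c
#include <assert.h>

#define NR_CPUS 4

/* Stand-in for per-rq cur_freq state (assumed layout). */
static unsigned int cur_freq[NR_CPUS];

/* On a frequency change, update all CPUs sharing the frequency domain,
 * including hot-removed ones, so their cur_freq never goes stale. */
static void freq_change_notifier(const int *domain_cpus, int n,
                                 unsigned int new_freq_khz)
{
        int i;

        for (i = 0; i < n; i++)
                cur_freq[domain_cpus[i]] = new_freq_khz;
}
```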
Linux Build Service Account
05e6c29de4 Merge "sched: add preference for prev_cpu in HMP task placement" 2014-12-29 17:31:49 -08:00
Linux Build Service Account
380cadc7f3 Merge "sched: Per-cpu prefer_idle flag" 2014-12-29 17:31:47 -08:00
Linux Build Service Account
307e71816e Merge "sched: Consider PF_WAKE_UP_IDLE in select_best_cpu()" 2014-12-29 17:31:47 -08:00
Olav Haugan
30d383d45b sched: Fix overflow in max possible capacity calculation
The max possible capacity calculation might overflow given a large
enough max possible frequency and capacity. Fix the potential for
overflow.

Change-Id: Ie9345bc657988845aeb450d922052550cca48a5f
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2014-12-26 11:25:32 -08:00
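The commit does not show the exact formula, so the sketch below is only an assumption about the general shape of such a fix: widen to 64 bits before multiplying two potentially large 32-bit quantities.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical capacity scaling: capacity * max_freq / min_freq. The
 * cast to uint64_t before the multiply is the essence of the fix; the
 * 32-bit product of these operands could wrap around. */
static uint64_t scale_max_capacity(uint32_t capacity, uint32_t max_freq_khz,
                                   uint32_t min_freq_khz)
{
        return (uint64_t)capacity * max_freq_khz / min_freq_khz;
}
```

With capacity 2048 and max frequency 2.6 GHz the intermediate product exceeds 2^32, so the widening cast is what keeps the result correct.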
Steve Muckle
cecf6c46cd sched: add preference for prev_cpu in HMP task placement
At present the HMP task placement algorithm scans CPUs in numerical
order and if two identical options are found, the first one
encountered is chosen, even if it is different from the task's
previous CPU.

Add a bias towards the task's previous CPU in such situations. Any
time two or more CPUs are considered equivalent (load, C-state, power
cost), if one of them is the task's previous CPU, bias towards that
CPU. The algorithm is otherwise unchanged.

CRs-Fixed: 772033
Change-Id: I511f5b929c2bfa6fdea9e7433893c27b29ed8026
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-23 15:54:35 -08:00
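A sketch of the tie-break under assumed scoring fields (load, C-state, power cost); the real select_best_cpu() is more involved, but the bias works the same way: only when two candidates are otherwise equivalent does prev_cpu win.

```c
#include <assert.h>

/* Assumed per-candidate score for illustration. */
struct cpu_score { int cpu; int load; int cstate; int power; };

/* Returns nonzero if 'a' should be preferred over 'b'. */
static int better(const struct cpu_score *a, const struct cpu_score *b,
                  int prev_cpu)
{
        if (a->load != b->load)
                return a->load < b->load;
        if (a->cstate != b->cstate)
                return a->cstate < b->cstate;
        if (a->power != b->power)
                return a->power < b->power;
        /* Equivalent candidates: bias toward the task's previous CPU. */
        return a->cpu == prev_cpu;
}

static int select_cpu(const struct cpu_score *c, int n, int prev_cpu)
{
        int best = 0, i;

        for (i = 1; i < n; i++)
                if (better(&c[i], &c[best], prev_cpu))
                        best = i;
        return c[best].cpu;
}
```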
Linux Build Service Account
5da5828a23 Merge "sched: Add sysctl to enable power aware scheduling" 2014-12-23 13:21:41 -08:00
Linux Build Service Account
4ef567e8bc Merge "sched: Ensure no active EA migration occurs when EA is disabled" 2014-12-23 05:58:15 -08:00
Srivatsa Vaddagiri
599bfc7503 sched: Per-cpu prefer_idle flag
Remove the global sysctl_sched_prefer_idle flag and replace it with a
per-cpu prefer_idle flag. The per-cpu flag is expected to be the same
for all cpus in a cluster. It thus provides a convenient means to
disable packing in one cluster while allowing packing in another.

Change-Id: Ie4cc73bb1a55b4eac5697be38e558546161faca1
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2014-12-23 09:52:43 +05:30
Srivatsa Vaddagiri
92ba1d55f3 sched: Consider PF_WAKE_UP_IDLE in select_best_cpu()
sysctl_sched_prefer_idle controls the selection of idle cpus for waking
tasks. In some cases waking to an idle cpu helps performance, while in
other cases it hurts (as tasks incur the latency associated with
C-state wakeup). Ideally the scheduler would adapt prefer_idle behavior
based on the task that is waking up, but that is hard for the scheduler
to figure out by itself. The PF_WAKE_UP_IDLE hint can be provided by an
external module/driver in such cases to guide the scheduler in
preferring an idle cpu for select tasks irrespective of the
sysctl_sched_prefer_idle flag.

This patch enhances select_best_cpu() to consider PF_WAKE_UP_IDLE
hint. Wakeup posted from any task that has PF_WAKE_UP_IDLE set is a
hint for scheduler to prefer idle cpu for waking tasks. Similarly
scheduler will attempt to place any task with PF_WAKE_UP_IDLE set on
idle cpu when they wakeup.

CRs-Fixed: 773101
Change-Id: Ia8bf334d98fd9fd2ff9eda875430497d55d64ce6
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2014-12-23 09:52:27 +05:30
Olav Haugan
7e13b27b8b sched: Add sysctl to enable power aware scheduling
Add sysctl to enable energy awareness at runtime. This is useful for
performance/power tuning/measurements and debugging. In addition this
will match up with the Documentation/scheduler/sched-hmp.txt documentation.

Change-Id: I0a9185498640d66917b38bf5d55f6c59fc60ad5c
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2014-12-22 14:37:33 -08:00
Olav Haugan
294b88dc67 sched: Ensure no active EA migration occurs when EA is disabled
There exists a flag called "sched_enable_power_aware" that is not honored
everywhere. Fix this.

Change-Id: I62225939b71b25970115565b4e9ccb450e252d7c
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2014-12-22 14:23:55 -08:00
Vignesh Radhakrishnan
230faea50d irq_work: register irq_work_cpu_notify in early init
Currently irq_work_cpu_notify is registered using
device_initcall().

In cases where a CPU is hotplugged early (for
example, the thermal engine hotplugging a CPU out),
the notifier may not yet be registered by the time
the CPU is hotplugged out. irq_work uses the
CPU_DYING notifier to clear out pending irq_work
items, but since the cpu notifier is not registered
that early, pending items are never run, as the
pending list is per-cpu.

One specific scenario where this impacts the system
is the RCU framework using irq_work to wake up and
complete cleanup operations. In this scenario we
noticed that RCU operations need cleanup on the
hotplugged CPU.

Fix this by registering irq_work_cpu_notify in
early init.

CRs-Fixed: 768180
Change-Id: Ibe7f5c77097de7a342eeb1e8d597fb2f72185ecf
Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
Signed-off-by: Vignesh Radhakrishnan <vigneshr@codeaurora.org>
2014-12-22 14:30:12 +05:30
Joonwoo Park
fc994a4b9e sched: take account of irq preemption when calculating irqload delta
If an irq is raised while sched_irqload() is calculating the irqload
delta, sched_account_irqtime() can update the rq's irqload_ts to a
value greater than the jiffies stored in sched_irqload()'s context, so
the delta can be negative. A negative delta means there was a recent
irq occurrence, so remove the improper BUG_ON().

CRs-fixed: 771894
Change-Id: I5bb01b50ec84c14bf9f26dd9c95de82ec2cd19b5
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2014-12-16 16:56:50 -08:00
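The race can be sketched with assumed types and names: a negative delta simply means the rq's irqload timestamp moved past the caller's snapshot, i.e. an irq occurred just now, so it should count as recent irq activity rather than trigger a BUG.

```c
#include <assert.h>

/* Hypothetical predicate: did this CPU see irq activity within the
 * last 'window' time units? A delta that went negative means the
 * timestamp was advanced past 'now' by a racing irq, which is itself
 * proof of recent irq activity. */
static int cpu_has_recent_irq(long long now, long long irqload_ts,
                              long long window)
{
        long long delta = now - irqload_ts;

        if (delta < 0)  /* timestamp advanced past 'now': recent irq */
                return 1;
        return delta < window;
}
```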
Joonwoo Park
2cec55a2e2 sched: Prevent race conditions where upmigrate_min_nice changes
When upmigrate_min_nice is changed, dec_nr_big_small_task() can trigger
BUG_ON(rq->nr_big_tasks < 0). This happens when a task was considered a
non-big task because its nice > upmigrate_min_nice, and
upmigrate_min_nice is later changed to a higher value so the task
becomes a big task. In this case the runqueue still incorrectly has
nr_big_tasks = 0 with the current implementation. Consequently the next
scheduler tick sees a big task to schedule and tries to decrease
nr_big_tasks, which is already 0.

Introduce sched_upmigrate_min_nice which is updated atomically and re-count
the number of big and small tasks to fix BUG_ON() triggering.

Change-Id: I6f5fc62ed22bbe5c52ec71613082a6e64f406e58
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2014-12-16 11:05:22 -08:00
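The idea of the fix can be sketched with a hypothetical miniature runqueue (the names and the "big task" predicate below are assumptions): after the threshold changes, re-count big tasks from scratch instead of patching the counter incrementally, so it can never go negative.

```c
#include <assert.h>

/* Hypothetical miniature runqueue for illustration. */
struct mini_rq {
        int nice[8];
        int nr_tasks;
        int nr_big_tasks;
};

/* Assumed predicate: a task is "big" when its nice value is at or
 * below the upmigrate threshold (lower nice = higher priority). */
static int is_big(int nice, int upmigrate_min_nice)
{
        return nice <= upmigrate_min_nice;
}

/* Re-count rather than incrementally adjust, so a threshold change
 * can never drive nr_big_tasks below zero. */
static void recount_big_tasks(struct mini_rq *rq, int upmigrate_min_nice)
{
        int i, n = 0;

        for (i = 0; i < rq->nr_tasks; i++)
                if (is_big(rq->nice[i], upmigrate_min_nice))
                        n++;
        rq->nr_big_tasks = n;
}
```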
Olav Haugan
4beca1fd4d sched: Avoid frequent task migration due to EA in lb
A new tunable exists that allows task migration to be throttled when the
scheduler tries to do task migrations due to Energy Awareness (EA). This
tunable is only taken into account when migrations occur in the tick
path. Extend the usage of the tunable to take into account the load
balancer (lb) path also.

In addition ensure that the start of task execution on a CPU is updated
correctly. If a task is preempted but still runnable on the same CPU the
start of execution should not be updated. Only update the start of
execution when a task wakes up after sleep or moves to a new CPU.

Change-Id: I6b2a8e06d8d2df8e0f9f62b7aba3b4ee4b2c1c4d
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2014-12-13 06:43:49 -08:00
Olav Haugan
a7bc092692 sched: Avoid migrating tasks to little cores due to EA
If, during the check of whether migration is needed, we find that a
lower power CPU is available, we proceed to find a new CPU for the
task. However, by the time we search for a new CPU the lower power CPU
might no longer be available. Abort the attempt to migrate the task in
this case.

CRs-Fixed: 764788
Change-Id: I867923a82b95c599278b81cd73bb102b6aff4d03
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2014-12-13 06:43:48 -08:00
Olav Haugan
2c320f2ffa sched: Add temperature to cpu_load trace point
Add the current CPU temperature to the sched_cpu_load trace point.
This will allow us to track the CPU temperature.

CRs-Fixed: 764788
Change-Id: Ib2e3559bbbe3fe07a6b7c8115db606828bc36254
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2014-12-13 06:43:48 -08:00
Olav Haugan
0c0d18bb15 sched: Only do EA migration when CPU throttling is imminent
We do not want to migrate tasks unnecessarily, to avoid cache-miss and
other migration latencies that could affect the performance of the
system. Add a check to only try EA migration when CPU frequency
throttling is imminent.

CRs-Fixed: 764788
Change-Id: I92e86e62da10ce15f1e76a980df3545e93d76348
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2014-12-13 06:34:55 -08:00
Srivatsa Vaddagiri
32c6ac7c62 sched: Avoid frequent migration of running task
Power values for cpus can drop quite considerably when they go idle.
As a result, the best choice for running a single task in a cluster
can vary quite rapidly. As the task keeps hopping cpus, other cpus go
idle and start being seen as more favorable targets for running a task,
leading to the task migrating almost every scheduler tick!

Prevent this by keeping track of when a task started running on a cpu
and allowing task migration in tick path (migration_needed()) on
account of energy efficiency reasons only if the task has run
sufficiently long (as determined by sysctl_sched_min_runtime
variable).

Note that currently the sysctl_sched_min_runtime setting is considered
only in the scheduler_tick()->migration_needed() path and not in the
idle_balance() path. In other words, a task could be migrated to
another cpu which did an idle_balance(). This limitation should not
affect the high-frequency migrations seen typically (when a single
high-demand task runs on a high-performance cpu).

CRs-Fixed: 756570
Change-Id: I96413b7a81b623193c3bbcec6f3fa9dfec367d99
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2014-12-13 06:34:55 -08:00
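The gating check can be sketched as follows, with assumed names and units (nanoseconds, and a hypothetical default for sysctl_sched_min_runtime): EA-driven migration in the tick path is allowed only once the task has run long enough on its current CPU.

```c
#include <assert.h>

/* Assumed default; the commit names the variable but not its value. */
static unsigned long long sysctl_sched_min_runtime = 10000000ULL; /* 10 ms */

/* Allow EA migration from the tick path only if the task has run on
 * this CPU for at least sysctl_sched_min_runtime. */
static int ea_migration_allowed(unsigned long long now_ns,
                                unsigned long long run_start_ns)
{
        return now_ns - run_start_ns >= sysctl_sched_min_runtime;
}
```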
Steve Muckle
1bfb9a0dd3 sched: treat sync waker CPUs with 1 task as idle
When a CPU with one task performs a sync wakeup, its
one task is expected to sleep immediately so this CPU
should be treated as idle for the purposes of CPU selection
for the waking task.

This is only done when idle CPUs are the preferred targets
for non-small task wakeups. When prefer_idle is 0, the
CPU is left as non-idle in the selection logic so it is still
a preferred candidate for the sync wakeup.

Change-Id: I65c6535169293e8ba0c37fb5e88aec336338f7d7
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 23:53:58 -08:00
Syed Rameez Mustafa
b3c5c54d72 sched: extend sched_task_load tracepoint to indicate prefer_idle
Prefer idle determines whether the scheduler prefers an idle CPU or a
busy CPU to wake up a task on. Knowing the correct value of this
tunable is essential in understanding the placement decisions made in
select_best_cpu().

Change-Id: I955d7577061abccb65d01f560e1911d9db70298a
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
2014-12-10 23:53:57 -08:00
Steve Muckle
84370f934b sched: extend sched_task_load tracepoint to indicate sync wakeup
Sync wakeups provide a hint to the scheduler about upcoming task
activity. Knowing which wakeups are sync wakeups from logs will
assist in workload analysis.

Change-Id: I6ffe73f2337e56b8234d4097069d5d70ab045eda
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 23:53:56 -08:00
Steve Muckle
ee9ddb5f3c sched: add sync wakeup recognition in select_best_cpu
If a wakeup is a sync wakeup, we need to discount the currently
running task's load from the waker's CPU as we calculate the best
CPU for the waking task to land on.

Change-Id: I00c5df626d17868323d60fb90b4513c0dd314825
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 23:53:55 -08:00
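The discount can be sketched with assumed load units: when the wakeup is sync, the waker's own load is subtracted from its CPU's load while scoring candidate CPUs for the wakee, since the waker is expected to sleep immediately.

```c
#include <assert.h>

/* For a sync wakeup, discount the waker's load from its CPU's load
 * (clamping at zero); otherwise use the raw CPU load. Names and units
 * are assumptions for illustration. */
static unsigned int effective_cpu_load(unsigned int cpu_load,
                                       unsigned int waker_load, int sync)
{
        if (!sync)
                return cpu_load;
        return cpu_load >= waker_load ? cpu_load - waker_load : 0;
}
```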
Srivatsa Vaddagiri
6e778f0cdc sched: Provide knob to prefer mostly_idle over idle cpus
sysctl_sched_prefer_idle lets the scheduler bias selection of
idle cpus over mostly idle cpus for tasks. This knob could be
useful to control balance between power and performance.

Change-Id: Ide6eef684ef94ac8b9927f53c220ccf94976fe67
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
2014-12-10 23:53:54 -08:00
Steve Muckle
75d1c94217 sched: make sched_cpu_high_irqload a runtime tunable
It may be desirable to be able to alter the sched_cpu_high_irqload
setting easily, so make it a runtime tunable value.

Change-Id: I832030eec2aafa101f0f435a4fd2d401d447880d
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 23:53:53 -08:00
Steve Muckle
00acd0448b sched: trace: extend sched_cpu_load to print irqload
The irqload is used in determining whether CPUs are mostly idle
so it is useful to know this value while viewing scheduler traces.

Change-Id: Icbb74fc1285be878f254ae54886bdb161b14a270
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 23:53:51 -08:00
Steve Muckle
51f0d7663b sched: avoid CPUs with high irq activity
CPUs with significant IRQ activity will not be able to serve tasks
quickly. Avoid them if possible by disqualifying such CPUs from
being recognized as mostly idle.

Change-Id: I2c09272a4f259f0283b272455147d288fce11982
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 23:53:47 -08:00
Steve Muckle
a14e01109a sched: refresh sched_clock() after acquiring rq lock in irq path
The wallclock time passed to sched_account_irqtime() may be stale
after we wait to acquire the runqueue lock. This could cause problems
in update_task_ravg because a different CPU may have advanced
this CPU's window_start based on a more up-to-date wallclock value,
triggering a BUG_ON(window_start > wallclock).

Change-Id: I316af62d1716e9b59c4a2898a2d9b44d6c7a75d8
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 19:50:46 -08:00
Steve Muckle
5fdc1d3aaa sched: track soft/hard irqload per-RQ with decaying avg
The scheduler currently ignores irq activity when deciding which
CPUs to place tasks on. If a CPU is getting hammered with IRQ activity
but has no tasks it will look attractive to the scheduler as it will
not be in a low power mode.

Track irqload with a decaying average. This quantity can be used
in the task placement logic to avoid CPUs which are under high
irqload. The decay factor is 3/4. Note that with this algorithm the
tracked irqload quantity will be higher than the actual irq time
observed in any single window. Some sample outcomes with steady
irqloads per 10ms window and the 3/4 decay factor (irqload of 10 is
used as a threshold in a subsequent patch):

irqload per window    load value asymptote    # windows to > 10
2ms                   8                       n/a
3ms                   12                      7
4ms                   16                      4
5ms                   20                      3

Of course irqload will not be constant in each window, these are just
given as simple examples.

Change-Id: I9dba049f5dfdcecc04339f727c8dd4ff554e01a5
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 19:50:45 -08:00
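The decay model above can be sketched directly (floating point here for clarity; the kernel would use fixed-point arithmetic). With a 3/4 decay factor the asymptote is 4x the steady per-window irqload, and the sketch reproduces the commit's table.

```c
#include <assert.h>

/* Each window the tracked load decays by 3/4 and the window's observed
 * irq time is added: solving L = (3/4)L + irq gives the 4x asymptote. */
static double decay_irqload(double avg, double cur)
{
        return avg * 3.0 / 4.0 + cur;
}

/* Windows of steady 'cur' irqload until the average exceeds 'thresh';
 * returns -1 if the asymptote never crosses it. */
static int windows_to_exceed(double cur, double thresh)
{
        double avg = 0.0;
        int n;

        for (n = 1; n <= 64; n++) {
                avg = decay_irqload(avg, cur);
                if (avg > thresh)
                        return n;
        }
        return -1;
}
```

Running this against the threshold of 10 from the table gives 7 windows for 3ms, 4 for 4ms, 3 for 5ms, and never for 2ms (asymptote 8), matching the commit message.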
Steve Muckle
c5c90f6099 sched: do not set window until sched_clock is fully initialized
The system initially uses a jiffy-based sched clock. When the platform
registers a new timer for sched_clock, sched_clock can jump backwards.
Once sched_clock_postinit() runs it should be safe to rely on it.

Also sched_clock_cpu() relies on completion of sched_clock_init()
and until that happens sched_clock_cpu() returns zero. This is used
in the irq accounting path which window-based stats relies upon.
So do not set window_start until sched_clock_cpu() is working.

Change-Id: Ided349de8f8554f80a027ace0f63ea52b1c38c68
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2014-12-10 19:50:44 -08:00
Linux Build Service Account
fea8806a70 Merge "sched: Fix inaccurate accounting for real-time task" 2014-12-06 14:38:08 -08:00