Commit graph

305008 commits

Author SHA1 Message Date
Laura Abbott
ff7c655e76 msm: Rip out fmem related memory adjustments
fmem is deprecated. Get rid of the special handling for
fmem in memory reservation code.

Change-Id: I24dc24cee364d992cbbe08d81851987d721a587b
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:25 -08:00
Laura Abbott
8acc4f0080 gpu: ion: Add support for CMA allocations in cp heap
Extend the cp heap to allow memory to be allocated from
the contiguous memory allocator (CMA) instead of from
the standard carveout region. The option to use CMA or regular
carveout memory is configured via a parameter in platform
data.

Change-Id: I9f3a169325c44230dde1d91a9cdcf613ad291df2
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:25 -08:00
Laura Abbott
ea3ab1acc8 gpu: ion: Rename request/release region
request_region and release_region are macro names defined
in the Linux kernel. Under some circumstances, the preprocessor
expands the macro where the structure field is meant.
Changing the field name is the easiest way to prevent this
problem.

Change-Id: I2c8d61bdaa20e332e0215f0bb3237e8332f0f3ac
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:24 -08:00
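
To make the collision described in the commit above concrete, here is a minimal stand-alone sketch; the replacement field name is hypothetical, not necessarily the one the patch picked:

    #include <stdio.h>

    /* Simplified stand-in for the function-like macro that the kernel
     * defines in <linux/ioport.h>. */
    #define request_region(start, n, name) __request_region((start), (n), (name), 0)

    struct cp_heap_ops {
            /*
             * The old field was named request_region. A use such as
             * "ops->request_region(bus_id)" is seen by the preprocessor as
             * an invocation of the macro above (wrong argument count, wrong
             * callee) before the compiler ever reaches the field access.
             * The renamed field below never matches the macro, so no
             * expansion happens.
             */
            int (*request_ion_region)(void *bus_id);
    };

    static int demo_request(void *bus_id)
    {
            printf("requesting region for %p\n", bus_id);
            return 0;
    }

    int main(void)
    {
            struct cp_heap_ops ops = { .request_ion_region = demo_request };

            return ops.request_ion_region(NULL);
    }
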
Laura Abbott
547237bed6 gpu: ion: Factor out common code on first alloc/last free
Currently, fmem must be transitioned on first allocation/
last free. Going forward, there may be other use cases to
call functions on first allocation/last free. Factor some of
this code out to avoid duplication.

Change-Id: I36472333222c497c5b4c888394b4bd277c146249
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:24 -08:00
Laura Abbott
142cd31a3e gpu: ion: Add msm specific extensions to CMA heap
A number of changes have been made to the Ion framework for the
msm target. Add the necessary changes on top of the CMA heap to
allow the CMA heap to be fully utilized.

Change-Id: Ie006dcd4c41481e4d914c67bafbf42d1afdb1a76
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:23 -08:00
Benjamin Gaignard
1382bb05d7 add CMA heap
New heap type ION_HEAP_TYPE_DMA where allocation is done with the
dma_alloc_coherent API. The device coherent_dma_mask must be set to
DMA_BIT_MASK(32). The ion_platform_heap private field is used to
retrieve the device linked to CMA; if NULL, the default CMA area is
used. ion_cma_get_sgtable is a copy of the dma_common_get_sgtable
function, which should be in kernel 3.5.

Change-Id: I9ae54a3a021cb3513c2b0e8c58b69f3ae118561b
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
[lauraa: Fix context in ion_priv.h/ion.h and omit Makefile change for now]
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:23 -08:00
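
For illustration, a hedged sketch of the allocation path this heap presumably takes through the DMA API; the struct and function names are illustrative, and dev is the device pulled from the heap's private platform-data field:

    #include <linux/dma-mapping.h>
    #include <linux/errno.h>

    struct ion_cma_buffer_sketch {
            void            *cpu_addr;
            dma_addr_t      handle;
    };

    /* dev comes from the heap's private field; NULL would select the
     * default CMA area. */
    static int ion_cma_alloc_sketch(struct device *dev, size_t len,
                                    struct ion_cma_buffer_sketch *buf)
    {
            /* Required by the heap: a 32-bit coherent DMA mask. */
            dev->coherent_dma_mask = DMA_BIT_MASK(32);

            buf->cpu_addr = dma_alloc_coherent(dev, len, &buf->handle,
                                               GFP_KERNEL);
            return buf->cpu_addr ? 0 : -ENOMEM;
    }
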
Benjamin Gaignard
522629b432 add private field in ion_heap and ion_platform_heap structure
Copy the private field from the platform configuration to the internal
heap structure.

Change-Id: Ia7571d88fc2f72f5d655fb6f6b54fde389d96c85
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
[laura: Rebase context fixes]
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:23 -08:00
Benjamin Gaignard
bf8847b852 fix ion_platform_data definition
Fix ion_platform_heap so it can be used in the usual way in board
configuration files.

Change-Id: I8686108a9fe0aa2ba9f9c84990d555f947f78f86
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
[lauraa: Fixup msm board files]
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:22 -08:00
Laura Abbott
e788c2d1c4 arm: dma-mapping: Add APIs for other memory types
Currently, the only attributes supported for DMA memory are
writecombine and coherent. Both of these allow speculative
prefetches to occur. For certain use cases (e.g. content
protection) there are requirements to disallow prefetching.
Conversely, for some high-performance use cases write buffering
alone is not enough and the full cache should be used. Add
appropriate APIs for the strongly ordered and cached memory
types to cover these use cases.

Change-Id: Ibe17b3d002f9615e2cb34183f47f6d1bcd045611
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:22 -08:00
Laura Abbott
c1386f095a common: DMA-mapping: Add strongly ordered memory attribute
Strongly ordered memory is occasionally needed for some DMA
allocations for specialized use cases. Add the corresponding
DMA attribute.

Change-Id: Idd9e756c242ef57d6fa6700e51cc38d0863b760d
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:21 -08:00
Marek Szyprowski
0f53e5abb2 common: DMA-mapping: add DMA_ATTR_NO_KERNEL_MAPPING attribute
This patch adds the DMA_ATTR_NO_KERNEL_MAPPING attribute, which lets the
platform avoid creating a kernel virtual mapping for the allocated
buffer. On some architectures creating such a mapping is a non-trivial
task and consumes very limited resources (like kernel virtual address
space or dma consistent address space). Buffers allocated with this
attribute can only be passed to user space by calling dma_mmap_attrs().

Change-Id: Id12b93fa2b02d5f3d01ab48eb61cda79f533d695
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:21 -08:00
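
A minimal caller-side sketch of the attribute described above, assuming the 3.x-era struct dma_attrs interface (DEFINE_DMA_ATTRS/dma_set_attr) and a caller-supplied device pointer:

    #include <linux/dma-mapping.h>
    #include <linux/dma-attrs.h>

    /* Allocate a buffer that gets no kernel virtual mapping; the returned
     * "cookie" is only meaningful to dma_mmap_attrs()/dma_free_attrs(). */
    static void *alloc_unmapped(struct device *dev, size_t size,
                                dma_addr_t *handle)
    {
            DEFINE_DMA_ATTRS(attrs);

            dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);
            return dma_alloc_attrs(dev, size, handle, GFP_KERNEL, &attrs);
    }

    /* Hand the buffer to user space; the attrs must match the ones used
     * at allocation time. */
    static int mmap_unmapped(struct device *dev, struct vm_area_struct *vma,
                             void *cookie, dma_addr_t handle, size_t size)
    {
            DEFINE_DMA_ATTRS(attrs);

            dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);
            return dma_mmap_attrs(dev, vma, cookie, handle, size, &attrs);
    }
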
Laura Abbott
a5708bc035 arm: dma: Allow CMA pages to not have a kernel mapping.
Currently, there are use cases where not having any kernel
mapping is required: if the CMA memory needs to be used as
a pool which can have both cached and uncached mappings, we
need to remove the kernel mapping to avoid the multiple-mapping
problem. Extend the dma APIs to honor DMA_ATTR_NO_KERNEL_MAPPING
with CMA. This doesn't end up saving any virtual address space,
but the mapping will still not be present.

Change-Id: I64d21250abbe615c43e2b5b1272ee2b6d106705a
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:21 -08:00
Laura Abbott
4d6e1c5965 arm: dma: Expand the page protection attributes
Currently, the decision on which page protection to use
is limited to writecombine and coherent. Expand it to include
strongly ordered and non-consistent memory.

Change-Id: I7585fe3ce804cf321a5585c3d93deb7a7c95045c
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:20 -08:00
Marek Szyprowski
d1fd8d785d ARM: dma-mapping: fix buffer chunk allocation order
The IOMMU-aware dma_alloc_attrs() implementation allocates buffers in
power-of-two chunks to improve performance and take advantage of large
page mappings provided by some IOMMU hardware. However, due to a subtle
bug, the current code allocated those chunks in smallest-to-largest
order, which completely killed all the advantages of using chunks larger
than a page. If a 4 KiB chunk has been mapped as the first chunk, the
consecutive chunks are not aligned correctly to the power-of-two that
matches their size, and IOMMU drivers were not able to use internal
mappings of any size other than 4 KiB (the largest common denominator
of alignment and chunk size).

This patch fixes this issue by changing to the correct largest-to-smallest
chunk size allocation sequence.

Change-Id: I5cc9c12322e832951faf3bba6387946c890e0ed4
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:20 -08:00
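
An illustrative sketch (not the upstream function) of the corrected ordering: always try the largest power-of-two chunk that still fits the remaining page count, falling back to smaller orders only when higher-order pages are unavailable:

    #include <linux/gfp.h>
    #include <linux/kernel.h>
    #include <linux/log2.h>
    #include <linux/errno.h>

    /* Allocate "count" 4 KiB pages as a list of chunks, largest first. */
    static int alloc_buffer_chunks(struct page **chunks, int count, gfp_t gfp)
    {
            int i = 0;

            while (count) {
                    unsigned int order = min_t(unsigned int, ilog2(count),
                                               MAX_ORDER - 1);
                    struct page *page = NULL;

                    while (!page && order) {
                            page = alloc_pages(gfp | __GFP_NOWARN, order);
                            if (!page)
                                    order--;        /* try a smaller chunk */
                    }
                    if (!page)
                            page = alloc_pages(gfp, 0);
                    if (!page)
                            return -ENOMEM;         /* caller frees chunks[0..i) */

                    chunks[i++] = page;
                    count -= 1 << order;
            }
            return 0;
    }
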
Sachin Kamat
7a2356ef59 ARM: dma-mapping: Add missing static storage class specifier
Fixes the following sparse warnings:
arch/arm/mm/dma-mapping.c:231:15: warning: symbol 'consistent_base' was not
declared. Should it be static?
arch/arm/mm/dma-mapping.c:326:8: warning: symbol 'coherent_pool_size' was not
declared. Should it be static?

Change-Id: I90e2ccdc4d132a37ebcd8ae7a8441ad3fede55bf
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:20 -08:00
Vitaly Andrianov
2382e50268 ARM: dma-mapping: use PMD size for section unmap
The dma_contiguous_remap() function clears existing section maps using
the wrong size (PGDIR_SIZE instead of PMD_SIZE).  This is a bug which
does not affect non-LPAE systems, where PGDIR_SIZE and PMD_SIZE are the same.
On LPAE systems, however, this bug causes the kernel to hang at this point.

This fix has been tested on both LPAE and non-LPAE kernel builds.

Change-Id: I63650057864907f1a2d8eed7257665cb2f648bbb
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:19 -08:00
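
A hedged sketch of the section-unmap loop in question; pmd_off_k() stands in for "the kernel PMD entry covering this virtual address" and the helper name is an assumption. The point is the step size: on LPAE a PGD entry spans far more address space than one section (PMD) entry, so stepping by PGDIR_SIZE leaves section mappings uncleared.

    #include <asm/pgtable.h>

    /* Clear one PMD (section) entry per PMD_SIZE step over the region. */
    static void clear_section_mappings(unsigned long start, unsigned long end)
    {
            unsigned long addr;

            for (addr = start; addr < end; addr += PMD_SIZE)  /* was PGDIR_SIZE */
                    pmd_clear(pmd_off_k(addr));
    }
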
Marek Szyprowski
4f6b3dc624 ARM: dma-mapping: add support for IOMMU mapper
This patch adds a complete implementation of the DMA-mapping API for
devices which have IOMMU support.

This implementation tries to optimize dma address space usage by remapping
all possible physical memory chunks into a single dma address space chunk.

DMA address space is managed on top of a bitmap kept in the
dma_iommu_mapping structure stored in device->archdata. Platform setup
code has to initialize parameters of the dma address space (base address,
size, allocation precision order) with arm_iommu_create_mapping() function.
To reduce the size of the bitmap, all allocations are aligned to the
specified order of base 4 KiB pages.

The dma_alloc_* functions allocate physical memory in chunks, each chunk
with the alloc_pages() function, to avoid failing if physical memory gets
fragmented. In the worst case the allocated buffer is composed of 4 KiB
page chunks.

The dma_map_sg() function minimizes the total number of dma address space
chunks by merging physical memory chunks into one larger dma address
space chunk. If the requested chunk (scatter list entry) boundaries
match physical page boundaries, most dma_map_sg() requests will
result in creating only one chunk in dma address space.

dma_map_page() simply creates a mapping for the given page(s) in the dma
address space.

All dma functions also perform the required cache operations, like their
counterparts from the arm linear physical memory mapping version.

This patch contains code and fixes kindly provided by:
- Krishna Reddy <vdumpa@nvidia.com>,
- Andrzej Pietrasiewicz <andrzej.p@samsung.com>,
- Hiroshi DOYU <hdoyu@nvidia.com>

Change-Id: I4a9b155bef4d5f2b8a8dfe87751d82960b09b253
[lauraa: context conflicts]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:19 -08:00
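
A hedged platform-setup sketch of the API named in the message, assuming the signatures of the original 3.x series (bus type, DMA window base, window size, allocation-order granularity); the base and size values are arbitrary examples:

    #include <asm/dma-iommu.h>
    #include <linux/err.h>
    #include <linux/platform_device.h>

    /* Carve out a 128 MiB IOMMU-managed DMA window for dev, with the
     * allocation bitmap tracking order-0 (4 KiB) granules. */
    static int attach_iommu_dma_ops(struct device *dev)
    {
            struct dma_iommu_mapping *mapping;

            mapping = arm_iommu_create_mapping(&platform_bus_type,
                                               0x80000000, 0x08000000, 0);
            if (IS_ERR(mapping))
                    return PTR_ERR(mapping);

            /* Subsequent dma_alloc_attrs()/dma_map_sg() calls on this
             * device are routed through the IOMMU-aware dma_map_ops. */
            return arm_iommu_attach_device(dev, mapping);
    }
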
Marek Szyprowski
da2b2117de ARM: dma-mapping: use alloc, mmap, free from dma_ops
This patch converts dma_alloc/free/mmap_{coherent,writecombine}
functions to use generic alloc/free/mmap methods from dma_map_ops
structure. A new DMA_ATTR_WRITE_COMBINE DMA attribute has been
introduced to implement the writecombine methods.

Change-Id: I2709e3ffc97546df2f505d555b29c3bb8148daec
[lauraa: context conflicts]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:18 -08:00
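
After this change a writecombine allocation can presumably be expressed as the generic attrs-based entry point plus the new attribute; a hedged caller-side sketch (the real wrappers live in the arch headers):

    #include <linux/dma-mapping.h>
    #include <linux/dma-attrs.h>

    /* Sketch only: set DMA_ATTR_WRITE_COMBINE and dispatch through the
     * device's dma_map_ops ->alloc() method via dma_alloc_attrs(). */
    static inline void *wc_alloc_sketch(struct device *dev, size_t size,
                                        dma_addr_t *handle, gfp_t gfp)
    {
            DEFINE_DMA_ATTRS(attrs);

            dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
            return dma_alloc_attrs(dev, size, handle, gfp, &attrs);
    }
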
Marek Szyprowski
5e0ee00c15 ARM: dma-mapping: remove redundant code and do the cleanup
This patch just performs a global cleanup in DMA mapping implementation
for ARM architecture. Some of the tiny helper functions have been moved
to the caller code, some have been merged together.

Change-Id: I60b3450bd1180ea007e7326a63762d3a44b3c25d
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:18 -08:00
Marek Szyprowski
0e8fe4a111 ARM: dma-mapping: move all dma bounce code to separate dma ops structure
This patch removes dma bounce hooks from the common dma mapping
implementation on ARM architecture and creates a separate set of
dma_map_ops for dma bounce devices.

Change-Id: I42d7869b4f74ffa5f36a4a7526bc0c55aaf6bab7
[lauraa: conflicts due to code cruft]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:18 -08:00
Marek Szyprowski
3f47a1438c ARM: dma-mapping: implement dma sg methods on top of any generic dma ops
This patch converts all dma_sg methods to be generic (independent of the
current DMA mapping implementation for ARM architecture). All dma sg
operations are now implemented on top of respective
dma_map_page/dma_sync_single_for* operations from dma_map_ops structure.

Before this patch there were custom methods for all scatter/gather
related operations. They iterated over the whole scatter list and called
cache related operations directly (which in turn checked if we use dma
bounce code or not and called the respective version). This patch changes
them not to use such a shortcut. Instead it provides a similar loop over
the scatter list and calls methods from the device's dma_map_ops structure.
This enables us to use device dependent implementations of cache related
operations (direct linear or dma bounce) depending on the provided
dma_map_ops structure.

Change-Id: Icbd72d1e4fed6d7478b98bb4ead120c02dd26588
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:17 -08:00
Marek Szyprowski
b5702dc251 ARM: dma-mapping: use asm-generic/dma-mapping-common.h
This patch modifies dma-mapping implementation on ARM architecture to
use common dma_map_ops structure and asm-generic/dma-mapping-common.h
helpers.

Change-Id: I574a3b5ac883cd5d9beb79deef8f5cb44fd83296
[lauraa: conflicts due to code cruft/context changes]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:17 -08:00
Mitchel Humpherys
42fab316dd ion: isolate msm-specific ion extensions
This is another step in the process of isolating msm-specific ion
features from stock ion.

Change-Id: I3a437dbc618cb70859126c81596373338ad06500
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:17 -08:00
Mitchel Humpherys
5ebf0bb53c ion: change ion kernel map function to not take flags argument
Buffer flags are going to be specified at allocation time rather than
map time. This removes the flags argument from the ion kernel map
function.

Change-Id: Ib983ecd0dcd7befb36287ae7037c71d4ca475f90
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:16 -08:00
Mitchel Humpherys
fefa905b39 ion: remove obsolete ion flags
The symbols CACHED and UNCACHED have been replaced by ION_FLAG_CACHED
upstream. This removes them from the kernel.

Change-Id: I90c33c293f56792131fc6bd490fe041b5798ac20
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:16 -08:00
Srinu Gorle
b769cd0cdd msm: vidc: port heap mask change to ion for secure session
Changes to pass ION_SECURE in the correct argument field
while calling ion_alloc. Without this change, secure sessions
fail.

Change-Id: Ifa4878b1c312beafc735cb649570913159799d7c
Signed-off-by: Srinu Gorle <sgorle@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:15 -08:00
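
A heavily hedged sketch of the call-site fix the message describes, assuming the contemporaneous five-argument ion_alloc(client, len, align, heap_mask, flags) signature and that ION_SECURE belongs in the flags argument; the heap ID, alignment, and header are placeholders from the msm tree:

    #include <linux/msm_ion.h>

    static struct ion_handle *alloc_secure_buf(struct ion_client *client,
                                               size_t len)
    {
            return ion_alloc(client, len, PAGE_SIZE,
                             ION_HEAP(ION_CP_MM_HEAP_ID),   /* heap mask   */
                             ION_SECURE);                   /* flags field */
    }
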
Ameya Thakur
8ad7b2c8e6 msm:subsystem_restart: Enable download mode.
Entry into download mode on device crash is now enabled by default.

Change-Id: I9eada266243494a37883d24bf634acd8d87d22a2
Signed-off-by: Ameya Thakur <ameyat@codeaurora.org>
2013-03-07 15:23:15 -08:00
Jordan Crouse
42c3dc73c9 msm: board-8930: Set the GPU chip ID and turbo speed for 8930AB
Set the chip ID for the GPU revision in the 8930AB target and bump
the turbo GPU speed to 500 MHz per the 8930AB clock plan.

(cherry picked from commit 9c7bab35f7cf43d516bbab13f0814daae1c739d9)

Change-Id: I2af60e8328d7203d8a6116f2a99eef217d9efa6a
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Neha Pandey <nehap@codeaurora.org>
2013-03-07 15:23:14 -08:00
Shubhraprakash Das
4ae2f81c4b msm: kgsl: Idle GPU core before programming SMMU from CPU
Always idle the GPU core before programming SMMU from CPU for
SMMU-v1. GPU core was already being idled before programming
the pagetable register, make sure that it's also idle before
programming the tlb invalidate registers. This is required to
prevent a deadlock from happening at the bus level.

Change-Id: Ie901b92028b289fc546ab6186eedd01411d0727e
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Signed-off-by: Tarun Karra <tkarra@codeaurora.org>
Signed-off-by: Rajeev Kulkarni <krajeev@codeaurora.org>
2013-03-07 15:23:14 -08:00
Rajeev Kulkarni
febbcec8c0 msm: Kconfig: Enable IOMMU CPU-GPU synchronization
Enable synchronization between CPU and GPU for
IOMMU configuration register accesses.

Change-Id: I90af0a58931c54d5922df369fcd3180dc288603f
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Signed-off-by: Rajeev Kulkarni <krajeev@codeaurora.org>
2013-03-07 15:23:14 -08:00
Olav Haugan
b36256dbbb msm: iommu: Synchronize access to IOMMU cfg port
Add a remote spinlock that allows the CPU and GPU to
synchronize access to the IOMMU hardware.

Add usage of the remote spinlock to the iommu driver and
add a dependency on the SFPB hardware mutex being enabled.

This feature is not using SFPB hardware mutex. However,
SFPB hardware mutex must be enabled since the remote
spinlock implementation is making use of shared memory
that is normally used when SFPB hardware mutex is not enabled.

Change-Id: Idc622f3484062e0721493be3cbbfb8889ed9d800
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
2013-03-07 15:23:13 -08:00
Shubhraprakash Das
d83ae276b4 msm: kgsl: Remove extra interrupts when setting MMU state
The interrupts added to the ringbuffer on PTFLUSH and TLBUPDATE
were causing a major increase in the number of interrupts from the GPU.
This was leading to increased power consumption and loss of performance.
Add a check to turn off the IOMMU clocks when going to SLEEP.

Change-Id: I41617dd3b7b3f7d9622523f2a1407b912dbd989e
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
2013-03-07 15:23:13 -08:00
Jordan Crouse
7681021ad9 msm: kgsl: Make the GPU device aware of the next pending event
The adreno core needs to know what the next pending event for
any given context is so it can mark the interrupt to be fired.
If this isn't done then some timestamps that don't have a
matching waittimestamp call won't fire an interrupt. This is
dangerous on the last interrupt/event before a context goes
away.

Change-Id: Ic0dedbad71f6de07b43b0656128c76509326d645
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
2013-03-07 15:23:13 -08:00
Jeremy Gebben
c390a33e2d iommu/msm: fix the include guard in iommu.h
msm_soc_version_supports_iommu_v1() was defined outside the guard.

Change-Id: I8db106908b08b89e267550d81d031cdb028b92a2
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
2013-03-07 15:23:12 -08:00
Jordan Crouse
33c15ac817 msm: kgsl: Use signed integers for power level comparisons
The code that clamped the power levels to the requested minimum and
maximum values was mixing comparison signed and unsigned integers with
predictiably faulty results.  Move all values to signed integers to
handle negative power level numbers correctly

CRs-fixed: 427670
Change-Id: Ic0dedbada6153dc0b109923b376c3aa9a6abbeee
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
2013-03-07 15:23:12 -08:00
Jordan Crouse
f57d9d09d3 msm: kgsl: Always set the active powerlevel when changing clock rates
kgsl_pwrctrl_pwrlevel_change might be called when clocks are on or off.
If clocks are off we don't step the clock rate and the active_pwrlevel
won't be set to the new and correct level.  Set active_pwrlevel to its
new level before doing anything else.

Change-Id: Ic0dedbad84ce1cc1b3b8df97df32e39686b85671
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
2013-03-07 15:23:11 -08:00
Jordan Crouse
c8794d4de7 msm: kgsl: Add max and min power level controls in sysfs
Add min_pwrlevel, max_pwrlevel and thermal_pwrlevel to give a
privileged user more control over which power levels are considered
during DCVS power management.  max_pwrlevel is the maximum power
level that the system can go to at any time, min_pwrlevel represents
the lowest power level.  DCVS will choose any level between these
two extremes.  thermal_pwrlevel allows a daemon to set an absolute
frequency cap to prevent thermal issues. The effective maximum
power level is considered to be the lower of thermal or max.

Also added is num_pwrlevels that shows the number of active power
levels. This corresponds to the gpu frequencies in
available_gpu_frequencies.

Change-Id: Ic0dedbad20102d9a3c3350055e6fcd10358fc53d
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
2013-03-07 15:23:11 -08:00
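
For illustration, a small user-space C snippet that reads the level count and caps the effective maximum through the new sysfs nodes; the /sys/class/kgsl/kgsl-3d0 path is an assumption about where the kgsl device is exposed:

    #include <stdio.h>

    /* Assumed sysfs location of the kgsl 3D device node. */
    #define KGSL_SYSFS "/sys/class/kgsl/kgsl-3d0/"

    int main(void)
    {
            int levels = 0;
            FILE *f = fopen(KGSL_SYSFS "num_pwrlevels", "r");

            if (f) {
                    if (fscanf(f, "%d", &levels) != 1)
                            levels = 0;
                    fclose(f);
            }
            printf("%d power levels available\n", levels);

            /* Forbid level 0 (the fastest): the effective maximum becomes
             * the more restrictive of this and thermal_pwrlevel. */
            f = fopen(KGSL_SYSFS "max_pwrlevel", "w");
            if (!f)
                    return 1;
            fprintf(f, "1\n");
            return fclose(f) != 0;
    }
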
Carter Cooper
8d67af8cc3 msm: kgsl: Disable HLSQ register reads from snapshot
Reading the A3XX HLSQ registers during a GPU hang recovery might cause
the device to hang.  Disable the HLSQ register reads that would
cause recovery to fail until the failures are better understood.

Change-Id: I1553025fbd824bfacf91f062372d5731cd905cc4
Signed-off-by: Carter Cooper <ccooper@codeaurora.org>
Signed-off-by: Rajeev Kulkarni <krajeev@codeaurora.org>
2013-03-07 15:23:10 -08:00
Carter Cooper
3aa6127300 msm: kgsl: Disable clock gating earlier during snapshot
Disable clock gating earlier when recording the GPU snapshot.
This will ensure that there are no issues when reading register
values from the GPU hardware.

Change-Id: I173655b419c958f0b8cdfa4609c712e512ff2487
Signed-off-by: Carter Cooper <ccooper@codeaurora.org>
2013-03-07 15:23:10 -08:00
Harsh Vardhan Dwivedi
df3f149099 msm: kgsl: Remove incorrect check for current context
We remove an incorrect check for the currently active context.
The intent of the original check was to ensure that the
current context is at least there/valid before we issue
a dummy command with a forced interrupt. However, this
check was implemented incorrectly, instead of checking
the context under which the function is running, the check
was probing the "drawctxt_active" which may not necessarily
be the same as the context for which the function was called.
We fix this by changing the check to instead look for the
context under which the kgsl_check_interrupt_timestamp() has
been called.

CRs-fixed: 426186
Change-Id: I6ac123d16888287b14e6e53028f482eb709f24c5
Signed-off-by: Harsh Vardhan Dwivedi <hdwivedi@codeaurora.org>
2013-03-07 15:23:09 -08:00
Vinay Roy
7260cb74cc msm: kgsl: Set requested power state to NONE after resume
When the GPU resumes from suspend, the requested state is not cleared
to NONE. Because of this, idle reporting is not done and the GPU stays
at the maximum frequency. As a fix, if the current power state is
already ACTIVE and a request is made for the active power state, clear
the requested state immediately.

CRs-fixed: 424682
Change-Id: I7f0d7fa819308f166cbbbf30b2c20aee73644cfb
Signed-off-by: Vinay Roy <vroy@codeaurora.org>
2013-03-07 15:23:09 -08:00
Carter Cooper
fe851d33bd msm: kgsl: Issue conditional interrupts on internal submissions
Waittimestamp calls require interrupts to check if a timestamp
has passed.  The lack of these interrupts was causing waittimestamp
to wait longer than expected since the interrupts were less frequent.
Cause the conditional interrupts to be issued faster by allowing
internal command submissions to issue them.

CRs-fixed: 417577
Change-Id: Idb6f18261b3dd6fcbea5607d449d70ca54136e81
Signed-off-by: Carter Cooper <ccooper@codeaurora.org>
Signed-off-by: Rajeev Kulkarni <krajeev@codeaurora.org>
2013-03-07 15:23:09 -08:00
Jordan Crouse
89f3fc915a msm: kgsl: Update A330 VBIF settings
Update the VBIF register settings for A330 for better performance and
stability per the latest testing and analysis.

CRs-Fixed: 416680
Change-Id: Ic0dedbad71bfd589b322bed503052315d0bd1940
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rajeev Kulkarni <krajeev@codeaurora.org>
2013-03-07 15:23:08 -08:00
Jordan Crouse
da34678534 msm: kgsl: Turn off the CP_DEBUG dynamic clock
The CP dynamic clock seems to be glitchy when the CP clocks are turned
back on after a power event. Turn off said dynamic clock control at
init time. The impact of leaving the dynamic clock control off is
negligible since the CP clock is only on when the CP is actually in
use.

CRs-fixed: 402119
CRs-fixed: 409253
CRs-fixed: 413224
Change-Id: Ic0dedbad783f8b911d9b57d1602d9b3976af1b3b
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rajeev Kulkarni <krajeev@codeaurora.org>
2013-03-07 15:23:08 -08:00
Tianyi Gou
69d8a0f92e msm: clock-8960: Add clock initialization support for 8930ab
The clock driver has initialization functions to do some one-time
clock configurations, e.g. enabling/disabling dynamic clock gating.
Add this support for 8930ab. Note that dynamic clock gating
is disabled and will be added once it is verified on 8930ab.

Change-Id: I5f9dbbd1deeb084ca3d58d7be2407ccbb10bc977
Signed-off-by: Tianyi Gou <tgou@codeaurora.org>
Signed-off-by: Neha Pandey <nehap@codeaurora.org>
2013-03-07 15:23:08 -08:00
Seemanta Dutta
aa740ca4fb msm: clock-8960: Add more gfx3d and vcodec frequencies for 8930ab
On 8930ab, the gfx3d and vcodec maximum frequencies have been bumped
to 500 MHz and 266 MHz respectively. Therefore, update the PLL15 frequency
to 1000 MHz to support the gfx3d clock at 500 MHz, and also update the
Fmax values for both the gfx3d and vcodec clocks.

Change-Id: I6296a59fcc67b4edc38834009ce9403df2cf2ab6
Signed-off-by: Seemanta Dutta <seemanta@codeaurora.org>
Signed-off-by: Neha Pandey <nehap@codeaurora.org>
2013-03-07 15:23:07 -08:00
Patrick Daly
c65d23c879 msm: clock-8960: Merge similar gfx3d_clk freq tables
Remove support for 325 MHz in 8960ab and 8064, and replace with
320 MHz. This allows those targets to share the same freq table as
8930.

Change-Id: Ib1d4a850b46683db5ae818eb157abde164c0ca65
Signed-off-by: Patrick Daly <pdaly@codeaurora.org>
Signed-off-by: Neha Pandey <nehap@codeaurora.org>
2013-03-07 15:23:07 -08:00
Saravana Kannan
e7bc5abbf8 msm: clock: Convert clock fmax and vdd class levels from arrays to pointers
Not all clocks can capture their fmax data with an array size of 4. So,
change the fmax entry from an array to a pointer and add a num_fmax field.
This allows each clock to specify fmax data of a different length. Also,
this makes fixing up fmax entries based on SoC id and version a lot easier.

Obviously, if a clock can have more than 4 fmax levels, the vdd class would
also need more than 4 levels. So, update the vdd class code in a similar
fashion.

Conflicts:

	arch/arm/mach-msm/clock-9625.c

Change-Id: I12568dd8fa7c0f8dcfeff68d8ca8de8810445cc7
Signed-off-by: Saravana Kannan <skannan@codeaurora.org>
Signed-off-by: Neha Pandey <nehap@codeaurora.org>
2013-03-07 15:23:06 -08:00
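
A hedged before/after sketch of the data-structure change being described; the field names are taken from the message, and the wrapping clock struct is abbreviated:

    /* Before: every clock carried a fixed-size table (4 vdd levels max). */
    struct clk_fmax_before {
            unsigned long fmax[4];
    };

    /* After: each clock points at a table of whatever length it needs,
     * which also makes per-SoC/version fixups a matter of swapping the
     * pointer and count. */
    struct clk_fmax_after {
            unsigned long *fmax;
            int num_fmax;
    };
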
Stephen Boyd
48cc36254f msm: clock: Make lock for vdd_class into mutex
There is no reason to hold a spinlock here anymore when the
vdd_class is only updated in non-atomic context. Move to using a
mutex instead. We couldn't do this before because voltage voting
was done in atomic context.

Change-Id: I7cd0469194d9fd57bd6a6ba34ff51a089812b96d
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Neha Pandey <nehap@codeaurora.org>
2013-03-07 15:23:06 -08:00
Matt Wagantall
5d1d86ffbc msm: clock: Expose parts of "clock.h" through <mach/clk-provider.h>
Expose the features of "clock.h" outside of mach-msm so that new clock
drivers leveraging the framework in mach-msm/clock.c can be implemented
outside of the mach-msm sub-architecture directory.

Conflicts:

	arch/arm/mach-msm/board-8226.c
	arch/arm/mach-msm/clock-mdss-8974.c

Change-Id: I0dea8c716ed6f81c0296a21dd1d96701dfed5a63
Signed-off-by: Matt Wagantall <mattw@codeaurora.org>
Signed-off-by: Neha Pandey <nehap@codeaurora.org>
2013-03-07 15:23:05 -08:00