Commit graph

304834 commits

Author SHA1 Message Date
Ken Zhang
c55faec727 msm: display: Clear performance request data when turning on
On resume, previously cached request data needs to be cleared, as
it does not reflect the current hw status.

Signed-off-by: Ken Zhang <kenz@codeaurora.org>

Conflicts:

	drivers/video/msm/mdp4_overlay.c

Change-Id: I6e3abe09a38b4499ceb168ea7b0351672253a6cd
Signed-off-by: Ramakrishna Prasad N <crpn@codeaurora.org>
2013-03-07 15:23:41 -08:00
Steve Muckle
24e8dccaf1 msm: dcvs: enable/disable power collapse on CPU 0 only
Idle power collapse is not currently used on secondary cores
with DCVS, so it should not be manipulated after boot by DCVS.

(cherry picked from commit 7419749fb656f2f6b406d61c1df5993b2385af0a)

Change-Id: Id75829172c7c529d2b3e42a1ad406e591a94d1e9
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2013-03-07 15:23:41 -08:00
Steve Muckle
43b7c2f7ea msm: dcvs: check power collapse state after updating params
The dcvs params are updated when the number of online cores change.
When this happens, the low power modes of each CPU need to be
enabled or disabled depending on how fast they are running and what
the new power collapse frequency thresholds are.

(cherry picked from commit 1b82c72e23f013ea2bdc526ad2859b421a97f48a)

Change-Id: I88c779dc21f47acd7a1fa1ded02e90825d7dc9d6
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2013-03-07 15:23:40 -08:00
Steve Muckle
be237a3b3e msm: dcvs: add ss_no_corr_below_freq parameter
The busy/idle behavior of different cores can be correlated by
DCVS when determining what frequency to run cores at. However,
this is not desirable below a certain frequency. Add a parameter
to establish what this frequency is. The parameter is configurable
in userspace via sysfs.

The ss_iobusy_conv parameter is currently unused, so it is
being replaced with ss_no_corr_below_freq.

(cherry picked from commit e8c6d615259af5fde8a6613f53c41c212407bda9)

Change-Id: Ibf814f3f93b92a532d7b3af80721a5bc7db1bd31
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2013-03-07 15:23:40 -08:00
Steve Muckle
4d9a321b56 msm: mpdecision: make runqueue divisor configurable via sysfs
The runqueue divisor controls how sensitive mpdecision is to
changes in runqueue depth - if the runqueue divisor is increased,
updates to TZ are made less frequently. It is desirable to be able
to easily change this parameter from userspace.

(cherry picked from commit f5d5d54a8c64337115a44d9d9292c677042c9fc1)

Change-Id: Ibfadc7fba413ae08467ded737e192fd918e97755
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
2013-03-07 15:23:39 -08:00
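
A common way to expose such a tunable is a module parameter, which then
surfaces in sysfs; a hypothetical sketch (the real parameter name and
default in msm_mpdecision are not shown in this message):

    /* Hypothetical: an unsigned tunable readable and writable from
     * userspace via /sys/module/<module>/parameters/runq_divisor. */
    static unsigned int runq_divisor = 25;   /* assumed default */
    module_param(runq_divisor, uint, 0644);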
Tianyi Gou
f23fda4d05 msm: acpuclock-8930{aa,ab}: Turn off inefficient frequencies
Given frequencies at the same voltage and the same L2 operating
point, the highest frequency is the more efficient choice. Update
the frequency tables for 8930, 8930aa, and 8930ab to turn off the
inefficient frequencies.

Change-Id: I9e0d2c9566052ac1b50ae241b62fc8bd08b09776
Signed-off-by: Tianyi Gou <tgou@codeaurora.org>
Signed-off-by: Neha Pandey <nehap@codeaurora.org>
2013-03-07 15:23:39 -08:00
Tarun Karra
6601ef91c5 msm: kgsl: Synchronize access to IOMMU cfg port
Add a software based spinlock between CPU and GPU.
This spinlock is used to grant mutually exclusive access to
SMMU configuration between CPU and GPU. This mutual exclusion
is required to prevent deadlock in the system.

CRs-Fixed: 409198
Change-Id: Ic375beaaf4c5505b41d3fabc4adf15965d71b13a
Signed-off-by: Tarun Karra <tkarra@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rajeev Kulkarnie <krajeev@codeaurora.org>
2013-03-07 15:23:38 -08:00
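
The message doesn't show the lock itself. As a purely illustrative
sketch, a classic two-participant software lock (Peterson's algorithm)
over shared, uncached memory conveys how two masters can get mutual
exclusion without hardware atomics spanning both; none of these names
come from the kgsl patch:

    struct sw_spinlock {
        volatile unsigned int flag[2];  /* flag[i]: party i wants the lock */
        volatile unsigned int turn;     /* whose turn it is to back off */
    };

    static void sw_spin_lock(struct sw_spinlock *lock, int self)
    {
        int other = 1 - self;           /* 0 = CPU, 1 = GPU, say */

        lock->flag[self] = 1;
        lock->turn = other;             /* yield priority to the peer */
        /* a memory barrier is required here on real hardware */
        while (lock->flag[other] && lock->turn == other)
            ;                           /* spin */
    }

    static void sw_spin_unlock(struct sw_spinlock *lock, int self)
    {
        lock->flag[self] = 0;
    }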
Laura Abbott
5b37e138e2 gpu: ion: Flush new pages
When allocating pages that are intended to be used for
uncached allocations, we need to ensure the cache is coherent;
there may be outstanding data in the cache related to those
pages. Ensure cache coherency by flushing each of the pages.

Change-Id: I4b89c799b5c099f6c050d8ddd758bdb368c07c08
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:38 -08:00
Laura Abbott
3c2b534580 mm: Use aligned zone start for pfn_to_bitidx calculation
The current calculation in pfn_to_bitidx assumes that
(pfn - zone->zone_start_pfn) >> pageblock_order will return the
same bit for all pfn in a pageblock. If zone_start_pfn is not
aligned to pageblock_nr_pages, this may not always be correct.

Consider the following with pageblock order = 10, zone start 2MB:

pfn     | pfn - zone start | (pfn - zone start) >> pageblock order
--------|------------------|--------------------------------------
0x26000 | 0x25e00          | 0x97
0x26100 | 0x25f00          | 0x97
0x26200 | 0x26000          | 0x98
0x26300 | 0x26100          | 0x98

This means that calling {get,set}_pageblock_migratetype on a single
page will not set the migratetype for the full block. Fix this by
rounding down zone_start_pfn when doing the bitidx calculation.

Change-Id: I13e2f53f50db294f38ec86138c17c6fe29f0ee82
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:38 -08:00
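
The change amounts to rounding zone_start_pfn down to a pageblock
boundary; a minimal sketch, modeled on the mainline pfn_to_bitidx() of
this era:

    static inline int pfn_to_bitidx(struct zone *zone, unsigned long pfn)
    {
    #ifdef CONFIG_SPARSEMEM
        pfn &= (PAGES_PER_SECTION - 1);
        return (pfn >> pageblock_order) * NR_PAGEBLOCK_BITS;
    #else
        /* the fix: align the zone start so every pfn in a pageblock
         * yields the same bit index */
        pfn = pfn - round_down(zone->zone_start_pfn, pageblock_nr_pages);
        return (pfn >> pageblock_order) * NR_PAGEBLOCK_BITS;
    #endif
    }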
Laura Abbott
1a75cfa9e2 fs: fuse: Workaround for CMA migration
The FUSE file system may hold references to pages for long
periods of time, preventing migration from occurring. If a CMA
page is used here, CMA allocations may fail. Work around this
by swapping out a CMA page for a non-CMA page when working with
the FUSE file system.

Change-Id: Id763ea833ee125c8732ae3759ec9e20d94aa8424
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:37 -08:00
Minchan Kim
360ffdd941 cma: fix migration mode
__alloc_contig_migrate_range calls migrate_pages with wrong argument
for migrate_mode. Fix it.

Change-Id: I84697cf7c6aef6253e9ee7e5b3028c946b95e253
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:37 -08:00
woojoong.lee
200d9f6ddf cma: use migrate_prep() instead of migrate_prep_local()
__alloc_contig_migrate_range() should use all possible ways to get all the
pages migrated from the given memory range, so pruning the per-cpu lru
lists for all CPUs is required, regardless of the cost of such an
operation. Otherwise some pages which got stuck on a per-cpu lru list
might be missed by the migration procedure, causing the contiguous
allocation to fail.

Change-Id: I70cc0864c57dd49e89f57797122a3fd0f300647a
Signed-off-by: woojoong.lee <woojoong.lee@samsung.com>
Reviewed-on: http://165.213.202.130:8080/43063
Tested-by: System S/W SCM <scm.systemsw@samsung.com>
Reviewed-by: daeho jeong <daeho.jeong@samsung.com>
Reviewed-by: Jeong-Ho Kim <jammer@samsung.com>
Tested-by: Jeong-Ho Kim <jammer@samsung.com>
[lauraa@codeaurora.org: Applied to correct file]
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:36 -08:00
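
For reference, the difference between the two helpers in mainline
mm/migrate.c of this era is only the scope of the lru drain:

    /* drains the per-cpu lru pagevecs on every CPU, so no candidate
     * page is left parked in a per-cpu list when the range is scanned */
    int migrate_prep(void)
    {
        lru_add_drain_all();
        return 0;
    }

    /* drains only the calling CPU - insufficient for
     * __alloc_contig_migrate_range(), as explained above */
    int migrate_prep_local(void)
    {
        lru_add_drain();
        return 0;
    }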
Laura Abbott
9fa56006b4 mm: Add is_cma_pageblock definition
Bring back the is_cma_pageblock definition for determining if a
page is CMA or not.

Change-Id: I39fd546e22e240b752244832c79514f109c8e84b
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:36 -08:00
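
The message doesn't quote the definition; a plausible reconstruction,
assuming it keys off the pageblock migratetype the way other MIGRATE_CMA
checks do:

    /* assumed form: a page is "CMA" if its pageblock's migratetype is
     * MIGRATE_CMA */
    #ifdef CONFIG_CMA
    #define is_cma_pageblock(page) \
        (get_pageblock_migratetype(page) == MIGRATE_CMA)
    #else
    #define is_cma_pageblock(page) false
    #endif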
Liam Mark
39832ef865 mm: split_free_page ignore memory watermarks for CMA
Memory watermarks were sometimes preventing CMA allocations
in low memory.

Change-Id: I550ec987cbd6bc6dadd72b4a764df20cd0758479
Signed-off-by: Liam Mark <lmark@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:36 -08:00
Laura Abbott
24c697b28a mm: Don't use CMA pages for writes
If CMA pages are used for writes, the writes may not complete
fast enough for CMA to be allocated within a reasonable amount
of time. If we get a CMA page, get another one to use instead.

Change-Id: I19d8ba655da7525d68d5947337d500566998971c
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:35 -08:00
Heesub Shin
d4c6e690a5 cma: fix race condition on a page
A cruel, brute-force method for letting cma/migration finish its
job without migration_entry_wait() stealing the lock and creating a
live-lock on the faulted page. This patch solves the case of the
page->_count == 2 migration failure.

Change-Id: Ia94542a80e44a213831291af289bbf5ee6880bfd
Signed-off-by: Heesub Shin <heesub.shin@samsung.com>
Reviewed-on: http://165.213.202.130:8080/39341
Tested-by: System S/W SCM <scm.systemsw@samsung.com>
Tested-by: Dongjun Shin <d.j.shin@samsung.com>
Reviewed-by: Hyunju Ahn <hyunju.ahn@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:35 -08:00
Laura Abbott
23b04db439 gpu: ion: Restrict access to CP heap
On certain targets, the CP heap should only be used
for secure allocations. Add a check to determine which
targets are allowed to make non-secure allocations from
the CP heap type. Targets with this restriction will
fall back to an alternate heap.

Change-Id: Ieaa9e76cbf2dc3ea1da6f4e75a4de903c39a3077
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:34 -08:00
Laura Abbott
783b854fbd defconfig: Enable lowmemorykiller autodetect option
Recent ABI changes have made the lowmemorykiller less effective.
Enable ANDROID_LOW_MEMORY_KILLER_AUTODETECT_OOM_ADJ_VALUE to
detect when the old ABI is being used and convert it to the new
ABI.

Change-Id: If47113e5fc7706bcd6bb144c591e9935b0c5115a
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:33 -08:00
Liam Mark
2da9b9c9d3 android/lowmemorykiller: Check all tasks for death pending
The lowmemorykiller uses the TIF_MEMDIE flag to help ensure it doesn't
kill another task until the memory from the previously killed task has
been returned to the system.

However the lowmemorykiller does not currently look at tasks that do
not have a task->mm, but just because a process doesn't have a task->mm
does not mean that the task's memory has been fully returned to the
system yet.

In order to prevent the lowmemorykiller from unnecessarily killing
multiple applications in a row, the lowmemorykiller has been changed to
ensure that previously killed tasks are no longer in the process list
before attempting to kill another task.

Change-Id: I7d8a8fd39ca5625e6448ed2efebfb621f6e93845
Signed-off-by: Liam Mark <lmark@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:30 -08:00
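
A hedged sketch of the described check (the patch body isn't shown
here): before selecting a new victim, scan the full task list, not just
tasks with an mm, for a previously killed task that is still exiting:

    struct task_struct *tsk;

    rcu_read_lock();
    for_each_process(tsk) {
        /* TIF_MEMDIE marks a task already chosen to die; if any such
         * task is still on the list, its memory is not yet back */
        if (test_tsk_thread_flag(tsk, TIF_MEMDIE)) {
            rcu_read_unlock();
            return 0;   /* retry on a later shrinker pass */
        }
    }
    rcu_read_unlock();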
Laura Abbott
2df1e5cd2c msm: Enable MM heap to use CMA
Enable the MM heap on 8960, 8930, and 8064 to use CMA
instead of carved out memory. All allocations will come
from memory reserved as CMA instead of carved out memory.

Change-Id: I6190144564ce263fdad9ec74a85cfefca6089a0d
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:28 -08:00
Laura Abbott
50486cbd19 arm: select HAVE_DMA_CONTIGUOUS
During a recent merge conflict, this option was removed by
mistake. Add it back.

Change-Id: Ic1a806d90a0d1ae24ed04f938e411fb1ebf4fe08
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:27 -08:00
Laura Abbott
256012293e defconfig: msm8960: Enable CMA
Enable the option to turn on the Contiguous Memory Allocator
(CMA) feature. This will allow clients to allocate large chunks
of memory without having to remove it from the system.

Change-Id: Ifc18c34d15f94e5d113d576bc32dbb0cf78f9c49
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:27 -08:00
Laura Abbott
3b77d8bf4b fs/buffer.c: Revoke LRU when trying to drop buffers
When a buffer is added to the LRU list, a reference is taken which is
not dropped until the buffer is evicted from the LRU list. This is the
correct behavior; however, this LRU reference will prevent the buffer
from being dropped. This means that the buffer can't actually be dropped
until it is selected for eviction. There's no bound on the time spent
on the LRU list, which means that the buffer may be undroppable for
very long periods of time. Given that migration involves dropping
buffers, the associated page is now unmigratable for long periods of
time as well. CMA relies on being able to migrate a specific range
of pages, so these types of failures make CMA significantly
less reliable, especially under high filesystem usage.

Rather than waiting for the LRU algorithm to eventually kick out
the buffer, explicitly remove the buffer from the LRU list when trying
to drop it. There is still the possibility that the buffer
could be added back on the list, but that indicates the buffer is
still in use and would probably have other 'in use' indicators to
prevent it from being dropped.

Change-Id: I253f4ee2069e190c1115afc421dadd27a7fa87dc
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:27 -08:00
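
The patch body isn't shown; a sketch modeled on the existing
invalidate_bh_lrus() machinery in fs/buffer.c illustrates dropping the
per-cpu LRU reference on demand (function names are assumptions):

    /* on each CPU, search the per-cpu buffer_head LRU for the buffer
     * being dropped and release the LRU's reference to it */
    static void __evict_bh_lru(void *arg)
    {
        struct bh_lru *b = &get_cpu_var(bh_lrus);
        struct buffer_head *bh = arg;
        int i;

        for (i = 0; i < BH_LRU_SIZE; i++) {
            if (b->bhs[i] == bh) {
                brelse(b->bhs[i]);
                b->bhs[i] = NULL;
                break;
            }
        }
        put_cpu_var(bh_lrus);
    }

    /* assumed entry point, called while trying to drop a buffer */
    static void evict_bh_lrus(struct buffer_head *bh)
    {
        on_each_cpu(__evict_bh_lru, bh, 1);
    }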
Laura Abbott
324e7537aa 8930: Add support for using CMA with ion heaps
Adjust the memory reservation/placing code in the board
file to account for heaps that might use CMA. This includes
both dedicated CMA heaps and cp heaps marked as using CMA.

Change-Id: I71a96e93a974d9a1adcbc4b9d0dc172740f7f299
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:26 -08:00
Laura Abbott
3d23fad401 8064: Add support for using CMA with ion heaps
Adjust the memory reservation/placing code in the board
file to account for heaps that might use CMA. This includes
both dedicated CMA heaps and cp heaps marked as using CMA.

Change-Id: Iabf2e8f5e8d775a1a265380f1fa6b709591ba11d
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:26 -08:00
Laura Abbott
ca2ad08656 8960: Add support for using CMA with Ion heaps
Adjust the memory reservation/placing code in the board
file to account for heaps that might use CMA. This includes
both dedicated CMA heaps and cp heaps marked as using CMA.

Change-Id: Ifb715bc2d4bf7fbba78a7201a68ccf3ec93c38b2
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:25 -08:00
Laura Abbott
ff7c655e76 msm: Rip out fmem related memory adjustments
fmem is deprecated. Get rid of the special handling for
fmem in memory reservation code.

Change-Id: I24dc24cee364d992cbbe08d81851987d721a587b
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:25 -08:00
Laura Abbott
8acc4f0080 gpu: ion: Add support for CMA allocations in cp heap
Extend the cp heap to allow memory to be allocated from
the contiguous memory allocator (CMA) instead of from
the standard carveout region. The option to use CMA or regular
carveout memory is configured via a parameter in platform
data.

Change-Id: I9f3a169325c44230dde1d91a9cdcf613ad291df2
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:25 -08:00
Laura Abbott
ea3ab1acc8 gpu: ion: Rename request/release region
request_region and release_region are macro names defined
in the linux kernel. Under some circumstances, the C compiler
can't differentiate between the macro name and the field name.
Changing the field name is the easiest way to prevent this
problem.

Change-Id: I2c8d61bdaa20e332e0215f0bb3237e8332f0f3ac
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:24 -08:00
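
For context, the collision works like this: include/linux/ioport.h
defines request_region() as a function-like macro,

    #define request_region(start,n,name) \
            __request_region(&ioport_resource, (start), (n), (name), 0)

so a call through a same-named ops field, such as
heap->ops->request_region(...), can be macro-expanded before the member
lookup and fail to compile.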
Laura Abbott
547237bed6 gpu: ion: Factor out common code on first alloc/last free
Currently, fmem must be transitioned on first allocation/
last free. Going forward, there may be other use cases to
call functions on first allocation/last free. Factor some of
this code out to avoid duplication.

Change-Id: I36472333222c497c5b4c888394b4bd277c146249
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:24 -08:00
Laura Abbott
142cd31a3e gpu: ion: Add msm specific extensions to CMA heap
A number of changes have been made to the Ion framework for the
msm target. Add the necessary changes on top of the CMA heap to
allow the CMA heap to be fully utilized.

Change-Id: Ie006dcd4c41481e4d914c67bafbf42d1afdb1a76
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:23 -08:00
Benjamin Gaignard
1382bb05d7 add CMA heap
Add a new heap type, ION_HEAP_TYPE_DMA, where allocation is done with the
dma_alloc_coherent API. The device's coherent_dma_mask must be set to
DMA_BIT_MASK(32). The ion_platform_heap private field is used to retrieve
the device linked to CMA; if it is NULL, the default CMA area is used.
ion_cma_get_sgtable is a copy of the dma_common_get_sgtable function,
which should be in kernel 3.5.

Change-Id: I9ae54a3a021cb3513c2b0e8c58b69f3ae118561b
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
[lauraa: Fix context in ion_priv.h/ion.h and omit Makefile change for now]
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:23 -08:00
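
A trimmed sketch of the allocation path this heap adds, following the
description above (the bookkeeping struct and error handling are
simplified assumptions):

    struct ion_cma_buffer_info {      /* assumed bookkeeping struct */
        void *cpu_addr;
        dma_addr_t handle;
    };

    static int ion_cma_allocate(struct ion_heap *heap,
                                struct ion_buffer *buffer,
                                unsigned long len, unsigned long align,
                                unsigned long flags)
    {
        /* the device linked to a CMA area; NULL selects the default */
        struct device *dev = heap->priv;
        struct ion_cma_buffer_info *info;

        info = kzalloc(sizeof(*info), GFP_KERNEL);
        if (!info)
            return -ENOMEM;

        info->cpu_addr = dma_alloc_coherent(dev, len, &info->handle,
                                            GFP_KERNEL);
        if (!info->cpu_addr) {
            kfree(info);
            return -ENOMEM;
        }

        buffer->priv_virt = info;
        return 0;
    }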
Benjamin Gaignard
522629b432 add private field in ion_heap and ion_platform_heap structure
Copy the private field from the platform configuration to the internal
heap structure.

Change-Id: Ia7571d88fc2f72f5d655fb6f6b54fde389d96c85
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
[laura: Rebase context fixes]
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:23 -08:00
Benjamin Gaignard
bf8847b852 fix ion_platform_data definition
Fix ion_platform_heap so that it is used in the usual way in board
configuration files.

Change-Id: I8686108a9fe0aa2ba9f9c84990d555f947f78f86
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
[lauraa: Fixup msm board files]
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:22 -08:00
Laura Abbott
e788c2d1c4 arm: dma-mapping: Add APIs for other memory types
Currently, the only attributes supported for DMA memory are
writecombine and coherent. Both of these allow speculative
prefetches to occur. For certain use cases (e.g. content
protection) there are requirements to disallow prefetching.
Relatedly, there may be cases where buffering is not enough
for high performance use cases and the full cache should
be used. Add appropriate APIs for the strongly ordered and
cached memory types for the needed use cases.

Change-Id: Ibe17b3d002f9615e2cb34183f47f6d1bcd045611
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:22 -08:00
Laura Abbott
c1386f095a common: DMA-mapping: Add strongly ordered memory attribute
Strongly ordered memory is occasionally needed for some DMA
allocations for specialized use cases. Add the corresponding
DMA attribute.

Change-Id: Idd9e756c242ef57d6fa6700e51cc38d0863b760d
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:21 -08:00
Marek Szyprowski
0f53e5abb2 common: DMA-mapping: add DMA_ATTR_NO_KERNEL_MAPPING attribute
This patch adds the DMA_ATTR_NO_KERNEL_MAPPING attribute, which lets the
platform avoid creating a kernel virtual mapping for the allocated
buffer. On some architectures creating such a mapping is a non-trivial
task and consumes very limited resources (like kernel virtual address
space or dma consistent address space). Buffers allocated with this
attribute can only be passed to user space by calling dma_mmap_attrs().

Change-Id: Id12b93fa2b02d5f3d01ab48eb61cda79f533d695
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:21 -08:00
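
Usage follows the dma_attrs pattern of this kernel generation; a minimal
sketch, with the mmap handler as the only consumer of the buffer:

    static int no_mapping_example(struct device *dev,
                                  struct vm_area_struct *vma, size_t size)
    {
        DEFINE_DMA_ATTRS(attrs);
        dma_addr_t dma_handle;
        void *cookie;

        dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);
        /* the return value is an opaque cookie, not a kernel virtual
         * address - do not dereference it */
        cookie = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
                                 &attrs);
        if (!cookie)
            return -ENOMEM;

        /* the only way to access the buffer is to hand it to user space */
        return dma_mmap_attrs(dev, vma, cookie, dma_handle, size, &attrs);
    }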
Laura Abbott
a5708bc035 arm: dma: Allow CMA pages to not have a kernel mapping.
Currently, there are use cases where not having any kernel
mapping is required; if the CMA memory needs to be used as
a pool which can have both cached and uncached mappings we
need to remove the mapping to avoid the multiple mapping
problem. Extend the dma APIs to use DMA_ATTR_NO_KERNEL_MAPPING
with CMA. This doesn't end up saving any virtual address space,
but the mapping will still not be present.

Change-Id: I64d21250abbe615c43e2b5b1272ee2b6d106705a
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:21 -08:00
Laura Abbott
4d6e1c5965 arm: dma: Expand the page protection attributes
Currently, the decision on which page protection to use
is limited to writecombine and coherent. Expand to include
strongly ordered memory and non-consistent memory.

Change-Id: I7585fe3ce804cf321a5585c3d93deb7a7c95045c
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:20 -08:00
Marek Szyprowski
d1fd8d785d ARM: dma-mapping: fix buffer chunk allocation order
IOMMU-aware dma_alloc_attrs() implementation allocates buffers in
power-of-two chunks to improve performance and take advantage of large
page mappings provided by some IOMMU hardware. However the current code,
due to a subtle bug, allocated those chunks in smallest-to-largest order,
which completely killed all the advantages of using larger-than-page
chunks. If a 4KiB chunk was mapped as the first chunk, the consecutive
chunks were not aligned correctly to the power-of-two matching their
size, and IOMMU drivers were not able to use internal mappings of any
size other than 4KiB (the largest common denominator of alignment and
chunk size).

This patch fixes this issue by changing to the correct largest-to-smallest
chunk size allocation sequence.

Change-Id: I5cc9c12322e832951faf3bba6387946c890e0ed4
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:20 -08:00
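
A condensed sketch of the corrected ordering, per the description above:
try the largest power-of-two order that still fits the remaining page
count, then fall back to smaller orders.

    /* after the fix: __fls(count) picks the largest chunk first, so
     * each chunk stays naturally aligned to its own size in dma space */
    while (count) {
        int j, order = __fls(count);

        pages[i] = alloc_pages(gfp | __GFP_NOWARN, order);
        while (!pages[i] && order)
            pages[i] = alloc_pages(gfp | __GFP_NOWARN, --order);
        if (!pages[i])
            goto error;

        if (order)
            split_page(pages[i], order);
        for (j = 1; j < (1 << order); j++)
            pages[i + j] = pages[i] + j;

        i += 1 << order;
        count -= 1 << order;
    }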
Sachin Kamat
7a2356ef59 ARM: dma-mapping: Add missing static storage class specifier
Fixes the following sparse warnings:
arch/arm/mm/dma-mapping.c:231:15: warning: symbol 'consistent_base' was not
declared. Should it be static?
arch/arm/mm/dma-mapping.c:326:8: warning: symbol 'coherent_pool_size' was not
declared. Should it be static?

Change-Id: I90e2ccdc4d132a37ebcd8ae7a8441ad3fede55bf
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:20 -08:00
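
The fix itself is just adding the storage class the two file-local
definitions lacked, roughly (initializers elided, types assumed):

    static unsigned long consistent_base;
    static size_t coherent_pool_size;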
Vitaly Andrianov
2382e50268 ARM: dma-mapping: use PMD size for section unmap
The dma_contiguous_remap() function clears existing section maps using
the wrong size (PGDIR_SIZE instead of PMD_SIZE).  This is a bug which
does not affect non-LPAE systems, where PGDIR_SIZE and PMD_SIZE are the same.
On LPAE systems, however, this bug causes the kernel to hang at this point.

This fix has been tested on both LPAE and non-LPAE kernel builds.

Change-Id: I63650057864907f1a2d8eed7257665cb2f648bbb
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:19 -08:00
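
A hedged sketch of the loop in dma_contiguous_remap() after the fix;
stepping by PMD_SIZE clears every section entry, whereas PGDIR_SIZE
steps skip entries on LPAE (where PGDIR_SIZE spans several PMDs):

    unsigned long addr;

    /* start/end bound the CMA region being remapped */
    for (addr = __phys_to_virt(start); addr < __phys_to_virt(end);
         addr += PMD_SIZE)
        pmd_clear(pmd_off_k(addr));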
Marek Szyprowski
4f6b3dc624 ARM: dma-mapping: add support for IOMMU mapper
This patch adds a complete implementation of the DMA-mapping API for
devices which have IOMMU support.

This implementation tries to optimize dma address space usage by remapping
all possible physical memory chunks into a single dma address space chunk.

DMA address space is managed on top of the bitmap stored in the
dma_iommu_mapping structure stored in device->archdata. Platform setup
code has to initialize parameters of the dma address space (base address,
size, allocation precision order) with the arm_iommu_create_mapping()
function.
To reduce the size of the bitmap, all allocations are aligned to the
specified order of base 4 KiB pages.

dma_alloc_* functions allocate physical memory in chunks, each chunk
allocated with the alloc_pages() function, to avoid failing if the
physical memory gets fragmented. In the worst case the allocated buffer
is composed of 4 KiB page chunks.

The dma_map_sg() function minimizes the total number of dma address
space chunks by merging physical memory chunks into one larger dma
address space chunk. If the requested chunk (scatter list entry)
boundaries match physical page boundaries, most dma_map_sg() calls will
result in creating only one chunk in the dma address space.

dma_map_page() simply creates a mapping for the given page(s) in the dma
address space.

All dma functions also perform the required cache operations like their
counterparts from the arm linear physical memory mapping version.

This patch contains code and fixes kindly provided by:
- Krishna Reddy <vdumpa@nvidia.com>,
- Andrzej Pietrasiewicz <andrzej.p@samsung.com>,
- Hiroshi DOYU <hdoyu@nvidia.com>

Change-Id: I4a9b155bef4d5f2b8a8dfe87751d82960b09b253
[lauraa: context conflicts]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:19 -08:00
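
Platform setup usage, per the description above (base address, size, and
order are placeholder values):

    struct dma_iommu_mapping *mapping;
    int ret;

    /* a 128 MiB dma address space starting at 0x80000000, managed at
     * single-page (order 0) granularity */
    mapping = arm_iommu_create_mapping(&platform_bus_type,
                                       0x80000000, SZ_128M, 0);
    if (IS_ERR(mapping))
        return PTR_ERR(mapping);

    ret = arm_iommu_attach_device(dev, mapping);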
Marek Szyprowski
da2b2117de ARM: dma-mapping: use alloc, mmap, free from dma_ops
This patch converts dma_alloc/free/mmap_{coherent,writecombine}
functions to use generic alloc/free/mmap methods from dma_map_ops
structure. A new DMA_ATTR_WRITE_COMBINE DMA attribute has been
introduced to implement the writecombine methods.

Change-Id: I2709e3ffc97546df2f505d555b29c3bb8148daec
[lauraa: context conflicts]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:18 -08:00
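
After this change the writecombine variants reduce to thin wrappers over
the generic attrs-based entry points; a sketch of the pattern:

    static inline void *dma_alloc_writecombine(struct device *dev,
                                               size_t size,
                                               dma_addr_t *dma_handle,
                                               gfp_t gfp)
    {
        DEFINE_DMA_ATTRS(attrs);
        dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
        return dma_alloc_attrs(dev, size, dma_handle, gfp, &attrs);
    }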
Marek Szyprowski
5e0ee00c15 ARM: dma-mapping: remove redundant code and do the cleanup
This patch just performs a global cleanup in DMA mapping implementation
for ARM architecture. Some of the tiny helper functions have been moved
to the caller code, some have been merged together.

Change-Id: I60b3450bd1180ea007e7326a63762d3a44b3c25d
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:18 -08:00
Marek Szyprowski
0e8fe4a111 ARM: dma-mapping: move all dma bounce code to separate dma ops structure
This patch removes dma bounce hooks from the common dma mapping
implementation on ARM architecture and creates a separate set of
dma_map_ops for dma bounce devices.

Change-Id: I42d7869b4f74ffa5f36a4a7526bc0c55aaf6bab7
[lauraa: conflicts due to code cruft]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:18 -08:00
Marek Szyprowski
3f47a1438c ARM: dma-mapping: implement dma sg methods on top of any generic dma ops
This patch converts all dma_sg methods to be generic (independent of the
current DMA mapping implementation for ARM architecture). All dma sg
operations are now implemented on top of respective
dma_map_page/dma_sync_single_for* operations from dma_map_ops structure.

Before this patch there were custom methods for all scatter/gather
related operations. They iterated over the whole scatter list and called
cache related operations directly (which in turn checked whether the dma
bounce code was in use and called the respective version). This patch
changes them not to use such a shortcut. Instead, it provides a similar
loop over the scatter list and calls methods from the device's
dma_map_ops structure. This enables us to use device-dependent
implementations of cache related operations (direct linear or dma
bounce) depending on the provided dma_map_ops structure.

Change-Id: Icbd72d1e4fed6d7478b98bb4ead120c02dd26588
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:17 -08:00
Marek Szyprowski
b5702dc251 ARM: dma-mapping: use asm-generic/dma-mapping-common.h
This patch modifies dma-mapping implementation on ARM architecture to
use common dma_map_ops structure and asm-generic/dma-mapping-common.h
helpers.

Change-Id: I574a3b5ac883cd5d9beb79deef8f5cb44fd83296
[lauraa: conflicts due to code cruft/context changes]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:17 -08:00
Mitchel Humpherys
42fab316dd ion: isolate msm-specific ion extensions
This is another step in the process of isolating msm-specific ion
features from stock ion.

Change-Id: I3a437dbc618cb70859126c81596373338ad06500
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:17 -08:00
Mitchel Humpherys
5ebf0bb53c ion: change ion kernel map function to not take flags argument
Buffer flags are going to be specified at allocation time rather than
map time. This removes the flags argument from the ion kernel map
function.

Change-Id: Ib983ecd0dcd7befb36287ae7037c71d4ca475f90
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2013-03-07 15:23:16 -08:00