Commit Graph

47 Commits

Author SHA1 Message Date
LuK1337 4e71469c73 Merge tag 'LA.BR.1.3.6-03510-8976.0' into HEAD
Change-Id: Ie506850703bf9550ede802c13ba5f8c2ce723fa3
2017-04-18 12:11:50 +02:00
LuK1337 fc9499e55a Import latest Samsung release
* Package version: T713XXU2BQCO

Change-Id: I293d9e7f2df458c512d59b7a06f8ca6add610c99
2017-04-18 03:43:52 +02:00
Rohit Vaswani 833bf4f64a mm: cma: fix incorrect type conversion for size during dma allocation.
This was found during a userspace fuzzing test, when a large dma cma
allocation is made by a driver (like ion) on behalf of userspace.

  show_stack+0x10/0x1c
  dump_stack+0x74/0xc8
  kasan_report_error+0x2b0/0x408
  kasan_report+0x34/0x40
  __asan_storeN+0x15c/0x168
  memset+0x20/0x44
  __dma_alloc_coherent+0x114/0x18c

Change-Id: Ia0c4def2ec27ec56e9faf43ed1b8012381e3b253
Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: 67a2e213e7e937c41c52ab5bc46bf3f4de469f6e
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
[shashim@codeaurora.org: replace %p by %pK in print format]
Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
2016-12-19 22:18:43 -08:00
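
The fix above is about a size being narrowed to a 32-bit int on the allocation path. Below is a minimal userspace C sketch of that narrowing, not the kernel patch itself; the 3 GiB request and the 4 KiB page shift are illustrative assumptions.

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
      size_t size = 3UL * 1024 * 1024 * 1024;   /* 3 GiB request, e.g. via ion */
      int narrowed = (int)size;                 /* pre-fix: size narrowed to int;
                                                   conversion above INT_MAX is
                                                   implementation-defined and
                                                   typically wraps negative */

      printf("size_t: %zu bytes -> %zu pages\n", size, size >> 12);
      printf("int:    %d bytes (INT_MAX is %d)\n", narrowed, INT_MAX);
      return 0;
  }
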
Shiraz Hashim 819c8636fc cma: fix alignment to PMD_SIZE for fixup region
PMD_SIZE alignment is required to support
"linux,fixup-reserve-region"; enforce that alignment
(a rounding sketch follows this entry).

Change-Id: I67302a5dfd7738df93a63931b58b87686edb9a75
Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
2015-06-18 16:41:56 +05:30
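
A small userspace sketch of the rounding this kind of alignment fix implies; the 2 MiB PMD_SIZE and the example base/size are assumptions for illustration, not values taken from the patch.

  #include <stdio.h>

  #define PMD_SIZE          (2UL << 20)                    /* assumed 2 MiB */
  #define ALIGN_UP(x, a)    (((x) + (a) - 1) & ~((a) - 1))
  #define ALIGN_DOWN(x, a)  ((x) & ~((a) - 1))

  int main(void)
  {
      unsigned long base = 0x8fe40000UL, size = 0x01340000UL; /* example region */
      unsigned long new_base = ALIGN_UP(base, PMD_SIZE);
      unsigned long new_end  = ALIGN_DOWN(base + size, PMD_SIZE);

      printf("fixup region: base 0x%lx size 0x%lx\n",
             new_base, new_end - new_base);
      return 0;
  }
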
Shiraz Hashim b0d31ee0f8 cma: add provision to adjust reserved area
For some use cases it is not known beforehand how much
cma area must be reserved, or the reserved region might
need to be adjusted to support varying use cases. In such
cases maintaining different cma region sizes in the
device tree is difficult.

Introduce an optional cma property,
"linux,fixup-reserve-region", which works in tandem with
"linux,reserve-region" and shrinks the cma area on the
first successful allocation, returning the additional
pages from the reserved region back to the system.

The fixup region requires its base and size to be aligned
to SECTION_SIZE. If they are not, such regions simply
fall back to being carve-out regions.

Change-Id: I1f71baf146b978398946f466bbba2f192560593b
Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
2015-06-04 10:12:44 +05:30
Susheel Khiani 7376575296 cma: Increase number of retries for allocation
It was observed that CMA pages sometimes get pinned
by background (BG) processes that have been scheduled
out in their exit path. Since BG processes have lower
priority, they get less scheduler time and therefore
take longer to free the CMA pages.

So instead of failing the allocation and directly
returning an error on the CMA allocation path, increase
the number of retries to 2 to see whether the process
that was in its exit path and about to release the pages
has been able to do so.

Change-Id: I693228d36186ab17480f36b492ada91ab7c262d8
Signed-off-by: Susheel Khiani <skhiani@codeaurora.org>
2015-02-26 16:26:10 +05:30
Susheel Khiani 30f491c978 cma: Add 100ms delay before retrying for CMA allocation
CMA allocation sometimes fails because a page is
momentarily pinned by some other process, i.e. its
reference count page->_count > 1, as a result of
which we are not able to migrate the page out of
the CMA area. When this happens, instead of failing
the allocation and directly returning an error,
sleep for 100ms and re-scan the CMA area to see if
the page that was pinned has since been freed
(a sketch of the retry policy follows this entry).

Change-Id: Ie9b92002f38fd44cf28aee32a184c57c26e59437
Signed-off-by: Susheel Khiani <skhiani@codeaurora.org>
2015-02-05 15:20:06 -08:00
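
The two entries above add a retry count and a 100ms back-off to the CMA allocation path. Below is a userspace sketch of that policy; try_cma_alloc() and its -EBUSY behaviour are stand-ins, not the kernel implementation.

  #include <errno.h>
  #include <stdio.h>
  #include <unistd.h>

  #define CMA_ALLOC_RETRIES   2               /* "increase retries" entry */
  #define CMA_RETRY_DELAY_US  (100 * 1000)    /* 100ms, "add delay" entry */

  /* stand-in allocator: fails with -EBUSY while pages are still pinned */
  static int try_cma_alloc(int attempt)
  {
      return attempt < 2 ? -EBUSY : 0;        /* pretend the pin drops later */
  }

  int main(void)
  {
      int ret = -EBUSY;

      for (int attempt = 0; attempt <= CMA_ALLOC_RETRIES; attempt++) {
          ret = try_cma_alloc(attempt);
          if (ret != -EBUSY)
              break;                          /* success or hard error: stop */
          usleep(CMA_RETRY_DELAY_US);         /* let the exiting task free pages */
      }
      printf("allocation %s\n", ret ? "failed" : "succeeded");
      return 0;
  }
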
Chintan Pandya 5b2530f661 dma-contiguous: Re-order the error handling sequence
When a CMA allocation fails because of pending
signals, we clear the pfn and the bitmap before
returning. Clearing the bitmap still uses the pfn,
so clear the pfn only after the bitmap has been
cleared.

CRs-fixed: 772299
Change-Id: I94e566181f75b7c8ebdab7d29437e5fca5f3fbdc
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
2014-12-17 12:54:54 +05:30
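
A minimal sketch of the ordering fix in the entry above: the bitmap release consumes the pfn, so the pfn may only be cleared afterwards. The function and variable names are illustrative.

  #include <stdio.h>

  static void clear_cma_bitmap(unsigned long pfn, int count)
  {
      /* locates the bits to clear from 'pfn', so it must still be valid */
      printf("clearing %d bits starting at pfn 0x%lx\n", count, pfn);
  }

  int main(void)
  {
      unsigned long pfn = 0x81000;

      /* broken (pre-fix) order would be:  pfn = 0; clear_cma_bitmap(pfn, 8); */
      clear_cma_bitmap(pfn, 8);   /* fixed order: release the bitmap first... */
      pfn = 0;                    /* ...then zero the pfn reported to the caller */
      return 0;
  }
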
Neeti Desai 9ccb66aade cma: Add support for cma_get_size
HLOS assumes the size of CMA-based heaps is equal to the
size specified with the "linux-contiguous-region" property.
This is not always true: CMA might reserve more memory
to take care of alignment. This causes the secure world to
lock down more memory than it is supposed to, thus denying
access to clients. The cma_get_size api returns the actual
size reserved for these heaps, thus preventing client
access issues (a sketch follows this entry).

CRs-Fixed: 737584
Change-Id: Idf5f3587c0a2b1a300b2102079b6796323254cc8
Signed-off-by: Neeti Desai <neetid@codeaurora.org>
2014-10-14 12:25:13 -07:00
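
A sketch of a per-device size query along the lines described above; the structures, the 4 KiB page size and the exact signature are assumptions made for illustration, not the kernel API.

  #include <stdio.h>

  struct cma    { unsigned long base_pfn; unsigned long count; };
  struct device { struct cma *cma_area; };

  /* returns the size actually reserved (alignment padding included) */
  static unsigned long cma_get_size_sketch(const struct device *dev)
  {
      return dev->cma_area ? dev->cma_area->count * 4096UL : 0;
  }

  int main(void)
  {
      struct cma region = { .base_pfn = 0x80000, .count = 0x5000 }; /* padded up */
      struct device dev = { .cma_area = &region };

      printf("reserved: %lu MiB\n", cma_get_size_sketch(&dev) >> 20);
      return 0;
  }
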
Rob Herring 51001775c3 of/fdt: update of_get_flat_dt_prop in prep for libfdt
Make of_get_flat_dt_prop arguments compatible with libfdt fdt_getprop
call in preparation to convert FDT code to use libfdt. Make the return
value const and the property length ptr type an int.

Signed-off-by: Rob Herring <robh@kernel.org>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Tested-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Stephen Chivers <schivers@csc.com>
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Git-commit: 9d0c4dfedd96ee54fc075b16d02f82499c8cc3a6
[joonwoop@codeaurora.org: updated drivers/base/dma-contiguous.c to use 'const'
 qualifier.  dropped arch/arm/mach-exynos/exynos.c.]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2014-08-15 11:45:34 -07:00
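
A userspace mock of the post-change calling convention (const return value, int-typed length), mirroring libfdt's fdt_getprop(); the property table and its values are illustrative.

  #include <stdio.h>
  #include <string.h>

  struct mock_prop { const char *name; const void *val; int len; };

  static const unsigned int reg_cells[2] = { 0x80000000u, 0x01000000u };
  static const struct mock_prop props[] = {
      { "reg", reg_cells, sizeof(reg_cells) },
  };

  /* post-change shape: const return value, int-typed length pointer */
  static const void *of_get_flat_dt_prop(unsigned long node, const char *name,
                                         int *size)
  {
      (void)node;
      for (unsigned int i = 0; i < sizeof(props) / sizeof(props[0]); i++) {
          if (strcmp(props[i].name, name) == 0) {
              if (size)
                  *size = props[i].len;
              return props[i].val;
          }
      }
      return NULL;
  }

  int main(void)
  {
      int len = 0;
      const unsigned int *reg = of_get_flat_dt_prop(0, "reg", &len);

      if (reg)
          printf("reg: %d bytes, first cell 0x%x\n", len, reg[0]);
      return 0;
  }
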
Laura Abbott 90a838a8ab cma: Return 0 on error path
If a free CMA region cannot be found because every one is busy,
an error needs to be propagated up by returning a zero pfn.
Ensure the pfn is actually zero when returning an error.

Change-Id: I0d5a66a25c4483bf0f219cec1d7009239518f27a
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2014-04-22 14:55:05 -07:00
Laura Abbott 265862e70d cma: Call dma_contiguous_early_fixup after allocation
Commit 42e668f ("cma: Delay non-placed memblocks until after all
allocations") delayed calling memblock_alloc until later but didn't
move the fixup. Make sure to call the fixup after the memblock
allocation.

Change-Id: I305c05f1b1e0b6aeb462746e91b30560ddf5b934
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2014-04-21 17:13:50 -07:00
Chintan Pandya a8c645cb00 dma-contiguous: Return 'zero' pfn in case of error
When the CMA allocator gets an error return from the page
allocator framework it bails out, except in the -EBUSY case.
The caller depends on the 'pfn' to confirm whether the
allocation succeeded, so return 0 in those error cases.

Change-Id: Ica4e04a9f9f18b1a29035ba2bae9deecfd68a5e8
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
2014-04-18 13:45:51 +05:30
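
The entry above and the earlier "cma: Return 0 on error path" change enforce the same convention: the caller tests pfn == 0 for failure. Below is a userspace sketch of that convention with a stubbed range allocator; the ranges and error choices are illustrative.

  #include <errno.h>
  #include <stdio.h>

  /* stand-in inner allocator: -EBUSY means "try the next range",
     any other error means give up */
  static int alloc_range(unsigned long start, int *err)
  {
      *err = (start < 0x90000UL) ? -EBUSY : -ENOMEM;
      return *err;
  }

  static unsigned long cma_alloc_pfn(void)
  {
      for (unsigned long start = 0x80000UL; start < 0xa0000UL; start += 0x8000UL) {
          int err;

          if (alloc_range(start, &err) == 0)
              return start;           /* success: a non-zero pfn */
          if (err != -EBUSY)
              return 0;               /* hard error: hand back 0, not 'start' */
          /* -EBUSY: keep scanning the region */
      }
      return 0;                       /* nothing free: also 0 */
  }

  int main(void)
  {
      printf("pfn = 0x%lx\n", cma_alloc_pfn());
      return 0;
  }
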
Laura Abbott 42e668f427 cma: Delay non-placed memblocks until after all allocations
CMA is now responsible for almost all memory reservation/removal.
Some regions are at fixed locations, some are placed dynamically.
We need to place all fixed regions before trying to place
dynamic regions, to avoid overlap. Additionally, allow an
architecture callback after all removals/fixed placements have
happened, to potentially update any relevant limits.

Change-Id: Iaaffe60445ef44d432f0d87875ce2b292b717cc7
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2014-04-08 09:51:04 -07:00
Laura Abbott 1e9802fbc9 cma: Drop the right mutex
The lock that was locked was cma->lock, not cma_mutex. Drop
the right one when breaking out of the loop.

Change-Id: I0a1831b23613c5220795fe2a63f9db7439268c3f
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2014-04-02 20:25:51 -07:00
Laura Abbott 439e55f256 cma: Make locking finer grained
CMA locking is currently very coarse. The cma_mutex protects both
the bitmap and avoids concurrency with alloc_contig_range. There
are several situations which may result in a deadlock on the CMA
mutex currently, mostly involving AB/BA situations with alloc and
free. Fix this issue by protecting the bitmap with a mutex per CMA
region and use the existing mutex for protecting against concurrency
with alloc_contig_range.

Change-Id: I6863ba7ab7fae07c68fee23b0aa4c869244fe2a1
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2014-03-20 14:21:17 -07:00
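
A userspace sketch of the split locking described above: a per-region lock guards only the bitmap, while a single global lock still serializes the migration step, and free never takes the global lock (which is also what the shrinker-deadlock fix further down the log relies on). The structure and the one-word bitmap are illustrative.

  #include <pthread.h>
  #include <stdint.h>
  #include <stdio.h>

  static pthread_mutex_t cma_mutex = PTHREAD_MUTEX_INITIALIZER; /* global: migration only */

  struct cma_region {
      pthread_mutex_t lock;           /* per-region: protects the bitmap only */
      uint64_t bitmap;                /* toy one-word bitmap, 1 bit per "page" */
  };

  static int region_alloc_bit(struct cma_region *cma)
  {
      int bit = -1;

      pthread_mutex_lock(&cma->lock);
      for (int i = 0; i < 64; i++) {
          if (!(cma->bitmap & (1ULL << i))) {
              cma->bitmap |= 1ULL << i;
              bit = i;
              break;
          }
      }
      pthread_mutex_unlock(&cma->lock);
      if (bit < 0)
          return -1;

      pthread_mutex_lock(&cma_mutex); /* only the migration step is serialized globally */
      /* ... alloc_contig_range()-style work would happen here ... */
      pthread_mutex_unlock(&cma_mutex);
      return bit;
  }

  static void region_free_bit(struct cma_region *cma, int bit)
  {
      pthread_mutex_lock(&cma->lock); /* free never takes the global lock */
      cma->bitmap &= ~(1ULL << bit);
      pthread_mutex_unlock(&cma->lock);
  }

  int main(void)
  {
      struct cma_region r = { PTHREAD_MUTEX_INITIALIZER, 0 };
      int bit = region_alloc_bit(&r);

      printf("allocated bit %d\n", bit);
      region_free_bit(&r, bit);
      return 0;
  }
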
Laura Abbott 38896aaa63 cma: Make the default CMA region not reserved by default
Due to a bug in code logic, the default CMA region is
reserved when CONFIG_CMA_RESERVE_DEFAULT_AREA is NOT set
instead of when it is set. Fix this logic.

Change-Id: I476594c1e3745386741f2aba1b978a436b60c7a4
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2014-03-11 11:35:50 -07:00
Linux Build Service Account bdea643fee Merge "cma: Drop alignment requirements for regions not in system" 2014-03-04 09:02:33 -08:00
Linux Build Service Account 85adcf19f1 Merge "cma: Add support for status in DT nodes" 2014-03-04 09:02:29 -08:00
Laura Abbott c0c4dd6136 cma: Drop alignment requirements for regions not in system
CMA requires that regions be aligned to page block order or higher
to ensure migration types can be changed for whole regions. This
is not applicable if the memory is removed from the system completely.
Keep the alignment requirement at PAGE_SIZE if the memory is outside
the system allocator. Note that if there are alignment requirements
these can still be set up by manually aligning the base/size.

Change-Id: I316008095469492c150e8b69bb20b369579e3a36
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2014-02-26 14:08:06 -08:00
Laura Abbott d6b5770a6a cma: Add support for status in DT nodes
CMA scans the flattened devicetree to reserve memory early. Check
for the status of a DT node to possibly skip initialization.

Change-Id: I58760ee5ff241a1ce3a93955b1e507d506f92162
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2014-02-26 14:08:05 -08:00
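
A sketch of the status check the entry above adds: a node is treated as available when its "status" property is absent or reads "ok"/"okay". The property source here is a mock; in the kernel it would come from of_get_flat_dt_prop().

  #include <stdio.h>
  #include <string.h>

  static const char *mock_status = "disabled";   /* pretend DT "status" value */

  static int flat_dt_node_is_available(void)
  {
      const char *status = mock_status;          /* of_get_flat_dt_prop() in the kernel */

      if (!status)
          return 1;                              /* no property: enabled by default */
      return strcmp(status, "ok") == 0 || strcmp(status, "okay") == 0;
  }

  int main(void)
  {
      if (!flat_dt_node_is_available())
          printf("skipping disabled CMA region\n");
      return 0;
  }
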
Laura Abbott 98bc05ed91 cma: Remove potential deadlock situation
A report was given of a deadlock situation on the cma_mutex:

mutex_lock(cma_mutex)   (*2)
dma_release_from_contiguous
ion_secure_cma_free_chunk
ion_secure_cma_shrinker
shrink_slab
try_to_free_pages
migrate_pages
alloc_contig_range
mutex_lock(cma_mutex)  (*1)
dma_alloc_from_contiguous
ion_alloc

We may need to free CMA allocations while a current CMA allocation is in
progress if CMA is freed from a shrinker. cma_mutex currently protects
two things: the bitmap indicating which pages are allocated/free and
serialization of isolation/migration calls on allocation. There is
no need to take the mutex on free calls though as the pages are
freed back into the system via the regular __free_page call. Move
the free_contig_range call outside the cma_mutex to break the
chain dependency. We can safely free the pages back into the system
before changing the bitmap without any risk of races.

Change-Id: I4989eb2891e502b08db8117a51bd86652e902778
CRs-Fixed: 619644
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2014-02-25 11:10:58 -08:00
Laura Abbott c825a1d7d2 cma: Add support for removed regions
In addition to reserving memory from the system, there may
be uses where memory should be completely removed from control
of the linux page allocator. Add the appropriate information to
be able to remove the memory.

Change-Id: Ia2e959e0858fb240ab5c0deee49b0d6e4aecfc00
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2014-01-22 16:23:59 -08:00
Laura Abbott 84361a5f79 cma: Change to reserve-contiguous-region
Currently, devices are associated with CMA regions via a phandle to the
CMA region. The phandle name 'linux,contiguous-region' is the same as
the one used to designate a CMA region. This can lead to a device that
merely uses a CMA region being treated as an actual CMA region. Rather
than continuing to rely on this phandle and the node depth to
differentiate CMA regions, create a separate DT binding,
linux,reserve-contiguous-region, to indicate that the node is
an actual CMA node to be reserved rather than a client referencing
one via phandle.

Change-Id: I88b2d86054525b0569efc424da87974509ce9b25
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2014-01-22 16:23:58 -08:00
Laura Abbott 9740ba422e cma: use pfn instead of pages for argument passing
The CMA code is generic enough that it can be expanded out to track
regions of memory that aren't officially managed by the regular page
allocator. This memory can't be referenced via struct page. Change the
CMA apis to track using pfn for allocation/free instead. The pfn can
be converted to a struct page as needed elsewhere.

Change-Id: I5ac3fa5e2169b2101a738177f1654faa401f7604
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2014-01-22 16:23:58 -08:00
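
A sketch of the pfn-based interface described above: the allocator hands back a pfn, and only callers that know the memory is page-backed convert it with a pfn_to_page()-style helper. The types and addresses below are mocks.

  #include <stdio.h>

  struct page { int flags; };

  #define MOCK_BASE_PFN 0x80000UL
  static struct page mock_mem_map[16];

  static struct page *pfn_to_page_mock(unsigned long pfn)
  {
      return &mock_mem_map[pfn - MOCK_BASE_PFN];
  }

  /* the allocator boundary now speaks pfns, so it can also describe memory
     that has no struct page at all (removed from the system allocator) */
  static unsigned long cma_alloc_pfn_mock(void)
  {
      return MOCK_BASE_PFN + 3;
  }

  int main(void)
  {
      unsigned long pfn = cma_alloc_pfn_mock();
      struct page *page = pfn_to_page_mock(pfn); /* only if pfn is page-backed */

      printf("pfn 0x%lx -> page %p\n", pfn, (void *)page);
      return 0;
  }
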
Laura Abbott 6659f8fea0 cma: use be32_to_cpup for devicetree conversion
The value returned by of_get_flat_dt_prop is a pointer. Use
be32_to_cpup which properly type checks against pointers.

Change-Id: Ie75ac6776bd26537a28729902bd918e710eda512
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-12-06 11:16:00 -08:00
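
A userspace stand-in for the conversion described above: device-tree cells are big-endian 32-bit values reached through a pointer, which is why a pointer-taking, type-checked helper fits. ntohl() substitutes here for the kernel's be32 helpers.

  #include <arpa/inet.h>          /* ntohl()/htonl() stand in for be32 helpers */
  #include <stdint.h>
  #include <stdio.h>

  static uint32_t be32_to_cpup_sketch(const uint32_t *p)
  {
      return ntohl(*p);           /* converts *and* type-checks against a pointer */
  }

  int main(void)
  {
      /* 0x10000000 stored big-endian, as a cell in the flattened device tree */
      const uint32_t cell = htonl(0x10000000u);

      printf("size cell = 0x%x\n", be32_to_cpup_sketch(&cell));
      return 0;
  }
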
Laura Abbott d4d3828c58 cma: print physical addresses correctly
Physical addresses can be greater than an unsigned long on
systems with LPAE. Print these addresses properly with %pa.

Change-Id: If9f1572bb9dadcb14f2834647a4421509e90096a
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-09-26 14:00:14 -07:00
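
%pa is the kernel's printk specifier for phys_addr_t (passed by reference). The userspace sketch below only shows the truncation that printing through a 32-bit unsigned long would cause; the address is an illustrative LPAE-range value.

  #include <inttypes.h>
  #include <stdio.h>

  int main(void)
  {
      uint64_t phys = 0x100000000ULL;     /* > 4 GiB, reachable with LPAE */

      /* what printing through a 32-bit unsigned long would show: truncated */
      printf("as 32-bit ulong: 0x%" PRIx32 "\n", (uint32_t)phys);
      /* the full value, which the kernel's %pa specifier preserves */
      printf("full phys addr:  0x%" PRIx64 "\n", phys);
      return 0;
  }
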
Laura Abbott 79543b0dbd cma: Add support for memory limits
Currently, when dynamically placing regions CMA will allow the memory
to be placed anywhere, including highmem. Due to system restrictions,
regions may need to be placed in a smaller range. Add support to
devicetree to allow these regions to have an upper bound on where they
will be placed.

Change-Id: Ib4ae194cbb6389e1091e7e04cfd331e9ab67ad05
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-09-26 14:00:13 -07:00
Laura Abbott be59e85da9 cma: Add support for different size_cells and address_cells
Currently, the CMA flat device tree code does not take into account
targets that may specify size_cells and address_cells > 1. This will
lead to unsuccessful parsing. Add support for taking into account
nodes that may specify size_cells and address_cells explicitly.

Change-Id: I9ea63b8c34e903b186c29ec6555dd7a5c317c602
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-09-26 14:00:10 -07:00
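
A sketch of the multi-cell parsing the entry above enables: with #address-cells = 2, the base is two big-endian 32-bit cells that must be combined (of_read_number()/be32_to_cpup() territory in the kernel; ntohl() stands in here). The reg values are illustrative.

  #include <arpa/inet.h>
  #include <inttypes.h>
  #include <stdio.h>

  static uint64_t read_cells(const uint32_t *cells, int ncells)
  {
      uint64_t val = 0;

      for (int i = 0; i < ncells; i++)
          val = (val << 32) | ntohl(cells[i]);
      return val;
  }

  int main(void)
  {
      /* base 0x1_0000_0000 and size 0x1000_0000, each stored as two
         big-endian cells (#address-cells = #size-cells = 2) */
      const uint32_t reg[4] = { htonl(0x1), htonl(0x0),
                                htonl(0x0), htonl(0x10000000u) };

      printf("base = 0x%" PRIx64 " size = 0x%" PRIx64 "\n",
             read_cells(&reg[0], 2), read_cells(&reg[2], 2));
      return 0;
  }
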
Laura Abbott 1ca8d4889a cma: Add API to get the start address of a CMA region
When setting up CMA at a fixed region, it is possible for the address
to be shared between multiple subsystems. If the address is placed
dynamically, there is no mechanism to get the start address of the
region. Drivers may wish to keep track of allocations relative to the
start address, so add an API to get the start address of a CMA
region.

Change-Id: If0730c64496c876d3143064d767b22b984c6dc84
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-09-04 17:28:48 -07:00
Laura Abbott 417f7da139 drivers: Add option to reserve default CMA region
CMA provides good utilization of memory but for some use cases, the
allocation time is too costly. Add a Kconfig option to allow the
default region to be permanently set aside for contiguous use cases.

Change-Id: I1eef508d37cf6ae3b7b7a652fc59391b186fc122
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-09-04 17:21:55 -07:00
Laura Abbott a4c113d04c cma: Allow option to use strict memblock_reserved memory
Despite all the performance optimizations, some clients are
still unable to use CMA because of the allocation latency.
Rather than make those clients use a separate set of APIs,
extend the CMA code to allow clients to keep the memory out of
the buddy allocator. Since the pages never actually go to the
buddy allocator, allocation and freeing rely only on the bitmap
allocator to find the appropriate region.

Change-Id: Ia31bb1212fd7b19280361128453c8d25369ce592
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-09-04 17:21:54 -07:00
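
A userspace sketch of the bitmap-only path described above: when pages never enter the buddy allocator, allocation reduces to finding and setting a free run of bits (the kernel uses bitmap_find_next_zero_area(); a simple scan stands in here).

  #include <stdint.h>
  #include <stdio.h>

  #define NBITS 64

  static uint64_t bitmap;          /* 1 bit per page of the carved-out region */

  static int find_zero_area(int nr)
  {
      for (int start = 0; start + nr <= NBITS; start++) {
          int free_run = 1;

          for (int i = 0; i < nr; i++) {
              if (bitmap & (1ULL << (start + i))) {
                  free_run = 0;
                  break;
              }
          }
          if (free_run)
              return start;
      }
      return -1;
  }

  static int strict_alloc(int nr)
  {
      int start = find_zero_area(nr);

      if (start < 0)
          return -1;
      for (int i = 0; i < nr; i++)
          bitmap |= 1ULL << (start + i);  /* no migration, no buddy involvement */
      return start;
  }

  int main(void)
  {
      printf("first alloc at page offset %d\n", strict_alloc(8));
      printf("second alloc at page offset %d\n", strict_alloc(8));
      return 0;
  }
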
Laura Abbott 430dc76002 Revert "Revert "cma: use MEMBLOCK_ALLOC_ANYWHERE for placing CMA regions""
This reverts commit 74ab7c1e198200036a84acd88c28ee9a441e5167.

Appropriate calls have now been made to flush outstanding highmem
mappings. Place the CMA region in highmem again.

Change-Id: I66baa055ed31074f2f21c77d46e3a5fc906e9683
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-09-04 16:21:37 -07:00
Laura Abbott 0a0161d173 Revert "cma: use MEMBLOCK_ALLOC_ANYWHERE for placing CMA regions"
This reverts commit 6308fd5399ae93e35ac381825a279996264a5f78.

As an optimization for kmap/kunmap, the page table entries may
not be completely removed on kunmap. This causes a problem
when memory is XPU protected and the CPU speculates into the
virtual address map. There are functions to remove zero
count entries for kmap calls but kmap_atomic is handled separately.
Remove CMA from highmem until an effective strategy for removing
these entries is found.

Change-Id: Ic60a34401244182442fc6d11dd0cdf986be7a335
CRs-Fixed: 467508
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-09-04 16:14:43 -07:00
Laura Abbott a9bf6e0c5c cma: use MEMBLOCK_ALLOC_ANYWHERE for placing CMA regions
MEMBLOCK_ALLOC_ANYWHERE allocates blocks from anywhere,
including highmem. Use this flag to allow CMA regions to
be placed in highmem as opposed to lowmem.

Change-Id: Id5fa36a96e46d60f0e898d764a1f4c8a0a37f5f8
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-09-04 16:12:30 -07:00
Laura Abbott d6280fae7b cma: Remove restriction on region names
CMA currently restricts region names to 'region@x'. Devicetree
does not allow the same value of x to be used multiple times,
which means the devicetree cannot have multiple dynamically
placed regions (x = 0). Remove the naming restriction
for CMA regions.

Change-Id: If647f8d7e6323497952431ae5b8cae05ba17af50
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-09-04 16:09:27 -07:00
Laura Abbott bcc2c9f21d cma: Add support for associating regions by name
Currently, the devicetree lookup code assumes that all
CMA regions are present at a fixed address and uses the
fixed address for associating CMA regions with devices.
This presents a problem for dynamically assigned regions.
Device names get mangled/changed between the flattened and
populated devicetree, so relying on them is unworkable.
Add a separate name binding to allow lookup later between
devices.

Change-Id: Iaacd9888ea708d7293f1120e2b8c473c5c601f3d
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-09-04 16:04:45 -07:00
Laura Abbott e815914c87 cma: Fix up devicetree bindings
The correct binding for regions is linux,contiguous-regions.
Fix it.

Change-Id: I4bbb4cd3e880c75d917b5a5a081861b3197adfa3
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-09-04 16:04:31 -07:00
Laura Abbott 2a8f78c9c9 cma: Remove __init annotations from data structures
Several of the CMA data structures are used after initialization;
remove the __init annotations from them.

Change-Id: Iff48ed88eef7b8fffdfba4b868cc69ded3c6df42
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-09-04 16:04:30 -07:00
Marek Szyprowski 2d9873440f drivers: dma-contiguous: add initialization from device tree
Add device tree support for contiguous memory regions defined in the
device tree. Initialization is done in two steps. First, the contiguous
memory is reserved; this happens very early, when only the flattened
device tree is available. Then, on device initialization, the
corresponding cma regions are assigned to the device structures.

Change-Id: Ic242499b64875ee57a346d7cbc8a34ebd64e68d2
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-09-04 16:04:16 -07:00
Marek Szyprowski ce3bd4c724 drivers: dma-contiguous: clean source code and prepare for device tree
This patch cleans up the initialization of the dma contiguous
framework. The all-in-one dma_declare_contiguous() function is now
separated into dma_contiguous_reserve_area(), which only steals the
memory from the memblock allocator, and dma_contiguous_add_device(),
which assigns a given device to the specified reserved memory area.
This improves the flexibility in defining contiguous memory areas and
assigning devices to them, because it is now possible to assign more
than one device to a given contiguous memory area. This split in
initialization is also required for the upcoming device tree support.

Change-Id: Ibddd1c9abc6550ee62b09645e7a3355256838bfe
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2013-09-04 16:03:32 -07:00
Liam Mark 8ee2256602 ion: tracing: add ftrace events for ion allocations
Add ftrace events for ion allocations to make it easier to profile
their performance.

Change-Id: I9f32e076cd50d7d3a145353dfcef74f0f6cdf8a0
Signed-off-by: Liam Mark <lmark@codeaurora.org>
2013-09-04 15:52:50 -07:00
Vitaly Andrianov 4009793e15 drivers: cma: represent physical addresses as phys_addr_t
This commit changes the CMA early initialization code to use phys_addr_t
for representing physical addresses instead of unsigned long.

Without this change, among other things, dma_declare_contiguous() simply
discards any memory regions whose address is not representable as unsigned
long.

This is a problem on 32-bit PAE machines where unsigned long is 32-bit
but physical address space is larger.

Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2012-12-11 09:28:09 +01:00
Laurent Pinchart 446c82fc44 drivers: dma-contiguous: Don't redefine SZ_1M
Use the definition from linux/sizes.h instead.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2012-10-23 14:05:32 +02:00
Michal Nazarewicz bdd43cb39f drivers: dma-contiguous: refactor dma_alloc_from_contiguous()
The dma_alloc_from_contiguous() function returns either a valid
pointer to a page structure or NULL; the error code set when
pageno >= cma->count is not used at all and can be safely removed.

This commit also changes the function to avoid goto and to have only
one exit path and one place where the mutex is unlocked.

Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
[fixed compilation break caused by missing semicolon]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2012-10-02 08:57:45 +02:00
Marek Szyprowski 7ce9bf1f47 mm: cma: fix alignment requirements for contiguous regions
The Contiguous Memory Allocator requires each of its regions to be
aligned in such a way that it is possible to change the migration type
for all pageblocks holding it and then isolate a page of the largest
possible order from the buddy allocator (which is MAX_ORDER-1). This
patch relaxes the alignment requirement by one order, because MAX_ORDER
alignment is not really needed.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
CC: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
2012-08-28 21:01:01 +02:00
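
A sketch of the relaxed alignment, assuming the region alignment is computed as PAGE_SIZE shifted by the larger of the two orders; the MAX_ORDER and pageblock_order values below are typical defaults, not universal.

  #include <stdio.h>

  #define PAGE_SHIFT       12
  #define MAX_ORDER        11    /* typical default */
  #define PAGEBLOCK_ORDER  10    /* typical default (huge page order) */

  static unsigned long max_order(unsigned long a, unsigned long b)
  {
      return a > b ? a : b;
  }

  int main(void)
  {
      unsigned long old_align = (1UL << PAGE_SHIFT)
                                << max_order(MAX_ORDER, PAGEBLOCK_ORDER);
      unsigned long new_align = (1UL << PAGE_SHIFT)
                                << max_order(MAX_ORDER - 1, PAGEBLOCK_ORDER);

      printf("old region alignment: %lu MiB\n", old_align >> 20);
      printf("new region alignment: %lu MiB\n", new_align >> 20);
      return 0;
  }
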
Marek Szyprowski c64be2bb1c drivers: add Contiguous Memory Allocator
The Contiguous Memory Allocator is a set of helper functions for the
DMA mapping framework that improves allocation of contiguous memory
chunks.

CMA grabs memory at system boot, marks it with the MIGRATE_CMA migrate
type and gives it back to the system. The kernel is allowed to allocate
only movable pages within CMA's managed memory, so that it can be used,
for example, for page cache when the DMA mapping framework does not
need it. On a dma_alloc_from_contiguous() request, such pages are
migrated out of the CMA area to free the required contiguous block and
fulfill the request. This allows large contiguous chunks of memory to
be allocated at any time, assuming there is enough free memory
available in the system.

This code is heavily based on earlier works by Michal Nazarewicz.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
2012-05-21 15:09:37 +02:00