msm: kgsl: Use the kmalloc/vmalloc trick for the sharedmem page array

It was previously assumed that most GPU memory allocations would be
small enough to allow us to fit the array of page pointers into one
or two pages allocated via kmalloc.  Recent reports have proven
those assumptions to be wrong - allocations on the order of 32MB will
end up trying to get 8 pages from kmalloc and 8 contiguous pages
on a busy system are a rare beast indeed.

So use the usual kmalloc/vmalloc trick instead - use kmalloc for the
page array when we can and vmalloc if we can't.

CRs-fixed: 513469
Change-Id: Ic0dedbad0a5b14abe6a8bd73342b3e68faa8c8b7
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Carter Cooper <ccooper@codeaurora.org>
This commit is contained in:
Carter Cooper 2013-07-16 09:25:05 -06:00 committed by Iliyan Malchev
parent af405436bc
commit 9354a396e0


@@ -608,13 +608,16 @@ _kgsl_sharedmem_page_alloc(struct kgsl_memdesc *memdesc,
 	/*
 	 * Allocate space to store the list of pages to send to vmap.
-	 * This is an array of pointers so we can track 1024 pages per page of
-	 * allocation which means we can handle up to a 8MB buffer request with
-	 * two pages; well within the acceptable limits for using kmalloc.
+	 * This is an array of pointers so we can track 1024 pages per page
+	 * of allocation. Since allocations can be as large as the user dares,
+	 * we have to use the kmalloc/vmalloc trick here to make sure we can
+	 * get the memory we need.
 	 */
-	pages = kmalloc(memdesc->sglen_alloc * sizeof(struct page *),
-			GFP_KERNEL);
+	if ((memdesc->sglen_alloc * sizeof(struct page *)) > PAGE_SIZE)
+		pages = vmalloc(memdesc->sglen_alloc * sizeof(struct page *));
+	else
+		pages = kmalloc(PAGE_SIZE, GFP_KERNEL);

 	if (pages == NULL) {
 		ret = -ENOMEM;
@@ -725,7 +728,10 @@ _kgsl_sharedmem_page_alloc(struct kgsl_memdesc *memdesc,
 		kgsl_driver.stats.histogram[order]++;

 done:
-	kfree(pages);
+	if ((memdesc->sglen_alloc * sizeof(struct page *)) > PAGE_SIZE)
+		vfree(pages);
+	else
+		kfree(pages);

 	if (ret)
 		kgsl_sharedmem_free(memdesc);