vmalloc: walk vmap_areas by sorted list instead of rb_next()
There's a walk that repeats rb_next() to find a suitable hole. It can simply be replaced by a walk over the sorted vmap_area_list, which is both simpler and more efficient.

Mutation of the list and the tree only happens in pairs, within __insert_vmap_area() and __free_vmap_area(), under protection of vmap_area_lock. The patched code also runs under vmap_area_lock, so the list walk is safe and consistent with the tree walk.

Tested on SMP by repeating batches of vmalloc and vfree with random sizes and rounds for hours.

Signed-off-by: Hong Zhiguo <honkiko@gmail.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This commit is contained in:
parent c2cddf9919
commit 92ca922f0a

1 changed file with 4 additions and 4 deletions
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -413,11 +413,11 @@ nocache:
 		if (addr + size - 1 < addr)
 			goto overflow;
 
-		n = rb_next(&first->rb_node);
-		if (n)
-			first = rb_entry(n, struct vmap_area, rb_node);
-		else
+		if (list_is_last(&first->list, &vmap_area_list))
 			goto found;
+
+		first = list_entry(first->list.next,
+				struct vmap_area, list);
 	}
 
 found:
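For illustration, here is a minimal userspace C sketch of the hole-finding walk that the patched loop performs: step through a list of occupied ranges kept sorted by address until a gap large enough for the request appears. The names struct area and find_hole, and the example ranges, are hypothetical stand-ins rather than the kernel's struct vmap_area, alloc_vmap_area() or vmap_area_list; alignment handling and the free-block cache of the real code are omitted.

#include <stddef.h>
#include <stdio.h>

/* Simplified stand-in for struct vmap_area: an occupied [va_start, va_end)
 * range, kept in a singly linked list sorted by ascending address.  (The
 * kernel keeps a doubly linked list plus an rb-tree; these are illustrative
 * names only.) */
struct area {
	unsigned long va_start;
	unsigned long va_end;
	struct area *next;	/* next occupied area in address order */
};

/* Walk the sorted list and return the lowest address >= vstart where a hole
 * of at least `size` bytes opens up.  This mirrors the patched loop: instead
 * of rb_next() on the tree, simply step to the next list entry. */
static unsigned long find_hole(struct area *first, unsigned long vstart,
			       unsigned long vend, unsigned long size)
{
	unsigned long addr = vstart;

	while (first) {
		if (addr + size > first->va_start) {
			/* Candidate address must move past the current area. */
			if (addr < first->va_end)
				addr = first->va_end;
			if (addr + size - 1 < addr)	/* overflow check, as in the patch */
				return 0;
			/* Patched logic: stop when this was the last area ... */
			if (!first->next)
				break;
			/* ... otherwise advance along the sorted list. */
			first = first->next;
		} else {
			/* The gap before the current area already fits. */
			break;
		}
	}

	if (addr + size > vend)
		return 0;	/* no hole within [vstart, vend) */
	return addr;
}

int main(void)
{
	/* Occupied areas: [0x1000,0x2000) and [0x3000,0x6000), sorted by address. */
	struct area a2 = { 0x3000, 0x6000, NULL };
	struct area a1 = { 0x1000, 0x2000, &a2 };

	/* Expect 0x2000: the first hole at or above 0x1000 that fits 0x800 bytes. */
	printf("hole at 0x%lx\n", find_hole(&a1, 0x1000, 0x10000, 0x800));
	/* Expect 0x6000: a 0x2000-byte request does not fit in the 0x1000-byte gap. */
	printf("hole at 0x%lx\n", find_hole(&a1, 0x1000, 0x10000, 0x2000));
	return 0;
}

The sketch relies on the same property the commit message calls out: because the list is kept sorted and, in the kernel, is only mutated together with the rb-tree under vmap_area_lock, walking it inside the same lock observes exactly the nodes a tree walk via rb_next() would.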