Commit 344c3acd authored by Iliyan Malchev

kgsl: allocate from lowmem


Also, simplify _kgsl_sharedmem_page_alloc now that we only allocate GFP_KERNEL
in PAGE_SIZE chunks.

This reduces the non-movable highmem order-0 allocations from 15% to less than
2% of the total.  With this, 98% of highmem is now movable, hence compactable.
This in turn leaves us room to later enable compaction outside of the
direct-reclaim path, which should improve responsiveness on highly-fragmented
devices further.

b/18069309 N5 running L can't run as many apps simultaneously as running K

Change-Id: I810c0100e30156bb2877d6f6b8cecf244f733ed9
Signed-off-by: Iliyan Malchev <malchev@google.com>
parent 8d913fe4
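The whole change comes down to how each chunk is requested from the page allocator. Below is a minimal illustrative sketch of the request before and after the patch, using the flags that appear in the hunks that follow. The helper names are made up for illustration and are not part of the driver, and __GFP_NO_KSWAPD exists only in older kernels such as the one this driver targets.

#include <linux/gfp.h>
#include <linux/mm.h>

/* Illustrative only: how each chunk was requested before this patch. */
static struct page *chunk_alloc_before(int page_size)
{
        gfp_t gfp_mask = __GFP_HIGHMEM | __GFP_ZERO;

        if (page_size != PAGE_SIZE)
                /* large-order chunks skip the more aggressive reclaim paths */
                gfp_mask |= __GFP_COMP | __GFP_NORETRY |
                            __GFP_NO_KSWAPD | __GFP_NOWARN;
        else
                gfp_mask |= GFP_KERNEL;

        /* may return an order > 0 page from the highmem zone */
        return alloc_pages(gfp_mask, get_order(page_size));
}

/* Illustrative only: how each chunk is requested after this patch. */
static struct page *chunk_alloc_after(void)
{
        /*
         * One zeroed order-0 page at a time.  GFP_KERNEL does not include
         * __GFP_HIGHMEM, so these unmovable driver pages land in lowmem and
         * stop fragmenting the (largely movable) highmem zone.
         */
        return alloc_page(GFP_KERNEL | __GFP_ZERO);
}

Since every request is now order-0, there is no large-order fallback path left to maintain, which is what lets the allocation loop in the diff below shrink so much.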
@@ -580,17 +580,12 @@ _kgsl_sharedmem_page_alloc(struct kgsl_memdesc *memdesc,
                         size_t size)
 {
         int order, ret = 0;
-        int len, page_size, sglen_alloc, sglen = 0;
+        int len, sglen_alloc, sglen = 0;
         void *ptr;
         unsigned int align;

         align = (memdesc->flags & KGSL_MEMALIGN_MASK) >> KGSL_MEMALIGN_SHIFT;

-        page_size = PAGE_SIZE;
-        /* update align flags for what we actually use */
-        if (page_size != PAGE_SIZE)
-                kgsl_memdesc_set_align(memdesc, ilog2(page_size));
-
         /*
          * There needs to be enough room in the sg structure to be able to
          * service the allocation entirely with PAGE_SIZE sized chunks
@@ -617,33 +612,10 @@ _kgsl_sharedmem_page_alloc(struct kgsl_memdesc *memdesc,
         while (len > 0) {
                 struct page *page;
-                unsigned int gfp_mask = __GFP_HIGHMEM;
-                int j;

-                /* don't waste space at the end of the allocation*/
-                if (len < page_size)
-                        page_size = PAGE_SIZE;
-
-                /*
-                 * Don't do some of the more aggressive memory recovery
-                 * techniques for large order allocations
-                 */
-                if (page_size != PAGE_SIZE)
-                        gfp_mask |= __GFP_COMP | __GFP_NORETRY |
-                                __GFP_NO_KSWAPD | __GFP_NOWARN;
-                else
-                        gfp_mask |= GFP_KERNEL;
-
-                gfp_mask |= __GFP_ZERO;
-
-                page = alloc_pages(gfp_mask, get_order(page_size));
+                page = alloc_page(GFP_KERNEL | __GFP_ZERO);
                 if (page == NULL) {
-                        if (page_size != PAGE_SIZE) {
-                                page_size = PAGE_SIZE;
-                                continue;
-                        }
-
                         /*
                          * Update sglen and memdesc size,as requested allocation
                          * not served fully. So that they can be correctly freed
@@ -660,15 +632,12 @@ _kgsl_sharedmem_page_alloc(struct kgsl_memdesc *memdesc,
                         goto done;
                 }

-                for (j = 0; j < page_size >> PAGE_SHIFT; j++) {
-                        struct page *p = nth_page(page, j);
-                        ptr = kmap_atomic(p);
+                ptr = kmap_atomic(page);
                 dmac_flush_range(ptr, ptr + PAGE_SIZE);
                 kunmap_atomic(ptr);
-                }

-                sg_set_page(&memdesc->sg[sglen++], page, page_size, 0);
-                len -= page_size;
+                sg_set_page(&memdesc->sg[sglen++], page, PAGE_SIZE, 0);
+                len -= PAGE_SIZE;
         }

         memdesc->sglen = sglen;
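Reading the three hunks together, the allocation loop after this patch reduces to the fragment below. It is reconstructed from the added and context lines above, with the partial-allocation error path elided to a comment, and is not meant to compile on its own.

        while (len > 0) {
                struct page *page;

                page = alloc_page(GFP_KERNEL | __GFP_ZERO);
                if (page == NULL) {
                        /*
                         * Partial allocation: trim sglen and the memdesc size
                         * to what was actually allocated so it can be freed
                         * correctly (details elided here), then bail out.
                         */
                        goto done;
                }

                /* flush the zeroed page out of the CPU caches before GPU use */
                ptr = kmap_atomic(page);
                dmac_flush_range(ptr, ptr + PAGE_SIZE);
                kunmap_atomic(ptr);

                sg_set_page(&memdesc->sg[sglen++], page, PAGE_SIZE, 0);
                len -= PAGE_SIZE;
        }

        memdesc->sglen = sglen;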