Commit Graph

182 Commits

Author SHA1 Message Date
Swetha Chikkaboraiah 1fdc33cb78 arm64: move sp_el0 and tpidr_el1 into cpu_suspend_ctx.
When returning from idle, we rely on the fact that thread_info lives at
the end of the kernel stack, and restore this by masking the saved stack
pointer. Subsequent patches will sever the relationship between the
stack and thread_info, and to cater for this we must save/restore sp_el0
explicitly, storing it in cpu_suspend_ctx.

As cpu_suspend_ctx must be doubleword aligned, this leaves us with an
extra slot in cpu_suspend_ctx. We can use this to save/restore tpidr_el1
in the same way, which simplifies the code, avoiding pointer chasing on
the restore path (as we no longer need to load thread_info::cpu followed
by the relevant slot in __per_cpu_offset based on this).

This patch stashes both registers in cpu_suspend_ctx.

Change-Id: Icd9395e4783c252d7e7f9ee5e991e38777014ccc
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: James Morse <james.morse@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Git-commit: 623b476fc815464a0241ea7483da7b3580b7d8ac
Git-repo: https://source.codeaurora.org/quic/la/kernel/msm-3.10.git
[schikk@codeaurora.org: Resolved merge conflicts.
Ignored the sp_el0 changes, as the changes to support sp_el0
are not present in this baseline]
Signed-off-by: Swetha Chikkaboraiah <schikk@codeaurora.org>
Signed-off-by: Rajshekar Eashwarappa <reashw@codeaurora.org>
2019-07-27 21:50:38 +02:00
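
A rough C sketch of the layout involved (illustrative only; the register count is approximate, and per the backport note above this baseline only takes the tpidr_el1 part):

    #define NR_CTX_REGS 12                     /* approximate, for illustration */

    /*
     * cpu_do_suspend stores the callee-saved context here. The struct must stay
     * doubleword (16-byte) aligned, so growing it leaves a spare slot that can
     * hold tpidr_el1, avoiding the thread_info::cpu / __per_cpu_offset walk on
     * the resume path.
     */
    struct cpu_suspend_ctx {
            u64 ctx_regs[NR_CTX_REGS];
            u64 sp;
    } __aligned(16);
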
Will Deacon ca66d14e62 arm64: Add skeleton to harden the branch predictor against aliasing attacks.
Aliasing attacks against CPU branch predictors can allow an attacker to
redirect speculative control flow on some CPUs and potentially divulge
information from one context to another.

This patch adds initial skeleton code behind a new Kconfig option to
enable implementation-specific mitigations against these attacks for
CPUs that are affected.

Change-Id: Idf832a78acc728a8b535e904482d0e14dd2db2bf
Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Git-commit: 7bd293b6845d003ab087faa6515a626c8703b8da
Git-repo: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
[neeraju@codeaurora.org: resolve merge conflicts and context
 conflicts. Ignore changes in missing files: cpucaps.h,
 sysreg.h, cpu_errata.c, cpufeature.c. Remove KVM, hyp
 specific changes. Port bp hardening installation from
 missing cpu_errata.c to setup.c, ignoring the missing
 features like capability matches, feature register
 checks and KVM specific code. Fix compilation due to
 missing includes in mmu.h.]
Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Srinivasarao P <spathi@codeaurora.org>
Signed-off-by: Rajshekar Eashwarappa <reashw@codeaurora.org>
2019-07-27 21:50:35 +02:00
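
A rough sketch of the shape such a skeleton takes (simplified; the helper and per-CPU variable names are assumptions, not necessarily what this backport uses):

    typedef void (*bp_hardening_cb_t)(void);

    /* One callback slot per CPU; left NULL on CPUs that are not affected. */
    static DEFINE_PER_CPU_READ_MOSTLY(bp_hardening_cb_t, bp_hardening_cb);

    static inline void arm64_apply_bp_hardening(void)
    {
            bp_hardening_cb_t cb = __this_cpu_read(bp_hardening_cb);

            /* Invoke the implementation-specific mitigation, if one was installed. */
            if (IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR) && cb)
                    cb();
    }
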
Marc Zyngier 0e1e663dc0 arm64: Move post_ttbr_update_workaround to C code.
We will soon need to invoke a CPU-specific function pointer after changing
page tables, so move post_ttbr_update_workaround out into C code to make
this possible.

Change-Id: I5ee5b7a43b2505148c7ac33dded16e4543bad514
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Git-commit: 81659da6deabd66da571f82ece19539aa76e370c
Git-repo: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
[neeraju@codeaurora.org: resolve trivial merge conflicts,
 ignore post_ttbr_update_workaround() call in entry.S
 under ARM64_SW_TTBR0_PAN]
Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Rajshekar Eashwarappa <reashw@codeaurora.org>
2019-07-27 21:50:35 +02:00
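
A simplified sketch of what the C version looks like (the alternative shown targets the Cavium erratum path; treat it as illustration rather than the exact backported code):

    void post_ttbr_update_workaround(void)
    {
            /* I-cache invalidate + barriers, patched in only on affected CPUs. */
            asm(ALTERNATIVE("nop; nop; nop",
                            "ic iallu; dsb nsh; isb",
                            ARM64_WORKAROUND_CAVIUM_27456,
                            CONFIG_CAVIUM_ERRATUM_27456));
    }
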
Luca Stefani 788d27a6bc Revert "arm64: mm: ensure that the zero page is visible to the page table walker"
This reverts commit c8f487a49a.
2017-04-18 17:24:53 +02:00
Luca Stefani e516c31e99 This is the 3.10.96 stable release

Merge tag 'v3.10.96' into HEAD

This is the 3.10.96 stable release

Change-Id: I428f544d161be44e66e56e2d6900700e798cdd0a
2017-04-18 17:16:02 +02:00
Luca Stefani 82b37d9f2f Merge remote-tracking branch 'f2fs/linux-3.10.y' into HEAD
Change-Id: Ic2fe24529f029909ddd96490bd6d885d60f88be2
2017-04-18 17:02:28 +02:00
LuK1337 4e71469c73 Merge tag 'LA.BR.1.3.6-03510-8976.0' into HEAD
Change-Id: Ie506850703bf9550ede802c13ba5f8c2ce723fa3
2017-04-18 12:11:50 +02:00
LuK1337 fc9499e55a Import latest Samsung release
* Package version: T713XXU2BQCO

Change-Id: I293d9e7f2df458c512d59b7a06f8ca6add610c99
2017-04-18 03:43:52 +02:00
Shiraz Hashim 3831d5a2dd arm: dma-mapping: page align size before flush tlb
start and end must be page aligned when calling
flush_tlb_kernel_range, otherwise the last page may be
missed during invalidation.

Change-Id: Ibaab202c47a475623e197a13191b2fed638ce20b
Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
2016-11-08 04:52:07 -08:00
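
A minimal sketch of the fix (an illustrative wrapper; the real change adjusts the existing dma-mapping call site):

    static void flush_tlb_kernel_range_aligned(unsigned long start, size_t size)
    {
            unsigned long aligned_start = start & PAGE_MASK;         /* round down */
            unsigned long aligned_end   = PAGE_ALIGN(start + size);  /* round up */

            /* Without the rounding, a partial last page could be skipped. */
            flush_tlb_kernel_range(aligned_start, aligned_end);
    }
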
Ravi Kumar Siddojigari d68584f045 Revert "arm64: Introduce execute-only page access permissions"
This reverts commit f72129c220.

While the aim is increased security for --x memory maps, it does not
protect against kernel level reads. Until SECCOMP is implemented for
arm64, revert this patch to avoid giving a false idea of execute-only
mappings.

Change-Id: Ifb2fb182450bc656189738842f344f63daa5e317
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Git-commit: 5a0fdfada3a2aa50d7b947a2e958bf00cbe0d830
Git-repo: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git
Signed-off-by: Ravi Kumar Siddojigari <rsiddoji@codeaurora.org>
2016-09-09 01:17:23 -07:00
Marek Szyprowski 01dcb7ce15 arm64: dma-mapping: always clear allocated buffers
[ Upstream commit 6829e274a623187c24f7cfc0e3d35f25d087fcc5 ]

Buffers allocated by dma_alloc_coherent() are always zeroed on Alpha,
ARM (32bit), MIPS, PowerPC, x86/x86_64 and probably other architectures.
It turned out that some drivers rely on this 'feature'. The allocated buffer
might also be exposed to userspace via a dma_mmap() call, so clearing it
is desirable from a security point of view to avoid exposing random memory
to userspace. This patch unifies dma_alloc_coherent() behavior on the ARM64
architecture with other implementations by unconditionally zeroing
allocated buffer.

CRs-Fixed: 1041735
Change-Id: I74bf024e0f603ca8c0b05430dc2ee154d579cfb2
Cc: <stable@vger.kernel.org> # v3.14+
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Git-commit: a142e9641dcbead2c8845c949ad518acac96ed28
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
[lmark@codeaurora.org: resolve merge conflicts]
Signed-off-by: Liam Mark <lmark@codeaurora.org>
2016-08-04 04:25:18 -07:00
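
A simplified sketch of the unified behaviour (not the exact arm64 allocator; standard kernel helpers are used purely for illustration):

    static void *dma_alloc_zeroed_sketch(struct device *dev, size_t size,
                                         dma_addr_t *handle, gfp_t gfp)
    {
            struct page *page = alloc_pages(gfp, get_order(size));

            if (!page)
                    return NULL;

            *handle = phys_to_dma(dev, page_to_phys(page));
            /*
             * Unconditionally clear, so stale data never reaches drivers or
             * userspace mappings created via dma_mmap().
             */
            memset(page_address(page), 0, size);
            return page_address(page);
    }
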
Pradosh Das 571b43a792 Merge commit '4742aa9efad673157273b07095ac1070dd2f02ea' into HEAD
Conflicts:
        drivers/media/platform/msm/camera_v2/sensor/actuator/msm_actuator.c
        sound/soc/msm/msm8952-slimbus.c

Change-Id: If4516c52837e61afda301496b9053cb44ea59dd9
Signed-off-by: Pradosh Das <prados@codeaurora.org>
2016-07-26 12:02:09 +05:30
dcashman f5968659d5 BACKPORT: FROMLIST: mm: ASLR: use get_random_long()
(cherry picked from commit https://lkml.org/lkml/2016/2/4/833)

Replace calls to get_random_int() followed by a cast to (unsigned long)
with calls to get_random_long().  Also address a shifting bug which, in the
case of x86, removed the entropy mask for mmap_rnd_bits values > 31 bits.

Bug: 26963541
Signed-off-by: Daniel Cashman <dcashman@android.com>
Signed-off-by: Daniel Cashman <dcashman@google.com>
Change-Id: Ie577b21a0678cf4b21eae06bddd8ccb27cbe70ff
2016-05-18 14:36:13 +05:30
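
Illustrative before/after of the pattern being replaced (variable names are generic, not the exact per-arch code):

    /* old: only 32 bits of entropy, and the int-typed shift loses the mask
     * for mmap_rnd_bits values > 31 */
    rnd = (unsigned long)get_random_int() % (1 << mmap_rnd_bits);

    /* new: full-width randomness, mask computed in unsigned long arithmetic */
    rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);
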
dcashman 8b1a215ddc BACKPORT: FROMLIST: arm64: mm: support ARCH_MMAP_RND_BITS.
(cherry picked from commit https://lkml.org/lkml/2015/12/21/340)

arm64: arch_mmap_rnd() uses STACK_RND_MASK to generate the
random offset for the mmap base address.  This value represents a
compromise between increased ASLR effectiveness and avoiding
address-space fragmentation. Replace it with a Kconfig option, which
is sensibly bounded, so that platform developers may choose where to
place this compromise. Keep default values as new minimums.

Bug: 24047224
Signed-off-by: Daniel Cashman <dcashman@android.com>
Signed-off-by: Daniel Cashman <dcashman@google.com>
Change-Id: I7caf105b838cfc3ab55f275e1a061eb2b77c9a2a

Conflicts:
	arch/arm64/Kconfig
2016-05-18 14:36:00 +05:30
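
A sketch of how the Kconfig-derived value feeds the arm64 randomisation (simplified; compat handling omitted):

    unsigned long arch_mmap_rnd(void)
    {
            /*
             * mmap_rnd_bits defaults to the Kconfig minimum and can be raised
             * up to the Kconfig maximum via /proc/sys/vm/mmap_rnd_bits.
             */
            unsigned long rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);

            return rnd << PAGE_SHIFT;
    }
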
Mark Salyzyn 58b32cb28c ARM64 readahead: fault retry breaks mmap file read random detection
Description from commit 45cac65b0f
    ("readahead: fault retry breaks mmap file read random detection")

.fault now can retry.  The retry can break state machine of .fault.  In
filemap_fault, if page is miss, ra->mmap_miss is increased.  In the second
try, since the page is in page cache now, ra->mmap_miss is decreased.  And
these are done in one fault, so we can't detect random mmap file access.

Add a new flag to indicate .fault is tried once.  In the second try, skip
ra->mmap_miss decreasing.  The filemap_fault state machine is ok with it.

I only tested x86, didn't test other archs, but looks the change for other
archs is obvious, but who knows :)

< snip >

Yup, arm64 needs this too! Random read improves by 250%, sequential
read improves by 40%, and random write by 400% to an eMMC device with
dm crypto wrapped around it.

Signed-off-by: Mark Salyzyn <salyzyn@android.com>
Signed-off-by: Riley Andrews <riandrews@android.com>
Cc: Shaohua Li <shli@kernel.org>
Cc: Shaohua Li <shaohua.li@fusionio.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Bug: 23181629
Bug: 23385923
Change-Id: Ia4de1199164d6b5d4430f5518daf2aa5a71a4059

Conflicts:
	arch/arm64/mm/fault.c
2016-05-18 14:31:36 +05:30
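
The arm64 portion is essentially the retry path in do_page_fault(); a simplified sketch:

    if (fault & VM_FAULT_RETRY) {
            if (mm_flags & FAULT_FLAG_ALLOW_RETRY) {
                    /*
                     * Clear ALLOW_RETRY and mark the fault as tried, so
                     * filemap_fault() can skip the ra->mmap_miss decrement
                     * that was corrupting random-read detection.
                     */
                    mm_flags &= ~FAULT_FLAG_ALLOW_RETRY;
                    mm_flags |= FAULT_FLAG_TRIED;
                    goto retry;
            }
    }
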
Vinayak Menon e04cf3dfb6 arm/arm64: dma-mapping: flush the tlb on unremap
Make sure there are no stale tlb entries when
dma_unremap returns, thus preventing speculative
fetches.

Change-Id: I22070de282f25fe5ea20177e67a6d629123e29a4
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
Signed-off-by: Ramesh Gupta Guntha <rgguntha@codeaurora.org>
2016-04-18 07:03:27 -07:00
Will Deacon c8f487a49a arm64: mm: ensure that the zero page is visible to the page table walker
commit 32d6397805d00573ce1fa55f408ce2bca15b0ad3 upstream.

In paging_init, we allocate the zero page, memset it to zero and then
point TTBR0 to it in order to avoid speculative fetches through the
identity mapping.

In order to guarantee that the freshly zeroed page is indeed visible to
the page table walker, we need to execute a dsb instruction prior to
writing the TTBR.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-01-28 21:49:36 -08:00
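
A simplified sketch of the ordering fix in paging_init() (helper names approximate):

    void *zero_page = early_alloc(PAGE_SIZE);     /* allocated and zeroed */

    empty_zero_page = virt_to_page(zero_page);

    /*
     * Ensure the zeroing is observable by the page table walker before
     * TTBR0_EL1 is pointed at the page.
     */
    dsb(ishst);

    cpu_set_reserved_ttbr0();
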
Mark Salyzyn 339cb27fc5 arm64: readahead: fault retry breaks mmap file read random detection
commit 569ba74a7ba69f46ce2950bf085b37fea2408385 upstream.

This is the arm64 portion of commit 45cac65b0f ("readahead: fault
retry breaks mmap file read random detection"), which was absent from
the initial port and has since gone unnoticed. The original commit says:

> .fault now can retry.  The retry can break state machine of .fault.  In
> filemap_fault, if page is miss, ra->mmap_miss is increased.  In the second
> try, since the page is in page cache now, ra->mmap_miss is decreased.  And
> these are done in one fault, so we can't detect random mmap file access.
>
> Add a new flag to indicate .fault is tried once.  In the second try, skip
> ra->mmap_miss decreasing.  The filemap_fault state machine is ok with it.

With this change, Mark reports that:

> Random read improves by 250%, sequential read improves by 40%, and
> random write by 400% to an eMMC device with dm crypto wrapped around it.

Cc: Shaohua Li <shli@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Mark Salyzyn <salyzyn@android.com>
Signed-off-by: Riley Andrews <riandrews@android.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-10-22 14:37:52 -07:00
Yann Droneaud 82c9aed33b arm64/mm: Remove hack in mmap randomize layout
commit d6c763afab142a85e4770b4bc2a5f40f256d5c5d upstream.

Since commit 8a0a9bd4db ('random: make get_random_int() more
random'), get_random_int() returns a random value for each call,
so the comment and the hack introduced in mmap_rnd() as part of commit
1d18c47c73 ('arm64: MMU fault handling and page table management')
are incorrect.

Commit 1d18c47c73 seems to use the same hack introduced by
commit a5adc91a4b ('powerpc: Ensure random space between stack
and mmaps'), the latter copied in commit 5a0efea09f ('sparc64: Sharpen
address space randomization calculations.').

But both architectures were cleaned up as part of commit
fa8cbaaf5a ('powerpc+sparc64/mm: Remove hack in mmap randomize
layout'), as the hack is no longer needed since commit 8a0a9bd4db.

So the present patch removes the comment and the hack around
get_random_int() on AArch64's mmap_rnd().

Cc: David S. Miller <davem@davemloft.net>
Cc: Anton Blanchard <anton@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Dan McGee <dpmcgee@gmail.com>
Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: Matthias Brugger <mbrugger@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-09-13 09:07:59 -07:00
Dave P Martin 39ae2d098a arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP
commit b9bcc919931611498e856eae9bf66337330d04cc upstream.

The memmap freeing code in free_unused_memmap() computes the end of
each memblock by adding the memblock size onto the base.  However,
if SPARSEMEM is enabled then the value (start) used for the base
may already have been rounded downwards to work out which memmap
entries to free after the previous memblock.

This may cause memmap entries that are in use to get freed.

In general, you're not likely to hit this problem unless there
are at least 2 memblocks and one of them is not aligned to a
sparsemem section boundary.  Note that carve-outs can increase
the number of memblocks by splitting the regions listed in the
device tree.

This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
vmemmap code deals with freeing the unused regions of the memmap
instead of requiring the arch code to do it.

This patch gets the memblock base out of the memblock directly when
computing the block end address to ensure the correct value is used.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-08-03 09:29:41 -07:00
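
An approximate before/after of the end-address computation in free_unused_memmap() (variable names are illustrative, not the exact kernel code):

    /*
     * 'start' may already have been rounded down to a sparsemem section
     * boundary, so using it to compute the end of this block gives too small
     * an end and lets the next iteration free memmap that is still in use.
     */

    /* old (buggy): */
    prev_end = ALIGN(start + block_pfns, MAX_ORDER_NR_PAGES);

    /* new: derive the end from the memblock itself */
    prev_end = ALIGN(__phys_to_pfn(reg->base) + block_pfns, MAX_ORDER_NR_PAGES);
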
Catalin Marinas 2b97cbc8c4 arm64: Do not attempt to use init_mm in reset_context()
commit 565630d503ef24e44c252bed55571b3a0d68455f upstream.

After secondary CPU boot or hotplug, the active_mm of the idle thread is
&init_mm. The init_mm.pgd (swapper_pg_dir) is only meant for TTBR1_EL1
and must not be set in TTBR0_EL1. Since when active_mm == &init_mm the
TTBR0_EL1 is already set to the reserved value, there is no need to
perform any context reset.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-08-03 09:29:41 -07:00
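
A minimal sketch of the guard added to reset_context() (simplified):

    static void reset_context(void *info)
    {
            struct mm_struct *mm = current->active_mm;

            /*
             * After secondary boot or hotplug the idle thread's active_mm is
             * init_mm; its pgd (swapper_pg_dir) is TTBR1-only and TTBR0_EL1
             * already holds the reserved value, so there is nothing to reset.
             */
            if (mm == &init_mm)
                    return;

            /* ... normal ASID rollover handling follows ... */
    }
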
Neeti Desai 538f5610ff Revert "arm64: dma-mapping: avoid calling iommu_iova_to_phys"
This reverts commit 0d02975d9ffd55f1c0fe5db08f45a9ee1d22f354

.sync_single_for_device is called independently of .map_page.
This caused an issue because .sync_single_for_device doesn't
walk through the page tables to get the physical address, but
depends on .map_page to populate mapping->phys. This caused a
NULL pointer dereference.

Change-Id: I7bc8c713938cf2d38a9f11301bfae456c4fed362
Signed-off-by: Neeti Desai <neetid@codeaurora.org>
2015-05-19 10:45:14 +05:30
Linux Build Service Account 35a883314a Merge "arm64: dma-mapping: make alloc_noncoherent more robust" 2015-04-20 10:29:20 -07:00
Liam Mark 6ce720752e arm64: dma-mapping: make alloc_noncoherent more robust
Large allocations can result in the arm64_swiotlb_alloc_noncoherent
call not being able to succeed because it can't find enough
contiguous memory to set up the mapping.

Make arm64_swiotlb_alloc_noncoherent more robust by using vmalloc
as a fallback.

Change-Id: I00e8c3f634dc2280f3731c6042b9dd3dc644cb73
Signed-off-by: Liam Mark <lmark@codeaurora.org>
2015-04-15 14:53:57 -07:00
Linux Build Service Account b19adcde86 Merge "arm64: dma-mapping: avoid calling iommu_iova_to_phys" 2015-04-07 22:38:47 -07:00
Linux Build Service Account dad0861af4 Merge "arm64: add support for memtest" 2015-04-04 05:38:48 -07:00
Mitchel Humpherys 3b45241234 arm64: dma-mapping: avoid calling iommu_iova_to_phys
We're currently asking the IOMMU layer to do an iova-to-phys translation
in .unmap_page and .sync_single_for_* in the IOMMU DMA mapper.  This can
be a costly operation since it will need to walk the domain's page
tables, either in software or in hardware.  Also, in some
less-than-ideal implementations of iommu_iova_to_phys this might
actually involve sleeping operations.

Avoid this overhead by saving the physical address of the buffer in the
dma_iommu_mapping structure in .map_page, using it later instead of
iommu_iova_to_phys.

Change-Id: Ic53b91a222dab01cfcdc34246a847a8c399adfb6
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2015-04-04 01:50:59 -07:00
Vladimir Murzin 0e9298991e arm64: add support for memtest
Add support for memtest command line option.

Change-Id: I8f4b75f6209e98e4ad03e7088ef06083035e01fd
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Patch-mainline: linux-arm-kernel @ 03/09/15, 10:27
Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org>
2015-04-01 09:27:43 -07:00
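
Usage note: the generic memtest code is driven from the kernel command line; passing, for example,

    memtest=4

runs that many passes of test patterns over free memory during early boot, validating the contents and reserving any regions found to be bad (memtest=0, the default, disables it).
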
Chintan Pandya fd84567625 arm64: dma-mapping: use correct type for iova in arm_iommu_unmap_sg
IOMMU virtual addresses use the dma_addr_t type since they can be up to
64-bits.  We're currently using an `unsigned int' to store our IOVA in
arm_iommu_unmap_sg, which could result in truncation.  Use the correct
type for an I/O virtual address: dma_addr_t.

This was previously fixed for arm_iommu_map_sg in
[02454d7f9feeb: "arm64: dma-mapping: use correct type for iova"].

Make the same fix in arm_iommu_unmap_sg.

Change-Id: Ib22a9600f33e6fa155812b08d67d62f72af0ad8e
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
2015-04-01 19:29:59 +05:30
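
The fix itself is a one-line type change; illustratively:

    unsigned int iova;   /* old: truncates I/O virtual addresses wider than 32 bits */
    dma_addr_t   iova;   /* new: matches the full width of the IOVA space */
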
Linux Build Service Account 0925dc4962 Merge "Merge tmp-61c3cde into msm-3.10" 2015-03-21 21:52:56 -07:00
Rich Wiley fcb3d529fa arm64: enable deprecated SETEND instruction in SCTLR compat config
Change-Id: I703d4843f8aab2ec63324f04cc13aaabae88e163
Signed-off-by: Rich Wiley <rwiley@nvidia.com>
Reviewed-on: http://git-master/r/422174
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
Git-commit: 2e0602939baf22b8f9057f7626c189248383d4ae
Git-repo: https://android.googlesource.com/kernel/common.git
Signed-off-by: Ian Maund <imaund@codeaurora.org>
2015-03-19 14:52:36 -07:00
Rich Wiley f671070010 arm64: make SCTLR compat config depend on CONFIG_ARMV7_COMPAT
Conflicts:
	arch/arm64/mm/proc.S

Change-Id: I76e0067839c96e3082b42c80d3fc670cf3d371b5
Signed-off-by: Rich Wiley <rwiley@nvidia.com>
Reviewed-on: http://git-master/r/422173
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
Git-commit: bad15588d39c24ecb76593f632a0ab5d71ace7ed
Git-repo: https://android.googlesource.com/kernel/common.git
Signed-off-by: Ian Maund <imaund@codeaurora.org>
2015-03-19 14:52:35 -07:00
Alex Van Brunt fe75c38a36 arm64: optionally set CP15BEN in SCTLR
Setting CP15BEN allows legacy applications running in AArch32 mode
that use CP15 DMB and similar instructions to continue running.

Change-Id: If76d3c6ee12865ff8c4b4e7aed01146bead87773
Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Reviewed-on: http://git-master/r/366096
Reviewed-by: Richard Wiley <rwiley@nvidia.com>
Tested-by: Oskari Jaaskelainen <oskarij@nvidia.com>
Git-commit: 80cb26c175627cb9633aeae13adc8455450bf77a
Git-repo: https://android.googlesource.com/kernel/common.git
[imaund@codeaurora.org: Resolved context conflicts]
Signed-off-by: Ian Maund <imaund@codeaurora.org>
2015-03-19 14:52:33 -07:00
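
For illustration of the control bit involved (CP15BEN is SCTLR_EL1 bit 5; the actual patch configures it in the boot-time SCTLR value, so this C-style fragment is a sketch only):

    u64 sctlr = read_sysreg(sctlr_el1);

    sctlr |= BIT(5);                 /* SCTLR_EL1.CP15BEN: allow CP15 barriers */
    write_sysreg(sctlr, sctlr_el1);
    isb();
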
Greg Hackmann ee230028d9 arm64: check for upper PAGE_SHIFT bits in pfn_valid()
pfn_valid() returns a false positive when the lower (64 - PAGE_SHIFT)
bits match a valid pfn but some of the upper bits are set.  This caused
a kernel panic in kpageflags_read() when a userspace utility parsed
/proc/*/pagemap, neglected to discard the upper flag bits, and tried to
lseek()+read() from the corresponding offset in /proc/kpageflags.

A valid pfn will never have the upper PAGE_SHIFT bits set, so simply
check for this before passing the pfn to memblock_is_memory().

Change-Id: Ief5d8cd4dd93cbecd545a634a8d5885865cb5970
Signed-off-by: Greg Hackmann <ghackmann@google.com>
Git-commit: bf485e6f51d505bbe4ab5eaafcfe7789ec83e7ee
Git-repo: https://android.googlesource.com/kernel/common.git
Signed-off-by: Ian Maund <imaund@codeaurora.org>
2015-03-19 14:52:29 -07:00
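
A simplified sketch of the check:

    int pfn_valid(unsigned long pfn)
    {
            phys_addr_t addr = (phys_addr_t)pfn << PAGE_SHIFT;

            /*
             * A genuine pfn round-trips through the shift; stray upper bits
             * (e.g. flag bits a userspace parser forgot to mask off) do not.
             */
            if ((addr >> PAGE_SHIFT) != pfn)
                    return 0;

            return memblock_is_memory(addr);
    }
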
Linux Build Service Account a307d1ad03 Merge "arm64: Switch to iommu_map_sg API" 2015-03-19 14:23:24 -07:00
Laura Abbott bd28cb0cd9 arm64: Switch to iommu_map_sg API
The map_sg API is the newer standard API. Switch to it
from the map_range API.

Change-Id: I34bc0b4c6ad54e016dd530feaba542af5e053a91
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2015-03-12 16:58:57 -07:00
Mitchel Humpherys 048e818534 arm64: dma-mapping: use correct type for iova
IOMMU virtual addresses use the dma_addr_t type since they can be up to
64-bits.  We're currently using an `unsigned int' to store our IOVA in
arm_iommu_map_sg, which could result in truncation.  Use the correct
type for an I/O virtual address: dma_addr_t.

Change-Id: Ie63bf17268ca70d102ab9d472ed9bcc6f4a793d7
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2015-03-11 13:17:37 -07:00
Matt Wagantall f4b9c08367 arm64: mark split_pmd() with __init to avoid section mismatch warnings
split_pmd() calls early_alloc(), which is marked with __init. Mark
split_pmd() similarly. The only current caller of split_pmd() is
remap_pages(), which is already __init, so there was no real danger
here in the first place.

Change-Id: I3bbc4c66f1ced8fe772366b7e5287be5f474f314
Signed-off-by: Matt Wagantall <mattw@codeaurora.org>
2015-03-11 13:17:24 -07:00
Linux Build Service Account 6a27fa9e5e Merge "arm64: Improve error message for SP/PC aborts" 2014-12-26 11:51:16 -08:00
Linux Build Service Account 174faf3c6e Merge "edac: cortex_arm64: Remove misleading edac error warning" 2014-12-17 22:33:30 -08:00
Patrick Daly 87493e2c8a arm64: Improve error message for SP/PC aborts
Add a more descriptive string to be printed out by __die().

Change-Id: Ic5abc1f808d2753f6195492db32c18fd0f0fa313
Signed-off-by: Patrick Daly <pdaly@codeaurora.org>
2014-12-17 16:48:09 -08:00
Patrick Daly 3db5e3d816 edac: cortex_arm64: Remove misleading edac error warning
If bad_mode() or do_bad() is called, the arm edac driver checks for an
error. Ensure that warning messages are only printed out if there is
actually an error.

Additionally, fix an issue where the warning for a single bit error could
be printed for a double bit error.

Change-Id: I6133cb298fb9e660a220434761427d6ea6adb2ba
Signed-off-by: Patrick Daly <pdaly@codeaurora.org>
2014-12-12 13:09:09 -08:00
Mitchel Humpherys a03f74ef16 arm64: dma-mapping: map sg lists into the SMMU as virtually contiguous
In arm_iommu_map_sg, currently we map each individual link in the given
scatterlist into the SMMU individually such that they may or may not be
virtually contiguous.  However, in most (all?) of our use cases we
actually want the entire sg list mapped into the SMMU as a single
contiguous range.  Use iommu_map_range to accomplish this.

Change-Id: Icf72ece50c3120a0091dbfab1523ff11da20f807
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2014-12-05 14:43:14 -08:00
Mitchel Humpherys a9b6f43592 arm64: dma-mapping: fix some issues with IOMMU mapper
We haven't been compiling the IOMMU mapper for a while and some things
have gotten stale.  Fix various compilation issues.

Change-Id: Ia76264c57e5caea048b8364d345c011338465613
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2014-12-05 14:43:13 -08:00
Mitchel Humpherys 91721b71d6 arm64: dma-mapping: swap arguments to __get_dma_pgprot
The arguments to __get_dma_pgprot in arm64 are reversed from those in
the corresponding arm version. Match them up to facilitate the re-use of
some other code (like the IOMMU mapping code) from arm in arm64.

Change-Id: Idc8645c3455bda4e991f3e04eee5726b2b398092
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
2014-12-03 15:45:23 -08:00
Laura Abbott 4fad4d37bd mmu: arm64: fix ability to write to protected memory
Use the mem_text_address_writeable function if
CONFIG_KERNEL_TEXT_RDONLY is specified. Modify the
page table entry rather than the pmd, depending
on pmd type.

Change-Id: I04390a9b7376b299161842e87150802da2d4d728
Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
2014-11-26 15:33:32 -05:00
Johannes Weiner e2ec2c2b96 arch: mm: pass userspace fault flag to generic fault handler
commit 759496ba6407c6994d6a5ce3a5e74937d7816208 upstream.

Unlike global OOM handling, memory cgroup code will invoke the OOM killer
in any OOM situation because it has no way of telling faults occurring in
kernel context - which could be handled more gracefully - from
user-triggered faults.

Pass a flag that identifies faults originating in user space from the
architecture-specific fault handlers to generic code so that memcg OOM
handling can be improved.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-11-21 09:22:56 -08:00
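
The per-architecture part of this is essentially a one-liner in each fault handler; a sketch:

    /*
     * Tell generic mm code this fault came from user context, so memcg OOM
     * handling can distinguish it from kernel-context faults.
     */
    if (user_mode(regs))
            mm_flags |= FAULT_FLAG_USER;
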
Johannes Weiner 086c6cc537 arch: mm: do not invoke OOM killer on kernel fault OOM
commit 871341023c771ad233620b7a1fb3d9c7031c4e5c upstream.

Kernel faults are expected to handle OOM conditions gracefully (gup,
uaccess etc.), so they should never invoke the OOM killer.  Reserve this
for faults triggered in user context when it is the only option.

Most architectures already do this, fix up the remaining few.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-11-21 09:22:55 -08:00
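
Roughly, the pattern applied to the remaining fault handlers (simplified):

    if (fault & VM_FAULT_OOM) {
            if (!user_mode(regs))
                    /* Kernel-context fault: let gup/uaccess handle -ENOMEM. */
                    goto no_context;

            /* User fault: invoke the OOM killer, then return and retry. */
            pagefault_out_of_memory();
            return 0;
    }
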
Linux Build Service Account 30e6149014 Merge "arm64: Check for parity errors on synchronous aborts" 2014-11-12 13:57:58 -08:00
Patrick Daly 98ceffa273 arm64: Check for parity errors on synchronous aborts
Certain types of fatal synchronous aborts may be triggered by parity
errors in the L1 or L2 caches. Check whether a parity error occured and
print out the relevant information.

Change-Id: Ibc306e12e95286d29757bac293bd0b69bf04ebc4
Signed-off-by: Patrick Daly <pdaly@codeaurora.org>
2014-11-11 15:32:45 -08:00