Commit graph

25 commits

Author SHA1 Message Date
Yuanyuan Zhong b4c98bc799 arm64: strcmp: align to 64B cache line
Align strcmp to 64B. This will ensure the performance-critical
loop is within one 64B cache line.

Change-Id: I9240fbb4407637b2290a44e02ad59098a377b356
Signed-off-by: Yuanyuan Zhong <zyy@motorola.com>
Reviewed-on: https://gerrit.mot.com/902536
SME-Granted: SME Approvals Granted
SLTApproved: Slta Waiver <sltawvr@motorola.com>
Tested-by: Jira Key <jirakey@motorola.com>
Reviewed-by: Yi-Wei Zhao <gbjc64@motorola.com>
Reviewed-by: Igor Kovalenko <igork@motorola.com>
Submit-Approved: Jira Key <jirakey@motorola.com>
2017-04-18 12:17:54 +02:00
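
A hypothetical C analogue of the alignment change above (the patch itself adjusts the assembly source; the attribute-based alignment and the strcmp_sketch name below are this sketch's assumptions, not the commit's code):

/*
 * Force the function's first instruction onto a 64-byte boundary so that a
 * short hot loop near the entry point is likely to sit within a single
 * 64-byte cache line.
 */
__attribute__((aligned(64)))
static int strcmp_sketch(const char *a, const char *b)
{
	while (*a && *a == *b) {
		a++;
		b++;
	}
	return (unsigned char)*a - (unsigned char)*b;
}
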
Hong-Mei Li 5cb8aed1c3 arm64: lib: memory utilities optimization
Optimize memcpy and memmove to prefetch several cache lines.
We can achieve a 15% memcpy speed improvement with the preload method.

Change-Id: I2259b98a33eba0b7466920b3f270f953e609cf13
Signed-off-by: Hong-Mei Li <a21834@motorola.com>
Reviewed-on: http://gerrit.mot.com/740766
SLTApproved: Slta Waiver <sltawvr@motorola.com>
SME-Granted: SME Approvals Granted
Tested-by: Jira Key <jirakey@motorola.com>
Reviewed-by: Zhi-Ming Yuan <a14194@motorola.com>
Submit-Approved: Jira Key <jirakey@motorola.com>
2017-04-18 12:17:48 +02:00
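
A rough C sketch of the preload idea, assuming a 256-byte prefetch distance and 64-byte cache lines (the actual change lives in the arm64 assembly routines; copy_with_preload is a hypothetical name):

#include <stdint.h>
#include <string.h>

/*
 * Issue software prefetches a few cache lines ahead of the copy cursor so
 * that the loads for upcoming iterations are already in flight when the
 * loop reaches them.
 */
static void copy_with_preload(void *dst, const void *src, size_t len)
{
	uint8_t *d = dst;
	const uint8_t *s = src;

	while (len >= 64) {
		__builtin_prefetch(s + 256);	/* preload ~4 lines ahead */
		memcpy(d, s, 64);		/* copy one 64-byte line */
		d += 64;
		s += 64;
		len -= 64;
	}
	memcpy(d, s, len);			/* remaining tail */
}
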
Will Deacon c182462032 arm64: lib: improve copy_page to deal with 128 bytes at a time
We want to avoid lots of different copy_page implementations, settling
for something that is "good enough" everywhere and hopefully easy to
understand and maintain whilst we're at it.

This patch reworks our copy_page implementation based on discussions
with Cavium on the list and benchmarking on Cortex-A processors so that:

  - The loop is unrolled to copy 128 bytes per iteration

  - The reads are offset so that we read from the next 128-byte block
    in the same iteration that we store the previous block

  - Explicit prefetch instructions are removed for now, since they hurt
    performance on CPUs with hardware prefetching

  - The loop exit condition is calculated at the start of the loop

Change-Id: I0d9f3bbe4efa2751f41432a3b4b299fbb0e494be
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-18 12:17:45 +02:00
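
A rough C rendering of the loop structure described above, assuming a 4K page (the real implementation is copy_page.S; the offset-read scheduling is noted in the comment because plain C cannot express it directly):

#include <stdint.h>

#define PAGE_SIZE 4096	/* assumption for this sketch */

/*
 * 128 bytes are copied per iteration and the exit condition is derived
 * once, up front. The assembly version additionally offsets the reads so
 * the next 128-byte block is loaded in the same iteration that stores the
 * previous one; this sketch does not capture that.
 */
static void copy_page_sketch(void *dst, const void *src)
{
	uint64_t *d = dst;
	const uint64_t *s = src;
	const uint64_t *end = s + PAGE_SIZE / sizeof(*s);	/* exit computed at start */

	do {
		for (int i = 0; i < 16; i++)	/* 16 x 8 bytes = 128 bytes */
			d[i] = s[i];
		d += 16;
		s += 16;
	} while (s != end);
}
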
Will Deacon 59180ec546 arm64: lib: use pair accessors for copy_*_user routines
commit 23e94994464a7281838785675e09c8ed1055f62f upstream.

The AArch64 instruction set contains load/store pair memory accessors,
so use these in our copy_*_user routines to transfer 16 bytes per
iteration.

Change-Id: Ie9874b067ff7450a40b29a760f0c6e742f750adc
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
2017-04-18 12:17:42 +02:00
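
A minimal inline-asm sketch of one such 16-byte step (copy16 is a hypothetical helper; the real copy_*_user routines also wrap each user access in exception-table fixups, which are omitted here):

#include <stdint.h>

/*
 * Transfer 16 bytes with one load-pair and one store-pair instruction.
 */
static inline void copy16(void *dst, const void *src)
{
	uint64_t lo, hi;

	asm volatile(
	"	ldp	%0, %1, [%2]\n"
	"	stp	%0, %1, [%3]\n"
	: "=&r" (lo), "=&r" (hi)
	: "r" (src), "r" (dst)
	: "memory");
}
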
Trilok Soni b0993d762f Revert "arm64: optimized copy_to_user and copy_from_user assembly code"
Reverting commit f397991e3b since it seems to be causing a
regression in some use cases of copy_from_user.

Change-Id: Ia60977a3501c3273110d19cc231f790da7570e57
Signed-off-by: Trilok Soni <tsoni@codeaurora.org>
2015-06-18 17:25:45 -07:00
Feng Kan f397991e3b arm64: optimized copy_to_user and copy_from_user assembly code
Using the glibc Cortex Strings work authored by Linaro as a base to
create new copy to/from user kernel routines.

Iperf performance increase:
		-l (size)	1 core result
Optimized 	64B		44-51Mb/s
		1500B		4.9Gb/s
		30000B		16.2Gb/s
Original	64B		34-50.7Mb/s
		1500B		4.7Gb/s
		30000B		14.5Gb/s

Change-Id: I6beac63a65164226f798b043f726fc391ce65e2f
Signed-off-by: Feng Kan <fkan@apm.com>
Patch-mainline: linux-arm-kernel @ 28th April 2015 17:38
Signed-off-by: David Keitel <dkeitel@codeaurora.org>
2015-06-08 14:52:15 -07:00
Kyle McMartin 0896f287e4 arm64: __clear_user: handle exceptions on strb
ARM64 currently doesn't fix up faults on the single-byte (strb) case of
__clear_user... which means that we can cause a nasty kernel panic as an
ordinary user with any read of a multiple of PAGE_SIZE plus one byte from /dev/zero.
i.e.: dd if=/dev/zero of=foo ibs=1 count=1 (or ibs=65537, etc.)

This is a pretty obscure bug in the general case since we'll only
__do_kernel_fault (since there's no extable entry for pc) if the
mmap_sem is contended. However, with CONFIG_DEBUG_VM enabled, we'll
always fault.

if (!down_read_trylock(&mm->mmap_sem)) {
	if (!user_mode(regs) && !search_exception_tables(regs->pc))
		goto no_context;
retry:
	down_read(&mm->mmap_sem);
} else {
	/*
	 * The above down_read_trylock() might have succeeded in which
	 * case, we'll have missed the might_sleep() from down_read().
	 */
	might_sleep();
	if (!user_mode(regs) && !search_exception_tables(regs->pc))
		goto no_context;
}

Fix that by adding an extable entry for the strb instruction, since it
touches user memory, similar to the other stores in __clear_user.

Change-Id: I7256cecee5ff3f8f5958d3f7f39ad6b4e761c828
Signed-off-by: Kyle McMartin <kyle@redhat.com>
Reported-by: Miloš Prchlík <mprchlik@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Git-commit: 97fc15436b36ee3956efad83e22a557991f7d19d
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Signed-off-by: Trilok Soni <tsoni@codeaurora.org>
2014-11-25 19:14:41 -08:00
Trilok Soni 34d64b3518 Revert "arm64: optimized copy_to_user and copy_from_user assembly code"
Reverting commit 6d5031fb since it seems to be causing a
regression in some use cases of copy_to_user.

Change-Id: I42a71b85b7a912944e5d3f21a0a2e9db2ff5b465
Signed-off-by: Trilok Soni <tsoni@codeaurora.org>
2014-10-14 10:43:45 -07:00
Feng Kan 6d5031fb2b arm64: optimized copy_to_user and copy_from_user assembly code
Using the glibc Cortex Strings work authored by Linaro as a base to
create new copy to/from user kernel routines.

Iperf performance increase:
		-l (size)		1 core result
Optimized 	64B			44-51Mb/s
		1500B			4.9Gb/s
		30000B			16.2Gb/s
Original	64B			34-50.7Mb/s
		1500B			4.7Gb/s
		30000B			14.5Gb/s

Change-Id: I1c1ec6403e7e5c040c1528511578888cb1e77379
Signed-off-by: Feng Kan <fkan@apm.com>
Patch-mainline: linux-arm-kernel @ 8 Aug 2014 16:02
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2014-10-03 13:21:34 -07:00
zhichang.yuan 586fac4aed arm64: lib: Implement optimized string length routines
This patch, based on Linaro's Cortex Strings library, adds
assembly-optimized strlen() and strnlen() functions.

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Git-commit: 0a42cb0a6fa64cb17db11164a1ad3511b43acefe
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2014-08-15 11:46:49 -07:00
zhichang.yuan 3540fd8247 arm64: lib: Implement optimized string compare routines
This patch, based on Linaro's Cortex Strings library, adds
assembly-optimized strcmp() and strncmp() functions.

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Git-commit: 192c4d902f19b66902d7aacc19e9b169bebfb2e5
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2014-08-15 11:46:49 -07:00
zhichang.yuan 48429493a7 arm64: lib: Implement optimized memcmp routine
This patch, based on Linaro's Cortex Strings library, adds
an assembly optimized memcmp() function.

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Git-commit: d875c9b3724083cd2629cd8507e424cd3716cd28
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2014-08-15 11:46:48 -07:00
zhichang.yuan 4b6a005c43 arm64: lib: Implement optimized memset routine
This patch, based on Linaro's Cortex Strings library, improves
the performance of the assembly optimized memset() function.

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Git-commit: b29a51fe0e0be63157d8661666be8bbfd8f0c5d7
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2014-08-15 11:46:48 -07:00
zhichang.yuan de8ae2881c arm64: lib: Implement optimized memmove routine
This patch, based on Linaro's Cortex Strings library, improves
the performance of the assembly optimized memmove() function.

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Git-commit: 280adc1951c0c9fc8f2d85571ff563a1c412b1cd
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2014-08-15 11:46:47 -07:00
zhichang.yuan 8fffeb104f arm64: lib: Implement optimized memcpy routine
This patch, based on Linaro's Cortex Strings library, improves
the performance of the assembly optimized memcpy() function.

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Git-commit: 808dbac6b51f3441eb5a07724c0b0d1257046d51
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
2014-08-15 11:46:47 -07:00
Will Deacon cbe8537022 arm64: atomics: fix use of acquire + release for full barrier semantics
Linux requires a number of atomic operations to provide full barrier
semantics, that is, no memory accesses after the operation can be
observed before any accesses up to and including the operation in
program order.

On arm64, these operations have been incorrectly implemented as follows:

	// A, B, C are independent memory locations

	<Access [A]>

	// atomic_op (B)
1:	ldaxr	x0, [B]		// Exclusive load with acquire
	<op(B)>
	stlxr	w1, x0, [B]	// Exclusive store with release
	cbnz	w1, 1b

	<Access [C]>

The assumption here being that two half barriers are equivalent to a
full barrier, so the only permitted ordering would be A -> B -> C
(where B is the atomic operation involving both a load and a store).

Unfortunately, this is not the case by the letter of the architecture
and, in fact, the accesses to A and C are permitted to pass their
nearest half barrier resulting in orderings such as Bl -> A -> C -> Bs
or Bl -> C -> A -> Bs (where Bl is the load-acquire on B and Bs is the
store-release on B). This is a clear violation of the full barrier
requirement.

The simple way to fix this is to implement the same algorithm as ARMv7
using explicit barriers:

	<Access [A]>

	// atomic_op (B)
	dmb	ish		// Full barrier
1:	ldxr	x0, [B]		// Exclusive load
	<op(B)>
	stxr	w1, x0, [B]	// Exclusive store
	cbnz	w1, 1b
	dmb	ish		// Full barrier

	<Access [C]>

but this has the undesirable effect of introducing *two* full barrier
instructions. A better approach is actually the following, non-intuitive
sequence:

	<Access [A]>

	// atomic_op (B)
1:	ldxr	x0, [B]		// Exclusive load
	<op(B)>
	stlxr	w1, x0, [B]	// Exclusive store with release
	cbnz	w1, 1b
	dmb	ish		// Full barrier

	<Access [C]>

The simple observations here are:

  - The dmb ensures that no subsequent accesses (e.g. the access to C)
    can enter or pass the atomic sequence.

  - The dmb also ensures that no prior accesses (e.g. the access to A)
    can pass the atomic sequence.

  - Therefore, no prior access can pass a subsequent access, or
    vice-versa (i.e. A is strictly ordered before C).

  - The stlxr ensures that no prior access can pass the store component
    of the atomic operation.

The only tricky part remaining is the ordering between the ldxr and the
access to A, since the absence of the first dmb means that we're now
permitting re-ordering between the ldxr and any prior accesses.

From an (arbitrary) observer's point of view, there are two scenarios:

  1. We have observed the ldxr. This means that if we perform a store to
     [B], the ldxr will still return older data. If we can observe the
     ldxr, then we can potentially observe the permitted re-ordering
     with the access to A, which is clearly an issue when compared to
     the dmb variant of the code. Thankfully, the exclusive monitor will
     save us here since it will be cleared as a result of the store and
     the ldxr will retry. Notice that any use of a later memory
     observation to imply observation of the ldxr will also imply
     observation of the access to A, since the stlxr/dmb ensure strict
     ordering.

  2. We have not observed the ldxr. This means we can perform a store
     and influence the later ldxr. However, that doesn't actually tell
     us anything about the access to [A], so we've not lost anything
     here either when compared to the dmb variant.

This patch implements this solution for our barriered atomic operations,
ensuring that we satisfy the full barrier requirements where they are
needed.

Cc: <stable@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Git-commit: 8e86f0b409a44193f1587e87b69c5dcf8f65be67
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Signed-off-by: Ian Maund <imaund@codeaurora.org>
2014-04-17 17:16:53 -07:00
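
A sketch of the final sequence above as it might appear in a C wrapper, assuming a plain long counter rather than the kernel's atomic64_t (close in spirit to what the patch does for the 64-bit barriered atomics):

/*
 * Exclusive load, the operation, a store-release, then one dmb ish to give
 * the whole sequence full-barrier semantics.
 */
static inline long atomic64_add_return_sketch(long i, long *counter)
{
	long result;
	unsigned long tmp;

	asm volatile(
	"1:	ldxr	%0, %2\n"
	"	add	%0, %0, %3\n"
	"	stlxr	%w1, %0, %2\n"
	"	cbnz	%w1, 1b\n"
	"	dmb	ish"
	: "=&r" (result), "=&r" (tmp), "+Q" (*counter)
	: "r" (i)
	: "memory");

	return result;
}
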
Will Deacon 9a3c26b7c0 arm64: use generic strnlen_user and strncpy_from_user functions
This patch implements the word-at-a-time interface for arm64 using the
same algorithm as ARM. We use the fls64 macro, which expands to a clz
instruction via a compiler builtin. Big-endian configurations make use
of the implementation from asm-generic.

With this implemented, we can replace our byte-at-a-time strnlen_user
and strncpy_from_user functions with the optimised generic versions.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Git-commit: 12a0ef7b0ac38677bd2d85f33df5ca0a57868819
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Signed-off-by: Ian Maund <imaund@codeaurora.org>
2014-04-17 17:04:37 -07:00
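
A minimal sketch of the word-at-a-time idea, assuming little-endian and 8-byte-aligned input (the kernel's generic helpers and the fls64/clz-based handling differ in detail; strlen_word_at_a_time is a hypothetical name):

#include <stddef.h>
#include <stdint.h>

/*
 * Flag any zero byte in an aligned 64-bit word with two constants, then
 * locate the first one with a count-trailing-zeros, instead of testing one
 * byte per iteration.
 */
static size_t strlen_word_at_a_time(const char *s)
{
	const uint64_t ones  = 0x0101010101010101ULL;
	const uint64_t highs = 0x8080808080808080ULL;
	const uint64_t *p = (const uint64_t *)s;	/* assumes 8-byte alignment */
	size_t len = 0;

	for (;;) {
		uint64_t v = *p++;
		uint64_t zeros = (v - ones) & ~v & highs;	/* high bit set for zero bytes */

		if (zeros)
			return len + (__builtin_ctzll(zeros) >> 3);
		len += 8;
	}
}
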
Catalin Marinas 420c158dcf arm64: Treat the bitops index argument as an 'int'
The bitops prototypes use an 'int' as the bit index type, but the asm
implementation assumes it to be a 'long'. Since the compiler does not
guarantee zeroing of the upper 32 bits of a register used as an 'int',
change the bitops implementation accordingly.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-05-08 10:33:17 +01:00
Catalin Marinas 16c85a1fd7 arm64: Use acquire/release semantics instead of explicit DMB
This patch changes the test_and_*_bit functions to use the
load-acquire/store-release instructions instead of explicit DMB.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-04-30 15:58:37 +01:00
Mark Rutland c47d6a04e6 arm64: klib: bitops: fix unpredictable stxr usage
We're currently relying on unpredictable behaviour in our testops
(test_and_*_bit), as stxr is unpredictable when the status register and
the source register are the same.

This patch reallocates the status register so as to bring us back into
the realm of predictable behaviour. Boot tested on an AEMv8 model.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-04-30 15:53:01 +01:00
Catalin Marinas 6247958653 arm64: klib: Optimised atomic bitops
This patch implements the AArch64-specific atomic bitops functions using
exclusive memory accesses to avoid locking.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-03-21 17:39:31 +00:00
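
A hedged sketch of the exclusive-access pattern, assuming a 64-bit word and a non-negative bit index (the actual implementation is generated from an assembly template; set_bit_sketch is a hypothetical name):

/*
 * Atomic set_bit built from an exclusive load/store pair. If another CPU
 * touches the word between the ldxr and the stxr, the store fails and the
 * loop retries, so no lock is required.
 */
static inline void set_bit_sketch(unsigned int nr, unsigned long *addr)
{
	unsigned long mask = 1UL << (nr & 63);
	unsigned long tmp;
	unsigned int fail;

	addr += nr >> 6;			/* select the 64-bit word */

	asm volatile(
	"1:	ldxr	%0, %2\n"
	"	orr	%0, %0, %3\n"
	"	stxr	%w1, %0, %2\n"
	"	cbnz	%w1, 1b"
	: "=&r" (tmp), "=&r" (fail), "+Q" (*addr)
	: "r" (mask)
	: "memory");
}
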
Catalin Marinas 2b8cac814c arm64: klib: Optimised string functions
This patch introduces AArch64-specific string functions (strchr,
strrchr).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-03-21 17:39:30 +00:00
Catalin Marinas 4a8992271c arm64: klib: Optimised memory functions
This patch introduces AArch64-specific memory functions (memcpy,
memmove, memchr, memset). These functions are not optimised for any CPU
implementation but can be used as a starting point once hardware is
available.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-03-21 17:39:29 +00:00
Marc Zyngier f27bb139c3 arm64: Miscellaneous library functions
This patch adds udelay, memory and bit operations together with the
ksyms exports.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
2012-09-17 13:42:18 +01:00
Catalin Marinas 0aea86a217 arm64: User access library functions
This patch adds support for various user access functions. These
functions use the standard LDR/STR instructions and not the LDRT/STRT
variants in order to allow kernel addresses (after set_fs(KERNEL_DS)).

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
2012-09-17 13:42:11 +01:00