A delay of 50us between successive TLBSTATUS checks is
too long when map/unmap is called frequently, and it
hurts performance. So instead use a minimal delay (1us)
between two successive checks.
Change-Id: Iaa7d9d2bae93bd7004a1f42e8d2936f28f8f11a8
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
When sync_tlb times out, the respective client needs a
notification so it can gather debug info from its
device's point of view. So, add the notifier call
unconditionally instead of limiting it to a debug
defconfig.
Change-Id: I532022e38af0f4db9d12048c02e19663cd284a8e
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
This SMMU driver now supports use cases where large
numbers of buffers, each 4K in size, get mapped and
unmapped very frequently. Doing a TLBIASID for every
such unmap is costly. So, provide support for TLBIVA
again.
Change-Id: I58d04d91231c5da0dc3e0b92082eb515630a69ce
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
Flushing the cache for each PTE update can be very time
consuming when the number of PTEs being updated is on
the order of a few thousand. Instead of performing cache
ops for each PTE update, flush the updated page table
once at the end of the map/unmap routine. This saves
roughly 60% of the total time spent in map/unmap calls.
A few numbers with and without this optimization applied.
Numbers were taken on a single Cortex-A53 core clocked
at 1.2 GHz.
AARCH64 (without optimization)
  size    iommu_map_range    iommu_unmap
  64K          14 us               9 us
  2M          176 us              16 us
  12M        1016 us              54 us
  20M        1809 us             100 us

AARCH64 (with optimization)
  size    iommu_map_range    iommu_unmap
  64K          18 us              12 us
  2M           77 us              18 us
  12M         396 us              47 us
  20M         648 us              73 us
Change-Id: I5c5f9e5cec5a7aed5b478be52d943fcaa1c0ed84
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
Some clients can generate 48/49-bit virtual addresses.
Support those clients via the AARCH64 page table format.
Change-Id: Ic8d9a12e990f13ffebd6be6c81506d6bcc421f05
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
Some IOMMU clients treat a CB fault as non-recoverable
and may want to trap the fault for debugging. Provide
that option via a context bank DT property.
Change-Id: Icb9cb67ed3dac44e144fcd7bc85deca833bf941c
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
Bits [9] and [10] of the FSR register indicate the
page table format of the context bank and can be
non-zero even if no fault is recorded in the FSR.
Only the first 9 bits [8:0] indicate the type of fault
at the time of the fault. So, fix the false positive by
checking the fault indicator bits only.
Change-Id: Id7c37d8d0b26002156ae3b829e3a11fb7a631fed
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
Certain use cases require iommu map and unmap functions
to be called from atomic context. Remove all sleeping
calls and fix the locks to make these functions atomic.
The secure map and unmap functions are still non-atomic
since they have to invoke smc calls which are sleeping
functions right now.
Change-Id: I802b1aed98d30bf75b381fadcb5fc68978618a3f
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
Remove CONFIG_MSM_IOMMU_TLBINVAL_ON_MAP and the code
protected by it, since there is no known case now,
which requires a tlb invalidate during a map.
Change-Id: Ia9566dfadbb24345e4bcc66111dd0013a53e1b1c
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
The MSM IOMMU driver presently supports mapping of
32-bit virtual addresses only. Because of this, a
32-bit virtual address type was acceptable and some
mismatches went unnoticed. This is harmless but still
buggy. Also, for 64-bit virtual address mapping, some
APIs need updates.
Make the input virtual address always 'unsigned long'
and truncate it (if necessary) based on the page table
format.
Change-Id: I5d761246b0e150d9a0d22a9ae25581b5205e0594
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
TLB invalidation by VA works on the given VA only, not
on a range of VAs. So, for a region that spans multiple
PTEs, TLBIVA would have to be issued multiple times,
which is inefficient.
Worse, the present implementation issues a single
TLBIVA for only the first VA, and the rest may be left
in the TLB as-is. This is a bug. Fix it by replacing
TLBIVA with TLBIASID, which invalidates all VAs with a
matching ASID.
We can still skip this fix in iommu_map, since
iommu_map indeed still maps one VA at a time.
Change-Id: I6c833e62fd47d9c11457ef90cdd322b6f751c698
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
iommu_map_sg is the newer, preferred API. Add a wrapper around
the existing map_range API for map_sg. Once all clients have been
successfully converted, map_range can be removed.
Change-Id: Ib77c86f6b12b00b2bd83a4938465dc685faea624
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Allow clients to query page table addresses via this
attribute instead of the non-standard iommu_get_pt_base_addr()
function.
Change-Id: Ide61f4cb5cec4b2e67fd035aa59e154b5dfca8d0
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
When detaching the IOMMU device there is no need to flush the
TLB. Detach can be called by drivers to recover from IOMMU
pipe lockups; if the IOMMU pipe is stuck, the flush will not
complete, which prevents client drivers from resetting the IOMMU.
Change-Id: I03d99398692b2942497b50c3b6367a1dd1d63cfc
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
It's currently inside an `#ifdef CONFIG_MSM_IOMMU_VBIF_CHECK', which is
incorrect since it doesn't actually depend on that config. Drivers that
want to use it fail to link when CONFIG_MSM_IOMMU_V1=y &&
CONFIG_MSM_IOMMU_VBIF_CHECK=n. Move it out and export it while we're at
it.
Change-Id: I48dc655fbd1558871704aa40e065e1002e836308
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
If a TLB sync does not complete for some reason, a recovery sequence
is performed on the client VBIF. However, the VBIF base address used
to perform this recovery sequence is incorrect and leads to an XPU
violation. Fix the VBIF base address to prevent unintentional register
access.
Change-Id: I8cffffa1f0f3e30116fd245c7b6e8f2c61ce847e
Signed-off-by: Ujwal Patel <ujwalp@codeaurora.org>
Signed-off-by: Siddhartha Agrawal <agrawals@codeaurora.org>
There are some events that IOMMU client drivers might like to be
notified about. Add a notifier chain for this purpose. Currently the
only supported event is TLB sync timeout.
Change-Id: I4f04e856c9a809f49afb857de8047a8ac2d02a92
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
We recently moved the L2 redirect configuration to a domain attribute
[dffd6f05e94f: "iommu: msm: move L2 redirect to a domain attribute"].
This allows us to do away with some divergence in the generic IOMMU
APIs. However, that commit botched the case when
CONFIG_IOMMU_PGTABLES_L2 is disabled. In that case we actually always
set the page tables to be shared, exactly the opposite of what we want.
Fix this.
Change-Id: I47837584ae88fbc0be578500d20c2a62a9b33bca
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
We will soon be removing the `flags' parameter from iommu_domain_alloc
and iommu_ops.domain_init. In preparation for this, move the L2 redirect
flag to a domain attribute.
We now no longer need the extra parameter to iommu_domain_alloc which
was added in [8984b0e30df: "drivers: iommu: Add flags to
iommu_domain_alloc"] since we're changing the way L2 redirect is
configured. Remove it.
Change-Id: Ie0d15767ca08211740d22568683fae01e8123a26
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
For the SMMU, a TLB invalidation operation consists of 3 steps:
1) Issue the invalidation command
2) Issue the SYNC command
3) Poll the status bit to confirm that the TLB operation
has completed
Fix this sequence in the driver.
Change-Id: I60cefb313b359134367a48528bebe554487d436a
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
An ASID conflict between 2 CBs can lead to odd SMMU
behavior. The present way of allotting ASIDs can
conflict with the ASIDs being used by the secure world.
Use the CB number as the ASID. That ensures we will not
use any ASID which the secure world may choose to use.
Change-Id: I622d5c1aee7dac5913706588ae7ff1c490a2981c
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
The device struct may be passed as NULL into the msm_detach_dev
function. Add a check to prevent a NULL pointer dereference.
Change-Id: I4123b60969358cd4ff9ad20b76257405aacc4257
Signed-off-by: Neeti Desai <neetid@codeaurora.org>
Due to a hardware bug the TLB has to be invalidated
during map and map_range operations. Newer targets
no longer see this issue. Add config option
to invalidate the TLB only for older targets.
Change-Id: I5bbe84e9dde23bcf960cf5409eed41c6cea41c16
Signed-off-by: Neeti Desai <neetid@codeaurora.org>
When an address is unmapped from the page tables there is a short
period of time, between the code freeing the page table back to the
memory subsystem and the code issuing a TLB invalidate on the IOMMU
hardware, during which the IOMMU could be accessing the page table
that has been freed. This can cause the IOMMU to translate a virtual
address to a bogus physical address, which can cause system
instability.
Instead of freeing the page before doing a TLB invalidate we keep a
shadow table that keeps track of the pointers to the page tables and the
number of outstanding mappings. We can thus zero out the real page
table, do a TLB invalidate, and then free the pages through the shadow
page table entries.
Change-Id: Ifb677ea8033fb35d8a98c1f00c9aaa9bcfe0b2d0
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
For some of the SMMUs, the number of unique stream
IDs is greater than the number of Stream Mapping
Table entries. The mask field of the SMRn register
needs to be updated to achieve the correct mapping
behaviour.
Change-Id: I72a2ffe538a6078320c65575d1b007e4114401a4
Signed-off-by: Neeti Desai <neetid@codeaurora.org>
We now dump both global space registers and context
bank registers as part of the page fault handler. Pass
the proper base address for each of them.
Change-Id: I4d5cbdc508100cbcb5f5960a209d01cadac3904b
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
Original intent:
MMU-500 implements context caching in the TLBs and
prefetch buffers of the IOMMU. Enabling them boosts
performance. But this implementation differs from what
the present SMMU driver expects. So, configure the
auxiliary registers for MMU-500.
Reason for re-enable:
Context caching was reverted due to strange permission
faults at the SMMU when it was enabled. Now that the
issue is fixed, re-enable this feature for performance.
Change-Id: I63829358e501d4c6f7526d5b59a31ab23b1cd0d8
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
Page faults often lead to suspicion of either page table
corruption in DDR or malfunctioning of the IOMMU. To
narrow this down, we can walk the page tables in software
and get the actual translation for the faulting VA. If the
physical address from the page table is the expected one,
we can suspect IOMMU malfunction; otherwise, page table
corruption.
Change-Id: Icef69e7a50ce3110fc83ca75b4a984057336b5bb
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
This reverts commit 0f2211398a.
Context caching is resulting in page faults at the SMMU.
Disable context caching until we find a proper fix for
those faults.
Change-Id: I21bade2795be1649c18bbc75800e57683999c2a8
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
MMU-500 implements context caching in the TLBs and
prefetch buffers of the IOMMU. Enabling them boosts
performance. But this implementation differs from what
the present SMMU driver expects. So, configure the
auxiliary registers for MMU-500.
Change-Id: Ife23428315ccec01cf40d4d2409b3640a38c20bb
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
Within SMMUv2, different implementations have
different software interfaces. Make the IOMMU
driver compatible with MMU-500.
Change-Id: I7d18c5aad82acafa317fe95b021e29b992a082be
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
During iommu detach we do a local iommu halt to ensure certain
registers are atomically updated. However, the local iommu halt
is taken at a larger scope than needed. Reduce the scope of this
halt. This will help with performance when detaching a context bank
from an iommu that has another context bank in use.
Change-Id: Id7630bd73a71d38ee987bd0f19cc8e06c725a7c1
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
The present logic for enabling aggregated CB interrupts works
only for non-secure SMMUs. Improve that logic to enable
interrupts for secure SMMUs as well.
Change-Id: I77f914de760562ce30b7ade512a12639eb84af6d
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
Header files are no longer allowed in the directory
arch/arm/mach-msm/include/mach/ and they have
been moved to their proper places. Fix all header
inclusions impacted by this in the IOMMU driver.
Change-Id: I36b8ba5d32f27ce9290edc840478214e4e8929c4
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
Header files are no longer allowed in the directory
arch/arm/mach-msm/include/mach/ .
Move the iommu related header files to a more suitable place.
Change-Id: Ib7bbce1485d6185f669935b507040cac75368985
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
The file mach/msm_bus.h should no longer be included;
the file linux/msm-bus.h should be included instead.
Also remove an unneeded include.
Change-Id: I6e060739977f8604409f660c72c9a983eaddfa45
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
The VFE SMMU needs to be secure for new secure camera use
cases. Change the VFE to be secure and designate the
last context bank (CB) as secure. Also ensure backwards
compatibility so that when running with an old secure
environment we fall back to the VFE SMMU being non-secure.
Change-Id: I25f31b0350ef0c1b16ebb0db531cc0e6bc556fcf
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
If we get back a partial register dump from TZ we should go ahead and
print as much as we can, rather than bailing out and not printing
anything.
Change-Id: Idcc8b14a76bf1f23bdf00b52879b3ce0ca9afdcc
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
Add support for dumping registers from the global register space. Dump
CBAR_N and CBFRSYNRA_N.
Change-Id: If20605968fac75ad791d4e63e4d089ecaf8f7ebd
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
Currently in the secure register dump code, we try to determine the
register in question by doing some reverse arithmetic on the full
address that TZ returns and matching that against known register
offsets. However, support was recently added for storing the base
physical address of the IOMMU in `struct msm_iommu_drvdata'. Simplify
the code by calculating the offset of each register returned by TZ:
subtract the IOMMU's base address from the returned value.
Change-Id: Icd6e4a35a48b808b0523ab4971cb9fb5b00125f5
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
When a TLB sync or IOMMU halt times out, check whether halting the
VBIF XIN clears the condition; if it does, this indicates an issue
with the master not draining the VBIF FIFO.
Change-Id: I552f72db17e31a174ab2725bbecbf4ad98a9d378
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
TLB sync might take longer than expected to complete and we wait for 10 ms
to recheck. This is causing jitter in some applications. Instead of waiting
too long to check whether TLB sync has completed we use function
readl_tight_poll_timeout to quickly poll the status without incurring sleep
delay.
We also ensure that if we do eventually time out on the TLB sync or iommu
halt we cause a crash to indicate that something has gone wrong.
CRs-fixed: 608971
Change-Id: I6b3e5155ab7aafa80432f95554d25cf0e2b753f9
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>