The unmap API currently does not handle unmapping of page table
entries (PTEs) properly. The generic function that calls the msm
unmap API expects the unmap call to unmap as much as possible
and then return the amount that was unmapped.
In addition, the unmap function does not support an arbitrary input
length, even though the function that calls the msm unmap function
assumes that it does.
Both these issues can cause mappings to be left in place, which will
cause subsequent mappings to fail because the mapping already exists.
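Roughly, the contract the generic caller expects looks like this
(iommu_pt and clear_pte() are hypothetical stand-ins for the real msm
page-table helpers, not the actual driver code):

static size_t msm_unmap_sketch(struct iommu_pt *pt, unsigned long va,
			       size_t len)
{
	size_t unmapped = 0;

	/* Unmap as much as possible, for any input length. */
	while (unmapped < len) {
		size_t chunk = clear_pte(pt, va); /* bytes cleared, 0 on hole */

		if (!chunk)
			break;
		va += chunk;
		unmapped += chunk;
	}
	return unmapped; /* the generic caller relies on this amount */
}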
Change-Id: I638d5c38673abe297a701de9b7209c962564e1f1
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
If msm_iommu_map_range() fails midway through the va
range with an error, clean up the PTEs that have already
been created so they are not leaked.
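A sketch of the intended unwinding (helper names hypothetical):

static int map_range_sketch(struct iommu_pt *pt, unsigned long va,
			    struct scatterlist *sg, size_t len, int prot)
{
	size_t offset = 0;

	while (offset < len) {
		int ret = map_next_page(pt, va + offset, sg, prot);

		if (ret) {
			/* Undo everything mapped so far; no PTEs leak. */
			msm_iommu_unmap_range(pt->domain, va, offset);
			return ret;
		}
		offset += PAGE_SIZE;
	}
	return 0;
}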
Change-Id: Ie929343cd6e36cade7b2cc9b4b4408c3453e6b5f
CRs-Fixed: 478304
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Currently, the iommu page table code treats a scattergather
list with physical address 0 as an error. This may not be
correct in all cases. Physical address 0 is a valid part
of the system and may be used for valid page allocations.
Nothing else in the system checks for physical address 0
for error so don't treat it as an error.
Change-Id: Ie9f0dae9dace4fff3b1c3449bc89c3afdd2e63a0
CRs-Fixed: 478304
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Make sure iommu_map_range() does not leave a partial
mapping on error if part of the range is already mapped.
Change-Id: I108b45ce8935b73ecb65f375930fe5e00b8d91eb
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
The IOMMU map and unmap functions should use phys_addr_t
instead of unsigned int, which does not work properly with
LPAE.
Change-Id: I22b31b4f13a27c0280b0d88643a8a30d019e6e90
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Allow the IOMMUv1 to use 16M, 1M, 64K or 4K iommu
pages when physical and virtual addresses are
appropriately aligned. This can reduce TLB misses
when large buffers are mapped.
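A sketch of the page-size selection this enables (the helper itself is
illustrative; IS_ALIGNED, ARRAY_SIZE and the SZ_* constants are standard
kernel macros):

static size_t pick_iommu_pgsize(unsigned long va, phys_addr_t pa,
				size_t len)
{
	static const size_t sizes[] = { SZ_16M, SZ_1M, SZ_64K, SZ_4K };
	int i;

	/* Use the largest page that alignment and remaining length allow. */
	for (i = 0; i < ARRAY_SIZE(sizes); i++)
		if (IS_ALIGNED(va, sizes[i]) && IS_ALIGNED(pa, sizes[i]) &&
		    len >= sizes[i])
			return sizes[i];
	return SZ_4K;
}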
Change-Id: Iffcaa04097fc3877962f3954d73a6ba448dca20b
Signed-off-by: Kevin Matlage <kmatlage@codeaurora.org>
Fix the following NULL pointer dereference issue.
Pointer '__p' returned from call to function 'smem_alloc' at line 84
may be NULL and will be dereferenced at line 85.
drivers/iommu/msm_iommu.c +85 | _msm_iommu_remote_spin_lock_init()
Change-Id: I3549e8dc6cb6b13518ced7d28186da74667c1cb6
Signed-off-by: Binoy Jayan <bjayan@codeaurora.org>
commit d14053b3c714178525f22660e6aaf41263d00056 upstream.
The VT-d specification says that "Software must enable ATS on endpoint
devices behind a Root Port only if the Root Port is reported as
supporting ATS transactions."
We walk up the tree to find a Root Port, but for integrated devices we
don't find one — we get to the host bridge. In that case we *should*
allow ATS. Currently we don't, which means that we are incorrectly
failing to use ATS for the integrated graphics. Fix that.
We should never break out of this loop "naturally" with bus==NULL,
since we'll always find bridge==NULL in that case (and now return 1).
So remove the check for (!bridge) after the loop, since it can never
happen. If it did, it would be worthy of a BUG_ON(!bridge). But since
it'll oops anyway in that case, that'll do just as well.
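The walk, roughly as fixed (simplified from dmar_find_matched_atsr_unit()):

for (bus = dev->bus; bus; bus = bus->parent) {
	bridge = bus->self;
	/* Reached the host bridge: an integrated device, allow ATS. */
	if (!bridge)
		return 1;
	/* Connected via a non-PCIe bridge: no ATS. */
	if (!pci_is_pcie(bridge) ||
	    pci_pcie_type(bridge) == PCI_EXP_TYPE_PCI_BRIDGE)
		return 0;
	/* Found the Root Port: check its ATSR entry as before. */
	if (pci_pcie_type(bridge) == PCI_EXP_TYPE_ROOT_PORT)
		break;
}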
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
[lizf: Backported to 3.4:
- adjust context
- drop the last part of the changes of the patch]
Signed-off-by: Zefan Li <lizefan@huawei.com>
commit cbf3ccd09d683abf1cacd36e3640872ee912d99b upstream.
During device assignment/deassignment the flags in the DTE
get lost, which might cause spurious faults, for example
when the device tries to access the system management range.
Fix this by not clearing the flags with the rest of the DTE.
Reported-by: G. Richard Bellamy <rbellamy@pteradigm.com>
Tested-by: G. Richard Bellamy <rbellamy@pteradigm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Zefan Li <lizefan@huawei.com>
commit ba2374fd2bf379f933773811fdb06cb6a5445f41 upstream.
In preparation for the installation of a large page, any small page
tables that may still exist in the target IOV address range are
removed. However, if a scatter/gather list entry is large enough to
fit more than one large page, the address space for any subsequent
large pages is not cleared of conflicting small page tables.
This can cause legitimate mapping requests to fail with errors of the
form below, potentially followed by a series of IOMMU faults:
ERROR: DMA PTE for vPFN 0xfde00 already set (to 7f83a4003 not 7e9e00083)
In this example, a 4MiB scatter/gather list entry resulted in the
successful installation of a large page @ vPFN 0xfdc00, followed by
a failed attempt to install another large page @ vPFN 0xfde00, due to
the presence of a pointer to a small page table @ 0x7f83a4000.
To address this problem, compute the number of large pages that fit
into a given scatter/gather list entry, and use it to derive the
last vPFN covered by the large page(s).
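Roughly, per the upstream diff to __domain_mapping():

unsigned long nr_superpages, end_pfn;

/* How many large pages does this scatter/gather entry cover? */
nr_superpages = sg_res / lvl_pages;
end_pfn = iov_pfn + nr_superpages * lvl_pages - 1;

/* Clear conflicting small page tables across the whole range,
   not just under the first large page. */
dma_pte_free_pagetable(domain, iov_pfn, end_pfn);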
Signed-off-by: Christian Zander <christian@nervanasys.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
[bwh: Backported to 3.2:
- Add the lvl_pages variable, added by an earlier commit upstream
- Also change arguments to dma_pte_clear_range(), which is called by
dma_pte_free_pagetable() upstream]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Zefan Li <lizefan@huawei.com>
commit cc4f14aa170d895c9a43bdb56f62070c8a6da908 upstream.
There's an off-by-one bug in function __domain_mapping(), which may
trigger the BUG_ON(nr_pages < lvl_pages) when
(nr_pages + 1) & superpage_mask == 0
The issue was introduced by commit 9051aa0268 "intel-iommu: Combine
domain_pfn_mapping() and domain_sg_mapping()", which sets sg_res to
"nr_pages + 1" to avoid some of the 'sg_res==0' code paths.
It's safe to remove the extra "+1" because sg_res is now only used to
calculate the page size.
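The fix amounts to dropping the extra "+ 1", roughly:

if (!sg) {
	sg_res = nr_pages; /* was: nr_pages + 1 */
	pteval = ((phys_addr_t)phys_pfn << VTD_PAGE_SHIFT) | prot;
}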
Reported-And-Tested-by: Sudeep Dutt <sudeep.dutt@intel.com>
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Acked-By: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
[lizf: Backported to 3.4: adjust context]
Signed-off-by: Zefan Li <lizefan@huawei.com>
commit 9b29d3c651 upstream.
When multiple devices are detached in __detach_device, they
are also removed from the domains dev_list. This makes it
unsafe to use list_for_each_entry_safe, as the next pointer
might also not be in the list anymore after __detach_device
returns. So just repeatedly remove the first element of the
list until it is empty.
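The resulting loop, roughly:

struct iommu_dev_data *entry;

/* list_for_each_entry_safe() is unsafe here because __detach_device()
   may also unlink the next entry, so pop the head until empty. */
while (!list_empty(&domain->dev_list)) {
	entry = list_first_entry(&domain->dev_list,
				 struct iommu_dev_data, list);
	__detach_device(entry);
}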
Tested-by: Marti Raudsepp <marti@juffo.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Zefan Li <lizefan@huawei.com>
commit 3a93c841c2 upstream.
This patch disables translation (DMA remapping) before its initialization
if it is already enabled.
This is needed for kexec/kdump boot. If DMA remapping is enabled in the
first kernel, it needs to be disabled before initializing its page table
during second kernel boot. Wei Hu also reported that this is needed
when the second kernel boots with intel_iommu=off.
Basically iommu->gcmd is used to know whether translation is enabled or
disabled, but it is always zero at boot time even when translation is
enabled, since iommu->gcmd is initialized without considering such a
case. Therefore this patch synchronizes the iommu->gcmd value with the
global command register when the iommu structure is allocated.
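A sketch of that synchronization at allocation time (register and flag
names as in the Intel IOMMU driver):

u32 sts;

/* Reflect what the hardware already has enabled into the cached gcmd. */
sts = readl(iommu->reg + DMAR_GSTS_REG);
if (sts & DMA_GSTS_IRES)
	iommu->gcmd |= DMA_GCMD_IRE;
if (sts & DMA_GSTS_TES)
	iommu->gcmd |= DMA_GCMD_TE;
if (sts & DMA_GSTS_QIES)
	iommu->gcmd |= DMA_GCMD_QIE;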
Signed-off-by: Takao Indoh <indou.takao@jp.fujitsu.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
[wyj: Backported to 3.4: adjust context]
Signed-off-by: Yijing Wang <wangyijing@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 08336fd218 upstream.
dma_pte_free_level() has an off-by-one error when checking whether a pte
is completely covered by a range. Take for example the case of
attempting to free pfn 0x0 - 0x1ff, i.e. 512 entries covering the first
2M superpage.
The level_size() is 0x200 and we test:
static void dma_pte_free_level(...
...
if (!(0 > 0 || 0x1ff < 0 + 0x200)) {
...
}
Clearly the 2nd test is true, which means we fail to take the branch to
clear and free the pagetable entry. As a result, we're leaking
pagetables and failing to install new pages over the range.
This was found with a PCI device assigned to a QEMU guest using vfio-pci
without a VGA device present. The first 1M of guest address space is
mapped with various combinations of 4K pages, but eventually the range
is entirely freed and replaced with a 2M contiguous mapping.
intel-iommu errors out with something like:
ERROR: DMA PTE for vPFN 0x0 already set (to 5c2b8003 not 849c00083)
In this case 5c2b8003 is the pointer to the previous leaf page that was
neither freed nor cleared and 849c00083 is the superpage entry that
we're trying to replace it with.
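The corrected coverage test, per the upstream diff, compares against the
last pfn of the pte rather than one past it:

if (start_pfn <= level_pfn &&
    last_pfn >= level_pfn + level_size(level) - 1) {
	/* clear the pte and free the page table it points to */
}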
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f9423606ad upstream.
The BUG_ON in drivers/iommu/intel-iommu.c:785 can be triggered from userspace via
VFIO by calling the VFIO_IOMMU_MAP_DMA ioctl on a vfio device with any address
beyond the addressing capabilities of the IOMMU. The problem is that the ioctl code
calls iommu_iova_to_phys before it calls iommu_map. iommu_map handles the case that
it gets addresses beyond the addressing capabilities of its IOMMU.
intel_iommu_iova_to_phys does not.
This patch fixes iommu_iova_to_phys to return NULL for addresses beyond what the
IOMMU can handle. This in turn causes the ioctl call to fail in iommu_map and
(correctly) return EFAULT to the user with a helpful warning message in the kernel
log.
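A sketch of the check that replaces the BUG_ON in the pte lookup path
(addr_width is derived from the domain's agaw):

if (addr_width < BITS_PER_LONG && pfn >> addr_width)
	return NULL; /* beyond the IOMMU's addressing capabilities */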
Signed-off-by: Julian Stecklina <jsteckli@os.inf.tu-dresden.de>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3269ee0bd6 upstream.
At best the current code only seems to free the leaf pagetables and
the root. If you're unlucky enough to have a large gap (like any
QEMU guest with more than 3G of memory), only the first chunk of leaf
pagetables are freed (plus the root). This is a massive memory leak.
This patch re-writes the pagetable freeing function to use a
recursive algorithm and manages to not only free all the pagetables,
but does it without any apparent performance loss versus the current
broken version.
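A condensed sketch of the recursive walk (the coverage test shown here
already includes the off-by-one fix described earlier in this series):

static void free_level_sketch(struct dmar_domain *domain, int level,
			      struct dma_pte *pte, unsigned long pfn,
			      unsigned long start_pfn, unsigned long last_pfn)
{
	pfn = max(start_pfn, pfn);
	pte = &pte[pfn_level_offset(pfn, level)];

	do {
		unsigned long level_pfn = pfn & level_mask(level);
		struct dma_pte *level_pte;

		if (!dma_pte_present(pte))
			goto next;
		level_pte = phys_to_virt(dma_pte_addr(pte));

		/* Free the children before the parent. */
		if (level > 2)
			free_level_sketch(domain, level - 1, level_pte,
					  level_pfn, start_pfn, last_pfn);

		/* Free this page table if the range fully covers it. */
		if (start_pfn <= level_pfn &&
		    last_pfn >= level_pfn + level_size(level) - 1) {
			dma_clear_pte(pte);
			free_pgtable_page(level_pte);
		}
next:
		pfn += level_size(level);
		pte++;
	} while (pfn <= last_pfn);
}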
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sometimes a single IOMMU user may have to deal with several
different IOMMU devices (e.g. remoteproc).
When an IOMMU fault happens, such users have to regain their
context in order to deal with the fault.
Users can't use the private fields of either the iommu_domain or
the IOMMU device, because those are already used by the IOMMU core
and the low-level driver (respectively).
This patch simply allows users to pass a private token (most
notably their own context pointer) to iommu_set_fault_handler(),
and then makes sure it is provided back to the users whenever
an IOMMU fault happens.
The patch also adapts remoteproc to the new fault handling
interface, but the real functionality using this (recovery of
remote processors) will only be added later in a subsequent patch
set.
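As used by remoteproc after this change (the handler signature gains the
token as its last argument):

static int rproc_iommu_fault(struct iommu_domain *domain,
			     struct device *dev, unsigned long iova,
			     int flags, void *token)
{
	struct rproc *rproc = token;	/* regain our context */

	dev_err(dev, "iommu fault: da 0x%lx flags 0x%x\n", iova, flags);

	/* We don't really handle the fault; recovery comes later. */
	return -ENOSYS;
}
...
iommu_set_fault_handler(domain, rproc_iommu_fault, rproc);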
Change-Id: Ic04659686e72838a0db518e9303dd037191e3879
Cc: Fernando Guzman Lugo <fernando.lugo@ti.com>
Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
[ohaugan@codeaurora.org: Resolved compilation and merge issues]
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Conflicts:
drivers/video/msm/mdss/mdss_mdp.c
commit 60d0ca3cfd upstream.
If we use a large mapping, the expectation is that only unmaps from
the first pte in the superpage are supported. Unmaps from offsets
into the superpage should fail (i.e. return a zero-sized unmap). In the
current code, unmapping from an offset clears the size of the full
mapping starting from an offset. For instance, if we map a 16k
physically contiguous range at IOVA 0x0 with a large page, then
attempt to unmap 4k at offset 12k, 4 ptes are cleared (12k - 28k) and
the unmap returns 16k unmapped. This potentially incorrectly clears
valid mappings and confuses drivers like VFIO that use the unmap size
to release pinned pages.
Fix by refusing to unmap from offsets into the page.
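A behavioral sketch of the refusal (helper names are hypothetical
stand-ins, not the actual VT-d functions):

pte = find_pte(domain, iova);	/* leaf or superpage pte */
if (pte_is_superpage(pte) && !aligned_to_superpage(iova, pte))
	return 0;	/* offset into a superpage: unmap nothing */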
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
msm: kgsl: Add device init function
Some device specific parameters need to be setup only once during
device initialization. Create an init function for this purpose
rather than redoing this init every time the device is started.
Change-Id: I45c7fcda8d61fd2b212044c9167b64f793eedcda
Signed-off-by: Carter Cooper <ccooper@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 2nd commit message:
msm: kgsl: improve active_cnt and ACTIVE state management
Require any code path which intends to touch the hardware
to take a reference on active_cnt with kgsl_active_count_get()
and release it with kgsl_active_count_put() when finished.
These functions now do the wake / sleep steps that were
previously handled by kgsl_check_suspended() and
kgsl_check_idle().
Additionally, kgsl_pre_hwaccess() will no longer turn on
the clocks; it just enforces via BUG_ON that the clocks
are enabled before a register is touched.
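A usage sketch of the new contract:

/* Any path that intends to touch the hardware takes a reference. */
ret = kgsl_active_count_get(device);
if (ret)
	return ret;

/* ... access registers; kgsl_pre_hwaccess() BUG()s if the clocks
   are somehow off here ... */

kgsl_active_count_put(device);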
Change-Id: I31b0d067e6d600f0228450dbd73f69caa919ce13
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 3rd commit message:
msm: kgsl: Sync memory with CFF from places where it was missing
Before submitting any indirect buffer to the GPU via the ringbuffer,
the indirect buffer memory should be synced with CFF so that the
CFF capture will be complete. Add the syncing of memory with CFF
in the places where this was missing.
Change-Id: I18f506dd1ab7bdfb1a68181016e6f661a36ed5a2
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 4th commit message:
msm: kgsl: Export some kgsl-core functions to EXPORT_SYMBOLS
Export some functions in the KGSL core driver so they can
be seen by the leaf drivers.
Change-Id: Ic0dedbad5dbe562c2e674f8e885a3525b6feac7b
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 5th commit message:
msm: kgsl: Send the right IB size to adreno_find_ctxtmem
adreno_find_ctxtmem expects byte lengths and we were sending it
dword lengths which was about as effective as you would expect.
Change-Id: Ic0dedbad536ed377f6253c3a5e75e5d6cb838acf
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 6th commit message:
msm: kgsl: Add 8974 default GPR0 & clk gating values
Add correct clock gating values for A330, A305 and A320.
Add a generic function to return the correct default clock
gating values for the respective GPU. Add the default GPR0
value for A330.
Change-Id: I039e8e3622cbda04924b0510e410a9dc95bec598
Signed-off-by: Harsh Vardhan Dwivedi <hdwivedi@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 7th commit message:
msm: kgsl: Move A3XX VBIF settings decision to a table
The vbif selection code is turning into a long series of if/else
clauses. Move the decision to a lookup table that will be easier
to update and maintain when we have eleventy A3XX GPUs.
Change-Id: Ic0dedbadd6b16734c91060d7e5fa50dcc9b8774d
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 8th commit message:
msm: kgsl: Update settings for the A330v2 GPU in 8974v2
The new GPU spin in 8974v2 has some slightly different settings
than the 8974v1: add support for identifying a v2 spin, add a new
table of VBIF register settings and update the clock gating
registers.
Change-Id: Ic0dedbad22bd3ed391b02f6327267cf32f17af3d
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 9th commit message:
msm: kgsl: Fix compilation errors when CFF is turned on
Fix the compilation errors when the MSM_KGSL_CFF_DUMP option
is turned on.
Change-Id: I59b0a7314ba77e2c2fef03338e061cd503e88714
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 10th commit message:
msm: kgsl: Convert the Adreno GPU cycle counters to run free
In anticipation of allowing multiple entities to share access to the
performance counters; make the few performance counters that KGSL
uses run free.
Change-Id: Ic0dedbadbefb400b04e4f3552eed395770ddbb7b
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 11th commit message:
msm: kgsl: Handle a possible ringbuffer allocspace error
In the GPU specific start functions, account for the possibility
that the ringbuffer allocation routine might return NULL.
Change-Id: Ic0dedbadf6199fee78b6a8c8210a1e76961873a0
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 12th commit message:
msm: kgsl: Add a new API to allow sharing of GPU performance counters
Adreno uses programmable performance counters, meaning that while there
are a limited number of physical counters each counter can be programmed
to count a vast number of different measurements (we refer to these as
countables). This could cause problems if multiple apps want to use
the performance counters, so this API and infrastructure allows the
counters to be safely shared.
The kernel tracks which countable is selected for each of the physical
counters for each counter group (where groups closely match hardware
blocks). If the desired countable is already in use, or there is an
open physical counter, then the process is allowed to use the counter.
The get ioctl reserves the counter and returns the dword offset of the
register associated with that physical counter. The put ioctl
releases the physical counter. The query ioctl gets the countables
used for all of the counters in the block - up to 8 values can be
returned. The read ioctl gets the current hardware value in the counter.
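A userspace usage sketch (ioctl and struct names assumed from the
description above, so the exact uapi layout may differ):

struct kgsl_perfcounter_get get = {
	.groupid = groupid,
	.countable = countable,
};
struct kgsl_perfcounter_put put = {
	.groupid = groupid,
	.countable = countable,
};

if (ioctl(fd, IOCTL_KGSL_PERFCOUNTER_GET, &get) == 0) {
	/* get.offset is the dword offset of the reserved register. */
	use_counter_offset(get.offset);	/* hypothetical reader */
	ioctl(fd, IOCTL_KGSL_PERFCOUNTER_PUT, &put);
}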
Change-Id: Ic0dedbadae1dedadba60f8a3e685e2ce7d84fb33
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Carter Cooper <ccooper@codeaurora.org>
# This is the 13th commit message:
msm: kgsl: Print the nearest active GPU buffers to a faulting address
Print the two active GPU memory entries that bracket a faulting GPU
address. This will help diagnose premature frees and buffer overruns.
Check if the faulting GPU address was freed by the same process.
Change-Id: Ic0dedbadebf57be9abe925a45611de8e597447ea
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Vladimir Razgulin <vrazguli@codeaurora.org>
# This is the 14th commit message:
msm: kgsl: Remove an uneeded register write for A3XX GPUs
A3XX doesn't have the MH block and so the register at 0x40 points
somewhere else. Luckily the write was harmless but remove it anyway.
Change-Id: Ic0dedbadd1e043cd38bbaec8fcf0c490dcdedc8c
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 15th commit message:
msm: kgsl: clean up iommu/gpummu protflag handling
Make kgsl_memdesc_protflags() return the correct type of flags
for the type of mmu being used. Query the memdesc with this
function in kgsl_mmu_map(), rather than passing in the
protflags. This prevents translation at multiple layers of
the code and makes it easier to enforce that the mapping matches
the allocation flags.
Change-Id: I2a2f4a43026ae903dd134be00e646d258a83f79f
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 16th commit message:
msm: kgsl: remove kgsl_mem_entry.flags
The two flags fields in kgsl_memdesc should be enough for
anyone. Move the only flag using kgsl_mem_entry, the
FROZEN flag for snapshot processing, to use kgsl_memdesc.priv.
Change-Id: Ia12b9a6e6c1f5b5e57fa461b04ecc3d1705f2eaf
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 17th commit message:
msm: kgsl: map the guard page readonly on the iommu
The guard page needs to be readable by the GPU, due to
a prefetch range issue, but it should never be writable.
Change the page fault message to indicate if nearby
buffers have a guard page.
Change-Id: I3955de1409cbf4ccdde92def894945267efa044d
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 18th commit message:
msm: kgsl: Add support for VBIF and VBIF_PWR performance counters
These 2 counter groups are also "special cases" that require
different programming sequences.
Change-Id: I73e3e76b340e6c5867c0909b3e0edc78aa62b9ee
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 19th commit message:
msm: kgsl: Only allow two counters for VBIF performance counters
There are only two VBIF counter groups so validate that the user
doesn't pass in > 1 and clean up the if/else clause.
Change-Id: Ic0dedbad3d5a54e4ceb1a7302762d6bf13b25da1
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 20th commit message:
msm: kgsl: Avoid an array overrun in the perfcounter API
Make sure the passed group is less than the size of the list of
performance counters.
Change-Id: Ic0dedbadf77edf35db78939d1b55a05830979f85
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 21st commit message:
msm: kgsl: Don't go to slumber if active_count is non zero
If active_cnt happens to be set when we go into
kgsl_early_suspend_driver() then don't go to SLUMBER. This
avoids trouble if we come back and try to access the
hardware while it is off.
Change-Id: Ic0dedbadb13514a052af6199c8ad1982d7483b3f
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 22nd commit message:
msm: kgsl: Enable HLSQ registers in snapshot when available
Reading the HLSQ registers during a GPU hang recovery might cause
the device to hang depending on the state of the HLSQ block.
Enable the HLSQ register reads when we know that they will
succeed.
Change-Id: I69f498e6f67a15328d1d41cc64c43d6c44c54bad
Signed-off-by: Carter Cooper <ccooper@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 23rd commit message:
msm: kgsl: snapshot: Don't keep parsing indirect buffers on failure
Stop parsing an indirect buffer if an error is encountered (such as
a missing buffer). This is a pretty good indication that the buffers
are not reliable and the further the parser goes with an unreliable
buffer the more likely it is to get confused.
Change-Id: Ic0dedbadf28ef374c9afe70613048d3c31078ec6
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 24th commit message:
msm: kgsl: snapshot: Only push the last IB1 and IB2 in the static space
Some IB1 buffers have hundreds of little IB2 buffers and only one of them
will actually be interesting enough to push into the static space. Only
push the last executed IB1 and IB2 into the static space.
Change-Id: Ic0dedbad26fb30fb5bf90c37c29061fd962dd746
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 25th commit message:
msm: kgsl: Save the last active context in snapshot
Save the last active context that was executing when the hang happened
in snapshot.
Change-Id: I2d32de6873154ec6c200268844fee7f3947b7395
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 26th commit message:
msm: kgsl: In snapshot track a larger object size if address is same
If the object being tracked has the same address as a previously
tracked object then only track a single object with the larger size,
as the smaller object will be a part of the larger one anyway.
Change-Id: I0e33bbaf267bc0ec580865b133917b3253f9e504
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 27th commit message:
msm: kgsl: Track memory address from 2 additional registers
Add tracking of memory referenced by VS_OBJ_START_REG and FS_OBJ_START_REG
registers in snapshot. This makes snapshot more complete in terms of
tracking data that is used by the GPU at the time of hang.
Change-Id: I7e5f3c94f0d6744cd6f2c6413bf7b7fac4a5a069
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 28th commit message:
msm: kgsl: Loop till correct index on type0 packets
When searching for memory addresses in type0 packet we were looping
from start of the type0 packet till it's end, but the first DWORD
is a header so we only need to loop till packet_size - 1. Fix this.
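Sketched fix (helper name hypothetical):

/* ptr[0] is the type0 header, so only packet_size - 1 payload
   dwords follow it. */
for (i = 0; i < packet_size - 1; i++)
	check_for_memory_address(ptr[i + 1]);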
Change-Id: I278446c6ab380cf8ebb18d5f3ae192d3d7e7db62
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 29th commit message:
msm: kgsl: Add global timestamp information to snapshot
Make sure that we always add global timestamp information to
snapshot. This is needed in playbacks when searching for the
whereabouts of the last executed IB.
Change-Id: Ica5b3b2ddff6fd45dbc5a911f42271ad5855a86a
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 30th commit message:
msm: kgsl: Skip CFF dump for certain functions when it is disabled
Certain functions were generating CFF when CFF was disabled. Make
sure these functions do not dump CFF when it is disabled.
Change-Id: Ib5485b03b8a4d12f190f188b80c11ec6f552731d
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 31st commit message:
msm: kgsl: Fix searching of memory object
Make sure that at least a size of 1 byte is searched when locating
the memory entry of a region. If size is 0 then a memory region
whose last address is equal to the start address of the memory being
searched will be returned, which is wrong.
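Sketched fix (clamping before the lookup):

/* A zero-length search would match a region that merely ends at
   gpuaddr, so search at least one byte. */
size = size ? size : 1;
entry = kgsl_sharedmem_find_region(private, gpuaddr, size);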
Change-Id: I643185d1fdd17296bd70fea483aa3c365e691bc5
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 32nd commit message:
msm: kgsl: If adreno start fails then restore state of device
Restore the state of the device back to what it was at the
start of the adreno_start function if this function fails to
execute successfully.
Change-Id: I5b279e5186b164d3361fba7c8f8d864395b794c8
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 33rd commit message:
msm: kgsl: Fix early exit condition in ringbuffer drain
The ringbuffer drain function can be called when the ringbuffer
start flag is not set. This happens on startup. Hence,
exiting the function early based on start flag is incorrect.
Simply execute this function regardless of the start flag.
Change-Id: Ibf2075847f8bb1a760bc1550309efb3c7aa1ca49
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 34th commit message:
msm: kgsl: Do not return an error on NULL gpu address
If a NULL gpu address is passed to the snapshot object tracking
function then do not treat this as an error and return 0. NULL
objects may be present in an IB so just skip over these objects
instead of exiting due to an error.
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Change-Id: Ic253722c58b41f41d03f83c77017e58365da01a7
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 35th commit message:
msm: kgsl: Don't hold process list global mutex in process private create
Don't hold the process list global mutex for long. Instead, make
use of a process-specific spin_lock() to serialize access
to the process private structure while creating it. Holding the
process list global mutex could lead to deadlocks as other
functions depend on it.
CRs-fixed: 480732
Change-Id: Id54316770f911d0e23384f54ba5c14a1c9113680
Signed-off-by: Harsh Vardhan Dwivedi <hdwivedi@codeaurora.org>
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 36th commit message:
msm: kgsl: Use CPU path to program pagetable when active count is 0
When the active count is 0 we should use the CPU path to program
pagetables because the GPU path requires event registration. Events
can only be queued when the active count is valid. Hence, if the
active count is 0 then use the CPU path.
Change-Id: I70f5894d20796bdc0f592db7dc2731195c0f7a82
CRs-fixed: 481887
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 37th commit message:
iommu: msm: prevent partial mappings on error
If msm_iommu_map_range() fails midway through the va
range with an error, clean up the PTEs that have already
been created so they are not leaked.
Change-Id: Ie929343cd6e36cade7b2cc9b4b4408c3453e6b5f
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 38th commit message:
msm: kgsl: better handling of virtual address fragmentation
When KGSL_MEMFLAGS_USE_CPU_MAP is enabled, the mmap address
must try to match the GPU alignment requirements of the buffer,
as well as include space in the mapping for the guard page.
This can cause -ENOMEM to be returned from get_unmapped_area()
when there are a large number of mappings. When this happens,
fall back to page alignment and retry to avoid failure.
Change-Id: I2176fe57afc96d8cf1fe1c694836305ddc3c3420
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 39th commit message:
iommu: msm: Don't treat address 0 as an error case
Currently, the iommu page table code treats a scattergather
list with physical address 0 as an error. This may not be
correct in all cases. Physical address 0 is a valid part
of the system and may be used for valid page allocations.
Nothing else in the system checks for physical address 0
for error so don't treat it as an error.
Change-Id: Ie9f0dae9dace4fff3b1c3449bc89c3afdd2e63a0
CRs-Fixed: 478304
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 40th commit message:
msm: kgsl: prevent race between mmap() and free on timestamp
When KGSL_MEMFLAGS_USE_CPU_MAP is set, we must check that the
address from get_unmapped_area() is not used as part of a
mapping that is present only in the GPU pagetable and not the
CPU pagetable. These mappings can occur because when a buffer
is freed on timestamp, the CPU mapping is destroyed immediately
but the GPU mapping is not destroyed until the GPU timestamp
has passed.
Because kgsl_mem_entry_detach_process() removed the rbtree
entry before removing the iommu mapping, there was a window
of time where kgsl thought the address was available even
though it was still present in the iommu pagetable. This
could cause the address to get assigned to a new buffer,
which would cause iommu_map_range() to fail since the old
mapping was still in the pagetable. Prevent this race by
removing the iommu mapping before removing the rbtree entry
tracking the address.
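Roughly, the fixed teardown order in kgsl_mem_entry_detach_process()
(field names as in kgsl of this era):

/* Remove the GPU mapping first... */
kgsl_mmu_unmap(entry->memdesc.pagetable, &entry->memdesc);
/* ...and only then stop tracking the address, so it cannot be
   handed out again while still present in the iommu pagetable. */
rb_erase(&entry->node, &entry->priv->mem_rb);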
Change-Id: I8f42d6d97833293b55fcbc272d180564862cef8a
CRs-Fixed: 480222
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 41st commit message:
msm: kgsl: add guard page support for imported memory
Imported memory buffers sometimes do not have enough
padding to prevent page faults due to overzealous
GPU prefetch. Attach guard pages to their mappings
to prevent these faults.
Because we don't create the scatterlist for some
types of imported memory, such as ion, the guard
page is no longer included as the last entry in
the scatterlist. Instead, it is handled by
size adjustments and a separate iommu_map() call
in the kgsl_mmu_map() and kgsl_mmu_unmap() paths.
Change-Id: I3af3c29c3983f8cacdc366a2423f90c8ecdc3059
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 42nd commit message:
msm: kgsl: fix kgsl_mem_entry refcounting
Make kgsl_sharedmem_find* return a reference to the
entry that was found. This makes using an entry
without the mem_lock held less race prone.
Change-Id: If6eb6470ecfea1332d3130d877922c70ca037467
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 43rd commit message:
msm: kgsl: add ftrace for cache operations
Add the event kgsl_mem_sync_cache. This event is
emitted only when a cache operation is actually
performed. Attempts to flush uncached memory,
which do nothing, do not cause this event.
Change-Id: Id4a940a6b50e08b54fbef0025c4b8aaa71641462
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 44th commit message:
msm: kgsl: Add support for bulk cache operations
Add a new ioctl, IOCTL_KGSL_GPUMEM_SYNC_CACHE_BULK, which can be used
to sync a number of memory ids at once. This gives the driver an
opportunity to optimize the cache operations based on the total
working set of memory that needs to be managed.
Change-Id: I9693c54cb6f12468b7d9abb0afaef348e631a114
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 45th commit message:
msm: kgsl: flush the entire cache when the bulk batch is large
On 8064 and 8974, flushing more than 16MB of virtual address
space is slower than flushing the entire cache. So flush
the entire cache when the working set is larger than this.
The threshold for full cache flush can be tuned at runtime via
the full_cache_threshold sysfs file.
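Sketch of the decision (flag and threshold names per the text above; the
per-buffer path is a hypothetical stand-in):

bool full_flush = (op == KGSL_GPUMEM_CACHE_FLUSH) &&
		  (op_size >= device->full_cache_threshold);

if (full_flush)
	flush_cache_all();	  /* cheaper than >16MB of range ops */
else
	sync_each_buffer_range(); /* hypothetical per-buffer path */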
Change-Id: If525e4c44eb043d0afc3fe42d7ef2c7de0ba2106
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 46th commit message:
msm: kgsl: Use a read/lock for the context idr
Everybody loves an rcu, but in this case we are dangerously mixing rcus
and atomic operations. Add a read/write lock to explicitly protect the
idr. Also fix a few spots where the idr was used without protection.
Change-Id: Ic0dedbad517a9f89134cbcf7af29c8bf0f034708
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 47th commit message:
msm: kgsl: embed kgsl_context struct in adreno_context struct
Having a separately allocated struct for the device specific context
makes ownership unclear, which could lead to reference counting
problems or invalid pointers. Also, duplicate members were
starting to appear in adreno_context because there wasn't a safe
way to reach the kgsl_context from some parts of the adreno code.
This can now be done via container_of().
This change alters the lifecycle of the context->id, which is
now freed when the context reference count hits zero rather
than in kgsl_context_detach().
It also changes the context creation and destruction sequence.
The device specific code must allocate a structure containing
a struct kgsl_context and pass a pointer to it to kgsl_init_context()
before doing any device specific initialization. There is also a
separate drawctxt_detach() callback for doing device specific
cleanup. This is separate from freeing memory, which is done
by the drawctxt_destroy() callback.
Change-Id: I7d238476a3bfec98fd8dbc28971cf3187a81dac2
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 48th commit message:
msm: kgsl: Take a reference count on the active adreno draw context
Take a reference count on the currently active draw context to keep
it from going away while we are maintaining a pointer to it in the
adreno device.
Change-Id: Ic0dedbade8c09ecacf822e9a3c5fbaf6e017ec0c
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 49th commit message:
msm: kgsl: Add a command dispatcher to manage the ringbuffer
Implements a centralized dispatcher for sending user commands
to the ringbuffer. Incoming commands are queued by context and
sent to the hardware on a round-robin basis, ensuring each context
gets a small burst of commands at a time. Each command is tracked
throughout the pipeline giving the dispatcher better knowledge
of how the hardware is being used. This will be the basis for
future per-context and cross-context enhancements such as priority
queuing and server-side synchronization.
Change-Id: Ic0dedbad49a43e8e6096d1362829c800266c2de3
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 50th commit message:
msm: kgsl: Only turn on the idle timer when active_cnt is 0
Only turn on the idle timer when the GPU is expected to be quiet.
Change-Id: Ic0dedbad57846f1e7bf7820ec3152cd20598b448
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 51st commit message:
msm: kgsl: Add a ftrace event for active_cnt
Add a new ftrace event for watching the rise and fall of active_cnt:
echo 1 > /sys/kernel/debug/tracing/events/kgsl/kgsl_active_count/enable
This will give you the current active count and the caller of the function:
kgsl_active_count: d_name=kgsl-3d0 active_cnt=8e9 func=kgsl_ioctl
Change-Id: Ic0dedbadc80019e96ce759d9d4e0ad43bbcfedd2
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 52nd commit message:
msm: kgsl: Implement KGSL fault tolerance policy in the dispatcher
Implement the KGSL fault tolerance policy for faults in the dispatcher.
Replay (or skip) the inflight command batches as dictated by the policy,
iterating progressively through the various behaviors.
Change-Id: Ic0dedbade98cc3aa35b26813caf4265c74ccab56
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 53rd commit message:
msm: kgsl: Don't process events if the timestamp hasn't changed
Keep track of the global timestamp every time the event code runs.
If the timestamp hasn't changed then we are caught up and we can
politely bow out. This avoids the situation where multiple
interrupts queue the work queue multiple times:
IRQ
-> process events
IRQ
IRQ
-> process events
The actual retired timestamp in the first work item might be well
ahead of the delivered interrupts. The event loop will end up
processing every event that has been retired by the hardware
at that point. If the work item gets re-queued by a subsequent
interrupt then we might have already addressed all the pending
timestamps.
Change-Id: Ic0dedbad79722654cb17e82b7149e93d3c3f86a0
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 54th commit message:
msm: kgsl: Make active_cnt an atomic variable
In kgsl_active_cnt_light() the mutex was needed just to check and
increment the active_cnt value. Move active_cnt to an atomic to
begin the task of freeing ourselves from the grip of the device
mutex if we can avoid it.
Change-Id: Ic0dedbad78e086e3aa3559fab8ecebc43539f769
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 55th commit message:
msm: kgsl: Add a new command submission API
Add an new ioctl entry point for submitting commands to the GPU
called IOCTL_KGSL_SUBMIT_COMMANDS.
As with IOCTL_KGSL_RINGBUFFER_ISSUEIBCMDS the user passes a list of
indirect buffers, flags and optionally a user specified timestamp. The
old way of passing a list of indirect buffers is no longer supported.
IOCTL_KGSL_SUBMIT_COMMANDS also allows the user to define a
list of sync points for the command. Sync points are dependencies
on events that need to be satisfied before the command will be issued
to the hardware. Events are designed to be flexible. To start with,
the only events that are supported are GPU events for a given
context/timestamp pair.
Pending events are stored in a list in the command batch. As each event is
expired it is deleted from the list. The adreno dispatcher won't send the
command until the list is empty. Sync points are not supported for Z180.
CRs-Fixed: 468770
Change-Id: Ic0dedbad5a5935f486acaeb033ae9a6010f82346
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 56th commit message:
msm: kgsl: add kgsl_sync_fence_waiter for server side sync
For server side sync the KGSL kernel module needs to perform
an asynchronous wait for a fence object prior to issuing
subsequent commands.
Change-Id: I1ee614aa3af84afc4813f1e47007f741beb3bc92
Signed-off-by: Jeff Boody <jboody@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 57th commit message:
msm: kgsl: Add support for KGSL_CMD_SYNCPOINT_TYPE_FENCE
Allow command batches to wait for external fence sync events.
Change-Id: Ic0dedbad3a211019e1cd3a3d62ab6a3e4d4eeb05
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 58th commit message:
msm: kgsl: fix potential double free of the kwaiter
Change-Id: Ic0dedbad66a0af6eaef52b2ad53c067110bdc6e4
Signed-off-by: Jeff Boody <jboody@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
# This is the 59th commit message:
msm: kgsl: free an event only after canceling successfully
Change-Id: Ic0dedbade256443d090dd11df452dc9cdf65530b
Signed-off-by: Jeff Boody <jboody@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
commit d3263bc297 upstream.
Work around an IOMMU hardware bug where clearing the
EVT_INT or PPR_INT bit in the status register may race with
the hardware trying to set it again. If not handled, the
bit might not be cleared and we lose all future event or PPR
interrupts.
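The workaround loop, roughly as in amd_iommu_int_thread():

u32 status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);

while (status & (MMIO_STATUS_EVT_INT_MASK | MMIO_STATUS_PPR_INT_MASK)) {
	/* Re-acknowledge and re-handle until the bits really clear. */
	writel(MMIO_STATUS_EVT_INT_MASK | MMIO_STATUS_PPR_INT_MASK,
	       iommu->mmio_base + MMIO_STATUS_OFFSET);
	/* ... process the event log / ppr log here ... */
	status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);
}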
Reported-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 925fe08bce upstream.
The current driver does not clear the IOMMU event log interrupt bit
in the IOMMU status register after processing an interrupt.
This causes the IOMMU hardware to generate the event log interrupt only
once. This has been observed on both IOMMU v1 and v2 hardware.
This patch clears the bit by writing 1 to bit 1 of the IOMMU
status register (MMIO Offset 2020h).
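The acknowledgment itself is a single write-1-to-clear (mask and offset
macro names as in the AMD IOMMU driver):

/* Bit 1 (EVT_INT) of the status register at MMIO offset 2020h. */
writel(MMIO_STATUS_EVT_INT_MASK, iommu->mmio_base + MMIO_STATUS_OFFSET);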
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When a page fault happens, do not cancel the faulting transaction if
the registered fault handler returns -EBUSY. This way
drivers can control when they want to resume the transaction.
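A sketch of the handler-driven decision (the resume/cancel helper is a
hypothetical stand-in):

ret = report_iommu_fault(domain, dev, iova, flags);
if (ret != -EBUSY)
	resume_or_cancel_transaction(drvdata); /* default behavior */
/* On -EBUSY the client driver resumes the transaction itself later. */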
Change-Id: Ia4563da073ab04174803101c3b8ec82b0571850e
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
Signed-off-by: Sakshi Agrawal <sakshia@codeaurora.org>
Make sure iommu_map_range() does not leave a partial
mapping on error if part of the range is already
mapped.
Change-Id: I0ddeb0e0169b579f1efdeca4071fce4ee75a11f8
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Signed-off-by: Sakshi Agrawal <sakshia@codeaurora.org>
commit c2a2876e86 upstream.
There is a bug introduced with commit 27c2127 that causes
devices which are hot unplugged and then hot-replugged to
not have per-device dma_ops set. This causes these devices
to not function correctly. Fixed with this patch.
Reported-by: Andreas Degert <andreas.degert@googlemail.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Use 16M, 1M, 64K or 4K iommu pages when physical
and virtual addresses are appropriately aligned.
This can reduce TLB misses when large buffers
are mapped.
Change-Id: Ic0dedbadeca18cf163eb4e42116e0573720ab4d2
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Jeremy Gebben <jgebben@codeaurora.org>
Add remote spinlock that allows CPU and GPU to
synchronize access to IOMMU hardware.
Add usage of remote spinlock to iommu driver and
add a dependency on the SFPB hardware mutex being enabled.
This feature does not use the SFPB hardware mutex. However,
the SFPB hardware mutex must be enabled, since the remote
spinlock implementation makes use of shared memory
that is normally used when the SFPB hardware mutex is not enabled.
Change-Id: Idc622f3484062e0721493be3cbbfb8889ed9d800
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
When setting the NSCFG field, the S2CR register being
written needs to be indexed by the stream matching group,
not by the value of the SID being configured. Additionally,
there is no need to set CBACR for every SMR that is
programmed.
Change-Id: Ib79771a3bd87e4bd3b353bd5c6de9247138ca43e
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
(cherry picked from commit c51f77cf28c2deba250444b51e88d87339890915)
Signed-off-by: Sudhir Sharma <sudsha@codeaurora.org>
(cherry picked from commit b8cda0f34f2fc7369dddd833e41edcdbe75642c2)
commit f528d980c1 upstream.
When dma_ops are initialized the unity mappings are
created. The init_device_table_dma() function makes sure DMA
from all devices is blocked by default. This opens a short
window in time where DMA to unity mapped regions is blocked
by the IOMMU. Make sure this does not happen by initializing
the device table after dma_ops.
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Shuah Khan <shuah.khan@hp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 210561ffd7 upstream.
We already have the quirk entry for the mobile platform, but there are
also reports on some desktop versions. So be paranoid and set it
everywhere.
References: http://www.mail-archive.com/dri-devel@lists.freedesktop.org/msg33138.html
Reported-and-tested-by: Mihai Moldovan <ionic@ionic.de>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: "Sankaran, Rajesh" <rajesh.sankaran@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Refactor the IOMMU clock control code to always require a
core clock as well as an interface clock. Add support for
an optional alternate core clock and update device tree
bindings accordingly. Clean up the probe function to remove
needless enabling / disabling of clocks.
Change-Id: I4d744ffabc1e6fb123bacfda324f64408257cb25
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
The hardware requires a TLB sync operation at the end of
each TLB maintenance operation.
Change-Id: I8102253cfc12af530216346efa5bb9760db25352
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
Use the label property to specify device labels instead of
a vendor-specific property.
Change-Id: I74f3b57db469781f738f0d52c785d992c1e88efb
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
When probing the context devices, dev_info already prints
the device name, so printing it again is redundant. The
context name is more useful anyway, so print this instead.
Change-Id: Ibe2e33501baa1fd53f6ff45943226377eb61fd7e
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
Query SMT and SID mapping information at probe-time instead
of attach-time to allow this information to be
error-checked at an earlier time.
Change-Id: Ib2bbdc8374f9c86c3e6013d298fe8b279b53d83b
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
Since the IOMMU ID registers are only accessible by the
secure environment, specify the SMT sizes in device tree
so that the IOMMU driver knows how many SMRs to initialize.
Change-Id: I614a51069c0304f71b0c7d061d97aca0289c17ea
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
The official name for copper is MSM8974.
Switch to it.
Change-Id: Ifb241232111139912477bf7b5f2e9cf5d38d0f9e
Signed-off-by: Abhimanyu Kapur <abhimany@codeaurora.org>
The IOMMU hardware blocks are power-gated by GDSCs which
need to be enabled prior to programming the IOMMU hardware.
Change-Id: I5b4e5a0a60ce672c1180faaf3a8344d72a6ebe5e
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
Enable and disable the IOMMU clocks for each high-level
mapping operation rather than leaving the clocks enabled
between attach-time and detach-time even if no IOMMU
operations are being done.
Change-Id: I4cde881992b8cd77fb4ea7e8dc1c003f639d15b6
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
SMMU global address space programming needs to be performed
each time the device comes out of power collapse. Move the
programming of the global address space from driver initialization
to the point where the attach is initiated by the clients.
When the first context is attached, the global address space
is programmed prior to programming the context.
Change-Id: I36e4f161861823aa43d15c3271f8d9b26214cb84
Signed-off-by: Sathish Ambley <sambley@codeaurora.org>
Do not return a context pointer if the context does not
have driver data associated with it to ensure that IOMMU
functions fail gracefully on targets where the IOMMU
hardware could not be found.
Change-Id: Ibf915251a4a133c2baaf9fb5b01145fb3c419347
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
SMMU v2 is based on the ARM SMMU architecture specification.
The SMMU's primary purpose is to provide virtual address translation
and abstract the physical view of system memory. In doing so,
discontiguous physical memory appears virtually contiguous to
hardware cores.
The SMMU instances are now represented in device tree with each
instance having multiple translation context banks.
Change-Id: If4733500e5226984d26f1c8a97ae98603c2f75f9
Signed-off-by: Sathish Ambley <sambley@codeaurora.org>
Refactor clean_pte to accept a redirection attribute, which
allows for cleaner code at the caller.
Change-Id: Iff77abdced1fa6ea295a4bf6ec76f644b9922e63
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
If L2 redirection is disabled, clean page tables upon
allocation to prevent hardware table walks into unmapped
areas from accessing stale data.
Change-Id: If1e70bfa52f86d9ddc5a001a699050667b075631
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
Provide the ability to report IOMMU fault events through
the IOMMU API. The driver will fall back on the previous
behavior of printing the fault information registers if no
domain fault handler is registered, or if the handler
returns -ENOSYS.
Change-Id: I9144e9b4bba117b67c7d81609e986ea716b34882
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
Fail to look up IOMMU context devices if there is no driver
data associated with them, as this would imply that the
device did not pass the hardware sanity check.
Change-Id: If2998a96dea9342850092344c4ad70eebf965229
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>