The Contiguous Memory Allocator is a set of helper functions for the
DMA mapping framework that improve allocation of contiguous memory
chunks. CMA grabs memory on system boot, marks it with the MIGRATE_CMA
migrate type and gives it back to the system. The kernel is allowed to
allocate only movable pages within CMA's managed memory, so that it can
be used, for example, for page cache when the DMA mapping framework does
not use it. On a dma_alloc_from_contiguous() request such pages are
migrated out of the CMA area to free the required contiguous block and
fulfill the request. This allows large contiguous chunks of memory to be
allocated at any time, provided that enough free memory is available in
the system.
This code is heavily based on earlier work by Michal Nazarewicz.
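As a rough illustration only (not part of this patch, and the exact
signatures have varied across kernel versions), a driver built on these
helpers might do something like:

#include <linux/device.h>
#include <linux/dma-contiguous.h>
#include <linux/gfp.h>

/* Hedged sketch: grab 256 physically contiguous pages (1 MiB with 4K
 * pages) from the device's CMA region and release them again.  The
 * last argument of dma_alloc_from_contiguous() is the alignment,
 * expressed as a page order. */
static struct page *example_cma_alloc(struct device *dev)
{
	return dma_alloc_from_contiguous(dev, 256, 8);
}

static void example_cma_free(struct device *dev, struct page *pages)
{
	dma_release_from_contiguous(dev, pages, 256);
}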
Change-Id: I8a04c58b0d39ee7343ac0b58b6dad9d57912c91d
[lauraa: fixed Kconfig conflict]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
It is possible for user space to attempt to attach a valid fd that
does not correspond to a genlock file. This change adds a magic number
to the kernel private data that is used for validation.
CRs-fixed: 359649
Change-Id: Idb16056ba0f89b7d519d501f5b1ce6d5d930ed70
Signed-off-by: Jeff Boody <jboody@codeaurora.org>
A race condition exists where the attach may not take its kref_get
before the kref_put has finished. The genlock_file_lock was
originally added to protect against this case; however, it
was not protecting the kref.
CRs-fixed: 359649
Change-Id: Iab1fc847c2d3357ec1be79b7cc3e004643126dec
Signed-off-by: Jeff Boody <jboody@codeaurora.org>
In order to support synchronization in a process with a single
gralloc handle we require the ability to write lock a buffer
while it is already read locked by the same handle. This change
extends the concept of an exclusive write lock or recursive read
locks to a genlock handle (rather than the genlock lock).
Genlock cannot provide deadlock protection because the same
handle can be used simultaneously by a producer and consumer.
In practice an error will still be generated when the timeout
expires.
CRs-fixed: 356263
Change-Id: I322e7fadc8b43287f53b211242b176d3de731db2
Signed-off-by: Jeff Boody <jboody@codeaurora.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
interruptible_wait_timeout returns a signed long, so make sure that
we use a signed long to hold the result. Using an unsigned value would
horribly misinterpret an error such as -ERESTARTSYS.
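The general pattern being fixed looks like this (a generic sketch, not
the genlock code itself); wait_event_interruptible_timeout() returns a
long that is negative on a signal, zero on timeout and positive
otherwise:

#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/wait.h>

/* Generic sketch: keep the result in a signed long so that an error
 * such as -ERESTARTSYS is not misread as a huge remaining timeout. */
static int example_wait(wait_queue_head_t *wq, int *done, long timeout)
{
	long ret;

	ret = wait_event_interruptible_timeout(*wq, *done, timeout);
	if (ret < 0)
		return ret;		/* interrupted by a signal */
	if (ret == 0)
		return -ETIMEDOUT;	/* timed out */
	return 0;			/* condition became true */
}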
CRs-Fixed: 341347
Change-Id: Ic0dedbadff2dbe404e68a2a78f6282b5976d05c1
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Allowing a lock to be asynchronously released while a handle
was still active turned out to be too dangerous to use in a
multi-threaded environment and it served no practical
purpose anyway. Handles now hold an attached lock until they
are destroyed.
CRs-fixed: 333141
Change-Id: Ic0dedbad8050ff01927ddb165c65a939bf297c10
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Check for the possibility of NULL handles passed into the
in-kernel API functions and return error where appropriate.
There is a non-zero chance that the private_data will be
cleared while the FD is still active, so check in the ioctl()
function as well.
CRs-fixed: 332835
Change-Id: Ic0dedbada2713080fef79ca188a87c578bec6d2f
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Naomi Luis <nluis@codeaurora.org>
Change-Id: I833e480abfc11ce4aa00a5ae46144895429a0339
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Avoid a race condition if a lock gets released by all owners
but a process fails to close a lock file descriptor and tries
to reuse it. Clearing the pointers to the data will ensure
that attaching a dead lock will return an error.
Change-Id: Ic0dedbadd693c7a22b027052cdd247370b28a7c5
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
The resources allocated for a handle were not being freed on device
release. This included releasing the lock. Unfortunately, the
lock release had a fput() too many which was screwing up the
reference counting on the file descriptors. When attaching a lock,
we only need the file pointer for a short time so fput() it back
immediately. That way there will only be one reference to
the lock per process, the process will be responsible for
closing the fd directly, and the fput() in genlock_release_lock
is no longer needed.
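The attach pattern described above looks roughly like this (an
illustrative sketch only; "example_lock" is a stand-in for the driver's
private lock type):

#include <linux/err.h>
#include <linux/file.h>
#include <linux/fs.h>

struct example_lock;

/* Sketch: look up the lock behind an fd without keeping a long-lived
 * reference to the struct file. */
static struct example_lock *example_attach(int fd)
{
	struct file *file = fget(fd);
	struct example_lock *lock;

	if (!file)
		return ERR_PTR(-EBADF);

	lock = file->private_data;

	/* We only needed the file pointer briefly, so drop our reference
	 * right away; the process keeps the single reference from the fd. */
	fput(file);

	return lock ? lock : ERR_PTR(-EINVAL);
}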
CRs-fixed: 322645
Change-Id: Ic0dedbad55300a2f8463c800cf8361719583b0b8
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Add a generic locking API for situations where multiple user-space
processes and/or kernel drivers need to coordinate access to a
shared resource such as a graphics buffer.
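For orientation, a hedged sketch of in-kernel usage; the function and
flag names below follow the msm genlock driver as I recall them and
should be treated as assumptions rather than a stable interface:

#include <linux/err.h>
#include <linux/genlock.h>

/* Hedged sketch: create a handle, back it with a lock, take an
 * exclusive write lock with a timeout, then drop everything. */
static int example_genlock_usage(void)
{
	struct genlock_handle *handle;
	struct genlock *lock;
	int ret;

	handle = genlock_get_handle();
	if (IS_ERR(handle))
		return PTR_ERR(handle);

	lock = genlock_create_lock(handle);
	if (IS_ERR(lock)) {
		genlock_put_handle(handle);
		return PTR_ERR(lock);
	}

	/* Wait up to 100 ms for any readers before taking the write lock. */
	ret = genlock_lock(handle, GENLOCK_WRLOCK, 0, 100);
	if (!ret)
		genlock_lock(handle, GENLOCK_UNLOCK, 0, 0);

	genlock_put_handle(handle);
	return ret;
}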
Change-Id: Ic0dedbad74b970d7bd1a6624a845b5b1b9847443
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
A sync point that is not added to the active_list will never signal
in sync_timeline_signal, thus causing sync_fence_wait to deadlock or
block until the timeout expires.
Change-Id: I75168d0eec874bf70dd8c28db9508dd8fc1077d3
Signed-off-by: Jeff Boody <jboody@codeaurora.org>
This is needed to allow modules to link against the sync subsystem
Change-Id: I15c1818de329f24e4113ef1d0923413b22fd0eff
Signed-off-by: Erik Gilling <konkers@android.com>
In order to allow drivers to cleanly handle teardown we need to allow them
to cancel pending async waits. To do this cleanly, we move allocation of
sync_fence_waiter to the driver calling sync_async_wait().
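A hedged sketch of the resulting driver-side pattern; the helper names
(sync_fence_waiter_init, sync_fence_wait_async, sync_fence_cancel_async)
are written from memory and should be checked against sync.h:

#include <linux/sync.h>

struct example_ctx {
	struct sync_fence *fence;
	struct sync_fence_waiter waiter;	/* now allocated by the driver */
};

static void example_fence_signaled(struct sync_fence *fence,
				   struct sync_fence_waiter *waiter)
{
	/* ... kick off the next stage of the pipeline ... */
}

static int example_start_wait(struct example_ctx *ctx)
{
	sync_fence_waiter_init(&ctx->waiter, example_fence_signaled);
	return sync_fence_wait_async(ctx->fence, &ctx->waiter);
}

static void example_teardown(struct example_ctx *ctx)
{
	/* Because the driver owns the waiter it can cleanly cancel a
	 * still-pending async wait during teardown. */
	sync_fence_cancel_async(ctx->fence, &ctx->waiter);
}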
Change-Id: Ifcd95648be6ec07026d67f810070a4310f099989
Signed-off-by: Erik Gilling <konkers@android.com>
Compared to Rob Clark's RFC I've ditched the prepare/finish hooks
and corresponding ioctls on the dma_buf file. The major reason for
that is that many people seem to be under the impression that this is
also for synchronization with outstanding asynchronous processing.
I'm pretty massively opposed to this because:
- It boils down to reinventing a new rather general-purpose userspace
synchronization interface. If we look at things like futexes, this
is hard to get right.
- Furthermore a lot of kernel code has to interact with this
synchronization primitive. This smells a lot like the dri1 hw_lock,
a horror show I prefer not to reinvent.
- Even more fun is that multiple different subsystems would interact
here, so we have plenty of opportunities to create funny deadlock
scenarios.
I think synchronization is a wholesale different problem from data
sharing and should be tackled as an orthogonal problem.
Now we could demand that prepare/finish may only ensure cache
coherency (as Rob intended), but that runs into the next problem:
We not only need mmap support to facilitate sw-only processing nodes
in a pipeline (without jumping through hoops by importing the dma_buf
into some sw-access only importer), which allows for a nicer
ION->dma-buf upgrade path for existing Android userspace. We also need
mmap support for existing importing subsystems to support existing
userspace libraries. And a lot of these subsystems are expected to
export coherent userspace mappings.
So prepare/finish can only ever be optional and the exporter /needs/
to support coherent mappings. Given that mmap access is always
somewhat fallback-y in nature I've decided to drop this optimization,
instead of just making it optional. If we demonstrate a clear need for
this, supported by benchmark results, we can always add it in again
later as an optional extension.
Other differences compared to Rob's RFC is the above mentioned support
for mapping a dma-buf through facilities provided by the importer.
Which results in mmap support no longer being optional.
Note that this dma-buf mmap patch does _not_ support every possible
insanity an existing subsystem could pull off with mmap: Because it
does not allow importers to intercept pagefaults and shoot down ptes,
importing subsystems can't add some magic of their own at these points
(e.g. to automatically synchronize with outstanding rendering or set up
some special resources). I've done a cursory read through a few mmap
implementations of various subsystems and I'm hopeful that we can avoid
this (and the complexity it'd bring with it).
Additionally I've extended the documentation a bit to explain the hows
and whys of this mmap extension.
In case we ever want to add support for explicitly cache managed
userspace mmap with a prepare/finish ioctl pair, we could specify that
userspace needs to mmap a different part of the dma_buf, e.g. the
range starting at dma_buf->size up to dma_buf->size*2. This works
because the size of a dma_buf is invariant over its lifetime. The
exporter would obviously need to fall back to coherent mappings for
both ranges if a legacy client maps the coherent range and the
architecture cannot support conflicting caching policies. Also, this
would obviously be optional and userspace needs to be able to fall
back to coherent mappings.
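To make the resulting interface concrete, a minimal sketch of an
importer forwarding its own mmap to the exporter through the new
dma_buf_mmap() helper ("example_obj" is a hypothetical importer-private
structure):

#include <linux/dma-buf.h>
#include <linux/fs.h>
#include <linux/mm.h>

struct example_obj {
	struct dma_buf *dmabuf;
};

/* Sketch: the importing subsystem simply forwards userspace mmap
 * requests to the exporter, which must provide a coherent mapping. */
static int example_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct example_obj *obj = file->private_data;

	return dma_buf_mmap(obj->dmabuf, vma, vma->vm_pgoff);
}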
v2:
- Spelling fixes from Rob Clark.
- Compile fix for !DMA_BUF from Rob Clark.
- Extend commit message to explain how explicitly cache managed mmap
support could be added later.
- Extend the documentation with implementation notes for exporters
that need to manually fake coherency.
Change-Id: Ia8f2ae5d8a1b1c87ed12ca1c89d7bf2067239ee4
Cc: Rob Clark <rob.clark@linaro.org>
Cc: Rebecca Schultz Zavin <rebecca@android.com>
Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The function regmap_bulk_read() calls regmap_read() for each
register if the register set is volatile and the cache is enabled.
In this case, the last few register reads cause memory corruption
if the register size is not the size of an unsigned int.
regmap_read() takes an unsigned int argument for the returned
value and updates it as
*val = map->format.parse_val(map->work_buf);
This causes a full 4 bytes (the size of an unsigned int) to be written.
Now if the client passes a buffer whose size matches the register
count in regmap_bulk_read(), the last few register reads actually
write memory beyond the end of that buffer.
Avoid this by reading into a local variable and then doing a memcpy()
of only the actual bytes, based on the register size, into the passed
buffer.
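A simplified sketch of the fix (not the exact regmap code; val_bytes
stands in for map->format.val_bytes):

#include <linux/regmap.h>
#include <linux/string.h>
#include <linux/types.h>

/* Sketch: read each register into a local unsigned int and copy only
 * val_bytes bytes per register into the caller's buffer, so that
 * regmap_read() never writes a full unsigned int past the end of it. */
static int example_bulk_read(struct regmap *map, unsigned int reg,
			     u8 *buf, size_t val_count, size_t val_bytes)
{
	size_t i;
	int ret;

	for (i = 0; i < val_count; i++) {
		unsigned int ival;

		ret = regmap_read(map, reg + i, &ival);
		if (ret)
			return ret;

		memcpy(buf + i * val_bytes, &ival, val_bytes);
	}

	return 0;
}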
To reproduce, I allocated a pointer ptr and dumped the first 16 bytes
at that pointer, then called regmap_bulk_read() with a pointer sitting
just on top of this allocation and a register count of 128. Here the
register size is 1 byte.
The memory trace of the last few register reads is as follows:
[ 5.438589] regmap_bulk_read after regamp_read() for register 122
[ 5.447421] 0xef993c20 0xef993c00 0x00000000 0x00000001
[ 5.467535] regmap_bulk_read after regamp_read() for register 123
[ 5.476374] 0xef993c20 0xef993c00 0x00000000 0x00000001
[ 5.496425] regmap_bulk_read after regamp_read() for register 124
[ 5.505260] 0xef993c20 0xef993c00 0x00000000 0x00000001
[ 5.525372] regmap_bulk_read after regamp_read() for register 125
[ 5.534205] 0xef993c00 0xef993c00 0x00000000 0x00000001
[ 5.554258] regmap_bulk_read after regamp_read() for register 126
[ 5.563100] 0xef990000 0xef993c00 0x00000000 0x00000001
[ 5.554258] regmap_bulk_read after regamp_read() for register 127
[ 5.587108] 0xef000000 0xef993c00 0x00000000 0x00000001
Here it can be observed that the memory content of the first word
started changing on the last 3 regmap_read() calls, and so the
corruption happened.
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
This fixes:
note: expected ‘struct ida *’ but argument is of type ‘struct idr *’
warning: passing argument 1 of ‘ida_pre_get’ from incompatible pointer type
Reported-by: Arnd Bergman <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Axel Lin <axel.lin@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
soc_lock is already initialized by DEFINE_SPINLOCK.
Signed-off-by: Axel Lin <axel.lin@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rather than hard-lock the kernel, dump the suspend thread stack and
BUG() when a driver takes too long to suspend. The timeout is set
to 12 seconds to be longer than the usbhid 10 second timeout.
Exclude from the watchdog the time spent waiting for children that
are resumed asynchronously, and time every device whether or not it
resumed synchronously.
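A rough, hedged sketch of the mechanism (not the actual patch; names
prefixed example_ are placeholders): arm a timer around a device's
suspend callback and, if it fires, dump the suspend thread's stack and
BUG():

#include <linux/device.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/timer.h>

#define EXAMPLE_SUSPEND_TIMEOUT	(12 * HZ)	/* longer than usbhid's 10 s */

struct example_suspend_wd {
	struct device *dev;
	struct task_struct *tsk;	/* the suspend thread */
	struct timer_list timer;
};

/* Timer handler: the driver took too long, so dump the suspend
 * thread's stack and crash instead of hard-locking silently. */
static void example_suspend_wd_fire(unsigned long data)
{
	struct example_suspend_wd *wd = (struct example_suspend_wd *)data;

	dev_emerg(wd->dev, "**** suspend timeout, dumping stack ****\n");
	show_stack(wd->tsk, NULL);
	BUG();
}

/* Sketch of use around a single device's suspend callback. */
static int example_suspend_one(struct device *dev,
			       int (*cb)(struct device *dev))
{
	struct example_suspend_wd wd = { .dev = dev, .tsk = current };
	int ret;

	setup_timer(&wd.timer, example_suspend_wd_fire, (unsigned long)&wd);
	mod_timer(&wd.timer, jiffies + EXAMPLE_SUSPEND_TIMEOUT);

	ret = cb(dev);

	del_timer_sync(&wd.timer);
	return ret;
}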
Change-Id: Ifd211c06b104860c2fee6eecfe0d61774aa4508a
Original-author: San Mehat <san@google.com>
Signed-off-by: Benoit Goby <benoit@android.com>
Two more small fixes:
- Now we have users for it that aren't running Android it turns out that
regcache_sync_region() is much more useful to drivers if it's exported
for use by modules. Who knew?
- Make sure we don't divide by zero when doing debugfs dumps of rbtrees,
not visible up until now because everything was providing at least
some cache on startup.
Merge tag 'regmap-3.4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap
Pull two more small regmap fixes from Mark Brown:
- Now we have users for it that aren't running Android it turns out
that regcache_sync_region() is much more useful to drivers if it's
exported for use by modules. Who knew?
- Make sure we don't divide by zero when doing debugfs dumps of
rbtrees, not visible up until now because everything was providing at
least some cache on startup.
* tag 'regmap-3.4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
regmap: prevent division by zero in rbtree_show
regmap: Export regcache_sync_region()
Merge batch of fixes from Andrew Morton:
"The simple_open() cleanup was held back while I wanted for laggards to
merge things.
I still need to send a few checkpoint/restore patches. I've been
wobbly about merging them because I'm wobbly about the overall
prospects for success of the project. But after speaking with Pavel
at the LSF conference, it sounds like they're further toward
completion than I feared - apparently davem is at the "has stopped
complaining" stage regarding the net changes. So I need to go back
and re-review those patches and their (lengthy) discussion."
* emailed from Andrew Morton <akpm@linux-foundation.org>: (16 patches)
memcg swap: use mem_cgroup_uncharge_swap fix
backlight: add driver for DA9052/53 PMIC v1
C6X: use set_current_blocked() and block_sigmask()
MAINTAINERS: add entry for sparse checker
MAINTAINERS: fix REMOTEPROC F: typo
alpha: use set_current_blocked() and block_sigmask()
simple_open: automatically convert to simple_open()
scripts/coccinelle/api/simple_open.cocci: semantic patch for simple_open()
libfs: add simple_open()
hugetlbfs: remove unregister_filesystem() when initializing module
drivers/rtc/rtc-88pm860x.c: fix rtc irq enable callback
fs/xattr.c:setxattr(): improve handling of allocation failures
fs/xattr.c:listxattr(): fall back to vmalloc() if kmalloc() failed
fs/xattr.c: suppress page allocation failure warnings from sys_listxattr()
sysrq: use SEND_SIG_FORCED instead of force_sig()
proc: fix mount -t proc -o AAA
Many users of debugfs copy the implementation of default_open() when
they want to support a custom read/write function op. This leads to a
proliferation of the default_open() implementation across the entire
tree.
Now that the common implementation has been consolidated into libfs we
can replace all the users of this function with simple_open().
This replacement was done with the following semantic patch:
<smpl>
@ open @
identifier open_f != simple_open;
identifier i, f;
@@
-int open_f(struct inode *i, struct file *f)
-{
(
-if (i->i_private)
-f->private_data = i->i_private;
|
-f->private_data = i->i_private;
)
-return 0;
-}
@ has_open depends on open @
identifier fops;
identifier open.open_f;
@@
struct file_operations fops = {
...
-.open = open_f,
+.open = simple_open,
...
};
</smpl>
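For reference, the helper everything is converted to is essentially the
open handler that was being copied everywhere (see the libfs patch
earlier in this series), so a typical debugfs user ends up with just:

#include <linux/debugfs.h>
#include <linux/fs.h>

/* Sketch of a converted user: simple_open() copies inode->i_private
 * into file->private_data, which is all the removed open handlers did. */
static ssize_t example_read(struct file *file, char __user *buf,
			    size_t count, loff_t *ppos)
{
	/* file->private_data was set up by simple_open() */
	return 0;
}

static const struct file_operations example_fops = {
	.open	= simple_open,
	.read	= example_read,
	.llseek	= default_llseek,
};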
[akpm@linux-foundation.org: checkpatch fixes]
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Julia Lawall <Julia.Lawall@lip6.fr>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If there are no nodes in the cache, nodes will be 0, so calculating
"registers / nodes" will cause division by zero.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Cc: stable@vger.kernel.org
Fixes mostly, including:
* Patch series that hopefully fixes races between the freezer and request_firmware()
and request_firmware_nowait() for good, with two cleanups from Stephen Boyd on top.
* Runtime PM fix from Alan Stern preventing tasks from getting stuck indefinitely
in the runtime PM wait queue.
* Device PM QoS update from MyungJoo Ham introducing a new variant of
pm_qos_update_request() allowing the callers to specify a timeout.
Merge tag 'pm-for-3.4-part-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull more power management updates from Rafael Wysocki:
- Patch series that hopefully fixes races between the freezer and
request_firmware() and request_firmware_nowait() for good, with two
cleanups from Stephen Boyd on top.
- Runtime PM fix from Alan Stern preventing tasks from getting stuck
indefinitely in the runtime PM wait queue.
- Device PM QoS update from MyungJoo Ham introducing a new variant of
pm_qos_update_request() allowing the callers to specify a timeout.
* tag 'pm-for-3.4-part-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
PM / QoS: add pm_qos_update_request_timeout() API
firmware_class: Move request_firmware_nowait() to workqueues
firmware_class: Reorganize fw_create_instance()
PM / Sleep: Mitigate race between the freezer and request_firmware()
PM / Sleep: Move disabling of usermode helpers to the freezer
PM / Hibernate: Disable usermode helpers right before freezing tasks
firmware_class: Do not warn that system is not ready from async loads
firmware_class: Split _request_firmware() into three functions, v2
firmware_class: Rework usermodehelper check
PM / Runtime: don't forget to wake up waitqueue on failure
regcache_sync_region() isn't going to be useful to most drivers if we
don't export it since otherwise they can't use it when built modular.
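For context, a sketch of the modular-driver usage this export enables;
the EXAMPLE_* register numbers are placeholders:

#include <linux/regmap.h>

#define EXAMPLE_BLOCK_FIRST_REG	0x40
#define EXAMPLE_BLOCK_LAST_REG	0x4f

/* Sketch: a module re-syncs only the block of registers that lost
 * state, which is exactly what regcache_sync_region() is for. */
static int example_resume_block(struct regmap *map)
{
	return regcache_sync_region(map, EXAMPLE_BLOCK_FIRST_REG,
				    EXAMPLE_BLOCK_LAST_REG);
}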
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
The code currently passes the register offset in the current block to
regcache_lookup_reg. This works fine as long as there is only one block
with a base register of 0, but in all other cases it will look up the
default for the
wrong register, which can cause unnecessary register writes. This patch fixes
it by passing the actual register number to regcache_lookup_reg.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Cc: <stable@vger.kernel.org>
Pull dma-buf updates from Sumit Semwal:
"This includes the following key items:
- kernel cpu access support,
- flag-passing to dma_buf_fd,
- relevant Documentation updates, and
- some minor cleanups and fixes.
These changes are needed for the drm prime/dma-buf interface code that
Dave Airlie plans to submit in this merge window."
* 'for-linus-3.4' of git://git.linaro.org/people/sumitsemwal/linux-dma-buf:
dma-buf: correct dummy function declarations.
dma-buf: document fd flags and O_CLOEXEC requirement
dma_buf: Add documentation for the new cpu access support
dma-buf: add support for kernel cpu access
dma-buf: don't hold the mutex around map/unmap calls
dma-buf: add get_dma_buf()
dma-buf: pass flags into dma_buf_fd.
dma-buf: add dma_data_direction to unmap dma_buf_op
dma-buf: Move code out of mutex-protected section in dma_buf_attach()
dma-buf: Return error instead of using a goto statement when possible
dma-buf: Remove unneeded sanity checks
dma-buf: Constify ops argument to dma_buf_export()
Oddly enough a work_struct was already part of the firmware_work
structure but nobody was using it. Instead of creating a new
kthread for each request_firmware_nowait() call just schedule the
work on the system workqueue. This should avoid some overhead
in forking new threads when they're not strictly necessary.
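From a caller's point of view nothing changes; for reference, a minimal
(hypothetical) user of the asynchronous interface looks like:

#include <linux/device.h>
#include <linux/firmware.h>
#include <linux/gfp.h>
#include <linux/module.h>

/* Completion callback: now runs from the system workqueue rather than
 * a freshly forked kthread. */
static void example_fw_ready(const struct firmware *fw, void *context)
{
	struct device *dev = context;

	if (!fw) {
		dev_err(dev, "firmware load failed\n");
		return;
	}
	/* ... program fw->data / fw->size into the hardware ... */
	release_firmware(fw);
}

/* "example-fw.bin" is a placeholder name. */
static int example_load_fw(struct device *dev)
{
	return request_firmware_nowait(THIS_MODULE, true, "example-fw.bin",
				       dev, GFP_KERNEL, dev, example_fw_ready);
}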
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Recent patches to split up the three phases of request_firmware()
lead to a casting away of const in fw_create_instance(). We can
avoid this cast by splitting up fw_create_instance() a bit.
Make _request_firmware_setup() return a struct fw_priv and use
that struct instead of passing struct firmware to
_request_firmware(). Move the uevent and device file creation
bits to the loading phase and rename the function to
_request_firmware_load() to better reflect its purpose.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
If firmware is requested asynchronously, by calling
request_firmware_nowait(), there is no reason to fail the request
(and warn the user) when the system is (presumably temporarily)
unready to handle it (because user space is not available yet or
frozen). For this reason, introduce an alternative routine for
read-locking umhelper_sem, usermodehelper_read_lock_wait(), that
will wait for usermodehelper_disabled to be unset (possibly with
a timeout) and make request_firmware_work_func() use it instead of
usermodehelper_read_trylock().
Accordingly, modify request_firmware() so that it uses
usermodehelper_read_trylock() to acquire umhelper_sem and remove
the code related to that lock from _request_firmware().
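A sketch of the resulting split between the two paths (helper names as
introduced by this series; the 10 second timeout below is purely
illustrative):

#include <linux/jiffies.h>
#include <linux/kmod.h>

/* Synchronous request_firmware() path: fail fast while usermode
 * helpers are disabled (e.g. during suspend). */
static int example_sync_path(void)
{
	int ret = usermodehelper_read_trylock();

	if (ret)
		return ret;	/* -EAGAIN while helpers are disabled */
	/* ... load the firmware ... */
	usermodehelper_read_unlock();
	return 0;
}

/* Asynchronous request_firmware_nowait() worker: wait (with a timeout)
 * for usermode helpers to become available instead of warning. */
static int example_async_path(void)
{
	int ret = usermodehelper_read_lock_wait(msecs_to_jiffies(10000));

	if (ret)
		return ret;
	/* ... load the firmware ... */
	usermodehelper_read_unlock();
	return 0;
}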
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: stable@vger.kernel.org
Split _request_firmware() into three functions,
_request_firmware_prepare() doing preparatory work that need not be
done under umhelper_sem, _request_firmware_cleanup() doing the
post-error cleanup and _request_firmware() carrying out the remaining
operations.
This change is requisite for moving the acquisition of umhelper_sem
from _request_firmware() to the callers, which is going to be done
subsequently.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: stable@vger.kernel.org
Instead of two functions, read_lock_usermodehelper() and
usermodehelper_is_disabled(), used in combination, introduce
usermodehelper_read_trylock() that will only return with umhelper_sem
held if usermodehelper_disabled is unset (and will return -EAGAIN
otherwise) and make _request_firmware() use it.
Rename read_unlock_usermodehelper() to
usermodehelper_read_unlock() to follow the naming convention of the
new function.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: stable@vger.kernel.org
This patch (as1535) fixes a bug in the runtime PM core. When a
runtime suspend attempt completes, whether successfully or not, the
device's power.wait_queue is supposed to be signalled. But this
doesn't happen in the failure pathway of rpm_suspend() when another
autosuspend attempt is rescheduled. As a result, a task can get stuck
indefinitely on the wait queue (I have seen this happen in testing).
The patch fixes the problem by moving the wake_up_all() call up near
the start of the failure code.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
CC: <stable@vger.kernel.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
A big difference to other contenders in the field (like ion) is
that this also supports highmem, so we have to split up the cpu
access from the kernel side into a prepare and a kmap step.
Prepare is allowed to fail and should do everything required so that
the kmap calls can succeed (like swapin/backing storage allocation,
flushing, ...).
More in-depth explanations will follow in the follow-up documentation
patch.
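A hedged sketch of the resulting kernel cpu access flow, using the
begin/kmap/kunmap/end entry points as introduced here (later kernels
changed these signatures):

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/kernel.h>
#include <linux/mm.h>

/* Sketch: prepare the buffer for cpu reads, kmap it one page at a time
 * (which also works for highmem-backed buffers), then end the access. */
static int example_cpu_read(struct dma_buf *dmabuf, size_t len)
{
	unsigned long pg, pages = DIV_ROUND_UP(len, PAGE_SIZE);
	int ret;

	ret = dma_buf_begin_cpu_access(dmabuf, 0, len, DMA_FROM_DEVICE);
	if (ret)
		return ret;

	for (pg = 0; pg < pages; pg++) {
		void *vaddr = dma_buf_kmap(dmabuf, pg);

		if (vaddr) {
			/* ... read up to PAGE_SIZE bytes from vaddr ... */
			dma_buf_kunmap(dmabuf, pg, vaddr);
		}
	}

	dma_buf_end_cpu_access(dmabuf, 0, len, DMA_FROM_DEVICE);
	return 0;
}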
Changes in v2:
- Clear up begin_cpu_access confusion noticed by Sumit Semwal.
- Don't automatically fallback from the _atomic variants to the
non-atomic variants. The _atomic callbacks are not allowed to
sleep, so we want exporters to make this decision explicit. The
function signatures are explicit, so simpler exporters can still
use the same function for both.
- Make the unmap functions optional. Simpler exporters with permanent
mappings don't need to do anything at unmap time.
Changes in v3:
- Adjust the WARN_ON checks for the new ->ops functions as suggested
by Rob Clark and Sumit Semwal.
- Rebased on top of latest dma-buf-next git.
Changes in v4:
- Fixup a missing - in a return -EINVAL; statement.
Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Rob Clark <rob@ti.com>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
The mutex protects the attachment list and hence needs to be held
around the callback to the exporter's (optional) attach/detach
functions.
Holding the mutex around the map/unmap calls doesn't protect any
dma_buf state. Exporters need to properly protect any of their own
state anyway (to protect against calls from their own interfaces).
So this only makes the locking messier (and lockdep easier to anger).
Therefore let's just drop this.
v2: Rebased on top of latest dma-buf-next git.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Rob Clark <rob.clark@linaro.org>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
We need to pass the flags into dma_buf_fd at this point,
so the flags end up doing the right thing for O_CLOEXEC.
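A small sketch of the exporting driver's side after this change (the
userspace-visible flag is simply forwarded):

#include <linux/dma-buf.h>
#include <linux/fcntl.h>
#include <linux/types.h>

/* Sketch: turn an existing dma_buf into an fd for userspace, honouring
 * a close-on-exec request from the caller. */
static int example_export_fd(struct dma_buf *dmabuf, bool cloexec)
{
	return dma_buf_fd(dmabuf, cloexec ? O_CLOEXEC : 0);
}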
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Rob Clark <rob@ti.com>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
Some exporters may use DMA map/unmap APIs in dma-buf ops, which require
enum dma_data_direction for both map and unmap operations.
Thus, the unmap dma_buf_op also needs to have enum dma_data_direction as
a parameter.
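The exporter's unmap callback then ends up looking roughly like this (a
sketch; see struct dma_buf_ops for the authoritative signature):

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Sketch: with the direction now passed in, the exporter can hand it
 * straight to the DMA unmap API, mirroring what map_dma_buf gets. */
static void example_unmap_dma_buf(struct dma_buf_attachment *attach,
				  struct sg_table *sgt,
				  enum dma_data_direction dir)
{
	dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
	sg_free_table(sgt);
	kfree(sgt);
}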
Reported-by: Tomasz Stanislawski <t.stanislaws@samsung.com>
Signed-off-by: Sumit Semwal <sumit.semwal@ti.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
"[RFC PATCH 0/2] audit of linux/device.h users in include/*"
https://lkml.org/lkml/2012/3/4/159
--
Nearly every subsystem has some kind of header with a proto like:
void foo(struct device *dev);
and yet there is no reason for most of these guys to care about the
sub fields within the device struct. This allows us to significantly
reduce the scope of headers including headers. For this instance, a
reduction of about 40% is achieved by replacing the include with the
simple fact that the device is some kind of a struct.
Unlike the much larger module.h cleanup, this one is simply two
commits. One to fix the implicit <linux/device.h> users, and then
one to delete the device.h includes from the linux/include/ dir
wherever possible.
Merge tag 'device-for-3.4' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux
Pull <linux/device.h> avoidance patches from Paul Gortmaker:
"Nearly every subsystem has some kind of header with a proto like:
void foo(struct device *dev);
and yet there is no reason for most of these guys to care about the
sub fields within the device struct. This allows us to significantly
reduce the scope of headers including headers. For this instance, a
reduction of about 40% is achieved by replacing the include with the
simple fact that the device is some kind of a struct.
Unlike the much larger module.h cleanup, this one is simply two
commits. One to fix the implicit <linux/device.h> users, and then one
to delete the device.h includes from the linux/include/ dir wherever
possible."
* tag 'device-for-3.4' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux:
device.h: audit and cleanup users in main include dir
device.h: cleanup users outside of linux/include (C files)