This reverts change Ia3e8209959fe377281f28106640a13f10501b47e.
Disabling requeueing of a completed request will cause a request
not to be retried (by requeueing it) in cases where the low-level
driver detected some error but finished processing of the request.
Specifically, not retrying the request will cause calls to functions
such as blk_execute_rq to block forever.
Change-Id: Ie0e97cb7560385d48d022dd4ae09f96cfd75b752
Signed-off-by: Gilad Broner <gbroner@codeaurora.org>
Split an IO into multiple requests so that the crypto accelerators
can be exercised in parallel to reduce latency.
Change-Id: I24b15568b5afd375ad39bf3b74f60743f0e1dde9
Acked-by: Baranidharan Muthukumaran <bmuthuku@qti.qualcomm.com>
Signed-off-by: Dinesh K Garg <dineshg@codeaurora.org>
Expose "sector_range", which will indicate to the low-level driver
unit-tests the size (in sectors, starting from "start_sector") of the
address space in which they can perform I/O operations. This user-defined
variable can be used to change the address space size from the default
512MiB.
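One common way to expose such a user-defined variable is a module parameter; a minimal sketch (whether test-iosched actually uses a module parameter or a debugfs entry is not stated here, and the default value is an assumption):

#include <linux/module.h>

/* Illustrative sketch: expose the test address-space size as a module parameter. */
static unsigned long sector_range = 1024 * 1024;	/* 512MiB in 512-byte sectors (assumed default) */
module_param(sector_range, ulong, 0644);
MODULE_PARM_DESC(sector_range, "Size (in sectors) of the address space used by the unit tests");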
Change-Id: I515a6849eb39b78e653f4018993a2c8e64e2a77f
Signed-off-by: Lee Susman <lsusman@codeaurora.org>
Unit tests submit large 512KB requests made of 128 bios.
The buffer allocation was done via kmalloc, which may not be able
to allocate such a large buffer that is also physically contiguous.
Using kmalloc to allocate each bio separately is also problematic, as
the buffer might not be page aligned. Some bios may end up using more
than a single memory segment, which will fail the mapping of the bios
to the request, which supports up to 128 physical segments.
To avoid such failures, allocate a separate page for each bio
(the bio size is a single page).
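A hedged sketch of the allocation scheme described above (names are illustrative, not the actual test-iosched code):

	/*
	 * Sketch: allocate one page per bio instead of one large kmalloc()
	 * buffer, so each bio is page aligned and maps to exactly one
	 * physical segment.
	 */
	for (i = 0; i < nr_bios; i++) {
		pages[i] = alloc_page(GFP_KERNEL);
		if (!pages[i])
			goto free_pages;	/* release pages allocated so far */
	}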
Change-Id: Id0da394d458942e093d924bc7aa23aa3231cdca7
Signed-off-by: Gilad Broner <gbroner@codeaurora.org>
The current test-iosched design enables running only a single test
for a single block device.
This change modifies the test-iosched framework to allow running
several tests on several block devices.
Change-Id: I051d842733873488b64e89053d9c4e30e1249870
Signed-off-by: Gilad Broner <gbroner@codeaurora.org>
Verify a request is not yet completed before requeueing it,
as requeueing a request ends its tag and sets it to -1, while
it is possible that the request has timed out and is now being
processed for error handling. Since it may be active and processed
in the low-level driver, we must not reset its tag.
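A hedged sketch of the check (the exact call site and condition are assumptions, not the actual patch):

	/* Sketch: leave the tag alone if the request is already marked complete. */
	if (!test_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags))
		blk_requeue_request(q, rq);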
Change-Id: Ia3e8209959fe377281f28106640a13f10501b47e
Signed-off-by: Gilad Broner <gbroner@codeaurora.org>
blk_run_queue() takes the queue spinlock and disables irqs.
Consider the following callstack:
blk_run_queue
->__blk_run_queue
-> scsi_request_fn
-> blk_peek_request
-> __elv_next_request
-> elevator_dispatch_fn
-> test_dispatch_requests
-> test_dispatch_from
test_dispatch_from() will release the test-iosched spinlock
using spin_unlock_irq, which will enable interrupts; however, the
caller assumes interrupts are disabled.
An interrupt can occur now and scsi soft-irq may be scheduled
with the following call stack:
scsi_softirq_done
-> scsi_finish_command
-> scsi_device_unbusy
scsi_device_unbusy() tries to lock the queue spinlock which was
previously locked when blk_run_queue was called, resulting in a
spinlock recursion.
Change test_dispatch_from() to use the spinlock irq save/restore variants
to avoid enabling irqs in case they were previously disabled.
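A minimal sketch of the save/restore variant referred to above (the lock name is illustrative):

	unsigned long flags;

	/* Sketch: preserve the caller's interrupt state instead of unconditionally enabling irqs. */
	spin_lock_irqsave(&td->lock, flags);		/* was: spin_lock_irq() */
	/* ... pick the next test request to dispatch ... */
	spin_unlock_irqrestore(&td->lock, flags);	/* was: spin_unlock_irq() */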
Change-Id: Icaea4f9ba54771edb0302c6005047fcc5478ce8d
Signed-off-by: Gilad Broner <gbroner@codeaurora.org>
Recalculate nr_phys_segments after pages are allocated
for write requests. Move _req_crypt_io_pool allocation
and de-allocation to ctr and dtr instead of driver init
and exit.
Change-Id: I8576dce1f7c9bc39dcc975762562fb84a349bba7
Acked-by: Baranidharan Muthukumaran <bmuthuku@qti.qualcomm.com>
Signed-off-by: Dinesh K Garg <dineshg@codeaurora.org>
When running a test, a timer was set to detect test timeout
and to unblock the wait_event() call which waits for the
test to finish. This is redundant, as the wait_event timeout variant
gives the same functionality without the overhead of managing a
timer for this purpose, and improves code readability.
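A condensed sketch of the simplification (wait queue and helper names are illustrative):

	/* Sketch: one call replaces the explicit timer plus wait_event(). */
	if (!wait_event_timeout(test_wq, test_finished(td),
				msecs_to_jiffies(TEST_TIMEOUT_MS)))
		pr_err("test-iosched: test timed out\n");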
Change-Id: Icbd3cb0f3fcb5854673f4506b102b0c80e97d6bb
Signed-off-by: Gilad Broner <gbroner@codeaurora.org>
Adding a new element "tsk_dirty" to struct page increases the size
of mem_map/vmemmap, so restrict this to debug-only functionality to
save a few MB of memory.
Considering a system with 1GB of RAM, there will be nearly 262144
pages and thus that many page structures in mem_map/vmemmap.
With a pointer size of 8 bytes on a 64-bit system, adding this
pointer to struct page means an increase of 2MB for mem_map.
CRs-Fixed: 738692
Change-Id: Idf3217dcbe17cf1ab4d462d2aa8d39da1ffd8b13
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
The UFS tests are used for testing the functionality and performance
of the UFS driver. Some of the tests call compare_buffer_to_pattern
for data integrity checking. This function should be exposed in order
to allow compilation of ufs_test as a module.
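Exposing the helper to a module build is typically just an export next to its definition; a minimal sketch (the signature and return value shown are assumptions):

	int compare_buffer_to_pattern(struct test_request *test_rq)
	{
		/* ... existing data-integrity check ... */
		return 0;
	}
	EXPORT_SYMBOL(compare_buffer_to_pattern);	/* lets ufs_test.ko link against it */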
Change-Id: I2279b0ae9dbdf4ecad073fab2b15116be2ea1713
Signed-off-by: Gilad Broner <gbroner@codeaurora.org>
Signed-off-by: Maya Erez <merez@codeaurora.org>
* commit 'v3.10.49': (529 commits)
Linux 3.10.49
ACPI / battery: Retry to get battery information if failed during probing
x86, ioremap: Speed up check for RAM pages
Score: Modify the Makefile of Score, remove -mlong-calls for compiling
Score: The commit is for compiling successfully.
Score: Implement the function csum_ipv6_magic
score: normalize global variables exported by vmlinux.lds
rtmutex: Plug slow unlock race
rtmutex: Handle deadlock detection smarter
rtmutex: Detect changes in the pi lock chain
rtmutex: Fix deadlock detector for real
ring-buffer: Check if buffer exists before polling
drm/radeon: stop poisoning the GART TLB
drm/radeon: fix typo in golden register setup on evergreen
ext4: disable synchronous transaction batching if max_batch_time==0
ext4: clarify error count warning messages
ext4: fix unjournalled bg descriptor while initializing inode bitmap
dm io: fix a race condition in the wake up code for sync_io
Drivers: hv: vmbus: Fix a bug in the channel callback dispatch code
clk: spear3xx: Use proper control register offset
...
In addition to bringing in upstream commits, this merge also makes minor
changes to maintain compatibility with upstream:
The definition of list_next_entry in qcrypto.c and ipa_dp.c has been
removed, as upstream has moved the definition to list.h. The implementation
of list_next_entry was identical between the two.
irq.c, for both arm and arm64 architecture, has had its calls to
__irq_set_affinity_locked updated to reflect changes to the API upstream.
Finally, as we have removed the sleep_length member variable of the
tick_sched struct, all changes made by upstream commit ec804bd do not
apply to our tree and have been removed from this merge. Only
kernel/time/tick-sched.c is impacted.
Change-Id: I63b7e0c1354812921c94804e1f3b33d1ad6ee3f1
Signed-off-by: Ian Maund <imaund@codeaurora.org>
When adding a new field to struct bio, there is a crash in the removed
code lines. This issue was introduced by commit
80a8f0f87b "block: row-iosched idling
triggered by readahead pages".
(Partly) reverting this patch until the root cause is fixed (on the FS level).
Change-Id: Idce180802227aaab495bf0723768ba4cb437bcab
Signed-off-by: Tanya Brokhman <tlinder@codeaurora.org>
commit af5040da01ef980670b3741b3e10733ee3e33566 upstream.
trace_block_rq_complete does not take into account that a request can
be partially completed, so we can get the following incorrect output
of blkparser:
C R 232 + 240 [0]
C R 240 + 232 [0]
C R 248 + 224 [0]
C R 256 + 216 [0]
but should be:
C R 232 + 8 [0]
C R 240 + 8 [0]
C R 248 + 8 [0]
C R 256 + 8 [0]
Also, the whole output summary statistics of completed requests and
final throughput will be incorrect.
This patch takes into account real completion size of the request and
fixes wrong completion accounting.
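A condensed sketch of the idea: the tracepoint is given the number of bytes actually completed in this call rather than the full request size:

	/* Sketch: account only the bytes completed now, not the whole request. */
	trace_block_rq_complete(req->q, req, nr_bytes);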
Signed-off-by: Roman Pen <r.peniaev@gmail.com>
CC: Steven Rostedt <rostedt@goodmis.org>
CC: Frederic Weisbecker <fweisbec@gmail.com>
CC: Ingo Molnar <mingo@redhat.com>
CC: linux-kernel@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Background writes happen in the context of a background thread.
It is very useful to identify the actual task that generated the
request instead of the background task that submitted the request.
Hence, keep track of the task when a page gets dirtied and dump
this task info while tracing. Not all the pages in the bio are
dirtied by the same task, but most likely they will be, since the
sectors accessed on the device must be adjacent.
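A hedged sketch of the bookkeeping (hook placement and the guard symbol are illustrative; the tsk_dirty field is the one mentioned elsewhere in this log):

	/* Sketch: remember which task dirtied the page, for later trace output. */
	static inline void page_record_dirty_task(struct page *page)
	{
	#ifdef CONFIG_PAGE_TRACK_DIRTY_TASK	/* hypothetical symbol */
		page->tsk_dirty = current;
	#endif
	}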
Change-Id: I6afba85a2063dd3350a0141ba87cf8440ce9f777
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
The following commits have been reverted from this merge, as they are
known to introduce new bugs and are currently incompatible with our
audio implementation. Investigation of these commits is ongoing, and
they are expected to be brought in at a later time:
86e6de7 ALSA: compress: fix drain calls blocking other compress functions (v6)
16442d4 ALSA: compress: fix drain calls blocking other compress functions
This merge commit also includes a change in block, necessary for
compilation. Upstream has modified elevator_init_fn to prevent race
conditions, requiring updates to row_init_queue and test_init_queue.
* commit 'v3.10.28': (1964 commits)
Linux 3.10.28
ARM: 7938/1: OMAP4/highbank: Flush L2 cache before disabling
drm/i915: Don't grab crtc mutexes in intel_modeset_gem_init()
serial: amba-pl011: use port lock to guard control register access
mm: Make {,set}page_address() static inline if WANT_PAGE_VIRTUAL
md/raid5: Fix possible confusion when multiple write errors occur.
md/raid10: fix two bugs in handling of known-bad-blocks.
md/raid10: fix bug when raid10 recovery fails to recover a block.
md: fix problem when adding device to read-only array with bitmap.
drm/i915: fix DDI PLLs HW state readout code
nilfs2: fix segctor bug that causes file system corruption
thp: fix copy_page_rep GPF by testing is_huge_zero_pmd once only
ftrace/x86: Load ftrace_ops in parameter not the variable holding it
SELinux: Fix possible NULL pointer dereference in selinux_inode_permission()
writeback: Fix data corruption on NFS
hwmon: (coretemp) Fix truncated name of alarm attributes
vfs: In d_path don't call d_dname on a mount point
staging: comedi: adl_pci9111: fix incorrect irq passed to request_irq()
staging: comedi: addi_apci_1032: fix subdevice type/flags bug
mm/memory-failure.c: recheck PageHuge() after hugetlb page migrate successfully
GFS2: Increase i_writecount during gfs2_setattr_chown
perf/x86/amd/ibs: Fix waking up from S3 for AMD family 10h
perf scripting perl: Fix build error on Fedora 12
ARM: 7815/1: kexec: offline non panic CPUs on Kdump panic
Linux 3.10.27
sched: Guarantee new group-entities always have weight
sched: Fix hrtimer_cancel()/rq->lock deadlock
sched: Fix cfs_bandwidth misuse of hrtimer_expires_remaining
sched: Fix race on toggling cfs_bandwidth_used
x86, fpu, amd: Clear exceptions in AMD FXSAVE workaround
netfilter: nf_nat: fix access to uninitialized buffer in IRC NAT helper
SCSI: sd: Reduce buffer size for vpd request
intel_pstate: Add X86_FEATURE_APERFMPERF to cpu match parameters.
mac80211: move "bufferable MMPDU" check to fix AP mode scan
ACPI / Battery: Add a _BIX quirk for NEC LZ750/LS
ACPI / TPM: fix memory leak when walking ACPI namespace
mfd: rtsx_pcr: Disable interrupts before cancelling delayed works
clk: exynos5250: fix sysmmu_mfc{l,r} gate clocks
clk: samsung: exynos5250: Add CLK_IGNORE_UNUSED flag for the sysreg clock
clk: samsung: exynos4: Correct SRC_MFC register
clk: clk-divider: fix divisor > 255 bug
ahci: add PCI ID for Marvell 88SE9170 SATA controller
parisc: Ensure full cache coherency for kmap/kunmap
drm/nouveau/bios: make jump conditional
ARM: shmobile: mackerel: Fix coherent DMA mask
ARM: shmobile: armadillo: Fix coherent DMA mask
ARM: shmobile: kzm9g: Fix coherent DMA mask
ARM: dts: exynos5250: Fix MDMA0 clock number
ARM: fix "bad mode in ... handler" message for undefined instructions
ARM: fix footbridge clockevent device
net: Loosen constraints for recalculating checksum in skb_segment()
bridge: use spin_lock_bh() in br_multicast_set_hash_max
netpoll: Fix missing TXQ unlock and and OOPS.
net: llc: fix use after free in llc_ui_recvmsg
virtio-net: fix refill races during restore
virtio_net: don't leak memory or block when too many frags
virtio-net: make all RX paths handle errors consistently
virtio_net: fix error handling for mergeable buffers
vlan: Fix header ops passthru when doing TX VLAN offload.
net: rose: restore old recvmsg behavior
rds: prevent dereference of a NULL device
ipv6: always set the new created dst's from in ip6_rt_copy
net: fec: fix potential use after free
hamradio/yam: fix info leak in ioctl
drivers/net/hamradio: Integer overflow in hdlcdrv_ioctl()
net: inet_diag: zero out uninitialized idiag_{src,dst} fields
ip_gre: fix msg_name parsing for recvfrom/recvmsg
net: unix: allow bind to fail on mutex lock
ipv6: fix illegal mac_header comparison on 32bit
netvsc: don't flush peers notifying work during setting mtu
tg3: Initialize REG_BASE_ADDR at PCI config offset 120 to 0
net: unix: allow set_peek_off to fail
net: drop_monitor: fix the value of maxattr
ipv6: don't count addrconf generated routes against gc limit
packet: fix send path when running with proto == 0
virtio: delete napi structures from netdev before releasing memory
macvtap: signal truncated packets
tun: update file current position
macvtap: update file current position
macvtap: Do not double-count received packets
rds: prevent BUG_ON triggered on congestion update to loopback
net: do not pretend FRAGLIST support
IPv6: Fixed support for blackhole and prohibit routes
HID: Revert "Revert "HID: Fix logitech-dj: missing Unifying device issue""
gpio-rcar: R-Car GPIO IRQ share interrupt
clocksource: em_sti: Set cpu_possible_mask to fix SMP broadcast
irqchip: renesas-irqc: Fix irqc_probe error handling
Linux 3.10.26
sh: add EXPORT_SYMBOL(min_low_pfn) and EXPORT_SYMBOL(max_low_pfn) to sh_ksyms_32.c
ext4: fix bigalloc regression
arm64: Use Normal NonCacheable memory for writecombine
arm64: Do not flush the D-cache for anonymous pages
arm64: Avoid cache flushing in flush_dcache_page()
ARM: KVM: arch_timers: zero CNTVOFF upon return to host
ARM: hyp: initialize CNTVOFF to zero
clocksource: arch_timer: use virtual counters
arm64: Remove unused cpu_name ascii in arch/arm64/mm/proc.S
arm64: dts: Reserve the memory used for secondary CPU release address
arm64: check for number of arguments in syscall_get/set_arguments()
arm64: fix possible invalid FPSIMD initialization state
...
Change-Id: Ia0e5d71b536ab49ec3a1179d59238c05bdd03106
Signed-off-by: Ian Maund <imaund@codeaurora.org>
commit c8123f8c9cb517403b51aa41c3c46ff5e10b2c17 upstream.
When mkfs issues a full device discard and the device only
supports discards of a smallish size, we can loop in
blkdev_issue_discard() for a long time. If preempt isn't enabled,
this can turn into a softlock situation and the kernel will
start complaining.
Add an explicit cond_resched() at the end of the loop to avoid
that.
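The change is essentially one line at the bottom of the submission loop; a condensed sketch:

	/* Sketch: yield between discard chunks so a huge discard can't hog the CPU. */
	while (nr_sects) {
		/* ... build and submit one discard bio of at most max_discard_sectors ... */
		cond_resched();
	}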
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 556ee818c06f37b2e583af0363e6b16d0e0270de upstream.
request_queue bypassing is used to suppress higher-level functions of a
request_queue so that they can be switched, reconfigured and shut
down. A request_queue does the following while bypassing.
* bypasses elevator and io_cq association and queues requests directly
to the FIFO dispatch queue.
* bypasses block cgroup request_list lookup and always uses the root
request_list.
Once confirmed to be bypassing, specific elevator and block cgroup
policy implementations can assume that nothing is in flight for them
and perform various operations which would be dangerous otherwise.
Such confirmation is achieved by short-circuiting all new requests
directly to the dispatch queue and waiting for all the requests which
were issued before to finish. Unfortunately, while the request
allocating and draining sides were properly handled, we forgot to
actually plug the request dispatch path. Even after bypassing mode is
confirmed, if the attached driver tries to fetch a request and the
dispatch queue is empty, __elv_next_request() would invoke the current
elevator's elevator_dispatch_fn() callback. As all in-flight requests
were drained, the elevator wouldn't contain any request but once
bypass is confirmed we don't even know whether the elevator is even
there. It might be in the process of being switched and half torn
down.
Frank Mayhar reports that this actually happened while switching
elevators, leading to an oops.
Let's fix it by making __elv_next_request() avoid invoking the
elevator_dispatch_fn() callback if the queue is bypassing. It already
avoids invoking the callback if the queue is dying. As a dying queue
is guaranteed to be bypassing, we can simply replace blk_queue_dying()
check with blk_queue_bypass().
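A condensed sketch of the resulting check in __elv_next_request() (abridged):

	/* Sketch: a bypassing (or dying) queue is never asked to dispatch from the elevator. */
	if (unlikely(blk_queue_bypass(q)) ||
	    !q->elevator->type->ops.elevator_dispatch_fn(q, 0))
		return NULL;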
Reported-by: Frank Mayhar <fmayhar@google.com>
References: http://lkml.kernel.org/g/1390319905.20232.38.camel@bobble.lax.corp.google.com
Tested-by: Frank Mayhar <fmayhar@google.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We've switched over every architecture that supports SMP to it, so
remove the now-useless config variable.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: 0a06ff068f1255bcd7965ab07bc0f4adc3eb639a
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
[imaund@codeaurora.org: resolve merge conflicts]
Signed-off-by: Ian Maund <imaund@codeaurora.org>
Add a define for the test bio size (which is the size of a page);
this is used for allocating the right-sized buffer for the bio during
test request creation.
Change-Id: I9505c85c4352009bdee442172eb8ae8f4254cfb0
Signed-off-by: Lee Susman <lsusman@codeaurora.org>
dm-crypt provides a bio-based device mapper module. dm-crypt
operates on 512-byte packets, which is not an efficient way
to use HW-based crypto blocks. dm-req-crypt was developed to
address this. dm-req-crypt works on requests, which carry up to
512KB of data for unmerged requests.
Change-Id: I7d6a63d516dc2dbe80f46c06dd0722847d55bc9f
Signed-off-by: Dinesh K Garg <dineshg@codeaurora.org>
The MMC device driver implements URGENT request execution with priority
(using a stop flow); as a result, a currently running (and prepared) request
may be reinserted back into the I/O scheduler. This will break the block layer
logic of flushes (a flush request should not be inserted into the I/O scheduler).
The block layer flush machinery keeps the q->flush_data_in_flight list updated
with started but not completed flush requests with data (REQ_FUA).
With this change, the underlying block device driver will not be notified about
pending urgent requests while flushes are in flight.
Change-Id: I98113621223fe0c7d224de023db888a73bd62b48
Signed-off-by: Konstantin Dorfman <kdorfman@codeaurora.org>
commit 7c8a3679e3d8e9d92d58f282161760a0e247df97 upstream.
Add locking of q->sysfs_lock into elevator_change() (an exported function)
to ensure it is held to protect q->elevator from elevator_init(), even if
elevator_change() is called from non-sysfs paths.
sysfs path (elv_iosched_store) uses __elevator_change(), non-locking
version, as the lock is already taken by elv_iosched_store().
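Based on the description above, the resulting wrapper looks roughly like this:

	int elevator_change(struct request_queue *q, const char *name)
	{
		int ret;

		/* Protect q->elevator from elevator_init() */
		mutex_lock(&q->sysfs_lock);
		ret = __elevator_change(q, name);
		mutex_unlock(&q->sysfs_lock);

		return ret;
	}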
Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama@hds.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Josh Boyer <jwboyer@fedoraproject.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit eb1c160b22655fd4ec44be732d6594fd1b1e44f4 upstream.
The soft lockup below happens at the boot time of the system using dm
multipath and the udev rules to switch scheduler.
[ 356.127001] BUG: soft lockup - CPU#3 stuck for 22s! [sh:483]
[ 356.127001] RIP: 0010:[<ffffffff81072a7d>] [<ffffffff81072a7d>] lock_timer_base.isra.35+0x1d/0x50
...
[ 356.127001] Call Trace:
[ 356.127001] [<ffffffff81073810>] try_to_del_timer_sync+0x20/0x70
[ 356.127001] [<ffffffff8118b08a>] ? kmem_cache_alloc_node_trace+0x20a/0x230
[ 356.127001] [<ffffffff810738b2>] del_timer_sync+0x52/0x60
[ 356.127001] [<ffffffff812ece22>] cfq_exit_queue+0x32/0xf0
[ 356.127001] [<ffffffff812c98df>] elevator_exit+0x2f/0x50
[ 356.127001] [<ffffffff812c9f21>] elevator_change+0xf1/0x1c0
[ 356.127001] [<ffffffff812caa50>] elv_iosched_store+0x20/0x50
[ 356.127001] [<ffffffff812d1d09>] queue_attr_store+0x59/0xb0
[ 356.127001] [<ffffffff812143f6>] sysfs_write_file+0xc6/0x140
[ 356.127001] [<ffffffff811a326d>] vfs_write+0xbd/0x1e0
[ 356.127001] [<ffffffff811a3ca9>] SyS_write+0x49/0xa0
[ 356.127001] [<ffffffff8164e899>] system_call_fastpath+0x16/0x1b
This is caused by a race between md device initialization by multipathd and
shell script to switch the scheduler using sysfs.
- multipathd:
SyS_ioctl -> do_vfs_ioctl -> dm_ctl_ioctl -> ctl_ioctl -> table_load
-> dm_setup_md_queue -> blk_init_allocated_queue -> elevator_init
q->elevator = elevator_alloc(q, e); // not yet initialized
- sh -c 'echo deadline > /sys/$DEVPATH/queue/scheduler':
elevator_switch (in the call trace above)
struct elevator_queue *old = q->elevator;
q->elevator = elevator_alloc(q, new_e);
elevator_exit(old); // lockup! (*)
- multipathd: (cont.)
err = e->ops.elevator_init_fn(q); // init fails; q->elevator is modified
(*) When del_timer_sync() is called, lock_timer_base() will loop infinitely
while timer->base == NULL. In this case, as the timer will never be initialized,
it results in a lockup.
This patch introduces acquisition of q->sysfs_lock around elevator_init()
into blk_init_allocated_queue(), to provide mutual exclusion between
initialization of the q->scheduler and switching of the scheduler.
This should fix this bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=902012
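A condensed sketch of the locking added in blk_init_allocated_queue() (error handling abridged):

	/* Sketch: serialize elevator_init() against elevator switching via sysfs. */
	mutex_lock(&q->sysfs_lock);
	ret = elevator_init(q, NULL);
	mutex_unlock(&q->sysfs_lock);
	if (ret)
		return NULL;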
Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama@hds.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d82ae52e68892338068e7559a0c0657193341ce4 upstream.
Without this patch all DM devices will default to BLK_MAX_SEGMENT_SIZE
(65536) even if the underlying device(s) have a larger value -- this is
due to blk_stack_limits() using min_not_zero() when stacking the
max_segment_size limit.
underlying device max_segment_size: 1073741824
before patch (DM device): 65536
after patch (DM device): 1073741824
Reported-by: Lukasz Flis <l.flis@cyfronet.pl>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4912aa6c11e6a5d910264deedbec2075c6f1bb73 upstream.
crocode i2c_i801 i2c_core iTCO_wdt iTCO_vendor_support shpchp ioatdma dca be2net sg ses enclosure ext4 mbcache jbd2 sd_mod crc_t10dif ahci megaraid_sas(U) dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
Pid: 491, comm: scsi_eh_0 Tainted: G W ---------------- 2.6.32-220.13.1.el6.x86_64 #1 IBM -[8722PAX]-/00D1461
RIP: 0010:[<ffffffff8124e424>] [<ffffffff8124e424>] blk_requeue_request+0x94/0xa0
RSP: 0018:ffff881057eefd60 EFLAGS: 00010012
RAX: ffff881d99e3e8a8 RBX: ffff881d99e3e780 RCX: ffff881d99e3e8a8
RDX: ffff881d99e3e8a8 RSI: ffff881d99e3e780 RDI: ffff881d99e3e780
RBP: ffff881057eefd80 R08: ffff881057eefe90 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: ffff881057f92338
R13: 0000000000000000 R14: ffff881057f92338 R15: ffff883058188000
FS: 0000000000000000(0000) GS:ffff880040200000(0000) knlGS:0000000000000000
CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: 00000000006d3ec0 CR3: 000000302cd7d000 CR4: 00000000000406b0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process scsi_eh_0 (pid: 491, threadinfo ffff881057eee000, task ffff881057e29540)
Stack:
0000000000001057 0000000000000286 ffff8810275efdc0 ffff881057f16000
<0> ffff881057eefdd0 ffffffff81362323 ffff881057eefe20 ffffffff8135f393
<0> ffff881057e29af8 ffff8810275efdc0 ffff881057eefe78 ffff881057eefe90
Call Trace:
[<ffffffff81362323>] __scsi_queue_insert+0xa3/0x150
[<ffffffff8135f393>] ? scsi_eh_ready_devs+0x5e3/0x850
[<ffffffff81362a23>] scsi_queue_insert+0x13/0x20
[<ffffffff8135e4d4>] scsi_eh_flush_done_q+0x104/0x160
[<ffffffff8135fb6b>] scsi_error_handler+0x35b/0x660
[<ffffffff8135f810>] ? scsi_error_handler+0x0/0x660
[<ffffffff810908c6>] kthread+0x96/0xa0
[<ffffffff8100c14a>] child_rip+0xa/0x20
[<ffffffff81090830>] ? kthread+0x0/0xa0
[<ffffffff8100c140>] ? child_rip+0x0/0x20
Code: 00 00 eb d1 4c 8b 2d 3c 8f 97 00 4d 85 ed 74 bf 49 8b 45 00 49 83 c5 08 48 89 de 4c 89 e7 ff d0 49 8b 45 00 48 85 c0 75 eb eb a4 <0f> 0b eb fe 0f 1f 84 00 00 00 00 00 55 48 89 e5 0f 1f 44 00 00
RIP [<ffffffff8124e424>] blk_requeue_request+0x94/0xa0
RSP <ffff881057eefd60>
The RIP is this line:
BUG_ON(blk_queued_rq(rq));
After digging through the code, I think there may be a race between the
request completion and the timer handler running.
A timer is started for each request put on the device's queue (see
blk_start_request->blk_add_timer). If the request does not complete
before the timer expires, the timer handler (blk_rq_timed_out_timer)
will mark the request complete atomically:
static inline int blk_mark_rq_complete(struct request *rq)
{
return test_and_set_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags);
}
and then call blk_rq_timed_out. The latter function will call
scsi_times_out, which will return one of BLK_EH_HANDLED,
BLK_EH_RESET_TIMER or BLK_EH_NOT_HANDLED. If BLK_EH_RESET_TIMER is
returned, blk_clear_rq_complete is called, and blk_add_timer is again
called to simply wait longer for the request to complete.
Now, if the request happens to complete while this is going on, what
happens? Given that we know the completion handler will bail if it
finds the REQ_ATOM_COMPLETE bit set, we need to focus on the completion
handler running after that bit is cleared. So, from the above
paragraph, after the call to blk_clear_rq_complete. If the completion
sets REQ_ATOM_COMPLETE before the BUG_ON in blk_add_timer, we go boom
there (I haven't seen this in the cores). Next, if we get the
completion before the call to list_add_tail, then the timer will
eventually fire for an old req, which may either be freed or reallocated
(there is evidence that this might be the case). Finally, if the
completion comes in *after* the addition to the timeout list, I think
it's harmless. The request will be removed from the timeout list,
req_atom_complete will be set, and all will be well.
This will only actually explain the coredumps *IF* the request
structure was freed, reallocated *and* queued before the error handler
thread had a chance to process it. That is possible, but it may make
sense to keep digging for another race. I think that if this is what
was happening, we would see other instances of this problem showing up
as null pointer or garbage pointer dereferences, for example when the
request structure was not re-used. It looks like we actually do run
into that situation in other reports.
This patch moves the BUG_ON(test_bit(REQ_ATOM_COMPLETE,
&req->atomic_flags)); from blk_add_timer to the only caller that could
trip over it (blk_start_request). It then inverts the calls to
blk_clear_rq_complete and blk_add_timer in blk_rq_timed_out to address
the race. I've boot tested this patch, but nothing more.
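The resulting order in the BLK_EH_RESET_TIMER case is roughly:

	case BLK_EH_RESET_TIMER:
		/* Sketch: re-arm the timer before the request is allowed to complete again. */
		blk_add_timer(req);
		blk_clear_rq_complete(req);
		break;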
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Acked-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Update logging with:
- prefix with the module name
- add '\n' at the end
- test_pr_* removed
Change-Id: I465c9809def9d294dcbb3f7cf7f474c189f5fdbf
Signed-off-by: Konstantin Dorfman <kdorfman@codeaurora.org>
This reverts commit f97d4f6148.
Although this check prevents a NULL reference, it hides the real problem.
Requests that wish to avoid statistics updates have to disable the
REQ_IO_STAT flag; otherwise req->part is expected to be initialized.
Change-Id: I680b95ab9aa668612d948770347929ffde30aeab
Signed-off-by: Dolev Raviv <draviv@codeaurora.org>
The flag REQ_IO_STAT is enabled by default; this assumes statistics are
initialized and might cause NULL references in the kernel. To avoid this,
the flag is cleared in the request and stats are not updated.
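A minimal sketch of the approach (where exactly the flag is cleared is an assumption):

	/* Sketch: skip I/O accounting for a request whose statistics are not initialized. */
	rq->cmd_flags &= ~REQ_IO_STAT;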
Change-Id: I6a1890dde51dfa8ffdd376b13f4466c9db0ae05b
Signed-off-by: Dolev Raviv <draviv@codeaurora.org>
The block layer accesses req->part without checking for NULL
pointer access; this happens when statistics are not fully
initialized. Preventing the NULL pointer access affects only
the statistics update.
Change-Id: I45c91c074ecec1c3849f4f36185edcc6db35383c
Signed-off-by: Yaniv Gardi <ygardi@codeaurora.org>
Signed-off-by: Dolev Raviv <draviv@codeaurora.org>
The test will verify the correctness of a sequential data pattern
written to the device while new data (with the same pattern) is
written simultaneously.
First, this test will run a long sequential write scenario.
This first stage will write the pattern that will be read
later. Second, sequential read requests will read and
compare the same data. The second-stage reads will be issued in
parallel to write requests with the same LBA and size.
NOTE: The test requires a long timeout.
The purpose of this test is to mix read and write requests on the same
LBA while checking for read data correctness.
Change-Id: I6a437ce689b66233af3055d07a7f62f1e7b40765
Signed-off-by: Dolev Raviv <draviv@codeaurora.org>
Introduce a new callback 'check_test_completion_fn' to the test-iosched
framework. This callback is necessary to determine whether a test has
completed in situations where the request queue is empty but the
test has not completed.
Change-Id: I60bd8cccffacab11a5a7cba78caccf53fea3e1d8
Signed-off-by: Dolev Raviv <draviv@codeaurora.org>
Sometimes, even though the block device is suspended by the block layer,
the low-level driver might want to queue PM requests to the device.
Allow such requests to get peeked, as blk_pm_add_request() has already
added them to the I/O scheduler; otherwise the request would be stuck forever
in the I/O scheduler without being fetched by the driver.
Change-Id: I353943a7008ea1d92ff825d220cad1828fe37c27
Signed-off-by: Sujit Reddy Thumma <sthumma@codeaurora.org>
commit f3cff25f05f2ac29b2ee355e611b0657482f6f1d upstream.
'samples' is a 64-bit operand, but do_div()'s second parameter is 32 bits.
do_div() silently truncates the high 32 bits and the calculated result
is invalid.
If the low 32 bits of 'samples' are zero, do_div() produces a
kernel crash.
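A condensed sketch of the fix: use the 64-by-64 divide helper instead of do_div(), whose divisor is only 32 bits:

	/* Sketch: div64_u64() keeps the full 64-bit divisor; do_div() would truncate it. */
	v = div64_u64(v, samples);	/* was: do_div(v, samples) */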
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Jonghwan Choi <jhbird.choi@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This reverts commit b410a82118cdaa1dc92759e7995c20dcce0d1f1a.
The reverted commit was identified as the cause of the FS error
mentioned in the CR below.
It is reverted until further analysis of the root cause of the FS error.
Change-Id: Ia75216de8012a2491b87f33e8c21f75592d87c80
CRs-fixed: 531257
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
When the scheduler reports to the block layer that there is an urgent
request pending, the device driver may decide to stop the transmission
of the current request in order to handle the urgent one. This is done
in order to reduce the latency of an urgent request. For example, a
long WRITE may be stopped to handle an urgent READ.
Change-Id: I3072b8a1423870fed9c04c28d93caaf9557a7b89
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
This test adds the ability to test the UFS task management feature
in the driver. It loads the queue with requests in order to allow
task management to operate at full capacity.
Modify test-iosched infrastructure to support the new tests:
- expose check_test_completion()
Note: we submit 16-bio requests since the current HW is very slow
and we don't want to exceed the timeout duration.
Change-Id: I8ee752cba3c6838d8edc05747fa0288c4b347ef6
Signed-off-by: Dolev Raviv <draviv@codeaurora.org>
Signed-off-by: Lee Susman <lsusman@codeaurora.org>
Change time measurements in long_sequential_test from jiffies to ktime,
and make the relevant change in test-iosched infrastructure.
In long_sequential_test we measure throughput, and the jiffies resolution
is not sensitive enough for this calculation.
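A minimal sketch of the ktime-based measurement (variable names are illustrative):

	/* Sketch: nanosecond-resolution timing instead of jiffies. */
	ktime_t start = ktime_get();
	/* ... run the long sequential transfer ... */
	s64 delta_us = ktime_us_delta(ktime_get(), start);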
Change-Id: If7c9a03c687f61996609c014e056bcd7132b9012
Signed-off-by: Lee Susman <lsusman@codeaurora.org>
It is possible for an URGENT request to be requeued/reinserted if it was
fetched during the creation of a packed list. This edge case is rare and is
not handled at the moment.
This patch changes the messages notifying of the above to debug level
(instead of error) in order to keep the dmesg log clean.
Change-Id: Ie8bc067e61559a6f702077b95c5dbcc426404232
Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>