scsi: use dma_get_cache_alignment() as minimum DMA alignment

commit 90addc6b3c9cda0146fbd62a08e234c2b224a80c upstream.

In non-coherent DMA mode, the kernel uses cache flushing operations to
maintain I/O coherency, so SCSI's block queue should be aligned to the
value returned by dma_get_cache_alignment().  Otherwise, if a DMA buffer
and a kernel structure share the same cache line, and if the kernel
structure has dirty data, a cache invalidate (with no writeback) will
cause data corruption.
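To make that failure mode concrete, here is a minimal, hypothetical driver
sketch; it is not part of this patch, and the structure and helper names are
invented for illustration only:

#include <linux/dma-mapping.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/types.h>

/*
 * Hypothetical layout that is unsafe with non-coherent DMA: if "flags" and
 * "rx_buf" fall into the same cache line, a CPU store to "flags" dirties
 * that line; the cache invalidate performed when "rx_buf" is mapped for
 * DMA_FROM_DEVICE discards the store, or a later writeback of the dirty
 * line overwrites data the device has just DMA'd into "rx_buf".
 */
struct unsafe_layout {
	unsigned long flags;	/* written by the CPU */
	u8 rx_buf[32];		/* written by the device via DMA */
};

/*
 * Safer: give the DMA buffer its own allocation and round its size up to
 * the platform's DMA/cache alignment so it cannot share a cache line with
 * unrelated kernel data.
 */
static void *alloc_dma_rx_buf(size_t len, gfp_t gfp)
{
	return kmalloc(ALIGN(len, dma_get_cache_alignment()), gfp);
}

On architectures that define ARCH_DMA_MINALIGN, kmalloc() itself typically
honours that alignment already; the explicit rounding above just makes the
requirement visible in the sketch.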

Signed-off-by: Huacai Chen <chenhc@lemote.com>
[hch: rebased and updated the comment and changelog]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>

@@ -1666,11 +1666,13 @@ struct request_queue *__scsi_alloc_queue(struct Scsi_Host *shost,
 	q->limits.cluster = 0;
 
 	/*
-	 * set a reasonable default alignment on word boundaries: the
-	 * host and device may alter it using
-	 * blk_queue_update_dma_alignment() later.
+	 * Set a reasonable default alignment: The larger of 32-byte (dword),
+	 * which is a common minimum for HBAs, and the minimum DMA alignment,
+	 * which is set by the platform.
+	 *
+	 * Devices that require a bigger alignment can increase it later.
 	 */
-	blk_queue_dma_alignment(q, 0x03);
+	blk_queue_dma_alignment(q, max(4, dma_get_cache_alignment()) - 1);
 
 	return q;
 }
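
For reference, blk_queue_dma_alignment() takes an alignment mask (alignment
minus one), so the new expression degrades gracefully on cache-coherent
platforms.  The platforms below are assumed examples, not taken from the
patch:

/*
 * blk_queue_dma_alignment() expects a mask, i.e. alignment - 1.
 *
 *   Coherent platform (no ARCH_DMA_MINALIGN), dma_get_cache_alignment() == 1:
 *       max(4, 1) - 1  == 3    -> 0x03, the old 4-byte (dword) default
 *
 *   Non-coherent platform with 32-byte cache lines, dma_get_cache_alignment() == 32:
 *       max(4, 32) - 1 == 31   -> 0x1f, full cache-line alignment
 */

Low-level drivers that need a stricter alignment can still raise it
afterwards with blk_queue_update_dma_alignment(), which only ever increases
the queue's alignment mask.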