crypto: qcom: Add support for Inline Crypto Engine

Storage hardware can have an embedded crypto engine, which greatly reduces
the degradation in IO performance when crypto operations are performed on
data. Add support for the Inline Crypto Engine (ICE).

Change-Id: I73ded1326c68e63aa3320ea9c9c6dfaaf2d95cbf
Signed-off-by: Dinesh K Garg <dineshg@codeaurora.org>
Signed-off-by: Noa Rubens <noag@codeaurora.org>
Signed-off-by: Yaniv Gardi <ygardi@codeaurora.org>
Dinesh K Garg 2014-10-07 11:01:42 -07:00
parent dca34e2b69
commit 384ae0057b
7 changed files with 1272 additions and 0 deletions

@@ -0,0 +1,235 @@
Introduction:
=============
Storage encryption has been one of the most requested features from a
security point of view. The QTI storage encryption solution has so far used a
general purpose crypto engine. While that kind of solution provides a decent
amount of performance, it falls short as storage speeds keep improving
significantly. To overcome this performance degradation, newer chips embed an
Inline Crypto Engine (ICE) into the storage device. ICE is designed to meet
the line speed of storage devices.
Hardware Description
====================
ICE is a HW block that is embedded into storage devices such as UFS/eMMC. By
default, ICE works in bypass mode, i.e. the ICE HW does not perform any crypto
operation on the data processed by the storage device. If required, ICE can be
configured to perform crypto operations in one direction (i.e. either
encryption or decryption) or in both directions (both encryption & decryption).
When a switch between operation modes (plain to crypto or crypto to plain) is
desired for a particular partition, SW must complete all transactions for that
partition before switching the crypto mode, i.e. no crypto, one-direction
crypto or both-direction crypto operation. Requests for other partitions are
not impacted by the crypto mode switch.
The ICE HW currently supports the AES 128/256-bit ECB & XTS encryption
algorithms. Keys for crypto operations are loaded by SW. Keys are stored in a
lookup table (LUT) located inside the ICE HW. A maximum of 32 keys can be
loaded into the ICE key LUT. A key inside the LUT is referred to by its key
index.
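For reference, the key index and algorithm selection travel to the driver in
the ice_crypto_setting structure added by this change (include/crypto/ice.h).
A minimal, purely illustrative example:

    struct ice_crypto_setting setting = {
        .algo_mode = ICE_CRYPTO_ALGO_MODE_AES_XTS,
        .key_size  = ICE_CRYPTO_KEY_SIZE_256,
        .key_mode  = ICE_CRYPTO_USE_LUT_SW_KEY,
        .key_index = 4,    /* illustrative slot out of the 32-entry key LUT */
    };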
SW Description
==============
ICE HW registers are divided into two groups: those which can be accessed only
by the secure side, i.e. TZ, and those which can also be accessed by the
non-secure side, such as HLOS. This requires the ICE driver to be split into
two pieces: one running in TZ space and another in HLOS space.
The ICE driver in TZ configures keys as requested by the HLOS side.
The ICE driver on the HLOS side is responsible for initialization of the ICE HW.
SW Architecture Diagram
=======================
Following are all the components involved in the ICE driver for control path:
+++++++++++++++++++++++++++++++++++++++++
+ App layer +
+++++++++++++++++++++++++++++++++++++++++
+ System layer +
+ ++++++++ +++++++ +
+ + VOLD + + PFM + +
+ ++++++++ +++++++ +
+ || || +
+ || || +
+ \/ \/ +
+ ++++++++++++++ +
+ + LibQSEECom + +
+ ++++++++++++++ +
+++++++++++++++++++++++++++++++++++++++++
+ Kernel + +++++++++++++++++
+ + + KMS +
+ +++++++ +++++++++++ +++++++++++ + +++++++++++++++++
+ + ICE + + Storage + + QSEECom + + + ICE Driver +
+++++++++++++++++++++++++++++++++++++++++ <===> +++++++++++++++++
|| ||
|| ||
\/ \/
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Storage Device +
+ ++++++++++++++ +
+ + ICE HW + +
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Use Cases:
----------
a) Device bootup
The ICE HW is detected during bootup and the corresponding probe function is
called. The ICE driver parses its data from the device tree node. ICE HW and
storage HW are tightly coupled; storage device probing depends on ICE device
probing. The ICE driver configures all the required registers to put the ICE
HW in bypass mode.
b) Configuring keys
Currently, there are a couple of use cases for configuring keys.
1) Full Disk Encryption (FDE)
The system layer (VOLD), on invocation from the apps layer, calls libqseecom
to create the encryption key. Libqseecom calls the qseecom driver to
communicate with the KMS module on the secure side, i.e. TZ. KMS calls the ICE
driver on the TZ side to create and set the keys in the ICE HW. At the end of
the transaction, VOLD holds the index into the key LUT where the encryption
key is present.
2) Per File Encryption (PFE)
The Per File Manager (PFM) calls the QSEECom API to create the key. PFM has a
peer component (PFT) at the kernel layer which gets the corresponding key
index from PFM.
Following are all the components involved in the ICE driver for data path:
+++++++++++++++++++++++++++++++++++++++++
+ App layer +
+++++++++++++++++++++++++++++++++++++++++
+ VFS +
+---------------------------------------+
+ File System (EXT4) +
+---------------------------------------+
+ Block Layer +
+ --------------------------------------+
+ +++++++ +
+ dm-req-crypt => + PFT + +
+ +++++++ +
+ +
+---------------------------------------+
+ +++++++++++ +++++++ +
+ + Storage + + ICE + +
+++++++++++++++++++++++++++++++++++++++++
+ || +
+ || (Storage Req with +
+ \/ ICE parameters ) +
+++++++++++++++++++++++++++++++++++++++++
+ Storage Device +
+ ++++++++++++++ +
+ + ICE HW + +
+++++++++++++++++++++++++++++++++++++++++
c) Data transaction
Once the crypto key has been configured, VOLD/PFM creates a device mapping for
the data partition. As part of device mapping, VOLD passes the key index,
crypto algorithm, mode and key length to the dm layer. In the PFE case, keys
are provided by PFT as and when a request is processed by dm-req-crypt. When
an application needs to read/write data, the IO goes through the DM layer,
which adds the crypto information provided by VOLD/PFT to the request. For
each request, the storage driver asks the ICE driver to configure the crypto
part of the request. The ICE driver extracts the crypto data from the request
structure and provides it to the storage driver, which finally dispatches the
request to the storage device.
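A rough sketch of that hand-off on the storage driver side (ice_ops, ice_pdev,
host and the descriptor helper are hypothetical placeholders; the ops and
structures come from include/crypto/ice.h in this change):

    struct ice_data_setting setting;
    int ret;

    ret = ice_ops->config(ice_pdev, req, &setting);
    if (ret)
        return ret;
    if (!setting.encr_bypass || !setting.decr_bypass)
        /* program key_index, algo_mode and key_size into the
         * transfer descriptor before dispatching the request */
        storage_hw_set_ice_params(host, &setting.crypto_data);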
d) Error Handling
Due to issue #1 mentioned in "Known Issues", the ICE driver does not register
for any interrupt. However, it enables the interrupt sources of the ICE HW.
After each data transaction, the storage driver receives a transaction
completion event. As part of event handling, the storage driver calls the ICE
driver to check whether any ICE interrupt status bit is set. If yes, the
storage driver returns an error to the upper layer. Error handling will be
changed in future chips.
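A sketch of that completion-path check (again, ice_ops and ice_pdev are
hypothetical placeholders for the storage driver's handles):

    /* in the storage driver's transfer-completion handler */
    if (ice_ops->status(ice_pdev) > 0)
        return -EIO;    /* an ICE interrupt source fired; report it upward */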
Interfaces
==========
The ICE driver exposes interfaces for the storage driver to:
1. Get the global instance of the ICE driver
2. Get the implemented interfaces of a particular ICE instance
3. Initialize the ICE HW
4. Reset the ICE HW
5. Resume/Suspend the ICE HW
6. Get the crypto configuration for a storage data request
7. Check whether the current data transaction has generated any interrupt
Driver Parameters
=================
This driver is built and statically linked into the kernel; therefore,
there are no module parameters supported by this driver.
There are no kernel command line parameters supported by this driver.
Power Management
================
The ICE driver does not do power management on its own as it is part of the
storage hardware. Whenever the storage driver receives a request for power
collapse/suspend/resume, it calls the ICE driver through the APIs the ICE
driver exposes for the storage HW. During power collapse or reset, the ICE HW
wipes its crypto configuration data. When the ICE driver receives a request to
resume, it asks the ICE driver on the TZ side to restore the configuration.
The ICE driver does not do anything as part of a power collapse or suspend
event.
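For illustration, a storage driver's own power hooks would simply chain into
these APIs; my_host and its fields are hypothetical, only the ops below are
defined by this change:

    static int storage_host_ice_suspend(struct my_host *host)
    {
        return host->ice_ops->suspend(host->ice_pdev);
    }

    static int storage_host_ice_resume(struct my_host *host)
    {
        /* key restore happens asynchronously via TZ; completion is
         * reported through the success callback registered at init */
        return host->ice_ops->resume(host->ice_pdev);
    }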
Interface:
==========
The ICE driver exposes the following APIs for the storage driver to use:
int (*init)(struct platform_device *, void *, ice_success_cb, ice_error_cb);
-- This function is invoked by the storage controller during its
initialization. The storage controller provides success and error callbacks
which are invoked asynchronously once ICE HW init is done.
int (*reset)(struct platform_device *);
-- ICE HW reset as part of storage controller reset. When the storage
controller receives a reset command, it calls reset on the ICE HW. As of now,
the ICE HW does not need to do anything as part of reset.
int (*resume)(struct platform_device *);
-- The ICE HW, while going through reset, wipes all crypto keys and other data
from the HW. The ICE driver reconfigures that data as part of the resume
operation.
int (*suspend)(struct platform_device *);
-- This API is called by the storage driver when the storage device is going
into suspend mode. As of today, the ICE driver does not do anything to handle
suspend.
int (*config)(struct platform_device *, struct request *, struct ice_data_setting *);
-- The storage driver calls this interface to get all crypto data required to
perform the crypto operation on a request.
int (*status)(struct platform_device *);
-- The storage driver calls this interface to check whether the previous data
transfer generated any error.
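Putting the above together, a storage driver would typically bind to ICE
roughly as follows. This is only a sketch: the "ice" phandle property name,
the host pointer and the callback names are illustrative, not mandated by
this change:

    struct device_node *node = of_parse_phandle(dev->of_node, "ice", 0);
    struct platform_device *ice_pdev = qcom_ice_get_pdevice(node);
    struct qcom_ice_variant_ops *ice_ops = qcom_ice_get_variant_ops(node);

    if (IS_ERR(ice_pdev))
        return PTR_ERR(ice_pdev);    /* -EPROBE_DEFER until ICE probes */
    if (ice_ops)
        ice_ops->init(ice_pdev, host, my_ice_success_cb, my_ice_error_cb);

Here my_ice_success_cb and my_ice_error_cb match the ice_success_cb and
ice_error_cb typedefs from include/crypto/ice.h.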
Config options
==============
This driver is enabled by the kernel config option CONFIG_CRYPTO_DEV_QCOM_ICE.
Dependencies
============
The ICE driver depends on the corresponding ICE driver on the TZ side to
function properly.
Known Issues
============
1. ICE HW emits 0s even if it has generated an interrupt
This issue has a significant impact on how ICE interrupts are handled.
Currently, the ICE driver does not register for any of the ICE interrupts but
enables their sources. When the storage driver asks for the interrupt status,
the ICE driver reads and clears the interrupt status and provides the read
status to the storage driver. This mechanism, though not optimal, prevents
filesystem corruption.
This issue has been fixed in newer chips.
2. ICE HW wipes all crypto data during power collapse
This issue necessitates that the ICE driver on the TZ side store the crypto
material, which is not required in the case of a general purpose crypto
engine.
This issue has been fixed in newer chips.
Further Improvements
====================
Currently, due to the PFE use case, the ICE driver depends on dm-req-crypt to
provide the keys as part of the request structure. This couples the ICE driver
with the dm-req-crypt based solution. It is under discussion to expose
IOCTL-based and registration-based interface APIs from the ICE driver. The ICE
driver would use these two interfaces to find out whether any key exists for
the current request. If yes, it would choose the right key index received from
the IOCTL-based or registration-based APIs. If not, it would not set any
crypto parameters in the request.

@@ -0,0 +1,18 @@
* Inline Crypto Engine (ICE)
Required properties:
- compatible : should be "qcom,ice"
- reg : <register mapping>
Optional properties:
- interrupt-names : name describing the interrupts for ICE IRQ
- interrupts : <interrupt mapping for ICE IRQ>
Example:
ufs_ice: ufsice@fc5a0000 {
compatible = "qcom,ice";
reg = <0xfc5a0000 0x8000>;
interrupt-names = "ufs_ice_nonsec_level_irq", "ufs_ice_sec_level_irq";
interrupts = <0 258 0>, <0 257 0>;
status = "disabled";
};

@@ -356,6 +356,16 @@ config CRYPTO_DEV_OTA_CRYPTO
To compile this driver as a module, choose M here: the
module will be called ota_crypto.
config CRYPTO_DEV_QCOM_ICE
tristate "Inline Crypto Module"
default n
help
This driver supports Inline Crypto Engine for QTI chipsets, MSM8994
and later, to accelerate crypto operations for storage needs.
To compile this driver as a module, choose M here: the
module will be called ice.
config CRYPTO_DEV_TEGRA_AES
tristate "Support for TEGRA AES hw engine"
depends on ARCH_TEGRA

@@ -18,3 +18,4 @@ ifeq ($(CONFIG_FIPS_ENABLE), y)
endif
obj-$(CONFIG_CRYPTO_DEV_QCRYPTO) += qcrypto.o
obj-$(CONFIG_CRYPTO_DEV_OTA_CRYPTO) += ota_crypto.o
obj-$(CONFIG_CRYPTO_DEV_QCOM_ICE) += ice.o

drivers/crypto/msm/ice.c
@@ -0,0 +1,807 @@
/* Copyright (c) 2014, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/async.h>
#include <linux/of.h>
#include <soc/qcom/scm.h>
#include <linux/device-mapper.h>
#include <linux/blk_types.h>
#include <linux/blkdev.h>
#include <crypto/ice.h>
#include "iceregs.h"
#define SCM_IO_READ 0x1
#define SCM_IO_WRITE 0x2
#define TZ_SYSCALL_CREATE_SMC_ID(o, s, f) \
((uint32_t)((((o & 0x3f) << 24) | (s & 0xff) << 8) | (f & 0xff)))
#define TZ_OWNER_QSEE_OS 50
#define TZ_SVC_KEYSTORE 5 /* Keystore management */
#define TZ_OS_KS_RESTORE_KEY_ID \
TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_QSEE_OS, TZ_SVC_KEYSTORE, 0x06)
#define TZ_SYSCALL_CREATE_PARAM_ID_0 0
#define TZ_OS_KS_RESTORE_KEY_ID_PARAM_ID \
TZ_SYSCALL_CREATE_PARAM_ID_0
const struct qcom_ice_variant_ops qcom_ice_ops;
static LIST_HEAD(ice_devices);
/*
* ICE HW device structure.
*/
struct ice_device {
struct list_head list;
struct device *pdev;
void __iomem *mmio;
int irq;
bool is_irq_enabled;
bool is_ice_enabled;
bool is_ice_disable_fuse_blown;
bool is_clear_irq_pending;
ice_success_cb success_cb;
ice_error_cb error_cb;
void *host_controller_data; /* UFS/EMMC/other? */
spinlock_t lock;
};
static void qcom_ice_low_power_mode_enable(struct ice_device *ice_dev)
{
/*
* Enable low power mode sequence
* [0]-0, [1]-0, [2]-0, [3]-E, [4]-0, [5]-0, [6]-0, [7]-0
*/
qcom_ice_writel(ice_dev, 0x7000, QCOM_ICE_REGS_ADVANCED_CONTROL);
/*
* Ensure the previous instruction was completed before issuing the next
* ICE initialization/optimization instruction
*/
mb();
}
static void qcom_ice_enable_test_bus_config(struct ice_device *ice_dev)
{
/*
* Configure & enable ICE_TEST_BUS_REG to reflect ICE intr lines
* MAIN_TEST_BUS_SELECTOR = 0 (ICE_CONFIG)
* TEST_BUS_REG_EN = 1 (ENABLE)
*/
u32 regval;
regval = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_TEST_BUS_CONTROL);
regval &= 0x0FFFFFFF;
/* TBD: replace 0x2 with define in iceregs.h */
regval |= 0x2;
qcom_ice_writel(ice_dev, regval, QCOM_ICE_REGS_TEST_BUS_CONTROL);
/*
* Ensure the previous instruction was completed before issuing the next
* ICE initialization/optimization instruction
*/
mb();
}
static void qcom_ice_optimization_enable(struct ice_device *ice_dev)
{
u32 regval;
regval = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_ADVANCED_CONTROL);
regval |= 0x3F007100;
/* ICE Optimizations Enable Sequence */
udelay(5);
/* [0]-0, [1]-0, [2]-8, [3]-E, [4]-0, [5]-0, [6]-F, [7]-A */
qcom_ice_writel(ice_dev, regval, QCOM_ICE_REGS_ADVANCED_CONTROL);
/*
* Ensure the previous instruction was completed before issuing the next
* ICE initialization/optimization instruction
*/
mb();
/* ICE HPG requires sleep before writing */
udelay(5);
qcom_ice_writel(ice_dev, 0xF, QCOM_ICE_REGS_ENDIAN_SWAP);
/*
* Ensure the previous instruction was completed before issuing the next
* ICE initialization/optimization instruction
*/
mb();
}
static void qcom_ice_enable(struct ice_device *ice_dev)
{
unsigned int reg;
/*
* To enable ICE, perform following
* 1. Set IGNORE_CONTROLLER_RESET to USE in ICE_RESET register
* 2. Disable GLOBAL_BYPASS bit in ICE_CONTROL register
*/
reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_RESET);
/* ~0x100 => CONTROLLER_RESET = RESET_ON
* IGNORE_CONTROLLER_RESET = USE
* ICE_RESET = RESET_ON
*/
reg &= ~0x100;
qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_RESET);
/*
* Ensure the previous instruction was completed before issuing the next
* ICE initialization/optimization instruction
*/
mb();
reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_CONTROL);
/*
* ~0x7 => DECR_BYPASS = BYPASS_DISABLE
* ENCR_BYPASS = BYPASS_DISABLE
* GLOBAL_BYPASS = BYPASS_DISABLE
*/
reg &= ~0x7;
qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_CONTROL);
/*
* Ensure the previous instruction was completed before issuing the next
* ICE initialization/optimization instruction
*/
mb();
}
static int qcom_ice_verify_ice(struct ice_device *ice_dev)
{
unsigned int rev;
unsigned int maj_rev, min_rev, step_rev;
rev = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_VERSION);
maj_rev = (rev & ICE_CORE_MAJOR_REV_MASK) >> ICE_CORE_MAJOR_REV;
min_rev = (rev & ICE_CORE_MINOR_REV_MASK) >> ICE_CORE_MINOR_REV;
step_rev = (rev & ICE_CORE_STEP_REV_MASK) >> ICE_CORE_STEP_REV;
if (maj_rev != ICE_CORE_CURRENT_MAJOR_VERSION) {
pr_err("%s: Unknown QC ICE device at 0x%lu, rev %d.%d.%d\n",
__func__, (unsigned long)ice_dev->mmio,
maj_rev, min_rev, step_rev);
return -EIO;
}
dev_info(ice_dev->pdev, "QC ICE %d.%d.%d device found @0x%p\n",
maj_rev, min_rev, step_rev,
ice_dev->mmio);
return 0;
}
static void qcom_ice_enable_intr(struct ice_device *ice_dev)
{
unsigned reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_NON_SEC_IRQ_MASK);
reg &= ~QCOM_ICE_NON_SEC_IRQ_MASK;
qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_NON_SEC_IRQ_MASK);
/*
* Ensure the previous instruction was completed before issuing the next
* ICE initialization/optimization instruction
*/
mb();
}
static void qcom_ice_disable_intr(struct ice_device *ice_dev)
{
unsigned reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_NON_SEC_IRQ_MASK);
reg |= QCOM_ICE_NON_SEC_IRQ_MASK;
qcom_ice_writel(ice_dev, reg, QCOM_ICE_REGS_NON_SEC_IRQ_MASK);
/*
* Ensure the previous instruction was completed before issuing the next
* ICE initialization/optimization instruction
*/
mb();
}
static int qcom_ice_clear_irq(struct ice_device *ice_dev)
{
qcom_ice_writel(ice_dev, QCOM_ICE_NON_SEC_IRQ_MASK,
QCOM_ICE_REGS_NON_SEC_IRQ_CLR);
/*
* Ensure the previous instruction was completed before issuing the next
* ICE initialization/optimization instruction
*/
mb();
ice_dev->is_clear_irq_pending = false;
return 0;
}
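/* Parse the DT-provided memory resource and map the ICE register space */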
static int qcom_ice_get_device_tree_data(struct platform_device *pdev,
struct ice_device *ice_dev)
{
struct resource *res;
struct device *dev = &pdev->dev;
int rc = 0;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
pr_err("%s: Error = %d No memory available for IORESOURCE\n",
__func__, rc);
return -ENOMEM;
}
ice_dev->mmio = devm_ioremap_resource(dev, res);
if (IS_ERR(ice_dev->mmio)) {
rc = PTR_ERR(ice_dev->mmio);
pr_err("%s: Error = %d mapping ICE io memory\n",
__func__, rc);
goto out;
}
out:
return rc;
}
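/*
 * Probe one ICE instance: allocate its device structure, map its registers
 * from DT and add it to the global ice_devices list. The HW itself is left
 * disabled (bypass) until the first crypto request arrives.
 */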
static int qcom_ice_probe(struct platform_device *pdev)
{
struct ice_device *ice_dev;
int rc = 0;
if (!pdev) {
pr_err("%s: Invalid platform_device passed\n",
__func__);
return -EINVAL;
}
ice_dev = kzalloc(sizeof(struct ice_device), GFP_KERNEL);
if (!ice_dev) {
rc = -ENOMEM;
pr_err("%s: Error %d allocating memory for ICE device:\n",
__func__, rc);
goto out;
}
ice_dev->pdev = &pdev->dev;
if (!ice_dev->pdev) {
rc = -EINVAL;
pr_err("%s: Invalid device passed in platform_device\n",
__func__);
goto err_ice_dev;
}
if (pdev->dev.of_node)
rc = qcom_ice_get_device_tree_data(pdev, ice_dev);
else {
rc = -EINVAL;
pr_err("%s: ICE device node not found\n", __func__);
}
if (rc)
goto err_ice_dev;
/*
* If ICE is enabled here, it would be waste of power.
* We would enable ICE when first request for crypto
* operation arrives.
*/
ice_dev->is_ice_enabled = false;
platform_set_drvdata(pdev, ice_dev);
list_add_tail(&ice_dev->list, &ice_devices);
goto out;
err_ice_dev:
kfree(ice_dev);
out:
return rc;
}
static int qcom_ice_remove(struct platform_device *pdev)
{
struct ice_device *ice_dev;
ice_dev = (struct ice_device *)platform_get_drvdata(pdev);
if (!ice_dev)
return 0;
qcom_ice_disable_intr(ice_dev);
device_init_wakeup(&pdev->dev, false);
if (ice_dev->mmio)
iounmap(ice_dev->mmio);
list_del_init(&ice_dev->list);
kfree(ice_dev);
return 1;
}
static int qcom_ice_suspend(struct platform_device *pdev)
{
/* ICE driver does not need to do anything on suspend */
return 0;
}
static int qcom_ice_restore_config(void)
{
struct scm_desc desc = {0};
int ret;
/*
* TZ checks the KEYS_RAM_RESET_COMPLETED status bit before processing the
* restore config command. This avoids two calls from HLOS to TZ: one to
* check the KEYS_RAM_RESET_COMPLETED status bit and a second to restore
* the config
*/
desc.arginfo = TZ_OS_KS_RESTORE_KEY_ID_PARAM_ID;
ret = scm_call2(TZ_OS_KS_RESTORE_KEY_ID, &desc);
if (ret)
pr_err("%s: Error: 0x%x\n", __func__, ret);
return ret;
}
static int qcom_ice_secure_ice_init(struct ice_device *ice_dev)
{
/* We need to enable source for ICE secure interrupts */
int ret = 0;
u32 regval;
regval = scm_call_atomic1(SCM_SVC_TZ, SCM_IO_READ,
(unsigned long)ice_dev->mmio +
QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_MASK);
regval &= ~QCOM_ICE_SEC_IRQ_MASK;
ret = scm_call_atomic2(SCM_SVC_TZ, SCM_IO_WRITE,
(unsigned long)ice_dev->mmio +
QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_MASK, regval);
/*
* Ensure the previous instruction was completed before issuing the next
* ICE initialization/optimization instruction
*/
mb();
if (!ret)
pr_err("%s: failed(0x%x) to init secure ICE config\n",
__func__, ret);
return ret;
}
static int qcom_ice_update_sec_cfg(struct ice_device *ice_dev)
{
int ret = 0, scm_ret = 0;
/* scm command buffer structure */
struct qcom_scm_cmd_buf {
unsigned int device_id;
unsigned int spare;
} cbuf = {0};
/*
* Store dev_id in ice_device structure so that emmc/ufs cases can be
* handled properly
*/
#define RESTORE_SEC_CFG_CMD 0x2
#define ICE_TZ_DEV_ID 20
cbuf.device_id = ICE_TZ_DEV_ID;
ret = scm_restore_sec_cfg(cbuf.device_id, cbuf.spare, &scm_ret);
if (ret || scm_ret) {
pr_err("%s: failed, ret %d scm_ret %d\n",
__func__, ret, scm_ret);
if (!ret)
ret = scm_ret;
}
return ret;
}
static void qcom_ice_finish_init(void *data, async_cookie_t cookie)
{
struct ice_device *ice_dev = data;
unsigned reg;
if (!ice_dev) {
pr_err("%s: Null data received\n", __func__);
return;
}
/*
* It is possible that the ICE device is not probed when the host is probed.
* This would cause the host probe to be deferred. When the host probe is
* deferred, it can cause a power collapse for the host and that can wipe
* the configuration of host & ICE. It is prudent to restore the config
*/
if (qcom_ice_update_sec_cfg(ice_dev)) {
ice_dev->error_cb(ice_dev->host_controller_data,
ICE_ERROR_ICE_TZ_INIT_FAILED);
return;
}
if (qcom_ice_verify_ice(ice_dev)) {
ice_dev->error_cb(ice_dev->host_controller_data,
ICE_ERROR_UNEXPECTED_ICE_DEVICE);
return;
}
/* if ICE_DISABLE_FUSE is blown, return immediately */
reg = qcom_ice_readl(ice_dev, QCOM_ICE_REGS_FUSE_SETTING);
reg &= ICE_FUSE_SETTING_MASK;
if (reg) {
ice_dev->is_ice_disable_fuse_blown = true;
pr_err("%s: Error: ICE_ERROR_HW_DISABLE_FUSE_BLOWN\n",
__func__);
ice_dev->error_cb(ice_dev->host_controller_data,
ICE_ERROR_HW_DISABLE_FUSE_BLOWN);
return;
}
if (!qcom_ice_secure_ice_init(ice_dev)) {
pr_err("%s: Error: ICE_ERROR_ICE_TZ_INIT_FAILED\n", __func__);
ice_dev->error_cb(ice_dev->host_controller_data,
ICE_ERROR_ICE_TZ_INIT_FAILED);
return;
}
qcom_ice_low_power_mode_enable(ice_dev);
/*
* The ICE optimization sequence, qcom_ice_optimization_enable(ice_dev),
* is not required during boot up. It is run when qcom_ice_config() is
* called for the first crypto request.
* However, we need to enable interrupts here.
*/
qcom_ice_enable_intr(ice_dev);
ice_dev->success_cb(ice_dev->host_controller_data,
ICE_INIT_COMPLETION);
return;
}
static int qcom_ice_init(struct platform_device *pdev,
void *host_controller_data,
ice_success_cb success_cb,
ice_error_cb error_cb)
{
/*
* A completion event for the host controller is triggered upon
* initialization completion. When ICE is initialized, it is put into
* global bypass mode. When a request for data transfer is received,
* ICE is enabled for that particular request.
*/
struct ice_device *ice_dev;
ice_dev = platform_get_drvdata(pdev);
if (!ice_dev) {
pr_err("%s: invalid device\n", __func__);
return -EINVAL;
}
ice_dev->success_cb = success_cb;
ice_dev->error_cb = error_cb;
ice_dev->host_controller_data = host_controller_data;
/*
* As ICE init may take time, create an async task to complete rest
* of init
*/
async_schedule(qcom_ice_finish_init, ice_dev);
return 0;
}
EXPORT_SYMBOL(qcom_ice_init);
static void qcom_ice_finish_power_collapse(void *data, async_cookie_t cookie)
{
struct ice_device *ice_dev = data;
if (ice_dev->is_ice_disable_fuse_blown) {
ice_dev->error_cb(ice_dev->host_controller_data,
ICE_ERROR_HW_DISABLE_FUSE_BLOWN);
return;
}
if (ice_dev->is_ice_enabled) {
qcom_ice_low_power_mode_enable(ice_dev);
qcom_ice_enable_test_bus_config(ice_dev);
qcom_ice_optimization_enable(ice_dev);
/*
* When ICE resets, it wipes all keys from the LUT.
* The ICE driver should call TZ to restore the keys
*/
if (qcom_ice_restore_config())
ice_dev->error_cb(ice_dev->host_controller_data,
ICE_ERROR_ICE_KEY_RESTORE_FAILED);
if (ice_dev->is_clear_irq_pending)
qcom_ice_clear_irq(ice_dev);
}
if (ice_dev->success_cb && ice_dev->host_controller_data)
ice_dev->success_cb(ice_dev->host_controller_data,
ICE_RESUME_COMPLETION);
return;
}
static int qcom_ice_resume(struct platform_device *pdev)
{
/*
* ICE is power collapsed when storage controller is power collapsed
* ICE resume function is responsible for:
* ICE HW enabling sequence
* Key restoration
* A completion event should be triggered
* upon resume completion
* Storage driver will be fully operational only
* after receiving this event
*/
struct ice_device *ice_dev;
ice_dev = platform_get_drvdata(pdev);
if (!ice_dev)
return -EINVAL;
async_schedule(qcom_ice_finish_power_collapse, ice_dev);
return 0;
}
EXPORT_SYMBOL(qcom_ice_resume);
static int qcom_ice_reset(struct platform_device *pdev)
{
/*
* There are two ways by which ICE can be reset
* 1. storage driver calls ICE reset before proceeding with its reset
* ICE completes resets sequence and returns to storage driver
* 2. ICE generates QCOM_DBG_OPEN_EVENT interrupt which should cause
* ICE RESET
* The ICE driver listens for KEYS_RAM_RESET_COMPLETED and sends a
* completion notice to the storage driver
*
* Upon storage reset ice reset function will be invoked.
* ICE reset function is responsible for
* - Setting ICE RESET bit
* - ICE HW enabling sequence
* - Key restoration
* - Clear ICE interrupts
* A completion event should be triggered upon reset completion
*/
struct ice_device *ice_dev;
ice_dev = platform_get_drvdata(pdev);
if (!ice_dev) {
pr_err("%s: INVALID ice_dev\n", __func__);
return -EINVAL;
}
ice_dev->is_clear_irq_pending = true;
return qcom_ice_resume(pdev);
}
EXPORT_SYMBOL(qcom_ice_reset);
static int qcom_ice_config(struct platform_device *pdev, struct request *req,
struct ice_data_setting *setting)
{
struct ice_crypto_setting *crypto_data;
struct ice_device *ice_dev;
union map_info *info;
if (!pdev || !req || !setting) {
pr_err("%s: Invalid params passed\n", __func__);
return -EINVAL;
}
/*
* It is not an error to have a request with no bio.
* Such requests must bypass ICE, so first set bypass and then
* return if no bio is available in the request
*/
if (setting) {
setting->encr_bypass = true;
setting->decr_bypass = true;
}
if (!req->bio) {
/* It is not an error to have a request with no bio */
return 0;
}
/*
* info field in req->end_io_data could be used by multiple dm or
* non-dm entities. To ensure that we are running operation on dm
* based request, check BIO_DONT_FREE flag
*/
if (bio_flagged(req->bio, BIO_DONTFREE)) {
info = dm_get_rq_mapinfo(req);
if (!info) {
pr_err("%s info not available in request\n", __func__);
return 0;
}
ice_dev = platform_get_drvdata(pdev);
crypto_data = (struct ice_crypto_setting *)info->ptr;
if (!ice_dev)
return 0;
if (ice_dev->is_ice_disable_fuse_blown) {
pr_err("%s ICE disabled fuse is blown\n", __func__);
return -ENODEV;
}
if (crypto_data->key_index >= 0) {
if (!ice_dev->is_ice_enabled) {
/*
* When we get here, ICE is already in low power mode.
* Keys have already been set by TrustZone, so we need to
* run the ICE optimization sequence and enable ICE once.
*/
qcom_ice_optimization_enable(ice_dev);
qcom_ice_enable(ice_dev);
qcom_ice_enable_test_bus_config(ice_dev);
ice_dev->is_ice_enabled = true;
}
memcpy(&setting->crypto_data, crypto_data,
sizeof(struct ice_crypto_setting));
if (rq_data_dir(req) == WRITE)
setting->encr_bypass = false;
else if (rq_data_dir(req) == READ)
setting->decr_bypass = false;
else {
/* Neither read nor write: keep ICE in bypass */
setting->encr_bypass = true;
setting->decr_bypass = true;
}
}
}
/*
* It is not an error. If the target is not req-crypt based, all requests
* from the storage driver come here to check whether any ICE setting is
* required
*/
return 0;
}
EXPORT_SYMBOL(qcom_ice_config);
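/*
 * Called by the storage driver after a transfer completes: returns 1 if the
 * test bus register reflects a pending ICE interrupt, 0 if not, and a
 * negative error code if ICE is unavailable or not yet enabled.
 */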
static int qcom_ice_status(struct platform_device *pdev)
{
struct ice_device *ice_dev;
unsigned int test_bus_reg_status;
if (!pdev) {
pr_err("%s: Invalid params passed\n", __func__);
return -EINVAL;
}
ice_dev = platform_get_drvdata(pdev);
if (!ice_dev)
return -ENODEV;
if (!ice_dev->is_ice_enabled)
return -ENODEV;
test_bus_reg_status = qcom_ice_readl(ice_dev,
QCOM_ICE_REGS_TEST_BUS_REG);
if ((test_bus_reg_status & QCOM_ICE_TEST_BUS_REG_NON_SECURE_INTR) ||
(test_bus_reg_status & QCOM_ICE_TEST_BUS_REG_SECURE_INTR))
return 1;
else
return 0;
}
EXPORT_SYMBOL(qcom_ice_status);
const struct qcom_ice_variant_ops qcom_ice_ops = {
.name = "qcom",
.init = qcom_ice_init,
.reset = qcom_ice_reset,
.resume = qcom_ice_resume,
.suspend = qcom_ice_suspend,
.config = qcom_ice_config,
.status = qcom_ice_status,
};
/* The following table is required to match the device with the driver from the dts file */
static struct of_device_id qcom_ice_match[] = {
{ .compatible = "qcom,ice",
.data = (void *)&qcom_ice_ops},
{},
};
MODULE_DEVICE_TABLE(of, qcom_ice_match);
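/*
 * Look up the ICE platform device backing the given DT node. Storage drivers
 * use this to resolve their ICE reference; returns ERR_PTR(-EPROBE_DEFER) if
 * no ICE device has been probed yet.
 */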
struct platform_device *qcom_ice_get_pdevice(struct device_node *node)
{
struct platform_device *ice_pdev = NULL;
struct ice_device *ice_dev = NULL;
if (!node) {
pr_err("%s: invalid node %p", __func__, node);
goto out;
}
if (!of_device_is_available(node)) {
pr_err("%s: device unavailable\n", __func__);
goto out;
}
if (list_empty(&ice_devices)) {
pr_err("%s: invalid device list\n", __func__);
ice_pdev = ERR_PTR(-EPROBE_DEFER);
goto out;
}
list_for_each_entry(ice_dev, &ice_devices, list) {
if (ice_dev->pdev->of_node == node) {
pr_info("%s: found ice device %p\n", __func__, ice_dev);
break;
}
}
ice_pdev = to_platform_device(ice_dev->pdev);
pr_info("%s: matching platform device %p\n", __func__, ice_pdev);
out:
return ice_pdev;
}
struct qcom_ice_variant_ops *qcom_ice_get_variant_ops(struct device_node *node)
{
if (node) {
const struct of_device_id *match;
match = of_match_node(qcom_ice_match, node);
if (match)
return (struct qcom_ice_variant_ops *)(match->data);
pr_err("%s: error matching\n", __func__);
} else {
pr_err("%s: invalid node\n", __func__);
}
return NULL;
}
EXPORT_SYMBOL(qcom_ice_get_variant_ops);
static struct platform_driver qcom_ice_driver = {
.probe = qcom_ice_probe,
.remove = qcom_ice_remove,
.driver = {
.owner = THIS_MODULE,
.name = "qcom_ice",
.of_match_table = qcom_ice_match,
},
};
module_platform_driver(qcom_ice_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("QTI Inline Crypto Engine driver");

@@ -0,0 +1,111 @@
/* Copyright (c) 2014, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _QCOM_INLINE_CRYPTO_ENGINE_REGS_H_
#define _QCOM_INLINE_CRYPTO_ENGINE_REGS_H_
/* Register bits for ICE version */
#define ICE_CORE_CURRENT_MAJOR_VERSION 0x01
#define ICE_CORE_STEP_REV_MASK 0xFFFF
#define ICE_CORE_STEP_REV 0 /* bit 15-0 */
#define ICE_CORE_MAJOR_REV_MASK 0xFF000000
#define ICE_CORE_MAJOR_REV 24 /* bit 31-24 */
#define ICE_CORE_MINOR_REV_MASK 0xFF0000
#define ICE_CORE_MINOR_REV 16 /* bit 23-16 */
#define ICE_FUSE_SETTING_MASK 0x1
/* QCOM ICE Registers from SWI */
#define QCOM_ICE_REGS_CONTROL 0x0000
#define QCOM_ICE_REGS_RESET 0x0004
#define QCOM_ICE_REGS_VERSION 0x0008
#define QCOM_ICE_REGS_FUSE_SETTING 0x0010
#define QCOM_ICE_REGS_PARAMETERS_1 0x0014
#define QCOM_ICE_REGS_PARAMETERS_2 0x0018
#define QCOM_ICE_REGS_PARAMETERS_3 0x001C
#define QCOM_ICE_REGS_PARAMETERS_4 0x0020
#define QCOM_ICE_REGS_PARAMETERS_5 0x0024
#define QCOM_ICE_REGS_NON_SEC_IRQ_STTS 0x0040
#define QCOM_ICE_REGS_NON_SEC_IRQ_MASK 0x0044
#define QCOM_ICE_REGS_NON_SEC_IRQ_CLR 0x0048
#define QCOM_ICE_REGS_ADVANCED_CONTROL 0x1000
#define QCOM_ICE_REGS_ENDIAN_SWAP 0x1004
#define QCOM_ICE_REGS_TEST_BUS_CONTROL 0x1010
#define QCOM_ICE_REGS_TEST_BUS_REG 0x1014
#define QCOM_ICE_REGS_STREAM1_COUNTERS1 0x1100
#define QCOM_ICE_REGS_STREAM1_COUNTERS2 0x1104
#define QCOM_ICE_REGS_STREAM1_COUNTERS3 0x1108
#define QCOM_ICE_REGS_STREAM1_COUNTERS4 0x110C
#define QCOM_ICE_REGS_STREAM1_COUNTERS5_MSB 0x1110
#define QCOM_ICE_REGS_STREAM1_COUNTERS5_LSB 0x1114
#define QCOM_ICE_REGS_STREAM1_COUNTERS6_MSB 0x1118
#define QCOM_ICE_REGS_STREAM1_COUNTERS6_LSB 0x111C
#define QCOM_ICE_REGS_STREAM1_COUNTERS7_MSB 0x1120
#define QCOM_ICE_REGS_STREAM1_COUNTERS7_LSB 0x1124
#define QCOM_ICE_REGS_STREAM1_COUNTERS8_MSB 0x1128
#define QCOM_ICE_REGS_STREAM1_COUNTERS8_LSB 0x112C
#define QCOM_ICE_REGS_STREAM1_COUNTERS9_MSB 0x1130
#define QCOM_ICE_REGS_STREAM1_COUNTERS9_LSB 0x1134
#define QCOM_ICE_REGS_STREAM2_COUNTERS1 0x1200
#define QCOM_ICE_REGS_STREAM2_COUNTERS2 0x1204
#define QCOM_ICE_REGS_STREAM2_COUNTERS3 0x1208
#define QCOM_ICE_REGS_STREAM2_COUNTERS4 0x120C
#define QCOM_ICE_REGS_STREAM2_COUNTERS5_MSB 0x1210
#define QCOM_ICE_REGS_STREAM2_COUNTERS5_LSB 0x1214
#define QCOM_ICE_REGS_STREAM2_COUNTERS6_MSB 0x1218
#define QCOM_ICE_REGS_STREAM2_COUNTERS6_LSB 0x121C
#define QCOM_ICE_REGS_STREAM2_COUNTERS7_MSB 0x1220
#define QCOM_ICE_REGS_STREAM2_COUNTERS7_LSB 0x1224
#define QCOM_ICE_REGS_STREAM2_COUNTERS8_MSB 0x1228
#define QCOM_ICE_REGS_STREAM2_COUNTERS8_LSB 0x122C
#define QCOM_ICE_REGS_STREAM2_COUNTERS9_MSB 0x1230
#define QCOM_ICE_REGS_STREAM2_COUNTERS9_LSB 0x1234
#define QCOM_ICE_STREAM1_PREMATURE_LBA_CHANGE (1L << 0)
#define QCOM_ICE_STREAM2_PREMATURE_LBA_CHANGE (1L << 1)
#define QCOM_ICE_STREAM1_NOT_EXPECTED_LBO (1L << 2)
#define QCOM_ICE_STREAM2_NOT_EXPECTED_LBO (1L << 3)
#define QCOM_ICE_NON_SEC_IRQ_MASK \
(QCOM_ICE_STREAM1_PREMATURE_LBA_CHANGE |\
QCOM_ICE_STREAM2_PREMATURE_LBA_CHANGE |\
QCOM_ICE_STREAM1_NOT_EXPECTED_LBO |\
QCOM_ICE_STREAM2_NOT_EXPECTED_LBO)
/* QCOM ICE registers from secure side */
#define QCOM_ICE_TEST_BUS_REG_SECURE_INTR (1L << 28)
#define QCOM_ICE_TEST_BUS_REG_NON_SECURE_INTR (1L << 2)
#define QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_STTS 0x2050
#define QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_MASK 0x2054
#define QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_CLR 0x2058
#define QCOM_ICE_STREAM1_PARTIALLY_SET_KEY_USED (1L << 0)
#define QCOM_ICE_STREAM2_PARTIALLY_SET_KEY_USED (1L << 1)
#define QCOM_ICE_QCOMC_DBG_OPEN_EVENT (1L << 30)
#define QCOM_ICE_KEYS_RAM_RESET_COMPLETED (1L << 31)
#define QCOM_ICE_SEC_IRQ_MASK \
(QCOM_ICE_STREAM1_PARTIALLY_SET_KEY_USED |\
QCOM_ICE_STREAM2_PARTIALLY_SET_KEY_USED |\
QCOM_ICE_QCOMC_DBG_OPEN_EVENT | \
QCOM_ICE_KEYS_RAM_RESET_COMPLETED)
#define qcom_ice_writel(ice, val, reg) \
writel_relaxed((val), (ice)->mmio + (reg))
#define qcom_ice_readl(ice, reg) \
readl_relaxed((ice)->mmio + (reg))
#endif /* _QCOM_INLINE_CRYPTO_ENGINE_REGS_H_ */

include/crypto/ice.h
@@ -0,0 +1,90 @@
/* Copyright (c) 2014, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _QCOM_INLINE_CRYPTO_ENGINE_H_
#define _QCOM_INLINE_CRYPTO_ENGINE_H_
#include <linux/platform_device.h>
struct request;
enum ice_cryto_algo_mode {
ICE_CRYPTO_ALGO_MODE_AES_ECB = 0x0,
ICE_CRYPTO_ALGO_MODE_AES_XTS = 0x3,
};
enum ice_crpto_key_size {
ICE_CRYPTO_KEY_SIZE_128 = 0x0,
ICE_CRYPTO_KEY_SIZE_256 = 0x2,
};
enum ice_crpto_key_mode {
ICE_CRYPTO_USE_KEY0_HW_KEY = 0x0,
ICE_CRYPTO_USE_KEY1_HW_KEY = 0x1,
ICE_CRYPTO_USE_LUT_SW_KEY0 = 0x2,
ICE_CRYPTO_USE_LUT_SW_KEY = 0x3
};
struct ice_crypto_setting {
enum ice_crpto_key_size key_size;
enum ice_cryto_algo_mode algo_mode;
enum ice_crpto_key_mode key_mode;
unsigned short key_index;
};
struct ice_data_setting {
struct ice_crypto_setting crypto_data;
bool sw_forced_context_switch;
bool decr_bypass;
bool encr_bypass;
};
enum ice_event_completion {
ICE_INIT_COMPLETION,
ICE_RESUME_COMPLETION,
ICE_RESET_COMPLETION,
};
enum ice_error_code {
ICE_ERROR_UNEXPECTED_ICE_DEVICE,
ICE_ERROR_PARTITIAL_KEY_LOAD,
ICE_ERROR_IMPROPER_INITIALIZATION,
ICE_ERROR_INVALID_ARGUMENTS,
ICE_ERROR_HW_DISABLE_FUSE_BLOWN,
ICE_ERROR_ICE_KEY_RESTORE_FAILED,
ICE_ERROR_ICE_TZ_INIT_FAILED,
ICE_ERROR_STREAM1_PREMATURE_LBA_CHANGE,
ICE_ERROR_STREAM2_PREMATURE_LBA_CHANGE,
ICE_ERROR_STREAM1_UNEXPECTED_LBA,
ICE_ERROR_STREAM2_UNEXPECTED_LBA,
};
typedef void (*ice_success_cb)(void *, enum ice_event_completion);
typedef void (*ice_error_cb)(void *, enum ice_error_code);
struct qcom_ice_variant_ops *qcom_ice_get_variant_ops(struct device_node *node);
struct platform_device *qcom_ice_get_pdevice(struct device_node *node);
struct qcom_ice_variant_ops {
const char *name;
int (*init)(struct platform_device *, void *,
ice_success_cb, ice_error_cb);
int (*reset)(struct platform_device *);
int (*resume)(struct platform_device *);
int (*suspend)(struct platform_device *);
int (*config)(struct platform_device *, struct request* ,
struct ice_data_setting*);
int (*status)(struct platform_device *);
};
#endif /* _QCOM_INLINE_CRYPTO_ENGINE_H_ */