Mirror of https://github.com/followmsi/android_kernel_google_msm.git (synced 2024-11-06 23:17:41 +00:00)

Commit aadf030d84: Merge commit 'v3.4-rc5' into android-3.4
284 changed files with 2530 additions and 1562 deletions
Documentation/ABI/testing/sysfs-bus-hsi (new file, +19)
@@ -0,0 +1,19 @@
+What:		/sys/bus/hsi
+Date:		April 2012
+KernelVersion:	3.4
+Contact:	Carlos Chinea <carlos.chinea@nokia.com>
+Description:
+		High Speed Synchronous Serial Interface (HSI) is a
+		serial interface mainly used for connecting application
+		engines (APE) with cellular modem engines (CMT) in cellular
+		handsets.
+		The bus will be populated with devices (hsi_clients) representing
+		the protocols available in the system. Bus drivers implement
+		those protocols.
+
+What:		/sys/bus/hsi/devices/.../modalias
+Date:		April 2012
+KernelVersion:	3.4
+Contact:	Carlos Chinea <carlos.chinea@nokia.com>
+Description:	Stores the same MODALIAS value emitted by uevent
+		Format: hsi:<hsi_client device name>
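Editor's note (not part of the commit): the modalias attribute documented above is an ordinary sysfs text file, so a minimal userspace sketch can read it directly. The device name in the path below is a hypothetical placeholder.

```c
/* Illustrative sketch only: read the modalias of an hsi_client device.
 * "example-client" is a made-up device name. */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/bus/hsi/devices/example-client/modalias";
	char buf[128];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	if (fgets(buf, sizeof(buf), f))
		/* Expected format per the ABI entry above: hsi:<name> */
		printf("%s", buf);
	fclose(f);
	return 0;
}
```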
@@ -9,7 +9,7 @@ architectures).

 II. How does it work?

-There are four per-task flags used for that, PF_NOFREEZE, PF_FROZEN, TIF_FREEZE
+There are three per-task flags used for that, PF_NOFREEZE, PF_FROZEN
 and PF_FREEZER_SKIP (the last one is auxiliary). The tasks that have
 PF_NOFREEZE unset (all user space processes and some kernel threads) are
 regarded as 'freezable' and treated in a special way before the system enters a
@@ -17,30 +17,31 @@ suspend state as well as before a hibernation image is created (in what follows
 we only consider hibernation, but the description also applies to suspend).

 Namely, as the first step of the hibernation procedure the function
-freeze_processes() (defined in kernel/power/process.c) is called. It executes
-try_to_freeze_tasks() that sets TIF_FREEZE for all of the freezable tasks and
-either wakes them up, if they are kernel threads, or sends fake signals to them,
-if they are user space processes. A task that has TIF_FREEZE set, should react
-to it by calling the function called __refrigerator() (defined in
-kernel/freezer.c), which sets the task's PF_FROZEN flag, changes its state
-to TASK_UNINTERRUPTIBLE and makes it loop until PF_FROZEN is cleared for it.
-Then, we say that the task is 'frozen' and therefore the set of functions
-handling this mechanism is referred to as 'the freezer' (these functions are
-defined in kernel/power/process.c, kernel/freezer.c & include/linux/freezer.h).
-User space processes are generally frozen before kernel threads.
+freeze_processes() (defined in kernel/power/process.c) is called. A system-wide
+variable system_freezing_cnt (as opposed to a per-task flag) is used to indicate
+whether the system is to undergo a freezing operation. And freeze_processes()
+sets this variable. After this, it executes try_to_freeze_tasks() that sends a
+fake signal to all user space processes, and wakes up all the kernel threads.
+All freezable tasks must react to that by calling try_to_freeze(), which
+results in a call to __refrigerator() (defined in kernel/freezer.c), which sets
+the task's PF_FROZEN flag, changes its state to TASK_UNINTERRUPTIBLE and makes
+it loop until PF_FROZEN is cleared for it. Then, we say that the task is
+'frozen' and therefore the set of functions handling this mechanism is referred
+to as 'the freezer' (these functions are defined in kernel/power/process.c,
+kernel/freezer.c & include/linux/freezer.h). User space processes are generally
+frozen before kernel threads.

 __refrigerator() must not be called directly. Instead, use the
 try_to_freeze() function (defined in include/linux/freezer.h), that checks
-the task's TIF_FREEZE flag and makes the task enter __refrigerator() if the
-flag is set.
+if the task is to be frozen and makes the task enter __refrigerator().

 For user space processes try_to_freeze() is called automatically from the
 signal-handling code, but the freezable kernel threads need to call it
 explicitly in suitable places or use the wait_event_freezable() or
 wait_event_freezable_timeout() macros (defined in include/linux/freezer.h)
-that combine interruptible sleep with checking if TIF_FREEZE is set and calling
-try_to_freeze(). The main loop of a freezable kernel thread may look like the
-following one:
+that combine interruptible sleep with checking if the task is to be frozen and
+calling try_to_freeze(). The main loop of a freezable kernel thread may look
+like the following one:

 	set_freezable();
 	do {
@@ -53,7 +54,7 @@ following one:
 (from drivers/usb/core/hub.c::hub_thread()).

 If a freezable kernel thread fails to call try_to_freeze() after the freezer has
-set TIF_FREEZE for it, the freezing of tasks will fail and the entire
+initiated a freezing operation, the freezing of tasks will fail and the entire
 hibernation operation will be cancelled. For this reason, freezable kernel
 threads must call try_to_freeze() somewhere or use one of the
 wait_event_freezable() and wait_event_freezable_timeout() macros.
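Editor's note (not part of the commit): the loop excerpt above is cut short by the diff context. A minimal sketch of a complete freezable kernel-thread loop, in the spirit of the hub_thread() example the document cites, could look like the following; the wait queue and work helpers are hypothetical placeholders, and <linux/freezer.h> plus <linux/kthread.h> are assumed to be included.

```c
/* Illustrative sketch only. my_wait_queue, my_work_pending() and
 * my_handle_work() are made-up placeholders for the thread's real work. */
static int my_freezable_thread(void *data)
{
	set_freezable();
	do {
		my_handle_work();

		/* Sleep interruptibly; try_to_freeze() is called on our
		 * behalf if the freezer has initiated a freezing operation. */
		wait_event_freezable(my_wait_queue,
				     my_work_pending() || kthread_should_stop());
	} while (!kthread_should_stop());

	return 0;
}
```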
@@ -123,7 +123,7 @@ KEY SERVICE OVERVIEW

 The key service provides a number of features besides keys:

- (*) The key service defines two special key types:
+ (*) The key service defines three special key types:

      (+) "keyring"

@@ -137,6 +137,18 @@ The key service provides a number of features besides keys:
      blobs of data. These can be created, updated and read by userspace,
      and aren't intended for use by kernel services.

+     (+) "logon"
+
+	 Like a "user" key, a "logon" key has a payload that is an arbitrary
+	 blob of data. It is intended as a place to store secrets which are
+	 accessible to the kernel but not to userspace programs.
+
+	 The description can be arbitrary, but must be prefixed with a non-zero
+	 length string that describes the key "subclass". The subclass is
+	 separated from the rest of the description by a ':'. "logon" keys can
+	 be created and updated from userspace, but the payload is only
+	 readable from kernel space.
+
 (*) Each process subscribes to three keyrings: a thread-specific keyring, a
     process-specific keyring, and a session-specific keyring.
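Editor's note (not part of the commit): the "logon" type added above can be exercised from userspace through add_key() from keyutils; reading the payload back should fail, since logon payloads are kernel-only. The "example:" subclass and payload below are made up; the documentation only requires a non-empty "<subclass>:" prefix.

```c
/* Illustrative sketch only. Build with -lkeyutils (keyutils library). */
#include <stdio.h>
#include <keyutils.h>

int main(void)
{
	const char secret[] = "not-readable-from-userspace";

	/* Create a "logon" key in the current session keyring. */
	key_serial_t key = add_key("logon", "example:demo-key",
				   secret, sizeof(secret),
				   KEY_SPEC_SESSION_KEYRING);
	if (key < 0) {
		perror("add_key");
		return 1;
	}

	/* Reading the payload back is expected to fail for logon keys. */
	char buf[64];
	long n = keyctl_read(key, buf, sizeof(buf));
	printf("key id %d, keyctl_read -> %ld\n", key, n);
	return 0;
}
```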
@@ -3592,6 +3592,7 @@ S:	Supported
 F:	drivers/net/wireless/iwlegacy/

 INTEL WIRELESS WIFI LINK (iwlwifi)
+M:	Johannes Berg <johannes.berg@intel.com>
 M:	Wey-Yi Guy <wey-yi.w.guy@intel.com>
 M:	Intel Linux Wireless <ilw@linux.intel.com>
 L:	linux-wireless@vger.kernel.org
@@ -7578,8 +7579,8 @@ F:	Documentation/filesystems/xfs.txt
 F:	fs/xfs/

 XILINX AXI ETHERNET DRIVER
-M:	Ariane Keller <ariane.keller@tik.ee.ethz.ch>
-M:	Daniel Borkmann <daniel.borkmann@tik.ee.ethz.ch>
+M:	Anirudha Sarangi <anirudh@xilinx.com>
+M:	John Linn <John.Linn@xilinx.com>
 S:	Maintained
 F:	drivers/net/ethernet/xilinx/xilinx_axienet*
Makefile (2 changed lines)
@@ -1,7 +1,7 @@
 VERSION = 3
 PATCHLEVEL = 4
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc5
 NAME = Saber-toothed Squirrel

 # *DOCUMENTATION*
@@ -10,7 +10,7 @@
 	intc: interrupt-controller@02080000 {
 		compatible = "qcom,msm-8660-qgic";
 		interrupt-controller;
-		#interrupt-cells = <1>;
+		#interrupt-cells = <3>;
 		reg = < 0x02080000 0x1000 >,
 		      < 0x02081000 0x1000 >;
 	};
@@ -19,6 +19,6 @@
 		compatible = "qcom,msm-hsuart", "qcom,msm-uart";
 		reg = <0x19c40000 0x1000>,
 		      <0x19c00000 0x1000>;
-		interrupts = <195>;
+		interrupts = <0 195 0x0>;
 	};
 };
@ -14,6 +14,8 @@ CONFIG_MODULE_FORCE_UNLOAD=y
|
|||
# CONFIG_BLK_DEV_BSG is not set
|
||||
CONFIG_BLK_DEV_INTEGRITY=y
|
||||
CONFIG_ARCH_S3C24XX=y
|
||||
# CONFIG_CPU_S3C2410 is not set
|
||||
CONFIG_CPU_S3C2440=y
|
||||
CONFIG_S3C_ADC=y
|
||||
CONFIG_S3C24XX_PWM=y
|
||||
CONFIG_MACH_MINI2440=y
|
||||
|
|
|
@ -118,14 +118,10 @@ static int twd_cpufreq_transition(struct notifier_block *nb,
|
|||
* The twd clock events must be reprogrammed to account for the new
|
||||
* frequency. The timer is local to a cpu, so cross-call to the
|
||||
* changing cpu.
|
||||
*
|
||||
* Only wait for it to finish, if the cpu is active to avoid
|
||||
* deadlock when cpu1 is spinning on while(!cpu_active(cpu1)) during
|
||||
* booting of that cpu.
|
||||
*/
|
||||
if (state == CPUFREQ_POSTCHANGE || state == CPUFREQ_RESUMECHANGE)
|
||||
smp_call_function_single(freqs->cpu, twd_update_frequency,
|
||||
NULL, cpu_active(freqs->cpu));
|
||||
NULL, 1);
|
||||
|
||||
return NOTIFY_OK;
|
||||
}
|
||||
|
|
|
@ -497,25 +497,25 @@ static struct clk exynos4_init_clocks_off[] = {
|
|||
.ctrlbit = (1 << 3),
|
||||
}, {
|
||||
.name = "hsmmc",
|
||||
.devname = "s3c-sdhci.0",
|
||||
.devname = "exynos4-sdhci.0",
|
||||
.parent = &exynos4_clk_aclk_133.clk,
|
||||
.enable = exynos4_clk_ip_fsys_ctrl,
|
||||
.ctrlbit = (1 << 5),
|
||||
}, {
|
||||
.name = "hsmmc",
|
||||
.devname = "s3c-sdhci.1",
|
||||
.devname = "exynos4-sdhci.1",
|
||||
.parent = &exynos4_clk_aclk_133.clk,
|
||||
.enable = exynos4_clk_ip_fsys_ctrl,
|
||||
.ctrlbit = (1 << 6),
|
||||
}, {
|
||||
.name = "hsmmc",
|
||||
.devname = "s3c-sdhci.2",
|
||||
.devname = "exynos4-sdhci.2",
|
||||
.parent = &exynos4_clk_aclk_133.clk,
|
||||
.enable = exynos4_clk_ip_fsys_ctrl,
|
||||
.ctrlbit = (1 << 7),
|
||||
}, {
|
||||
.name = "hsmmc",
|
||||
.devname = "s3c-sdhci.3",
|
||||
.devname = "exynos4-sdhci.3",
|
||||
.parent = &exynos4_clk_aclk_133.clk,
|
||||
.enable = exynos4_clk_ip_fsys_ctrl,
|
||||
.ctrlbit = (1 << 8),
|
||||
|
@ -1202,7 +1202,7 @@ static struct clksrc_clk exynos4_clk_sclk_uart3 = {
|
|||
static struct clksrc_clk exynos4_clk_sclk_mmc0 = {
|
||||
.clk = {
|
||||
.name = "sclk_mmc",
|
||||
.devname = "s3c-sdhci.0",
|
||||
.devname = "exynos4-sdhci.0",
|
||||
.parent = &exynos4_clk_dout_mmc0.clk,
|
||||
.enable = exynos4_clksrc_mask_fsys_ctrl,
|
||||
.ctrlbit = (1 << 0),
|
||||
|
@ -1213,7 +1213,7 @@ static struct clksrc_clk exynos4_clk_sclk_mmc0 = {
|
|||
static struct clksrc_clk exynos4_clk_sclk_mmc1 = {
|
||||
.clk = {
|
||||
.name = "sclk_mmc",
|
||||
.devname = "s3c-sdhci.1",
|
||||
.devname = "exynos4-sdhci.1",
|
||||
.parent = &exynos4_clk_dout_mmc1.clk,
|
||||
.enable = exynos4_clksrc_mask_fsys_ctrl,
|
||||
.ctrlbit = (1 << 4),
|
||||
|
@ -1224,7 +1224,7 @@ static struct clksrc_clk exynos4_clk_sclk_mmc1 = {
|
|||
static struct clksrc_clk exynos4_clk_sclk_mmc2 = {
|
||||
.clk = {
|
||||
.name = "sclk_mmc",
|
||||
.devname = "s3c-sdhci.2",
|
||||
.devname = "exynos4-sdhci.2",
|
||||
.parent = &exynos4_clk_dout_mmc2.clk,
|
||||
.enable = exynos4_clksrc_mask_fsys_ctrl,
|
||||
.ctrlbit = (1 << 8),
|
||||
|
@ -1235,7 +1235,7 @@ static struct clksrc_clk exynos4_clk_sclk_mmc2 = {
|
|||
static struct clksrc_clk exynos4_clk_sclk_mmc3 = {
|
||||
.clk = {
|
||||
.name = "sclk_mmc",
|
||||
.devname = "s3c-sdhci.3",
|
||||
.devname = "exynos4-sdhci.3",
|
||||
.parent = &exynos4_clk_dout_mmc3.clk,
|
||||
.enable = exynos4_clksrc_mask_fsys_ctrl,
|
||||
.ctrlbit = (1 << 12),
|
||||
|
@ -1340,10 +1340,10 @@ static struct clk_lookup exynos4_clk_lookup[] = {
|
|||
CLKDEV_INIT("exynos4210-uart.1", "clk_uart_baud0", &exynos4_clk_sclk_uart1.clk),
|
||||
CLKDEV_INIT("exynos4210-uart.2", "clk_uart_baud0", &exynos4_clk_sclk_uart2.clk),
|
||||
CLKDEV_INIT("exynos4210-uart.3", "clk_uart_baud0", &exynos4_clk_sclk_uart3.clk),
|
||||
CLKDEV_INIT("s3c-sdhci.0", "mmc_busclk.2", &exynos4_clk_sclk_mmc0.clk),
|
||||
CLKDEV_INIT("s3c-sdhci.1", "mmc_busclk.2", &exynos4_clk_sclk_mmc1.clk),
|
||||
CLKDEV_INIT("s3c-sdhci.2", "mmc_busclk.2", &exynos4_clk_sclk_mmc2.clk),
|
||||
CLKDEV_INIT("s3c-sdhci.3", "mmc_busclk.2", &exynos4_clk_sclk_mmc3.clk),
|
||||
CLKDEV_INIT("exynos4-sdhci.0", "mmc_busclk.2", &exynos4_clk_sclk_mmc0.clk),
|
||||
CLKDEV_INIT("exynos4-sdhci.1", "mmc_busclk.2", &exynos4_clk_sclk_mmc1.clk),
|
||||
CLKDEV_INIT("exynos4-sdhci.2", "mmc_busclk.2", &exynos4_clk_sclk_mmc2.clk),
|
||||
CLKDEV_INIT("exynos4-sdhci.3", "mmc_busclk.2", &exynos4_clk_sclk_mmc3.clk),
|
||||
CLKDEV_INIT("exynos4-fb.0", "lcd", &exynos4_clk_fimd0),
|
||||
CLKDEV_INIT("dma-pl330.0", "apb_pclk", &exynos4_clk_pdma0),
|
||||
CLKDEV_INIT("dma-pl330.1", "apb_pclk", &exynos4_clk_pdma1),
|
||||
|
|
|
@ -455,25 +455,25 @@ static struct clk exynos5_init_clocks_off[] = {
|
|||
.ctrlbit = (1 << 20),
|
||||
}, {
|
||||
.name = "hsmmc",
|
||||
.devname = "s3c-sdhci.0",
|
||||
.devname = "exynos4-sdhci.0",
|
||||
.parent = &exynos5_clk_aclk_200.clk,
|
||||
.enable = exynos5_clk_ip_fsys_ctrl,
|
||||
.ctrlbit = (1 << 12),
|
||||
}, {
|
||||
.name = "hsmmc",
|
||||
.devname = "s3c-sdhci.1",
|
||||
.devname = "exynos4-sdhci.1",
|
||||
.parent = &exynos5_clk_aclk_200.clk,
|
||||
.enable = exynos5_clk_ip_fsys_ctrl,
|
||||
.ctrlbit = (1 << 13),
|
||||
}, {
|
||||
.name = "hsmmc",
|
||||
.devname = "s3c-sdhci.2",
|
||||
.devname = "exynos4-sdhci.2",
|
||||
.parent = &exynos5_clk_aclk_200.clk,
|
||||
.enable = exynos5_clk_ip_fsys_ctrl,
|
||||
.ctrlbit = (1 << 14),
|
||||
}, {
|
||||
.name = "hsmmc",
|
||||
.devname = "s3c-sdhci.3",
|
||||
.devname = "exynos4-sdhci.3",
|
||||
.parent = &exynos5_clk_aclk_200.clk,
|
||||
.enable = exynos5_clk_ip_fsys_ctrl,
|
||||
.ctrlbit = (1 << 15),
|
||||
|
@ -813,7 +813,7 @@ static struct clksrc_clk exynos5_clk_sclk_uart3 = {
|
|||
static struct clksrc_clk exynos5_clk_sclk_mmc0 = {
|
||||
.clk = {
|
||||
.name = "sclk_mmc",
|
||||
.devname = "s3c-sdhci.0",
|
||||
.devname = "exynos4-sdhci.0",
|
||||
.parent = &exynos5_clk_dout_mmc0.clk,
|
||||
.enable = exynos5_clksrc_mask_fsys_ctrl,
|
||||
.ctrlbit = (1 << 0),
|
||||
|
@ -824,7 +824,7 @@ static struct clksrc_clk exynos5_clk_sclk_mmc0 = {
|
|||
static struct clksrc_clk exynos5_clk_sclk_mmc1 = {
|
||||
.clk = {
|
||||
.name = "sclk_mmc",
|
||||
.devname = "s3c-sdhci.1",
|
||||
.devname = "exynos4-sdhci.1",
|
||||
.parent = &exynos5_clk_dout_mmc1.clk,
|
||||
.enable = exynos5_clksrc_mask_fsys_ctrl,
|
||||
.ctrlbit = (1 << 4),
|
||||
|
@ -835,7 +835,7 @@ static struct clksrc_clk exynos5_clk_sclk_mmc1 = {
|
|||
static struct clksrc_clk exynos5_clk_sclk_mmc2 = {
|
||||
.clk = {
|
||||
.name = "sclk_mmc",
|
||||
.devname = "s3c-sdhci.2",
|
||||
.devname = "exynos4-sdhci.2",
|
||||
.parent = &exynos5_clk_dout_mmc2.clk,
|
||||
.enable = exynos5_clksrc_mask_fsys_ctrl,
|
||||
.ctrlbit = (1 << 8),
|
||||
|
@ -846,7 +846,7 @@ static struct clksrc_clk exynos5_clk_sclk_mmc2 = {
|
|||
static struct clksrc_clk exynos5_clk_sclk_mmc3 = {
|
||||
.clk = {
|
||||
.name = "sclk_mmc",
|
||||
.devname = "s3c-sdhci.3",
|
||||
.devname = "exynos4-sdhci.3",
|
||||
.parent = &exynos5_clk_dout_mmc3.clk,
|
||||
.enable = exynos5_clksrc_mask_fsys_ctrl,
|
||||
.ctrlbit = (1 << 12),
|
||||
|
@ -990,10 +990,10 @@ static struct clk_lookup exynos5_clk_lookup[] = {
|
|||
CLKDEV_INIT("exynos4210-uart.1", "clk_uart_baud0", &exynos5_clk_sclk_uart1.clk),
|
||||
CLKDEV_INIT("exynos4210-uart.2", "clk_uart_baud0", &exynos5_clk_sclk_uart2.clk),
|
||||
CLKDEV_INIT("exynos4210-uart.3", "clk_uart_baud0", &exynos5_clk_sclk_uart3.clk),
|
||||
CLKDEV_INIT("s3c-sdhci.0", "mmc_busclk.2", &exynos5_clk_sclk_mmc0.clk),
|
||||
CLKDEV_INIT("s3c-sdhci.1", "mmc_busclk.2", &exynos5_clk_sclk_mmc1.clk),
|
||||
CLKDEV_INIT("s3c-sdhci.2", "mmc_busclk.2", &exynos5_clk_sclk_mmc2.clk),
|
||||
CLKDEV_INIT("s3c-sdhci.3", "mmc_busclk.2", &exynos5_clk_sclk_mmc3.clk),
|
||||
CLKDEV_INIT("exynos4-sdhci.0", "mmc_busclk.2", &exynos5_clk_sclk_mmc0.clk),
|
||||
CLKDEV_INIT("exynos4-sdhci.1", "mmc_busclk.2", &exynos5_clk_sclk_mmc1.clk),
|
||||
CLKDEV_INIT("exynos4-sdhci.2", "mmc_busclk.2", &exynos5_clk_sclk_mmc2.clk),
|
||||
CLKDEV_INIT("exynos4-sdhci.3", "mmc_busclk.2", &exynos5_clk_sclk_mmc3.clk),
|
||||
CLKDEV_INIT("dma-pl330.0", "apb_pclk", &exynos5_clk_pdma0),
|
||||
CLKDEV_INIT("dma-pl330.1", "apb_pclk", &exynos5_clk_pdma1),
|
||||
CLKDEV_INIT("dma-pl330.2", "apb_pclk", &exynos5_clk_mdma1),
|
||||
|
|
|
@ -326,6 +326,11 @@ static void __init exynos4_map_io(void)
|
|||
s3c_fimc_setname(2, "exynos4-fimc");
|
||||
s3c_fimc_setname(3, "exynos4-fimc");
|
||||
|
||||
s3c_sdhci_setname(0, "exynos4-sdhci");
|
||||
s3c_sdhci_setname(1, "exynos4-sdhci");
|
||||
s3c_sdhci_setname(2, "exynos4-sdhci");
|
||||
s3c_sdhci_setname(3, "exynos4-sdhci");
|
||||
|
||||
/* The I2C bus controllers are directly compatible with s3c2440 */
|
||||
s3c_i2c0_setname("s3c2440-i2c");
|
||||
s3c_i2c1_setname("s3c2440-i2c");
|
||||
|
@ -344,6 +349,11 @@ static void __init exynos5_map_io(void)
|
|||
s3c_device_i2c0.resource[1].start = EXYNOS5_IRQ_IIC;
|
||||
s3c_device_i2c0.resource[1].end = EXYNOS5_IRQ_IIC;
|
||||
|
||||
s3c_sdhci_setname(0, "exynos4-sdhci");
|
||||
s3c_sdhci_setname(1, "exynos4-sdhci");
|
||||
s3c_sdhci_setname(2, "exynos4-sdhci");
|
||||
s3c_sdhci_setname(3, "exynos4-sdhci");
|
||||
|
||||
/* The I2C bus controllers are directly compatible with s3c2440 */
|
||||
s3c_i2c0_setname("s3c2440-i2c");
|
||||
s3c_i2c1_setname("s3c2440-i2c");
|
||||
|
@ -537,7 +547,9 @@ void __init exynos5_init_irq(void)
|
|||
{
|
||||
int irq;
|
||||
|
||||
gic_init(0, IRQ_PPI(0), S5P_VA_GIC_DIST, S5P_VA_GIC_CPU);
|
||||
#ifdef CONFIG_OF
|
||||
of_irq_init(exynos4_dt_irq_match);
|
||||
#endif
|
||||
|
||||
for (irq = 0; irq < EXYNOS5_MAX_COMBINER_NR; irq++) {
|
||||
combiner_init(irq, (void __iomem *)S5P_VA_COMBINER(irq),
|
||||
|
|
|
@ -16,6 +16,7 @@
|
|||
#include <linux/dma-mapping.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/ioport.h>
|
||||
#include <linux/mmc/dw_mmc.h>
|
||||
|
||||
#include <plat/devs.h>
|
||||
|
@ -33,16 +34,8 @@ static int exynos4_dwmci_init(u32 slot_id, irq_handler_t handler, void *data)
|
|||
}
|
||||
|
||||
static struct resource exynos4_dwmci_resource[] = {
|
||||
[0] = {
|
||||
.start = EXYNOS4_PA_DWMCI,
|
||||
.end = EXYNOS4_PA_DWMCI + SZ_4K - 1,
|
||||
.flags = IORESOURCE_MEM,
|
||||
},
|
||||
[1] = {
|
||||
.start = IRQ_DWMCI,
|
||||
.end = IRQ_DWMCI,
|
||||
.flags = IORESOURCE_IRQ,
|
||||
}
|
||||
[0] = DEFINE_RES_MEM(EXYNOS4_PA_DWMCI, SZ_4K),
|
||||
[1] = DEFINE_RES_IRQ(EXYNOS4_IRQ_DWMCI),
|
||||
};
|
||||
|
||||
static struct dw_mci_board exynos4_dwci_pdata = {
|
||||
|
|
|
@ -112,6 +112,7 @@ static struct s3c_sdhci_platdata nuri_hsmmc0_data __initdata = {
|
|||
.host_caps = (MMC_CAP_8_BIT_DATA | MMC_CAP_4_BIT_DATA |
|
||||
MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED |
|
||||
MMC_CAP_ERASE),
|
||||
.host_caps2 = MMC_CAP2_BROKEN_VOLTAGE,
|
||||
.cd_type = S3C_SDHCI_CD_PERMANENT,
|
||||
.clk_type = S3C_SDHCI_CLK_DIV_EXTERNAL,
|
||||
};
|
||||
|
|
|
@ -747,6 +747,7 @@ static struct s3c_sdhci_platdata universal_hsmmc0_data __initdata = {
|
|||
.max_width = 8,
|
||||
.host_caps = (MMC_CAP_8_BIT_DATA | MMC_CAP_4_BIT_DATA |
|
||||
MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED),
|
||||
.host_caps2 = MMC_CAP2_BROKEN_VOLTAGE,
|
||||
.cd_type = S3C_SDHCI_CD_PERMANENT,
|
||||
.clk_type = S3C_SDHCI_CLK_DIV_EXTERNAL,
|
||||
};
|
||||
|
|
|
@ -17,6 +17,7 @@
|
|||
#include <linux/irqdomain.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/memblock.h>
|
||||
|
||||
|
@ -49,10 +50,22 @@ static void __init msm8x60_map_io(void)
|
|||
msm_map_msm8x60_io();
|
||||
}
|
||||
|
||||
#ifdef CONFIG_OF
|
||||
static struct of_device_id msm_dt_gic_match[] __initdata = {
|
||||
{ .compatible = "qcom,msm-8660-qgic", .data = gic_of_init },
|
||||
{}
|
||||
};
|
||||
#endif
|
||||
|
||||
static void __init msm8x60_init_irq(void)
|
||||
{
|
||||
gic_init(0, GIC_PPI_START, MSM_QGIC_DIST_BASE,
|
||||
(void *)MSM_QGIC_CPU_BASE);
|
||||
if (!of_have_populated_dt())
|
||||
gic_init(0, GIC_PPI_START, MSM_QGIC_DIST_BASE,
|
||||
(void *)MSM_QGIC_CPU_BASE);
|
||||
#ifdef CONFIG_OF
|
||||
else
|
||||
of_irq_init(msm_dt_gic_match);
|
||||
#endif
|
||||
|
||||
/* Edge trigger PPIs except AVS_SVICINT and AVS_SVICINTSWDONE */
|
||||
writel(0xFFFFD7FF, MSM_QGIC_DIST_BASE + GIC_DIST_CONFIG + 4);
|
||||
|
@ -73,16 +86,8 @@ static struct of_dev_auxdata msm_auxdata_lookup[] __initdata = {
|
|||
{}
|
||||
};
|
||||
|
||||
static struct of_device_id msm_dt_gic_match[] __initdata = {
|
||||
{ .compatible = "qcom,msm-8660-qgic", },
|
||||
{}
|
||||
};
|
||||
|
||||
static void __init msm8x60_dt_init(void)
|
||||
{
|
||||
irq_domain_generate_simple(msm_dt_gic_match, MSM8X60_QGIC_DIST_PHYS,
|
||||
GIC_SPI_START);
|
||||
|
||||
if (of_machine_is_compatible("qcom,msm8660-surf")) {
|
||||
printk(KERN_INFO "Init surf UART registers\n");
|
||||
msm8x60_init_uart12dm();
|
||||
|
|
|
@ -17,6 +17,7 @@
|
|||
*
|
||||
* bit 23 - Input/Output (PXA2xx specific)
|
||||
* bit 24 - Wakeup Enable(PXA2xx specific)
|
||||
* bit 25 - Keep Output (PXA2xx specific)
|
||||
*/
|
||||
|
||||
#define MFP_DIR_IN (0x0 << 23)
|
||||
|
@ -25,6 +26,12 @@
|
|||
#define MFP_DIR(x) (((x) >> 23) & 0x1)
|
||||
|
||||
#define MFP_LPM_CAN_WAKEUP (0x1 << 24)
|
||||
|
||||
/*
|
||||
* MFP_LPM_KEEP_OUTPUT must be specified for pins that need to
|
||||
* retain their last output level (low or high).
|
||||
* Note: MFP_LPM_KEEP_OUTPUT has no effect on pins configured for input.
|
||||
*/
|
||||
#define MFP_LPM_KEEP_OUTPUT (0x1 << 25)
|
||||
|
||||
#define WAKEUP_ON_EDGE_RISE (MFP_LPM_CAN_WAKEUP | MFP_LPM_EDGE_RISE)
|
||||
|
|
|
@ -33,6 +33,8 @@
|
|||
#define BANK_OFF(n) (((n) < 3) ? (n) << 2 : 0x100 + (((n) - 3) << 2))
|
||||
#define GPLR(x) __REG2(0x40E00000, BANK_OFF((x) >> 5))
|
||||
#define GPDR(x) __REG2(0x40E00000, BANK_OFF((x) >> 5) + 0x0c)
|
||||
#define GPSR(x) __REG2(0x40E00000, BANK_OFF((x) >> 5) + 0x18)
|
||||
#define GPCR(x) __REG2(0x40E00000, BANK_OFF((x) >> 5) + 0x24)
|
||||
|
||||
#define PWER_WE35 (1 << 24)
|
||||
|
||||
|
@ -348,6 +350,7 @@ static inline void pxa27x_mfp_init(void) {}
|
|||
#ifdef CONFIG_PM
|
||||
static unsigned long saved_gafr[2][4];
|
||||
static unsigned long saved_gpdr[4];
|
||||
static unsigned long saved_gplr[4];
|
||||
static unsigned long saved_pgsr[4];
|
||||
|
||||
static int pxa2xx_mfp_suspend(void)
|
||||
|
@ -366,14 +369,26 @@ static int pxa2xx_mfp_suspend(void)
|
|||
}
|
||||
|
||||
for (i = 0; i <= gpio_to_bank(pxa_last_gpio); i++) {
|
||||
|
||||
saved_gafr[0][i] = GAFR_L(i);
|
||||
saved_gafr[1][i] = GAFR_U(i);
|
||||
saved_gpdr[i] = GPDR(i * 32);
|
||||
saved_gplr[i] = GPLR(i * 32);
|
||||
saved_pgsr[i] = PGSR(i);
|
||||
|
||||
GPDR(i * 32) = gpdr_lpm[i];
|
||||
GPSR(i * 32) = PGSR(i);
|
||||
GPCR(i * 32) = ~PGSR(i);
|
||||
}
|
||||
|
||||
/* set GPDR bits taking into account MFP_LPM_KEEP_OUTPUT */
|
||||
for (i = 0; i < pxa_last_gpio; i++) {
|
||||
if ((gpdr_lpm[gpio_to_bank(i)] & GPIO_bit(i)) ||
|
||||
((gpio_desc[i].config & MFP_LPM_KEEP_OUTPUT) &&
|
||||
(saved_gpdr[gpio_to_bank(i)] & GPIO_bit(i))))
|
||||
GPDR(i) |= GPIO_bit(i);
|
||||
else
|
||||
GPDR(i) &= ~GPIO_bit(i);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -384,6 +399,8 @@ static void pxa2xx_mfp_resume(void)
|
|||
for (i = 0; i <= gpio_to_bank(pxa_last_gpio); i++) {
|
||||
GAFR_L(i) = saved_gafr[0][i];
|
||||
GAFR_U(i) = saved_gafr[1][i];
|
||||
GPSR(i * 32) = saved_gplr[i];
|
||||
GPCR(i * 32) = ~saved_gplr[i];
|
||||
GPDR(i * 32) = saved_gpdr[i];
|
||||
PGSR(i) = saved_pgsr[i];
|
||||
}
|
||||
|
|
|
@ -421,8 +421,11 @@ void __init pxa27x_set_i2c_power_info(struct i2c_pxa_platform_data *info)
|
|||
pxa_register_device(&pxa27x_device_i2c_power, info);
|
||||
}
|
||||
|
||||
static struct pxa_gpio_platform_data pxa27x_gpio_info __initdata = {
|
||||
.gpio_set_wake = gpio_set_wake,
|
||||
};
|
||||
|
||||
static struct platform_device *devices[] __initdata = {
|
||||
&pxa_device_gpio,
|
||||
&pxa27x_device_udc,
|
||||
&pxa_device_pmu,
|
||||
&pxa_device_i2s,
|
||||
|
@ -458,6 +461,7 @@ static int __init pxa27x_init(void)
|
|||
register_syscore_ops(&pxa2xx_mfp_syscore_ops);
|
||||
register_syscore_ops(&pxa2xx_clock_syscore_ops);
|
||||
|
||||
pxa_register_device(&pxa_device_gpio, &pxa27x_gpio_info);
|
||||
ret = platform_add_devices(devices, ARRAY_SIZE(devices));
|
||||
}
|
||||
|
||||
|
|
|
@ -111,10 +111,6 @@ config S3C24XX_SETUP_TS
|
|||
help
|
||||
Compile in platform device definition for Samsung TouchScreen.
|
||||
|
||||
# cpu-specific sections
|
||||
|
||||
if CPU_S3C2410
|
||||
|
||||
config S3C2410_DMA
|
||||
bool
|
||||
depends on S3C24XX_DMA && (CPU_S3C2410 || CPU_S3C2442)
|
||||
|
@ -127,6 +123,10 @@ config S3C2410_PM
|
|||
help
|
||||
Power Management code common to S3C2410 and better
|
||||
|
||||
# cpu-specific sections
|
||||
|
||||
if CPU_S3C2410
|
||||
|
||||
config S3C24XX_SIMTEC_NOR
|
||||
bool
|
||||
help
|
||||
|
|
|
@ -25,6 +25,7 @@
|
|||
#include <linux/gpio_keys.h>
|
||||
#include <linux/input.h>
|
||||
#include <linux/gpio.h>
|
||||
#include <linux/mmc/host.h>
|
||||
#include <linux/interrupt.h>
|
||||
|
||||
#include <asm/hardware/vic.h>
|
||||
|
@ -765,6 +766,7 @@ static void __init goni_pmic_init(void)
|
|||
/* MoviNAND */
|
||||
static struct s3c_sdhci_platdata goni_hsmmc0_data __initdata = {
|
||||
.max_width = 4,
|
||||
.host_caps2 = MMC_CAP2_BROKEN_VOLTAGE,
|
||||
.cd_type = S3C_SDHCI_CD_PERMANENT,
|
||||
};
|
||||
|
||||
|
|
|
@ -306,7 +306,7 @@ void sa11x0_register_irda(struct irda_platform_data *irda)
|
|||
}
|
||||
|
||||
static struct resource sa1100_rtc_resources[] = {
|
||||
DEFINE_RES_MEM(0x90010000, 0x9001003f),
|
||||
DEFINE_RES_MEM(0x90010000, 0x40),
|
||||
DEFINE_RES_IRQ_NAMED(IRQ_RTC1Hz, "rtc 1Hz"),
|
||||
DEFINE_RES_IRQ_NAMED(IRQ_RTCAlrm, "rtc alarm"),
|
||||
};
|
||||
|
|
|
@ -1667,8 +1667,10 @@ void __init u300_init_irq(void)
|
|||
|
||||
for (i = 0; i < U300_VIC_IRQS_END; i++)
|
||||
set_bit(i, (unsigned long *) &mask[0]);
|
||||
vic_init((void __iomem *) U300_INTCON0_VBASE, 0, mask[0], mask[0]);
|
||||
vic_init((void __iomem *) U300_INTCON1_VBASE, 32, mask[1], mask[1]);
|
||||
vic_init((void __iomem *) U300_INTCON0_VBASE, IRQ_U300_INTCON0_START,
|
||||
mask[0], mask[0]);
|
||||
vic_init((void __iomem *) U300_INTCON1_VBASE, IRQ_U300_INTCON1_START,
|
||||
mask[1], mask[1]);
|
||||
}
|
||||
|
||||
|
||||
|
|
|
@ -146,9 +146,6 @@ static struct ab3100_platform_data ab3100_plf_data = {
|
|||
.min_uV = 1800000,
|
||||
.max_uV = 1800000,
|
||||
.valid_modes_mask = REGULATOR_MODE_NORMAL,
|
||||
.valid_ops_mask =
|
||||
REGULATOR_CHANGE_VOLTAGE |
|
||||
REGULATOR_CHANGE_STATUS,
|
||||
.always_on = 1,
|
||||
.boot_on = 1,
|
||||
},
|
||||
|
@ -160,9 +157,6 @@ static struct ab3100_platform_data ab3100_plf_data = {
|
|||
.min_uV = 2500000,
|
||||
.max_uV = 2500000,
|
||||
.valid_modes_mask = REGULATOR_MODE_NORMAL,
|
||||
.valid_ops_mask =
|
||||
REGULATOR_CHANGE_VOLTAGE |
|
||||
REGULATOR_CHANGE_STATUS,
|
||||
.always_on = 1,
|
||||
.boot_on = 1,
|
||||
},
|
||||
|
@ -230,8 +224,7 @@ static struct ab3100_platform_data ab3100_plf_data = {
|
|||
.max_uV = 1800000,
|
||||
.valid_modes_mask = REGULATOR_MODE_NORMAL,
|
||||
.valid_ops_mask =
|
||||
REGULATOR_CHANGE_VOLTAGE |
|
||||
REGULATOR_CHANGE_STATUS,
|
||||
REGULATOR_CHANGE_VOLTAGE,
|
||||
.always_on = 1,
|
||||
.boot_on = 1,
|
||||
},
|
||||
|
|
|
@ -12,101 +12,101 @@
|
|||
#ifndef __MACH_IRQS_H
|
||||
#define __MACH_IRQS_H
|
||||
|
||||
#define IRQ_U300_INTCON0_START 0
|
||||
#define IRQ_U300_INTCON1_START 32
|
||||
#define IRQ_U300_INTCON0_START 1
|
||||
#define IRQ_U300_INTCON1_START 33
|
||||
/* These are on INTCON0 - 30 lines */
|
||||
#define IRQ_U300_IRQ0_EXT 0
|
||||
#define IRQ_U300_IRQ1_EXT 1
|
||||
#define IRQ_U300_DMA 2
|
||||
#define IRQ_U300_VIDEO_ENC_0 3
|
||||
#define IRQ_U300_VIDEO_ENC_1 4
|
||||
#define IRQ_U300_AAIF_RX 5
|
||||
#define IRQ_U300_AAIF_TX 6
|
||||
#define IRQ_U300_AAIF_VGPIO 7
|
||||
#define IRQ_U300_AAIF_WAKEUP 8
|
||||
#define IRQ_U300_PCM_I2S0_FRAME 9
|
||||
#define IRQ_U300_PCM_I2S0_FIFO 10
|
||||
#define IRQ_U300_PCM_I2S1_FRAME 11
|
||||
#define IRQ_U300_PCM_I2S1_FIFO 12
|
||||
#define IRQ_U300_XGAM_GAMCON 13
|
||||
#define IRQ_U300_XGAM_CDI 14
|
||||
#define IRQ_U300_XGAM_CDICON 15
|
||||
#define IRQ_U300_IRQ0_EXT 1
|
||||
#define IRQ_U300_IRQ1_EXT 2
|
||||
#define IRQ_U300_DMA 3
|
||||
#define IRQ_U300_VIDEO_ENC_0 4
|
||||
#define IRQ_U300_VIDEO_ENC_1 5
|
||||
#define IRQ_U300_AAIF_RX 6
|
||||
#define IRQ_U300_AAIF_TX 7
|
||||
#define IRQ_U300_AAIF_VGPIO 8
|
||||
#define IRQ_U300_AAIF_WAKEUP 9
|
||||
#define IRQ_U300_PCM_I2S0_FRAME 10
|
||||
#define IRQ_U300_PCM_I2S0_FIFO 11
|
||||
#define IRQ_U300_PCM_I2S1_FRAME 12
|
||||
#define IRQ_U300_PCM_I2S1_FIFO 13
|
||||
#define IRQ_U300_XGAM_GAMCON 14
|
||||
#define IRQ_U300_XGAM_CDI 15
|
||||
#define IRQ_U300_XGAM_CDICON 16
|
||||
#if defined(CONFIG_MACH_U300_BS2X) || defined(CONFIG_MACH_U300_BS330)
|
||||
/* MMIACC not used on the DB3210 or DB3350 chips */
|
||||
#define IRQ_U300_XGAM_MMIACC 16
|
||||
#define IRQ_U300_XGAM_MMIACC 17
|
||||
#endif
|
||||
#define IRQ_U300_XGAM_PDI 17
|
||||
#define IRQ_U300_XGAM_PDICON 18
|
||||
#define IRQ_U300_XGAM_GAMEACC 19
|
||||
#define IRQ_U300_XGAM_MCIDCT 20
|
||||
#define IRQ_U300_APEX 21
|
||||
#define IRQ_U300_UART0 22
|
||||
#define IRQ_U300_SPI 23
|
||||
#define IRQ_U300_TIMER_APP_OS 24
|
||||
#define IRQ_U300_TIMER_APP_DD 25
|
||||
#define IRQ_U300_TIMER_APP_GP1 26
|
||||
#define IRQ_U300_TIMER_APP_GP2 27
|
||||
#define IRQ_U300_TIMER_OS 28
|
||||
#define IRQ_U300_TIMER_MS 29
|
||||
#define IRQ_U300_KEYPAD_KEYBF 30
|
||||
#define IRQ_U300_KEYPAD_KEYBR 31
|
||||
#define IRQ_U300_XGAM_PDI 18
|
||||
#define IRQ_U300_XGAM_PDICON 19
|
||||
#define IRQ_U300_XGAM_GAMEACC 20
|
||||
#define IRQ_U300_XGAM_MCIDCT 21
|
||||
#define IRQ_U300_APEX 22
|
||||
#define IRQ_U300_UART0 23
|
||||
#define IRQ_U300_SPI 24
|
||||
#define IRQ_U300_TIMER_APP_OS 25
|
||||
#define IRQ_U300_TIMER_APP_DD 26
|
||||
#define IRQ_U300_TIMER_APP_GP1 27
|
||||
#define IRQ_U300_TIMER_APP_GP2 28
|
||||
#define IRQ_U300_TIMER_OS 29
|
||||
#define IRQ_U300_TIMER_MS 30
|
||||
#define IRQ_U300_KEYPAD_KEYBF 31
|
||||
#define IRQ_U300_KEYPAD_KEYBR 32
|
||||
/* These are on INTCON1 - 32 lines */
|
||||
#define IRQ_U300_GPIO_PORT0 32
|
||||
#define IRQ_U300_GPIO_PORT1 33
|
||||
#define IRQ_U300_GPIO_PORT2 34
|
||||
#define IRQ_U300_GPIO_PORT0 33
|
||||
#define IRQ_U300_GPIO_PORT1 34
|
||||
#define IRQ_U300_GPIO_PORT2 35
|
||||
|
||||
#if defined(CONFIG_MACH_U300_BS2X) || defined(CONFIG_MACH_U300_BS330) || \
|
||||
defined(CONFIG_MACH_U300_BS335)
|
||||
/* These are for DB3150, DB3200 and DB3350 */
|
||||
#define IRQ_U300_WDOG 35
|
||||
#define IRQ_U300_EVHIST 36
|
||||
#define IRQ_U300_MSPRO 37
|
||||
#define IRQ_U300_MMCSD_MCIINTR0 38
|
||||
#define IRQ_U300_MMCSD_MCIINTR1 39
|
||||
#define IRQ_U300_I2C0 40
|
||||
#define IRQ_U300_I2C1 41
|
||||
#define IRQ_U300_RTC 42
|
||||
#define IRQ_U300_NFIF 43
|
||||
#define IRQ_U300_NFIF2 44
|
||||
#define IRQ_U300_WDOG 36
|
||||
#define IRQ_U300_EVHIST 37
|
||||
#define IRQ_U300_MSPRO 38
|
||||
#define IRQ_U300_MMCSD_MCIINTR0 39
|
||||
#define IRQ_U300_MMCSD_MCIINTR1 40
|
||||
#define IRQ_U300_I2C0 41
|
||||
#define IRQ_U300_I2C1 42
|
||||
#define IRQ_U300_RTC 43
|
||||
#define IRQ_U300_NFIF 44
|
||||
#define IRQ_U300_NFIF2 45
|
||||
#endif
|
||||
|
||||
/* DB3150 and DB3200 have only 45 IRQs */
|
||||
#if defined(CONFIG_MACH_U300_BS2X) || defined(CONFIG_MACH_U300_BS330)
|
||||
#define U300_VIC_IRQS_END 45
|
||||
#define U300_VIC_IRQS_END 46
|
||||
#endif
|
||||
|
||||
/* The DB3350-specific interrupt lines */
|
||||
#ifdef CONFIG_MACH_U300_BS335
|
||||
#define IRQ_U300_ISP_F0 45
|
||||
#define IRQ_U300_ISP_F1 46
|
||||
#define IRQ_U300_ISP_F2 47
|
||||
#define IRQ_U300_ISP_F3 48
|
||||
#define IRQ_U300_ISP_F4 49
|
||||
#define IRQ_U300_GPIO_PORT3 50
|
||||
#define IRQ_U300_SYSCON_PLL_LOCK 51
|
||||
#define IRQ_U300_UART1 52
|
||||
#define IRQ_U300_GPIO_PORT4 53
|
||||
#define IRQ_U300_GPIO_PORT5 54
|
||||
#define IRQ_U300_GPIO_PORT6 55
|
||||
#define U300_VIC_IRQS_END 56
|
||||
#define IRQ_U300_ISP_F0 46
|
||||
#define IRQ_U300_ISP_F1 47
|
||||
#define IRQ_U300_ISP_F2 48
|
||||
#define IRQ_U300_ISP_F3 49
|
||||
#define IRQ_U300_ISP_F4 50
|
||||
#define IRQ_U300_GPIO_PORT3 51
|
||||
#define IRQ_U300_SYSCON_PLL_LOCK 52
|
||||
#define IRQ_U300_UART1 53
|
||||
#define IRQ_U300_GPIO_PORT4 54
|
||||
#define IRQ_U300_GPIO_PORT5 55
|
||||
#define IRQ_U300_GPIO_PORT6 56
|
||||
#define U300_VIC_IRQS_END 57
|
||||
#endif
|
||||
|
||||
/* The DB3210-specific interrupt lines */
|
||||
#ifdef CONFIG_MACH_U300_BS365
|
||||
#define IRQ_U300_GPIO_PORT3 35
|
||||
#define IRQ_U300_GPIO_PORT4 36
|
||||
#define IRQ_U300_WDOG 37
|
||||
#define IRQ_U300_EVHIST 38
|
||||
#define IRQ_U300_MSPRO 39
|
||||
#define IRQ_U300_MMCSD_MCIINTR0 40
|
||||
#define IRQ_U300_MMCSD_MCIINTR1 41
|
||||
#define IRQ_U300_I2C0 42
|
||||
#define IRQ_U300_I2C1 43
|
||||
#define IRQ_U300_RTC 44
|
||||
#define IRQ_U300_NFIF 45
|
||||
#define IRQ_U300_NFIF2 46
|
||||
#define IRQ_U300_SYSCON_PLL_LOCK 47
|
||||
#define U300_VIC_IRQS_END 48
|
||||
#define IRQ_U300_GPIO_PORT3 36
|
||||
#define IRQ_U300_GPIO_PORT4 37
|
||||
#define IRQ_U300_WDOG 38
|
||||
#define IRQ_U300_EVHIST 39
|
||||
#define IRQ_U300_MSPRO 40
|
||||
#define IRQ_U300_MMCSD_MCIINTR0 41
|
||||
#define IRQ_U300_MMCSD_MCIINTR1 42
|
||||
#define IRQ_U300_I2C0 43
|
||||
#define IRQ_U300_I2C1 44
|
||||
#define IRQ_U300_RTC 45
|
||||
#define IRQ_U300_NFIF 46
|
||||
#define IRQ_U300_NFIF2 47
|
||||
#define IRQ_U300_SYSCON_PLL_LOCK 48
|
||||
#define U300_VIC_IRQS_END 49
|
||||
#endif
|
||||
|
||||
/* Maximum 8*7 GPIO lines */
|
||||
|
@ -117,6 +117,6 @@
|
|||
#define IRQ_U300_GPIO_END (U300_VIC_IRQS_END)
|
||||
#endif
|
||||
|
||||
#define NR_IRQS (IRQ_U300_GPIO_END)
|
||||
#define NR_IRQS (IRQ_U300_GPIO_END - IRQ_U300_INTCON0_START)
|
||||
|
||||
#endif
|
||||
|
|
|
@ -168,7 +168,7 @@ static ssize_t mbox_read_fifo(struct device *dev,
|
|||
return sprintf(buf, "0x%X\n", mbox_value);
|
||||
}
|
||||
|
||||
static DEVICE_ATTR(fifo, S_IWUGO | S_IRUGO, mbox_read_fifo, mbox_write_fifo);
|
||||
static DEVICE_ATTR(fifo, S_IWUSR | S_IRUGO, mbox_read_fifo, mbox_write_fifo);
|
||||
|
||||
static int mbox_show(struct seq_file *s, void *data)
|
||||
{
|
||||
|
|
|
@ -18,6 +18,8 @@
|
|||
#ifndef __PLAT_S3C_SDHCI_H
|
||||
#define __PLAT_S3C_SDHCI_H __FILE__
|
||||
|
||||
#include <plat/devs.h>
|
||||
|
||||
struct platform_device;
|
||||
struct mmc_host;
|
||||
struct mmc_card;
|
||||
|
@ -356,4 +358,30 @@ static inline void exynos4_default_sdhci3(void) { }
|
|||
|
||||
#endif /* CONFIG_EXYNOS4_SETUP_SDHCI */
|
||||
|
||||
static inline void s3c_sdhci_setname(int id, char *name)
|
||||
{
|
||||
switch (id) {
|
||||
#ifdef CONFIG_S3C_DEV_HSMMC
|
||||
case 0:
|
||||
s3c_device_hsmmc0.name = name;
|
||||
break;
|
||||
#endif
|
||||
#ifdef CONFIG_S3C_DEV_HSMMC1
|
||||
case 1:
|
||||
s3c_device_hsmmc1.name = name;
|
||||
break;
|
||||
#endif
|
||||
#ifdef CONFIG_S3C_DEV_HSMMC2
|
||||
case 2:
|
||||
s3c_device_hsmmc2.name = name;
|
||||
break;
|
||||
#endif
|
||||
#ifdef CONFIG_S3C_DEV_HSMMC3
|
||||
case 3:
|
||||
s3c_device_hsmmc3.name = name;
|
||||
break;
|
||||
#endif
|
||||
}
|
||||
}
|
||||
|
||||
#endif /* __PLAT_S3C_SDHCI_H */
|
||||
|
|
|
@ -38,7 +38,7 @@ static struct platform_device rtc_device = {
|
|||
.name = "rtc-bfin",
|
||||
.id = -1,
|
||||
};
|
||||
#endif
|
||||
#endif /* CONFIG_RTC_DRV_BFIN */
|
||||
|
||||
#if defined(CONFIG_SERIAL_BFIN) || defined(CONFIG_SERIAL_BFIN_MODULE)
|
||||
#ifdef CONFIG_SERIAL_BFIN_UART0
|
||||
|
@ -100,7 +100,7 @@ static struct platform_device bfin_uart0_device = {
|
|||
.platform_data = &bfin_uart0_peripherals, /* Passed to driver */
|
||||
},
|
||||
};
|
||||
#endif
|
||||
#endif /* CONFIG_SERIAL_BFIN_UART0 */
|
||||
#ifdef CONFIG_SERIAL_BFIN_UART1
|
||||
static struct resource bfin_uart1_resources[] = {
|
||||
{
|
||||
|
@ -148,7 +148,7 @@ static struct platform_device bfin_uart1_device = {
|
|||
.platform_data = &bfin_uart1_peripherals, /* Passed to driver */
|
||||
},
|
||||
};
|
||||
#endif
|
||||
#endif /* CONFIG_SERIAL_BFIN_UART1 */
|
||||
#ifdef CONFIG_SERIAL_BFIN_UART2
|
||||
static struct resource bfin_uart2_resources[] = {
|
||||
{
|
||||
|
@ -196,8 +196,8 @@ static struct platform_device bfin_uart2_device = {
|
|||
.platform_data = &bfin_uart2_peripherals, /* Passed to driver */
|
||||
},
|
||||
};
|
||||
#endif
|
||||
#endif
|
||||
#endif /* CONFIG_SERIAL_BFIN_UART2 */
|
||||
#endif /* CONFIG_SERIAL_BFIN */
|
||||
|
||||
#if defined(CONFIG_BFIN_SIR) || defined(CONFIG_BFIN_SIR_MODULE)
|
||||
#ifdef CONFIG_BFIN_SIR0
|
||||
|
@ -224,7 +224,7 @@ static struct platform_device bfin_sir0_device = {
|
|||
.num_resources = ARRAY_SIZE(bfin_sir0_resources),
|
||||
.resource = bfin_sir0_resources,
|
||||
};
|
||||
#endif
|
||||
#endif /* CONFIG_BFIN_SIR0 */
|
||||
#ifdef CONFIG_BFIN_SIR1
|
||||
static struct resource bfin_sir1_resources[] = {
|
||||
{
|
||||
|
@ -249,7 +249,7 @@ static struct platform_device bfin_sir1_device = {
|
|||
.num_resources = ARRAY_SIZE(bfin_sir1_resources),
|
||||
.resource = bfin_sir1_resources,
|
||||
};
|
||||
#endif
|
||||
#endif /* CONFIG_BFIN_SIR1 */
|
||||
#ifdef CONFIG_BFIN_SIR2
|
||||
static struct resource bfin_sir2_resources[] = {
|
||||
{
|
||||
|
@ -274,8 +274,8 @@ static struct platform_device bfin_sir2_device = {
|
|||
.num_resources = ARRAY_SIZE(bfin_sir2_resources),
|
||||
.resource = bfin_sir2_resources,
|
||||
};
|
||||
#endif
|
||||
#endif
|
||||
#endif /* CONFIG_BFIN_SIR2 */
|
||||
#endif /* CONFIG_BFIN_SIR */
|
||||
|
||||
#if defined(CONFIG_SERIAL_BFIN_SPORT) || defined(CONFIG_SERIAL_BFIN_SPORT_MODULE)
|
||||
#ifdef CONFIG_SERIAL_BFIN_SPORT0_UART
|
||||
|
@ -311,7 +311,7 @@ static struct platform_device bfin_sport0_uart_device = {
|
|||
.platform_data = &bfin_sport0_peripherals, /* Passed to driver */
|
||||
},
|
||||
};
|
||||
#endif
|
||||
#endif /* CONFIG_SERIAL_BFIN_SPORT0_UART */
|
||||
#ifdef CONFIG_SERIAL_BFIN_SPORT1_UART
|
||||
static struct resource bfin_sport1_uart_resources[] = {
|
||||
{
|
||||
|
@ -345,7 +345,7 @@ static struct platform_device bfin_sport1_uart_device = {
|
|||
.platform_data = &bfin_sport1_peripherals, /* Passed to driver */
|
||||
},
|
||||
};
|
||||
#endif
|
||||
#endif /* CONFIG_SERIAL_BFIN_SPORT1_UART */
|
||||
#ifdef CONFIG_SERIAL_BFIN_SPORT2_UART
|
||||
static struct resource bfin_sport2_uart_resources[] = {
|
||||
{
|
||||
|
@ -379,7 +379,7 @@ static struct platform_device bfin_sport2_uart_device = {
|
|||
.platform_data = &bfin_sport2_peripherals, /* Passed to driver */
|
||||
},
|
||||
};
|
||||
#endif
|
||||
#endif /* CONFIG_SERIAL_BFIN_SPORT2_UART */
|
||||
#ifdef CONFIG_SERIAL_BFIN_SPORT3_UART
|
||||
static struct resource bfin_sport3_uart_resources[] = {
|
||||
{
|
||||
|
@ -413,8 +413,8 @@ static struct platform_device bfin_sport3_uart_device = {
|
|||
.platform_data = &bfin_sport3_peripherals, /* Passed to driver */
|
||||
},
|
||||
};
|
||||
#endif
|
||||
#endif
|
||||
#endif /* CONFIG_SERIAL_BFIN_SPORT3_UART */
|
||||
#endif /* CONFIG_SERIAL_BFIN_SPORT */
|
||||
|
||||
#if defined(CONFIG_CAN_BFIN) || defined(CONFIG_CAN_BFIN_MODULE)
|
||||
static unsigned short bfin_can_peripherals[] = {
|
||||
|
@ -452,7 +452,7 @@ static struct platform_device bfin_can_device = {
|
|||
.platform_data = &bfin_can_peripherals, /* Passed to driver */
|
||||
},
|
||||
};
|
||||
#endif
|
||||
#endif /* CONFIG_CAN_BFIN */
|
||||
|
||||
/*
|
||||
* USB-LAN EzExtender board
|
||||
|
@ -488,7 +488,7 @@ static struct platform_device smc91x_device = {
|
|||
.platform_data = &smc91x_info,
|
||||
},
|
||||
};
|
||||
#endif
|
||||
#endif /* CONFIG_SMC91X */
|
||||
|
||||
#if defined(CONFIG_SPI_BFIN5XX) || defined(CONFIG_SPI_BFIN5XX_MODULE)
|
||||
/* all SPI peripherals info goes here */
|
||||
|
@ -518,7 +518,8 @@ static struct flash_platform_data bfin_spi_flash_data = {
|
|||
static struct bfin5xx_spi_chip spi_flash_chip_info = {
|
||||
.enable_dma = 0, /* use dma transfer with this chip*/
|
||||
};
|
||||
#endif
|
||||
#endif /* CONFIG_MTD_M25P80 */
|
||||
#endif /* CONFIG_SPI_BFIN5XX */
|
||||
|
||||
#if defined(CONFIG_TOUCHSCREEN_AD7879) || defined(CONFIG_TOUCHSCREEN_AD7879_MODULE)
|
||||
#include <linux/spi/ad7879.h>
|
||||
|
@ -535,7 +536,7 @@ static const struct ad7879_platform_data bfin_ad7879_ts_info = {
|
|||
.gpio_export = 1, /* Export GPIO to gpiolib */
|
||||
.gpio_base = -1, /* Dynamic allocation */
|
||||
};
|
||||
#endif
|
||||
#endif /* CONFIG_TOUCHSCREEN_AD7879 */
|
||||
|
||||
#if defined(CONFIG_FB_BFIN_LQ035Q1) || defined(CONFIG_FB_BFIN_LQ035Q1_MODULE)
|
||||
#include <asm/bfin-lq035q1.h>
|
||||
|
@ -564,7 +565,7 @@ static struct platform_device bfin_lq035q1_device = {
|
|||
.platform_data = &bfin_lq035q1_data,
|
||||
},
|
||||
};
|
||||
#endif
|
||||
#endif /* CONFIG_FB_BFIN_LQ035Q1 */
|
||||
|
||||
static struct spi_board_info bf538_spi_board_info[] __initdata = {
|
||||
#if defined(CONFIG_MTD_M25P80) \
|
||||
|
@ -579,7 +580,7 @@ static struct spi_board_info bf538_spi_board_info[] __initdata = {
|
|||
.controller_data = &spi_flash_chip_info,
|
||||
.mode = SPI_MODE_3,
|
||||
},
|
||||
#endif
|
||||
#endif /* CONFIG_MTD_M25P80 */
|
||||
#if defined(CONFIG_TOUCHSCREEN_AD7879_SPI) || defined(CONFIG_TOUCHSCREEN_AD7879_SPI_MODULE)
|
||||
{
|
||||
.modalias = "ad7879",
|
||||
|
@ -590,7 +591,7 @@ static struct spi_board_info bf538_spi_board_info[] __initdata = {
|
|||
.chip_select = 1,
|
||||
.mode = SPI_CPHA | SPI_CPOL,
|
||||
},
|
||||
#endif
|
||||
#endif /* CONFIG_TOUCHSCREEN_AD7879_SPI */
|
||||
#if defined(CONFIG_FB_BFIN_LQ035Q1) || defined(CONFIG_FB_BFIN_LQ035Q1_MODULE)
|
||||
{
|
||||
.modalias = "bfin-lq035q1-spi",
|
||||
|
@ -599,7 +600,7 @@ static struct spi_board_info bf538_spi_board_info[] __initdata = {
|
|||
.chip_select = 2,
|
||||
.mode = SPI_CPHA | SPI_CPOL,
|
||||
},
|
||||
#endif
|
||||
#endif /* CONFIG_FB_BFIN_LQ035Q1 */
|
||||
#if defined(CONFIG_SPI_SPIDEV) || defined(CONFIG_SPI_SPIDEV_MODULE)
|
||||
{
|
||||
.modalias = "spidev",
|
||||
|
@ -607,7 +608,7 @@ static struct spi_board_info bf538_spi_board_info[] __initdata = {
|
|||
.bus_num = 0,
|
||||
.chip_select = 1,
|
||||
},
|
||||
#endif
|
||||
#endif /* CONFIG_SPI_SPIDEV */
|
||||
};
|
||||
|
||||
/* SPI (0) */
|
||||
|
@ -716,8 +717,6 @@ static struct platform_device bf538_spi_master2 = {
|
|||
},
|
||||
};
|
||||
|
||||
#endif /* spi master and devices */
|
||||
|
||||
#if defined(CONFIG_I2C_BLACKFIN_TWI) || defined(CONFIG_I2C_BLACKFIN_TWI_MODULE)
|
||||
static struct resource bfin_twi0_resource[] = {
|
||||
[0] = {
|
||||
|
@ -759,8 +758,8 @@ static struct platform_device i2c_bfin_twi1_device = {
|
|||
.num_resources = ARRAY_SIZE(bfin_twi1_resource),
|
||||
.resource = bfin_twi1_resource,
|
||||
};
|
||||
#endif
|
||||
#endif
|
||||
#endif /* CONFIG_BF542 */
|
||||
#endif /* CONFIG_I2C_BLACKFIN_TWI */
|
||||
|
||||
#if defined(CONFIG_KEYBOARD_GPIO) || defined(CONFIG_KEYBOARD_GPIO_MODULE)
|
||||
#include <linux/gpio_keys.h>
|
||||
|
|
|
@ -22,6 +22,7 @@
|
|||
#include <linux/bootmem.h>
|
||||
#include <linux/genalloc.h>
|
||||
#include <asm/dma-mapping.h>
|
||||
#include <linux/module.h>
|
||||
|
||||
struct dma_map_ops *dma_ops;
|
||||
EXPORT_SYMBOL(dma_ops);
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
/*
|
||||
* Process creation support for Hexagon
|
||||
*
|
||||
* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved.
|
||||
* Copyright (c) 2010-2012, Code Aurora Forum. All rights reserved.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 and
|
||||
|
@ -88,7 +88,7 @@ void (*idle_sleep)(void) = default_idle;
|
|||
void cpu_idle(void)
|
||||
{
|
||||
while (1) {
|
||||
tick_nohz_stop_sched_tick(1);
|
||||
tick_nohz_idle_enter();
|
||||
local_irq_disable();
|
||||
while (!need_resched()) {
|
||||
idle_sleep();
|
||||
|
@ -97,7 +97,7 @@ void cpu_idle(void)
|
|||
local_irq_disable();
|
||||
}
|
||||
local_irq_enable();
|
||||
tick_nohz_restart_sched_tick();
|
||||
tick_nohz_idle_exit();
|
||||
schedule();
|
||||
}
|
||||
}
|
||||
|
|
|
@ -28,6 +28,7 @@
|
|||
#include <linux/ptrace.h>
|
||||
#include <linux/regset.h>
|
||||
#include <linux/user.h>
|
||||
#include <linux/elf.h>
|
||||
|
||||
#include <asm/user.h>
|
||||
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
/*
|
||||
* SMP support for Hexagon
|
||||
*
|
||||
* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved.
|
||||
* Copyright (c) 2010-2012, Code Aurora Forum. All rights reserved.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 and
|
||||
|
@ -28,6 +28,7 @@
|
|||
#include <linux/sched.h>
|
||||
#include <linux/smp.h>
|
||||
#include <linux/spinlock.h>
|
||||
#include <linux/cpu.h>
|
||||
|
||||
#include <asm/time.h> /* timer_interrupt */
|
||||
#include <asm/hexagon_vm.h>
|
||||
|
@ -177,7 +178,12 @@ void __cpuinit start_secondary(void)
|
|||
|
||||
printk(KERN_INFO "%s cpu %d\n", __func__, current_thread_info()->cpu);
|
||||
|
||||
notify_cpu_starting(cpu);
|
||||
|
||||
ipi_call_lock();
|
||||
set_cpu_online(cpu, true);
|
||||
ipi_call_unlock();
|
||||
|
||||
local_irq_enable();
|
||||
|
||||
cpu_idle();
|
||||
|
|
|
@ -28,6 +28,7 @@
|
|||
#include <linux/of.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/of_irq.h>
|
||||
#include <linux/module.h>
|
||||
|
||||
#include <asm/timer-regs.h>
|
||||
#include <asm/hexagon_vm.h>
|
||||
|
|
|
@ -21,6 +21,7 @@
|
|||
#include <linux/err.h>
|
||||
#include <linux/mm.h>
|
||||
#include <linux/vmalloc.h>
|
||||
#include <linux/binfmts.h>
|
||||
|
||||
#include <asm/vdso.h>
|
||||
|
||||
|
|
arch/powerpc/boot/dts/fsl/pq3-mpic-message-B.dtsi (new file, +43)
@@ -0,0 +1,43 @@
/*
|
||||
* PQ3 MPIC Message (Group B) device tree stub [ controller @ offset 0x42400 ]
|
||||
*
|
||||
* Copyright 2012 Freescale Semiconductor Inc.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
* modification, are permitted provided that the following conditions are met:
|
||||
* * Redistributions of source code must retain the above copyright
|
||||
* notice, this list of conditions and the following disclaimer.
|
||||
* * Redistributions in binary form must reproduce the above copyright
|
||||
* notice, this list of conditions and the following disclaimer in the
|
||||
* documentation and/or other materials provided with the distribution.
|
||||
* * Neither the name of Freescale Semiconductor nor the
|
||||
* names of its contributors may be used to endorse or promote products
|
||||
* derived from this software without specific prior written permission.
|
||||
*
|
||||
*
|
||||
* ALTERNATIVELY, this software may be distributed under the terms of the
|
||||
* GNU General Public License ("GPL") as published by the Free Software
|
||||
* Foundation, either version 2 of that License or (at your option) any
|
||||
* later version.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
|
||||
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
|
||||
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
|
||||
* DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
|
||||
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
|
||||
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
|
||||
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
|
||||
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
|
||||
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
*/
|
||||
|
||||
message@42400 {
|
||||
compatible = "fsl,mpic-v3.1-msgr";
|
||||
reg = <0x42400 0x200>;
|
||||
interrupts = <
|
||||
0xb4 2 0 0
|
||||
0xb5 2 0 0
|
||||
0xb6 2 0 0
|
||||
0xb7 2 0 0>;
|
||||
};
|
|
@ -53,6 +53,16 @@ timer@41100 {
|
|||
3 0 3 0>;
|
||||
};
|
||||
|
||||
message@41400 {
|
||||
compatible = "fsl,mpic-v3.1-msgr";
|
||||
reg = <0x41400 0x200>;
|
||||
interrupts = <
|
||||
0xb0 2 0 0
|
||||
0xb1 2 0 0
|
||||
0xb2 2 0 0
|
||||
0xb3 2 0 0>;
|
||||
};
|
||||
|
||||
msi@41600 {
|
||||
compatible = "fsl,mpic-msi";
|
||||
reg = <0x41600 0x80>;
|
||||
|
|
|
@ -275,9 +275,6 @@ struct mpic
|
|||
unsigned int isu_mask;
|
||||
/* Number of sources */
|
||||
unsigned int num_sources;
|
||||
/* default senses array */
|
||||
unsigned char *senses;
|
||||
unsigned int senses_count;
|
||||
|
||||
/* vector numbers used for internal sources (ipi/timers) */
|
||||
unsigned int ipi_vecs[4];
|
||||
|
@ -415,21 +412,6 @@ extern struct mpic *mpic_alloc(struct device_node *node,
|
|||
extern void mpic_assign_isu(struct mpic *mpic, unsigned int isu_num,
|
||||
phys_addr_t phys_addr);
|
||||
|
||||
/* Set default sense codes
|
||||
*
|
||||
* @mpic: controller
|
||||
* @senses: array of sense codes
|
||||
* @count: size of above array
|
||||
*
|
||||
* Optionally provide an array (indexed on hardware interrupt numbers
|
||||
* for this MPIC) of default sense codes for the chip. Those are linux
|
||||
* sense codes IRQ_TYPE_*
|
||||
*
|
||||
* The driver gets ownership of the pointer, don't dispose of it or
|
||||
* anything like that. __init only.
|
||||
*/
|
||||
extern void mpic_set_default_senses(struct mpic *mpic, u8 *senses, int count);
|
||||
|
||||
|
||||
/* Initialize the controller. After this has been called, none of the above
|
||||
* should be called again for this mpic
|
||||
|
|
|
@ -13,6 +13,7 @@
|
|||
|
||||
#include <linux/types.h>
|
||||
#include <linux/spinlock.h>
|
||||
#include <asm/smp.h>
|
||||
|
||||
struct mpic_msgr {
|
||||
u32 __iomem *base;
|
||||
|
|
|
@ -15,11 +15,6 @@
|
|||
#ifndef __ASM_POWERPC_REG_BOOKE_H__
|
||||
#define __ASM_POWERPC_REG_BOOKE_H__
|
||||
|
||||
#ifdef CONFIG_BOOKE_WDT
|
||||
extern u32 booke_wdt_enabled;
|
||||
extern u32 booke_wdt_period;
|
||||
#endif /* CONFIG_BOOKE_WDT */
|
||||
|
||||
/* Machine State Register (MSR) Fields */
|
||||
#define MSR_GS (1<<28) /* Guest state */
|
||||
#define MSR_UCLE (1<<26) /* User-mode cache lock enable */
|
||||
|
|
|
@ -150,6 +150,9 @@ notrace void __init machine_init(u64 dt_ptr)
|
|||
}
|
||||
|
||||
#ifdef CONFIG_BOOKE_WDT
|
||||
extern u32 booke_wdt_enabled;
|
||||
extern u32 booke_wdt_period;
|
||||
|
||||
/* Checks wdt=x and wdt_period=xx command-line option */
|
||||
notrace int __init early_parse_wdt(char *p)
|
||||
{
|
||||
|
|
|
@ -21,6 +21,12 @@ static struct of_device_id __initdata mpc85xx_common_ids[] = {
|
|||
{ .compatible = "fsl,qe", },
|
||||
{ .compatible = "fsl,cpm2", },
|
||||
{ .compatible = "fsl,srio", },
|
||||
/* So that the DMA channel nodes can be probed individually: */
|
||||
{ .compatible = "fsl,eloplus-dma", },
|
||||
/* For the PMC driver */
|
||||
{ .compatible = "fsl,mpc8548-guts", },
|
||||
/* Probably unnecessary? */
|
||||
{ .compatible = "gpio-leds", },
|
||||
{},
|
||||
};
|
||||
|
||||
|
|
|
@ -399,12 +399,6 @@ static int __init board_fixups(void)
|
|||
machine_arch_initcall(mpc8568_mds, board_fixups);
|
||||
machine_arch_initcall(mpc8569_mds, board_fixups);
|
||||
|
||||
static struct of_device_id mpc85xx_ids[] = {
|
||||
{ .compatible = "fsl,mpc8548-guts", },
|
||||
{ .compatible = "gpio-leds", },
|
||||
{},
|
||||
};
|
||||
|
||||
static int __init mpc85xx_publish_devices(void)
|
||||
{
|
||||
if (machine_is(mpc8568_mds))
|
||||
|
@ -412,10 +406,7 @@ static int __init mpc85xx_publish_devices(void)
|
|||
if (machine_is(mpc8569_mds))
|
||||
simple_gpiochip_init("fsl,mpc8569mds-bcsr-gpio");
|
||||
|
||||
mpc85xx_common_publish_devices();
|
||||
of_platform_bus_probe(NULL, mpc85xx_ids, NULL);
|
||||
|
||||
return 0;
|
||||
return mpc85xx_common_publish_devices();
|
||||
}
|
||||
|
||||
machine_device_initcall(mpc8568_mds, mpc85xx_publish_devices);
|
||||
|
|
|
@ -460,18 +460,7 @@ static void __init p1022_ds_setup_arch(void)
|
|||
pr_info("Freescale P1022 DS reference board\n");
|
||||
}
|
||||
|
||||
static struct of_device_id __initdata p1022_ds_ids[] = {
|
||||
/* So that the DMA channel nodes can be probed individually: */
|
||||
{ .compatible = "fsl,eloplus-dma", },
|
||||
{},
|
||||
};
|
||||
|
||||
static int __init p1022_ds_publish_devices(void)
|
||||
{
|
||||
mpc85xx_common_publish_devices();
|
||||
return of_platform_bus_probe(NULL, p1022_ds_ids, NULL);
|
||||
}
|
||||
machine_device_initcall(p1022_ds, p1022_ds_publish_devices);
|
||||
machine_device_initcall(p1022_ds, mpc85xx_common_publish_devices);
|
||||
|
||||
machine_arch_initcall(p1022_ds, swiotlb_setup_bus_notifier);
|
||||
|
||||
|
|
|
@ -366,11 +366,20 @@ static void kw_i2c_timeout(unsigned long data)
|
|||
unsigned long flags;
|
||||
|
||||
spin_lock_irqsave(&host->lock, flags);
|
||||
|
||||
/*
|
||||
* If the timer is pending, that means we raced with the
|
||||
* irq, in which case we just return
|
||||
*/
|
||||
if (timer_pending(&host->timeout_timer))
|
||||
goto skip;
|
||||
|
||||
kw_i2c_handle_interrupt(host, kw_read_reg(reg_isr));
|
||||
if (host->state != state_idle) {
|
||||
host->timeout_timer.expires = jiffies + KW_POLL_TIMEOUT;
|
||||
add_timer(&host->timeout_timer);
|
||||
}
|
||||
skip:
|
||||
spin_unlock_irqrestore(&host->lock, flags);
|
||||
}
|
||||
|
||||
|
|
|
@ -1076,7 +1076,7 @@ static void eeh_add_device_late(struct pci_dev *dev)
|
|||
pr_debug("EEH: Adding device %s\n", pci_name(dev));
|
||||
|
||||
dn = pci_device_to_OF_node(dev);
|
||||
edev = pci_dev_to_eeh_dev(dev);
|
||||
edev = of_node_to_eeh_dev(dn);
|
||||
if (edev->pdev == dev) {
|
||||
pr_debug("EEH: Already referenced !\n");
|
||||
return;
|
||||
|
|
|
@@ -604,18 +604,14 @@ static struct mpic *mpic_find(unsigned int irq)
}

/* Determine if the linux irq is an IPI */
static unsigned int mpic_is_ipi(struct mpic *mpic, unsigned int irq)
static unsigned int mpic_is_ipi(struct mpic *mpic, unsigned int src)
{
        unsigned int src = virq_to_hw(irq);

        return (src >= mpic->ipi_vecs[0] && src <= mpic->ipi_vecs[3]);
}

/* Determine if the linux irq is a timer */
static unsigned int mpic_is_tm(struct mpic *mpic, unsigned int irq)
static unsigned int mpic_is_tm(struct mpic *mpic, unsigned int src)
{
        unsigned int src = virq_to_hw(irq);

        return (src >= mpic->timer_vecs[0] && src <= mpic->timer_vecs[7]);
}

@@ -876,21 +872,45 @@ int mpic_set_irq_type(struct irq_data *d, unsigned int flow_type)
        if (src >= mpic->num_sources)
                return -EINVAL;

        if (flow_type == IRQ_TYPE_NONE)
                if (mpic->senses && src < mpic->senses_count)
                        flow_type = mpic->senses[src];
        if (flow_type == IRQ_TYPE_NONE)
                flow_type = IRQ_TYPE_LEVEL_LOW;
        vold = mpic_irq_read(src, MPIC_INFO(IRQ_VECTOR_PRI));

        /* We don't support "none" type */
        if (flow_type == IRQ_TYPE_NONE)
                flow_type = IRQ_TYPE_DEFAULT;

        /* Default: read HW settings */
        if (flow_type == IRQ_TYPE_DEFAULT) {
                switch(vold & (MPIC_INFO(VECPRI_POLARITY_MASK) |
                               MPIC_INFO(VECPRI_SENSE_MASK))) {
                case MPIC_INFO(VECPRI_SENSE_EDGE) |
                     MPIC_INFO(VECPRI_POLARITY_POSITIVE):
                        flow_type = IRQ_TYPE_EDGE_RISING;
                        break;
                case MPIC_INFO(VECPRI_SENSE_EDGE) |
                     MPIC_INFO(VECPRI_POLARITY_NEGATIVE):
                        flow_type = IRQ_TYPE_EDGE_FALLING;
                        break;
                case MPIC_INFO(VECPRI_SENSE_LEVEL) |
                     MPIC_INFO(VECPRI_POLARITY_POSITIVE):
                        flow_type = IRQ_TYPE_LEVEL_HIGH;
                        break;
                case MPIC_INFO(VECPRI_SENSE_LEVEL) |
                     MPIC_INFO(VECPRI_POLARITY_NEGATIVE):
                        flow_type = IRQ_TYPE_LEVEL_LOW;
                        break;
                }
        }

        /* Apply to irq desc */
        irqd_set_trigger_type(d, flow_type);

        /* Apply to HW */
        if (mpic_is_ht_interrupt(mpic, src))
                vecpri = MPIC_VECPRI_POLARITY_POSITIVE |
                        MPIC_VECPRI_SENSE_EDGE;
        else
                vecpri = mpic_type_to_vecpri(mpic, flow_type);

        vold = mpic_irq_read(src, MPIC_INFO(IRQ_VECTOR_PRI));
        vnew = vold & ~(MPIC_INFO(VECPRI_POLARITY_MASK) |
                        MPIC_INFO(VECPRI_SENSE_MASK));
        vnew |= vecpri;

@@ -1026,7 +1046,7 @@ static int mpic_host_map(struct irq_domain *h, unsigned int virq,
        irq_set_chip_and_handler(virq, chip, handle_fasteoi_irq);

        /* Set default irq type */
        irq_set_irq_type(virq, IRQ_TYPE_NONE);
        irq_set_irq_type(virq, IRQ_TYPE_DEFAULT);

        /* If the MPIC was reset, then all vectors have already been
         * initialized.  Otherwise, a per source lazy initialization

@@ -1417,12 +1437,6 @@ void __init mpic_assign_isu(struct mpic *mpic, unsigned int isu_num,
        mpic->num_sources = isu_first + mpic->isu_size;
}

void __init mpic_set_default_senses(struct mpic *mpic, u8 *senses, int count)
{
        mpic->senses = senses;
        mpic->senses_count = count;
}

void __init mpic_init(struct mpic *mpic)
{
        int i, cpu;

@@ -1555,12 +1569,12 @@ void mpic_irq_set_priority(unsigned int irq, unsigned int pri)
                return;

        raw_spin_lock_irqsave(&mpic_lock, flags);
        if (mpic_is_ipi(mpic, irq)) {
        if (mpic_is_ipi(mpic, src)) {
                reg = mpic_ipi_read(src - mpic->ipi_vecs[0]) &
                        ~MPIC_VECPRI_PRIORITY_MASK;
                mpic_ipi_write(src - mpic->ipi_vecs[0],
                               reg | (pri << MPIC_VECPRI_PRIORITY_SHIFT));
        } else if (mpic_is_tm(mpic, irq)) {
        } else if (mpic_is_tm(mpic, src)) {
                reg = mpic_tm_read(src - mpic->timer_vecs[0]) &
                        ~MPIC_VECPRI_PRIORITY_MASK;
                mpic_tm_write(src - mpic->timer_vecs[0],

@@ -27,6 +27,7 @@

static struct mpic_msgr **mpic_msgrs;
static unsigned int mpic_msgr_count;
static DEFINE_RAW_SPINLOCK(msgrs_lock);

static inline void _mpic_msgr_mer_write(struct mpic_msgr *msgr, u32 value)
{

@@ -56,12 +57,11 @@ struct mpic_msgr *mpic_msgr_get(unsigned int reg_num)
        if (reg_num >= mpic_msgr_count)
                return ERR_PTR(-ENODEV);

        raw_spin_lock_irqsave(&msgr->lock, flags);
        if (mpic_msgrs[reg_num]->in_use == MSGR_FREE) {
                msgr = mpic_msgrs[reg_num];
        raw_spin_lock_irqsave(&msgrs_lock, flags);
        msgr = mpic_msgrs[reg_num];
        if (msgr->in_use == MSGR_FREE)
                msgr->in_use = MSGR_INUSE;
        }
        raw_spin_unlock_irqrestore(&msgr->lock, flags);
        raw_spin_unlock_irqrestore(&msgrs_lock, flags);

        return msgr;
}

@@ -228,7 +228,7 @@ static __devinit int mpic_msgr_probe(struct platform_device *dev)

                reg_number = block_number * MPIC_MSGR_REGISTERS_PER_BLOCK + i;
                msgr->base = msgr_block_addr + i * MPIC_MSGR_STRIDE;
                msgr->mer = msgr->base + MPIC_MSGR_MER_OFFSET;
                msgr->mer = (u32 *)((u8 *)msgr->base + MPIC_MSGR_MER_OFFSET);
                msgr->in_use = MSGR_FREE;
                msgr->num = i;
                raw_spin_lock_init(&msgr->lock);

@@ -22,6 +22,7 @@
#include <linux/debugfs.h>
#include <linux/slab.h>
#include <linux/export.h>
#include <asm/debug.h>
#include <asm/prom.h>
#include <asm/scom.h>

@@ -11,7 +11,7 @@
#include <linux/types.h>
#include <asm/cmpxchg.h>

#define ATOMIC_INIT(i) ( (atomic_t) { (i) } )
#define ATOMIC_INIT(i) { (i) }

#define atomic_read(v) (*(volatile int *)&(v)->counter)
#define atomic_set(v,i) ((v)->counter = (i))

@@ -86,7 +86,7 @@ static noinline int vmalloc_fault(unsigned long address)
        pte_t *pte_k;

        /* Make sure we are in vmalloc/module/P3 area: */
        if (!(address >= VMALLOC_START && address < P3_ADDR_MAX))
        if (!(address >= P3SEG && address < P3_ADDR_MAX))
                return -1;

        /*

@@ -47,8 +47,8 @@ struct pci_controller {
 */
#define PCI_DMA_BUS_IS_PHYS     1

int __devinit tile_pci_init(void);
int __devinit pcibios_init(void);
int __init tile_pci_init(void);
int __init pcibios_init(void);

static inline void pci_iounmap(struct pci_dev *dev, void __iomem *addr) {}

@@ -141,7 +141,7 @@ static int __devinit tile_init_irqs(int controller_id,
 *
 * Returns the number of controllers discovered.
 */
int __devinit tile_pci_init(void)
int __init tile_pci_init(void)
{
        int i;

@@ -287,7 +287,7 @@ static void __devinit fixup_read_and_payload_sizes(void)
 * The controllers have been set up by the time we get here, by a call to
 * tile_pci_init.
 */
int __devinit pcibios_init(void)
int __init pcibios_init(void)
{
        int i;

@@ -33,6 +33,9 @@
        __HEAD
ENTRY(startup_32)
#ifdef CONFIG_EFI_STUB
        jmp preferred_addr

        .balign 0x10
        /*
         * We don't need the return address, so set up the stack so
         * efi_main() can find its arugments.

@@ -41,12 +44,17 @@ ENTRY(startup_32)

        call efi_main
        cmpl $0, %eax
        je preferred_addr
        movl %eax, %esi
        call 1f
        jne 2f
1:
        /* EFI init failed, so hang. */
        hlt
        jmp 1b
2:
        call 3f
3:
        popl %eax
        subl $1b, %eax
        subl $3b, %eax
        subl BP_pref_address(%esi), %eax
        add BP_code32_start(%esi), %eax
        leal preferred_addr(%eax), %eax

@@ -200,18 +200,28 @@ ENTRY(startup_64)
         * entire text+data+bss and hopefully all of memory.
         */
#ifdef CONFIG_EFI_STUB
        pushq %rsi
        /*
         * The entry point for the PE/COFF executable is 0x210, so only
         * legacy boot loaders will execute this jmp.
         */
        jmp preferred_addr

        .org 0x210
        mov %rcx, %rdi
        mov %rdx, %rsi
        call efi_main
        popq %rsi
        cmpq $0,%rax
        je preferred_addr
        movq %rax,%rsi
        call 1f
        cmpq $0,%rax
        jne 2f
1:
        /* EFI init failed, so hang. */
        hlt
        jmp 1b
2:
        call 3f
3:
        popq %rax
        subq $1b, %rax
        subq $3b, %rax
        subq BP_pref_address(%rsi), %rax
        add BP_code32_start(%esi), %eax
        leaq preferred_addr(%rax), %rax

@@ -205,8 +205,13 @@ int main(int argc, char ** argv)
        put_unaligned_le32(file_sz, &buf[pe_header + 0x50]);

#ifdef CONFIG_X86_32
        /* Address of entry point */
        put_unaligned_le32(i, &buf[pe_header + 0x28]);
        /*
         * Address of entry point.
         *
         * The EFI stub entry point is +16 bytes from the start of
         * the .text section.
         */
        put_unaligned_le32(i + 16, &buf[pe_header + 0x28]);

        /* .text size */
        put_unaligned_le32(file_sz, &buf[pe_header + 0xb0]);

@@ -217,9 +222,11 @@ int main(int argc, char ** argv)
        /*
         * Address of entry point. startup_32 is at the beginning and
         * the 64-bit entry point (startup_64) is always 512 bytes
         * after.
         * after. The EFI stub entry point is 16 bytes after that, as
         * the first instruction allows legacy loaders to jump over
         * the EFI stub initialisation
         */
        put_unaligned_le32(i + 512, &buf[pe_header + 0x28]);
        put_unaligned_le32(i + 528, &buf[pe_header + 0x28]);

        /* .text size */
        put_unaligned_le32(file_sz, &buf[pe_header + 0xc0]);

@@ -7,9 +7,9 @@
#else
# ifdef __i386__
#  include "posix_types_32.h"
# elif defined(__LP64__)
#  include "posix_types_64.h"
# else
# elif defined(__ILP32__)
#  include "posix_types_x32.h"
# else
#  include "posix_types_64.h"
# endif
#endif

@@ -257,7 +257,7 @@ struct sigcontext {
        __u64 oldmask;
        __u64 cr2;
        struct _fpstate __user *fpstate;        /* zero when no FPU context */
#ifndef __LP64__
#ifdef __ILP32__
        __u32 __fpstate_pad;
#endif
        __u64 reserved1[8];

@@ -2,7 +2,13 @@
#define _ASM_X86_SIGINFO_H

#ifdef __x86_64__
# define __ARCH_SI_PREAMBLE_SIZE (4 * sizeof(int))
# ifdef __ILP32__ /* x32 */
typedef long long __kernel_si_clock_t __attribute__((aligned(4)));
#  define __ARCH_SI_CLOCK_T __kernel_si_clock_t
#  define __ARCH_SI_ATTRIBUTES __attribute__((aligned(8)))
# else /* x86-64 */
#  define __ARCH_SI_PREAMBLE_SIZE (4 * sizeof(int))
# endif
#endif

#include <asm-generic/siginfo.h>

@@ -63,10 +63,10 @@
#else
# ifdef __i386__
#  include <asm/unistd_32.h>
# elif defined(__LP64__)
#  include <asm/unistd_64.h>
# else
# elif defined(__ILP32__)
#  include <asm/unistd_x32.h>
# else
#  include <asm/unistd_64.h>
# endif
#endif

@@ -195,6 +195,5 @@ extern struct x86_msi_ops x86_msi;

extern void x86_init_noop(void);
extern void x86_init_uint_noop(unsigned int unused);
extern void x86_default_fixup_cpu_id(struct cpuinfo_x86 *c, int node);

#endif

@@ -24,6 +24,10 @@ unsigned long acpi_realmode_flags;
static char temp_stack[4096];
#endif

asmlinkage void acpi_enter_s3(void)
{
        acpi_enter_sleep_state(3, wake_sleep_flags);
}
/**
 * acpi_suspend_lowlevel - save kernel state
 *

@@ -3,12 +3,16 @@
 */

#include <asm/trampoline.h>
#include <linux/linkage.h>

extern unsigned long saved_video_mode;
extern long saved_magic;

extern int wakeup_pmode_return;

extern u8 wake_sleep_flags;
extern asmlinkage void acpi_enter_s3(void);

extern unsigned long acpi_copy_wakeup_routine(unsigned long);
extern void wakeup_long64(void);

@@ -74,9 +74,7 @@ restore_registers:
ENTRY(do_suspend_lowlevel)
        call save_processor_state
        call save_registers
        pushl $3
        call acpi_enter_sleep_state
        addl $4, %esp
        call acpi_enter_s3

#       In case of S3 failure, we'll emerge here. Jump
#       to ret_point to recover

@@ -71,9 +71,7 @@ ENTRY(do_suspend_lowlevel)
        movq %rsi, saved_rsi

        addq $8, %rsp
        movl $3, %edi
        xorl %eax, %eax
        call acpi_enter_sleep_state
        call acpi_enter_s3
        /* in case something went wrong, restore the machine status and go on */
        jmp resume_point

@@ -1637,9 +1637,11 @@ static int __init apic_verify(void)
        mp_lapic_addr = APIC_DEFAULT_PHYS_BASE;

        /* The BIOS may have set up the APIC at some other address */
        rdmsr(MSR_IA32_APICBASE, l, h);
        if (l & MSR_IA32_APICBASE_ENABLE)
                mp_lapic_addr = l & MSR_IA32_APICBASE_BASE;
        if (boot_cpu_data.x86 >= 6) {
                rdmsr(MSR_IA32_APICBASE, l, h);
                if (l & MSR_IA32_APICBASE_ENABLE)
                        mp_lapic_addr = l & MSR_IA32_APICBASE_BASE;
        }

        pr_info("Found and enabled local APIC!\n");
        return 0;

@@ -1657,13 +1659,15 @@ int __init apic_force_enable(unsigned long addr)
         * MSR. This can only be done in software for Intel P6 or later
         * and AMD K7 (Model > 1) or later.
         */
        rdmsr(MSR_IA32_APICBASE, l, h);
        if (!(l & MSR_IA32_APICBASE_ENABLE)) {
                pr_info("Local APIC disabled by BIOS -- reenabling.\n");
                l &= ~MSR_IA32_APICBASE_BASE;
                l |= MSR_IA32_APICBASE_ENABLE | addr;
                wrmsr(MSR_IA32_APICBASE, l, h);
                enabled_via_apicbase = 1;
        if (boot_cpu_data.x86 >= 6) {
                rdmsr(MSR_IA32_APICBASE, l, h);
                if (!(l & MSR_IA32_APICBASE_ENABLE)) {
                        pr_info("Local APIC disabled by BIOS -- reenabling.\n");
                        l &= ~MSR_IA32_APICBASE_BASE;
                        l |= MSR_IA32_APICBASE_ENABLE | addr;
                        wrmsr(MSR_IA32_APICBASE, l, h);
                        enabled_via_apicbase = 1;
                }
        }
        return apic_verify();
}

@@ -2209,10 +2213,12 @@ static void lapic_resume(void)
                 * FIXME! This will be wrong if we ever support suspend on
                 * SMP! We'll need to do this as part of the CPU restore!
                 */
                rdmsr(MSR_IA32_APICBASE, l, h);
                l &= ~MSR_IA32_APICBASE_BASE;
                l |= MSR_IA32_APICBASE_ENABLE | mp_lapic_addr;
                wrmsr(MSR_IA32_APICBASE, l, h);
                if (boot_cpu_data.x86 >= 6) {
                        rdmsr(MSR_IA32_APICBASE, l, h);
                        l &= ~MSR_IA32_APICBASE_BASE;
                        l |= MSR_IA32_APICBASE_ENABLE | mp_lapic_addr;
                        wrmsr(MSR_IA32_APICBASE, l, h);
                }
        }

        maxlvt = lapic_get_maxlvt();

@@ -207,8 +207,11 @@ static void __init map_csrs(void)

static void fixup_cpu_id(struct cpuinfo_x86 *c, int node)
{
        c->phys_proc_id = node;
        per_cpu(cpu_llc_id, smp_processor_id()) = node;

        if (c->phys_proc_id != node) {
                c->phys_proc_id = node;
                per_cpu(cpu_llc_id, smp_processor_id()) = node;
        }
}

static int __init numachip_system_init(void)

@@ -24,6 +24,12 @@ static int x2apic_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
{
        if (x2apic_phys)
                return x2apic_enabled();
        else if ((acpi_gbl_FADT.header.revision >= FADT2_REVISION_ID) &&
                (acpi_gbl_FADT.flags & ACPI_FADT_APIC_PHYSICAL) &&
                x2apic_enabled()) {
                printk(KERN_DEBUG "System requires x2apic physical mode\n");
                return 1;
        }
        else
                return 0;
}

@@ -26,7 +26,8 @@
 * contact AMD for precise details and a CPU swap.
 *
 * See http://www.multimania.com/poulot/k6bug.html
 * http://www.amd.com/K6/k6docs/revgd.html
 * and section 2.6.2 of "AMD-K6 Processor Revision Guide - Model 6"
 * (Publication # 21266 Issue Date: August 1998)
 *
 * The following test is erm.. interesting. AMD neglected to up
 * the chip setting when fixing the bug but they also tweaked some

@@ -94,7 +95,6 @@ static void __cpuinit init_amd_k6(struct cpuinfo_x86 *c)
                        "system stability may be impaired when more than 32 MB are used.\n");
                else
                        printk(KERN_CONT "probably OK (after B9730xxxx).\n");
                printk(KERN_INFO "Please see http://membres.lycos.fr/poulot/k6bug.html\n");
        }

        /* K6 with old style WHCR */

@ -353,10 +353,11 @@ static void __cpuinit srat_detect_node(struct cpuinfo_x86 *c)
|
|||
node = per_cpu(cpu_llc_id, cpu);
|
||||
|
||||
/*
|
||||
* If core numbers are inconsistent, it's likely a multi-fabric platform,
|
||||
* so invoke platform-specific handler
|
||||
* On multi-fabric platform (e.g. Numascale NumaChip) a
|
||||
* platform-specific handler needs to be called to fixup some
|
||||
* IDs of the CPU.
|
||||
*/
|
||||
if (c->phys_proc_id != node)
|
||||
if (x86_cpuinit.fixup_cpu_id)
|
||||
x86_cpuinit.fixup_cpu_id(c, node);
|
||||
|
||||
if (!node_online(node)) {
|
||||
|
|
|
@ -1162,15 +1162,6 @@ static void dbg_restore_debug_regs(void)
|
|||
#define dbg_restore_debug_regs()
|
||||
#endif /* ! CONFIG_KGDB */
|
||||
|
||||
/*
|
||||
* Prints an error where the NUMA and configured core-number mismatch and the
|
||||
* platform didn't override this to fix it up
|
||||
*/
|
||||
void __cpuinit x86_default_fixup_cpu_id(struct cpuinfo_x86 *c, int node)
|
||||
{
|
||||
pr_err("NUMA core number %d differs from configured core number %d\n", node, c->phys_proc_id);
|
||||
}
|
||||
|
||||
/*
|
||||
* cpu_init() initializes state that is per-CPU. Some data is already
|
||||
* initialized (naturally) in the bootstrap process, such as the GDT
|
||||
|
|
|
@ -433,14 +433,14 @@ int amd_set_l3_disable_slot(struct amd_northbridge *nb, int cpu, unsigned slot,
|
|||
/* check if @slot is already used or the index is already disabled */
|
||||
ret = amd_get_l3_disable_slot(nb, slot);
|
||||
if (ret >= 0)
|
||||
return -EINVAL;
|
||||
return -EEXIST;
|
||||
|
||||
if (index > nb->l3_cache.indices)
|
||||
return -EINVAL;
|
||||
|
||||
/* check whether the other slot has disabled the same index already */
|
||||
if (index == amd_get_l3_disable_slot(nb, !slot))
|
||||
return -EINVAL;
|
||||
return -EEXIST;
|
||||
|
||||
amd_l3_disable_index(nb, cpu, slot, index);
|
||||
|
||||
|
@ -468,8 +468,8 @@ static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf,
|
|||
err = amd_set_l3_disable_slot(this_leaf->base.nb, cpu, slot, val);
|
||||
if (err) {
|
||||
if (err == -EEXIST)
|
||||
printk(KERN_WARNING "L3 disable slot %d in use!\n",
|
||||
slot);
|
||||
pr_warning("L3 slot %d in use/index already disabled!\n",
|
||||
slot);
|
||||
return err;
|
||||
}
|
||||
return count;
|
||||
|
|
|
@ -235,6 +235,7 @@ int init_fpu(struct task_struct *tsk)
|
|||
if (tsk_used_math(tsk)) {
|
||||
if (HAVE_HWFP && tsk == current)
|
||||
unlazy_fpu(tsk);
|
||||
tsk->thread.fpu.last_cpu = ~0;
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
|
|
@ -82,11 +82,6 @@ static int collect_cpu_info_amd(int cpu, struct cpu_signature *csig)
|
|||
{
|
||||
struct cpuinfo_x86 *c = &cpu_data(cpu);
|
||||
|
||||
if (c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10) {
|
||||
pr_warning("CPU%d: family %d not supported\n", cpu, c->x86);
|
||||
return -1;
|
||||
}
|
||||
|
||||
csig->rev = c->microcode;
|
||||
pr_info("CPU%d: patch_level=0x%08x\n", cpu, csig->rev);
|
||||
|
||||
|
@ -380,6 +375,13 @@ static struct microcode_ops microcode_amd_ops = {
|
|||
|
||||
struct microcode_ops * __init init_amd_microcode(void)
|
||||
{
|
||||
struct cpuinfo_x86 *c = &cpu_data(0);
|
||||
|
||||
if (c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10) {
|
||||
pr_warning("AMD CPU family 0x%x not supported\n", c->x86);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
patch = (void *)get_zeroed_page(GFP_KERNEL);
|
||||
if (!patch)
|
||||
return NULL;
|
||||
|
|
|
@ -419,10 +419,8 @@ static int mc_device_add(struct device *dev, struct subsys_interface *sif)
|
|||
if (err)
|
||||
return err;
|
||||
|
||||
if (microcode_init_cpu(cpu) == UCODE_ERROR) {
|
||||
sysfs_remove_group(&dev->kobj, &mc_attr_group);
|
||||
if (microcode_init_cpu(cpu) == UCODE_ERROR)
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
return err;
|
||||
}
|
||||
|
@ -528,11 +526,11 @@ static int __init microcode_init(void)
|
|||
microcode_ops = init_intel_microcode();
|
||||
else if (c->x86_vendor == X86_VENDOR_AMD)
|
||||
microcode_ops = init_amd_microcode();
|
||||
|
||||
if (!microcode_ops) {
|
||||
else
|
||||
pr_err("no support for this CPU vendor\n");
|
||||
|
||||
if (!microcode_ops)
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
microcode_pdev = platform_device_register_simple("microcode", -1,
|
||||
NULL, 0);
|
||||
|
|
|
@ -93,7 +93,6 @@ struct x86_init_ops x86_init __initdata = {
|
|||
struct x86_cpuinit_ops x86_cpuinit __cpuinitdata = {
|
||||
.early_percpu_clock_init = x86_init_noop,
|
||||
.setup_percpu_clockev = setup_secondary_APIC_clock,
|
||||
.fixup_cpu_id = x86_default_fixup_cpu_id,
|
||||
};
|
||||
|
||||
static void default_nmi_init(void) { };
|
||||
|
|
|
@ -805,7 +805,7 @@ void intel_scu_devices_create(void)
|
|||
} else
|
||||
i2c_register_board_info(i2c_bus[i], i2c_devs[i], 1);
|
||||
}
|
||||
intel_scu_notifier_post(SCU_AVAILABLE, 0L);
|
||||
intel_scu_notifier_post(SCU_AVAILABLE, NULL);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(intel_scu_devices_create);
|
||||
|
||||
|
@ -814,7 +814,7 @@ void intel_scu_devices_destroy(void)
|
|||
{
|
||||
int i;
|
||||
|
||||
intel_scu_notifier_post(SCU_DOWN, 0L);
|
||||
intel_scu_notifier_post(SCU_DOWN, NULL);
|
||||
|
||||
for (i = 0; i < ipc_next_dev; i++)
|
||||
platform_device_del(ipc_devs[i]);
|
||||
|
|
|
@ -261,7 +261,8 @@ static void xen_cpuid(unsigned int *ax, unsigned int *bx,
|
|||
|
||||
static bool __init xen_check_mwait(void)
|
||||
{
|
||||
#ifdef CONFIG_ACPI
|
||||
#if defined(CONFIG_ACPI) && !defined(CONFIG_ACPI_PROCESSOR_AGGREGATOR) && \
|
||||
!defined(CONFIG_ACPI_PROCESSOR_AGGREGATOR_MODULE)
|
||||
struct xen_platform_op op = {
|
||||
.cmd = XENPF_set_processor_pminfo,
|
||||
.u.set_pminfo.id = -1,
|
||||
|
@ -349,7 +350,6 @@ static void __init xen_init_cpuid_mask(void)
|
|||
/* Xen will set CR4.OSXSAVE if supported and not disabled by force */
|
||||
if ((cx & xsave_mask) != xsave_mask)
|
||||
cpuid_leaf1_ecx_mask &= ~xsave_mask; /* disable XSAVE & OSXSAVE */
|
||||
|
||||
if (xen_check_mwait())
|
||||
cpuid_leaf1_ecx_set_mask = (1 << (X86_FEATURE_MWAIT % 32));
|
||||
}
|
||||
|
|
|
@ -178,6 +178,7 @@ static void __init xen_fill_possible_map(void)
|
|||
static void __init xen_filter_cpu_maps(void)
|
||||
{
|
||||
int i, rc;
|
||||
unsigned int subtract = 0;
|
||||
|
||||
if (!xen_initial_domain())
|
||||
return;
|
||||
|
@ -192,8 +193,22 @@ static void __init xen_filter_cpu_maps(void)
|
|||
} else {
|
||||
set_cpu_possible(i, false);
|
||||
set_cpu_present(i, false);
|
||||
subtract++;
|
||||
}
|
||||
}
|
||||
#ifdef CONFIG_HOTPLUG_CPU
|
||||
/* This is akin to using 'nr_cpus' on the Linux command line.
|
||||
* Which is OK as when we use 'dom0_max_vcpus=X' we can only
|
||||
* have up to X, while nr_cpu_ids is greater than X. This
|
||||
* normally is not a problem, except when CPU hotplugging
|
||||
* is involved and then there might be more than X CPUs
|
||||
* in the guest - which will not work as there is no
|
||||
* hypercall to expand the max number of VCPUs an already
|
||||
* running guest has. So cap it up to X. */
|
||||
if (subtract)
|
||||
nr_cpu_ids = nr_cpu_ids - subtract;
|
||||
#endif
|
||||
|
||||
}
|
||||
|
||||
static void __init xen_smp_prepare_boot_cpu(void)
|
||||
|
|
|
@ -96,7 +96,7 @@ ENTRY(xen_restore_fl_direct)
|
|||
|
||||
/* check for unmasked and pending */
|
||||
cmpw $0x0001, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_pending
|
||||
jz 1f
|
||||
jnz 1f
|
||||
2: call check_events
|
||||
1:
|
||||
ENDPATCH(xen_restore_fl_direct)
|
||||
|
|
|
@ -11,9 +11,6 @@
|
|||
#ifndef _XTENSA_HARDIRQ_H
|
||||
#define _XTENSA_HARDIRQ_H
|
||||
|
||||
void ack_bad_irq(unsigned int irq);
|
||||
#define ack_bad_irq ack_bad_irq
|
||||
|
||||
#include <asm-generic/hardirq.h>
|
||||
|
||||
#endif /* _XTENSA_HARDIRQ_H */
|
||||
|
|
|
@ -14,6 +14,7 @@
|
|||
#ifdef __KERNEL__
|
||||
#include <asm/byteorder.h>
|
||||
#include <asm/page.h>
|
||||
#include <linux/bug.h>
|
||||
#include <linux/kernel.h>
|
||||
|
||||
#include <linux/types.h>
|
||||
|
|
|
@ -496,6 +496,7 @@ int do_signal(struct pt_regs *regs, sigset_t *oldset)
|
|||
signr = get_signal_to_deliver(&info, &ka, regs, NULL);
|
||||
|
||||
if (signr > 0) {
|
||||
int ret;
|
||||
|
||||
/* Are we from a system call? */
|
||||
|
||||
|
|
|
@ -28,24 +28,34 @@
|
|||
#include "internal.h"
|
||||
#include "sleep.h"
|
||||
|
||||
u8 wake_sleep_flags = ACPI_NO_OPTIONAL_METHODS;
|
||||
static unsigned int gts, bfs;
|
||||
module_param(gts, uint, 0644);
|
||||
module_param(bfs, uint, 0644);
|
||||
static int set_param_wake_flag(const char *val, struct kernel_param *kp)
|
||||
{
|
||||
int ret = param_set_int(val, kp);
|
||||
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (kp->arg == (const char *)>s) {
|
||||
if (gts)
|
||||
wake_sleep_flags |= ACPI_EXECUTE_GTS;
|
||||
else
|
||||
wake_sleep_flags &= ~ACPI_EXECUTE_GTS;
|
||||
}
|
||||
if (kp->arg == (const char *)&bfs) {
|
||||
if (bfs)
|
||||
wake_sleep_flags |= ACPI_EXECUTE_BFS;
|
||||
else
|
||||
wake_sleep_flags &= ~ACPI_EXECUTE_BFS;
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
module_param_call(gts, set_param_wake_flag, param_get_int, >s, 0644);
|
||||
module_param_call(bfs, set_param_wake_flag, param_get_int, &bfs, 0644);
|
||||
MODULE_PARM_DESC(gts, "Enable evaluation of _GTS on suspend.");
|
||||
MODULE_PARM_DESC(bfs, "Enable evaluation of _BFS on resume".);
|
||||
|
||||
static u8 wake_sleep_flags(void)
|
||||
{
|
||||
u8 flags = ACPI_NO_OPTIONAL_METHODS;
|
||||
|
||||
if (gts)
|
||||
flags |= ACPI_EXECUTE_GTS;
|
||||
if (bfs)
|
||||
flags |= ACPI_EXECUTE_BFS;
|
||||
|
||||
return flags;
|
||||
}
|
||||
|
||||
static u8 sleep_states[ACPI_S_STATE_COUNT];
|
||||
|
||||
static void acpi_sleep_tts_switch(u32 acpi_state)
|
||||
|
@ -263,7 +273,6 @@ static int acpi_suspend_enter(suspend_state_t pm_state)
|
|||
{
|
||||
acpi_status status = AE_OK;
|
||||
u32 acpi_state = acpi_target_sleep_state;
|
||||
u8 flags = wake_sleep_flags();
|
||||
int error;
|
||||
|
||||
ACPI_FLUSH_CPU_CACHE();
|
||||
|
@ -271,7 +280,7 @@ static int acpi_suspend_enter(suspend_state_t pm_state)
|
|||
switch (acpi_state) {
|
||||
case ACPI_STATE_S1:
|
||||
barrier();
|
||||
status = acpi_enter_sleep_state(acpi_state, flags);
|
||||
status = acpi_enter_sleep_state(acpi_state, wake_sleep_flags);
|
||||
break;
|
||||
|
||||
case ACPI_STATE_S3:
|
||||
|
@ -286,7 +295,7 @@ static int acpi_suspend_enter(suspend_state_t pm_state)
|
|||
acpi_write_bit_register(ACPI_BITREG_SCI_ENABLE, 1);
|
||||
|
||||
/* Reprogram control registers and execute _BFS */
|
||||
acpi_leave_sleep_state_prep(acpi_state, flags);
|
||||
acpi_leave_sleep_state_prep(acpi_state, wake_sleep_flags);
|
||||
|
||||
/* ACPI 3.0 specs (P62) says that it's the responsibility
|
||||
* of the OSPM to clear the status bit [ implying that the
|
||||
|
@ -550,30 +559,27 @@ static int acpi_hibernation_begin(void)
|
|||
|
||||
static int acpi_hibernation_enter(void)
|
||||
{
|
||||
u8 flags = wake_sleep_flags();
|
||||
acpi_status status = AE_OK;
|
||||
|
||||
ACPI_FLUSH_CPU_CACHE();
|
||||
|
||||
/* This shouldn't return. If it returns, we have a problem */
|
||||
status = acpi_enter_sleep_state(ACPI_STATE_S4, flags);
|
||||
status = acpi_enter_sleep_state(ACPI_STATE_S4, wake_sleep_flags);
|
||||
/* Reprogram control registers and execute _BFS */
|
||||
acpi_leave_sleep_state_prep(ACPI_STATE_S4, flags);
|
||||
acpi_leave_sleep_state_prep(ACPI_STATE_S4, wake_sleep_flags);
|
||||
|
||||
return ACPI_SUCCESS(status) ? 0 : -EFAULT;
|
||||
}
|
||||
|
||||
static void acpi_hibernation_leave(void)
|
||||
{
|
||||
u8 flags = wake_sleep_flags();
|
||||
|
||||
/*
|
||||
* If ACPI is not enabled by the BIOS and the boot kernel, we need to
|
||||
* enable it here.
|
||||
*/
|
||||
acpi_enable();
|
||||
/* Reprogram control registers and execute _BFS */
|
||||
acpi_leave_sleep_state_prep(ACPI_STATE_S4, flags);
|
||||
acpi_leave_sleep_state_prep(ACPI_STATE_S4, wake_sleep_flags);
|
||||
/* Check the hardware signature */
|
||||
if (facs && s4_hardware_signature != facs->hardware_signature) {
|
||||
printk(KERN_EMERG "ACPI: Hardware changed while hibernated, "
|
||||
|
@ -828,12 +834,10 @@ static void acpi_power_off_prepare(void)
|
|||
|
||||
static void acpi_power_off(void)
|
||||
{
|
||||
u8 flags = wake_sleep_flags();
|
||||
|
||||
/* acpi_sleep_prepare(ACPI_STATE_S5) should have already been called */
|
||||
printk(KERN_DEBUG "%s called\n", __func__);
|
||||
local_irq_disable();
|
||||
acpi_enter_sleep_state(ACPI_STATE_S5, flags);
|
||||
acpi_enter_sleep_state(ACPI_STATE_S5, wake_sleep_flags);
|
||||
}
|
||||
|
||||
/*
|
||||
|
|
|
@ -404,16 +404,19 @@ int bcma_sprom_get(struct bcma_bus *bus)
|
|||
return -EOPNOTSUPP;
|
||||
|
||||
if (!bcma_sprom_ext_available(bus)) {
|
||||
bool sprom_onchip;
|
||||
|
||||
/*
|
||||
* External SPROM takes precedence so check
|
||||
* on-chip OTP only when no external SPROM
|
||||
* is present.
|
||||
*/
|
||||
if (bcma_sprom_onchip_available(bus)) {
|
||||
sprom_onchip = bcma_sprom_onchip_available(bus);
|
||||
if (sprom_onchip) {
|
||||
/* determine offset */
|
||||
offset = bcma_sprom_onchip_offset(bus);
|
||||
}
|
||||
if (!offset) {
|
||||
if (!offset || !sprom_onchip) {
|
||||
/*
|
||||
* Maybe there is no SPROM on the device?
|
||||
* Now we ask the arch code if there is some sprom
|
||||
|
|
|
@ -1429,6 +1429,7 @@ static int pl08x_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
|
|||
* signal
|
||||
*/
|
||||
release_phy_channel(plchan);
|
||||
plchan->phychan_hold = 0;
|
||||
}
|
||||
/* Dequeue jobs and free LLIs */
|
||||
if (plchan->at) {
|
||||
|
|
|
@ -221,10 +221,6 @@ static void atc_dostart(struct at_dma_chan *atchan, struct at_desc *first)
|
|||
|
||||
vdbg_dump_regs(atchan);
|
||||
|
||||
/* clear any pending interrupt */
|
||||
while (dma_readl(atdma, EBCISR))
|
||||
cpu_relax();
|
||||
|
||||
channel_writel(atchan, SADDR, 0);
|
||||
channel_writel(atchan, DADDR, 0);
|
||||
channel_writel(atchan, CTRLA, 0);
|
||||
|
|
|
@ -571,11 +571,14 @@ static void imxdma_tasklet(unsigned long data)
|
|||
if (desc->desc.callback)
|
||||
desc->desc.callback(desc->desc.callback_param);
|
||||
|
||||
dma_cookie_complete(&desc->desc);
|
||||
|
||||
/* If we are dealing with a cyclic descriptor keep it on ld_active */
|
||||
/* If we are dealing with a cyclic descriptor keep it on ld_active
|
||||
* and dont mark the descripor as complete.
|
||||
* Only in non-cyclic cases it would be marked as complete
|
||||
*/
|
||||
if (imxdma_chan_is_doing_cyclic(imxdmac))
|
||||
goto out;
|
||||
else
|
||||
dma_cookie_complete(&desc->desc);
|
||||
|
||||
/* Free 2D slot if it was an interleaved transfer */
|
||||
if (imxdmac->enabled_2d) {
|
||||
|
|
|
@ -201,10 +201,6 @@ static struct mxs_dma_chan *to_mxs_dma_chan(struct dma_chan *chan)
|
|||
|
||||
static dma_cookie_t mxs_dma_tx_submit(struct dma_async_tx_descriptor *tx)
|
||||
{
|
||||
struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(tx->chan);
|
||||
|
||||
mxs_dma_enable_chan(mxs_chan);
|
||||
|
||||
return dma_cookie_assign(tx);
|
||||
}
|
||||
|
||||
|
@ -558,9 +554,9 @@ static enum dma_status mxs_dma_tx_status(struct dma_chan *chan,
|
|||
|
||||
static void mxs_dma_issue_pending(struct dma_chan *chan)
|
||||
{
|
||||
/*
|
||||
* Nothing to do. We only have a single descriptor.
|
||||
*/
|
||||
struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan);
|
||||
|
||||
mxs_dma_enable_chan(mxs_chan);
|
||||
}
|
||||
|
||||
static int __init mxs_dma_init(struct mxs_dma_engine *mxs_dma)
|
||||
|
|
|
@ -2225,12 +2225,9 @@ static inline void free_desc_list(struct list_head *list)
|
|||
{
|
||||
struct dma_pl330_dmac *pdmac;
|
||||
struct dma_pl330_desc *desc;
|
||||
struct dma_pl330_chan *pch;
|
||||
struct dma_pl330_chan *pch = NULL;
|
||||
unsigned long flags;
|
||||
|
||||
if (list_empty(list))
|
||||
return;
|
||||
|
||||
/* Finish off the work list */
|
||||
list_for_each_entry(desc, list, node) {
|
||||
dma_async_tx_callback callback;
|
||||
|
@ -2247,6 +2244,10 @@ static inline void free_desc_list(struct list_head *list)
|
|||
desc->pchan = NULL;
|
||||
}
|
||||
|
||||
/* pch will be unset if list was empty */
|
||||
if (!pch)
|
||||
return;
|
||||
|
||||
pdmac = pch->dmac;
|
||||
|
||||
spin_lock_irqsave(&pdmac->pool_lock, flags);
|
||||
|
@ -2257,12 +2258,9 @@ static inline void free_desc_list(struct list_head *list)
|
|||
static inline void handle_cyclic_desc_list(struct list_head *list)
|
||||
{
|
||||
struct dma_pl330_desc *desc;
|
||||
struct dma_pl330_chan *pch;
|
||||
struct dma_pl330_chan *pch = NULL;
|
||||
unsigned long flags;
|
||||
|
||||
if (list_empty(list))
|
||||
return;
|
||||
|
||||
list_for_each_entry(desc, list, node) {
|
||||
dma_async_tx_callback callback;
|
||||
|
||||
|
@ -2274,6 +2272,10 @@ static inline void handle_cyclic_desc_list(struct list_head *list)
|
|||
callback(desc->txd.callback_param);
|
||||
}
|
||||
|
||||
/* pch will be unset if list was empty */
|
||||
if (!pch)
|
||||
return;
|
||||
|
||||
spin_lock_irqsave(&pch->lock, flags);
|
||||
list_splice_tail_init(list, &pch->work_list);
|
||||
spin_unlock_irqrestore(&pch->lock, flags);
|
||||
|
@ -2926,8 +2928,11 @@ pl330_probe(struct amba_device *adev, const struct amba_id *id)
|
|||
INIT_LIST_HEAD(&pd->channels);
|
||||
|
||||
/* Initialize channel parameters */
|
||||
num_chan = max(pdat ? pdat->nr_valid_peri : (u8)pi->pcfg.num_peri,
|
||||
(u8)pi->pcfg.num_chan);
|
||||
if (pdat)
|
||||
num_chan = max_t(int, pdat->nr_valid_peri, pi->pcfg.num_chan);
|
||||
else
|
||||
num_chan = max_t(int, pi->pcfg.num_peri, pi->pcfg.num_chan);
|
||||
|
||||
pdmac->peripherals = kzalloc(num_chan * sizeof(*pch), GFP_KERNEL);
|
||||
|
||||
for (i = 0; i < num_chan; i++) {
|
||||
|
|
|
@ -18,6 +18,7 @@
|
|||
#include <linux/pm_runtime.h>
|
||||
#include <linux/err.h>
|
||||
#include <linux/amba/bus.h>
|
||||
#include <linux/regulator/consumer.h>
|
||||
|
||||
#include <plat/ste_dma40.h>
|
||||
|
||||
|
@ -68,6 +69,22 @@ enum d40_command {
|
|||
D40_DMA_SUSPENDED = 3
|
||||
};
|
||||
|
||||
/*
|
||||
* enum d40_events - The different Event Enables for the event lines.
|
||||
*
|
||||
* @D40_DEACTIVATE_EVENTLINE: De-activate Event line, stopping the logical chan.
|
||||
* @D40_ACTIVATE_EVENTLINE: Activate the Event line, to start a logical chan.
|
||||
* @D40_SUSPEND_REQ_EVENTLINE: Requesting for suspending a event line.
|
||||
* @D40_ROUND_EVENTLINE: Status check for event line.
|
||||
*/
|
||||
|
||||
enum d40_events {
|
||||
D40_DEACTIVATE_EVENTLINE = 0,
|
||||
D40_ACTIVATE_EVENTLINE = 1,
|
||||
D40_SUSPEND_REQ_EVENTLINE = 2,
|
||||
D40_ROUND_EVENTLINE = 3
|
||||
};
|
||||
|
||||
/*
|
||||
* These are the registers that has to be saved and later restored
|
||||
* when the DMA hw is powered off.
|
||||
|
@ -870,8 +887,8 @@ static void d40_save_restore_registers(struct d40_base *base, bool save)
|
|||
}
|
||||
#endif
|
||||
|
||||
static int d40_channel_execute_command(struct d40_chan *d40c,
|
||||
enum d40_command command)
|
||||
static int __d40_execute_command_phy(struct d40_chan *d40c,
|
||||
enum d40_command command)
|
||||
{
|
||||
u32 status;
|
||||
int i;
|
||||
|
@ -880,6 +897,12 @@ static int d40_channel_execute_command(struct d40_chan *d40c,
|
|||
unsigned long flags;
|
||||
u32 wmask;
|
||||
|
||||
if (command == D40_DMA_STOP) {
|
||||
ret = __d40_execute_command_phy(d40c, D40_DMA_SUSPEND_REQ);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
spin_lock_irqsave(&d40c->base->execmd_lock, flags);
|
||||
|
||||
if (d40c->phy_chan->num % 2 == 0)
|
||||
|
@ -973,67 +996,109 @@ static void d40_term_all(struct d40_chan *d40c)
|
|||
}
|
||||
|
||||
d40c->pending_tx = 0;
|
||||
d40c->busy = false;
|
||||
}
|
||||
|
||||
static void __d40_config_set_event(struct d40_chan *d40c, bool enable,
|
||||
u32 event, int reg)
|
||||
static void __d40_config_set_event(struct d40_chan *d40c,
|
||||
enum d40_events event_type, u32 event,
|
||||
int reg)
|
||||
{
|
||||
void __iomem *addr = chan_base(d40c) + reg;
|
||||
int tries;
|
||||
u32 status;
|
||||
|
||||
switch (event_type) {
|
||||
|
||||
case D40_DEACTIVATE_EVENTLINE:
|
||||
|
||||
if (!enable) {
|
||||
writel((D40_DEACTIVATE_EVENTLINE << D40_EVENTLINE_POS(event))
|
||||
| ~D40_EVENTLINE_MASK(event), addr);
|
||||
return;
|
||||
}
|
||||
break;
|
||||
|
||||
case D40_SUSPEND_REQ_EVENTLINE:
|
||||
status = (readl(addr) & D40_EVENTLINE_MASK(event)) >>
|
||||
D40_EVENTLINE_POS(event);
|
||||
|
||||
if (status == D40_DEACTIVATE_EVENTLINE ||
|
||||
status == D40_SUSPEND_REQ_EVENTLINE)
|
||||
break;
|
||||
|
||||
writel((D40_SUSPEND_REQ_EVENTLINE << D40_EVENTLINE_POS(event))
|
||||
| ~D40_EVENTLINE_MASK(event), addr);
|
||||
|
||||
for (tries = 0 ; tries < D40_SUSPEND_MAX_IT; tries++) {
|
||||
|
||||
status = (readl(addr) & D40_EVENTLINE_MASK(event)) >>
|
||||
D40_EVENTLINE_POS(event);
|
||||
|
||||
cpu_relax();
|
||||
/*
|
||||
* Reduce the number of bus accesses while
|
||||
* waiting for the DMA to suspend.
|
||||
*/
|
||||
udelay(3);
|
||||
|
||||
if (status == D40_DEACTIVATE_EVENTLINE)
|
||||
break;
|
||||
}
|
||||
|
||||
if (tries == D40_SUSPEND_MAX_IT) {
|
||||
chan_err(d40c,
|
||||
"unable to stop the event_line chl %d (log: %d)"
|
||||
"status %x\n", d40c->phy_chan->num,
|
||||
d40c->log_num, status);
|
||||
}
|
||||
break;
|
||||
|
||||
case D40_ACTIVATE_EVENTLINE:
|
||||
/*
|
||||
* The hardware sometimes doesn't register the enable when src and dst
|
||||
* event lines are active on the same logical channel. Retry to ensure
|
||||
* it does. Usually only one retry is sufficient.
|
||||
*/
|
||||
tries = 100;
|
||||
while (--tries) {
|
||||
writel((D40_ACTIVATE_EVENTLINE << D40_EVENTLINE_POS(event))
|
||||
| ~D40_EVENTLINE_MASK(event), addr);
|
||||
tries = 100;
|
||||
while (--tries) {
|
||||
writel((D40_ACTIVATE_EVENTLINE <<
|
||||
D40_EVENTLINE_POS(event)) |
|
||||
~D40_EVENTLINE_MASK(event), addr);
|
||||
|
||||
if (readl(addr) & D40_EVENTLINE_MASK(event))
|
||||
break;
|
||||
}
|
||||
|
||||
if (tries != 99)
|
||||
dev_dbg(chan2dev(d40c),
|
||||
"[%s] workaround enable S%cLNK (%d tries)\n",
|
||||
__func__, reg == D40_CHAN_REG_SSLNK ? 'S' : 'D',
|
||||
100 - tries);
|
||||
|
||||
WARN_ON(!tries);
|
||||
break;
|
||||
|
||||
case D40_ROUND_EVENTLINE:
|
||||
BUG();
|
||||
break;
|
||||
|
||||
if (readl(addr) & D40_EVENTLINE_MASK(event))
|
||||
break;
|
||||
}
|
||||
|
||||
if (tries != 99)
|
||||
dev_dbg(chan2dev(d40c),
|
||||
"[%s] workaround enable S%cLNK (%d tries)\n",
|
||||
__func__, reg == D40_CHAN_REG_SSLNK ? 'S' : 'D',
|
||||
100 - tries);
|
||||
|
||||
WARN_ON(!tries);
|
||||
}
|
||||
|
||||
static void d40_config_set_event(struct d40_chan *d40c, bool do_enable)
|
||||
static void d40_config_set_event(struct d40_chan *d40c,
|
||||
enum d40_events event_type)
|
||||
{
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock_irqsave(&d40c->phy_chan->lock, flags);
|
||||
|
||||
/* Enable event line connected to device (or memcpy) */
|
||||
if ((d40c->dma_cfg.dir == STEDMA40_PERIPH_TO_MEM) ||
|
||||
(d40c->dma_cfg.dir == STEDMA40_PERIPH_TO_PERIPH)) {
|
||||
u32 event = D40_TYPE_TO_EVENT(d40c->dma_cfg.src_dev_type);
|
||||
|
||||
__d40_config_set_event(d40c, do_enable, event,
|
||||
__d40_config_set_event(d40c, event_type, event,
|
||||
D40_CHAN_REG_SSLNK);
|
||||
}
|
||||
|
||||
if (d40c->dma_cfg.dir != STEDMA40_PERIPH_TO_MEM) {
|
||||
u32 event = D40_TYPE_TO_EVENT(d40c->dma_cfg.dst_dev_type);
|
||||
|
||||
__d40_config_set_event(d40c, do_enable, event,
|
||||
__d40_config_set_event(d40c, event_type, event,
|
||||
D40_CHAN_REG_SDLNK);
|
||||
}
|
||||
|
||||
spin_unlock_irqrestore(&d40c->phy_chan->lock, flags);
|
||||
}
|
||||
|
||||
static u32 d40_chan_has_events(struct d40_chan *d40c)
|
||||
|
@ -1047,6 +1112,64 @@ static u32 d40_chan_has_events(struct d40_chan *d40c)
|
|||
return val;
|
||||
}
|
||||
|
||||
static int
|
||||
__d40_execute_command_log(struct d40_chan *d40c, enum d40_command command)
|
||||
{
|
||||
unsigned long flags;
|
||||
int ret = 0;
|
||||
u32 active_status;
|
||||
void __iomem *active_reg;
|
||||
|
||||
if (d40c->phy_chan->num % 2 == 0)
|
||||
active_reg = d40c->base->virtbase + D40_DREG_ACTIVE;
|
||||
else
|
||||
active_reg = d40c->base->virtbase + D40_DREG_ACTIVO;
|
||||
|
||||
|
||||
spin_lock_irqsave(&d40c->phy_chan->lock, flags);
|
||||
|
||||
switch (command) {
|
||||
case D40_DMA_STOP:
|
||||
case D40_DMA_SUSPEND_REQ:
|
||||
|
||||
active_status = (readl(active_reg) &
|
||||
D40_CHAN_POS_MASK(d40c->phy_chan->num)) >>
|
||||
D40_CHAN_POS(d40c->phy_chan->num);
|
||||
|
||||
if (active_status == D40_DMA_RUN)
|
||||
d40_config_set_event(d40c, D40_SUSPEND_REQ_EVENTLINE);
|
||||
else
|
||||
d40_config_set_event(d40c, D40_DEACTIVATE_EVENTLINE);
|
||||
|
||||
if (!d40_chan_has_events(d40c) && (command == D40_DMA_STOP))
|
||||
ret = __d40_execute_command_phy(d40c, command);
|
||||
|
||||
break;
|
||||
|
||||
case D40_DMA_RUN:
|
||||
|
||||
d40_config_set_event(d40c, D40_ACTIVATE_EVENTLINE);
|
||||
ret = __d40_execute_command_phy(d40c, command);
|
||||
break;
|
||||
|
||||
case D40_DMA_SUSPENDED:
|
||||
BUG();
|
||||
break;
|
||||
}
|
||||
|
||||
spin_unlock_irqrestore(&d40c->phy_chan->lock, flags);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int d40_channel_execute_command(struct d40_chan *d40c,
|
||||
enum d40_command command)
|
||||
{
|
||||
if (chan_is_logical(d40c))
|
||||
return __d40_execute_command_log(d40c, command);
|
||||
else
|
||||
return __d40_execute_command_phy(d40c, command);
|
||||
}
|
||||
|
||||
static u32 d40_get_prmo(struct d40_chan *d40c)
|
||||
{
|
||||
static const unsigned int phy_map[] = {
|
||||
|
@ -1149,15 +1272,7 @@ static int d40_pause(struct d40_chan *d40c)
|
|||
spin_lock_irqsave(&d40c->lock, flags);
|
||||
|
||||
res = d40_channel_execute_command(d40c, D40_DMA_SUSPEND_REQ);
|
||||
if (res == 0) {
|
||||
if (chan_is_logical(d40c)) {
|
||||
d40_config_set_event(d40c, false);
|
||||
/* Resume the other logical channels if any */
|
||||
if (d40_chan_has_events(d40c))
|
||||
res = d40_channel_execute_command(d40c,
|
||||
D40_DMA_RUN);
|
||||
}
|
||||
}
|
||||
|
||||
pm_runtime_mark_last_busy(d40c->base->dev);
|
||||
pm_runtime_put_autosuspend(d40c->base->dev);
|
||||
spin_unlock_irqrestore(&d40c->lock, flags);
|
||||
|
@ -1174,45 +1289,17 @@ static int d40_resume(struct d40_chan *d40c)
|
|||
|
||||
spin_lock_irqsave(&d40c->lock, flags);
|
||||
pm_runtime_get_sync(d40c->base->dev);
|
||||
if (d40c->base->rev == 0)
|
||||
if (chan_is_logical(d40c)) {
|
||||
res = d40_channel_execute_command(d40c,
|
||||
D40_DMA_SUSPEND_REQ);
|
||||
goto no_suspend;
|
||||
}
|
||||
|
||||
/* If bytes left to transfer or linked tx resume job */
|
||||
if (d40_residue(d40c) || d40_tx_is_linked(d40c)) {
|
||||
|
||||
if (chan_is_logical(d40c))
|
||||
d40_config_set_event(d40c, true);
|
||||
|
||||
if (d40_residue(d40c) || d40_tx_is_linked(d40c))
|
||||
res = d40_channel_execute_command(d40c, D40_DMA_RUN);
|
||||
}
|
||||
|
||||
no_suspend:
|
||||
pm_runtime_mark_last_busy(d40c->base->dev);
|
||||
pm_runtime_put_autosuspend(d40c->base->dev);
|
||||
spin_unlock_irqrestore(&d40c->lock, flags);
|
||||
return res;
|
||||
}
|
||||
|
||||
static int d40_terminate_all(struct d40_chan *chan)
|
||||
{
|
||||
unsigned long flags;
|
||||
int ret = 0;
|
||||
|
||||
ret = d40_pause(chan);
|
||||
if (!ret && chan_is_physical(chan))
|
||||
ret = d40_channel_execute_command(chan, D40_DMA_STOP);
|
||||
|
||||
spin_lock_irqsave(&chan->lock, flags);
|
||||
d40_term_all(chan);
|
||||
spin_unlock_irqrestore(&chan->lock, flags);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static dma_cookie_t d40_tx_submit(struct dma_async_tx_descriptor *tx)
|
||||
{
|
||||
struct d40_chan *d40c = container_of(tx->chan,
|
||||
|
@ -1232,20 +1319,6 @@ static dma_cookie_t d40_tx_submit(struct dma_async_tx_descriptor *tx)
|
|||
|
||||
static int d40_start(struct d40_chan *d40c)
|
||||
{
|
||||
if (d40c->base->rev == 0) {
|
||||
int err;
|
||||
|
||||
if (chan_is_logical(d40c)) {
|
||||
err = d40_channel_execute_command(d40c,
|
||||
D40_DMA_SUSPEND_REQ);
|
||||
if (err)
|
||||
return err;
|
||||
}
|
||||
}
|
||||
|
||||
if (chan_is_logical(d40c))
|
||||
d40_config_set_event(d40c, true);
|
||||
|
||||
return d40_channel_execute_command(d40c, D40_DMA_RUN);
|
||||
}
|
||||
|
||||
|
@ -1258,10 +1331,10 @@ static struct d40_desc *d40_queue_start(struct d40_chan *d40c)
|
|||
d40d = d40_first_queued(d40c);
|
||||
|
||||
if (d40d != NULL) {
|
||||
if (!d40c->busy)
|
||||
if (!d40c->busy) {
|
||||
d40c->busy = true;
|
||||
|
||||
pm_runtime_get_sync(d40c->base->dev);
|
||||
pm_runtime_get_sync(d40c->base->dev);
|
||||
}
|
||||
|
||||
/* Remove from queue */
|
||||
d40_desc_remove(d40d);
|
||||
|
@ -1388,8 +1461,8 @@ static void dma_tasklet(unsigned long data)
|
|||
|
||||
return;
|
||||
|
||||
err:
|
||||
/* Rescue manoeuvre if receiving double interrupts */
|
||||
err:
|
||||
/* Rescue manouver if receiving double interrupts */
|
||||
if (d40c->pending_tx > 0)
|
||||
d40c->pending_tx--;
|
||||
spin_unlock_irqrestore(&d40c->lock, flags);
|
||||
|
@ -1770,7 +1843,6 @@ static int d40_config_memcpy(struct d40_chan *d40c)
|
|||
return 0;
|
||||
}
|
||||
|
||||
|
||||
static int d40_free_dma(struct d40_chan *d40c)
|
||||
{
|
||||
|
||||
|
@ -1806,44 +1878,19 @@ static int d40_free_dma(struct d40_chan *d40c)
|
|||
}
|
||||
|
||||
pm_runtime_get_sync(d40c->base->dev);
|
||||
res = d40_channel_execute_command(d40c, D40_DMA_SUSPEND_REQ);
|
||||
if (res) {
|
||||
chan_err(d40c, "suspend failed\n");
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (chan_is_logical(d40c)) {
|
||||
/* Release logical channel, deactivate the event line */
|
||||
|
||||
d40_config_set_event(d40c, false);
|
||||
d40c->base->lookup_log_chans[d40c->log_num] = NULL;
|
||||
|
||||
/*
|
||||
* Check if there are more logical allocation
|
||||
* on this phy channel.
|
||||
*/
|
||||
if (!d40_alloc_mask_free(phy, is_src, event)) {
|
||||
/* Resume the other logical channels if any */
|
||||
if (d40_chan_has_events(d40c)) {
|
||||
res = d40_channel_execute_command(d40c,
|
||||
D40_DMA_RUN);
|
||||
if (res)
|
||||
chan_err(d40c,
|
||||
"Executing RUN command\n");
|
||||
}
|
||||
goto out;
|
||||
}
|
||||
} else {
|
||||
(void) d40_alloc_mask_free(phy, is_src, 0);
|
||||
}
|
||||
|
||||
/* Release physical channel */
|
||||
res = d40_channel_execute_command(d40c, D40_DMA_STOP);
|
||||
if (res) {
|
||||
chan_err(d40c, "Failed to stop channel\n");
|
||||
chan_err(d40c, "stop failed\n");
|
||||
goto out;
|
||||
}
|
||||
|
||||
d40_alloc_mask_free(phy, is_src, chan_is_logical(d40c) ? event : 0);
|
||||
|
||||
if (chan_is_logical(d40c))
|
||||
d40c->base->lookup_log_chans[d40c->log_num] = NULL;
|
||||
else
|
||||
d40c->base->lookup_phy_chans[phy->num] = NULL;
|
||||
|
||||
if (d40c->busy) {
|
||||
pm_runtime_mark_last_busy(d40c->base->dev);
|
||||
pm_runtime_put_autosuspend(d40c->base->dev);
|
||||
|
@ -1852,7 +1899,6 @@ static int d40_free_dma(struct d40_chan *d40c)
|
|||
d40c->busy = false;
|
||||
d40c->phy_chan = NULL;
|
||||
d40c->configured = false;
|
||||
d40c->base->lookup_phy_chans[phy->num] = NULL;
|
||||
out:
|
||||
|
||||
pm_runtime_mark_last_busy(d40c->base->dev);
|
||||
|
@ -2070,7 +2116,7 @@ d40_prep_sg(struct dma_chan *dchan, struct scatterlist *sg_src,
|
|||
if (sg_next(&sg_src[sg_len - 1]) == sg_src)
|
||||
desc->cyclic = true;
|
||||
|
||||
if (direction != DMA_NONE) {
|
||||
if (direction != DMA_TRANS_NONE) {
|
||||
dma_addr_t dev_addr = d40_get_dev_addr(chan, direction);
|
||||
|
||||
if (direction == DMA_DEV_TO_MEM)
|
||||
|
@ -2371,6 +2417,31 @@ static void d40_issue_pending(struct dma_chan *chan)
|
|||
spin_unlock_irqrestore(&d40c->lock, flags);
|
||||
}
|
||||
|
||||
static void d40_terminate_all(struct dma_chan *chan)
|
||||
{
|
||||
unsigned long flags;
|
||||
struct d40_chan *d40c = container_of(chan, struct d40_chan, chan);
|
||||
int ret;
|
||||
|
||||
spin_lock_irqsave(&d40c->lock, flags);
|
||||
|
||||
pm_runtime_get_sync(d40c->base->dev);
|
||||
ret = d40_channel_execute_command(d40c, D40_DMA_STOP);
|
||||
if (ret)
|
||||
chan_err(d40c, "Failed to stop channel\n");
|
||||
|
||||
d40_term_all(d40c);
|
||||
pm_runtime_mark_last_busy(d40c->base->dev);
|
||||
pm_runtime_put_autosuspend(d40c->base->dev);
|
||||
if (d40c->busy) {
|
||||
pm_runtime_mark_last_busy(d40c->base->dev);
|
||||
pm_runtime_put_autosuspend(d40c->base->dev);
|
||||
}
|
||||
d40c->busy = false;
|
||||
|
||||
spin_unlock_irqrestore(&d40c->lock, flags);
|
||||
}
|
||||
|
||||
static int
|
||||
dma40_config_to_halfchannel(struct d40_chan *d40c,
|
||||
struct stedma40_half_channel_info *info,
|
||||
|
@ -2551,7 +2622,8 @@ static int d40_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
|
|||
|
||||
switch (cmd) {
|
||||
case DMA_TERMINATE_ALL:
|
||||
return d40_terminate_all(d40c);
|
||||
d40_terminate_all(chan);
|
||||
return 0;
|
||||
case DMA_PAUSE:
|
||||
return d40_pause(d40c);
|
||||
case DMA_RESUME:
|
||||
|
@ -2908,6 +2980,12 @@ static struct d40_base * __init d40_hw_detect_init(struct platform_device *pdev)
|
|||
dev_info(&pdev->dev, "hardware revision: %d @ 0x%x\n",
|
||||
rev, res->start);
|
||||
|
||||
if (rev < 2) {
|
||||
d40_err(&pdev->dev, "hardware revision: %d is not supported",
|
||||
rev);
|
||||
goto failure;
|
||||
}
|
||||
|
||||
plat_data = pdev->dev.platform_data;
|
||||
|
||||
/* Count the number of logical channels in use */
|
||||
|
@ -2998,6 +3076,7 @@ failure:
|
|||
|
||||
if (base) {
|
||||
kfree(base->lcla_pool.alloc_map);
|
||||
kfree(base->reg_val_backup_chan);
|
||||
kfree(base->lookup_log_chans);
|
||||
kfree(base->lookup_phy_chans);
|
||||
kfree(base->phy_res);
|
||||
|
|
|
@ -62,8 +62,6 @@
|
|||
#define D40_SREG_ELEM_LOG_LIDX_MASK (0xFF << D40_SREG_ELEM_LOG_LIDX_POS)
|
||||
|
||||
/* Link register */
|
||||
#define D40_DEACTIVATE_EVENTLINE 0x0
|
||||
#define D40_ACTIVATE_EVENTLINE 0x1
|
||||
#define D40_EVENTLINE_POS(i) (2 * i)
|
||||
#define D40_EVENTLINE_MASK(i) (0x3 << D40_EVENTLINE_POS(i))
|
||||
|
||||
|
|
|
@ -64,6 +64,7 @@ struct pxa_gpio_chip {
|
|||
unsigned long irq_mask;
|
||||
unsigned long irq_edge_rise;
|
||||
unsigned long irq_edge_fall;
|
||||
int (*set_wake)(unsigned int gpio, unsigned int on);
|
||||
|
||||
#ifdef CONFIG_PM
|
||||
unsigned long saved_gplr;
|
||||
|
@ -269,7 +270,8 @@ static void pxa_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
|
|||
(value ? GPSR_OFFSET : GPCR_OFFSET));
|
||||
}
|
||||
|
||||
static int __devinit pxa_init_gpio_chip(int gpio_end)
|
||||
static int __devinit pxa_init_gpio_chip(int gpio_end,
|
||||
int (*set_wake)(unsigned int, unsigned int))
|
||||
{
|
||||
int i, gpio, nbanks = gpio_to_bank(gpio_end) + 1;
|
||||
struct pxa_gpio_chip *chips;
|
||||
|
@ -285,6 +287,7 @@ static int __devinit pxa_init_gpio_chip(int gpio_end)
|
|||
|
||||
sprintf(chips[i].label, "gpio-%d", i);
|
||||
chips[i].regbase = gpio_reg_base + BANK_OFF(i);
|
||||
chips[i].set_wake = set_wake;
|
||||
|
||||
c->base = gpio;
|
||||
c->label = chips[i].label;
|
||||
|
@ -412,6 +415,17 @@ static void pxa_mask_muxed_gpio(struct irq_data *d)
|
|||
writel_relaxed(gfer, c->regbase + GFER_OFFSET);
|
||||
}
|
||||
|
||||
static int pxa_gpio_set_wake(struct irq_data *d, unsigned int on)
|
||||
{
|
||||
int gpio = pxa_irq_to_gpio(d->irq);
|
||||
struct pxa_gpio_chip *c = gpio_to_pxachip(gpio);
|
||||
|
||||
if (c->set_wake)
|
||||
return c->set_wake(gpio, on);
|
||||
else
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void pxa_unmask_muxed_gpio(struct irq_data *d)
|
||||
{
|
||||
int gpio = pxa_irq_to_gpio(d->irq);
|
||||
|
@ -427,6 +441,7 @@ static struct irq_chip pxa_muxed_gpio_chip = {
|
|||
.irq_mask = pxa_mask_muxed_gpio,
|
||||
.irq_unmask = pxa_unmask_muxed_gpio,
|
||||
.irq_set_type = pxa_gpio_irq_type,
|
||||
.irq_set_wake = pxa_gpio_set_wake,
|
||||
};
|
||||
|
||||
static int pxa_gpio_nums(void)
|
||||
|
@ -471,6 +486,7 @@ static int __devinit pxa_gpio_probe(struct platform_device *pdev)
|
|||
struct pxa_gpio_chip *c;
|
||||
struct resource *res;
|
||||
struct clk *clk;
|
||||
struct pxa_gpio_platform_data *info;
|
||||
int gpio, irq, ret;
|
||||
int irq0 = 0, irq1 = 0, irq_mux, gpio_offset = 0;
|
||||
|
||||
|
@ -516,7 +532,8 @@ static int __devinit pxa_gpio_probe(struct platform_device *pdev)
|
|||
}
|
||||
|
||||
/* Initialize GPIO chips */
|
||||
pxa_init_gpio_chip(pxa_last_gpio);
|
||||
info = dev_get_platdata(&pdev->dev);
|
||||
pxa_init_gpio_chip(pxa_last_gpio, info ? info->gpio_set_wake : NULL);
|
||||
|
||||
/* clear all GPIO edge detects */
|
||||
for_each_gpio_chip(gpio, c) {
|
||||
|
|
|
@ -149,22 +149,12 @@ static int exynos_drm_gem_map_pages(struct drm_gem_object *obj,
|
|||
unsigned long pfn;
|
||||
|
||||
if (exynos_gem_obj->flags & EXYNOS_BO_NONCONTIG) {
|
||||
unsigned long usize = buf->size;
|
||||
|
||||
if (!buf->pages)
|
||||
return -EINTR;
|
||||
|
||||
while (usize > 0) {
|
||||
pfn = page_to_pfn(buf->pages[page_offset++]);
|
||||
vm_insert_mixed(vma, f_vaddr, pfn);
|
||||
f_vaddr += PAGE_SIZE;
|
||||
usize -= PAGE_SIZE;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
pfn = (buf->dma_addr >> PAGE_SHIFT) + page_offset;
|
||||
pfn = page_to_pfn(buf->pages[page_offset++]);
|
||||
} else
|
||||
pfn = (buf->dma_addr >> PAGE_SHIFT) + page_offset;
|
||||
|
||||
return vm_insert_mixed(vma, f_vaddr, pfn);
|
||||
}
|
||||
|
@ -524,6 +514,8 @@ static int exynos_drm_gem_mmap_buffer(struct file *filp,
|
|||
if (!buffer->pages)
|
||||
return -EINVAL;
|
||||
|
||||
vma->vm_flags |= VM_MIXEDMAP;
|
||||
|
||||
do {
|
||||
ret = vm_insert_page(vma, uaddr, buffer->pages[i++]);
|
||||
if (ret) {
|
||||
|
@ -710,7 +702,6 @@ int exynos_drm_gem_dumb_destroy(struct drm_file *file_priv,
|
|||
int exynos_drm_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
|
||||
{
|
||||
struct drm_gem_object *obj = vma->vm_private_data;
|
||||
struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
|
||||
struct drm_device *dev = obj->dev;
|
||||
unsigned long f_vaddr;
|
||||
pgoff_t page_offset;
|
||||
|
@ -722,21 +713,10 @@ int exynos_drm_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
|
|||
|
||||
mutex_lock(&dev->struct_mutex);
|
||||
|
||||
/*
|
||||
* allocate all pages as desired size if user wants to allocate
|
||||
* physically non-continuous memory.
|
||||
*/
|
||||
if (exynos_gem_obj->flags & EXYNOS_BO_NONCONTIG) {
|
||||
ret = exynos_drm_gem_get_pages(obj);
|
||||
if (ret < 0)
|
||||
goto err;
|
||||
}
|
||||
|
||||
ret = exynos_drm_gem_map_pages(obj, vma, f_vaddr, page_offset);
|
||||
if (ret < 0)
|
||||
DRM_ERROR("failed to map pages.\n");
|
||||
|
||||
err:
|
||||
mutex_unlock(&dev->struct_mutex);
|
||||
|
||||
return convert_to_vm_err_msg(ret);
|
||||
|
|
|
@ -1133,6 +1133,11 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (args->num_cliprects > UINT_MAX / sizeof(*cliprects)) {
|
||||
DRM_DEBUG("execbuf with %u cliprects\n",
|
||||
args->num_cliprects);
|
||||
return -EINVAL;
|
||||
}
|
||||
cliprects = kmalloc(args->num_cliprects * sizeof(*cliprects),
|
||||
GFP_KERNEL);
|
||||
if (cliprects == NULL) {
|
||||
|
@ -1404,7 +1409,8 @@ i915_gem_execbuffer2(struct drm_device *dev, void *data,
|
|||
struct drm_i915_gem_exec_object2 *exec2_list = NULL;
|
||||
int ret;
|
||||
|
||||
if (args->buffer_count < 1) {
|
||||
if (args->buffer_count < 1 ||
|
||||
args->buffer_count > UINT_MAX / sizeof(*exec2_list)) {
|
||||
DRM_DEBUG("execbuf2 with %d buffers\n", args->buffer_count);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
|
|
@ -568,6 +568,7 @@
|
|||
#define CM0_MASK_SHIFT 16
|
||||
#define CM0_IZ_OPT_DISABLE (1<<6)
|
||||
#define CM0_ZR_OPT_DISABLE (1<<5)
|
||||
#define CM0_STC_EVICT_DISABLE_LRA_SNB (1<<5)
|
||||
#define CM0_DEPTH_EVICT_DISABLE (1<<4)
|
||||
#define CM0_COLOR_EVICT_DISABLE (1<<3)
|
||||
#define CM0_DEPTH_WRITE_DISABLE (1<<1)
|
||||
|
|
|
@ -430,8 +430,8 @@ intel_crt_detect(struct drm_connector *connector, bool force)
|
|||
{
|
||||
struct drm_device *dev = connector->dev;
|
||||
struct intel_crt *crt = intel_attached_crt(connector);
|
||||
struct drm_crtc *crtc;
|
||||
enum drm_connector_status status;
|
||||
struct intel_load_detect_pipe tmp;
|
||||
|
||||
if (I915_HAS_HOTPLUG(dev)) {
|
||||
if (intel_crt_detect_hotplug(connector)) {
|
||||
|
@ -450,23 +450,16 @@ intel_crt_detect(struct drm_connector *connector, bool force)
|
|||
return connector->status;
|
||||
|
||||
/* for pre-945g platforms use load detect */
|
||||
crtc = crt->base.base.crtc;
|
||||
if (crtc && crtc->enabled) {
|
||||
status = intel_crt_load_detect(crt);
|
||||
} else {
|
||||
struct intel_load_detect_pipe tmp;
|
||||
|
||||
if (intel_get_load_detect_pipe(&crt->base, connector, NULL,
|
||||
&tmp)) {
|
||||
if (intel_crt_detect_ddc(connector))
|
||||
status = connector_status_connected;
|
||||
else
|
||||
status = intel_crt_load_detect(crt);
|
||||
intel_release_load_detect_pipe(&crt->base, connector,
|
||||
&tmp);
|
||||
} else
|
||||
status = connector_status_unknown;
|
||||
}
|
||||
if (intel_get_load_detect_pipe(&crt->base, connector, NULL,
|
||||
&tmp)) {
|
||||
if (intel_crt_detect_ddc(connector))
|
||||
status = connector_status_connected;
|
||||
else
|
||||
status = intel_crt_load_detect(crt);
|
||||
intel_release_load_detect_pipe(&crt->base, connector,
|
||||
&tmp);
|
||||
} else
|
||||
status = connector_status_unknown;
|
||||
|
||||
return status;
|
||||
}
|
||||
|
|
|
@ -401,6 +401,14 @@ static int init_render_ring(struct intel_ring_buffer *ring)
|
|||
if (INTEL_INFO(dev)->gen >= 6) {
|
||||
I915_WRITE(INSTPM,
|
||||
INSTPM_FORCE_ORDERING << 16 | INSTPM_FORCE_ORDERING);
|
||||
|
||||
/* From the Sandybridge PRM, volume 1 part 3, page 24:
|
||||
* "If this bit is set, STCunit will have LRA as replacement
|
||||
* policy. [...] This bit must be reset. LRA replacement
|
||||
* policy is not supported."
|
||||
*/
|
||||
I915_WRITE(CACHE_MODE_0,
|
||||
CM0_STC_EVICT_DISABLE_LRA_SNB << CM0_MASK_SHIFT);
|
||||
}
|
||||
|
||||
return ret;
|
||||
|
|
|
@@ -731,6 +731,7 @@ static void intel_sdvo_get_dtd_from_mode(struct intel_sdvo_dtd *dtd,
 	uint16_t width, height;
 	uint16_t h_blank_len, h_sync_len, v_blank_len, v_sync_len;
 	uint16_t h_sync_offset, v_sync_offset;
+	int mode_clock;

 	width = mode->crtc_hdisplay;
 	height = mode->crtc_vdisplay;

@@ -745,7 +746,11 @@ static void intel_sdvo_get_dtd_from_mode(struct intel_sdvo_dtd *dtd,
 	h_sync_offset = mode->crtc_hsync_start - mode->crtc_hblank_start;
 	v_sync_offset = mode->crtc_vsync_start - mode->crtc_vblank_start;

-	dtd->part1.clock = mode->clock / 10;
+	mode_clock = mode->clock;
+	mode_clock /= intel_mode_get_pixel_multiplier(mode) ?: 1;
+	mode_clock /= 10;
+	dtd->part1.clock = mode_clock;
+
 	dtd->part1.h_active = width & 0xff;
 	dtd->part1.h_blank = h_blank_len & 0xff;
 	dtd->part1.h_high = (((width >> 8) & 0xf) << 4) |
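The x ?: y form used above is the GNU C conditional with the middle operand omitted: it evaluates to x when x is non-zero and to y otherwise, evaluating x only once. Here it keeps the divisor at 1 when intel_mode_get_pixel_multiplier() reports no multiplier. A small stand-alone illustration, with a made-up helper in place of the driver function:

#include <stdio.h>

/* Pretend multiplier query: 0 means "no multiplier reported". */
static int get_pixel_multiplier(int reported)
{
	return reported;
}

int main(void)
{
	int clock = 148500;                     /* kHz */
	int mult  = get_pixel_multiplier(0);    /* nothing reported */

	/* GNU extension: a ?: b behaves like a ? a : b, a evaluated once. */
	clock /= mult ?: 1;
	printf("clock stays %d kHz\n", clock);
	return 0;
}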
@@ -996,7 +1001,7 @@ static void intel_sdvo_mode_set(struct drm_encoder *encoder,
 	struct intel_sdvo *intel_sdvo = to_intel_sdvo(encoder);
 	u32 sdvox;
 	struct intel_sdvo_in_out_map in_out;
-	struct intel_sdvo_dtd input_dtd;
+	struct intel_sdvo_dtd input_dtd, output_dtd;
 	int pixel_multiplier = intel_mode_get_pixel_multiplier(adjusted_mode);
 	int rate;

@@ -1021,20 +1026,13 @@ static void intel_sdvo_mode_set(struct drm_encoder *encoder,
 					  intel_sdvo->attached_output))
 		return;

-	/* We have tried to get input timing in mode_fixup, and filled into
-	 * adjusted_mode.
-	 */
-	if (intel_sdvo->is_tv || intel_sdvo->is_lvds) {
-		input_dtd = intel_sdvo->input_dtd;
-	} else {
-		/* Set the output timing to the screen */
-		if (!intel_sdvo_set_target_output(intel_sdvo,
-						  intel_sdvo->attached_output))
-			return;
-
-		intel_sdvo_get_dtd_from_mode(&input_dtd, adjusted_mode);
-		(void) intel_sdvo_set_output_timing(intel_sdvo, &input_dtd);
-	}
+	/* lvds has a special fixed output timing. */
+	if (intel_sdvo->is_lvds)
+		intel_sdvo_get_dtd_from_mode(&output_dtd,
+					     intel_sdvo->sdvo_lvds_fixed_mode);
+	else
+		intel_sdvo_get_dtd_from_mode(&output_dtd, mode);
+	(void) intel_sdvo_set_output_timing(intel_sdvo, &output_dtd);

 	/* Set the input timing to the screen. Assume always input 0. */
 	if (!intel_sdvo_set_target_input(intel_sdvo))

@@ -1052,6 +1050,10 @@ static void intel_sdvo_mode_set(struct drm_encoder *encoder,
 	    !intel_sdvo_set_tv_format(intel_sdvo))
 		return;

+	/* We have tried to get input timing in mode_fixup, and filled into
+	 * adjusted_mode.
+	 */
+	intel_sdvo_get_dtd_from_mode(&input_dtd, adjusted_mode);
 	(void) intel_sdvo_set_input_timing(intel_sdvo, &input_dtd);

 	switch (pixel_multiplier) {
@@ -575,6 +575,9 @@ static u32 atombios_adjust_pll(struct drm_crtc *crtc,

 		if (rdev->family < CHIP_RV770)
 			pll->flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
+		/* use frac fb div on APUs */
+		if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE61(rdev))
+			pll->flags |= RADEON_PLL_USE_FRAC_FB_DIV;
 	} else {
 		pll->flags |= RADEON_PLL_LEGACY;

@@ -955,8 +958,8 @@ static void atombios_crtc_set_pll(struct drm_crtc *crtc, struct drm_display_mode
 		break;
 	}

-	if (radeon_encoder->active_device &
-	    (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) {
+	if ((radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) ||
+	    (radeon_encoder_get_dp_bridge_encoder_id(encoder) != ENCODER_OBJECT_ID_NONE)) {
 		struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
 		struct drm_connector *connector =
 			radeon_get_connector_for_encoder(encoder);
@@ -533,7 +533,7 @@ static void radeon_crtc_init(struct drm_device *dev, int index)
 		radeon_legacy_init_crtc(dev, radeon_crtc);
 }

-static const char *encoder_names[36] = {
+static const char *encoder_names[37] = {
 	"NONE",
 	"INTERNAL_LVDS",
 	"INTERNAL_TMDS1",

@@ -570,6 +570,7 @@ static const char *encoder_names[36] = {
 	"INTERNAL_UNIPHY2",
 	"NUTMEG",
 	"TRAVIS",
+	"INTERNAL_VCE"
 };

 static const char *connector_names[15] = {
@@ -123,7 +123,7 @@ struct hsc_client_data {
 static unsigned int hsc_major;
 /* Maximum buffer size that hsi_char will accept from userspace */
 static unsigned int max_data_size = 0x1000;
-module_param(max_data_size, uint, S_IRUSR | S_IWUSR);
+module_param(max_data_size, uint, 0);
 MODULE_PARM_DESC(max_data_size, "max read/write data size [4,8..65536] (^2)");

 static void hsc_add_tail(struct hsc_channel *channel, struct hsi_msg *msg,
@@ -21,26 +21,13 @@
  */
 #include <linux/hsi/hsi.h>
 #include <linux/compiler.h>
-#include <linux/rwsem.h>
 #include <linux/list.h>
-#include <linux/spinlock.h>
 #include <linux/kobject.h>
 #include <linux/slab.h>
 #include <linux/string.h>
+#include <linux/notifier.h>
 #include "hsi_core.h"

-static struct device_type hsi_ctrl = {
-	.name	= "hsi_controller",
-};
-
-static struct device_type hsi_cl = {
-	.name	= "hsi_client",
-};
-
-static struct device_type hsi_port = {
-	.name	= "hsi_port",
-};
-
 static ssize_t modalias_show(struct device *dev,
 			struct device_attribute *a __maybe_unused, char *buf)
 {
@@ -54,8 +41,7 @@ static struct device_attribute hsi_bus_dev_attrs[] = {

 static int hsi_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
 {
-	if (dev->type == &hsi_cl)
-		add_uevent_var(env, "MODALIAS=hsi:%s", dev_name(dev));
+	add_uevent_var(env, "MODALIAS=hsi:%s", dev_name(dev));

 	return 0;
 }
@@ -80,12 +66,10 @@ static void hsi_client_release(struct device *dev)
 static void hsi_new_client(struct hsi_port *port, struct hsi_board_info *info)
 {
 	struct hsi_client *cl;
-	unsigned long flags;

 	cl = kzalloc(sizeof(*cl), GFP_KERNEL);
 	if (!cl)
 		return;
-	cl->device.type = &hsi_cl;
 	cl->tx_cfg = info->tx_cfg;
 	cl->rx_cfg = info->rx_cfg;
 	cl->device.bus = &hsi_bus_type;

@@ -93,14 +77,11 @@ static void hsi_new_client(struct hsi_port *port, struct hsi_board_info *info)
 	cl->device.release = hsi_client_release;
 	dev_set_name(&cl->device, info->name);
 	cl->device.platform_data = info->platform_data;
-	spin_lock_irqsave(&port->clock, flags);
-	list_add_tail(&cl->link, &port->clients);
-	spin_unlock_irqrestore(&port->clock, flags);
 	if (info->archdata)
 		cl->device.archdata = *info->archdata;
 	if (device_register(&cl->device) < 0) {
 		pr_err("hsi: failed to register client: %s\n", info->name);
-		kfree(cl);
+		put_device(&cl->device);
 	}
 }

@@ -120,13 +101,6 @@ static void hsi_scan_board_info(struct hsi_controller *hsi)

 static int hsi_remove_client(struct device *dev, void *data __maybe_unused)
 {
-	struct hsi_client *cl = to_hsi_client(dev);
-	struct hsi_port *port = to_hsi_port(dev->parent);
-	unsigned long flags;
-
-	spin_lock_irqsave(&port->clock, flags);
-	list_del(&cl->link);
-	spin_unlock_irqrestore(&port->clock, flags);
 	device_unregister(dev);

 	return 0;
@@ -140,12 +114,17 @@ static int hsi_remove_port(struct device *dev, void *data __maybe_unused)
 	return 0;
 }

-static void hsi_controller_release(struct device *dev __maybe_unused)
+static void hsi_controller_release(struct device *dev)
 {
+	struct hsi_controller *hsi = to_hsi_controller(dev);
+
+	kfree(hsi->port);
+	kfree(hsi);
 }

-static void hsi_port_release(struct device *dev __maybe_unused)
+static void hsi_port_release(struct device *dev)
 {
+	kfree(to_hsi_port(dev));
 }

 /**
@@ -170,20 +149,12 @@ int hsi_register_controller(struct hsi_controller *hsi)
 	unsigned int i;
 	int err;

-	hsi->device.type = &hsi_ctrl;
-	hsi->device.bus = &hsi_bus_type;
-	hsi->device.release = hsi_controller_release;
-	err = device_register(&hsi->device);
+	err = device_add(&hsi->device);
 	if (err < 0)
 		return err;
 	for (i = 0; i < hsi->num_ports; i++) {
-		hsi->port[i].device.parent = &hsi->device;
-		hsi->port[i].device.bus = &hsi_bus_type;
-		hsi->port[i].device.release = hsi_port_release;
-		hsi->port[i].device.type = &hsi_port;
-		INIT_LIST_HEAD(&hsi->port[i].clients);
-		spin_lock_init(&hsi->port[i].clock);
-		err = device_register(&hsi->port[i].device);
+		hsi->port[i]->device.parent = &hsi->device;
+		err = device_add(&hsi->port[i]->device);
 		if (err < 0)
 			goto out;
 	}

@@ -192,7 +163,9 @@ int hsi_register_controller(struct hsi_controller *hsi)

 	return 0;
 out:
-	hsi_unregister_controller(hsi);
+	while (i-- > 0)
+		device_del(&hsi->port[i]->device);
+	device_del(&hsi->device);

 	return err;
 }
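The controller and port devices are now handled with the split device_initialize()/device_add() pattern instead of device_register(): device_initialize() makes the struct device reference-counted immediately, device_add() publishes it, and on failure the caller unwinds with device_del() followed by a final put_device(), which invokes the release callbacks that free the memory. Below is a minimal sketch of that driver-core idiom using a made-up "demo" device; it is not HSI-specific and is simplified relative to the real code in this diff.

#include <linux/device.h>
#include <linux/slab.h>

/* Illustrative only: a driver-owned wrapper around struct device. */
struct demo_dev {
	struct device dev;
	int id;
};

static void demo_release(struct device *dev)
{
	/* Called when the last reference is dropped. */
	kfree(container_of(dev, struct demo_dev, dev));
}

static struct demo_dev *demo_create(struct device *parent, int id)
{
	struct demo_dev *d = kzalloc(sizeof(*d), GFP_KERNEL);

	if (!d)
		return NULL;
	d->id = id;
	d->dev.parent = parent;
	d->dev.release = demo_release;
	device_initialize(&d->dev);	/* refcounted from here on */
	dev_set_name(&d->dev, "demo%d", id);

	if (device_add(&d->dev) < 0) {
		/* Never kfree() after device_initialize(); drop the
		 * reference instead so demo_release() does the freeing. */
		put_device(&d->dev);
		return NULL;
	}
	return d;
}

static void demo_destroy(struct demo_dev *d)
{
	device_del(&d->dev);	/* unpublish from the device hierarchy */
	put_device(&d->dev);	/* free via demo_release() */
}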
@@ -222,6 +195,29 @@ static inline int hsi_dummy_cl(struct hsi_client *cl __maybe_unused)
 	return 0;
 }

+/**
+ * hsi_put_controller - Free an HSI controller
+ *
+ * @hsi: Pointer to the HSI controller to freed
+ *
+ * HSI controller drivers should only use this function if they need
+ * to free their allocated hsi_controller structures before a successful
+ * call to hsi_register_controller. Other use is not allowed.
+ */
+void hsi_put_controller(struct hsi_controller *hsi)
+{
+	unsigned int i;
+
+	if (!hsi)
+		return;
+
+	for (i = 0; i < hsi->num_ports; i++)
+		if (hsi->port && hsi->port[i])
+			put_device(&hsi->port[i]->device);
+	put_device(&hsi->device);
+}
+EXPORT_SYMBOL_GPL(hsi_put_controller);
+
 /**
  * hsi_alloc_controller - Allocate an HSI controller and its ports
  * @n_ports: Number of ports on the HSI controller
@@ -232,54 +228,51 @@ static inline int hsi_dummy_cl(struct hsi_client *cl __maybe_unused)
 struct hsi_controller *hsi_alloc_controller(unsigned int n_ports, gfp_t flags)
 {
 	struct hsi_controller	*hsi;
-	struct hsi_port		*port;
+	struct hsi_port		**port;
 	unsigned int		i;

 	if (!n_ports)
 		return NULL;

-	port = kzalloc(sizeof(*port)*n_ports, flags);
-	if (!port)
-		return NULL;
 	hsi = kzalloc(sizeof(*hsi), flags);
 	if (!hsi)
-		goto out;
-	for (i = 0; i < n_ports; i++) {
-		dev_set_name(&port[i].device, "port%d", i);
-		port[i].num = i;
-		port[i].async = hsi_dummy_msg;
-		port[i].setup = hsi_dummy_cl;
-		port[i].flush = hsi_dummy_cl;
-		port[i].start_tx = hsi_dummy_cl;
-		port[i].stop_tx = hsi_dummy_cl;
-		port[i].release = hsi_dummy_cl;
-		mutex_init(&port[i].lock);
+		return NULL;
+	port = kzalloc(sizeof(*port)*n_ports, flags);
+	if (!port) {
+		kfree(hsi);
+		return NULL;
 	}
 	hsi->num_ports = n_ports;
 	hsi->port = port;
+	hsi->device.release = hsi_controller_release;
+	device_initialize(&hsi->device);
+
+	for (i = 0; i < n_ports; i++) {
+		port[i] = kzalloc(sizeof(**port), flags);
+		if (port[i] == NULL)
+			goto out;
+		port[i]->num = i;
+		port[i]->async = hsi_dummy_msg;
+		port[i]->setup = hsi_dummy_cl;
+		port[i]->flush = hsi_dummy_cl;
+		port[i]->start_tx = hsi_dummy_cl;
+		port[i]->stop_tx = hsi_dummy_cl;
+		port[i]->release = hsi_dummy_cl;
+		mutex_init(&port[i]->lock);
+		ATOMIC_INIT_NOTIFIER_HEAD(&port[i]->n_head);
+		dev_set_name(&port[i]->device, "port%d", i);
+		hsi->port[i]->device.release = hsi_port_release;
+		device_initialize(&hsi->port[i]->device);
+	}

 	return hsi;
 out:
-	kfree(port);
+	hsi_put_controller(hsi);

 	return NULL;
 }
 EXPORT_SYMBOL_GPL(hsi_alloc_controller);

-/**
- * hsi_free_controller - Free an HSI controller
- * @hsi: Pointer to HSI controller
- */
-void hsi_free_controller(struct hsi_controller *hsi)
-{
-	if (!hsi)
-		return;
-
-	kfree(hsi->port);
-	kfree(hsi);
-}
-EXPORT_SYMBOL_GPL(hsi_free_controller);
-
 /**
  * hsi_free_msg - Free an HSI message
  * @msg: Pointer to the HSI message
@@ -414,37 +407,67 @@ void hsi_release_port(struct hsi_client *cl)
 }
 EXPORT_SYMBOL_GPL(hsi_release_port);

-static int hsi_start_rx(struct hsi_client *cl, void *data __maybe_unused)
+static int hsi_event_notifier_call(struct notifier_block *nb,
+				unsigned long event, void *data __maybe_unused)
 {
-	if (cl->hsi_start_rx)
-		(*cl->hsi_start_rx)(cl);
+	struct hsi_client *cl = container_of(nb, struct hsi_client, nb);
+
+	(*cl->ehandler)(cl, event);

 	return 0;
 }

-static int hsi_stop_rx(struct hsi_client *cl, void *data __maybe_unused)
+/**
+ * hsi_register_port_event - Register a client to receive port events
+ * @cl: HSI client that wants to receive port events
+ * @cb: Event handler callback
+ *
+ * Clients should register a callback to be able to receive
+ * events from the ports. Registration should happen after
+ * claiming the port.
+ * The handler can be called in interrupt context.
+ *
+ * Returns -errno on error, or 0 on success.
+ */
+int hsi_register_port_event(struct hsi_client *cl,
+			void (*handler)(struct hsi_client *, unsigned long))
 {
-	if (cl->hsi_stop_rx)
-		(*cl->hsi_stop_rx)(cl);
+	struct hsi_port *port = hsi_get_port(cl);

-	return 0;
+	if (!handler || cl->ehandler)
+		return -EINVAL;
+	if (!hsi_port_claimed(cl))
+		return -EACCES;
+	cl->ehandler = handler;
+	cl->nb.notifier_call = hsi_event_notifier_call;
+
+	return atomic_notifier_chain_register(&port->n_head, &cl->nb);
 }
+EXPORT_SYMBOL_GPL(hsi_register_port_event);

-static int hsi_port_for_each_client(struct hsi_port *port, void *data,
-				int (*fn)(struct hsi_client *cl, void *data))
+/**
+ * hsi_unregister_port_event - Stop receiving port events for a client
+ * @cl: HSI client that wants to stop receiving port events
+ *
+ * Clients should call this function before releasing their associated
+ * port.
+ *
+ * Returns -errno on error, or 0 on success.
+ */
+int hsi_unregister_port_event(struct hsi_client *cl)
 {
-	struct hsi_client *cl;
+	struct hsi_port *port = hsi_get_port(cl);
+	int err;

-	spin_lock(&port->clock);
-	list_for_each_entry(cl, &port->clients, link) {
-		spin_unlock(&port->clock);
-		(*fn)(cl, data);
-		spin_lock(&port->clock);
-	}
-	spin_unlock(&port->clock);
+	WARN_ON(!hsi_port_claimed(cl));

-	return 0;
+	err = atomic_notifier_chain_unregister(&port->n_head, &cl->nb);
+	if (!err)
+		cl->ehandler = NULL;
+
+	return err;
 }
+EXPORT_SYMBOL_GPL(hsi_unregister_port_event);

 /**
  * hsi_event -Notifies clients about port events
@@ -458,22 +481,12 @@ static int hsi_port_for_each_client(struct hsi_port *port, void *data,
  * Events:
  * HSI_EVENT_START_RX - Incoming wake line high
  * HSI_EVENT_STOP_RX - Incoming wake line down
+ *
+ * Returns -errno on error, or 0 on success.
  */
-void hsi_event(struct hsi_port *port, unsigned int event)
+int hsi_event(struct hsi_port *port, unsigned long event)
 {
-	int (*fn)(struct hsi_client *cl, void *data);
-
-	switch (event) {
-	case HSI_EVENT_START_RX:
-		fn = hsi_start_rx;
-		break;
-	case HSI_EVENT_STOP_RX:
-		fn = hsi_stop_rx;
-		break;
-	default:
-		return;
-	}
-	hsi_port_for_each_client(port, NULL, fn);
+	return atomic_notifier_call_chain(&port->n_head, event, NULL);
 }
 EXPORT_SYMBOL_GPL(hsi_event);

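With this rework, port events travel over an atomic notifier chain: hsi_event() calls atomic_notifier_call_chain(), and each interested client supplies a handler through hsi_register_port_event() after claiming the port. The sketch below shows the client side under stated assumptions; the demo driver, its functions, and the point at which the port is already claimed are made up for illustration, while the event API and event names come from the diff itself.

#include <linux/device.h>
#include <linux/hsi/hsi.h>

/* Illustrative handler: may run in interrupt context, so keep it short. */
static void demo_port_event(struct hsi_client *cl, unsigned long event)
{
	switch (event) {
	case HSI_EVENT_START_RX:
		dev_dbg(&cl->device, "remote wake line went high\n");
		break;
	case HSI_EVENT_STOP_RX:
		dev_dbg(&cl->device, "remote wake line went low\n");
		break;
	default:
		break;
	}
}

/* Assumed to be called with a client that has already claimed its port. */
static int demo_setup_events(struct hsi_client *cl)
{
	int err;

	err = hsi_register_port_event(cl, demo_port_event);
	if (err)
		return err;
	/* ... use the port; later, before releasing it: */
	hsi_unregister_port_event(cl);
	return 0;
}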