Merge branch 'common/pfc' into sh-latest

Paul Mundt 2012-06-21 13:44:42 +09:00
commit 7b98cf0cf4
350 changed files with 5323 additions and 2029 deletions


@ -6,7 +6,9 @@ Supported chips:
Prefix: 'coretemp'
CPUID: family 0x6, models 0xe (Pentium M DC), 0xf (Core 2 DC 65nm),
0x16 (Core 2 SC 65nm), 0x17 (Penryn 45nm),
0x1a (Nehalem), 0x1c (Atom), 0x1e (Lynnfield)
0x1a (Nehalem), 0x1c (Atom), 0x1e (Lynnfield),
0x26 (Tunnel Creek Atom), 0x27 (Medfield Atom),
0x36 (Cedar Trail Atom)
Datasheet: Intel 64 and IA-32 Architectures Software Developer's Manual
Volume 3A: System Programming Guide
http://softwarecommunity.intel.com/Wiki/Mobility/720.htm
@ -52,6 +54,17 @@ Some information comes from ark.intel.com
Process Processor TjMax(C)
22nm Core i5/i7 Processors
i7 3920XM, 3820QM, 3720QM, 3667U, 3520M 105
i5 3427U, 3360M/3320M 105
i7 3770/3770K 105
i5 3570/3570K, 3550, 3470/3450 105
i7 3770S 103
i5 3570S/3550S, 3475S/3470S/3450S 103
i7 3770T 94
i5 3570T 94
i5 3470T 91
32nm Core i3/i5/i7 Processors
i7 660UM/640/620, 640LM/620, 620M, 610E 105
i5 540UM/520/430, 540M/520/450/430 105
@ -65,6 +78,11 @@ Process Processor TjMax(C)
U3400 105
P4505/P4500 90
32nm Atom Processors
Z2460 90
D2700/2550/2500 100
N2850/2800/2650/2600 100
45nm Xeon Processors 5400 Quad-Core
X5492, X5482, X5472, X5470, X5460, X5450 85
E5472, E5462, E5450/40/30/20/10/05 85
@ -85,6 +103,8 @@ Process Processor TjMax(C)
N475/470/455/450 100
N280/270 90
330/230 125
E680/660/640/620 90
E680T/660T/640T/620T 110
45nm Core2 Processors
Solo ULV SU3500/3300 100


@ -10,8 +10,8 @@ Currently this network device driver is for all STM embedded MAC/GMAC
(i.e. 7xxx/5xxx SoCs), SPEAr (arm), Loongson1B (mips) and XLINX XC2V3000
FF1152AMT0221 D1215994A VIRTEX FPGA board.
DWC Ether MAC 10/100/1000 Universal version 3.60a (and older) and DWC Ether MAC 10/100
Universal version 4.0 have been used for developing this driver.
DWC Ether MAC 10/100/1000 Universal version 3.60a (and older) and DWC Ether
MAC 10/100 Universal version 4.0 have been used for developing this driver.
This driver supports both the platform bus and PCI.
@ -54,27 +54,27 @@ net_device structure enabling the scatter/gather feature.
When one or more packets are received, an interrupt happens. The interrupts
are not queued so the driver has to scan all the descriptors in the ring during
the receive process.
This is based on NAPI so the interrupt handler signals only if there is work to be
done, and it exits.
This is based on NAPI so the interrupt handler signals only if there is work
to be done, and it exits.
Then the poll method will be scheduled at some future point.
The incoming packets are stored, by the DMA, in a list of pre-allocated socket
buffers in order to avoid the memcpy (Zero-copy).
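
For readers new to this pattern, here is a minimal sketch of the IRQ-plus-NAPI
receive flow described above. It is not the stmmac code itself; my_priv,
my_rx_ring_clean() and the RX interrupt helpers are hypothetical names used
only for illustration.

#include <linux/netdevice.h>
#include <linux/interrupt.h>

struct my_priv {
	struct napi_struct napi;
	void __iomem *ioaddr;
};

/* Hypothetical hardware helpers, assumed to exist elsewhere. */
void my_disable_rx_irq(void __iomem *ioaddr);
void my_enable_rx_irq(void __iomem *ioaddr);
int my_rx_ring_clean(struct my_priv *priv, int budget);

static irqreturn_t my_isr(int irq, void *dev_id)
{
	struct my_priv *priv = dev_id;

	/* Only signal that there is work to do, then exit; the
	 * descriptor scan happens later in the poll method. */
	my_disable_rx_irq(priv->ioaddr);
	napi_schedule(&priv->napi);
	return IRQ_HANDLED;
}

static int my_poll(struct napi_struct *napi, int budget)
{
	struct my_priv *priv = container_of(napi, struct my_priv, napi);
	int done = my_rx_ring_clean(priv, budget);	/* scan the RX ring */

	if (done < budget) {
		/* All pending descriptors handled: stop polling and
		 * re-enable the RX interrupt. */
		napi_complete(napi);
		my_enable_rx_irq(priv->ioaddr);
	}
	return done;
}

The poll method would be registered at probe time with something like
netif_napi_add(ndev, &priv->napi, my_poll, 64).
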
4.3) Timer-Driven Interrupt
Instead of having the device that asynchronously notifies the frame receptions, the
driver configures a timer to generate an interrupt at regular intervals.
Based on the granularity of the timer, the frames that are received by the device
will experience different levels of latency. Some NICs have dedicated timer
device to perform this task. STMMAC can use either the RTC device or the TMU
channel 2 on STLinux platforms.
Instead of having the device that asynchronously notifies the frame receptions,
the driver configures a timer to generate an interrupt at regular intervals.
Based on the granularity of the timer, the frames that are received by the
device will experience different levels of latency. Some NICs have dedicated
timer device to perform this task. STMMAC can use either the RTC device or the
TMU channel 2 on STLinux platforms.
The timer's frequency can be passed to the driver as a parameter; when changing it,
take care of both the hardware capability and the network stability/performance impact.
Several performance tests on STM platforms showed this optimisation allows to spare
the CPU while having the maximum throughput.
Several performance tests on STM platforms showed this optimisation allows to
spare the CPU while having the maximum throughput.
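
A similarly hedged sketch of the timer-driven variant: instead of the RX
interrupt, a periodic kernel timer schedules the same NAPI poll, so received
frames wait at most one timer period before being processed. Again this is
not the actual stmmac implementation; rx_timer, timer_period_us and the
my_priv/my_poll names from the previous sketch are illustrative assumptions.

#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list rx_timer;
static unsigned int timer_period_us = 1000;	/* illustrative default */

static void rx_timer_fn(unsigned long data)
{
	struct my_priv *priv = (struct my_priv *)data;

	napi_schedule(&priv->napi);		/* poll the RX ring */
	mod_timer(&rx_timer, jiffies + usecs_to_jiffies(timer_period_us));
}

static void my_start_rx_timer(struct my_priv *priv)
{
	/* Called at open time instead of requesting the RX interrupt. */
	setup_timer(&rx_timer, rx_timer_fn, (unsigned long)priv);
	mod_timer(&rx_timer, jiffies + usecs_to_jiffies(timer_period_us));
}
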
4.4) WOL
Wake up on Lan feature through Magic and Unicast frames are supported for the GMAC
core.
Wake up on Lan feature through Magic and Unicast frames are supported for the
GMAC core.
4.5) DMA descriptors
Driver handles both normal and enhanced descriptors. The latter has been only
@ -106,7 +106,8 @@ Several driver's information can be passed through the platform
These are included in the include/linux/stmmac.h header file
and detailed below as well:
struct plat_stmmacenet_data {
struct plat_stmmacenet_data {
char *phy_bus_name;
int bus_id;
int phy_addr;
int interface;
@ -124,19 +125,24 @@ and detailed below as well:
void (*bus_setup)(void __iomem *ioaddr);
int (*init)(struct platform_device *pdev);
void (*exit)(struct platform_device *pdev);
void *custom_cfg;
void *custom_data;
void *bsp_priv;
};
Where:
o phy_bus_name: phy bus name to attach to the stmmac.
o bus_id: bus identifier.
o phy_addr: the physical address can be passed from the platform.
If it is set to -1 the driver will automatically
detect it at run-time by probing all the 32 addresses.
o interface: PHY device's interface.
o mdio_bus_data: specific platform fields for the MDIO bus.
o pbl: the Programmable Burst Length is maximum number of beats to
o dma_cfg: internal DMA parameters
o pbl: the Programmable Burst Length is maximum number of beats to
be transferred in one DMA transaction.
GMAC also enables the 4xPBL by default.
o fixed_burst/mixed_burst/burst_len
o clk_csr: fixed CSR Clock range selection.
o has_gmac: uses the GMAC core.
o enh_desc: if set, the MAC will use the enhanced descriptor structure.
@ -160,8 +166,9 @@ Where:
this is sometimes necessary on some platforms (e.g. ST boxes)
where the HW needs some PIO lines or system cfg registers
to be set.
o custom_cfg: this is a custom configuration that can be passed while
initialising the resources.
o custom_cfg/custom_data: this is a custom configuration that can be passed
while initialising the resources.
o bsp_priv: another private pointer.
For the MDIO bus we have:
@ -180,7 +187,6 @@ Where:
o irqs: list of IRQs, one per PHY.
o probed_phy_irq: if irqs is NULL, use this for probed PHY.
For DMA engine we have the following internal fields that should be
tuned according to the HW capabilities.
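
To make the field descriptions above concrete, here is a rough sketch of how a
board file might fill the structure. The values (PHY address -1, RMII
interface, PBL of 8) and the board_* names are illustrative assumptions, not
taken from any real platform.

#include <linux/platform_device.h>
#include <linux/phy.h>
#include <linux/stmmac.h>

static struct stmmac_mdio_bus_data board_mdio_data = {
	.probed_phy_irq = PHY_POLL,	/* no "irqs" list, poll the PHY */
};

static struct stmmac_dma_cfg board_dma_cfg = {
	.pbl = 8,			/* burst length, see "pbl" above */
};

static int board_stmmac_init(struct platform_device *pdev)
{
	/* e.g. set up PIO lines or system cfg registers, see "init" above */
	return 0;
}

static struct plat_stmmacenet_data board_stmmac_data = {
	.bus_id		= 0,
	.phy_addr	= -1,		/* probe all 32 addresses at run time */
	.interface	= PHY_INTERFACE_MODE_RMII,
	.mdio_bus_data	= &board_mdio_data,
	.dma_cfg	= &board_dma_cfg,
	.has_gmac	= 1,
	.enh_desc	= 1,
	.init		= board_stmmac_init,
};

The structure would then be handed to the driver as the platform_data of the
stmmac platform device.
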


@ -1646,11 +1646,11 @@ S: Maintained
F: drivers/gpio/gpio-bt8xx.c
BTRFS FILE SYSTEM
M: Chris Mason <chris.mason@oracle.com>
M: Chris Mason <chris.mason@fusionio.com>
L: linux-btrfs@vger.kernel.org
W: http://btrfs.wiki.kernel.org/
Q: http://patchwork.kernel.org/project/linux-btrfs/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git
S: Maintained
F: Documentation/filesystems/btrfs.txt
F: fs/btrfs/
@ -1800,6 +1800,9 @@ F: include/linux/cfag12864b.h
CFG80211 and NL80211
M: Johannes Berg <johannes@sipsolutions.net>
L: linux-wireless@vger.kernel.org
W: http://wireless.kernel.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next.git
S: Maintained
F: include/linux/nl80211.h
F: include/net/cfg80211.h
@ -4349,7 +4352,8 @@ MAC80211
M: Johannes Berg <johannes@sipsolutions.net>
L: linux-wireless@vger.kernel.org
W: http://linuxwireless.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next.git
S: Maintained
F: Documentation/networking/mac80211-injection.txt
F: include/net/mac80211.h
@ -4360,7 +4364,8 @@ M: Stefano Brivio <stefano.brivio@polimi.it>
M: Mattias Nissler <mattias.nissler@gmx.de>
L: linux-wireless@vger.kernel.org
W: http://linuxwireless.org/en/developers/Documentation/mac80211/RateControl/PID
T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next.git
S: Maintained
F: net/mac80211/rc80211_pid*
@ -5711,6 +5716,9 @@ F: include/linux/remoteproc.h
RFKILL
M: Johannes Berg <johannes@sipsolutions.net>
L: linux-wireless@vger.kernel.org
W: http://wireless.kernel.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next.git
S: Maintained
F: Documentation/rfkill.txt
F: net/rfkill/


@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 5
SUBLEVEL = 0
EXTRAVERSION = -rc2
EXTRAVERSION = -rc3
NAME = Saber-toothed Squirrel
# *DOCUMENTATION*
@ -561,6 +561,8 @@ else
KBUILD_CFLAGS += -O2
endif
include $(srctree)/arch/$(SRCARCH)/Makefile
ifdef CONFIG_READABLE_ASM
# Disable optimizations that make assembler listings hard to read.
# reorder blocks reorders the control in the function
@ -571,8 +573,6 @@ KBUILD_CFLAGS += $(call cc-option,-fno-reorder-blocks,) \
$(call cc-option,-fno-partial-inlining)
endif
include $(srctree)/arch/$(SRCARCH)/Makefile
ifneq ($(CONFIG_FRAME_WARN),0)
KBUILD_CFLAGS += $(call cc-option,-Wframe-larger-than=${CONFIG_FRAME_WARN})
endif


@ -293,6 +293,7 @@ config ARCH_VERSATILE
select ICST
select GENERIC_CLOCKEVENTS
select ARCH_WANT_OPTIONAL_GPIOLIB
select NEED_MACH_IO_H if PCI
select PLAT_VERSATILE
select PLAT_VERSATILE_CLCD
select PLAT_VERSATILE_FPGA_IRQ


@ -11,7 +11,7 @@
/include/ "mmp2.dtsi"
/ {
model = "Marvell MMP2 Aspenite Development Board";
model = "Marvell MMP2 Brownstone Development Board";
compatible = "mrvl,mmp2-brownstone", "mrvl,mmp2";
chosen {
@ -19,7 +19,7 @@
};
memory {
reg = <0x00000000 0x04000000>;
reg = <0x00000000 0x08000000>;
};
soc {


@ -366,8 +366,8 @@ static int __dmabounce_sync_for_cpu(struct device *dev, dma_addr_t addr,
struct safe_buffer *buf;
unsigned long off;
dev_dbg(dev, "%s(dma=%#x,off=%#lx,sz=%zx,dir=%x)\n",
__func__, addr, off, sz, dir);
dev_dbg(dev, "%s(dma=%#x,sz=%zx,dir=%x)\n",
__func__, addr, sz, dir);
buf = find_safe_buffer_dev(dev, addr, __func__);
if (!buf)
@ -377,8 +377,8 @@ static int __dmabounce_sync_for_cpu(struct device *dev, dma_addr_t addr,
BUG_ON(buf->direction != dir);
dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n",
__func__, buf->ptr, virt_to_dma(dev, buf->ptr),
dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x off=%#lx) mapped to %p (dma=%#x)\n",
__func__, buf->ptr, virt_to_dma(dev, buf->ptr), off,
buf->safe, buf->safe_dma_addr);
DO_STATS(dev->archdata.dmabounce->bounce_count++);
@ -406,8 +406,8 @@ static int __dmabounce_sync_for_device(struct device *dev, dma_addr_t addr,
struct safe_buffer *buf;
unsigned long off;
dev_dbg(dev, "%s(dma=%#x,off=%#lx,sz=%zx,dir=%x)\n",
__func__, addr, off, sz, dir);
dev_dbg(dev, "%s(dma=%#x,sz=%zx,dir=%x)\n",
__func__, addr, sz, dir);
buf = find_safe_buffer_dev(dev, addr, __func__);
if (!buf)
@ -417,8 +417,8 @@ static int __dmabounce_sync_for_device(struct device *dev, dma_addr_t addr,
BUG_ON(buf->direction != dir);
dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n",
__func__, buf->ptr, virt_to_dma(dev, buf->ptr),
dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x off=%#lx) mapped to %p (dma=%#x)\n",
__func__, buf->ptr, virt_to_dma(dev, buf->ptr), off,
buf->safe, buf->safe_dma_addr);
DO_STATS(dev->archdata.dmabounce->bounce_count++);


@ -1,4 +1,8 @@
obj-y := clock.o highbank.o system.o
obj-y := clock.o highbank.o system.o smc.o
plus_sec := $(call as-instr,.arch_extension sec,+sec)
AFLAGS_smc.o :=-Wa,-march=armv7-a$(plus_sec)
obj-$(CONFIG_DEBUG_HIGHBANK_UART) += lluart.o
obj-$(CONFIG_SMP) += platsmp.o
obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o


@ -8,3 +8,4 @@ extern void highbank_lluart_map_io(void);
static inline void highbank_lluart_map_io(void) {}
#endif
extern void highbank_smc1(int fn, int arg);


@ -85,10 +85,24 @@ const static struct of_device_id irq_match[] = {
{}
};
#ifdef CONFIG_CACHE_L2X0
static void highbank_l2x0_disable(void)
{
/* Disable PL310 L2 Cache controller */
highbank_smc1(0x102, 0x0);
}
#endif
static void __init highbank_init_irq(void)
{
of_irq_init(irq_match);
#ifdef CONFIG_CACHE_L2X0
/* Enable PL310 L2 Cache controller */
highbank_smc1(0x102, 0x1);
l2x0_of_init(0, ~0UL);
outer_cache.disable = highbank_l2x0_disable;
#endif
}
static void __init highbank_timer_init(void)


@ -0,0 +1,27 @@
/*
* Copied from omap44xx-smc.S Copyright (C) 2010 Texas Instruments, Inc.
* Copyright 2012 Calxeda, Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/linkage.h>
/*
* This is a common routine to manage the secure monitor API
* used to modify the PL310 secure registers.
* 'r0' contains the value to be modified and 'r12' contains
* the monitor API number.
* Function signature : void highbank_smc1(u32 fn, u32 arg)
*/
ENTRY(highbank_smc1)
stmfd sp!, {r4-r11, lr}
mov r12, r0
mov r0, r1
dsb
smc #0
ldmfd sp!, {r4-r11, pc}
ENDPROC(highbank_smc1)


@ -477,6 +477,7 @@ config MACH_MX31_3DS
select IMX_HAVE_PLATFORM_IMX2_WDT
select IMX_HAVE_PLATFORM_IMX_I2C
select IMX_HAVE_PLATFORM_IMX_KEYPAD
select IMX_HAVE_PLATFORM_IMX_SSI
select IMX_HAVE_PLATFORM_IMX_UART
select IMX_HAVE_PLATFORM_IPU_CORE
select IMX_HAVE_PLATFORM_MXC_EHCI


@ -108,8 +108,7 @@ int __init mx1_clocks_init(unsigned long fref)
clk_register_clkdev(clk[clk32], NULL, "mxc_rtc.0");
clk_register_clkdev(clk[clko], "clko", NULL);
mxc_timer_init(NULL, MX1_IO_ADDRESS(MX1_TIM1_BASE_ADDR),
MX1_TIM1_INT);
mxc_timer_init(MX1_IO_ADDRESS(MX1_TIM1_BASE_ADDR), MX1_TIM1_INT);
return 0;
}


@ -180,7 +180,7 @@ int __init mx21_clocks_init(unsigned long lref, unsigned long href)
clk_register_clkdev(clk[sdhc1_ipg_gate], "sdhc1", NULL);
clk_register_clkdev(clk[sdhc2_ipg_gate], "sdhc2", NULL);
mxc_timer_init(NULL, MX21_IO_ADDRESS(MX21_GPT1_BASE_ADDR),
MX21_INT_GPT1);
mxc_timer_init(MX21_IO_ADDRESS(MX21_GPT1_BASE_ADDR), MX21_INT_GPT1);
return 0;
}


@ -243,6 +243,6 @@ int __init mx25_clocks_init(void)
clk_register_clkdev(clk[sdma_ahb], "ahb", "imx35-sdma");
clk_register_clkdev(clk[iim_ipg], "iim", NULL);
mxc_timer_init(NULL, MX25_IO_ADDRESS(MX25_GPT1_BASE_ADDR), 54);
mxc_timer_init(MX25_IO_ADDRESS(MX25_GPT1_BASE_ADDR), 54);
return 0;
}


@ -263,8 +263,7 @@ int __init mx27_clocks_init(unsigned long fref)
clk_register_clkdev(clk[ssi1_baud_gate], "bitrate" , "imx-ssi.0");
clk_register_clkdev(clk[ssi2_baud_gate], "bitrate" , "imx-ssi.1");
mxc_timer_init(NULL, MX27_IO_ADDRESS(MX27_GPT1_BASE_ADDR),
MX27_INT_GPT1);
mxc_timer_init(MX27_IO_ADDRESS(MX27_GPT1_BASE_ADDR), MX27_INT_GPT1);
clk_prepare_enable(clk[emi_ahb_gate]);


@ -175,8 +175,7 @@ int __init mx31_clocks_init(unsigned long fref)
mx31_revision();
clk_disable_unprepare(clk[iim_gate]);
mxc_timer_init(NULL, MX31_IO_ADDRESS(MX31_GPT1_BASE_ADDR),
MX31_INT_GPT);
mxc_timer_init(MX31_IO_ADDRESS(MX31_GPT1_BASE_ADDR), MX31_INT_GPT);
return 0;
}


@ -267,11 +267,9 @@ int __init mx35_clocks_init()
imx_print_silicon_rev("i.MX35", mx35_revision());
#ifdef CONFIG_MXC_USE_EPIT
epit_timer_init(&epit1_clk,
MX35_IO_ADDRESS(MX35_EPIT1_BASE_ADDR), MX35_INT_EPIT1);
epit_timer_init(MX35_IO_ADDRESS(MX35_EPIT1_BASE_ADDR), MX35_INT_EPIT1);
#else
mxc_timer_init(NULL, MX35_IO_ADDRESS(MX35_GPT1_BASE_ADDR),
MX35_INT_GPT);
mxc_timer_init(MX35_IO_ADDRESS(MX35_GPT1_BASE_ADDR), MX35_INT_GPT);
#endif
return 0;


@ -104,12 +104,12 @@ static void __init mx5_clocks_common_init(unsigned long rate_ckil,
periph_apm_sel, ARRAY_SIZE(periph_apm_sel));
clk[main_bus] = imx_clk_mux("main_bus", MXC_CCM_CBCDR, 25, 1,
main_bus_sel, ARRAY_SIZE(main_bus_sel));
clk[per_lp_apm] = imx_clk_mux("per_lp_apm", MXC_CCM_CBCDR, 1, 1,
clk[per_lp_apm] = imx_clk_mux("per_lp_apm", MXC_CCM_CBCMR, 1, 1,
per_lp_apm_sel, ARRAY_SIZE(per_lp_apm_sel));
clk[per_pred1] = imx_clk_divider("per_pred1", "per_lp_apm", MXC_CCM_CBCDR, 6, 2);
clk[per_pred2] = imx_clk_divider("per_pred2", "per_pred1", MXC_CCM_CBCDR, 3, 3);
clk[per_podf] = imx_clk_divider("per_podf", "per_pred2", MXC_CCM_CBCDR, 0, 3);
clk[per_root] = imx_clk_mux("per_root", MXC_CCM_CBCDR, 1, 0,
clk[per_root] = imx_clk_mux("per_root", MXC_CCM_CBCMR, 0, 1,
per_root_sel, ARRAY_SIZE(per_root_sel));
clk[ahb] = imx_clk_divider("ahb", "main_bus", MXC_CCM_CBCDR, 10, 3);
clk[ahb_max] = imx_clk_gate2("ahb_max", "ahb", MXC_CCM_CCGR0, 28);
@ -172,7 +172,7 @@ static void __init mx5_clocks_common_init(unsigned long rate_ckil,
clk[pwm1_hf_gate] = imx_clk_gate2("pwm1_hf_gate", "ipg", MXC_CCM_CCGR2, 12);
clk[pwm2_ipg_gate] = imx_clk_gate2("pwm2_ipg_gate", "ipg", MXC_CCM_CCGR2, 14);
clk[pwm2_hf_gate] = imx_clk_gate2("pwm2_hf_gate", "ipg", MXC_CCM_CCGR2, 16);
clk[gpt_gate] = imx_clk_gate2("gpt_gate", "ipg", MXC_CCM_CCGR2, 18);
clk[gpt_gate] = imx_clk_gate2("gpt_gate", "per_root", MXC_CCM_CCGR2, 18);
clk[fec_gate] = imx_clk_gate2("fec_gate", "ipg", MXC_CCM_CCGR2, 24);
clk[usboh3_gate] = imx_clk_gate2("usboh3_gate", "ipg", MXC_CCM_CCGR2, 26);
clk[usboh3_per_gate] = imx_clk_gate2("usboh3_per_gate", "usboh3_podf", MXC_CCM_CCGR2, 28);
@ -366,8 +366,7 @@ int __init mx51_clocks_init(unsigned long rate_ckil, unsigned long rate_osc,
clk_set_rate(clk[esdhc_b_podf], 166250000);
/* System timer */
mxc_timer_init(NULL, MX51_IO_ADDRESS(MX51_GPT1_BASE_ADDR),
MX51_INT_GPT);
mxc_timer_init(MX51_IO_ADDRESS(MX51_GPT1_BASE_ADDR), MX51_INT_GPT);
clk_prepare_enable(clk[iim_gate]);
imx_print_silicon_rev("i.MX51", mx51_revision());
@ -452,8 +451,7 @@ int __init mx53_clocks_init(unsigned long rate_ckil, unsigned long rate_osc,
clk_set_rate(clk[esdhc_b_podf], 200000000);
/* System timer */
mxc_timer_init(NULL, MX53_IO_ADDRESS(MX53_GPT1_BASE_ADDR),
MX53_INT_GPT);
mxc_timer_init(MX53_IO_ADDRESS(MX53_GPT1_BASE_ADDR), MX53_INT_GPT);
clk_prepare_enable(clk[iim_gate]);
imx_print_silicon_rev("i.MX53", mx53_revision());


@ -122,10 +122,6 @@ static const char *cko1_sels[] = { "pll3_usb_otg", "pll2_bus", "pll1_sys", "pll5
"dummy", "axi", "enfc", "ipu1_di0", "ipu1_di1", "ipu2_di0",
"ipu2_di1", "ahb", "ipg", "ipg_per", "ckil", "pll4_audio", };
static const char * const clks_init_on[] __initconst = {
"mmdc_ch0_axi", "mmdc_ch1_axi", "usboh3",
};
enum mx6q_clks {
dummy, ckil, ckih, osc, pll2_pfd0_352m, pll2_pfd1_594m, pll2_pfd2_396m,
pll3_pfd0_720m, pll3_pfd1_540m, pll3_pfd2_508m, pll3_pfd3_454m,
@ -161,11 +157,14 @@ enum mx6q_clks {
static struct clk *clk[clk_max];
static enum mx6q_clks const clks_init_on[] __initconst = {
mmdc_ch0_axi, mmdc_ch1_axi,
};
int __init mx6q_clocks_init(void)
{
struct device_node *np;
void __iomem *base;
struct clk *c;
int i, irq;
clk[dummy] = imx_clk_fixed("dummy", 0);
@ -424,21 +423,14 @@ int __init mx6q_clocks_init(void)
clk_register_clkdev(clk[ahb], "ahb", NULL);
clk_register_clkdev(clk[cko1], "cko1", NULL);
for (i = 0; i < ARRAY_SIZE(clks_init_on); i++) {
c = clk_get_sys(clks_init_on[i], NULL);
if (IS_ERR(c)) {
pr_err("%s: failed to get clk %s", __func__,
clks_init_on[i]);
return PTR_ERR(c);
}
clk_prepare_enable(c);
}
for (i = 0; i < ARRAY_SIZE(clks_init_on); i++)
clk_prepare_enable(clk[clks_init_on[i]]);
np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-gpt");
base = of_iomap(np, 0);
WARN_ON(!base);
irq = irq_of_parse_and_map(np, 0);
mxc_timer_init(NULL, base, irq);
mxc_timer_init(base, irq);
return 0;
}


@ -74,30 +74,15 @@ struct clk_pllv2 {
void __iomem *base;
};
static unsigned long clk_pllv2_recalc_rate(struct clk_hw *hw,
unsigned long parent_rate)
static unsigned long __clk_pllv2_recalc_rate(unsigned long parent_rate,
u32 dp_ctl, u32 dp_op, u32 dp_mfd, u32 dp_mfn)
{
long mfi, mfn, mfd, pdf, ref_clk, mfn_abs;
unsigned long dp_op, dp_mfd, dp_mfn, dp_ctl, pll_hfsm, dbl;
void __iomem *pllbase;
unsigned long dbl;
s64 temp;
struct clk_pllv2 *pll = to_clk_pllv2(hw);
pllbase = pll->base;
dp_ctl = __raw_readl(pllbase + MXC_PLL_DP_CTL);
pll_hfsm = dp_ctl & MXC_PLL_DP_CTL_HFSM;
dbl = dp_ctl & MXC_PLL_DP_CTL_DPDCK0_2_EN;
if (pll_hfsm == 0) {
dp_op = __raw_readl(pllbase + MXC_PLL_DP_OP);
dp_mfd = __raw_readl(pllbase + MXC_PLL_DP_MFD);
dp_mfn = __raw_readl(pllbase + MXC_PLL_DP_MFN);
} else {
dp_op = __raw_readl(pllbase + MXC_PLL_DP_HFS_OP);
dp_mfd = __raw_readl(pllbase + MXC_PLL_DP_HFS_MFD);
dp_mfn = __raw_readl(pllbase + MXC_PLL_DP_HFS_MFN);
}
pdf = dp_op & MXC_PLL_DP_OP_PDF_MASK;
mfi = (dp_op & MXC_PLL_DP_OP_MFI_MASK) >> MXC_PLL_DP_OP_MFI_OFFSET;
mfi = (mfi <= 5) ? 5 : mfi;
@ -123,18 +108,30 @@ static unsigned long clk_pllv2_recalc_rate(struct clk_hw *hw,
return temp;
}
static int clk_pllv2_set_rate(struct clk_hw *hw, unsigned long rate,
static unsigned long clk_pllv2_recalc_rate(struct clk_hw *hw,
unsigned long parent_rate)
{
struct clk_pllv2 *pll = to_clk_pllv2(hw);
u32 reg;
u32 dp_op, dp_mfd, dp_mfn, dp_ctl;
void __iomem *pllbase;
struct clk_pllv2 *pll = to_clk_pllv2(hw);
pllbase = pll->base;
dp_ctl = __raw_readl(pllbase + MXC_PLL_DP_CTL);
dp_op = __raw_readl(pllbase + MXC_PLL_DP_OP);
dp_mfd = __raw_readl(pllbase + MXC_PLL_DP_MFD);
dp_mfn = __raw_readl(pllbase + MXC_PLL_DP_MFN);
return __clk_pllv2_recalc_rate(parent_rate, dp_ctl, dp_op, dp_mfd, dp_mfn);
}
static int __clk_pllv2_set_rate(unsigned long rate, unsigned long parent_rate,
u32 *dp_op, u32 *dp_mfd, u32 *dp_mfn)
{
u32 reg;
long mfi, pdf, mfn, mfd = 999999;
s64 temp64;
unsigned long quad_parent_rate;
unsigned long pll_hfsm, dp_ctl;
pllbase = pll->base;
quad_parent_rate = 4 * parent_rate;
pdf = mfi = -1;
@ -144,25 +141,41 @@ static int clk_pllv2_set_rate(struct clk_hw *hw, unsigned long rate,
return -EINVAL;
pdf--;
temp64 = rate * (pdf+1) - quad_parent_rate * mfi;
do_div(temp64, quad_parent_rate/1000000);
temp64 = rate * (pdf + 1) - quad_parent_rate * mfi;
do_div(temp64, quad_parent_rate / 1000000);
mfn = (long)temp64;
reg = mfi << 4 | pdf;
*dp_op = reg;
*dp_mfd = mfd;
*dp_mfn = mfn;
return 0;
}
static int clk_pllv2_set_rate(struct clk_hw *hw, unsigned long rate,
unsigned long parent_rate)
{
struct clk_pllv2 *pll = to_clk_pllv2(hw);
void __iomem *pllbase;
u32 dp_ctl, dp_op, dp_mfd, dp_mfn;
int ret;
pllbase = pll->base;
ret = __clk_pllv2_set_rate(rate, parent_rate, &dp_op, &dp_mfd, &dp_mfn);
if (ret)
return ret;
dp_ctl = __raw_readl(pllbase + MXC_PLL_DP_CTL);
/* use dpdck0_2 */
__raw_writel(dp_ctl | 0x1000L, pllbase + MXC_PLL_DP_CTL);
pll_hfsm = dp_ctl & MXC_PLL_DP_CTL_HFSM;
if (pll_hfsm == 0) {
reg = mfi << 4 | pdf;
__raw_writel(reg, pllbase + MXC_PLL_DP_OP);
__raw_writel(mfd, pllbase + MXC_PLL_DP_MFD);
__raw_writel(mfn, pllbase + MXC_PLL_DP_MFN);
} else {
reg = mfi << 4 | pdf;
__raw_writel(reg, pllbase + MXC_PLL_DP_HFS_OP);
__raw_writel(mfd, pllbase + MXC_PLL_DP_HFS_MFD);
__raw_writel(mfn, pllbase + MXC_PLL_DP_HFS_MFN);
}
__raw_writel(dp_op, pllbase + MXC_PLL_DP_OP);
__raw_writel(dp_mfd, pllbase + MXC_PLL_DP_MFD);
__raw_writel(dp_mfn, pllbase + MXC_PLL_DP_MFN);
return 0;
}
@ -170,7 +183,11 @@ static int clk_pllv2_set_rate(struct clk_hw *hw, unsigned long rate,
static long clk_pllv2_round_rate(struct clk_hw *hw, unsigned long rate,
unsigned long *prate)
{
return rate;
u32 dp_op, dp_mfd, dp_mfn;
__clk_pllv2_set_rate(rate, *prate, &dp_op, &dp_mfd, &dp_mfn);
return __clk_pllv2_recalc_rate(*prate, MXC_PLL_DP_CTL_DPDCK0_2_EN,
dp_op, dp_mfd, dp_mfn);
}
static int clk_pllv2_prepare(struct clk_hw *hw)


@ -23,7 +23,7 @@
#define MX53_DPLL1_BASE MX53_IO_ADDRESS(MX53_PLL1_BASE_ADDR)
#define MX53_DPLL2_BASE MX53_IO_ADDRESS(MX53_PLL2_BASE_ADDR)
#define MX53_DPLL3_BASE MX53_IO_ADDRESS(MX53_PLL3_BASE_ADDR)
#define MX53_DPLL4_BASE MX53_IO_ADDRESS(MX53_PLL3_BASE_ADDR)
#define MX53_DPLL4_BASE MX53_IO_ADDRESS(MX53_PLL4_BASE_ADDR)
/* PLL Register Offsets */
#define MXC_PLL_DP_CTL 0x00


@ -12,6 +12,7 @@
#include <linux/errno.h>
#include <asm/cacheflush.h>
#include <asm/cp15.h>
#include <mach/common.h>
int platform_cpu_kill(unsigned int cpu)
@ -19,6 +20,44 @@ int platform_cpu_kill(unsigned int cpu)
return 1;
}
static inline void cpu_enter_lowpower(void)
{
unsigned int v;
flush_cache_all();
asm volatile(
"mcr p15, 0, %1, c7, c5, 0\n"
" mcr p15, 0, %1, c7, c10, 4\n"
/*
* Turn off coherency
*/
" mrc p15, 0, %0, c1, c0, 1\n"
" bic %0, %0, %3\n"
" mcr p15, 0, %0, c1, c0, 1\n"
" mrc p15, 0, %0, c1, c0, 0\n"
" bic %0, %0, %2\n"
" mcr p15, 0, %0, c1, c0, 0\n"
: "=&r" (v)
: "r" (0), "Ir" (CR_C), "Ir" (0x40)
: "cc");
}
static inline void cpu_leave_lowpower(void)
{
unsigned int v;
asm volatile(
"mrc p15, 0, %0, c1, c0, 0\n"
" orr %0, %0, %1\n"
" mcr p15, 0, %0, c1, c0, 0\n"
" mrc p15, 0, %0, c1, c0, 1\n"
" orr %0, %0, %2\n"
" mcr p15, 0, %0, c1, c0, 1\n"
: "=&r" (v)
: "Ir" (CR_C), "Ir" (0x40)
: "cc");
}
/*
* platform-specific code to shutdown a CPU
*
@ -26,9 +65,10 @@ int platform_cpu_kill(unsigned int cpu)
*/
void platform_cpu_die(unsigned int cpu)
{
flush_cache_all();
cpu_enter_lowpower();
imx_enable_cpu(cpu, false);
cpu_do_idle();
cpu_leave_lowpower();
/* We should never return from idle */
panic("cpu %d unexpectedly exit from shutdown\n", cpu);


@ -70,7 +70,6 @@ static struct i2c_board_info eukrea_cpuimx35_i2c_devices[] = {
I2C_BOARD_INFO("pcf8563", 0x51),
}, {
I2C_BOARD_INFO("tsc2007", 0x48),
.type = "tsc2007",
.platform_data = &tsc2007_info,
.irq = IMX_GPIO_TO_IRQ(TSC2007_IRQGPIO),
},


@ -142,7 +142,6 @@ static struct i2c_board_info eukrea_cpuimx51sd_i2c_devices[] = {
I2C_BOARD_INFO("pcf8563", 0x51),
}, {
I2C_BOARD_INFO("tsc2007", 0x49),
.type = "tsc2007",
.platform_data = &tsc2007_info,
},
};


@ -116,6 +116,8 @@ static const int visstrim_m10_pins[] __initconst = {
PB23_PF_USB_PWR,
PB24_PF_USB_OC,
/* CSI */
TVP5150_RSTN | GPIO_GPIO | GPIO_OUT,
TVP5150_PWDN | GPIO_GPIO | GPIO_OUT,
PB10_PF_CSI_D0,
PB11_PF_CSI_D1,
PB12_PF_CSI_D2,
@ -147,6 +149,24 @@ static struct gpio visstrim_m10_version_gpios[] = {
{ MOTHERBOARD_BIT2, GPIOF_IN, "mother-version-2" },
};
static const struct gpio visstrim_m10_gpios[] __initconst = {
{
.gpio = TVP5150_RSTN,
.flags = GPIOF_DIR_OUT | GPIOF_INIT_HIGH,
.label = "tvp5150_rstn",
},
{
.gpio = TVP5150_PWDN,
.flags = GPIOF_DIR_OUT | GPIOF_INIT_LOW,
.label = "tvp5150_pwdn",
},
{
.gpio = OTG_PHY_CS_GPIO,
.flags = GPIOF_DIR_OUT | GPIOF_INIT_LOW,
.label = "usbotg_cs",
},
};
/* Camera */
static int visstrim_camera_power(struct device *dev, int on)
{
@ -190,13 +210,6 @@ static void __init visstrim_camera_init(void)
struct platform_device *pdev;
int dma;
/* Initialize tvp5150 gpios */
mxc_gpio_mode(TVP5150_RSTN | GPIO_GPIO | GPIO_OUT);
mxc_gpio_mode(TVP5150_PWDN | GPIO_GPIO | GPIO_OUT);
gpio_set_value(TVP5150_RSTN, 1);
gpio_set_value(TVP5150_PWDN, 0);
ndelay(1);
gpio_set_value(TVP5150_PWDN, 1);
ndelay(1);
gpio_set_value(TVP5150_RSTN, 0);
@ -377,10 +390,6 @@ static struct i2c_board_info visstrim_m10_i2c_devices[] = {
/* USB OTG */
static int otg_phy_init(struct platform_device *pdev)
{
gpio_set_value(OTG_PHY_CS_GPIO, 0);
mdelay(10);
return mx27_initialize_usb_hw(pdev->id, MXC_EHCI_POWER_PINS_ENABLED);
}
@ -435,6 +444,11 @@ static void __init visstrim_m10_board_init(void)
if (ret)
pr_err("Failed to setup pins (%d)\n", ret);
ret = gpio_request_array(visstrim_m10_gpios,
ARRAY_SIZE(visstrim_m10_gpios));
if (ret)
pr_err("Failed to request gpios (%d)\n", ret);
imx27_add_imx_ssi(0, &visstrim_m10_ssi_pdata);
imx27_add_imx_uart0(&uart_pdata);


@ -32,7 +32,7 @@
* Memory-mapped I/O on MX21ADS base board
*/
#define MX21ADS_MMIO_BASE_ADDR 0xf5000000
#define MX21ADS_MMIO_SIZE SZ_16M
#define MX21ADS_MMIO_SIZE 0xc00000
#define MX21ADS_REG_ADDR(offset) (void __force __iomem *) \
(MX21ADS_MMIO_BASE_ADDR + (offset))


@ -86,6 +86,7 @@ static void __iomem *imx3_ioremap_caller(unsigned long phys_addr, size_t size,
void __init imx3_init_l2x0(void)
{
#ifdef CONFIG_CACHE_L2X0
void __iomem *l2x0_base;
void __iomem *clkctl_base;
@ -115,6 +116,7 @@ void __init imx3_init_l2x0(void)
}
l2x0_init(l2x0_base, 0x00030024, 0x00000000);
#endif
}
#ifdef CONFIG_SOC_IMX31
@ -179,6 +181,8 @@ void __init imx31_soc_init(void)
mxc_register_gpio("imx31-gpio", 1, MX31_GPIO2_BASE_ADDR, SZ_16K, MX31_INT_GPIO2, 0);
mxc_register_gpio("imx31-gpio", 2, MX31_GPIO3_BASE_ADDR, SZ_16K, MX31_INT_GPIO3, 0);
pinctrl_provide_dummies();
if (to_version == 1) {
strncpy(imx31_sdma_pdata.fw_name, "sdma-imx31-to1.bin",
strlen(imx31_sdma_pdata.fw_name));


@ -202,6 +202,8 @@ void __init imx51_soc_init(void)
mxc_register_gpio("imx31-gpio", 2, MX51_GPIO3_BASE_ADDR, SZ_16K, MX51_INT_GPIO3_LOW, MX51_INT_GPIO3_HIGH);
mxc_register_gpio("imx31-gpio", 3, MX51_GPIO4_BASE_ADDR, SZ_16K, MX51_INT_GPIO4_LOW, MX51_INT_GPIO4_HIGH);
pinctrl_provide_dummies();
/* i.mx51 has the i.mx35 type sdma */
imx_add_imx_sdma("imx35-sdma", MX51_SDMA_BASE_ADDR, MX51_INT_SDMA, &imx51_sdma_pdata);


@ -193,9 +193,11 @@ static struct clk __init *kirkwood_register_gate_fn(const char *name,
bit_idx, 0, &gating_lock, fn);
}
static struct clk *ge0, *ge1;
void __init kirkwood_clk_init(void)
{
struct clk *runit, *ge0, *ge1, *sata0, *sata1, *usb0, *sdio;
struct clk *runit, *sata0, *sata1, *usb0, *sdio;
struct clk *crypto, *xor0, *xor1, *pex0, *pex1, *audio;
tclk = clk_register_fixed_rate(NULL, "tclk", NULL,
@ -257,6 +259,9 @@ void __init kirkwood_ge00_init(struct mv643xx_eth_platform_data *eth_data)
orion_ge00_init(eth_data,
GE00_PHYS_BASE, IRQ_KIRKWOOD_GE00_SUM,
IRQ_KIRKWOOD_GE00_ERR);
/* The interface forgets the MAC address assigned by u-boot if
the clock is turned off, so claim the clk now. */
clk_prepare_enable(ge0);
}
@ -268,6 +273,7 @@ void __init kirkwood_ge01_init(struct mv643xx_eth_platform_data *eth_data)
orion_ge01_init(eth_data,
GE01_PHYS_BASE, IRQ_KIRKWOOD_GE01_SUM,
IRQ_KIRKWOOD_GE01_ERR);
clk_prepare_enable(ge1);
}


@ -241,6 +241,7 @@ void __init mmp2_init_icu(void)
icu_data[1].clr_mfp_irq_base = IRQ_MMP2_PMIC_BASE;
icu_data[1].clr_mfp_hwirq = IRQ_MMP2_PMIC - IRQ_MMP2_PMIC_BASE;
icu_data[1].nr_irqs = 2;
icu_data[1].cascade_irq = 4;
icu_data[1].virq_base = IRQ_MMP2_PMIC_BASE;
icu_data[1].domain = irq_domain_add_legacy(NULL, icu_data[1].nr_irqs,
icu_data[1].virq_base, 0,
@ -249,6 +250,7 @@ void __init mmp2_init_icu(void)
icu_data[2].reg_status = mmp_icu_base + 0x154;
icu_data[2].reg_mask = mmp_icu_base + 0x16c;
icu_data[2].nr_irqs = 2;
icu_data[2].cascade_irq = 5;
icu_data[2].virq_base = IRQ_MMP2_RTC_BASE;
icu_data[2].domain = irq_domain_add_legacy(NULL, icu_data[2].nr_irqs,
icu_data[2].virq_base, 0,
@ -257,6 +259,7 @@ void __init mmp2_init_icu(void)
icu_data[3].reg_status = mmp_icu_base + 0x180;
icu_data[3].reg_mask = mmp_icu_base + 0x17c;
icu_data[3].nr_irqs = 3;
icu_data[3].cascade_irq = 9;
icu_data[3].virq_base = IRQ_MMP2_KEYPAD_BASE;
icu_data[3].domain = irq_domain_add_legacy(NULL, icu_data[3].nr_irqs,
icu_data[3].virq_base, 0,
@ -265,6 +268,7 @@ void __init mmp2_init_icu(void)
icu_data[4].reg_status = mmp_icu_base + 0x158;
icu_data[4].reg_mask = mmp_icu_base + 0x170;
icu_data[4].nr_irqs = 5;
icu_data[4].cascade_irq = 17;
icu_data[4].virq_base = IRQ_MMP2_TWSI_BASE;
icu_data[4].domain = irq_domain_add_legacy(NULL, icu_data[4].nr_irqs,
icu_data[4].virq_base, 0,
@ -273,6 +277,7 @@ void __init mmp2_init_icu(void)
icu_data[5].reg_status = mmp_icu_base + 0x15c;
icu_data[5].reg_mask = mmp_icu_base + 0x174;
icu_data[5].nr_irqs = 15;
icu_data[5].cascade_irq = 35;
icu_data[5].virq_base = IRQ_MMP2_MISC_BASE;
icu_data[5].domain = irq_domain_add_legacy(NULL, icu_data[5].nr_irqs,
icu_data[5].virq_base, 0,
@ -281,6 +286,7 @@ void __init mmp2_init_icu(void)
icu_data[6].reg_status = mmp_icu_base + 0x160;
icu_data[6].reg_mask = mmp_icu_base + 0x178;
icu_data[6].nr_irqs = 2;
icu_data[6].cascade_irq = 51;
icu_data[6].virq_base = IRQ_MMP2_MIPI_HSI1_BASE;
icu_data[6].domain = irq_domain_add_legacy(NULL, icu_data[6].nr_irqs,
icu_data[6].virq_base, 0,
@ -289,6 +295,7 @@ void __init mmp2_init_icu(void)
icu_data[7].reg_status = mmp_icu_base + 0x188;
icu_data[7].reg_mask = mmp_icu_base + 0x184;
icu_data[7].nr_irqs = 2;
icu_data[7].cascade_irq = 55;
icu_data[7].virq_base = IRQ_MMP2_MIPI_HSI0_BASE;
icu_data[7].domain = irq_domain_add_legacy(NULL, icu_data[7].nr_irqs,
icu_data[7].virq_base, 0,


@ -144,7 +144,6 @@ static struct lis3lv02d_platform_data rx51_lis3lv02d_data = {
.release_resources = lis302_release,
.st_min_limits = {-32, 3, 3},
.st_max_limits = {-3, 32, 32},
.irq2 = OMAP_GPIO_IRQ(LIS302_IRQ2_GPIO),
};
#endif
@ -1030,7 +1029,6 @@ static struct i2c_board_info __initdata rx51_peripherals_i2c_board_info_3[] = {
{
I2C_BOARD_INFO("lis3lv02d", 0x1d),
.platform_data = &rx51_lis3lv02d_data,
.irq = OMAP_GPIO_IRQ(LIS302_IRQ1_GPIO),
},
#endif
};
@ -1056,6 +1054,10 @@ static int __init rx51_i2c_init(void)
omap_pmic_init(1, 2200, "twl5030", INT_34XX_SYS_NIRQ, &rx51_twldata);
omap_register_i2c_bus(2, 100, rx51_peripherals_i2c_board_info_2,
ARRAY_SIZE(rx51_peripherals_i2c_board_info_2));
#if defined(CONFIG_SENSORS_LIS3_I2C) || defined(CONFIG_SENSORS_LIS3_I2C_MODULE)
rx51_lis3lv02d_data.irq2 = gpio_to_irq(LIS302_IRQ2_GPIO);
rx51_peripherals_i2c_board_info_3[0].irq = gpio_to_irq(LIS302_IRQ1_GPIO);
#endif
omap_register_i2c_bus(3, 400, rx51_peripherals_i2c_board_info_3,
ARRAY_SIZE(rx51_peripherals_i2c_board_info_3));
return 0;


@ -3514,7 +3514,7 @@ int __init omap3xxx_clk_init(void)
struct omap_clk *c;
u32 cpu_clkflg = 0;
if (cpu_is_omap3517()) {
if (soc_is_am35xx()) {
cpu_mask = RATE_IN_34XX;
cpu_clkflg = CK_AM35XX;
} else if (cpu_is_omap3630()) {


@ -271,9 +271,9 @@ static struct platform_device *create_simple_dss_pdev(const char *pdev_name,
goto err;
}
r = omap_device_register(pdev);
r = platform_device_add(pdev);
if (r) {
pr_err("Could not register omap_device for %s\n", pdev_name);
pr_err("Could not register platform_device for %s\n", pdev_name);
goto err;
}


@ -20,6 +20,9 @@
#include <linux/module.h>
#include <linux/platform_device.h>
#include <asm/memblock.h>
#include "cm2xxx_3xxx.h"
#include "prm2xxx_3xxx.h"
#ifdef CONFIG_BRIDGE_DVFS


@ -246,6 +246,17 @@ void __init omap3xxx_check_features(void)
omap_features |= OMAP3_HAS_SDRC;
/*
* am35x fixups:
* - The am35x Chip ID register has bits 12, 7:5, and 3:2 marked as
* reserved and therefore return 0 when read. Unfortunately,
* OMAP3_CHECK_FEATURE() will interpret some of those zeroes to
* mean that a feature is present even though it isn't, so clear
* the incorrectly set feature bits.
*/
if (soc_is_am35xx())
omap_features &= ~(OMAP3_HAS_IVA | OMAP3_HAS_ISP);
/*
* TODO: Get additional info (where applicable)
* e.g. Size of L2 cache.


@ -149,6 +149,7 @@ omap_alloc_gc(void __iomem *base, unsigned int irq_start, unsigned int num)
ct->chip.irq_ack = omap_mask_ack_irq;
ct->chip.irq_mask = irq_gc_mask_disable_reg;
ct->chip.irq_unmask = irq_gc_unmask_enable_reg;
ct->chip.flags |= IRQCHIP_SKIP_SET_WAKE;
ct->regs.enable = INTC_MIR_CLEAR0;
ct->regs.disable = INTC_MIR_SET0;


@ -217,8 +217,7 @@ static int __init _omap_mux_get_by_name(struct omap_mux_partition *partition,
return -ENODEV;
}
static int __init
omap_mux_get_by_name(const char *muxname,
int __init omap_mux_get_by_name(const char *muxname,
struct omap_mux_partition **found_partition,
struct omap_mux **found_mux)
{


@ -59,6 +59,7 @@
#define OMAP_PIN_OFF_WAKEUPENABLE OMAP_WAKEUP_EN
#define OMAP_MODE_GPIO(x) (((x) & OMAP_MUX_MODE7) == OMAP_MUX_MODE4)
#define OMAP_MODE_UART(x) (((x) & OMAP_MUX_MODE7) == OMAP_MUX_MODE0)
/* Flags for omapX_mux_init */
#define OMAP_PACKAGE_MASK 0xffff
@ -225,8 +226,18 @@ omap_hwmod_mux_init(struct omap_device_pad *bpads, int nr_pads);
*/
void omap_hwmod_mux(struct omap_hwmod_mux_info *hmux, u8 state);
int omap_mux_get_by_name(const char *muxname,
struct omap_mux_partition **found_partition,
struct omap_mux **found_mux);
#else
static inline int omap_mux_get_by_name(const char *muxname,
struct omap_mux_partition **found_partition,
struct omap_mux **found_mux)
{
return 0;
}
static inline int omap_mux_init_gpio(int gpio, int val)
{
return 0;


@ -155,10 +155,11 @@ static irqreturn_t omap3_l3_block_irq(struct omap3_l3 *l3,
u8 multi = error & L3_ERROR_LOG_MULTI;
u32 address = omap3_l3_decode_addr(error_addr);
WARN(true, "%s seen by %s %s at address %x\n",
pr_err("%s seen by %s %s at address %x\n",
omap3_l3_code_string(code),
omap3_l3_initiator_string(initid),
multi ? "Multiple Errors" : "", address);
WARN_ON(1);
return IRQ_HANDLED;
}


@ -724,6 +724,7 @@ int __init omap3_pm_init(void)
ret = request_irq(omap_prcm_event_to_irq("io"),
_prcm_int_handle_io, IRQF_SHARED | IRQF_NO_SUSPEND, "pm_io",
omap3_pm_init);
enable_irq(omap_prcm_event_to_irq("io"));
if (ret) {
pr_err("pm: Failed to request pm_io irq\n");


@ -15,6 +15,7 @@
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/irq.h>
#include "common.h"
#include <plat/cpu.h>
@ -303,8 +304,15 @@ void omap3xxx_prm_restore_irqen(u32 *saved_mask)
static int __init omap3xxx_prcm_init(void)
{
if (cpu_is_omap34xx())
return omap_prcm_register_chain_handler(&omap3_prcm_irq_setup);
return 0;
int ret = 0;
if (cpu_is_omap34xx()) {
ret = omap_prcm_register_chain_handler(&omap3_prcm_irq_setup);
if (!ret)
irq_set_status_flags(omap_prcm_event_to_irq("io"),
IRQ_NOAUTOEN);
}
return ret;
}
subsys_initcall(omap3xxx_prcm_init);


@ -57,6 +57,7 @@ struct omap_uart_state {
struct list_head node;
struct omap_hwmod *oh;
struct omap_device_pad default_omap_uart_pads[2];
};
static LIST_HEAD(uart_list);
@ -126,11 +127,70 @@ static void omap_uart_set_smartidle(struct platform_device *pdev) {}
#endif /* CONFIG_PM */
#ifdef CONFIG_OMAP_MUX
static void omap_serial_fill_default_pads(struct omap_board_data *bdata)
#define OMAP_UART_DEFAULT_PAD_NAME_LEN 28
static char rx_pad_name[OMAP_UART_DEFAULT_PAD_NAME_LEN],
tx_pad_name[OMAP_UART_DEFAULT_PAD_NAME_LEN] __initdata;
static void __init
omap_serial_fill_uart_tx_rx_pads(struct omap_board_data *bdata,
struct omap_uart_state *uart)
{
uart->default_omap_uart_pads[0].name = rx_pad_name;
uart->default_omap_uart_pads[0].flags = OMAP_DEVICE_PAD_REMUX |
OMAP_DEVICE_PAD_WAKEUP;
uart->default_omap_uart_pads[0].enable = OMAP_PIN_INPUT |
OMAP_MUX_MODE0;
uart->default_omap_uart_pads[0].idle = OMAP_PIN_INPUT | OMAP_MUX_MODE0;
uart->default_omap_uart_pads[1].name = tx_pad_name;
uart->default_omap_uart_pads[1].enable = OMAP_PIN_OUTPUT |
OMAP_MUX_MODE0;
bdata->pads = uart->default_omap_uart_pads;
bdata->pads_cnt = ARRAY_SIZE(uart->default_omap_uart_pads);
}
static void __init omap_serial_check_wakeup(struct omap_board_data *bdata,
struct omap_uart_state *uart)
{
struct omap_mux_partition *tx_partition = NULL, *rx_partition = NULL;
struct omap_mux *rx_mux = NULL, *tx_mux = NULL;
char *rx_fmt, *tx_fmt;
int uart_nr = bdata->id + 1;
if (bdata->id != 2) {
rx_fmt = "uart%d_rx.uart%d_rx";
tx_fmt = "uart%d_tx.uart%d_tx";
} else {
rx_fmt = "uart%d_rx_irrx.uart%d_rx_irrx";
tx_fmt = "uart%d_tx_irtx.uart%d_tx_irtx";
}
snprintf(rx_pad_name, OMAP_UART_DEFAULT_PAD_NAME_LEN, rx_fmt,
uart_nr, uart_nr);
snprintf(tx_pad_name, OMAP_UART_DEFAULT_PAD_NAME_LEN, tx_fmt,
uart_nr, uart_nr);
if (omap_mux_get_by_name(rx_pad_name, &rx_partition, &rx_mux) >= 0 &&
omap_mux_get_by_name
(tx_pad_name, &tx_partition, &tx_mux) >= 0) {
u16 tx_mode, rx_mode;
tx_mode = omap_mux_read(tx_partition, tx_mux->reg_offset);
rx_mode = omap_mux_read(rx_partition, rx_mux->reg_offset);
/*
* Check if uart is used in default tx/rx mode i.e. in mux mode0
* if yes then configure rx pin for wake up capability
*/
if (OMAP_MODE_UART(rx_mode) && OMAP_MODE_UART(tx_mode))
omap_serial_fill_uart_tx_rx_pads(bdata, uart);
}
}
#else
static void omap_serial_fill_default_pads(struct omap_board_data *bdata) {}
static void __init omap_serial_check_wakeup(struct omap_board_data *bdata,
struct omap_uart_state *uart)
{
}
#endif
static char *cmdline_find_option(char *str)
@ -287,8 +347,7 @@ void __init omap_serial_board_init(struct omap_uart_port_info *info)
bdata.pads = NULL;
bdata.pads_cnt = 0;
if (cpu_is_omap44xx() || cpu_is_omap34xx())
omap_serial_fill_default_pads(&bdata);
omap_serial_check_wakeup(&bdata, uart);
if (!info)
omap_serial_init_port(&bdata, NULL);


@ -169,26 +169,13 @@ static struct map_desc versatile_io_desc[] __initdata = {
.pfn = __phys_to_pfn(VERSATILE_PCI_CFG_BASE),
.length = VERSATILE_PCI_CFG_BASE_SIZE,
.type = MT_DEVICE
},
#if 0
{
.virtual = VERSATILE_PCI_VIRT_MEM_BASE0,
}, {
.virtual = (unsigned long)VERSATILE_PCI_VIRT_MEM_BASE0,
.pfn = __phys_to_pfn(VERSATILE_PCI_MEM_BASE0),
.length = SZ_16M,
.type = MT_DEVICE
}, {
.virtual = VERSATILE_PCI_VIRT_MEM_BASE1,
.pfn = __phys_to_pfn(VERSATILE_PCI_MEM_BASE1),
.length = SZ_16M,
.type = MT_DEVICE
}, {
.virtual = VERSATILE_PCI_VIRT_MEM_BASE2,
.pfn = __phys_to_pfn(VERSATILE_PCI_MEM_BASE2),
.length = SZ_16M,
.length = IO_SPACE_LIMIT,
.type = MT_DEVICE
},
#endif
#endif
};
void __init versatile_map_io(void)


@ -29,8 +29,9 @@
*/
#define VERSATILE_PCI_VIRT_BASE (void __iomem *)0xe8000000ul
#define VERSATILE_PCI_CFG_VIRT_BASE (void __iomem *)0xe9000000ul
#define VERSATILE_PCI_VIRT_MEM_BASE0 (void __iomem *)PCIO_BASE
/* macro to get at IO space when running virtually */
/* macro to get at MMIO space when running virtually */
#define IO_ADDRESS(x) (((x) & 0x0fffffff) + (((x) >> 4) & 0x0f000000) + 0xf0000000)
#define __io_address(n) ((void __iomem __force *)IO_ADDRESS(n))


@ -0,0 +1,27 @@
/*
* arch/arm/mach-versatile/include/mach/io.h
*
* Copyright (C) 2003 ARM Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#ifndef __ASM_ARM_ARCH_IO_H
#define __ASM_ARM_ARCH_IO_H
#define PCIO_BASE 0xeb000000ul
#define __io(a) ((a) + PCIO_BASE)
#endif


@ -169,11 +169,18 @@ static struct pci_ops pci_versatile_ops = {
.write = versatile_write_config,
};
static struct resource io_port = {
.name = "PCI",
.start = 0,
.end = IO_SPACE_LIMIT,
.flags = IORESOURCE_IO,
};
static struct resource io_mem = {
.name = "PCI I/O space",
.start = VERSATILE_PCI_MEM_BASE0,
.end = VERSATILE_PCI_MEM_BASE0+VERSATILE_PCI_MEM_BASE0_SIZE-1,
.flags = IORESOURCE_IO,
.flags = IORESOURCE_MEM,
};
static struct resource non_mem = {
@ -200,6 +207,12 @@ static int __init pci_versatile_setup_resources(struct pci_sys_data *sys)
"memory region (%d)\n", ret);
goto out;
}
ret = request_resource(&ioport_resource, &io_port);
if (ret) {
printk(KERN_ERR "PCI: unable to allocate I/O "
"port region (%d)\n", ret);
goto out;
}
ret = request_resource(&iomem_resource, &non_mem);
if (ret) {
printk(KERN_ERR "PCI: unable to allocate non-prefetchable "
@ -218,7 +231,7 @@ static int __init pci_versatile_setup_resources(struct pci_sys_data *sys)
* the mem resource for this bus
* the prefetch mem resource for this bus
*/
pci_add_resource_offset(&sys->resources, &io_mem, sys->io_offset);
pci_add_resource_offset(&sys->resources, &io_port, sys->io_offset);
pci_add_resource_offset(&sys->resources, &non_mem, sys->mem_offset);
pci_add_resource_offset(&sys->resources, &pre_mem, sys->mem_offset);
@ -249,6 +262,7 @@ int __init pci_versatile_setup(int nr, struct pci_sys_data *sys)
if (nr == 0) {
sys->mem_offset = 0;
sys->io_offset = 0;
ret = pci_versatile_setup_resources(sys);
if (ret < 0) {
printk("pci_versatile_setup: resources... oops?\n");


@ -228,7 +228,7 @@ static pte_t **consistent_pte;
#define DEFAULT_CONSISTENT_DMA_SIZE SZ_2M
unsigned long consistent_base = CONSISTENT_END - DEFAULT_CONSISTENT_DMA_SIZE;
static unsigned long consistent_base = CONSISTENT_END - DEFAULT_CONSISTENT_DMA_SIZE;
void __init init_consistent_dma_size(unsigned long size)
{
@ -321,7 +321,7 @@ static struct arm_vmregion_head coherent_head = {
.vm_list = LIST_HEAD_INIT(coherent_head.vm_list),
};
size_t coherent_pool_size = DEFAULT_CONSISTENT_DMA_SIZE / 8;
static size_t coherent_pool_size = DEFAULT_CONSISTENT_DMA_SIZE / 8;
static int __init early_coherent_pool(char *p)
{


@ -212,7 +212,7 @@ EXPORT_SYMBOL(arm_dma_zone_size);
* allocations. This must be the smallest DMA mask in the system,
* so a successful GFP_DMA allocation will always satisfy this.
*/
u32 arm_dma_limit;
phys_addr_t arm_dma_limit;
static void __init arm_adjust_dma_zone(unsigned long *size, unsigned long *hole,
unsigned long dma_size)


@ -62,7 +62,7 @@ extern void __flush_dcache_page(struct address_space *mapping, struct page *page
#endif
#ifdef CONFIG_ZONE_DMA
extern u32 arm_dma_limit;
extern phys_addr_t arm_dma_limit;
#else
#define arm_dma_limit ((u32)~0)
#endif


@ -50,6 +50,7 @@
#include <linux/irq.h>
#include <linux/clockchips.h>
#include <linux/clk.h>
#include <linux/err.h>
#include <mach/hardware.h>
#include <asm/mach/time.h>
@ -201,8 +202,16 @@ static int __init epit_clockevent_init(struct clk *timer_clk)
return 0;
}
void __init epit_timer_init(struct clk *timer_clk, void __iomem *base, int irq)
void __init epit_timer_init(void __iomem *base, int irq)
{
struct clk *timer_clk;
timer_clk = clk_get_sys("imx-epit.0", NULL);
if (IS_ERR(timer_clk)) {
pr_err("i.MX epit: unable to get clk\n");
return;
}
clk_prepare_enable(timer_clk);
timer_base = base;


@ -54,8 +54,8 @@ extern void imx50_soc_init(void);
extern void imx51_soc_init(void);
extern void imx53_soc_init(void);
extern void imx51_init_late(void);
extern void epit_timer_init(struct clk *timer_clk, void __iomem *base, int irq);
extern void mxc_timer_init(struct clk *timer_clk, void __iomem *, int);
extern void epit_timer_init(void __iomem *base, int irq);
extern void mxc_timer_init(void __iomem *, int);
extern int mx1_clocks_init(unsigned long fref);
extern int mx21_clocks_init(unsigned long lref, unsigned long fref);
extern int mx25_clocks_init(void);


@ -58,6 +58,7 @@
/* MX31, MX35, MX25, MX5 */
#define V2_TCTL_WAITEN (1 << 3) /* Wait enable mode */
#define V2_TCTL_CLK_IPG (1 << 6)
#define V2_TCTL_CLK_PER (2 << 6)
#define V2_TCTL_FRR (1 << 9)
#define V2_IR 0x0c
#define V2_TSTAT 0x08
@ -280,23 +281,22 @@ static int __init mxc_clockevent_init(struct clk *timer_clk)
return 0;
}
void __init mxc_timer_init(struct clk *timer_clk, void __iomem *base, int irq)
void __init mxc_timer_init(void __iomem *base, int irq)
{
uint32_t tctl_val;
struct clk *timer_clk;
struct clk *timer_ipg_clk;
if (!timer_clk) {
timer_clk = clk_get_sys("imx-gpt.0", "per");
if (IS_ERR(timer_clk)) {
pr_err("i.MX timer: unable to get clk\n");
return;
}
timer_ipg_clk = clk_get_sys("imx-gpt.0", "ipg");
if (!IS_ERR(timer_ipg_clk))
clk_prepare_enable(timer_ipg_clk);
timer_clk = clk_get_sys("imx-gpt.0", "per");
if (IS_ERR(timer_clk)) {
pr_err("i.MX timer: unable to get clk\n");
return;
}
timer_ipg_clk = clk_get_sys("imx-gpt.0", "ipg");
if (!IS_ERR(timer_ipg_clk))
clk_prepare_enable(timer_ipg_clk);
clk_prepare_enable(timer_clk);
timer_base = base;
@ -309,7 +309,7 @@ void __init mxc_timer_init(struct clk *timer_clk, void __iomem *base, int irq)
__raw_writel(0, timer_base + MXC_TPRER); /* see datasheet note */
if (timer_is_v2())
tctl_val = V2_TCTL_CLK_IPG | V2_TCTL_FRR | V2_TCTL_WAITEN | MXC_TCTL_TEN;
tctl_val = V2_TCTL_CLK_PER | V2_TCTL_FRR | V2_TCTL_WAITEN | MXC_TCTL_TEN;
else
tctl_val = MX1_2_TCTL_FRR | MX1_2_TCTL_CLK_PCLK1 | MXC_TCTL_TEN;


@ -252,8 +252,6 @@ IS_AM_SUBCLASS(335x, 0x335)
* cpu_is_omap2423(): True for OMAP2423
* cpu_is_omap2430(): True for OMAP2430
* cpu_is_omap3430(): True for OMAP3430
* cpu_is_omap3505(): True for OMAP3505
* cpu_is_omap3517(): True for OMAP3517
*/
#define GET_OMAP_TYPE ((omap_rev() >> 16) & 0xffff)
@ -277,8 +275,6 @@ IS_OMAP_TYPE(2422, 0x2422)
IS_OMAP_TYPE(2423, 0x2423)
IS_OMAP_TYPE(2430, 0x2430)
IS_OMAP_TYPE(3430, 0x3430)
IS_OMAP_TYPE(3505, 0x3517)
IS_OMAP_TYPE(3517, 0x3517)
#define cpu_is_omap310() 0
#define cpu_is_omap730() 0
@ -293,12 +289,6 @@ IS_OMAP_TYPE(3517, 0x3517)
#define cpu_is_omap2422() 0
#define cpu_is_omap2423() 0
#define cpu_is_omap2430() 0
#define cpu_is_omap3503() 0
#define cpu_is_omap3515() 0
#define cpu_is_omap3525() 0
#define cpu_is_omap3530() 0
#define cpu_is_omap3505() 0
#define cpu_is_omap3517() 0
#define cpu_is_omap3430() 0
#define cpu_is_omap3630() 0
@ -350,12 +340,6 @@ IS_OMAP_TYPE(3517, 0x3517)
#if defined(CONFIG_ARCH_OMAP3)
# undef cpu_is_omap3430
# undef cpu_is_omap3503
# undef cpu_is_omap3515
# undef cpu_is_omap3525
# undef cpu_is_omap3530
# undef cpu_is_omap3505
# undef cpu_is_omap3517
# undef cpu_is_ti81xx
# undef cpu_is_ti816x
# undef cpu_is_ti814x
@ -363,19 +347,6 @@ IS_OMAP_TYPE(3517, 0x3517)
# undef cpu_is_am33xx
# undef cpu_is_am335x
# define cpu_is_omap3430() is_omap3430()
# define cpu_is_omap3503() (cpu_is_omap3430() && \
(!omap3_has_iva()) && \
(!omap3_has_sgx()))
# define cpu_is_omap3515() (cpu_is_omap3430() && \
(!omap3_has_iva()) && \
(omap3_has_sgx()))
# define cpu_is_omap3525() (cpu_is_omap3430() && \
(!omap3_has_sgx()) && \
(omap3_has_iva()))
# define cpu_is_omap3530() (cpu_is_omap3430())
# define cpu_is_omap3517() is_omap3517()
# define cpu_is_omap3505() (cpu_is_omap3517() && \
!omap3_has_sgx())
# undef cpu_is_omap3630
# define cpu_is_omap3630() is_omap363x()
# define cpu_is_ti81xx() is_ti81xx()
@ -424,10 +395,6 @@ IS_OMAP_TYPE(3517, 0x3517)
#define OMAP3630_REV_ES1_1 (OMAP363X_CLASS | (0x1 << 8))
#define OMAP3630_REV_ES1_2 (OMAP363X_CLASS | (0x2 << 8))
#define OMAP3517_CLASS 0x35170034
#define OMAP3517_REV_ES1_0 OMAP3517_CLASS
#define OMAP3517_REV_ES1_1 (OMAP3517_CLASS | (0x1 << 8))
#define TI816X_CLASS 0x81600034
#define TI8168_REV_ES1_0 TI816X_CLASS
#define TI8168_REV_ES1_1 (TI816X_CLASS | (0x1 << 8))


@ -172,8 +172,7 @@ struct omap_mmc_platform_data {
extern void omap_mmc_notify_cover_event(struct device *dev, int slot,
int is_closed);
#if defined(CONFIG_MMC_OMAP) || defined(CONFIG_MMC_OMAP_MODULE) || \
defined(CONFIG_MMC_OMAP_HS) || defined(CONFIG_MMC_OMAP_HS_MODULE)
#if defined(CONFIG_MMC_OMAP) || defined(CONFIG_MMC_OMAP_MODULE)
void omap1_init_mmc(struct omap_mmc_platform_data **mmc_data,
int nr_controllers);
void omap242x_init_mmc(struct omap_mmc_platform_data **mmc_data);
@ -185,7 +184,6 @@ static inline void omap1_init_mmc(struct omap_mmc_platform_data **mmc_data,
static inline void omap242x_init_mmc(struct omap_mmc_platform_data **mmc_data)
{
}
#endif
extern int omap_msdi_reset(struct omap_hwmod *oh);


@ -7,6 +7,8 @@ config M68K
select GENERIC_IRQ_SHOW
select ARCH_HAVE_NMI_SAFE_CMPXCHG if RMW_INSNS
select GENERIC_CPU_DEVICES
select GENERIC_STRNCPY_FROM_USER if MMU
select GENERIC_STRNLEN_USER if MMU
select FPU if MMU
select ARCH_USES_GETTIMEOFFSET if MMU && !COLDFIRE


@ -1,2 +1,4 @@
include include/asm-generic/Kbuild.asm
header-y += cachectl.h
generic-y += word-at-a-time.h


@ -379,12 +379,15 @@ __constant_copy_to_user(void __user *to, const void *from, unsigned long n)
#define copy_from_user(to, from, n) __copy_from_user(to, from, n)
#define copy_to_user(to, from, n) __copy_to_user(to, from, n)
long strncpy_from_user(char *dst, const char __user *src, long count);
long strnlen_user(const char __user *src, long n);
#define user_addr_max() \
(segment_eq(get_fs(), USER_DS) ? TASK_SIZE : ~0UL)
extern long strncpy_from_user(char *dst, const char __user *src, long count);
extern __must_check long strlen_user(const char __user *str);
extern __must_check long strnlen_user(const char __user *str, long n);
unsigned long __clear_user(void __user *to, unsigned long n);
#define clear_user __clear_user
#define strlen_user(str) strnlen_user(str, 32767)
#endif /* _M68K_UACCESS_H */


@ -103,80 +103,6 @@ unsigned long __generic_copy_to_user(void __user *to, const void *from,
}
EXPORT_SYMBOL(__generic_copy_to_user);
/*
* Copy a null terminated string from userspace.
*/
long strncpy_from_user(char *dst, const char __user *src, long count)
{
long res;
char c;
if (count <= 0)
return count;
asm volatile ("\n"
"1: "MOVES".b (%2)+,%4\n"
" move.b %4,(%1)+\n"
" jeq 2f\n"
" subq.l #1,%3\n"
" jne 1b\n"
"2: sub.l %3,%0\n"
"3:\n"
" .section .fixup,\"ax\"\n"
" .even\n"
"10: move.l %5,%0\n"
" jra 3b\n"
" .previous\n"
"\n"
" .section __ex_table,\"a\"\n"
" .align 4\n"
" .long 1b,10b\n"
" .previous"
: "=d" (res), "+a" (dst), "+a" (src), "+r" (count), "=&d" (c)
: "i" (-EFAULT), "0" (count));
return res;
}
EXPORT_SYMBOL(strncpy_from_user);
/*
* Return the size of a string (including the ending 0)
*
* Return 0 on exception, a value greater than N if too long
*/
long strnlen_user(const char __user *src, long n)
{
char c;
long res;
asm volatile ("\n"
"1: subq.l #1,%1\n"
" jmi 3f\n"
"2: "MOVES".b (%0)+,%2\n"
" tst.b %2\n"
" jne 1b\n"
" jra 4f\n"
"\n"
"3: addq.l #1,%0\n"
"4: sub.l %4,%0\n"
"5:\n"
" .section .fixup,\"ax\"\n"
" .even\n"
"20: sub.l %0,%0\n"
" jra 5b\n"
" .previous\n"
"\n"
" .section __ex_table,\"a\"\n"
" .align 4\n"
" .long 2b,20b\n"
" .previous\n"
: "=&a" (res), "+d" (n), "=&d" (c)
: "0" (src), "r" (src));
return res;
}
EXPORT_SYMBOL(strnlen_user);
/*
* Zero Userspace
*/


@ -100,6 +100,9 @@ static inline void hard_irq_disable(void)
get_paca()->irq_happened |= PACA_IRQ_HARD_DIS;
}
/* include/linux/interrupt.h needs hard_irq_disable to be a macro */
#define hard_irq_disable hard_irq_disable
/*
* This is called by asynchronous interrupts to conditionally
* re-enable hard interrupts when soft-disabled after having


@ -1,59 +0,0 @@
#ifndef _SPARC64_CMT_H
#define _SPARC64_CMT_H
/* cmt.h: Chip Multi-Threading register definitions
*
* Copyright (C) 2004 David S. Miller (davem@redhat.com)
*/
/* ASI_CORE_ID - private */
#define LP_ID 0x0000000000000010UL
#define LP_ID_MAX 0x00000000003f0000UL
#define LP_ID_ID 0x000000000000003fUL
/* ASI_INTR_ID - private */
#define LP_INTR_ID 0x0000000000000000UL
#define LP_INTR_ID_ID 0x00000000000003ffUL
/* ASI_CESR_ID - private */
#define CESR_ID 0x0000000000000040UL
#define CESR_ID_ID 0x00000000000000ffUL
/* ASI_CORE_AVAILABLE - shared */
#define LP_AVAIL 0x0000000000000000UL
#define LP_AVAIL_1 0x0000000000000002UL
#define LP_AVAIL_0 0x0000000000000001UL
/* ASI_CORE_ENABLE_STATUS - shared */
#define LP_ENAB_STAT 0x0000000000000010UL
#define LP_ENAB_STAT_1 0x0000000000000002UL
#define LP_ENAB_STAT_0 0x0000000000000001UL
/* ASI_CORE_ENABLE - shared */
#define LP_ENAB 0x0000000000000020UL
#define LP_ENAB_1 0x0000000000000002UL
#define LP_ENAB_0 0x0000000000000001UL
/* ASI_CORE_RUNNING - shared */
#define LP_RUNNING_RW 0x0000000000000050UL
#define LP_RUNNING_W1S 0x0000000000000060UL
#define LP_RUNNING_W1C 0x0000000000000068UL
#define LP_RUNNING_1 0x0000000000000002UL
#define LP_RUNNING_0 0x0000000000000001UL
/* ASI_CORE_RUNNING_STAT - shared */
#define LP_RUN_STAT 0x0000000000000058UL
#define LP_RUN_STAT_1 0x0000000000000002UL
#define LP_RUN_STAT_0 0x0000000000000001UL
/* ASI_XIR_STEERING - shared */
#define LP_XIR_STEER 0x0000000000000030UL
#define LP_XIR_STEER_1 0x0000000000000002UL
#define LP_XIR_STEER_0 0x0000000000000001UL
/* ASI_CMT_ERROR_STEERING - shared */
#define CMT_ER_STEER 0x0000000000000040UL
#define CMT_ER_STEER_1 0x0000000000000002UL
#define CMT_ER_STEER_0 0x0000000000000001UL
#endif /* _SPARC64_CMT_H */


@ -1,67 +0,0 @@
/*
* mpmbox.h: Interface and defines for the OpenProm mailbox
* facilities for MP machines under Linux.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
*/
#ifndef _SPARC_MPMBOX_H
#define _SPARC_MPMBOX_H
/* The prom allocates, for each CPU on the machine an unsigned
* byte in physical ram. You probe the device tree prom nodes
* for these values. The purpose of this byte is to be able to
* pass messages from one cpu to another.
*/
/* These are the main message types we have to look for in our
* Cpu mailboxes, based upon these values we decide what course
* of action to take.
*/
/* The CPU is executing code in the kernel. */
#define MAILBOX_ISRUNNING 0xf0
/* Another CPU called romvec->pv_exit(), you should call
* prom_stopcpu() when you see this in your mailbox.
*/
#define MAILBOX_EXIT 0xfb
/* Another CPU called romvec->pv_enter(), you should call
* prom_cpuidle() when this is seen.
*/
#define MAILBOX_GOSPIN 0xfc
/* Another CPU has hit a breakpoint either into kadb or the prom
* itself. Just like MAILBOX_GOSPIN, you should call prom_cpuidle()
* at this point.
*/
#define MAILBOX_BPT_SPIN 0xfd
/* Oh geese, some other nitwit got a damn watchdog reset. The party's
* over so go call prom_stopcpu().
*/
#define MAILBOX_WDOG_STOP 0xfe
#ifndef __ASSEMBLY__
/* Handy macro's to determine a cpu's state. */
/* Is the cpu still in Power On Self Test? */
#define MBOX_POST_P(letter) ((letter) >= 0x00 && (letter) <= 0x7f)
/* Is the cpu at the 'ok' prompt of the PROM? */
#define MBOX_PROMPROMPT_P(letter) ((letter) >= 0x80 && (letter) <= 0x8f)
/* Is the cpu spinning in the PROM? */
#define MBOX_PROMSPIN_P(letter) ((letter) >= 0x90 && (letter) <= 0xef)
/* Sanity check... This is junk mail, throw it out. */
#define MBOX_BOGON_P(letter) ((letter) >= 0xf1 && (letter) <= 0xfa)
/* Is the cpu actively running an application/kernel-code? */
#define MBOX_RUNNING_P(letter) ((letter) == MAILBOX_ISRUNNING)
#endif /* !(__ASSEMBLY__) */
#endif /* !(_SPARC_MPMBOX_H) */

View File

@ -146,7 +146,7 @@ extern int fixup_exception(struct pt_regs *regs);
#ifdef __tilegx__
#define __get_user_1(x, ptr, ret) __get_user_asm(ld1u, x, ptr, ret)
#define __get_user_2(x, ptr, ret) __get_user_asm(ld2u, x, ptr, ret)
#define __get_user_4(x, ptr, ret) __get_user_asm(ld4u, x, ptr, ret)
#define __get_user_4(x, ptr, ret) __get_user_asm(ld4s, x, ptr, ret)
#define __get_user_8(x, ptr, ret) __get_user_asm(ld, x, ptr, ret)
#else
#define __get_user_1(x, ptr, ret) __get_user_asm(lb_u, x, ptr, ret)

View File

@ -120,11 +120,6 @@ bool kvm_check_and_clear_guest_paused(void)
bool ret = false;
struct pvclock_vcpu_time_info *src;
/*
* per_cpu() is safe here because this function is only called from
* timer functions where preemption is already disabled.
*/
WARN_ON(!in_atomic());
src = &__get_cpu_var(hv_clock);
if ((src->flags & PVCLOCK_GUEST_STOPPED) != 0) {
__this_cpu_and(hv_clock.flags, ~PVCLOCK_GUEST_STOPPED);

View File

@ -100,7 +100,7 @@ void *dma_generic_alloc_coherent(struct device *dev, size_t size,
struct dma_attrs *attrs)
{
unsigned long dma_mask;
struct page *page = NULL;
struct page *page;
unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
dma_addr_t addr;
@ -108,6 +108,7 @@ void *dma_generic_alloc_coherent(struct device *dev, size_t size,
flag |= __GFP_ZERO;
again:
page = NULL;
if (!(flag & GFP_ATOMIC))
page = dma_alloc_from_contiguous(dev, count, get_order(size));
if (!page)

View File

@ -349,9 +349,12 @@ static bool __cpuinit match_llc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
static bool __cpuinit match_mc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
{
if (c->phys_proc_id == o->phys_proc_id)
return topology_sane(c, o, "mc");
if (c->phys_proc_id == o->phys_proc_id) {
if (cpu_has(c, X86_FEATURE_AMD_DCM))
return true;
return topology_sane(c, o, "mc");
}
return false;
}

View File

@ -22,7 +22,7 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
void *map;
int ret;
if (__range_not_ok(from, n, TASK_SIZE) == 0)
if (__range_not_ok(from, n, TASK_SIZE))
return len;
do {
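
The one-line change above fixes an inverted test: __range_not_ok() returns non-zero when the range is bad, so bailing out on "== 0" rejected exactly the accesses that were fine. A tiny standalone sketch of the pitfall (the limit value and helper are hypothetical, not the kernel's):

#include <stdio.h>
#include <stdbool.h>

#define TASK_SIZE 0x1000UL              /* made-up user-space limit */

/* Returns true ("not ok") when [addr, addr + n) exceeds the limit. */
static bool range_not_ok(unsigned long addr, unsigned long n, unsigned long limit)
{
        return addr + n > limit;        /* overflow handling omitted in this sketch */
}

int main(void)
{
        unsigned long from = 0x0800, n = 16;

        if (range_not_ok(from, n, TASK_SIZE) == 0)      /* buggy form */
                puts("buggy check: refused a valid copy");

        if (range_not_ok(from, n, TASK_SIZE))           /* fixed form */
                puts("fixed check: refused");
        else
                puts("fixed check: copy allowed");
        return 0;
}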

View File

@ -180,7 +180,7 @@ err_free_memtype:
/**
* ioremap_nocache - map bus memory into CPU space
* @offset: bus address of the memory
* @phys_addr: bus address of the memory
* @size: size of the resource to map
*
* ioremap_nocache performs a platform specific sequence of operations to
@ -217,7 +217,7 @@ EXPORT_SYMBOL(ioremap_nocache);
/**
* ioremap_wc - map memory into CPU space write combined
* @offset: bus address of the memory
* @phys_addr: bus address of the memory
* @size: size of the resource to map
*
* This version of ioremap ensures that the memory is marked write combining.

View File

@ -122,7 +122,7 @@ within(unsigned long addr, unsigned long start, unsigned long end)
/**
* clflush_cache_range - flush a cache range with clflush
* @addr: virtual start address
* @vaddr: virtual start address
* @size: number of bytes to flush
*
* clflush is an unordered instruction which needs fencing with mfence

View File

@ -39,9 +39,9 @@
#undef __SYSCALL_I386
#define __SYSCALL_I386(nr, sym, compat) [ nr ] = sym,
typedef void (*sys_call_ptr_t)(void);
typedef asmlinkage void (*sys_call_ptr_t)(void);
extern void sys_ni_syscall(void);
extern asmlinkage void sys_ni_syscall(void);
const sys_call_ptr_t sys_call_table[] __cacheline_aligned = {
/*

View File

@ -209,6 +209,9 @@ static void __init xen_banner(void)
xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
}
#define CPUID_THERM_POWER_LEAF 6
#define APERFMPERF_PRESENT 0
static __read_mostly unsigned int cpuid_leaf1_edx_mask = ~0;
static __read_mostly unsigned int cpuid_leaf1_ecx_mask = ~0;
@ -242,6 +245,11 @@ static void xen_cpuid(unsigned int *ax, unsigned int *bx,
*dx = cpuid_leaf5_edx_val;
return;
case CPUID_THERM_POWER_LEAF:
/* Disabling APERFMPERF for kernel usage */
maskecx = ~(1 << APERFMPERF_PRESENT);
break;
case 0xb:
/* Suppress extended topology stuff */
maskebx = 0;
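
The new xen_cpuid() case above clears ECX bit 0 of CPUID leaf 6, the bit that advertises the APERF/MPERF MSRs, so the guest kernel will not attempt to use counters the hypervisor does not virtualize. A minimal userspace sketch of the mask being built (the input ECX value is fabricated; only the masking mirrors the hunk):

#include <stdint.h>
#include <stdio.h>

#define CPUID_THERM_POWER_LEAF 6
#define APERFMPERF_PRESENT     0        /* ECX bit 0 of leaf 6 */

int main(void)
{
        uint32_t ecx = 0x00000005;      /* pretend hardware sets bits 0 and 2 */
        uint32_t maskecx;

        maskecx = ~(1u << APERFMPERF_PRESENT);  /* same mask as the hunk */
        ecx &= maskecx;                         /* APERF/MPERF now hidden */

        printf("leaf %d ecx after masking: %#x\n", CPUID_THERM_POWER_LEAF, ecx);
        return 0;
}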

View File

@ -706,6 +706,7 @@ int m2p_add_override(unsigned long mfn, struct page *page,
unsigned long uninitialized_var(address);
unsigned level;
pte_t *ptep = NULL;
int ret = 0;
pfn = page_to_pfn(page);
if (!PageHighMem(page)) {
@ -741,6 +742,24 @@ int m2p_add_override(unsigned long mfn, struct page *page,
list_add(&page->lru, &m2p_overrides[mfn_hash(mfn)]);
spin_unlock_irqrestore(&m2p_override_lock, flags);
/* p2m(m2p(mfn)) == mfn: the mfn is already present somewhere in
* this domain. Set the FOREIGN_FRAME_BIT in the p2m for the other
* pfn so that the following mfn_to_pfn(mfn) calls will return the
* pfn from the m2p_override (the backend pfn) instead.
* We need to do this because the pages shared by the frontend
* (xen-blkfront) can be already locked (lock_page, called by
* do_read_cache_page); when the userspace backend tries to use them
* with direct_IO, mfn_to_pfn returns the pfn of the frontend, so
* do_blockdev_direct_IO is going to try to lock the same pages
* again resulting in a deadlock.
* As a side effect get_user_pages_fast might not be safe on the
* frontend pages while they are being shared with the backend,
* because mfn_to_pfn (that ends up being called by GUPF) will
* return the backend pfn rather than the frontend pfn. */
ret = __get_user(pfn, &machine_to_phys_mapping[mfn]);
if (ret == 0 && get_phys_to_machine(pfn) == mfn)
set_phys_to_machine(pfn, FOREIGN_FRAME(mfn));
return 0;
}
EXPORT_SYMBOL_GPL(m2p_add_override);
@ -752,6 +771,7 @@ int m2p_remove_override(struct page *page, bool clear_pte)
unsigned long uninitialized_var(address);
unsigned level;
pte_t *ptep = NULL;
int ret = 0;
pfn = page_to_pfn(page);
mfn = get_phys_to_machine(pfn);
@ -821,6 +841,22 @@ int m2p_remove_override(struct page *page, bool clear_pte)
} else
set_phys_to_machine(pfn, page->index);
/* p2m(m2p(mfn)) == FOREIGN_FRAME(mfn): the mfn is already present
* somewhere in this domain, even before being added to the
* m2p_override (see comment above in m2p_add_override).
* If there are no other entries in the m2p_override corresponding
* to this mfn, then remove the FOREIGN_FRAME_BIT from the p2m for
* the original pfn (the one shared by the frontend): the backend
* cannot do any IO on this page anymore because it has been
* unshared. Removing the FOREIGN_FRAME_BIT from the p2m entry of
* the original pfn causes mfn_to_pfn(mfn) to return the frontend
* pfn again. */
mfn &= ~FOREIGN_FRAME_BIT;
ret = __get_user(pfn, &machine_to_phys_mapping[mfn]);
if (ret == 0 && get_phys_to_machine(pfn) == FOREIGN_FRAME(mfn) &&
m2p_find_override(mfn) == NULL)
set_phys_to_machine(pfn, mfn);
return 0;
}
EXPORT_SYMBOL_GPL(m2p_remove_override);
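
The comments added in m2p_add_override()/m2p_remove_override() above describe a round-trip test: if p2m(m2p(mfn)) already equals mfn, the frame belongs to this domain, so its p2m entry is tagged with FOREIGN_FRAME_BIT while the override exists and restored when it is removed. A toy userspace model of that invariant (table sizes, values and the bit position are invented; only the round-trip logic follows the comment):

#include <stdio.h>
#include <stdbool.h>

#define FOREIGN_FRAME_BIT (1UL << 31)   /* illustrative bit position */
#define FOREIGN_FRAME(m)  ((m) | FOREIGN_FRAME_BIT)

static unsigned long p2m[8];            /* pfn -> mfn */
static unsigned long m2p[8];            /* mfn -> pfn (machine_to_phys) */

static bool mfn_is_local(unsigned long mfn)
{
        return p2m[m2p[mfn]] == mfn;    /* p2m(m2p(mfn)) == mfn ? */
}

int main(void)
{
        p2m[3] = 5;                     /* pfn 3 already maps mfn 5 ... */
        m2p[5] = 3;                     /* ... and the m2p table agrees */

        if (mfn_is_local(5))            /* add_override path */
                p2m[3] = FOREIGN_FRAME(5);
        printf("while overridden: p2m[3] = %#lx\n", p2m[3]);

        p2m[3] &= ~FOREIGN_FRAME_BIT;   /* remove_override path (simplified) */
        printf("after removal:    p2m[3] = %#lx\n", p2m[3]);
        return 0;
}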

View File

@ -371,7 +371,8 @@ char * __init xen_memory_setup(void)
populated = xen_populate_chunk(map, memmap.nr_entries,
max_pfn, &last_pfn, xen_released_pages);
extra_pages += (xen_released_pages - populated);
xen_released_pages -= populated;
extra_pages += xen_released_pages;
if (last_pfn > max_pfn) {
max_pfn = min(MAX_DOMAIN_PAGES, last_pfn);

View File

@ -139,7 +139,9 @@ void bcma_pmu_workarounds(struct bcma_drv_cc *cc)
bcma_chipco_chipctl_maskset(cc, 0, ~0, 0x7);
break;
case 0x4331:
/* BCM4331 workaround is SPROM-related, we put it in sprom.c */
case 43431:
/* Ext PA lines must be enabled for tx on BCM4331 */
bcma_chipco_bcm4331_ext_pa_lines_ctl(cc, true);
break;
case 43224:
if (bus->chipinfo.rev == 0) {

View File

@ -232,17 +232,19 @@ void __devinit bcma_core_pci_init(struct bcma_drv_pci *pc)
int bcma_core_pci_irq_ctl(struct bcma_drv_pci *pc, struct bcma_device *core,
bool enable)
{
struct pci_dev *pdev = pc->core->bus->host_pci;
struct pci_dev *pdev;
u32 coremask, tmp;
int err = 0;
if (core->bus->hosttype != BCMA_HOSTTYPE_PCI) {
if (!pc || core->bus->hosttype != BCMA_HOSTTYPE_PCI) {
/* This bcma device is not on a PCI host-bus. So the IRQs are
* not routed through the PCI core.
* So we must not enable routing through the PCI core. */
goto out;
}
pdev = pc->core->bus->host_pci;
err = pci_read_config_dword(pdev, BCMA_PCI_IRQMASK, &tmp);
if (err)
goto out;

View File

@ -579,13 +579,13 @@ int bcma_sprom_get(struct bcma_bus *bus)
if (!sprom)
return -ENOMEM;
if (bus->chipinfo.id == 0x4331)
if (bus->chipinfo.id == 0x4331 || bus->chipinfo.id == 43431)
bcma_chipco_bcm4331_ext_pa_lines_ctl(&bus->drv_cc, false);
pr_debug("SPROM offset 0x%x\n", offset);
bcma_sprom_read(bus, offset, sprom);
if (bus->chipinfo.id == 0x4331)
if (bus->chipinfo.id == 0x4331 || bus->chipinfo.id == 43431)
bcma_chipco_bcm4331_ext_pa_lines_ctl(&bus->drv_cc, true);
err = bcma_sprom_valid(sprom);

View File

@ -34,7 +34,7 @@ static int atmel_trng_read(struct hwrng *rng, void *buf, size_t max,
u32 *data = buf;
/* data ready? */
if (readl(trng->base + TRNG_ODATA) & 1) {
if (readl(trng->base + TRNG_ISR) & 1) {
*data = readl(trng->base + TRNG_ODATA);
/*
ensure data ready is only set again AFTER the next data

View File

@ -164,7 +164,7 @@ void *edac_align_ptr(void **p, unsigned size, int n_elems)
else
return (char *)ptr;
r = size % align;
r = (unsigned long)p % align;
if (r == 0)
return (char *)ptr;
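
The change above bases the padding computation on an address rather than on the element size: what matters for alignment is where the pointer lands, not how large the objects being placed are. A small standalone sketch of the padding arithmetic (hypothetical helper, not the EDAC function):

#include <stdint.h>
#include <stdio.h>

/* Pad an address up to the next multiple of align (align > 0). */
static void *align_up(void *ptr, unsigned int align)
{
        uintptr_t addr = (uintptr_t)ptr;
        unsigned int r = addr % align;  /* misalignment of the address itself */

        if (r)
                addr += align - r;
        return (void *)addr;
}

int main(void)
{
        char buf[64];
        void *misaligned = buf + 3;     /* deliberately off by three bytes */

        printf("%p -> %p (8-byte aligned)\n",
               misaligned, align_up(misaligned, 8));
        return 0;
}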

View File

@ -1814,12 +1814,6 @@ static int i7core_mce_check_error(struct notifier_block *nb, unsigned long val,
if (mce->bank != 8)
return NOTIFY_DONE;
#ifdef CONFIG_SMP
/* Only handle if it is the right mc controller */
if (mce->socketid != pvt->i7core_dev->socket)
return NOTIFY_DONE;
#endif
smp_rmb();
if ((pvt->mce_out + 1) % MCE_LOG_LEN == pvt->mce_in) {
smp_wmb();
@ -2116,8 +2110,6 @@ static void i7core_unregister_mci(struct i7core_dev *i7core_dev)
if (pvt->enable_scrub)
disable_sdram_scrub_setting(mci);
mce_unregister_decode_chain(&i7_mce_dec);
/* Disable EDAC polling */
i7core_pci_ctl_release(pvt);
@ -2222,8 +2214,6 @@ static int i7core_register_mci(struct i7core_dev *i7core_dev)
/* DCLK for scrub rate setting */
pvt->dclk_freq = get_dclk_freq();
mce_register_decode_chain(&i7_mce_dec);
return 0;
fail0:
@ -2367,8 +2357,10 @@ static int __init i7core_init(void)
pci_rc = pci_register_driver(&i7core_driver);
if (pci_rc >= 0)
if (pci_rc >= 0) {
mce_register_decode_chain(&i7_mce_dec);
return 0;
}
i7core_printk(KERN_ERR, "Failed to register device with error %d.\n",
pci_rc);
@ -2384,6 +2376,7 @@ static void __exit i7core_exit(void)
{
debugf2("MC: " __FILE__ ": %s()\n", __func__);
pci_unregister_driver(&i7core_driver);
mce_unregister_decode_chain(&i7_mce_dec);
}
module_init(i7core_init);

View File

@ -980,7 +980,8 @@ static int __devinit mpc85xx_mc_err_probe(struct platform_device *op)
layers[1].type = EDAC_MC_LAYER_CHANNEL;
layers[1].size = 1;
layers[1].is_virt_csrow = false;
mci = edac_mc_alloc(edac_mc_idx, ARRAY_SIZE(layers), sizeof(*pdata));
mci = edac_mc_alloc(edac_mc_idx, ARRAY_SIZE(layers), layers,
sizeof(*pdata));
if (!mci) {
devres_release_group(&op->dev, mpc85xx_mc_err_probe);
return -ENOMEM;

View File

@ -555,7 +555,7 @@ static int get_dimm_config(struct mem_ctl_info *mci)
pvt->is_close_pg = false;
}
pci_read_config_dword(pvt->pci_ta, RANK_CFG_A, &reg);
pci_read_config_dword(pvt->pci_ddrio, RANK_CFG_A, &reg);
if (IS_RDIMM_ENABLED(reg)) {
/* FIXME: Can also be LRDIMM */
debugf0("Memory is registered\n");
@ -1604,8 +1604,6 @@ static void sbridge_unregister_mci(struct sbridge_dev *sbridge_dev)
debugf0("MC: " __FILE__ ": %s(): mci = %p, dev = %p\n",
__func__, mci, &sbridge_dev->pdev[0]->dev);
mce_unregister_decode_chain(&sbridge_mce_dec);
/* Remove MC sysfs nodes */
edac_mc_del_mc(mci->dev);
@ -1682,7 +1680,6 @@ static int sbridge_register_mci(struct sbridge_dev *sbridge_dev)
goto fail0;
}
mce_register_decode_chain(&sbridge_mce_dec);
return 0;
fail0:
@ -1811,8 +1808,10 @@ static int __init sbridge_init(void)
pci_rc = pci_register_driver(&sbridge_driver);
if (pci_rc >= 0)
if (pci_rc >= 0) {
mce_register_decode_chain(&sbridge_mce_dec);
return 0;
}
sbridge_printk(KERN_ERR, "Failed to register device with error %d.\n",
pci_rc);
@ -1828,6 +1827,7 @@ static void __exit sbridge_exit(void)
{
debugf2("MC: " __FILE__ ": %s()\n", __func__);
pci_unregister_driver(&sbridge_driver);
mce_unregister_decode_chain(&sbridge_mce_dec);
}
module_init(sbridge_init);

View File

@ -6558,7 +6558,7 @@ static void intel_setup_outputs(struct drm_device *dev)
if (I915_READ(HDMIC) & PORT_DETECTED)
intel_hdmi_init(dev, HDMIC);
if (I915_READ(HDMID) & PORT_DETECTED)
if (!dpd_is_edp && I915_READ(HDMID) & PORT_DETECTED)
intel_hdmi_init(dev, HDMID);
if (I915_READ(PCH_DP_C) & DP_DETECTED)

View File

@ -32,6 +32,7 @@
#include "drm.h"
#include "drm_crtc.h"
#include "drm_crtc_helper.h"
#include "drm_edid.h"
#include "intel_drv.h"
#include "i915_drm.h"
#include "i915_drv.h"
@ -67,6 +68,8 @@ struct intel_dp {
struct drm_display_mode *panel_fixed_mode; /* for eDP */
struct delayed_work panel_vdd_work;
bool want_panel_vdd;
struct edid *edid; /* cached EDID for eDP */
int edid_mode_count;
};
/**
@ -371,7 +374,7 @@ intel_dp_aux_ch(struct intel_dp *intel_dp,
int recv_bytes;
uint32_t status;
uint32_t aux_clock_divider;
int try, precharge = 5;
int try, precharge;
intel_dp_check_edp(intel_dp);
/* The clock divider is based off the hrawclk,
@ -391,6 +394,11 @@ intel_dp_aux_ch(struct intel_dp *intel_dp,
else
aux_clock_divider = intel_hrawclk(dev) / 2;
if (IS_GEN6(dev))
precharge = 3;
else
precharge = 5;
/* Try to wait for any previous AUX channel activity */
for (try = 0; try < 3; try++) {
status = I915_READ(ch_ctl);
@ -1973,6 +1981,8 @@ intel_dp_probe_oui(struct intel_dp *intel_dp)
if (!(intel_dp->dpcd[DP_DOWN_STREAM_PORT_COUNT] & DP_OUI_SUPPORT))
return;
ironlake_edp_panel_vdd_on(intel_dp);
if (intel_dp_aux_native_read_retry(intel_dp, DP_SINK_OUI, buf, 3))
DRM_DEBUG_KMS("Sink OUI: %02hx%02hx%02hx\n",
buf[0], buf[1], buf[2]);
@ -1980,6 +1990,8 @@ intel_dp_probe_oui(struct intel_dp *intel_dp)
if (intel_dp_aux_native_read_retry(intel_dp, DP_BRANCH_OUI, buf, 3))
DRM_DEBUG_KMS("Branch OUI: %02hx%02hx%02hx\n",
buf[0], buf[1], buf[2]);
ironlake_edp_panel_vdd_off(intel_dp, false);
}
static bool
@ -2116,10 +2128,22 @@ intel_dp_get_edid(struct drm_connector *connector, struct i2c_adapter *adapter)
{
struct intel_dp *intel_dp = intel_attached_dp(connector);
struct edid *edid;
int size;
if (is_edp(intel_dp)) {
if (!intel_dp->edid)
return NULL;
size = (intel_dp->edid->extensions + 1) * EDID_LENGTH;
edid = kmalloc(size, GFP_KERNEL);
if (!edid)
return NULL;
memcpy(edid, intel_dp->edid, size);
return edid;
}
ironlake_edp_panel_vdd_on(intel_dp);
edid = drm_get_edid(connector, adapter);
ironlake_edp_panel_vdd_off(intel_dp, false);
return edid;
}
@ -2129,9 +2153,17 @@ intel_dp_get_edid_modes(struct drm_connector *connector, struct i2c_adapter *ada
struct intel_dp *intel_dp = intel_attached_dp(connector);
int ret;
ironlake_edp_panel_vdd_on(intel_dp);
if (is_edp(intel_dp)) {
drm_mode_connector_update_edid_property(connector,
intel_dp->edid);
ret = drm_add_edid_modes(connector, intel_dp->edid);
drm_edid_to_eld(connector,
intel_dp->edid);
connector->display_info.raw_edid = NULL;
return intel_dp->edid_mode_count;
}
ret = intel_ddc_get_modes(connector, adapter);
ironlake_edp_panel_vdd_off(intel_dp, false);
return ret;
}
@ -2321,6 +2353,7 @@ static void intel_dp_encoder_destroy(struct drm_encoder *encoder)
i2c_del_adapter(&intel_dp->adapter);
drm_encoder_cleanup(encoder);
if (is_edp(intel_dp)) {
kfree(intel_dp->edid);
cancel_delayed_work_sync(&intel_dp->panel_vdd_work);
ironlake_panel_vdd_off_sync(intel_dp);
}
@ -2504,11 +2537,14 @@ intel_dp_init(struct drm_device *dev, int output_reg)
break;
}
intel_dp_i2c_init(intel_dp, intel_connector, name);
/* Cache some DPCD data in the eDP case */
if (is_edp(intel_dp)) {
bool ret;
struct edp_power_seq cur, vbt;
u32 pp_on, pp_off, pp_div;
struct edid *edid;
pp_on = I915_READ(PCH_PP_ON_DELAYS);
pp_off = I915_READ(PCH_PP_OFF_DELAYS);
@ -2576,9 +2612,19 @@ intel_dp_init(struct drm_device *dev, int output_reg)
intel_dp_destroy(&intel_connector->base);
return;
}
}
intel_dp_i2c_init(intel_dp, intel_connector, name);
ironlake_edp_panel_vdd_on(intel_dp);
edid = drm_get_edid(connector, &intel_dp->adapter);
if (edid) {
drm_mode_connector_update_edid_property(connector,
edid);
intel_dp->edid_mode_count =
drm_add_edid_modes(connector, edid);
drm_edid_to_eld(connector, edid);
intel_dp->edid = edid;
}
ironlake_edp_panel_vdd_off(intel_dp, false);
}
intel_encoder->hot_plug = intel_dp_hot_plug;

View File

@ -1926,7 +1926,9 @@ radeon_atom_encoder_mode_set(struct drm_encoder *encoder,
if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_HDMI) {
r600_hdmi_enable(encoder);
if (ASIC_IS_DCE4(rdev))
if (ASIC_IS_DCE6(rdev))
; /* TODO (use pointers instead of if-s?) */
else if (ASIC_IS_DCE4(rdev))
evergreen_hdmi_setmode(encoder, adjusted_mode);
else
r600_hdmi_setmode(encoder, adjusted_mode);

View File

@ -1932,6 +1932,9 @@ static void evergreen_gpu_init(struct radeon_device *rdev)
smx_dc_ctl0 |= NUMBER_OF_SETS(rdev->config.evergreen.sx_num_of_sets);
WREG32(SMX_DC_CTL0, smx_dc_ctl0);
if (rdev->family <= CHIP_SUMO2)
WREG32(SMX_SAR_CTL0, 0x00010000);
WREG32(SX_EXPORT_BUFFER_SIZES, (COLOR_BUFFER_SIZE((rdev->config.evergreen.sx_max_export_size / 4) - 1) |
POSITION_BUFFER_SIZE((rdev->config.evergreen.sx_max_export_pos_size / 4) - 1) |
SMX_BUFFER_SIZE((rdev->config.evergreen.sx_max_export_smx_size / 4) - 1)));

View File

@ -156,9 +156,6 @@ void evergreen_hdmi_setmode(struct drm_encoder *encoder, struct drm_display_mode
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
uint32_t offset;
if (ASIC_IS_DCE5(rdev))
return;
/* Silent, r600_hdmi_enable will raise WARN for us */
if (!dig->afmt->enabled)
return;

View File

@ -503,6 +503,7 @@
#define SCRATCH_UMSK 0x8540
#define SCRATCH_ADDR 0x8544
#define SMX_SAR_CTL0 0xA008
#define SMX_DC_CTL0 0xA020
#define USE_HASH_FUNCTION (1 << 0)
#define NUMBER_OF_SETS(x) ((x) << 1)

View File

@ -1303,6 +1303,10 @@ static int cayman_startup(struct radeon_device *rdev)
if (r)
return r;
r = r600_audio_init(rdev);
if (r)
return r;
return 0;
}
@ -1329,6 +1333,7 @@ int cayman_resume(struct radeon_device *rdev)
int cayman_suspend(struct radeon_device *rdev)
{
r600_audio_fini(rdev);
/* FIXME: we should wait for ring to be empty */
radeon_ib_pool_suspend(rdev);
radeon_vm_manager_suspend(rdev);

View File

@ -1839,6 +1839,7 @@ void r600_gpu_init(struct radeon_device *rdev)
WREG32(PA_CL_ENHANCE, (CLIP_VTX_REORDER_ENA |
NUM_CLIP_SEQ(3)));
WREG32(PA_SC_ENHANCE, FORCE_EOV_MAX_CLK_CNT(4095));
WREG32(VC_ENHANCE, 0);
}

View File

@ -57,7 +57,7 @@ static bool radeon_dig_encoder(struct drm_encoder *encoder)
*/
static int r600_audio_chipset_supported(struct radeon_device *rdev)
{
return (rdev->family >= CHIP_R600 && !ASIC_IS_DCE5(rdev))
return (rdev->family >= CHIP_R600 && !ASIC_IS_DCE6(rdev))
|| rdev->family == CHIP_RS600
|| rdev->family == CHIP_RS690
|| rdev->family == CHIP_RS740;

View File

@ -2079,6 +2079,48 @@ static int r600_packet3_check(struct radeon_cs_parser *p,
return -EINVAL;
}
break;
case PACKET3_STRMOUT_BASE_UPDATE:
if (p->family < CHIP_RV770) {
DRM_ERROR("STRMOUT_BASE_UPDATE only supported on 7xx\n");
return -EINVAL;
}
if (pkt->count != 1) {
DRM_ERROR("bad STRMOUT_BASE_UPDATE packet count\n");
return -EINVAL;
}
if (idx_value > 3) {
DRM_ERROR("bad STRMOUT_BASE_UPDATE index\n");
return -EINVAL;
}
{
u64 offset;
r = r600_cs_packet_next_reloc(p, &reloc);
if (r) {
DRM_ERROR("bad STRMOUT_BASE_UPDATE reloc\n");
return -EINVAL;
}
if (reloc->robj != track->vgt_strmout_bo[idx_value]) {
DRM_ERROR("bad STRMOUT_BASE_UPDATE, bo does not match\n");
return -EINVAL;
}
offset = radeon_get_ib_value(p, idx+1) << 8;
if (offset != track->vgt_strmout_bo_offset[idx_value]) {
DRM_ERROR("bad STRMOUT_BASE_UPDATE, bo offset does not match: 0x%llx, 0x%x\n",
offset, track->vgt_strmout_bo_offset[idx_value]);
return -EINVAL;
}
if ((offset + 4) > radeon_bo_size(reloc->robj)) {
DRM_ERROR("bad STRMOUT_BASE_UPDATE bo too small: 0x%llx, 0x%lx\n",
offset + 4, radeon_bo_size(reloc->robj));
return -EINVAL;
}
ib[idx+1] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
}
break;
case PACKET3_SURFACE_BASE_UPDATE:
if (p->family >= CHIP_RV770 || p->family == CHIP_R600) {
DRM_ERROR("bad SURFACE_BASE_UPDATE\n");

View File

@ -322,9 +322,6 @@ void r600_hdmi_setmode(struct drm_encoder *encoder, struct drm_display_mode *mod
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
uint32_t offset;
if (ASIC_IS_DCE5(rdev))
return;
/* Silent, r600_hdmi_enable will raise WARN for us */
if (!dig->afmt->enabled)
return;
@ -483,7 +480,7 @@ void r600_hdmi_enable(struct drm_encoder *encoder)
uint32_t offset;
u32 hdmi;
if (ASIC_IS_DCE5(rdev))
if (ASIC_IS_DCE6(rdev))
return;
/* Silent, r600_hdmi_enable will raise WARN for us */
@ -543,7 +540,7 @@ void r600_hdmi_disable(struct drm_encoder *encoder)
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
uint32_t offset;
if (ASIC_IS_DCE5(rdev))
if (ASIC_IS_DCE6(rdev))
return;
/* Called for ATOM_ENCODER_MODE_HDMI only */

View File

@ -485,6 +485,7 @@
#define TC_L2_SIZE(x) ((x)<<5)
#define L2_DISABLE_LATE_HIT (1<<9)
#define VC_ENHANCE 0x9714
#define VGT_CACHE_INVALIDATION 0x88C4
#define CACHE_INVALIDATION(x) ((x)<<0)
@ -1163,6 +1164,7 @@
#define PACKET3_SET_CTL_CONST 0x6F
#define PACKET3_SET_CTL_CONST_OFFSET 0x0003cff0
#define PACKET3_SET_CTL_CONST_END 0x0003e200
#define PACKET3_STRMOUT_BASE_UPDATE 0x72 /* r7xx */
#define PACKET3_SURFACE_BASE_UPDATE 0x73

View File

@ -58,9 +58,10 @@
* 2.14.0 - add evergreen tiling informations
* 2.15.0 - add max_pipes query
* 2.16.0 - fix evergreen 2D tiled surface calculation
* 2.17.0 - add STRMOUT_BASE_UPDATE for r7xx
*/
#define KMS_DRIVER_MAJOR 2
#define KMS_DRIVER_MINOR 16
#define KMS_DRIVER_MINOR 17
#define KMS_DRIVER_PATCHLEVEL 0
int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags);
int radeon_driver_unload_kms(struct drm_device *dev);

View File

@ -801,9 +801,13 @@ static void radeon_dynpm_idle_work_handler(struct work_struct *work)
int i;
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
not_processed += radeon_fence_count_emitted(rdev, i);
if (not_processed >= 3)
break;
struct radeon_ring *ring = &rdev->ring[i];
if (ring->ready) {
not_processed += radeon_fence_count_emitted(rdev, i);
if (not_processed >= 3)
break;
}
}
if (not_processed >= 3) { /* should upclock */

View File

@ -169,11 +169,17 @@ struct dma_buf *radeon_gem_prime_export(struct drm_device *dev,
struct radeon_bo *bo = gem_to_radeon_bo(obj);
int ret = 0;
/* pin buffer into GTT */
ret = radeon_bo_pin(bo, RADEON_GEM_DOMAIN_GTT, NULL);
if (ret)
ret = radeon_bo_reserve(bo, false);
if (unlikely(ret != 0))
return ERR_PTR(ret);
/* pin buffer into GTT */
ret = radeon_bo_pin(bo, RADEON_GEM_DOMAIN_GTT, NULL);
if (ret) {
radeon_bo_unreserve(bo);
return ERR_PTR(ret);
}
radeon_bo_unreserve(bo);
return dma_buf_export(bo, &radeon_dmabuf_ops, obj->size, flags);
}

View File

@ -616,6 +616,9 @@ static void rv770_gpu_init(struct radeon_device *rdev)
ACK_FLUSH_CTL(3) |
SYNC_FLUSH_CTL));
if (rdev->family != CHIP_RV770)
WREG32(SMX_SAR_CTL0, 0x00003f3f);
db_debug3 = RREG32(DB_DEBUG3);
db_debug3 &= ~DB_CLK_OFF_DELAY(0x1f);
switch (rdev->family) {
@ -792,7 +795,7 @@ static void rv770_gpu_init(struct radeon_device *rdev)
WREG32(PA_CL_ENHANCE, (CLIP_VTX_REORDER_ENA |
NUM_CLIP_SEQ(3)));
WREG32(VC_ENHANCE, 0);
}
void r700_vram_gtt_location(struct radeon_device *rdev, struct radeon_mc *mc)

View File

@ -211,6 +211,7 @@
#define SCRATCH_UMSK 0x8540
#define SCRATCH_ADDR 0x8544
#define SMX_SAR_CTL0 0xA008
#define SMX_DC_CTL0 0xA020
#define USE_HASH_FUNCTION (1 << 0)
#define CACHE_DEPTH(x) ((x) << 1)
@ -310,6 +311,8 @@
#define TCP_CNTL 0x9610
#define TCP_CHAN_STEER 0x9614
#define VC_ENHANCE 0x9714
#define VGT_CACHE_INVALIDATION 0x88C4
#define CACHE_INVALIDATION(x) ((x)<<0)
#define VC_ONLY 0

View File

@ -47,9 +47,9 @@ static int sis_driver_load(struct drm_device *dev, unsigned long chipset)
if (dev_priv == NULL)
return -ENOMEM;
idr_init(&dev_priv->object_idr);
dev->dev_private = (void *)dev_priv;
dev_priv->chipset = chipset;
idr_init(&dev->object_name_idr);
return 0;
}

View File

@ -13,8 +13,21 @@
static struct drm_driver driver;
/*
* There are many DisplayLink-based graphics products, all with unique PIDs.
* So we match on DisplayLink's VID + Vendor-Defined Interface Class (0xff)
* We also require a match on SubClass (0x00) and Protocol (0x00),
* which is compatible with all known USB 2.0 era graphics chips and firmware,
* but allows DisplayLink to increment those for any future incompatible chips
*/
static struct usb_device_id id_table[] = {
{.idVendor = 0x17e9, .match_flags = USB_DEVICE_ID_MATCH_VENDOR,},
{.idVendor = 0x17e9, .bInterfaceClass = 0xff,
.bInterfaceSubClass = 0x00,
.bInterfaceProtocol = 0x00,
.match_flags = USB_DEVICE_ID_MATCH_VENDOR |
USB_DEVICE_ID_MATCH_INT_CLASS |
USB_DEVICE_ID_MATCH_INT_SUBCLASS |
USB_DEVICE_ID_MATCH_INT_PROTOCOL,},
{},
};
MODULE_DEVICE_TABLE(usb, id_table);
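
The comment above spells out the matching policy: DisplayLink's vendor ID plus the vendor-defined interface class, subclass and protocol, so a future incompatible chip can bump subclass or protocol and simply fall outside the table. A simplified standalone sketch of how such match flags narrow an entry (struct layout and flag values are invented for illustration, not the usb core's):

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define MATCH_VENDOR   0x01
#define MATCH_CLASS    0x02
#define MATCH_SUBCLASS 0x04
#define MATCH_PROTO    0x08

struct dev_info { uint16_t vid; uint8_t cls, sub, proto; };
struct dev_id   { uint16_t flags, vid; uint8_t cls, sub, proto; };

static bool id_matches(const struct dev_id *id, const struct dev_info *d)
{
        if ((id->flags & MATCH_VENDOR)   && id->vid   != d->vid)   return false;
        if ((id->flags & MATCH_CLASS)    && id->cls   != d->cls)   return false;
        if ((id->flags & MATCH_SUBCLASS) && id->sub   != d->sub)   return false;
        if ((id->flags & MATCH_PROTO)    && id->proto != d->proto) return false;
        return true;
}

int main(void)
{
        struct dev_id dl = { MATCH_VENDOR | MATCH_CLASS | MATCH_SUBCLASS | MATCH_PROTO,
                             0x17e9, 0xff, 0x00, 0x00 };
        struct dev_info known  = { 0x17e9, 0xff, 0x00, 0x00 };  /* current chips   */
        struct dev_info future = { 0x17e9, 0xff, 0x01, 0x00 };  /* bumped subclass */

        printf("known: %d, future: %d\n",
               id_matches(&dl, &known), id_matches(&dl, &future));
        return 0;
}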

Some files were not shown because too many files have changed in this diff.