author		Nishanth Menon <nm@ti.com>	2015-03-09 17:11:59 -0500
committer	Tom Rini <trini@konsulko.com>	2015-03-13 09:28:29 -0400
commit		c616a0df297e886f09bf88523bcd03a86bdf8704 (patch)
tree		3b6620ec15ed38f382061a5b04b9412e78fd0e12 /arch/arm/include/asm
parent		fb1bf40838477537fb77bb591335c7aa7f90e8d5 (diff)
ARM: Introduce erratum workaround for 798870
Add a workaround for Cortex-A15 ARM erratum 798870, which says: "If
back-to-back speculative cache line fills (fill A and fill B) are issued
from the L1 data cache of a CPU to the L2 cache, the second request
(fill B) is then cancelled, and the second request would have detected
a hazard against a recent write or eviction (write B) to the same cache
line as fill B, then the L2 logic might deadlock."

Implementations for SoC families such as Exynos, OMAP5/DRA7 etc. will be
widely different. Every SoC has a slightly different manner of setting up
access to L2ACTLR and similar registers, since the Secure Monitor
handling of the Secure Monitor Call (smc) is diverse. Hence a weak
function is introduced which may be overridden to provide an SoC-specific
accessor implementation.

Based on ARM errata Document revision 18.0 (22 Nov 2013)

Signed-off-by: Nishanth Menon <nm@ti.com>
Tested-by: Matt Porter <mporter@konsulko.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
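As an illustration of the override pattern, an SoC-specific implementation
of the weak hook would typically route the register write through the
platform's secure monitor, since L2ACTLR is write-protected outside secure
state. The sketch below is hypothetical: soc_smc1() and SOC_SVC_L2ACTLR_SET
stand in for whatever smc helper and service ID a given SoC actually
provides; they are not part of this patch.

#include <common.h>
#include <asm/armv7.h>

/* Hypothetical SoC smc helper and service ID (assumptions, not in-tree) */
void soc_smc1(u32 service, u32 val);
#define SOC_SVC_L2ACTLR_SET	0x104

/*
 * Override the weak default: delegate the L2ACTLR update to the secure
 * monitor. The MIDR/revision arguments allow the hook (or the monitor)
 * to gate the workaround to affected Cortex-A15 revisions only.
 */
void v7_arch_cp15_set_l2aux_ctrl(u32 l2auxctrl, u32 cpu_midr,
				 u32 cpu_rev_comb, u32 cpu_variant,
				 u32 cpu_rev)
{
	soc_smc1(SOC_SVC_L2ACTLR_SET, l2auxctrl);
}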
Diffstat (limited to 'arch/arm/include/asm')
-rw-r--r--	arch/arm/include/asm/armv7.h | 3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/arch/arm/include/asm/armv7.h b/arch/arm/include/asm/armv7.h
index c3cc508..cd40912 100644
--- a/arch/arm/include/asm/armv7.h
+++ b/arch/arm/include/asm/armv7.h
@@ -137,6 +137,9 @@ extern char __secure_end[];
 #endif /* CONFIG_ARMV7_NONSEC || CONFIG_ARMV7_VIRT */
+void v7_arch_cp15_set_l2aux_ctrl(u32 l2auxctrl, u32 cpu_midr,
+				 u32 cpu_rev_comb, u32 cpu_variant,
+				 u32 cpu_rev);
 #endif /* ! __ASSEMBLY__ */
 #endif
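For context, here is a C rendering of how the new hook is intended to be
invoked: read L2ACTLR, set bit 7 (the hazard-detect timeout enable that
prevents the deadlock described above), and pass the new value together
with the CPU identification fields to the hook. This is only a sketch;
the actual workaround in this series runs from early assembly, and
apply_erratum_798870() is a made-up name.

#include <common.h>
#include <asm/armv7.h>

#define MIDR_PARTNUM_CORTEX_A15	0xc0f

static u32 read_midr(void)
{
	u32 midr;

	asm volatile("mrc p15, 0, %0, c0, c0, 0" : "=r" (midr));
	return midr;
}

static u32 read_l2actlr(void)
{
	u32 l2actlr;

	asm volatile("mrc p15, 1, %0, c15, c0, 0" : "=r" (l2actlr));
	return l2actlr;
}

static void apply_erratum_798870(void)
{
	u32 midr = read_midr();
	u32 variant = (midr >> 20) & 0xf;	/* rNpM: the N */
	u32 rev = midr & 0xf;			/* rNpM: the M */

	/* The erratum only applies to Cortex-A15 cores */
	if (((midr >> 4) & 0xfff) != MIDR_PARTNUM_CORTEX_A15)
		return;

	/*
	 * Set L2ACTLR[7] (enable hazard-detect timeout) so a cancelled
	 * back-to-back fill cannot deadlock the L2 logic, then let the
	 * (possibly SoC-overridden) hook perform the privileged write.
	 */
	v7_arch_cp15_set_l2aux_ctrl(read_l2actlr() | (1 << 7), midr,
				    (variant << 4) | rev, variant, rev);
}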