path: root/arch/arm/cpu/armv8
Commit message | Author | Age | Lines
...
* ARMv8/FSL_LSCH3: Add FSL_LSCH3 SoC | York Sun | 2014-07-03 | -1/+705
Freescale LayerScape with Chassis Generation 3 is a set of SoCs with ARMv8 cores and the 3rd generation of Chassis. We use a different MMU setup to support the memory map and cache attributes for these SoCs. MMU and cache are enabled very early to boost performance, especially for early development on emulators. After U-Boot relocates to DDR, a new MMU table with QBMan cache access is created in DDR. The SMMU page size is set in the SMMU_sACR register. Both DDR3 and DDR4 are supported.
Signed-off-by: York Sun <yorksun@freescale.com>
Signed-off-by: Varun Sethi <Varun.Sethi@freescale.com>
Signed-off-by: Arnab Basu <arnab.basu@freescale.com>
* ARMv8: Adjust MMU setup | York Sun | 2014-07-03 | -30/+20
Make the MMU function reusable. Platform code can set up its own MMU tables.
Signed-off-by: York Sun <yorksun@freescale.com>
CC: David Feng <fenghua@phytium.com.cn>
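The pattern behind this change, sketched in C: the generic ARMv8 code keeps one MMU routine and lets a platform supply its own page tables by overriding a weak hook. The names below (setup_pgtables, the PTE macros) are illustrative assumptions, not the exact U-Boot symbols.

    #include <stdint.h>

    #define PTE_TYPE_BLOCK          0x1ULL                   /* level-1 block descriptor */
    #define PTE_BLOCK_AF            (1ULL << 10)             /* access flag */
    #define PTE_BLOCK_MEMTYPE(idx)  ((uint64_t)(idx) << 2)   /* MAIR attribute index */

    /* Weak default: identity-map 512 GiB of normal memory with 1 GiB blocks.
     * A platform overrides this with its own memory map and attributes. */
    __attribute__((weak)) void setup_pgtables(uint64_t *level1_table)
    {
        for (int i = 0; i < 512; i++)
            level1_table[i] = ((uint64_t)i << 30) |
                              PTE_BLOCK_MEMTYPE(0) |
                              PTE_BLOCK_AF |
                              PTE_TYPE_BLOCK;
    }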
* arm64: zero cntvoff_el2 | Mark Rutland | 2014-06-09 | -1/+1
Currently cntvoff_el2 is initialised with an arbitrary bag of bits derived from the initial value of cnthctl_el2 on the current CPU. This is somewhat odd and problematic as some of these bits are UNKNOWN at reset and may differ across CPUs (which may cause an OS at EL1 to observe time going backwards across CPUs). This patch instead initialises cntvoff_el2 with xzr, giving the register a consistent value of zero on all CPUs.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Scott Wood <scottwood@freescale.com>
Cc: David Feng <fenghua@phytium.com.cn>
Cc: Tom Rini <trini@ti.com>
Acked-by: David.Feng <fenghua@phytium.com.cn>
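The fix itself is tiny: write the zero register into CNTVOFF_EL2 instead of a value derived from CNTHCTL_EL2. A minimal sketch in C with inline assembly (the actual change lives in the EL2 init path in assembly):

    /* Must execute at EL2 or EL3; zeroes the virtual counter offset so
     * EL1 sees the same notion of time on every CPU. */
    static inline void zero_cntvoff_el2(void)
    {
        __asm__ volatile("msr cntvoff_el2, xzr" ::: "memory");
    }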
* Arm64 fix a bug of vbar_el3 initialization | David Feng | 2014-05-25 | -2/+2
Signed-off-by: David Feng <fenghua@phytium.com.cn>
* arm64 patch: gicv3 support | David Feng | 2014-04-08 | -114/+39
This patch adds GICv3 support to the U-Boot armv8 platform.
Changes for v2:
- rename arm/cpu/armv8/gic.S to arm/lib/gic_64.S
- move smp_kick_all_cpus() from gic.S to start.S, as it is implementation dependent
- each core initializes its own Redistributor instead of the master initializing all Redistributors; this is advised by arnab.basu <arnab.basu@freescale.com>
Signed-off-by: David Feng <fenghua@phytium.com.cn>
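A rough sketch of what "each core initializes its own Redistributor" involves, assuming a GICv3 per-CPU Redistributor frame: the CPU clears ProcessorSleep in GICR_WAKER and waits for ChildrenAsleep to drop. Register offsets follow the GICv3 specification; the base address, helper names, and the C form are assumptions (the U-Boot code is assembly).

    #include <stdint.h>

    #define GICR_WAKER       0x0014
    #define GICR_WAKER_PS    (1u << 1)    /* ProcessorSleep */
    #define GICR_WAKER_CA    (1u << 2)    /* ChildrenAsleep */

    static inline uint32_t mmio_read32(uintptr_t a)              { return *(volatile uint32_t *)a; }
    static inline void     mmio_write32(uintptr_t a, uint32_t v) { *(volatile uint32_t *)a = v; }

    /* Wake this CPU's Redistributor; called by each core on its own frame. */
    void gicv3_wake_own_redistributor(uintptr_t redist_base)
    {
        mmio_write32(redist_base + GICR_WAKER,
                     mmio_read32(redist_base + GICR_WAKER) & ~GICR_WAKER_PS);
        while (mmio_read32(redist_base + GICR_WAKER) & GICR_WAKER_CA)
            ;   /* wait until the Redistributor is awake */
    }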
* ARMv8: fix bug for flush data cache by set/way | Leo Yan | 2014-04-07 | -3/+1
When flushing the D-cache with the set/way instruction, the way offset must be calculated as log2(Associativity); the current U-Boot code uses the formula log2(Associativity * 2 - 1) instead, so it cannot flush the data cache properly.
Signed-off-by: Leo Yan <leoy@marvell.com>
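Put differently: in the DC CISW/CSW operand the way index sits in the top bits, shifted left by 32 - log2(Associativity), so deriving the shift from (Associativity * 2 - 1) puts the way bits in the wrong position and some ways are never cleaned. A small C sketch of the corrected calculation (the real code does this in assembly with CLZ; the function name is hypothetical):

    #include <stdint.h>

    /* Build a DC CISW operand.  Way field: bits [31 : 32 - A], where
     * A = ceil(log2(assoc)).  Set field starts at log2(line length in bytes).
     * Assumes assoc >= 2 (clz(0) is undefined). */
    static uint32_t dc_cisw_operand(uint32_t way, uint32_t set, uint32_t assoc,
                                    uint32_t log2_linelen, uint32_t level)
    {
        uint32_t way_shift = __builtin_clz(assoc - 1);   /* = 32 - ceil(log2(assoc)) */
        return (way << way_shift) | (set << log2_linelen) | (level << 1);
    }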
* armv8/cache: Change cache invalidate and flush function | York Sun | 2014-04-07 | -20/+46
When the SoC first boots up, we should invalidate the cache but not flush it. We can mostly use the same function for invalidate and flush, with a wrapper. Invalidating a large cache can be slow on an emulator, so we postpone doing so until the I-cache is enabled, and before enabling the D-cache.
Signed-off-by: York Sun <yorksun@freescale.com>
CC: David Feng <fenghua@phytium.com.cn>
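A C-level sketch of the wrapper idea: one set/way walker parameterized by the operation, with invalidate-only and clean+invalidate as thin entry points. The walker body and the enum are placeholders; in U-Boot the walker itself is assembly.

    enum dcache_op { DCACHE_CLEAN_INVAL, DCACHE_INVAL_ONLY };

    /* Placeholder: walk every cache level/set/way, issuing DC ISW for
     * DCACHE_INVAL_ONLY or DC CISW for DCACHE_CLEAN_INVAL. */
    static void dcache_all_by_setway(enum dcache_op op)
    {
        (void)op;   /* set/way loop elided in this sketch */
    }

    /* First boot: cache contents are garbage, so invalidate only. */
    void invalidate_dcache_all(void) { dcache_all_by_setway(DCACHE_INVAL_ONLY); }

    /* Normal operation: dirty lines must reach memory first. */
    void flush_dcache_all(void)      { dcache_all_by_setway(DCACHE_CLEAN_INVAL); }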
* armv8/cache: Flush D-cache, invalidate I-cache for relocation | York Sun | 2014-04-07 | -6/+0
If the D-cache is enabled, we need to flush it, and invalidate the I-cache before jumping to the new location. This should be done right after relocation.
Signed-off-by: York Sun <yorksun@freescale.com>
CC: David Feng <fenghua@phytium.com.cn>
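The ordering matters: clean the D-cache so the relocated image actually reaches memory, then invalidate the I-cache so no stale instructions are fetched once execution continues at the new address. A hedged sketch using U-Boot-style cache API names (treated as assumptions here; the real sequence is in the relocation assembly):

    /* Assumed prototypes, in the style of U-Boot's arch cache API. */
    void flush_dcache_range(unsigned long start, unsigned long stop);
    void invalidate_icache_all(void);

    void prepare_jump_to_relocated_image(unsigned long new_base, unsigned long size)
    {
        /* 1. Push the copied image out of the D-cache into memory. */
        flush_dcache_range(new_base, new_base + size);

        /* 2. Drop instructions cached from the old location. */
        invalidate_icache_all();

        /* 3. Only now branch to the relocated entry point at new_base. */
    }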
* armv8/cache: Consolidate setting for MAIR and TCR | York Sun | 2014-04-07 | -25/+19
Move the settings for MAIR and TCR to cache_v8.c, to avoid conflicts with sub-architectures.
Signed-off-by: York Sun <yorksun@freescale.com>
CC: David Feng <fenghua@phytium.com.cn>
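For context, "setting MAIR and TCR" means programming two system registers before the MMU is enabled: MAIR packs up to eight memory-attribute encodings that page-table entries reference by index, and TCR selects the granule size, addressable region, and cacheability/shareability of table walks. The values below are illustrative only, not the exact U-Boot settings, and target EL2:

    #include <stdint.h>

    /* Attr0 = Device-nGnRnE (0x00), Attr1 = Normal write-back cacheable (0xff). */
    #define EXAMPLE_MAIR     ((0x00ULL << (0 * 8)) | (0xffULL << (1 * 8)))

    /* 4 KiB granule (TG0 = 0), 40-bit VA region (T0SZ = 24), write-back
     * inner/outer cacheable, inner-shareable walks, 40-bit PA (PS = 2). */
    #define EXAMPLE_TCR_EL2  ((64 - 40) | (1ULL << 8) | (1ULL << 10) | \
                              (3ULL << 12) | (2ULL << 16))

    static inline void set_mair_tcr_el2(void)
    {
        __asm__ volatile("msr mair_el2, %0" :: "r"(EXAMPLE_MAIR));
        __asm__ volatile("msr tcr_el2, %0"  :: "r"(EXAMPLE_TCR_EL2));
        __asm__ volatile("isb");
    }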
* arm: Switch to -mno-unaligned-access when supported by the compiler | Tom Rini | 2014-02-26 | -4/+1
When we tell the compiler to optimize for ARMv7 (and ARMv6 for that matter) it assumes a default of SCTLR.A being cleared and unaligned accesses being allowed and fast at the hardware level. We set this bit and must pass along -mno-unaligned-access so that the compiler will still break down accesses and not trigger a data abort.
To better help understand the requirements of the project with respect to unaligned memory access, the Documentation/unaligned-memory-access.txt file has been added as doc/README.unaligned-memory-access.txt and is taken from the v3.14-rc1 tag of the kernel.
Cc: Albert ARIBAUD <albert.u.boot@aribaud.net>
Cc: Mans Rullgard <mans@mansr.com>
Signed-off-by: Tom Rini <trini@ti.com>
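A hypothetical C illustration of the failure mode being avoided: with SCTLR.A set, an unaligned LDR/STR data-aborts, and without -mno-unaligned-access the compiler is allowed to emit exactly such an access for code like the packed member below; with the flag it falls back to byte-sized accesses. The struct and function are made up for illustration.

    #include <stdint.h>

    struct __attribute__((packed)) header {
        uint8_t  type;
        uint32_t length;   /* at offset 1: not 4-byte aligned */
    };

    uint32_t header_length(const struct header *h)
    {
        /* Compiled as byte loads under -mno-unaligned-access; may become a
         * single unaligned word load when unaligned access is assumed. */
        return h->length;
    }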
* arm64: core support | David Feng | 2014-01-09 | -0/+1050
Relocation code based on a patch by Scott Wood, which is:
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: David Feng <fenghua@phytium.com.cn>