The official Mbed 2 C/C++ SDK provides the software platform and libraries to build your applications.

Dependents:   hello SerialTestv11 SerialTestv12 Sierpinski ... more

mbed 2

This is the mbed 2 library. If you'd like to learn about Mbed OS, please see the mbed-os docs.
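For orientation only (this is not part of the release below), a minimal mbed 2 application built against this library looks roughly like the following sketch. It assumes a board whose on-board LED is exposed through the conventional LED1 pin name.

    #include "mbed.h"

    DigitalOut led(LED1);      // on-board LED; assumes the target defines LED1

    int main() {
        while (1) {
            led = !led;        // toggle the LED
            wait(0.5);         // mbed 2 blocking wait, in seconds
        }
    }

Projects such as those listed under Dependents above link against a specific revision of this library in much the same way.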

Committer: <>
Date: Mon Jan 16 12:05:23 2017 +0000
Revision: 134:ad3be0349dc5
Parent: 132:9baf128c2fab

Release 134 of the mbed library

Ports for Upcoming Targets


Fixes and Changes

3488: Dev stm i2c v2 unitary functions https://github.com/ARMmbed/mbed-os/pull/3488
3492: Fix #3463 CAN read() return value https://github.com/ARMmbed/mbed-os/pull/3492
3503: [LPC15xx] Ensure that PWM=1 is resolved correctly https://github.com/ARMmbed/mbed-os/pull/3503
3504: [LPC15xx] CAN implementation improvements https://github.com/ARMmbed/mbed-os/pull/3504
3539: NUCLEO_F412ZG - Add support of TRNG peripheral https://github.com/ARMmbed/mbed-os/pull/3539
3540: STM: SPI: Initialize Rx in spi_master_write https://github.com/ARMmbed/mbed-os/pull/3540
3438: K64F: Add support for SERIAL ASYNCH API https://github.com/ARMmbed/mbed-os/pull/3438
3519: MCUXpresso: Fix ENET driver to enable interrupts after interrupt handler is set https://github.com/ARMmbed/mbed-os/pull/3519
3544: STM32L4 deepsleep improvement https://github.com/ARMmbed/mbed-os/pull/3544
3546: NUCLEO-F412ZG - Add CAN peripheral https://github.com/ARMmbed/mbed-os/pull/3546
3551: Fix I2C driver for RZ/A1H https://github.com/ARMmbed/mbed-os/pull/3551
3558: K64F UART Asynch API: Fix synchronization issue https://github.com/ARMmbed/mbed-os/pull/3558
3563: LPC4088 - Fix vector checksum https://github.com/ARMmbed/mbed-os/pull/3563
3567: Dev stm32 F0 v1.7.0 https://github.com/ARMmbed/mbed-os/pull/3567
3577: Fixes linking errors when building with debug profile https://github.com/ARMmbed/mbed-os/pull/3577

Who changed what in which revision?

User    Revision    Line number    New contents of line
<> 132:9baf128c2fab 1 /**************************************************************************//**
<> 132:9baf128c2fab 2 * @file core_caFunc.h
<> 132:9baf128c2fab 3 * @brief CMSIS Cortex-A Core Function Access Header File
<> 132:9baf128c2fab 4 * @version V3.10
<> 132:9baf128c2fab 5 * @date 30 Oct 2013
<> 132:9baf128c2fab 6 *
<> 132:9baf128c2fab 7 * @note
<> 132:9baf128c2fab 8 *
<> 132:9baf128c2fab 9 ******************************************************************************/
<> 132:9baf128c2fab 10 /* Copyright (c) 2009 - 2013 ARM LIMITED
<> 132:9baf128c2fab 11
<> 132:9baf128c2fab 12 All rights reserved.
<> 132:9baf128c2fab 13 Redistribution and use in source and binary forms, with or without
<> 132:9baf128c2fab 14 modification, are permitted provided that the following conditions are met:
<> 132:9baf128c2fab 15 - Redistributions of source code must retain the above copyright
<> 132:9baf128c2fab 16 notice, this list of conditions and the following disclaimer.
<> 132:9baf128c2fab 17 - Redistributions in binary form must reproduce the above copyright
<> 132:9baf128c2fab 18 notice, this list of conditions and the following disclaimer in the
<> 132:9baf128c2fab 19 documentation and/or other materials provided with the distribution.
<> 132:9baf128c2fab 20 - Neither the name of ARM nor the names of its contributors may be used
<> 132:9baf128c2fab 21 to endorse or promote products derived from this software without
<> 132:9baf128c2fab 22 specific prior written permission.
<> 132:9baf128c2fab 23 *
<> 132:9baf128c2fab 24 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
<> 132:9baf128c2fab 25 AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
<> 132:9baf128c2fab 26 IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
<> 132:9baf128c2fab 27 ARE DISCLAIMED. IN NO EVENT SHALL COPYRIGHT HOLDERS AND CONTRIBUTORS BE
<> 132:9baf128c2fab 28 LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
<> 132:9baf128c2fab 29 CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
<> 132:9baf128c2fab 30 SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
<> 132:9baf128c2fab 31 INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
<> 132:9baf128c2fab 32 CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
<> 132:9baf128c2fab 33 ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
<> 132:9baf128c2fab 34 POSSIBILITY OF SUCH DAMAGE.
<> 132:9baf128c2fab 35 ---------------------------------------------------------------------------*/
<> 132:9baf128c2fab 36
<> 132:9baf128c2fab 37
<> 132:9baf128c2fab 38 #ifndef __CORE_CAFUNC_H__
<> 132:9baf128c2fab 39 #define __CORE_CAFUNC_H__
<> 132:9baf128c2fab 40
<> 132:9baf128c2fab 41
<> 132:9baf128c2fab 42 /* ########################### Core Function Access ########################### */
<> 132:9baf128c2fab 43 /** \ingroup CMSIS_Core_FunctionInterface
<> 132:9baf128c2fab 44 \defgroup CMSIS_Core_RegAccFunctions CMSIS Core Register Access Functions
<> 132:9baf128c2fab 45 @{
<> 132:9baf128c2fab 46 */
<> 132:9baf128c2fab 47
<> 132:9baf128c2fab 48 #if defined ( __CC_ARM ) /*------------------RealView Compiler -----------------*/
<> 132:9baf128c2fab 49 /* ARM armcc specific functions */
<> 132:9baf128c2fab 50
<> 132:9baf128c2fab 51 #if (__ARMCC_VERSION < 400677)
<> 132:9baf128c2fab 52 #error "Please use ARM Compiler Toolchain V4.0.677 or later!"
<> 132:9baf128c2fab 53 #endif
<> 132:9baf128c2fab 54
<> 132:9baf128c2fab 55 #define MODE_USR 0x10
<> 132:9baf128c2fab 56 #define MODE_FIQ 0x11
<> 132:9baf128c2fab 57 #define MODE_IRQ 0x12
<> 132:9baf128c2fab 58 #define MODE_SVC 0x13
<> 132:9baf128c2fab 59 #define MODE_MON 0x16
<> 132:9baf128c2fab 60 #define MODE_ABT 0x17
<> 132:9baf128c2fab 61 #define MODE_HYP 0x1A
<> 132:9baf128c2fab 62 #define MODE_UND 0x1B
<> 132:9baf128c2fab 63 #define MODE_SYS 0x1F
<> 132:9baf128c2fab 64
<> 132:9baf128c2fab 65 /** \brief Get APSR Register
<> 132:9baf128c2fab 66
<> 132:9baf128c2fab 67 This function returns the content of the APSR Register.
<> 132:9baf128c2fab 68
<> 132:9baf128c2fab 69 \return APSR Register value
<> 132:9baf128c2fab 70 */
<> 132:9baf128c2fab 71 __STATIC_INLINE uint32_t __get_APSR(void)
<> 132:9baf128c2fab 72 {
<> 132:9baf128c2fab 73 register uint32_t __regAPSR __ASM("apsr");
<> 132:9baf128c2fab 74 return(__regAPSR);
<> 132:9baf128c2fab 75 }
<> 132:9baf128c2fab 76
<> 132:9baf128c2fab 77
<> 132:9baf128c2fab 78 /** \brief Get CPSR Register
<> 132:9baf128c2fab 79
<> 132:9baf128c2fab 80 This function returns the content of the CPSR Register.
<> 132:9baf128c2fab 81
<> 132:9baf128c2fab 82 \return CPSR Register value
<> 132:9baf128c2fab 83 */
<> 132:9baf128c2fab 84 __STATIC_INLINE uint32_t __get_CPSR(void)
<> 132:9baf128c2fab 85 {
<> 132:9baf128c2fab 86 register uint32_t __regCPSR __ASM("cpsr");
<> 132:9baf128c2fab 87 return(__regCPSR);
<> 132:9baf128c2fab 88 }
<> 132:9baf128c2fab 89
<> 132:9baf128c2fab 90 /** \brief Set Stack Pointer
<> 132:9baf128c2fab 91
<> 132:9baf128c2fab 92 This function assigns the given value to the current stack pointer.
<> 132:9baf128c2fab 93
<> 132:9baf128c2fab 94 \param [in] topOfStack Stack Pointer value to set
<> 132:9baf128c2fab 95 */
<> 132:9baf128c2fab 96 register uint32_t __regSP __ASM("sp");
<> 132:9baf128c2fab 97 __STATIC_INLINE void __set_SP(uint32_t topOfStack)
<> 132:9baf128c2fab 98 {
<> 132:9baf128c2fab 99 __regSP = topOfStack;
<> 132:9baf128c2fab 100 }
<> 132:9baf128c2fab 101
<> 132:9baf128c2fab 102
<> 132:9baf128c2fab 103 /** \brief Get link register
<> 132:9baf128c2fab 104
<> 132:9baf128c2fab 105 This function returns the value of the link register
<> 132:9baf128c2fab 106
<> 132:9baf128c2fab 107 \return Value of link register
<> 132:9baf128c2fab 108 */
<> 132:9baf128c2fab 109 register uint32_t __reglr __ASM("lr");
<> 132:9baf128c2fab 110 __STATIC_INLINE uint32_t __get_LR(void)
<> 132:9baf128c2fab 111 {
<> 132:9baf128c2fab 112 return(__reglr);
<> 132:9baf128c2fab 113 }
<> 132:9baf128c2fab 114
<> 132:9baf128c2fab 115 /** \brief Set link register
<> 132:9baf128c2fab 116
<> 132:9baf128c2fab 117 This function sets the value of the link register
<> 132:9baf128c2fab 118
<> 132:9baf128c2fab 119 \param [in] lr LR value to set
<> 132:9baf128c2fab 120 */
<> 132:9baf128c2fab 121 __STATIC_INLINE void __set_LR(uint32_t lr)
<> 132:9baf128c2fab 122 {
<> 132:9baf128c2fab 123 __reglr = lr;
<> 132:9baf128c2fab 124 }
<> 132:9baf128c2fab 125
<> 132:9baf128c2fab 126 /** \brief Set Process Stack Pointer
<> 132:9baf128c2fab 127
<> 132:9baf128c2fab 128 This function assigns the given value to the USR/SYS Stack Pointer (PSP).
<> 132:9baf128c2fab 129
<> 132:9baf128c2fab 130 \param [in] topOfProcStack USR/SYS Stack Pointer value to set
<> 132:9baf128c2fab 131 */
<> 132:9baf128c2fab 132 __STATIC_ASM void __set_PSP(uint32_t topOfProcStack)
<> 132:9baf128c2fab 133 {
<> 132:9baf128c2fab 134 ARM
<> 132:9baf128c2fab 135 PRESERVE8
<> 132:9baf128c2fab 136
<> 132:9baf128c2fab 137 BIC R0, R0, #7 ;ensure stack is 8-byte aligned
<> 132:9baf128c2fab 138 MRS R1, CPSR
<> 132:9baf128c2fab 139 CPS #MODE_SYS ;no effect in USR mode
<> 132:9baf128c2fab 140 MOV SP, R0
<> 132:9baf128c2fab 141 MSR CPSR_c, R1 ;no effect in USR mode
<> 132:9baf128c2fab 142 ISB
<> 132:9baf128c2fab 143 BX LR
<> 132:9baf128c2fab 144
<> 132:9baf128c2fab 145 }
<> 132:9baf128c2fab 146
<> 132:9baf128c2fab 147 /** \brief Set User Mode
<> 132:9baf128c2fab 148
<> 132:9baf128c2fab 149 This function changes the processor state to User Mode
<> 132:9baf128c2fab 150 */
<> 132:9baf128c2fab 151 __STATIC_ASM void __set_CPS_USR(void)
<> 132:9baf128c2fab 152 {
<> 132:9baf128c2fab 153 ARM
<> 132:9baf128c2fab 154
<> 132:9baf128c2fab 155 CPS #MODE_USR
<> 132:9baf128c2fab 156 BX LR
<> 132:9baf128c2fab 157 }
<> 132:9baf128c2fab 158
<> 132:9baf128c2fab 159
<> 132:9baf128c2fab 160 /** \brief Enable FIQ
<> 132:9baf128c2fab 161
<> 132:9baf128c2fab 162 This function enables FIQ interrupts by clearing the F-bit in the CPSR.
<> 132:9baf128c2fab 163 Can only be executed in Privileged modes.
<> 132:9baf128c2fab 164 */
<> 132:9baf128c2fab 165 #define __enable_fault_irq __enable_fiq
<> 132:9baf128c2fab 166
<> 132:9baf128c2fab 167
<> 132:9baf128c2fab 168 /** \brief Disable FIQ
<> 132:9baf128c2fab 169
<> 132:9baf128c2fab 170 This function disables FIQ interrupts by setting the F-bit in the CPSR.
<> 132:9baf128c2fab 171 Can only be executed in Privileged modes.
<> 132:9baf128c2fab 172 */
<> 132:9baf128c2fab 173 #define __disable_fault_irq __disable_fiq
<> 132:9baf128c2fab 174
<> 132:9baf128c2fab 175
<> 132:9baf128c2fab 176 /** \brief Get FPSCR
<> 132:9baf128c2fab 177
<> 132:9baf128c2fab 178 This function returns the current value of the Floating Point Status/Control register.
<> 132:9baf128c2fab 179
<> 132:9baf128c2fab 180 \return Floating Point Status/Control register value
<> 132:9baf128c2fab 181 */
<> 132:9baf128c2fab 182 __STATIC_INLINE uint32_t __get_FPSCR(void)
<> 132:9baf128c2fab 183 {
<> 132:9baf128c2fab 184 #if (__FPU_PRESENT == 1) && (__FPU_USED == 1)
<> 132:9baf128c2fab 185 register uint32_t __regfpscr __ASM("fpscr");
<> 132:9baf128c2fab 186 return(__regfpscr);
<> 132:9baf128c2fab 187 #else
<> 132:9baf128c2fab 188 return(0);
<> 132:9baf128c2fab 189 #endif
<> 132:9baf128c2fab 190 }
<> 132:9baf128c2fab 191
<> 132:9baf128c2fab 192
<> 132:9baf128c2fab 193 /** \brief Set FPSCR
<> 132:9baf128c2fab 194
<> 132:9baf128c2fab 195 This function assigns the given value to the Floating Point Status/Control register.
<> 132:9baf128c2fab 196
<> 132:9baf128c2fab 197 \param [in] fpscr Floating Point Status/Control value to set
<> 132:9baf128c2fab 198 */
<> 132:9baf128c2fab 199 __STATIC_INLINE void __set_FPSCR(uint32_t fpscr)
<> 132:9baf128c2fab 200 {
<> 132:9baf128c2fab 201 #if (__FPU_PRESENT == 1) && (__FPU_USED == 1)
<> 132:9baf128c2fab 202 register uint32_t __regfpscr __ASM("fpscr");
<> 132:9baf128c2fab 203 __regfpscr = (fpscr);
<> 132:9baf128c2fab 204 #endif
<> 132:9baf128c2fab 205 }
<> 132:9baf128c2fab 206
<> 132:9baf128c2fab 207 /** \brief Get FPEXC
<> 132:9baf128c2fab 208
<> 132:9baf128c2fab 209 This function returns the current value of the Floating Point Exception Control register.
<> 132:9baf128c2fab 210
<> 132:9baf128c2fab 211 \return Floating Point Exception Control register value
<> 132:9baf128c2fab 212 */
<> 132:9baf128c2fab 213 __STATIC_INLINE uint32_t __get_FPEXC(void)
<> 132:9baf128c2fab 214 {
<> 132:9baf128c2fab 215 #if (__FPU_PRESENT == 1)
<> 132:9baf128c2fab 216 register uint32_t __regfpexc __ASM("fpexc");
<> 132:9baf128c2fab 217 return(__regfpexc);
<> 132:9baf128c2fab 218 #else
<> 132:9baf128c2fab 219 return(0);
<> 132:9baf128c2fab 220 #endif
<> 132:9baf128c2fab 221 }
<> 132:9baf128c2fab 222
<> 132:9baf128c2fab 223
<> 132:9baf128c2fab 224 /** \brief Set FPEXC
<> 132:9baf128c2fab 225
<> 132:9baf128c2fab 226 This function assigns the given value to the Floating Point Exception Control register.
<> 132:9baf128c2fab 227
<> 132:9baf128c2fab 228 \param [in] fpexc Floating Point Exception Control value to set
<> 132:9baf128c2fab 229 */
<> 132:9baf128c2fab 230 __STATIC_INLINE void __set_FPEXC(uint32_t fpexc)
<> 132:9baf128c2fab 231 {
<> 132:9baf128c2fab 232 #if (__FPU_PRESENT == 1)
<> 132:9baf128c2fab 233 register uint32_t __regfpexc __ASM("fpexc");
<> 132:9baf128c2fab 234 __regfpexc = (fpexc);
<> 132:9baf128c2fab 235 #endif
<> 132:9baf128c2fab 236 }
<> 132:9baf128c2fab 237
<> 132:9baf128c2fab 238 /** \brief Get CPACR
<> 132:9baf128c2fab 239
<> 132:9baf128c2fab 240 This function returns the current value of the Coprocessor Access Control register.
<> 132:9baf128c2fab 241
<> 132:9baf128c2fab 242 \return Coprocessor Access Control register value
<> 132:9baf128c2fab 243 */
<> 132:9baf128c2fab 244 __STATIC_INLINE uint32_t __get_CPACR(void)
<> 132:9baf128c2fab 245 {
<> 132:9baf128c2fab 246 register uint32_t __regCPACR __ASM("cp15:0:c1:c0:2");
<> 132:9baf128c2fab 247 return __regCPACR;
<> 132:9baf128c2fab 248 }
<> 132:9baf128c2fab 249
<> 132:9baf128c2fab 250 /** \brief Set CPACR
<> 132:9baf128c2fab 251
<> 132:9baf128c2fab 252 This function assigns the given value to the Coprocessor Access Control register.
<> 132:9baf128c2fab 253
<> 132:9baf128c2fab 254 \param [in] cpacr Coprocessor Access Control value to set
<> 132:9baf128c2fab 255 */
<> 132:9baf128c2fab 256 __STATIC_INLINE void __set_CPACR(uint32_t cpacr)
<> 132:9baf128c2fab 257 {
<> 132:9baf128c2fab 258 register uint32_t __regCPACR __ASM("cp15:0:c1:c0:2");
<> 132:9baf128c2fab 259 __regCPACR = cpacr;
<> 132:9baf128c2fab 260 __ISB();
<> 132:9baf128c2fab 261 }
<> 132:9baf128c2fab 262
<> 132:9baf128c2fab 263 /** \brief Get CBAR
<> 132:9baf128c2fab 264
<> 132:9baf128c2fab 265 This function returns the value of the Configuration Base Address register.
<> 132:9baf128c2fab 266
<> 132:9baf128c2fab 267 \return Configuration Base Address register value
<> 132:9baf128c2fab 268 */
<> 132:9baf128c2fab 269 __STATIC_INLINE uint32_t __get_CBAR() {
<> 132:9baf128c2fab 270 register uint32_t __regCBAR __ASM("cp15:4:c15:c0:0");
<> 132:9baf128c2fab 271 return(__regCBAR);
<> 132:9baf128c2fab 272 }
<> 132:9baf128c2fab 273
<> 132:9baf128c2fab 274 /** \brief Get TTBR0
<> 132:9baf128c2fab 275
<> 132:9baf128c2fab 276 This function returns the value of the Translation Table Base Register 0.
<> 132:9baf128c2fab 277
<> 132:9baf128c2fab 278 \return Translation Table Base Register 0 value
<> 132:9baf128c2fab 279 */
<> 132:9baf128c2fab 280 __STATIC_INLINE uint32_t __get_TTBR0() {
<> 132:9baf128c2fab 281 register uint32_t __regTTBR0 __ASM("cp15:0:c2:c0:0");
<> 132:9baf128c2fab 282 return(__regTTBR0);
<> 132:9baf128c2fab 283 }
<> 132:9baf128c2fab 284
<> 132:9baf128c2fab 285 /** \brief Set TTBR0
<> 132:9baf128c2fab 286
<> 132:9baf128c2fab 287 This function assigns the given value to the Translation Table Base Register 0.
<> 132:9baf128c2fab 288
<> 132:9baf128c2fab 289 \param [in] ttbr0 Translation Table Base Register 0 value to set
<> 132:9baf128c2fab 290 */
<> 132:9baf128c2fab 291 __STATIC_INLINE void __set_TTBR0(uint32_t ttbr0) {
<> 132:9baf128c2fab 292 register uint32_t __regTTBR0 __ASM("cp15:0:c2:c0:0");
<> 132:9baf128c2fab 293 __regTTBR0 = ttbr0;
<> 132:9baf128c2fab 294 __ISB();
<> 132:9baf128c2fab 295 }
<> 132:9baf128c2fab 296
<> 132:9baf128c2fab 297 /** \brief Get DACR
<> 132:9baf128c2fab 298
<> 132:9baf128c2fab 299 This function returns the value of the Domain Access Control Register.
<> 132:9baf128c2fab 300
<> 132:9baf128c2fab 301 \return Domain Access Control Register value
<> 132:9baf128c2fab 302 */
<> 132:9baf128c2fab 303 __STATIC_INLINE uint32_t __get_DACR() {
<> 132:9baf128c2fab 304 register uint32_t __regDACR __ASM("cp15:0:c3:c0:0");
<> 132:9baf128c2fab 305 return(__regDACR);
<> 132:9baf128c2fab 306 }
<> 132:9baf128c2fab 307
<> 132:9baf128c2fab 308 /** \brief Set DACR
<> 132:9baf128c2fab 309
<> 132:9baf128c2fab 310 This function assigns the given value to the Domain Access Control Register.
<> 132:9baf128c2fab 311
<> 132:9baf128c2fab 312 \param [in] dacr Domain Access Control Register value to set
<> 132:9baf128c2fab 313 */
<> 132:9baf128c2fab 314 __STATIC_INLINE void __set_DACR(uint32_t dacr) {
<> 132:9baf128c2fab 315 register uint32_t __regDACR __ASM("cp15:0:c3:c0:0");
<> 132:9baf128c2fab 316 __regDACR = dacr;
<> 132:9baf128c2fab 317 __ISB();
<> 132:9baf128c2fab 318 }
<> 132:9baf128c2fab 319
<> 132:9baf128c2fab 320 /******************************** Cache and BTAC enable ****************************************************/
<> 132:9baf128c2fab 321
<> 132:9baf128c2fab 322 /** \brief Set SCTLR
<> 132:9baf128c2fab 323
<> 132:9baf128c2fab 324 This function assigns the given value to the System Control Register.
<> 132:9baf128c2fab 325
<> 132:9baf128c2fab 326 \param [in] sctlr System Control Register value to set
<> 132:9baf128c2fab 327 */
<> 132:9baf128c2fab 328 __STATIC_INLINE void __set_SCTLR(uint32_t sctlr)
<> 132:9baf128c2fab 329 {
<> 132:9baf128c2fab 330 register uint32_t __regSCTLR __ASM("cp15:0:c1:c0:0");
<> 132:9baf128c2fab 331 __regSCTLR = sctlr;
<> 132:9baf128c2fab 332 }
<> 132:9baf128c2fab 333
<> 132:9baf128c2fab 334 /** \brief Get SCTLR
<> 132:9baf128c2fab 335
<> 132:9baf128c2fab 336 This function returns the value of the System Control Register.
<> 132:9baf128c2fab 337
<> 132:9baf128c2fab 338 \return System Control Register value
<> 132:9baf128c2fab 339 */
<> 132:9baf128c2fab 340 __STATIC_INLINE uint32_t __get_SCTLR() {
<> 132:9baf128c2fab 341 register uint32_t __regSCTLR __ASM("cp15:0:c1:c0:0");
<> 132:9baf128c2fab 342 return(__regSCTLR);
<> 132:9baf128c2fab 343 }
<> 132:9baf128c2fab 344
<> 132:9baf128c2fab 345 /** \brief Enable Caches
<> 132:9baf128c2fab 346
<> 132:9baf128c2fab 347 Enable Caches
<> 132:9baf128c2fab 348 */
<> 132:9baf128c2fab 349 __STATIC_INLINE void __enable_caches(void) {
<> 132:9baf128c2fab 350 // Set I bit 12 to enable I Cache
<> 132:9baf128c2fab 351 // Set C bit 2 to enable D Cache
<> 132:9baf128c2fab 352 __set_SCTLR( __get_SCTLR() | (1 << 12) | (1 << 2));
<> 132:9baf128c2fab 353 }
<> 132:9baf128c2fab 354
<> 132:9baf128c2fab 355 /** \brief Disable Caches
<> 132:9baf128c2fab 356
<> 132:9baf128c2fab 357 Disable Caches
<> 132:9baf128c2fab 358 */
<> 132:9baf128c2fab 359 __STATIC_INLINE void __disable_caches(void) {
<> 132:9baf128c2fab 360 // Clear I bit 12 to disable I Cache
<> 132:9baf128c2fab 361 // Clear C bit 2 to disable D Cache
<> 132:9baf128c2fab 362 __set_SCTLR( __get_SCTLR() & ~(1 << 12) & ~(1 << 2));
<> 132:9baf128c2fab 363 __ISB();
<> 132:9baf128c2fab 364 }
<> 132:9baf128c2fab 365
<> 132:9baf128c2fab 366 /** \brief Enable BTAC
<> 132:9baf128c2fab 367
<> 132:9baf128c2fab 368 Enable BTAC
<> 132:9baf128c2fab 369 */
<> 132:9baf128c2fab 370 __STATIC_INLINE void __enable_btac(void) {
<> 132:9baf128c2fab 371 // Set Z bit 11 to enable branch prediction
<> 132:9baf128c2fab 372 __set_SCTLR( __get_SCTLR() | (1 << 11));
<> 132:9baf128c2fab 373 __ISB();
<> 132:9baf128c2fab 374 }
<> 132:9baf128c2fab 375
<> 132:9baf128c2fab 376 /** \brief Disable BTAC
<> 132:9baf128c2fab 377
<> 132:9baf128c2fab 378 Disable BTAC
<> 132:9baf128c2fab 379 */
<> 132:9baf128c2fab 380 __STATIC_INLINE void __disable_btac(void) {
<> 132:9baf128c2fab 381 // Clear Z bit 11 to disable branch prediction
<> 132:9baf128c2fab 382 __set_SCTLR( __get_SCTLR() & ~(1 << 11));
<> 132:9baf128c2fab 383 }
<> 132:9baf128c2fab 384
<> 132:9baf128c2fab 385
<> 132:9baf128c2fab 386 /** \brief Enable MMU
<> 132:9baf128c2fab 387
<> 132:9baf128c2fab 388 Enable MMU
<> 132:9baf128c2fab 389 */
<> 132:9baf128c2fab 390 __STATIC_INLINE void __enable_mmu(void) {
<> 132:9baf128c2fab 391 // Set M bit 0 to enable the MMU
<> 132:9baf128c2fab 392 // Set AFE bit to enable simplified access permissions model
<> 132:9baf128c2fab 393 // Clear TRE bit to disable TEX remap and A bit to disable strict alignment fault checking
<> 132:9baf128c2fab 394 __set_SCTLR( (__get_SCTLR() & ~(1 << 28) & ~(1 << 1)) | 1 | (1 << 29));
<> 132:9baf128c2fab 395 __ISB();
<> 132:9baf128c2fab 396 }
<> 132:9baf128c2fab 397
<> 132:9baf128c2fab 398 /** \brief Disable MMU
<> 132:9baf128c2fab 399
<> 132:9baf128c2fab 400 Disable MMU
<> 132:9baf128c2fab 401 */
<> 132:9baf128c2fab 402 __STATIC_INLINE void __disable_mmu(void) {
<> 132:9baf128c2fab 403 // Clear M bit 0 to disable the MMU
<> 132:9baf128c2fab 404 __set_SCTLR( __get_SCTLR() & ~1);
<> 132:9baf128c2fab 405 __ISB();
<> 132:9baf128c2fab 406 }
<> 132:9baf128c2fab 407
<> 132:9baf128c2fab 408 /******************************** TLB maintenance operations ************************************************/
<> 132:9baf128c2fab 409 /** \brief Invalidate the whole tlb
<> 132:9baf128c2fab 410
<> 132:9baf128c2fab 411 TLBIALL. Invalidate the whole tlb
<> 132:9baf128c2fab 412 */
<> 132:9baf128c2fab 413
<> 132:9baf128c2fab 414 __STATIC_INLINE void __ca9u_inv_tlb_all(void) {
<> 132:9baf128c2fab 415 register uint32_t __TLBIALL __ASM("cp15:0:c8:c7:0");
<> 132:9baf128c2fab 416 __TLBIALL = 0;
<> 132:9baf128c2fab 417 __DSB();
<> 132:9baf128c2fab 418 __ISB();
<> 132:9baf128c2fab 419 }
<> 132:9baf128c2fab 420
<> 132:9baf128c2fab 421 /******************************** BTB maintenance operations ************************************************/
<> 132:9baf128c2fab 422 /** \brief Invalidate entire branch predictor array
<> 132:9baf128c2fab 423
<> 132:9baf128c2fab 424 BPIALL. Branch Predictor Invalidate All.
<> 132:9baf128c2fab 425 */
<> 132:9baf128c2fab 426
<> 132:9baf128c2fab 427 __STATIC_INLINE void __v7_inv_btac(void) {
<> 132:9baf128c2fab 428 register uint32_t __BPIALL __ASM("cp15:0:c7:c5:6");
<> 132:9baf128c2fab 429 __BPIALL = 0;
<> 132:9baf128c2fab 430 __DSB(); //ensure completion of the invalidation
<> 132:9baf128c2fab 431 __ISB(); //ensure instruction fetch path sees new state
<> 132:9baf128c2fab 432 }
<> 132:9baf128c2fab 433
<> 132:9baf128c2fab 434
<> 132:9baf128c2fab 435 /******************************** L1 cache operations ******************************************************/
<> 132:9baf128c2fab 436
<> 132:9baf128c2fab 437 /** \brief Invalidate the whole I$
<> 132:9baf128c2fab 438
<> 132:9baf128c2fab 439 ICIALLU. Instruction Cache Invalidate All to PoU
<> 132:9baf128c2fab 440 */
<> 132:9baf128c2fab 441 __STATIC_INLINE void __v7_inv_icache_all(void) {
<> 132:9baf128c2fab 442 register uint32_t __ICIALLU __ASM("cp15:0:c7:c5:0");
<> 132:9baf128c2fab 443 __ICIALLU = 0;
<> 132:9baf128c2fab 444 __DSB(); //ensure completion of the invalidation
<> 132:9baf128c2fab 445 __ISB(); //ensure instruction fetch path sees new I cache state
<> 132:9baf128c2fab 446 }
<> 132:9baf128c2fab 447
<> 132:9baf128c2fab 448 /** \brief Clean D$ by MVA
<> 132:9baf128c2fab 449
<> 132:9baf128c2fab 450 DCCMVAC. Data cache clean by MVA to PoC
<> 132:9baf128c2fab 451 */
<> 132:9baf128c2fab 452 __STATIC_INLINE void __v7_clean_dcache_mva(void *va) {
<> 132:9baf128c2fab 453 register uint32_t __DCCMVAC __ASM("cp15:0:c7:c10:1");
<> 132:9baf128c2fab 454 __DCCMVAC = (uint32_t)va;
<> 132:9baf128c2fab 455 __DMB(); //ensure the ordering of data cache maintenance operations and their effects
<> 132:9baf128c2fab 456 }
<> 132:9baf128c2fab 457
<> 132:9baf128c2fab 458 /** \brief Invalidate D$ by MVA
<> 132:9baf128c2fab 459
<> 132:9baf128c2fab 460 DCIMVAC. Data cache invalidate by MVA to PoC
<> 132:9baf128c2fab 461 */
<> 132:9baf128c2fab 462 __STATIC_INLINE void __v7_inv_dcache_mva(void *va) {
<> 132:9baf128c2fab 463 register uint32_t __DCIMVAC __ASM("cp15:0:c7:c6:1");
<> 132:9baf128c2fab 464 __DCIMVAC = (uint32_t)va;
<> 132:9baf128c2fab 465 __DMB(); //ensure the ordering of data cache maintenance operations and their effects
<> 132:9baf128c2fab 466 }
<> 132:9baf128c2fab 467
<> 132:9baf128c2fab 468 /** \brief Clean and Invalidate D$ by MVA
<> 132:9baf128c2fab 469
<> 132:9baf128c2fab 470 DCCIMVAC. Data cache clean and invalidate by MVA to PoC
<> 132:9baf128c2fab 471 */
<> 132:9baf128c2fab 472 __STATIC_INLINE void __v7_clean_inv_dcache_mva(void *va) {
<> 132:9baf128c2fab 473 register uint32_t __DCCIMVAC __ASM("cp15:0:c7:c14:1");
<> 132:9baf128c2fab 474 __DCCIMVAC = (uint32_t)va;
<> 132:9baf128c2fab 475 __DMB(); //ensure the ordering of data cache maintenance operations and their effects
<> 132:9baf128c2fab 476 }
<> 132:9baf128c2fab 477
<> 132:9baf128c2fab 478 /** \brief Clean and Invalidate the entire data or unified cache
<> 132:9baf128c2fab 479
<> 132:9baf128c2fab 480 Generic mechanism for cleaning/invalidating the entire data or unified cache to the point of coherency.
<> 132:9baf128c2fab 481 */
<> 132:9baf128c2fab 482 #pragma push
<> 132:9baf128c2fab 483 #pragma arm
<> 132:9baf128c2fab 484 __STATIC_ASM void __v7_all_cache(uint32_t op) {
<> 132:9baf128c2fab 485 ARM
<> 132:9baf128c2fab 486
<> 132:9baf128c2fab 487 PUSH {R4-R11}
<> 132:9baf128c2fab 488
<> 132:9baf128c2fab 489 MRC p15, 1, R6, c0, c0, 1 // Read CLIDR
<> 132:9baf128c2fab 490 ANDS R3, R6, #0x07000000 // Extract coherency level
<> 132:9baf128c2fab 491 MOV R3, R3, LSR #23 // Total cache levels << 1
<> 132:9baf128c2fab 492 BEQ Finished // If 0, no need to clean
<> 132:9baf128c2fab 493
<> 132:9baf128c2fab 494 MOV R10, #0 // R10 holds current cache level << 1
<> 132:9baf128c2fab 495 Loop1 ADD R2, R10, R10, LSR #1 // R2 holds cache "Set" position
<> 132:9baf128c2fab 496 MOV R1, R6, LSR R2 // Bottom 3 bits are the Cache-type for this level
<> 132:9baf128c2fab 497 AND R1, R1, #7 // Isolate those lower 3 bits
<> 132:9baf128c2fab 498 CMP R1, #2
<> 132:9baf128c2fab 499 BLT Skip // No cache or only instruction cache at this level
<> 132:9baf128c2fab 500
<> 132:9baf128c2fab 501 MCR p15, 2, R10, c0, c0, 0 // Write the Cache Size selection register
<> 132:9baf128c2fab 502 ISB // ISB to sync the change to the CacheSizeID reg
<> 132:9baf128c2fab 503 MRC p15, 1, R1, c0, c0, 0 // Reads current Cache Size ID register
<> 132:9baf128c2fab 504 AND R2, R1, #7 // Extract the line length field
<> 132:9baf128c2fab 505 ADD R2, R2, #4 // Add 4 for the line length offset (log2 16 bytes)
<> 132:9baf128c2fab 506 LDR R4, =0x3FF
<> 132:9baf128c2fab 507 ANDS R4, R4, R1, LSR #3 // R4 is the max number on the way size (right aligned)
<> 132:9baf128c2fab 508 CLZ R5, R4 // R5 is the bit position of the way size increment
<> 132:9baf128c2fab 509 LDR R7, =0x7FFF
<> 132:9baf128c2fab 510 ANDS R7, R7, R1, LSR #13 // R7 is the max number of the index size (right aligned)
<> 132:9baf128c2fab 511
<> 132:9baf128c2fab 512 Loop2 MOV R9, R4 // R9 working copy of the max way size (right aligned)
<> 132:9baf128c2fab 513
<> 132:9baf128c2fab 514 Loop3 ORR R11, R10, R9, LSL R5 // Factor in the Way number and cache number into R11
<> 132:9baf128c2fab 515 ORR R11, R11, R7, LSL R2 // Factor in the Set number
<> 132:9baf128c2fab 516 CMP R0, #0
<> 132:9baf128c2fab 517 BNE Dccsw
<> 132:9baf128c2fab 518 MCR p15, 0, R11, c7, c6, 2 // DCISW. Invalidate by Set/Way
<> 132:9baf128c2fab 519 B cont
<> 132:9baf128c2fab 520 Dccsw CMP R0, #1
<> 132:9baf128c2fab 521 BNE Dccisw
<> 132:9baf128c2fab 522 MCR p15, 0, R11, c7, c10, 2 // DCCSW. Clean by Set/Way
<> 132:9baf128c2fab 523 B cont
<> 132:9baf128c2fab 524 Dccisw MCR p15, 0, R11, c7, c14, 2 // DCCISW. Clean and Invalidate by Set/Way
<> 132:9baf128c2fab 525 cont SUBS R9, R9, #1 // Decrement the Way number
<> 132:9baf128c2fab 526 BGE Loop3
<> 132:9baf128c2fab 527 SUBS R7, R7, #1 // Decrement the Set number
<> 132:9baf128c2fab 528 BGE Loop2
<> 132:9baf128c2fab 529 Skip ADD R10, R10, #2 // Increment the cache number
<> 132:9baf128c2fab 530 CMP R3, R10
<> 132:9baf128c2fab 531 BGT Loop1
<> 132:9baf128c2fab 532
<> 132:9baf128c2fab 533 Finished
<> 132:9baf128c2fab 534 DSB
<> 132:9baf128c2fab 535 POP {R4-R11}
<> 132:9baf128c2fab 536 BX lr
<> 132:9baf128c2fab 537
<> 132:9baf128c2fab 538 }
<> 132:9baf128c2fab 539 #pragma pop
<> 132:9baf128c2fab 540
<> 132:9baf128c2fab 541
<> 132:9baf128c2fab 542 /** \brief Invalidate the whole D$
<> 132:9baf128c2fab 543
<> 132:9baf128c2fab 544 DCISW. Invalidate by Set/Way
<> 132:9baf128c2fab 545 */
<> 132:9baf128c2fab 546
<> 132:9baf128c2fab 547 __STATIC_INLINE void __v7_inv_dcache_all(void) {
<> 132:9baf128c2fab 548 __v7_all_cache(0);
<> 132:9baf128c2fab 549 }
<> 132:9baf128c2fab 550
<> 132:9baf128c2fab 551 /** \brief Clean the whole D$
<> 132:9baf128c2fab 552
<> 132:9baf128c2fab 553 DCCSW. Clean by Set/Way
<> 132:9baf128c2fab 554 */
<> 132:9baf128c2fab 555
<> 132:9baf128c2fab 556 __STATIC_INLINE void __v7_clean_dcache_all(void) {
<> 132:9baf128c2fab 557 __v7_all_cache(1);
<> 132:9baf128c2fab 558 }
<> 132:9baf128c2fab 559
<> 132:9baf128c2fab 560 /** \brief Clean and invalidate the whole D$
<> 132:9baf128c2fab 561
<> 132:9baf128c2fab 562 DCCISW. Clean and Invalidate by Set/Way
<> 132:9baf128c2fab 563 */
<> 132:9baf128c2fab 564
<> 132:9baf128c2fab 565 __STATIC_INLINE void __v7_clean_inv_dcache_all(void) {
<> 132:9baf128c2fab 566 __v7_all_cache(2);
<> 132:9baf128c2fab 567 }
<> 132:9baf128c2fab 568
<> 132:9baf128c2fab 569 #include "core_ca_mmu.h"
<> 132:9baf128c2fab 570
<> 132:9baf128c2fab 571 #elif (defined (__ICCARM__)) /*---------------- ICC Compiler ---------------------*/
<> 132:9baf128c2fab 572
<> 132:9baf128c2fab 573 #define __inline inline
<> 132:9baf128c2fab 574
<> 132:9baf128c2fab 575 inline static uint32_t __disable_irq_iar() {
<> 132:9baf128c2fab 576 int irq_dis = __get_CPSR() & 0x80; // CPSR.I (bit 7)
<> 132:9baf128c2fab 577 __disable_irq();
<> 132:9baf128c2fab 578 return irq_dis;
<> 132:9baf128c2fab 579 }
<> 132:9baf128c2fab 580
<> 132:9baf128c2fab 581 #define MODE_USR 0x10
<> 132:9baf128c2fab 582 #define MODE_FIQ 0x11
<> 132:9baf128c2fab 583 #define MODE_IRQ 0x12
<> 132:9baf128c2fab 584 #define MODE_SVC 0x13
<> 132:9baf128c2fab 585 #define MODE_MON 0x16
<> 132:9baf128c2fab 586 #define MODE_ABT 0x17
<> 132:9baf128c2fab 587 #define MODE_HYP 0x1A
<> 132:9baf128c2fab 588 #define MODE_UND 0x1B
<> 132:9baf128c2fab 589 #define MODE_SYS 0x1F
<> 132:9baf128c2fab 590
<> 132:9baf128c2fab 591 /** \brief Set Process Stack Pointer
<> 132:9baf128c2fab 592
<> 132:9baf128c2fab 593 This function assigns the given value to the USR/SYS Stack Pointer (PSP).
<> 132:9baf128c2fab 594
<> 132:9baf128c2fab 595 \param [in] topOfProcStack USR/SYS Stack Pointer value to set
<> 132:9baf128c2fab 596 */
<> 132:9baf128c2fab 597 // from rt_CMSIS.c
<> 132:9baf128c2fab 598 __arm static inline void __set_PSP(uint32_t topOfProcStack) {
<> 132:9baf128c2fab 599 __asm(
<> 132:9baf128c2fab 600 " ARM\n"
<> 132:9baf128c2fab 601 // " PRESERVE8\n"
<> 132:9baf128c2fab 602
<> 132:9baf128c2fab 603 " BIC R0, R0, #7 ;ensure stack is 8-byte aligned \n"
<> 132:9baf128c2fab 604 " MRS R1, CPSR \n"
<> 132:9baf128c2fab 605 " CPS #0x1F ;no effect in USR mode \n" // MODE_SYS
<> 132:9baf128c2fab 606 " MOV SP, R0 \n"
<> 132:9baf128c2fab 607 " MSR CPSR_c, R1 ;no effect in USR mode \n"
<> 132:9baf128c2fab 608 " ISB \n"
<> 132:9baf128c2fab 609 " BX LR \n");
<> 132:9baf128c2fab 610 }
<> 132:9baf128c2fab 611
<> 132:9baf128c2fab 612 /** \brief Set User Mode
<> 132:9baf128c2fab 613
<> 132:9baf128c2fab 614 This function changes the processor state to User Mode
<> 132:9baf128c2fab 615 */
<> 132:9baf128c2fab 616 // from rt_CMSIS.c
<> 132:9baf128c2fab 617 __arm static inline void __set_CPS_USR(void) {
<> 132:9baf128c2fab 618 __asm(
<> 132:9baf128c2fab 619 " ARM \n"
<> 132:9baf128c2fab 620
<> 132:9baf128c2fab 621 " CPS #0x10 \n" // MODE_USR
<> 132:9baf128c2fab 622 " BX LR\n");
<> 132:9baf128c2fab 623 }
<> 132:9baf128c2fab 624
<> 132:9baf128c2fab 625 /** \brief Set TTBR0
<> 132:9baf128c2fab 626
<> 132:9baf128c2fab 627 This function assigns the given value to the Translation Table Base Register 0.
<> 132:9baf128c2fab 628
<> 132:9baf128c2fab 629 \param [in] ttbr0 Translation Table Base Register 0 value to set
<> 132:9baf128c2fab 630 */
<> 132:9baf128c2fab 631 // from mmu_Renesas_RZ_A1.c
<> 132:9baf128c2fab 632 __STATIC_INLINE void __set_TTBR0(uint32_t ttbr0) {
<> 132:9baf128c2fab 633 __MCR(15, 0, ttbr0, 2, 0, 0); // reg to cp15
<> 132:9baf128c2fab 634 __ISB();
<> 132:9baf128c2fab 635 }
<> 132:9baf128c2fab 636
<> 132:9baf128c2fab 637 /** \brief Set DACR
<> 132:9baf128c2fab 638
<> 132:9baf128c2fab 639 This function assigns the given value to the Domain Access Control Register.
<> 132:9baf128c2fab 640
<> 132:9baf128c2fab 641 \param [in] dacr Domain Access Control Register value to set
<> 132:9baf128c2fab 642 */
<> 132:9baf128c2fab 643 // from mmu_Renesas_RZ_A1.c
<> 132:9baf128c2fab 644 __STATIC_INLINE void __set_DACR(uint32_t dacr) {
<> 132:9baf128c2fab 645 __MCR(15, 0, dacr, 3, 0, 0); // reg to cp15
<> 132:9baf128c2fab 646 __ISB();
<> 132:9baf128c2fab 647 }
<> 132:9baf128c2fab 648
<> 132:9baf128c2fab 649
<> 132:9baf128c2fab 650 /******************************** Cache and BTAC enable ****************************************************/
<> 132:9baf128c2fab 651 /** \brief Set SCTLR
<> 132:9baf128c2fab 652
<> 132:9baf128c2fab 653 This function assigns the given value to the System Control Register.
<> 132:9baf128c2fab 654
<> 132:9baf128c2fab 655 \param [in] sctlr System Control Register value to set
<> 132:9baf128c2fab 656 */
<> 132:9baf128c2fab 657 // from __enable_mmu()
<> 132:9baf128c2fab 658 __STATIC_INLINE void __set_SCTLR(uint32_t sctlr) {
<> 132:9baf128c2fab 659 __MCR(15, 0, sctlr, 1, 0, 0); // reg to cp15
<> 132:9baf128c2fab 660 }
<> 132:9baf128c2fab 661
<> 132:9baf128c2fab 662 /** \brief Get SCTLR
<> 132:9baf128c2fab 663
<> 132:9baf128c2fab 664 This function returns the value of the System Control Register.
<> 132:9baf128c2fab 665
<> 132:9baf128c2fab 666 \return System Control Register value
<> 132:9baf128c2fab 667 */
<> 132:9baf128c2fab 668 // from __enable_mmu()
<> 132:9baf128c2fab 669 __STATIC_INLINE uint32_t __get_SCTLR() {
<> 132:9baf128c2fab 670 uint32_t __regSCTLR = __MRC(15, 0, 1, 0, 0);
<> 132:9baf128c2fab 671 return __regSCTLR;
<> 132:9baf128c2fab 672 }
<> 132:9baf128c2fab 673
<> 132:9baf128c2fab 674 /** \brief Enable Caches
<> 132:9baf128c2fab 675
<> 132:9baf128c2fab 676 Enable Caches
<> 132:9baf128c2fab 677 */
<> 132:9baf128c2fab 678 // from system_Renesas_RZ_A1.c
<> 132:9baf128c2fab 679 __STATIC_INLINE void __enable_caches(void) {
<> 132:9baf128c2fab 680 __set_SCTLR( __get_SCTLR() | (1 << 12) | (1 << 2));
<> 132:9baf128c2fab 681 }
<> 132:9baf128c2fab 682
<> 132:9baf128c2fab 683 /** \brief Enable BTAC
<> 132:9baf128c2fab 684
<> 132:9baf128c2fab 685 Enable BTAC
<> 132:9baf128c2fab 686 */
<> 132:9baf128c2fab 687 // from system_Renesas_RZ_A1.c
<> 132:9baf128c2fab 688 __STATIC_INLINE void __enable_btac(void) {
<> 132:9baf128c2fab 689 __set_SCTLR( __get_SCTLR() | (1 << 11));
<> 132:9baf128c2fab 690 __ISB();
<> 132:9baf128c2fab 691 }
<> 132:9baf128c2fab 692
<> 132:9baf128c2fab 693 /** \brief Enable MMU
<> 132:9baf128c2fab 694
<> 132:9baf128c2fab 695 Enable MMU
<> 132:9baf128c2fab 696 */
<> 132:9baf128c2fab 697 // from system_Renesas_RZ_A1.c
<> 132:9baf128c2fab 698 __STATIC_INLINE void __enable_mmu(void) {
<> 132:9baf128c2fab 699 // Set M bit 0 to enable the MMU
<> 132:9baf128c2fab 700 // Set AFE bit to enable simplified access permissions model
<> 132:9baf128c2fab 701 // Clear TRE bit to disable TEX remap and A bit to disable strict alignment fault checking
<> 132:9baf128c2fab 702 __set_SCTLR( (__get_SCTLR() & ~(1 << 28) & ~(1 << 1)) | 1 | (1 << 29));
<> 132:9baf128c2fab 703 __ISB();
<> 132:9baf128c2fab 704 }
<> 132:9baf128c2fab 705
<> 132:9baf128c2fab 706 /******************************** TLB maintenance operations ************************************************/
<> 132:9baf128c2fab 707 /** \brief Invalidate the whole tlb
<> 132:9baf128c2fab 708
<> 132:9baf128c2fab 709 TLBIALL. Invalidate the whole tlb
<> 132:9baf128c2fab 710 */
<> 132:9baf128c2fab 711 // from system_Renesas_RZ_A1.c
<> 132:9baf128c2fab 712 __STATIC_INLINE void __ca9u_inv_tlb_all(void) {
<> 132:9baf128c2fab 713 uint32_t val = 0;
<> 132:9baf128c2fab 714 __MCR(15, 0, val, 8, 7, 0); // reg to cp15
<> 132:9baf128c2fab 715 __MCR(15, 0, val, 8, 6, 0); // reg to cp15
<> 132:9baf128c2fab 716 __MCR(15, 0, val, 8, 5, 0); // reg to cp15
<> 132:9baf128c2fab 717 __DSB();
<> 132:9baf128c2fab 718 __ISB();
<> 132:9baf128c2fab 719 }
<> 132:9baf128c2fab 720
<> 132:9baf128c2fab 721 /******************************** BTB maintenance operations ************************************************/
<> 132:9baf128c2fab 722 /** \brief Invalidate entire branch predictor array
<> 132:9baf128c2fab 723
<> 132:9baf128c2fab 724 BPIALL. Branch Predictor Invalidate All.
<> 132:9baf128c2fab 725 */
<> 132:9baf128c2fab 726 // from system_Renesas_RZ_A1.c
<> 132:9baf128c2fab 727 __STATIC_INLINE void __v7_inv_btac(void) {
<> 132:9baf128c2fab 728 uint32_t val = 0;
<> 132:9baf128c2fab 729 __MCR(15, 0, val, 7, 5, 6); // reg to cp15
<> 132:9baf128c2fab 730 __DSB(); //ensure completion of the invalidation
<> 132:9baf128c2fab 731 __ISB(); //ensure instruction fetch path sees new state
<> 132:9baf128c2fab 732 }
<> 132:9baf128c2fab 733
<> 132:9baf128c2fab 734
<> 132:9baf128c2fab 735 /******************************** L1 cache operations ******************************************************/
<> 132:9baf128c2fab 736
<> 132:9baf128c2fab 737 /** \brief Invalidate the whole I$
<> 132:9baf128c2fab 738
<> 132:9baf128c2fab 739 ICIALLU. Instruction Cache Invalidate All to PoU
<> 132:9baf128c2fab 740 */
<> 132:9baf128c2fab 741 // from system_Renesas_RZ_A1.c
<> 132:9baf128c2fab 742 __STATIC_INLINE void __v7_inv_icache_all(void) {
<> 132:9baf128c2fab 743 uint32_t val = 0;
<> 132:9baf128c2fab 744 __MCR(15, 0, val, 7, 5, 0); // reg to cp15
<> 132:9baf128c2fab 745 __DSB(); //ensure completion of the invalidation
<> 132:9baf128c2fab 746 __ISB(); //ensure instruction fetch path sees new I cache state
<> 132:9baf128c2fab 747 }
<> 132:9baf128c2fab 748
<> 132:9baf128c2fab 749 // from __v7_inv_dcache_all()
<> 132:9baf128c2fab 750 __arm static inline void __v7_all_cache(uint32_t op) {
<> 132:9baf128c2fab 751 __asm(
<> 132:9baf128c2fab 752 " ARM \n"
<> 132:9baf128c2fab 753
<> 132:9baf128c2fab 754 " PUSH {R4-R11} \n"
<> 132:9baf128c2fab 755
<> 132:9baf128c2fab 756 " MRC p15, 1, R6, c0, c0, 1\n" // Read CLIDR
<> 132:9baf128c2fab 757 " ANDS R3, R6, #0x07000000\n" // Extract coherency level
<> 132:9baf128c2fab 758 " MOV R3, R3, LSR #23\n" // Total cache levels << 1
<> 132:9baf128c2fab 759 " BEQ Finished\n" // If 0, no need to clean
<> 132:9baf128c2fab 760
<> 132:9baf128c2fab 761 " MOV R10, #0\n" // R10 holds current cache level << 1
<> 132:9baf128c2fab 762 "Loop1: ADD R2, R10, R10, LSR #1\n" // R2 holds cache "Set" position
<> 132:9baf128c2fab 763 " MOV R1, R6, LSR R2 \n" // Bottom 3 bits are the Cache-type for this level
<> 132:9baf128c2fab 764 " AND R1, R1, #7 \n" // Isolate those lower 3 bits
<> 132:9baf128c2fab 765 " CMP R1, #2 \n"
<> 132:9baf128c2fab 766 " BLT Skip \n" // No cache or only instruction cache at this level
<> 132:9baf128c2fab 767
<> 132:9baf128c2fab 768 " MCR p15, 2, R10, c0, c0, 0 \n" // Write the Cache Size selection register
<> 132:9baf128c2fab 769 " ISB \n" // ISB to sync the change to the CacheSizeID reg
<> 132:9baf128c2fab 770 " MRC p15, 1, R1, c0, c0, 0 \n" // Reads current Cache Size ID register
<> 132:9baf128c2fab 771 " AND R2, R1, #7 \n" // Extract the line length field
<> 132:9baf128c2fab 772 " ADD R2, R2, #4 \n" // Add 4 for the line length offset (log2 16 bytes)
<> 132:9baf128c2fab 773 " movw R4, #0x3FF \n"
<> 132:9baf128c2fab 774 " ANDS R4, R4, R1, LSR #3 \n" // R4 is the max number on the way size (right aligned)
<> 132:9baf128c2fab 775 " CLZ R5, R4 \n" // R5 is the bit position of the way size increment
<> 132:9baf128c2fab 776 " movw R7, #0x7FFF \n"
<> 132:9baf128c2fab 777 " ANDS R7, R7, R1, LSR #13 \n" // R7 is the max number of the index size (right aligned)
<> 132:9baf128c2fab 778
<> 132:9baf128c2fab 779 "Loop2: MOV R9, R4 \n" // R9 working copy of the max way size (right aligned)
<> 132:9baf128c2fab 780
<> 132:9baf128c2fab 781 "Loop3: ORR R11, R10, R9, LSL R5 \n" // Factor in the Way number and cache number into R11
<> 132:9baf128c2fab 782 " ORR R11, R11, R7, LSL R2 \n" // Factor in the Set number
<> 132:9baf128c2fab 783 " CMP R0, #0 \n"
<> 132:9baf128c2fab 784 " BNE Dccsw \n"
<> 132:9baf128c2fab 785 " MCR p15, 0, R11, c7, c6, 2 \n" // DCISW. Invalidate by Set/Way
<> 132:9baf128c2fab 786 " B cont \n"
<> 132:9baf128c2fab 787 "Dccsw: CMP R0, #1 \n"
<> 132:9baf128c2fab 788 " BNE Dccisw \n"
<> 132:9baf128c2fab 789 " MCR p15, 0, R11, c7, c10, 2 \n" // DCCSW. Clean by Set/Way
<> 132:9baf128c2fab 790 " B cont \n"
<> 132:9baf128c2fab 791 "Dccisw: MCR p15, 0, R11, c7, c14, 2 \n" // DCCISW, Clean and Invalidate by Set/Way
<> 132:9baf128c2fab 792 "cont: SUBS R9, R9, #1 \n" // Decrement the Way number
<> 132:9baf128c2fab 793 " BGE Loop3 \n"
<> 132:9baf128c2fab 794 " SUBS R7, R7, #1 \n" // Decrement the Set number
<> 132:9baf128c2fab 795 " BGE Loop2 \n"
<> 132:9baf128c2fab 796 "Skip: ADD R10, R10, #2 \n" // increment the cache number
<> 132:9baf128c2fab 797 " CMP R3, R10 \n"
<> 132:9baf128c2fab 798 " BGT Loop1 \n"
<> 132:9baf128c2fab 799
<> 132:9baf128c2fab 800 "Finished: \n"
<> 132:9baf128c2fab 801 " DSB \n"
<> 132:9baf128c2fab 802 " POP {R4-R11} \n"
<> 132:9baf128c2fab 803 " BX lr \n" );
<> 132:9baf128c2fab 804 }
<> 132:9baf128c2fab 805
<> 132:9baf128c2fab 806 /** \brief Invalidate the whole D$
<> 132:9baf128c2fab 807
<> 132:9baf128c2fab 808 DCISW. Invalidate by Set/Way
<> 132:9baf128c2fab 809 */
<> 132:9baf128c2fab 810 // from system_Renesas_RZ_A1.c
<> 132:9baf128c2fab 811 __STATIC_INLINE void __v7_inv_dcache_all(void) {
<> 132:9baf128c2fab 812 __v7_all_cache(0);
<> 132:9baf128c2fab 813 }
<> 132:9baf128c2fab 814 /** \brief Clean the whole D$
<> 132:9baf128c2fab 815
<> 132:9baf128c2fab 816 DCCSW. Clean by Set/Way
<> 132:9baf128c2fab 817 */
<> 132:9baf128c2fab 818
<> 132:9baf128c2fab 819 __STATIC_INLINE void __v7_clean_dcache_all(void) {
<> 132:9baf128c2fab 820 __v7_all_cache(1);
<> 132:9baf128c2fab 821 }
<> 132:9baf128c2fab 822
<> 132:9baf128c2fab 823 /** \brief Clean and invalidate the whole D$
<> 132:9baf128c2fab 824
<> 132:9baf128c2fab 825 DCCISW. Clean and Invalidate by Set/Way
<> 132:9baf128c2fab 826 */
<> 132:9baf128c2fab 827
<> 132:9baf128c2fab 828 __STATIC_INLINE void __v7_clean_inv_dcache_all(void) {
<> 132:9baf128c2fab 829 __v7_all_cache(2);
<> 132:9baf128c2fab 830 }
<> 132:9baf128c2fab 831 /** \brief Clean and Invalidate D$ by MVA
<> 132:9baf128c2fab 832
<> 132:9baf128c2fab 833 DCCIMVAC. Data cache clean and invalidate by MVA to PoC
<> 132:9baf128c2fab 834 */
<> 132:9baf128c2fab 835 __STATIC_INLINE void __v7_clean_inv_dcache_mva(void *va) {
<> 132:9baf128c2fab 836 __MCR(15, 0, (uint32_t)va, 7, 14, 1);
<> 132:9baf128c2fab 837 __DMB();
<> 132:9baf128c2fab 838 }
<> 132:9baf128c2fab 839
<> 132:9baf128c2fab 840 #include "core_ca_mmu.h"
<> 132:9baf128c2fab 841
<> 132:9baf128c2fab 842 #elif (defined (__GNUC__)) /*------------------ GNU Compiler ---------------------*/
<> 132:9baf128c2fab 843 /* GNU gcc specific functions */
<> 132:9baf128c2fab 844
<> 132:9baf128c2fab 845 #define MODE_USR 0x10
<> 132:9baf128c2fab 846 #define MODE_FIQ 0x11
<> 132:9baf128c2fab 847 #define MODE_IRQ 0x12
<> 132:9baf128c2fab 848 #define MODE_SVC 0x13
<> 132:9baf128c2fab 849 #define MODE_MON 0x16
<> 132:9baf128c2fab 850 #define MODE_ABT 0x17
<> 132:9baf128c2fab 851 #define MODE_HYP 0x1A
<> 132:9baf128c2fab 852 #define MODE_UND 0x1B
<> 132:9baf128c2fab 853 #define MODE_SYS 0x1F
<> 132:9baf128c2fab 854
<> 132:9baf128c2fab 855
<> 132:9baf128c2fab 856 __attribute__( ( always_inline ) ) __STATIC_INLINE void __enable_irq(void)
<> 132:9baf128c2fab 857 {
<> 132:9baf128c2fab 858 __ASM volatile ("cpsie i");
<> 132:9baf128c2fab 859 }
<> 132:9baf128c2fab 860
<> 132:9baf128c2fab 861 /** \brief Disable IRQ Interrupts
<> 132:9baf128c2fab 862
<> 132:9baf128c2fab 863 This function disables IRQ interrupts by setting the I-bit in the CPSR.
<> 132:9baf128c2fab 864 Can only be executed in Privileged modes.
<> 132:9baf128c2fab 865 */
<> 132:9baf128c2fab 866 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __disable_irq(void)
<> 132:9baf128c2fab 867 {
<> 132:9baf128c2fab 868 uint32_t result;
<> 132:9baf128c2fab 869
<> 132:9baf128c2fab 870 __ASM volatile ("mrs %0, cpsr" : "=r" (result));
<> 132:9baf128c2fab 871 __ASM volatile ("cpsid i");
<> 132:9baf128c2fab 872 return(result & 0x80);
<> 132:9baf128c2fab 873 }
<> 132:9baf128c2fab 874
<> 132:9baf128c2fab 875
<> 132:9baf128c2fab 876 /** \brief Get APSR Register
<> 132:9baf128c2fab 877
<> 132:9baf128c2fab 878 This function returns the content of the APSR Register.
<> 132:9baf128c2fab 879
<> 132:9baf128c2fab 880 \return APSR Register value
<> 132:9baf128c2fab 881 */
<> 132:9baf128c2fab 882 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_APSR(void)
<> 132:9baf128c2fab 883 {
<> 132:9baf128c2fab 884 #if 1
<> 132:9baf128c2fab 885 register uint32_t __regAPSR;
<> 132:9baf128c2fab 886 __ASM volatile ("mrs %0, apsr" : "=r" (__regAPSR) );
<> 132:9baf128c2fab 887 #else
<> 132:9baf128c2fab 888 register uint32_t __regAPSR __ASM("apsr");
<> 132:9baf128c2fab 889 #endif
<> 132:9baf128c2fab 890 return(__regAPSR);
<> 132:9baf128c2fab 891 }
<> 132:9baf128c2fab 892
<> 132:9baf128c2fab 893
<> 132:9baf128c2fab 894 /** \brief Get CPSR Register
<> 132:9baf128c2fab 895
<> 132:9baf128c2fab 896 This function returns the content of the CPSR Register.
<> 132:9baf128c2fab 897
<> 132:9baf128c2fab 898 \return CPSR Register value
<> 132:9baf128c2fab 899 */
<> 132:9baf128c2fab 900 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_CPSR(void)
<> 132:9baf128c2fab 901 {
<> 132:9baf128c2fab 902 #if 1
<> 132:9baf128c2fab 903 register uint32_t __regCPSR;
<> 132:9baf128c2fab 904 __ASM volatile ("mrs %0, cpsr" : "=r" (__regCPSR));
<> 132:9baf128c2fab 905 #else
<> 132:9baf128c2fab 906 register uint32_t __regCPSR __ASM("cpsr");
<> 132:9baf128c2fab 907 #endif
<> 132:9baf128c2fab 908 return(__regCPSR);
<> 132:9baf128c2fab 909 }
<> 132:9baf128c2fab 910
<> 132:9baf128c2fab 911 #if 0
<> 132:9baf128c2fab 912 /** \brief Set Stack Pointer
<> 132:9baf128c2fab 913
<> 132:9baf128c2fab 914 This function assigns the given value to the current stack pointer.
<> 132:9baf128c2fab 915
<> 132:9baf128c2fab 916 \param [in] topOfStack Stack Pointer value to set
<> 132:9baf128c2fab 917 */
<> 132:9baf128c2fab 918 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_SP(uint32_t topOfStack)
<> 132:9baf128c2fab 919 {
<> 132:9baf128c2fab 920 register uint32_t __regSP __ASM("sp");
<> 132:9baf128c2fab 921 __regSP = topOfStack;
<> 132:9baf128c2fab 922 }
<> 132:9baf128c2fab 923 #endif
<> 132:9baf128c2fab 924
<> 132:9baf128c2fab 925 /** \brief Get link register
<> 132:9baf128c2fab 926
<> 132:9baf128c2fab 927 This function returns the value of the link register
<> 132:9baf128c2fab 928
<> 132:9baf128c2fab 929 \return Value of link register
<> 132:9baf128c2fab 930 */
<> 132:9baf128c2fab 931 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_LR(void)
<> 132:9baf128c2fab 932 {
<> 132:9baf128c2fab 933 register uint32_t __reglr __ASM("lr");
<> 132:9baf128c2fab 934 return(__reglr);
<> 132:9baf128c2fab 935 }
<> 132:9baf128c2fab 936
<> 132:9baf128c2fab 937 #if 0
<> 132:9baf128c2fab 938 /** \brief Set link register
<> 132:9baf128c2fab 939
<> 132:9baf128c2fab 940 This function sets the value of the link register
<> 132:9baf128c2fab 941
<> 132:9baf128c2fab 942 \param [in] lr LR value to set
<> 132:9baf128c2fab 943 */
<> 132:9baf128c2fab 944 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_LR(uint32_t lr)
<> 132:9baf128c2fab 945 {
<> 132:9baf128c2fab 946 register uint32_t __reglr __ASM("lr");
<> 132:9baf128c2fab 947 __reglr = lr;
<> 132:9baf128c2fab 948 }
<> 132:9baf128c2fab 949 #endif
<> 132:9baf128c2fab 950
<> 132:9baf128c2fab 951 /** \brief Set Process Stack Pointer
<> 132:9baf128c2fab 952
<> 132:9baf128c2fab 953 This function assigns the given value to the USR/SYS Stack Pointer (PSP).
<> 132:9baf128c2fab 954
<> 132:9baf128c2fab 955 \param [in] topOfProcStack USR/SYS Stack Pointer value to set
<> 132:9baf128c2fab 956 */
<> 132:9baf128c2fab 957 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_PSP(uint32_t topOfProcStack)
<> 132:9baf128c2fab 958 {
<> 132:9baf128c2fab 959 __asm__ volatile (
<> 132:9baf128c2fab 960 ".ARM;"
<> 132:9baf128c2fab 961 ".eabi_attribute Tag_ABI_align8_preserved,1;"
<> 132:9baf128c2fab 962
<> 132:9baf128c2fab 963 "BIC R0, R0, #7;" /* ;ensure stack is 8-byte aligned */
<> 132:9baf128c2fab 964 "MRS R1, CPSR;"
<> 132:9baf128c2fab 965 "CPS %0;" /* ;no effect in USR mode */
<> 132:9baf128c2fab 966 "MOV SP, R0;"
<> 132:9baf128c2fab 967 "MSR CPSR_c, R1;" /* ;no effect in USR mode */
<> 132:9baf128c2fab 968 "ISB;"
<> 132:9baf128c2fab 969 //"BX LR;"
<> 132:9baf128c2fab 970 :
<> 132:9baf128c2fab 971 : "i"(MODE_SYS)
<> 132:9baf128c2fab 972 : "r0", "r1");
<> 132:9baf128c2fab 973 return;
<> 132:9baf128c2fab 974 }
<> 132:9baf128c2fab 975
<> 132:9baf128c2fab 976 /** \brief Set User Mode
<> 132:9baf128c2fab 977
<> 132:9baf128c2fab 978 This function changes the processor state to User Mode
<> 132:9baf128c2fab 979 */
<> 132:9baf128c2fab 980 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_CPS_USR(void)
<> 132:9baf128c2fab 981 {
<> 132:9baf128c2fab 982 __asm__ volatile (
<> 132:9baf128c2fab 983 ".ARM;"
<> 132:9baf128c2fab 984
<> 132:9baf128c2fab 985 "CPS %0;"
<> 132:9baf128c2fab 986 //"BX LR;"
<> 132:9baf128c2fab 987 :
<> 132:9baf128c2fab 988 : "i"(MODE_USR)
<> 132:9baf128c2fab 989 : );
<> 132:9baf128c2fab 990 return;
<> 132:9baf128c2fab 991 }
<> 132:9baf128c2fab 992
<> 132:9baf128c2fab 993
<> 132:9baf128c2fab 994 /** \brief Enable FIQ
<> 132:9baf128c2fab 995
<> 132:9baf128c2fab 996 This function enables FIQ interrupts by clearing the F-bit in the CPSR.
<> 132:9baf128c2fab 997 Can only be executed in Privileged modes.
<> 132:9baf128c2fab 998 */
<> 132:9baf128c2fab 999 #define __enable_fault_irq() __asm__ volatile ("cpsie f")
<> 132:9baf128c2fab 1000
<> 132:9baf128c2fab 1001
<> 132:9baf128c2fab 1002 /** \brief Disable FIQ
<> 132:9baf128c2fab 1003
<> 132:9baf128c2fab 1004 This function disables FIQ interrupts by setting the F-bit in the CPSR.
<> 132:9baf128c2fab 1005 Can only be executed in Privileged modes.
<> 132:9baf128c2fab 1006 */
<> 132:9baf128c2fab 1007 #define __disable_fault_irq() __asm__ volatile ("cpsid f")
<> 132:9baf128c2fab 1008
<> 132:9baf128c2fab 1009
<> 132:9baf128c2fab 1010 /** \brief Get FPSCR
<> 132:9baf128c2fab 1011
<> 132:9baf128c2fab 1012 This function returns the current value of the Floating Point Status/Control register.
<> 132:9baf128c2fab 1013
<> 132:9baf128c2fab 1014 \return Floating Point Status/Control register value
<> 132:9baf128c2fab 1015 */
<> 132:9baf128c2fab 1016 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_FPSCR(void)
<> 132:9baf128c2fab 1017 {
<> 132:9baf128c2fab 1018 #if (__FPU_PRESENT == 1) && (__FPU_USED == 1)
<> 132:9baf128c2fab 1019 #if 1
<> 132:9baf128c2fab 1020 uint32_t result;
<> 132:9baf128c2fab 1021
<> 132:9baf128c2fab 1022 __ASM volatile ("vmrs %0, fpscr" : "=r" (result) );
<> 132:9baf128c2fab 1023 return (result);
<> 132:9baf128c2fab 1024 #else
<> 132:9baf128c2fab 1025 register uint32_t __regfpscr __ASM("fpscr");
<> 132:9baf128c2fab 1026 return(__regfpscr);
<> 132:9baf128c2fab 1027 #endif
<> 132:9baf128c2fab 1028 #else
<> 132:9baf128c2fab 1029 return(0);
<> 132:9baf128c2fab 1030 #endif
<> 132:9baf128c2fab 1031 }
<> 132:9baf128c2fab 1032
<> 132:9baf128c2fab 1033
<> 132:9baf128c2fab 1034 /** \brief Set FPSCR
<> 132:9baf128c2fab 1035
<> 132:9baf128c2fab 1036 This function assigns the given value to the Floating Point Status/Control register.
<> 132:9baf128c2fab 1037
<> 132:9baf128c2fab 1038 \param [in] fpscr Floating Point Status/Control value to set
<> 132:9baf128c2fab 1039 */
<> 132:9baf128c2fab 1040 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_FPSCR(uint32_t fpscr)
<> 132:9baf128c2fab 1041 {
<> 132:9baf128c2fab 1042 #if (__FPU_PRESENT == 1) && (__FPU_USED == 1)
<> 132:9baf128c2fab 1043 #if 1
<> 132:9baf128c2fab 1044 __ASM volatile ("vmsr fpscr, %0" : : "r" (fpscr) );
<> 132:9baf128c2fab 1045 #else
<> 132:9baf128c2fab 1046 register uint32_t __regfpscr __ASM("fpscr");
<> 132:9baf128c2fab 1047 __regfpscr = (fpscr);
<> 132:9baf128c2fab 1048 #endif
<> 132:9baf128c2fab 1049 #endif
<> 132:9baf128c2fab 1050 }
<> 132:9baf128c2fab 1051
<> 132:9baf128c2fab 1052 /** \brief Get FPEXC
<> 132:9baf128c2fab 1053
<> 132:9baf128c2fab 1054 This function returns the current value of the Floating Point Exception Control register.
<> 132:9baf128c2fab 1055
<> 132:9baf128c2fab 1056 \return Floating Point Exception Control register value
<> 132:9baf128c2fab 1057 */
<> 132:9baf128c2fab 1058 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_FPEXC(void)
<> 132:9baf128c2fab 1059 {
<> 132:9baf128c2fab 1060 #if (__FPU_PRESENT == 1)
<> 132:9baf128c2fab 1061 #if 1
<> 132:9baf128c2fab 1062 uint32_t result;
<> 132:9baf128c2fab 1063
<> 132:9baf128c2fab 1064 __ASM volatile ("vmrs %0, fpexc" : "=r" (result));
<> 132:9baf128c2fab 1065 return (result);
<> 132:9baf128c2fab 1066 #else
<> 132:9baf128c2fab 1067 register uint32_t __regfpexc __ASM("fpexc");
<> 132:9baf128c2fab 1068 return(__regfpexc);
<> 132:9baf128c2fab 1069 #endif
<> 132:9baf128c2fab 1070 #else
<> 132:9baf128c2fab 1071 return(0);
<> 132:9baf128c2fab 1072 #endif
<> 132:9baf128c2fab 1073 }
<> 132:9baf128c2fab 1074
<> 132:9baf128c2fab 1075
<> 132:9baf128c2fab 1076 /** \brief Set FPEXC
<> 132:9baf128c2fab 1077
<> 132:9baf128c2fab 1078 This function assigns the given value to the Floating Point Exception Control register.
<> 132:9baf128c2fab 1079
<> 132:9baf128c2fab 1080 \param [in] fpexc Floating Point Exception Control value to set
<> 132:9baf128c2fab 1081 */
<> 132:9baf128c2fab 1082 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_FPEXC(uint32_t fpexc)
<> 132:9baf128c2fab 1083 {
<> 132:9baf128c2fab 1084 #if (__FPU_PRESENT == 1)
<> 132:9baf128c2fab 1085 #if 1
<> 132:9baf128c2fab 1086 __ASM volatile ("vmsr fpexc, %0" : : "r" (fpexc));
<> 132:9baf128c2fab 1087 #else
<> 132:9baf128c2fab 1088 register uint32_t __regfpexc __ASM("fpexc");
<> 132:9baf128c2fab 1089 __regfpexc = (fpexc);
<> 132:9baf128c2fab 1090 #endif
<> 132:9baf128c2fab 1091 #endif
<> 132:9baf128c2fab 1092 }
<> 132:9baf128c2fab 1093
<> 132:9baf128c2fab 1094 /** \brief Get CPACR
<> 132:9baf128c2fab 1095
<> 132:9baf128c2fab 1096 This function returns the current value of the Coprocessor Access Control register.
<> 132:9baf128c2fab 1097
<> 132:9baf128c2fab 1098 \return Coprocessor Access Control register value
<> 132:9baf128c2fab 1099 */
<> 132:9baf128c2fab 1100 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_CPACR(void)
<> 132:9baf128c2fab 1101 {
<> 132:9baf128c2fab 1102 #if 1
<> 132:9baf128c2fab 1103 register uint32_t __regCPACR;
<> 132:9baf128c2fab 1104 __ASM volatile ("mrc p15, 0, %0, c1, c0, 2" : "=r" (__regCPACR));
<> 132:9baf128c2fab 1105 #else
<> 132:9baf128c2fab 1106 register uint32_t __regCPACR __ASM("cp15:0:c1:c0:2");
<> 132:9baf128c2fab 1107 #endif
<> 132:9baf128c2fab 1108 return __regCPACR;
<> 132:9baf128c2fab 1109 }
<> 132:9baf128c2fab 1110
<> 132:9baf128c2fab 1111 /** \brief Set CPACR
<> 132:9baf128c2fab 1112
<> 132:9baf128c2fab 1113 This function assigns the given value to the Coprocessor Access Control register.
<> 132:9baf128c2fab 1114
<> 132:9baf128c2fab 1115 \param [in] cpacr Coprocessor Access Control value to set
<> 132:9baf128c2fab 1116 */
<> 132:9baf128c2fab 1117 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_CPACR(uint32_t cpacr)
<> 132:9baf128c2fab 1118 {
<> 132:9baf128c2fab 1119 #if 1
<> 132:9baf128c2fab 1120 __ASM volatile ("mcr p15, 0, %0, c1, c0, 2" : : "r" (cpacr));
<> 132:9baf128c2fab 1121 #else
<> 132:9baf128c2fab 1122 register uint32_t __regCPACR __ASM("cp15:0:c1:c0:2");
<> 132:9baf128c2fab 1123 __regCPACR = cpacr;
<> 132:9baf128c2fab 1124 #endif
<> 132:9baf128c2fab 1125 __ISB();
<> 132:9baf128c2fab 1126 }
<> 132:9baf128c2fab 1127
<> 132:9baf128c2fab 1128 /** \brief Get CBAR
<> 132:9baf128c2fab 1129
<> 132:9baf128c2fab 1130 This function returns the value of the Configuration Base Address register.
<> 132:9baf128c2fab 1131
<> 132:9baf128c2fab 1132 \return Configuration Base Address register value
<> 132:9baf128c2fab 1133 */
<> 132:9baf128c2fab 1134 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_CBAR() {
<> 132:9baf128c2fab 1135 #if 1
<> 132:9baf128c2fab 1136 register uint32_t __regCBAR;
<> 132:9baf128c2fab 1137 __ASM volatile ("mrc p15, 4, %0, c15, c0, 0" : "=r" (__regCBAR));
<> 132:9baf128c2fab 1138 #else
<> 132:9baf128c2fab 1139 register uint32_t __regCBAR __ASM("cp15:4:c15:c0:0");
<> 132:9baf128c2fab 1140 #endif
<> 132:9baf128c2fab 1141 return(__regCBAR);
<> 132:9baf128c2fab 1142 }
<> 132:9baf128c2fab 1143
<> 132:9baf128c2fab 1144 /** \brief Get TTBR0
<> 132:9baf128c2fab 1145
<> 132:9baf128c2fab 1146 This function returns the value of the Translation Table Base Register 0.
<> 132:9baf128c2fab 1147
<> 132:9baf128c2fab 1148 \return Translation Table Base Register 0 value
<> 132:9baf128c2fab 1149 */
<> 132:9baf128c2fab 1150 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_TTBR0() {
<> 132:9baf128c2fab 1151 #if 1
<> 132:9baf128c2fab 1152 register uint32_t __regTTBR0;
<> 132:9baf128c2fab 1153 __ASM volatile ("mrc p15, 0, %0, c2, c0, 0" : "=r" (__regTTBR0));
<> 132:9baf128c2fab 1154 #else
<> 132:9baf128c2fab 1155 register uint32_t __regTTBR0 __ASM("cp15:0:c2:c0:0");
<> 132:9baf128c2fab 1156 #endif
<> 132:9baf128c2fab 1157 return(__regTTBR0);
<> 132:9baf128c2fab 1158 }
<> 132:9baf128c2fab 1159
<> 132:9baf128c2fab 1160 /** \brief Set TTBR0
<> 132:9baf128c2fab 1161
<> 132:9baf128c2fab 1162 This function assigns the given value to the Translation Table Base Register 0.
<> 132:9baf128c2fab 1163
<> 132:9baf128c2fab 1164 \param [in] ttbr0 Translation Table Base Register 0 value to set
<> 132:9baf128c2fab 1165 */
<> 132:9baf128c2fab 1166 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_TTBR0(uint32_t ttbr0) {
<> 132:9baf128c2fab 1167 #if 1
<> 132:9baf128c2fab 1168 __ASM volatile ("mcr p15, 0, %0, c2, c0, 0" : : "r" (ttbr0));
<> 132:9baf128c2fab 1169 #else
<> 132:9baf128c2fab 1170 register uint32_t __regTTBR0 __ASM("cp15:0:c2:c0:0");
<> 132:9baf128c2fab 1171 __regTTBR0 = ttbr0;
<> 132:9baf128c2fab 1172 #endif
<> 132:9baf128c2fab 1173 __ISB();
<> 132:9baf128c2fab 1174 }
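/* Illustrative usage (not part of the original CMSIS header): with the short-descriptor
   translation table format and TTBCR.N == 0, TTBR0 holds the 16 KB-aligned physical base
   of the level-1 translation table; the low bits carry cacheability/shareability hints
   for table walks. A hedged sketch, where `__example_ttb` is a hypothetical, suitably
   aligned table provided elsewhere: */
extern uint32_t __example_ttb[4096] __attribute__((aligned(0x4000))); /* hypothetical 16 KB L1 table */

__attribute__( ( always_inline ) ) __STATIC_INLINE void __example_set_ttbr0(void)
{
    /* walk attribute bits in TTBR0[6:0] are left at zero here for simplicity;
       real code usually ORs in the desired cacheability/shareability settings. */
    __set_TTBR0((uint32_t)__example_ttb);       /* __set_TTBR0 already issues an ISB */
}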
<> 132:9baf128c2fab 1175
<> 132:9baf128c2fab 1176 /** \brief Get DACR
<> 132:9baf128c2fab 1177
<> 132:9baf128c2fab 1178 This function returns the value of the Domain Access Control Register.
<> 132:9baf128c2fab 1179
<> 132:9baf128c2fab 1180 \return Domain Access Control Register value
<> 132:9baf128c2fab 1181 */
<> 132:9baf128c2fab 1182 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_DACR() {
<> 132:9baf128c2fab 1183 #if 1
<> 132:9baf128c2fab 1184 register uint32_t __regDACR;
<> 132:9baf128c2fab 1185 __ASM volatile ("mrc p15, 0, %0, c3, c0, 0" : "=r" (__regDACR));
<> 132:9baf128c2fab 1186 #else
<> 132:9baf128c2fab 1187 register uint32_t __regDACR __ASM("cp15:0:c3:c0:0");
<> 132:9baf128c2fab 1188 #endif
<> 132:9baf128c2fab 1189 return(__regDACR);
<> 132:9baf128c2fab 1190 }
<> 132:9baf128c2fab 1191
<> 132:9baf128c2fab 1192 /** \brief Set DACR
<> 132:9baf128c2fab 1193
<> 132:9baf128c2fab 1194 This function assigns the given value to the Domain Access Control Register.
<> 132:9baf128c2fab 1195
<> 132:9baf128c2fab 1196 \param [in] dacr Domain Access Control Register value to set
<> 132:9baf128c2fab 1197 */
<> 132:9baf128c2fab 1198 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_DACR(uint32_t dacr) {
<> 132:9baf128c2fab 1199 #if 1
<> 132:9baf128c2fab 1200 __ASM volatile ("mcr p15, 0, %0, c3, c0, 0" : : "r" (dacr));
<> 132:9baf128c2fab 1201 #else
<> 132:9baf128c2fab 1202 register uint32_t __regDACR __ASM("cp15:0:c3:c0:0");
<> 132:9baf128c2fab 1203 __regDACR = dacr;
<> 132:9baf128c2fab 1204 #endif
<> 132:9baf128c2fab 1205 __ISB();
<> 132:9baf128c2fab 1206 }
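/* Illustrative usage (not part of the original CMSIS header): each of the 16 MMU domains
   occupies a 2-bit field in DACR, and 0b01 (client) makes the page-table access
   permissions be enforced. Setting every domain to client is a common default before
   the MMU is enabled. A minimal sketch: */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __example_set_domains_client(void)
{
    __set_DACR(0x55555555U);                    /* all 16 domains = client (0b01) */
}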
<> 132:9baf128c2fab 1207
<> 132:9baf128c2fab 1208 /******************************** Cache and BTAC enable ****************************************************/
<> 132:9baf128c2fab 1209
<> 132:9baf128c2fab 1210 /** \brief Set SCTLR
<> 132:9baf128c2fab 1211
<> 132:9baf128c2fab 1212 This function assigns the given value to the System Control Register.
<> 132:9baf128c2fab 1213
<> 132:9baf128c2fab 1214 \param [in] sctlr System Control Register value to set
<> 132:9baf128c2fab 1215 */
<> 132:9baf128c2fab 1216 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_SCTLR(uint32_t sctlr)
<> 132:9baf128c2fab 1217 {
<> 132:9baf128c2fab 1218 #if 1
<> 132:9baf128c2fab 1219 __ASM volatile ("mcr p15, 0, %0, c1, c0, 0" : : "r" (sctlr));
<> 132:9baf128c2fab 1220 #else
<> 132:9baf128c2fab 1221 register uint32_t __regSCTLR __ASM("cp15:0:c1:c0:0");
<> 132:9baf128c2fab 1222 __regSCTLR = sctlr;
<> 132:9baf128c2fab 1223 #endif
<> 132:9baf128c2fab 1224 }
<> 132:9baf128c2fab 1225
<> 132:9baf128c2fab 1226 /** \brief Get SCTLR
<> 132:9baf128c2fab 1227
<> 132:9baf128c2fab 1228 This function returns the value of the System Control Register.
<> 132:9baf128c2fab 1229
<> 132:9baf128c2fab 1230 \return System Control Register value
<> 132:9baf128c2fab 1231 */
<> 132:9baf128c2fab 1232 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_SCTLR() {
<> 132:9baf128c2fab 1233 #if 1
<> 132:9baf128c2fab 1234 register uint32_t __regSCTLR;
<> 132:9baf128c2fab 1235 __ASM volatile ("mrc p15, 0, %0, c1, c0, 0" : "=r" (__regSCTLR));
<> 132:9baf128c2fab 1236 #else
<> 132:9baf128c2fab 1237 register uint32_t __regSCTLR __ASM("cp15:0:c1:c0:0");
<> 132:9baf128c2fab 1238 #endif
<> 132:9baf128c2fab 1239 return(__regSCTLR);
<> 132:9baf128c2fab 1240 }
<> 132:9baf128c2fab 1241
<> 132:9baf128c2fab 1242 /** \brief Enable Caches
<> 132:9baf128c2fab 1243
<> 132:9baf128c2fab 1244 Enable Caches
<> 132:9baf128c2fab 1245 */
<> 132:9baf128c2fab 1246 __attribute__( ( always_inline ) ) __STATIC_INLINE void __enable_caches(void) {
<> 132:9baf128c2fab 1247 // Set I bit 12 to enable I Cache
<> 132:9baf128c2fab 1248 // Set C bit 2 to enable D Cache
<> 132:9baf128c2fab 1249 __set_SCTLR( __get_SCTLR() | (1 << 12) | (1 << 2));
<> 132:9baf128c2fab 1250 }
<> 132:9baf128c2fab 1251
<> 132:9baf128c2fab 1252 /** \brief Disable Caches
<> 132:9baf128c2fab 1253
<> 132:9baf128c2fab 1254 Disable Caches
<> 132:9baf128c2fab 1255 */
<> 132:9baf128c2fab 1256 __attribute__( ( always_inline ) ) __STATIC_INLINE void __disable_caches(void) {
<> 132:9baf128c2fab 1257 // Clear I bit 12 to disable I Cache
<> 132:9baf128c2fab 1258 // Clear C bit 2 to disable D Cache
<> 132:9baf128c2fab 1259 __set_SCTLR( __get_SCTLR() & ~(1 << 12) & ~(1 << 2));
<> 132:9baf128c2fab 1260 __ISB();
<> 132:9baf128c2fab 1261 }
<> 132:9baf128c2fab 1262
<> 132:9baf128c2fab 1263 /** \brief Enable BTAC
<> 132:9baf128c2fab 1264
<> 132:9baf128c2fab 1265 Enable BTAC
<> 132:9baf128c2fab 1266 */
<> 132:9baf128c2fab 1267 __attribute__( ( always_inline ) ) __STATIC_INLINE void __enable_btac(void) {
<> 132:9baf128c2fab 1268 // Set Z bit 11 to enable branch prediction
<> 132:9baf128c2fab 1269 __set_SCTLR( __get_SCTLR() | (1 << 11));
<> 132:9baf128c2fab 1270 __ISB();
<> 132:9baf128c2fab 1271 }
<> 132:9baf128c2fab 1272
<> 132:9baf128c2fab 1273 /** \brief Disable BTAC
<> 132:9baf128c2fab 1274
<> 132:9baf128c2fab 1275 Disable BTAC
<> 132:9baf128c2fab 1276 */
<> 132:9baf128c2fab 1277 __attribute__( ( always_inline ) ) __STATIC_INLINE void __disable_btac(void) {
<> 132:9baf128c2fab 1278 // Clear Z bit 11 to disable branch prediction
<> 132:9baf128c2fab 1279 __set_SCTLR( __get_SCTLR() & ~(1 << 11));
<> 132:9baf128c2fab 1280 }
<> 132:9baf128c2fab 1281
<> 132:9baf128c2fab 1282
<> 132:9baf128c2fab 1283 /** \brief Enable MMU
<> 132:9baf128c2fab 1284
<> 132:9baf128c2fab 1285 Enable MMU
<> 132:9baf128c2fab 1286 */
<> 132:9baf128c2fab 1287 __attribute__( ( always_inline ) ) __STATIC_INLINE void __enable_mmu(void) {
<> 132:9baf128c2fab 1288 // Set M bit 0 to enable the MMU
<> 132:9baf128c2fab 1289 // Set AFE bit to enable simplified access permissions model
<> 132:9baf128c2fab 1290 // Clear TRE bit to disable TEX remap and A bit to disable strict alignment fault checking
<> 132:9baf128c2fab 1291 __set_SCTLR( (__get_SCTLR() & ~(1 << 28) & ~(1 << 1)) | 1 | (1 << 29));
<> 132:9baf128c2fab 1292 __ISB();
<> 132:9baf128c2fab 1293 }
<> 132:9baf128c2fab 1294
<> 132:9baf128c2fab 1295 /** \brief Disable MMU
<> 132:9baf128c2fab 1296
<> 132:9baf128c2fab 1297 Disable MMU
<> 132:9baf128c2fab 1298 */
<> 132:9baf128c2fab 1299 __attribute__( ( always_inline ) ) __STATIC_INLINE void __disable_mmu(void) {
<> 132:9baf128c2fab 1300 // Clear M bit 0 to disable the MMU
<> 132:9baf128c2fab 1301 __set_SCTLR( __get_SCTLR() & ~1);
<> 132:9baf128c2fab 1302 __ISB();
<> 132:9baf128c2fab 1303 }
<> 132:9baf128c2fab 1304
<> 132:9baf128c2fab 1305 /******************************** TLB maintenance operations ************************************************/
<> 132:9baf128c2fab 1306 /** \brief Invalidate the whole TLB
<> 132:9baf128c2fab 1307
<> 132:9baf128c2fab 1308 TLBIALL. Invalidate the whole TLB
<> 132:9baf128c2fab 1309 */
<> 132:9baf128c2fab 1310
<> 132:9baf128c2fab 1311 __attribute__( ( always_inline ) ) __STATIC_INLINE void __ca9u_inv_tlb_all(void) {
<> 132:9baf128c2fab 1312 #if 1
<> 132:9baf128c2fab 1313 __ASM volatile ("mcr p15, 0, %0, c8, c7, 0" : : "r" (0));
<> 132:9baf128c2fab 1314 #else
<> 132:9baf128c2fab 1315 register uint32_t __TLBIALL __ASM("cp15:0:c8:c7:0");
<> 132:9baf128c2fab 1316 __TLBIALL = 0;
<> 132:9baf128c2fab 1317 #endif
<> 132:9baf128c2fab 1318 __DSB();
<> 132:9baf128c2fab 1319 __ISB();
<> 132:9baf128c2fab 1320 }
<> 132:9baf128c2fab 1321
<> 132:9baf128c2fab 1322 /******************************** BTB maintenance operations ************************************************/
<> 132:9baf128c2fab 1323 /** \brief Invalidate entire branch predictor array
<> 132:9baf128c2fab 1324
<> 132:9baf128c2fab 1325 BPIALL. Branch Predictor Invalidate All.
<> 132:9baf128c2fab 1326 */
<> 132:9baf128c2fab 1327
<> 132:9baf128c2fab 1328 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_inv_btac(void) {
<> 132:9baf128c2fab 1329 #if 1
<> 132:9baf128c2fab 1330 __ASM volatile ("mcr p15, 0, %0, c7, c5, 6" : : "r" (0));
<> 132:9baf128c2fab 1331 #else
<> 132:9baf128c2fab 1332 register uint32_t __BPIALL __ASM("cp15:0:c7:c5:6");
<> 132:9baf128c2fab 1333 __BPIALL = 0;
<> 132:9baf128c2fab 1334 #endif
<> 132:9baf128c2fab 1335 __DSB(); //ensure completion of the invalidation
<> 132:9baf128c2fab 1336 __ISB(); //ensure instruction fetch path sees new state
<> 132:9baf128c2fab 1337 }
<> 132:9baf128c2fab 1338
<> 132:9baf128c2fab 1339
<> 132:9baf128c2fab 1340 /******************************** L1 cache operations ******************************************************/
<> 132:9baf128c2fab 1341
<> 132:9baf128c2fab 1342 /** \brief Invalidate the whole I$
<> 132:9baf128c2fab 1343
<> 132:9baf128c2fab 1344 ICIALLU. Instruction Cache Invalidate All to PoU
<> 132:9baf128c2fab 1345 */
<> 132:9baf128c2fab 1346 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_inv_icache_all(void) {
<> 132:9baf128c2fab 1347 #if 1
<> 132:9baf128c2fab 1348 __ASM volatile ("mcr p15, 0, %0, c7, c5, 0" : : "r" (0));
<> 132:9baf128c2fab 1349 #else
<> 132:9baf128c2fab 1350 register uint32_t __ICIALLU __ASM("cp15:0:c7:c5:0");
<> 132:9baf128c2fab 1351 __ICIALLU = 0;
<> 132:9baf128c2fab 1352 #endif
<> 132:9baf128c2fab 1353 __DSB(); //ensure completion of the invalidation
<> 132:9baf128c2fab 1354 __ISB(); //ensure instruction fetch path sees new I cache state
<> 132:9baf128c2fab 1355 }
<> 132:9baf128c2fab 1356
<> 132:9baf128c2fab 1357 /** \brief Clean D$ by MVA
<> 132:9baf128c2fab 1358
<> 132:9baf128c2fab 1359 DCCMVAC. Data cache clean by MVA to PoC
<> 132:9baf128c2fab 1360 */
<> 132:9baf128c2fab 1361 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_clean_dcache_mva(void *va) {
<> 132:9baf128c2fab 1362 #if 1
<> 132:9baf128c2fab 1363 __ASM volatile ("mcr p15, 0, %0, c7, c10, 1" : : "r" ((uint32_t)va));
<> 132:9baf128c2fab 1364 #else
<> 132:9baf128c2fab 1365 register uint32_t __DCCMVAC __ASM("cp15:0:c7:c10:1");
<> 132:9baf128c2fab 1366 __DCCMVAC = (uint32_t)va;
<> 132:9baf128c2fab 1367 #endif
<> 132:9baf128c2fab 1368 __DMB(); //ensure the ordering of data cache maintenance operations and their effects
<> 132:9baf128c2fab 1369 }
<> 132:9baf128c2fab 1370
<> 132:9baf128c2fab 1371 /** \brief Invalidate D$ by MVA
<> 132:9baf128c2fab 1372
<> 132:9baf128c2fab 1373 DCIMVAC. Data cache invalidate by MVA to PoC
<> 132:9baf128c2fab 1374 */
<> 132:9baf128c2fab 1375 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_inv_dcache_mva(void *va) {
<> 132:9baf128c2fab 1376 #if 1
<> 132:9baf128c2fab 1377 __ASM volatile ("mcr p15, 0, %0, c7, c6, 1" : : "r" ((uint32_t)va));
<> 132:9baf128c2fab 1378 #else
<> 132:9baf128c2fab 1379 register uint32_t __DCIMVAC __ASM("cp15:0:c7:c6:1");
<> 132:9baf128c2fab 1380 __DCIMVAC = (uint32_t)va;
<> 132:9baf128c2fab 1381 #endif
<> 132:9baf128c2fab 1382 __DMB(); //ensure the ordering of data cache maintenance operations and their effects
<> 132:9baf128c2fab 1383 }
<> 132:9baf128c2fab 1384
<> 132:9baf128c2fab 1385 /** \brief Clean and Invalidate D$ by MVA
<> 132:9baf128c2fab 1386
<> 132:9baf128c2fab 1387 DCCIMVAC. Data cache clean and invalidate by MVA to PoC
<> 132:9baf128c2fab 1388 */
<> 132:9baf128c2fab 1389 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_clean_inv_dcache_mva(void *va) {
<> 132:9baf128c2fab 1390 #if 1
<> 132:9baf128c2fab 1391 __ASM volatile ("mcr p15, 0, %0, c7, c14, 1" : : "r" ((uint32_t)va));
<> 132:9baf128c2fab 1392 #else
<> 132:9baf128c2fab 1393 register uint32_t __DCCIMVAC __ASM("cp15:0:c7:c14:1");
<> 132:9baf128c2fab 1394 __DCCIMVAC = (uint32_t)va;
<> 132:9baf128c2fab 1395 #endif
<> 132:9baf128c2fab 1396 __DMB(); //ensure the ordering of data cache maintenance operations and their effects
<> 132:9baf128c2fab 1397 }
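/* Illustrative usage (not part of the original CMSIS header): the MVA operations act on a
   single cache line, so a whole buffer is maintained by stepping through it at the line
   size (32 bytes on Cortex-A9; portable code reads the line size from CTR/CCSIDR instead
   of hard-coding it). A hedged sketch for cleaning a DMA output buffer to the PoC: */
__STATIC_INLINE void __example_clean_dcache_range(void *buf, uint32_t len)
{
    uint32_t addr = (uint32_t)buf & ~31U;       /* align down to the start of the line */
    uint32_t end  = (uint32_t)buf + len;
    while (addr < end) {
        __v7_clean_dcache_mva((void *)addr);    /* clean one line */
        addr += 32U;                            /* assumed 32-byte line size */
    }
    __DSB();                                    /* make the cleans visible before starting the DMA */
}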
<> 132:9baf128c2fab 1398
<> 132:9baf128c2fab 1399 /** \brief Clean and Invalidate the entire data or unified cache
<> 132:9baf128c2fab 1400
<> 132:9baf128c2fab 1401 Generic mechanism for cleaning/invalidating the entire data or unified cache to the point of coherency.
<> 132:9baf128c2fab 1402 */
<> 132:9baf128c2fab 1403 extern void __v7_all_cache(uint32_t op);
<> 132:9baf128c2fab 1404
<> 132:9baf128c2fab 1405
<> 132:9baf128c2fab 1406 /** \brief Invalidate the whole D$
<> 132:9baf128c2fab 1407
<> 132:9baf128c2fab 1408 DCISW. Invalidate by Set/Way
<> 132:9baf128c2fab 1409 */
<> 132:9baf128c2fab 1410
<> 132:9baf128c2fab 1411 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_inv_dcache_all(void) {
<> 132:9baf128c2fab 1412 __v7_all_cache(0);
<> 132:9baf128c2fab 1413 }
<> 132:9baf128c2fab 1414
<> 132:9baf128c2fab 1415 /** \brief Clean the whole D$
<> 132:9baf128c2fab 1416
<> 132:9baf128c2fab 1417 DCCSW. Clean by Set/Way
<> 132:9baf128c2fab 1418 */
<> 132:9baf128c2fab 1419
<> 132:9baf128c2fab 1420 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_clean_dcache_all(void) {
<> 132:9baf128c2fab 1421 __v7_all_cache(1);
<> 132:9baf128c2fab 1422 }
<> 132:9baf128c2fab 1423
<> 132:9baf128c2fab 1424 /** \brief Clean and invalidate the whole D$
<> 132:9baf128c2fab 1425
<> 132:9baf128c2fab 1426 DCCISW. Clean and Invalidate by Set/Way
<> 132:9baf128c2fab 1427 */
<> 132:9baf128c2fab 1428
<> 132:9baf128c2fab 1429 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_clean_inv_dcache_all(void) {
<> 132:9baf128c2fab 1430 __v7_all_cache(2);
<> 132:9baf128c2fab 1431 }
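/* Illustrative usage (not part of the original CMSIS header): a typical bare-metal bring-up
   calls these primitives in roughly this order -- populate the translation table first,
   invalidate stale TLB and cache state, then turn the MMU, caches and branch predictor on.
   A hedged sketch, assuming the hypothetical `__example_ttb` table from the TTBR0 example
   above has already been filled in: */
__STATIC_INLINE void __example_enable_mmu_and_caches(void)
{
    __ca9u_inv_tlb_all();                       /* discard stale translations */
    __v7_inv_icache_all();                      /* discard stale instruction lines */
    __v7_inv_dcache_all();                      /* discard stale data lines (set/way) */
    __set_DACR(0x55555555U);                    /* all domains: client */
    __set_TTBR0((uint32_t)__example_ttb);       /* point table walks at the L1 table */
    __enable_mmu();                             /* SCTLR.M = 1, AFE = 1, TRE/A cleared */
    __enable_caches();                          /* SCTLR.I and SCTLR.C */
    __enable_btac();                            /* SCTLR.Z */
}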
<> 132:9baf128c2fab 1432
<> 132:9baf128c2fab 1433 #include "core_ca_mmu.h"
<> 132:9baf128c2fab 1434
<> 132:9baf128c2fab 1435 #elif (defined (__TASKING__)) /*--------------- TASKING Compiler -----------------*/
<> 132:9baf128c2fab 1436
<> 132:9baf128c2fab 1437 #error TASKING Compiler support not implemented for Cortex-A
<> 132:9baf128c2fab 1438
<> 132:9baf128c2fab 1439 #endif
<> 132:9baf128c2fab 1440
<> 132:9baf128c2fab 1441 /*@} end of CMSIS_Core_RegAccFunctions */
<> 132:9baf128c2fab 1442
<> 132:9baf128c2fab 1443
<> 132:9baf128c2fab 1444 #endif /* __CORE_CAFUNC_H__ */