The official Mbed 2 C/C++ SDK provides the software platform and libraries to build your applications.

Dependents: hello, SerialTestv11, SerialTestv12, Sierpinski, and more

mbed 2

This is the mbed 2 library. If you'd like to learn about Mbed OS, please see the mbed-os documentation.

Committer: <>
Date: Tue Mar 14 16:20:51 2017 +0000
Revision: 138:093f2bd7b9eb
Parent: 133:99b5ccf27215

Release 138 of the mbed library

Ports for Upcoming Targets


Fixes and Changes

3716: fix for issue #3715: correction in startup files for ARM and IAR, alignment of system_stm32f429xx.c files https://github.com/ARMmbed/mbed-os/pull/3716
3741: STM32 remove warning in hal_tick_32b.c file https://github.com/ARMmbed/mbed-os/pull/3741
3780: STM32L4 : Fix GPIO G port compatibility https://github.com/ARMmbed/mbed-os/pull/3780
3831: NCS36510: SPISLAVE enabled (Conflict resolved) https://github.com/ARMmbed/mbed-os/pull/3831
3836: Allow redefining nRF's PSTORAGE_NUM_OF_PAGES outside of mbed-os https://github.com/ARMmbed/mbed-os/pull/3836
3840: STM32: gpio SPEED - always set High Speed by default https://github.com/ARMmbed/mbed-os/pull/3840
3844: STM32 GPIO: Typo correction. Update comment (GPIO_IP_WITHOUT_BRR) https://github.com/ARMmbed/mbed-os/pull/3844
3850: STM32: change spi error to debug warning https://github.com/ARMmbed/mbed-os/pull/3850
3860: Define GPIO_IP_WITHOUT_BRR for xDot platform https://github.com/ARMmbed/mbed-os/pull/3860
3880: DISCO_F469NI: allow the use of CAN2 instance when CAN1 is not activated https://github.com/ARMmbed/mbed-os/pull/3880
3795: Fix pwm period calc https://github.com/ARMmbed/mbed-os/pull/3795
3828: STM32 CAN API: correct format and type https://github.com/ARMmbed/mbed-os/pull/3828
3842: TARGET_NRF: corrected spi_init() to properly handle re-initialization https://github.com/ARMmbed/mbed-os/pull/3842
3843: STM32L476xG: set APB2 clock to 80MHz (instead of 40MHz) https://github.com/ARMmbed/mbed-os/pull/3843
3879: NUCLEO_F446ZE: Add missing AnalogIn pins on PF_3, PF_5 and PF_10. https://github.com/ARMmbed/mbed-os/pull/3879
3902: Fix heap and stack size for NUCLEO_F746ZG https://github.com/ARMmbed/mbed-os/pull/3902
3829: can_write(): return error code when no tx mailboxes are available https://github.com/ARMmbed/mbed-os/pull/3829

Who changed what in which revision?

All lines below were last changed by <> in revision 133:99b5ccf27215.

/**************************************************************************//**
 * @file     core_caFunc.h
 * @brief    CMSIS Cortex-A Core Function Access Header File
 * @version  V3.10
 * @date     30 Oct 2013
 *
 * @note
 *
 ******************************************************************************/
/* Copyright (c) 2009 - 2013 ARM LIMITED

   All rights reserved.
   Redistribution and use in source and binary forms, with or without
   modification, are permitted provided that the following conditions are met:
   - Redistributions of source code must retain the above copyright
     notice, this list of conditions and the following disclaimer.
   - Redistributions in binary form must reproduce the above copyright
     notice, this list of conditions and the following disclaimer in the
     documentation and/or other materials provided with the distribution.
   - Neither the name of ARM nor the names of its contributors may be used
     to endorse or promote products derived from this software without
     specific prior written permission.
   *
   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
   AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
   IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
   ARE DISCLAIMED. IN NO EVENT SHALL COPYRIGHT HOLDERS AND CONTRIBUTORS BE
   LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
   CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
   SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
   INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
   CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
   POSSIBILITY OF SUCH DAMAGE.
   ---------------------------------------------------------------------------*/


#ifndef __CORE_CAFUNC_H__
#define __CORE_CAFUNC_H__


/* ###########################  Core Function Access  ########################### */
/** \ingroup  CMSIS_Core_FunctionInterface
    \defgroup CMSIS_Core_RegAccFunctions CMSIS Core Register Access Functions
  @{
 */

#if defined ( __CC_ARM ) /*------------------RealView Compiler -----------------*/
/* ARM armcc specific functions */

#if (__ARMCC_VERSION < 400677)
  #error "Please use ARM Compiler Toolchain V4.0.677 or later!"
#endif

#define MODE_USR 0x10
#define MODE_FIQ 0x11
#define MODE_IRQ 0x12
#define MODE_SVC 0x13
#define MODE_MON 0x16
#define MODE_ABT 0x17
#define MODE_HYP 0x1A
#define MODE_UND 0x1B
#define MODE_SYS 0x1F

/** \brief  Get APSR Register

    This function returns the content of the APSR Register.

    \return               APSR Register value
 */
__STATIC_INLINE uint32_t __get_APSR(void)
{
  register uint32_t __regAPSR __ASM("apsr");
  return(__regAPSR);
}


/** \brief  Get CPSR Register

    This function returns the content of the CPSR Register.

    \return               CPSR Register value
 */
__STATIC_INLINE uint32_t __get_CPSR(void)
{
  register uint32_t __regCPSR __ASM("cpsr");
  return(__regCPSR);
}

/** \brief  Set Stack Pointer

    This function assigns the given value to the current stack pointer.

    \param [in]  topOfStack  Stack Pointer value to set
 */
register uint32_t __regSP __ASM("sp");
__STATIC_INLINE void __set_SP(uint32_t topOfStack)
{
  __regSP = topOfStack;
}


/** \brief  Get link register

    This function returns the value of the link register.

    \return  Value of link register
 */
register uint32_t __reglr __ASM("lr");
__STATIC_INLINE uint32_t __get_LR(void)
{
  return(__reglr);
}

/** \brief  Set link register

    This function sets the value of the link register.

    \param [in]  lr  LR value to set
 */
__STATIC_INLINE void __set_LR(uint32_t lr)
{
  __reglr = lr;
}

/** \brief  Set Process Stack Pointer

    This function assigns the given value to the USR/SYS Stack Pointer (PSP).

    \param [in]  topOfProcStack  USR/SYS Stack Pointer value to set
 */
__STATIC_ASM void __set_PSP(uint32_t topOfProcStack)
{
        ARM
        PRESERVE8

        BIC     R0, R0, #7      ;ensure stack is 8-byte aligned
        MRS     R1, CPSR
        CPS     #MODE_SYS       ;no effect in USR mode
        MOV     SP, R0
        MSR     CPSR_c, R1      ;no effect in USR mode
        ISB
        BX      LR

}
/** \brief  Set User Mode

    This function changes the processor state to User Mode.
 */
__STATIC_ASM void __set_CPS_USR(void)
{
        ARM

        CPS     #MODE_USR
        BX      LR
}


/** \brief  Enable FIQ

    This function enables FIQ interrupts by clearing the F-bit in the CPSR.
    Can only be executed in Privileged modes.
 */
#define __enable_fault_irq                __enable_fiq


/** \brief  Disable FIQ

    This function disables FIQ interrupts by setting the F-bit in the CPSR.
    Can only be executed in Privileged modes.
 */
#define __disable_fault_irq               __disable_fiq


/** \brief  Get FPSCR

    This function returns the current value of the Floating Point Status/Control register.

    \return  Floating Point Status/Control register value
 */
__STATIC_INLINE uint32_t __get_FPSCR(void)
{
#if (__FPU_PRESENT == 1) && (__FPU_USED == 1)
  register uint32_t __regfpscr __ASM("fpscr");
  return(__regfpscr);
#else
  return(0);
#endif
}


/** \brief  Set FPSCR

    This function assigns the given value to the Floating Point Status/Control register.

    \param [in]  fpscr  Floating Point Status/Control value to set
 */
__STATIC_INLINE void __set_FPSCR(uint32_t fpscr)
{
#if (__FPU_PRESENT == 1) && (__FPU_USED == 1)
  register uint32_t __regfpscr __ASM("fpscr");
  __regfpscr = (fpscr);
#endif
}

/** \brief  Get FPEXC

    This function returns the current value of the Floating Point Exception Control register.

    \return  Floating Point Exception Control register value
 */
__STATIC_INLINE uint32_t __get_FPEXC(void)
{
#if (__FPU_PRESENT == 1)
  register uint32_t __regfpexc __ASM("fpexc");
  return(__regfpexc);
#else
  return(0);
#endif
}


/** \brief  Set FPEXC

    This function assigns the given value to the Floating Point Exception Control register.

    \param [in]  fpexc  Floating Point Exception Control value to set
 */
__STATIC_INLINE void __set_FPEXC(uint32_t fpexc)
{
#if (__FPU_PRESENT == 1)
  register uint32_t __regfpexc __ASM("fpexc");
  __regfpexc = (fpexc);
#endif
}

/** \brief  Get CPACR

    This function returns the current value of the Coprocessor Access Control register.

    \return  Coprocessor Access Control register value
 */
__STATIC_INLINE uint32_t __get_CPACR(void)
{
  register uint32_t __regCPACR __ASM("cp15:0:c1:c0:2");
  return __regCPACR;
}

/** \brief  Set CPACR

    This function assigns the given value to the Coprocessor Access Control register.

    \param [in]  cpacr  Coprocessor Access Control value to set
 */
__STATIC_INLINE void __set_CPACR(uint32_t cpacr)
{
  register uint32_t __regCPACR __ASM("cp15:0:c1:c0:2");
  __regCPACR = cpacr;
  __ISB();
}

/** \brief  Get CBAR

    This function returns the value of the Configuration Base Address register.

    \return  Configuration Base Address register value
 */
__STATIC_INLINE uint32_t __get_CBAR() {
  register uint32_t __regCBAR __ASM("cp15:4:c15:c0:0");
  return(__regCBAR);
}

/** \brief  Get TTBR0

    This function returns the value of the Translation Table Base Register 0.

    \return  Translation Table Base Register 0 value
 */
__STATIC_INLINE uint32_t __get_TTBR0() {
  register uint32_t __regTTBR0 __ASM("cp15:0:c2:c0:0");
  return(__regTTBR0);
}

/** \brief  Set TTBR0

    This function assigns the given value to the Translation Table Base Register 0.

    \param [in]  ttbr0  Translation Table Base Register 0 value to set
 */
__STATIC_INLINE void __set_TTBR0(uint32_t ttbr0) {
  register uint32_t __regTTBR0 __ASM("cp15:0:c2:c0:0");
  __regTTBR0 = ttbr0;
  __ISB();
}

/** \brief  Get DACR

    This function returns the value of the Domain Access Control Register.

    \return  Domain Access Control Register value
 */
__STATIC_INLINE uint32_t __get_DACR() {
  register uint32_t __regDACR __ASM("cp15:0:c3:c0:0");
  return(__regDACR);
}

/** \brief  Set DACR

    This function assigns the given value to the Domain Access Control Register.

    \param [in]  dacr  Domain Access Control Register value to set
 */
__STATIC_INLINE void __set_DACR(uint32_t dacr) {
  register uint32_t __regDACR __ASM("cp15:0:c3:c0:0");
  __regDACR = dacr;
  __ISB();
}

/******************************** Cache and BTAC enable ****************************************************/

/** \brief  Set SCTLR

    This function assigns the given value to the System Control Register.

    \param [in]  sctlr  System Control Register value to set
 */
__STATIC_INLINE void __set_SCTLR(uint32_t sctlr)
{
  register uint32_t __regSCTLR __ASM("cp15:0:c1:c0:0");
  __regSCTLR = sctlr;
}

/** \brief  Get SCTLR

    This function returns the value of the System Control Register.

    \return  System Control Register value
 */
__STATIC_INLINE uint32_t __get_SCTLR() {
  register uint32_t __regSCTLR __ASM("cp15:0:c1:c0:0");
  return(__regSCTLR);
}

/** \brief  Enable Caches

    Enable Caches
 */
__STATIC_INLINE void __enable_caches(void) {
  // Set I bit 12 to enable I Cache
  // Set C bit  2 to enable D Cache
  __set_SCTLR( __get_SCTLR() | (1 << 12) | (1 << 2));
}

/** \brief  Disable Caches

    Disable Caches
 */
__STATIC_INLINE void __disable_caches(void) {
  // Clear I bit 12 to disable I Cache
  // Clear C bit  2 to disable D Cache
  __set_SCTLR( __get_SCTLR() & ~(1 << 12) & ~(1 << 2));
  __ISB();
}

/** \brief  Enable BTAC

    Enable BTAC
 */
__STATIC_INLINE void __enable_btac(void) {
  // Set Z bit 11 to enable branch prediction
  __set_SCTLR( __get_SCTLR() | (1 << 11));
  __ISB();
}

/** \brief  Disable BTAC

    Disable BTAC
 */
__STATIC_INLINE void __disable_btac(void) {
  // Clear Z bit 11 to disable branch prediction
  __set_SCTLR( __get_SCTLR() & ~(1 << 11));
}


/** \brief  Enable MMU

    Enable MMU
 */
__STATIC_INLINE void __enable_mmu(void) {
  // Set M bit 0 to enable the MMU
  // Set AFE bit to enable simplified access permissions model
  // Clear TRE bit to disable TEX remap and A bit to disable strict alignment fault checking
  __set_SCTLR( (__get_SCTLR() & ~(1 << 28) & ~(1 << 1)) | 1 | (1 << 29));
  __ISB();
}

/** \brief  Disable MMU

    Disable MMU
 */
__STATIC_INLINE void __disable_mmu(void) {
  // Clear M bit 0 to disable the MMU
  __set_SCTLR( __get_SCTLR() & ~1);
  __ISB();
}
/******************************** TLB maintenance operations ************************************************/
/** \brief  Invalidate the whole TLB

    TLBIALL. Invalidate the whole TLB
 */

__STATIC_INLINE void __ca9u_inv_tlb_all(void) {
  register uint32_t __TLBIALL __ASM("cp15:0:c8:c7:0");
  __TLBIALL = 0;
  __DSB();
  __ISB();
}

/******************************** BTB maintenance operations ************************************************/
/** \brief  Invalidate entire branch predictor array

    BPIALL. Branch Predictor Invalidate All.
 */

__STATIC_INLINE void __v7_inv_btac(void) {
  register uint32_t __BPIALL __ASM("cp15:0:c7:c5:6");
  __BPIALL = 0;
  __DSB();     //ensure completion of the invalidation
  __ISB();     //ensure instruction fetch path sees new state
}


/******************************** L1 cache operations ******************************************************/

/** \brief  Invalidate the whole I$

    ICIALLU. Instruction Cache Invalidate All to PoU
 */
__STATIC_INLINE void __v7_inv_icache_all(void) {
  register uint32_t __ICIALLU __ASM("cp15:0:c7:c5:0");
  __ICIALLU = 0;
  __DSB();     //ensure completion of the invalidation
  __ISB();     //ensure instruction fetch path sees new I cache state
}

/** \brief  Clean D$ by MVA

    DCCMVAC. Data cache clean by MVA to PoC
 */
__STATIC_INLINE void __v7_clean_dcache_mva(void *va) {
  register uint32_t __DCCMVAC __ASM("cp15:0:c7:c10:1");
  __DCCMVAC = (uint32_t)va;
  __DMB();     //ensure the ordering of data cache maintenance operations and their effects
}

/** \brief  Invalidate D$ by MVA

    DCIMVAC. Data cache invalidate by MVA to PoC
 */
__STATIC_INLINE void __v7_inv_dcache_mva(void *va) {
  register uint32_t __DCIMVAC __ASM("cp15:0:c7:c6:1");
  __DCIMVAC = (uint32_t)va;
  __DMB();     //ensure the ordering of data cache maintenance operations and their effects
}

/** \brief  Clean and Invalidate D$ by MVA

    DCCIMVAC. Data cache clean and invalidate by MVA to PoC
 */
__STATIC_INLINE void __v7_clean_inv_dcache_mva(void *va) {
  register uint32_t __DCCIMVAC __ASM("cp15:0:c7:c14:1");
  __DCCIMVAC = (uint32_t)va;
  __DMB();     //ensure the ordering of data cache maintenance operations and their effects
}

/** \brief  Clean and Invalidate the entire data or unified cache

    Generic mechanism for cleaning/invalidating the entire data or unified cache to the point of coherency.
 */
#pragma push
#pragma arm
__STATIC_ASM void __v7_all_cache(uint32_t op) {
        ARM

        PUSH    {R4-R11}

        MRC     p15, 1, R6, c0, c0, 1      // Read CLIDR
        ANDS    R3, R6, #0x07000000        // Extract coherency level
        MOV     R3, R3, LSR #23            // Total cache levels << 1
        BEQ     Finished                   // If 0, no need to clean

        MOV     R10, #0                    // R10 holds current cache level << 1
Loop1   ADD     R2, R10, R10, LSR #1       // R2 holds cache "Set" position
        MOV     R1, R6, LSR R2             // Bottom 3 bits are the Cache-type for this level
        AND     R1, R1, #7                 // Isolate those lower 3 bits
        CMP     R1, #2
        BLT     Skip                       // No cache or only instruction cache at this level

        MCR     p15, 2, R10, c0, c0, 0     // Write the Cache Size selection register
        ISB                                // ISB to sync the change to the CacheSizeID reg
        MRC     p15, 1, R1, c0, c0, 0      // Reads current Cache Size ID register
        AND     R2, R1, #7                 // Extract the line length field
        ADD     R2, R2, #4                 // Add 4 for the line length offset (log2 16 bytes)
        LDR     R4, =0x3FF
        ANDS    R4, R4, R1, LSR #3         // R4 is the max number on the way size (right aligned)
        CLZ     R5, R4                     // R5 is the bit position of the way size increment
        LDR     R7, =0x7FFF
        ANDS    R7, R7, R1, LSR #13        // R7 is the max number of the index size (right aligned)

Loop2   MOV     R9, R4                     // R9 working copy of the max way size (right aligned)

Loop3   ORR     R11, R10, R9, LSL R5       // Factor in the Way number and cache number into R11
        ORR     R11, R11, R7, LSL R2       // Factor in the Set number
        CMP     R0, #0
        BNE     Dccsw
        MCR     p15, 0, R11, c7, c6, 2     // DCISW. Invalidate by Set/Way
        B       cont
Dccsw   CMP     R0, #1
        BNE     Dccisw
        MCR     p15, 0, R11, c7, c10, 2    // DCCSW. Clean by Set/Way
        B       cont
Dccisw  MCR     p15, 0, R11, c7, c14, 2    // DCCISW. Clean and Invalidate by Set/Way
cont    SUBS    R9, R9, #1                 // Decrement the Way number
        BGE     Loop3
        SUBS    R7, R7, #1                 // Decrement the Set number
        BGE     Loop2
Skip    ADD     R10, R10, #2               // Increment the cache number
        CMP     R3, R10
        BGT     Loop1

Finished
        DSB
        POP     {R4-R11}
        BX      lr

}
#pragma pop
<> 133:99b5ccf27215 540
<> 133:99b5ccf27215 541
<> 133:99b5ccf27215 542 /** \brief Invalidate the whole D$
<> 133:99b5ccf27215 543
<> 133:99b5ccf27215 544 DCISW. Invalidate by Set/Way
<> 133:99b5ccf27215 545 */
<> 133:99b5ccf27215 546
<> 133:99b5ccf27215 547 __STATIC_INLINE void __v7_inv_dcache_all(void) {
<> 133:99b5ccf27215 548 __v7_all_cache(0);
<> 133:99b5ccf27215 549 }
<> 133:99b5ccf27215 550
<> 133:99b5ccf27215 551 /** \brief Clean the whole D$
<> 133:99b5ccf27215 552
<> 133:99b5ccf27215 553 DCCSW. Clean by Set/Way
<> 133:99b5ccf27215 554 */
<> 133:99b5ccf27215 555
<> 133:99b5ccf27215 556 __STATIC_INLINE void __v7_clean_dcache_all(void) {
<> 133:99b5ccf27215 557 __v7_all_cache(1);
<> 133:99b5ccf27215 558 }
<> 133:99b5ccf27215 559
<> 133:99b5ccf27215 560 /** \brief Clean and invalidate the whole D$
<> 133:99b5ccf27215 561
<> 133:99b5ccf27215 562 DCCISW. Clean and Invalidate by Set/Way
<> 133:99b5ccf27215 563 */
<> 133:99b5ccf27215 564
<> 133:99b5ccf27215 565 __STATIC_INLINE void __v7_clean_inv_dcache_all(void) {
<> 133:99b5ccf27215 566 __v7_all_cache(2);
<> 133:99b5ccf27215 567 }
<> 133:99b5ccf27215 568
<> 133:99b5ccf27215 569 #include "core_ca_mmu.h"
<> 133:99b5ccf27215 570
<> 133:99b5ccf27215 571 #elif (defined (__ICCARM__)) /*---------------- ICC Compiler ---------------------*/
<> 133:99b5ccf27215 572
<> 133:99b5ccf27215 573 #define __inline inline
<> 133:99b5ccf27215 574
<> 133:99b5ccf27215 575 inline static uint32_t __disable_irq_iar() {
<> 133:99b5ccf27215 576 int irq_dis = __get_CPSR() & 0x80; // 7bit CPSR.I
<> 133:99b5ccf27215 577 __disable_irq();
<> 133:99b5ccf27215 578 return irq_dis;
<> 133:99b5ccf27215 579 }

#define MODE_USR 0x10
#define MODE_FIQ 0x11
#define MODE_IRQ 0x12
#define MODE_SVC 0x13
#define MODE_MON 0x16
#define MODE_ABT 0x17
#define MODE_HYP 0x1A
#define MODE_UND 0x1B
#define MODE_SYS 0x1F

/** \brief Set Process Stack Pointer

    This function assigns the given value to the USR/SYS Stack Pointer (PSP).

    \param [in] topOfProcStack USR/SYS Stack Pointer value to set
 */
// from rt_CMSIS.c
__arm static inline void __set_PSP(uint32_t topOfProcStack) {
    __asm(
    " ARM\n"
    //" PRESERVE8\n"

    " BIC R0, R0, #7 ;ensure stack is 8-byte aligned \n"
    " MRS R1, CPSR \n"
    " CPS #0x1F ;no effect in USR mode \n" // MODE_SYS
    " MOV SP, R0 \n"
    " MSR CPSR_c, R1 ;no effect in USR mode \n"
    " ISB \n"
    " BX LR \n");
}

/** \brief Set User Mode

    This function changes the processor state to User Mode
 */
// from rt_CMSIS.c
__arm static inline void __set_CPS_USR(void) {
    __asm(
    " ARM \n"

    " CPS #0x10 \n" // MODE_USR
    " BX LR\n");
}

/** \brief Set TTBR0

    This function assigns the given value to the Translation Table Base Register 0.

    \param [in] ttbr0 Translation Table Base Register 0 value to set
 */
// from mmu_Renesas_RZ_A1.c
__STATIC_INLINE void __set_TTBR0(uint32_t ttbr0) {
    __MCR(15, 0, ttbr0, 2, 0, 0); // reg to cp15
    __ISB();
}

/** \brief Set DACR

    This function assigns the given value to the Domain Access Control Register.

    \param [in] dacr Domain Access Control Register value to set
 */
// from mmu_Renesas_RZ_A1.c
__STATIC_INLINE void __set_DACR(uint32_t dacr) {
    __MCR(15, 0, dacr, 3, 0, 0); // reg to cp15
    __ISB();
}


/******************************** Cache and BTAC enable ****************************************************/
/** \brief Set SCTLR

    This function assigns the given value to the System Control Register.

    \param [in] sctlr System Control Register value to set
 */
// from __enable_mmu()
__STATIC_INLINE void __set_SCTLR(uint32_t sctlr) {
    __MCR(15, 0, sctlr, 1, 0, 0); // reg to cp15
}

/** \brief Get SCTLR

    This function returns the value of the System Control Register.

    \return System Control Register value
 */
// from __enable_mmu()
__STATIC_INLINE uint32_t __get_SCTLR() {
    uint32_t __regSCTLR = __MRC(15, 0, 1, 0, 0);
    return __regSCTLR;
}

/** \brief Enable Caches

    Enable Caches
 */
// from system_Renesas_RZ_A1.c
__STATIC_INLINE void __enable_caches(void) {
    __set_SCTLR( __get_SCTLR() | (1 << 12) | (1 << 2));
}

/** \brief Enable BTAC

    Enable BTAC
 */
// from system_Renesas_RZ_A1.c
__STATIC_INLINE void __enable_btac(void) {
    __set_SCTLR( __get_SCTLR() | (1 << 11));
    __ISB();
}

/** \brief Enable MMU

    Enable MMU
 */
// from system_Renesas_RZ_A1.c
__STATIC_INLINE void __enable_mmu(void) {
    // Set M bit 0 to enable the MMU
    // Set AFE bit to enable simplified access permissions model
    // Clear TRE bit to disable TEX remap and A bit to disable strict alignment fault checking
    __set_SCTLR( (__get_SCTLR() & ~(1 << 28) & ~(1 << 1)) | 1 | (1 << 29));
    __ISB();
}

/******************************** TLB maintenance operations ************************************************/
/** \brief Invalidate the whole tlb

    TLBIALL. Invalidate the whole tlb
 */
// from system_Renesas_RZ_A1.c
__STATIC_INLINE void __ca9u_inv_tlb_all(void) {
    uint32_t val = 0;
    __MCR(15, 0, val, 8, 7, 0); // reg to cp15
    __MCR(15, 0, val, 8, 6, 0); // reg to cp15
    __MCR(15, 0, val, 8, 5, 0); // reg to cp15
    __DSB();
    __ISB();
}

/******************************** BTB maintenance operations ************************************************/
/** \brief Invalidate entire branch predictor array

    BPIALL. Branch Predictor Invalidate All.
 */
// from system_Renesas_RZ_A1.c
__STATIC_INLINE void __v7_inv_btac(void) {
    uint32_t val = 0;
    __MCR(15, 0, val, 7, 5, 6); // reg to cp15
    __DSB();     //ensure completion of the invalidation
    __ISB();     //ensure instruction fetch path sees new state
}


/******************************** L1 cache operations ******************************************************/

/** \brief Invalidate the whole I$

    ICIALLU. Instruction Cache Invalidate All to PoU
 */
// from system_Renesas_RZ_A1.c
__STATIC_INLINE void __v7_inv_icache_all(void) {
    uint32_t val = 0;
    __MCR(15, 0, val, 7, 5, 0); // reg to cp15
    __DSB();     //ensure completion of the invalidation
    __ISB();     //ensure instruction fetch path sees new I cache state
}

// from __v7_inv_dcache_all()
__arm static inline void __v7_all_cache(uint32_t op) {
    __asm(
    " ARM \n"

    " PUSH {R4-R11} \n"

    " MRC p15, 1, R6, c0, c0, 1\n"       // Read CLIDR
    " ANDS R3, R6, #0x07000000\n"        // Extract coherency level
    " MOV R3, R3, LSR #23\n"             // Total cache levels << 1
    " BEQ Finished\n"                    // If 0, no need to clean

    " MOV R10, #0\n"                     // R10 holds current cache level << 1
    "Loop1: ADD R2, R10, R10, LSR #1\n"  // R2 holds cache "Set" position
    " MOV R1, R6, LSR R2 \n"             // Bottom 3 bits are the Cache-type for this level
    " AND R1, R1, #7 \n"                 // Isolate those lower 3 bits
    " CMP R1, #2 \n"
    " BLT Skip \n"                       // No cache or only instruction cache at this level

    " MCR p15, 2, R10, c0, c0, 0 \n"     // Write the Cache Size selection register
    " ISB \n"                            // ISB to sync the change to the CacheSizeID reg
    " MRC p15, 1, R1, c0, c0, 0 \n"      // Reads current Cache Size ID register
    " AND R2, R1, #7 \n"                 // Extract the line length field
    " ADD R2, R2, #4 \n"                 // Add 4 for the line length offset (log2 16 bytes)
    " movw R4, #0x3FF \n"
    " ANDS R4, R4, R1, LSR #3 \n"        // R4 is the max number on the way size (right aligned)
    " CLZ R5, R4 \n"                     // R5 is the bit position of the way size increment
    " movw R7, #0x7FFF \n"
    " ANDS R7, R7, R1, LSR #13 \n"       // R7 is the max number of the index size (right aligned)

    "Loop2: MOV R9, R4 \n"               // R9 working copy of the max way size (right aligned)

    "Loop3: ORR R11, R10, R9, LSL R5 \n" // Factor in the Way number and cache number into R11
    " ORR R11, R11, R7, LSL R2 \n"       // Factor in the Set number
    " CMP R0, #0 \n"
    " BNE Dccsw \n"
    " MCR p15, 0, R11, c7, c6, 2 \n"     // DCISW. Invalidate by Set/Way
    " B cont \n"
    "Dccsw: CMP R0, #1 \n"
    " BNE Dccisw \n"
    " MCR p15, 0, R11, c7, c10, 2 \n"    // DCCSW. Clean by Set/Way
    " B cont \n"
    "Dccisw: MCR p15, 0, R11, c7, c14, 2 \n" // DCCISW, Clean and Invalidate by Set/Way
    "cont: SUBS R9, R9, #1 \n"           // Decrement the Way number
    " BGE Loop3 \n"
    " SUBS R7, R7, #1 \n"                // Decrement the Set number
    " BGE Loop2 \n"
    "Skip: ADD R10, R10, #2 \n"          // increment the cache number
    " CMP R3, R10 \n"
    " BGT Loop1 \n"

    "Finished: \n"
    " DSB \n"
    " POP {R4-R11} \n"
    " BX lr \n" );
}
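The set/way operand that the assembly loop above builds in R11 can be sketched on the host in plain C. This is a hypothetical illustration, not part of the SDK: `clz32` and `set_way_operand` are names introduced here. It mirrors the register roles in the assembly — the way number is placed at bit `CLZ(max_way)` (R5), the set number at bit `log2(line bytes)` (R2), and the level-times-two value (R10) fills bits [3:1]:

```c
#include <assert.h>
#include <stdint.h>

/* Count leading zeros, as the CLZ instruction does (R5 in the loop). */
static uint32_t clz32(uint32_t x) {
    uint32_t n = 0;
    if (x == 0) return 32;
    while (!(x & 0x80000000u)) { x <<= 1; n++; }
    return n;
}

/* Build the DCISW/DCCSW/DCCISW operand for one (level, way, set) triple.
 * level2 is the cache level already shifted left by one (R10 in the asm);
 * ccsidr is the Cache Size ID register value for that level (R1). */
static uint32_t set_way_operand(uint32_t level2, uint32_t ccsidr,
                                uint32_t way, uint32_t set) {
    uint32_t line_log2 = (ccsidr & 7u) + 4u;      /* R2: line length field + 4 */
    uint32_t max_way   = (ccsidr >> 3) & 0x3FFu;  /* R4: ways - 1 */
    uint32_t way_shift = clz32(max_way);          /* R5: way field position */
    return level2 | (way << way_shift) | (set << line_log2);
}
```

For example, a 32 KB 4-way cache with 32-byte lines has CCSIDR fields line=1, ways=3, sets=255, so way 3 of set 255 at level 0 encodes as `(3 << 30) | (255 << 5)` — the same value the loop would write to cp15 for that iteration.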

/** \brief Invalidate the whole D$

    DCISW. Invalidate by Set/Way
 */
// from system_Renesas_RZ_A1.c
__STATIC_INLINE void __v7_inv_dcache_all(void) {
    __v7_all_cache(0);
}

/** \brief Clean the whole D$

    DCCSW. Clean by Set/Way
 */
__STATIC_INLINE void __v7_clean_dcache_all(void) {
    __v7_all_cache(1);
}

/** \brief Clean and invalidate the whole D$

    DCCISW. Clean and Invalidate by Set/Way
 */
__STATIC_INLINE void __v7_clean_inv_dcache_all(void) {
    __v7_all_cache(2);
}

/** \brief Clean and Invalidate D$ by MVA

    DCCIMVAC. Data cache clean and invalidate by MVA to PoC
 */
__STATIC_INLINE void __v7_clean_inv_dcache_mva(void *va) {
    __MCR(15, 0, (uint32_t)va, 7, 14, 1);
    __DMB();
}

#include "core_ca_mmu.h"

#elif (defined (__GNUC__)) /*------------------ GNU Compiler ---------------------*/
/* GNU gcc specific functions */

#define MODE_USR 0x10
#define MODE_FIQ 0x11
#define MODE_IRQ 0x12
#define MODE_SVC 0x13
#define MODE_MON 0x16
#define MODE_ABT 0x17
#define MODE_HYP 0x1A
#define MODE_UND 0x1B
#define MODE_SYS 0x1F


/** \brief Enable IRQ Interrupts

    This function enables IRQ interrupts by clearing the I-bit in the CPSR.
    Can only be executed in Privileged modes.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __enable_irq(void)
{
    __ASM volatile ("cpsie i");
}

/** \brief Disable IRQ Interrupts

    This function disables IRQ interrupts by setting the I-bit in the CPSR.
    Can only be executed in Privileged modes.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __disable_irq(void)
{
    uint32_t result;

    __ASM volatile ("mrs %0, cpsr" : "=r" (result));
    __ASM volatile ("cpsid i");
    return(result & 0x80);
}


/** \brief Get APSR Register

    This function returns the content of the APSR Register.

    \return APSR Register value
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_APSR(void)
{
#if 1
    register uint32_t __regAPSR;
    __ASM volatile ("mrs %0, apsr" : "=r" (__regAPSR) );
#else
    register uint32_t __regAPSR __ASM("apsr");
#endif
    return(__regAPSR);
}


/** \brief Get CPSR Register

    This function returns the content of the CPSR Register.

    \return CPSR Register value
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_CPSR(void)
{
#if 1
    register uint32_t __regCPSR;
    __ASM volatile ("mrs %0, cpsr" : "=r" (__regCPSR));
#else
    register uint32_t __regCPSR __ASM("cpsr");
#endif
    return(__regCPSR);
}

#if 0
/** \brief Set Stack Pointer

    This function assigns the given value to the current stack pointer.

    \param [in] topOfStack Stack Pointer value to set
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __set_SP(uint32_t topOfStack)
{
    register uint32_t __regSP __ASM("sp");
    __regSP = topOfStack;
}
#endif

/** \brief Get link register

    This function returns the value of the link register

    \return Value of link register
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_LR(void)
{
    register uint32_t __reglr __ASM("lr");
    return(__reglr);
}

#if 0
/** \brief Set link register

    This function sets the value of the link register

    \param [in] lr LR value to set
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __set_LR(uint32_t lr)
{
    register uint32_t __reglr __ASM("lr");
    __reglr = lr;
}
#endif

/** \brief Set Process Stack Pointer

    This function assigns the given value to the USR/SYS Stack Pointer (PSP).

    \param [in] topOfProcStack USR/SYS Stack Pointer value to set
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __set_PSP(uint32_t topOfProcStack)
{
    __asm__ volatile (
    ".ARM;"
    ".eabi_attribute Tag_ABI_align8_preserved,1;"

    "BIC R0, R0, #7;" /* ;ensure stack is 8-byte aligned */
    "MRS R1, CPSR;"
    "CPS %0;"         /* ;no effect in USR mode */
    "MOV SP, R0;"
    "MSR CPSR_c, R1;" /* ;no effect in USR mode */
    "ISB;"
    //"BX LR;"
    :
    : "i"(MODE_SYS)
    : "r0", "r1");
    return;
}

/** \brief Set User Mode

    This function changes the processor state to User Mode
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __set_CPS_USR(void)
{
    __asm__ volatile (
    ".ARM;"

    "CPS %0;"
    //"BX LR;"
    :
    : "i"(MODE_USR)
    : );
    return;
}


/** \brief Enable FIQ

    This function enables FIQ interrupts by clearing the F-bit in the CPSR.
    Can only be executed in Privileged modes.
 */
#define __enable_fault_irq() __asm__ volatile ("cpsie f")


/** \brief Disable FIQ

    This function disables FIQ interrupts by setting the F-bit in the CPSR.
    Can only be executed in Privileged modes.
 */
#define __disable_fault_irq() __asm__ volatile ("cpsid f")


/** \brief Get FPSCR

    This function returns the current value of the Floating Point Status/Control register.

    \return Floating Point Status/Control register value
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_FPSCR(void)
{
#if (__FPU_PRESENT == 1) && (__FPU_USED == 1)
#if 1
    uint32_t result;

    __ASM volatile ("vmrs %0, fpscr" : "=r" (result) );
    return (result);
#else
    register uint32_t __regfpscr __ASM("fpscr");
    return(__regfpscr);
#endif
#else
    return(0);
#endif
}


/** \brief Set FPSCR

    This function assigns the given value to the Floating Point Status/Control register.

    \param [in] fpscr Floating Point Status/Control value to set
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __set_FPSCR(uint32_t fpscr)
{
#if (__FPU_PRESENT == 1) && (__FPU_USED == 1)
#if 1
    __ASM volatile ("vmsr fpscr, %0" : : "r" (fpscr) );
#else
    register uint32_t __regfpscr __ASM("fpscr");
    __regfpscr = (fpscr);
#endif
#endif
}

/** \brief Get FPEXC

    This function returns the current value of the Floating Point Exception Control register.

    \return Floating Point Exception Control register value
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_FPEXC(void)
{
#if (__FPU_PRESENT == 1)
#if 1
    uint32_t result;

    __ASM volatile ("vmrs %0, fpexc" : "=r" (result));
    return (result);
#else
    register uint32_t __regfpexc __ASM("fpexc");
    return(__regfpexc);
#endif
#else
    return(0);
#endif
}


/** \brief Set FPEXC

    This function assigns the given value to the Floating Point Exception Control register.

    \param [in] fpexc Floating Point Exception Control value to set
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __set_FPEXC(uint32_t fpexc)
{
#if (__FPU_PRESENT == 1)
#if 1
    __ASM volatile ("vmsr fpexc, %0" : : "r" (fpexc));
#else
    register uint32_t __regfpexc __ASM("fpexc");
    __regfpexc = (fpexc);
#endif
#endif
}

/** \brief Get CPACR

    This function returns the current value of the Coprocessor Access Control register.

    \return Coprocessor Access Control register value
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_CPACR(void)
{
#if 1
    register uint32_t __regCPACR;
    __ASM volatile ("mrc p15, 0, %0, c1, c0, 2" : "=r" (__regCPACR));
#else
    register uint32_t __regCPACR __ASM("cp15:0:c1:c0:2");
#endif
    return __regCPACR;
}

/** \brief Set CPACR

    This function assigns the given value to the Coprocessor Access Control register.

    \param [in] cpacr Coprocessor Access Control value to set
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __set_CPACR(uint32_t cpacr)
{
#if 1
    __ASM volatile ("mcr p15, 0, %0, c1, c0, 2" : : "r" (cpacr));
#else
    register uint32_t __regCPACR __ASM("cp15:0:c1:c0:2");
    __regCPACR = cpacr;
#endif
    __ISB();
}

/** \brief Get CBAR

    This function returns the value of the Configuration Base Address register.

    \return Configuration Base Address register value
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_CBAR() {
#if 1
    register uint32_t __regCBAR;
    __ASM volatile ("mrc p15, 4, %0, c15, c0, 0" : "=r" (__regCBAR));
#else
    register uint32_t __regCBAR __ASM("cp15:4:c15:c0:0");
#endif
    return(__regCBAR);
}

/** \brief Get TTBR0

    This function returns the value of the Translation Table Base Register 0.

    \return Translation Table Base Register 0 value
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_TTBR0() {
#if 1
    register uint32_t __regTTBR0;
    __ASM volatile ("mrc p15, 0, %0, c2, c0, 0" : "=r" (__regTTBR0));
#else
    register uint32_t __regTTBR0 __ASM("cp15:0:c2:c0:0");
#endif
    return(__regTTBR0);
}

/** \brief Set TTBR0

    This function assigns the given value to the Translation Table Base Register 0.

    \param [in] ttbr0 Translation Table Base Register 0 value to set
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __set_TTBR0(uint32_t ttbr0) {
#if 1
    __ASM volatile ("mcr p15, 0, %0, c2, c0, 0" : : "r" (ttbr0));
#else
    register uint32_t __regTTBR0 __ASM("cp15:0:c2:c0:0");
    __regTTBR0 = ttbr0;
#endif
    __ISB();
}

/** \brief Get DACR

    This function returns the value of the Domain Access Control Register.

    \return Domain Access Control Register value
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_DACR() {
#if 1
    register uint32_t __regDACR;
    __ASM volatile ("mrc p15, 0, %0, c3, c0, 0" : "=r" (__regDACR));
#else
    register uint32_t __regDACR __ASM("cp15:0:c3:c0:0");
#endif
    return(__regDACR);
}

/** \brief Set DACR

    This function assigns the given value to the Domain Access Control Register.

    \param [in] dacr Domain Access Control Register value to set
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __set_DACR(uint32_t dacr) {
#if 1
    __ASM volatile ("mcr p15, 0, %0, c3, c0, 0" : : "r" (dacr));
#else
    register uint32_t __regDACR __ASM("cp15:0:c3:c0:0");
    __regDACR = dacr;
#endif
    __ISB();
}

/******************************** Cache and BTAC enable ****************************************************/

/** \brief Set SCTLR

    This function assigns the given value to the System Control Register.

    \param [in] sctlr System Control Register value to set
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __set_SCTLR(uint32_t sctlr)
{
#if 1
    __ASM volatile ("mcr p15, 0, %0, c1, c0, 0" : : "r" (sctlr));
#else
    register uint32_t __regSCTLR __ASM("cp15:0:c1:c0:0");
    __regSCTLR = sctlr;
#endif
}

/** \brief Get SCTLR

    This function returns the value of the System Control Register.

    \return System Control Register value
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_SCTLR() {
#if 1
    register uint32_t __regSCTLR;
    __ASM volatile ("mrc p15, 0, %0, c1, c0, 0" : "=r" (__regSCTLR));
#else
    register uint32_t __regSCTLR __ASM("cp15:0:c1:c0:0");
#endif
    return(__regSCTLR);
}

/** \brief Enable Caches

    Enable Caches
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __enable_caches(void) {
    // Set I bit 12 to enable I Cache
    // Set C bit 2 to enable D Cache
    __set_SCTLR( __get_SCTLR() | (1 << 12) | (1 << 2));
}
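The SCTLR read-modify-write expressions used by the enable/disable helpers are pure bit arithmetic, so they can be checked off target on a plain variable. A minimal host-side sketch (names `sctlr_enable_caches` and `sctlr_enable_mmu` are introduced here, not SDK API), mirroring the exact masks from `__enable_caches()` and `__enable_mmu()`:

```c
#include <assert.h>
#include <stdint.h>

/* ARMv7-A SCTLR bit positions used by these helpers:
 * M=0 (MMU), A=1 (alignment check), C=2 (D-cache),
 * Z=11 (branch prediction), I=12 (I-cache), TRE=28, AFE=29. */

/* Same expression as __enable_caches(): set I (bit 12) and C (bit 2). */
static uint32_t sctlr_enable_caches(uint32_t sctlr) {
    return sctlr | (1u << 12) | (1u << 2);
}

/* Same expression as __enable_mmu(): clear TRE (bit 28) and A (bit 1),
 * set M (bit 0) and AFE (bit 29). */
static uint32_t sctlr_enable_mmu(uint32_t sctlr) {
    return (sctlr & ~(1u << 28) & ~(1u << 1)) | 1u | (1u << 29);
}
```

Starting from a register value of 0, enabling the caches yields 0x1004 (bits 12 and 2), which is a quick way to confirm the masks before they are applied to the real cp15 register.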

/** \brief  Disable Caches

    Disable the instruction and data caches.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __disable_caches(void) {
    // Clear I bit 12 to disable I Cache
    // Clear C bit  2 to disable D Cache
    __set_SCTLR( __get_SCTLR() & ~(1 << 12) & ~(1 << 2));
    __ISB();
}

/** \brief  Enable BTAC

    Enable the Branch Target Address Cache (branch prediction).
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __enable_btac(void) {
    // Set Z bit 11 to enable branch prediction
    __set_SCTLR( __get_SCTLR() | (1 << 11));
    __ISB();
}

/** \brief  Disable BTAC

    Disable the Branch Target Address Cache (branch prediction).
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __disable_btac(void) {
    // Clear Z bit 11 to disable branch prediction
    __set_SCTLR( __get_SCTLR() & ~(1 << 11));
}


/** \brief  Enable MMU

    Enable the Memory Management Unit.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __enable_mmu(void) {
    // Set M bit 0 to enable the MMU
    // Set AFE bit 29 to enable the simplified access permissions model
    // Clear TRE bit 28 to disable TEX remap and A bit 1 to disable strict alignment fault checking
    __set_SCTLR( (__get_SCTLR() & ~(1 << 28) & ~(1 << 1)) | 1 | (1 << 29));
    __ISB();
}

/** \brief  Disable MMU

    Disable the Memory Management Unit.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __disable_mmu(void) {
    // Clear M bit 0 to disable the MMU
    __set_SCTLR( __get_SCTLR() & ~1);
    __ISB();
}
/******************************** TLB maintenance operations ************************************************/
/** \brief  Invalidate the whole TLB

    TLBIALL. Invalidate the whole unified TLB.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __ca9u_inv_tlb_all(void) {
#if 1
    __ASM volatile ("mcr p15, 0, %0, c8, c7, 0" : : "r" (0));
#else
    register uint32_t __TLBIALL __ASM("cp15:0:c8:c7:0");
    __TLBIALL = 0;
#endif
    __DSB();
    __ISB();
}

/******************************** BTB maintenance operations ************************************************/
/** \brief  Invalidate entire branch predictor array

    BPIALL. Branch Predictor Invalidate All.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_inv_btac(void) {
#if 1
    __ASM volatile ("mcr p15, 0, %0, c7, c5, 6" : : "r" (0));
#else
    register uint32_t __BPIALL __ASM("cp15:0:c7:c5:6");
    __BPIALL = 0;
#endif
    __DSB();     // ensure completion of the invalidation
    __ISB();     // ensure the instruction fetch path sees the new state
}


/******************************** L1 cache operations ******************************************************/

/** \brief  Invalidate the whole I$

    ICIALLU. Instruction Cache Invalidate All to PoU.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_inv_icache_all(void) {
#if 1
    __ASM volatile ("mcr p15, 0, %0, c7, c5, 0" : : "r" (0));
#else
    register uint32_t __ICIALLU __ASM("cp15:0:c7:c5:0");
    __ICIALLU = 0;
#endif
    __DSB();     // ensure completion of the invalidation
    __ISB();     // ensure the instruction fetch path sees the new I cache state
}

/** \brief  Clean D$ by MVA

    DCCMVAC. Data cache clean by MVA to PoC.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_clean_dcache_mva(void *va) {
#if 1
    __ASM volatile ("mcr p15, 0, %0, c7, c10, 1" : : "r" ((uint32_t)va));
#else
    register uint32_t __DCCMVAC __ASM("cp15:0:c7:c10:1");
    __DCCMVAC = (uint32_t)va;
#endif
    __DMB();     // ensure the ordering of data cache maintenance operations and their effects
}

/** \brief  Invalidate D$ by MVA

    DCIMVAC. Data cache invalidate by MVA to PoC.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_inv_dcache_mva(void *va) {
#if 1
    __ASM volatile ("mcr p15, 0, %0, c7, c6, 1" : : "r" ((uint32_t)va));
#else
    register uint32_t __DCIMVAC __ASM("cp15:0:c7:c6:1");
    __DCIMVAC = (uint32_t)va;
#endif
    __DMB();     // ensure the ordering of data cache maintenance operations and their effects
}

/** \brief  Clean and Invalidate D$ by MVA

    DCCIMVAC. Data cache clean and invalidate by MVA to PoC.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_clean_inv_dcache_mva(void *va) {
#if 1
    __ASM volatile ("mcr p15, 0, %0, c7, c14, 1" : : "r" ((uint32_t)va));
#else
    register uint32_t __DCCIMVAC __ASM("cp15:0:c7:c14:1");
    __DCCIMVAC = (uint32_t)va;
#endif
    __DMB();     // ensure the ordering of data cache maintenance operations and their effects
}

/** \brief  Clean and Invalidate the entire data or unified cache

    Generic mechanism for cleaning/invalidating the entire data or unified cache to the point of coherency.
 */
extern void __v7_all_cache(uint32_t op);


/** \brief  Invalidate the whole D$

    DCISW. Invalidate by Set/Way.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_inv_dcache_all(void) {
    __v7_all_cache(0);
}

/** \brief  Clean the whole D$

    DCCSW. Clean by Set/Way.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_clean_dcache_all(void) {
    __v7_all_cache(1);
}

/** \brief  Clean and invalidate the whole D$

    DCCISW. Clean and Invalidate by Set/Way.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_clean_inv_dcache_all(void) {
    __v7_all_cache(2);
}
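
/*  Usage sketch (illustrative addition): maintaining a DMA buffer with the
    by-MVA operations above. Clean before a device reads memory the CPU has
    written; invalidate before the CPU reads memory a device has written.
    `buf`, `len` and the 32-byte line size are assumptions for this example.

    \code
    #define L1_LINE_BYTES 32u                   // Cortex-A9 L1 cache line size
    for (uint32_t a = (uint32_t)buf & ~(L1_LINE_BYTES - 1u);
         a < (uint32_t)buf + len; a += L1_LINE_BYTES) {
        __v7_clean_dcache_mva((void *)a);       // push CPU writes out to the point of coherency
    }
    __DSB();                                    // complete the maintenance before starting the DMA transfer
    \endcode
 */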

#include "core_ca_mmu.h"

#elif (defined (__TASKING__)) /*--------------- TASKING Compiler -----------------*/

#error TASKING Compiler support is not implemented for Cortex-A

#endif

/*@} end of CMSIS_Core_RegAccFunctions */


#endif /* __CORE_CAFUNC_H__ */