The official Mbed 2 C/C++ SDK provides the software platform and libraries to build your applications.

Dependents:   hello SerialTestv11 SerialTestv12 Sierpinski ...

mbed 2

This is the mbed 2 library. If you'd like to learn about Mbed OS please see the mbed-os docs.

Committer: <>
Date: Wed Apr 12 16:07:08 2017 +0100
Revision: 140:97feb9bacc10
Parent: 130:d75b3fe1f5cb
Release 140 of the mbed library

Ports for Upcoming Targets

3841: Add nRF52840 target https://github.com/ARMmbed/mbed-os/pull/3841
3992: Introducing UBLOX_C030 platform. https://github.com/ARMmbed/mbed-os/pull/3992

Fixes and Changes

3951: [NUCLEO_F303ZE] Correct ARDUINO pin https://github.com/ARMmbed/mbed-os/pull/3951
4021: Fixing a macro to detect when RTOS was in use for the NRF52840_DK https://github.com/ARMmbed/mbed-os/pull/4021
3979: KW24D: Add missing SPI defines and Arduino connector definitions https://github.com/ARMmbed/mbed-os/pull/3979
3990: UBLOX_C027: construct a ticker-based wait, rather than calling wait_ms(), in the https://github.com/ARMmbed/mbed-os/pull/3990 (see the sketch after this list)
4003: Fixed OBOE in async serial tx for NRF52 target, fixes #4002 https://github.com/ARMmbed/mbed-os/pull/4003
4012: STM32: Correct I2C master error handling https://github.com/ARMmbed/mbed-os/pull/4012
4020: NUCLEO_L011K4 remove unsupported tool chain files https://github.com/ARMmbed/mbed-os/pull/4020
4065: K66F: Move bss section to m_data_2 Section https://github.com/ARMmbed/mbed-os/pull/4065
4014: Issue 3763: Reduce heap allocation in the GCC linker file https://github.com/ARMmbed/mbed-os/pull/4014
4030: [STM32L0] reduce IAR heap and stack size for small targets https://github.com/ARMmbed/mbed-os/pull/4030
4109: NUCLEO_L476RG : minor serial pin update https://github.com/ARMmbed/mbed-os/pull/4109
3982: Ticker - kl25z bugfix for handling events in the past https://github.com/ARMmbed/mbed-os/pull/3982
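
The ticker-based wait mentioned in 3990 can be approximated as below. This is an editor's sketch against the mbed 2 Timeout API, not the code from that pull request; the helper name ticker_based_wait_ms is illustrative.

#include "mbed.h"

static volatile bool wait_expired = false;
static void on_wait_timeout(void) { wait_expired = true; }

// Arm a one-shot Timeout and poll a flag instead of blocking in wait_ms().
static void ticker_based_wait_ms(int ms) {
    Timeout t;
    wait_expired = false;
    t.attach(&on_wait_timeout, ms / 1000.0f);  // Timeout::attach takes seconds
    while (!wait_expired) {
        // other work could be done here while the timeout runs
    }
}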

Who changed what in which revision?

User    Revision    Line number    New contents of line
<> 129:0ab6a29f35bf 1 /**************************************************************************//**
<> 129:0ab6a29f35bf 2 * @file core_caFunc.h
<> 129:0ab6a29f35bf 3 * @brief CMSIS Cortex-A Core Function Access Header File
<> 129:0ab6a29f35bf 4 * @version V3.10
<> 129:0ab6a29f35bf 5 * @date 30 Oct 2013
<> 129:0ab6a29f35bf 6 *
<> 129:0ab6a29f35bf 7 * @note
<> 129:0ab6a29f35bf 8 *
<> 129:0ab6a29f35bf 9 ******************************************************************************/
<> 129:0ab6a29f35bf 10 /* Copyright (c) 2009 - 2013 ARM LIMITED
<> 129:0ab6a29f35bf 11
<> 129:0ab6a29f35bf 12 All rights reserved.
<> 129:0ab6a29f35bf 13 Redistribution and use in source and binary forms, with or without
<> 129:0ab6a29f35bf 14 modification, are permitted provided that the following conditions are met:
<> 129:0ab6a29f35bf 15 - Redistributions of source code must retain the above copyright
<> 129:0ab6a29f35bf 16 notice, this list of conditions and the following disclaimer.
<> 129:0ab6a29f35bf 17 - Redistributions in binary form must reproduce the above copyright
<> 129:0ab6a29f35bf 18 notice, this list of conditions and the following disclaimer in the
<> 129:0ab6a29f35bf 19 documentation and/or other materials provided with the distribution.
<> 129:0ab6a29f35bf 20 - Neither the name of ARM nor the names of its contributors may be used
<> 129:0ab6a29f35bf 21 to endorse or promote products derived from this software without
<> 129:0ab6a29f35bf 22 specific prior written permission.
<> 129:0ab6a29f35bf 23 *
<> 129:0ab6a29f35bf 24 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
<> 129:0ab6a29f35bf 25 AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
<> 129:0ab6a29f35bf 26 IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
<> 129:0ab6a29f35bf 27 ARE DISCLAIMED. IN NO EVENT SHALL COPYRIGHT HOLDERS AND CONTRIBUTORS BE
<> 129:0ab6a29f35bf 28 LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
<> 129:0ab6a29f35bf 29 CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
<> 129:0ab6a29f35bf 30 SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
<> 129:0ab6a29f35bf 31 INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
<> 129:0ab6a29f35bf 32 CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
<> 129:0ab6a29f35bf 33 ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
<> 129:0ab6a29f35bf 34 POSSIBILITY OF SUCH DAMAGE.
<> 129:0ab6a29f35bf 35 ---------------------------------------------------------------------------*/
<> 129:0ab6a29f35bf 36
<> 129:0ab6a29f35bf 37
<> 129:0ab6a29f35bf 38 #ifndef __CORE_CAFUNC_H__
<> 129:0ab6a29f35bf 39 #define __CORE_CAFUNC_H__
<> 129:0ab6a29f35bf 40
<> 129:0ab6a29f35bf 41
<> 129:0ab6a29f35bf 42 /* ########################### Core Function Access ########################### */
<> 129:0ab6a29f35bf 43 /** \ingroup CMSIS_Core_FunctionInterface
<> 129:0ab6a29f35bf 44 \defgroup CMSIS_Core_RegAccFunctions CMSIS Core Register Access Functions
<> 129:0ab6a29f35bf 45 @{
<> 129:0ab6a29f35bf 46 */
<> 129:0ab6a29f35bf 47
<> 129:0ab6a29f35bf 48 #if defined ( __CC_ARM ) /*------------------RealView Compiler -----------------*/
<> 129:0ab6a29f35bf 49 /* ARM armcc specific functions */
<> 129:0ab6a29f35bf 50
<> 129:0ab6a29f35bf 51 #if (__ARMCC_VERSION < 400677)
<> 129:0ab6a29f35bf 52 #error "Please use ARM Compiler Toolchain V4.0.677 or later!"
<> 129:0ab6a29f35bf 53 #endif
<> 129:0ab6a29f35bf 54
<> 129:0ab6a29f35bf 55 #define MODE_USR 0x10
<> 129:0ab6a29f35bf 56 #define MODE_FIQ 0x11
<> 129:0ab6a29f35bf 57 #define MODE_IRQ 0x12
<> 129:0ab6a29f35bf 58 #define MODE_SVC 0x13
<> 129:0ab6a29f35bf 59 #define MODE_MON 0x16
<> 129:0ab6a29f35bf 60 #define MODE_ABT 0x17
<> 129:0ab6a29f35bf 61 #define MODE_HYP 0x1A
<> 129:0ab6a29f35bf 62 #define MODE_UND 0x1B
<> 129:0ab6a29f35bf 63 #define MODE_SYS 0x1F
<> 129:0ab6a29f35bf 64
<> 129:0ab6a29f35bf 65 /** \brief Get APSR Register
<> 129:0ab6a29f35bf 66
<> 129:0ab6a29f35bf 67 This function returns the content of the APSR Register.
<> 129:0ab6a29f35bf 68
<> 129:0ab6a29f35bf 69 \return APSR Register value
<> 129:0ab6a29f35bf 70 */
<> 129:0ab6a29f35bf 71 __STATIC_INLINE uint32_t __get_APSR(void)
<> 129:0ab6a29f35bf 72 {
<> 129:0ab6a29f35bf 73 register uint32_t __regAPSR __ASM("apsr");
<> 129:0ab6a29f35bf 74 return(__regAPSR);
<> 129:0ab6a29f35bf 75 }
<> 129:0ab6a29f35bf 76
<> 129:0ab6a29f35bf 77
<> 129:0ab6a29f35bf 78 /** \brief Get CPSR Register
<> 129:0ab6a29f35bf 79
<> 129:0ab6a29f35bf 80 This function returns the content of the CPSR Register.
<> 129:0ab6a29f35bf 81
<> 129:0ab6a29f35bf 82 \return CPSR Register value
<> 129:0ab6a29f35bf 83 */
<> 129:0ab6a29f35bf 84 __STATIC_INLINE uint32_t __get_CPSR(void)
<> 129:0ab6a29f35bf 85 {
<> 129:0ab6a29f35bf 86 register uint32_t __regCPSR __ASM("cpsr");
<> 129:0ab6a29f35bf 87 return(__regCPSR);
<> 129:0ab6a29f35bf 88 }
<> 129:0ab6a29f35bf 89
<> 129:0ab6a29f35bf 90 /** \brief Set Stack Pointer
<> 129:0ab6a29f35bf 91
<> 129:0ab6a29f35bf 92 This function assigns the given value to the current stack pointer.
<> 129:0ab6a29f35bf 93
<> 129:0ab6a29f35bf 94 \param [in] topOfStack Stack Pointer value to set
<> 129:0ab6a29f35bf 95 */
<> 129:0ab6a29f35bf 96 register uint32_t __regSP __ASM("sp");
<> 129:0ab6a29f35bf 97 __STATIC_INLINE void __set_SP(uint32_t topOfStack)
<> 129:0ab6a29f35bf 98 {
<> 129:0ab6a29f35bf 99 __regSP = topOfStack;
<> 129:0ab6a29f35bf 100 }
<> 129:0ab6a29f35bf 101
<> 129:0ab6a29f35bf 102
<> 129:0ab6a29f35bf 103 /** \brief Get link register
<> 129:0ab6a29f35bf 104
<> 129:0ab6a29f35bf 105 This function returns the value of the link register
<> 129:0ab6a29f35bf 106
<> 129:0ab6a29f35bf 107 \return Value of link register
<> 129:0ab6a29f35bf 108 */
<> 129:0ab6a29f35bf 109 register uint32_t __reglr __ASM("lr");
<> 129:0ab6a29f35bf 110 __STATIC_INLINE uint32_t __get_LR(void)
<> 129:0ab6a29f35bf 111 {
<> 129:0ab6a29f35bf 112 return(__reglr);
<> 129:0ab6a29f35bf 113 }
<> 129:0ab6a29f35bf 114
<> 129:0ab6a29f35bf 115 /** \brief Set link register
<> 129:0ab6a29f35bf 116
<> 129:0ab6a29f35bf 117 This function sets the value of the link register
<> 129:0ab6a29f35bf 118
<> 129:0ab6a29f35bf 119 \param [in] lr LR value to set
<> 129:0ab6a29f35bf 120 */
<> 129:0ab6a29f35bf 121 __STATIC_INLINE void __set_LR(uint32_t lr)
<> 129:0ab6a29f35bf 122 {
<> 129:0ab6a29f35bf 123 __reglr = lr;
<> 129:0ab6a29f35bf 124 }
<> 129:0ab6a29f35bf 125
<> 129:0ab6a29f35bf 126 /** \brief Set Process Stack Pointer
<> 129:0ab6a29f35bf 127
<> 129:0ab6a29f35bf 128 This function assigns the given value to the USR/SYS Stack Pointer (PSP).
<> 129:0ab6a29f35bf 129
<> 129:0ab6a29f35bf 130 \param [in] topOfProcStack USR/SYS Stack Pointer value to set
<> 129:0ab6a29f35bf 131 */
<> 129:0ab6a29f35bf 132 __STATIC_ASM void __set_PSP(uint32_t topOfProcStack)
<> 129:0ab6a29f35bf 133 {
<> 129:0ab6a29f35bf 134 ARM
<> 129:0ab6a29f35bf 135 PRESERVE8
<> 129:0ab6a29f35bf 136
<> 129:0ab6a29f35bf 137 BIC R0, R0, #7 ;ensure stack is 8-byte aligned
<> 129:0ab6a29f35bf 138 MRS R1, CPSR
<> 129:0ab6a29f35bf 139 CPS #MODE_SYS ;no effect in USR mode
<> 129:0ab6a29f35bf 140 MOV SP, R0
<> 129:0ab6a29f35bf 141 MSR CPSR_c, R1 ;no effect in USR mode
<> 129:0ab6a29f35bf 142 ISB
<> 129:0ab6a29f35bf 143 BX LR
<> 129:0ab6a29f35bf 144
<> 129:0ab6a29f35bf 145 }
<> 129:0ab6a29f35bf 146
<> 129:0ab6a29f35bf 147 /** \brief Set User Mode
<> 129:0ab6a29f35bf 148
<> 129:0ab6a29f35bf 149 This function changes the processor state to User Mode
<> 129:0ab6a29f35bf 150 */
<> 129:0ab6a29f35bf 151 __STATIC_ASM void __set_CPS_USR(void)
<> 129:0ab6a29f35bf 152 {
<> 129:0ab6a29f35bf 153 ARM
<> 129:0ab6a29f35bf 154
<> 129:0ab6a29f35bf 155 CPS #MODE_USR
<> 129:0ab6a29f35bf 156 BX LR
<> 129:0ab6a29f35bf 157 }
<> 129:0ab6a29f35bf 158
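
Editor's note, not part of the annotated file: a minimal sketch of how the two helpers above are typically combined (for example by an RTOS) to give a thread its own USR/SYS stack and then drop to User mode. The function name is illustrative only.

static void start_unprivileged(uint32_t *stack_top)
{
    __set_PSP((uint32_t)stack_top);  /* program the USR/SYS stack pointer while still privileged */
    __set_CPS_USR();                 /* then switch to User mode; SP is the one just set */
}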
<> 129:0ab6a29f35bf 159
<> 129:0ab6a29f35bf 160 /** \brief Enable FIQ
<> 129:0ab6a29f35bf 161
<> 129:0ab6a29f35bf 162 This function enables FIQ interrupts by clearing the F-bit in the CPSR.
<> 129:0ab6a29f35bf 163 Can only be executed in Privileged modes.
<> 129:0ab6a29f35bf 164 */
<> 129:0ab6a29f35bf 165 #define __enable_fault_irq __enable_fiq
<> 129:0ab6a29f35bf 166
<> 129:0ab6a29f35bf 167
<> 129:0ab6a29f35bf 168 /** \brief Disable FIQ
<> 129:0ab6a29f35bf 169
<> 129:0ab6a29f35bf 170 This function disables FIQ interrupts by setting the F-bit in the CPSR.
<> 129:0ab6a29f35bf 171 Can only be executed in Privileged modes.
<> 129:0ab6a29f35bf 172 */
<> 129:0ab6a29f35bf 173 #define __disable_fault_irq __disable_fiq
<> 129:0ab6a29f35bf 174
<> 129:0ab6a29f35bf 175
<> 129:0ab6a29f35bf 176 /** \brief Get FPSCR
<> 129:0ab6a29f35bf 177
<> 129:0ab6a29f35bf 178 This function returns the current value of the Floating Point Status/Control register.
<> 129:0ab6a29f35bf 179
<> 129:0ab6a29f35bf 180 \return Floating Point Status/Control register value
<> 129:0ab6a29f35bf 181 */
<> 129:0ab6a29f35bf 182 __STATIC_INLINE uint32_t __get_FPSCR(void)
<> 129:0ab6a29f35bf 183 {
<> 129:0ab6a29f35bf 184 #if (__FPU_PRESENT == 1) && (__FPU_USED == 1)
<> 129:0ab6a29f35bf 185 register uint32_t __regfpscr __ASM("fpscr");
<> 129:0ab6a29f35bf 186 return(__regfpscr);
<> 129:0ab6a29f35bf 187 #else
<> 129:0ab6a29f35bf 188 return(0);
<> 129:0ab6a29f35bf 189 #endif
<> 129:0ab6a29f35bf 190 }
<> 129:0ab6a29f35bf 191
<> 129:0ab6a29f35bf 192
<> 129:0ab6a29f35bf 193 /** \brief Set FPSCR
<> 129:0ab6a29f35bf 194
<> 129:0ab6a29f35bf 195 This function assigns the given value to the Floating Point Status/Control register.
<> 129:0ab6a29f35bf 196
<> 129:0ab6a29f35bf 197 \param [in] fpscr Floating Point Status/Control value to set
<> 129:0ab6a29f35bf 198 */
<> 129:0ab6a29f35bf 199 __STATIC_INLINE void __set_FPSCR(uint32_t fpscr)
<> 129:0ab6a29f35bf 200 {
<> 129:0ab6a29f35bf 201 #if (__FPU_PRESENT == 1) && (__FPU_USED == 1)
<> 129:0ab6a29f35bf 202 register uint32_t __regfpscr __ASM("fpscr");
<> 129:0ab6a29f35bf 203 __regfpscr = (fpscr);
<> 129:0ab6a29f35bf 204 #endif
<> 129:0ab6a29f35bf 205 }
<> 129:0ab6a29f35bf 206
<> 129:0ab6a29f35bf 207 /** \brief Get FPEXC
<> 129:0ab6a29f35bf 208
<> 129:0ab6a29f35bf 209 This function returns the current value of the Floating Point Exception Control register.
<> 129:0ab6a29f35bf 210
<> 129:0ab6a29f35bf 211 \return Floating Point Exception Control register value
<> 129:0ab6a29f35bf 212 */
<> 129:0ab6a29f35bf 213 __STATIC_INLINE uint32_t __get_FPEXC(void)
<> 129:0ab6a29f35bf 214 {
<> 129:0ab6a29f35bf 215 #if (__FPU_PRESENT == 1)
<> 129:0ab6a29f35bf 216 register uint32_t __regfpexc __ASM("fpexc");
<> 129:0ab6a29f35bf 217 return(__regfpexc);
<> 129:0ab6a29f35bf 218 #else
<> 129:0ab6a29f35bf 219 return(0);
<> 129:0ab6a29f35bf 220 #endif
<> 129:0ab6a29f35bf 221 }
<> 129:0ab6a29f35bf 222
<> 129:0ab6a29f35bf 223
<> 129:0ab6a29f35bf 224 /** \brief Set FPEXC
<> 129:0ab6a29f35bf 225
<> 129:0ab6a29f35bf 226 This function assigns the given value to the Floating Point Exception Control register.
<> 129:0ab6a29f35bf 227
<> 129:0ab6a29f35bf 228 \param [in] fpexc Floating Point Exception Control value to set
<> 129:0ab6a29f35bf 229 */
<> 129:0ab6a29f35bf 230 __STATIC_INLINE void __set_FPEXC(uint32_t fpexc)
<> 129:0ab6a29f35bf 231 {
<> 129:0ab6a29f35bf 232 #if (__FPU_PRESENT == 1)
<> 129:0ab6a29f35bf 233 register uint32_t __regfpexc __ASM("fpexc");
<> 129:0ab6a29f35bf 234 __regfpexc = (fpexc);
<> 129:0ab6a29f35bf 235 #endif
<> 129:0ab6a29f35bf 236 }
<> 129:0ab6a29f35bf 237
<> 129:0ab6a29f35bf 238 /** \brief Get CPACR
<> 129:0ab6a29f35bf 239
<> 129:0ab6a29f35bf 240 This function returns the current value of the Coprocessor Access Control register.
<> 129:0ab6a29f35bf 241
<> 129:0ab6a29f35bf 242 \return Coprocessor Access Control register value
<> 129:0ab6a29f35bf 243 */
<> 129:0ab6a29f35bf 244 __STATIC_INLINE uint32_t __get_CPACR(void)
<> 129:0ab6a29f35bf 245 {
<> 129:0ab6a29f35bf 246 register uint32_t __regCPACR __ASM("cp15:0:c1:c0:2");
<> 129:0ab6a29f35bf 247 return __regCPACR;
<> 129:0ab6a29f35bf 248 }
<> 129:0ab6a29f35bf 249
<> 129:0ab6a29f35bf 250 /** \brief Set CPACR
<> 129:0ab6a29f35bf 251
<> 129:0ab6a29f35bf 252 This function assigns the given value to the Coprocessor Access Control register.
<> 129:0ab6a29f35bf 253
<> 129:0ab6a29f35bf 254 \param [in] cpacr Coprocessor Access Control value to set
<> 129:0ab6a29f35bf 255 */
<> 129:0ab6a29f35bf 256 __STATIC_INLINE void __set_CPACR(uint32_t cpacr)
<> 129:0ab6a29f35bf 257 {
<> 129:0ab6a29f35bf 258 register uint32_t __regCPACR __ASM("cp15:0:c1:c0:2");
<> 129:0ab6a29f35bf 259 __regCPACR = cpacr;
<> 129:0ab6a29f35bf 260 __ISB();
<> 129:0ab6a29f35bf 261 }
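
Editor's note, not part of the annotated file: a minimal sketch of the usual VFP/NEON enable sequence built from the CPACR and FPEXC accessors above. The bit positions (cp10/cp11 access at CPACR[23:20], FPEXC.EN at bit 30) come from the ARMv7-A architecture, not from this header.

static void enable_fpu_sketch(void)
{
    __set_CPACR(__get_CPACR() | (0xFU << 20));  /* cp10 and cp11: full access (__set_CPACR issues the ISB) */
    __set_FPEXC(__get_FPEXC() | (1U << 30));    /* FPEXC.EN: turn the floating-point unit on */
}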
<> 129:0ab6a29f35bf 262
<> 129:0ab6a29f35bf 263 /** \brief Get CBAR
<> 129:0ab6a29f35bf 264
<> 129:0ab6a29f35bf 265 This function returns the value of the Configuration Base Address register.
<> 129:0ab6a29f35bf 266
<> 129:0ab6a29f35bf 267 \return Configuration Base Address register value
<> 129:0ab6a29f35bf 268 */
<> 129:0ab6a29f35bf 269 __STATIC_INLINE uint32_t __get_CBAR() {
<> 129:0ab6a29f35bf 270 register uint32_t __regCBAR __ASM("cp15:4:c15:c0:0");
<> 129:0ab6a29f35bf 271 return(__regCBAR);
<> 129:0ab6a29f35bf 272 }
<> 129:0ab6a29f35bf 273
<> 129:0ab6a29f35bf 274 /** \brief Get TTBR0
<> 129:0ab6a29f35bf 275
<> 129:0ab6a29f35bf 276 This function returns the value of the Translation Table Base Register 0.
<> 129:0ab6a29f35bf 277
<> 129:0ab6a29f35bf 278 \return Translation Table Base Register 0 value
<> 129:0ab6a29f35bf 279 */
<> 129:0ab6a29f35bf 280 __STATIC_INLINE uint32_t __get_TTBR0() {
<> 129:0ab6a29f35bf 281 register uint32_t __regTTBR0 __ASM("cp15:0:c2:c0:0");
<> 129:0ab6a29f35bf 282 return(__regTTBR0);
<> 129:0ab6a29f35bf 283 }
<> 129:0ab6a29f35bf 284
<> 129:0ab6a29f35bf 285 /** \brief Set TTBR0
<> 129:0ab6a29f35bf 286
<> 129:0ab6a29f35bf 287 This function assigns the given value to the Translation Table Base Register 0.
<> 129:0ab6a29f35bf 288
<> 129:0ab6a29f35bf 289 \param [in] ttbr0 Translation Table Base Register 0 value to set
<> 129:0ab6a29f35bf 290 */
<> 129:0ab6a29f35bf 291 __STATIC_INLINE void __set_TTBR0(uint32_t ttbr0) {
<> 129:0ab6a29f35bf 292 register uint32_t __regTTBR0 __ASM("cp15:0:c2:c0:0");
<> 129:0ab6a29f35bf 293 __regTTBR0 = ttbr0;
<> 129:0ab6a29f35bf 294 __ISB();
<> 129:0ab6a29f35bf 295 }
<> 129:0ab6a29f35bf 296
<> 129:0ab6a29f35bf 297 /** \brief Get DACR
<> 129:0ab6a29f35bf 298
<> 129:0ab6a29f35bf 299 This function returns the value of the Domain Access Control Register.
<> 129:0ab6a29f35bf 300
<> 129:0ab6a29f35bf 301 \return Domain Access Control Register value
<> 129:0ab6a29f35bf 302 */
<> 129:0ab6a29f35bf 303 __STATIC_INLINE uint32_t __get_DACR() {
<> 129:0ab6a29f35bf 304 register uint32_t __regDACR __ASM("cp15:0:c3:c0:0");
<> 129:0ab6a29f35bf 305 return(__regDACR);
<> 129:0ab6a29f35bf 306 }
<> 129:0ab6a29f35bf 307
<> 129:0ab6a29f35bf 308 /** \brief Set DACR
<> 129:0ab6a29f35bf 309
<> 129:0ab6a29f35bf 310 This function assigns the given value to the Domain Access Control Register.
<> 129:0ab6a29f35bf 311
<> 129:0ab6a29f35bf 312 \param [in] dacr Domain Access Control Register value to set
<> 129:0ab6a29f35bf 313 */
<> 129:0ab6a29f35bf 314 __STATIC_INLINE void __set_DACR(uint32_t dacr) {
<> 129:0ab6a29f35bf 315 register uint32_t __regDACR __ASM("cp15:0:c3:c0:0");
<> 129:0ab6a29f35bf 316 __regDACR = dacr;
<> 129:0ab6a29f35bf 317 __ISB();
<> 129:0ab6a29f35bf 318 }
<> 129:0ab6a29f35bf 319
<> 129:0ab6a29f35bf 320 /******************************** Cache and BTAC enable ****************************************************/
<> 129:0ab6a29f35bf 321
<> 129:0ab6a29f35bf 322 /** \brief Set SCTLR
<> 129:0ab6a29f35bf 323
<> 129:0ab6a29f35bf 324 This function assigns the given value to the System Control Register.
<> 129:0ab6a29f35bf 325
<> 129:0ab6a29f35bf 326 \param [in] sctlr System Control Register value to set
<> 129:0ab6a29f35bf 327 */
<> 129:0ab6a29f35bf 328 __STATIC_INLINE void __set_SCTLR(uint32_t sctlr)
<> 129:0ab6a29f35bf 329 {
<> 129:0ab6a29f35bf 330 register uint32_t __regSCTLR __ASM("cp15:0:c1:c0:0");
<> 129:0ab6a29f35bf 331 __regSCTLR = sctlr;
<> 129:0ab6a29f35bf 332 }
<> 129:0ab6a29f35bf 333
<> 129:0ab6a29f35bf 334 /** \brief Get SCTLR
<> 129:0ab6a29f35bf 335
<> 129:0ab6a29f35bf 336 This function returns the value of the System Control Register.
<> 129:0ab6a29f35bf 337
<> 129:0ab6a29f35bf 338 \return System Control Register value
<> 129:0ab6a29f35bf 339 */
<> 129:0ab6a29f35bf 340 __STATIC_INLINE uint32_t __get_SCTLR() {
<> 129:0ab6a29f35bf 341 register uint32_t __regSCTLR __ASM("cp15:0:c1:c0:0");
<> 129:0ab6a29f35bf 342 return(__regSCTLR);
<> 129:0ab6a29f35bf 343 }
<> 129:0ab6a29f35bf 344
<> 129:0ab6a29f35bf 345 /** \brief Enable Caches
<> 129:0ab6a29f35bf 346
<> 129:0ab6a29f35bf 347 Enable Caches
<> 129:0ab6a29f35bf 348 */
<> 129:0ab6a29f35bf 349 __STATIC_INLINE void __enable_caches(void) {
<> 129:0ab6a29f35bf 350 // Set I bit 12 to enable I Cache
<> 129:0ab6a29f35bf 351 // Set C bit 2 to enable D Cache
<> 129:0ab6a29f35bf 352 __set_SCTLR( __get_SCTLR() | (1 << 12) | (1 << 2));
<> 129:0ab6a29f35bf 353 }
<> 129:0ab6a29f35bf 354
<> 129:0ab6a29f35bf 355 /** \brief Disable Caches
<> 129:0ab6a29f35bf 356
<> 129:0ab6a29f35bf 357 Disable Caches
<> 129:0ab6a29f35bf 358 */
<> 129:0ab6a29f35bf 359 __STATIC_INLINE void __disable_caches(void) {
<> 129:0ab6a29f35bf 360 // Clear I bit 12 to disable I Cache
<> 129:0ab6a29f35bf 361 // Clear C bit 2 to disable D Cache
<> 129:0ab6a29f35bf 362 __set_SCTLR( __get_SCTLR() & ~(1 << 12) & ~(1 << 2));
<> 129:0ab6a29f35bf 363 __ISB();
<> 129:0ab6a29f35bf 364 }
<> 129:0ab6a29f35bf 365
<> 129:0ab6a29f35bf 366 /** \brief Enable BTAC
<> 129:0ab6a29f35bf 367
<> 129:0ab6a29f35bf 368 Enable BTAC
<> 129:0ab6a29f35bf 369 */
<> 129:0ab6a29f35bf 370 __STATIC_INLINE void __enable_btac(void) {
<> 129:0ab6a29f35bf 371 // Set Z bit 11 to enable branch prediction
<> 129:0ab6a29f35bf 372 __set_SCTLR( __get_SCTLR() | (1 << 11));
<> 129:0ab6a29f35bf 373 __ISB();
<> 129:0ab6a29f35bf 374 }
<> 129:0ab6a29f35bf 375
<> 129:0ab6a29f35bf 376 /** \brief Disable BTAC
<> 129:0ab6a29f35bf 377
<> 129:0ab6a29f35bf 378 Disable BTAC
<> 129:0ab6a29f35bf 379 */
<> 129:0ab6a29f35bf 380 __STATIC_INLINE void __disable_btac(void) {
<> 129:0ab6a29f35bf 381 // Clear Z bit 11 to disable branch prediction
<> 129:0ab6a29f35bf 382 __set_SCTLR( __get_SCTLR() & ~(1 << 11));
<> 129:0ab6a29f35bf 383 }
<> 129:0ab6a29f35bf 384
<> 129:0ab6a29f35bf 385
<> 129:0ab6a29f35bf 386 /** \brief Enable MMU
<> 129:0ab6a29f35bf 387
<> 129:0ab6a29f35bf 388 Enable MMU
<> 129:0ab6a29f35bf 389 */
<> 129:0ab6a29f35bf 390 __STATIC_INLINE void __enable_mmu(void) {
<> 129:0ab6a29f35bf 391 // Set M bit 0 to enable the MMU
<> 129:0ab6a29f35bf 392 // Set AFE bit to enable simplified access permissions model
<> 129:0ab6a29f35bf 393 // Clear TRE bit to disable TEX remap and A bit to disable strict alignment fault checking
<> 129:0ab6a29f35bf 394 __set_SCTLR( (__get_SCTLR() & ~(1 << 28) & ~(1 << 1)) | 1 | (1 << 29));
<> 129:0ab6a29f35bf 395 __ISB();
<> 129:0ab6a29f35bf 396 }
<> 129:0ab6a29f35bf 397
<> 129:0ab6a29f35bf 398 /** \brief Disable MMU
<> 129:0ab6a29f35bf 399
<> 129:0ab6a29f35bf 400 Disable MMU
<> 129:0ab6a29f35bf 401 */
<> 129:0ab6a29f35bf 402 __STATIC_INLINE void __disable_mmu(void) {
<> 129:0ab6a29f35bf 403 // Clear M bit 0 to disable the MMU
<> 129:0ab6a29f35bf 404 __set_SCTLR( __get_SCTLR() & ~1);
<> 129:0ab6a29f35bf 405 __ISB();
<> 129:0ab6a29f35bf 406 }
<> 129:0ab6a29f35bf 407
<> 129:0ab6a29f35bf 408 /******************************** TLB maintenance operations ************************************************/
<> 129:0ab6a29f35bf 409 /** \brief Invalidate the whole tlb
<> 129:0ab6a29f35bf 410
<> 129:0ab6a29f35bf 411 TLBIALL. Invalidate the whole tlb
<> 129:0ab6a29f35bf 412 */
<> 129:0ab6a29f35bf 413
<> 129:0ab6a29f35bf 414 __STATIC_INLINE void __ca9u_inv_tlb_all(void) {
<> 129:0ab6a29f35bf 415 register uint32_t __TLBIALL __ASM("cp15:0:c8:c7:0");
<> 129:0ab6a29f35bf 416 __TLBIALL = 0;
<> 129:0ab6a29f35bf 417 __DSB();
<> 129:0ab6a29f35bf 418 __ISB();
<> 129:0ab6a29f35bf 419 }
<> 129:0ab6a29f35bf 420
<> 129:0ab6a29f35bf 421 /******************************** BTB maintenance operations ************************************************/
<> 129:0ab6a29f35bf 422 /** \brief Invalidate entire branch predictor array
<> 129:0ab6a29f35bf 423
<> 129:0ab6a29f35bf 424 BPIALL. Branch Predictor Invalidate All.
<> 129:0ab6a29f35bf 425 */
<> 129:0ab6a29f35bf 426
<> 129:0ab6a29f35bf 427 __STATIC_INLINE void __v7_inv_btac(void) {
<> 129:0ab6a29f35bf 428 register uint32_t __BPIALL __ASM("cp15:0:c7:c5:6");
<> 129:0ab6a29f35bf 429 __BPIALL = 0;
<> 129:0ab6a29f35bf 430 __DSB(); //ensure completion of the invalidation
<> 129:0ab6a29f35bf 431 __ISB(); //ensure instruction fetch path sees new state
<> 129:0ab6a29f35bf 432 }
<> 129:0ab6a29f35bf 433
<> 129:0ab6a29f35bf 434
<> 129:0ab6a29f35bf 435 /******************************** L1 cache operations ******************************************************/
<> 129:0ab6a29f35bf 436
<> 129:0ab6a29f35bf 437 /** \brief Invalidate the whole I$
<> 129:0ab6a29f35bf 438
<> 129:0ab6a29f35bf 439 ICIALLU. Instruction Cache Invalidate All to PoU
<> 129:0ab6a29f35bf 440 */
<> 129:0ab6a29f35bf 441 __STATIC_INLINE void __v7_inv_icache_all(void) {
<> 129:0ab6a29f35bf 442 register uint32_t __ICIALLU __ASM("cp15:0:c7:c5:0");
<> 129:0ab6a29f35bf 443 __ICIALLU = 0;
<> 129:0ab6a29f35bf 444 __DSB(); //ensure completion of the invalidation
<> 129:0ab6a29f35bf 445 __ISB(); //ensure instruction fetch path sees new I cache state
<> 129:0ab6a29f35bf 446 }
<> 129:0ab6a29f35bf 447
<> 129:0ab6a29f35bf 448 /** \brief Clean D$ by MVA
<> 129:0ab6a29f35bf 449
<> 129:0ab6a29f35bf 450 DCCMVAC. Data cache clean by MVA to PoC
<> 129:0ab6a29f35bf 451 */
<> 129:0ab6a29f35bf 452 __STATIC_INLINE void __v7_clean_dcache_mva(void *va) {
<> 129:0ab6a29f35bf 453 register uint32_t __DCCMVAC __ASM("cp15:0:c7:c10:1");
<> 129:0ab6a29f35bf 454 __DCCMVAC = (uint32_t)va;
<> 129:0ab6a29f35bf 455 __DMB(); //ensure the ordering of data cache maintenance operations and their effects
<> 129:0ab6a29f35bf 456 }
<> 129:0ab6a29f35bf 457
<> 129:0ab6a29f35bf 458 /** \brief Invalidate D$ by MVA
<> 129:0ab6a29f35bf 459
<> 129:0ab6a29f35bf 460 DCIMVAC. Data cache invalidate by MVA to PoC
<> 129:0ab6a29f35bf 461 */
<> 129:0ab6a29f35bf 462 __STATIC_INLINE void __v7_inv_dcache_mva(void *va) {
<> 129:0ab6a29f35bf 463 register uint32_t __DCIMVAC __ASM("cp15:0:c7:c6:1");
<> 129:0ab6a29f35bf 464 __DCIMVAC = (uint32_t)va;
<> 129:0ab6a29f35bf 465 __DMB(); //ensure the ordering of data cache maintenance operations and their effects
<> 129:0ab6a29f35bf 466 }
<> 129:0ab6a29f35bf 467
<> 129:0ab6a29f35bf 468 /** \brief Clean and Invalidate D$ by MVA
<> 129:0ab6a29f35bf 469
<> 129:0ab6a29f35bf 470 DCCIMVAC. Data cache clean and invalidate by MVA to PoC
<> 129:0ab6a29f35bf 471 */
<> 129:0ab6a29f35bf 472 __STATIC_INLINE void __v7_clean_inv_dcache_mva(void *va) {
<> 129:0ab6a29f35bf 473 register uint32_t __DCCIMVAC __ASM("cp15:0:c7:c14:1");
<> 129:0ab6a29f35bf 474 __DCCIMVAC = (uint32_t)va;
<> 129:0ab6a29f35bf 475 __DMB(); //ensure the ordering of data cache maintenance operations and their effects
<> 129:0ab6a29f35bf 476 }
<> 129:0ab6a29f35bf 477
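
Editor's note, not part of the annotated file: a sketch of how the by-MVA operations above are typically applied to a whole buffer, for example before handing it to a DMA engine. The 32-byte line size matches the Cortex-A9 L1 caches and is an assumption here; the helper name is illustrative.

#define CACHE_LINE 32U  /* assumed L1 cache line size */

static void clean_dcache_range_sketch(void *buf, uint32_t len)
{
    uint32_t addr = (uint32_t)buf & ~(CACHE_LINE - 1U);  /* align down to a line boundary */
    uint32_t end  = (uint32_t)buf + len;
    for (; addr < end; addr += CACHE_LINE) {
        __v7_clean_dcache_mva((void *)addr);             /* DCCMVAC, one line at a time */
    }
    __DSB();                                             /* make the cleans visible before DMA starts */
}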
<> 129:0ab6a29f35bf 478 /** \brief Clean and Invalidate the entire data or unified cache
<> 129:0ab6a29f35bf 479
<> 129:0ab6a29f35bf 480 Generic mechanism for cleaning/invalidating the entire data or unified cache to the point of coherency.
<> 129:0ab6a29f35bf 481 */
<> 129:0ab6a29f35bf 482 #pragma push
<> 129:0ab6a29f35bf 483 #pragma arm
<> 129:0ab6a29f35bf 484 __STATIC_ASM void __v7_all_cache(uint32_t op) {
<> 129:0ab6a29f35bf 485 ARM
<> 129:0ab6a29f35bf 486
<> 129:0ab6a29f35bf 487 PUSH {R4-R11}
<> 129:0ab6a29f35bf 488
<> 129:0ab6a29f35bf 489 MRC p15, 1, R6, c0, c0, 1 // Read CLIDR
<> 129:0ab6a29f35bf 490 ANDS R3, R6, #0x07000000 // Extract coherency level
<> 129:0ab6a29f35bf 491 MOV R3, R3, LSR #23 // Total cache levels << 1
<> 129:0ab6a29f35bf 492 BEQ Finished // If 0, no need to clean
<> 129:0ab6a29f35bf 493
<> 129:0ab6a29f35bf 494 MOV R10, #0 // R10 holds current cache level << 1
<> 129:0ab6a29f35bf 495 Loop1 ADD R2, R10, R10, LSR #1 // R2 holds cache "Set" position
<> 129:0ab6a29f35bf 496 MOV R1, R6, LSR R2 // Bottom 3 bits are the Cache-type for this level
<> 129:0ab6a29f35bf 497 AND R1, R1, #7 // Isolate those lower 3 bits
<> 129:0ab6a29f35bf 498 CMP R1, #2
<> 129:0ab6a29f35bf 499 BLT Skip // No cache or only instruction cache at this level
<> 129:0ab6a29f35bf 500
<> 129:0ab6a29f35bf 501 MCR p15, 2, R10, c0, c0, 0 // Write the Cache Size selection register
<> 129:0ab6a29f35bf 502 ISB // ISB to sync the change to the CacheSizeID reg
<> 129:0ab6a29f35bf 503 MRC p15, 1, R1, c0, c0, 0 // Reads current Cache Size ID register
<> 129:0ab6a29f35bf 504 AND R2, R1, #7 // Extract the line length field
<> 129:0ab6a29f35bf 505 ADD R2, R2, #4 // Add 4 for the line length offset (log2 16 bytes)
<> 129:0ab6a29f35bf 506 LDR R4, =0x3FF
<> 129:0ab6a29f35bf 507 ANDS R4, R4, R1, LSR #3 // R4 is the max number on the way size (right aligned)
<> 129:0ab6a29f35bf 508 CLZ R5, R4 // R5 is the bit position of the way size increment
<> 129:0ab6a29f35bf 509 LDR R7, =0x7FFF
<> 129:0ab6a29f35bf 510 ANDS R7, R7, R1, LSR #13 // R7 is the max number of the index size (right aligned)
<> 129:0ab6a29f35bf 511
<> 129:0ab6a29f35bf 512 Loop2 MOV R9, R4 // R9 working copy of the max way size (right aligned)
<> 129:0ab6a29f35bf 513
<> 129:0ab6a29f35bf 514 Loop3 ORR R11, R10, R9, LSL R5 // Factor in the Way number and cache number into R11
<> 129:0ab6a29f35bf 515 ORR R11, R11, R7, LSL R2 // Factor in the Set number
<> 129:0ab6a29f35bf 516 CMP R0, #0
<> 129:0ab6a29f35bf 517 BNE Dccsw
<> 129:0ab6a29f35bf 518 MCR p15, 0, R11, c7, c6, 2 // DCISW. Invalidate by Set/Way
<> 129:0ab6a29f35bf 519 B cont
<> 129:0ab6a29f35bf 520 Dccsw CMP R0, #1
<> 129:0ab6a29f35bf 521 BNE Dccisw
<> 129:0ab6a29f35bf 522 MCR p15, 0, R11, c7, c10, 2 // DCCSW. Clean by Set/Way
<> 129:0ab6a29f35bf 523 B cont
<> 129:0ab6a29f35bf 524 Dccisw MCR p15, 0, R11, c7, c14, 2 // DCCISW. Clean and Invalidate by Set/Way
<> 129:0ab6a29f35bf 525 cont SUBS R9, R9, #1 // Decrement the Way number
<> 129:0ab6a29f35bf 526 BGE Loop3
<> 129:0ab6a29f35bf 527 SUBS R7, R7, #1 // Decrement the Set number
<> 129:0ab6a29f35bf 528 BGE Loop2
<> 129:0ab6a29f35bf 529 Skip ADD R10, R10, #2 // Increment the cache number
<> 129:0ab6a29f35bf 530 CMP R3, R10
<> 129:0ab6a29f35bf 531 BGT Loop1
<> 129:0ab6a29f35bf 532
<> 129:0ab6a29f35bf 533 Finished
<> 129:0ab6a29f35bf 534 DSB
<> 129:0ab6a29f35bf 535 POP {R4-R11}
<> 129:0ab6a29f35bf 536 BX lr
<> 129:0ab6a29f35bf 537
<> 129:0ab6a29f35bf 538 }
<> 129:0ab6a29f35bf 539 #pragma pop
<> 129:0ab6a29f35bf 540
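
Editor's note, not part of the annotated file: the assembly above walks every set and way of every data/unified cache level up to the Level of Coherency. A C rendering of the same walk is sketched below; read_clidr, select_cache, read_ccsidr and dc_op_set_way are hypothetical stand-ins for the MRC/MCR accesses, and op has the same meaning as in __v7_all_cache (0 = invalidate, 1 = clean, 2 = clean and invalidate).

static void v7_all_cache_sketch(uint32_t op)
{
    uint32_t clidr = read_clidr();                        /* CLIDR */
    uint32_t loc   = (clidr >> 24) & 0x7U;                /* Level of Coherency */
    for (uint32_t level = 0; level < loc; level++) {
        if (((clidr >> (3U * level)) & 0x7U) < 2U)
            continue;                                     /* no data/unified cache at this level */
        select_cache(level << 1);                         /* CSSELR write followed by ISB */
        uint32_t ccsidr    = read_ccsidr();
        uint32_t line_log2 = (ccsidr & 0x7U) + 4U;        /* log2(line length in bytes) */
        uint32_t max_way   = (ccsidr >> 3) & 0x3FFU;
        uint32_t max_set   = (ccsidr >> 13) & 0x7FFFU;
        uint32_t way_shift = 32U;                         /* = CLZ(max_way) */
        for (uint32_t w = max_way; w != 0U; w >>= 1) way_shift--;
        for (uint32_t set = 0; set <= max_set; set++) {
            for (uint32_t way = 0; way <= max_way; way++) {
                uint32_t sw = (set << line_log2) | (level << 1);
                if (max_way != 0U) sw |= way << way_shift;
                dc_op_set_way(op, sw);                    /* DCISW / DCCSW / DCCISW */
            }
        }
    }
    /* followed by a DSB, as in the assembly */
}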
<> 129:0ab6a29f35bf 541
<> 129:0ab6a29f35bf 542 /** \brief Invalidate the whole D$
<> 129:0ab6a29f35bf 543
<> 129:0ab6a29f35bf 544 DCISW. Invalidate by Set/Way
<> 129:0ab6a29f35bf 545 */
<> 129:0ab6a29f35bf 546
<> 129:0ab6a29f35bf 547 __STATIC_INLINE void __v7_inv_dcache_all(void) {
<> 129:0ab6a29f35bf 548 __v7_all_cache(0);
<> 129:0ab6a29f35bf 549 }
<> 129:0ab6a29f35bf 550
<> 129:0ab6a29f35bf 551 /** \brief Clean the whole D$
<> 129:0ab6a29f35bf 552
<> 129:0ab6a29f35bf 553 DCCSW. Clean by Set/Way
<> 129:0ab6a29f35bf 554 */
<> 129:0ab6a29f35bf 555
<> 129:0ab6a29f35bf 556 __STATIC_INLINE void __v7_clean_dcache_all(void) {
<> 129:0ab6a29f35bf 557 __v7_all_cache(1);
<> 129:0ab6a29f35bf 558 }
<> 129:0ab6a29f35bf 559
<> 129:0ab6a29f35bf 560 /** \brief Clean and invalidate the whole D$
<> 129:0ab6a29f35bf 561
<> 129:0ab6a29f35bf 562 DCCISW. Clean and Invalidate by Set/Way
<> 129:0ab6a29f35bf 563 */
<> 129:0ab6a29f35bf 564
<> 129:0ab6a29f35bf 565 __STATIC_INLINE void __v7_clean_inv_dcache_all(void) {
<> 129:0ab6a29f35bf 566 __v7_all_cache(2);
<> 129:0ab6a29f35bf 567 }
<> 129:0ab6a29f35bf 568
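
Editor's note, not part of the annotated file: a rough sketch of the order in which the maintenance and enable helpers above are usually combined at start-up on a Cortex-A9, assuming the translation tables have already been written and TTBR0/DACR programmed via __set_TTBR0()/__set_DACR().

static void cache_mmu_bringup_sketch(void)
{
    __ca9u_inv_tlb_all();    /* discard stale TLB entries             */
    __v7_inv_icache_all();   /* invalidate the I-cache                */
    __v7_inv_dcache_all();   /* invalidate the D-cache (set/way walk) */
    __enable_mmu();          /* SCTLR: M = 1, AFE = 1, TRE = 0, A = 0 */
    __enable_caches();       /* SCTLR: I and C bits                   */
    __enable_btac();         /* SCTLR: Z bit (branch prediction)      */
}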
<> 129:0ab6a29f35bf 569 #include "core_ca_mmu.h"
<> 129:0ab6a29f35bf 570
<> 129:0ab6a29f35bf 571 #elif (defined (__ICCARM__)) /*---------------- ICC Compiler ---------------------*/
<> 129:0ab6a29f35bf 572
<> 129:0ab6a29f35bf 573 #define __inline inline
<> 129:0ab6a29f35bf 574
<> 129:0ab6a29f35bf 575 inline static uint32_t __disable_irq_iar() {
<> 129:0ab6a29f35bf 576 int irq_dis = __get_CPSR() & 0x80; // CPSR.I is bit 7 (IRQ mask)
<> 129:0ab6a29f35bf 577 __disable_irq();
<> 129:0ab6a29f35bf 578 return irq_dis;
<> 129:0ab6a29f35bf 579 }
<> 129:0ab6a29f35bf 580
<> 129:0ab6a29f35bf 581 #define MODE_USR 0x10
<> 129:0ab6a29f35bf 582 #define MODE_FIQ 0x11
<> 129:0ab6a29f35bf 583 #define MODE_IRQ 0x12
<> 129:0ab6a29f35bf 584 #define MODE_SVC 0x13
<> 129:0ab6a29f35bf 585 #define MODE_MON 0x16
<> 129:0ab6a29f35bf 586 #define MODE_ABT 0x17
<> 129:0ab6a29f35bf 587 #define MODE_HYP 0x1A
<> 129:0ab6a29f35bf 588 #define MODE_UND 0x1B
<> 129:0ab6a29f35bf 589 #define MODE_SYS 0x1F
<> 129:0ab6a29f35bf 590
<> 129:0ab6a29f35bf 591 /** \brief Set Process Stack Pointer
<> 129:0ab6a29f35bf 592
<> 129:0ab6a29f35bf 593 This function assigns the given value to the USR/SYS Stack Pointer (PSP).
<> 129:0ab6a29f35bf 594
<> 129:0ab6a29f35bf 595 \param [in] topOfProcStack USR/SYS Stack Pointer value to set
<> 129:0ab6a29f35bf 596 */
<> 129:0ab6a29f35bf 597 // from rt_CMSIS.c
<> 129:0ab6a29f35bf 598 __arm static inline void __set_PSP(uint32_t topOfProcStack) {
<> 129:0ab6a29f35bf 599 __asm(
<> 129:0ab6a29f35bf 600 " ARM\n"
<> 129:0ab6a29f35bf 601 // " PRESERVE8\n"
<> 129:0ab6a29f35bf 602
<> 129:0ab6a29f35bf 603 " BIC R0, R0, #7 ;ensure stack is 8-byte aligned \n"
<> 129:0ab6a29f35bf 604 " MRS R1, CPSR \n"
<> 129:0ab6a29f35bf 605 " CPS #0x1F ;no effect in USR mode \n" // MODE_SYS
<> 129:0ab6a29f35bf 606 " MOV SP, R0 \n"
<> 129:0ab6a29f35bf 607 " MSR CPSR_c, R1 ;no effect in USR mode \n"
<> 129:0ab6a29f35bf 608 " ISB \n"
<> 129:0ab6a29f35bf 609 " BX LR \n");
<> 129:0ab6a29f35bf 610 }
<> 129:0ab6a29f35bf 611
<> 129:0ab6a29f35bf 612 /** \brief Set User Mode
<> 129:0ab6a29f35bf 613
<> 129:0ab6a29f35bf 614 This function changes the processor state to User Mode
<> 129:0ab6a29f35bf 615 */
<> 129:0ab6a29f35bf 616 // from rt_CMSIS.c
<> 129:0ab6a29f35bf 617 __arm static inline void __set_CPS_USR(void) {
<> 129:0ab6a29f35bf 618 __asm(
<> 129:0ab6a29f35bf 619 " ARM \n"
<> 129:0ab6a29f35bf 620
<> 129:0ab6a29f35bf 621 " CPS #0x10 \n" // MODE_USR
<> 129:0ab6a29f35bf 622 " BX LR\n");
<> 129:0ab6a29f35bf 623 }
<> 129:0ab6a29f35bf 624
<> 129:0ab6a29f35bf 625 /** \brief Set TTBR0
<> 129:0ab6a29f35bf 626
<> 129:0ab6a29f35bf 627 This function assigns the given value to the Translation Table Base Register 0.
<> 129:0ab6a29f35bf 628
<> 129:0ab6a29f35bf 629 \param [in] ttbr0 Translation Table Base Register 0 value to set
<> 129:0ab6a29f35bf 630 */
<> 129:0ab6a29f35bf 631 // from mmu_Renesas_RZ_A1.c
<> 129:0ab6a29f35bf 632 __STATIC_INLINE void __set_TTBR0(uint32_t ttbr0) {
<> 129:0ab6a29f35bf 633 __MCR(15, 0, ttbr0, 2, 0, 0); // reg to cp15
<> 129:0ab6a29f35bf 634 __ISB();
<> 129:0ab6a29f35bf 635 }
<> 129:0ab6a29f35bf 636
<> 129:0ab6a29f35bf 637 /** \brief Set DACR
<> 129:0ab6a29f35bf 638
<> 129:0ab6a29f35bf 639 This function assigns the given value to the Domain Access Control Register.
<> 129:0ab6a29f35bf 640
<> 129:0ab6a29f35bf 641 \param [in] dacr Domain Access Control Register value to set
<> 129:0ab6a29f35bf 642 */
<> 129:0ab6a29f35bf 643 // from mmu_Renesas_RZ_A1.c
<> 129:0ab6a29f35bf 644 __STATIC_INLINE void __set_DACR(uint32_t dacr) {
<> 129:0ab6a29f35bf 645 __MCR(15, 0, dacr, 3, 0, 0); // reg to cp15
<> 129:0ab6a29f35bf 646 __ISB();
<> 129:0ab6a29f35bf 647 }
<> 129:0ab6a29f35bf 648
<> 129:0ab6a29f35bf 649
<> 129:0ab6a29f35bf 650 /******************************** Cache and BTAC enable ****************************************************/
<> 129:0ab6a29f35bf 651 /** \brief Set SCTLR
<> 129:0ab6a29f35bf 652
<> 129:0ab6a29f35bf 653 This function assigns the given value to the System Control Register.
<> 129:0ab6a29f35bf 654
<> 129:0ab6a29f35bf 655 \param [in] sctlr System Control Register value to set
<> 129:0ab6a29f35bf 656 */
<> 129:0ab6a29f35bf 657 // from __enable_mmu()
<> 129:0ab6a29f35bf 658 __STATIC_INLINE void __set_SCTLR(uint32_t sctlr) {
<> 129:0ab6a29f35bf 659 __MCR(15, 0, sctlr, 1, 0, 0); // reg to cp15
<> 129:0ab6a29f35bf 660 }
<> 129:0ab6a29f35bf 661
<> 129:0ab6a29f35bf 662 /** \brief Get SCTLR
<> 129:0ab6a29f35bf 663
<> 129:0ab6a29f35bf 664 This function returns the value of the System Control Register.
<> 129:0ab6a29f35bf 665
<> 129:0ab6a29f35bf 666 \return System Control Register value
<> 129:0ab6a29f35bf 667 */
<> 129:0ab6a29f35bf 668 // from __enable_mmu()
<> 129:0ab6a29f35bf 669 __STATIC_INLINE uint32_t __get_SCTLR() {
<> 129:0ab6a29f35bf 670 uint32_t __regSCTLR = __MRC(15, 0, 1, 0, 0);
<> 129:0ab6a29f35bf 671 return __regSCTLR;
<> 129:0ab6a29f35bf 672 }
<> 129:0ab6a29f35bf 673
<> 129:0ab6a29f35bf 674 /** \brief Enable Caches
<> 129:0ab6a29f35bf 675
<> 129:0ab6a29f35bf 676 Enable Caches
<> 129:0ab6a29f35bf 677 */
<> 129:0ab6a29f35bf 678 // from system_Renesas_RZ_A1.c
<> 129:0ab6a29f35bf 679 __STATIC_INLINE void __enable_caches(void) {
<> 129:0ab6a29f35bf 680 __set_SCTLR( __get_SCTLR() | (1 << 12) | (1 << 2));
<> 129:0ab6a29f35bf 681 }
<> 129:0ab6a29f35bf 682
<> 129:0ab6a29f35bf 683 /** \brief Enable BTAC
<> 129:0ab6a29f35bf 684
<> 129:0ab6a29f35bf 685 Enable BTAC
<> 129:0ab6a29f35bf 686 */
<> 129:0ab6a29f35bf 687 // from system_Renesas_RZ_A1.c
<> 129:0ab6a29f35bf 688 __STATIC_INLINE void __enable_btac(void) {
<> 129:0ab6a29f35bf 689 __set_SCTLR( __get_SCTLR() | (1 << 11));
<> 129:0ab6a29f35bf 690 __ISB();
<> 129:0ab6a29f35bf 691 }
<> 129:0ab6a29f35bf 692
<> 129:0ab6a29f35bf 693 /** \brief Enable MMU
<> 129:0ab6a29f35bf 694
<> 129:0ab6a29f35bf 695 Enable MMU
<> 129:0ab6a29f35bf 696 */
<> 129:0ab6a29f35bf 697 // from system_Renesas_RZ_A1.c
<> 129:0ab6a29f35bf 698 __STATIC_INLINE void __enable_mmu(void) {
<> 129:0ab6a29f35bf 699 // Set M bit 0 to enable the MMU
<> 129:0ab6a29f35bf 700 // Set AFE bit to enable simplified access permissions model
<> 129:0ab6a29f35bf 701 // Clear TRE bit to disable TEX remap and A bit to disable strict alignment fault checking
<> 129:0ab6a29f35bf 702 __set_SCTLR( (__get_SCTLR() & ~(1 << 28) & ~(1 << 1)) | 1 | (1 << 29));
<> 129:0ab6a29f35bf 703 __ISB();
<> 129:0ab6a29f35bf 704 }
<> 129:0ab6a29f35bf 705
<> 129:0ab6a29f35bf 706 /******************************** TLB maintenance operations ************************************************/
<> 129:0ab6a29f35bf 707 /** \brief Invalidate the whole tlb
<> 129:0ab6a29f35bf 708
<> 129:0ab6a29f35bf 709 TLBIALL. Invalidate the whole tlb
<> 129:0ab6a29f35bf 710 */
<> 129:0ab6a29f35bf 711 // from system_Renesas_RZ_A1.c
<> 129:0ab6a29f35bf 712 __STATIC_INLINE void __ca9u_inv_tlb_all(void) {
<> 129:0ab6a29f35bf 713 uint32_t val = 0;
<> 129:0ab6a29f35bf 714 __MCR(15, 0, val, 8, 7, 0); // reg to cp15
<> 129:0ab6a29f35bf 715 __MCR(15, 0, val, 8, 6, 0); // reg to cp15
<> 129:0ab6a29f35bf 716 __MCR(15, 0, val, 8, 5, 0); // reg to cp15
<> 129:0ab6a29f35bf 717 __DSB();
<> 129:0ab6a29f35bf 718 __ISB();
<> 129:0ab6a29f35bf 719 }
<> 129:0ab6a29f35bf 720
<> 129:0ab6a29f35bf 721 /******************************** BTB maintenance operations ************************************************/
<> 129:0ab6a29f35bf 722 /** \brief Invalidate entire branch predictor array
<> 129:0ab6a29f35bf 723
<> 129:0ab6a29f35bf 724 BPIALL. Branch Predictor Invalidate All.
<> 129:0ab6a29f35bf 725 */
<> 129:0ab6a29f35bf 726 // from system_Renesas_RZ_A1.c
<> 129:0ab6a29f35bf 727 __STATIC_INLINE void __v7_inv_btac(void) {
<> 129:0ab6a29f35bf 728 uint32_t val = 0;
<> 129:0ab6a29f35bf 729 __MCR(15, 0, val, 7, 5, 6); // reg to cp15
<> 129:0ab6a29f35bf 730 __DSB(); //ensure completion of the invalidation
<> 129:0ab6a29f35bf 731 __ISB(); //ensure instruction fetch path sees new state
<> 129:0ab6a29f35bf 732 }
<> 129:0ab6a29f35bf 733
<> 129:0ab6a29f35bf 734
<> 129:0ab6a29f35bf 735 /******************************** L1 cache operations ******************************************************/
<> 129:0ab6a29f35bf 736
<> 129:0ab6a29f35bf 737 /** \brief Invalidate the whole I$
<> 129:0ab6a29f35bf 738
<> 129:0ab6a29f35bf 739 ICIALLU. Instruction Cache Invalidate All to PoU
<> 129:0ab6a29f35bf 740 */
<> 129:0ab6a29f35bf 741 // from system_Renesas_RZ_A1.c
<> 129:0ab6a29f35bf 742 __STATIC_INLINE void __v7_inv_icache_all(void) {
<> 129:0ab6a29f35bf 743 uint32_t val = 0;
<> 129:0ab6a29f35bf 744 __MCR(15, 0, val, 7, 5, 0); // reg to cp15
<> 129:0ab6a29f35bf 745 __DSB(); //ensure completion of the invalidation
<> 129:0ab6a29f35bf 746 __ISB(); //ensure instruction fetch path sees new I cache state
<> 129:0ab6a29f35bf 747 }
<> 129:0ab6a29f35bf 748
<> 129:0ab6a29f35bf 749 // from __v7_inv_dcache_all()
<> 129:0ab6a29f35bf 750 __arm static inline void __v7_all_cache(uint32_t op) {
<> 129:0ab6a29f35bf 751 __asm(
<> 129:0ab6a29f35bf 752 " ARM \n"
<> 129:0ab6a29f35bf 753
<> 129:0ab6a29f35bf 754 " PUSH {R4-R11} \n"
<> 129:0ab6a29f35bf 755
<> 129:0ab6a29f35bf 756 " MRC p15, 1, R6, c0, c0, 1\n" // Read CLIDR
<> 129:0ab6a29f35bf 757 " ANDS R3, R6, #0x07000000\n" // Extract coherency level
<> 129:0ab6a29f35bf 758 " MOV R3, R3, LSR #23\n" // Total cache levels << 1
<> 129:0ab6a29f35bf 759 " BEQ Finished\n" // If 0, no need to clean
<> 129:0ab6a29f35bf 760
<> 129:0ab6a29f35bf 761 " MOV R10, #0\n" // R10 holds current cache level << 1
<> 129:0ab6a29f35bf 762 "Loop1: ADD R2, R10, R10, LSR #1\n" // R2 holds cache "Set" position
<> 129:0ab6a29f35bf 763 " MOV R1, R6, LSR R2 \n" // Bottom 3 bits are the Cache-type for this level
<> 129:0ab6a29f35bf 764 " AND R1, R1, #7 \n" // Isolate those lower 3 bits
<> 129:0ab6a29f35bf 765 " CMP R1, #2 \n"
<> 129:0ab6a29f35bf 766 " BLT Skip \n" // No cache or only instruction cache at this level
<> 129:0ab6a29f35bf 767
<> 129:0ab6a29f35bf 768 " MCR p15, 2, R10, c0, c0, 0 \n" // Write the Cache Size selection register
<> 129:0ab6a29f35bf 769 " ISB \n" // ISB to sync the change to the CacheSizeID reg
<> 129:0ab6a29f35bf 770 " MRC p15, 1, R1, c0, c0, 0 \n" // Reads current Cache Size ID register
<> 129:0ab6a29f35bf 771 " AND R2, R1, #7 \n" // Extract the line length field
<> 129:0ab6a29f35bf 772 " ADD R2, R2, #4 \n" // Add 4 for the line length offset (log2 16 bytes)
<> 129:0ab6a29f35bf 773 " movw R4, #0x3FF \n"
<> 129:0ab6a29f35bf 774 " ANDS R4, R4, R1, LSR #3 \n" // R4 is the max number on the way size (right aligned)
<> 129:0ab6a29f35bf 775 " CLZ R5, R4 \n" // R5 is the bit position of the way size increment
<> 129:0ab6a29f35bf 776 " movw R7, #0x7FFF \n"
<> 129:0ab6a29f35bf 777 " ANDS R7, R7, R1, LSR #13 \n" // R7 is the max number of the index size (right aligned)
<> 129:0ab6a29f35bf 778
<> 129:0ab6a29f35bf 779 "Loop2: MOV R9, R4 \n" // R9 working copy of the max way size (right aligned)
<> 129:0ab6a29f35bf 780
<> 129:0ab6a29f35bf 781 "Loop3: ORR R11, R10, R9, LSL R5 \n" // Factor in the Way number and cache number into R11
<> 129:0ab6a29f35bf 782 " ORR R11, R11, R7, LSL R2 \n" // Factor in the Set number
<> 129:0ab6a29f35bf 783 " CMP R0, #0 \n"
<> 129:0ab6a29f35bf 784 " BNE Dccsw \n"
<> 129:0ab6a29f35bf 785 " MCR p15, 0, R11, c7, c6, 2 \n" // DCISW. Invalidate by Set/Way
<> 129:0ab6a29f35bf 786 " B cont \n"
<> 129:0ab6a29f35bf 787 "Dccsw: CMP R0, #1 \n"
<> 129:0ab6a29f35bf 788 " BNE Dccisw \n"
<> 129:0ab6a29f35bf 789 " MCR p15, 0, R11, c7, c10, 2 \n" // DCCSW. Clean by Set/Way
<> 129:0ab6a29f35bf 790 " B cont \n"
<> 129:0ab6a29f35bf 791 "Dccisw: MCR p15, 0, R11, c7, c14, 2 \n" // DCCISW, Clean and Invalidate by Set/Way
<> 129:0ab6a29f35bf 792 "cont: SUBS R9, R9, #1 \n" // Decrement the Way number
<> 129:0ab6a29f35bf 793 " BGE Loop3 \n"
<> 129:0ab6a29f35bf 794 " SUBS R7, R7, #1 \n" // Decrement the Set number
<> 129:0ab6a29f35bf 795 " BGE Loop2 \n"
<> 129:0ab6a29f35bf 796 "Skip: ADD R10, R10, #2 \n" // increment the cache number
<> 129:0ab6a29f35bf 797 " CMP R3, R10 \n"
<> 129:0ab6a29f35bf 798 " BGT Loop1 \n"
<> 129:0ab6a29f35bf 799
<> 129:0ab6a29f35bf 800 "Finished: \n"
<> 129:0ab6a29f35bf 801 " DSB \n"
<> 129:0ab6a29f35bf 802 " POP {R4-R11} \n"
<> 129:0ab6a29f35bf 803 " BX lr \n" );
<> 129:0ab6a29f35bf 804 }
<> 129:0ab6a29f35bf 805
<> 129:0ab6a29f35bf 806 /** \brief Invalidate the whole D$
<> 129:0ab6a29f35bf 807
<> 129:0ab6a29f35bf 808 DCISW. Invalidate by Set/Way
<> 129:0ab6a29f35bf 809 */
<> 129:0ab6a29f35bf 810 // from system_Renesas_RZ_A1.c
<> 129:0ab6a29f35bf 811 __STATIC_INLINE void __v7_inv_dcache_all(void) {
<> 129:0ab6a29f35bf 812 __v7_all_cache(0);
<> 129:0ab6a29f35bf 813 }
<> 130:d75b3fe1f5cb 814 /** \brief Clean the whole D$
<> 130:d75b3fe1f5cb 815
<> 130:d75b3fe1f5cb 816 DCCSW. Clean by Set/Way
<> 130:d75b3fe1f5cb 817 */
<> 130:d75b3fe1f5cb 818
<> 130:d75b3fe1f5cb 819 __STATIC_INLINE void __v7_clean_dcache_all(void) {
<> 130:d75b3fe1f5cb 820 __v7_all_cache(1);
<> 130:d75b3fe1f5cb 821 }
<> 130:d75b3fe1f5cb 822
<> 130:d75b3fe1f5cb 823 /** \brief Clean and invalidate the whole D$
<> 130:d75b3fe1f5cb 824
<> 130:d75b3fe1f5cb 825 DCCISW. Clean and Invalidate by Set/Way
<> 130:d75b3fe1f5cb 826 */
<> 130:d75b3fe1f5cb 827
<> 130:d75b3fe1f5cb 828 __STATIC_INLINE void __v7_clean_inv_dcache_all(void) {
<> 130:d75b3fe1f5cb 829 __v7_all_cache(2);
<> 130:d75b3fe1f5cb 830 }
<> 129:0ab6a29f35bf 831 /** \brief Clean and Invalidate D$ by MVA
<> 129:0ab6a29f35bf 832
<> 129:0ab6a29f35bf 833 DCCIMVAC. Data cache clean and invalidate by MVA to PoC
<> 129:0ab6a29f35bf 834 */
<> 129:0ab6a29f35bf 835 __STATIC_INLINE void __v7_clean_inv_dcache_mva(void *va) {
<> 129:0ab6a29f35bf 836 __MCR(15, 0, (uint32_t)va, 7, 14, 1);
<> 129:0ab6a29f35bf 837 __DMB();
<> 129:0ab6a29f35bf 838 }
<> 129:0ab6a29f35bf 839
<> 129:0ab6a29f35bf 840 #include "core_ca_mmu.h"
<> 129:0ab6a29f35bf 841
<> 129:0ab6a29f35bf 842 #elif (defined (__GNUC__)) /*------------------ GNU Compiler ---------------------*/
<> 129:0ab6a29f35bf 843 /* GNU gcc specific functions */
<> 129:0ab6a29f35bf 844
<> 129:0ab6a29f35bf 845 #define MODE_USR 0x10
<> 129:0ab6a29f35bf 846 #define MODE_FIQ 0x11
<> 129:0ab6a29f35bf 847 #define MODE_IRQ 0x12
<> 129:0ab6a29f35bf 848 #define MODE_SVC 0x13
<> 129:0ab6a29f35bf 849 #define MODE_MON 0x16
<> 129:0ab6a29f35bf 850 #define MODE_ABT 0x17
<> 129:0ab6a29f35bf 851 #define MODE_HYP 0x1A
<> 129:0ab6a29f35bf 852 #define MODE_UND 0x1B
<> 129:0ab6a29f35bf 853 #define MODE_SYS 0x1F
<> 129:0ab6a29f35bf 854
<> 129:0ab6a29f35bf 855
<> 129:0ab6a29f35bf 856 __attribute__( ( always_inline ) ) __STATIC_INLINE void __enable_irq(void)
<> 129:0ab6a29f35bf 857 {
<> 129:0ab6a29f35bf 858 __ASM volatile ("cpsie i");
<> 129:0ab6a29f35bf 859 }
<> 129:0ab6a29f35bf 860
<> 129:0ab6a29f35bf 861 /** \brief Disable IRQ Interrupts
<> 129:0ab6a29f35bf 862
<> 129:0ab6a29f35bf 863 This function disables IRQ interrupts by setting the I-bit in the CPSR.
<> 129:0ab6a29f35bf 864 Can only be executed in Privileged modes.
<> 129:0ab6a29f35bf 865 */
<> 129:0ab6a29f35bf 866 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __disable_irq(void)
<> 129:0ab6a29f35bf 867 {
<> 129:0ab6a29f35bf 868 uint32_t result;
<> 129:0ab6a29f35bf 869
<> 129:0ab6a29f35bf 870 __ASM volatile ("mrs %0, cpsr" : "=r" (result));
<> 129:0ab6a29f35bf 871 __ASM volatile ("cpsid i");
<> 129:0ab6a29f35bf 872 return(result & 0x80);
<> 129:0ab6a29f35bf 873 }
<> 129:0ab6a29f35bf 874
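
Editor's note, not part of the annotated file: __disable_irq() above returns the previous I-bit, which allows nesting-safe critical sections. A minimal sketch (the function name is illustrative):

static inline void critical_section_sketch(void)
{
    uint32_t was_masked = __disable_irq();  /* old CPSR.I (non-zero if IRQs were already off) */
    /* ... touch data shared with an interrupt handler ... */
    if (!was_masked) {
        __enable_irq();                     /* re-enable only if this call did the masking */
    }
}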
<> 129:0ab6a29f35bf 875
<> 129:0ab6a29f35bf 876 /** \brief Get APSR Register
<> 129:0ab6a29f35bf 877
<> 129:0ab6a29f35bf 878 This function returns the content of the APSR Register.
<> 129:0ab6a29f35bf 879
<> 129:0ab6a29f35bf 880 \return APSR Register value
<> 129:0ab6a29f35bf 881 */
<> 129:0ab6a29f35bf 882 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_APSR(void)
<> 129:0ab6a29f35bf 883 {
<> 129:0ab6a29f35bf 884 #if 1
<> 129:0ab6a29f35bf 885 register uint32_t __regAPSR;
<> 129:0ab6a29f35bf 886 __ASM volatile ("mrs %0, apsr" : "=r" (__regAPSR) );
<> 129:0ab6a29f35bf 887 #else
<> 129:0ab6a29f35bf 888 register uint32_t __regAPSR __ASM("apsr");
<> 129:0ab6a29f35bf 889 #endif
<> 129:0ab6a29f35bf 890 return(__regAPSR);
<> 129:0ab6a29f35bf 891 }
<> 129:0ab6a29f35bf 892
<> 129:0ab6a29f35bf 893
<> 129:0ab6a29f35bf 894 /** \brief Get CPSR Register
<> 129:0ab6a29f35bf 895
<> 129:0ab6a29f35bf 896 This function returns the content of the CPSR Register.
<> 129:0ab6a29f35bf 897
<> 129:0ab6a29f35bf 898 \return CPSR Register value
<> 129:0ab6a29f35bf 899 */
<> 129:0ab6a29f35bf 900 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_CPSR(void)
<> 129:0ab6a29f35bf 901 {
<> 129:0ab6a29f35bf 902 #if 1
<> 129:0ab6a29f35bf 903 register uint32_t __regCPSR;
<> 129:0ab6a29f35bf 904 __ASM volatile ("mrs %0, cpsr" : "=r" (__regCPSR));
<> 129:0ab6a29f35bf 905 #else
<> 129:0ab6a29f35bf 906 register uint32_t __regCPSR __ASM("cpsr");
<> 129:0ab6a29f35bf 907 #endif
<> 129:0ab6a29f35bf 908 return(__regCPSR);
<> 129:0ab6a29f35bf 909 }
<> 129:0ab6a29f35bf 910
<> 129:0ab6a29f35bf 911 #if 0
<> 129:0ab6a29f35bf 912 /** \brief Set Stack Pointer
<> 129:0ab6a29f35bf 913
<> 129:0ab6a29f35bf 914 This function assigns the given value to the current stack pointer.
<> 129:0ab6a29f35bf 915
<> 129:0ab6a29f35bf 916 \param [in] topOfStack Stack Pointer value to set
<> 129:0ab6a29f35bf 917 */
<> 129:0ab6a29f35bf 918 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_SP(uint32_t topOfStack)
<> 129:0ab6a29f35bf 919 {
<> 129:0ab6a29f35bf 920 register uint32_t __regSP __ASM("sp");
<> 129:0ab6a29f35bf 921 __regSP = topOfStack;
<> 129:0ab6a29f35bf 922 }
<> 129:0ab6a29f35bf 923 #endif
<> 129:0ab6a29f35bf 924
<> 129:0ab6a29f35bf 925 /** \brief Get link register
<> 129:0ab6a29f35bf 926
<> 129:0ab6a29f35bf 927 This function returns the value of the link register
<> 129:0ab6a29f35bf 928
<> 129:0ab6a29f35bf 929 \return Value of link register
<> 129:0ab6a29f35bf 930 */
<> 129:0ab6a29f35bf 931 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_LR(void)
<> 129:0ab6a29f35bf 932 {
<> 129:0ab6a29f35bf 933 register uint32_t __reglr __ASM("lr");
<> 129:0ab6a29f35bf 934 return(__reglr);
<> 129:0ab6a29f35bf 935 }
<> 129:0ab6a29f35bf 936
<> 129:0ab6a29f35bf 937 #if 0
<> 129:0ab6a29f35bf 938 /** \brief Set link register
<> 129:0ab6a29f35bf 939
<> 129:0ab6a29f35bf 940 This function sets the value of the link register
<> 129:0ab6a29f35bf 941
<> 129:0ab6a29f35bf 942 \param [in] lr LR value to set
<> 129:0ab6a29f35bf 943 */
<> 129:0ab6a29f35bf 944 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_LR(uint32_t lr)
<> 129:0ab6a29f35bf 945 {
<> 129:0ab6a29f35bf 946 register uint32_t __reglr __ASM("lr");
<> 129:0ab6a29f35bf 947 __reglr = lr;
<> 129:0ab6a29f35bf 948 }
<> 129:0ab6a29f35bf 949 #endif
<> 129:0ab6a29f35bf 950
<> 129:0ab6a29f35bf 951 /** \brief Set Process Stack Pointer
<> 129:0ab6a29f35bf 952
<> 129:0ab6a29f35bf 953 This function assigns the given value to the USR/SYS Stack Pointer (PSP).
<> 129:0ab6a29f35bf 954
<> 129:0ab6a29f35bf 955 \param [in] topOfProcStack USR/SYS Stack Pointer value to set
<> 129:0ab6a29f35bf 956 */
<> 129:0ab6a29f35bf 957 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_PSP(uint32_t topOfProcStack)
<> 129:0ab6a29f35bf 958 {
<> 129:0ab6a29f35bf 959 __asm__ volatile (
<> 129:0ab6a29f35bf 960 ".ARM;"
<> 129:0ab6a29f35bf 961 ".eabi_attribute Tag_ABI_align8_preserved,1;"
<> 129:0ab6a29f35bf 962
<> 129:0ab6a29f35bf 963 "BIC R0, R0, #7;" /* ;ensure stack is 8-byte aligned */
<> 129:0ab6a29f35bf 964 "MRS R1, CPSR;"
<> 129:0ab6a29f35bf 965 "CPS %0;" /* ;no effect in USR mode */
<> 129:0ab6a29f35bf 966 "MOV SP, R0;"
<> 129:0ab6a29f35bf 967 "MSR CPSR_c, R1;" /* ;no effect in USR mode */
<> 129:0ab6a29f35bf 968 "ISB;"
<> 129:0ab6a29f35bf 969 //"BX LR;"
<> 129:0ab6a29f35bf 970 :
<> 129:0ab6a29f35bf 971 : "i"(MODE_SYS)
<> 129:0ab6a29f35bf 972 : "r0", "r1");
<> 129:0ab6a29f35bf 973 return;
<> 129:0ab6a29f35bf 974 }
<> 129:0ab6a29f35bf 975
<> 129:0ab6a29f35bf 976 /** \brief Set User Mode
<> 129:0ab6a29f35bf 977
<> 129:0ab6a29f35bf 978 This function changes the processor state to User Mode
<> 129:0ab6a29f35bf 979 */
<> 129:0ab6a29f35bf 980 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_CPS_USR(void)
<> 129:0ab6a29f35bf 981 {
<> 129:0ab6a29f35bf 982 __asm__ volatile (
<> 129:0ab6a29f35bf 983 ".ARM;"
<> 129:0ab6a29f35bf 984
<> 129:0ab6a29f35bf 985 "CPS %0;"
<> 129:0ab6a29f35bf 986 //"BX LR;"
<> 129:0ab6a29f35bf 987 :
<> 129:0ab6a29f35bf 988 : "i"(MODE_USR)
<> 129:0ab6a29f35bf 989 : );
<> 129:0ab6a29f35bf 990 return;
<> 129:0ab6a29f35bf 991 }
<> 129:0ab6a29f35bf 992
<> 129:0ab6a29f35bf 993
<> 129:0ab6a29f35bf 994 /** \brief Enable FIQ
<> 129:0ab6a29f35bf 995
<> 129:0ab6a29f35bf 996 This function enables FIQ interrupts by clearing the F-bit in the CPSR.
<> 129:0ab6a29f35bf 997 Can only be executed in Privileged modes.
<> 129:0ab6a29f35bf 998 */
<> 129:0ab6a29f35bf 999 #define __enable_fault_irq() __asm__ volatile ("cpsie f")
<> 129:0ab6a29f35bf 1000
<> 129:0ab6a29f35bf 1001
<> 129:0ab6a29f35bf 1002 /** \brief Disable FIQ
<> 129:0ab6a29f35bf 1003
<> 129:0ab6a29f35bf 1004 This function disables FIQ interrupts by setting the F-bit in the CPSR.
<> 129:0ab6a29f35bf 1005 Can only be executed in Privileged modes.
<> 129:0ab6a29f35bf 1006 */
<> 129:0ab6a29f35bf 1007 #define __disable_fault_irq() __asm__ volatile ("cpsid f")
<> 129:0ab6a29f35bf 1008
<> 129:0ab6a29f35bf 1009
<> 129:0ab6a29f35bf 1010 /** \brief Get FPSCR
<> 129:0ab6a29f35bf 1011
<> 129:0ab6a29f35bf 1012 This function returns the current value of the Floating Point Status/Control register.
<> 129:0ab6a29f35bf 1013
<> 129:0ab6a29f35bf 1014 \return Floating Point Status/Control register value
<> 129:0ab6a29f35bf 1015 */
<> 129:0ab6a29f35bf 1016 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_FPSCR(void)
<> 129:0ab6a29f35bf 1017 {
<> 129:0ab6a29f35bf 1018 #if (__FPU_PRESENT == 1) && (__FPU_USED == 1)
<> 129:0ab6a29f35bf 1019 #if 1
<> 129:0ab6a29f35bf 1020 uint32_t result;
<> 129:0ab6a29f35bf 1021
<> 129:0ab6a29f35bf 1022 __ASM volatile ("vmrs %0, fpscr" : "=r" (result) );
<> 129:0ab6a29f35bf 1023 return (result);
<> 129:0ab6a29f35bf 1024 #else
<> 129:0ab6a29f35bf 1025 register uint32_t __regfpscr __ASM("fpscr");
<> 129:0ab6a29f35bf 1026 return(__regfpscr);
<> 129:0ab6a29f35bf 1027 #endif
<> 129:0ab6a29f35bf 1028 #else
<> 129:0ab6a29f35bf 1029 return(0);
<> 129:0ab6a29f35bf 1030 #endif
<> 129:0ab6a29f35bf 1031 }
<> 129:0ab6a29f35bf 1032
<> 129:0ab6a29f35bf 1033
<> 129:0ab6a29f35bf 1034 /** \brief Set FPSCR
<> 129:0ab6a29f35bf 1035
<> 129:0ab6a29f35bf 1036 This function assigns the given value to the Floating Point Status/Control register.
<> 129:0ab6a29f35bf 1037
<> 129:0ab6a29f35bf 1038 \param [in] fpscr Floating Point Status/Control value to set
<> 129:0ab6a29f35bf 1039 */
<> 129:0ab6a29f35bf 1040 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_FPSCR(uint32_t fpscr)
<> 129:0ab6a29f35bf 1041 {
<> 129:0ab6a29f35bf 1042 #if (__FPU_PRESENT == 1) && (__FPU_USED == 1)
<> 129:0ab6a29f35bf 1043 #if 1
<> 129:0ab6a29f35bf 1044 __ASM volatile ("vmsr fpscr, %0" : : "r" (fpscr) );
<> 129:0ab6a29f35bf 1045 #else
<> 129:0ab6a29f35bf 1046 register uint32_t __regfpscr __ASM("fpscr");
<> 129:0ab6a29f35bf 1047 __regfpscr = (fpscr);
<> 129:0ab6a29f35bf 1048 #endif
<> 129:0ab6a29f35bf 1049 #endif
<> 129:0ab6a29f35bf 1050 }
<> 129:0ab6a29f35bf 1051
<> 129:0ab6a29f35bf 1052 /** \brief Get FPEXC
<> 129:0ab6a29f35bf 1053
<> 129:0ab6a29f35bf 1054 This function returns the current value of the Floating Point Exception Control register.
<> 129:0ab6a29f35bf 1055
<> 129:0ab6a29f35bf 1056 \return Floating Point Exception Control register value
<> 129:0ab6a29f35bf 1057 */
<> 129:0ab6a29f35bf 1058 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_FPEXC(void)
<> 129:0ab6a29f35bf 1059 {
<> 129:0ab6a29f35bf 1060 #if (__FPU_PRESENT == 1)
<> 129:0ab6a29f35bf 1061 #if 1
<> 129:0ab6a29f35bf 1062 uint32_t result;
<> 129:0ab6a29f35bf 1063
<> 129:0ab6a29f35bf 1064 __ASM volatile ("vmrs %0, fpexc" : "=r" (result));
<> 129:0ab6a29f35bf 1065 return (result);
<> 129:0ab6a29f35bf 1066 #else
<> 129:0ab6a29f35bf 1067 register uint32_t __regfpexc __ASM("fpexc");
<> 129:0ab6a29f35bf 1068 return(__regfpexc);
<> 129:0ab6a29f35bf 1069 #endif
<> 129:0ab6a29f35bf 1070 #else
<> 129:0ab6a29f35bf 1071 return(0);
<> 129:0ab6a29f35bf 1072 #endif
<> 129:0ab6a29f35bf 1073 }
<> 129:0ab6a29f35bf 1074
<> 129:0ab6a29f35bf 1075
<> 129:0ab6a29f35bf 1076 /** \brief Set FPEXC
<> 129:0ab6a29f35bf 1077
<> 129:0ab6a29f35bf 1078 This function assigns the given value to the Floating Point Exception Control register.
<> 129:0ab6a29f35bf 1079
<> 129:0ab6a29f35bf 1080 \param [in] fpexc Floating Point Exception Control value to set
<> 129:0ab6a29f35bf 1081 */
<> 129:0ab6a29f35bf 1082 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_FPEXC(uint32_t fpexc)
<> 129:0ab6a29f35bf 1083 {
<> 129:0ab6a29f35bf 1084 #if (__FPU_PRESENT == 1)
<> 129:0ab6a29f35bf 1085 #if 1
<> 129:0ab6a29f35bf 1086 __ASM volatile ("vmsr fpexc, %0" : : "r" (fpexc));
<> 129:0ab6a29f35bf 1087 #else
<> 129:0ab6a29f35bf 1088 register uint32_t __regfpexc __ASM("fpexc");
<> 129:0ab6a29f35bf 1089 __regfpexc = (fpexc);
<> 129:0ab6a29f35bf 1090 #endif
<> 129:0ab6a29f35bf 1091 #endif
<> 129:0ab6a29f35bf 1092 }
<> 129:0ab6a29f35bf 1093
<> 129:0ab6a29f35bf 1094 /** \brief Get CPACR
<> 129:0ab6a29f35bf 1095
<> 129:0ab6a29f35bf 1096 This function returns the current value of the Coprocessor Access Control register.
<> 129:0ab6a29f35bf 1097
<> 129:0ab6a29f35bf 1098 \return Coprocessor Access Control register value
<> 129:0ab6a29f35bf 1099 */
<> 129:0ab6a29f35bf 1100 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_CPACR(void)
<> 129:0ab6a29f35bf 1101 {
<> 129:0ab6a29f35bf 1102 #if 1
<> 129:0ab6a29f35bf 1103 register uint32_t __regCPACR;
<> 129:0ab6a29f35bf 1104 __ASM volatile ("mrc p15, 0, %0, c1, c0, 2" : "=r" (__regCPACR));
<> 129:0ab6a29f35bf 1105 #else
<> 129:0ab6a29f35bf 1106 register uint32_t __regCPACR __ASM("cp15:0:c1:c0:2");
<> 129:0ab6a29f35bf 1107 #endif
<> 129:0ab6a29f35bf 1108 return __regCPACR;
<> 129:0ab6a29f35bf 1109 }
<> 129:0ab6a29f35bf 1110
<> 129:0ab6a29f35bf 1111 /** \brief Set CPACR
<> 129:0ab6a29f35bf 1112
<> 129:0ab6a29f35bf 1113 This function assigns the given value to the Coprocessor Access Control register.
<> 129:0ab6a29f35bf 1114
<> 129:0ab6a29f35bf 1115 \param [in] cpacr Coprocessor Access Control value to set
<> 129:0ab6a29f35bf 1116 */
<> 129:0ab6a29f35bf 1117 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_CPACR(uint32_t cpacr)
<> 129:0ab6a29f35bf 1118 {
<> 129:0ab6a29f35bf 1119 #if 1
<> 129:0ab6a29f35bf 1120 __ASM volatile ("mcr p15, 0, %0, c1, c0, 2" : : "r" (cpacr));
<> 129:0ab6a29f35bf 1121 #else
<> 129:0ab6a29f35bf 1122 register uint32_t __regCPACR __ASM("cp15:0:c1:c0:2");
<> 129:0ab6a29f35bf 1123 __regCPACR = cpacr;
<> 129:0ab6a29f35bf 1124 #endif
<> 129:0ab6a29f35bf 1125 __ISB();
<> 129:0ab6a29f35bf 1126 }
<> 129:0ab6a29f35bf 1127
<> 129:0ab6a29f35bf 1128 /** \brief Get CBAR
<> 129:0ab6a29f35bf 1129
<> 129:0ab6a29f35bf 1130 This function returns the value of the Configuration Base Address register.
<> 129:0ab6a29f35bf 1131
<> 129:0ab6a29f35bf 1132 \return Configuration Base Address register value
<> 129:0ab6a29f35bf 1133 */
<> 129:0ab6a29f35bf 1134 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_CBAR() {
<> 129:0ab6a29f35bf 1135 #if 1
<> 129:0ab6a29f35bf 1136 register uint32_t __regCBAR;
<> 129:0ab6a29f35bf 1137 __ASM volatile ("mrc p15, 4, %0, c15, c0, 0" : "=r" (__regCBAR));
<> 129:0ab6a29f35bf 1138 #else
<> 129:0ab6a29f35bf 1139 register uint32_t __regCBAR __ASM("cp15:4:c15:c0:0");
<> 129:0ab6a29f35bf 1140 #endif
<> 129:0ab6a29f35bf 1141 return(__regCBAR);
<> 129:0ab6a29f35bf 1142 }
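/* Usage sketch (editorial addition): on a Cortex-A9 MPCore the value returned
 * by CBAR is the private peripheral base (PERIPHBASE). The offsets below
 * (GIC CPU interface at +0x0100, GIC Distributor at +0x1000) reflect the usual
 * Cortex-A9 MPCore memory map and are given for illustration only; check the
 * TRM of the actual device.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __example_gic_cpu_base(void)
{
    return __get_CBAR() + 0x0100U;              /* GIC CPU interface registers  */
}

__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __example_gic_dist_base(void)
{
    return __get_CBAR() + 0x1000U;              /* GIC Distributor registers    */
}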
<> 129:0ab6a29f35bf 1143
<> 129:0ab6a29f35bf 1144 /** \brief Get TTBR0
<> 129:0ab6a29f35bf 1145
<> 129:0ab6a29f35bf 1146 This function returns the value of the Translation Table Base Register 0.
<> 129:0ab6a29f35bf 1147
<> 129:0ab6a29f35bf 1148 \return Translation Table Base Register 0 value
<> 129:0ab6a29f35bf 1149 */
<> 129:0ab6a29f35bf 1150 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_TTBR0() {
<> 129:0ab6a29f35bf 1151 #if 1
<> 129:0ab6a29f35bf 1152 register uint32_t __regTTBR0;
<> 129:0ab6a29f35bf 1153 __ASM volatile ("mrc p15, 0, %0, c2, c0, 0" : "=r" (__regTTBR0));
<> 129:0ab6a29f35bf 1154 #else
<> 129:0ab6a29f35bf 1155 register uint32_t __regTTBR0 __ASM("cp15:0:c2:c0:0");
<> 129:0ab6a29f35bf 1156 #endif
<> 129:0ab6a29f35bf 1157 return(__regTTBR0);
<> 129:0ab6a29f35bf 1158 }
<> 129:0ab6a29f35bf 1159
<> 129:0ab6a29f35bf 1160 /** \brief Set TTBR0
<> 129:0ab6a29f35bf 1161
<> 129:0ab6a29f35bf 1162 This function assigns the given value to the Translation Table Base Register 0.
<> 129:0ab6a29f35bf 1163
<> 129:0ab6a29f35bf 1164 \param [in] ttbr0 Translation Table Base Register 0 value to set
<> 129:0ab6a29f35bf 1165 */
<> 129:0ab6a29f35bf 1166 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_TTBR0(uint32_t ttbr0) {
<> 129:0ab6a29f35bf 1167 #if 1
<> 129:0ab6a29f35bf 1168 __ASM volatile ("mcr p15, 0, %0, c2, c0, 0" : : "r" (ttbr0));
<> 129:0ab6a29f35bf 1169 #else
<> 129:0ab6a29f35bf 1170 register uint32_t __regTTBR0 __ASM("cp15:0:c2:c0:0");
<> 129:0ab6a29f35bf 1171 __regTTBR0 = ttbr0;
<> 129:0ab6a29f35bf 1172 #endif
<> 129:0ab6a29f35bf 1173 __ISB();
<> 129:0ab6a29f35bf 1174 }
<> 129:0ab6a29f35bf 1175
<> 129:0ab6a29f35bf 1176 /** \brief Get DACR
<> 129:0ab6a29f35bf 1177
<> 129:0ab6a29f35bf 1178 This function returns the value of the Domain Access Control Register.
<> 129:0ab6a29f35bf 1179
<> 129:0ab6a29f35bf 1180 \return Domain Access Control Register value
<> 129:0ab6a29f35bf 1181 */
<> 129:0ab6a29f35bf 1182 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_DACR() {
<> 129:0ab6a29f35bf 1183 #if 1
<> 129:0ab6a29f35bf 1184 register uint32_t __regDACR;
<> 129:0ab6a29f35bf 1185 __ASM volatile ("mrc p15, 0, %0, c3, c0, 0" : "=r" (__regDACR));
<> 129:0ab6a29f35bf 1186 #else
<> 129:0ab6a29f35bf 1187 register uint32_t __regDACR __ASM("cp15:0:c3:c0:0");
<> 129:0ab6a29f35bf 1188 #endif
<> 129:0ab6a29f35bf 1189 return(__regDACR);
<> 129:0ab6a29f35bf 1190 }
<> 129:0ab6a29f35bf 1191
<> 129:0ab6a29f35bf 1192 /** \brief Set DACR
<> 129:0ab6a29f35bf 1193
<> 129:0ab6a29f35bf 1194 This function assigns the given value to the Domain Access Control Register.
<> 129:0ab6a29f35bf 1195
<> 129:0ab6a29f35bf 1196 \param [in] dacr Domain Access Control Register value to set
<> 129:0ab6a29f35bf 1197 */
<> 129:0ab6a29f35bf 1198 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_DACR(uint32_t dacr) {
<> 129:0ab6a29f35bf 1199 #if 1
<> 129:0ab6a29f35bf 1200 __ASM volatile ("mcr p15, 0, %0, c3, c0, 0" : : "r" (dacr));
<> 129:0ab6a29f35bf 1201 #else
<> 129:0ab6a29f35bf 1202 register uint32_t __regDACR __ASM("cp15:0:c3:c0:0");
<> 129:0ab6a29f35bf 1203 __regDACR = dacr;
<> 129:0ab6a29f35bf 1204 #endif
<> 129:0ab6a29f35bf 1205 __ISB();
<> 129:0ab6a29f35bf 1206 }
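/* Usage sketch (editorial addition): DACR holds a 2-bit field per domain
 * (0b01 = Client, the access permissions in the translation tables are
 * checked; 0b11 = Manager, permissions are ignored). Setting every domain to
 * Client is the usual choice when the translation tables carry the real
 * permissions. The helper name is illustrative only.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __example_dacr_all_client(void)
{
    __set_DACR(0x55555555U);                    /* all 16 domains -> Client     */
}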
<> 129:0ab6a29f35bf 1207
<> 129:0ab6a29f35bf 1208 /******************************** Cache and BTAC enable ****************************************************/
<> 129:0ab6a29f35bf 1209
<> 129:0ab6a29f35bf 1210 /** \brief Set SCTLR
<> 129:0ab6a29f35bf 1211
<> 129:0ab6a29f35bf 1212 This function assigns the given value to the System Control Register.
<> 129:0ab6a29f35bf 1213
<> 129:0ab6a29f35bf 1214 \param [in] sctlr System Control Register value to set
<> 129:0ab6a29f35bf 1215 */
<> 129:0ab6a29f35bf 1216 __attribute__( ( always_inline ) ) __STATIC_INLINE void __set_SCTLR(uint32_t sctlr)
<> 129:0ab6a29f35bf 1217 {
<> 129:0ab6a29f35bf 1218 #if 1
<> 129:0ab6a29f35bf 1219 __ASM volatile ("mcr p15, 0, %0, c1, c0, 0" : : "r" (sctlr));
<> 129:0ab6a29f35bf 1220 #else
<> 129:0ab6a29f35bf 1221 register uint32_t __regSCTLR __ASM("cp15:0:c1:c0:0");
<> 129:0ab6a29f35bf 1222 __regSCTLR = sctlr;
<> 129:0ab6a29f35bf 1223 #endif
<> 129:0ab6a29f35bf 1224 }
<> 129:0ab6a29f35bf 1225
<> 129:0ab6a29f35bf 1226 /** \brief Get SCTLR
<> 129:0ab6a29f35bf 1227
<> 129:0ab6a29f35bf 1228 This function returns the value of the System Control Register.
<> 129:0ab6a29f35bf 1229
<> 129:0ab6a29f35bf 1230 \return System Control Register value
<> 129:0ab6a29f35bf 1231 */
<> 129:0ab6a29f35bf 1232 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __get_SCTLR() {
<> 129:0ab6a29f35bf 1233 #if 1
<> 129:0ab6a29f35bf 1234 register uint32_t __regSCTLR;
<> 129:0ab6a29f35bf 1235 __ASM volatile ("mrc p15, 0, %0, c1, c0, 0" : "=r" (__regSCTLR));
<> 129:0ab6a29f35bf 1236 #else
<> 129:0ab6a29f35bf 1237 register uint32_t __regSCTLR __ASM("cp15:0:c1:c0:0");
<> 129:0ab6a29f35bf 1238 #endif
<> 129:0ab6a29f35bf 1239 return(__regSCTLR);
<> 129:0ab6a29f35bf 1240 }
<> 129:0ab6a29f35bf 1241
<> 129:0ab6a29f35bf 1242 /** \brief Enable Caches
<> 129:0ab6a29f35bf 1243
<> 129:0ab6a29f35bf 1244 Enable Caches
<> 129:0ab6a29f35bf 1245 */
<> 129:0ab6a29f35bf 1246 __attribute__( ( always_inline ) ) __STATIC_INLINE void __enable_caches(void) {
<> 129:0ab6a29f35bf 1247 // Set I bit 12 to enable I Cache
<> 129:0ab6a29f35bf 1248 // Set C bit 2 to enable D Cache
<> 129:0ab6a29f35bf 1249 __set_SCTLR( __get_SCTLR() | (1 << 12) | (1 << 2));
<> 129:0ab6a29f35bf 1250 }
<> 129:0ab6a29f35bf 1251
<> 129:0ab6a29f35bf 1252 /** \brief Disable Caches
<> 129:0ab6a29f35bf 1253
<> 129:0ab6a29f35bf 1254 Disable Caches
<> 129:0ab6a29f35bf 1255 */
<> 129:0ab6a29f35bf 1256 __attribute__( ( always_inline ) ) __STATIC_INLINE void __disable_caches(void) {
<> 129:0ab6a29f35bf 1257 // Clear I bit 12 to disable I Cache
<> 129:0ab6a29f35bf 1258 // Clear C bit 2 to disable D Cache
<> 129:0ab6a29f35bf 1259 __set_SCTLR( __get_SCTLR() & ~(1 << 12) & ~(1 << 2));
<> 129:0ab6a29f35bf 1260 __ISB();
<> 129:0ab6a29f35bf 1261 }
<> 129:0ab6a29f35bf 1262
<> 129:0ab6a29f35bf 1263 /** \brief Enable BTAC
<> 129:0ab6a29f35bf 1264
<> 129:0ab6a29f35bf 1265 Enable BTAC
<> 129:0ab6a29f35bf 1266 */
<> 129:0ab6a29f35bf 1267 __attribute__( ( always_inline ) ) __STATIC_INLINE void __enable_btac(void) {
<> 129:0ab6a29f35bf 1268 // Set Z bit 11 to enable branch prediction
<> 129:0ab6a29f35bf 1269 __set_SCTLR( __get_SCTLR() | (1 << 11));
<> 129:0ab6a29f35bf 1270 __ISB();
<> 129:0ab6a29f35bf 1271 }
<> 129:0ab6a29f35bf 1272
<> 129:0ab6a29f35bf 1273 /** \brief Disable BTAC
<> 129:0ab6a29f35bf 1274
<> 129:0ab6a29f35bf 1275 Disable BTAC
<> 129:0ab6a29f35bf 1276 */
<> 129:0ab6a29f35bf 1277 __attribute__( ( always_inline ) ) __STATIC_INLINE void __disable_btac(void) {
<> 129:0ab6a29f35bf 1278 // Clear Z bit 11 to disable branch prediction
<> 129:0ab6a29f35bf 1279 __set_SCTLR( __get_SCTLR() & ~(1 << 11));
<> 129:0ab6a29f35bf 1280 }
<> 129:0ab6a29f35bf 1281
<> 129:0ab6a29f35bf 1282
<> 129:0ab6a29f35bf 1283 /** \brief Enable MMU
<> 129:0ab6a29f35bf 1284
<> 129:0ab6a29f35bf 1285 Enable MMU
<> 129:0ab6a29f35bf 1286 */
<> 129:0ab6a29f35bf 1287 __attribute__( ( always_inline ) ) __STATIC_INLINE void __enable_mmu(void) {
<> 129:0ab6a29f35bf 1288 // Set M bit 0 to enable the MMU
<> 129:0ab6a29f35bf 1289 // Set AFE bit to enable simplified access permissions model
<> 129:0ab6a29f35bf 1290 // Clear TRE bit to disable TEX remap and A bit to disable strict alignment fault checking
<> 129:0ab6a29f35bf 1291 __set_SCTLR( (__get_SCTLR() & ~(1 << 28) & ~(1 << 1)) | 1 | (1 << 29));
<> 129:0ab6a29f35bf 1292 __ISB();
<> 129:0ab6a29f35bf 1293 }
<> 129:0ab6a29f35bf 1294
<> 129:0ab6a29f35bf 1295 /** \brief Disable MMU
<> 129:0ab6a29f35bf 1296
<> 129:0ab6a29f35bf 1297 Disable MMU
<> 129:0ab6a29f35bf 1298 */
<> 129:0ab6a29f35bf 1299 __attribute__( ( always_inline ) ) __STATIC_INLINE void __disable_mmu(void) {
<> 129:0ab6a29f35bf 1300 // Clear M bit 0 to disable the MMU
<> 129:0ab6a29f35bf 1301 __set_SCTLR( __get_SCTLR() & ~1);
<> 129:0ab6a29f35bf 1302 __ISB();
<> 129:0ab6a29f35bf 1303 }
<> 129:0ab6a29f35bf 1304
<> 129:0ab6a29f35bf 1305 /******************************** TLB maintenance operations ************************************************/
<> 129:0ab6a29f35bf 1306 /** \brief Invalidate the whole TLB
<> 129:0ab6a29f35bf 1307
<> 129:0ab6a29f35bf 1308 TLBIALL. Invalidate the whole TLB
<> 129:0ab6a29f35bf 1309 */
<> 129:0ab6a29f35bf 1310
<> 129:0ab6a29f35bf 1311 __attribute__( ( always_inline ) ) __STATIC_INLINE void __ca9u_inv_tlb_all(void) {
<> 129:0ab6a29f35bf 1312 #if 1
<> 129:0ab6a29f35bf 1313 __ASM volatile ("mcr p15, 0, %0, c8, c7, 0" : : "r" (0));
<> 129:0ab6a29f35bf 1314 #else
<> 129:0ab6a29f35bf 1315 register uint32_t __TLBIALL __ASM("cp15:0:c8:c7:0");
<> 129:0ab6a29f35bf 1316 __TLBIALL = 0;
<> 129:0ab6a29f35bf 1317 #endif
<> 129:0ab6a29f35bf 1318 __DSB();
<> 129:0ab6a29f35bf 1319 __ISB();
<> 129:0ab6a29f35bf 1320 }
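/* Usage sketch (editorial addition): one typical bring-up order for the
 * helpers above. "tt_base" is a placeholder for a first-level translation
 * table (typically 16KB aligned) prepared elsewhere; invalidating the L1
 * caches beforehand uses the cache operations further down this file.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __example_mmu_bringup(uint32_t tt_base)
{
    __set_TTBR0(tt_base);                       /* point TTBR0 at the L1 table  */
    __set_DACR(0x55555555U);                    /* all domains -> Client        */
    __ca9u_inv_tlb_all();                       /* discard stale translations   */
    __enable_mmu();                             /* SCTLR: set M and AFE, clear TRE and A (as coded above) */
    __enable_caches();                          /* SCTLR.I and SCTLR.C          */
    __enable_btac();                            /* SCTLR.Z, branch prediction   */
}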
<> 129:0ab6a29f35bf 1321
<> 129:0ab6a29f35bf 1322 /******************************** BTB maintenance operations ************************************************/
<> 129:0ab6a29f35bf 1323 /** \brief Invalidate entire branch predictor array
<> 129:0ab6a29f35bf 1324
<> 129:0ab6a29f35bf 1325 BPIALL. Branch Predictor Invalidate All.
<> 129:0ab6a29f35bf 1326 */
<> 129:0ab6a29f35bf 1327
<> 129:0ab6a29f35bf 1328 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_inv_btac(void) {
<> 129:0ab6a29f35bf 1329 #if 1
<> 129:0ab6a29f35bf 1330 __ASM volatile ("mcr p15, 0, %0, c7, c5, 6" : : "r" (0));
<> 129:0ab6a29f35bf 1331 #else
<> 129:0ab6a29f35bf 1332 register uint32_t __BPIALL __ASM("cp15:0:c7:c5:6");
<> 129:0ab6a29f35bf 1333 __BPIALL = 0;
<> 129:0ab6a29f35bf 1334 #endif
<> 129:0ab6a29f35bf 1335 __DSB(); //ensure completion of the invalidation
<> 129:0ab6a29f35bf 1336 __ISB(); //ensure instruction fetch path sees new state
<> 129:0ab6a29f35bf 1337 }
<> 129:0ab6a29f35bf 1338
<> 129:0ab6a29f35bf 1339
<> 129:0ab6a29f35bf 1340 /******************************** L1 cache operations ******************************************************/
<> 129:0ab6a29f35bf 1341
<> 129:0ab6a29f35bf 1342 /** \brief Invalidate the whole I$
<> 129:0ab6a29f35bf 1343
<> 129:0ab6a29f35bf 1344 ICIALLU. Instruction Cache Invalidate All to PoU
<> 129:0ab6a29f35bf 1345 */
<> 129:0ab6a29f35bf 1346 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_inv_icache_all(void) {
<> 129:0ab6a29f35bf 1347 #if 1
<> 129:0ab6a29f35bf 1348 __ASM volatile ("mcr p15, 0, %0, c7, c5, 0" : : "r" (0));
<> 129:0ab6a29f35bf 1349 #else
<> 129:0ab6a29f35bf 1350 register uint32_t __ICIALLU __ASM("cp15:0:c7:c5:0");
<> 129:0ab6a29f35bf 1351 __ICIALLU = 0;
<> 129:0ab6a29f35bf 1352 #endif
<> 129:0ab6a29f35bf 1353 __DSB(); //ensure completion of the invalidation
<> 129:0ab6a29f35bf 1354 __ISB(); //ensure instruction fetch path sees new I cache state
<> 129:0ab6a29f35bf 1355 }
<> 129:0ab6a29f35bf 1356
<> 129:0ab6a29f35bf 1357 /** \brief Clean D$ by MVA
<> 129:0ab6a29f35bf 1358
<> 129:0ab6a29f35bf 1359 DCCMVAC. Data cache clean by MVA to PoC
<> 129:0ab6a29f35bf 1360 */
<> 129:0ab6a29f35bf 1361 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_clean_dcache_mva(void *va) {
<> 129:0ab6a29f35bf 1362 #if 1
<> 129:0ab6a29f35bf 1363 __ASM volatile ("mcr p15, 0, %0, c7, c10, 1" : : "r" ((uint32_t)va));
<> 129:0ab6a29f35bf 1364 #else
<> 129:0ab6a29f35bf 1365 register uint32_t __DCCMVAC __ASM("cp15:0:c7:c10:1");
<> 129:0ab6a29f35bf 1366 __DCCMVAC = (uint32_t)va;
<> 129:0ab6a29f35bf 1367 #endif
<> 129:0ab6a29f35bf 1368 __DMB(); //ensure the ordering of data cache maintenance operations and their effects
<> 129:0ab6a29f35bf 1369 }
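/* Usage sketch (editorial addition): after patching an instruction at "va"
 * (for example installing a RAM vector), the new data has to become visible
 * to instruction fetches before the stale copies are discarded. A minimal
 * per-line sequence using the helpers above; the helper name is illustrative.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __example_sync_code_line(void *va)
{
    __v7_clean_dcache_mva(va);                  /* write the patched line back  */
    __DSB();                                    /* clean must complete first    */
    __v7_inv_icache_all();                      /* ICIALLU, drop stale opcodes  */
    __v7_inv_btac();                            /* BPIALL, drop branch targets  */
}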
<> 129:0ab6a29f35bf 1370
<> 129:0ab6a29f35bf 1371 /** \brief Invalidate D$ by MVA
<> 129:0ab6a29f35bf 1372
<> 129:0ab6a29f35bf 1373 DCIMVAC. Data cache invalidate by MVA to PoC
<> 129:0ab6a29f35bf 1374 */
<> 129:0ab6a29f35bf 1375 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_inv_dcache_mva(void *va) {
<> 129:0ab6a29f35bf 1376 #if 1
<> 129:0ab6a29f35bf 1377 __ASM volatile ("mcr p15, 0, %0, c7, c6, 1" : : "r" ((uint32_t)va));
<> 129:0ab6a29f35bf 1378 #else
<> 129:0ab6a29f35bf 1379 register uint32_t __DCIMVAC __ASM("cp15:0:c7:c6:1");
<> 129:0ab6a29f35bf 1380 __DCIMVAC = (uint32_t)va;
<> 129:0ab6a29f35bf 1381 #endif
<> 129:0ab6a29f35bf 1382 __DMB(); //ensure the ordering of data cache maintenance operations and their effects
<> 129:0ab6a29f35bf 1383 }
<> 129:0ab6a29f35bf 1384
<> 129:0ab6a29f35bf 1385 /** \brief Clean and Invalidate D$ by MVA
<> 129:0ab6a29f35bf 1386
<> 129:0ab6a29f35bf 1387 DCCIMVAC. Data cache clean and invalidate by MVA to PoC
<> 129:0ab6a29f35bf 1388 */
<> 129:0ab6a29f35bf 1389 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_clean_inv_dcache_mva(void *va) {
<> 129:0ab6a29f35bf 1390 #if 1
<> 129:0ab6a29f35bf 1391 __ASM volatile ("mcr p15, 0, %0, c7, c14, 1" : : "r" ((uint32_t)va));
<> 129:0ab6a29f35bf 1392 #else
<> 129:0ab6a29f35bf 1393 register uint32_t __DCCIMVAC __ASM("cp15:0:c7:c14:1");
<> 129:0ab6a29f35bf 1394 __DCCIMVAC = (uint32_t)va;
<> 129:0ab6a29f35bf 1395 #endif
<> 129:0ab6a29f35bf 1396 __DMB(); //ensure the ordering of data cache maintenance operations and their effects
<> 129:0ab6a29f35bf 1397 }
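/* Usage sketch (editorial addition): the by-MVA operations act on one cache
 * line at a time, so a buffer has to be walked in line-size steps, e.g. before
 * handing it to a DMA engine. 32 bytes is assumed here as the Cortex-A9 L1
 * line size; read CTR/CCSIDR for a portable value.
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __example_clean_dcache_range(void *addr, uint32_t size)
{
    uint32_t va  = (uint32_t)addr & ~31U;       /* align down to a cache line   */
    uint32_t end = (uint32_t)addr + size;

    while (va < end) {
        __v7_clean_dcache_mva((void *)va);      /* DCCMVAC, one line at a time  */
        va += 32U;
    }
    __DSB();                                    /* complete before starting DMA */
}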
<> 129:0ab6a29f35bf 1398
<> 129:0ab6a29f35bf 1399 /** \brief Clean and Invalidate the entire data or unified cache
<> 129:0ab6a29f35bf 1400
<> 129:0ab6a29f35bf 1401 Generic mechanism for cleaning/invalidating the entire data or unified cache to the point of coherency.
<> 129:0ab6a29f35bf 1402 */
<> 129:0ab6a29f35bf 1403 extern void __v7_all_cache(uint32_t op);
<> 129:0ab6a29f35bf 1404
<> 129:0ab6a29f35bf 1405
<> 129:0ab6a29f35bf 1406 /** \brief Invalidate the whole D$
<> 129:0ab6a29f35bf 1407
<> 129:0ab6a29f35bf 1408 DCISW. Invalidate by Set/Way
<> 129:0ab6a29f35bf 1409 */
<> 129:0ab6a29f35bf 1410
<> 129:0ab6a29f35bf 1411 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_inv_dcache_all(void) {
<> 129:0ab6a29f35bf 1412 __v7_all_cache(0);
<> 129:0ab6a29f35bf 1413 }
<> 129:0ab6a29f35bf 1414
<> 129:0ab6a29f35bf 1415 /** \brief Clean the whole D$
<> 129:0ab6a29f35bf 1416
<> 129:0ab6a29f35bf 1417 DCCSW. Clean by Set/Way
<> 129:0ab6a29f35bf 1418 */
<> 129:0ab6a29f35bf 1419
<> 129:0ab6a29f35bf 1420 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_clean_dcache_all(void) {
<> 129:0ab6a29f35bf 1421 __v7_all_cache(1);
<> 129:0ab6a29f35bf 1422 }
<> 129:0ab6a29f35bf 1423
<> 129:0ab6a29f35bf 1424 /** \brief Clean and invalidate the whole D$
<> 129:0ab6a29f35bf 1425
<> 129:0ab6a29f35bf 1426 DCCISW. Clean and Invalidate by Set/Way
<> 129:0ab6a29f35bf 1427 */
<> 129:0ab6a29f35bf 1428
<> 129:0ab6a29f35bf 1429 __attribute__( ( always_inline ) ) __STATIC_INLINE void __v7_clean_inv_dcache_all(void) {
<> 129:0ab6a29f35bf 1430 __v7_all_cache(2);
<> 129:0ab6a29f35bf 1431 }
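/* Usage sketch (editorial addition): as the three wrappers above show, the op
 * argument of __v7_all_cache() selects invalidate (0), clean (1) or clean and
 * invalidate (2) by set/way. One common shutdown sequence disables the caches
 * first, so no new lines are allocated, and then flushes whatever is left:
 */
__attribute__( ( always_inline ) ) __STATIC_INLINE void __example_caches_off(void)
{
    __disable_caches();                         /* clear SCTLR.I and SCTLR.C    */
    __v7_clean_inv_dcache_all();                /* write back and discard lines */
}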
<> 129:0ab6a29f35bf 1432
<> 129:0ab6a29f35bf 1433 #include "core_ca_mmu.h"
<> 129:0ab6a29f35bf 1434
<> 129:0ab6a29f35bf 1435 #elif (defined (__TASKING__)) /*--------------- TASKING Compiler -----------------*/
<> 129:0ab6a29f35bf 1436
<> 129:0ab6a29f35bf 1437 #error TASKING Compiler support not implemented for Cortex-A
<> 129:0ab6a29f35bf 1438
<> 129:0ab6a29f35bf 1439 #endif
<> 129:0ab6a29f35bf 1440
<> 129:0ab6a29f35bf 1441 /*@} end of CMSIS_Core_RegAccFunctions */
<> 129:0ab6a29f35bf 1442
<> 129:0ab6a29f35bf 1443
<> 129:0ab6a29f35bf 1444 #endif /* __CORE_CAFUNC_H__ */