The official Mbed 2 C/C++ SDK provides the software platform and libraries to build your applications.

Dependents:   hello, SerialTestv11, SerialTestv12, Sierpinski, and others

mbed 2

This is the mbed 2 library. If you'd like to learn about Mbed OS, please see the mbed-os docs.

Committer: <>
Date: Tue Mar 14 16:20:51 2017 +0000
Revision: 138:093f2bd7b9eb
Parent: 133:99b5ccf27215
Release 138 of the mbed library

Ports for Upcoming Targets


Fixes and Changes

3716: fix for issue #3715: correction in startup files for ARM and IAR, alignment of system_stm32f429xx.c files https://github.com/ARMmbed/mbed-os/pull/3716
3741: STM32 remove warning in hal_tick_32b.c file https://github.com/ARMmbed/mbed-os/pull/3741
3780: STM32L4 : Fix GPIO G port compatibility https://github.com/ARMmbed/mbed-os/pull/3780
3831: NCS36510: SPISLAVE enabled (Conflict resolved) https://github.com/ARMmbed/mbed-os/pull/3831
3836: Allow nRF's PSTORAGE_NUM_OF_PAGES to be redefined outside of mbed-os https://github.com/ARMmbed/mbed-os/pull/3836
3840: STM32: gpio SPEED - always set High Speed by default https://github.com/ARMmbed/mbed-os/pull/3840
3844: STM32 GPIO: Typo correction. Update comment (GPIO_IP_WITHOUT_BRR) https://github.com/ARMmbed/mbed-os/pull/3844
3850: STM32: change spi error to debug warning https://github.com/ARMmbed/mbed-os/pull/3850
3860: Define GPIO_IP_WITHOUT_BRR for xDot platform https://github.com/ARMmbed/mbed-os/pull/3860
3880: DISCO_F469NI: allow the use of CAN2 instance when CAN1 is not activated https://github.com/ARMmbed/mbed-os/pull/3880
3795: Fix pwm period calc https://github.com/ARMmbed/mbed-os/pull/3795
3828: STM32 CAN API: correct format and type https://github.com/ARMmbed/mbed-os/pull/3828
3842: TARGET_NRF: corrected spi_init() to properly handle re-initialization https://github.com/ARMmbed/mbed-os/pull/3842
3843: STM32L476xG: set APB2 clock to 80MHz (instead of 40MHz) https://github.com/ARMmbed/mbed-os/pull/3843
3879: NUCLEO_F446ZE: Add missing AnalogIn pins on PF_3, PF_5 and PF_10. https://github.com/ARMmbed/mbed-os/pull/3879
3902: Fix heap and stack size for NUCLEO_F746ZG https://github.com/ARMmbed/mbed-os/pull/3902
3829: can_write(): return error code when no tx mailboxes are available https://github.com/ARMmbed/mbed-os/pull/3829

Who changed what in which revision?

UserRevisionLine numberNew contents of line
<> 133:99b5ccf27215 1 /**************************************************************************//**
<> 133:99b5ccf27215 2 * @file core_cm4_simd.h
<> 133:99b5ccf27215 3 * @brief CMSIS Cortex-M4 SIMD Header File
<> 133:99b5ccf27215 4 * @version V3.20
<> 133:99b5ccf27215 5 * @date 25. February 2013
<> 133:99b5ccf27215 6 *
<> 133:99b5ccf27215 7 * @note
<> 133:99b5ccf27215 8 *
<> 133:99b5ccf27215 9 ******************************************************************************/
<> 133:99b5ccf27215 10 /* Copyright (c) 2009 - 2013 ARM LIMITED
<> 133:99b5ccf27215 11
<> 133:99b5ccf27215 12 All rights reserved.
<> 133:99b5ccf27215 13 Redistribution and use in source and binary forms, with or without
<> 133:99b5ccf27215 14 modification, are permitted provided that the following conditions are met:
<> 133:99b5ccf27215 15 - Redistributions of source code must retain the above copyright
<> 133:99b5ccf27215 16 notice, this list of conditions and the following disclaimer.
<> 133:99b5ccf27215 17 - Redistributions in binary form must reproduce the above copyright
<> 133:99b5ccf27215 18 notice, this list of conditions and the following disclaimer in the
<> 133:99b5ccf27215 19 documentation and/or other materials provided with the distribution.
<> 133:99b5ccf27215 20 - Neither the name of ARM nor the names of its contributors may be used
<> 133:99b5ccf27215 21 to endorse or promote products derived from this software without
<> 133:99b5ccf27215 22 specific prior written permission.
<> 133:99b5ccf27215 23 *
<> 133:99b5ccf27215 24 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
<> 133:99b5ccf27215 25 AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
<> 133:99b5ccf27215 26 IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
<> 133:99b5ccf27215 27 ARE DISCLAIMED. IN NO EVENT SHALL COPYRIGHT HOLDERS AND CONTRIBUTORS BE
<> 133:99b5ccf27215 28 LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
<> 133:99b5ccf27215 29 CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
<> 133:99b5ccf27215 30 SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
<> 133:99b5ccf27215 31 INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
<> 133:99b5ccf27215 32 CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
<> 133:99b5ccf27215 33 ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
<> 133:99b5ccf27215 34 POSSIBILITY OF SUCH DAMAGE.
<> 133:99b5ccf27215 35 ---------------------------------------------------------------------------*/
<> 133:99b5ccf27215 36
<> 133:99b5ccf27215 37
<> 133:99b5ccf27215 38 #ifdef __cplusplus
<> 133:99b5ccf27215 39 extern "C" {
<> 133:99b5ccf27215 40 #endif
<> 133:99b5ccf27215 41
<> 133:99b5ccf27215 42 #ifndef __CORE_CM4_SIMD_H
<> 133:99b5ccf27215 43 #define __CORE_CM4_SIMD_H
<> 133:99b5ccf27215 44
<> 133:99b5ccf27215 45
<> 133:99b5ccf27215 46 /*******************************************************************************
<> 133:99b5ccf27215 47 * Hardware Abstraction Layer
<> 133:99b5ccf27215 48 ******************************************************************************/
<> 133:99b5ccf27215 49
<> 133:99b5ccf27215 50
<> 133:99b5ccf27215 51 /* ################### Compiler specific Intrinsics ########################### */
<> 133:99b5ccf27215 52 /** \defgroup CMSIS_SIMD_intrinsics CMSIS SIMD Intrinsics
<> 133:99b5ccf27215 53 Access to dedicated SIMD instructions
<> 133:99b5ccf27215 54 @{
<> 133:99b5ccf27215 55 */
<> 133:99b5ccf27215 56
<> 133:99b5ccf27215 57 #if defined ( __CC_ARM ) /*------------------RealView Compiler -----------------*/
<> 133:99b5ccf27215 58 /* ARM armcc specific functions */
<> 133:99b5ccf27215 59
<> 133:99b5ccf27215 60 /*------ CM4 SIMD Intrinsics -----------------------------------------------------*/
<> 133:99b5ccf27215 61 #define __SADD8 __sadd8
<> 133:99b5ccf27215 62 #define __QADD8 __qadd8
<> 133:99b5ccf27215 63 #define __SHADD8 __shadd8
<> 133:99b5ccf27215 64 #define __UADD8 __uadd8
<> 133:99b5ccf27215 65 #define __UQADD8 __uqadd8
<> 133:99b5ccf27215 66 #define __UHADD8 __uhadd8
<> 133:99b5ccf27215 67 #define __SSUB8 __ssub8
<> 133:99b5ccf27215 68 #define __QSUB8 __qsub8
<> 133:99b5ccf27215 69 #define __SHSUB8 __shsub8
<> 133:99b5ccf27215 70 #define __USUB8 __usub8
<> 133:99b5ccf27215 71 #define __UQSUB8 __uqsub8
<> 133:99b5ccf27215 72 #define __UHSUB8 __uhsub8
<> 133:99b5ccf27215 73 #define __SADD16 __sadd16
<> 133:99b5ccf27215 74 #define __QADD16 __qadd16
<> 133:99b5ccf27215 75 #define __SHADD16 __shadd16
<> 133:99b5ccf27215 76 #define __UADD16 __uadd16
<> 133:99b5ccf27215 77 #define __UQADD16 __uqadd16
<> 133:99b5ccf27215 78 #define __UHADD16 __uhadd16
<> 133:99b5ccf27215 79 #define __SSUB16 __ssub16
<> 133:99b5ccf27215 80 #define __QSUB16 __qsub16
<> 133:99b5ccf27215 81 #define __SHSUB16 __shsub16
<> 133:99b5ccf27215 82 #define __USUB16 __usub16
<> 133:99b5ccf27215 83 #define __UQSUB16 __uqsub16
<> 133:99b5ccf27215 84 #define __UHSUB16 __uhsub16
<> 133:99b5ccf27215 85 #define __SASX __sasx
<> 133:99b5ccf27215 86 #define __QASX __qasx
<> 133:99b5ccf27215 87 #define __SHASX __shasx
<> 133:99b5ccf27215 88 #define __UASX __uasx
<> 133:99b5ccf27215 89 #define __UQASX __uqasx
<> 133:99b5ccf27215 90 #define __UHASX __uhasx
<> 133:99b5ccf27215 91 #define __SSAX __ssax
<> 133:99b5ccf27215 92 #define __QSAX __qsax
<> 133:99b5ccf27215 93 #define __SHSAX __shsax
<> 133:99b5ccf27215 94 #define __USAX __usax
<> 133:99b5ccf27215 95 #define __UQSAX __uqsax
<> 133:99b5ccf27215 96 #define __UHSAX __uhsax
<> 133:99b5ccf27215 97 #define __USAD8 __usad8
<> 133:99b5ccf27215 98 #define __USADA8 __usada8
<> 133:99b5ccf27215 99 #define __SSAT16 __ssat16
<> 133:99b5ccf27215 100 #define __USAT16 __usat16
<> 133:99b5ccf27215 101 #define __UXTB16 __uxtb16
<> 133:99b5ccf27215 102 #define __UXTAB16 __uxtab16
<> 133:99b5ccf27215 103 #define __SXTB16 __sxtb16
<> 133:99b5ccf27215 104 #define __SXTAB16 __sxtab16
<> 133:99b5ccf27215 105 #define __SMUAD __smuad
<> 133:99b5ccf27215 106 #define __SMUADX __smuadx
<> 133:99b5ccf27215 107 #define __SMLAD __smlad
<> 133:99b5ccf27215 108 #define __SMLADX __smladx
<> 133:99b5ccf27215 109 #define __SMLALD __smlald
<> 133:99b5ccf27215 110 #define __SMLALDX __smlaldx
<> 133:99b5ccf27215 111 #define __SMUSD __smusd
<> 133:99b5ccf27215 112 #define __SMUSDX __smusdx
<> 133:99b5ccf27215 113 #define __SMLSD __smlsd
<> 133:99b5ccf27215 114 #define __SMLSDX __smlsdx
<> 133:99b5ccf27215 115 #define __SMLSLD __smlsld
<> 133:99b5ccf27215 116 #define __SMLSLDX __smlsldx
<> 133:99b5ccf27215 117 #define __SEL __sel
<> 133:99b5ccf27215 118 #define __QADD __qadd
<> 133:99b5ccf27215 119 #define __QSUB __qsub
<> 133:99b5ccf27215 120
<> 133:99b5ccf27215 121 #define __PKHBT(ARG1,ARG2,ARG3) ( ((((uint32_t)(ARG1)) ) & 0x0000FFFFUL) | \
<> 133:99b5ccf27215 122 ((((uint32_t)(ARG2)) << (ARG3)) & 0xFFFF0000UL) )
<> 133:99b5ccf27215 123
<> 133:99b5ccf27215 124 #define __PKHTB(ARG1,ARG2,ARG3) ( ((((uint32_t)(ARG1)) ) & 0xFFFF0000UL) | \
<> 133:99b5ccf27215 125 ((((uint32_t)(ARG2)) >> (ARG3)) & 0x0000FFFFUL) )
<> 133:99b5ccf27215 126
<> 133:99b5ccf27215 127 #define __SMMLA(ARG1,ARG2,ARG3) ( (int32_t)((((int64_t)(ARG1) * (ARG2)) + \
<> 133:99b5ccf27215 128 ((int64_t)(ARG3) << 32) ) >> 32))
<> 133:99b5ccf27215 129
<> 133:99b5ccf27215 130 /*-- End CM4 SIMD Intrinsics -----------------------------------------------------*/
<> 133:99b5ccf27215 131
<> 133:99b5ccf27215 132
<> 133:99b5ccf27215 133
<> 133:99b5ccf27215 134 #elif defined ( __ICCARM__ ) /*------------------ ICC Compiler -------------------*/
<> 133:99b5ccf27215 135 /* IAR iccarm specific functions */
<> 133:99b5ccf27215 136
<> 133:99b5ccf27215 137 /*------ CM4 SIMD Intrinsics -----------------------------------------------------*/
<> 133:99b5ccf27215 138 #include <cmsis_iar.h>
<> 133:99b5ccf27215 139
<> 133:99b5ccf27215 140 /*-- End CM4 SIMD Intrinsics -----------------------------------------------------*/
<> 133:99b5ccf27215 141
<> 133:99b5ccf27215 142
<> 133:99b5ccf27215 143
<> 133:99b5ccf27215 144 #elif defined ( __TMS470__ ) /*---------------- TI CCS Compiler ------------------*/
<> 133:99b5ccf27215 145 /* TI CCS specific functions */
<> 133:99b5ccf27215 146
<> 133:99b5ccf27215 147 /*------ CM4 SIMD Intrinsics -----------------------------------------------------*/
<> 133:99b5ccf27215 148 #include <cmsis_ccs.h>
<> 133:99b5ccf27215 149
<> 133:99b5ccf27215 150 /*-- End CM4 SIMD Intrinsics -----------------------------------------------------*/
<> 133:99b5ccf27215 151
<> 133:99b5ccf27215 152
<> 133:99b5ccf27215 153
<> 133:99b5ccf27215 154 #elif defined ( __GNUC__ ) /*------------------ GNU Compiler ---------------------*/
<> 133:99b5ccf27215 155 /* GNU gcc specific functions */
<> 133:99b5ccf27215 156
<> 133:99b5ccf27215 157 /*------ CM4 SIMD Intrinsics -----------------------------------------------------*/
<> 133:99b5ccf27215 158 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SADD8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 159 {
<> 133:99b5ccf27215 160 uint32_t result;
<> 133:99b5ccf27215 161
<> 133:99b5ccf27215 162 __ASM volatile ("sadd8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 163 return(result);
<> 133:99b5ccf27215 164 }
<> 133:99b5ccf27215 165
<> 133:99b5ccf27215 166 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QADD8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 167 {
<> 133:99b5ccf27215 168 uint32_t result;
<> 133:99b5ccf27215 169
<> 133:99b5ccf27215 170 __ASM volatile ("qadd8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 171 return(result);
<> 133:99b5ccf27215 172 }
<> 133:99b5ccf27215 173
<> 133:99b5ccf27215 174 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SHADD8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 175 {
<> 133:99b5ccf27215 176 uint32_t result;
<> 133:99b5ccf27215 177
<> 133:99b5ccf27215 178 __ASM volatile ("shadd8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 179 return(result);
<> 133:99b5ccf27215 180 }
<> 133:99b5ccf27215 181
<> 133:99b5ccf27215 182 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UADD8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 183 {
<> 133:99b5ccf27215 184 uint32_t result;
<> 133:99b5ccf27215 185
<> 133:99b5ccf27215 186 __ASM volatile ("uadd8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 187 return(result);
<> 133:99b5ccf27215 188 }
<> 133:99b5ccf27215 189
<> 133:99b5ccf27215 190 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UQADD8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 191 {
<> 133:99b5ccf27215 192 uint32_t result;
<> 133:99b5ccf27215 193
<> 133:99b5ccf27215 194 __ASM volatile ("uqadd8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 195 return(result);
<> 133:99b5ccf27215 196 }
<> 133:99b5ccf27215 197
<> 133:99b5ccf27215 198 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UHADD8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 199 {
<> 133:99b5ccf27215 200 uint32_t result;
<> 133:99b5ccf27215 201
<> 133:99b5ccf27215 202 __ASM volatile ("uhadd8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 203 return(result);
<> 133:99b5ccf27215 204 }
<> 133:99b5ccf27215 205
<> 133:99b5ccf27215 206
<> 133:99b5ccf27215 207 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SSUB8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 208 {
<> 133:99b5ccf27215 209 uint32_t result;
<> 133:99b5ccf27215 210
<> 133:99b5ccf27215 211 __ASM volatile ("ssub8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 212 return(result);
<> 133:99b5ccf27215 213 }
<> 133:99b5ccf27215 214
<> 133:99b5ccf27215 215 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QSUB8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 216 {
<> 133:99b5ccf27215 217 uint32_t result;
<> 133:99b5ccf27215 218
<> 133:99b5ccf27215 219 __ASM volatile ("qsub8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 220 return(result);
<> 133:99b5ccf27215 221 }
<> 133:99b5ccf27215 222
<> 133:99b5ccf27215 223 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SHSUB8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 224 {
<> 133:99b5ccf27215 225 uint32_t result;
<> 133:99b5ccf27215 226
<> 133:99b5ccf27215 227 __ASM volatile ("shsub8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 228 return(result);
<> 133:99b5ccf27215 229 }
<> 133:99b5ccf27215 230
<> 133:99b5ccf27215 231 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __USUB8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 232 {
<> 133:99b5ccf27215 233 uint32_t result;
<> 133:99b5ccf27215 234
<> 133:99b5ccf27215 235 __ASM volatile ("usub8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 236 return(result);
<> 133:99b5ccf27215 237 }
<> 133:99b5ccf27215 238
<> 133:99b5ccf27215 239 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UQSUB8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 240 {
<> 133:99b5ccf27215 241 uint32_t result;
<> 133:99b5ccf27215 242
<> 133:99b5ccf27215 243 __ASM volatile ("uqsub8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 244 return(result);
<> 133:99b5ccf27215 245 }
<> 133:99b5ccf27215 246
<> 133:99b5ccf27215 247 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UHSUB8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 248 {
<> 133:99b5ccf27215 249 uint32_t result;
<> 133:99b5ccf27215 250
<> 133:99b5ccf27215 251 __ASM volatile ("uhsub8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 252 return(result);
<> 133:99b5ccf27215 253 }
<> 133:99b5ccf27215 254
<> 133:99b5ccf27215 255
<> 133:99b5ccf27215 256 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SADD16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 257 {
<> 133:99b5ccf27215 258 uint32_t result;
<> 133:99b5ccf27215 259
<> 133:99b5ccf27215 260 __ASM volatile ("sadd16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 261 return(result);
<> 133:99b5ccf27215 262 }
<> 133:99b5ccf27215 263
<> 133:99b5ccf27215 264 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QADD16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 265 {
<> 133:99b5ccf27215 266 uint32_t result;
<> 133:99b5ccf27215 267
<> 133:99b5ccf27215 268 __ASM volatile ("qadd16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 269 return(result);
<> 133:99b5ccf27215 270 }
<> 133:99b5ccf27215 271
<> 133:99b5ccf27215 272 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SHADD16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 273 {
<> 133:99b5ccf27215 274 uint32_t result;
<> 133:99b5ccf27215 275
<> 133:99b5ccf27215 276 __ASM volatile ("shadd16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 277 return(result);
<> 133:99b5ccf27215 278 }
<> 133:99b5ccf27215 279
<> 133:99b5ccf27215 280 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UADD16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 281 {
<> 133:99b5ccf27215 282 uint32_t result;
<> 133:99b5ccf27215 283
<> 133:99b5ccf27215 284 __ASM volatile ("uadd16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 285 return(result);
<> 133:99b5ccf27215 286 }
<> 133:99b5ccf27215 287
<> 133:99b5ccf27215 288 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UQADD16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 289 {
<> 133:99b5ccf27215 290 uint32_t result;
<> 133:99b5ccf27215 291
<> 133:99b5ccf27215 292 __ASM volatile ("uqadd16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 293 return(result);
<> 133:99b5ccf27215 294 }
<> 133:99b5ccf27215 295
<> 133:99b5ccf27215 296 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UHADD16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 297 {
<> 133:99b5ccf27215 298 uint32_t result;
<> 133:99b5ccf27215 299
<> 133:99b5ccf27215 300 __ASM volatile ("uhadd16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 301 return(result);
<> 133:99b5ccf27215 302 }
<> 133:99b5ccf27215 303
<> 133:99b5ccf27215 304 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SSUB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 305 {
<> 133:99b5ccf27215 306 uint32_t result;
<> 133:99b5ccf27215 307
<> 133:99b5ccf27215 308 __ASM volatile ("ssub16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 309 return(result);
<> 133:99b5ccf27215 310 }
<> 133:99b5ccf27215 311
<> 133:99b5ccf27215 312 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QSUB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 313 {
<> 133:99b5ccf27215 314 uint32_t result;
<> 133:99b5ccf27215 315
<> 133:99b5ccf27215 316 __ASM volatile ("qsub16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 317 return(result);
<> 133:99b5ccf27215 318 }
<> 133:99b5ccf27215 319
<> 133:99b5ccf27215 320 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SHSUB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 321 {
<> 133:99b5ccf27215 322 uint32_t result;
<> 133:99b5ccf27215 323
<> 133:99b5ccf27215 324 __ASM volatile ("shsub16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 325 return(result);
<> 133:99b5ccf27215 326 }
<> 133:99b5ccf27215 327
<> 133:99b5ccf27215 328 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __USUB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 329 {
<> 133:99b5ccf27215 330 uint32_t result;
<> 133:99b5ccf27215 331
<> 133:99b5ccf27215 332 __ASM volatile ("usub16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 333 return(result);
<> 133:99b5ccf27215 334 }
<> 133:99b5ccf27215 335
<> 133:99b5ccf27215 336 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UQSUB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 337 {
<> 133:99b5ccf27215 338 uint32_t result;
<> 133:99b5ccf27215 339
<> 133:99b5ccf27215 340 __ASM volatile ("uqsub16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 341 return(result);
<> 133:99b5ccf27215 342 }
<> 133:99b5ccf27215 343
<> 133:99b5ccf27215 344 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UHSUB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 345 {
<> 133:99b5ccf27215 346 uint32_t result;
<> 133:99b5ccf27215 347
<> 133:99b5ccf27215 348 __ASM volatile ("uhsub16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 349 return(result);
<> 133:99b5ccf27215 350 }
<> 133:99b5ccf27215 351
<> 133:99b5ccf27215 352 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SASX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 353 {
<> 133:99b5ccf27215 354 uint32_t result;
<> 133:99b5ccf27215 355
<> 133:99b5ccf27215 356 __ASM volatile ("sasx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 357 return(result);
<> 133:99b5ccf27215 358 }
<> 133:99b5ccf27215 359
<> 133:99b5ccf27215 360 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QASX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 361 {
<> 133:99b5ccf27215 362 uint32_t result;
<> 133:99b5ccf27215 363
<> 133:99b5ccf27215 364 __ASM volatile ("qasx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 365 return(result);
<> 133:99b5ccf27215 366 }
<> 133:99b5ccf27215 367
<> 133:99b5ccf27215 368 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SHASX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 369 {
<> 133:99b5ccf27215 370 uint32_t result;
<> 133:99b5ccf27215 371
<> 133:99b5ccf27215 372 __ASM volatile ("shasx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 373 return(result);
<> 133:99b5ccf27215 374 }
<> 133:99b5ccf27215 375
<> 133:99b5ccf27215 376 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UASX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 377 {
<> 133:99b5ccf27215 378 uint32_t result;
<> 133:99b5ccf27215 379
<> 133:99b5ccf27215 380 __ASM volatile ("uasx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 381 return(result);
<> 133:99b5ccf27215 382 }
<> 133:99b5ccf27215 383
<> 133:99b5ccf27215 384 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UQASX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 385 {
<> 133:99b5ccf27215 386 uint32_t result;
<> 133:99b5ccf27215 387
<> 133:99b5ccf27215 388 __ASM volatile ("uqasx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 389 return(result);
<> 133:99b5ccf27215 390 }
<> 133:99b5ccf27215 391
<> 133:99b5ccf27215 392 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UHASX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 393 {
<> 133:99b5ccf27215 394 uint32_t result;
<> 133:99b5ccf27215 395
<> 133:99b5ccf27215 396 __ASM volatile ("uhasx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 397 return(result);
<> 133:99b5ccf27215 398 }
<> 133:99b5ccf27215 399
<> 133:99b5ccf27215 400 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SSAX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 401 {
<> 133:99b5ccf27215 402 uint32_t result;
<> 133:99b5ccf27215 403
<> 133:99b5ccf27215 404 __ASM volatile ("ssax %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 405 return(result);
<> 133:99b5ccf27215 406 }
<> 133:99b5ccf27215 407
<> 133:99b5ccf27215 408 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QSAX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 409 {
<> 133:99b5ccf27215 410 uint32_t result;
<> 133:99b5ccf27215 411
<> 133:99b5ccf27215 412 __ASM volatile ("qsax %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 413 return(result);
<> 133:99b5ccf27215 414 }
<> 133:99b5ccf27215 415
<> 133:99b5ccf27215 416 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SHSAX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 417 {
<> 133:99b5ccf27215 418 uint32_t result;
<> 133:99b5ccf27215 419
<> 133:99b5ccf27215 420 __ASM volatile ("shsax %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 421 return(result);
<> 133:99b5ccf27215 422 }
<> 133:99b5ccf27215 423
<> 133:99b5ccf27215 424 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __USAX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 425 {
<> 133:99b5ccf27215 426 uint32_t result;
<> 133:99b5ccf27215 427
<> 133:99b5ccf27215 428 __ASM volatile ("usax %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 429 return(result);
<> 133:99b5ccf27215 430 }
<> 133:99b5ccf27215 431
<> 133:99b5ccf27215 432 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UQSAX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 433 {
<> 133:99b5ccf27215 434 uint32_t result;
<> 133:99b5ccf27215 435
<> 133:99b5ccf27215 436 __ASM volatile ("uqsax %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 437 return(result);
<> 133:99b5ccf27215 438 }
<> 133:99b5ccf27215 439
<> 133:99b5ccf27215 440 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UHSAX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 441 {
<> 133:99b5ccf27215 442 uint32_t result;
<> 133:99b5ccf27215 443
<> 133:99b5ccf27215 444 __ASM volatile ("uhsax %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 445 return(result);
<> 133:99b5ccf27215 446 }
<> 133:99b5ccf27215 447
<> 133:99b5ccf27215 448 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __USAD8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 449 {
<> 133:99b5ccf27215 450 uint32_t result;
<> 133:99b5ccf27215 451
<> 133:99b5ccf27215 452 __ASM volatile ("usad8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 453 return(result);
<> 133:99b5ccf27215 454 }
<> 133:99b5ccf27215 455
<> 133:99b5ccf27215 456 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __USADA8(uint32_t op1, uint32_t op2, uint32_t op3)
<> 133:99b5ccf27215 457 {
<> 133:99b5ccf27215 458 uint32_t result;
<> 133:99b5ccf27215 459
<> 133:99b5ccf27215 460 __ASM volatile ("usada8 %0, %1, %2, %3" : "=r" (result) : "r" (op1), "r" (op2), "r" (op3) );
<> 133:99b5ccf27215 461 return(result);
<> 133:99b5ccf27215 462 }
<> 133:99b5ccf27215 463
<> 133:99b5ccf27215 464 #define __SSAT16(ARG1,ARG2) \
<> 133:99b5ccf27215 465 ({ \
<> 133:99b5ccf27215 466 uint32_t __RES, __ARG1 = (ARG1); \
<> 133:99b5ccf27215 467 __ASM ("ssat16 %0, %1, %2" : "=r" (__RES) : "I" (ARG2), "r" (__ARG1) ); \
<> 133:99b5ccf27215 468 __RES; \
<> 133:99b5ccf27215 469 })
<> 133:99b5ccf27215 470
<> 133:99b5ccf27215 471 #define __USAT16(ARG1,ARG2) \
<> 133:99b5ccf27215 472 ({ \
<> 133:99b5ccf27215 473 uint32_t __RES, __ARG1 = (ARG1); \
<> 133:99b5ccf27215 474 __ASM ("usat16 %0, %1, %2" : "=r" (__RES) : "I" (ARG2), "r" (__ARG1) ); \
<> 133:99b5ccf27215 475 __RES; \
<> 133:99b5ccf27215 476 })
<> 133:99b5ccf27215 477
<> 133:99b5ccf27215 478 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UXTB16(uint32_t op1)
<> 133:99b5ccf27215 479 {
<> 133:99b5ccf27215 480 uint32_t result;
<> 133:99b5ccf27215 481
<> 133:99b5ccf27215 482 __ASM volatile ("uxtb16 %0, %1" : "=r" (result) : "r" (op1));
<> 133:99b5ccf27215 483 return(result);
<> 133:99b5ccf27215 484 }
<> 133:99b5ccf27215 485
<> 133:99b5ccf27215 486 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UXTAB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 487 {
<> 133:99b5ccf27215 488 uint32_t result;
<> 133:99b5ccf27215 489
<> 133:99b5ccf27215 490 __ASM volatile ("uxtab16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 491 return(result);
<> 133:99b5ccf27215 492 }
<> 133:99b5ccf27215 493
<> 133:99b5ccf27215 494 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SXTB16(uint32_t op1)
<> 133:99b5ccf27215 495 {
<> 133:99b5ccf27215 496 uint32_t result;
<> 133:99b5ccf27215 497
<> 133:99b5ccf27215 498 __ASM volatile ("sxtb16 %0, %1" : "=r" (result) : "r" (op1));
<> 133:99b5ccf27215 499 return(result);
<> 133:99b5ccf27215 500 }
<> 133:99b5ccf27215 501
<> 133:99b5ccf27215 502 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SXTAB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 503 {
<> 133:99b5ccf27215 504 uint32_t result;
<> 133:99b5ccf27215 505
<> 133:99b5ccf27215 506 __ASM volatile ("sxtab16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 507 return(result);
<> 133:99b5ccf27215 508 }
<> 133:99b5ccf27215 509
<> 133:99b5ccf27215 510 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMUAD (uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 511 {
<> 133:99b5ccf27215 512 uint32_t result;
<> 133:99b5ccf27215 513
<> 133:99b5ccf27215 514 __ASM volatile ("smuad %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 515 return(result);
<> 133:99b5ccf27215 516 }
<> 133:99b5ccf27215 517
<> 133:99b5ccf27215 518 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMUADX (uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 519 {
<> 133:99b5ccf27215 520 uint32_t result;
<> 133:99b5ccf27215 521
<> 133:99b5ccf27215 522 __ASM volatile ("smuadx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 523 return(result);
<> 133:99b5ccf27215 524 }
<> 133:99b5ccf27215 525
<> 133:99b5ccf27215 526 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMLAD (uint32_t op1, uint32_t op2, uint32_t op3)
<> 133:99b5ccf27215 527 {
<> 133:99b5ccf27215 528 uint32_t result;
<> 133:99b5ccf27215 529
<> 133:99b5ccf27215 530 __ASM volatile ("smlad %0, %1, %2, %3" : "=r" (result) : "r" (op1), "r" (op2), "r" (op3) );
<> 133:99b5ccf27215 531 return(result);
<> 133:99b5ccf27215 532 }
<> 133:99b5ccf27215 533
<> 133:99b5ccf27215 534 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMLADX (uint32_t op1, uint32_t op2, uint32_t op3)
<> 133:99b5ccf27215 535 {
<> 133:99b5ccf27215 536 uint32_t result;
<> 133:99b5ccf27215 537
<> 133:99b5ccf27215 538 __ASM volatile ("smladx %0, %1, %2, %3" : "=r" (result) : "r" (op1), "r" (op2), "r" (op3) );
<> 133:99b5ccf27215 539 return(result);
<> 133:99b5ccf27215 540 }
<> 133:99b5ccf27215 541
<> 133:99b5ccf27215 542 #define __SMLALD(ARG1,ARG2,ARG3) \
<> 133:99b5ccf27215 543 ({ \
<> 133:99b5ccf27215 544 uint32_t __ARG1 = (ARG1), __ARG2 = (ARG2), __ARG3_H = (uint32_t)((uint64_t)(ARG3) >> 32), __ARG3_L = (uint32_t)((uint64_t)(ARG3) & 0xFFFFFFFFUL); \
<> 133:99b5ccf27215 545 __ASM volatile ("smlald %0, %1, %2, %3" : "=r" (__ARG3_L), "=r" (__ARG3_H) : "r" (__ARG1), "r" (__ARG2), "0" (__ARG3_L), "1" (__ARG3_H) ); \
<> 133:99b5ccf27215 546 (uint64_t)(((uint64_t)__ARG3_H << 32) | __ARG3_L); \
<> 133:99b5ccf27215 547 })

#define __SMLALDX(ARG1,ARG2,ARG3) \
({ \
  uint32_t __ARG1 = (ARG1), __ARG2 = (ARG2), __ARG3_H = (uint32_t)((uint64_t)(ARG3) >> 32), __ARG3_L = (uint32_t)((uint64_t)(ARG3) & 0xFFFFFFFFUL); \
  __ASM volatile ("smlaldx %0, %1, %2, %3" : "=r" (__ARG3_L), "=r" (__ARG3_H) : "r" (__ARG1), "r" (__ARG2), "0" (__ARG3_L), "1" (__ARG3_H) ); \
  (uint64_t)(((uint64_t)__ARG3_H << 32) | __ARG3_L); \
 })

__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMUSD (uint32_t op1, uint32_t op2)
{
  uint32_t result;

  __ASM volatile ("smusd %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
  return(result);
}

__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMUSDX (uint32_t op1, uint32_t op2)
{
  uint32_t result;

  __ASM volatile ("smusdx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
  return(result);
}

__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMLSD (uint32_t op1, uint32_t op2, uint32_t op3)
{
  uint32_t result;

  __ASM volatile ("smlsd %0, %1, %2, %3" : "=r" (result) : "r" (op1), "r" (op2), "r" (op3) );
  return(result);
}

__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMLSDX (uint32_t op1, uint32_t op2, uint32_t op3)
{
  uint32_t result;

  __ASM volatile ("smlsdx %0, %1, %2, %3" : "=r" (result) : "r" (op1), "r" (op2), "r" (op3) );
  return(result);
}

#define __SMLSLD(ARG1,ARG2,ARG3) \
({ \
  uint32_t __ARG1 = (ARG1), __ARG2 = (ARG2), __ARG3_H = (uint32_t)((uint64_t)(ARG3) >> 32), __ARG3_L = (uint32_t)((uint64_t)(ARG3) & 0xFFFFFFFFUL); \
  __ASM volatile ("smlsld %0, %1, %2, %3" : "=r" (__ARG3_L), "=r" (__ARG3_H) : "r" (__ARG1), "r" (__ARG2), "0" (__ARG3_L), "1" (__ARG3_H) ); \
  (uint64_t)(((uint64_t)__ARG3_H << 32) | __ARG3_L); \
 })

#define __SMLSLDX(ARG1,ARG2,ARG3) \
({ \
  uint32_t __ARG1 = (ARG1), __ARG2 = (ARG2), __ARG3_H = (uint32_t)((uint64_t)(ARG3) >> 32), __ARG3_L = (uint32_t)((uint64_t)(ARG3) & 0xFFFFFFFFUL); \
  __ASM volatile ("smlsldx %0, %1, %2, %3" : "=r" (__ARG3_L), "=r" (__ARG3_H) : "r" (__ARG1), "r" (__ARG2), "0" (__ARG3_L), "1" (__ARG3_H) ); \
  (uint64_t)(((uint64_t)__ARG3_H << 32) | __ARG3_L); \
 })


__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SEL (uint32_t op1, uint32_t op2)
{
  uint32_t result;

  __ASM volatile ("sel %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
  return(result);
}

__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QADD(uint32_t op1, uint32_t op2)
{
  uint32_t result;

  __ASM volatile ("qadd %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
  return(result);
}

__attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QSUB(uint32_t op1, uint32_t op2)
{
  uint32_t result;

  __ASM volatile ("qsub %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
  return(result);
}

#define __PKHBT(ARG1,ARG2,ARG3) \
({ \
  uint32_t __RES, __ARG1 = (ARG1), __ARG2 = (ARG2); \
  __ASM ("pkhbt %0, %1, %2, lsl %3" : "=r" (__RES) : "r" (__ARG1), "r" (__ARG2), "I" (ARG3) ); \
  __RES; \
 })

#define __PKHTB(ARG1,ARG2,ARG3) \
({ \
  uint32_t __RES, __ARG1 = (ARG1), __ARG2 = (ARG2); \
  if (ARG3 == 0) \
    __ASM ("pkhtb %0, %1, %2" : "=r" (__RES) : "r" (__ARG1), "r" (__ARG2) ); \
  else \
    __ASM ("pkhtb %0, %1, %2, asr %3" : "=r" (__RES) : "r" (__ARG1), "r" (__ARG2), "I" (ARG3) ); \
  __RES; \
 })

__attribute__( ( always_inline ) ) __STATIC_INLINE int32_t __SMMLA (int32_t op1, int32_t op2, int32_t op3)
{
  int32_t result;

  __ASM volatile ("smmla %0, %1, %2, %3" : "=r" (result): "r" (op1), "r" (op2), "r" (op3) );
  return(result);
}

/*-- End CM4 SIMD Intrinsics -----------------------------------------------------*/



#elif defined ( __TASKING__ ) /*------------------ TASKING Compiler --------------*/
/* TASKING carm specific functions */


/*------ CM4 SIMD Intrinsics -----------------------------------------------------*/
/* not yet supported */
/*-- End CM4 SIMD Intrinsics -----------------------------------------------------*/


#endif

/*@} end of group CMSIS_SIMD_intrinsics */


#endif /* __CORE_CM4_SIMD_H */

#ifdef __cplusplus
}
#endif