The official Mbed 2 C/C++ SDK provides the software platform and libraries to build your applications.

Dependents:   hello SerialTestv11 SerialTestv12 Sierpinski ... more

mbed 2

This is the mbed 2 library. If you'd like to learn about Mbed OS, please see the mbed-os docs.
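For orientation, the sketch below shows the kind of program this library supports. It is only a minimal illustration, assuming a target that defines the LED1 pin name, and it uses the classic mbed 2 DigitalOut and wait() APIs rather than Mbed OS timers.

#include "mbed.h"

// Toggle the on-board LED once per second.
// LED1 is assumed to be defined by the target's pin names.
DigitalOut led(LED1);

int main() {
    while (true) {
        led = !led;   // invert the LED state
        wait(1.0);    // mbed 2 wait() takes seconds
    }
}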

Committer: <>
Date: Tue Mar 14 16:20:51 2017 +0000
Revision: 138:093f2bd7b9eb
Parent: 133:99b5ccf27215
Release 138 of the mbed library

Ports for Upcoming Targets


Fixes and Changes

3716: fix for issue #3715: correction in startup files for ARM and IAR, alignment of system_stm32f429xx.c files https://github.com/ARMmbed/mbed-os/pull/3716
3741: STM32 remove warning in hal_tick_32b.c file https://github.com/ARMmbed/mbed-os/pull/3741
3780: STM32L4 : Fix GPIO G port compatibility https://github.com/ARMmbed/mbed-os/pull/3780
3831: NCS36510: SPISLAVE enabled (Conflict resolved) https://github.com/ARMmbed/mbed-os/pull/3831
3836: Allow to redefine nRF's PSTORAGE_NUM_OF_PAGES outside of the mbed-os https://github.com/ARMmbed/mbed-os/pull/3836
3840: STM32: gpio SPEED - always set High Speed by default https://github.com/ARMmbed/mbed-os/pull/3840
3844: STM32 GPIO: Typo correction. Update comment (GPIO_IP_WITHOUT_BRR) https://github.com/ARMmbed/mbed-os/pull/3844
3850: STM32: change spi error to debug warning https://github.com/ARMmbed/mbed-os/pull/3850
3860: Define GPIO_IP_WITHOUT_BRR for xDot platform https://github.com/ARMmbed/mbed-os/pull/3860
3880: DISCO_F469NI: allow the use of CAN2 instance when CAN1 is not activated https://github.com/ARMmbed/mbed-os/pull/3880
3795: Fix pwm period calc https://github.com/ARMmbed/mbed-os/pull/3795
3828: STM32 CAN API: correct format and type https://github.com/ARMmbed/mbed-os/pull/3828
3842: TARGET_NRF: corrected spi_init() to properly handle re-initialization https://github.com/ARMmbed/mbed-os/pull/3842
3843: STM32L476xG: set APB2 clock to 80MHz (instead of 40MHz) https://github.com/ARMmbed/mbed-os/pull/3843
3879: NUCLEO_F446ZE: Add missing AnalogIn pins on PF_3, PF_5 and PF_10. https://github.com/ARMmbed/mbed-os/pull/3879
3902: Fix heap and stack size for NUCLEO_F746ZG https://github.com/ARMmbed/mbed-os/pull/3902
3829: can_write(): return error code when no tx mailboxes are available https://github.com/ARMmbed/mbed-os/pull/3829
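PR 3829 above changes the CAN HAL so that can_write() reports failure when no transmit mailbox is free. The sketch below shows how application code can react to that; it is a hedged example with illustrative pin names (PA_11/PA_12), not taken from this release.

#include "mbed.h"

// Illustrative pins only; use the CAN-capable pins of your target.
CAN can(PA_11, PA_12);

int main() {
    char payload[1] = { 42 };
    CANMessage msg(0x123, payload, 1);

    // CAN::write() returns 1 on success and 0 on failure, for example
    // when every transmit mailbox is busy, so check the result.
    if (can.write(msg) == 0) {
        // No mailbox available (or another TX error): retry later or drop the frame.
    }
}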

Who changed what in which revision?

User    Revision    Line number    New contents of line
<> 133:99b5ccf27215 1 /**************************************************************************//**
<> 133:99b5ccf27215 2 * @file core_cmSimd.h
<> 133:99b5ccf27215 3 * @brief CMSIS Cortex-M SIMD Header File
<> 133:99b5ccf27215 4 * @version V4.10
<> 133:99b5ccf27215 5 * @date 18. March 2015
<> 133:99b5ccf27215 6 *
<> 133:99b5ccf27215 7 * @note
<> 133:99b5ccf27215 8 *
<> 133:99b5ccf27215 9 ******************************************************************************/
<> 133:99b5ccf27215 10 /* Copyright (c) 2009 - 2014 ARM LIMITED
<> 133:99b5ccf27215 11
<> 133:99b5ccf27215 12 All rights reserved.
<> 133:99b5ccf27215 13 Redistribution and use in source and binary forms, with or without
<> 133:99b5ccf27215 14 modification, are permitted provided that the following conditions are met:
<> 133:99b5ccf27215 15 - Redistributions of source code must retain the above copyright
<> 133:99b5ccf27215 16 notice, this list of conditions and the following disclaimer.
<> 133:99b5ccf27215 17 - Redistributions in binary form must reproduce the above copyright
<> 133:99b5ccf27215 18 notice, this list of conditions and the following disclaimer in the
<> 133:99b5ccf27215 19 documentation and/or other materials provided with the distribution.
<> 133:99b5ccf27215 20 - Neither the name of ARM nor the names of its contributors may be used
<> 133:99b5ccf27215 21 to endorse or promote products derived from this software without
<> 133:99b5ccf27215 22 specific prior written permission.
<> 133:99b5ccf27215 23 *
<> 133:99b5ccf27215 24 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
<> 133:99b5ccf27215 25 AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
<> 133:99b5ccf27215 26 IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
<> 133:99b5ccf27215 27 ARE DISCLAIMED. IN NO EVENT SHALL COPYRIGHT HOLDERS AND CONTRIBUTORS BE
<> 133:99b5ccf27215 28 LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
<> 133:99b5ccf27215 29 CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
<> 133:99b5ccf27215 30 SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
<> 133:99b5ccf27215 31 INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
<> 133:99b5ccf27215 32 CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
<> 133:99b5ccf27215 33 ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
<> 133:99b5ccf27215 34 POSSIBILITY OF SUCH DAMAGE.
<> 133:99b5ccf27215 35 ---------------------------------------------------------------------------*/
<> 133:99b5ccf27215 36
<> 133:99b5ccf27215 37
<> 133:99b5ccf27215 38 #if defined ( __ICCARM__ )
<> 133:99b5ccf27215 39 #pragma system_include /* treat file as system include file for MISRA check */
<> 133:99b5ccf27215 40 #endif
<> 133:99b5ccf27215 41
<> 133:99b5ccf27215 42 #ifndef __CORE_CMSIMD_H
<> 133:99b5ccf27215 43 #define __CORE_CMSIMD_H
<> 133:99b5ccf27215 44
<> 133:99b5ccf27215 45 #ifdef __cplusplus
<> 133:99b5ccf27215 46 extern "C" {
<> 133:99b5ccf27215 47 #endif
<> 133:99b5ccf27215 48
<> 133:99b5ccf27215 49
<> 133:99b5ccf27215 50 /*******************************************************************************
<> 133:99b5ccf27215 51 * Hardware Abstraction Layer
<> 133:99b5ccf27215 52 ******************************************************************************/
<> 133:99b5ccf27215 53
<> 133:99b5ccf27215 54
<> 133:99b5ccf27215 55 /* ################### Compiler specific Intrinsics ########################### */
<> 133:99b5ccf27215 56 /** \defgroup CMSIS_SIMD_intrinsics CMSIS SIMD Intrinsics
<> 133:99b5ccf27215 57 Access to dedicated SIMD instructions
<> 133:99b5ccf27215 58 @{
<> 133:99b5ccf27215 59 */
<> 133:99b5ccf27215 60
<> 133:99b5ccf27215 61 #if defined ( __CC_ARM ) /*------------------RealView Compiler -----------------*/
<> 133:99b5ccf27215 62 /* ARM armcc specific functions */
<> 133:99b5ccf27215 63 #define __SADD8 __sadd8
<> 133:99b5ccf27215 64 #define __QADD8 __qadd8
<> 133:99b5ccf27215 65 #define __SHADD8 __shadd8
<> 133:99b5ccf27215 66 #define __UADD8 __uadd8
<> 133:99b5ccf27215 67 #define __UQADD8 __uqadd8
<> 133:99b5ccf27215 68 #define __UHADD8 __uhadd8
<> 133:99b5ccf27215 69 #define __SSUB8 __ssub8
<> 133:99b5ccf27215 70 #define __QSUB8 __qsub8
<> 133:99b5ccf27215 71 #define __SHSUB8 __shsub8
<> 133:99b5ccf27215 72 #define __USUB8 __usub8
<> 133:99b5ccf27215 73 #define __UQSUB8 __uqsub8
<> 133:99b5ccf27215 74 #define __UHSUB8 __uhsub8
<> 133:99b5ccf27215 75 #define __SADD16 __sadd16
<> 133:99b5ccf27215 76 #define __QADD16 __qadd16
<> 133:99b5ccf27215 77 #define __SHADD16 __shadd16
<> 133:99b5ccf27215 78 #define __UADD16 __uadd16
<> 133:99b5ccf27215 79 #define __UQADD16 __uqadd16
<> 133:99b5ccf27215 80 #define __UHADD16 __uhadd16
<> 133:99b5ccf27215 81 #define __SSUB16 __ssub16
<> 133:99b5ccf27215 82 #define __QSUB16 __qsub16
<> 133:99b5ccf27215 83 #define __SHSUB16 __shsub16
<> 133:99b5ccf27215 84 #define __USUB16 __usub16
<> 133:99b5ccf27215 85 #define __UQSUB16 __uqsub16
<> 133:99b5ccf27215 86 #define __UHSUB16 __uhsub16
<> 133:99b5ccf27215 87 #define __SASX __sasx
<> 133:99b5ccf27215 88 #define __QASX __qasx
<> 133:99b5ccf27215 89 #define __SHASX __shasx
<> 133:99b5ccf27215 90 #define __UASX __uasx
<> 133:99b5ccf27215 91 #define __UQASX __uqasx
<> 133:99b5ccf27215 92 #define __UHASX __uhasx
<> 133:99b5ccf27215 93 #define __SSAX __ssax
<> 133:99b5ccf27215 94 #define __QSAX __qsax
<> 133:99b5ccf27215 95 #define __SHSAX __shsax
<> 133:99b5ccf27215 96 #define __USAX __usax
<> 133:99b5ccf27215 97 #define __UQSAX __uqsax
<> 133:99b5ccf27215 98 #define __UHSAX __uhsax
<> 133:99b5ccf27215 99 #define __USAD8 __usad8
<> 133:99b5ccf27215 100 #define __USADA8 __usada8
<> 133:99b5ccf27215 101 #define __SSAT16 __ssat16
<> 133:99b5ccf27215 102 #define __USAT16 __usat16
<> 133:99b5ccf27215 103 #define __UXTB16 __uxtb16
<> 133:99b5ccf27215 104 #define __UXTAB16 __uxtab16
<> 133:99b5ccf27215 105 #define __SXTB16 __sxtb16
<> 133:99b5ccf27215 106 #define __SXTAB16 __sxtab16
<> 133:99b5ccf27215 107 #define __SMUAD __smuad
<> 133:99b5ccf27215 108 #define __SMUADX __smuadx
<> 133:99b5ccf27215 109 #define __SMLAD __smlad
<> 133:99b5ccf27215 110 #define __SMLADX __smladx
<> 133:99b5ccf27215 111 #define __SMLALD __smlald
<> 133:99b5ccf27215 112 #define __SMLALDX __smlaldx
<> 133:99b5ccf27215 113 #define __SMUSD __smusd
<> 133:99b5ccf27215 114 #define __SMUSDX __smusdx
<> 133:99b5ccf27215 115 #define __SMLSD __smlsd
<> 133:99b5ccf27215 116 #define __SMLSDX __smlsdx
<> 133:99b5ccf27215 117 #define __SMLSLD __smlsld
<> 133:99b5ccf27215 118 #define __SMLSLDX __smlsldx
<> 133:99b5ccf27215 119 #define __SEL __sel
<> 133:99b5ccf27215 120 #define __QADD __qadd
<> 133:99b5ccf27215 121 #define __QSUB __qsub
<> 133:99b5ccf27215 122
<> 133:99b5ccf27215 123 #define __PKHBT(ARG1,ARG2,ARG3) ( ((((uint32_t)(ARG1)) ) & 0x0000FFFFUL) | \
<> 133:99b5ccf27215 124 ((((uint32_t)(ARG2)) << (ARG3)) & 0xFFFF0000UL) )
<> 133:99b5ccf27215 125
<> 133:99b5ccf27215 126 #define __PKHTB(ARG1,ARG2,ARG3) ( ((((uint32_t)(ARG1)) ) & 0xFFFF0000UL) | \
<> 133:99b5ccf27215 127 ((((uint32_t)(ARG2)) >> (ARG3)) & 0x0000FFFFUL) )
<> 133:99b5ccf27215 128
<> 133:99b5ccf27215 129 #define __SMMLA(ARG1,ARG2,ARG3) ( (int32_t)((((int64_t)(ARG1) * (ARG2)) + \
<> 133:99b5ccf27215 130 ((int64_t)(ARG3) << 32) ) >> 32))
<> 133:99b5ccf27215 131
<> 133:99b5ccf27215 132
<> 133:99b5ccf27215 133 #elif defined ( __GNUC__ ) /*------------------ GNU Compiler ---------------------*/
<> 133:99b5ccf27215 134 /* GNU gcc specific functions */
<> 133:99b5ccf27215 135 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SADD8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 136 {
<> 133:99b5ccf27215 137 uint32_t result;
<> 133:99b5ccf27215 138
<> 133:99b5ccf27215 139 __ASM volatile ("sadd8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 140 return(result);
<> 133:99b5ccf27215 141 }
<> 133:99b5ccf27215 142
<> 133:99b5ccf27215 143 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QADD8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 144 {
<> 133:99b5ccf27215 145 uint32_t result;
<> 133:99b5ccf27215 146
<> 133:99b5ccf27215 147 __ASM volatile ("qadd8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 148 return(result);
<> 133:99b5ccf27215 149 }
<> 133:99b5ccf27215 150
<> 133:99b5ccf27215 151 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SHADD8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 152 {
<> 133:99b5ccf27215 153 uint32_t result;
<> 133:99b5ccf27215 154
<> 133:99b5ccf27215 155 __ASM volatile ("shadd8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 156 return(result);
<> 133:99b5ccf27215 157 }
<> 133:99b5ccf27215 158
<> 133:99b5ccf27215 159 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UADD8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 160 {
<> 133:99b5ccf27215 161 uint32_t result;
<> 133:99b5ccf27215 162
<> 133:99b5ccf27215 163 __ASM volatile ("uadd8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 164 return(result);
<> 133:99b5ccf27215 165 }
<> 133:99b5ccf27215 166
<> 133:99b5ccf27215 167 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UQADD8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 168 {
<> 133:99b5ccf27215 169 uint32_t result;
<> 133:99b5ccf27215 170
<> 133:99b5ccf27215 171 __ASM volatile ("uqadd8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 172 return(result);
<> 133:99b5ccf27215 173 }
<> 133:99b5ccf27215 174
<> 133:99b5ccf27215 175 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UHADD8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 176 {
<> 133:99b5ccf27215 177 uint32_t result;
<> 133:99b5ccf27215 178
<> 133:99b5ccf27215 179 __ASM volatile ("uhadd8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 180 return(result);
<> 133:99b5ccf27215 181 }
<> 133:99b5ccf27215 182
<> 133:99b5ccf27215 183
<> 133:99b5ccf27215 184 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SSUB8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 185 {
<> 133:99b5ccf27215 186 uint32_t result;
<> 133:99b5ccf27215 187
<> 133:99b5ccf27215 188 __ASM volatile ("ssub8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 189 return(result);
<> 133:99b5ccf27215 190 }
<> 133:99b5ccf27215 191
<> 133:99b5ccf27215 192 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QSUB8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 193 {
<> 133:99b5ccf27215 194 uint32_t result;
<> 133:99b5ccf27215 195
<> 133:99b5ccf27215 196 __ASM volatile ("qsub8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 197 return(result);
<> 133:99b5ccf27215 198 }
<> 133:99b5ccf27215 199
<> 133:99b5ccf27215 200 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SHSUB8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 201 {
<> 133:99b5ccf27215 202 uint32_t result;
<> 133:99b5ccf27215 203
<> 133:99b5ccf27215 204 __ASM volatile ("shsub8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 205 return(result);
<> 133:99b5ccf27215 206 }
<> 133:99b5ccf27215 207
<> 133:99b5ccf27215 208 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __USUB8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 209 {
<> 133:99b5ccf27215 210 uint32_t result;
<> 133:99b5ccf27215 211
<> 133:99b5ccf27215 212 __ASM volatile ("usub8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 213 return(result);
<> 133:99b5ccf27215 214 }
<> 133:99b5ccf27215 215
<> 133:99b5ccf27215 216 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UQSUB8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 217 {
<> 133:99b5ccf27215 218 uint32_t result;
<> 133:99b5ccf27215 219
<> 133:99b5ccf27215 220 __ASM volatile ("uqsub8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 221 return(result);
<> 133:99b5ccf27215 222 }
<> 133:99b5ccf27215 223
<> 133:99b5ccf27215 224 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UHSUB8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 225 {
<> 133:99b5ccf27215 226 uint32_t result;
<> 133:99b5ccf27215 227
<> 133:99b5ccf27215 228 __ASM volatile ("uhsub8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 229 return(result);
<> 133:99b5ccf27215 230 }
<> 133:99b5ccf27215 231
<> 133:99b5ccf27215 232
<> 133:99b5ccf27215 233 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SADD16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 234 {
<> 133:99b5ccf27215 235 uint32_t result;
<> 133:99b5ccf27215 236
<> 133:99b5ccf27215 237 __ASM volatile ("sadd16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 238 return(result);
<> 133:99b5ccf27215 239 }
<> 133:99b5ccf27215 240
<> 133:99b5ccf27215 241 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QADD16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 242 {
<> 133:99b5ccf27215 243 uint32_t result;
<> 133:99b5ccf27215 244
<> 133:99b5ccf27215 245 __ASM volatile ("qadd16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 246 return(result);
<> 133:99b5ccf27215 247 }
<> 133:99b5ccf27215 248
<> 133:99b5ccf27215 249 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SHADD16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 250 {
<> 133:99b5ccf27215 251 uint32_t result;
<> 133:99b5ccf27215 252
<> 133:99b5ccf27215 253 __ASM volatile ("shadd16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 254 return(result);
<> 133:99b5ccf27215 255 }
<> 133:99b5ccf27215 256
<> 133:99b5ccf27215 257 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UADD16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 258 {
<> 133:99b5ccf27215 259 uint32_t result;
<> 133:99b5ccf27215 260
<> 133:99b5ccf27215 261 __ASM volatile ("uadd16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 262 return(result);
<> 133:99b5ccf27215 263 }
<> 133:99b5ccf27215 264
<> 133:99b5ccf27215 265 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UQADD16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 266 {
<> 133:99b5ccf27215 267 uint32_t result;
<> 133:99b5ccf27215 268
<> 133:99b5ccf27215 269 __ASM volatile ("uqadd16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 270 return(result);
<> 133:99b5ccf27215 271 }
<> 133:99b5ccf27215 272
<> 133:99b5ccf27215 273 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UHADD16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 274 {
<> 133:99b5ccf27215 275 uint32_t result;
<> 133:99b5ccf27215 276
<> 133:99b5ccf27215 277 __ASM volatile ("uhadd16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 278 return(result);
<> 133:99b5ccf27215 279 }
<> 133:99b5ccf27215 280
<> 133:99b5ccf27215 281 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SSUB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 282 {
<> 133:99b5ccf27215 283 uint32_t result;
<> 133:99b5ccf27215 284
<> 133:99b5ccf27215 285 __ASM volatile ("ssub16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 286 return(result);
<> 133:99b5ccf27215 287 }
<> 133:99b5ccf27215 288
<> 133:99b5ccf27215 289 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QSUB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 290 {
<> 133:99b5ccf27215 291 uint32_t result;
<> 133:99b5ccf27215 292
<> 133:99b5ccf27215 293 __ASM volatile ("qsub16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 294 return(result);
<> 133:99b5ccf27215 295 }
<> 133:99b5ccf27215 296
<> 133:99b5ccf27215 297 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SHSUB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 298 {
<> 133:99b5ccf27215 299 uint32_t result;
<> 133:99b5ccf27215 300
<> 133:99b5ccf27215 301 __ASM volatile ("shsub16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 302 return(result);
<> 133:99b5ccf27215 303 }
<> 133:99b5ccf27215 304
<> 133:99b5ccf27215 305 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __USUB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 306 {
<> 133:99b5ccf27215 307 uint32_t result;
<> 133:99b5ccf27215 308
<> 133:99b5ccf27215 309 __ASM volatile ("usub16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 310 return(result);
<> 133:99b5ccf27215 311 }
<> 133:99b5ccf27215 312
<> 133:99b5ccf27215 313 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UQSUB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 314 {
<> 133:99b5ccf27215 315 uint32_t result;
<> 133:99b5ccf27215 316
<> 133:99b5ccf27215 317 __ASM volatile ("uqsub16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 318 return(result);
<> 133:99b5ccf27215 319 }
<> 133:99b5ccf27215 320
<> 133:99b5ccf27215 321 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UHSUB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 322 {
<> 133:99b5ccf27215 323 uint32_t result;
<> 133:99b5ccf27215 324
<> 133:99b5ccf27215 325 __ASM volatile ("uhsub16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 326 return(result);
<> 133:99b5ccf27215 327 }
<> 133:99b5ccf27215 328
<> 133:99b5ccf27215 329 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SASX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 330 {
<> 133:99b5ccf27215 331 uint32_t result;
<> 133:99b5ccf27215 332
<> 133:99b5ccf27215 333 __ASM volatile ("sasx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 334 return(result);
<> 133:99b5ccf27215 335 }
<> 133:99b5ccf27215 336
<> 133:99b5ccf27215 337 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QASX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 338 {
<> 133:99b5ccf27215 339 uint32_t result;
<> 133:99b5ccf27215 340
<> 133:99b5ccf27215 341 __ASM volatile ("qasx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 342 return(result);
<> 133:99b5ccf27215 343 }
<> 133:99b5ccf27215 344
<> 133:99b5ccf27215 345 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SHASX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 346 {
<> 133:99b5ccf27215 347 uint32_t result;
<> 133:99b5ccf27215 348
<> 133:99b5ccf27215 349 __ASM volatile ("shasx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 350 return(result);
<> 133:99b5ccf27215 351 }
<> 133:99b5ccf27215 352
<> 133:99b5ccf27215 353 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UASX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 354 {
<> 133:99b5ccf27215 355 uint32_t result;
<> 133:99b5ccf27215 356
<> 133:99b5ccf27215 357 __ASM volatile ("uasx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 358 return(result);
<> 133:99b5ccf27215 359 }
<> 133:99b5ccf27215 360
<> 133:99b5ccf27215 361 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UQASX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 362 {
<> 133:99b5ccf27215 363 uint32_t result;
<> 133:99b5ccf27215 364
<> 133:99b5ccf27215 365 __ASM volatile ("uqasx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 366 return(result);
<> 133:99b5ccf27215 367 }
<> 133:99b5ccf27215 368
<> 133:99b5ccf27215 369 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UHASX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 370 {
<> 133:99b5ccf27215 371 uint32_t result;
<> 133:99b5ccf27215 372
<> 133:99b5ccf27215 373 __ASM volatile ("uhasx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 374 return(result);
<> 133:99b5ccf27215 375 }
<> 133:99b5ccf27215 376
<> 133:99b5ccf27215 377 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SSAX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 378 {
<> 133:99b5ccf27215 379 uint32_t result;
<> 133:99b5ccf27215 380
<> 133:99b5ccf27215 381 __ASM volatile ("ssax %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 382 return(result);
<> 133:99b5ccf27215 383 }
<> 133:99b5ccf27215 384
<> 133:99b5ccf27215 385 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QSAX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 386 {
<> 133:99b5ccf27215 387 uint32_t result;
<> 133:99b5ccf27215 388
<> 133:99b5ccf27215 389 __ASM volatile ("qsax %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 390 return(result);
<> 133:99b5ccf27215 391 }
<> 133:99b5ccf27215 392
<> 133:99b5ccf27215 393 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SHSAX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 394 {
<> 133:99b5ccf27215 395 uint32_t result;
<> 133:99b5ccf27215 396
<> 133:99b5ccf27215 397 __ASM volatile ("shsax %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 398 return(result);
<> 133:99b5ccf27215 399 }
<> 133:99b5ccf27215 400
<> 133:99b5ccf27215 401 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __USAX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 402 {
<> 133:99b5ccf27215 403 uint32_t result;
<> 133:99b5ccf27215 404
<> 133:99b5ccf27215 405 __ASM volatile ("usax %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 406 return(result);
<> 133:99b5ccf27215 407 }
<> 133:99b5ccf27215 408
<> 133:99b5ccf27215 409 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UQSAX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 410 {
<> 133:99b5ccf27215 411 uint32_t result;
<> 133:99b5ccf27215 412
<> 133:99b5ccf27215 413 __ASM volatile ("uqsax %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 414 return(result);
<> 133:99b5ccf27215 415 }
<> 133:99b5ccf27215 416
<> 133:99b5ccf27215 417 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UHSAX(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 418 {
<> 133:99b5ccf27215 419 uint32_t result;
<> 133:99b5ccf27215 420
<> 133:99b5ccf27215 421 __ASM volatile ("uhsax %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 422 return(result);
<> 133:99b5ccf27215 423 }
<> 133:99b5ccf27215 424
<> 133:99b5ccf27215 425 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __USAD8(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 426 {
<> 133:99b5ccf27215 427 uint32_t result;
<> 133:99b5ccf27215 428
<> 133:99b5ccf27215 429 __ASM volatile ("usad8 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 430 return(result);
<> 133:99b5ccf27215 431 }
<> 133:99b5ccf27215 432
<> 133:99b5ccf27215 433 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __USADA8(uint32_t op1, uint32_t op2, uint32_t op3)
<> 133:99b5ccf27215 434 {
<> 133:99b5ccf27215 435 uint32_t result;
<> 133:99b5ccf27215 436
<> 133:99b5ccf27215 437 __ASM volatile ("usada8 %0, %1, %2, %3" : "=r" (result) : "r" (op1), "r" (op2), "r" (op3) );
<> 133:99b5ccf27215 438 return(result);
<> 133:99b5ccf27215 439 }
<> 133:99b5ccf27215 440
<> 133:99b5ccf27215 441 #define __SSAT16(ARG1,ARG2) \
<> 133:99b5ccf27215 442 ({ \
<> 133:99b5ccf27215 443 uint32_t __RES, __ARG1 = (ARG1); \
<> 133:99b5ccf27215 444 __ASM ("ssat16 %0, %1, %2" : "=r" (__RES) : "I" (ARG2), "r" (__ARG1) ); \
<> 133:99b5ccf27215 445 __RES; \
<> 133:99b5ccf27215 446 })
<> 133:99b5ccf27215 447
<> 133:99b5ccf27215 448 #define __USAT16(ARG1,ARG2) \
<> 133:99b5ccf27215 449 ({ \
<> 133:99b5ccf27215 450 uint32_t __RES, __ARG1 = (ARG1); \
<> 133:99b5ccf27215 451 __ASM ("usat16 %0, %1, %2" : "=r" (__RES) : "I" (ARG2), "r" (__ARG1) ); \
<> 133:99b5ccf27215 452 __RES; \
<> 133:99b5ccf27215 453 })
<> 133:99b5ccf27215 454
<> 133:99b5ccf27215 455 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UXTB16(uint32_t op1)
<> 133:99b5ccf27215 456 {
<> 133:99b5ccf27215 457 uint32_t result;
<> 133:99b5ccf27215 458
<> 133:99b5ccf27215 459 __ASM volatile ("uxtb16 %0, %1" : "=r" (result) : "r" (op1));
<> 133:99b5ccf27215 460 return(result);
<> 133:99b5ccf27215 461 }
<> 133:99b5ccf27215 462
<> 133:99b5ccf27215 463 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __UXTAB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 464 {
<> 133:99b5ccf27215 465 uint32_t result;
<> 133:99b5ccf27215 466
<> 133:99b5ccf27215 467 __ASM volatile ("uxtab16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 468 return(result);
<> 133:99b5ccf27215 469 }
<> 133:99b5ccf27215 470
<> 133:99b5ccf27215 471 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SXTB16(uint32_t op1)
<> 133:99b5ccf27215 472 {
<> 133:99b5ccf27215 473 uint32_t result;
<> 133:99b5ccf27215 474
<> 133:99b5ccf27215 475 __ASM volatile ("sxtb16 %0, %1" : "=r" (result) : "r" (op1));
<> 133:99b5ccf27215 476 return(result);
<> 133:99b5ccf27215 477 }
<> 133:99b5ccf27215 478
<> 133:99b5ccf27215 479 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SXTAB16(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 480 {
<> 133:99b5ccf27215 481 uint32_t result;
<> 133:99b5ccf27215 482
<> 133:99b5ccf27215 483 __ASM volatile ("sxtab16 %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 484 return(result);
<> 133:99b5ccf27215 485 }
<> 133:99b5ccf27215 486
<> 133:99b5ccf27215 487 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMUAD (uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 488 {
<> 133:99b5ccf27215 489 uint32_t result;
<> 133:99b5ccf27215 490
<> 133:99b5ccf27215 491 __ASM volatile ("smuad %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 492 return(result);
<> 133:99b5ccf27215 493 }
<> 133:99b5ccf27215 494
<> 133:99b5ccf27215 495 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMUADX (uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 496 {
<> 133:99b5ccf27215 497 uint32_t result;
<> 133:99b5ccf27215 498
<> 133:99b5ccf27215 499 __ASM volatile ("smuadx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 500 return(result);
<> 133:99b5ccf27215 501 }
<> 133:99b5ccf27215 502
<> 133:99b5ccf27215 503 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMLAD (uint32_t op1, uint32_t op2, uint32_t op3)
<> 133:99b5ccf27215 504 {
<> 133:99b5ccf27215 505 uint32_t result;
<> 133:99b5ccf27215 506
<> 133:99b5ccf27215 507 __ASM volatile ("smlad %0, %1, %2, %3" : "=r" (result) : "r" (op1), "r" (op2), "r" (op3) );
<> 133:99b5ccf27215 508 return(result);
<> 133:99b5ccf27215 509 }
<> 133:99b5ccf27215 510
<> 133:99b5ccf27215 511 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMLADX (uint32_t op1, uint32_t op2, uint32_t op3)
<> 133:99b5ccf27215 512 {
<> 133:99b5ccf27215 513 uint32_t result;
<> 133:99b5ccf27215 514
<> 133:99b5ccf27215 515 __ASM volatile ("smladx %0, %1, %2, %3" : "=r" (result) : "r" (op1), "r" (op2), "r" (op3) );
<> 133:99b5ccf27215 516 return(result);
<> 133:99b5ccf27215 517 }
<> 133:99b5ccf27215 518
<> 133:99b5ccf27215 519 __attribute__( ( always_inline ) ) __STATIC_INLINE uint64_t __SMLALD (uint32_t op1, uint32_t op2, uint64_t acc)
<> 133:99b5ccf27215 520 {
<> 133:99b5ccf27215 521 union llreg_u{
<> 133:99b5ccf27215 522 uint32_t w32[2];
<> 133:99b5ccf27215 523 uint64_t w64;
<> 133:99b5ccf27215 524 } llr;
<> 133:99b5ccf27215 525 llr.w64 = acc;
<> 133:99b5ccf27215 526
<> 133:99b5ccf27215 527 #ifndef __ARMEB__ // Little endian
<> 133:99b5ccf27215 528 __ASM volatile ("smlald %0, %1, %2, %3" : "=r" (llr.w32[0]), "=r" (llr.w32[1]): "r" (op1), "r" (op2) , "0" (llr.w32[0]), "1" (llr.w32[1]) );
<> 133:99b5ccf27215 529 #else // Big endian
<> 133:99b5ccf27215 530 __ASM volatile ("smlald %0, %1, %2, %3" : "=r" (llr.w32[1]), "=r" (llr.w32[0]): "r" (op1), "r" (op2) , "0" (llr.w32[1]), "1" (llr.w32[0]) );
<> 133:99b5ccf27215 531 #endif
<> 133:99b5ccf27215 532
<> 133:99b5ccf27215 533 return(llr.w64);
<> 133:99b5ccf27215 534 }
<> 133:99b5ccf27215 535
<> 133:99b5ccf27215 536 __attribute__( ( always_inline ) ) __STATIC_INLINE uint64_t __SMLALDX (uint32_t op1, uint32_t op2, uint64_t acc)
<> 133:99b5ccf27215 537 {
<> 133:99b5ccf27215 538 union llreg_u{
<> 133:99b5ccf27215 539 uint32_t w32[2];
<> 133:99b5ccf27215 540 uint64_t w64;
<> 133:99b5ccf27215 541 } llr;
<> 133:99b5ccf27215 542 llr.w64 = acc;
<> 133:99b5ccf27215 543
<> 133:99b5ccf27215 544 #ifndef __ARMEB__ // Little endian
<> 133:99b5ccf27215 545 __ASM volatile ("smlaldx %0, %1, %2, %3" : "=r" (llr.w32[0]), "=r" (llr.w32[1]): "r" (op1), "r" (op2) , "0" (llr.w32[0]), "1" (llr.w32[1]) );
<> 133:99b5ccf27215 546 #else // Big endian
<> 133:99b5ccf27215 547 __ASM volatile ("smlaldx %0, %1, %2, %3" : "=r" (llr.w32[1]), "=r" (llr.w32[0]): "r" (op1), "r" (op2) , "0" (llr.w32[1]), "1" (llr.w32[0]) );
<> 133:99b5ccf27215 548 #endif
<> 133:99b5ccf27215 549
<> 133:99b5ccf27215 550 return(llr.w64);
<> 133:99b5ccf27215 551 }
<> 133:99b5ccf27215 552
<> 133:99b5ccf27215 553 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMUSD (uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 554 {
<> 133:99b5ccf27215 555 uint32_t result;
<> 133:99b5ccf27215 556
<> 133:99b5ccf27215 557 __ASM volatile ("smusd %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 558 return(result);
<> 133:99b5ccf27215 559 }
<> 133:99b5ccf27215 560
<> 133:99b5ccf27215 561 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMUSDX (uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 562 {
<> 133:99b5ccf27215 563 uint32_t result;
<> 133:99b5ccf27215 564
<> 133:99b5ccf27215 565 __ASM volatile ("smusdx %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 566 return(result);
<> 133:99b5ccf27215 567 }
<> 133:99b5ccf27215 568
<> 133:99b5ccf27215 569 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMLSD (uint32_t op1, uint32_t op2, uint32_t op3)
<> 133:99b5ccf27215 570 {
<> 133:99b5ccf27215 571 uint32_t result;
<> 133:99b5ccf27215 572
<> 133:99b5ccf27215 573 __ASM volatile ("smlsd %0, %1, %2, %3" : "=r" (result) : "r" (op1), "r" (op2), "r" (op3) );
<> 133:99b5ccf27215 574 return(result);
<> 133:99b5ccf27215 575 }
<> 133:99b5ccf27215 576
<> 133:99b5ccf27215 577 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMLSDX (uint32_t op1, uint32_t op2, uint32_t op3)
<> 133:99b5ccf27215 578 {
<> 133:99b5ccf27215 579 uint32_t result;
<> 133:99b5ccf27215 580
<> 133:99b5ccf27215 581 __ASM volatile ("smlsdx %0, %1, %2, %3" : "=r" (result) : "r" (op1), "r" (op2), "r" (op3) );
<> 133:99b5ccf27215 582 return(result);
<> 133:99b5ccf27215 583 }
<> 133:99b5ccf27215 584
<> 133:99b5ccf27215 585 __attribute__( ( always_inline ) ) __STATIC_INLINE uint64_t __SMLSLD (uint32_t op1, uint32_t op2, uint64_t acc)
<> 133:99b5ccf27215 586 {
<> 133:99b5ccf27215 587 union llreg_u{
<> 133:99b5ccf27215 588 uint32_t w32[2];
<> 133:99b5ccf27215 589 uint64_t w64;
<> 133:99b5ccf27215 590 } llr;
<> 133:99b5ccf27215 591 llr.w64 = acc;
<> 133:99b5ccf27215 592
<> 133:99b5ccf27215 593 #ifndef __ARMEB__ // Little endian
<> 133:99b5ccf27215 594 __ASM volatile ("smlsld %0, %1, %2, %3" : "=r" (llr.w32[0]), "=r" (llr.w32[1]): "r" (op1), "r" (op2) , "0" (llr.w32[0]), "1" (llr.w32[1]) );
<> 133:99b5ccf27215 595 #else // Big endian
<> 133:99b5ccf27215 596 __ASM volatile ("smlsld %0, %1, %2, %3" : "=r" (llr.w32[1]), "=r" (llr.w32[0]): "r" (op1), "r" (op2) , "0" (llr.w32[1]), "1" (llr.w32[0]) );
<> 133:99b5ccf27215 597 #endif
<> 133:99b5ccf27215 598
<> 133:99b5ccf27215 599 return(llr.w64);
<> 133:99b5ccf27215 600 }
<> 133:99b5ccf27215 601
<> 133:99b5ccf27215 602 __attribute__( ( always_inline ) ) __STATIC_INLINE uint64_t __SMLSLDX (uint32_t op1, uint32_t op2, uint64_t acc)
<> 133:99b5ccf27215 603 {
<> 133:99b5ccf27215 604 union llreg_u{
<> 133:99b5ccf27215 605 uint32_t w32[2];
<> 133:99b5ccf27215 606 uint64_t w64;
<> 133:99b5ccf27215 607 } llr;
<> 133:99b5ccf27215 608 llr.w64 = acc;
<> 133:99b5ccf27215 609
<> 133:99b5ccf27215 610 #ifndef __ARMEB__ // Little endian
<> 133:99b5ccf27215 611 __ASM volatile ("smlsldx %0, %1, %2, %3" : "=r" (llr.w32[0]), "=r" (llr.w32[1]): "r" (op1), "r" (op2) , "0" (llr.w32[0]), "1" (llr.w32[1]) );
<> 133:99b5ccf27215 612 #else // Big endian
<> 133:99b5ccf27215 613 __ASM volatile ("smlsldx %0, %1, %2, %3" : "=r" (llr.w32[1]), "=r" (llr.w32[0]): "r" (op1), "r" (op2) , "0" (llr.w32[1]), "1" (llr.w32[0]) );
<> 133:99b5ccf27215 614 #endif
<> 133:99b5ccf27215 615
<> 133:99b5ccf27215 616 return(llr.w64);
<> 133:99b5ccf27215 617 }
<> 133:99b5ccf27215 618
<> 133:99b5ccf27215 619 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SEL (uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 620 {
<> 133:99b5ccf27215 621 uint32_t result;
<> 133:99b5ccf27215 622
<> 133:99b5ccf27215 623 __ASM volatile ("sel %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 624 return(result);
<> 133:99b5ccf27215 625 }
<> 133:99b5ccf27215 626
<> 133:99b5ccf27215 627 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QADD(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 628 {
<> 133:99b5ccf27215 629 uint32_t result;
<> 133:99b5ccf27215 630
<> 133:99b5ccf27215 631 __ASM volatile ("qadd %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 632 return(result);
<> 133:99b5ccf27215 633 }
<> 133:99b5ccf27215 634
<> 133:99b5ccf27215 635 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __QSUB(uint32_t op1, uint32_t op2)
<> 133:99b5ccf27215 636 {
<> 133:99b5ccf27215 637 uint32_t result;
<> 133:99b5ccf27215 638
<> 133:99b5ccf27215 639 __ASM volatile ("qsub %0, %1, %2" : "=r" (result) : "r" (op1), "r" (op2) );
<> 133:99b5ccf27215 640 return(result);
<> 133:99b5ccf27215 641 }
<> 133:99b5ccf27215 642
<> 133:99b5ccf27215 643 #define __PKHBT(ARG1,ARG2,ARG3) \
<> 133:99b5ccf27215 644 ({ \
<> 133:99b5ccf27215 645 uint32_t __RES, __ARG1 = (ARG1), __ARG2 = (ARG2); \
<> 133:99b5ccf27215 646 __ASM ("pkhbt %0, %1, %2, lsl %3" : "=r" (__RES) : "r" (__ARG1), "r" (__ARG2), "I" (ARG3) ); \
<> 133:99b5ccf27215 647 __RES; \
<> 133:99b5ccf27215 648 })
<> 133:99b5ccf27215 649
<> 133:99b5ccf27215 650 #define __PKHTB(ARG1,ARG2,ARG3) \
<> 133:99b5ccf27215 651 ({ \
<> 133:99b5ccf27215 652 uint32_t __RES, __ARG1 = (ARG1), __ARG2 = (ARG2); \
<> 133:99b5ccf27215 653 if (ARG3 == 0) \
<> 133:99b5ccf27215 654 __ASM ("pkhtb %0, %1, %2" : "=r" (__RES) : "r" (__ARG1), "r" (__ARG2) ); \
<> 133:99b5ccf27215 655 else \
<> 133:99b5ccf27215 656 __ASM ("pkhtb %0, %1, %2, asr %3" : "=r" (__RES) : "r" (__ARG1), "r" (__ARG2), "I" (ARG3) ); \
<> 133:99b5ccf27215 657 __RES; \
<> 133:99b5ccf27215 658 })
<> 133:99b5ccf27215 659
<> 133:99b5ccf27215 660 __attribute__( ( always_inline ) ) __STATIC_INLINE uint32_t __SMMLA (int32_t op1, int32_t op2, int32_t op3)
<> 133:99b5ccf27215 661 {
<> 133:99b5ccf27215 662 int32_t result;
<> 133:99b5ccf27215 663
<> 133:99b5ccf27215 664 __ASM volatile ("smmla %0, %1, %2, %3" : "=r" (result): "r" (op1), "r" (op2), "r" (op3) );
<> 133:99b5ccf27215 665 return(result);
<> 133:99b5ccf27215 666 }
<> 133:99b5ccf27215 667
<> 133:99b5ccf27215 668
<> 133:99b5ccf27215 669 #elif defined ( __ICCARM__ ) /*------------------ ICC Compiler -------------------*/
<> 133:99b5ccf27215 670 /* IAR iccarm specific functions */
<> 133:99b5ccf27215 671 #include <cmsis_iar.h>
<> 133:99b5ccf27215 672
<> 133:99b5ccf27215 673
<> 133:99b5ccf27215 674 #elif defined ( __TMS470__ ) /*---------------- TI CCS Compiler ------------------*/
<> 133:99b5ccf27215 675 /* TI CCS specific functions */
<> 133:99b5ccf27215 676 #include <cmsis_ccs.h>
<> 133:99b5ccf27215 677
<> 133:99b5ccf27215 678
<> 133:99b5ccf27215 679 #elif defined ( __TASKING__ ) /*------------------ TASKING Compiler --------------*/
<> 133:99b5ccf27215 680 /* TASKING carm specific functions */
<> 133:99b5ccf27215 681 /* not yet supported */
<> 133:99b5ccf27215 682
<> 133:99b5ccf27215 683
<> 133:99b5ccf27215 684 #elif defined ( __CSMC__ ) /*------------------ COSMIC Compiler -------------------*/
<> 133:99b5ccf27215 685 /* Cosmic specific functions */
<> 133:99b5ccf27215 686 #include <cmsis_csm.h>
<> 133:99b5ccf27215 687
<> 133:99b5ccf27215 688 #endif
<> 133:99b5ccf27215 689
<> 133:99b5ccf27215 690 /*@} end of group CMSIS_SIMD_intrinsics */
<> 133:99b5ccf27215 691
<> 133:99b5ccf27215 692
<> 133:99b5ccf27215 693 #ifdef __cplusplus
<> 133:99b5ccf27215 694 }
<> 133:99b5ccf27215 695 #endif
<> 133:99b5ccf27215 696
<> 133:99b5ccf27215 697 #endif /* __CORE_CMSIMD_H */
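To show how the intrinsics declared in this header are typically used, here is a short, hedged sketch. It assumes a Cortex-M4/M7 part with the DSP extension, and the device header name is a placeholder: core_cmSimd.h is normally pulled in indirectly through the CMSIS core files rather than included on its own.

#include "stm32f429xx.h"   // hypothetical device header; it includes the CMSIS core files
#include <stdint.h>

// Add four unsigned bytes in parallel with a single UADD8 instruction.
// Each byte lane wraps modulo 256 independently of its neighbours.
static uint32_t add_packed_bytes(uint32_t a, uint32_t b)
{
    return __UADD8(a, b);
}

// Example: add_packed_bytes(0x01020304, 0x10FF0101) == 0x11010405
// (the 0x02 + 0xFF lane wraps around to 0x01).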