In commit M20150601 the code was changed from:
uint8_t taddr[4];
getSIPR(taddr);
to:
uint32_t taddr;
getSIPR((uint8_t*)&taddr);
This works on most platforms, but it breaks on 16-bit-word architectures such as the TI C28x (DSP28335), where char is 16 bits and uint8_t is typedef'd as unsigned char. There the cast shrinks the buffer: getSIPR() writes four uint8_t (4 words, 64 bits) into a uint32_t that occupies only 2 words (32 bits). With optimizations enabled, the overflow clobbers adjacent stack slots, including the saved return address.
On the C28x, sizeof counts 16-bit chars:
sizeof(uint8_t[4]); // 4 chars = 64 bits
sizeof(uint32_t);   // 2 chars = 32 bits
Example disassembly (DSP28335):
safe:
socket.c:
328042: B2BD MOVL *SP++, XAR1
328043: AABD MOVL *SP++, XAR2
328044: A2BD MOVL *SP++, XAR3
328045: FE06 ADDB SP, #6 ; !!! allocates 6 words of locals
328046: 5AA5 MOVZ AR2, @AR5
...
RPC OK
unsafe:
socket.c:
328042: B2BD MOVL *SP++, XAR1
328043: AABD MOVL *SP++, XAR2
328044: A2BD MOVL *SP++, XAR3
328045: FE04 ADDB SP, #4 ; !!! allocates only 4 words
328046: 5AA5 MOVZ AR2, @AR5
...
RPC corrupted (e.g., 0x190001)
With the smaller frame, getSIPR()'s four-word write overruns the allocation and overwrites the saved return address (RPC), so getSIPR() returns to a bogus location.