If you have this code:
#include <stdio.h>

int main(void){
    unsigned int x = 5;
    int y = -3;
    if (y > x)
        printf("y is bigger\n\r");
    return 0;
}
What is the output of this?
- int = signed int
- when comparing an unsigned value to a signed one, the signed operand is implicitly converted to unsigned (the usual arithmetic conversions)
- in our case:
- x = 0x00000005
- y = 0xFFFFFFFD (MSB = sign bit = 1; the whole pattern is the two's complement of 3)
- after converting y to unsigned and comparing, y > x --> true
- the output: "y is bigger" (see the sketch after this list)
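A minimal sketch of the same comparison (assuming 32-bit int; the extra printf is only there to show the value y takes once it is converted to unsigned):

#include <stdio.h>

int main(void){
    unsigned int x = 5;
    int y = -3;

    /* The usual arithmetic conversions turn y into an unsigned int,
       so the comparison below is really (unsigned int)y > x. */
    printf("y as unsigned: 0x%08X (%u)\n\r", (unsigned int)y, (unsigned int)y);

    if (y > x)
        printf("y is bigger\n\r");              /* this branch is taken */

    /* A sign-aware comparison avoids the surprise. */
    if (y >= 0 && (unsigned int)y > x)
        printf("y is really bigger\n\r");
    else
        printf("x is bigger or equal\n\r");

    return 0;
}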
What about a normal arithmetic operation instead of a comparison?
#include <stdio.h>

int main(void){
    int x;
    int y = -10;
    unsigned int z = 4;
    x = y + z;
    printf("%d\n\r", x);
    return 0;
}
// -6?
// Yes: y is converted to unsigned, then added to z; storing the result back into the signed int x still gives the correct value, thanks to the two's complement representation (sketch below)
// y + z = 0xFFFFFFF6 + 0x00000004 = 0xFFFFFFFA = -6
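A short sketch of the same addition (assuming 32-bit int on a two's complement machine; converting an out-of-range unsigned value back to int is implementation-defined before C23, but mainstream compilers simply reuse the bit pattern):

#include <stdio.h>

int main(void){
    int x;
    int y = -10;
    unsigned int z = 4;

    /* y is converted to unsigned int and the addition wraps around. */
    unsigned int tmp = (unsigned int)y + z;   /* 0xFFFFFFF6 + 0x00000004 = 0xFFFFFFFA */

    /* Storing the unsigned result in a signed int reinterprets the bits. */
    x = y + z;

    printf("unsigned intermediate: 0x%08X (%u)\n\r", tmp, tmp);
    printf("x = %d\n\r", x);                  /* -6 */

    return 0;
}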
Ex:
#include <stdio.h>

int main(void){
    short x;
    long x2;
    long x3;
    short y = -10;
    unsigned short z = 4;
    x = y + z;
    x2 = y + z;
    x3 = y + (short)z;
    printf("%d, %ld, %ld\n\r", x, x2, x3);
    return 0;
}
on a 16-bit machine (16-bit int, 32-bit long):
- y + z = 0xFFF6 + 0x0004 = 0xFFFA
- x = 0xFFFA
- interpreted as a signed short, x = -6
- x2 = 0x0000FFFA
- interpreted as a signed long, x2 = +65530
- no sign extension happened, because (y + z) is unsigned
- x3 = 0xFFFFFFFA
- interpreted as a signed long, x3 = -6
- sign extension happened, because (y + (short)z) is signed (see the sketch after this list)
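The 16-bit behaviour can be reproduced on a modern machine with the fixed-width types from <stdint.h>; the casts below force the 16-bit arithmetic that a real 16-bit int performs implicitly (a sketch to mimic the 16-bit target, not the original program):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void){
    int16_t  y = -10;
    uint16_t z = 4;

    /* On a 16-bit machine, y + z is computed in 16-bit unsigned arithmetic;
       here the cast forces that same 16-bit result. */
    uint16_t sum_u16 = (uint16_t)(y + z);        /* 0xFFFA */

    int16_t x  = (int16_t)sum_u16;               /* reinterpreted as signed 16-bit: -6
                                                    (implementation-defined, but -6 on
                                                    two's complement compilers) */
    int32_t x2 = sum_u16;                        /* zero-extended: 0x0000FFFA = 65530 */
    int32_t x3 = (int16_t)(y + (int16_t)z);      /* signed 16-bit result, sign-extended: -6 */

    printf("x  = %" PRId16 "\n\r", x);
    printf("x2 = %" PRId32 "\n\r", x2);
    printf("x3 = %" PRId32 "\n\r", x3);

    return 0;
}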
on a 32-bit machine (16-bit short, 32-bit int):
- both y and z are first promoted to (signed) int, since int can represent every value of unsigned short
- y + z = 0xFFFFFFF6 + 0x00000004 = 0xFFFFFFFA = -6 as a signed int
- so x = x2 = x3 = -6 (the sketch below prints the promoted bit patterns)
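A small sketch (assuming 16-bit short and 32-bit int, as on typical 32-bit and 64-bit targets) that prints the promoted bit patterns:

#include <stdio.h>

int main(void){
    short y = -10;
    unsigned short z = 4;

    int sum = y + z;   /* both operands are promoted to (signed) int, so sum == -6 */

    printf("y promoted: 0x%08X\n\r", (unsigned int)y);             /* 0xFFFFFFF6 */
    printf("z promoted: 0x%08X\n\r", (unsigned int)z);             /* 0x00000004 */
    printf("y + z     : %d (0x%08X)\n\r", sum, (unsigned int)sum); /* -6 (0xFFFFFFFA) */

    return 0;
}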