    a = b + c;

What could possibly go wrong? Let us list the ways (a couple of them are illustrated in the snippet after the list):
1. a, b, or c are not the variables we want
2. We specified addition, but we wanted something else
3. The addition operation overflows
4. a and b are unsigned, while c is signed and negative; the result becomes unsigned
5. a has a smaller size than b or c; the assignment operation overflows
6. a is signed while b or c are unsigned; a becomes negative while b+c is positive
7. a is unsigned, while b and c are signed; again we have an overflow
8. We used the wrong indentation level and people are unhappy about it
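To make a couple of these concrete, here is a minimal sketch of (5) and (7), assuming 32-bit int and 16-bit short (the values are mine, purely for illustration):

    #include <stdio.h>

    int main(void) {
        /* Case (5): a is smaller than b and c; the assignment overflows */
        short a5;
        int b5 = 40000, c5 = 40000;
        a5 = b5 + c5;      /* 80000 does not fit in a 16-bit short */

        /* Case (7): a is unsigned, b and c are signed; the sum is negative */
        unsigned a7;
        int b7 = -5, c7 = 2;
        a7 = b7 + c7;      /* -3 becomes 4294967293 */

        printf("%hd %u\n", a5, a7);
        return 0;
    }

Neither assignment draws a warning at a typical compiler's default settings, which is precisely the problem.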
- To handle (3), we need trapping signed addition and trapping unsigned addition instructions (a software approximation is sketched after this list)
- To handle (4), we need a mixed-signedness trapping add
- To handle (5), we need trapping signed store and trapping unsigned store instructions, which check that the value in the register fits into the memory location specified
- Ditto for (6) and (7)
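Today the closest we get is checking in software. Here is a minimal sketch of (3) using the __builtin_add_overflow builtin of GCC and Clang (the abort-on-overflow policy is my choice for illustration):

    #include <stdlib.h>

    /* A software stand-in for a trapping signed add: the builtin
       performs the add, returns nonzero if the result wrapped, and
       we "trap" by aborting. Not standard C -- GCC/Clang only. */
    static int add_or_trap(int b, int c) {
        int a;
        if (__builtin_add_overflow(b, c, &a))
            abort();
        return a;
    }

The builtin is type-generic, so it also handles unsigned and mixed-signedness operands, covering (4) as well.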
So why are those instructions missing? In the '70s and '80s, when the industry was ramping up, performance was a much greater concern than security. Code was smaller and easier to audit; CPU cycles were longer and therefore more important to conserve; networks were small and private; truly malicious attacks were rare.
An unvirtuous cycle followed: C tried to make the most of existing processors, so its semantics mimic the instruction sets of those days. It then became wildly successful, so processors were optimized for running C code; naturally, they implemented or optimized instructions which translated directly to C concepts. This made C even more popular.
A pair of examples from the x86 world are the INTO and BOUND instructions. INTO (INTerrupt on Overflow) can follow an addition or subtraction instruction, effectively turning it into a trapping signed instruction. BOUND performs an array subscript bounds check, trapping if the index is out of bounds. But these instructions were rarely used at first, so they were not optimized in later iterations of the processor. Finally, the 64-bit extensions to the x86 instruction set removed the two instructions for good.
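For flavor, this is roughly how INTO was meant to be used - a sketch of mine in GNU inline assembly, which only builds for 32-bit x86, since INTO no longer exists in 64-bit mode:

    /* Trapping signed add via INTO: if the addl sets the overflow
       flag, INTO raises interrupt 4. 32-bit x86, GNU toolchains only. */
    static int add_into(int b, int c) {
        int a = b;
        __asm__ volatile("addl %1, %0\n\t"
                         "into"
                         : "+r"(a)
                         : "r"(c));
        return a;
    }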
2 comments:
I think you're mistaking C for a language that maps 1-to-1 onto assembly. It's not.
In the example case (4), section 6.3.1.3 explicitly specifies what happens in this scenario: the variable 'c' is converted to an unsigned value by repeatedly adding one more than UINT_MAX, which, given the wrap-around behavior, means that you'll get the two's complement representation reinterpreted as an unsigned integer.
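For instance, with 32-bit ints (an illustration of mine, not the standard's text):

    unsigned a, b = 1;
    int c = -2;
    /* c converts by adding UINT_MAX + 1: -2 + 4294967296 == 4294967294 */
    a = b + c;    /* a == 4294967295 */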
But this is a property of the type system, not of the machine architecture. It could simply be treated as a compile error, or could require an explicit cast. That would sacrifice absolutely no performance; it just requires the programmer to be explicit when doing unsafe things.
Requiring an explicit cast fixes nothing. Suppose you do need to add an unsigned and a signed value - it happens in the best of families. A compile error is secure, but then your program fails in 100% of the cases, which is somewhat above the norm. An explicit cast means the programmer has signed a disclaimer stating that he understands he is doing unsafe things and bears the risk in its entirety - and now you're back at a run-time unsafe addition that sometimes yields incorrect and unsafe results.
What we need is a run-time check, and current instruction sets don't provide one, because C didn't insist on it in the past.
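To see the cost of doing it in software, here is the portable version of such a check in plain C (my sketch, not from the post above):

    #include <limits.h>

    /* Reports overflow instead of trapping, using only tests that
       are themselves overflow-free. */
    static int add_checked(int b, int c, int *a) {
        if ((c > 0 && b > INT_MAX - c) ||
            (c < 0 && b < INT_MIN - c))
            return 0;    /* would overflow */
        *a = b + c;
        return 1;
    }

Correct, but it turns one add into a couple of compares and branches per operation - the overhead a trapping add instruction would make free.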