Even calling uninitialized data “garbage” is misleading. You might expect the compiler to simply leave out the initialization code and compile the rest as usual, so that the values end up being “whatever was in memory previously”. But no - the compiler can (and absolutely will) optimize by assuming the values are whatever is most convenient for it, even if those values would be vanishingly unlikely or outright impossible in practice.
As an example, consider this code (godbolt: https://godbolt.org/z/TrMrYTKG9):
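(The code behind that link is roughly of this shape - I haven't copied it verbatim, so the struct layout and names here are my reconstruction from the discussion below:)

    struct Pair {
        unsigned char a;   // one byte each, so the whole struct comes back in eax
        unsigned char b;
    };

    Pair make(bool x) {
        Pair result;        // neither field is initialized here
        if (x)
            result.a = 13;  // b is left untouched on this path
        else
            result.b = 37;  // a is left untouched on this path
        return result;      // whichever field wasn't written still holds an indeterminate value
    }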
At high enough optimization levels, the function compiles to “mov eax, 9485; ret”, which sets both a=13 and b=37 without testing the condition at all - as if both branches of the test were executed. This is perfectly reasonable because the lack of initialization means the values could already have been set that way (even if unlikely), so the compiler just goes ahead and sets them that way. It’s faster!
If I understand it right, in principle the compiler doesn't even need to do that.
It can just leave the result totally uninitialised. That's because both code paths have undefined behaviour: whichever of result.x or result.y is not set is still copied at "return result" which is undefined behaviour, so the overall function has undefined behaviour either way.
It could even just replace the function body with abort(), or omit the implementation entirely (even the ret instruction, allowing execution to just fall through to whatever memory happens to follow). Whether any computer does that in practice is another matter.
Even the notion that uninitialized memory contains values is kind of dangerous. Once you access it, you can't reason about what's going to happen at all. Behaviour can occur that isn't self-consistent with any value at all: https://godbolt.org/z/adsP4sxMT
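(The linked example presumably shows something along these lines - this is my own minimal version of the idea, not necessarily the same code:)

    #include <cstdio>

    int main() {
        bool b;                               // never initialized
        if (b)  std::printf("b is true\n");
        if (!b) std::printf("b is false\n");
        // With optimizations on, it is possible to see both lines printed, or neither:
        // the two reads of b need not behave as if b held any single consistent value.
    }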
Even if you fixed the uninitialized-data problem, this code is still a bug waiting to happen. It should be a single bool in the struct to handle the state for the function, as there are only two states that actually make sense.
succeeded = true;
error = true;
// This makes no sense

succeeded = false;
error = false;
// This makes no sense
Otherwise, if I'm checking a response, I'm generally going to check just "succeeded" or just "error" and miss one of the two states above that "shouldn't happen"; or, if I check both, it's a lot of awkward extra code and I'm left trying to output an error for a state that, again, makes no sense.
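For what it's worth, here's a sketch of the single-flag shape being described (the field names are my own, not from the original code):

    #include <string>

    struct Response {
        bool succeeded = false;      // the only state flag - there is no second bool to contradict it
        std::string error_message;   // detail for the failure case; empty on success
    };

With one flag, the "succeeded and errored" and "neither" states simply cannot be represented.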
It often happens when the "error" field is not a bool but a string, aka error_message. It could be an empty string, or _null_, or even _undefined_ if we're in JS.
Then the obvious question is why we need _succeeded_ at all, if we can always check for _error_. Sometimes it is useful: when the server itself doesn't know whether the operation succeeded (e.g. an IO/database operation timed out), the operation might have succeeded, but we should still show an error message to the user.
Another possibility is that _succeeded_ is not a bool but, say, a "succeeded_at" timestamp. In general, I've noticed that almost any boolean value in a database can be replaced with a timestamp or an error code.
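A sketch of that variant (the types and names are mine, purely illustrative):

    #include <cstdint>
    #include <optional>
    #include <string>

    struct OperationRecord {
        std::optional<std::string>  error_message;  // nullopt = no error reported
        std::optional<std::int64_t> succeeded_at;   // unix timestamp; nullopt = success not (yet) confirmed
    };
    // Both fields empty can legitimately mean "outcome unknown", e.g. a database
    // write that timed out and may or may not have been applied.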
I have bumped into this myself, too. It's really annoying. The biggest footgun isn't even discussed explicitly, and it might be how the error got introduced: when a struct goes from POD to non-POD or vice versa, the rules change, so a completely innocent change, like adding a string field, can suddenly create undefined behaviour in unrelated code that was correct previously.
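One concrete way that bites (my example - the comment doesn't say which rule change was involved): code that zeroes a trivially copyable struct with memset is fine, but becomes undefined behaviour the day someone adds a std::string member.

    #include <cstring>
    #include <string>

    struct Settings {
        bool verbose;
        int  retries;
        // std::string log_path;   // adding this innocent-looking field breaks reset() below
    };

    void reset(Settings& s) {
        std::memset(&s, 0, sizeof s);   // OK while Settings is trivially copyable;
                                        // stomps on std::string's internals otherwise
    }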
Yeah, looks pretty straightforward to me, but I used to write C++ for a living. I mean, there are complicated cases in C++ starting with C++11; this one is not really one of them. Just init the fields to false. Most of these cases are just C++ trying to bring in new features without breaking legacy code, and it has become pretty difficult to keep up with it all.
Many years ago I had a customer complaint about undefined data changing value in Fortran 77. It turned out that the compiler never allocated storage for uninitialized variables, so the variable was aliased to something else.
The compiler was changed to allocate storage for any referenced variables.
If the two fields in the struct are expected to be false unless changed, then initialize them as such. Nothing is gained by leaving it to the compiler, and a lot is lost.
I think the point is that sometimes variables are defined by the language spec as initialized to zero, and sometimes they aren't.
Perhaps what you mean is, "Nothing is to be gained by relying on the language spec to initialize things to zero, and a lot is lost"; I'd agree with that.
Yeah... but I wouldn't characterize the bug itself (in its essential form) as UB.
Even if the implementation specified that the data would be indeterminate depending on what existed in that memory location previously, the bug would still exist.
Even if you hand-coded this in assembly, the bug would still exist.
The essence of the bug is uninitialized data being garbage. That's always gonna be a latent bug, regardless of whether the behavior is defined in an ISO standard.
Yeah I agree. This is a classic “uninitialized variable has garbage memory value” bug. But it is not an “undefined nasal demons behavior” bug.
That said, we all learn this one! I spent like two weeks debugging a super rare desync bug in a multiplayer game with a P2P lockstep synchronous architecture.
Suffice to say I am now a zealot about providing default values all the time. Thankfully it’s a lot easier since C++11 came out and lets you define default values at the declaration site!
I prefer that language constructs define new storage as zero-initialized. It doesn't prevent all bugs (e.g. application-logic bugs), but at least it gives deterministic results. These days it's zero-cost for local variables and near-zero-cost for fields. This is the case in Virgil.
That makes things worse if all-zero is not a valid value for the datatype. I'd much prefer a set-up that requires you to initialise explicitly. Rust, for example, has a `Default` trait that you can implement if there is a sensible default, which may well be all-zero. It also has a `MaybeUninit` holder which doesn't do any initialisation, but needs an `unsafe` to extract the value once you've made sure it's OK. But if you don't have a suitable default, and don't want/need to use `unsafe`, you have to supply all the values.
Symbian's way of avoiding this was to use a class called CBase to derive from. CBase would memset the entire allocated memory for the object to binary zeros, thus zeroizing any member variable.
And by convention, all classes derived from CBase would start their name with C, so something like CHash or CRectangle.
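The trick can be pictured roughly like this (a sketch of the idea, not actual Symbian source):

    #include <cstddef>
    #include <cstring>
    #include <new>

    class CBase {
    public:
        void* operator new(std::size_t size) {
            void* p = ::operator new(size);
            std::memset(p, 0, size);    // every CBase-derived object starts out as binary zeros
            return p;
        }
        virtual ~CBase() {}
    };

    class CRectangle : public CBase {   // the "C" prefix marks CBase-derived classes
        int iWidth;    // left at zero by the zero-filled allocation
        int iHeight;
    };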
The code in the godbolt example above says that if x is true then a=13, and if it is false then b=37.
That is still the case. It's just that a=13 even if x is false - a thing the code had nothing to say about, and so something the compiler is free to do.
I think a sanitizer probably would have caught this, but IMHO this is the language's fault.
Hopefully future versions of C++ will mandate default initialization for all cases that are UB today and we can be free of this class of bug.
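For reference, the kind of invocation that tends to catch this (assuming clang is available - MemorySanitizer reports reads of uninitialized memory at runtime):

    clang++ -fsanitize=memory -fno-omit-frame-pointer -g -O1 bug.cpp && ./a.out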