Systems programmers love to hate on unsigned integers. Generations have been infected with the Java world model that integers have to be pretend number lines centered on zero. Guess what: you still have boundary conditions to deal with. There are times when you really, really need the full word range without negative values. This happens more often in low-level programming and on machines with small word sizes, something fewer people are engaged in these days. Unsigned doesn't need to be the default. Ada sequesters them as modular types, but they're available when needed.
Java doesn't have unsigned primitive types because James Gosling ran a series of interviews at Sun among "expert" C devs, and they all got the C language rules for unsigned arithmetic wrong.
Yes, I miss them in Java as primitives; however, there are utility methods for unsigned arithmetic that get it right.
Why does an unsigned type for sizes or indices fare worse than a signed type? When do I want the -247th element in an array? When do I have a block that is -10 bytes in size?
You never want any element of an array, except elements within the range [0, array_length). Anything outside of that is undefined behavior.
I think people tend to overthink this. A function that takes an index argument should simply return a result when the index is within the valid range, and error if it's outside of it (regardless of whether it's too low or too high). It doesn't particularly matter whether the integer is signed.
If you aren't storing 2^64 elements in your array (which you probably aren't - most systems don't even support addressing that much memory) then the only thing unsigned gets you is a bunch of footguns (like those described in the OP article).
There are (rare) times when you want negative array indices. C lets you index in both directions from a pointer to the middle of an array. That's why array indexing is signed in C. Some libc ctypes lookup tables do this. For sizing there is no strong case for negatives other than to shoehorn them into signed operations.
That’s interesting but seems pretty dangerous. How do you know you aren’t going to decrement off the front of the array? Keeping the pointer to the first element in the array and using offsets seems safer for humans and I don’t think the computer would care.
Kind of a smart-alec response, but how do you know you aren't going to increment off the end of the array when operating normally? I guess it is twice the danger.
The reason is not that you want a negative index or size, but that you want the computation of the index to be correct, and you want obvious errors when it isn't. Both turn out to be easier with signed types.
In Java, unsigned arithmetic is available through an API and, as you said, it is pretty much only needed when marshalling to certain wire protocols or for FFI. Built-in unsigned types are useful primarily for bitfields or similar tiny types with up to 6 bits or so.
I think all the unsigned arithmetic you need is already offered. Unsigned shift right is an operator; the primitive wrappers offer compareUnsigned, divideUnsigned, and remainderUnsigned, as well as conversion methods; unsigned exponentiation is offered in Math (because signed types in Java wrap, there's no need for special unsigned addition/subtraction).
> There are times when you really really need to use the full word range without negative values.
There are a few of those, but that is the niche case. Certainly when we're talking about 64-bit size types. And if you want to cater to smaller size types, then just template over the size type. Or, OK, some other trick if it's C rather than C++.
Sometimes (and very often in some scenarios/industries, e.g. HPC for graphics and simulation, with indices for things like points, vertices, primvars, voxels, etc.) you also want the datatype to be compact for memory/cache performance reasons, because you're storing millions of them and need random addressing (so you can't really bit-pack to, say, 36 bits, at least not without overhead compared to native types, which are really needed for maximum speed without any branching).
Losing half the range to make them signed when you only care about positive values 95% of the time (and in the rare case when you do any modulo on top of them you can cast, or write wrappers for that), is just a bad trade-off.
Yes, you've still then only doubled the range to 2^32, and you'll still hit it at some point, but that extra byte can make a lot of difference from a memory/cache efficiency standpoint without jumping to 64-bit.
So very often uint32_t is a very good sweet spot for size: int32_t is sometimes too small, and (u)int64_t is generally not needed and too wasteful.
As someone who generally believed in Moore's law, i.e., accepted the notion that transistor counts grew exponentially, I was surprised at how long the difference between a 2 GiB address space and a 3 GiB address space stayed relevant in practice.
In theory, it should have been at most a year. In practice, the Windows XP /3GB boot switch (which gives user mode 3 GiB of virtual address space and the kernel 1 GiB, instead of the usual 2 and 2) was relevant for many years.
>If sizes are unsigned, like in C, C++, Rust and Zig – then it follows that anything involving indexing into data will need to either be all unsigned or require casts. With C’s loose semantics, the problem is largely swept under the rug, but for Rust it meant that you’d regularly need to cast back and forth when dealing with sizes.
TBH I've had very little struggle with this at all. As long as you keep your values and types separate, the unsigned type that you got a number from originally feeds just fine into the unsigned type that you send it to next. Needing casting then becomes a very clear sign that you're mixing sources and there be dragons, back up and fix the types or stop using the wrong variable. It's a low-cost early bug detector.
Implicitly casting between integer types though... yeah, that's an absolute freaking nightmare.
> As long as you keep your values and types separate, the unsigned type that you got a number from originally feeds just fine into the unsigned type that you send it to next.
Part of me feels like direct numeric array indexing is one of the last holdouts of a low-level operation screaming for some standardized higher-level abstraction. I'm not saying to get rid of the ability to index directly, but if the error-resistant design here is to use numeric array indices as though they were opaque handles, maybe we just need to start building support for opaque handles into our languages, rather than just handing out numeric indices and using thoughts and prayers to stop people from doing math on them.
For analogy, it's like how standardizing on iterators means that Rust code rarely needs to index directly, in contrast with C, where the design of for-loops encourages indexing. But Rust could still benefit from opaque handles to take care of those in-between cases where iterators are too limiting and yet where numeric indices are more powerful than needed.
> Part of me feels like direct numeric array indexing is one of the last holdouts of a low-level operation screaming for some standardized higher-level abstraction.
Maybe this isn't what you're suggesting, but it's already possible to make an interface that prevents callers from doing math on indices in Rust — just return a struct that has a private member for the index. The caller can pass the value back at which point you can unwrap it and do index arithmetic.
You do need to store those if they're totally opaque though, e.g. how do you represent a range without holding N tokens? Often I like it, and it allows changing the underlying storage to be e.g. generational with ~no changes, but it kinda can't be enforced for runtime-cost reasons.
Using a unique type per array instance though, that I quite like, and in index-heavy code I often build that explicitly (e.g. in Go) because it makes writing the code trivially correct in many cases. Indices are only very rarely shared between arrays, and exceptions can and should look different because they need careful scrutiny compared to "this is definitely intended for that array".
> But what about the range? While it’s true that you get twice the range, surprisingly often the code in the range above signed-int max is quite bug-ridden. Any code doing something like (2U * index) / 2U in this range will have quite the surprise coming.
Alas, (2S * signed_index) / 2S will similarly result in surprises the moment the signed_index hits half the signed-int max. There's no free lunch when trying to cheat the integer ranges.
The difference is that in the unsigned case you get a seemingly plausible value, and in the signed case you get a negative value which you can be sure is wrong. This is the problem.
> The former is easier to define, but has the downside of essentially “silencing warnings”. Let’s say the code was originally written to cast an u16 to u32, but later the variable type changes from u16 to u64 and the cast is now actually silently truncating things. Here we have casts becoming a sort of “silence all warnings”.
Well … we even mention Rust in the paragraph right before this. In Rust, you can widen a u16 to a u32 this way:
let bigger: u32 = x.into();
or
let bigger = u32::from(x);
The conversion `from` is infallible, because a u16 always fits in a u32. There is no `from(u64) -> u32`, because as the article notes, that would truncate, so if we did change the type to u64, the code would now fail to compile. (And we'd be forced to figure out what we want to do here.)
(There are fallible conversions, too, in the form of try_from, that can do u64 → u32, but will return an error if the conversion fails.)
Similarly, consider:
for (uint x = 10; x >= 0; x--) // Infinite loop!
This is why I think implicit wrapping is a bad idea in language design. Even Rust went down the wrong path (in my mind) there, and I think has worked back towards something safer in recent years. But Rust provides a decent example here too; this is pseudo-code:
for (uint x = 10; x.is_some(); x = x.checked_sub(1))
Where `checked_sub` returns `None` instead of wrapping, providing us a means to detect the stopping point. So, something like that. (Though you'd probably also want to destructure the option into the uint for use inside the loop.) Of course, higher-level stuff always wins out here, I think, and in Rust you wouldn't write the above; instead something like,
for x in (0..=10).rev()
(And even then, if we need indexes; usually, one would prefer to iterate through a slice or something like that. The higher-level concept of iterators usually dispenses with most or all uses of indexes, and in the rare cases when needed, most languages provide something like `enumerate` to get them from the iterator.)
In my reading, what Stroustrup is saying is that, given other problems in C/C++, signed sizes are less bad than unsigned, but both have clear and significant deficiencies. A new language doesn't have to inherit all of these deficiencies.
No. He says that signed/unsigned arithmetic is a universal problem. And in the context of std::span, using signed arithmetic is the correct choice rather than shoehorning in size_t to make it more cosmetically consistent with the rest of the STL.
I might be a contrarian in that I actually like using unsigned integers for sizes and indexes. In my experience, most of their traps can be avoided by treating any subtraction involving them as a `reinterpret_cast`: i.e.
* Do your utmost to rewrite the code in order to avoid doing that (e.g. reordering disequations to transform subtractions into additions).
* If not possible, think very hard about any possible edge case: you most certainly need an additional `if` to deal with those.
* When analyzing other people's code during troubleshooting merge reviews, assume any formula involving an unsigned integer and a minus sign is wrong.
I am personally moving in the opposite direction. I haven't meaningfully used a signed integer in years, and I see signed integers as being for more niche use-cases. I mainly only use a signed type when I want to do a "signed shift right". If there was a >>> operator in Zig I wouldn't even think of signed integers.
Given your examples, I think you'd have fewer issues if you were working with unsigned integers exclusively. Although I'm curious about what other code you were referencing with this:
"But seeing how each change both made the code easier to reason about and more correct, I couldn’t deny the evidence."
With regards to modulo, in Zig if you try to use it with a signed integer it will tell you to specify whether you want `@mod` or `@rem` semantics. In my case, I'd almost never write `x % 2`, I'd write `x & 1`. I do use unsigned division but I'd pretty much never write code that would emit the `div` instruction.
I'm not saying you're wrong though! Everyone has a different mind. If you attain higher correctness and understandability through using signed integers, that's great. I'm just saying I'm in the opposite camp.
> If sizes are unsigned, like in C, C++, Rust and Zig – then it follows that anything involving indexing into data will need to either be all unsigned or require casts.
I don’t really get this claim. Indexing should just look up the element corresponding to the value provided. It’s easy to come up with semantics that are intuitive and sound, even if signed integers or ones smaller than size_t are used.
Indexing does that, but the indices must vary in a certain range, whose limits are frequently determined by using something like "sizeof(array)/sizeof(element)" which is an unsigned number.
This is especially inconvenient in C, where there exist extremely dangerous legacy implicit casts between signed integers and unsigned integers, which have a great probability of generating incorrect values.
Because the index is typically a signed integer, comparing it with an unsigned limit without using explicit casts is likely to cause bugs. Using explicit casts of smaller unsigned integers towards bigger signed integers results in correct code, but it is cumbersome.
These problems are avoided as said in TFA, by making "sizeof" and the like to have 64-bit signed integer values, instead of unsigned values.
Well-chosen implicit conversions are good for a programming language, reducing unnecessary verbosity, but the implicit integer conversions of C are just wrong, and they are by far the worst mistake of C, much worse than any other C feature.
Other C features are criticized because they may be misused by inexperienced or careless programmers, but most of the implicit integer conversions are just incorrect. There is no way of using them correctly. Only the conversions from a smaller signed integer to a bigger signed integer are correct.
Mixed signedness conversions have always been wrong and the conversions between unsigned integers have been made wrong by the change in the C standard that has decided that the unsigned integers are integer residues modulo 2^N and they are not non-negative integers.
For modular integers, the only correct conversions are from bigger numbers to smaller numbers, i.e. the opposite of the implicit conversions of C. The implicit conversions of C unsigned numbers would have been correct for non-negative integers, but in the current C standard there are no such numbers.
The current C standard is inconsistent, because the meaning of sizeof is of a non-negative integer and this is also true for the conversions between unsigned numbers, but all the arithmetic operations with unsigned numbers are defined to be operations with integer residues, not operations with non-negative numbers.
The hardware of most processors implements at least 3 kinds of arithmetic operations: operations with signed integers, operations with non-negative integers and operations with integer residues.
Any decent programming language should define distinct types for these kinds of numbers, otherwise the only way to use completely the processor hardware is to use assembly language. Because C does not do this, you have to use at least inline assembly, if not separate assembly source files, for implementing operations with big numbers.
It was undefined what happens at unsigned overflows and underflows. Therefore a compiler could choose to implement "unsigned" as either non-negative numbers or as integer residues.
The semantics of "sizeof" and the implicit conversions between "unsigned" numbers are consistent only with non-negative numbers. Therefore the undefined behavior should have been defined correspondingly.
Instead of this, at some version of the standard (I am too lazy to search for it now, but it might have been C99), they changed the behavior from undefined to defined as the behavior of integer residues.
I do not know the reason for this choice, it may have been just laziness, because it is easier to implement in compilers and it leads to maximum performance in the absence of bugs. In any case this decision has broken the standard, because the arithmetic operations have become incompatible with the implicit conversions between "unsigned" types and with the semantics of "sizeof", which must be non-negative.
For non-negative numbers, the correct conversions are from smaller sizes to bigger sizes, while for integer residues the correct conversions are only in the opposite direction, from bigger sizes to smaller sizes (e.g. a number that is 257 modulo 65536 is also 1 modulo 256, so truncating it yields a correct value, while a number that is 1 modulo 256 when modulo 65536 it could be 257, 511, 769 etc. so you cannot extend it without additional information).
Judging from the implicit conversions, it is clear that the intention of the designers of C during the seventies was that "unsigned" numbers must be non-negative integers and not integer residues. The modern C standard is guilty of the current inconsistencies that greatly increase the chances of bugs.
I know language designers have a lot of trade-offs to consider... But I would say if you know a value will logically always be >= 0, better to have a type that reflects that.
The potential bugs listed would be prevented by, e.g. "x--" won't compile without explicitly supplying a case for x==0 OR by using some more verbose methods like "decrement_with_wrap".
The trade-off is lack of C-like concise code, but more safe and explicit.
> But I would say if you know a value will logically always be >= 0, better to have a type that reflects that.
Except that's not quite what unsigned types do. They are not (just) numbers that will always be >= 0, but numbers where the value of `1 - 2` is > 1 and depends on the type. This is not an accident but how these types are intended to behave because what they express is that you want modular arithmetic, not non-negative integers.
> e.g. "x--" won't compile without explicitly supplying a case for x==0
If you want non-negative types (which, again, is not what unsigned types are for) you also run into difficulties with `x - y`. It's not so simple.
There are many useful constraints that you might think it's "better to have a type that reflects that" - what about variables that can only ever be even? - but it's often easier said than done.
This is true, which means that a language has to be designed from the ground up to deal with these problems, or there will always be inscrutable bugs due to misuse of arithmetic results. A simple example in a C-like language would be that the following function would not compile:
unsigned foo(unsigned a, unsigned b) { return a - b; }
but this would:
unsigned foo(unsigned a, unsigned b) {
    auto c = a - b;
    return c >= 0 ? c : 0;
}
Assuming 32 bit unsigned and int, the type of c should be computed as the range [-0xffffffff, 0xffffffff], which is different from int [-0x100000000, 0x7fffffff]. Subtle things like this are why I think it is generally a mistake to type annotate the result of a numerical calculation when the compiler can compute it precisely for you.
First, your code is about having unsigned types represent the notion of non-negative values, but this is not the intent of unsigned types in C/C++. They represent modular arithmetic types.
Second, it's not as simple as you present. What is the type of c? Obviously it needs to be signed so that you could compare it to zero, but how many bits does it have? What if a and b are 64 bit? What if they're 128 bit?
You could do it without storing the value and by carrying a proof that a >= b, but that is not so simple, either (I mean, the compiler can add runtime checks, but languages like C don't like invisible operations).
That's true for signed numbers too though? `int_min - 2 > int_min`
I agree they're a bit more error-prone in practice, but I suspect a huge part of that is because people are so used to signed numbers because they're usually the default (and thus most examples assume signed, if they handle extreme values correctly at all (much example code does not)). And, legitimately, zero is a more commonly-encountered value... but that can push errors to occur sooner, which is generally a desirable thing.
> That's true for signed numbers too though? `int_min - 2 > int_min`
As someone else already pointed out, that's undefined behaviour in C and C++ (in Java they wrap), but the more important point is that the vast majority of integers used in programs are much closer to zero than to int_min/max. Sizes of buffers etc. tend to be particularly small. There are, of course, overflow problems with signed integers, but they're not as common.
Which makes them even less safe than unsigned, where it is defined, yes? The optimizations that can lead to are incredibly hard to predict.
Besides, for safety there are much clearer options, like wrapping_add / saturating_add. Aborting is great as a safety tool though, agreed - it'd be nice if more code used it.
I think it should be like in Pascal, where you have size ranges as types, and then you can declare that a collection falls in a given range (and, very nicely, you can make it an enum).
I don't get it. Is this a parody of poor design decisions?
Sure, it's possible to write bugs in C. And if you really want to, you can disable the compiler warnings which flag tautologous comparisons and mixed-sign comparisons (a common reason for doing this is to avoid spurious warnings in generic-type code).
But, uhh, "people can deliberately write bugs" has got to be the weakest justification I've ever seen for changing a language feature -- especially one as fundamental as "sizes of objects can't be negative".
The C language does not have any data type that has the property "can't be negative".
Signed integers can be negative. The so-called "unsigned" integers of C are integer residues modulo 2^N, which are neither positive nor negative, i.e. these concepts are not applicable to "unsigned" integers.
An alternative view is that any C "unsigned" is both positive and negative. For example the unsigned short "1" is the same number as "65537" and as "-65535".
So any sizeof value in C is negative (while also being positive).
In contradiction with what you say, the change described in TFA, by making sizes 64-bit signed integers, is the only method to guarantee that the sizes are non-negative in a language that does not have dedicated non-negative integers.
Other programming languages have non-negative integers, but C and C++ and many languages derived from them do not have such integers.
The arithmetic operations with non-negative integers differ from the arithmetic operations of C. On overflows and underflows, they either generate exceptions or have saturating behavior.
> An alternative view is that any C "unsigned" is both positive and negative. For example the unsigned short "1" is the same number as "65537" and as "-65535".
This can be disproven by the fact that dividing by `unsigned e = 1U` is well defined and always yields the starting number. If the unsigned numbers were really modular numbers as you suggest, division could not be defined.
This does not demonstrate anything. It is just additional evidence that the C standard contains contradictory rules about "unsigned" integers.
The oldest parts of the C language are all consistent with "unsigned" numbers being non-negative integers. The implicit conversions between different sizes of "unsigned", the sizeof operator, the relational operators and division are consistent with non-negative integers.
However the first C standard, instead of defining the correct behavior has left undefined many corner cases of the arithmetic operations, allowing the implementation of "unsigned" as either non-negative integers or integer residues.
Eventually, the undefined behaviors for addition, subtraction and multiplication have been defined to be those of integer residues, not those of non-negative integers.
These contradictory properties are the cause of many confusions and bugs.
In extensible languages, like C++, it is possible to define proper non-negative integers and integer residues and bit strings and to always use those types instead of the built-in "unsigned".
In C, it is better to always use signed numbers and avoid unsigned, by casting unsigned to bigger sizes of signed before using such a value.
Leaving aside the fact that, yes, unsigned integer types are definitely not negative -- my point wasn't about types at all. Objects cannot take up a negative number of bytes of memory!
It seems like they've identified common bug patterns in C that would have been ameliorated by using signed, but come to the wrong conclusion that signed is the correct answer, rather than that C is poorly designed for making the broken code the easy option.
Fix the language. Don't hack around it by using the wrong type.
I hate using languages that only have signed integers. Using integers that can’t be negative fits many problems nicely and avoids the edge case of having to check for negative.
You are perfectly right, but neither C nor C++ nor many more recent languages derived from them have non-negative integers.
The so-called "unsigned" integers of C are integer residues, where each value can be interpreted either as both positive and negative or as neither positive nor negative. In any case no "unsigned" value can be said to be non-negative.
You have to go back to languages not contaminated by C, like Ada, to find true non-negative integers among the primitive data types.
In C++, it is possible to define a non-negative integer type, which can have good performance if you implement its operations in assembly language.
However I am not aware of an open-source library including such a type.
I really appreciate your comments in this thread adrian_b. Could you point me at a brief summary of how Ada (or Pascal?) non-negative ints work? What is a compile error, what is a guaranteed run-time error, etc.
It's not "can't be negative", it's just that the semantics for negativity is wrapping around.
And - yes, there are very important use cases for unsigned/modulo-2n/wraparound values. But sizes of data structures are generally _not_ one of those use cases. The fact that the size is non-negative does not mean that the type should be unsigned. You should still be able to, say, subtract sizes and get a signed value which may be negative.
That’s definitely not true. Unsigned ints have no “negativity” semantic. Wrapping around is what happens when you decrement the minimum value of any integer type, including signed types. Regardless of the type you use to represent an integer value that cannot legally be negative, you will have to take care not to allow your program to return values lower than zero for things like indices or sizes.
> Wrapping around is what happens when you decrement the minimum value of any integer type, including signed types.
No, signed wraparound is undefined behavior in C, whereas unsigned types are defined to wrap around. If you use -ftrapv, signed overflow is an immediate abort().
While C, as in many other places, fails to define the correct behavior (to avoid shaming the processor or compiler makers that fail to provide it), there are only 2 correct behaviors on overflow and underflow, i.e. when incrementing the biggest number or decrementing the smallest number.
Both for signed integers and for non-negative integers, the 2 alternatives of correct behavior on overflows and underflows are to either generate exceptions or to saturate the result to the biggest or smallest representable number.
Wraparound is the correct behavior for integer residues, which are a distinct data type from either signed integers or non-negative integers.
While some people criticize C for making it easy for careless programmers to make certain kinds of bugs, like access outside bounds, those are easily mitigated by using appropriate compiler options.
For me a much more serious defect of C is the confusion it promotes in the heads of most programmers, who do not understand which are the fundamental integer types and which are the correct conversions between them, because C uses "unsigned" in place of at least 3 distinct types that it does not have: bit strings, integer residues and non-negative integers. More rarely, "unsigned" is used for 2 other missing types: binary polynomials and binary polynomial residues. All 5 of these types should be primitive types in a programming language, because all modern processors implement in hardware distinct operations for all 5, which can be accessed only through assembly language when the types are missing.
Best off having a bespoke type that understands how big the array it's indexing is.
However I do concede writing a few helper methods isn't that much of a burden.
This paragraph reminds me a bit of Dex: https://arxiv.org/abs/2104.05372
Using a unique type per array instance though, that I quite like, and in index-heavy code I often build that explicitly (e.g. in Go) because it makes writing the code trivially correct in many cases. Indices are only very rarely shared between arrays, and exceptions can and should look different because they need careful scrutiny compared to "this is definitely intended for that array".
Alas, (2 * signed_index) / 2 will similarly result in surprises the moment signed_index hits half the signed-int max. There's no free lunch when trying to cheat the integer ranges.
Well … we even mention Rust in the paragraph right before this. In Rust, you can widen a u16 to a u32 this way: `let wide: u32 = u32::from(narrow);` or `let wide: u32 = narrow.into();`
The conversion `from` is infallible, because a u16 always fits in a u32. There is no `from(u64) -> u32`, because, as the article notes, that would truncate; so if we did change the type to u64, the code would now fail to compile, and we'd be forced to figure out what we want to do here. (There are fallible conversions too, in the form of `try_from`, which can do u64 → u32 but will return an error if the conversion fails.)
This is why I think implicit wrapping is a bad idea in language design. Even Rust went down the wrong path there (in my mind), and I think it has worked back towards something safer in recent years. But Rust provides a decent example here too: a countdown loop can use `checked_sub`, which returns `None` instead of wrapping, giving us a means to detect the stopping point. (Though you'd probably also want to destructure the option into the uint for use inside the loop.) Of course, higher-level stuff always wins out here, I think, and in Rust you wouldn't write the above anyway; usually one would prefer to iterate through a slice or something like that. The higher-level concept of iterators dispenses with most or all uses of indexes, and in the rare cases when they're needed, most languages provide something like `enumerate` to get them from the iterator.
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p14...
* Do your utmost to rewrite the code to avoid doing that (e.g. reorder inequalities to transform subtractions into additions).
* If that's not possible, think very hard about every possible edge case: you most certainly need an additional `if` to deal with those.
* When analyzing other people's code during troubleshooting or merge reviews, assume any formula involving an unsigned integer and a minus sign is wrong.
Given your examples, I think you'd have fewer issues if you were working with unsigned integers exclusively. Although I'm curious about what other code you were referencing with this: "But seeing how each change both made the code easier to reason about and more correct, I couldn’t deny the evidence."
With regards to modulo, in Zig if you try to use it with a signed integer it will tell you to specify whether you want `@mod` or `@rem` semantics. In my case, I'd almost never write `x % 2`, I'd write `x & 1`. I do use unsigned division but I'd pretty much never write code that would emit the `div` instruction.
I'm not saying you're wrong though! Everyone has a different mind. If you attain higher correctness and understandability through using signed integers, that's great. I'm just saying I'm in the opposite camp.
I don’t really get this claim. Indexing should just look up the element corresponding to the value provided. It’s easy to come up with semantics that are intuitive and sound, even if signed integers or ones smaller than size_t are used.
This is especially inconvenient in C, where there exist extremely dangerous legacy implicit casts between signed integers and unsigned integers, which have a great probability of generating incorrect values.
Because the index is typically a signed integer, comparing it with an unsigned limit without using explicit casts is likely to cause bugs. Using explicit casts of smaller unsigned integers towards bigger signed integers results in correct code, but it is cumbersome.
These problems are avoided, as said in TFA, by making "sizeof" and the like have 64-bit signed integer values instead of unsigned values.
Well-chosen implicit conversions are good for a programming language, reducing unnecessary verbosity, but the implicit integer conversions of C are just wrong, and they are by far the worst mistake of C, much worse than any other C feature.
Other C features are criticized because they may be misused by inexperienced or careless programmers, but most of the implicit integer conversions are just incorrect. There is no way of using them correctly. Only the conversions from a smaller signed integer to a bigger signed integer are correct.
Mixed-signedness conversions have always been wrong, and the conversions between unsigned integers were made wrong by the change in the C standard that decided unsigned integers are integer residues modulo 2^N rather than non-negative integers.
For modular integers, the only correct conversions are from bigger numbers to smaller numbers, i.e. the opposite of the implicit conversions of C. The implicit conversions of C unsigned numbers would have been correct for non-negative integers, but in the current C standard there are no such numbers.
The current C standard is inconsistent, because the meaning of sizeof is of a non-negative integer and this is also true for the conversions between unsigned numbers, but all the arithmetic operations with unsigned numbers are defined to be operations with integer residues, not operations with non-negative numbers.
The hardware of most processors implements at least 3 kinds of arithmetic operations: operations with signed integers, operations with non-negative integers and operations with integer residues.
Any decent programming language should define distinct types for these kinds of numbers; otherwise the only way to use the processor hardware fully is assembly language. Because C does not do this, you have to use at least inline assembly, if not separate assembly source files, to implement operations with big numbers.
It was undefined what happens at unsigned overflows and underflows. Therefore a compiler could choose to implement "unsigned" as either non-negative numbers or as integer residues.
The semantics of "sizeof" and the implicit conversions between "unsigned" numbers are consistent only with non-negative numbers, so the undefined behavior should have been defined correspondingly.
Instead, at some version of the standard (I'm too lazy to search for it now, but it might have been C99), they changed the behavior from undefined to the defined behavior of integer residues.
I do not know the reason for this choice; it may have been just laziness, because it is easier to implement in compilers and it gives maximum performance in the absence of bugs. In any case, this decision broke the standard, because the arithmetic operations became incompatible with the implicit conversions between "unsigned" types and with the semantics of "sizeof", which must be non-negative.
For non-negative numbers, the correct conversions are from smaller sizes to bigger sizes, while for integer residues the correct conversions are only in the opposite direction, from bigger sizes to smaller sizes (e.g. a number that is 257 modulo 65536 is also 1 modulo 256, so truncating it yields a correct value, while a number that is 1 modulo 256 when modulo 65536 it could be 257, 511, 769 etc. so you cannot extend it without additional information).
Judging from the implicit conversions, it is clear that the intention of the designers of C during the seventies was that "unsigned" numbers must be non-negative integers and not integer residues. The modern C standard is guilty of the current inconsistencies that greatly increase the chances of bugs.
The potential bugs listed would be prevented by, e.g., making "x--" refuse to compile without an explicitly supplied case for x==0, or by using more verbose methods like "decrement_with_wrap".
The trade-off is losing C-like concise code, but it's safer and more explicit.
Except that's not quite what unsigned types do. They are not (just) numbers that will always be >= 0, but numbers where the value of `1 - 2` is > 1 and depends on the type. This is not an accident but how these types are intended to behave because what they express is that you want modular arithmetic, not non-negative integers.
> e.g. "x--" won't compile without explicitly supplying a case for x==0
If you want non-negative types (which, again, is not what unsigned types are for) you also run into difficulties with `x - y`. It's not so simple.
There are many useful constraints that you might think it's "better to have a type that reflects that" - what about variables that can only ever be even? - but it's often easier said than done.
Second, it's not as simple as you present. What is the type of c? Obviously it needs to be signed so that you could compare it to zero, but how many bits does it have? What if a and b are 64 bit? What if they're 128 bit?
You could do it without storing the value and by carrying a proof that a >= b, but that is not so simple, either (I mean, the compiler can add runtime checks, but languages like C don't like invisible operations).
I agree they're a bit more error-prone in practice, but I suspect a huge part of that is because people are so used to signed numbers because they're usually the default (and thus most examples assume signed, if they handle extreme values correctly at all (much example code does not)). And, legitimately, zero is a more commonly-encountered value... but that can push errors to occur sooner, which is generally a desirable thing.
As someone else already pointed out, that's undefined behaviour in C and C++ (in Java they wrap), but the more important point is that the vast majority of integers used in programs are much closer to zero than to int_min/max. Sizes of buffers etc. tend to be particularly small. There are, of course, overflow problems with signed integers, but they're not as common.
No, that's undefined behavior in C, and if you care about correctness, you run at least your testsuite in CI with -ftrapv so it turns into an abort().
Besides, for safety there are much clearer options, like wrapping_add / saturating_add. Aborting is great as a safety tool though, agreed - it'd be nice if more code used it.
https://www.freepascal.org/docs-html/ref/refsu4.html
Sure, it's possible to write bugs in C. And if you really want to, you can disable the compiler warnings which flag tautologous comparisons and mixed-sign comparisons (a common reason for doing this is to avoid spurious warnings in generic-type code).
But, uhh, "people can deliberately write bugs" has got to be the weakest justification I've ever seen for changing a language feature -- especially one as fundamental as "sizes of objects can't be negative".
Signed integers can be negative. The so-called "unsigned" integers of C are integer residues modulo 2^N, which are neither positive nor negative, i.e. these concepts are not applicable to "unsigned" integers.
An alternative view is that any C "unsigned" is both positive and negative. For example the unsigned short "1" is the same number as "65537" and as "-65535".
So any sizeof value in C is negative (while also being positive).
In contradiction with what you say, the change described in TFA, by making sizes 64-bit signed integers, is the only method to guarantee that the sizes are non-negative in a language that does not have dedicated non-negative integers.
Other programming languages have non-negative integers, but C and C++ and many languages derived from them do not have such integers.
The arithmetic operations with non-negative integers differ from the arithmetic operations of C. On overflows and underflows, they either generate exceptions or have saturating behavior.
This can be disproven by the fact that division by an unsigned value such as `unsigned e = 1U` is well defined and always yields the starting number. If unsigned numbers really were modular numbers, as you suggest, division could not be defined.
The oldest parts of the C language are all consistent with "unsigned" numbers being non-negative integers. The implicit conversions between different sizes of "unsigned", the sizeof operator, the relational operators and division are consistent with non-negative integers.
However the first C standard, instead of defining the correct behavior has left undefined many corner cases of the arithmetic operations, allowing the implementation of "unsigned" as either non-negative integers or integer residues.
Eventually, the undefined behaviors for addition, subtraction and multiplication have been defined to be those of integer residues, not those of non-negative integers.
These contradictory properties are the cause of many confusions and bugs.
In extensible languages, like C++, it is possible to define proper non-negative integers and integer residues and bit strings and to always use those types instead of the built-in "unsigned".
In C, it is better to always use signed numbers and avoid unsigned, by casting unsigned to bigger sizes of signed before using such a value.
Fix the language. Don't hack around it by using the wrong type.
The so-called "unsigned" integers of C are integer residues, where each value can be interpreted either as both positive and negative or as neither positive nor negative. In any case no "unsigned" value can be said to be non-negative.
You have to go back to languages not contaminated by C, like Ada, to find true non-negative integers among the primitive data types.
In C++, it is possible to define a non-negative integer type, which can have good performance if you implement its operations in assembly language.
However I am not aware of an open-source library including such a type.
And - yes, there are very important use cases for unsigned/modulo-2n/wraparound values. But sizes of data structures are generally _not_ one of those use cases. The fact that the size is non-negative does not mean that the type should be unsigned. You should still be able to, say, subtract sizes and get a signed value which may be negative.
No, signed wraparound is undefined behavior in C, whereas unsigned types are defined to wrap around. If you use -ftrapv, signed wraparound is an immediate abort().
While C, as in many other places, fails to define the correct behavior (to avoid shaming the processor or compiler makers that fail to provide it), there are only 2 correct behaviors on overflow and underflow, e.g. when incrementing the biggest number or decrementing the smallest.
Both for signed integers and for non-negative integers, the 2 correct alternatives on overflow and underflow are to either generate an exception or saturate the result to the biggest or smallest representable number.
Wraparound is the correct behavior for integer residues, which are a distinct data type from either signed integers or non-negative integers.
While some people criticize C for making easy for careless programmers to make certain kinds of bugs, like access outside bounds, those are easily mitigated by using appropriate compiler options.
For me a much more serious defect of C is the confusion it promotes in the heads of most programmers, who do not understand what the fundamental integer types are and what the correct conversions between them are, because C uses "unsigned" in place of at least 3 distinct types that it does not have: bit strings, integer residues and non-negative integers. More rarely, "unsigned" is used for 2 other missing types: binary polynomials and binary polynomial residues. All 5 of these must be primitive types in a programming language, because all modern processors implement distinct hardware operations for all 5, which can be accessed only through assembly language when these types are missing.