Zero one infinity rule


128 points | by azhenley 13 days ago


  • zeroonetwothree 13 days ago
    While this has some theoretical merit, IME, limits are quite useful to catch bugs (or prevent degenerate cases). For example, I was recently working on a permissions system in which users can be members of groups. I set a reasonable limit on the size of a group based on the maximum of actual usage and what I could foresee being reasonable. A few days later, this limit was triggered, and I got a bug report. But it turned out there was some bad caller that had essentially an infinite loop that was adding more and more members to the group. Without the limit, this probably would only have been detected after it cascaded and caused other failures.
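    A minimal C sketch of that kind of guard (the cap and all names are invented for illustration):

    ```c
    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical cap: a generous multiple of the largest group seen in practice. */
    #define MAX_GROUP_MEMBERS 1000

    struct group {
        size_t member_count;
        int member_ids[MAX_GROUP_MEMBERS];
    };

    /* Refuses to grow without bound, so a buggy caller stuck in a loop
     * surfaces as an immediate error rather than a cascading failure. */
    bool group_add_member(struct group *g, int member_id) {
        if (g->member_count >= MAX_GROUP_MEMBERS)
            return false;
        g->member_ids[g->member_count++] = member_id;
        return true;
    }
    ```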

    As another example, Facebook famously had (maybe still has? I have no idea) a limit of 5,000 friends. This was based on the idea that you were supposed to be friends with people you actually kind of know at least a little bit, which would probably fit within 5,000 for most people. After some more popular users started hitting the limit, it led to the development of public profiles/pages as an alternative product to handle that use case. So in this case the limit allowed the company to identify a new market that they could better serve in an alternative way.

    BTW, the George Gamow book that the name comes from (One, Two, Three… Infinity) is a great read for anyone interested in popular descriptions of physics or math. He also lived a very interesting life, I would recommend his autobiography "My World Line".

    • josephg 13 days ago
      Yeah; expected limits are also fantastically useful in performance engineering. It’s very common that your code needs to handle an arbitrarily sized input, but 99% of the time the input will be bounded (or generally simpler). Special-casing the common code path can make a lot of code run much faster.

      For example, in some code I’m writing at the moment I have lists of integers all over the place. I call them lists - usually they only have 1 element. Sometimes they have 2 elements (10%) and very occasionally more than 2 elements or they’re empty (<1%).

      If I used a language like JavaScript, I’d use arrays. But arrays are quite expensive performance-wise - they need to be allocated and tracked by the GC, and the array contents are stored indirectly.

      Instead, I’m using an array type which stores up to 2 items inline in the container object (or the stack) without allocating. It only allocates memory on the heap when there are 3 or more items. This decreases allocations by 2 orders of magnitude, which makes a really big difference for performance in my library. And my code is just as readable.

      I’m using the smallvec crate. There’s plenty of libraries in C and Rust for this sort of thing in arrays and strings. Swift (like obj-c before it) builds small string optimizations into the standard library. I think that’s a great idea.
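      For illustration, the same small-buffer idea hand-rolled in C - up to two elements inline, spilling to the heap beyond that (all names here are hypothetical; the real smallvec is a Rust crate):

      ```c
      #include <stdlib.h>
      #include <string.h>

      /* Hypothetical small-vector of ints: up to 2 elements live inline in the
       * struct; longer lists "spill" into a heap allocation. Error handling
       * (malloc failure) is elided for brevity. */
      typedef struct {
          size_t len;
          size_t cap;
          int inline_buf[2];
          int *heap;          /* NULL while the contents are inline */
      } smallvec_int;

      void sv_init(smallvec_int *v) {
          v->len = 0;
          v->cap = 2;
          v->heap = NULL;
      }

      int *sv_data(smallvec_int *v) {
          /* The per-access check: has this value spilled to the heap? */
          return v->heap ? v->heap : v->inline_buf;
      }

      void sv_push(smallvec_int *v, int x) {
          if (v->len == v->cap) {             /* full: (re)spill to the heap */
              size_t ncap = v->cap * 2;
              int *p = malloc(ncap * sizeof *p);
              memcpy(p, sv_data(v), v->len * sizeof *p);
              free(v->heap);
              v->heap = p;
              v->cap = ncap;
          }
          sv_data(v)[v->len++] = x;
      }

      void sv_free(smallvec_int *v) {
          free(v->heap);
          v->heap = NULL;
      }
      ```

      With the 1-2 element distribution described above, the heap path is almost never taken, which is where the allocation savings come from.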

      • citizen_friend 13 days ago
        An arena allocator sounds like it could handle your problem elegantly, without the special cases.
        • josephg 13 days ago
          I've thought about that.

          While processing, yes - an arena allocator is a better fit. But my data is loaded from disk & held in memory while it's manipulated by consumers of my API. Given the lifetime is determined by the caller, there's no obvious arena to allocate from.

          I could put the whole thing into a long lived arena - but unless I'm careful, some operations would leak memory.

          But it would definitely be better from a performance standpoint. Using smallvec, every time these values are read or written the code needs to check if the value is "spilled" or not. And I think there's a lot of code monomorphization involved too - using a vec in an arena would probably make my binary a fair bit smaller.

          • comex 13 days ago
            Even with an arena allocator, the indirection is likely to increase cache misses, especially when the array element type is something small like an integer.

            It’s possible to implement something like smallvec without the branch by having it always contain a pointer field, which points to either the inline storage or a heap allocation. However this means it can’t be moved in memory (has to be pinned), and also means you can’t reuse the pointer field to be part of the inline storage in the inline case.
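            A rough C sketch of that branch-free layout - the pointer always points at valid storage, either the inline buffer or a heap block, but the struct becomes self-referential and must never be moved (all names hypothetical; malloc error handling elided):

            ```c
            #include <stdlib.h>

            typedef struct {
                int *data;          /* always valid: inline_buf or a heap block */
                size_t len, cap;
                int inline_buf[2];
            } pinned_vec;

            void pv_init(pinned_vec *v) {
                v->data = v->inline_buf;   /* self-reference: why it must stay pinned */
                v->len = 0;
                v->cap = 2;
            }

            void pv_push(pinned_vec *v, int x) {
                if (v->len == v->cap) {
                    size_t ncap = v->cap * 2;
                    int *p = malloc(ncap * sizeof *p);
                    for (size_t i = 0; i < v->len; i++)
                        p[i] = v->data[i];
                    if (v->data != v->inline_buf)
                        free(v->data);
                    v->data = p;
                    v->cap = ncap;
                }
                v->data[v->len++] = x;     /* no inline-vs-spilled branch on access */
            }
            ```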

            • josephg 13 days ago
              I'd love to see some real numbers showing how these different decisions impact performance and code size. I suspect the branch cost is pretty minimal because so few of my smallvecs get spilled - so the branch predictor probably does a pretty good job at this.

              And there's often fiercely diminishing returns from optimizing allocations. Dropping the number of allocations from 1M to 1k made a massive performance difference. Dropping it from 1k to 1 will probably be under the benchmark noise floor.

      • xigoi 13 days ago
        I don't think that goes against the rule. Your code doesn't impose an arbitrary limit on the data, it just internally represents it differently based on size.
    • solveit 13 days ago
      I think a better way to formulate the same rule of thumb is that zero, one, and infinity are the only numbers that don't need to be justified. You should always have an answer to the question "why 5000 and not 6000?"

      Of course, the justification can be as simple as "it has to be some number and 5000 is as good as any", but that opens the door to the discussion of whether 5000 really is as good as any other number, which is often surprisingly enlightening.

      • hinkley 13 days ago
        You can’t justify infinity.

        The practical version of the rule is “zero, one, and out of memory”.

        The data structure doesn’t care about 12,001, but your real system does.

        • lolcatuser 13 days ago
          You absolutely can justify infinity - even if your real system can't represent numbers larger than X, you can't guarantee that won't change in the future.

          For example, in C, integers have minimum sizes, but no maximum. We're lucky nobody ever went, "well, our PDP-11 can only go up to 2^16, so let's set the maximum there."
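          This is visible in <limits.h>: the standard only mandates minimum ranges, and a conforming implementation may exceed them. A couple of static assertions that hold on any conforming C11 compiler:

          ```c
          #include <limits.h>

          /* The C standard mandates minimum ranges but no maximums; these
           * hold on every conforming implementation, whatever the actual
           * widths happen to be. */
          _Static_assert(INT_MAX >= 32767, "int must span at least 16 bits");
          _Static_assert(LONG_MAX >= 2147483647L, "long must span at least 32 bits");
          ```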

          That's not to say it doesn't make sense to limit things after a certain point, but you'd damn well better be able to justify it, especially if you're using a recursive data structure that could absolutely grow to infinity if needed.

          • hinkley 13 days ago
            I can guarantee it’ll never be able to support infinity records. I can also guarantee that most data structures can’t even support a googol of records and never will. Some won’t ever support a quintillion records due to complexity, and quantum won’t fix all of those. There may be structures that support this, but they aren’t the ones we use today.

            And the reason is that complexity theory is on shaky ground these days and needs a revision. Forty years and six orders of magnitude ago we treated a bunch of operations as constant, which is a clever fiction that is blatantly obvious now, but some of you knuckleheads continue to ignore. Most C terms are in fact stair-stepped log(n). Every time log(n) doubles and crosses a threshold, the cost of the operation doubles, and therefore it’s in log(n), not C. When you’re talking billions of records that distinction is already fairly obvious, and people dealing with trillions have to architect for it.

            Memory access, as it turns out, is sqrt(n), not C, and with magic hardware might some day reach the cube root of n, and don’t count on it getting any better until we can bend space. With that sort of memory access a lot of O(1) algorithms perform worse than log(n) operations.

            As you go into 128 bit addressing the difference between nlogn and n^3/2 * log(n)² becomes impractical. There’s a wall there, and we won’t get past it without completely different algorithms or desktop FTL technology.

            The next order of magnitude of n will require a complete rewrite of that section of the code, so in practical terms the n only goes up in a new program, not the old one, and therefore the old program cannot and never will handle a thousand times more data than it was designed for.

            • Dylan16807 12 days ago
              You're getting deep into the realm of data sets that don't fit into a computer. I don't think you're talking about solving the same problems.

              > Forty years and six orders of magnitude ago we treated a bunch of operations as constant, which is a clever fiction that is blatantly obvious now, but some of you knuckleheads continue to ignore. Most C terms are in fact stair-stepped log(n). Every time log(n) doubles and crosses a threshold, the cost of the operation doubles, and therefore it’s in log(n), not C.

              Memory latency hasn't really budged in 25 years and is significantly lower than it was 40 years ago.

              Pointers have gotten bigger, but they're not that big compared to data and they're only one notch away from the maximum you could ever use in a single computer.

              Where are we seeing these slowdowns?

              > I can also guarantee that most data structures can’t even support a googol of records and never will. Some won’t ever support a quintillion records due to complexity

              A B-tree will do fine.

              > Memory access, as it turns out, is sqrt(n)

              Not in real single computers.

              And if our baseline is 1983 latency, it would take more than unrealistic amounts of circuits and size for the speed-of-light delays to drag modern tech down to equal it, let alone be worse.

              I agree that actual infinity is impractical, but “zero, one, and out of memory” gives the wrong idea, because it implies you might limit things to the amount of memory currently available or available soon. "Infinity" is better at getting across the idea that your code is not allowed to bake in any limits.

            • hinkley 12 days ago
              For giggles I did the math.

              n^3/2 * log(n)²

              The time required for 1000n is 3 million times more than n. If anyone makes a computer that is 6 orders of magnitude faster than current computers, none of us here will be alive to see it.

          • w0mbat 13 days ago
            I want to say, INT_MAX, but I know what you mean. It has been useful that the value varies by implementation (and even by compiler flag, on occasion).
            • hinkley 13 days ago
              I think you both need to watch some YouTube math videos on what exactly infinity means. As Everyday Maths put it, it’s not “count until you can’t count anymore and then add one to the number”. That’s the sort of wrong frame of thinking that makes you think that an infinite pile of hundred dollar bills is worth more than an infinite pile of one dollar bills. They are both worth $infinity.

              We don’t have any algorithms that can deal with infinity, so stop pretending like we do. And we couldn’t build one unless the universe is infinite and expansion turns out to be incorrect. So no infinity in computer data structures.

              • lolcatuser 11 days ago
                You're being willfully obtuse here.

                When somebody says "this algorithm is unbounded" they of course don't mean that they've changed the laws of the universe to allow for an infinite array of memory that it can work with. They just mean that if you could do that, it'd work.

                As an example: `while (node) node = node->next;` (in C, where `node`'s type defines a reference to `next` of the same type) will traverse an unbounded list. It will work just as well for zero nodes, one node, a dozen nodes, a billion nodes, and so on, as the number of nodes approaches infinity. Obviously it is impossible for you to ever create a computer with an unbounded amount of memory, but computer scientists can talk about algorithms the same way mathematicians can talk about what `f(x)=x^2` does at infinity.
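                Fleshed out into a compilable form (payload fields omitted; the shape is the point):

                ```c
                #include <stddef.h>

                struct node {
                    struct node *next;
                };

                /* Walks a list of any length: zero, one, a dozen, a billion nodes.
                 * Nothing in the code bakes in a maximum; only available memory does. */
                size_t list_length(struct node *node) {
                    size_t n = 0;
                    while (node) {
                        n++;
                        node = node->next;
                    }
                    return n;
                }
                ```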

                No mathematician, with the knowledge we have now, would ever say, "It's unreasonable for us to talk about infinity. That's simply not possible, ever, now or in the future, so let's limit things at 999,999,999,999,999,999,999,999,999." We settled this debate back in the 1600s.

    • crazygringo 13 days ago
      Totally agreed. I think it's absolutely a best practice to set limits.

      Code is generally designed to operate within certain "reasonable" performance boundaries, and when it goes outside those you need to think about whether the code should be rewritten to accommodate it.

      Just a tiny example, but I regularly deal with long (800+ page) PDF's on my iPad, reading parts of them in the stock Books app. When I select text to highlight, the context menu unfortunately puts "select all" directly next to "highlight". Of course, every so often I accidentally hit "select all", and then I have to force-close the app because otherwise it just freezes for 10 minutes as it extracts and selects the text on every single page.

      When really, it needs a limit to detect that, hey, if the PDF is over 20 pages long then just don't allow "select all" anymore, because this isn't the right tool for the job.

      • iudqnolq 13 days ago
        The ideal solution to this is the pattern of turning decisions that have to be thought about before you click the button into decisions that can be easily undone.

        In this case detect the pathological case and spawn a separate thread that computes the text. Show a progress indicator and an easy cancel button.

        Of course, that's harder to implement.

      • DougBTX 13 days ago
        A soft limit would work well here, a prompt saying, “There are a large number of pages, it will take a while to select them all. Do you want to continue?” then it informs the user, rather than (maybe) artificially limiting them.
      • psychphysic 13 days ago
        Although that's also ripe for the annoyance of not being able to select all in border cases.

        I'm much in favour of don't fuck with the basics (i.e. the core expected feature UI, like select, copy, and paste).

      • bee_rider 13 days ago
        Maybe “select all” is too easy to click? Might make sense to put it in the “…” menu, and have a “Select All On Page” button front-and-center.
    • pyuser583 13 days ago
      The Wiki page says the problem isn’t with limits, but with arbitrary limits.

      When a program limits me to 256 of something, it doesn’t seem arbitrary.

      I’ve heard stories of programmers setting limits to powers of two simply so nobody asks why.

      • bee_rider 13 days ago
        Although any time I see a limit of 256 I think “are they really using a byte here?”
    • pixl97 13 days ago
      What should the limits be? And are the limits clear to the users of the system? Are they clear to other components of the system?

      I agree we should have limits in software because we don't have unlimited memory and processing time, but I commonly find these limits are encoded from the imagination of the programmer working on the software at the time, and often you find limits that were not considered in the systems design of the product.

    • hinkley 12 days ago
      Someone implemented a fairly naive DAG in our system that sometimes sprouts cycles. The cheapest way to catch cycles is to limit the graph size to 10x nominal. If we ever encountered a request that was legitimately that far from normal, we would most likely time out during the subsequent work anyway, because processing time grows faster than n and our timeout is only about 20x our mean response time.

      If there is a loop, we will hit the limit while evaluating the cycle because it’s a DFS. With a BFS we could hit the limit in a sibling of the problematic node.

      Scheduling budgets are how most people avoid the halting problem. You set a fixed multiple of expected halting time and work hard to make sure that’s your p99 time instead of your p75 time, and stop trying to violate core principles of computational theory.
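      A sketch of the budget idea (the graph layout and numbers are invented; the real system's representation will differ) - the DFS carries a visit budget, and exhausting the budget is treated as evidence of a cycle:

      ```c
      #include <stdbool.h>
      #include <stddef.h>

      #define MAX_NODES 8            /* toy size, for illustration only */

      struct graph {
          size_t n_children[MAX_NODES];
          int children[MAX_NODES][MAX_NODES];
      };

      /* A legitimate graph finishes well under budget; a cycle burns
       * through it, so the size limit doubles as a cheap cycle detector. */
      bool dfs_within_budget(const struct graph *g, int node, int *budget) {
          if (--*budget < 0)
              return false;          /* limit hit: assume a cycle (or a far-from-nominal request) */
          for (size_t i = 0; i < g->n_children[node]; i++)
              if (!dfs_within_budget(g, g->children[node][i], budget))
                  return false;
          return true;
      }
      ```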

    • Cthulhu_ 13 days ago
      A lot of limits are set explicitly to ensure things don't go haywire, think things like rotating log files. Other ones have to do with display logic, that is, there's only space for X characters in a title section. Although that can be handled with CSS, it's still sane to set a limit so you don't get megabytes of garbage data cut off by CSS.
    • hammock 13 days ago
      >limits are quite useful to catch bugs

      Can be solved with tests instead though.

    Speed limits on roads are useful in catching unsafe driving behavior; but if every car actually had a speed governor installed that couldn’t be overridden, it should be clear that this is a suboptimal solution.

      Arbitrary limits written into code will eventually be refactored into ZOI, one way or another

    • hakre 11 days ago
      > limits are quite useful to catch bugs (or prevent degenerate cases)

      Sure, this is why code designed by the ZOI rule has those straightforward test cases: none, one, some, many, aaaand crash (at most).

      And to safeguard correct use (limits as undefined behaviour), many languages have assertions.

    • renox 12 days ago
      Yes, that's why I think that 'checked int64' is a far better default for language integers than 'big ints': if you overflow an int64, it's most likely a bug or a hacking attempt.
    • HervalFreire 13 days ago
      >While this has some theoretical merit,

      What theoretical merit? It sounds like a completely made up rule of thumb based off of a persons anecdotal experience.

      • chazu 13 days ago
        Examples of issues I've seen in the wild because people violate this rule include payroll systems with an arbitrary maximum number of pay codes, and review-app systems with a static number of review apps.

        Just like every other heuristic in software engineering, it's not a silver bullet, but generally speaking, this principle will serve you well.

        • HervalFreire 13 days ago
          So you're confirming what I said. No theoretical basis. You just reaffirmed my position that this rule of thumb is anecdotal with your own anecdotal experiences.

          If you look through this very thread there are people talking about anecdotal experiences verifying the opposite effect.

          • iudqnolq 13 days ago
            Interestingly this sort of thing falls into a special category where reasoning from first principles is less rigorous than pattern matching to experience. That's because we don't have a good theoretical model.

            We're at the point in history where the doctor who noticed fewer patients dying after he started washing his hands argues with the doctor who has a very erudite explanation of how that's impossible. Maybe someday we'll discover germ theory, but until then we're left to argue anecdotes down in the mud.

            • HervalFreire 13 days ago
              You recall mercury? Mercury was used by doctors for years as medicine, and it was validated by anecdotal experiences. But it really had the complete opposite effect. It killed people. That is the reliability of anecdotes, and an illustration of how delusional humans and "experts" are. Nowadays there's still no complete "theory" for medical science. We have partial theories like biochemistry, but it's not a complete theory, as in we can't completely derive which chemicals can cure which disease via a formal model.

              In fact, given the complexity of reality, we may never have a formal model for medicine, and as such we most likely will have to forever rely on the asymmetrical nature of science.

              Computing is bounded in a simulated universe of axioms and logic. Computers are in actuality a part of reality, but we try to use them as if they are separate universes of pure logic and math games. To say that something has "some theoretical basis" is a very precise statement in computing because, unlike medicine, it is VERY possible for entities in such a bounded system to have a complete formal theory.

              The problem is the Zero one infinity rule is not such a thing. The statement to say it has "some theoretical basis" is therefore completely false. Especially given the fact that in this thread there are a bunch of counter examples and I myself don't fully agree with it. How do you know this Zero one infinite rule is not just some form of mercury in disguise?

              My disagreement, however, is NOT the point. The point is that this rule currently has ZERO theoretical basis. Additionally given the existence of counter examples, it likely will ALWAYS have zero theoretical basis. I don't completely deny the validity of anecdotal evidence, but, again, the claim made initially by the GP is false.

  • hsjqllzlfkf 13 days ago
    John Carmack argued the opposite. He said he would hardcode limits into his data structures - limits that would never be hit under normal operating circumstances. He argued that when you design software you should have an idea under what circumstances it will run and optimize for those. The fact that people normally don't do this is why software often lags - that algo you implemented worked just fine when its hashmap had 1000 keys, but not when it had 1m keys.

    I haven't articulated his point very well, but I lost interest in this comment mid way thru.

    • adhesive_wombat 13 days ago
      You can still have both: an algorithm that technically has no formal numerical limit, but have warnings or errors when it reaches some condition that is clearly outside the design scope.

      For example, you might have a system that warns somehow if you start inserting items at the front of a vector with over a million items (when you originally expected the vector to be used for tens of items). If the structure is suddenly being used for something way outside of its design scope, it's probably in need of a re-think.
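      One way to sketch that in C (the threshold and names are arbitrary, for illustration):

      ```c
      #include <stdio.h>
      #include <stdbool.h>
      #include <stddef.h>

      /* The algorithm imposes no hard cap; this only flags usage far outside
       * the envelope the code was designed and tested for. */
      #define EXPECTED_MAX_ITEMS 1000000

      bool within_design_scope(size_t count) {
          if (count > EXPECTED_MAX_ITEMS) {
              fprintf(stderr,
                      "warning: %zu items; this structure was designed for far fewer\n",
                      count);
              return false;      /* caller can log, alert, or proceed anyway */
          }
          return true;
      }
      ```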

      • stefan_ 13 days ago
        Nope. "No formal limit" or "infinity" is just another way of saying "dynamic memory allocation". There is a reason you usually find this advice in game adjacent places; to punt everything to the dynamic memory allocator makes it very difficult to know if you are keeping within the memory-starved limits of e.g. the game console you are developing for, or on mobile to keep a garbage collector from stepping in and ruining your day.
        • adhesive_wombat 13 days ago
          "Nope." An "algorithm" can be about things other than memory allocation. Say you have a collection of sensors that you poll. ZOI is saying that the system that polls them shouldn't[1] have some kind of hard assumption that there will only ever be, say, specifically 4. You could still statically allocate the storage on a given system, for example in an embedded system where the software is compile-time configured for given hardware.

          However, if you pass the "poll_sensors" function a size of 60 million when the system was designed for "about 4, I guess", it's likely that you're operating the algorithm in a regime it wasn't designed for. You may (or may not, this is just another trade-off) wish to know about it.

          [1]: of course you can always construct exceptions. If you follow every rule you subscribe to dogmatically, then you're doing something more akin to religion than engineering.

          • mellavora 13 days ago
            > If you follow every rule you subscribe to dogmatically, then you're doing something more akin to religion

            My impression of religion is dogmatically following only a changeable subset of the rules you subscribe to, where subscribe means "they are in our special book"

    • TazeTSchnitzel 13 days ago
      You can see this kind of design in action in the Source engine, perhaps inherited from Quake (which is partly Carmack's work of course). There are hard limits on things like how many objects can exist at once. Occasionally those get hit… usually due to a memory leak![0] So the artificial limit helps find real problems. That's also been my experience with PHP, which limits the amount of memory a request or script execution can consume by default.

      [0] — check out shounic's other videos too, there's lots of interesting content there about how complex game systems fail, including about these limits

    • cjfd 13 days ago
      Well, John Carmack is a game programmer, at least originally. In games one has strong upper limits on how long it can take to calculate the next frame. So, it may very well be that something can be said for this in that context. I am not sure in general, though. If one limits the length of the name of a person in some record, one can start waiting until one day a person arrives whose name is one character longer, and then a programmer needs to get involved to increase the limit. Okay, maybe there exists a ridiculously large maximum name length such that this problem is unlikely to occur. Then one may still have the problem that if all people had names that approach this limit, the system might get too slow as well. So maybe we should impose a limit on the sum of all the lengths of all the names in the system. Hmmm... Kind of gets a bit too complicated and arbitrary for my tastes. I think for many systems Carmack's advice really is not very good.
      • layer8 13 days ago
        There should be limits. What if someone pastes the text of the bible into the name field? What if you want to send an email containing the name, and you hit the maximum email size accepted by the SMTP server (they all have a limit)? What if you want to send a postal letter and print the name on the envelope?

        Not setting explicit limits either means that you still have implicit limits, but you don’t know what they are and where and when you’ll run into them, or it means you’re opening yourself up to DoS attacks, because your database or your backups or your AWS bill will run amok.

        • GauntletWizard 13 days ago
          I have legitimately used the complete works of William Shakespeare, unabridged, as a password. Even that is only megabytes, and not significant load for bcrypt.

          Not that there should be no limits at all, but the upper bound should be relatively high.

          • mellavora 13 days ago
            Now all I have to do is guess which order you used them in, and I'll have full access to your system!
          • asynchronous 13 days ago
            Doesn’t bcrypt essentially truncate every source input to no longer than 35 characters?
            • layer8 13 days ago
              It’s 72 bytes, but yes. Probably a good reason to have a length limit on the password field if you use bcrypt.
        • jagged-chisel 13 days ago
          Yes, you need input validation that enforces some kind of limit. But every layer of processing (function calls, RPC…) in the stack from the text entry through the network through the database engine to the disk doesn’t need to be concerned with the 200 character limit for a name.

          The database can give you an error if the size is exceeded. The input field can warn the user about the limit. But the code that converts from the user’s chosen encoding to utf-8 doesn’t need to know anything about those limits.

          • layer8 13 days ago
            Yes, the important thing is to have a data schema that defines the limits and that is enforced at all system boundaries. That way you also identify the cases where the limits may not match those of the protocols you are talking to (mappings between schemata, basically).
    • shoo 13 days ago
      > when you design software you should have an idea under what circumstances it will run and optimize for

      This philosophy is sometimes referred to as "data-oriented design"

      see also: Mike Acton's CppCon 2014 talk, "Data-Oriented Design and C++"

      Data-oriented design seems appropriate for shipping high-performance games, and shipping them on schedule and on budget. The goal is not to design and code the algorithm with the best theoretical scalability characteristics - and publish it in a paper, or invent the most mathematically general version of the algorithm to be factored out into a library for use by 100 hypothetical future projects. The goal is not to design a system using hexagonal architecture (or whatever) that can more flexibly accommodate arbitrary changes of requirements in future, at the expense of performance. The source code is not the product that the customers buy, the game is the product.

      The goal is to understand the problem you actually have - processing data of a certain finite size or distribution of sizes, on some fixed finite set of target hardware platforms, and figure out how to do this in a way that hits performance targets, with a limited budget of engineering time, to keep moving forward on schedule toward shipping the product.

    • kristopolous 13 days ago
      "Generalized problems are the hardest, so why needlessly pretend like you have them when you don't?"
    • quelsolaar 13 days ago
      I've come to agree with this. Say you have a struct that contains a name. If you limit the name size to, say, 64 bytes, then you can store it in the struct; otherwise you need a separate allocation and an indirection. That makes the code slower, more error-prone, and more complex to use. So think hard about when “infinite” is justified.
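      The two layouts in C, roughly (sizes and field names are illustrative):

      ```c
      #include <string.h>

      /* Inline fixed-size name: the record is one flat block, copyable with
       * memcpy, no pointer chasing, but with an explicit 64-byte limit. */
      struct person_inline {
          char name[64];
          int age;
      };

      /* Unbounded name: an extra allocation and an indirection per record,
       * plus ownership and lifetime questions the flat version never raises. */
      struct person_indirect {
          char *name;   /* heap-allocated elsewhere; must be freed */
          int age;
      };

      /* The cost of the inline choice: the limit has to be enforced somewhere. */
      int set_name(struct person_inline *p, const char *name) {
          if (strlen(name) >= sizeof p->name)
              return -1;
          strcpy(p->name, name);
          return 0;
      }
      ```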
      • AlotOfReading 13 days ago
        You should also be careful about incorrectly imposing limits on human input, as every "falsehoods programmers believe about X" list repeatedly hits on. To continue the name example, there are people with legal names and titles that take far more than 64 bytes to store, and truncation is not an appropriate solution. You should store the entire thing, even if that's a small bit more difficult technically. The other limit also applies: some people don't have any characters in their legal names at all, like my mother (whose birth-certificate name was blank).
        • zeroonetwothree 13 days ago
          I think "should" is a bit strong here. There are always tradeoffs. Having support for very long names can cause other problems, like the opportunity for abuse, making it hard to format UIs, making human inspection of data much harder, or even limiting algorithmic efficiency.

          For example, US Passports have a relatively small limit for name length (I think it's around 20 characters each for first/middle/last).

          • jiggawatts 13 days ago
            “Who you are is inconvenient for me.”
            • PKop 13 days ago
              Yes? Why is someone's existence or circumstance de facto an obligation for someone else?

              No one lives their life completely accommodating every random stranger's needs.

            • GMoromisato 13 days ago
              Conversely, "The needs of the many outweigh the needs of the few, or the one."
            • hugh-avherald 13 days ago
              My first name is defined to be the concatenation of all other forenames in the United States.
        • PKop 13 days ago
          Or you can make a tradeoff and simply exclude these outliers from using your product, instead of optimizing for their unique case.
        • quelsolaar 13 days ago
          Who said anything about the name being the name of a person?
      • zeroonetwothree 13 days ago
        This is true, but note that the "Zero one infinity rule" only applies to the number of items, not the size of an item. Limits on size are a lot more typical anyway.

        (Yes I know someone will point out that to store a list of items requires a linear amount of space, so therefore an unbounded number of items inherently implies an unbounded amount of space)

      • xigoi 13 days ago
        If you're using any reasonable language, using data structures without arbitrary limits is just as easy or easier than those with arbitrary limits.
      • meghan_rain 13 days ago
        > more error prone

        not when using Rust ;-)

    • MawKKe 13 days ago
      Sure if you know the operating domain, just hard code some sensible upper limit and move on. Not that this way is "better" in all cases. YAGNI and all that
    • roflyear 13 days ago
    Only works when failing early is better than running slow, which IME is not often the case
    • mhh__ 13 days ago
      > 1m keys

      As opposed to not working at all?

  • deergomoo 13 days ago
    I think this is something programmers intuitively learn pretty quickly (in fact I'm only just learning that there is actually a name for this idea), but it's an interesting principle to explain to non-developers.

    Back when I was doing agency work, on more than one occasion I had clients question why modifying some entity to permit multiple related entities instead of one would be more work than they were anticipating.

    The question was usually along the lines of "would it help if we were to limit related $foobars to three?". But of course, from a structural perspective supporting three related records is usually exactly the same amount of work as supporting an arbitrary amount. Yeah I could probably hack in some fixed amount of extra records, but that's just going to create an increasingly large mess when your business grows and you realise a hundred wasn't enough. Might as well do it properly from the start.

  • xlii 13 days ago
    I have a feeling that a lot of commenters mix up structure design with structure implementation.

    The rule talks about the design (and I find the approach interesting), yet the implementation can (and most likely should) be limited.

    E.g. we are designing a world. In this design we say children can have toys. It doesn’t make sense to design the universe so that children are limited to 5 toys or any other arbitrary number. But no room can fit an infinite number of toys, so the implementation is limited. For some it’s 500 toys, but for some it’s 500,000 toys.

    Names or parents are good real-world examples. Sure, most people have 2 parents or at most 2 names, but this rule argues that if there can be more than one, it shouldn’t be limited at all (which also makes sense in real life, even though an IT system might limit names to 256 characters because of hardware).

    • eternalban 13 days ago
      Then change the name of the rule :)

      Zero, One, or Many. I think that would satisfy all concerned.

      • xlii 13 days ago
        Absolutely agree. However that’s for the rule creator. I think that infinity in this context sounds slightly poetic.
  • travisjungroth 13 days ago
    A metarule for using rules like this: instead of thinking of it as a rule, think of it as an option you should usually consider. You get a lot of the benefits and less of the costs if you just ask questions like "Would it be easier to solve for N than 3? Easier to use and maintain?". The answer won't always be yes or no, but you'll be more likely to get the right answer if you ask the question.
  • sethammons 13 days ago
    I've been working on high scale systems for over a decade. An early lesson was to put limits on everything. Because we started without them, customers leveraged the lack of limits, and cardinality became a huge problem.

    Real example: at SendGrid, you can attach "categories" to emails which can help organize your email analytics by enabling you to group emails by type. Just as you can view the statistics on all your email activity, you can go a step further and view the statistics by a particular category. Our ability to provide analytics blew up when several customers exploded that to literally millions of categories, most totally unique to a single message.

    We no longer release anything as unlimited. If a use case arises, we can always expand the limit, but establishing a limit after the fact is hard and customers don't like having to go back to change integrations.

    • paxys 13 days ago
      Putting a user-facing restriction in place is very different from having your underlying infrastructure itself make that assumption. If the product spec says a user will be allowed 5 messages per day, and that's all you design the system to support, you will inevitably run into challenges when the number has to be changed to 20 or 2000 or 20 billion down the line. At that point "but we only used an 8-bit integer because you said the max would be..." isn't a good enough answer, and you are going to be stuck in a very messy rewrite.
    • bee_rider 13 days ago
      Was it hurting anything to have millions of categories? I mean, I’d assume the metrics interface they’d gotten from you would be essentially useless, but if they had some workflow they worked, why not let them have it?
  • neilv 13 days ago
    I first heard of "no arbitrary limits" in connection with GNU software. It was one of the ways that GNU software not only had different licensing than alternatives, but was also usually technically superior.

    (At the time, there were a lot of different workstation computer vendors, each with their own flavor of Unix. But where GNU tools existed, they'd usually be better than all of the Unix workstation ones. One exception was that the premium vendor C and C++ compilers generated better code and/or were more trusted than GCC at the time. GCC didn't take over until Linux, initially on x86.)

    • kloch 13 days ago
      One example I recently ran into involves large image files, specifically a 24GB PNG file, which of course requires much more working memory when uncompressed:

      Gimp: Opens this image successfully without complaining. It takes a while and uses lots of swap space but it works.

      OSX Preview App: "The file <filename> could not be opened. It may be damaged or use a file format that Preview doesn't recognize."

      So not only does Preview fail to open the file, it also lies about what the specific issue is. Apparently nobody at Apple ever imagined opening such a large file.

      The early Internet flourished in large part because the protocol designers correctly recognized that they couldn't predict or even imagine all the ways people would use their work. "Be conservative in what you send but liberal in what you accept - John Postel"

  • dragontamer 13 days ago
    This is probably a rule I believed in as a beginner, and completely don't believe in anymore today.

    Let's take a look at an exceptionally well-designed system: single-error correction, double-error detection (SECDED) codes, also known as "ECC RAM". The implementation of this only works for *exactly* 64 bits + 8 parity bits.

    Now sure, we can talk about how we can generalize SECDED ECC to different numbers of bits or whatever. But I'll also point out that the only implementation in widespread use is ECC Memory, the 64-bits + 8-parity-bits (72 bits total) implementation.
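    The general construction is easy to sketch. Below is a toy extended-Hamming SECDED encoder/decoder in Python for an arbitrary number of data bits — an illustration of the scheme, not the 72-bit hardware layout:

```python
def hamming_secded_encode(data_bits):
    """Extended Hamming code: parity bits at power-of-two positions,
    plus an overall parity bit (index 0) for double-error detection."""
    m = len(data_bits)
    r = 0
    while (1 << r) < m + r + 1:  # number of Hamming parity bits needed
        r += 1
    n = m + r
    code = [0] * (n + 1)  # 1-indexed; slot 0 holds the overall parity bit
    j = 0
    for i in range(1, n + 1):
        if i & (i - 1):  # non-power-of-two positions carry data
            code[i] = data_bits[j]
            j += 1
    for p in range(r):
        pos = 1 << p
        parity = 0
        for i in range(1, n + 1):
            if i & pos:
                parity ^= code[i]
        code[pos] = parity  # makes each parity group XOR to zero
    code[0] = sum(code) % 2  # overall parity over the whole word
    return code

def hamming_secded_decode(code):
    """Returns (data_bits, status): 'ok', 'corrected', or 'double error detected'."""
    n = len(code) - 1
    syndrome = 0
    for i in range(1, n + 1):
        if code[i]:
            syndrome ^= i  # XOR of set positions = error position (if single)
    overall = sum(code) % 2
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:  # odd number of flips: assume single error, correct it
        status = "corrected"
        if syndrome:
            code[syndrome] ^= 1
        else:
            code[0] ^= 1  # the error was in the overall parity bit itself
    else:  # even number of flips but nonzero syndrome: two errors
        status = "double error detected"
    data = [code[i] for i in range(1, n + 1) if i & (i - 1)]
    return data, status
```

    As the parent notes, the practical deployments fix the block size; the generalization exists in theory but rarely in silicon.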

    • Nullabillity 13 days ago
      It is generalized, you can chain together infinitely many blocks.
    • alex_sf 13 days ago
      I would argue your example is a special case of only allowing one.
      • dragontamer 13 days ago
        But it detects two-bit errors.

        Which is nonzero, non-one, and non-infinity.


        SEC without DED is the classic implementation of Hamming error correction codes. The double-error detection (aka DED) bit is grafted on top of the theory because it's useful in practice.

  • renewiltord 13 days ago
    This is easily misunderstood (e.g. many peer comments). It's a design structure heuristic. It isn't about the limit in there. It's about the style of algorithm or data structure.

    An obvious example is something like you want to support a primary data source and if something is missing there you want to fall back.

    Either you implement it is as no fallback, you implement it as one fallback, or you write your code to support arbitrary fallbacks. What you shouldn't do is specifically support two fallbacks.

    Another example is something like say you want to have badges on a user. You want to store this in the backend and then you want to display badges next to their comment.

    Suppose you decide you want to allow a maximum of 3 badges per user.

    Conforming to 0-1-infinity is about having a badge[]. Not conforming to it is having, on your User, a badge1, badge2, and badge3.

    The two rules of 0-1-infinity and always-have-a-limit are orthogonal.
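    For illustration, a minimal Python sketch of the conforming shape — the (hypothetical) limit lives in validation, not in the data structure itself:

```python
from dataclasses import dataclass, field

MAX_BADGES = 3  # the limit is a policy constant, easy to change later

@dataclass
class User:
    name: str
    badges: list[str] = field(default_factory=list)  # badge[], not badge1/badge2/badge3

def add_badge(user: User, badge: str) -> None:
    """Enforce the limit at the edge; the storage shape stays unbounded."""
    if len(user.badges) >= MAX_BADGES:
        raise ValueError(f"at most {MAX_BADGES} badges per user")
    user.badges.append(badge)
```

    Raising the limit to 5 is a one-line change here; with badge1..badge3 fields it would be a schema migration.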

  • skissane 13 days ago
    If you took this rule literally though, you’d have to support bigints everywhere. Yet, in practice, setting the limit as INT64_MAX is seen as sufficient to count as “infinity”.

    In computing, “infinite” is almost never meant literally. When people say “infinite”, they really mean “finite but really really big”

    • iudqnolq 13 days ago
      Infinity here means "so big you don't have to think about reaching it". For example, the Rust standard library assumes no system could possibly support more than usize::MAX/2 threads, but it worries about you cloning an atomically reference-counted pointer usize::MAX times.
    • xigoi 13 days ago
      You'll run out of memory long before your array can reach 2^64 items.
      • skissane 13 days ago
        Which means in practice “infinity” is a lot lower than 2^64, although exactly how much lower depends on system configuration (how much memory the system has, how much of it this process is permitted to consume)
        • xigoi 13 days ago
          The point is that the limit should be determined by physical limitations of the hardware, not an arbitrary number chosen by the programmer.
          • skissane 12 days ago
            > The point is that the limit should be determined by physical limitations of the hardware, not an arbitrary number chosen by the programmer.

            A machine might have 64GB of RAM, but this application might be configured to be allowed to consume a maximum of 4GB of it. In which case, the limit isn’t being determined by the physical limitations of the hardware; whether or not it is under the control of the programmer depends on who does what. Some programmers get to decide how much memory to allocate to their program, for others that decision is made by somebody else.

  • heyzk 13 days ago
    Which, I assume, led to one of my favorite idioms: there are only three numbers in computing: 0, 1, and n.
  • GauntletWizard 13 days ago
    I learned this rule through, of all things, the children's series Animorphs. In one book they encounter a colony of ants, who mention that "You are one, we are many" and count that way; any finite number is one, and infinite numbers are many.
  • crazygringo 13 days ago
    I've read the article but I'm having trouble understanding the context of what this was in response to. The article quotes:

    > I formulated it in the early 70s, when I was working on programming language design and annoyed by all the arbitrary numbers that appeared in some of the languages of the day.

    Was it just an issue with languages in the 70's then? Because I'm struggling to think of any "arbitrary number" limits in modern languages.

    Obviously we have numerical limits depending on if a variable is a signed 32-bit integer or whatever. But those aren't arbitrary.

    Like would older languages just arbitrarily say there couldn't be more than 100 elements in an array, or something?

    • nwallin 13 days ago
      Databases used to very much prefer records with fixed sizes. (they still do, but they used to, too) And storage was very, very expensive so these fixed sizes were often the smallest they could conceivably get away with. So there'd be a record in the database, and you might have 10 bytes for the last name, 10 for the first name, and 8 bytes for the password. When I was younger it was super common to see the first 10 letters of my 11 letter last name on official documents. Not long after 9/11 I was hassled by airport security because my last name on my ID "didn't match" the last name on my boarding pass, until an older wiser airport security person explained to him that it was normal for the boarding passes to cut off after 10 characters.
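      A sketch of that kind of fixed-width record using Python's struct module (the field widths are illustrative) — note how packing silently truncates an 11-character surname to 10 bytes, exactly the boarding-pass behavior described above:

```python
import struct

# Fixed-width record: 10 bytes last name, 10 bytes first name, 8 bytes password.
RECORD = struct.Struct("10s10s8s")

def pack_record(last: str, first: str, pw: str) -> bytes:
    # struct's "Ns" format pads short values with NULs and truncates long ones.
    return RECORD.pack(last.encode(), first.encode(), pw.encode())

def unpack_record(rec: bytes):
    return tuple(f.rstrip(b"\x00").decode() for f in RECORD.unpack(rec))
```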

      Early versions of BASIC were limited to 2 character variable names. Some would allow you to use longer variable names, but only the first two characters would determine which actual variable it referenced. So if you had the variable JOHN and the variable JOAN they'd alias to the same value. Later versions of BASIC extended this to 6 characters. This was seen as positively extravagant.

      Many filesystems used 8.3 filenames; you had 8 characters for the stem and 3 characters for the extension. Sometimes people say that this was just an MSDOS thing but it was common on many other OSs at the time. I'm 90% sure CP/M had this limitation too.

    • layer8 13 days ago
      Lots of examples (e.g. maximum number of nesting levels), see for example:

      One aim of finite implementation limits is to define which programs are guaranteed to compile successfully, so that you don’t run into the situation that a program compiles on one implementation but not on another implementation, which would raise the question of whether it’s the program or the compiler that is nonconforming. If both are conforming, you want the program to compile 100% of the time. This isn’t possible if programs can have unlimited complexity.

      The mindset is similar to embedded programming, where you have limited memory and absolutely don’t want to run into the situation that there’s not enough memory left to perform the operation that needs to be performed.

      • kevin_thibedeau 13 days ago
        In the embedded world you're always mindful of memory consumption. Static allocation of a fixed size array has advantages over the same data in a linked list by simple virtue of eliminating node pointers and bookkeeping overhead associated with dynamic objects. Regardless of concerns about algorithmic complexity. Sometimes you really do need to set fixed limits for the sake of economy.

        Minicomputers of the 70s and PCs of the 80s had computing resources comparable to what passes for embedded systems today (i.e. much less capable than an RPi). Those machines required tradeoffs with arbitrary limits to make realizable software and that is still a valid approach for a subset of modern programming where you can't just pretend you have an infinite store that will never be exhausted and if you do hit a limit you can't just throw more hardware at the problem.

        • xigoi 13 days ago
          Dynamic allocation can often use up less memory, since it only reserves as much as it needs, not the maximum for every item.
    • gdprrrr 13 days ago
      Early C compilers had a limit of 6 characters for an identifier
  • chazu 13 days ago
    I'm consistently shocked by the number of people who have never heard of this principle. Introducing arbitrary numerical limits (emphasis on _arbitrary_, as performance limitations or other actual requirements obviously trump this rule) is a design decision that I find myself having to clean up after frequently.

    I see a lot of people here questioning the wisdom of the rule, however, like every other principle used in SWE, it shouldn't be applied blindly. Ask yourself "why am I specifying that a maximum of five wangervanes can be specified in the turboencabulator settings?" _IF_ you have a good reason, fine. Most of the time you will not.

    • zarzavat 13 days ago
      Limits are good. Limits mean that you can test your software under both min and max conditions.

      If there’s no hard limit, then the limit exists merely in the developer’s mind as what they consider sensible or not.

      Inevitably there will be some user who takes your software past what the developer considered sensible. And unknowingly and silently, this user becomes a tester in production.

      The real problem is not DRYing your limits. There should always be one central point of truth, one constant that determines what the limit is.

  • closeparen 13 days ago
    A lot of people talking about correctness and performance, but I have a suspicion that most limits in older software had less to do with that and more to do with the fact that things were written in C, where arrays whose size is not known at compile time are just a lot more work for the programmer.

    There are cases where the allocations and memcpys when a Go or Java collection grows can be a real problem, but the vast majority of the time it works out fine to leave the size unspecified.

  • paxys 13 days ago
    This is especially relevant when designing systems that may need to scale unpredictably. We dealt with a lot of such growing pains at my company, and then made a rule that every dataset has to be theoretically infinite in size. So for cases like doing a database query, returning lists via an API call etc. you cannot make assumptions like the result set will be small enough to fit in the server's memory or fit on a single web page. Pagination, indexes, search/typeahead are all a must for any case where you expect > 1 result, even though for all practical uses that number may just be 5 or 10. Many years later we are still seeing massive dividends from this rule
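    A minimal sketch of the pattern (the cursor here is just an offset for clarity; a real system would return an opaque, index-backed cursor):

```python
def list_items(items, cursor=0, page_size=100):
    """Return one page of results plus the cursor for the next page.

    A next_cursor of None signals the end of the result set; callers
    never assume the whole set fits in one response."""
    page = items[cursor:cursor + page_size]
    end = cursor + page_size
    next_cursor = end if end < len(items) else None
    return page, next_cursor
```

    Callers then loop until the cursor comes back None, which works identically whether the result set holds 5 rows or 5 billion.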
  • furyofantares 13 days ago
    > It argues that arbitrary limits on the number of instances of a particular type of data or structure should not be allowed.

    Either this is a misuse of "arbitrary" or it's not really arguing for no limits beyond 1. In the Carmack example from another comment, if you know your solution isn't going to scale well beyond 1000 items, that's not an arbitrary limit, and it's not 0 or 1 either. When we choose not to allow users to back up their hard drive in the username field of their account, that's not an arbitrary limit.

    • xigoi 13 days ago
      It is arbitrary. Why can it handle 1000 items, but not 1001 items?
  • Kim_Bruning 13 days ago
    Am I the only person who sometimes makes an exception for "two" ?
    • teo_zero 13 days ago
      I'd say that 2 is indeed quite common in Computer science. In graphs, for example, each edge has 2 ends. In an abstract syntax tree, a binary operator has 2 children. All raster images are based on 2 dimensions. A "for each" loop often involves 2 variables: the iterator object and an index.

      In all such cases it makes sense that a language/ construct/library special-cases 2 elements without generalizing for an arbitrary number of elements.

      By the way, 3 is much less common, except in some specific domains, like 3D vectors, UNIX permissions (user, group, other), etc.

      • xigoi 13 days ago
        Usually you wouldn't store these things in a collection, but as separate fields of a record type, so the rule doesn't apply.
        • teo_zero 12 days ago
          The principle applies to all kinds of structures. For example a common way to implement a generic tree that respects this rule is to model each node with exactly one parent and any number of children. My statement instead is that you may have legitimate reasons to want exactly one parent and from zero to two children, but not more.
          • xigoi 12 days ago
            What would be the use case of such a tree?
            • teo_zero 11 days ago
              An Abstract Syntax Tree, where each node can have zero children (e.g. a literal value), one child (unary operator like 'Not') or two children (a binary operator like +).
              • xigoi 11 days ago
                And what exactly is the benefit of restricting such a tree to two children? Why not just allow an arbitrary amount of children and then only use it with 0,1,2? After all, you might later want to extend the language.
                • teo_zero 11 days ago
                  Allowing an arbitrary number of children doesn't come for free. You might need a linked list, which is a waste of memory and access time if you always just use up to two elements. What most parsers do is to provide exactly 2 slots, aptly named left and right, and use zero, one or both of them.
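                  A sketch of that layout in Python — fixed left/right slots, with literals using zero of them and unary operators one (the node shape and operator set are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Node:
    kind: str                   # "lit", "neg", "+", or "*"
    value: float = 0.0          # used only by literals
    left: "Node | None" = None  # unused for literals
    right: "Node | None" = None # unused for literals and unary ops

def evaluate(n: Node) -> float:
    if n.kind == "lit":
        return n.value
    if n.kind == "neg":  # unary operator: one child slot in use
        return -evaluate(n.left)
    a, b = evaluate(n.left), evaluate(n.right)  # binary: both slots
    return a + b if n.kind == "+" else a * b
```

                  Every node pays for exactly two pointers, with no per-child list bookkeeping.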
      • Kim_Bruning 13 days ago
        Oh, you're right, there are a few other exceptions for other numbers too!
    • alex_sf 13 days ago
      The argument against two would be: if you can do two, why can you not do three?
    • GauntletWizard 13 days ago
      Wherever I need "Two" I usually find that what I actually need is a one and a many; there's not a master and a replica, there's one master and many replicas. There's not two places that the data is when doing a raid restripe, there's many places the data probably is and one place it should be.
    • ChainOfFools 13 days ago
      I seem to recall that there are certain languages (human languages) that have cases specifically for talking about pairs of objects, situated semantically between singular and plural. I think one of these is an archaic form of Greek, maybe ancient or Hellenistic.
      • guavaNinja 13 days ago
        In Arabic there is the mothanna مثنى (pair) case.
      • tough 13 days ago
        Asked ChatGPT

        Yes, you are correct! Ancient Greek has a dual number, which is used specifically for referring to two people or two objects. The dual is a distinct grammatical number, separate from singular and plural.

        In the ancient Greek language, nouns, pronouns, adjectives, and verbs all have different forms for singular, dual, and plural numbers. The dual number is used when referring to two things or two people, while the singular number is used for one thing or one person, and the plural number is used for three or more things or people.

        The dual number is no longer used in modern Greek, but it was an important feature of the ancient Greek language, which had a rich and complex system of inflectional morphology. Other languages that have a dual number include Old English, Old Norse, Lithuanian, and Slovenian, among others.

    • playingalong 13 days ago
      It has always "bothered" me that AWS allowed for up to two API keys per IAM user.
      • jdreaver 13 days ago
        That's probably so you can rotate your keys without downtime.
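        A sketch of why two slots suffice for zero-downtime rotation (a hypothetical helper, not the AWS API):

```python
class KeyPair:
    """Two credential slots: issue the new key before revoking the old one,
    so there is never a moment with zero valid keys."""

    def __init__(self, active: str):
        self.keys = {active}  # at most two keys valid at once

    def rotate_begin(self, new: str) -> None:
        if len(self.keys) == 2:
            raise RuntimeError("finish the previous rotation first")
        self.keys.add(new)  # old and new are both accepted now

    def rotate_finish(self, old: str) -> None:
        self.keys.discard(old)  # back to a single active key

    def is_valid(self, key: str) -> bool:
        return key in self.keys
```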
    • mmphosis 13 days ago
    • jerf 13 days ago

      But I have the decency to feel bad and leave an apologetic comment.

    • adhesive_wombat 13 days ago
      A ping-pong buffer certainly seems like generalising to a ping-pong-pang-peng-pung-p(infinity)ng buffer would be overkill.
    • hprotagonist 13 days ago
      i fairly regularly want pairwise, yep.
    • grensley 13 days ago
      Two is quite relevant for relationships
      • layer8 13 days ago
        You can have n-ary relationships. Relational databases are built on that.
        • grensley 13 days ago
          What are the columns in a relational table like that?
          • layer8 13 days ago
            Not sure what you mean. A table with N columns corresponds to an N-ary relation. The rows contained in the table are the tuples contained in the relation. Table = relation. That's why it's called a relational database.

            See also this example in Wikipedia of a ternary relation:

            Another example: If you have a table (SSN, Name, DateOfBirth), then that corresponds to the relation "person with SSN x has name y and date of birth z". In that case it's a relation between SSNs, names, and dates.

            • grensley 13 days ago
              Very fair, thanks for the example.
      • ChickeNES 13 days ago
        Well, if you ignore polyamory and polygamy...
  • zarzavat 13 days ago
    For the complete opposite point of view, see Limits in SQLite:
  • hermannj314 12 days ago
    Occasionally, other numbers have interesting properties:

    1. No closed form solution for polynomials of degree 5 or larger.

    2. 4-color theorem for painting planar graphs.

    There are lots more counter-examples, where the right number is not 0, not 1, and probably not infinite.

    If you were making a paint the map algorithm - your data structure should assume you need 0, 1 or infinite colors, instead of no more than 4? Is that the point of the principle?

  • falcor84 13 days ago
    Apologies for the spoiler, but I just wanted to mention that this rule plays a pivotal role in Asimov's highly recommended "The Gods Themselves"

  • at_a_remove 13 days ago
    I've had a different take on this that goes "Zero, One, Many." And that when you end up really using your software, the things that were "one" often become "many," and the unthinkable zeroes frequently became ones.
  • iamnotsure 13 days ago
  • dclowd9901 13 days ago
    I believe some Slavic languages have grammatical numbers beyond absent, singular and plural, like a large or small plurality.

    We may ask for an ideal in software but as long as people exist, exceptions will exist.

  • cleanchit 13 days ago
    > In real-world software design, violations of this rule of thumb are common. For example, the FAT16 file system imposes a limit of 65,536 files to a directory.[3]

    Isn't it because 2^16 == 65536?

  • IIAOPSW 13 days ago
    Wow. Just + and - i away from completing the Riemann sphere.
  • bandrami 13 days ago
    Arbitrary limits are fine as long as they are powers of 2.
  • 2-718-281-828 13 days ago
    It's my rule of thumb that the only numbers one should find in code are 0 and 1. Anything else is a magic number and needs to be handled accordingly.
    • xigoi 13 days ago

      from math import sqrt

      def solve_quadratic_equation(a, b, c):
          d = b * b - 4 * a * c
          if d >= 0:
              return [(-b + sqrt(d)) / (2 * a), (-b - sqrt(d)) / (2 * a)]
          return []
      How would you deal with the “magic numbers” here?
      • 2-718-281-828 12 days ago
        good question. but if you look at the "magic" concept from a higher level then you'll find that this function of course is based on magic numbers, as it itself is a _magic function_. it doesn't solve equations in general but specifically _quadratic_ equations. a more extreme and obvious example in the same vein would be:

          def print_every_second_for_a_minute():
        having said that - there are always exceptions.
        • xigoi 12 days ago
          How would you define a “magic function”?

          Solving equations in general is not possible, so if you're trying to say that this function is too specific, good luck generalizing it.

          • 2-718-281-828 12 days ago
            I talk about magic numbers from an abstract, theoretical perspective. from that perspective it's not relevant that you can't generalize an equation solver. but I would bet that if it was possible then there would be no numbers except for 0 and 1.


            you could also think of it this way. a generalized equation solver would necessarily derive the solution in a complete fashion. for someone analyzing the code it would become clear what's going on. your specialized equation solver otoh just has some "magical" solution in there.

  • readthenotes1 13 days ago
    Zero one infinity (many) is a good rubric for test driving, and for thinking how close to infinity your system can/should actually support
  • nixpulvis 13 days ago
    Weak wiki article.
  • antiquark 13 days ago
    RIP the byte.
    • layer8 13 days ago
      It’s just one octet.