11 comments

  • j-krieger 562 days ago
    Putting the discussion about C++ memory safety and concurrency issues aside, I am pretty astonished that C++ continues to include bugs where you have two actual language features - not library functions - which on their own work perfectly fine, but combining the two makes the entire thing silently implode.

    I am trying very hard to imagine any other higher level language where two distinct language features, each used correctly in their own way, can not be safely combined. The only thing that comes to mind is Python's default parameters and passing an empty list as one.

    C offers total freedom in exchange for providing ways to shoot yourself in the foot. All of its "dangerous" features can be used correctly for some benefit. With C++ it feels to me like that benefit is often lost entirely, and what's left is just incorrect code that for some reason gets past the compiler without error.
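
    To make that concrete with the case at hand, here is a minimal sketch of the combination in question. It assumes a lazily started task<T> coroutine type and some awaitable something(); both are placeholders, not real library names:

        // Sketch only: task<T> and something() are assumed, not shown.
        task<int> spawn() {
            // The closure object is a temporary that dies at the end of this
            // full expression, but the coroutine frame only holds a reference
            // to it, so the capture 'x' dangles once the coroutine resumes.
            return [x = 42]() -> task<int> {
                co_await something();
                co_return x;   // reads a member of the already-destroyed closure
            }();
        }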

    • selimnairb 562 days ago
      I have returned to using C++ over the last six months after not really using it since university 20+ years ago. I mostly write Python these days, Java in the past, but have also recently written some moderately complex and unsafe C (e.g., making a parser using flex and bison; doing lots of void* pointer arithmetic to get around not having runtime reflection). I am astonished at the complexity of modern C++. I’ve studied Rust and used some Swift (on an iOS app for a client a couple of years back), and I struggle to “correctly” write code in C++. I feel as if C++ is collapsing under its own weight.
      • curtis3389 562 days ago
        I programmed in C++ all through college, and I get the same feeling watching its development.

        TR1 brought the language to a pretty good place with smart pointers, and at first C++11 looked awesome with its type inference features and range-based for loops.

        But, also in C++11 were tons of weird gotchas with new and old features. The one that comes to mind is uniform initialization. It was supposed to fix the issues with calling regular constructors, so you could just use the new syntax and not worry.

        But, they also added initializer lists, and those have issues with uniform initialization, so now you not only need to know all the pre-existing issues with the old syntax, you need to know the new syntax and its issues.
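
        The textbook illustration of that gotcha (my own example, not from any particular codebase):

            #include <vector>

            std::vector<int> a(10, 1);   // ten elements, each equal to 1
            std::vector<int> b{10, 1};   // two elements, 10 and 1, because the
                                         // initializer_list constructor wins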

        This all comes back to C++ refusing to break backwards compatibility, which is great and all, but at this point they've made a language that is only good for job security.

      • qwertox 562 days ago
        I started programming with C and C++ in the 90s and I absolutely loved it. Then big projects with multithreading started to blow up because it became too complex to manage. In 2010 I started with Python and absolutely fell in love. Since then I've been using it and never regretted it, until I had to upgrade servers and with that migrate from 2.7 to 3.8, and now again to 3.10. There are issues with these migrations which aren't negligible if you're running over 50 different servers, doing different things, where suddenly for no reason at all a function parameter gets removed from some asyncio function, or all the virtualenvs are no longer usable because they were set up with 3.8, which got replaced by 3.10. This makes upgrading an OS very painful as it requires many weeks of preparation, which is something you first need to become aware of. I now have tooling which monitors every running Python process and prints out which modules are used and other things, because there was so much to look after. Basically each Python script has to register its execution at a central server, but because it can be down, they have to use a local caching proxy server for this.

        Then again, two weeks ago I started learning Rust and I'm absolutely blown away. I'm completely absorbed by it. I'm now thinking of migrating some of the Python projects to Rust so that I can have a single executable which I can then copy around without having to care about virtual environments. I mean, I loved C and C++ for their precision, and love Python for its liveliness (I use `reload` a lot so all the projects are living things which support hot reloading), but Rust seems to hit the sweet spot between C++ and Python. Between performance and few dependencies due to the lack of an interpreter, and somewhat easy-to-use threading and async/await with Tokio, with all the safety guarantees Rust offers. I'm currently in heaven with this and I hope it lasts.

        • eru 561 days ago
          It's probably best not to run Python bare-metal on your operating system. Or at least not to rely on your operating system's Python.

          Use a Python package manager, and/or run your Python processes in containers.

          That way you can upgrade your OS without having to upgrade all Python programs at the same time.

          • qwertox 561 days ago
            Thank you for bringing this approach onto my radar. It sounds very promising for solving the OS upgrade issue, but it also lets me finally use the latest version for my projects. I will read up on this.

            Are there some recommended keywords or things to avoid which I could use to orient myself? Like "use deadsnakes" or "avoid deadsnakes"?

            • eru 561 days ago
              I suggest you look into Python's Poetry package manager.

              There's some ways to keep multiple isolated Python environments around on the same OS (and Poetry can help with that, too). But I would suggest that you just stick every Python program in its own dedicated docker container, if you can get away with the overhead. (There are some ways to make rather slim containers.)

              Of course, it doesn't have to be docker. Any other containerisation or VMs also work.

              In general, I found it a good idea to keep the host operating system as small as possible, and stick everything into containers that you can upgrade separately.

      • synergy20 562 days ago
        what exactly are they? I'm also back to modern c++ but I plan to only use a subset of the superset to make life easier. for example I don't need to write a c++ library, so many of the new (template metaprogramming) features I don't need to pay attention to, at least for now.

        what I really need are RAII, smart pointers, the STL library and algorithms basically, plus simple OOP use cases, those combined are not very complicated to me.
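
        roughly the kind of code I mean, as a small sketch of that subset (nothing beyond RAII, smart pointers and one STL algorithm):

            #include <algorithm>
            #include <memory>
            #include <string>
            #include <vector>

            struct Widget { std::string name; };

            int main() {
                // RAII: the vector and the unique_ptrs clean up after themselves.
                std::vector<std::unique_ptr<Widget>> widgets;
                widgets.push_back(std::make_unique<Widget>(Widget{"a"}));
                widgets.push_back(std::make_unique<Widget>(Widget{"b"}));

                // An STL algorithm instead of a hand-written loop.
                auto it = std::find_if(widgets.begin(), widgets.end(),
                                       [](const auto& w) { return w->name == "b"; });
                return it != widgets.end() ? 0 : 1;
            }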

        • intelVISA 562 days ago
          > what I really need are RAII, smart pointers, the STL library and algorithms basically, plus simple OOP use cases, those combined are not very complicated to me.

          That's just about what anyone needs, I fear most people are simply too deep in their cargo cults these days to reason about a complex lang appropriately.

          • gpderetta 561 days ago
            Exactly. In fact these days you need less metaprogramming, not more.
            • pjmlp 561 days ago
              Agreed, I like to watch metaprogramming talks in a similar vein like solving chess match puzzles, but I hardly use any of that in my own C++ code, and tend to stay away from clever libraries.

              Best part of latest improvements is that most common mortals don't have to deal with SFINAE, enable_if or tag dispatching.
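
              For instance (a small sketch of my own, not from any talk), where older code reached for enable_if, a C++20 concept says the same thing far more directly:

                  #include <concepts>
                  #include <type_traits>

                  // Pre-C++20: SFINAE via enable_if.
                  template <typename T,
                            typename = std::enable_if_t<std::is_integral_v<T>>>
                  T twice_old(T x) { return x + x; }

                  // C++20: a constraint that reads like what it means.
                  template <std::integral T>
                  T twice(T x) { return x + x; }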

      • TremendousJudge 562 days ago
        Similar story here. I'm constantly wrestling with the tool more than the actual problem I should be solving
      • lloydatkinson 561 days ago
        All of these sorts of reasons are why I skipped learning C++ and don’t intend to use it. I learned C and then C#. I can’t imagine using C++ after using C# - modern C# and .NET have loads of performance and native related functionality including pointers, Span, AOT, etc.

        From my outside perspective it seems absolutely chaotic and simply unmaintainable.

        • pjmlp 561 days ago
          Kind of, except for the small detail that WinDev loves their C++, and leaves to the .NET community the burden to create libraries, or even Windows Runtime components for their APIs, and also has Windows integrations that don't allow for COM implementations in .NET, forcing some of us to keep our C++ skills up to date.
    • eloff 562 days ago
      I've always thought of C++ as the most complex programming language I know. With each revision it becomes more complex. I gave up and just moved on to Rust a year and a half ago. I have no regrets.

      C++ has not quite joined Perl, PHP, and Java as languages I will not work with anymore, but it's the next logical candidate.

      • thrown_22 562 days ago
        This is what annoys me when people say C/C++. C++ is a mess that no one could ever learn. C is a happy little language that lets you move memory around in anything from a micro-controller to a GPU cluster. C is not getting replaced by Rust because Rust can't fit in most of the places C lives. C++ on the other hand can't die fast enough.
        • tialaramex 562 days ago
          Don't overestimate how big Rust needs to be, for a lot of specialist micro controllers the inertia is with C because the vendor ships a C SDK. But those SDKs are often garbage, which means if one or two enthusiasts make a good Rust SDK for your preferred controller suddenly Rust has the upper hand.

          For really tiny stuff Rust won't go there. If you program an 8051 clone, Rust isn't interested in your problem; ideas like "generic programming" and "pattern matching" don't make a whole lot of sense when only 128 bytes of RAM is accessible. But is C really the right tool for that environment either?

          Once you've got 64kB of RAM, that's definitely potential Rust territory.

      • tasubotadas 562 days ago
        No idea why you decided to lump Java together with these abominations.

        Java by itself is an extremely simple language.

        • tonyarkles 562 days ago
          For me, one of the biggest problems with Java is its simplicity. Because of the lack of "power tools", you end up with things like dynamically assembling class instances using either XML or magic (see Spring), and adding functions at compile time (see Lombok). The last time I worked on a Java project, it was practically impossible to predict how the system would actually behave at run-time; my joke was that a lot of these tools took issues that could have been detected at compile-time and instead converted them into run-time errors.
          • renewedrebecca 562 days ago
            I've worked just about every day with Java for the past 22 years, and you've hit the nail on the head with the problems.

            I really prefer environments where you could start with main() and figure out the behavior from there, and things like Spring make that impossible.

            Really, if Java had only had lambdas and lisp-y macros from the beginning, we wouldn't have these problems.

            • bheadmaster 561 days ago
              > I really prefer environments where you could start with main() and figure out the behavior from there

              This is one property of Go that I really like. Because of the lack of constructors/destructors/magic functions and all that jazz, the only functions executed in a Go program are functions explicitly called through main().

            • moring 561 days ago
              IMHO, in retrospect, runtime access to annotations was a mistake. Possibly the level of reflection it offers, too. These two make all that code possible that cannot be understood statically.

              And yes, lambdas and macros would probably have been a good replacement for what those two were originally meant to achieve.

              • jcelerier 561 days ago
                It made a ton of software possible that wouldn't even exist otherwise. We don't have runtime access to annotations in c++ and it's a constant PITA as soon as someone wants to make something as basic as generating a GUI automatically from a data model, which is maybe 50% of what programmers do all day in the world
                • moring 561 days ago
                  I first wanted to write: Generating a GUI can happen as a fully automated build step, without runtime reflection.

                  But then I realized that such build steps re-introduce the same problems as runtime reflection does: You cannot understand the code statically.

                  So, yeah, maybe reflection and annotations are simply an overused hammer that makes everything look like nails... It feels like every library goes out of its way to use annotations for everything, completely defeating type-safety and static analysis. When it should actually be like, "is there really no way we can solve this without annotations and reflection".

                  • jcelerier 560 days ago
                    Note that what I'm talking about is perfectly doable without any runtime reflection at all, only through static compile-time reflection. E.g. attributes in c++ are a purely compile-time thing - if there was an API to do stuff with them it would also be a compile-time one. The code isn't meaningfully different though.

                    But yeah, "fully automated build step" generally looks like "let's parse enough of the language so that we can implement our own custom reflection system" which leads to every project having an incompatible one and in practice much more pain (and macros) for the developers than if it was a standard one, e.g. Qt, Unreal Engine and so many others have their custom homegrown reflection scheme and there are a thousand libs on GitHub trying to reimplement it: https://github.com/topics/reflection?l=c%2B%2B

                    It's exactly like type systems being Turing complete: if yours isn't, someone somewhere will be making a language that compiles down to yours with one because of how useful of a property it can be in some cases. All of this is not accidental complexity, the alternative of not having them is much, much, much less productive in practice. My personal experience writing code with reflection is that at the cost of one complicated base library, the app code is literally 10x faster and simpler to write, and the huge majority of the time is spent in app code, not library one.

          • tasubotadas 561 days ago
            I am sorry to hear that you have experienced such a sorry bunch of code-bases and frameworks but that doesn't define the language or even an ecosystem.

            I've seen worse magic crap happening with Ruby and Python with their auto-magic method invocations.

            • tonyarkles 561 days ago
              > but that doesn't define the language or even an ecosystem

              I want to agree, but at least locally (as in geographically), these technologies seem to be heralded as fantastic. Maybe there's some kind of Spring Bubble around here, but on the main Maven site, Lombok shows up as the #16 most popular package (with primarily logging and language packages above it), and Spring isn't too far behind.

              > Ruby and Python with their auto-magic method invocations

              It's probably worth separating those two... as far as the functionality goes, Ruby and Python both have the capability of similar magic, but in my experience with popular frameworks, Ruby takes it to a whole other level while in Python it's generally discouraged.

              I realize it's all anecdata here, but in my experience working with various companies and platforms, the Ruby magic generally just lives behind the scenes and Just Works. The Java magic would often turn into runtime exceptions, and the people who tried to do Ruby-like magic in Python would have similar difficulties. All of that revolved, primarily, around the ecosystems around the languages.

        • eloff 562 days ago
          In theory Java is not so bad. I just have never enjoyed working with it or with the code Java developers end up creating. Kotlin is reasonably pleasant. I've sometimes used it when the JVM was a hard requirement.
        • marginalia_nu 562 days ago
          Java is like one of those iceberg memes with a lot of byzantine JVM shit lurking at the depths.
      • bugfix-66 562 days ago
        I used C++ full-time for more than a decade before finally abandoning it in disgust.

        Now I use Go for most purposes, and C or CUDA when appropriate.

        Good riddance.

      • spoiler 562 days ago
        I've not worked with PHP for over 8 years now, but I hear it's gotten much better recently.

        It still has issues that are commonly associated with hosting provider configuration/control over the runtime, apparently.

        (sources are anecdotal; two friends who've had the mental fortitude to stick with the language for so long)

        • jmt_ 562 days ago
          PHP 7 & 8 introduce lots of improvements. I used PHP 5 when I got into web dev and remember moving to Python as soon as I could. At work I've had to use PHP 7 and have been very pleasantly surprised at how solid it is. Now that I have more experience than in my PHP 5 days, it's become apparent that PHP is very clearly suited for web work and the built-in functions offer many conveniences for such work that simply aren't available in other languages (granted, other languages weren't built for the web like PHP was). I've been surprised by my speed in PHP and I chalk most of that up to many helpful standard library functions. Even though PHP has a lot of old/compat functions still present, I'm always surprised by how little that actually ends up affecting my productivity. So it's not my choice for greenfield projects but I'm not nearly as averse to it now after working with modern-day PHP for a bit.
          • jamesfinlayson 562 days ago
            Agreed - I've worked on a few PHP 5 era projects with hand-rolled frameworks and more recently I've worked on a few PHP 7/PHP 8 projects using proper frameworks and it's night and day.
    • dkackman11 562 days ago
      Agree. Modern C++, and discussions around it, become more esoteric with every revision. As software engineers we should spend our time understanding the esoterica of the problem domain, not the toolset.
      • hdjjhhvvhga 562 days ago
        I'm of the same opinion. Frankly, when Go first appeared, I felt we would finally get something almost as simple as Python and almost as fast as C. Unfortunately not everything was as nice as I had expected. Still, in that respect it's better than Rust as it makes it easier to focus on the problem rather than the language.
        • _wldu 562 days ago
          The complexity of C++0x is one reason Go was created.

          "For me, the reason I was enthusiastic about Go was just about the same time we were starting on Go, I read (or tried to read) the C++0x proposed standard. And that was the convincer for me." - Ken Thompson

          17:45 mark here: https://www.youtube.com/watch?v=sln-gJaURzk

          • tialaramex 562 days ago
            For reference for anybody too young to remember: C++0x is what you'd have called what became C++11 back when the committee thought it might happen in 2009 or, worst case, 2010.

            The fact that their "2009" standard shipped only in 2011, after ripping out features everybody agreed were good yet never seemed finished, is why they moved to their current "train" model where there's a new C++ every three years, like it or not. The train will leave, if your feature wasn't ready well there's another train in three years.

        • pkolaczk 562 days ago
          > Still, in that respect it's better than Rust as it makes it easier to focus on the problem rather than the language.

          Quite the opposite. In Go/Java I find myself frequently thinking how to adapt my problem so it can be expressed in a very limited set of language features. I want to build a house but I only get a hammer and some nails. Works nice if I want a wooden house but not so much if I want to use bricks and concrete. In Rust I get a giant toolbox with almost everything and I just pick the right tool for the job. So although I had to learn more tools initially, later I can focus more on the job, not the language, because the language adapts to the problem. And the final result is typically also better in terms of quality, performance, etc.

        • xyzzy4747 562 days ago
          I personally still don't understand the benefit of Go vs Java? They seem to accomplish similar things with similar amounts of boilerplate.
          • tomohawk 562 days ago
            Having used both professionally (several years each), Go is massively simpler than Java, at least for the problems we tackle. We sometimes will get a new dev on the team from Javaland, and the biggest thing for them is unlearning all of the unnecessary complexity. YMMV.
        • stark98 562 days ago
          > Unfortunately not everything was as nice as I had expected.

          Could you elaborate? I'm considering switching to Go for the same exact reasons you listed.

          • hdjjhhvvhga 561 days ago
            I should have clarified I was one of early adopters and my main gripe then was mainly about its performance, not the syntax. I mean it was okay, but still not up to my expectations. But that was over a decade ago! Next time when I have the luxury of choosing the language for a new project I'll definitely take Go into consideration. It has its quirks but overall it's quite simple and manageable - you can easily understand code written by someone else which is a huge advantage.
        • bheadmaster 561 days ago
          > Frankly, when Go first appeared, I felt we would finally get something almost as simple as Python and almost as fast as C. Unfortunately not everything was as nice as I had expected.

          Would you care to elaborate what exactly is not as nice as you had expected? I'm really into Go and to me it really does seem as good as expected. Just curious about what made it suck for your usecase.

          • eru 561 days ago
            Go's design always struck me as really hypocritical. At least its initial design.

            Go's designers get to use tuples for their favourite use case: returning from a function.

            You as a user of the language don't get to use tuples for anything else.

            (And using tuples to indicate failure is really, really silly. You'd want algebraic datatypes for that. Ie a beefed up enum, not an ad-hoc struct.)

            Go's designers get to use generic functions and datastructures for their favourite use cases. Eg the infamous `make` is sort-of generic, and Go's arrays, maps and channels etc are generic.

            • bheadmaster 561 days ago
              > Go's designers get to use tuples for their favourite use case: returning from a function.

              I wouldn't call multiple return values a "tuple". I understand where you're coming from, since Python's multiple return values are implemented with tuples, but still that point sounds like saying that passing multiple parameters to C functions is "a tuple".

              That being said, Go's "simplicity" in design is not only about interface simplicity - it's a lot about implementation simplicity. I imagine that adding tuples would cause a whole lot of complexity that they'd have to deal with. Since Go doesn't have a concept of immutable data types (except consts), tuples would effectively be just variable-typed constant-size slices, which are already implementable as arrays of empty interfaces, but have a runtime overhead...

              I hope you see what I'm getting at. Creating a language is complex and has a lot of tradeoffs. Then again, I see where you're coming from. I suppose it's a question of whether or not the tradeoffs (the "hypocritical design") are worth the good parts (simplicity of language, which leads to fast compile times, fast runtime, etc.). For me, personally, they are.

              Thanks for the input.

              • eru 561 days ago
                > [...] like saying that passing multiple parameters to C functions is "a tuple".

                You intended this as a reductio ad absurdum, but you are entirely right. That's how it's handled in eg Haskell.

                > I imagine that adding tuples would cause a whole lot of complexity that they'd have to deal with. Since Go doesn't have a concept of immutable data types (except consts), tuples would effectively be just variable-typed constant-size slices, which are already implementable as arrays of empty interfaces, but have a runtime overhead...

                I'm not sure that would be a good way to implement tuples. In fact, I suspect it's probably close to the worst way to implement them?

                I'd imagine you'd want to take the machinery you have for structs, and adjust it so that it can deal with anonymous structs with anonymous fields identified by position. (But still preserve all the static typing, instead of throwing your hands up and going with 'variable-typed'.) The runtime overhead would be more or less exactly the same as for structs.

                In general, I see tuples as a syntactic sugar over structs and expect them to be compiled like that (or better) in a statically typed language; instead of representing them as some kind of weird slice.

                Theoretically, you could probably even implement tuples-via-structs as a preprocessor.

                > I imagine that adding tuples would cause a whole lot of complexity that they'd have to deal with.

                Yes, so instead of abstaining from tuples, because of these complications, they added a special case for their favourite use case.

                Have a look at OCaml for a relatively simple language with less design hypocrisy, and fast compile times and fast runtimes. (Compared eg to the much more complex Haskell.)

              • hdjjhhvvhga 561 days ago
                > Since Go doesn't have a concept of immutable data types (except consts)

                I believe this is incorrect. Strings are immutable, for example. You could also say the same about pointers and maybe a few other data types.

                • bheadmaster 561 days ago
                  You are right.

                  Now that I've had more time to think about it, I think the problem of Python-like tuples in Go is that they require the Pythonesque everything-is-a-reference paradigm in order to function the same way, otherwise they just act as heterogeneous arrays in Go (which exist as arrays of empty interfaces, with a slight runtime cost). Go is mainly a value-oriented language like C, with the exception of built-in container types and strings (and maybe a few others that slipped my mind).

                  I suppose that adding fixed-type pass-by-value tuples would essentially be reimplementing structs. Go doesn't like duplicating functionality.

                  • eru 561 days ago
                    See my other comment: tuples should be thought of as syntactic sugar over structs, not arrays.

                    Whether to pass by reference or value and other design decisions should probably be taken (as much as practicable) from the design decisions for structs in your language.

      • thrwyoilarticle 562 days ago
        I think this is more of a meme than reality. In the past, C++ needed manual memory management and writing ctors over and over with myriad footguns. Now someone can contribute to a C++ codebase barely knowing what a heap or reference is and the compiler will create decent code.
    • kllrnohj 562 days ago
      > C offers total freedom in exchange for providing ways to shoot yourself in the foot. All of its "dangerous" features can be used correctly for some benefit.

      C is pretty archaic in its feature set, so it's not surprising it doesn't have many language features that don't work well together since it doesn't really have any language features in the first place. That's not necessarily a good thing, either, but that's a different discussion.

      > With C++ it feels to me like that benefit is often lost entirely, and what's left is just incorrect code that for some reason gets past the compiler without error.

      That's an odd statement to make here. Yes this is an unfortunate combination that results in memory leaks, but how is C's "everything is a memory leak" any better? 100% broken is better than 10% broken?

      That said, I do wish C++ hadn't added coroutines in the first place. It feels like a weird addition to the language. Then again, I also wish Rust hadn't added them, either. They are nightmarishly complex in a non-GC'd language, and the benefits are far from clear. The "colored function" problem remains a hotly debated topic, and low-level, performance-focused languages (like C++ & Rust) feel like the very wrong place to hammer on those topics.

      • dtgriscom 562 days ago
        > Yes this is an unfortunate combination that results in memory leaks, but how is C's "everything is a memory leak" any better? 100% broken is better than 10% broken?

        I'd rephrase "100% broken" as "always does what it says it will", and "10% broken" as "almost always does what it says it will." I'd prefer the former 99% of the time.

        • eru 561 days ago
          Well, C still has plenty of undefined behaviours, that may or may not blow up on you.
      • notacoward 562 days ago
        > C's "everything is a memory leak" any better?

        That's a bit hyperbolic, unless you apply the same phrase to any language (including C++) without a full tracing garbage collector or equivalent. X leaking a resource is not the same as X returning a resource that the caller is explicitly responsible for freeing. Further, C in practice is not as bad as you portray. Maintainers of large open-source projects, of which I used to be one, often define their own memory-management facilities and rules which are just as good as C++'s in preventing leaks for most code. I think they shouldn't need to nowadays, and that C would be a poor choice for a new project at any level, but that's easily shown without resorting to hyperbole.

      • snickerbockers 562 days ago
        >but how is C's "everything is a memory leak" any better?

        now there's a spicy take, i've been programming in C since 2005 and i don't think i've ever heard about this "everything is a memory leak" principle.

      • j-krieger 562 days ago
        > That's an odd statement to make here. Yes this is an unfortunate combination that results in memory leaks, but how is C's "everything is a memory leak" any better? 100% broken is better than 10% broken?

        Because if you use features that may leak memory, you can still trust yourself as a programmer. If you can't even trust the language / compiler, I would consider that worse.

        > Then again, I also wish Rust hadn't added them, either

        I agree.

    • hellcow 562 days ago
      > I am trying very hard to imagine any other higher level language where two distinct language features, each used correctly in their own way, can not be safely combined.

      Handling JavaScript exceptions as opposed to promise rejections in async code is a continual source of pain. They don’t work together. If you do try/catch on async code but forget to call await, catch will never be hit and your program might crash. There appears to be no linter available to detect this. Unless I’ve completely missed something, the interaction between these two features is a terrible design.

    • masklinn 562 days ago
      > I am trying very hard to imagine any other higher level language where two distinct features, each used correctly in their own way, can not be safely combined. The only thing that comes to mind is Python's default parameters and passing an empty list as one.

      And even then it's not a bug per se, at best a misfeature (it can be useful as a performance hack, although the performance improvements to global and builtins lookups in 3.11 might finally put that to rest).

    • avgcorrection 562 days ago
      There are two sorts of languages: languages with a coherent feature set and those that people actually use. Wait hold on, that doesn’t apply here…
    • 0xbadcafebee 562 days ago
      I find it unreasonable that every language would have all of its features always compatible. Why can't we just say "don't use X and Y together" ? It might solve a ton of problems. All you have to do is have a big red warning in the manual that says DON'T COMBINE X AND Y, and a language feature that lets you turn on X and turn off Y.
      • j-krieger 562 days ago
        > Why can't we just say "don't use X and Y together"?

        Because that goes against the most fundamental idea of programming languages that you can build larger abstractions by combining basic building blocks.

        • travisgriggs 562 days ago
          Abstractly speaking, what if we consider a "big ball of spaghetti" and "working program" to both be abstractions of just "a program". Now it fits under the larger abstraction umbrella still. :D

          All other things aside, C++ seems to be the language "too big to fail" and so it just keeps getting more and more features added to keep it competitive.

        • lionkor 561 days ago
          It's okay to say "but you can't combine those two". It's also okay to point out that not every feature should be combinable with every other feature, and that needing to do so is likely an indicator of terrible (user-side) software design
          • eru 561 days ago
            Then at least the compiler should complain, when you try to combine incompatible features.
        • 0xbadcafebee 562 days ago
          A half dozen languages already do this and abstractions still work
      • CJefferson 562 days ago
        Who reads the "manual of C++" -- and what even is that, the standard document? (Certainly no one should read that for learning.)

        It would be just about OK for the compiler to refuse to compile X and Y together.

        If you "switched off" X, then in C++ (where code is in headers which are just concatenated with your code), this would effectively split the language, as you also couldn't use libraries which use X in their headers.

      • extropy 562 days ago
        Sure, here is your Lego set. But do not put white bricks next to red ones or it will explode!
      • jejones3141 562 days ago
        Orthogonality has been recognized as desirable at least since the days of Algol 68. Once you couldn't pass a struct as a parameter in C. Now you can.
  • scatters 562 days ago
    This is fixed in C++23 by adding explicit object parameters to the member function (and lambda) syntax:

        [x](this auto) -> future<T> {
            co_await something();
            co_return x;
        }
    
    The lambda capture is copied into the coroutine frame on launch, meaning that it won't dangle.
    • sitzkrieg 562 days ago
      this looks so far and away from C++ of ye olde it's kinda funny. you could pass that as some new language
      • MathMonkeyMan 562 days ago
        future<c++>
      • jcelerier 562 days ago
        > you could pass that as some new language

        ...why would you think that a new standard isn't a new language?

        • Rebelgecko 562 days ago
          C++ seems to have more churn than most other languages.

          e.g. if you're familiar with C99, you can read through a C17 code base and not even realize you're looking at something different. Even going back to C89 it's not that different. Other than some fancy syntax like comments using //, not much has changed. Sure, the standard library might have threads.h now or whatever, but your code is going to use the same letters on the keyboard. That's similar to other languages I'm familiar with; even across major revisions the operators and syntax don't change much.

          • jcelerier 561 days ago
            Programming language grammars don't have a notion of "change much" - it's exactly the same language or it's another
            • Rebelgecko 561 days ago
              I don't think a grammar (at least the way I've used/written them, but I'm def not an expert) captures things like new headers or functions in the standard library.
            • usrnm 560 days ago
              > it's exactly the same language or it's another

              Have you ever heard of python?

              • jcelerier 558 days ago
                every point release is a new language, yes
        • ijlx 562 days ago
          I mean, between consecutive standard releases not as much changes; it's pretty clear that they're the same language with some slight differences/new features. I think the point is C++ has had so much change over the years it almost seems unrecognizable compared to its earlier forms.

          Almost makes me think of the ship of Theseus. How do we define the point that it's so different it could be considered a different language? Backwards compatibility strikes me as a factor, but backwards compatibility is violated by languages all the time and it's still considered "the same language."

    • jmt_ 562 days ago
      I would have never recognized this syntax as modern day C++ without being told. The language has evolved so much in the last 30+ years that I don't know how anyone is able to keep up unless they've been doing it for decades. As a junior dev, C++ scares the hell out of me - I'd rather wrestle with C, footguns and all, than wrestle with the foot-semiautomatic-rifles C++ provides.
  • jiripospisil 562 days ago
    The discussion continued over at "std-proposals" https://lists.isocpp.org/std-proposals/2020/05/index.php#msg...
  • olliej 562 days ago
    The bug report says that the lambda pointer is dangling and is leaked. I’m not at my computer atm so I can’t tell which this is (by my definition dangling = security bug, leak = annoying; both are bad but the former is obviously much worse). My intuition is that it’s a dangling reference, but I'm not in a position to see codegen (I’d guess something similar occurs in clang if it’s spec behaviour). Given that this is the obvious use case I feel that this would be worth fixing in the spec before widespread adoption.

    That all said it demonstrates the issues with a lot of new c++ features, in that even today new features are added without real thought to memory safety. There were a couple of proposals to at least make it possible to specify lifetime constraints, even if they’d only be warnings initially, but they died because memory safety remains a lower priority than other sexier features.

    Like I really do love c++, but memory safety has to become higher priority than other features for a while.

    • j-krieger 562 days ago
      > That all said it demonstrates the issues with a lot of new c++ features, in that even today new features are added without real thought to memory safety

      I would go much further than that and claim that there are new C++ features added without real thought to other features. The C++ feature list is so incredibly vast that language maintainers / spec writers would need to know every little intricacy of other features to assert that a combination would still produce correct output.

      • AlotOfReading 562 days ago
        The problem is so much worse than that. Combining even the most fundamental features of the language (like UB and concurrency) leads to dark corners where even experts like Boehm aren't sure. The recent TR about safety-critical C++ is basically "don't use C++", which is correct, but not particularly helpful.

        The reality is that the standard simply isn't a good enough reference for what the language does. If you care about that, you need to treat programming in it as an experimental endeavor where you validate that your particular combination of code + toolchain + hardware does what you want. It's nice if the standard goes along for the ride, but not something you can rely on.

      • asveikau 562 days ago
        We're now 11 years from c++11, but I wanted to say lambdas are generally well thought out with respect to how memory management works in c++. The syntax for captures is weird at first, but once you appreciate that what it's doing is playing nice with copy constructors, it's good.
        • gpderetta 561 days ago
          Yes, lambdas are by far one of the best things to come out of the last 20 years of C++ development. They work exactly as a C++ programmer would expect (i.e. they are equivalent to a manually written struct overloading operator()).

          Coroutines, IMHO not so much; they are extremely complex and feel half baked. The problem is that there was so much discussion around them, that it was either that or nothing for the next 10 years.

    • notacoward 562 days ago
      This is what always seems to happen when systems only grow and never shrink. Later-added features don't always play well with each other, even when they're added sequentially and it seems like they should. It's technical debt at the specification level (as opposed to code) and I've seen it bite more projects than I can count.

      Sometimes you just have to start swinging the axe, or else stop wasting effort and plan migration to an alternative. In this case the alternative seems to be Rust, and no, I'm not a Rust advocate. Far from it, in fact. Personally I find Zig and several others more appealing. However, I also feel that the state of the art will advance more quickly if the C++ programmers migrate to any of those alternatives, and Rust seems pretty far ahead as the direct successor.

    • masklinn 562 days ago
      In fairness, features which break one another are not a new thing in C++.

      > they died because memory safety remains a lower priority than other sexier features.

      Is it a priority at all? Or is it on a list titled "¡" under a cabinet in the disused toilets of the third basement?

      I don't know whether C++20 did as well, but C++17 literally added explicitly anti-memory-safety features (std::optional).

      • kllrnohj 562 days ago
        > but C++17 literally added explicitly anti-memory-safety features (std::optional)

        How did you reach the very odd conclusion that std::optional is "anti-memory-safe"?

        • masklinn 562 days ago
          Because the committee took something which is normally a way to lift nullability into the type system and make pointers null-safe, and instead made it… a nullable pointer. With all the absence of safety of a C++ pointer.
          • olliej 562 days ago
            What? I'm genuinely unsure what you mean, is this that

                if (theOptional) { ... }
            
            requires that '...' include

                *theOptional
            
                theOptional->...
            
                .get()...
            
            etc

            but that they are UB _if_ theOptional == None. If that's the case then I agree, wg21 is far too happy to apply UB in cases where the behaviour is (a) specifiable and (b) has no reason to be unspecified either?

            [I very nearly just signed off with my email sign off, which who needs??]

            • kllrnohj 561 days ago
              Yes, *optional and optional-> are UB if the optional is none. But in your code that's perfectly well defined, because you guarded it with the if that made sure it wasn't none in the first place.

              The reason they are UB is because the comments that it's similar to a null pointer are incorrect. std::optional doesn't store a pointer. It stores a value, and it's required that the value isn't constructed for a none optional. That way you don't have to have an empty no-args constructor nor ensure the constructor is side-effect free. And because the WG doesn't control your code, nor does the compiler, the resulting behavior can't be well defined other than by having operator-> have an extra if check & throw. Which if that's the behavior you want, that's what optional's value() gives you.
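
              In code, the options side by side (a sketch, using std::string only as an example payload):

                  #include <cstddef>
                  #include <optional>
                  #include <string>

                  std::size_t unchecked(const std::optional<std::string>& s) {
                      return s->size();          // UB if s is empty: no branch, no check
                  }

                  std::size_t checked(const std::optional<std::string>& s) {
                      return s.value().size();   // throws std::bad_optional_access if empty
                  }

                  std::size_t guarded(const std::optional<std::string>& s) {
                      return s ? s->size() : 0;  // the guard makes the unchecked access well defined
                  }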

          • jcelerier 562 days ago
            > Because the committee took something which is normally a way to lift nullability into the type system and make pointers null-safe, and instead made it… a nullable pointer. With all the absence of safety of a C++ pointer.

            anything that adds a branch to check dereference in release builds is an instant no-go, what do you propose to do instead?

            • kouteiheika 562 days ago
              > anything that adds a branch to check dereference in release builds is an instant no-go

              ...which, as shown by Rust, is actually not true, and is very much a go in the vast majority of cases.

              > what do you propose to do instead?

              It's easier and more natural to access the optional in an unsafe way (through the -> operator) than access it in the safe way (through `value()`); this is a wrong default which should have been made the other way around.

              Potentially unsafe behavior should be opt-in, instead of opt-out. Yes, it's important that you have the option to disable this check for performance-sensitive code. But the majority of the code is not performance-sensitive to this degree that an extra branch is going to make any difference whatsoever.

              • kllrnohj 562 days ago
                > It's easier and more natural to access the optional in an unsafe way (through the -> operator) than access it in the safe way (through `value()`); this is a wrong default which should have been made the other way around.

                Then it's inconsistent with std::vector & similar. Consistency across the standard library seems much more worthwhile, especially here where it's pretty obvious when you mess up using a std::optional (like just never checking if it has a value, which jumps out in code reviews quite clearly)

            • pornel 562 days ago
              That "instant no-go" is the problem. C++ is still unwilling to budge even a tiniest bit for safety.

              In most scenarios there has to be a check somewhere. Where C++ got it wrong is allowing separation of the check from the use of the now-known-to-be-set value, so the necessity to have convenient zero-cost unchecked use is a problem C++ has created for itself.

              Safe languages solved this by carrying the checked state over using the type system (e.g. pattern matching or flow typing), so further uses after the check are free AND guaranteed safe.

              There are some situations where the state of the optional being set is known from the context, but in a convoluted way too complex for the type system to follow. C++ focused on supporting this case — unwilling to compromise on either syntax or performance, and sacrificed safety for it.

              Other languages in such muddy scenario either take the cost of a branch or have an unchecked-unsafe function for optimization, but importantly — that function doesn't get a syntax sugar. It is meant to stand out as a rare risky operation, and not be hidden behind super common innocent-looking syntax.
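
              For what it's worth, C++23's monadic operations on std::optional do let you keep the check and the use together, though the unchecked syntax remains the shorter spelling. A sketch:

                  #include <cstddef>
                  #include <optional>
                  #include <string>

                  // C++23: the access happens only inside the callable, and the
                  // callable only runs when the optional actually holds a value.
                  std::optional<std::size_t> length(const std::optional<std::string>& s) {
                      return s.transform([](const std::string& v) { return v.size(); });
                  }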

            • CJefferson 562 days ago
              Why is it a no-go? If you mark the check as "unlikely" the cost is extremely small, and maybe we should just pay the cost rather than have undefined behaviour.
              • jcelerier 562 days ago
                > the cost is extremely small

                until it isn't, i've seen branches eat 10-15% of hotspots sometimes. but honestly, in practical terms, I shouldn't complain if people want that: that's literally more consulting money in performance consulting for me afterwards :-)

                let me repost one of my own past comments with a quick and dirty benchmark: https://news.ycombinator.com/item?id=30867368

                I have been paid hard money for performance improvements much smaller than something like this in my life

                • tialaramex 562 days ago
                  But at the end of those comments you linked, what you concede is that the reason people often end up with code that has lousy performance isn't, in fact, because of a language defect but because programming isn't their "real" job anyway - these are non-experts and so you'd often be fixing the same problems in C++ too.

                  Or maybe more often. From what I've seen those "not really a programmer" types write much better Rust than C++. Such non-expert practitioners are going to lean on defaults more often than an expert, and of course the C++ defaults are all wrong, so that's a problem both for correctness and performance.

                  Well, not a problem for you maybe, you get paid. But clients might wonder if they can't just avoid this problem...

                  • jcelerier 562 days ago
                    when non-experts give me C++ code to optimize most of the time there are enough low-, sometimes very-low- hanging fruits which we can incrementally refactor piece by piece into something that performs better.

                    when it's JS or python code.. like my man, here's your invoice for the rewrite.

                    > much better Rust than C++.

                    I really do not think we'd agree on what is good code so I don't see a point in arguing more on this

                    > Such non-expert practitioners are going to lean on defaults more often than an expert

                    i'd like to see that. what i've seen is people who were working on safety-critical code who could not write a for-loop if their life depended on it, and most of the time people who just redo what they learned in class which is likely from a 1993 textbook. code entirely written in japanese, identifiers included, or as a french/english mix sometimes in the same identifier. it's really all over the place out there.

                • CJefferson 561 days ago
                  Looking at that benchmark, marking a function as "static" was enough to make the bounds checks be optimised away.

                  One difference between C++ and Rust (yes, I know they seem to get compared a lot) I like is that the "default function" is the safe one. In Rust you can disable bounds checking for a particular access, but [] does bounds checking. In C++ you can do "bounds checked access", but [] is unsafe.

                  In my opinion, it's best if the "default option" users reach for is the safe, but possibly slower option. Then, in a situation where a user is sure about what they are doing, and the boost is worth it, they can reach for the less safe option. In a program I'm currently working on we have disabled bounds checking in exactly two functions, as benchmarking showed it was worth it, and we carefully checked it would be safe to do.
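
                  Concretely, the two spellings in C++ (a sketch; Rust's opt-in for the unchecked version is the separate, unsafe get_unchecked):

                      #include <cstddef>
                      #include <vector>

                      int read_both(const std::vector<int>& v, std::size_t i) {
                          int fast = v[i];     // unchecked: UB if i >= v.size()
                          int safe = v.at(i);  // checked: throws std::out_of_range instead
                          return fast + safe;
                      }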

                  • jcelerier 561 days ago
                    > Looking at that benchmark, marking a function as "static" was enough to make the bounds checks be optimised away.

                    and that's nice when you can do this (or apply LTO which would likely give the same result) - you definitely can't always do this. Think proprietary vendor selling a DLL containing some very specific DSP routines.

          • kllrnohj 562 days ago
            std::optional is not limited to pointers, and it has the safety if you want it - use value().
            • electroly 562 days ago
              There are some platforms that provide std::optional but not value()! macOS 10.13 is one of them. You're forced to use the unsafe * operator on that platform. I cannot imagine the rationale that went into shipping std::optional support without value(), but they did it.
        • hvdijk 562 days ago
          std::optional can be described as memory-unsafe because indirection when an optional is empty has undefined behaviour rather than deterministically throwing an exception or aborting the program, and may misbehave in all sorts of spectacular ways, including accessing random memory, on common implementations. Thankfully, implementations can and do provide ways to get it to behave more predictably.
          • kllrnohj 562 days ago
            But that's not actually true in general. std::optional<int> has no such indirection issues, for example.

            If you're saying that blindly calling `operator->` on a std::optional<int*> without ever checking has_value() or similar can result in dereferencing garbage then yes, sure? But calling that "explicitly memory-unsafe" seems misleading at best, and just aggressively wrong at worst. You can always use `value()` if you want an error-throwing option, just like std::vector has at(). The standard didn't just ignore that.

            • spoiler 562 days ago
              I think the phrasing of "explicitly memory-unsafe" is wrong[1] in letter, but true in spirit. At the end of the day `std::optional<T>` is only marginally better than `T*`. And I'm sometimes not even sure if it's better or worse; the extra API surface you describe is nice, but in practice it's just a mirage of safety, since its API subset includes that of a common pointer. And I've seen ample code use the unsafe API because (convenience/performance/inexperience).

              But at the end of the day, I guess my comment is also irrelevant, because we as developers should strive for correctness, not brevity, in code. If we can achieve both, the better. But alas, brevity and correctness are in an antagonistic tension in C++. So when we want correctness in C++, we should also be prepared to swallow a large portion of spaghetti Bolognese.

              [1]: Or at least no more melodramatic than the phrase "aggressively wrong" lol

              • kllrnohj 562 days ago
                > At the end of the day `std::optional<T>` is only marginally better than `T*`.

                It's dramatically better than `T*` if your data isn't a pointer in the first place.

            • hvdijk 562 days ago
              Sure, std::optional<int> is unlikely to result in such behaviour in practice, and std::optional<int*> is likely to "only" result in such behaviour if the result of operator*() is dereferenced again, despite both already being UB. Think of non-POD types, such as std::optional<std::string>, though: when you end up using uninitialised std::string objects, things do break in practice because of pointers used internally to implement std::string, and badly so.

              The fact that checked versions exist but are not used by default, and have to be explicitly opted into, is consistent with C++'s designs and may be used to defend the current design, but at the same time it also means that describing std::optional as memory-unsafe becomes a valid opinion based on facts, I think.

              • kllrnohj 562 days ago
                  > but at the same time it also means that describing std::optional as memory-unsafe becomes a valid opinion based on facts

                But your "facts" are "if I use the API wrong, it behaves wrong." But std::optional isn't easy to accidentally misuse here, unlike string_view (an actually "unsafe" addition). The argument that optional is broken if you both don't use has_value and don't use any of the other helpers (like value_or() or value() or transform or etc...) then it has UB means that optional is "broken by design" is not a very strong position to take.

                It's hard to imagine this being a problem in practice. It's pretty encoded in the code that it's optional, to just completely ignore that and blindly access it anyway seems pretty self-evident as a usage issue. Yes bugs happen, but come on. This is not a particularly sharp edge in C++'s toolbox here. It's a pretty straightforward, intuitive type, doing pretty much exactly what it says it does, exactly how you'd expect it to do.

                Should operator->() and value() be swapped? maybe, but then it'd be inconsistent with std::vector & other older types. And that inconsistency is probably worse overall.

                • hvdijk 562 days ago
                  > But your "facts" are "if I use the API wrong, it behaves wrong."

                  Kind of, yes. That is what memory safety is about, isn't it? If I look for definitions, I find for instance <https://hacks.mozilla.org/2019/01/fearless-security-memory-s...>, explaining it as:

                  > When we talk about building secure applications, we often focus on memory safety. Informally, this means that in all possible executions of a program, there is no access to invalid memory. Violations include:

                  > - use after free

                  > - null pointer dereference

                  > - using uninitialized memory

                  > - double free

                  > - buffer overflow

                  std::optional does not itself protect against using uninitialised memory, it merely provides the tools by which the programmer can prevent using uninitialised memory. Isn't that exactly what memory safety is about, about having std::optional somehow automatically ensure that that doesn't happen? If that isn't what memory safety is, what, in your opinion, does it mean instead?

                  Note that I have attempted to refrain from posting my opinion on whether C++ made the right call or not. That is a separate question from whether it qualifies as memory-safe.

                  • kllrnohj 562 days ago
                    I'm not arguing that c++ is memory safe, it isn't. But the initial claim is that std::optional is "explicitly anti-memory-safety". And that seems like a very unsupported claim. std::optional isn't safer than the rest of C++, but it's definitely not less safe either.
                    • hvdijk 562 days ago
                      Ah, thanks for the clarification, I think we've been talking about two slightly different things, then. For you, std::optional would have to make C++ more memory-unsafe than it already is in order for "anti-memory-safety" to be a fair characterisation. For me, that label merely implies that memory-safer alternative designs of std::optional were considered, and the current design was picked despite its memory-unsafety being a known potential issue. I think I would likely agree with you that std::optional does not make C++ less memory-safe than it already was before that got added.
  • somerando7 562 days ago
    You have to use something similar to https://github.com/facebook/folly/tree/main/folly/experiment... to solve this problem.

    It's a nasty bug that everyone encounters when first working with coroutines. (Similarly everyone will encounter references that don't live until you co_await the task).
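
    A sketch of that second pitfall, again assuming a lazily started task<T> and an awaitable something() as placeholders: the reference itself is copied into the coroutine frame, but the thing it refers to is not, so the referent has to stay alive until the task is actually awaited.

        // Sketch only: task<T> and something() are assumed, not shown.
        task<int> add_one(const int& x) {
            co_await something();
            co_return x + 1;        // x may dangle by the time we get here
        }

        task<int> caller() {
            auto t = add_one(41);   // binds x to a temporary; add_one hasn't run yet
            co_await something();   // the temporary only lived until the end of the statement above
            co_return co_await t;   // resuming add_one now reads a dangling reference
        }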

    • boundchecked 562 days ago
      C++20 coroutines demonstrated one side of committee-led modern C++ that left the impression that it is ultimately designed by and for library writers; instead of being pointed at a third-party library or told to "wait for C++23" for a better experience, I'd love to see the related machinery released at the same time in the standard library.
    • grogers 562 days ago
      Alternatively, you can pass the things you would have captured instead as arguments to the lambda (by value!) and they are valid for the duration of the coroutine. So you can do a lambda returning a coroutine lambda like

        task<Foo> t = [foo]() {
          // The inner coroutine lambda has no captures; 'f' is a by-value
          // parameter, so it is copied into the coroutine frame and stays valid.
          return [](auto f) -> task<Foo> {
            co_await something();
            co_return f;
          }(foo);
        }();
  • rwmj 562 days ago
    This is from Avi Kivity who added KVM (virt) support to the Linux kernel.
  • globalreset 562 days ago
    I really don't know why people insist on polishing this turd.
  • Ygg2 562 days ago
    Needs 2020 tag.
  • varelse 562 days ago
    undefined