C++ and C rely heavily on skill and discipline, rather than automated checks, to stay safe. Over time, and in larger groups of people, that always fails. People just aren't that disciplined, and they get overconfident in their own skills (or level of discipline). Decades of endless memory leaks, buffer overflows, etc., and the related security issues, crash bugs, and data corruption show that no code base is really immune to this.
The best attitude in programmers (regardless of the language) is the awareness that "my code probably contains embarrassing bugs, I just haven't found them yet". Act accordingly.
There are of course lots of valid reasons to continue to use C/C++ on projects where it is already used, and there are a lot of such projects. Rewrites are disruptive, time-consuming, expensive, and risky.
It is true that there are ways in C++ to mitigate some of these issues. Mostly this boils down to using tools and libraries, and avoiding some of the darker corners of the language and standard library. And if you have a large legacy code base, adopting some of these practices is prudent.
However, a lot of this stuff boils down to discipline and skill. You need to know what to use and do, and why. And then you need to be disciplined enough to stick with that. And hope that everybody around you is equally skilled and disciplined.
However, for new projects, there usually are valid alternatives. Even performance and memory are not the arguments they used to be. Rust seems to be building a decent reputation for combining compile-time safety with performance and robustness, often beating C/C++ implementations where Rust is used to provide a drop-in replacement. Given that, I can see why major companies are reluctant to take on new C/C++ projects. I don't think there are many (or any) upsides to offset the well-documented downsides.
This is a good article but it only scratches the surface, as is always the case when it comes to C++.
When I made a meme about C++ [1] I was purposeful in choosing the iceberg format. To me it's not quite satisfying to say that C++ is merely complex or vast. A more fitting word would be "arcane", "monumental" or "titanic" (get it?). There's a specific feeling you get when you're trying to understand what the hell is an xvalue, why std::move doesn't move or why std::remove doesn't remove.
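To illustrate with a minimal standard-C++ sketch:

    #include <algorithm>
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        // std::move moves nothing by itself; it is just a cast to an rvalue
        // reference. The actual move happens in the move constructor it enables:
        std::string s = "hello";
        std::string t = std::move(s);

        // std::remove removes nothing; it shifts kept elements forward and
        // returns the new logical end. The erase-remove idiom actually shrinks v:
        std::vector<int> v{1, 2, 2, 3};
        v.erase(std::remove(v.begin(), v.end(), 2), v.end());
    }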
The Forrest Gump C++ meme is another one that captures this feeling very well (not by me) [2].
What it comes down to is developer experience (DX), and C++ has a terrible one. From the syntax all the way up to package management, a C++ developer feels stuck in a time before they were born. At least we have a lot of time to think about all that while our code compiles. But that might just be the price for all the power it gives you.
I don't plan on ever using C++ again, but FWIW in Rust there are lots of cases where you specify `move` and stuff doesn't get moved, or don't specify it and it does, and it's also a specific feeling.
- Use a build system like make, you can't just `c++ build`
- Understand that C++ compilers by default have no idea where most things are, you have to tell them exactly where to search
- Use an external tool that's not your build system or compiler to actually inform the compiler what those search paths are
- Oh also understand the compiler doesn't actually output what you want, you also need a linker
- That linker also doesn't know where to find things, so you need the external tool to use it
- Oh and you still have to use a package manager to install those dependencies to work with pkg-config, and it will install them globally. If you want to use it in different projects you better hope you're ok with them all sharing the same version.
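To make that concrete, a typical manual build against one external library looks something like this (illustrative commands; `libcurl` is just an example, and exact flags vary by system):

    c++ -c main.cpp $(pkg-config --cflags libcurl) -o main.o   # compile: header search paths
    c++ main.o $(pkg-config --libs libcurl) -o myapp           # link: library paths and names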
Now you can see why things like IDEs became default tools for teaching students how to write C and C++, because there's no "open a text editor and then `c++ build file.cpp` to get output" for anything except hello world examples.
It's really not that big of a deal once you know how it works, and there are tools like CMake and IDEs that will take care of it.
On Windows and OSX it's even easier - if you're okay writing only for those platforms.
It's more difficult to learn, and it seems convoluted to people coming from Python and JavaScript, but there are also a lot of advantages to not having package management and build tooling tightly integrated with the language or compiler.
This is pure Stockholm syndrome. If I were forced to choose between creating a cross-platform C++ project from scratch or taking an honest to god arrow to the knee, the arrow would be less painful.
Why? There are lots of cross platform libraries and most aspects are not platform specific. It's really not a big deal. Use FLTK and you get most of the cross platform stuff for free in a small package.
I used to write a lot of C++ in 2017. Now in 2025 I have no memory of how to do that anymore. It's bespoke Makefile nonsense with zero hope of standardization. It's definitely something that doesn't grow with experience. Meanwhile, my Gradle setups have been almost unchanged since that time, apart from the stupid backwards-incompatible Gradle releases.
Not sure how relevant the complaint "in order to use a tool, you need to learn how to use the tool" really is.
Or from the other side: not sure what I should think about the quality of the work produced by people who don't want to learn relatively basic skills... it does not take two PhDs to understand how to use pkg-config.
I'm just pointing out that one reason devex sucks in C++ is the fact that you need a wide array of tools that are non-portable and require learning and teaching magic incantations at the command line or in build scripts to work. That doesn't foster what one could call a "good" experience.
Frankly the idea that your compiler driver should not be a basic build system, package manager, and linker is an idea best left in the 80s where it belongs.
> require learning and teaching magic incantations at the command line
That's exactly my point: if you think that calling `cmake --build build` is "magic", then maybe you don't have the right profile to use C++ in the first place, because you will have to learn some harder concepts there (like... pointers).
To be honest, I find it hard to understand how a software developer can write code and still consider that command line instructions are "magic incantations". To me it's like saying that calling a function like `println("Some text, {}, {}", some_parameter, some_other_parameter)` is a "magic incantation". Calling a function with parameters counts as "the basics" to me.
For most people this is a feature, not a bug as you suggest. It may come across as a PITA, and for many people it will, but as far as I am concerned, having also experienced the pain of package managers in C++, this is the right way. In the end it's always about the trade-offs. All the (large) codebases that used Conan, Bazel, or vcpkg induced a magnitude more issues to handle than you would have had with plain CMake. Package managers are for convenience, but not all projects can afford the trouble this convenience brings with it.
The idea that packages and builds are a simple problem only holds for small projects; large projects need things like more than one language, and so they end up fighting the language.
Every modern language seems to have an answer to this problem that C and C++ refuse to touch because it's out of scope for their respective committees and standards orgs
Modern languages don't generally play nice with linux distributions, IMO.
C and C++ have an answer to the dependency problem, you just have to learn how to do it. It's not rocket science, but you have to learn something. Modern languages remove this barrier, so that people who don't want to learn can still produce stuff. Good for them.
C++ has a plethora of available build and package management systems. They just aren't bundled with the compiler. IMO that is a good thing, because it keeps the compiler writers honest.
They are massively loved because people don't want to learn how it works. But the result is that people massively don't understand how package management works, and miss the real cost of dependencies.
"Massively loved" and "good decision" are orthogonal axes. See the current npm drama. People love wantonly importing dependencies the way they love drinking. Both feel great but neither is good for you.
I'm getting the impression that C/C++ cultists love it whenever there's an npm exploit because then they can gleefully point at it and pretend that any first-party package manager for C/C++ would inevitably result in the same, never mind the other languages that do not have this issue, or have it to a far, far lesser extent. Do these cultists just not use dependencies? Are they just [probably inexpertly] reinventing every wheel? Or do they use system packages, like that's any better *cough* AUR exploits *cough*. While dependency hell on nodejs (and even Rust if we're honest) is certainly a concern, it's npm's permissiveness and lack of auditing that's the real problem. That's why Debian is so praised.
Not that npm-style package management is the best we can do or anything, but I would be more sympathetic to this argument if C or C++ had a clearly better security story than JS, Python, etc. (pick your poison), but they're also disasters in this area.
What happens in practice is people end up writing their own insecure code instead of using someone else's insecure code. Of course, we can debate the tradeoffs of one or the other!
Coming from C++, pip and Python dependency management are the bane of my life. How do you make a piece of Python software leveraging PyTorch that ships as a single .exe and can target whatever GPU the user has, without downloads?
I'm not going to defend the fact that the C++ devex sucks. There are really a lot of reasons for it, some of which can't sensibly be blamed on the language and some of which absolutely can be. (Most of it probably just comes down to the language and tooling being really old and not having changed in some specific fundamental ways.)
However, it's definitely wrong to say that the typical tools are "non-portable". The UNIX-style C++ toolchains work basically anywhere, including Windows, although I admit some of the tools require MSys/Cygwin. You can definitely use GNU Makefiles with pkg-config under MSys2 and have a fine experience. Needless to say, this also works on Linux, macOS, FreeBSD, Solaris, etc. More modern tooling like CMake and Ninja works perfectly fine on Windows, doesn't need any special environment like Cygwin or MSys, and can use your MSVC installation just fine.
I don't really think applying the mantra of Rust package management and build processes to C++ is a good idea. C++'s toolchain is amenable to many things that Rust and Cargo aren't. Instead, it'd be better to talk about why C++ sucks to use, and then try to figure out what steps could be taken to make it suck less. Like:
- Building C++ software is hard. There's no canonical build system, and many build systems are arcane.
This one really might be a tough nut to crack. The issue is that creating yet another system is bound to just cause xkcd 927. As it is, there are many popular ways to build, including GNU Make, GNU Autotools + Make, Meson, CMake, Visual Studio Solutions, etc.
CMake is the most obvious winner right now. It has achieved de facto standard status. It works on basically any operating system, and IDEs like CLion and Visual Studio 2022 have robust support for CMake projects.
Most importantly, building with CMake couldn't be much simpler. It looks like this:
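    cmake -B .build       # configure: generate the build system into .build
    cmake --build .build  # compile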
And you have a build in .build. I think this is acceptable. (A one-step build would be simpler, but this is definitely more flexible, I think it is very passable.)
This does require learning CMake, and CMakeLists files are definitely a bit ugly and sometimes confusing. Still, they are pretty practical and rather easy to get started with, so I think it's a clear win. CMake is the de facto way to go here.
- Managing dependencies in C++ is hard. Sometimes you want external dependencies, sometimes you want vendored dependencies.
This problem's even worse. CMake helps a little here, because it has really robust mechanisms for finding external dependencies. However, while robust, the mechanism is definitely a bit arcane; it has two modes, the legacy Find scripts mode, and the newer Config mode, and some things like version constraints can have strange and surprising behavior (it differs on a lot of factors!)
But sometimes you don't want to use external dependencies, like on Windows, where it just doesn't make sense. What can you really do here?
I think the most obvious thing to do is use vcpkg. As the name implies, it's Microsoft's solution to source-level dependencies. Using vcpkg with Visual Studio and CMake is relatively easy, and it can be configured with a couple of JSON files (and there is a simple CLI that you can use to add/remove dependencies, etc.) When you configure your CMake build, your dependencies will be fetched and built appropriately for your targets, and then CMake's find package mechanism can be used just as it is used for external dependencies.
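For reference, a minimal vcpkg.json manifest is about this small (the name and package names are just examples):

    {
      "name": "myapp",
      "version": "0.1.0",
      "dependencies": [ "fmt", "zlib" ]
    }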
CMake itself is also capable of vendoring projects within itself, and it's absolutely possible to support all three modalities of manual vendoring, vcpkg, and external dependencies. However, for obvious reasons this is generally not advisable. It's really complicated to write CMake scripts that actually work properly in every possible case, and many cases need to be prevented because they won't actually work.
All of that considered, I think the best existing solution here is CMake + vcpkg. When using external dependencies is desired, simply not using vcpkg is sufficient and the external dependencies will be picked up as long as they are installed. This gives an experience much closer to what you'd expect from a modern toolchain, but without limiting you from using external dependencies which is often unavoidable in C++ (especially on Linux.)
- Cross-compiling with C++ is hard.
In my opinion this is mostly not solved by the de facto toolchains. :)
It absolutely is possible to solve this. Clang is already better off than most other C++ toolchains in that it can select cross-compile targets at invocation time rather than when the toolchain itself is built. This avoids the issue in GCC where you need a toolchain built for each target triplet you wish to target, but you still run into the issue of needing a libc etc. for each target.
Both CMake and vcpkg technically do support cross-compilation to some extent, but I think it rarely works without some hacking around in practice, in contrast to something like Go.
If cross-compiling is a priority, the Zig toolchain offers a solution for C/C++ projects that includes both effortless cross-compiling as well as an easy to use build command. It is probably the closest to solving every (toolchain) problem C++ has, at least in theory. However, I think it doesn't really offer much for C/C++ dependencies yet. There were plans to integrate vcpkg for this I think, but I don't know where they went.
If Zig integrates vcpkg deeply, I think it would become the obvious choice for modern C++ projects.
I get that by not having a "standard" solution, C++ remains somewhat of a nightmare for people to get started in, and I've generally been doing very little C++ lately because of this. However, I've found that there is actually a reasonable happy path in modern C++ development, and I'd definitely recommend that beginners go down that path if they want to use C++.
> Use a build system like make, you can't just `c++ build`
This is a strength not a weakness because it allows you to choose your build system independently of the language. It also means that you get build systems that can support compiling complex projects using multiple programming languages.
> Understand that C++ compilers by default have no idea where most things are, you have to tell them exactly where to search
This is a strength not a weakness because it allows you to organize your dependencies and their locations on your computer however you want and are not bound by whatever your language designer wants.
> Use an external tool that's not your build system or compiler to actually inform the compiler what those search paths are
This is a strength not a weakness because you are not bound to a particular way of how this should work.
> Oh also understand the compiler doesn't actually output what you want, you also need a linker
This is a strength not a weakness because now you can link together parts written in different programming languages which allows you to reuse good code instead of reinventing the universe.
> That linker also doesn't know where to find things, so you need the external tool to use it
This is a strength not a weakness for the reasons already mentioned above.
> Oh and you still have to use a package manager to install those dependencies to work with pkg-config, and it will install them globally. If you want to use it in different projects you better hope you're ok with them all sharing the same version.
This is a strength not a weakness because you can have fully offline builds including ways to distribute dependencies to air-gapped systems and are not reliant on one specific online service to do your job.
Also, all of this is a non-issue if you use a half-modern build system. Conflating the language, compiler, build system, and package manager is one of the main reasons why I stay away from "modern" programming languages. You are basically arguing against the Unix philosophy of having different tools that work together, with each tool focusing on one specific task. This allows different tools to evolve independently and for alternatives to exist, rather than a single tool that has to fit everyone.
Massive cope, there's no excuse for the lack of decent infrastructure. I mean, the C++ committee for years said explicitly that they don't care about infrastructure and build systems, so it's not really surprising.
I don't know if you're joking or just naïve, but CMake and the like are massive time sinks if you want anything beyond "here's a few source files, make me an application".
> There are a lot of problems, but having to carefully construct the build environment is a minor one time hassle.
I've observed the existence in larger projects of "build engineers" whose sole job is to keep the project building on a regular cadence. These jobs predominantly seem to exist in C++ land.
> These jobs predominantly seem to exist in C++ land.
You wish.
These jobs exist for companies with large monorepos in other languages too and/or when you have many projects.
Plenty of stuff to handle in big companies (directory ownership, Jenkins setup, in-company dependency management and release versioning, developer experience in general, etc.)
Most of what I have seen came from technical debt acquired over decades. Some of the build engineers hired to "manage" it were not treated as programmers themselves and just added on top of the mess with "fixes" that were never reviewed or even checked in. Had a fun time once after we reinstalled the build server and found out that the last build engineer had created a local folder to store various dependencies instead of using vcpkg to fetch everything, as we had mandated for several years by then.
You have a team of 20 engineers on a project you want to maintain velocity on. With that many cooks, you have patches on top of patches of your build system where everyone does the bare minimum to meet the near-term task only, and over enough time it devolves into a mess no one wants to touch.
Your choice: do you have the most senior engineers spend time sporadically maintaining the build system, perhaps declaring fires to try to pay off tech debt, or hire someone full time, perhaps cheaper and with better expertise, dedicated to the task instead?
CI is an orthogonal problem but that too requires maintenance - do you maintain it ad-hoc or make it the official responsibility for someone to keep maintained and flexible for the team’s needs?
I think you think I’m saying the task is keeping the build green whereas I’m saying someone has to keep the system that’s keeping the build green going and functional.
> You have a team of 20 engineers on a project you want to maintain velocity on. With that many cooks, you have patches on top of patches of your build system ...
The scenario you are describing does not make sense for the commonly accepted industry definition of "build system." It would make sense if, instead, the description was "application", "product", or "system."
Many software engineers use and interpret the phrase "build system" to be something akin to make[0] or similar solution used to produce executable artifacts from source code assets.
I can only relate to you what I've observed. Engineers were hired to rewrite the Make-based system in Bazel and maintain it, for a single executable distributed to the edge. I've also observed this for embedded applications and other stuff.
I’m not sure why you’re dismissing it as something else without knowing any of the details or presuming I don’t know what I’m talking about.
Wow, I don't understand what anything means in those memes. And I'm so glad I don't!
It seems to me that the people/committees who built C++ just spent decades inventing new and creative ways for developers to shoot themselves in the foot. Like, why does the language need to offer a hundred different ways to accomplish each trivial task (and 98 of them are bad)?
> in C++, you can write perfectly fine code without ever needing to worry about the more complex features of the language. You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language.
This... doesn't really hold water. You have to learn about what the insane move semantics are (and the syntax for move ctors/operators) to do fairly basic things with the language. Overloaded operators like operator*() and operator<<() are widely used in the standard library so you're forced to understand what craziness they're doing under the hood. Basic standard library datatypes like std::vector use templates, so you're debugging template instantiation issues whether you write your own templated code or not.
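To make that concrete, even a beginner-level snippet hits all three at once (a minimal sketch):

    #include <iostream>
    #include <memory>
    #include <vector>

    int main() {
        std::vector<std::unique_ptr<int>> v;  // templates: unavoidable
        auto p = std::make_unique<int>(42);
        // v.push_back(p);                    // error: unique_ptr is move-only
        v.push_back(std::move(p));            // move semantics: unavoidable
        std::cout << *v[0] << "\n";           // overloaded <<, *, []: unavoidable
    }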
> Overloaded operators like operator*() and operator<<() are widely used in the standard library so you're forced to understand what craziness they're doing under the hood
you don't need to understand what an overloaded operator is doing any more than you have to understand the implementation of every function you call, recursively
Exactly. The "you don't have to" part lasts until the first error message, and then it's 10 feet of STL template instantiation error avalanche. And the STL implementation is really advanced C++.
Also, a lot of "modern" code cannot be debugged at all (because putting in a print statement breaks constexprness) and your only recourse is reading code.
This is in big part also because of the committee, which prefers a hundred-line template monster like "can_this_call_that_v" to a language feature, probably thinking that by not including something in the language standard and offloading it to the library they are doing a good job.
(oh and I think you can write a whole book on the different ways to initialize variables in C++).
The result is you might be able to use C++ to write something new, and stick to a style that's readable... to you! But it might not make everyone else who "knows C++" instantly able to work on your code.
Overloaded operators are great. But overloaded operators that do something entirely different than their intended purpose is bad. So a + operator that does an add in your custom numeric data type is good. But using << for output is bad.
The first programming language with overloaded operators that I really got into was Scala, and I still love it. I love that instead of Java's x.add(y) I can overload + so that it calls .add between two objects of that type. It of course has to be used responsibly, but it makes a lot of code really much more readable.
Those languages need a dedicated concatenation operator because they are loosely typed, which would otherwise make + ambiguous, like in JavaScript.
But C++ doesn't have that problem. Sure, a separate operator would have been cleaner (but | is already used for bitwise or) but I have never seen any bug that resulted from it and have never felt it to be an issue when writing code myself.
Tangential, but Lua is the most write-only language I have had pleasure working with. The implementation and language design are 12 out of 10, top class. But once you need to read someone else's code, and they use overloads liberally to implement MCP and OODB and stuff, all in one codebase, and you have no idea if "." will index table, launch Voyager, or dump core, because everything is dispatched at runtime, it's panic followed by ennui.
It works between arrays (both fixed-size and dynamically sized); between arrays and elements; but not between two scalar types that don't overload opBinary!"~", so no, it won't work between two `ushort`s to produce a `uint`.
Python managed to totally confuse this. "+" for built-in arrays is concatenation. "+" for NumPy arrays is elementwise addition. Some functions accept both types. That can end badly.
If you've done any university-level maths you should have seen the + sign used in many other contexts than adding numbers, why should that be a problem when programming?
There is usually another operator used for concatenation in math though: | or || or ⊕
The first two are already used for bitwise and logical or and the third isn't available in ASCII so I still think overloading + was a reasonable choice and doesn't cause any actual problems IME.
I personally think that operator overloading itself is justified, but the pervasive scope of operator overloading is bad. To me the best solution is from OCaml: all operators are regular functions (`a + b` is `(+) a b`) and default bindings can't be changed but you can import them locally, like `let (+) = my_add in ...`. OCaml also comes with a great convenience syntax where `MyOps.(a + b * c)` is `MyOps.(+) a (MyOps.(*) b c)` (assuming that MyOps defines both `(+)` and `(*)`), which scopes operator overloading in a clear and still convenient way.
A benefit of operator overloads is that you can design drop-in replacements for primitive types to which those operators apply but with stronger safety guarantees e.g. fully defining their behavior instead of leaving it up to the compiler.
This wasn't possible when they were added to the language and wasn't really transparent until C++17 or so but it has grown to be a useful safety feature.
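A sketch of what that enables (a hypothetical checked-integer type; real libraries are more thorough):

    #include <limits>
    #include <stdexcept>

    // Drop-in-ish replacement for int with fully defined overflow behavior.
    struct SafeInt {
        int value;
        SafeInt operator+(SafeInt rhs) const {
            // Detect overflow up front instead of hitting undefined behavior.
            if (rhs.value > 0 && value > std::numeric_limits<int>::max() - rhs.value)
                throw std::overflow_error("SafeInt addition overflow");
            if (rhs.value < 0 && value < std::numeric_limits<int>::min() - rhs.value)
                throw std::overflow_error("SafeInt addition underflow");
            return SafeInt{value + rhs.value};
        }
    };

    int main() {
        SafeInt a{2}, b{3};
        SafeInt c = a + b;  // same syntax as plain int, but defined on overflow
        return c.value - 5;
    }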
I'm pretty sure the post you are responding to is not seriously suggesting using floating point multiplication and exponentiation as a performance optimization ;)
> in C++, you can write perfectly fine code without ever needing to worry about the more complex features of the language. You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language.
Only if you have full control over what others are writing. In reality, you're going to read lots and lots of "clever" code. And I'm saying this as a person who has written a good amount of template metaprogramming code. Even for me, some code takes hours to understand, and I was usually able to cut 90% of it after that.
I’m probably guilty of gratuitous template stuff, because it adds fun to the otherwise boring code I spend a lot of time on. But I feel like the 90% cutdowns are when someone used copy-paste instead of templates, overloads, and inheritance. I don’t think both problems happen at the same time, though, or maybe I misunderstood.
When people are obsessed with over-abstraction and over-generalization, you can often see FizzBuzz Enterprise in action where a single switch statement is more than enough.
I see that more with inheritance including pure virtual interface for things that only have one implementation and actor patterns that make the execution flow unnecessarily hard to follow. Basically, Java written in C++.
Most templates are much easier to read in comparison.
Being able to cut 90% of code sounds like someone was getting paid by LoC (which is also a practice from a time when C++ was considered a "modern" language).
Yes, but not always. For example, what can now be written as a single "requires requires" clause or a short chain of "else if constexpr" statements used to be a sprawling, incomprehensible template class hierarchy before those features got added.
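For example (a minimal C++17/20 sketch of the modern form):

    #include <iostream>
    #include <string>
    #include <type_traits>

    // Pre-C++17, this kind of dispatch needed SFINAE or a tag-dispatched
    // overload hierarchy; now it is a readable chain:
    template <typename T>
    std::string describe(const T& x) {
        if constexpr (std::is_integral_v<T>)
            return "integral: " + std::to_string(x);
        else if constexpr (std::is_floating_point_v<T>)
            return "floating: " + std::to_string(x);
        else
            return "something else";
    }

    // And `requires requires` checks an expression's validity inline:
    template <typename T>
        requires requires(T a, T b) { a + b; }
    T add(T a, T b) { return a + b; }

    int main() {
        std::cout << describe(42) << "\n" << add(1.5, 2.5) << "\n";
    }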
> Countless companies have cited how they improved their security or the amount of reported bugs or memory leaks by simply rewriting their C++ codebases in Rust. Now is that because of Rust? I’d argue in some small part, yes.
Just delete this. Even an hour's familiarity with Rust will give you a visceral understanding that "Rewrites of C++ codebases to Rust always yield more memory-safe results than before" is absolutely not because "any rewrite of an existing codebase is going to yield better results". If you don't have that, skip it, because it weakens the whole piece.
> You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language.
Maybe you can do that. But you are probably working in a team. And inevitably someone else in your team thinks that operator overloading and template metaprogramming are beautiful things, and you have to work with their code. I speak from experience.
This is true and I will concede this point. Appreciate your feedback!
However, if I may raise my counterpoint: I like to have a rule that C++ should be written as much as possible as if you were writing C, until you need some of its additional features and complexities.
Problem is when somebody on the team does not share this view though, that much is true :)
Counter-counter point: if you're going to actively avoid using the majority of a language's features and for the most part write code in it as if it were a different language, doesn't that suggest the language is deeply flawed?
(Note: I'm not saying it is deeply flawed, just that this particular way of using it suggests so).
I wouldn't necessarily put it like that no. I'd say all languages have features that fit certain situations but should be avoided in other situations.
It's like a well-equipped workshop: just because you have access to a chainsaw but do not need to use it to build a table does not mean it's a bad workshop.
C is very barebones; languages like C++, C#, Rust and so on are not. Just because you don't need all of their features does not make those languages inherently bad.
Great question or in this case counter-counter point though.
> However, if I may raise my counterpoint: I like to have a rule that C++ should be written as much as possible as if you were writing C, until you need some of its additional features and complexities.
How do you define “need” for extra features? C and C++ can fundamentally both do the same thing so if you’re going to write C style C++, why not just write C and avoid all of C++’s foot guns?
"you can write perfectly fine code without ever needing to worry about the more complex features of the language. You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language."
You could also inherit a massive codebase old enough to need a prostate exam that was written by many people who wanted to prove just how much of the language spec they could use.
If selecting a job mostly under the Veil of Ignorance, I'll take a large legacy C project over C++ any day.
C++ will always stay relevant. Software has eaten the world. That transition is almost complete now. The languages that were around when it happened will stay deeply embedded in our fundamental tech stacks for another couple decades at least, if not centuries. And C and C++ are the lion's share of that.
COBOL sticks around 66 years after its first release. Fortran is 68 years old and is still enormously relevant. Much, much more software was written in these newer languages, and it has become so complex that replacement is practically impossible (Fuchsia hasn't replaced Linux in Google products, Wayland isn't ready to replace X11, etc.)
It seems likely that C++ will end up in a similar place as COBOL or Fortran, but I don't see that as a good future for a language.
These languages are not among the top contenders for new projects. They're a legacy problem, and are kept alive only by a slowly shrinking number of projects. It may take a while to literally drop to zero, but it's a path of exponential decay towards extinction.
C++ has strong arguments for sticking around as a legacy language for several too-big-to-rewrite C++ projects, but it's becoming less and less attractive for starting new projects.
C++ needs a better selling point than being a language that some old projects are stuck with. Without growth from new projects, it's only a matter of time until it's going to be eclipsed by other languages and relegated to shrinking niches.
As long as people write software (no pun intended), software will follow trends. For instance, in many scientific ecosystems, Matlab was successfully replaced by SciPy, which in turn is getting replaced by Julia. Things don't necessarily have to stay the same. Interestingly, such a generational trend is currently happening with Rust, even though there have been numerous other popular languages, such as D or Zig, that didn't get the same traction.
Sure, there are still Fortran codes out there. But I can hardly imagine Fortran still playing a big role another 68 years from now.
> For instance, in many scientific ecosystems, Matlab was successfully replaced by SciPy, which in turn is getting replaced by Julia
If by scientific ecosystems you mean people making prototypes for papers, then yes. But in commercial, industrial setting there is still no alternative for many of Matlab toolboxes, and as for Julia, as cool as it is, you need to be careful to distinguish between real usage and vetted marketing materials created by JuliaSim.
Matlab/Scipy/Julia are totally different since those function more like user interfaces, they are directly user facing. You're not building an app with matlab (though you might be with scipy and julia, it's not the primary use case), you're working with data. C++ on the other hand underpins a lot of key infrastructure.
I am not saying that these languages will stay around forever, mind you. But we have solidified the tech stacks involving these languages by making them ridiculously complex. Replacement of a programming language in one of the core components can only come through gradual and glacially slow evolution at this point. "Rewrite it in XYZ" as a clean slate approach on a big scale is simply a pipe dream.
Re Matlab: I still see it thriving in the industry, for better or worse. Many engineers just seem to love it. I haven't seen many users of Julia yet. Where do you see those? I think that Julia deserves a fair chance, but it just doesn't have a presence in the fields I work in.
You’re thinking of software that is being written today. GP is talking about software we use every day in every device on the planet that hasn’t changed since it was written 30+ years ago.
Especially the 'backend' languages that do all the heavy lifting for domain-specific software. Just in my vertical of choice, financial software, there are literally billions of lines of Java and .NET code powering critical systems. The code is the documentation, and there's little appetite to rewrite all that at enormous cost and risk.
Perhaps AI will get reliable enough to pore through these double-digit-million-LOC codebases and convert them flawlessly, but that looks like it's decades off at this point.
I'm not so sure. The user experience has really crystallized over the years. It's not hard to imagine a smart tv or something like it just reimplementing that experience in hardware in the not too distant future (say 2055 if transistor and memory scaling stall in 2035).
We live in a special time when general processing efficiency has always been increasing. The future is full of domain specific hardware (enabling the continued use of COBOL code written for slower mainframes). Maybe this will be a half measure like cuda or your c++ will just be a thin wrapper around a makeYoutube() ASIC
Of course if there is a breakthrough in general purpose computing or a new killer app it will wipe out all those products which is why they don't just do it now
> Yes, C++ can be unsafe if you don’t know what you’re doing. But here’s the thing: all programming languages are unsafe if you don’t know what you’re doing.
I think this is one of the worst (and most often repeated arguments) about C++. C and C++ are inherently unsafe in ways that trip up _all_ developers even the most seasoned ones, even when using ALL the modern C++ features designed to help make C++ somewhat safer.
There are two levels on which this argument feels weak:
* The author is confusing memory safety with other kinds of safety. This is evident from the fact that they say you can write unsafe code in GC'd languages like Python and JavaScript. Unsafe != memory-unsafe. Rust only gives you memory safety; it won't magically fix all your bugs.
* The slippery slope trick. I've seen this so often: people say that because Rust has the unsafe keyword, it's the same as C/C++. The reason it's not is that in C/C++ you don't have any idea where to look for undefined behaviour, while in Rust the code at least points you to the unsafe blocks. The difference is one of degree, which for practical purposes makes a huge difference.
I would really like to see more people who have never written C++ before port a Rust program to C++. In my opinion, one can argue it may be easy to port initially but it is an order of magnitude more complex to maintain.
Whereas the other way around, porting a C++ program to Rust without knowing Rust is challenging initially (you have to understand the borrow checker) but orders of magnitude easier to maintain.
Couple that with being able to easily `cargo add` dependencies and good language server features, and the developer experience in Rust blows C++ out of the water.
I will grant that change is hard for people. But when working on a team, Rust is such a productivity enhancer that it should be a no-brainer for anyone considering this decision.
I've been a developer for 30 years. I program in C#, Rust, Java, some TS, etc. I can probably go to most repositories on GitHub and at least clone and build them. I have failed - repeatedly - to build even small C++ libraries despite reasonable effort. And that's not even _writing any C++_. Just installing the tooling around CMake etc. is completely Kafkaesque.
The funniest thing happened when I needed to compile a C file as part of a little Rust project, and it turned out one of the _easiest_ ways I've experienced of compiling a tiny bit of C (on Windows) was to put it inside my Rust crate and have cargo do it via a C compiler crate.
I work on large C++ projects with 1-2 dozen third party C and C++ library dependencies, and they're all built from source (git submodules) as part of one CMake build.
When it comes to programming, I generally decide my thoughts based on pain-in-my-ass levels. If I constantly have to fiddle with something to get it working, if it's fragile, if it frequently becomes a pain point - then it's not great.
And out of all the tools and architecture I work with, C++ has been some of the least problematic. The STL is well-formed and easy to work with, creating user-defined types is easy, it's fast, and generally it has few issues when deploying. If there's something I need, there's a very high chance a C or C++ library exists to do what I need. Even crossing multiple major compiler versions doesn't seem to break anything, with rare exceptions.
The biggest problem I have with C++ is how easy it is to get very long compile times, and how hard it feels like it is to analyze and fix that on a 'macro' (whole project) level. I waste ungodly amounts of time compiling. I swear I'm going to be on deaths door and see GCC running as my life flashes by.
Some others that have been not-so-nice:
* Python - Slow enough to be a bottleneck semi-frequently, hard to debug especially in a cross-language environment, frequently has library/deployment/initialization problems, and I find it generally hard to read because of the lack of types, significant whitespace, and that I can't easily jump with an IDE to see who owns what data. Also pip is demon spawn. I never want to see another Wheel error until the day I die.
* VSC's IntelliSense - My god IntelliSense is picky. Having to manually specify every goddamn macro, one at a time in two different locations just to get it to stop breaking down is a nightmare. I wish it were more tolerant of having incomplete information, instead of just shutting down completely.
* Fortran - It could just be me, but IDEs struggle with it. If you have any global data it may as well not exist as far as the IDE is concerned, which makes dealing with such projects very hard.
* CMake - I'm amazed it works at all. It looks great for simple toy projects and has the power to handle larger projects, but it seems to quickly become an ungodly mess of strange comments and rules that aren't spelled out - and you have no way of stepping into it and seeing what it's doing. I try to touch it as infrequently as possible. It feels like C macros, in a bad way.
> Fortran - It could just be me, but IDEs struggle with it. If you have any global data it may as well not exist as far as the IDE is concerned, which makes dealing with such projects very hard.
You really should not have global data. Modules are the way to go and have been since Fortran90.
> CMake - I'm amazed it works at all. It looks great for simple toy projects and has the power to handle larger projects, but it seems to quickly become an ungodly mess of strange comments and rules that aren't spelled out - and you have no way of stepping into it and seeing what it's doing. I try to touch it as infrequently as possible. It feels like C macros, in a bad way.
CMake is not a great language, but great effort has been put into cleaning up how things should be done. However, you can't just upgrade; someone needs to go through the effort of using all that new stuff. In almost all projects the build system is an afterthought that developers touch as little as possible to make things work, and so it accumulates cruft constantly.
You can do much better in CMake if you put some effort into cleaning it up - I have little hope anyone will do this though. We have a hard time getting developers to clean up messes in production code and that gets a lot more care and love.
I agree. Unless the project is huge, it's totally possible to use CMake in a maintainable way. It just requires some effort (not so much, but not nothing).
If you are willing to give up incremental compilation, concatenating all C++ files into a single file and compiling that on a single core will often outperform a multi-core compilation. The reason is that the compiler spends most of its time parsing headers and when you concentrate everything into a single file (use the C preprocessor for this), it only needs to parse headers once.
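A minimal sketch of the trick (file names are hypothetical):

    // unity.cpp: concatenate the whole project with the preprocessor so
    // that shared headers are parsed once, not once per translation unit.
    #include "module_a.cpp"
    #include "module_b.cpp"
    #include "main.cpp"

Then `c++ -O2 unity.cpp -o app` builds everything in one invocation.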
Merely parsing C++ code requires a higher time complexity than parsing C code (linear time parsers cannot be used for C++), which is likely where part of the long compile times originate. I believe the parsing complexity is related to templates (and the headers are full of them), but there might be other parts that also contribute to it. Having to deal with far more abstractions is likely another part.
That said, I have been incrementally rewriting a C++ code base at a health care startup into a subset of C with the goal of replacing the C++ compiler with a C compiler. The closer the codebase comes to being C, the faster it builds.
> I never want to see another Wheel error until the day I die.
What exactly do you mean by a "Wheel error"? Show me a reproducer and a proper error message and I'll be happy to help to the best of my ability.
By and large, the reason pip fails to install a package is because doing so requires building non-Python code locally, following instructions included in the package. Only in rare cases are there problems due to dependency conflicts, and these are usually resolved by creating a separate environment for the thing you're trying to install — which you should generally be doing anyway. In the remaining cases where two packages simply can't co-exist, this is fundamentally Python's fault, not the installer's: module imports are cached, and quite a lot of code depends on the singleton nature of modules for correctness, so you really can't safely load up two versions of a dependency in the same process, even if you hacked around the import system (which is absolutely doable!) to enable it.
As for finding significant whitespace (meaning indentation used to indicate code structure; it's not significant in other places) hard to read, I'm genuinely at a loss to understand how. Python has types; what it lacks is manifest typing, and there are many languages like this (including Haskell, whose advocates are famous for explaining how much more "typed" their language is than everyone else's). And Python has a REPL, the -i switch, and a built-in debugger in the standard library, on top of not requiring the user to do the kinds of things that most often need debugging (i.e. memory management). How can it be called hard to debug?
Unfortunately that Wheel situation was far enough back now that I don't have details on hand. I just know it was awful at the time.
As for significant whitespace, the problem is that I'm often dealing with files with several thousand lines of code and heavily nested functions. It's very easy to lose track of scope in that situation. Am I in the inner loop, or this outer loop? Scrolling up and down, up and down to figure out where I am. Feels easier to make mistakes as well.
It works well if everything fits on one screen, it gets harder otherwise, at least for me.
As for types, I'm not claiming it's unique to Python. Just that it makes working with Python harder for me. Being able to see the type of data at a glance tells me a LOT about what the code is doing and how it's doing it - and Python doesn't let me see this information.
As for debugging, it's great if you have pure Python. Mix other languages in and suddenly it becomes pain. There's no way to step from another language into Python (or vice-versa), at least not cleanly and consistently. This isn't always true for compiled->compiled. I can step from C++ into Fortran just fine.
Pip has changed a lot in the last few years, and there are many new ecosystem standards, along with greater adoption of existing ones.
> I'm often dealing with files with several thousand lines of code and heavily nested functions.
This is the problem. Also, a proper editor can "fold" blocks for you.
> Being able to see the type of data at a glance tells me a LOT about what the code is doing and how it's doing it - and Python doesn't let me see this information.
If you want to use annotations, you can, and have been able to since 3.0. Since 3.5 (see https://peps.python.org/pep-0484/; it's been over a decade now), there's been a standard for understanding annotations as type information, which is recognized by multiple different third-party tools and has been iteratively refined ever since. It just isn't enforced by the language itself.
> Mix other languages in and suddenly it becomes pain.... This isn't always true for compiled->compiled.
Sure, but then you have to understand the assembly that you've stepped into.
>This is the problem. Also, a proper editor can "fold" blocks for you.
I can't fix that. I just work here. I've got to deal with the code I've got to deal with. And for old legacy code that's sprawling, I find braces help a LOT with keeping track of scope.
>Sure, but then you have to understand the assembly that you've stepped into.
Assembly? I haven't touched raw assembly since college.
> And for old legacy code that's sprawling, I find braces help a LOT with keeping track of scope.
How exactly are they more helpful than following the line of the indentation that you're supposed to have as a matter of good style anyway? Do you not have formatting tools? How do you not have a tool that can find the top of a level of indentation, but do have one that can find a paired brace?
>Assembly? I haven't touched raw assembly since college.
How exactly does your debugger know whether the compiled code it stepped into came from C++ or Fortran source?
>How exactly does your debugger know whether the compiled code it stepped into came from C++ or Fortran source?
I don't know what IDE GP might be using, but mixed-language debuggers for native code are pretty simple as long as you just want to step over. Adding support for Fortran to, say, Visual Studio wouldn't be a huge undertaking. The mechanism to detect where to put the cursor when you step into a function is essentially the same as for C and C++. Look at the instruction pointer, search the known functions for an address that matches, and jump to the file and line.
A pet peeve of mine is when people claim C++ is a superset of C. It really isn't. There's a lot of little nuanced differences that can bite you.
Ignore the fact that having more keywords in C++ makes some legal C code illegal as C++ (`int class;`).
void * implicit casting in C just works, but in C++ it must be an explicit cast (which is kind of funny considering all the confusing implicit behavior in C++).
C++20 does have C11's designated initialization now, which helps in some cases, but that was a pain for a long time.
Conversions between enums and integers are very strict in C++.
`char * message = "Hello"` is valid C but not C++ (since you cannot mutate the pointed to string, it must be `const` in C++)
C99 introduced variadic macros; they didn't become standard in C++ until C++11.
C doesn't allow for empty structs. You can do it in C++, but sizeof(EmptyStruct) is 1. And if C lets you get away with it in some compilers, I'll bet it's 0.
Anyway, all of these things and likely more can ruin your party if you think you're going to compile C code with a C++ compiler.
Also don't forget if you want code to be C callable in C++ you have to use `extern "C"` wrappers.
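Two of those differences in code (a snippet that is valid C99, with comments on where a C++ compiler rejects it):

    #include <stdlib.h>

    int main(void) {
        int *p = malloc(sizeof *p);  /* C++: error, needs an explicit (int *) cast */
        char *msg = "Hello";         /* C++11 onward: error, needs const char * */
        free(p);
        return msg[0] == 'H' ? 0 : 1;
    }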
> It really isn't. There's a lot of little nuanced differences that can bite you.
These are mostly inconsequential when using code other people write. It is trivial to mix C and C++ object files, and where the differences (in headers) do matter, they can be ifdefed away.
> void * implicit casting in C just works, but in C++ it must be an explicit cast (which is kind of funny considering all the confusing implicit behavior in C++).
This makes sense because void* -> T* is a downcast. I find the C behavior worse.
> Conversions between enums and integers are very strict in C++.
As it should be, but unscoped enums are promoted to integers the same way they are in C.
> `char * message = "Hello"` is valid C but not C++
Code smell anyway, you can and should use char[] in both languages
You didn't mention the difference in inline semantics which IMO has more impact than what you cited
Not sure I understand; since they became available in C++, designated initializers are one of the features I use most, to the point of making custom structs to pass arguments if a type cannot be changed to be an aggregate. It makes a huge positive difference in readability and has helped me solve many subtle bugs; and initializing things out of order will throw a warning, so you catch it immediately in your IDE.
Stating that you can also write unsafe code in memory safe languages is like saying that you can also die from a car crash while wearing a safety belt. Of course you can, but it is still a much better idea to wear the safety belt rather than not to.
I feel like C++ is a bunch of long chains of solutions creating problems that require new solutions, that start from claiming that it can do things better than C.
Problem 1: You might fail to initialize an object in memory correctly.
Solution 1: Constructors.
Problem 2: Now you cannot preallocate memory as in SLAB allocation since the constructor does an allocator call.
Solution 2: Placement new
Problem 3: Now the type system has led the compiler to assume your preallocated memory cannot change since you declared it const.
Solution 3: std::launder()
If it is not clear what I mean about placement new and const needing std::launder(), see this:
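A minimal sketch of the classic case:

    #include <new>

    struct X { const int n; };

    int main() {
        X* p = new X{1};
        p->~X();
        X* q = new (p) X{2};  // placement new: a new object in the old storage
        // Reading p->n now would be UB: the compiler may assume the const
        // member is still 1. std::launder says a new object lives at p:
        int n = std::launder(p)->n;  // OK, reads 2
        delete q;
        return n;
    }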
C has a very simple solution that avoids this chain. Use structured programming to initialize your objects correctly. You are not going to escape the need to do this with C++, but you are guaranteed to have to consider a great many things in C++ that would not have needed consideration in C since C avoided the slippery slope of syntactic sugar that C++ took.
But the C++ solution is transparent to the user. You can write entire useful programs that use std:: containers willy-nilly, all propagating their allocators automatically and recursively, without you having to lift a finger, because all the steps you've mentioned have been turned into a reusable library, once.
I absolutely agree - your chain of reasoning follows as well.
It doesn't seem like it at first, but the often praised constructor/destructor is actually a source of incredible complexity, probably more than virtual.
You need something like std::launder in any systems language for certain situations, it isn’t a C++ artifact.
Before C++ added it we relied on undefined behavior that the compilers agreed to interpret in the necessary way if and only if you made the right incantations. I’ve seen bugs in the wild because developers got the incantations wrong. std::launder makes it explicit.
For the broader audience because I see a lot of code that gets this wrong, std::launder does not generate code. It is a compiler barrier that blocks constant folding optimizations of specific in-memory constants at the point of invocation. It tells the compiler that the constant it believes lives at a memory address has been modified by an external process. In a C++ context, these are typically restricted to variables labeled ‘const’.
This mostly only occurs in a way that confuses the compiler if you are doing direct I/O into the process address space. Unless you are a low-level systems developer it is unlikely to affect you.
If you are doing something equivalent to placement new on top of existing objects, the compiler often sees that. If that is your case you can avoid it in most cases. That is not what std::launder is for. It is for an exotic case.
std::launder is a tool for object instances that magically appear where other object instances previously existed but are not visible to the compiler. The typical case is some kind of DMA like direct I/O. The compiler can’t see this at compile time and therefore assumes it can’t happen. std::launder informs the compiler that some things it believes to be constant are no longer true and it needs to update its priors.
Great article. Modern C++ has come a really long way. I think lots of people have no idea about the newer features of the standard library and how much they minimize footguns.
Lambdas, a modern C++ feature, can borrow from the stack and escape the stack. (This led to one of the more memorable bugs I've been part of debugging.) It's hard to take any claims about modern C++ seriously when the WG thought this was an acceptable feature to ship.
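A sketch of that failure mode (hypothetical code; it typically compiles without complaint):

    #include <functional>
    #include <iostream>

    std::function<int()> make_counter() {
        int count = 0;
        return [&count] { return ++count; };  // captures a stack variable by reference
    }  // count dies here, but the lambda escapes with a dangling reference

    int main() {
        auto counter = make_counter();
        std::cout << counter() << "\n";  // use-after-return: undefined behavior
    }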
Capturing lambdas are no different from handwritten structures with operator() ("functors"), therefore it makes no sense castrating them.
Borrowing from the stack is super useful when your lambda also lives on the stack; stack escape is a problem, but it can be made harder by having templates take Fn& instead of const Fn& or Fn&&, or just a plain function pointer.
Like, I'm not god's gift to programming or anything, but I'm decently good at it, and I wrote a use-after-return bug due to a lambda reference last week.
I'm glad, but my problem is with the claim that modern C++ is safer. They added new features that are very easy to misuse.
Meanwhile in Rust you can freely borrow from the stack in closures, and the borrow checker ensures that you'll not screw up. That's what (psychological) safety feels like.
Lambdas are syntactic sugar over functors, and it was possible all along to define a functor that stores a local address and then return it from the scope, thus leaving a dangling pointer. They don't introduce any new places for bugs to creep in, other than confusing programmers who are used to garbage-collected languages. That C++11 is safer than C++98 is still true, as this and other convenience features make it harder to introduce bugs from boilerplate code.
The ergonomics matter a lot. Of course a lambda is equivalent to a functor that stores a local reference, but making errors with lambdas requires disturbingly little friction.
In any case, if you want safety and performance, use Rust.
>making errors with lambdas requires disturbingly little friction
Not any less than other parts of the language. If you capture by reference, you need to mind your lifetimes. If you need something more dynamic, then capture by copy and use pointers as needed. It's unfortunate that the developer who introduced the bug you mentioned didn't keep that in mind, but this is not a problem that lambdas introduced; it's been there all along. The exact same thing would've happened if they had stored a reference to a dynamic object in another dynamic object. If the latter lives longer than the former, you get a dangling reference.
>In any case, if you want safety and performance, use Rust.
Personally, I prefer performance and stability. I've already had to fix broken dependencies multiple times after a new rustc version was released. Wake me up when the language is done evolving on a monthly basis.
This is like writing an article entitled "In Defense of Guns", and then belittling the fact it can kill by saying "You always have to track your bullets".[1]
[1] Not me making this up - I started getting into guns and this is what people say.
To me it's as if someone releases a new gun model and people single that gun out and complain that if you shoot someone with it they may die. Like it's a critique of guns as a concept not of that particular one.
In a complete tangent I think that "smart guns" that only let you shoot bullseye targets, animals and designated un-persons are not far off.
It's worse. The day I discovered that std::array is explicitly not range/bounds checked by default, I really wanted to write some angry letters to the committee members.
Why go through all the trouble to make a better array, and require the user to call a special .at() function to get range checking rather than the other way around? I promptly went into my standard library and reversed that decision, because if I'm going to the trouble of using a C++ array class, it had better damn well give me a tiny bit of additional protection. The .at() call should have been the version that reverted to C array behavior without the bounds checking.
And it's these kinds of decisions repeated over and over. I get that it's a committee, and some of the decisions won't be the best, but by 2011 everyone had already been complaining about memory safety issues for 15+ years; was there really not enough will on the committee to recognize that a big reason for using C++ over C was the ability of the language to protect against some of the sharper edges of C?
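To make the complaint concrete, a small sketch of the asymmetry:

    #include <array>
    #include <iostream>
    #include <stdexcept>

    int main() {
        std::array<int, 3> a{1, 2, 3};

        // int x = a[3];  // out of bounds, no check: undefined behavior
        try {
            int y = a.at(3);  // out of bounds: throws std::out_of_range
            std::cout << y << '\n';
        } catch (const std::out_of_range& e) {
            std::cout << "caught: " << e.what() << '\n';
        }
    }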
>Why go through all the trouble to make a better array, and require the user to call a special .at() function to get range checking rather than the other way around?
Because the point was not to make an array type that's safe by default, but rather to make an array type that behaves like an object, and can be returned, copied, etc. I mean, I agree with you, I think operator[]() should range-check by default, but you're simply misunderstanding the rationale for the class.
Good news! Contracts were approved for C++26, so they should be in compilers by like 2031, and then you can configure arrays and vectors to abort on out-of-bounds errors instead of corrupting your program.
Let no one accuse the committee of being unresponsive.
Yeah, it's great that the C++ community starts to take safety in consideration, but one has to admit that safety always comes as the last priority, behind compatibility, convenience, performance and expressiveness.
I eagerly await the day when they do away with the distinction between ".cpp" and ".hpp" files and the textual substitution nature of "#include" and replace them all with a proper module system.
It's hard enough to get programmers to care enough about how their code affects build times. Modules make it impossible for them to care, and will lead to horrible problems when building large projects.
Good article overall. There's one part I don't really agree with:
> Here’s a rule of thumb I like to follow for C++: make it look as much like C as you possibly can, and avoid using too many advanced features of the language unless you really need to.
This has me scratching my head a bit. In spite of C++ being nearly a superset of C, they are very different languages, and idiomatic C++ doesn't look very much like C. In fact, I'd argue that most of the stuff C++ adds to C allows you to write code that's much cleaner than the equivalent C code, if you use it the intended way. The one big exception I can think of is template metaprogramming, since the template code can be confusing, but if done well, the downstream code can be incredibly clean.
There's an even bigger problem with this recommendation, which is how it relates to something else talked about in the article, namely "safety." I agree with the author that modern C++ can be a safe language, with programmer discipline. C++ offers a very good discipline to avoid resource leaks of all kinds (not just memory leaks), called RAII [1]. The problem here is that C++ code that leverages RAII looks nothing like C.
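For example, a minimal RAII sketch (a hypothetical file wrapper, not from the article): the destructor releases the resource on every exit path, which is exactly the shape C code never takes:

    #include <cstdio>

    class File {
        std::FILE* f_;
    public:
        explicit File(const char* path) : f_(std::fopen(path, "r")) {}
        ~File() { if (f_) std::fclose(f_); }  // released on every exit path
        File(const File&) = delete;           // forbid accidental double-close
        File& operator=(const File&) = delete;
        std::FILE* get() const { return f_; }
    };

    void use() {
        File f("data.txt");  // acquired here
        // ... may return early or throw; no goto-cleanup chains needed ...
    }                        // closed here automatically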
Stepping back a bit, I feel there may be a more fundamental fallacy in this "C++ is Hard to Read" section in that the author seems to be saying that C++ can be hard to read for people who don't know the language well, and that this is a problem that should be addressed. This could be a little controversial, but in my opinion you shouldn't target your code to the level of programmers who don't know the language well. I think that's ultimately neither good for the code nor good for other programmers. I'm definitely not an expert on all the corners of C++, but I wouldn't avoid features I am familiar with just because other programmers might not be.
> Just using Rust will not magically make your application safe; it will just make it a lot harder to have memory leaks or safety issues.
You know, not sure I even agree with the memory leaks part. If you define a memory leak very narrowly as forgetting to free a pointer, this is correct. But in my experience working with many languages including C/C++, forgotten pointers are almost never the problem. You're gonna be dealing with issues involving "peaky" memory usage e.g. erroneously persistent references to objects or bursty memory allocation patterns. And these occur in all languages.
Nice thing about Rust is not that you cannot write such code; it is that you know exactly where you used peaky memory, or re-interpreted something as an unsigned integer, or replaced your program stack with something else. All of such cases require unsafe blocks in Rust. It is a screaming indicator that here be dragons; the "do not press this red button unless you intend to" label.
In C and C++ no such thing exists. It is walking in a minefield. It is worse with C++ because they piled so much stuff, nobody knows on the top of their head how a variable is initialized. The initialization rules are insane: https://accu.org/journals/overload/25/139/brand_2379/
So if you are doing peaky memory stuff with complex partially self-initializing code in C++, there are so many ways of blowing yourself and your entire team up without knowing which bit of code you committed years ago caused it.
> All of such cases require unsafe blocks in Rust.
It's true that Rust makes it much harder to leak memory compared to C and even C++, especially when writing idiomatic Rust -- if nothing else, simply because Rust forces the programmer to think more deeply about memory ownership.
But it's simply not the case that leaking memory in Rust requires unsafe blocks. There's a section in the Rust book explaining this in detail[1] ("memory leaks are memory safe in Rust").
> You're gonna be dealing with issues involving "peaky" memory usage e.g. erroneously persistent references to objects
I use Rust in a company in a team who made the C++ -> Rust switch for many system services we provide on our embedded devices. I use Rust daily. I am aware that leaking is actually safe.
C++'s design encourages that kind of allocation "leak" though. The article suggests using smart pointers, so let's take an example from there and mix make_shared with weak_ptr. Congrats, you've now extended the lifetime of the allocation to whatever the lifetime of your weak pointer is.
Rc::Weak does the same thing in Rust, but I rarely see anyone use it.
You are correct, it does not affect the lifetime of the pointed object (pointee).
But a shared_ptr manages at least 3 things: the control block lifetime, the pointee lifetime, and the lifetime of the underlying storage. The weak pointer shares ownership of the control block but not the pointee. As I understand it, this is because the weak_ptr needs to modify the control block to try and lock the pointer, and to do so it must ensure the control block's lifetime has not ended. (It manages the control block's lifetime by maintaining a weak count in the control block, but that is not really why it shares ownership.)
As a bonus trivia, make_shared uses a single allocation for both the control block and the owned object's storage. In this case weak pointers share ownership of the allocation for the pointee in addition to the control block itself. This is viewed as an optimization except in the case where weak pointers may significantly outlive the pointee and you think the "leaked" memory is significant.
It has no effect on the lifetime of the object, but it can affect the lifetime of the allocation. The reason is that weak_ptr needs the control block, which make_shared bundles into the same allocation as the object for optimization reasons.
Quoting cppreference [0]:
If any std::weak_ptr references the control block created by std::make_shared after the lifetime of all shared owners ended, the memory occupied by T persists until all weak owners get destroyed as well, which may be undesirable if sizeof(T) is large.
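A compact sketch of that scenario (the sizes and names are invented):

    #include <memory>

    struct Big { char buf[1 << 20]; };  // ~1 MiB payload

    std::weak_ptr<Big> w;

    void demo() {
        auto sp = std::make_shared<Big>();  // one allocation: control block + Big
        w = sp;
    }  // sp dies here: Big's destructor runs, but because make_shared fused the
       // allocations, the whole ~1 MiB block persists until `w` is destroyed.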
What's worse in languages like Go, which I love, is that you won't even immediately know how to solve this unless you have experience dropping down to do the things you would normally have just done in C or C++.
Even the Go authors themselves on Go's website display a process of debugging memory usage that looks identical to a workflow you would have done in C++. So, like, what's the point? Just use C++.
I really do think Go is nice, but at this point I would relegate it to the workplace where I know I am working with a highly variable team of developers who in almost all cases will have a very poor background in debugging anything meaningful at all.
> You can write simple and readable code in C++ if you want to. You can also write complex and unreadable code in C++ if you want to. It’s all about personal or team preference.
Problem is, if you’re using C++ for anything serious, like the aforementioned game development, you will almost certainly have to use the existing libraries; so you’re forced to match whatever coding style they chose to use for their codebase. And in the case of Unreal, the advice “stick to the STL” also has to be thrown out since Unreal doesn’t use the STL at all. If you could use vanilla, by-the-books C++ all the time, it’d be fine, but I feel like that’s quite rare in practice.
When NIST released its summary judgement against C++ and other languages it deemed memory unsafe, the problem became less technical and more about politics and perception. If you're looking to work within two arms' length of the US Government, you have to consider the "written in C++" label seriously, regardless of how correct the code may be.
The government is still happily commissioning new software projects that use C++. That may change in a few years, and some organizations may already be treating C++ more critically, but so far it's been unimpactful.
Nothing is going to happen for the foreseeable future, at least in the parts of government I tend to work with. It doesn't even come up in discussions of critical high-reliability systems. They are still quite happy to buy and use C++, so I expect that is what they will be getting.
C++ is the third programming language I ever tried to learn, I got bored and gave up on both Python and JavaScript after like a month, I now have 150 active hours of learning (I tracked) in C++, and I love it, somehow I find it mentally more stimulating, not sure why.
4th, after BASIC, assembly & Pascal. For a long, long time it was my default language for personal projects.
But just keeping track of all the features and the exotic ways they interact is a full time job. There are people who have dedicated entire lives to understanding even a tiny corner of the language, and they still don't manage.
Not worth the effort for me, there are other languages.
I would argue that a rewrite in C++ will make it a lot better. Rust does have memory-safety features nice enough that you should question why someone did a rewrite and stuck with C++, but that C++ rewrite would still fix a lot.
Fresh codebases have more bugs than mature codebases. Rewriting does not fix bugs; it is a fresh codebase that may have different bugs but extremely rarely fewer bugs than the codebase most of the bugs have been patched out of. Rewriting it in Rust reduces the bugs because Rust inherently prevents large categories of bugs. Rewriting it in C++ has no magical properties that initially writing it in C++ doesn't, especially if you weren't around for the writing of the original. Maybe if there is some especially persnickety known bug that would require a major rearchitecture and you plan to implement this architecture this time around, but that is not the modal bug, and the article is especially talking about memory safety bugs which are a totally separate kind of thing from that.
I think there is significant merit to rewriting a legacy C++ (or C) codebase in very modern C++. I've done it before and it not only greatly reduced the total amount of code but also substantially improved the general safety. Faster code and higher quality. Because both implementations are "C++", there is a much more incremental path and the existing testing more or less just works.
By contrast, my experience with C++ to Rust rewrites is that the inability of Rust to express some useful and common C++ constructs causes the software architecture to diverge to the point where you might as well just be rewriting it from scratch because it is too difficult to track the C++ code.
You left out the full argument (to be clear, I don't agree with the author, but in order to disagree with him you have to quote the full argument):
The author is arguing that the main reason rewriting a C++ codebase in Rust makes it more memory-safe is not because it was done in Rust, but because it benefits from lessons learned and knowledge about the mistakes done during the first iteration. He acknowledges Rust will also play a part, but that it's minor compared to the "lessons learned" factor.
I'm not sure I buy the argument, though. I think rewrites usually introduce new bugs into the codebase, and if it's not the exact same team doing the rewrite, then they may not be familiar with decisions made during the first version. So the second version could have as many flaws, or worse.
The argument could be made that rewriting in general can make a codebase more robust, regardless of the language. But that's not what the article does; it makes it specifically about memory safety:
> That’s how I feel when I see these companies claim that rewriting their C++ codebases in Rust has made them more memory safe. It’s not because of Rust, it’s because they took the time to rethink and redesign...
If they got the program to work at all in Rust, it would be memory-safe. You can't claim that writing in a memory-safe language is a "minor" factor in why you get memory safety. That could never be proven or disproven.
Did you read what they wrote? Their point is that doing a fresh rewrite of old code in any language will often inherently fix some old issues - including memory safety ones.
Because it's a re-write, you already know all the requirements. You know what works and what doesn't. You know what kind of data should be laid out and how to do it.
Because of that, a fresh re-write will often erase bugs (including memory ones) that were present originally.
That claim appears to contradict the second-system effect [0].
The observation is that the second implementation of a successful system is often much less successful: overengineered and bloated, due to programmer overconfidence.
On the other hand, I am unsure of how frequently the second-system effect occurs or the scenarios in which it occurs either. Perhaps it is less of a concern when disciplined developers are simply doing rewrites, rather than feature additions. I don't know.
I won't say the second-system effect doesn't exist, but I wouldn't say it applies every single time either. There's too many variables. Sometimes a rewrite is just a rewrite. Sometimes the level of bloat or feature-creep is tiny. Sometimes the old code was so bad that the rewrite fully offsets any bloat.
> Here’s a rule of thumb I like to follow for C++: make it look as much like C as you possibly can, and avoid using too many advanced features of the language unless you really need to.
Also, avoid using C++ classes while you're at it.
I recently had to go back to writing C++ professionally after a many-year hiatus. We code in C++23, and I got a book to refresh me on the basics as well as all the new features.
And man, doing OO in C++ just plain sucks. Needing to know things like copy and swap, and the Rule of Three/Five/Zero. Unless you're doing trivial things with classes, you'll need to know these things. If you don't need to know those things, you might as well stick to structs.
Now I'll grant C++23 is much nicer than C++03 (just import std!). But I was so happy to hear about optional, only to find out how fairly useless it is compared to pretty much every language that has implemented a "Maybe" type. Why add the feature if the compiler is not going to protect you from dereferencing without checking?
std::optional does have dereference checking, but it's a run-time check: std::optional<T>::value(). Of course, you'll get an exception if the optional is empty, because there's nothing else for the callee to do.
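In other words, something like this (a sketch; find_port is a made-up function):

    #include <iostream>
    #include <optional>

    std::optional<int> find_port() { return std::nullopt; }

    int main() {
        auto p = find_port();
        // int a = *p;  // unchecked dereference: undefined behavior when empty
        try {
            int b = p.value();  // checked: throws when empty
            std::cout << b << '\n';
        } catch (const std::bad_optional_access&) {
            std::cout << "no port configured\n";
        }
    }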
I really don't like Object Oriented programming anywhere. Maybe Smalltalk had it right, but I've not messed with Pharo or anything else enough to get a feel for it.
CLOS seems pretty good, but then again I'm a bit inexperienced. Bring back Dylan!
I write C++ daily and I really can't take seriously arguments that C++ is safe if you know what you're doing. Come on. Any sufficiently large and complex codebase tends to have bugs and footguns, and tools like memory-safe languages limit the blast radius considerably.
Smart pointers are neat, but they are not a solution for memory safety. Just using standard containers and iterators can lead to lots of footguns, as can utilities like string_view.
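The classic string_view footgun, as a sketch:

    #include <iostream>
    #include <string>
    #include <string_view>

    std::string make_name() { return "temporary"; }

    int main() {
        // The temporary std::string dies at the end of this statement,
        // leaving sv pointing into freed storage.
        std::string_view sv = make_name();
        std::cout << sv << '\n';  // undefined behavior: dangling view
    }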
This reads the same way as any other 'defense', 'sales pitch', or what have you, but from a Rust evangelist. The author likes to use C++ and now he must explain to the world why his decision is okay/correct/good/etc.. If you like it that much, just use the thing; no one actually cares.
The safety part in this article is incorrect. There's a Google doc somewhere describing an internal experiment where Google determined that safety cannot be achieved in C++ without an owning reference (essentially what Rust has).
I believe most C++ gripes are a classic case of PEBKAC.
One of the most common complaints is the lack of a package manager. I think this stems from a fundamental misunderstanding of how the ecosystem works. Developers accustomed to language-specific dependency managers like npm or pip find it hard to grasp that for C++, the system's package manager (apt, dnf, brew) is the idiomatic way to handle dependencies.
Another perpetual gripe is that C++ is bad because it is overly complex and baroque, usually from C folks like Linus Torvalds[1]. It's pretty ironic, considering that the very compiler they use for C (GCC) is written in C++, not C.
> find it hard to grasp that for C++, the system's package manager (apt, dnf, brew) is the idiomatic way to handle dependencies.
It's really not about being hard to grasp. Once you need a different dependency version than the system provides, you can't easily do it. (Apart from manual copies) Even if the library has the right soname version preventing conflicts (which you can do in C, but not really C++ interfaces), you still have multiple versions of headers to deal with. You're losing features by not having a real package manager.
> Developers accustomed to language-specific dependency managers like npm or pip find it hard to grasp that for C++, the system's package manager (apt, dnf, brew) is the idiomatic way to handle dependencies.
Okay, but is that actually a good idea? Merely saying that something is idiomatic isn't a counterargument to an allegation that the ecosystem has converged on a bad idiom.
For software that's going to be distributed through that same package manager, yes, sure, that's the right way to handle dependencies. But if you're distributing your app in a format that makes the dependencies self-contained, or not distributing it at all (just running it on your own machines), then I don't see what you gain from letting your operating system decide which versions of your dependencies to use. Also this doesn't work if your distro doesn't happen to package the dependency you need. Seems better to minimize version skew and other problems by having the files that govern what versions of dependencies to use (the manifest and lockfile) checked into source control and versioned in lockstep with the application code.
Also, the GCC codebase didn't start incorporating C++ as an implementation language until eight years after Linus wrote that message.
GCC was originally written in GNU C. Around GCC 4.9, its developers decided to switch to a subset of C++ to use certain features, but if you look at the codebase, you will see that much of it is still GNU C, compiled as GNU C++.
There is nothing you can do in C++ that you cannot do in C due to Turing Completeness. Many common things have ways of being done in C that work equally well or even better. For example, you can use balanced binary search trees in C without type errors creating enormous error messages from types that are sentences if not paragraphs long. Just grab BSD’s sys/tree.h, illumos’ libuutil or glib for some easy to use balanced binary search trees in C.
> There is nothing you can do in C++ that you cannot do in C due to Turing Completeness.
While this is technically true, a more satisfying rationale is provided by Stroustrup here[0].
> Many common things have ways of being done in C that work equally well or even better. For example, you can use balanced binary search trees in C without type errors creating enormous error messages from types that are sentences if not paragraphs long. Just grab BSD’s sys/tree.h, illumos’ libuutil or glib for some easy to use balanced binary search trees in C.
Constructs such as sys/tree.h[1] replicate the functionality of C++ classes and templates via the C macro processor. While they are quite useful, asserting that macro-based definitions provide the same type safety as C++ types is simply not true.
As to whether macro use results in "creating enormous error messages" or not, that depends on the result of the textual substitution. I can assure you that I have seen reams of C compilation error messages due to invalid macro definitions and/or usage.
I'm not sure what I feel about the article's point on boost. It does contribute a lot to the standard library and does provide some excellent libraries, like boost.Unordered
I'm old enough to recall when boost first came out, and when it matured into a very nice library. What's happened in the last 15 years that boost is no longer something I would want to reach for?
Qt is... fine... as long as you're willing to commit and use only Qt instead of the standard library. It's from before the STL came out, so the two don't mesh together really at all.
I use boost and Qt but completely disagree. Every new version of boost brings extremely useful libraries that will never be in std: boost.pfr was a complete game changer, boost.mp11 ended the metaprogramming framework wars, there's also the recently added support for MQTT, SQL, etc. Boost.Beast is now the standard http and websocket client/server in c++. Boost.json has a simple API and is much more performant than nlohmann. Etc etc.
The one thing I'll say here is age of the language really is and always has been a superficial argument; it's only six years apart from Python, and it's far less controversial of a language choice: https://en.wikipedia.org/wiki/History_of_Python .
Either way, it's hard not to draw parallels between all the drama in US politics and the arguments about language choice sometimes; it feels like both sides lack respect for the other, and it makes things unnecessarily tense.
> you can write perfectly fine code without ever needing to worry about the more complex features of the language
Not really because of undefined behaviour. You must be aware of and vigilant about the complexities of C++ because the compiler will not tell you when you get it wrong.
I would argue that Rust is at least in the same complexity league as C++. But it doesn't matter because you don't need to remember that complexity to write code that works properly (almost all of the time anyway, there are some footguns in async Rust but it's nothing on C++).
> Now is [improved safety in Rust rewrites] because of Rust? I’d argue in some small part, yes. However, I think the biggest factor is that any rewrite of an existing codebase is going to yield better results than the original codebase.
A factor, sure. The biggest? Doubtful. It isn't only Rust's safety that helps here, it's its excellent type system.
> But here’s the thing: all programming languages are unsafe if you don’t know what you’re doing.
Somehow managed to fit two fallacies in one sentence!
1. The fallacy of the grey - no language is perfect therefore they are all the same.
2. "I don't make mistakes."
> Just using Rust will not magically make your application safe; it will just make it a lot harder to have memory leaks or safety issues.
Not true. As I said already Rust's very strong type system helps to make applications less buggy even ignoring memory safety bugs.
> Yes, C++ can be made safer; in fact, it can even be made memory safe. There are a number of libraries and tools available that can help make C++ code safer, such as smart pointers, static analysis tools, and memory sanitizers
lol
> Avoid boost like the plague.
Cool, so the ecosystem isn't confusing but you have to avoid one of the most popular libraries. And Boost is fine anyway. It has lots of quite high quality libraries, even if they do love templates too much.
> Unless you are writing a large and complex application that requires the specific features provided by Boost, you are better off using other libraries that are more modern and easier to use.
Uhuh what would you recommend instead of Boost ICL?
I guess it's a valiant attempt but this is basically "in defense of penny farthings" when the safety bicycle was invented.
> Just using Rust will not magically make your application safe; it will just make it a lot harder to have memory leaks or safety issues.
Even if we take this claim at face value, isn’t that great?
Memory safety is a HUGE source of bugs and security issues. So the author is hand-waving away a really really good reason to use Rust (or other memory safe by default language).
Overall I agree this seems a lot like "I like C++ and I'm good at it, so it's fine", with justifications created from there.
I think this is a case of two distinct populations being inappropriately averaged.
There are many high-level C++ applications that would probably be best implemented in a modern GC language. We could skip the systems language discussion entirely because it is weird that we are using one.
There are also low-level applications like high-performance database kernels where the memory management models are so different that conventional memory safety assumptions don’t apply. Also, their performance is incredibly tightly coupled to the precision of their safety models. It is no accident that these have proven to be memory safe in practice; they would not be usable if they weren’t. A lot of new C++ usage is in these areas.
Rust to me slots in as a way to materially improve performance for applications that might otherwise be well-served by Java.
Database kernels have some of the strictest resource behavior constraints of all software. Every one I have worked on in vaguely recent memory has managed memory. There is no dynamic allocation from the OS. Many invariants important to databases rely on strict control of resource behavior. An enormous amount of optimization is dependent on this, so performance-engineered systems generally don’t have issues with memory safety.
Modern database kernels are memory-bandwidth bound. Micro-managing the memory is a core mechanic as a consequence. It is difficult to micro-manage memory with extreme efficiency if it isn’t implicitly safe. Companies routinely run formal model checkers like TLA+ on these implementations. It isn’t a rando spaffing C++ code.
I’ve used PostgreSQL a lot but no one thinks of it as highly optimized.
This is true. But it has some weird gaps that make it difficult to express fundamental things in the low-level systems world without using a lot of “unsafe”. Or you can do it safely and sacrifice a lot of performance. I am a fan of formal verification and use it quite a lot but Rust is far more restrictive than formal verification requires.
Rust is a systems language but it is uncomfortable with core systems-y things like DMA because it breaks lifetime and ownership models, among many other well-known quirks as a systems language. Other verifiable safety models exist that don’t have these issues. C++, for better or worse, can deal with this stuff in a straightforward way.
> Yes, C++ can be unsafe if you don’t know what you’re doing
I feel like I always hear this argument for continuing to use C++.
I, on the other hand, want a language that doesn't make me feel like I'm walking a tightrope with every line of code I write. Not sure why people can't just admit the humans are not robots and will write incorrect code.
The article says "I think the biggest factor is that any rewrite of an existing codebase is going to yield better results than the original codebase.".
Yeah, sorry, but no, ask some long-term developers about how this often goes.
I've been a software developer for nearly 2 decades at this point, contributed to several rewrites and oversaw several rewrites of legacy software.
From my experience I can assure you that rewriting a legacy codebase to modern C++ will yield a better and safer codebase overall.
There are multiple factors that contribute to this, one of which is what I refer to as "lessons learnt": if you have a stable team of developers maintaining a legacy codebase, they will know where the problematic areas are and will be able to avoid re-creating them in a rewrite.
An additional factor to consider is that a lot of legacy C++ codebases cannot be upgraded to use modern language features like smart pointers. The value smart pointers provide in a full rewrite cannot be overstated.
Then there's also the factor, admittedly a bit anecdotal, that there are fewer C++ devs in general than there were 15 years ago, but those that stayed / survived are generally better and more experienced, with very few enthusiastic juniors coming in.
I'm sorry you did not enjoy the article though, but thank you for giving it your time and reading it that part I really appreciate.
It depends on the codebase. If the code base deserves to be a case study in how not to do programming, then a rewrite will definitely yield better results.
I once encountered this situation with C# code written by an undergraduate, rewrote it from scratch in C++ and got a better result. In hindsight, the result would have been even better in C, since I spent about 80% of my time fighting with C++ while trying to use every language feature possible. I had just graduated from college, and my code, while better, did a number of things wrong too (although far fewer, to my credit). I look back at it in hindsight and think less is more when it comes to language features.
I actually am currently maintaining that codebase at a health care startup (I left shortly after it was founded and rejoined not that long ago). I am incrementally rewriting it to use a C subset of C++ whenever I need to make a change to it. At some point, I expect to compile it as C and put C++ behind me.
It doesn't mention the horrific template error messages. I'd heard that this was an area targeted for improvement a while ago... Is it better these days?
Qualitatively better. C++20 'concepts' obviated the need for the arcane metaprogramming tricks responsible for generating the vast majority of that template vomit.
Now you mostly get an error to the effect of "constraint foo not satisfied by type bar" at the point of use that tells you specifically what needs to change about the type or value to satisfy the compiler.
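A small sketch of the difference, assuming a C++20 compiler:

    #include <concepts>

    // The constraint moves the error to the call site with a short
    // "constraint not satisfied" diagnostic, instead of pages of
    // instantiation backtrace from deep inside the template.
    template <std::integral T>
    T twice(T x) { return x + x; }

    int main() {
        twice(2);      // fine
        // twice(2.5); // error: double does not satisfy std::integral
    }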
My go-to for formatting would be clang-format, and for testing gtest. For more extensive checks (ones that involve the compiler), clang-tidy goes a long way.
Running unit tests with the address sanitizer and UB sanitizer enabled go a long way towards addressing most memory safety bugs. The kind of C++ you write then is a far cry from what the haters complain about with bad old VC6 era C++.
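For instance, a sketch of the kind of bug a sanitizer-instrumented test run flags immediately (compiled with something like c++ -fsanitize=address,undefined -g):

    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        int* p = &v[0];
        v.push_back(4);  // may reallocate, invalidating p
        return *p;       // ASan reports a heap-use-after-free right here
    }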
I think, if one of the most prominent C++ experts in the world (Herb Sutter), who chaired the C++ standards committee for 20+ years and has evangelized the language for even longer than that, decides that complexity in the language has gotten out of control and sits down to write a simpler and safer dialect, then that is indicative of a problem with the language.
My viewpoint on the language is that there are certain types of engineers who thrive in the complexity that is easy to arrive at in a C++ code base. These engineers are undoubtedly very smart, but, I think, lack a sense of aesthetics that I can never get past. Basically, the r/atbge of programming languages (Awful Taste But Great Execution).
I don't think there could be any purer of an expression of the Blub Paradox.
> Just use whatever parts of the language you like without worrying about what's most performant!
It's not about performant. It's about understanding someone else's code six months after they've been fired, and thus restricting what they can possibly have done. And about not being pervasively unsafe.
> "I don’t think C++ is outdated by any stretch of the imagination", "matter of personal taste".
Except of course for header files, forward declarations, Make, the true hell of C++ dependency management (there's an explicit exhortation not to use libraries near the bottom), a thousand little things like string literals actually being byte pointers no matter how thoroughly they're almost compatible with std::string, etc. And of course the pervasive unsafety. Yes, it sure was last updated in 2023, the number of ways of doing the same thing has been expanded from four to five but the module system still doesn't work.
> You can write unsafe code in Python! Rewriting always makes the code more safe whether it's in Rust or not!
No. Nobody who has actually used Rust can reasonably arrive at this opinion. You can write C++ code that is sound; Rust-fluent people often do. The design does not come naturally just because of the process of rewriting, this is an entirely ridiculous thing to claim. You will make the same sorts of mistakes you made writing it fresh, because you are doing the same thing as you were when writing it fresh. The Rust compiler tells you things you were not thinking of, and Rust-fluent people write sound C++ code because they have long since internalized these rules.
And the crack about Python is just stupid. When people say 'unsafe' and Rust in the same sentence, they are obviously talking about UB, which is a class of problem a cut above other kinds of bugs in its pervasiveness, exploitability, and ability to remain hidden from code review. It's 'just' memory safety that you're controlling, which according to Microsoft is 70% of all security related bugs. 70% is a lot! (plus thread safety, if this was not mentioned you know they have not bothered using Rust)
In fact the entire narrative of 'you'll get it better the second time' is nonsense, the software being rewritten was usually written for the first time by totally different people, and the rewriters weren't around for it or most of the bugfixes. They're all starting fresh, the development process is nearly the same as the original blank slate was - if they get it right with Rust, then Rust is an active ingredient in getting it right!
> Just use smart pointers!
Yes, let me spam angle brackets on every single last function. "Write it the way you want to write it" is the first point in the article, and here is the exact "write it this way" that it was critiquing. And you realistically won't do it on every function, so it is just a matter of time until one of the functions where you use regular references creates a problem.
> In fact the entire narrative of 'you'll get it better the second time' is nonsense, the software being rewritten was usually written for the first time by totally different people, and the rewriters weren't around for it or most of the bugfixes. They're all starting fresh, the development process is nearly the same as the original blank slate was - if they get it right with Rust, then Rust is an active ingredient in getting it right!
Yes, this is a serious flaw in the author's argument. Does he think the exact same team that built version 1.0 in C++ is the one writing 2.0 in Rust? Maybe that happens sometimes, I guess, but to draw a general lesson from that seems weird.
Python’s “there should be one obvious way to do it” slogan often collides with reality these days too, since the language sprawled into multiple idioms just like C++:
- printing: print("hi"), f-strings like f"hi {x}", .format(), % formatting, or concatenation with +
- loops: for i in range(n), list comprehensions [f(i) for i in seq], generator expressions (f(i) for i in seq), or map/filter/lambda
- unpacking: a, b = pair, tuple() casting, slicing, *args capture, or dictionary unpacking with **
- conditionals: if/else blocks, the one-line ternary x if cond else y, and/or short-circuit hacks, or match/case pattern matching
- default values: dict.get(k, default), x or default, try/except, or setdefault
- swapping variables: a, b = b, a, a temp var, tuple packing/unpacking, or simultaneous assignment
- joining strings: "".join(list), concatenation in a loop, reduce(operator.add, seq), or f-strings
- reading files: open().read(), iterating line by line with for line in f, pathlib.Path.read_text(), or with open(...) as f
- building lists: append in a loop, comprehensions, list(map(...)), or unpacking with [*a, *b]
- merging dictionaries: {**a, **b}, a | b (Python 3.9+), dict(a, **b), update(), or comprehensions
- equality and membership: ==, is, in, any(...), all(...), or chained comparisons
- function arguments: positional, by name, unpacked with * and **, or functools.partial
- indexed iteration: for i in range(len(seq)), for i, x in enumerate(seq), zip(range(n), seq), or itertools
- multiple return values: tuples, lists, dicts, namedtuples, dataclasses, or objects
- truthiness tests: if x:, if bool(x):, if len(x):, or if x != []:
Whew!
[1] https://victorpoughon.github.io/cppiceberg/
[2] https://mikelui.io/img/c++_init_forest.gif
If I'm writing a small utility or something the Makefile typically looks something like this:
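(A representative sketch; the file names and the sdl2 dependency are placeholders.)

    CXX      = c++
    CXXFLAGS = -std=c++20 -Wall -Wextra -O2 $(shell pkg-config --cflags sdl2)
    LDLIBS   = $(shell pkg-config --libs sdl2)

    # recipe lines must start with a tab
    tool: main.o util.o
    	$(CXX) -o $@ $^ $(LDLIBS)

    clean:
    	rm -f tool *.o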
Now you can see why things like IDEs became default tools for teaching students how to write C and C++, because there's no "open a text editor and then `c++ build file.cpp` to get output" for anything except hello world examples.
On Windows and OSX it's even easier - if you're okay writing only for those platforms.
It's more difficult to learn, and it seems convoluted for people coming from Python and Javascript, but there are a lot of advantages to not having package management and build tooling tightly integrated with the language or compiler, too.
Not sure how relevant the complaint "in order to use a tool, you need to learn how to use the tool" really is.
Or from the other side: not sure what I should think about the quality of the work produced by people who don't want to learn relatively basic skills... it does not take two PhDs to understand how to use pkg-config.
Frankly the idea that your compiler driver should not be a basic build system, package manager, and linker is an idea best left in the 80s where it belongs.
That's exactly my point: if you think that calling `cmake --build build` is "magic", then maybe you don't have the right profile to use C++ in the first place, because you will have to learn some harder concepts there (like... pointers).
To be honest, I find it hard to understand how a software developer can write code and still consider that command line instructions are "magic incantations". To me it's like saying that calling a function like `println("Some text, {}, {}", some_parameter, some_other_parameter)` is a "magic incantation". Calling a function with parameters counts as "the basics" to me.
Exactly: it makes many things nicer to use than the language package managers, e.g. when maintaining a Linux distribution.
But people generally don't know how one maintains a Linux distribution, so they can't really see the use-case, I guess.
C and C++ have an answer to the dependency problem, you just have to learn how to do it. It's not rocket science, but you have to learn something. Modern languages remove this barrier, so that people who don't want to learn can still produce stuff. Good for them.
Shai-Hulud malware attack: Tinycolor and over 40 NPM packages compromised (stepsecurity.io)
Maybe obstreperous dependency management ends up being the winning play in 2025 :)
What happens in practice is people end up writing their own insecure code instead of using someone else's insecure code. Of course, we can debate the tradeoffs of one or the other!
However, it's definitely wrong to say that the typical tools are "non-portable". The UNIX-style C++ toolchains work basically anywhere, including Windows, although I admit some of the tools require MSys/Cygwin. You can definitely use GNU Makefiles with pkg-config under MSys2 and have a fine experience. Needless to say, this also works on Linux, macOS, FreeBSD, Solaris, etc. More modern tooling like CMake and Ninja works perfectly fine on Windows, doesn't need any special environment like Cygwin or MSys, and can use your MSVC installation just fine.
I don't really think applying the mantra of Rust package management and build processes to C++ is a good idea. C++'s toolchain is amenable to many things that Rust and Cargo aren't. Instead, it'd be better to talk about why C++ sucks to use, and then try to figure out what steps could be taken to make it suck less. Like:
- Building C++ software is hard. There's no canonical build system, and many build systems are arcane.
This one really might be a tough nut to crack. The issue is that creating yet another system is bound to just cause xkcd 927. As it is, there are many popular ways to build, including GNU Make, GNU Autotools + Make, Meson, CMake, Visual Studio Solutions, etc.
CMake is the most obvious winner right now. It has achieved de facto standard status: it works on basically any operating system, and IDEs like CLion and Visual Studio 2022 have robust support for CMake projects.
Most importantly, building with CMake couldn't be much simpler. It looks like this:
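    cmake -B .build       # configure: generate the build system in .build
    cmake --build .build  # compile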
And you have a build in .build. I think this is acceptable. (A one-step build would be simpler, but this is definitely more flexible; I think it is very passable.) This does require learning CMake, and CMake lists files are definitely a bit ugly and sometimes confusing. Still, they are pretty practical and rather easy to get started with, so I think it's a clear win. CMake is the de facto way to go here.
- Managing dependencies in C++ is hard. Sometimes you want external dependencies, sometimes you want vendored dependencies.
This problem's even worse. CMake helps a little here, because it has really robust mechanisms for finding external dependencies. However, while robust, the mechanism is definitely a bit arcane; it has two modes, the legacy Find scripts mode, and the newer Config mode, and some things like version constraints can have strange and surprising behavior (it differs on a lot of factors!)
But sometimes you don't want to use external dependencies, like on Windows, where it just doesn't make sense. What can you really do here?
I think the most obvious thing to do is use vcpkg. As the name implies, it's Microsoft's solution to source-level dependencies. Using vcpkg with Visual Studio and CMake is relatively easy, and it can be configured with a couple of JSON files (and there is a simple CLI that you can use to add/remove dependencies, etc.) When you configure your CMake build, your dependencies will be fetched and built appropriately for your targets, and then CMake's find package mechanism can be used just as it is used for external dependencies.
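For illustration, a minimal manifest (vcpkg.json) can be as small as this; the project and package names are placeholders:

    {
      "name": "myapp",
      "version": "0.1.0",
      "dependencies": [ "fmt", "zlib" ]
    }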
CMake itself is also capable of vendoring projects within itself, and it's absolutely possible to support all three modalities of manual vendoring, vcpkg, and external dependencies. However, for obvious reasons this is generally not advisable. It's really complicated to write CMake scripts that actually work properly in every possible case, and many cases need to be prevented because they won't actually work.
All of that considered, I think the best existing solution here is CMake + vcpkg. When using external dependencies is desired, simply not using vcpkg is sufficient and the external dependencies will be picked up as long as they are installed. This gives an experience much closer to what you'd expect from a modern toolchain, but without limiting you from using external dependencies which is often unavoidable in C++ (especially on Linux.)
- Cross-compiling with C++ is hard.
In my opinion this is mostly not solved by the de facto toolchains. :)
It absolutely is possible to solve this. Clang is already better off than most of the other C++ toolchains in that it can handle cross-compiling with selecting cross-compile targets at runtime rather than build time. This avoids the issue in GCC where you need a toolchain built for each target triplet you wish to target, but you still run into the issue of needing libc/etc. for each target.
Both CMake and vcpkg technically do support cross-compilation to some extent, but I think it rarely works without some hacking around in practice, in contrast to something like Go.
If cross-compiling is a priority, the Zig toolchain offers a solution for C/C++ projects that includes both effortless cross-compiling as well as an easy to use build command. It is probably the closest to solving every (toolchain) problem C++ has, at least in theory. However, I think it doesn't really offer much for C/C++ dependencies yet. There were plans to integrate vcpkg for this I think, but I don't know where they went.
If Zig integrates vcpkg deeply, I think it would become the obvious choice for modern C++ projects.
I get that by not having a "standard" solution, C++ remains somewhat of a nightmare for people to get started in, and I've generally been doing very little C++ lately because of this. However I've found that there is actually a reasonable happy path in modern C++ development, and I'd definitely recommend beginners to go down that path if they want to use C++.
> Use a build system like make, you can't just `c++ build`

This is a strength not a weakness because it allows you to choose your build system independently of the language. It also means that you get build systems that can support compiling complex projects using multiple programming languages.
> Understand that C++ compilers by default have no idea where most things are, you have to tell them exactly where to search
This is a strength not a weakness because it allows you to organize your dependencies and their locations on your computer however you want and are not bound by whatever your language designer wants.
> Use an external tool that's not your build system or compiler to actually inform the compiler what those search paths are
This is a strength not a weakness because you are not bound to a particular way of how this should work.
> Oh also understand the compiler doesn't actually output what you want, you also need a linker
This is a strength not a weakness because now you can link together parts written in different programming languages which allows you to reuse good code instead of reinventing the universe.
> That linker also doesn't know where to find things, so you need the external tool to use it
This is a strength not a weakness for the reasons already mentioned above.
> Oh and you still have to use a package manager to install those dependencies to work with pkg-config, and it will install them globally. If you want to use it in different projects you better hope you're ok with them all sharing the same version.
This is a strength not a weakness because you can have fully offline builds including ways to distribute dependencies to air-gapped systems and are not reliant on one specific online service to do your job.
Also, all of this is a non-issue if you use a half-modern build system. Conflating the language, compiler, build system and package manager is one of the main reasons why I stay away from "modern" programming languages. You are basically arguing against the Unix philosophy of having different tools that work together, with each tool focusing on one specific task. This allows different tools to evolve independently and alternatives to exist, rather than a single tool that has to fit everyone.
Massive cope, there's no excuse for the lack of decent infrastructure. I mean, the C++ committee for years said explicitly that they don't care about infrastructure and build systems, so it's not really surprising.
Your move.
There are a lot of problems, but having to carefully construct the build environment is a minor one time hassle.
Then repeated foot guns going off, no toes left, company bankrupt and banking system crashed, again
I don't know if you're joking or just naïve, but CMake and the like are massive time sinks if you want anything beyond "here's a few source files, make me an application".
I've observed the existence in larger projects of "build engineers" whose sole job is to keep the project building on a regular cadence. These jobs predominantly seem to exist in C++ land.
You wish.
These jobs exist for companies with large monorepos in other languages too and/or when you have many projects.
Plenty of stuff to handle in big companies (directory ownership, Jenkins setup, in-company dependency management and release versioning, developer experience in general, etc.)
Wasn't CI invented to solve just this problem?
Your choice: do you have the most senior engineers spend time sporadically maintaining the build system, perhaps declaring fires to try to pay off tech debt, or hire someone full time, perhaps cheaper and with better expertise, dedicated to the task instead?
CI is an orthogonal problem but that too requires maintenance - do you maintain it ad-hoc or make it the official responsibility for someone to keep maintained and flexible for the team’s needs?
I think you think I’m saying the task is keeping the build green whereas I’m saying someone has to keep the system that’s keeping the build green going and functional.
The scenario you are describing does not make sense for the commonly accepted industry definition of "build system." It would make sense if, instead, the description was "application", "product", or "system."
Many software engineers use and interpret the phrase "build system" to be something akin to make[0] or similar solution used to produce executable artifacts from source code assets.
0 - https://man.freebsd.org/cgi/man.cgi?query=make&apropos=0&sek...
I’m not sure why you’re dismissing it as something else without knowing any of the details or presuming I don’t know what I’m talking about.
It seems to me that the people/committees who built C++ just spent decades inventing new and creative ways for developers to shoot themselves in the foot. Like, why does the language need to offer a hundred different ways to accomplish each trivial task (and 98 of them are bad)?
You get to choose between 25 flint-bladed axes, some of which are coated in modern plastic, when you really want a chainsaw.
This... doesn't really hold water. You have to learn about what the insane move semantics are (and the syntax for move ctors/operators) to do fairly basic things with the language. Overloaded operators like operator*() and operator<<() are widely used in the standard library so you're forced to understand what craziness they're doing under the hood. Basic standard library datatypes like std::vector use templates, so you're debugging template instantiation issues whether you write your own templated code or not.
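To make that concrete, here's roughly the flavor of boilerplate in question; a minimal sketch (the type and its names are mine, purely illustrative) of the move-constructor machinery you hit early on:

```cpp
#include <cstddef>
#include <utility>

// Hypothetical owning buffer, just to show the move-ctor syntax.
struct Buffer {
    std::size_t size = 0;
    int* data = nullptr;

    explicit Buffer(std::size_t n) : size(n), data(new int[n]) {}
    ~Buffer() { delete[] data; }

    // Move constructor: steal the pointer, leave the source empty.
    Buffer(Buffer&& other) noexcept : size(other.size), data(other.data) {
        other.size = 0;
        other.data = nullptr;
    }
    Buffer& operator=(Buffer&& other) noexcept {
        std::swap(size, other.size);
        std::swap(data, other.data);
        return *this;
    }

    // Copying disabled for brevity (the Rule of Five says handle all five).
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;
};

int main() {
    Buffer a(16);
    Buffer b = std::move(a);  // std::move is just a cast; the move ctor does the work
    return b.size == 16 ? 0 : 1;
}
```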
you don't need to understand what an overloaded operator is doing any more than you have to understand the implementation of every function you call, recursively
> everyone only uses 20% of C++, the problem is that everyone uses a different 20%
This is in large part also because of the committee, which prefers a hundred-line template monster like "can_this_call_that_v" to a language feature, probably thinking that by keeping something out of the language standard and offloading it to the library they are doing a good job.
I don't think move semantics are really that bad personally, and some languages move by default (isn't that Rust's whole thing?).
What I don't like is the implicit, ambiguous nature of "what does this line of code mean out of context?" in C++. Good luck!
I have hope for cppfront/Cpp2. https://github.com/hsutter/cppfront
(oh and I think you can write a whole book on the different ways to initialize variables in C++).
The result is you might be able to use C++ to write something new, and stick to a style that's readable... to you! But it might not make everyone else who "knows C++" instantly able to work on your code.
Languages that get it right: SQL, Lua, ML, Perl, PHP, Visual Basic.
But C++ doesn't have that problem. Sure, a separate operator would have been cleaner (but | is already used for bitwise or) but I have never seen any bug that resulted from it and have never felt it to be an issue when writing code myself.
Unfortunately, many languages allow `string + int`, which is quite problematic. Java is to blame for some of this.
And C++ is even worse, since literals are `const char[]`, which decays to a pointer.
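To illustrate the decay footgun (my example, not the commenter's): `+` on a string literal compiles fine but does pointer arithmetic, not concatenation:

```cpp
#include <iostream>
#include <string>

int main() {
    std::string s = "Hello" + std::string(", world");  // fine: real concatenation
    const char* p = "Hello" + 1;  // legal! pointer arithmetic: p now points at "ello"
    std::cout << s << '\n' << p << '\n';
    return 0;
}
```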
Languages okay by my standard but not yours include: Python, Ruby.
(NaN + 0.0) != 0.0 + NaN
Inf + -Inf != Inf
I suspect the algebraists would also be pissed if you took away their overloads for hypercomplex numbers and other exotic objects.
It does: |
That character was put in ASCII specifically for concatenation in PL/I.
Then came C.
Arithmetic addition and sequence concatenation are very very different.
Scala got this right as well (except strings, Java holdover)
Concatenation is ++
The first two are already used for bitwise and logical or and the third isn't available in ASCII so I still think overloading + was a reasonable choice and doesn't cause any actual problems IME.
Rust's move semantics are good! C++'s have a lot of non-obvious footguns.
> (oh and I think you can write a whole book on the different ways to initialize variables in C++).
Yeah. Default init vs value init, etc. Lots of footguns.
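A small sketch of that default-init vs value-init distinction (standard behavior; reading the indeterminate values would be UB):

```cpp
#include <iostream>

struct Point { int x, y; };

int main() {
    Point a;     // default-initialized: x and y are indeterminate garbage
    Point b{};   // value-initialized: x and y are zero
    int* p = new int;    // indeterminate
    int* q = new int();  // zero-initialized
    std::cout << b.x << ' ' << *q << '\n';  // prints "0 0"
    // Reading a.x or *p here would be undefined behavior.
    (void)a;
    delete p; delete q;
    return 0;
}
```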
This wasn't possible when they were added to the language and wasn't really transparent until C++17 or so but it has grown to be a useful safety feature.
right_shifted = (int)(value * pow(2, -bits) - 0.5)
Only if you have full control over what others are writing. In reality, you're going to read lots and lots of "clever" code. And I'm saying this as a person who has written a good amount of template metaprogramming code. Even for me, some code takes hours to understand, and afterwards I was usually able to cut 90% of it.
Most templates are much easier to read in comparison.
Some lvalue move copy constructor double rainbow, and you’re left wondering wtf
Oh boy!
This person needs control.
That is where I left C++, a better C
Faint praise
> Countless companies have cited how they improved their security or the amount of reported bugs or memory leaks by simply rewriting their C++ codebases in Rust. Now is that because of Rust? I’d argue in some small part, yes.
Just delete this. Even an hour's familiarity with Rust will give you a visceral understanding that "Rewrites of C++ codebases to Rust always yield more memory-safe results than before" is absolutely not because "any rewrite of an existing codebase is going to yield better results". If you don't have that, skip it, because it weakens the whole piece.
Maybe you can do that. But you are probably working in a team. And inevitably someone else in your team thinks that operator overloading and template metaprogramming are beautiful things, and you have to work with their code. I speak from experience.
However, if I may raise my counterpoint: I like to have a rule that C++ should be written as much as possible as if you were writing C, until you actually need some of its additional features and complexity.
Problem is when somebody on the team does not share this view though, that much is true :)
(Note: I'm not saying it is deeply flawed, just that this particular way of using it suggests so).
It's like a well-equipped workshop: just because you have access to a chainsaw but do not need it to build a table does not mean it's a bad workshop.
C is very barebones; languages like C++, C#, Rust and so on are not. Just because you don't need all of their features does not make those languages inherently bad.
Great question, or in this case counter-counterpoint, though.
How do you define “need” for extra features? C and C++ can fundamentally both do the same thing so if you’re going to write C style C++, why not just write C and avoid all of C++’s foot guns?
As for why not just go with C: you can write C++ fully as if it were C, but you can never turn C into C++.
You could also inherit a massive codebase old enough to need a prostate exam that was written by many people who wanted to prove just how much of the language spec they could use.
If selecting a job mostly under the Veil of Ignorance, I'll take a large legacy C project over C++ any day.
COBOL sticks around 66 years after its first release. Fortran is 68 years old and is still enormously relevant. Much, much more software was written in newer languages and has become so complex that replacements have become practically impossible (Fuchsia hasn't replaced Linux in Google products, Wayland isn't ready to replace X11, etc.)
These languages are not among the top contenders for new projects. They're a legacy problem, and are kept alive only by a slowly shrinking number of projects. It may take a while to literally drop to zero, but it's a path of exponential decay towards extinction.
C++ has strong arguments for sticking around as a legacy language for several too-big-to-rewrite C++ projects, but it's becoming less and less attractive for starting new projects.
C++ needs a better selling point than being a language that some old projects are stuck with. Without growth from new projects, it's only a matter of time until it's going to be eclipsed by other languages and relegated to shrinking niches.
Sure, there are still Fortran codes. But I can hardly imagine that Fortran still plays a big role in another 68 years from now on.
If by scientific ecosystems you mean people making prototypes for papers, then yes. But in commercial, industrial setting there is still no alternative for many of Matlab toolboxes, and as for Julia, as cool as it is, you need to be careful to distinguish between real usage and vetted marketing materials created by JuliaSim.
Re Matlab: I still see it thriving in the industry, for better or worse. Many engineers just seem to love it. I haven't seen many users of Julia yet. Where do you see those? I think that Julia deserves a fair chance, but it just doesn't have a presence in the fields I work in.
Maybe GNU Emacs has a larger percentage remaining intact; at least it retains some architectural idiosyncrasies from 1980s.
As for Fortran, modern Fortran is a pretty nice and rich language, very unlike the Fortran-77 I wrote in high school.
Perhaps AI will get reliable enough to pore over these double-digit-million-LOC codebases and convert them flawlessly, but that looks like it's decades off at this point.
We live in a special time in which general-purpose processing efficiency has been continually increasing. The future is full of domain-specific hardware (enabling the continued use of COBOL code written for slower mainframes). Maybe this will be a half measure like CUDA, or your C++ will just be a thin wrapper around a makeYoutube() ASIC.
Of course, if there is a breakthrough in general-purpose computing or a new killer app, it will wipe out all those products, which is why they don't just do it now.
Bit off more than it could chew, now we all have indigestion.
I think this is one of the worst (and most often repeated arguments) about C++. C and C++ are inherently unsafe in ways that trip up _all_ developers even the most seasoned ones, even when using ALL the modern C++ features designed to help make C++ somewhat safer.
* The author is confusing memory safety with other kinds of safety. This is evident from the fact that they say you can write unsafe code in GC languages like python and javascript. unsafe != memory unsafe. Rust only gives you memory safety, it won't magically fix all your bugs.
* The slippery slope trick. I've seen this so often: people say that because Rust has an unsafe keyword, it's the same as C/C++. The reason it's not is that in C/C++ you don't have any idea where to look for undefined behaviour, while in Rust the code at least points you to the unsafe blocks. The difference is one of degree, which for practical purposes makes a huge difference.
Whereas the other way around, porting a C++ program to Rust without knowing Rust, is challenging initially (to understand the borrow checker) but orders of magnitude easier to maintain.
Couple that with easily being able to `cargo add` dependencies and good language server features, and the developer experience in Rust blows C++ out of the water.
I will grant that change is hard for people. But when working on a team, Rust is such a productivity enhancer that it should be a no-brainer for anyone considering this decision.
The funniest thing happened when I needed to compile a C file as part of a little Rust project, and it turned out one of the _easiest_ ways I've experienced of compiling a tiny bit of C (on Windows) was to put it inside my Rust crate and have cargo do it via a C compiler crate.
I work on large C++ projects with 1-2 dozen third party C and C++ library dependencies, and they're all built from source (git submodules) as part of one CMake build.
It's not easy but it is fairly simple.
And out of all the tools and architecture I work with, C++ has been some of the least problematic. The STL is well-formed and easy to work with, creating user-defined types is easy, it's fast, and generally it has few issues when deploying. If there's something I need, there's a very high chance a C or C++ library exists to do what I need. Even crossing multiple major compiler versions doesn't seem to break anything, with rare exceptions.
The biggest problem I have with C++ is how easy it is to get very long compile times, and how hard it feels like it is to analyze and fix that on a 'macro' (whole project) level. I waste ungodly amounts of time compiling. I swear I'm going to be on deaths door and see GCC running as my life flashes by.
Some others that have been not-so-nice:
* Python - Slow enough to be a bottleneck semi-frequently, hard to debug especially in a cross-language environment, frequently has library/deployment/initialization problems, and I find it generally hard to read because of the lack of types, significant whitespace, and that I can't easily jump with an IDE to see who owns what data. Also pip is demon spawn. I never want to see another Wheel error until the day I die.
* VSC's IntelliSense - My god IntelliSense is picky. Having to manually specify every goddamn macro, one at a time in two different locations just to get it to stop breaking down is a nightmare. I wish it were more tolerant of having incomplete information, instead of just shutting down completely.
* Fortran - It could just be me, but IDEs struggle with it. If you have any global data it may as well not exist as far as the IDE is concerned, which makes dealing with such projects very hard.
* CMake - I'm amazed it works at all. It looks great for simple toy projects and has the power to handle larger projects, but it seems to quickly become an ungodly mess of strange comments and rules that aren't spelled out - and you have no way of stepping into it and seeing what it's doing. I try to touch it as infrequently as possible. It feels like C macros, in a bad way.
You really should not have global data. Modules are the way to go and have been since Fortran90.
> CMake - I'm amazed it works at all. It looks great for simple toy projects and has the power to handle larger projects, but it seems to quickly become an ungodly mess of strange comments and rules that aren't spelled out - and you have no way of stepping into it and seeing what it's doing. I try to touch it as infrequently as possible. It feels like C macros, in a bad way.
I like how you wrote my feelings so accurately :D
You can do much better in CMake if you put some effort into cleaning it up - I have little hope anyone will do this though. We have a hard time getting developers to clean up messes in production code and that gets a lot more care and love.
Merely parsing C++ code requires a higher time complexity than parsing C code (linear time parsers cannot be used for C++), which is likely where part of the long compile times originate. I believe the parsing complexity is related to templates (and the headers are full of them), but there might be other parts that also contribute to it. Having to deal with far more abstractions is likely another part.
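To make the context-sensitivity concrete, the classic illustration (my sketch): the same tokens parse differently depending on what names have been declared, and some declarations are ambiguous with expressions:

```cpp
#include <vector>

struct T {};

int main() {
    // 'T* x;' declares a pointer because T names a type; if T were a
    // variable, the same token shape would be a multiplication expression.
    T* x = nullptr;
    (void)x;

    // The "most vexing parse": the commented line declares a *function*
    // named v (taking an unnamed function-pointer parameter), not a vector.
    // std::vector<int> v(int());
    std::vector<int> v{3};  // brace initialization is unambiguous
    return static_cast<int>(v.size()) - 1;
}
```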
That said, I have been incrementally rewriting a C++ code base at a health care startup into a subset of C with the goal of replacing the C++ compiler with a C compiler. The closer the codebase comes to being C, the faster it builds.
What exactly do you mean by a "Wheel error"? Show me a reproducer and a proper error message and I'll be happy to help to the best of my ability.
By and large, the reason pip fails to install a package is because doing so requires building non-Python code locally, following instructions included in the package. Only in rare cases are there problems due to dependency conflicts, and these are usually resolved by creating a separate environment for the thing you're trying to install — which you should generally be doing anyway. In the remaining cases where two packages simply can't co-exist, this is fundamentally Python's fault, not the installer's: module imports are cached, and quite a lot of code depends on the singleton nature of modules for correctness, so you really can't safely load up two versions of a dependency in the same process, even if you hacked around the import system (which is absolutely doable!) to enable it.
As for finding significant whitespace (meaning indentation used to indicate code structure; it's not significant in other places) hard to read, I'm genuinely at a loss to understand how. Python has types; what it lacks is manifest typing, and there are many languages like this (including Haskell, whose advocates are famous for explaining how much more "typed" their language is than everyone else's). And Python has a REPL, the -i switch, and a built-in debugger in the standard library, on top of not requiring the user to do the kinds of things that most often need debugging (i.e. memory management). How can it be called hard to debug?
As for significant whitespace, the problem is that I'm often dealing with files with several thousand lines of code and heavily nested functions. It's very easy to lose track of scope in that situation. Am I in the inner loop, or this outer loop? Scrolling up and down, up and down to figure out where I am. Feels easier to make mistakes as well.
It works well if everything fits on one screen, it gets harder otherwise, at least for me.
As for types, I'm not claiming it's unique to Python. Just that it makes working with Python harder for me. Being able to see the type of data at a glance tells me a LOT about what the code is doing and how it's doing it - and Python doesn't let me see this information.
As for debugging, it's great if you have pure Python. Mix other languages in and suddenly it becomes pain. There's no way to step from another language into Python (or vice-versa), at least not cleanly and consistently. This isn't always true for compiled->compiled. I can step from C++ into Fortran just fine.
Find an IDE or extension which provides the nesting context on top of the editor. I think vs code has it built in these days.
> I'm often dealing with files with several thousand lines of code and heavily nested functions.
This is the problem. Also, a proper editor can "fold" blocks for you.
> Being able to see the type of data at a glance tells me a LOT about what the code is doing and how it's doing it - and Python doesn't let me see this information.
If you want to use annotations, you can, and have been able to since 3.0. Since 3.5 (see https://peps.python.org/pep-0484/; it's been over a decade now), there's been a standard for understanding annotations as type information, which is recognized by multiple different third-party tools and has been iteratively refined ever since. It just isn't enforced by the language itself.
> Mix other languages in and suddenly it becomes pain.... This isn't always true for compiled->compiled.
Sure, but then you have to understand the assembly that you've stepped into.
I can't fix that. I just work here. I've got to deal with the code I've got to deal with. And for old legacy code that's sprawling, I find braces help a LOT with keeping track of scope.
>Sure, but then you have to understand the assembly that you've stepped into.
Assembly? I haven't touched raw assembly since college.
How exactly are they more helpful than following the line of the indentation that you're supposed to have as a matter of good style anyway? Do you not have formatting tools? How do you not have a tool that can find the top of a level of indentation, but do have one that can find a paired brace?
>Assembly? I haven't touched raw assembly since college.
How exactly does your debugger know whether the compiled code it stepped into came from C++ or Fortran source?
I don't know what IDE GP might be using, but mixed-language debuggers for native code are pretty simple as long as you just want to step over. Adding support for Fortran to, say, Visual Studio wouldn't be a huge undertaking. The mechanism to detect where to put the cursor when you step into a function is essentially the same as for C and C++. Look at the instruction pointer, search the known functions for an address that matches, and jump to the file and line.
Ignore the fact that having more keywords in C++ precludes the legality of some C code being C++. (`int class;`)
void * implicit casting in C just works, but in C++ it must be an explicit cast (which is kind of funny considering all the confusing implicit behavior in C++).
C++20 does have C11's designated initialization now, which helps in some cases, but that was a pain for a long time.
enums and conversion between integers is very strict in C++.
`char * message = "Hello"` is valid C but not C++ (since you cannot mutate the pointed to string, it must be `const` in C++)
C99 introduced variadic macros that didn't become standard C++ until 2011.
C doesn't allow for empty structs. You can do it in C++, but sizeof(EmptyStruct) is 1. And if C lets you get away with it in some compilers, I'll bet it's 0.
Anyway, all of these things and likely more can ruin your party if you think you're going to compile C code with a C++ compiler.
Also don't forget if you want code to be C callable in C++ you have to use `extern "C"` wrappers.
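Sketch of the usual header dance (identifiers are hypothetical):

```cpp
/* mylib.h - usable from both C and C++ translation units */
#ifndef MYLIB_H
#define MYLIB_H

#ifdef __cplusplus
extern "C" {
#endif

/* C linkage: no C++ name mangling, so C callers can link against it */
int mylib_add(int a, int b);

#ifdef __cplusplus
}
#endif

#endif /* MYLIB_H */
```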
These are mostly inconsequential when using code other people write. It is trivial to mix C and C++ object files, and where the differences (in headers) do matter, they can be ifdefed away.
> void * implicit casting in C just works, but in C++ it must be an explicit cast (which is kind of funny considering all the confusing implicit behavior in C++).
This makes sense because void* -> T* is a downcast. I find the C behavior worse.
> enums and conversion between integers is very strict in C++.
As it should, but unscoped enums are promoted to integers the same way they are in C
> `char * message = "Hello"` is valid C but not C++
Code smell anyway, you can and should use char[] in both languages
You didn't mention the difference in inline semantics which IMO has more impact than what you cited
C++ designated initializers are slightly different in that the initialization order must match the declared member order. That is not required in C.
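For example (a sketch; valid C99, while the reordered case is ill-formed in C++20):

```cpp
struct Point { int x; int y; };

int main(void) {
    struct Point a = {.x = 1, .y = 2};  /* valid in both C99 and C++20 */
    /* Valid C, but ill-formed C++20 because the order doesn't match the
       declaration order of the members: */
    /* struct Point b = {.y = 2, .x = 1}; */
    return a.x - 1;
}
```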
Unless you use the C++20 [[no_unique_address]] attribute, in which case it is 0 (if used correctly).
Really appreciate and value everyone's feedback on this.
I can not overstate my excitement that my blog post has generated this much discussion and debate.
I wish I didn’t have to know about std::launder but I do
Problem 1: You might fail to initialize an object in memory correctly.
Solution 1: Constructors.
Problem 2: Now you cannot preallocate memory as in SLAB allocation since the constructor does an allocator call.
Solution 2: Placement new
Problem 3: Now the type system has led the compiler to assume your preallocated memory cannot change since you declared it const.
Solution 3: std::launder()
If it is not clear what I mean about placement new and const needing std::launder(), see this:
https://miyuki.github.io/2016/10/21/std-launder.html
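Condensed, the chain looks something like this (a sketch in the spirit of that article):

```cpp
#include <new>

struct Widget {
    const int id;  // const member: the compiler may cache reads through old pointers
    explicit Widget(int i) : id(i) {}
};

int main() {
    alignas(Widget) unsigned char slab[sizeof(Widget)];  // preallocated storage
    Widget* w = new (slab) Widget(1);   // placement new: construct in place
    int a = w->id;

    w->~Widget();
    new (slab) Widget(2);               // reuse the same storage
    // The compiler is allowed to assume the const 'id' reachable through the
    // old pointer never changed; std::launder "re-blesses" the pointer.
    Widget* w2 = std::launder(reinterpret_cast<Widget*>(slab));
    int b = w2->id;

    w2->~Widget();
    return a + b - 3;  // 1 + 2 - 3 == 0
}
```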
C has a very simple solution that avoids this chain. Use structured programming to initialize your objects correctly. You are not going to escape the need to do this with C++, but you are guaranteed to have to consider a great many things in C++ that would not have needed consideration in C since C avoided the slippery slope of syntactic sugar that C++ took.
Before C++ added it we relied on undefined behavior that the compilers agreed to interpret in the necessary way if and only if you made the right incantations. I’ve seen bugs in the wild because developers got the incantations wrong. std::launder makes it explicit.
For the broader audience because I see a lot of code that gets this wrong, std::launder does not generate code. It is a compiler barrier that blocks constant folding optimizations of specific in-memory constants at the point of invocation. It tells the compiler that the constant it believes lives at a memory address has been modified by an external process. In a C++ context, these are typically restricted to variables labeled ‘const’.
This mostly only occurs in a way that confuses the compiler if you are doing direct I/O into the process address space. Unless you are a low-level systems developer it is unlikely to affect you.
> Unless you are a low-level systems developer it is unlikely to affect you.
Making new data structure is common. Serializing classes into buffers is common.
std::launder is a tool for object instances that magically appear where other object instances previously existed but are not visible to the compiler. The typical case is some kind of DMA like direct I/O. The compiler can’t see this at compile time and therefore assumes it can’t happen. std::launder informs the compiler that some things it believes to be constant are no longer true and it needs to update its priors.
You don't want std::launder for any of that. If you must create object instances from random preexisting bytes you want std::bit_cast or https://en.cppreference.com/w/cpp/memory/start_lifetime_as.h...
Of course, the article doesn't mention lambdas.
Borrowing from stack is super useful when your lambda also lives in the stack; stack escaping is a problem, but it can be made harder by having templates take Fn& instead of const Fn& or Fn&&; that or just a plain function pointer.
Like, I'm not god's gift to programming or anything, but I'm decently good at it, and I wrote a use-after-return bug due to a lambda reference last week.
Meanwhile in Rust you can freely borrow from the stack in closures, and the borrow checker ensures that you'll not screw up. That's what (psychological) safety feels like.
In any case, if you want safety and performance, use Rust.
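For anyone who hasn't been bitten yet, a minimal version of the use-after-return shape mentioned above (my reconstruction, not the actual bug):

```cpp
#include <functional>

// Returns a closure that captured a local by reference: the int is gone
// by the time the caller invokes the lambda.
std::function<int()> make_counter() {
    int count = 0;
    return [&count] { return ++count; };  // dangling capture!
}

int main() {
    auto f = make_counter();
    return f();  // undefined behavior: reads (and writes) a dead stack slot
}
```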
Not any less than other parts of the language. If you capture by reference you need to mind your lifetimes. If you need something more dynamic, then capture by copy and use pointers as needed. It's unfortunate that the developer who introduced that bug you mentioned didn't keep that in mind, but this is not a problem that lambdas introduced; it's been there all along. The exact same thing would've happened if they had stored a reference to a dynamic object in another dynamic object. If the latter lives longer than the former, you get a dangling reference.
>In any case, if you want safety and performance, use Rust.
Personally, I prefer performance and stability. I've already had to fix broken dependencies multiple times after a new rustc version was released. Wake me up when the language is done evolving on a monthly basis.
[1] Not me making this up - I started getting into guns and this is what people say.
In a complete tangent I think that "smart guns" that only let you shoot bullseye targets, animals and designated un-persons are not far off.
Why go through all the trouble to make a better array, and then require the user to call a special .at() function to get range checking rather than the other way around? I promptly went into my standard library and reversed that decision, because if I'm going to the trouble of using a C++ array class, it had better damn well give me a tiny bit of additional protection. The .at() call should have been the version that reverted to C array behavior without the bounds checking.
And it's these kinds of decisions repeated over and over. I get that it's a committee, and some of the decisions won't be the best, but by 2011 everyone had already been complaining about memory safety issues for 15+ years. Was there not enough political will on the committee to recognize that a big reason for using C++ over C was the ability of the language to protect against some of the sharper edges of C?
Because the point was not to make an array type that's safe by default, but rather to make an array type that behaves like an object, and can be returned, copied, etc. I mean, I agree with you, I think operator[]() should range-check by default, but you're simply misunderstanding the rationale for the class.
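Concretely, the asymmetry being debated (standard behavior, no assumptions needed):

```cpp
#include <array>
#include <iostream>
#include <stdexcept>

int main() {
    std::array<int, 3> a{1, 2, 3};
    // a[7] = 0;        // unchecked: undefined behavior, may corrupt memory silently
    try {
        a.at(7) = 0;    // checked: throws std::out_of_range instead
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
    return 0;
}
```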
Let no one accuse the committee of being unresponsive.
The same applies to many of the other baseless complaints I'm seeing here; learn to use your tools, fools.
It's hard enough to get programmers to care enough about how their code affects build times. Modules make it impossible for them to care, and will lead to horrible problems when building large projects.
> Here’s a rule of thumb I like to follow for C++: make it look as much like C as you possibly can, and avoid using too many advanced features of the language unless you really need to.
This has me scratching my head a bit. In spite of C++ being nearly a superset of C, they are very different languages, and idiomatic C++ doesn't look very much like C. In fact, I'd argue that most of the stuff C++ adds to C allows you to write code that's much cleaner than the equivalent C code, if you use it the intended way. The one big exception I can think of is template metaprogramming, since the template code can be confusing, but if done well, the downstream code can be incredibly clean.
There's an even bigger problem with this recommendation, which is how it relates to something else talked about in the article, namely "safety." I agree with the author that modern C++ can be a safe language, with programmer discipline. C++ offers a very good discipline to avoid resource leaks of all kinds (not just memory leaks), called RAII [1]. The problem here is that C++ code that leverages RAII looks nothing like C.
Stepping back a bit, I feel there may be a more fundamental fallacy in this "C++ is Hard to Read" section in that the author seems to be saying that C++ can be hard to read for people who don't know the language well, and that this is a problem that should be addressed. This could be a little controversial, but in my opinion you shouldn't target your code to the level of programmers who don't know the language well. I think that's ultimately neither good for the code nor good for other programmers. I'm definitely not an expert on all the corners of C++, but I wouldn't avoid features I am familiar with just because other programmers might not be.
[1] https://en.cppreference.com/w/cpp/language/raii.html
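For readers who haven't seen it, a generic sketch of the RAII discipline (my example, not the article's): every resource is owned by an object whose destructor releases it on every exit path.

```cpp
#include <cstdio>
#include <memory>
#include <mutex>

std::mutex m;

void raii_style(const char* path) {
    std::lock_guard<std::mutex> lock(m);               // unlocks on every exit path
    auto file = std::unique_ptr<std::FILE, int (*)(std::FILE*)>(
        std::fopen(path, "r"), &std::fclose);          // closes on every exit path
    if (!file) return;  // no leak, no forgotten unlock: destructors run regardless
    // ... use file.get() ...
}

int main() {
    raii_style("/tmp/example.txt");
    return 0;
}
```

Note how little of this resembles idiomatic C, which is exactly the tension with the "write C++ like C" advice.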
Nitpick, I guess, but Windows 1.0 was released in November 1985:
https://en.m.wikipedia.org/wiki/Windows_1.0
You know, not sure I even agree with the memory leaks part. If you define a memory leak very narrowly as forgetting to free a pointer, this is correct. But in my experience working with many languages including C/C++, forgotten pointers are almost never the problem. You're gonna be dealing with issues involving "peaky" memory usage e.g. erroneously persistent references to objects or bursty memory allocation patterns. And these occur in all languages.
In C and C++ no such thing exists. It is walking in a minefield. It is worse with C++ because they piled on so much stuff that nobody knows off the top of their head how a variable is initialized. The initialization rules are insane: https://accu.org/journals/overload/25/139/brand_2379/
So if you are doing peaky memory stuff with complex partially self-initializing code in C++, there are so many ways of blowing yourself and your entire team up without knowing which bit of code you committed years ago caused it.
It's true that Rust makes it much harder to leak memory compared to C and even C++, especially when writing idiomatic Rust -- if nothing else, simply because Rust forces the programmer to think more deeply about memory ownership.
But it's simply not the case that leaking memory in Rust requires unsafe blocks. There's a section in the Rust book explaining this in detail[1] ("memory leaks are memory safe in Rust").
[1] https://doc.rust-lang.org/book/ch15-06-reference-cycles.html
> You're gonna be dealing with issues involving "peaky" memory usage e.g. erroneously persistent references to objects
I use Rust in a company in a team who made the C++ -> Rust switch for many system services we provide on our embedded devices. I use Rust daily. I am aware that leaking is actually safe.
Rc::Weak does the same thing in Rust, but I rarely see anyone use it.
But a shared_ptr manages at least 3 things: control block lifetime, pointee lifetime, and the lifetime of the underlying storage. The weak pointer shares ownership of the control block but not the pointee. As I understand it, this is because the weak_ptr needs to modify the control block to try to lock the pointer, and to do so it must ensure the control block's lifetime has not ended. (It manages the control block's lifetime by maintaining a weak count in the control block, but that is not really why it shares ownership.)
As a bonus trivia, make_shared uses a single allocation for both the control block and the owned object's storage. In this case weak pointers share ownership of the allocation for the pointee in addition to the control block itself. This is viewed as an optimization except in the case where weak pointers may significantly outlive the pointee and you think the "leaked" memory is significant.
Quoting cppreference [0]:
[0] https://en.cppreference.com/w/cpp/memory/shared_ptr/make_sha...
Even the Go authors themselves on Go's website display a process of debugging memory usage that looks identical to a workflow you would have done in C++. So, like, what's the point? Just use C++.
I really do think Go is nice, but at this point I would relegate it to the workplace where I know I am working with a highly variable team of developers who in almost all cases will have a very poor background in debugging anything meaningful at all.
Problem is, if you’re using C++ for anything serious, like the aforementioned game development, you will almost certainly have to use the existing libraries; so you’re forced to match whatever coding style they chose to use for their codebase. And in the case of Unreal, the advice “stick to the STL” also has to be thrown out since Unreal doesn’t use the STL at all. If you could use vanilla, by-the-books C++ all the time, it’d be fine, but I feel like that’s quite rare in practice.
But just keeping track of all the features and the exotic ways they interact is a full time job. There are people who have dedicated entire lives to understanding even a tiny corner of the language, and they still don't manage.
Not worth the effort for me, there are other languages.
I think Rust is probably doing the majority of the work unless you’re writing everything in unsafe. And why would you? Kinda defeats the purpose.
By contrast, my experience with C++ to Rust rewrites is that the inability of Rust to express some useful and common C++ constructs causes the software architecture to diverge to the point where you might as well just be rewriting it from scratch because it is too difficult to track the C++ code.
The author is arguing that the main reason rewriting a C++ codebase in Rust makes it more memory-safe is not because it was done in Rust, but because it benefits from lessons learned and knowledge about the mistakes done during the first iteration. He acknowledges Rust will also play a part, but that it's minor compared to the "lessons learned" factor.
I'm not sure I buy the argument, though. I think rewrites usually introduce new bugs into the codebase, and if it's not the exact same team doing the rewrite, then they may not be familiar with decisions made during the first version. So the second version could have as many flaws, or worse.
> That’s how I feel when I see these companies claim that rewriting their C++ codebases in Rust has made them more memory safe. It’s not because of Rust, it’s because they took the time to rethink and redesign...
If they got the program to work at all in Rust, it would be memory-safe. You can't claim that writing in a memory-safe language is a "minor" factor in why you get memory safety. That could never be proven or disproven.
I'm not defending TFA, I'm saying if you're going to reject the argument you must quote it in full, without leaving the main part.
Because it's a re-write, you already know all the requirements. You know what works and what doesn't. You know what kind of data should be laid out and how to do it.
Because of that, a fresh re-write will often erase bugs (including memory ones) that were present originally.
The observation is that the second implementation of a successful system is often much less successful, overengineered, and bloated, due to programmer overconfidence.
On the other hand, I am unsure of how frequently the second-system effect occurs or the scenarios in which it occurs either. Perhaps it is less of a concern when disciplined developers are simply doing rewrites, rather than feature additions. I don't know.
[0] https://en.wikipedia.org/wiki/Second-system_effect
Also, avoid using C++ classes while you're at it.
I recently had to go back to writing C++ professionally after a many-year hiatus. We code in C++23, and I got a book to refresh me on the basics as well as all the new features.
And man, doing OO in C++ just plain sucks. Needing to know things like copy and swap, and the Rule of Three/Five/Zero. Unless you're doing trivial things with classes, you'll need to know these things. If you don't need to know those things, you might as well stick to structs.
Now I'll grant C++23 is much nicer than C++03 (just import std!) I was so happy to hear about optional, only to find out how fairly useless it is compared to pretty much every language that has implemented a "Maybe" type. Why add the feature if the compiler is not going to protect you from dereferencing without checking?
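For instance (a sketch; nothing stops the unchecked dereference from compiling):

```cpp
#include <iostream>
#include <optional>

std::optional<int> find_port(bool configured) {
    if (configured) return 8080;
    return std::nullopt;
}

int main() {
    std::optional<int> port = find_port(false);
    // The compiler happily accepts this; it's undefined behavior, not an
    // error, when the optional is empty:
    // int p = *port;
    // .value() at least throws, but nothing forces you to check first,
    // unlike pattern matching on a real Maybe/Option type.
    std::cout << port.value_or(-1) << '\n';  // prints -1
    return 0;
}
```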
CLOS seems pretty good, but then again I'm a bit inexperienced. Bring back Dylan!
Smart pointers are neat but they are not a solution for memory safety. Just using standard containers and iterators can lead to lots of footguns, or utils like string_view.
One of the most common complaints is the lack of a package manager. I think this stems from a fundamental misunderstanding of how the ecosystem works. Developers accustomed to language-specific dependency managers like npm or pip find it hard to grasp that for C++, the system's package manager (apt, dnf, brew) is the idiomatic way to handle dependencies.
Another perpetual gripe is that C++ is bad because it is overly complex and baroque, usually from C folks like Linus Torvalds[1]. It's pretty ironic, considering that the very compiler they use for C (GCC) is written in C++ and not in C.
[1]: Torvalds' comment on C++ <https://harmful.cat-v.org/software/c++/linus>
It's really not about being hard to grasp. Once you need a different dependency version than the system provides, you can't easily do it. (Apart from manual copies) Even if the library has the right soname version preventing conflicts (which you can do in C, but not really C++ interfaces), you still have multiple versions of headers to deal with. You're losing features by not having a real package manager.
Okay, but is that actually a good idea? Merely saying that something is idiomatic isn't a counterargument to an allegation that the ecosystem has converged on a bad idiom.
For software that's going to be distributed through that same package manager, yes, sure, that's the right way to handle dependencies. But if you're distributing your app in a format that makes the dependencies self-contained, or not distributing it at all (just running it on your own machines), then I don't see what you gain from letting your operating system decide which versions of your dependencies to use. Also this doesn't work if your distro doesn't happen to package the dependency you need. Seems better to minimize version skew and other problems by having the files that govern what versions of dependencies to use (the manifest and lockfile) checked into source control and versioned in lockstep with the application code.
Also, the GCC codebase didn't start incorporating C++ as an implementation language until eight years after Linus wrote that message.
There is nothing you can do in C++ that you cannot do in C, due to Turing completeness. Many common things have ways of being done in C that work equally well or even better. For example, you can use balanced binary search trees in C without type errors creating enormous error messages from types that are sentences if not paragraphs long. Just grab BSD's sys/tree.h, illumos' libuutil or glib for some easy-to-use balanced binary search trees in C.
While this is technically true, a more satisfying rationale is provided by Stroustrup here[0].
> Many common things have ways of being done in C that work equally well or even better. For example, you can use balanced binary search trees in C without type errors creating enormous error messages from types that are sentences if not paragraphs long. Just grab BSD's sys/tree.h, illumos' libuutil or glib for some easy-to-use balanced binary search trees in C.
Constructs such as sys/tree.h[1] replicate the functionality of C++ classes and templates via the C macro processor. While they are quite useful, asserting that macro-based definitions provide the same type safety as C++ types is simply not true.
As to the whether macro use results in "creating enormous error messages" or not, that depends on the result of the textual substitution. I can assure you that I have seen reams of C compilation error messages due to invalid macro definitions and/or usage.
0 - https://www.stroustrup.com/compat_short.pdf
1 - https://cgit.freebsd.org/src/tree/sys/sys/tree.h
For example:
If you can restrict yourself to using the 'good' parts then it can be OK, but it's pulling in a huge dependency for very little gain these days.
Alternative libraries like Qt are more coherent and better thought out.
Either way, it's hard not to draw parallels between all the drama in US politics and the arguments about language choice sometimes; it feels like both sides lack respect for the other, and it makes things unnecessarily tense.
> you can write perfectly fine code without ever needing to worry about the more complex features of the language
Not really because of undefined behaviour. You must be aware of and vigilant about the complexities of C++ because the compiler will not tell you when you get it wrong.
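A sketch of why that vigilance is needed; both of these typically compile without any diagnostic at default warning levels:

```cpp
#include <climits>

int main() {
    int x = INT_MAX;
    int y = x + 1;         // signed overflow: undefined behavior, no diagnostic

    int arr[4] = {0, 1, 2, 3};
    int i = 4;
    int z = arr[i];        // out-of-bounds read: undefined behavior, no diagnostic
    return y + z;
}
```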
I would argue that Rust is at least in the same complexity league as C++. But it doesn't matter because you don't need to remember that complexity to write code that works properly (almost all of the time anyway, there are some footguns in async Rust but it's nothing on C++).
> Now is [improved safety in Rust rewrites] because of Rust? I’d argue in some small part, yes. However, I think the biggest factor is that any rewrite of an existing codebase is going to yield better results than the original codebase.
A factor, sure. The biggest? Doubtful. It isn't only Rust's safety that helps here, it's its excellent type system.
> But here’s the thing: all programming languages are unsafe if you don’t know what you’re doing.
Somehow managed to fit two fallacies in one sentence!
1. The fallacy of the grey - no language is perfect therefore they are all the same.
2. "I don't make mistakes."
> Just using Rust will not magically make your application safe; it will just make it a lot harder to have memory leaks or safety issues.
Not true. As I said already Rust's very strong type system helps to make applications less buggy even ignoring memory safety bugs.
> Yes, C++ can be made safer; in fact, it can even be made memory safe. There are a number of libraries and tools available that can help make C++ code safer, such as smart pointers, static analysis tools, and memory sanitizers
lol
> Avoid boost like the plague.
Cool, so the ecosystem isn't confusing but you have to avoid one of the most popular libraries. And Boost is fine anyway. It has lots of quite high quality libraries, even if they do love templates too much.
> Unless you are writing a large and complex application that requires the specific features provided by Boost, you are better off using other libraries that are more modern and easier to use.
Uhuh what would you recommend instead of Boost ICL?
I guess it's a valiant attempt but this is basically "in defense of penny farthings" when the safety bicycle was invented.
Even if we take this claim at face value, isn’t that great?
Memory safety is a HUGE source of bugs and security issues. So the author is hand-waving away a really really good reason to use Rust (or other memory safe by default language).
Overall I agree this seems a lot like "I like C++ and I'm good at it so it's fine", with justifications created from there.
There are many high-level C++ applications that would probably be best implemented in a modern GC language. We could skip the systems language discussion entirely because it is weird that we are using one.
There are also low-level applications like high-performance database kernels where the memory management models are so different that conventional memory safety assumptions don’t apply. Also, their performance is incredibly tightly coupled to the precision of their safety models. It is no accident that these have proven to be memory safe in practice; they would not be usable if they weren’t. A lot of new C++ usage is in these areas.
Rust to me slots in as a way to materially improve performance for applications that might otherwise be well-served by Java.
Can't agree there. Why wouldn't they be usable if they weren't memory safe?
Can you give me an example of this mythical "memory safe in practice" database?
Not Postgresql at least: https://www.postgresql.org/support/security/
Modern database kernels are memory-bandwidth bound. Micro-managing the memory is a core mechanic as a consequence. It is difficult to micro-manage memory with extreme efficiency if it isn’t implicitly safe. Companies routinely run formal model checkers like TLA+ on these implementations. It isn’t a rando spaffing C++ code.
I’ve used PostgreSQL a lot but no one thinks of it as highly optimized.
Rust is a systems language but it is uncomfortable with core systems-y things like DMA because it breaks lifetime and ownership models, among many other well-known quirks as a systems language. Other verifiable safety models exist that don’t have these issues. C++, for better or worse, can deal with this stuff in a straightforward way.
Nobody claims that Rust is a perfect language, the argument is that Rust is a better language than C++.
I feel like I always hear this argument for continuing to use C++.
I, on the other hand, want a language that doesn't make me feel like I'm walking a tightrope with every line of code I write. Not sure why people can't just admit the humans are not robots and will write incorrect code.
Yeah, sorry, but no, ask some long-term developers about how this often goes.
I've been a software developer for nearly 2 decades at this point, contributed to several rewrites and oversaw several rewrites of legacy software.
From my experience I can assure you that rewriting a legacy codebase to modern C++ will yield a better and safer codebase overall.
There are multiple factors that contribute to this, one of which is what I refer to as "lessons learnt": if you have a stable team of developers maintaining a legacy codebase, they will know where the problematic areas are and will be able to avoid re-creating them in a rewrite.
An additional factor to consider is that a lot of legacy C++ codebases cannot be upgraded to use modern language features like smart pointers. The value smart pointers provide in a full rewrite cannot be overstated.
Then there's also the factor, a bit anecdotal, that I find there are fewer C++ devs in general than there were 15 years ago, but those that stayed / survived are generally better and more experienced, with very few enthusiastic juniors coming in.
I'm sorry you did not enjoy the article, though, but thank you for giving it your time and reading it; that part I really appreciate.
I once encountered this situation with C# code written by an undergraduate, rewrote it from scratch in C++ and got a better result. In hindsight, the result would have been even better in C, since I spent about 80% of my time fighting with C++ while trying to use every language feature possible. I had just graduated from college, and while my code was better on the whole, I did a number of things wrong too (although far fewer, to my credit). I look back at it and think less is more when it comes to language features.
I actually am currently maintaining that codebase at a health care startup (I left shortly after it was founded and rejoined not that long ago). I am incrementally rewriting it to use a C subset of C++ whenever I need to make a change to it. At some point, I expect to compile it as C and put C++ behind me.
Now you mostly get an error to the effect of "constraint foo not satisfied by type bar" at the point of use that tells you specifically what needs to change about the type or value to satisfy the compiler.
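If it helps to picture it, a minimal C++20 concepts sketch (names are illustrative) that produces that style of error at the point of use:

```cpp
#include <concepts>

// Constrain the template up front instead of letting the error surface
// deep inside the instantiation.
template <typename T>
concept Addable = requires(T a, T b) {
    { a + b } -> std::convertible_to<T>;
};

template <Addable T>
T sum(T a, T b) { return a + b; }

struct NoPlus {};

int main() {
    // sum(NoPlus{}, NoPlus{});  // error: 'NoPlus' does not satisfy 'Addable'
    return sum(1, 2) - 3;        // fine: int satisfies Addable
}
```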
I just had a PR on an old C++ project, and spending 8 years in the web ecosystem has raised the bar around my tooling expectations.
Rust is particularly sweet to work with in that regard.
Running unit tests with the address sanitizer and UB sanitizer enabled go a long way towards addressing most memory safety bugs. The kind of C++ you write then is a far cry from what the haters complain about with bad old VC6 era C++.
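In case it's useful, a minimal repro you might run under the sanitizers (the flags in the comment are the standard GCC/Clang ones):

```cpp
// Compile with: g++ -g -fsanitize=address,undefined -fno-omit-frame-pointer bug.cpp
#include <vector>

int main() {
    std::vector<int> v = {1, 2, 3};
    // ASan typically reports a heap-buffer-overflow here at runtime;
    // without the sanitizer this is silent undefined behavior.
    return v[3];
}
```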
It is even if you do.
> But here’s the thing: all programming languages are unsafe if you don’t know what you’re doing.
But here's the thing, that's not a good argument because...
> will just make it a lot harder to have memory leaks or safety issues.
... in reality it's not "just". "Just makes it better" means it's better
My viewpoint on the language is that there are certain types of engineers who thrive in the complexity that is easy to arrive at in a C++ code base. These engineers are undoubtedly very smart, but, I think, lack a sense of aesthetics that I can never get past. Basically, the r/atbge of programming languages (Awful Taste But Great Execution).
the truth
On legacy code bases, sure. C++ rules in legacy C++ codebases. That’s kind of a given isn’t it? So that’s not a benefit. Just a fact.
Why performance-critical domains? Does C++ have a performance edge over Rust?
> Just use whatever parts of the language you like without worrying about what's most performant!
It's not about performant. It's about understanding someone else's code six months after they've been fired, and thus restricting what they can possibly have done. And about not being pervasively unsafe.
> "I don’t think C++ is outdated by any stretch of the imagination", "matter of personal taste".
Except of course for header files, forward declarations, Make, the true hell of C++ dependency management (there's an explicit exhortation not to use libraries near the bottom), a thousand little things like string literals actually being byte pointers no matter how thoroughly they're almost compatible with std::string, etc. And of course the pervasive unsafety. Yes, it sure was last updated in 2023, the number of ways of doing the same thing has been expanded from four to five but the module system still doesn't work.
> You can write unsafe code in Python! Rewriting always makes the code more safe whether it's in Rust or not!
No. Nobody who has actually used Rust can reasonably arrive at this opinion. You can write C++ code that is sound; Rust-fluent people often do. The design does not come naturally just because of the process of rewriting, this is an entirely ridiculous thing to claim. You will make the same sorts of mistakes you made writing it fresh, because you are doing the same thing as you were when writing it fresh. The Rust compiler tells you things you were not thinking of, and Rust-fluent people write sound C++ code because they have long since internalized these rules.
And the crack about Python is just stupid. When people say 'unsafe' and Rust in the same sentence, they are obviously talking about UB, which is a class of problem a cut above other kinds of bugs in its pervasiveness, exploitability, and ability to remain hidden from code review. It's 'just' memory safety that you're controlling, which according to Microsoft is 70% of all security related bugs. 70% is a lot! (plus thread safety, if this was not mentioned you know they have not bothered using Rust)
In fact the entire narrative of 'you'll get it better the second time' is nonsense, the software being rewritten was usually written for the first time by totally different people, and the rewriters weren't around for it or most of the bugfixes. They're all starting fresh, the development process is nearly the same as the original blank slate was - if they get it right with Rust, then Rust is an active ingredient in getting it right!
> Just use smart pointers!
Yes, let me spam angle brackets on every single last function. 'Write it the way you want to write it' is the first point in the article, and here is the exact 'write it this way' that was critiquing. And you realistically won't do it on every function so it is just a matter of time until one of the functions you use regular references with creates a problem.
Yes, this is a serious flaw in the author's argument. Does he think the exact same team that built version 1.0 in C++ is the one writing 2.0 in Rust? Maybe that happens sometimes, I guess, but to draw a general lesson from that seems weird.