The C++ Core Guidelines have existed for nearly 10 years now. Despite this, none of the three major compilers has an implementation that can enforce them. Profiles, which Bjarne et al have had years to work on, will not provide memory safety[0].
The C++ committee, including Bjarne Stroustrup, needs to accept that the language cannot be improved without breaking changes. However, it's already too late. Even if somehow they manage to make changes to the language that enforce memory safety, it will take a decade before the efforts propagate at the compiler level (a case in point is modules being standardised in 2020 but still not ready for use in production in any of the three major compilers).
> The C++ committee, including Bjarne Stroustrup, needs to accept that the language cannot be improved without breaking changes.
The example in the article starts with "Wow, we have unordered maps now!"
Just adding things modern languages have is nice, but doesn't fix the big problems.
The basic problem is that you can't throw anything out. The mix of old and new stuff leads to obscure bugs. The new abstractions tend to leak raw pointers, so that old stuff can be called.
C++ is almost unique in having hiding ("abstraction") without safety. That's the big problem.
amluto 33 days ago [-]
I find the unordered_map example rather amusing. C++’s unordered_map is, somewhat infamously, specified in an unwise way. One basically cannot implement it with a modern, high performance hash table for at least two reasons:
1. unordered_map requires some bizarre and not widely useful abilities that mostly preclude hash tables with probing:
2. unordered_map has fairly strict iteration and pointer invalidation rules that are largely incompatible with the implementations that turn out to be the fastest. See:
> References and pointers to either key or data stored in the container are only invalidated by erasing that element, even when the corresponding iterator is invalidated.
And, of course, this is C++, where (despite the best efforts of the “profiles” people), the only way to deal with lifetimes of things in containers is to write the rules in the standards and hope people notice. Rust, in contrast, encodes the rules in the type signatures of the methods, and misuse is deterministically caught by the compiler.
tialaramex 33 days ago [-]
Like std::vector, std::unordered_map also doesn't do a good job on reservation, I've never been entirely sure what to make of that - did they not care? Or is there some subtle reason why what they're doing made sense on the 1980s computers where this was conceived?
For std::vector it apparently just didn't occur to C++ people to provide the correct API; Bjarne Stroustrup claims the only reason to use a reservation API is to prevent reference and iterator invalidation. -shrug-
[std::unordered_map was standardised this century, but, the thing standardised isn't something you'd design this century, it's the data structure you'd have been shown in an undergraduate Data Structures class 40 years ago.]
amluto 33 days ago [-]
> For std::vector it apparently just didn't occur to C++ people to provide the correct API; Bjarne Stroustrup claims the only reason to use a reservation API is to prevent reference and iterator invalidation. -shrug-
Do you mean something like vector::reserve_at_least()? I suppose that, if you don’t care about performance, you might not need it.
FWIW, I find myself mostly using reserve in cases where I know what I intend to append and when I will be done appending to that vector forever afterwards.
tialaramex 33 days ago [-]
I'm not familiar with vector::reserve_at_least but assuming that's an API which reserves capacity without destroying the amortized constant time of the exponential growth built in to the type, yes, that.
amluto 33 days ago [-]
Oh, sorry, that’s an API that doesn’t exist. I was trying to understand what feature you wanted that didn’t exist.
IshKebab 34 days ago [-]
You absolutely can throw things out, and they have! Checked exceptions, `auto`, and breaking changes to operator== are the ones I know of. There were also some minor breaking changes to comparison operators in C++20.
They absolutely could say "in C++26 vector::operator[] will be checked" and add an `.at_unsafe()` method.
They won't though because the whole standards committee still thinks that This Is Fine. In fact the number of "just get good" people in the committee has probably increased - everyone with any brains has run away to Rust (and maybe Zig).
amluto 33 days ago [-]
> auto
It took me several reads to figure out that you probably meant ‘auto’ the storage class specifier. And now I’m wondering whether this was ever anything but a no-op in C++.
TuxSH 33 days ago [-]
> "in C++26 vector::operator[] will be checked"
Every major project that cares about perf and binary size would disable the option that compiler vendors would obviously provide, like -fno-exceptions.
Rust's memory and type system offer stronger guarantees, leading to better optimization of bounds checks, AFAIK.
There are more glaring issues to fix, like std::regex performance and so on.
imtringued 34 days ago [-]
"just get good" implies development processes that catch memory and safety bugs. Meaning what they are really saying between the lines is that the minimum cost of C++ development is really high.
Any C++ code without at least unit tests with 100% test coverage on with UB sanitizer etc, must be considered inherently defective and the developer should be flogged for his absurd levels of incompetence.
Then there is also the need for UB aware formal verification. You must define predicates/conditions under which your code is safe and all code paths that call this code must verifiably satisfy the predicates for all calls.
This means you're down to the statically verifiable subset of C++, which includes C++ that performs asserts at runtime, in case the condition cannot be verified at compile time.
How many C++ developers are trained in formal verification? As far as I am aware, they don't exist.
Any C++ developers reading this who haven't at least written unit tests with UB sanitizer for all of their production code should be ashamed of themselves. If this sounds harsh, remember that this is merely the logical conclusion of "just get good".
williamcotton 34 days ago [-]
Add ASan and friends as well as a sanitizer-less build for Valgrind!
ephaeton 34 days ago [-]
That explains very well why rust (to me) feels like C++ommittee-designed, thanks for that!
steveklabnik 34 days ago [-]
I don’t understand what you’re trying to say, could you elaborate?
ephaeton 33 days ago [-]
I'm sharing sentiment.
C++ feels like a language of bean counters.
Rust feels like a language of bean counters.
A lot of C++ folks I know went over to rust.
They were happy with C++ and it was the best thing since sliced bread.
They are now happy with rust and it is the best thing since sliced bread.
To me, languages have a, let's call it 'taste' for lack of a better word off the top of my head. It's that combining quality that pg called 'hacker's languages', such as C, and lisp, for example.
C++ feels like a bureaucratic monster with manual double bookkeeping, byzantine, baroque, up to outright weird and contradictory in places. Ever since rust was conceived, I gave it multiple shots to learn. When I was not thrown off by what I perceive as Java-style annotations, i.e., something orthogonal to the language itself where no one seems to have bothered to come to a consensus to be able to express this from the language itself, its general feel reminds me of something a C++ embracer will feel comfortable in. I.e., in pg's words, not a hacker's language, paired with a crusade of personal enlightenment. What used to be OO and GoF now is memory safety as-implemented-by-rust (note: not by borrow checker, we could've had this with cyclone, for example, more than two decades ago).
I have, in my original comment, marked this as my personal opinion and feeling, as is the above. I'm not arguing. I love FP and the idea of having a systems language with FP concepts working out to memory safety and higher level expression sounds like the holy grail of yester-me. I'm disappointed I couldn't find my professional salvation in rust with how uneasy I feel within the language. It's as if a suit and tie was forced on me, or a hawaii shirt and shorts (depending on your preference, imagine it's the thing you wouldn't voluntarily wear).
Now, if other folks also mirror my observation of how the folks flock from C++ to rust, you bet they take their mindset and pedestal with them to stand on and preach off of. At least those I know do, only their sermon changed from C++ to rust, the quality of their dogma remained constant.
steveklabnik 33 days ago [-]
> C++ feels like a language of bean counters.
> Rust feels like a language of bean counters.
Gotcha! I just didn't make the connection; when I read your comment I thought "how does a list of C++ features + the idea that people left because they didn't like where it's going mean that the two languages are the same?"
I wasn't interested in arguing either, I was just trying to understand what you meant, and now I do. Thank you for sharing.
IshKebab 33 days ago [-]
Rust was definitely created as an alternative to C++, but I don't really get your criticism. Unless you're just saying you don't like robust languages with very strong type systems or something?
Rust wasn't designed by committee.
ephaeton 33 days ago [-]
To me, Rust feels as if it had sprung from the same mind. Or in the case of C++, set of minds. Who have a common mindset. I sadly can't criticize rust's general design choices constructively. It's more of a public realization: "C++ mind-set compatible" might just be the quality that describes the specific aroma I dislike in this melange.
I'm fine with robust languages with very strong type systems, I think. Are Haskell, ML, F#, Scala in this set? Robust and very strongly typed enough? I don't dislike their taste, even though I think I've had enough scala, specifically, for this life time. If these aren't in the set you're thinking of, I'd like to know what makes up that set for you.
htfy96 37 days ago [-]
While I sort of agree with the complaint, personally I think the sweet spot of C++ in this ecosystem is still its great backward compatibility and marginal safety improvements.
I would never expect our 10M+ LOC performance-sensitive C++ code base to be formally memory safe, but so far only C++ has allowed us to maintain it for 15 years with partial refactors and minimal upgrade pain.
IshKebab 37 days ago [-]
I think at least Go and Java have as good backwards compatibility as C++.
Most languages take backwards compatibility very seriously. It was quite a surprise to me when Python broke so much code with the 3.12 release. I think it's the exception.
thbb123 36 days ago [-]
I don't know about Go, but Java is pathetic. I have 30-year-old C++ programs that work just fine.
However, an application that I had written to be backward compatible with java 1.4, 15 years ago, cannot be compiled today. And I had to make major changes to have it run on anything past java 8, ~10 years ago, I believe.
simoncion 34 days ago [-]
Compared to C++ (or even Erlang), Go is pretty bad.
$DAYJOB got burned badly twice on breaking Go behavioral changes delivered in non-major versions, so management created a group to carefully review Go releases and approve them for use.
All too often, Google's justification for breaking things is "Well, we checked the code in Google, and publicly available on Github, and this change wouldn't affect TOO many people, so we're doing it because it's convenient for us.".
olegkovalov 32 days ago [-]
> delivered in non-major versions
Can you clarify these 2 changes please? Cannot recall anything similar
simoncion 32 days ago [-]
Nope. It has been like five, maybe eight years, so I do not remember. There have been more since then, but after seeing how Google manages the Go project, I pay as little attention to it as I can possibly get away with... so I do not remember any details about them.
olegkovalov 32 days ago [-]
> There have been more since then
Doubt, again. Without minimal proof of the mentioned problems, continuing the dialogue doesn't make sense for me, thanks.
simoncion 30 days ago [-]
Kek.
pjmlp 33 days ago [-]
As long as you are lucky enough to not have used any stuff dropped in C++14, C++17, C++20 and C++23.
Java has had shit backwards compatibility for as long as I have had to deal with it. Maybe it's better now, but I have not forgotten the days of "you have to use exactly Java 1.4.15 or this app won't work"... with four different apps that each need their own different version of the JRE or they break. The only thing that finally made Java apps tolerable to support was the rise of app virtualization solutions. Before that, it was a nightmare and Java was justly known as "the devil's software" to everyone who had to support it.
ivan_gammel 34 days ago [-]
That was probably 1.4.2_15, because 1.4.15 did not exist. What you describe wasn’t a Java source or binary compatibility problem, it was a shipping problem and it did exist in C++ world too (and still exists - sharing runtime dependencies is hard). I remember those days too. Java 5 was released 20 years ago, so you describe some really ancient stuff.
Today we don’t have those limits on HDD space and can simply ship an embedded copy of JRE with the desktop app. In server environments I doubt anyone is reusing JRE between apps at all.
simoncion 34 days ago [-]
While "Well, just bundle in a copy of the whole-ass JRE" makes packaging Java software easier, it's still true that Java's backwards-compatibility is often really bad.
> ...sharing runtime dependencies [in C or C++] is hard...
Is it? The "foo.so foo.1.so foo.1.2.3.so" mechanism works really well, for libraries whose devs that are capable of failing to ship backwards-incompatible changes in patch versions, and ABI-breaking changes in minor versions.
ivan_gammel 34 days ago [-]
> Java's backwards-compatibility is often really bad.
“Often” is a huge exaggeration. I always hear about it, but never encountered it myself in 25 years of commercial Java development. It almost feels like some people are doing weird stuff and then blame the technology.
> Is it? The "foo.so foo.1.so foo.1.2.3.so"
Is it “sharing” or having every version of runtime used by at least one app?
simoncion 34 days ago [-]
> I always hear about it, but never encountered it myself in 25 years of commercial Java development.
Lucky you, I guess?
> Is it “sharing” or having every version of runtime used by at least one app?
I'm not sure what you're asking here? As I'm sure you're aware, software that links against dependent libraries can choose to not care which version it links against, or link against a major, minor, or patch version, depending on how much it does care, and how careful the maintainers of the dependent software are.
So, the number of SOs you end up with depends on how picky your installed software is, and how reasonable the maintainers of the libraries they use are.
ivan_gammel 34 days ago [-]
> So, the number of SOs you end up with depends on how picky your installed software is, and how reasonable the maintainers of the libraries they use are.
And that is the hard problem, because it's a people problem, not a technical one, and it's platform-independent. When some Java app was requiring a specific build of the JRE, it wasn't a limitation or requirement of the platform, but rather the choice of developers based on their expectations and level of trust. Windows still dominates the desktop space and it's not uncommon for C++ programs to install or require a specific version of a runtime, so you eventually have lots of them installed.
simoncion 32 days ago [-]
I don't see what Microsoft's and Sun's/Oracle's decision to encourage bundling all dependent software (including what would ordinarily be considered to be system libraries) with your program has to do with long-established practices in the *nix world.
I do agree that the world becomes much easier for a language/runtime maintainer if you get to ignore backwards-compatibility concerns because you've convinced your users to just pack in the entire system they built against with their program.
ivan_gammel 32 days ago [-]
First of all, *nix is not synonymous with C++ programming, so focusing on it specifically is bringing apples into a discussion about oranges. When Java is brought into a discussion about C++, I do expect that the variety of platforms is taken into account.
Second, you can have shared libraries/runtimes on Windows or in Java world. There exists versioning and *nix is not unique in that. Both are rather agnostic to the way you ship your app. In server Java unless you ship a container, you usually do not ship the JRE. On a desktop - it depends, shared JREs were always possible.
Third, DLL hell does exist in *nix environments too. The versioning mechanism you mention is a technical solution to a people problem and it doesn't work perfectly. Things do break if you relax your dependency constraints too much. How much - it depends on the developers and the amount of trust they put in maintainers. So you inevitably end up with multiple versions of the same library or runtime on the same machine, no matter what OS or cross-platform solution you use. It is not much different from shipping a bundle.
simoncion 30 days ago [-]
> First of all, *nix is not synonymous with C++ programming...
Agreed. This is obvious. You even mention it below:
> Second, you can have shared libraries/runtimes on Windows or in Java world. There exists versioning and *nix is not unique in that.
As you said, Windows has the same issue (because it's a fundamental problem of using libraries).
> Third, DLL hell does exist in *nix environments too.
IFF the publisher of the library fails to follow the decades-old convention that works really well.
> The versioning mechanism you mention is a technical solution to a people problem and it doesn't work perfectly.
Sure. Few things do. That's what pre-release testing is for.
> Things do break if you relax your dependency constraints too much.
Yep. That's why we test.
> So you inevitably end up with multiple versions of the same library ... on the same machine...
Sure. But they're not copies of the same version. That's the entire point of the symlink-based shared object naming scheme (and the equivalent in Windows (IIRC, it used to be called SxS, but consult the second bullet point in [0])).
The language is improving (?), although IME it's beside the point: I'm finding the new features less useful for everyday code. I'm perfectly happy with C++17/20 for 99% of the code I write. And keeping backwards compatibility for most real-world software is a feature, not a bug, ok? Breaking it would actually drive me away from the language.
pjmlp 37 days ago [-]
CLion, clang-tidy and the Visual C++ analysers do have partial support for the Core Guidelines, and they can be enforced.
Granted, it is only those that can be machine verified.
Office is using C++20 modules in production, Vulkan also has a modules version.
fooker 34 days ago [-]
>Despite this, not a single implementation in any of the three major compilers exists that can enforce them
Because no one wants it enough to implement it.
richard_todd 34 days ago [-]
I feel like a few decades ago, standards intended to standardize best practices and popular features from compilers in the field. Dreaming up standards that nobody has implemented, like what seems to happen these days, just seems crazy to me.
immibis 34 days ago [-]
It's bottom-up vs top-down design.
lifthrasiir 34 days ago [-]
Or it's better to have other languages besides C++ for that.
skywal_l 37 days ago [-]
I hoped Sean would open source Circle. It seemed promising, but it's been years and I don't see any tangible progress. Maybe I am not looking hard enough?
alexeiz 36 days ago [-]
He's looking to sell Circle. That must be the reason he's not open sourcing it.
fooker 34 days ago [-]
Huh, I guess that was the motivation all along.
janice1999 37 days ago [-]
I think Carbon is more promising to be honest. They are aiming for something production-ready in 2027.
bluGill 37 days ago [-]
Profiles will not provide perfect memory safety, but they go a long way to making things better. I have 10 million lines of C++. A breaking change (doesn't matter if you call it new C++ or Rust) would cost over a billion dollars - that is not happening. Which is to say I cannot use your perfect solution, I have to deal with what I have today and if profiles can make my code better without costing a full rewrite then I want them.
tialaramex 37 days ago [-]
Changes which re-define the language to have less UB will help you if you want safety/correctness and are willing to do some work to bring that code to the newer language. An example would be the initialization rules in (draft) C++26. Historically C++ was OK with you just forgetting to initialize a primitive before using it; that's Undefined Behaviour in the language, so if that happens, too bad, all bets are off. In C++26 that will be Erroneous Behaviour: there's some value in the variable, and while it's not always guaranteed to be valid (which can be a problem for, say, booleans or pointers), just looking at the value is no longer UB. If you forgot to initialize, say, an int or a char, that's fine, since any possible bit sequence is valid; what you did was an error, but it's not necessarily fatal.
If you're not willing to do any work then you're just stuck, nobody can help you, magic "profiles" don't help either.
But, if you're willing to do work, why stop at profiles? Now we're talking about a price and I don't believe that somehow the minimum assignable budget is > $1Bn
bluGill 37 days ago [-]
The first part is why I'm excited for future C++ - they are making things better.
The reason I like profiles is they are not all or nothing. I can put them in new code only, or maybe a single file that I'm willing to take the time to refactor. Or at least so I hope; it remains to be seen if that is how they work out. I've been trying to figure out how to make rust fit in, but std::vector<SomeVirtualInterface> is a real pain to wrap into rust and so far I haven't managed to get anything done there.
The $1 billion is realistic - this project was a rewrite of a previous product that became unmaintainable, and inflation-adjusted the cost was $1 billion. You can maybe adjust that down a little if we are more productive, but not much. You can adjust it down a lot if you can come up with a way to keep our existing C++ and just extend new features and fix the old code only where it really is a problem. The code we have written in C++98 (because that was all we had in 2010) still compiles with the latest C++23 compiler, and since there are no known bugs it isn't worth updating that code to the latest standards, even though it would be a lot easier to maintain (which we never do) if we did.
zozbot234 37 days ago [-]
> I can put them in new code only, or maybe a single file that I'm willing to take the time to refactor.
It's also expected that you'll be able to do this with Safe C++. Of course the interop with older C++ code will then still involve unsafety. But incremental improvement should be possible.
saagarjha 34 days ago [-]
This seems bad actually.
wakawaka28 37 days ago [-]
Enforcing style guidelines seems like an issue that should be tackled by non-compiler tools. It is hard enough to make a compiler without rolling in a ton of subjective standards (yes, the core guidelines are subjective!). There are lots of other tools that have partial support for detecting and even fixing code according to various guidelines.
gHosts 33 days ago [-]
It's part of a compiler ecosystem, i.e. the front end is shared.
See clang-tidy and clang analyzer for example.
ps: That's what I like most about the core guidelines, they are trying very hard to stick to guidelines (not rules) that pretty much uncontroversially make things safer _and_ can be checked automatically.
They're explicitly walking away from bikeshed painting like naming conventions and formatting.
wakawaka28 32 days ago [-]
The core guidelines aren't as subjective as other guidelines but they are still subjective. There is plenty of completely sound code out there that violates the core guidelines. Not only are they subjective, but many of them require someone to think about the best way to write the code and whether the unpopular way to write it is actually better.
I know compiler front ends can be and are used to create tooling. The point is, you shouldn't be required to implement some kinds of checking in the course of implementing a compiler. If you use a compiler, you should not be required to do all this analysis every single time you compile (unless it is enforcing an objectively necessary standard, and the cost of running it is negligible).
mempko 36 days ago [-]
What are you talking about, the language gets better with each release. Using C++ today is a hell of a lot better than even 10 years ago. It seems like people hold "memory safety" as the most important thing a language can have. I completely disagree. It turns out you can build awesome and useful software without memory safety. And it's not clear if memory safety is the largest source of problems building software today.
In my opinion, having good design and architecture are much higher on my list than memory safety. Being able to express my mental model as directly as possible is more important to me.
WalterBright 34 days ago [-]
The top memory safety bugs in shipped code for C and C++ are out of bounds array indexing.
saagarjha 34 days ago [-]
Are you sure? I generally see more use-after-free and other lifetime issues.
WalterBright 33 days ago [-]
Every survey I've seen corroborated it.
tobias12345 30 days ago [-]
Does it matter whether it is a common class of bugs or a not so common one? The point is, this is a class of bugs you do not have when picking a different language.
C++ claimed for decades to be about eliminating a class of resource management bugs you can have in C code; that was its biggest selling point. So why is eliminating another class of bugs a nice-to-have now?
C++ has been losing projects to memory-safe languages for decades now, just think of all the business software in Java, scientific SW in python, ... . The industry has been moving towards memory-safe software for decades. Rust is just the newest option -- and a very compelling one, as it has no runtime environment or garbage collector, just like C++.
nindalf 34 days ago [-]
> And it's not clear if memory safety is the largest source of problems building software today.
The Chromium team found that
> Around 70% of our high severity security bugs are memory unsafety problems (that is, mistakes with C/C++ pointers). Half of those are use-after-free bugs.
It’s possible you hadn’t come across these studies before. But if you have, and you didn’t find them convincing, what did they lack?
- Were the codebases not old enough? They’re anywhere between 15 and 30 years old, so probably not.
- Did the codebases not have enough users? I think both have billions of active users, so I don’t think so.
- Was it a “skill issue”? Are the developers at Google and Microsoft just not that good? Maybe they didn’t consider good design and architecture at any point while writing software over the last couple of decades. Possible!
There’s just one problem with the “skill issue” theory though. Android, presumably staffed with the same calibre of engineers as Chrome, also written in C++ also found that 76% of vulnerabilities were related to memory safety. We’ve got consistency, if nothing else. And then, in recent years, something remarkable happened.
> the percentage of memory safety vulnerabilities in Android dropped from 76% to 24% over 6 years as development shifted to memory safe languages.
They stopped writing new C++ code and the memory safety vulnerabilities dropped dramatically. Billions of Android users are already benefiting from much more secure devices, today!
You originally said
> And it's not clear if memory safety is the largest source of problems building software today.
It is possible to defend this by saying “what matters in software is product market fit” or something similar. That would be technically correct, while side stepping the issue.
Instead I’ll ask you, do you still think it is possible to write secure software in C++, but just trying a little harder. Through “good design and architecture”, as your previous comment implied.
logicchains 34 days ago [-]
Two of the biggest use cases for modern C++ are video games and HFT, where memory safety is of absolutely minimal importance (unless you're writing some shitty DRM/anticheat). I work in HFT using modern C++ and bugs related to memory safety are vanishingly rare compared to logic and performance bugs.
imtringued 34 days ago [-]
The importance of memory safety depends on whether your code must accept untrusted inputs or not.
Basically 99% of networked applications that don't talk to a trusted server and all OS level libraries fall under that category.
Your HFT code is most likely not connecting to an exchange that is interested in exploiting your trading code so the exploit surface is quite small. The only potential exploit involves other HFT algorithms trying to craft the order books into a malicious untrusted input to exploit your software.
Meanwhile if you are Google and write an android library, essentially all apps from the play store are out to get you.
Basically C++ code is like an infant that needs to be protected from strangers.
menaerus 27 days ago [-]
Databases are a perfect example of an open-ended complexity space. SQL is a Turing-complete language and your users are programming their workloads against your database kernel. You (as a developer) know nothing about those workloads nor do you know what your users will want to do next. And you basically have to write the code so that it can virtually support any workload that can possibly exist. It's almost as if you're writing a compiler but with a virtual machine inside of its own OS but with the big difference and which is the ability to scale across millions of users (and data). There's probably not much software like that in the world.
And yet, no matter how complex database engines really are, my experience has been the same: the number of bugs related to memory-safety were extremely rare.
pxmpxm 34 days ago [-]
Very much this. For some reason people assume that security/exploits are what the quote below is referring to, as if that's the end goal that software is trying to solve.
> it's not clear if memory safety is the largest source of problems building software today
sharedptr 32 days ago [-]
Recently interested in HFT. Are there introductory resources that you recommend from an industry point of view?
Books/repositories anything practical
jpc0 34 days ago [-]
> Around 70% of our high severity security bugs are memory unsafety problems
> ~70% of the vulnerabilities Microsoft assigns a CVE
> 76% of vulnerabilities
What is the difference between the first two (emphasis added) and what you said? Just as a thought experiment...
If I measure a single factor in exclusion to all others, I can also find whatever I want in any set of data. Now, your point may be valid, but it is not what they published, and without the full dataset we cannot validate your claim; however, I can validate that what you claim is not what they claim.
To answer your question in the final paragraph: yes it is, but it requires the same cultural shift as it would take to write the same code in rust or swift or golang or whatever other memory safe language you want to pick.
If rust was in fact viable for such a large project, how's the servo project going? That still the resounding success it was expected to be? Rust in the kernel? That going well?
The jury is still out on whether rust will be mass adopted and is able to usurp C/C++ in the domains where C/C++ dominate. It may get there, but I would much much rather start a new project using C++20 than in rust and I would still be able to make it memory safe and yes it is a "skill issue", but purely because of legacy C++ being taught and accepted in new code in a codebase.
Rules for writing memory safe C++ have not just been around for decades but have been statically checkable for over a decade. For a large project, though, there are too many errors to universally apply them to existing code without years of work. However, if you submit new code using old practices you should be held financially and legally responsible, just like an actual engineer in another field would be.
It's because we are lax about standards that it's even an issue.
As a note, if you see an Arc<Mutex<>> in Rust outside of some very specific library code, whoever wrote that code probably wouldn't be able to write the same code in a memory and thread safe manner; it is also an architectural issue.
Arc and Mutex are synchronisation primitives that are meant to be used to build data structures, not in "userspace" code. It's a strong code smell that is generally accepted in Rust. Arc probably shouldn't even need to exist at all, because its presence is a clear indication that nobody thought about the ownership semantics of the data in question. Maybe for some data structures it is required, but you should very likely not be typing it into general code.
If Arc<Mutex<>> is littered throughout your rust codebase you probably should have written that code in C#/Java/Go/pick your poison...
tsimionescu 34 days ago [-]
This whole concept that code should be architected as "libraries" and "userspace" is such a C++ism.
It's a really weird concept that probably comes only from having this extremely complex language where even the designers expect some parts of it are too weird for "normal programmers". But then they imagine some advanced class of programmer, the "library programmers", who can deal with such complexity.
The more modern way of designing software is to stick to the YAGNI principle: design your code to be simple and straightforward, and only extract out datastructures into separate libraries if and when they prove to be needed.
Not to mention, the position that shared ownership should just not exist at all is self-evidently absurd. The lifetime of an object can very well be a dynamic property of your program, and a concurrent one. A language that lacks std::shared_ptr / Arc is simply not a complete language, there will be algorithms that you just can't express.
jpc0 34 days ago [-]
So you strongly believe that the programmer should implement .map on arrays and hashmaps etc themselves? Well you will love C code then.
The point of library code is to implement these things once in a safe and efficient manner and reuse the implementation.
Sometimes there are more domain or even company specific things that should be implemented exactly once and reused.
Nobody said there are different tiers of developers like "library developers" and "normal developers". Those are different types of programming that a single developer can do, but they fundamentally require different thought patterns. Designing data structures and algorithms is a lot more CS, whereas general programming is much more akin to plumbing. If you think library code isn't needed, it's because you overlook the library code you already use.
There are some things that are not yagni, if you have those in place then the rest of your code can literally be implemented that way because you literally won't need it.
It's not that shared_ptr isn't needed, it's that people use it where it isn't necessary: because it's convenient not to think about ownership, and because the necessary library code isn't there. I stand strong that seeing std::shared_ptr/Rc (or even std::unique_ptr/Box) in general code is a code smell. The fact that you said there are certain algorithms that cannot be expressed without it means you agree: the algorithm should be implemented exactly once and reused. If it's only used once then sure, it can be abstracted when needed, but that doesn't mean you shouldn't need to justify why it's there.
otabdeveloper4 34 days ago [-]
A million times more systems were infiltrated due to PHP SQL injection bugs than were infiltrated via Chromium use-after-free bugs.
Let's keep some sanity and perspective here, please. C++ has many long-standing problems, but banging on the "security" drum will only drive people away from alternative languages. (Everyone knows that "security" is just a fig leaf they use to strong-arm you into doing stuff you hate.)
zozbot234 37 days ago [-]
> Profiles, which Bjarne et al have had years to work on, will not provide memory safety
While I agree with this in a general sense, I think it ought to be quite possible to come up with a "profile" spec that's simply meant to enforce the language restriction/subsetting part of Safe C++ - meaning only the essentials of the safety checking mechanism, including the use of the borrow checker. Of course, this would not be very useful on its own without the language and library extensions that the broader Safe C++ proposal is also concerned with. It's not clear as of yet if these can be listed as part of the same "profile" specifications or would require separate proposals of their own. But this may well be a viable approach.
bluGill 37 days ago [-]
I have seen 3 different safe C++ proposals (most are not papers yet, but they are serious efforts to show what safe C++ could look like). However, there is a tradeoff here. The full borrow-checker-in-C++ approach is incompatible with all current C++, so adopting it is about as difficult as rewriting all your code in some other language. The other proposals are not as safe, but offer different levels of compatibility with your existing code. None are ready to be added to C++, but they all provide something better, and I'm hopeful that something gets into C++ (though probably not before C++32).
Maxatar 37 days ago [-]
>the full borrow checker in C++ approach is incompatible with all current C++
Circle is an implementation of C++ that includes a borrow checker and is 100% backwards compatible with C++:
That is one of the three. It isn't really backward compatible, because to take advantage of it you need to write/change a lot of code.
A nice attempt, but I have millions of lines of C++ that isn't going away.
Maxatar 36 days ago [-]
Circle is 100% backward compatible with C++. That is a technical property of the language.
You are welcome to take your millions of lines of C++ code and it will compile without change using Circle as any valid C++ code is valid Circle code, which is the technical definition of being backward compatible.
You don't need to change existing code to use Circle or the new features Circle introduces, you can just write new classes and functions with those features and your existing code will continue to compile as-is.
bluGill 36 days ago [-]
You don't get the advantages of Circle if you are constantly dealing with code that returns raw pointers you have to deallocate, or APIs where you need to pass in an index which the called function then uses with vector's operator []. Safe C++ (from the same guy, from what I can tell) is only safe if you use the std2 containers and otherwise rewrite your C++ entirely. Sure, the world would be better if we did, but that would cost billions of dollars, so it isn't happening. What we need is a way to introduce some safety into code that already exists without spending billions and a lot of time rewriting it.
Maxatar 35 days ago [-]
By your standard C++11 isn't backward compatible with C++98.
zozbot234 36 days ago [-]
"C++ isn't really backward compatible with C because to take advantage of its classes and templates you need to change so much code..."
bluGill 36 days ago [-]
That is not backward compatibility. In the real world people mix C and C++ all the time without a lot of complex rewriting. Most of the time they don't even write a wrapper around the C, or if they do it is an easy/thin wrapper (typically you take a function returning a pointer you have to delete and wrap it in a smart pointer), not a deep rewrite of the C code.
All my efforts to do the above so I can mix C++ and Rust have quickly failed when I realized that my wrappers would not be thin, and thus would cost large performance penalties.
zozbot234 35 days ago [-]
The cxx crate offers partial interop between C++ and Rust - for example, it wraps the C++ unique_ptr (the "take a pointer you have to delete and make it a smart pointer" abstraction) so Rust can make use of it appropriately. It's nowhere near complete, but they do welcome patches and issue reports. Anyway, this isn't even all that relevant to Circle and Safe C++, that can potentially share more with C++ than Rust does, such as avoiding a separate heap abstraction so that Safe C++ might be able to free objects that were allocated in legacy C++ code, etc.
cylemons 34 days ago [-]
But these are not safety features
Animats 34 days ago [-]
I've seen maybe twice that many. Did one myself once. It's possible to make forward progress, but to get any real safety you have to prohibit some things.
34 days ago [-]
vr46 34 days ago [-]
Last weekend, I took an old cross-platform app written by somebody else between 1994-2006 in C++ and faffed around with it until it compiled and ran on my modern Mac running 14.x. I upped the CMAKE_CXX_STANDARD to 20, used Clang, and all was good. Actually, the biggest challenge was the shoddy code in the first place, which had nothing to do with its age. After I had it running, Sonar gave me 7,763 issues to fix.
The moral of the story? Backwards compatibility means never leaving your baggage behind.
boris 34 days ago [-]
> [M]any developers use C++ as if it was still the previous millennium. [...] C++ now offers modules that deliver proper modularity.
C++ may offer modules (in fact, it's been offering them since 2020). However, when it comes to their implementation in mainstream C++ compilers, only now are things becoming sort of usable, with modules still a challenge in more complex projects due to compiler bugs in corner cases.
I think we need to be honest and upfront about this. I've talked to quite a few people who have tried to use modules but were unpleasantly surprised by how rough the experience was.
TinkersW 34 days ago [-]
Yeah, that is rather disingenuous; modules aren't ready, and likely won't be for another 5 years.
Also they are difficult to switch to, so I would expect very few established projects to bother.
gpderetta 34 days ago [-]
Modules were known to be difficult to implement and difficult to migrate to. If modules are mainstream in 5 years, it would be an excellent result.
pjmlp 33 days ago [-]
Office is one of such established projects.
mindcrime 34 days ago [-]
I was an extreme C++ bigot back in the late 90's, early 2000's. My license plate back then was CPPHACKR[1]. But industry trends and other things took my career in the direction of favoring Java, and I've spent most of the last 20+ years thinking of myself as mainly a "Java guy". But I keep buying new C++ books and I always install the C++ tooling on any new box I build. I tell myself that "one day" I'm going to invest the time to bone up on all the new goodies in C++ since I last touched it, and have another go.
When the heck that day will actually arrive, FSM only knows. The will is sort-of there, but there are just SO many other things competing for my time and attention. :-(
[1]: funny side story about that. For anybody too young to remember just how hot the job market was back then... one day I was sitting stopped at a traffic light in Durham (NC). I'm just minding my own business, waiting for the light to change, when I catch a glimpse out of my side mirror, of somebody on foot, running towards my car. The guy gets right up to my car, and I think I had my window down already anyway. Anyway, the guy gets up to me, panting and out of breath from the run and he's like "Hey, I noticed your license plate and was wondering if you were looking for a new job." About then the light turned green in my direction, and I'm sitting there for a second in just stunned disbelief. This guy got out of his car, ran a few car lengths, to approach a stranger in traffic, to try to recruit him. I wasn't going to sit there and have a conversation with horns honking all around me, so I just yelled "sorry man" and drove off. One of the weirder experiences of my life.
ttul 34 days ago [-]
The programmers on the sound team at the video game company I worked for as an intern in 1998 would always stash a couple of extra void pointers in their classes just in case they needed to add something in later. Programmers should never lose sight of pragmatism. Seeking perfection doesn’t help you ship on time. And often, time to completion matters far more than robustness.
OnionBlender 32 days ago [-]
Vulkan does that with `void* pNext` in a lot of its structs so that they can be extended in the future.
ninkendo 34 days ago [-]
Funny, sounds like the Simpsons gag from the same time period: “what’s wrong with this country? Can’t a man walk down the street without being offered a job?”
Interesting. I was SO into the Simpsons at one time, but somehow I'd never seen that episode (as best as I can remember anyway). Now I feel the urge to go back and rewatch every episode of the Simpsons from the beginning. It would be fun, but man, what a time sink. I started the same thing with South Park a while back and stalled out somewhere around Season 5. I'd like to get back to it, but time... time is always against us.
ninkendo 33 days ago [-]
That episode is by far my #1 favorite. Season 8 Episode 2, “You Only Move Twice”, during the period considered by most to be the peak of the Simpsons show quality, and IMO the best episode of the season.
Cypress Creek was intended to be a reference to Silicon Valley and the tech companies there of the time, and it’s got some of the best comedy in the season (Hank Scorpio is the best one-off character ever in the show IMO.)
spacechild1 32 days ago [-]
The Hank Scorpio episode is indeed one of the great classics!
kylecazar 34 days ago [-]
AIEXPERT here I come!
mindcrime 34 days ago [-]
Awesome! My current tag is /DEV/AGI :-)
mindcrime 33 days ago [-]
Note to the above: I am wrong. My license plate back then was C++HACKR, with the actual "+" signs. NC license plates do allow that, although while the +'s are on the tag, they don't show up on your registration card or in the DMV computer system.
I mixed up the tag and my old domain name, which was "cpphacker.co.uk" (and later, just cpphacker.com/org).
pro14 34 days ago [-]
what is the job market like now for C++ programmers? I'm looking for a job.
tialaramex 37 days ago [-]
Here's how Bjarne describes that first C++ program:
"a simple program that writes every unique line from input to output"
Bjarne does thank more than half a dozen people, including other WG21 members, for reviewing this paper, maybe none of them read this program?
More likely, like Bjarne they didn't notice that this program has Undefined Behaviour for some inputs and that in the real world it doesn't quite do what's advertised.
Maxatar 37 days ago [-]
The collect_lines example won't even compile, it's not valid C++, but there's undefined behavior in one of the examples? I'm very surprised and would like to know what it is, that would be truly shocking.
tialaramex 37 days ago [-]
Really? If you've worked with C++ it shouldn't be shocking.
The first example uses the int type. This is a signed integer type and in practice today it will usually be the 32-bit signed integer Rust calls i32 because that's cheap on almost any hardware you'd actually use for general purpose software.
In C++ this type has Undefined Behaviour if allowed to overflow. For the 32-bit signed integer that will happen once we see 2^31 identical lines.
In practice the observed behaviour will probably be that it treats 2^32 identical lines as equivalent to zero prior occurrences and I've verified that behaviour in a toy system.
Mali- 33 days ago [-]
Bizarre nitpicking - would you rather he used an unbounded integer?
Maxatar 32 days ago [-]
You don't need an unbounded integer to get the algorithm to work though. All you need is to test and set the value to 1.
otabdeveloper4 34 days ago [-]
"Undefined behavior" is not a bug. It's something that isn't specified by an ISO standard.
Rust code is 100 percent undefined behavior because Rust doesn't have an ISO standard. So, theoretically some alternative Rust compiler implementation could blow up your computer or steal your bitcoins. There's no ISO standard to forbid them from doing so.
(You see where I'm going with this? Standards are good, but they're a legal construct, not an algorithm.)
notfed 34 days ago [-]
> "Undefined behavior" is not a bug. It's something that isn't specified by an ISO standard.
An ISO standard? According to who, ISO?
otabdeveloper4 34 days ago [-]
Yeah, legal constructs are not actually real and are based on circular logic. (And not just in software, that's a property of legal constructs in general.)
Your point is what?
modernerd 37 days ago [-]
I haven't read much from Bjarne but this is refreshingly self-aware and paints a hopeful path to standardize around "the good parts" of C++.
As a C++ newbie I just don't understand the recommended path I'm supposed to follow, though. It seems to be a mix of "a book of guidelines" and "a package that shows you how you should be using those guidelines via implementation of their principles".
After some digging it looks like the guidebook is the "C++ Core Guidelines":
> use parts of the standard library and add a tiny library to make use of the guidelines convenient and efficient (the Guidelines Support Library, GSL).
Which seems to be this (at least Microsoft's implementation):
And I'm left wondering, is this just how C++ is? Can't the language provide tooling for me to better adhere to its guidelines, bake in "blessed" features and deprecate what Bjarne calls, "the use of low-level, inefficient, and error-prone features"? I feel like these are tooling-level issues that compilers and linters and updated language versions could do more to solve.
bb88 37 days ago [-]
The problem with 45 years of C++ is that different eras used different features. If you have 3 million lines of C++ code written in the 1990's that still compiles and works today, should you use new 202x C++ features?
I still feel the sting of being bit by C++ features from the 1990s that turned out to be footguns.
Honestly, I kinda like the idea of "wrapper" languages. Typescript/Kotlin/Carbon.
fuzztester 36 days ago [-]
>footguns
I was expecting that someone would have posted this by now:
I'm curious about that now, too. Is there the equivalent of Python's ruff or Rust's cargo clippy that can call out code that is legal and well-formed but could be better expressed another way?
bluGill 37 days ago [-]
Clang-tidy can rewrite some old code to a better style. However, there is a lot of working code from the 1990s that cannot be automatically rewritten to a new style. Which is what makes adding tooling hard: somehow you need to figure out which code should follow the new style, and which is old style for which updating to modern would be too expensive.
lenkite 36 days ago [-]
> As a C++ newbie I just don't understand the recommended path I'm supposed to follow, though
Did you even read the article ? He has given the recommended path in the article itself.
Two books describe C++ following these guidelines except when illustrating errors: “A tour of C++” for experienced programmers and “Programming: Principles and Practice using C++” for novices. Two more books explore aspects of the C++ Core Guidelines
J. Davidson and K. Gregory Beautiful C++: 30 Core Guidelines for Writing Clean, Safe, and Fast Code. 2021. ISBN 978-0137647842
R. Grimm: C++ Core Guidelines Explained. Addison-Wesley. 2022. ISBN 978-0136875673.
einpoklum 37 days ago [-]
> And I'm left wondering, is this just how C++ is? Can't the language provide tooling for me to better adhere to its guidelines
Well, first, the language can't provide tooling: C++ is defined formally, not through tools; and tools are not part of the standard. This is unlike, say, Rust, where IIANM - so far, Rust has been what the Rust compiler accepts.
But it's not just that. C++ design principles/goals include:
* multi-paradigmatism;
* good backwards compatibility;
* "don't pay for what you don't use"
and all of these in combination prevent baking in almost anything: It will either break existing code; or force you to program a certain way, while legitimate alternatives exist; or have some overhead, which you may not want to pay necessarily.
And yet - there are attempts to "square the circle". An example is Herb Sutter's initiative, cppfront, whose approach is to take in an arguably nicer/better/easier/safer syntax, and transpile it into C++ :
How does enforcing profiles per-translation unit make any sense? Some of these guarantees can only be enforced if assumptions are made about data/references coming from other translation units.
Maxatar 34 days ago [-]
This is the one major stumbling block for profiles right now that people are trying to fix.
C++ code involves numerous templates, and the definition of those templates is almost always in a header file that gets included into a translation unit. If a safety profile is enabled in one translation unit that includes a template, but is omitted from another translation unit that includes that same template... well what exactly gets compiled?
The rule in C++ is that it's okay to have multiple definitions of a declaration if each definition is identical. But if safety profiles exist, this can result in two identical definitions having different semantics.
There is currently no resolution to this issue.
juliangmp 34 days ago [-]
I guess modules are supposed to be the magic solution for that, Bjarne has shown them in this article, even using import std.
It's a bit optimistic, because modules are still not really a viable option in my eyes: you need proper support from the build systems, and notably CMake only has limited support for them right now.
humanrebar 34 days ago [-]
Modules alone do not guarantee one definition per entity per linked program. On the contrary, build systems are needing to add design complexity to support, for instance, multiple built module interfaces for the std module because different translation units are consuming the std module with different settings -- different standards versions for instance.
jpc0 34 days ago [-]
I've been playing with building out an OpenGL app using C++23 on bleeding edge CMake and Clang and it really is a breath of fresh air... I do run into bugs in both but it is really nice. Most of the bugs are related to import std though which is expected... Oh and clangd(LSP) still having very spotty support for modules.
The tooling is way better than it was 6 months ago though, as in I can actually compile code in a non Visual Studio project using import std.
I will be extremely happy the day I no longer need to see a preprocessor directive outside of library code.
hoc 34 days ago [-]
I definitely wouldn't have used "<<" in an "ad" for C++ :)
(I must say that I was happy to see/read that article, though)
DonHopkins 34 days ago [-]
Generalizing Overloading for C++2000
Bjarne Stroustrup,
AT&T Labs, Florham Park, NJ, USA
Abstract
This paper outlines the proposal for generalizing the overloading rules for Standard C++ that is expected
to become part of the next revision of the standard. The focus is on general ideas rather than technical
details (which can be found in AT&T Labs Technical Report no. 42, April 1, 1998).
Modules sound cool for compile time, but do they prevent duplicative template instantiations? Because that's the real performance killer in my experience.
Maxatar 37 days ago [-]
Modules don't treat templates any differently than non-modules so no, they don't prevent duplicate template instantiations.
(It's a great post in general. N.B. that it's also quite old and export templates have been removed from the standard for quite some time after compiler writers refused to implement them.)
TL;DR: Declare your templates in a header, implement them in a source file, and explicitly instantiate them inside that same source file for every type that you want to be able to use them with. You lose expressiveness but gain compilation speed because the template is guaranteed to be compiled exactly once for each instantiation.
Which is to say, "extern template" is a thing that exists, that works, and can be used to do what you want to do in many cases.
The "export template" feature was removed from the language because only one implementer (EDG) managed to implement them, and in the process discovered that a) this one feature was responsible for all of their schedule misses, b) the feature was far too annoying to actually implement, and c) when actually implemented, it didn't actually solve any of the problems. In short, when they were asked for advice on implementing export, all the engineers unanimously replied: "don't". (See https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n14... for more details).
senkora 37 days ago [-]
> You lose expressiveness
Or more, correctly, the following happens:
1. You gain the ability to use the compilation unit's anonymous namespace instead of a detail namespace, so there is better encapsulation of implementation details. The post author stresses this as the actual benefit of export templates, rather than compile times.
2. You lose the ability to instantiate the template for arbitrary types, so this is probably a no-go for libraries.
3. Your template is guaranteed to be compiled exactly once for each explicit instantiation. (Which was never actually guaranteed for real export templates).
mempko 36 days ago [-]
Bjarne Stroustrup (the creator of C++) is the best language designer. Many language designers will create a language, work on it for a couple years, and then go and make another language. Stroustrup on the other hand has been methodically working on C++ and each year the language becomes better.
mskcc 34 days ago [-]
Prof. Bjarne's commitment to C++ is beyond comparison!
sixthDot 34 days ago [-]
So now even Hacker News is being polluted with AI.
zie1ony 34 days ago [-]
Seeing badly formatted code snippets without color highlighting in an article called "21st Century C++" somehow resonates with my opinion of how hard C++ still is to write and to read after working with other languages.
AtlasBarfed 34 days ago [-]
This honestly looks like C++ being jury-rigged with features to a degree that it doesn't even look like what C++ is: a C-derived low-level language.
Everything is unobvious magic. Sure, you stick to a very restricted set of API usages and patterns, and all the magic allocation/deallocation happens out of sight.
But does that make it easier to debug? Better to code it?
This simply looks like C++ trying not to look like C++: like a completely different language, but one that was not built from the ground up to be that language, rather a bunch of shell games to make it look like another language as an illusion.
DidYaWipe 34 days ago [-]
Yeah, I didn't have a problem keeping my shit straight in C++ in the '90s. The kitchen-sink approach since then hasn't been worth keeping up with. The fact that we're still dealing with header files means that the language stewards' priorities are not in line with practical concerns.
imron 34 days ago [-]
I want to love C++.
Over my career I’ve written hundreds of thousands of lines of it.
But keeping up with it is time consuming and more and more I find myself reaching for other languages.
okanat 34 days ago [-]
Same. Luckily my team switched to Rust almost 100%. So I don't need to learn about the godforsaken coroutine syntax and what pitfalls they laid when you use char wrong with it, or in which subset of calls std::ranges does something stupid and causes a horrible performance regression.
Bjarne has been criticized for accepting too many (questionable) things into the language even at the dawn of C++ and committee kept that behavior. Moreover they have this pattern that given the options they always choose the easiest to misuse and most unsafe implementation of anything that goes into standard. std::optional is a mess, so is curly bracket initialization, auto is like choosing between stepping on Legos or putting your arm into a spider-full bag.
The committee is the worst combination of "move fast and break things" and "not in my watch". C++98 was an okay language, C++11 was alright. Anything after C++14 is a minesweeper game with increasing difficulty.
araes 34 days ago [-]
> Bjarne has been criticized for accepting too many (questionable) things
He even writes that way in his own article... The quote from the last section of the introduction was hilarious, and actually made me laugh a little bit for almost those exact reasons.
BS, Comm ACM > "I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many."
xyproto 34 days ago [-]
I went from being curious about C++, to hating C++, to wanting to love it, to being fine with it, to using it for work for 5+ years, to abandoning it and finally to want to use it for game development, maybe. It's the circle of life.
DrBazza 34 days ago [-]
The masochist in me keeps coming back to c++. My analogy of it to other languages is that it’s like painting a house with a fine brush versus painting the Mona Lisa with a roller. Right tool for the job I suppose.
01100011 34 days ago [-]
It's my job and career(well, C and C++) but I often try to avoid C++. Whenever I use it(usually writing tests) I go through this cycle of re-learning some cool tricks, trying to apply them, realizing they won't do what I want or the syntax to do it is awkward and more work than the dumb way, and I end up hating C++ and feeling burned yet again.
xyproto 31 days ago [-]
Yeah, it's a struggle. Keeping to a good subset often works out, though. I recognize the feelings. Best of luck. :)
erwincoumans 34 days ago [-]
Same here.
>>contemporary C++30 can express the ideas embodied in such old-style code far simpler
IMO, newer C++ versions are becoming more complex (too many ways to do the same thing), less readable (prefer explicit types over 'auto', unless unavoidable) and harder to analyse performance and memory implications (hard to even track down what is happening under the hood).
I wish the C++ language and standard library would have been left alone, and efforts went into another language, say improving Rust instead.
midnightclubbed 34 days ago [-]
I have used auto liberally for 8+ years; maybe I'm accustomed to reading code containing it, but I really can't think of it being a problem. I feel like auto increases readability; the only thing I dislike is that they didn't make it a reference by default.
Where do you see difficult to track down performance/memory implications? Lambda comes to mind and maybe coroutines (yet to use them but guessing there may be some memory allocations under the hood). I like that I can breakpoint my C++ code and look at the disassembly if I am concerned that the compiler did something other than expected.
musicale 34 days ago [-]
I just wish they hadn't repurposed the old "auto" keyword from C and had used a new keyword like "var" or "let".
#define var auto
#define let auto
galkk 34 days ago [-]
If we're going that route, how about
#define var auto
#define let const auto
?
musicale 33 days ago [-]
I was thinking of having one or the other, but let as the const form is appealing. ;-)
maleldil 34 days ago [-]
Given how important backwards compatibility is for C++, it's either take over a basically unused keyword or come up with something so weird that would never appear in existing code.
Java solved this by making var a reserved type, not a keyword, but I don't know if that's feasible for C++.
34 days ago [-]
William_BB 34 days ago [-]
E.g. `std::ranges::for_each`, where lambda captures a bunch of variables by reference. Like I would hope the compiler optimizes this to be the same as a regular loop. But can I be certain, when compared to a good old for loop?
jpc0 34 days ago [-]
To be fair std::ranges seems like the biggest mistake the committee allowed into the language recently.
Effectively, other than rewriting older iterator-based algorithms to use the new ranges iterators, I just don't use std::ranges... Likely the compiler cannot optimise it as well (yet), and all the edge cases are not worked out yet. I also find it quite difficult to reason about versus the older iterator-based algorithms.
for_each takes a lambda and calls it for each element in the iterator pair. If the compiler can optimise it, it becomes a loop; if it can't, it becomes a function call in a loop, which probably isn't much worse... If for some reason the lambda needs to allocate per iteration, it's going to be a performance nightmare.
Would it really be much harder to take that lambda, move it to a templated function that takes an iterator and call it the old fashioned way?
jandrewrogers 34 days ago [-]
Yeah, the std::ranges implementation is a bit of a mess. The inability to start clean, without regard for backward compatibility, limits what is possible. I think most people can see how you could implement comparable functionality with nicer properties from a clean sheet of paper. It is the curse of being an old language.
imron 33 days ago [-]
There are sane approaches to dealing with this - e.g. epochs.
This wasn’t proven by the time c++11 was ready, but for c++20 and beyond it’s a shame they didn’t go with this.
throwaway2037 34 days ago [-]
Did you try the two versions in Godbolt?
TinkersW 34 days ago [-]
Just ban ranges lib, it is hot garbage anyway. The compilers are able to optimize lambdas fairly well nowadays(when inlined), I wouldn't be that concerned.
34 days ago [-]
midnightclubbed 34 days ago [-]
You don't 'have' to keep up with the language, and I don't know that many people try to keep up with every single new feature. But it is worse to be one of those programmers for whom C++ stopped at C++03 and who fight any feature introduced since then (the same people generally have strong opinions about templates too).
There are certainly better tools for many jobs and it is important to have languages to reach for depending on the task at hand. I don't know that anything is better than C++ for performance sensitive code.
imron 34 days ago [-]
I’ve been using c++ since the late 90’s but am not stuck there.
I was using c++11 when it was still called c++0x (and even before that when many of the features were developing in boost).
I took a break for a few years over c++14, but caught up again for c++17 and parts of c++20...
Which puts me 5-6 years behind the current state of things and there’s even more new features (and complexity) on the horizon.
I’m supportive of efforts to improve and modernize c++, but it feels like change didn’t happen at all for far too long and now change is happening too fast.
The ‘design by committee’ with everyone wanting their pet feature plus the kitchen sink thrown in doesn’t help reduce complexity.
Neither does implementing half-baked features from other ‘currently trendy’ languages.
It’s an enormous amount of complexity - and maybe for most code there’s not that much extra actual complexity involved but it feels overwhelming.
layer8 34 days ago [-]
It’s okay to be a few years behind the standard, the compilers tend to be as well.
imron 34 days ago [-]
Yeah, the issue is more that the perceived complexity means I’m less interested in investing time to catch it all back up
TinkersW 34 days ago [-]
If you already used C++20 you aren't meaningfully behind, very little of interest has been introduced since then, and much of it isn't usable yet because of implementation issues.
imron 33 days ago [-]
I’ve touched on some of c++20, but haven’t used it extensively.
Specifically here are areas I haven’t used that appear to have nontrivial amounts of complexity, footguns, syntax and other things to be aware of:
* Ranges
* Modules
* Concepts
* Coroutines
Each of these is a large enough topic that it will involve time and effort to reach an equivalent level of competence and understanding that I have with other areas of c++.
I don’t mind investing time learning new things but with commentary around the web (and even this thread) calling the implementation and syntax a hot mess, at some point it’s a better investment to put that learning in to a language without all the same baggage.
I really wish c++ had gone with breaking change epochs for c++20.
codr7 34 days ago [-]
I've been writing C++ since 1996-ish.
Less and less, for sure.
Nothing the past few years.
They killed it.
mr_00ff00 34 days ago [-]
If you only read HN, you would think C++ died years ago.
As someone who worked in HFT, C++ is very much alive, and new projects continue to be created in it simply because of the sheer amount of experts in it. (For better or for worse)
3vidence 34 days ago [-]
Can also confirm c++ is alive and well at FAANG. Might still be the most popular language for most new projects.
bobnamob 34 days ago [-]
* for some values of FAANG
C++ has been dead and effectively banned at amzn for years. Only very specific (robotics and ML generally) projects get exemptions. Rust is big and only getting bigger
3vidence 33 days ago [-]
Fair! I would say people would be surprised to learn pretty much every large AI project is mostly c++ because of its interop with python.
Some FAANGs focus on AI more than others.
codr7 34 days ago [-]
The fact that we don't have a viable alternative yet doesn't exactly mean that the language is in good shape.
chikere232 34 days ago [-]
It just means it's in the best shape of any of the languages in its domain
bboygravity 34 days ago [-]
Can confirm pretty much the entire embedded systems world uses either C or C++.
That's probably most devices in the world.
gHosts 33 days ago [-]
It used to be C++ would be the last choice for embedded...
Modern C++ with constexpr and friends and the massive work and cunning they have put into avoiding template bloat....
...C++ is now my first choice for embedded.
markus_zhang 34 days ago [-]
I have listened to a few podcasts by HFT people. Looks like you try to maximize performance and use a lot of C++ skills. Very interesting to listen to, but I wonder how anyone picks up those skills?
musicale 34 days ago [-]
Took me a moment to realize that "killed it" was being used in the negative sense.
bogeholm 34 days ago [-]
Almost a haiku :)
gosub100 34 days ago [-]
Since C++14 or C++17 I feel no need to keep up with it. That's cool if they add a bunch more stuff, but what I'm using works great now. I only feel some "peer pressure" to signal to other people that I know C++20, but as of now, I've put nothing into it. I think it's best to lag behind a few years (for this language, specifically).
midnightclubbed 34 days ago [-]
The compilers tend to lag a few years behind the language spec too, especially if you have to support platforms where the toolchains lag latest gcc/clang (Apple / Android / game consoles).
Respectfully, you might want to add at least a few C++20 features into your daily usage?
consteval/constinit guarantee what you usually want constexpr to do. I've personally found consteval great for building lookup tables and reducing the number of constants in code (and C++23 expands what can be done in consteval).
Designated initializers are a game-changer for filling structures. No more accidentally populating the wrong value in a structure initializer, or writing individual assignments for each value you want to initialize.
astrobe_ 34 days ago [-]
You don't have to "keep up with it", if by this you mean what I think you mean.
You don't have to use features. Instead, when you have a (language) problem to solve or something you'd like to have, you look into the features of the language.
Knowing they exist beforehand is better but is the hard part, because "deep" C++ is so hermetic that it is difficult to understand a feature when you have no idea which problem it is trying to solve.
01100011 34 days ago [-]
Wrong. Most programmers spend tremendous amounts of time reading and maintaining someone else's code. You absolutely have to keep up with it.
astrobe_ 32 days ago [-]
Thankfully "most" C++ code was written before C++11 (good luck with programs that fully utilize "modern" C++'s constructs and their semantics, because at this point only compilers can reliably manipulate them).
markus_zhang 34 days ago [-]
I think it's good enough for side projects. More powerful than C, so I don't need to hand-roll strings and some algos, but I tend to keep to a minimum number of features because I'm such an amateur.
01100011 34 days ago [-]
I mean, right from Bjarne's mouth:
> I used the from_range argument to tell the compiler and a human reader that a range is used, rather than other possible ways of initializing the vector. I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many.
Oh so I have to remember from_range and can't do the obvious thing? Great. One more thing to distract me from solving the actual problem I'm working on.
What exactly is wrong with the C++ community that blinds them to this sort of thing? I should be able to write performant, low-level code leveraging batteries-included algorithms effortlessly. This is 2025 people.
rvz 34 days ago [-]
On the other hand, the decline of robust and high-quality software started with the introduction of very immature languages, such as the JavaScript and TypeScript ecosystems.
It's really any other language other than those two.
crims0n 37 days ago [-]
For someone who wants to get into systems programming professionally, is C++ going to be a hard requirement or can one mostly get away with C/Rust?
pjmlp 37 days ago [-]
The only places where C++ failed to take C's crown are UNIX clones (naturally, due to the symbiotic relationship) and embedded, where even modern C couldn't replace C89 + compiler extensions from the chip vendor. Many shops are stuck in the past, even though most toolchains are already up to C++20 and C17 nowadays.
Rust is still too new for many folks to adopt, it depends on how much you would be willing to help grow the ecosystem, versus doing the actual application.
It will eventually get there, but it will also face the same issues as C++ regarding taking over C in UNIX/POSIX and embedded. C++ had the advantage of being a kind of TypeScript for C in terms of adoption effort: a UNIX language from AT&T, designed to fit into the C ecosystem.
IshKebab 37 days ago [-]
Depends exactly what you want to do. C is not very popular at all in professional settings - C++ is far more popular. I would say if you know Rust then C++ isn't very hard though. You'll write better C++ code too because you'll naturally keep the good habits that the Rust compiler enforces and the C++ compiler doesn't.
mempko 34 days ago [-]
That's why C++ is still around today, it was built on some solid principles. Bjarne is such a good language designer because he never abandoned it. Lesser designers make a language and start another in 5 or 10 years. Bjarne saw the value in what he created and had a sense of responsibility to those using it to keep making it better and take their projects seriously.
Whenever I have an idea and I start a project, I start with C++ because I know if the idea works out, the project can grow and work 10 years later.
wiseowise 34 days ago [-]
I always hear about “import std” but still don't see out-of-the-box support for it. Is it still experimental?
DidYaWipe 34 days ago [-]
Let us know when C++ gets rid of the mess that is header files.
Until then... YAWN.
osmsucks 34 days ago [-]
The article does mention modules.
xigoi 34 days ago [-]
But it doesn’t mention that you can’t actually use modules without passing a bunch of random compiler flags and hoping that they work.
ephaeton 34 days ago [-]
Loving how he goes 'int main() { ... }' and never returns an int from it. Even better: without extra error/warning flags the compiler will just eat this and generate some code from it, returning... yeah. Your guess is probably better than mine.
If the uber-bean counter, herald of the language of bean counters, demonstrates an unwillingness to count beans, maybe the beans are better counted in another way.
toth 33 days ago [-]
Well, actually... the "main" function is handled specially in the standard. It is the only function with a non-void return type from which you don't need to explicitly return: if control flows off the end, it is treated as if you had returned 0.
(Try that with any other non-void function and you'll get, at best, a compiler warning, since it's undefined behavior.)
You might say this is very silly, and you'd be right. But as quirks of C++ go it is one of the most benign ones. As usual it is there for backwards compatibility.
And, for what it's worth, the uber-bean counter didn't miss a bean here...
SunlitCat 33 days ago [-]
To me, it's kinda funny that he starts with
> using namespace std
something you quickly get told not to do! :D
justanotheratom 34 days ago [-]
C++ should be known for the amount of collective brain cycles wasted on arguing what subset of C++ is the right one to use.
mempko 34 days ago [-]
Professionals know what tool to use for a job. Does it take time to become good? Of course, like anything.
justanotheratom 34 days ago [-]
Not a question of difficulty or skill. I am saying professionals can't agree what subset to use!
mempko 34 days ago [-]
They don't have to. The subset depends on the job! That's the beauty and power of C++. That's why we have projects written in it in all domains. From websites to spaceships and Mars rovers.
justanotheratom 34 days ago [-]
yes, and you will tell me exactly what subset and coding convention "makes sense" for this domain, and you will give your reasoning too. And I will give my arguments, and on and on it goes. teams have broken up over this.
A well-designed language is one in which there are very few different ways of doing the same thing. And C++ is definitely not that.
justanotheratom 33 days ago [-]
Another feature of a well-designed language is how well it is able to separate features for library writers vs application writers. I have seen way too many smart coders end up polluting application code with unnecessarily complex features of C++ meant for library writers.
mempko 34 days ago [-]
Why would a well designed language have only one or few ways to do the same thing? Seems rather arbitrary. I like when I have many ways to do the same thing.
Imagine if you told a writer or poet that English is bad because there is more than one way to say the same thing...
Programming languages are for people more than machines. Machines are happy with microcode.
justanotheratom 33 days ago [-]
RE: Why would a well designed language have only one or few ways to do the same thing?
So that you focus on solving the problem at hand, instead of endlessly arguing over decisions that are irrelevant to solving the said problem.
mempko 32 days ago [-]
That's the thing though, each similar problem is subtly different. Those differences can matter and the language should allow you to model them.
pro14 34 days ago [-]
is the job market for C++ developers still good?
imron 34 days ago [-]
Depends. For certain fields the pay is great and there’s a dearth of candidates.
For other fields there is also a dearth of candidates but the pay falls short and you’ll be leaving tens of thousands of dollars on the table compared to what you could get with other languages.
37 days ago [-]
AnonC 34 days ago [-]
Tangential question: is there a Rust equivalent for the book “The Design and Evolution of C++”?
steveklabnik 34 days ago [-]
There is not.
I have often thought about writing something vaguely similar. We’ll see if I ever do. It wouldn’t be the same because I don’t hold the same position Bjarne did in the early days, but I am very interested in Rust history, and want to preserve it. It would be from my perspective rather than from the creator’s perspective.
I did give a talk one time on Rust’s history. It was originally at FOSDEM, but there was an issue with the recording. The ACM graciously asked me to do it again to get it down on video https://dl.acm.org/doi/10.1145/2959689.2960081
AnonC 34 days ago [-]
Thank you. It would be interesting to read the history, including the design decisions, the influences (and distractions), the trade offs, etc.
When I read “The Design and Evolution of C++”, it gave me a better understanding of the language.
munificent 34 days ago [-]
I would 100% buy a hardback gold embossed version of this book.
steveklabnik 34 days ago [-]
Well if I could make it half as good looking as Crafting Interpreters, maybe I’d manage to make it happen, hahah.
I’m mostly focused on jj with my writing right now, but we’ll see…
ninetyninenine 34 days ago [-]
21st century C++? AKA Rust?
jandrewrogers 34 days ago [-]
Unfortunately, Rust is significantly less expressive than C++ and therefore is unlikely to replace it for high-performance systems code. As much as I don’t like C++, it is very powerful as a tool. The ability to express difficult low-level systems constructs and optimizations concisely and safely in the language are its killer feature. Once you know how to use it, other languages feel hobbled.
AlotOfReading 34 days ago [-]
C++ doesn't allow you to express low level systems constructs concisely and safely though. You usually get neither.
Look at the first example in the article, where the increment can overflow and cause UB despite that overflow having completely defined semantics at the hardware level. Fixing it requires either a custom addition function or C++26, another include, and add_sat(). I wouldn't consider either concise in a program that doesn't include all of std.
jandrewrogers 34 days ago [-]
This assumes you are writing C++ in the most naive way possible. I’m sure some people do that but nothing requires it. The capabilities of a language are not defined by its worst programmers.
Modern C++ allows you to swap out most features and behaviors of the language with your own implementations that make different guarantees. C++ is commonly used in high-assurance environments with extremely high performance requirements, and it remains the most effective language for these purposes because you can completely replace most of the language with something that makes the safety guarantees you require. This is rather important. For example, userspace DMA is idiomatic in e.g. high-performance database kernels; handling this is much safer in C++ than Rust. In C++, you can trivially write elegant primitives that completely hide the unusual safety model. In Rust, you have to write a lot of ugly unsafe code to make this work at all because userspace DMA isn’t compatible with a borrow checker. There can always be multiple mutable references to memory, but this is not knowable at compile time; the safety of an operation can only be arbitrated at runtime.
Of course, it is still incumbent on the developer to use the language competently in all cases.
AlotOfReading 34 days ago [-]
The capabilities of a language are not defined by its worst programmers.
Is the implication here that Bjarne is a bad C++ developer? If the person in charge of the EWG fails "to use the language competently in all cases", what hope is there for the rest of us mere mortals?
For what it's worth, unsafe Rust is safer than C++. There's very little UB to explode your carefully crafted implementations. Safe rust of course has no UB except for what you write in unsafe blocks, so it's safer still and there's no real difference in the abstractions you can write with concepts vs traits.
I'm not actually arguing for rust here though, because this isn't a great showing for it. Trying to write the related add_wrap(T, T) function in rust is stupidly verbose compared to add_sat(T, T) thanks to bad decisions the num_traits authors made. What I am saying is C++ isn't a form of high level assembly like your original comment suggested. Understanding the relationship between the language and the hardware takes a lot of experience that most people don't use when writing code.
jandrewrogers 34 days ago [-]
UB is a feature of the standard, not the implementation. Many of those behaviors can be defined. Modern C++ conveniently allows you to replace many of the bits that have UB, per standard, with your own bits with defined behavior with zero overhead. This was not always the case. You aren’t dependent on the compiler implementor. The ability to consistently do this transparently became practical around C++17 IMO. The C++ standard library is in many regards obsolete and many orgs treat it that way.
I never suggested that C++ was “a form of high level assembly”. I’ve written enough assembly and C to know better; you lose a bit of precision with C++. But now I can define (or not) the behavior I want in a way that is largely transparent. This has been a brilliant change to the language.
If you have a foundational library that makes different and/or explicit guarantees than std, it is pretty easy to police that in a code base with automation. Everyone doing high-performance and/or high-assurance systems is dragging in few if any dependencies, so this is practical. The kinds of things that C++ is really good at for new code are the kinds of things where this is what you would do regardless.
Developers don’t even have to be hardware experts, they just have to not use std for most things. That is a pretty low barrier. And std is a mess with the albatross of legacy support. Reimagined C++20 native “standard” libraries are much, much cleaner and safer (and faster).
Legacy C++ code bases aren’t going to be rewritten in a new language. New C++ code bases can take advantage of alternative foundations that ignore std and many do. Most things should not be written in C++, but for some things C++ is unmatched currently and safer in practice than is often suggested with basic hygiene.
AlotOfReading 33 days ago [-]
Modern C++ conveniently allows you to replace many of the bits that have UB, per standard, with your own bits with defined behavior with zero overhead.
Okay, let's continue the example. Please demonstrate how to replace the addition operator on a primitive type. You can't within the confines of the language and that's a good thing in most cases. What you can do is pass -fwrapv, except that MSVC doesn't officially define a comparable flag.
Developers don’t even have to be hardware experts, they just have to not use std for most things.
Signed overflow isn't a problem with std, the solution to it is in std. Null pointers aren't a problem with std, but the recommended fixes are again in std. Etc.
If you have a foundational library that makes different and/or explicit guarantees than std, it is pretty easy to police that in a code base with automation.
As far as I'm aware, neither folly, absl, nor boost define custom integral types with defined overflow behavior. Please provide examples of anyone doing that.
UB is a feature of the standard, not the implementation.
If you're writing "high assurance code", surely you're writing to the standard and not the implementation? The implementation's guarantees change with every upgrade, every new flag, and each time you build for different targets. I certainly try to avoid compiler assumptions as someone who writes safety critical code.
imtringued 34 days ago [-]
The DMA problem appears to be mostly a lack of identification of the data. If the shape of the data could be verified by the language runtime, instead of being an arbitrary stream of bytes whose meaning must be known by the recipient without any negotiation, this form of unsafety would disappear, since the receiving code simply needs to assert the schema, which could be as simple as checking a 32-bit integer.
Then all you need to do is also verify that the sending code adheres to the schema it specified.
This has very little to do with borrow checking. From the perspective of the borrow checker, a DMA call is no different from RPC or writing to a very wide pointer.
p0w3n3d 33 days ago [-]
In Rust you get a panic instead (in debug builds; release builds wrap by default). You need wrapping_add, which is a separate function
imtringued 34 days ago [-]
Most high performance code is vectorized and Rust is better at autovectorization and aliasing analysis than C++, so I'm not really seeing your point.
Having to drop down to intrinsics early is not a strength.
janwas 34 days ago [-]
hm, I'd be concerned about relying on autovectorization. How much better is 'better'? Compiler friends have told me that something permute-heavy like sorting is unlikely to soon work, if ever.
My biased opinion, from doing this full-time in C++, is that the C++ SIMD story is much further along, especially regarding mature libraries.
biohcacker84 36 days ago [-]
After decades of C++ development, I prefer C, modern Fortran and Rust.
mskcc 34 days ago [-]
your 200+ git repos attest to that!
34 days ago [-]
DidYaWipe 34 days ago [-]
Just reading the first 1/5 of this made me bored. I started my career with C++, being heavy into it for 10 years. But I've been doing Swift for the last 10 at least. I had a job interview last week for a job that was heavy C++, with major reliance on templates and post-C++ 11... and it didn't go well. You know what? I don't give a shit.
bboygravity 34 days ago [-]
It's crazy that with that amount of experience you wouldn't get the job, just because you lack some modern C++ info in your brain's memory. Stuff you could search for or ask an LLM in 5 seconds (or even look up in a freaking physical book). You'd probably be fully up to date within a few weeks.
Says a lot about the people hiring imo. Good luck to them finding someone who can recite C++ spec from memory.
jpc0 34 days ago [-]
If you last worked on pre-templates C++ and now need to work on a template-heavy codebase, you are effectively writing in a different language. I don't think it will be just a few weeks of catching up.
DidYaWipe 31 days ago [-]
Wow, that must be a very long time ago. Templates were around when I started (if I remember correctly), or soon after.
Which reminds me of something I hate more than header files: macros.
DidYaWipe 31 days ago [-]
Ha, thanks, and obviously true. But I can understand companies wanting people who can just march into their codebase and "hit the ground running," I guess.
I don't need the stress anyway. The dough would've been nice, though...
999900000999 34 days ago [-]
C++ and C still force a usage of header files.
For whatever reason, this is probably the biggest thing I've struggled with (aside from tooling... makes me miss npm).
Something about the formatting of the code blocks used is all messed up for me. Seems to be independent of browser; happens in both Firefox and Chrome.
npalli 37 days ago [-]
This is a Bjarne issue. For personal reasons he uses proportional fonts in his code blocks (in his texts) instead of monospaced and the code snippets always look bad. I guess he is stuck in his ways, just have to work around this ugly look.
breppp 37 days ago [-]
Looking at how aesthetically charming the C++ syntax is, I wouldn't expect anything less than Comic Sans code blocks
James_K 37 days ago [-]
> This is a Bjarne issue.
I have come to find this category of error to be distressingly large.
adrian_b 37 days ago [-]
Bjarne has nothing to do with the HTML/CSS pages of the ACM site, which select for displaying the code the default monospace font that is configured in the browser of the user.
If a proportional font is used for rendering, the most likely cause is that the user has not configured the default monospace font in the settings of the browser.
adrian_b 37 days ago [-]
This is not a Bjarne issue.
The font is selected by the HTML/CSS of the ACM site, not by Bjarne.
There may be a bug in the CSS of the ACM site, but I think that it is more likely that anyone who does not see correctly formatted code on that page has forgotten to open the settings of their browsers and select appropriate default fonts for "serif", "sans serif" and "monospace".
As installed, most browsers very seldom have appropriate default fonts, you normally must choose them yourself.
In this case, a monospace font is mandatory for rendering the code on that page, because the indentation is done with spaces, which become too narrow when rendered with a proportional font. Whoever does not see a monospace font must have a proportional font set as their browser's default monospace font, and should correct that.
edflsafoiewq 37 days ago [-]
No, the formatting was definitely botched. It should look much better than it does even in a proportional font.
kstrauser 37 days ago [-]
Agreed. I wouldn't mind if, say, end of line comments weren't perfectly aligned. There's zero indentation so things like
for (string line; getline(is,line); )
s.insert(line);
are hard to visually parse.
adrian_b 37 days ago [-]
This must depend on some settings of the browser and perhaps also on the locally installed typefaces.
On my Firefox on Linux, this HTML page is not rendered with any custom typefaces, but it uses those specified by me as defaults for serif/sans serif/monospace.
The C++ code is rendered in my browser with my default, i.e. with JetBrains Mono and there is nothing weird.
The code quoted by you is indented as expected, not as in your posting.
On my computer, I have mostly typefaces that I have bought myself and which are seldom encountered in most computers. I do not have any of the typefaces that are typically specified in CSS rules, i.e. none of the typefaces that can be found in default installations of Windows, Linux or MacOS.
So perhaps there is a bug in their CSS at the definition of "wp-block-code", which on other computers selects a bad typeface that is proportional, so that the narrow spaces make the indentation disappear. (Their wp-block-code says "font-family:inherit" and I have not searched further to see from where the wrong font-family may be inherited.)
Here, perhaps because that bad typeface cannot be found, the browser uses my default monospace font and the code is displayed fine.
Or else, perhaps you have not set in your browser a proper default for monospace fonts and it just takes Arial or other such inappropriate system font even for monospace.
edflsafoiewq 37 days ago [-]
The formatting has been (partially) fixed since it was posted.
jcelerier 37 days ago [-]
this is definitely an issue for the editors of the ACM journal
37 days ago [-]
hkwerf 37 days ago [-]
It's typical Stroustrup style to write code in a variable width font. I'd wager they didn't have an option to use a variable-width font in their code blocks in their CMS and normal paragraphs are trimmed automatically.
I didn't see the author at first. However, immediately after seeing the code I checked for the author, because I was sure it was Stroustrup.
tialaramex 37 days ago [-]
The other give away is that he wants to use his awful "I/O streams" feature even though he also wants very modern features like modules.
Normal people who have a modern environment would std::println but Bjarne insists on using the I/O streams from last century instead
adrian_b 37 days ago [-]
While you are right about the books of Stroustrup, here your inference is wrong, because Stroustrup cannot have anything to do with the CSS style sheets of the ACM Web site, which, in conjunction with the browser settings, determine the font used for rendering the text.
On my browser, all the code is properly indented, most likely because my browsers are configured correctly, i.e. with a monospace font set as the default for "monospace".
Whoever does not see indentation, most likely has not set the right default font in their browser.
hkwerf 37 days ago [-]
They just fixed it by now. It was different when the story was new.
The code blocks aren't in a preformatted tag like <pre>, so the whitespace gets collapsed. It seems the intention was to turn spaces into &nbsp; entities, but however it was done got messed up, because lots of spaces didn't get converted.
adrian_b 37 days ago [-]
The code blocks are formatted as "wp-block-code", which seems to select the default monospace font of the browser.
My browser has an appropriate default monospace font (JetBrains Mono), so the code is formatted and indented correctly, as expected.
Where this does not happen, the setting for the default monospace font must be wrong, so it should be corrected.
edflsafoiewq 37 days ago [-]
It has been changed since it was posted. You can check the Wayback Machine for the original.
adrian_b 37 days ago [-]
Have you verified that your browsers have correct settings for their default fonts, i.e. a real monospace font as the default for "monospace"?
Here the code is displayed with my default monospace font, as configured in browsers, so the formatting is fine.
There are only 2 possible reasons for the bad formatting: a bug in the CSS of the ACM site, which selects a bad font on certain computers, or a bad configuration of your own browser, where you have not selected appropriate default fonts.
Cieric 37 days ago [-]
Firefox reader view seems to be a slight improvement since it removes the random right alignments in the article.
speerer 37 days ago [-]
This doesn't seem to be a code blog, but a general science communication blog. The editors may not be familiar with code syntax, and may simply be using a content management system and copy-pasting from source material.
mmoskal 37 days ago [-]
> ACM, the Association for Computing Machinery, is the world's largest educational and scientific society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges.
Most of programming language conferences are organized by ACM.
speerer 37 days ago [-]
I know, but the blog itself doesn't seem oriented to post code snippets. I clicked a few articles which were much more general.
kanbankaren 37 days ago [-]
Yeah. Looks nasty. Don't the editors of the ACM have a say on how the article is presented?
Communications of the ACM has had unbelievably bad typography for code samples for decades (predating the web). No idea how this is allowed to continue.
rbanffy 38 days ago [-]
I'm guessing someone pasted from what went into the print edition. Or Bjarne himself.
It's just the first code snippet that's messed up. The rest is merely wonky.
rgovostes 34 days ago [-]
You don't use 10 spaces of indentation? It's the 21st century.
layer8 34 days ago [-]
It’s a wchar_tab.
34 days ago [-]
Kenji 34 days ago [-]
[dead]
James_K 37 days ago [-]
[flagged]
dang 34 days ago [-]
Ok, but please don't fulminate on Hacker News. We're trying for something different here.
> Between Rust and Zig, the problems of C++ have been solved much more elegantly
Those languages occupy different points in the design space than C++. And thus, in the general sense, neither of them, nor their combination, is "C++ with the problems solved". I know very little Rust and even less Zig. But I do know that there are various complaints about Rust, which are different than the kinds of complaints you get about C++ - not because Rust is bad, just because it's different in significant ways.
> It is so objectively horrible in every capacity
Oh, come now. You do protest too much... yes, it has a lot of warts. And it keeps them, since almost nothing is ever removed from the language. And still, it is not difficult to write very nice, readable, efficient, and safe C++ code.
James_K 37 days ago [-]
> it is not difficult to write very nice, readable, efficient, and safe C++ code
That's a fine case of Stockholm Syndrome you've got there. In reality, it is hard. The language fights you every step of the way. That's because the point in the design space C++ occupies is a uniquely stupid one. It wants to have its cake and eat it too. The pipe-dream behind C++ is that you can write code in an expressive manner and magically have it also be performant. If you want fast code, you have to be explicit about many things. C++ ties itself in knots trying to be implicitly explicit about those things, and the result is just plain harder to reason about. If you want code that's safe and fast, you go with Rust. If you want code that's easy and fast, you go with Zig. If you want code that's easy and safe you go with some GCed lang. Then if you want code that's easy, safe, and fast, you pick C++ and get code which might be fast. You cannot have all three things. Many other languages find an appropriate balance of these three traits to be worthwhile, but C++ does not. It's been 40 years since the birth of C++ and they are only just now trying to figure out how to make it compile well.
SirHumphrey 37 days ago [-]
Even Cobol code hasn't been ported in its entirety, and the whole codebase at its peak was probably orders of magnitude smaller than C++'s. It's also far easier to port Cobol - with it being used mostly for data processing and business logic - than C++, which was used for all manner of strange, esoteric and complicated pieces of software requiring thousands to millions of man-hours to port (for example most of Gecko and Blink).
C++ will be here forever, at least in some manner.
edit: spelling
James_K 37 days ago [-]
We can all at least appreciate that COBOL is something you try to get rid of where possible. If we took the same attitude to C++ as we do COBOL, then I think the issue would be much less severe.
spacechild1 37 days ago [-]
> It is so objectively horrible in every capacity,
Total hyperbole and simply not true.
> but it still somehow managed to limp on for all these years
Before Rust became somewhat popular, there was simply no serious alternative to C++ in many domains.
James_K 37 days ago [-]
That in and of itself is a failure. The decision to continually bolt more stuff onto this mess instead of developing a viable alternative is honestly painful. When you look at something like Zig, it gets you much of what C++ offers and in a way that doesn't cause you pain. Is the argument that Zig simply wasn't possible 30 years ago? I doubt it. As best I can tell, Zig comes as the result of a relatively experienced C programmer making the observation that you could improve C in a lot of easy ways. Were it not for the existing mess, he might have called his language C++. Instead a Scandinavian nut-job decided to heap some mess on top of C and everyone just went along with it.
spacechild1 37 days ago [-]
I guess someone had to make all these mistakes so that others can now learn from them :)
bandika 37 days ago [-]
Honestly, I am a happier and more productive developer since I left C++ behind for other languages. And it's not just the language, but the lack of ecosystem too. Things like the build system, managing dependencies, etc. are all such a pain compared to modern languages with good ecosystems (Rust, Flutter, Kotlin, etc.)
pjmlp 37 days ago [-]
Start by removing Rust's dependency on GCC and LLVM, both written in C++.
tialaramex 37 days ago [-]
Rust doesn't "depend" on LLVM in the sense you seem to imagine, you can instead lower Rust's MIR into Cranelift (which is written in Rust) if you want for example.
LLVM's optimiser is more powerful, and it handles unwinding, so today most people want LLVM but actually I think LLVM's future might involve more Rust.
pjmlp 37 days ago [-]
It can, but until that becomes the reference implementation backend, it hardly matters.
Similar to how much Python folks disregard PyPy's existence.
I doubt LLVM project would start accepting polyglot contributions, beyond what they already do for language specific frontends.
Also, the ongoing GCC support is dependent on C++ as well.
otteromkram 34 days ago [-]
I dislike the style of code used to write this. I understand that, given who wrote the article, this is blasphemy.
Opening braces should be inline with the expression or definition.
Comments can go above what they refer to.
Combined, this makes any code snippet look like crap on mobile and almost impossible to follow as a result.
[0] https://www.circle-lang.org/draft-profiles.html
The example in the article starts with "Wow, we have unordered maps now!" Just adding things modern languages have is nice, but doesn't fix the big problems. The basic problem is that you can't throw anything out. The mix of old and new stuff leads to obscure bugs. The new abstractions tend to leak raw pointers, so that old stuff can be called.
C++ is almost unique in having hiding ("abstraction") without safety. That's the big problem.
1. unordered_map requires some bizarre and not widely useful abilities that mostly preclude hash tables with probing:
https://stackoverflow.com/questions/21518704/how-does-c-stl-...
2. unordered_map has fairly strict iteration and pointer invalidation rules that are largely incompatible with the implementations that turn out to be the fastest. See:
> References and pointers to either key or data stored in the container are only invalidated by erasing that element, even when the corresponding iterator is invalidated.
https://en.cppreference.com/w/cpp/container/unordered_map
And, of course, this is C++, where (despite the best efforts of the “profiles” people), the only way to deal with lifetimes of things in containers is to write the rules in the standards and hope people notice. Rust, in contrast, encodes the rules in the type signatures of the methods, and misuse is deterministically caught by the compiler.
For std::vector it apparently just didn't occur to C++ people to provide the correct API; Bjarne Stroustrup claims the only reason to use a reservation API is to prevent reference and iterator invalidation. -shrug-
[std::unordered_map was standardised this century, but, the thing standardised isn't something you'd design this century, it's the data structure you'd have been shown in an undergraduate Data Structures class 40 years ago.]
Do you mean something like vector::reserve_at_least()? I suppose that, if you don’t care about performance, you might not need it.
FWIW, I find myself mostly using reserve in cases where I know what I intend to append and when I will be done appending to that vector forever afterwards.
They absolutely could say "in C++26 vector::operator[] will be checked" and add an `.at_unsafe()` method.
They won't though because the whole standards committee still thinks that This Is Fine. In fact the number of "just get good" people in the committee has probably increased - everyone with any brains has run away to Rust (and maybe Zig).
It took me several reads to figure out that you probably meant ‘auto’ the storage class specifier. And now I’m wondering whether this was ever anything but a no-op in C++.
Every major project in that cares about perf and binary size would disable the option that compiler vendors would obviously provide, like -fno-exceptions.
Rust memory and type system offer stronger guarantees, leading to better optimization of bound checks, AFAIK.
There are more glaring issues to fix, like std::regex performance and so on.
Any C++ code without at least unit tests with 100% test coverage run under the UB sanitizer etc. must be considered inherently defective, and the developer should be flogged for his absurd levels of incompetence.
Then there is also the need for UB aware formal verification. You must define predicates/conditions under which your code is safe and all code paths that call this code must verifiably satisfy the predicates for all calls.
This means you're down to the statically verifiable subset of C++, which includes C++ that performs asserts at runtime, in case the condition cannot be verified at compile time.
How many C++ developers are trained in formal verification? As far as I am aware, they don't exist.
Any C++ developers reading this who haven't at least written unit tests with UB sanitizer for all of their production code should be ashamed of themselves. If this sounds harsh, remember that this is merely the logical conclusion of "just get good".
C++ feels like a language of bean counters.
Rust feels like a language of bean counters.
A lot of C++ folks I know went over to rust.
They were happy with C++ and it was the best thing since sliced bread.
They are now happy with rust and it is the best thing since sliced bread.
To me, languages have a, let's call it 'taste' for the lack of better word off the top of my head. It's that combining quality that pg called 'hacker's languages', such as C, and lisp, for example.
C++ feels like a bureaucratic monster with manual double bookkeeping: byzantine, baroque, up to outright weird and contradictory in places. Ever since rust was conceived, I gave it multiple shots to learn. When I was not thrown off by what I perceive as java-style annotations, i.e., something orthogonal to the language itself where no one seems to have bothered to come to a consensus to be able to express this from the language itself, its general feel reminds me of something a C++ embracer will feel comfortable in. I.e., in pg's words, not a hacker's language, paired with a crusade of personal enlightenment. What used to be OO and GoF now is memory safety as-implemented-by-rust (note: not by the borrow checker; we could've had this with cyclone, for example, more than two decades ago).
I have, in my original comment, marked this as my personal opinion and feeling, as is the above. I'm not arguing. I love FP, and the idea of having a systems language with FP concepts working out to memory safety and higher-level expression sounds like the holy grail of yester-me. I'm disappointed I couldn't find my professional salvation in rust with how uneasy I feel within the language. It's as if a suit and tie was forced on me, or a hawaii shirt and shorts (depending on your preference, imagine it's the thing you wouldn't voluntarily wear).
Now, if other folks also mirror my observation of how the folks flock from C++ to rust, you bet they take their mindset and pedestal with them to stand on and preach off of. At least those I know do, only their sermon changed from C++ to rust, the quality of their dogma remained constant.
> Rust feels like a language of bean counters.
Gotcha! I just didn't make the connection, when I read your comment I thought "what does a list of C++ features + the idea that people left it because they didn't like where it's going mean that the two languages are the same?"
I wasn't interested in arguing either, I was just trying to understand what you meant, and now I do. Thank you for sharing.
Rust wasn't designed by committee.
I'm fine with robust languages with very strong type systems, I think. Are Haskell, ML, F#, Scala in this set? Robust and very strongly typed enough? I don't dislike their taste, even though I think I've had enough scala, specifically, for this life time. If these aren't in the set you're thinking of, I'd like to know what makes up that set for you.
I would never expect our 10M+ LOC performance-sensive C++ code base to be formally memory safe, but so far only C++ allowed us to maintain it for 15 years with partial refactor and minimal upgrade pain.
Most languages take backwards compatibility very seriously. It was quite a surprise to me when Python broke so much code with the 3.12 release. I think it's the exception.
However, an application that I had written to be backward compatible with java 1.4, 15 years ago, cannot be compiled today. And I had to make major changes to have it run on anything past java 8, ~10 years ago, I believe.
$DAYJOB got burned badly twice on breaking Go behavioral changes delivered in non-major versions, so management created a group to carefully review Go releases and approve them for use.
All too often, Google's justification for breaking things is "Well, we checked the code in Google, and publicly available on Github, and this change wouldn't affect TOO many people, so we're doing it because it's convenient for us.".
Can you clarify these 2 changes please? Cannot recall anything similar
Doubt, again. Without a minimal proof of mentioned problems continuing dialogue doesn't make sense for me, thanks.
Exception specifications, gets, GC, string ABI, ...
Today we don’t have those limits on HDD space and can simply ship an embedded copy of JRE with the desktop app. In server environments I doubt anyone is reusing JRE between apps at all.
> ...sharing runtime dependencies [in C or C++] is hard...
Is it? The "foo.so foo.1.so foo.1.2.3.so" mechanism works really well, for libraries whose devs manage not to ship backwards-incompatible changes in patch versions, or ABI-breaking changes in minor versions.
“Often” is a huge exaggeration. I always hear about it, but never encountered it myself in 25 years of commercial Java development. It almost feels like some people are doing weird stuff and then blame the technology.
> Is it? The "foo.so foo.1.so foo.1.2.3.so"
Is it “sharing” or having every version of runtime used by at least one app?
Lucky you, I guess?
> Is it “sharing” or having every version of runtime used by at least one app?
I'm not sure what you're asking here? As I'm sure you're aware, software that links against dependent libraries can choose to not care which version it links against, or link against a major, minor, or patch version, depending on how much it does care, and how careful the maintainers of the dependent software are.
So, the number of SOs you end up with depends on how picky your installed software is, and how reasonable the maintainers of the libraries they use are.
And that is the hard problem, because it’s people problem, not technical one, and it’s platform independent. When some Java app was requiring a specific build of JRE, it wasn’t limitation or requirement of the platform, but rather the choice of developers based on their expectations and level of trust. Windows still dominates desktop space and it’s not uncommon for C++ programs to install or require a specific version of runtime, so you eventually have lots of them installed.
I do agree that the world becomes much easier for a language/runtime maintainer if you get to ignore backwards-compatibility concerns because you've convinced your users to just pack in the entire system they built against with their program.
Second, you can have shared libraries/runtimes on Windows or in Java world. There exists versioning and *nix is not unique in that. Both are rather agnostic to the way you ship your app. In server Java unless you ship a container, you usually do not ship the JRE. On a desktop - it depends, shared JREs were always possible.
Third, DLL hell does exist in *nix environments too. The versioning mechanism you mention is a technical solution to a people problem and it doesn't work perfectly. Things do break if you relax your dependency constraints too much. How much - it depends on developers and the amount of trust they put in maintainers. So you inevitably end up with multiple versions of the same library or runtime on the same machine, no matter what OS or cross-platform solution do you use. It is not much different from shipping a bundle.
Agreed. This is obvious. You even mention it below:
> Second, you can have shared libraries/runtimes on Windows or in Java world. There exists versioning and *nix is not unique in that.
As you said, Windows has the same issue (because it's a fundamental problem of using libraries).
> Third, DLL hell does exist in *nix environments too.
IFF the publisher of the library fails to follow the decades-old convention that works really well.
> The versioning mechanism you mention is a technical solution to a people problem and it doesn't work perfectly.
Sure. Few things do. That's what pre-release testing is for.
> Things do break if you relax your dependency constraints too much.
Yep. That's why we test.
> So you inevitably end up with multiple versions of the same library ... on the same machine...
Sure. But they're not copies of the same version. That's the entire point of the symlink-based shared object naming scheme (and the equivalent in Windows (IIRC, it used to be called SxS, but consult the second bullet point in [0])).
[0] <https://learn.microsoft.com/en-us/previous-versions/visualst...>
Granted, it is only those that can be machine verified.
Office is using C++20 modules in production; Vulkan also has a modules version.
Because no one wants it enough to implement it.
If you're not willing to do any work then you're just stuck, nobody can help you, magic "profiles" don't help either.
But, if you're willing to do work, why stop at profiles? Now we're talking about a price and I don't believe that somehow the minimum assignable budget is > $1Bn
The reason I like profiles is they are not all or nothing. I can put them in new code only, or maybe a single file that I'm willing to take the time to refactor. Or at least so I hope; it remains to be seen if that is how they work out. I've been trying to figure out how to make rust fit in, but std::vector<SomeVirtualInterface> is a real pain to wrap into rust and so far I haven't managed to get anything done there.
The $1 billion is realistic - this project was a rewrite of a previous product that became unmaintainable, and inflation-adjusted the cost was $1 billion. You can maybe adjust that down a little if we are more productive, but not much. You can adjust it down a lot if you can come up with a way to keep our existing C++ and just extend new features and fix the old code only where it really is a problem. The code we have written in C++98 (because that was all we had in 2010) still compiles with the latest C++23 compiler, and since there are no known bugs it isn't worth updating that code to the latest standards, even though it would be a lot easier to maintain (which we never do) if we did.
It's also expected that you'll be able to do this with Safe C++. Of course the interop with older C++ code will then still involve unsafety. But incremental improvement should be possible.
See clang-tidy and clang analyzer for example.
ps: That's what I like most about the core guidelines, they are trying very hard to stick to guidelines (not rules) that pretty much uncontroversially make things safer _and_ can be checked automatically.
They're explicitly walking away from bikeshed paintings like naming conventions and formatting.
I know compiler front ends can be and are used to create tooling. The point is, you shouldn't be required to implement some kinds of checking in the course of implementing a compiler. If you use a compiler, you should not be required to do all this analysis every single time you compile (unless it is enforcing an objectively necessary standard, and the cost of running it is negligible).
In my opinion, having good design and architecture are much higher on my list than memory safety. Being able to express my mental model as directly as possible is more important to me.
C++ claimed for decades to be about eliminating a class of resource management bugs you can have in C code, that was its biggest selling point. So why is eliminating another class of bugs a nice to have now?
C++ has been losing projects to memory-safe languages for decades now; just think of all the business software in Java, scientific SW in python, ... The industry has been moving towards memory-safe software for decades. Rust is just the newest option -- and a very compelling one, as it has no runtime environment or garbage collector, just like C++.
The Chromium team found that
> Around 70% of our high severity security bugs are memory unsafety problems (that is, mistakes with C/C++ pointers). Half of those are use-after-free bugs.
Chromium Security: Memory Safety (https://www.chromium.org/Home/chromium-security/memory-safet...)
Microsoft found that
> ~70% of the vulnerabilities Microsoft assigns a CVE each year continue to be memory safety issues
A proactive approach to more secure code (https://msrc.microsoft.com/blog/2019/07/a-proactive-approach...)
It’s possible you hadn’t come across these studies before. But if you have, and you didn’t find them convincing, what did they lack?
- Were the codebases not old enough? They’re anywhere between 15 and 30 years old, so probably not.
- Did the codebases not have enough users? I think both have billions of active users, so I don’t think so.
- Was it a “skill issue”? Are the developers at Google and Microsoft just not that good? Maybe they didn’t consider good design and architecture at any point while writing software over the last couple of decades. Possible!
There’s just one problem with the “skill issue” theory though. Android, presumably staffed with the same calibre of engineers as Chrome, also written in C++ also found that 76% of vulnerabilities were related to memory safety. We’ve got consistency, if nothing else. And then, in recent years, something remarkable happened.
> the percentage of memory safety vulnerabilities in Android dropped from 76% to 24% over 6 years as development shifted to memory safe languages.
Eliminating Memory Safety Vulnerabilities at the Source (https://security.googleblog.com/2024/09/eliminating-memory-s...)
They stopped writing new C++ code and the memory safety vulnerabilities dropped dramatically. Billions of Android users are already benefiting from much more secure devices, today!
You originally said
> And it's not clear if memory safety is the largest source of problems building software today.
It is possible to defend this by saying “what matters in software is product market fit” or something similar. That would be technically correct, while side stepping the issue.
Instead I’ll ask you: do you still think it is possible to write secure software in C++ by just trying a little harder, through “good design and architecture”, as your previous comment implied?
Basically 99% of networked applications that don't talk to a trusted server and all OS level libraries fall under that category.
Your HFT code is most likely not connecting to an exchange that is interested in exploiting your trading code so the exploit surface is quite small. The only potential exploit involves other HFT algorithms trying to craft the order books into a malicious untrusted input to exploit your software.
Meanwhile if you are Google and write an android library, essentially all apps from the play store are out to get you.
Basically C++ code is like an infant that needs to be protected from strangers.
And yet, no matter how complex database engines really are, my experience has been the same: the number of bugs related to memory-safety were extremely rare.
> it's not clear if memory safety is the largest source of problems building software today
Books/repositories, anything practical?
> ~70% of the vulnerabilities Microsoft assigns a CVE
> 76% of vulnerabilities
What is the difference between the first two (emphasis added) and what you said? Just as a thought experiment...
If I measure a single factor to the exclusion of all others, I can find whatever I want in any set of data. Now, your point may be valid, but it is not what they published, and without the full dataset we cannot validate your claim; however, I can validate that what you claim is not what they claim.
To answer your question in the final paragraph: yes it is, but it requires the same cultural shift as what it would take to write the same code in rust or swift or golang or whatever other memory-safe language you want to pick.
If rust was in fact viable for such a large project, how's the servo project going? Is it still the resounding success it was expected to be? Rust in the kernel? Is that going well?
The jury is still out on whether rust will be mass adopted and is able to usurp C/C++ in the domains where C/C++ dominate. It may get there, but I would much much rather start a new project using C++20 than in rust and I would still be able to make it memory safe and yes it is a "skill issue", but purely because of legacy C++ being taught and accepted in new code in a codebase.
Rules for writing memory-safe C++ have not just been around for decades but have been statically checkable for over a decade, but for a large project there are too many errors to universally apply them to existing code without years of work. However, if you submit new code using old practices, you should be held financially and legally responsible, just like an actual engineer in another field would be.
It's because we are lax about standards that it's even an issue.
As a note, if you see an Arc<Mutex<>> in rust outside of some very specific Library code whoever wrote that code probably wouldn't be able to write the same code in a memory and thread safe manner, also that is an architectural issue.
Arc and Mutex are synchronisation primitives that are meant to be used to build datastructures, not in "userspace" code. It's a strong code smell that is generally accepted in Rust. Arc probably shouldn't even need to exist at all, because reaching for it is a clear indication nobody thought about the ownership semantics of the data in question; maybe for some datastructures it is required, but you should very likely not be typing it into general code.
If Arc<Mutex<>> is littered throughout your rust codebase you probably should have written that code in C#/Java/Go/pick your poison...
It's a really weird concept that probably comes only from having this extremely complex language where even the designers expect some parts of it are too weird for "normal programmers". But then they imagine some advanced class of programmer, the "library programmers", who can deal with such complexity.
The more modern way of designing software is to stick to the YAGNI principle: design your code to be simple and straightforward, and only extract out datastructures into separate libraries if and when they prove to be needed.
Not to mention, the position that shared ownership should just not exist at all is self-evidently absurd. The lifetime of an object can very well be a dynamic property of your program, and a concurrent one. A language that lacks std::shared_ptr / Arc is simply not a complete language, there will be algorithms that you just can't express.
The point of library code is to implement these things once in a safe and efficient manner and reuse the implementation.
Sometimes there are more domain or even company specific things that should be implemented exactly once and reused.
Nobody said there are different tiers of developers like "library developers" and "normal developers". Those are different types of programming that a single developer can do but fundamentally require a different thought pattern. Designing datastructures and algorithms are a lot more CS whereas general programming is much more akin to plumbing. If you think library code isn't needed it's because you overlook the library code you already use.
There are some things that are not yagni, if you have those in place then the rest of your code can literally be implemented that way because you literally won't need it.
It's not that shared_ptr isn't needed; it's that people use it where it isn't necessary, because it's convenient not to think it through entirely and because the necessary library code isn't there. I stand strong that seeing std::shared_ptr/Arc (or even std::unique_ptr/Box) in general code is a code smell; the fact that you even said that there are certain algorithms that cannot be expressed without it means you agree: the algorithm should be implemented exactly once and reused. If it's only used once then sure, it can be abstracted when needed, but that doesn't mean you shouldn't need to justify why it's there.
Let's keep some sanity and perspective here, please. C++ has many long-standing problems, but banging on the "security" drum will only drive people away from alternative languages. (Everyone knows that "security" is just a fig leaf they use to strong-arm you into doing stuff you hate.)
While I agree with this in a general sense, I think it ought to be quite possible to come up with a "profile" spec that's simply meant to enforce the language restriction/subsetting part of Safe C++ - meaning only the essentials of the safety checking mechanism, including the use of the borrow checker. Of course, this would not be very useful on its own without the language and library extensions that the broader Safe C++ proposal is also concerned with. It's not clear as of yet if these can be listed as part of the same "profile" specifications or would require separate proposals of their own. But this may well be a viable approach.
Circle is an implementation of C++ that includes a borrow checker and is 100% backwards compatible with C++:
https://www.circle-lang.org/site/index.html
A nice attempt, but I have millions of lines of C++ that aren't going away.
You are welcome to take your millions of lines of C++ code and compile them without change using Circle: any valid C++ code is valid Circle code, which is the technical definition of being backward compatible.
You don't need to change existing code to use Circle or the new features Circle introduces, you can just write new classes and functions with those features and your existing code will continue to compile as-is.
All my efforts to do the above so I can mix C++ and Rust have quickly failed when I realized that my wrappers would not be thin, and thus they would incur large performance penalties.
The moral of the story? Backwards compatibility means never leaving your baggage behind.
C++ may offer modules (in fact, it's been offering them since 2020); however, when it comes to their implementation in mainstream C++ compilers, only now are things becoming sort of usable, with modules still a challenge in more complex projects due to compiler bugs in the corner cases.
I think we need to be honest and upfront about this. I've talked to quite a few people who have tried to use modules but were unpleasantly surprised by how rough the experience was.
Also they are difficult to switch to, so I would expect very few established projects to bother.
When the heck that day will actually arrive, FSM only knows. The will is sort-of there, but there are just SO many other things competing for my time and attention. :-(
[1]: funny side story about that. For anybody too young to remember just how hot the job market was back then... one day I was sitting stopped at a traffic light in Durham (NC). I'm just minding my own business, waiting for the light to change, when I catch a glimpse out of my side mirror, of somebody on foot, running towards my car. The guy gets right up to my car, and I think I had my window down already anyway. Anyway, the guy gets up to me, panting and out of breath from the run and he's like "Hey, I noticed your license plate and was wondering if you were looking for a new job." About then the light turned green in my direction, and I'm sitting there for a second in just stunned disbelief. This guy got out of his car, ran a few car lengths, to approach a stranger in traffic, to try to recruit him. I wasn't going to sit there and have a conversation with horns honking all around me, so I just yelled "sorry man" and drove off. One of the weirder experiences of my life.
https://youtube.com/watch?v=yDbvVFffWV4
Cypress Creek was intended to be a reference to Silicon Valley and the tech companies there of the time, and it’s got some of the best comedy in the season (Hank Scorpio is the best one-off character ever in the show IMO.)
I mixed up the tag and my old domain name, which was "cpphacker.co.uk" (and later, just cpphacker.com/org).
"a simple program that writes every unique line from input to output"
Bjarne does thank more than half a dozen people, including other WG21 members, for reviewing this paper, maybe none of them read this program?
More likely, like Bjarne they didn't notice that this program has Undefined Behaviour for some inputs and that in the real world it doesn't quite do what's advertised.
The first example uses the int type. This is a signed integer type and in practice today it will usually be the 32-bit signed integer Rust calls i32 because that's cheap on almost any hardware you'd actually use for general purpose software.
In C++ this type has Undefined Behaviour if allowed to overflow. For the 32-bit signed integer that will happen once we see 2^31 identical lines.
In practice the observed behaviour will probably be that it treats 2^32 identical lines as equivalent to zero prior occurrences and I've verified that behaviour in a toy system.
Rust code is 100 percent undefined behavior because Rust doesn't have an ISO standard. So, theoretically some alternative Rust compiler implementation could blow up your computer or steal your bitcoins. There's no ISO standard to forbid them from doing so.
(You see where I'm going with this? Standards are good, but they're a legal construct, not an algorithm.)
An ISO standard? According to who, ISO?
Your point is what?
As a C++ newbie I just don't understand the recommended path I'm supposed to follow, though. It seems to be a mix of "a book of guidelines" and "a package that shows you how you should be using those guidelines via implementation of their principles".
After some digging it looks like the guidebook is the "C++ Core Guidelines":
https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines
And I'm supposed to read that and then:
> use parts of the standard library and add a tiny library to make use of the guidelines convenient and efficient (the Guidelines Support Library, GSL).
Which seems to be this (at least Microsoft's implementation):
https://github.com/microsoft/GSL
And I'm left wondering, is this just how C++ is? Can't the language provide tooling for me to better adhere to its guidelines, bake in "blessed" features and deprecate what Bjarne calls, "the use of low-level, inefficient, and error-prone features"? I feel like these are tooling-level issues that compilers and linters and updated language versions could do more to solve.
I still feel the sting of being bit by C++ features from the 1990s that turned out to be footguns.
Honestly, I kinda like the idea of "wrapper" languages. Typescript/Kotlin/Carbon.
I was expecting that someone would have posted this by now:
How to Shoot Yourself In the Foot:
https://www-users.york.ac.uk/~ss44/joke/foot.htm
Did you even read the article ? He has given the recommended path in the article itself.
Two books describe C++ following these guidelines except when illustrating errors: “A tour of C++” for experienced programmers and “Programming: Principles and Practice using C++” for novices. Two more books explore aspects of the C++ Core Guidelines
J. Davidson and K. Gregory Beautiful C++: 30 Core Guidelines for Writing Clean, Safe, and Fast Code. 2021. ISBN 978-0137647842
R. Grimm: C++ Core Guidelines Explained. Addison-Wesley. 2022. ISBN 978-0136875673.
Well, first, the language can't provide tooling: C++ is defined formally, not through tools; and tools are not part of the standard. This is unlike, say, Rust, where IIANM - so far, Rust has been what the Rust compiler accepts.
But it's not just that. C++ design principles/goals include:
* multi-paradigmatism;
* good backwards compatibility;
* "don't pay for what you don't use"
and all of these in combination prevent baking in almost anything: It will either break existing code; or force you to program a certain way, while legitimate alternatives exist; or have some overhead, which you may not want to pay necessarily.
And yet - there are attempts to "square the circle". An example is Herb Sutter's initiative, cppfront, whose approach is to take in an arguably nicer/better/easier/safer syntax, and transpile it into C++ :
https://github.com/hsutter/cppfront/
C++ code involves numerous templates, and the definition of those templates is almost always in a header file that gets included into a translation unit. If a safety profile is enabled in one translation unit that includes a template, but is omitted from another translation unit that includes that same template... well what exactly gets compiled?
The rule in C++ is that it's okay to have multiple definitions of a declaration if each definition is identical. But if safety profiles exist, this can result in two identical definitions having different semantics.
There is currently no resolution to this issue.
It's a bit optimistic, because modules are still not really a viable option in my eyes: you need proper support from the build systems, and notably CMake only has limited support for them right now.
The tooling is way better than it was 6 months ago though, as in I can actually compile code in a non-Visual-Studio project using import std.
I will be extremely happy the day I no longer need to see a preprocessor directive outside of library code.
(I must say that I was happy to see/read that article, though)
Bjarne Stroustrup, AT&T Labs, Florham Park, NJ, USA
Abstract
This paper outlines the proposal for generalizing the overloading rules for Standard C++ that is expected to become part of the next revision of the standard. The focus is on general ideas rather than technical details (which can be found in AT&T Labs Technical Report no. 42, April 1, 1998).
https://www.stroustrup.com/whitespace98.pdf
(It's a great post in general. N.B. that it's also quite old and export templates have been removed from the standard for quite some time after compiler writers refused to implement them.)
TL;DR: Declare your templates in a header, implement them in a source file, and explicitly instantiate them inside that same source file for every type that you want to be able to use them with. You lose expressiveness but gain compilation speed because the template is guaranteed to be compiled exactly once for each instantiation.
Which is to say, "extern template" is a thing that exists, that works, and can be used to do what you want to do in many cases.
The "export template" feature was removed from the language because only one implementer (EDG) managed to implement them, and in the process discovered that a) this one feature was responsible for all of their schedule misses, b) the feature was far too annoying to actually implement, and c) when actually implemented, it didn't actually solve any of the problems. In short, when they were asked for advice on implementing export, all the engineers unanimously replied: "don't". (See https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n14... for more details).
Or more, correctly, the following happens:
1. You gain the ability to use the compilation unit's anonymous namespace instead of a detail namespace, so there is better encapsulation of implementation details. The post author stresses this as the actual benefit of export templates, rather than compile times.
2. You lose the ability to instantiate the template for arbitrary types, so this is probably a no-go for libraries.
3. Your template is guaranteed to be compiled exactly once for each explicit instantiation. (Which was never actually guaranteed for real export templates).
Everything is unobvious magic. Sure, you stick to a very restricted set of API usages and patterns, and all the magic allocation/deallocation happens out of sight.
But does that make it easier to debug? Better to code it?
This simply looks like C++ trying not to look like C++: like a completely different language, but one that was not built from the ground up to be that language, rather a bunch of shell games to make it look like another language as an illusion.
Over my career I’ve written hundreds of thousands of lines of it.
But keeping up with it is time consuming and more and more I find myself reaching for other languages.
Bjarne has been criticized for accepting too many (questionable) things into the language even at the dawn of C++, and the committee has kept up that behavior. Moreover, they have this pattern that, given the options, they always choose the easiest-to-misuse and most unsafe implementation of anything that goes into the standard. std::optional is a mess, so is curly-bracket initialization, and auto is like choosing between stepping on Legos or putting your arm into a bag full of spiders.
The committee is the worst combination of "move fast and break things" and "not in my watch". C++98 was an okay language, C++11 was alright. Anything after C++14 is a minesweeper game with increasing difficulty.
He even writes that way in his own article... The quote from the last section of the introduction was hilarious, and actually made me laugh a little bit for almost those exact reasons.
BS, Comm ACM > "I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many."
>>contemporary C++30 can express the ideas embodied in such old-style code far simpler
IMO, newer C++ versions are becoming more complex (too many ways to do the same thing), less readable (prefer explicit types over 'auto', unless unavoidable) and harder to analyse performance and memory implications (hard to even track down what is happening under the hood).
I wish the C++ language and standard library would have been left alone, and efforts went into another language, say improving Rust instead.
Where do you see difficult to track down performance/memory implications? Lambda comes to mind and maybe coroutines (yet to use them but guessing there may be some memory allocations under the hood). I like that I can breakpoint my C++ code and look at the disassembly if I am concerned that the compiler did something other than expected.
Java solved this by making var a reserved type, not a keyword, but I don't know if that's feasible for C++.
Effectively, other than for rewriting older iterator-based algorithms to use the new ranges iterators, I just don't use std::ranges... Likely the compiler cannot optimise it as well (yet), and all the edge cases are not worked out yet. I also find it quite difficult to reason about vs the older iterator-based algorithms.
for_each would take an iterator pair and a lambda, and call the lambda for each element; if the compiler can optimise it, it becomes a loop, and if it can't, it becomes a function call in a loop, which probably isn't much worse... If for some reason the lambda needs to allocate per iteration, it's going to be a performance nightmare.
Would it really be much harder to take that lambda, move it to a templated function that takes an iterator and call it the old fashioned way?
This wasn’t proven by the time c++11 was ready, but for c++20 and beyond it’s a shame they didn’t go with this.
There are certainly better tools for many jobs and it is important to have languages to reach for depending on the task at hand. I don't know that anything is better than C++ for performance sensitive code.
I was using c++11 when it was still called c++0x (and even before that when many of the features were developing in boost).
I took a break for a few years over c++14, but caught up again for c++17 and parts of c++20...
Which puts me 5-6 years behind the current state of things and there’s even more new features (and complexity) on the horizon.
I’m supportive of efforts to improve and modernize c++, but it feels like change didn’t happen at all for far too long and now change is happening too fast.
The ‘design by committee’ with everyone wanting their pet feature plus the kitchen sink thrown in doesn’t help reduce complexity.
Neither does implementing half-baked features from other ‘currently trendy’ languages.
It’s an enormous amount of complexity - and maybe for most code there’s not that much extra actual complexity involved but it feels overwhelming.
Specifically here are areas I haven’t used that appear to have nontrivial amounts of complexity, footguns, syntax and other things to be aware of:
* Ranges
* Modules
* Concepts
* Coroutines
Each of these is a large enough topic that it will involve time and effort to reach an equivalent level of competence and understanding that I have with other areas of c++.
I don’t mind investing time learning new things but with commentary around the web (and even this thread) calling the implementation and syntax a hot mess, at some point it’s a better investment to put that learning in to a language without all the same baggage.
I really wish c++ had gone with breaking change epochs for c++20.
Less and less, for sure.
Nothing the past few years.
They killed it.
As someone who worked in HFT, C++ is very much alive, and new projects continue to be created in it simply because of the sheer number of experts in it. (For better or for worse.)
C++ has been dead and effectively banned at amzn for years. Only very specific (robotics and ML generally) projects get exemptions. Rust is big and only getting bigger
Some FAANGs focus on AI more than others.
That's probably most devices in the world.
Modern C++ with constexpr and friends and the massive work and cunning they have put into avoiding template bloat....
...C++ is now my first choice for embedded.
Respectfully, you might want to add at least a few C++20 features into your daily usage?
consteval/constinit guarantees to do what you usually want constexpr to do. Have personally found it great for making lookup tables and reducing the numbers of constants in code (and c++23 expands what can be done in consteval).
Designated initializers are a game-changer for filling structures. No more accidentally populating the wrong value in a structure initializer, or writing individual assignments for each member you want to initialize.
You don't have to use features. Instead, when you have a (language) problem to solve or something you'd like to have, you look into the features of the language.
Knowing they exist beforehand is better but is the hard part, because "deep" C++ is so hermetic that it is difficult to understand a feature when you have no idea which problem it is trying to solve.
> I used the from_range argument to tell the compiler and a human reader that a range is used, rather than other possible ways of initializing the vector. I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many.
Oh so I have to remember from_range and can't do the obvious thing? Great. One more thing to distract me from solving the actual problem I'm working on.
What exactly is wrong with the C++ community that blinds them to this sort of thing? I should be able to write performant, low-level code leveraging batteries-included algorithms effortlessly. This is 2025, people.
It's really any other language other than those two.
Rust is still too new for many folks to adopt, it depends on how much you would be willing to help grow the ecosystem, versus doing the actual application.
It will eventually get there, but it also faces the same issues as C++ regarding taking over from C in UNIX/POSIX and embedded. C++ had the advantage of being a kind of TypeScript for C in terms of adoption effort: a UNIX language from AT&T, designed to fit into the C ecosystem.
Whenever I have an idea and I start a project, I start with C++ because I know if the idea works out, the project can grow and work 10 years later.
Until then... YAWN.
If the uber-bean counter, herald of the language of bean counters demonstrate unwillingness to count beans, maybe the beans are better counted in another way.
You might say this is very silly, and you'd be right. But as quirks of C++ go it is one of the most benign ones. As usual it is there for backwards compatibility.
And, for what it's worth, the uber-bean counter didn't miss a bean here...
> using namespace std
something you get told not to do easily! :D
A well-designed language is one in which there are very few different ways of doing the same thing. And C++ is definitely not that.
Imagine if you told a writer or poet that English is bad because there is more than one way to say the same thing...
Programming languages are for people more than machines. Machines are happy with microcode.
So that you focus on solving the problem at hand, instead of endlessly arguing over decisions that are irrelevant to solving the said problem.
For other fields there is also a dearth of candidates but the pay falls short and you’ll be leaving tens of thousands of dollars on the table compared to what you could get with other languages.
I have often thought about writing something vaguely similar. We'll see if I ever do. It wouldn't be the same, because I don't hold the same position Bjarne did in the early days, but I am very interested in Rust history and want to preserve it. It would be from my perspective rather than from the creator's perspective.
I did give a talk one time on Rust’s history. It was originally at FOSDEM, but there was an issue with the recording. The ACM graciously asked me to do it again to get it down on video https://dl.acm.org/doi/10.1145/2959689.2960081
When I read “The Design and Evolution of C++”, it gave me a better understanding of the language.
I’m mostly focused on jj with my writing right now, but we’ll see…
Look at the first example in the article, where the increment can overflow and cause UB despite that overflow having completely defined semantics at the hardware level. Fixing it requires either a custom addition function or C++26, another include, and add_sat(). I wouldn't consider either concise in a program that doesn't include all of std.
Modern C++ allows you to swap out most features and behaviors of the language with your own implementations that make different guarantees. C++ is commonly used in high-assurance environments with extremely high performance requirements, and it remains the most effective language for these purposes because you can completely replace most of the language with something that makes the safety guarantees you require. This matters in practice. For example, userspace DMA is idiomatic in high-performance database kernels, and handling it is much safer in C++ than in Rust. In C++, you can trivially write elegant primitives that completely hide the unusual safety model. In Rust, you have to write a lot of ugly unsafe code to make this work at all, because userspace DMA isn't compatible with a borrow checker: there can always be multiple mutable references to memory, but this is not knowable at compile time, so the safety of an operation can only be arbitrated at runtime.
Of course, it is still incumbent on the developer to use the language competently in all cases.
For what it's worth, unsafe Rust is safer than C++: there's very little UB to explode your carefully crafted implementations. Safe Rust of course has no UB except for what you write in unsafe blocks, so it's safer still, and there's no real difference in the abstractions you can write with concepts vs traits.
I'm not actually arguing for Rust here though, because this isn't a great showing for it. Trying to write the related add_wrap(T, T) function in Rust is stupidly verbose compared to add_sat(T, T), thanks to bad decisions the num_traits authors made. What I am saying is that C++ isn't a form of high-level assembly, as your original comment suggested. Understanding the relationship between the language and the hardware takes a lot of experience that most people don't apply when writing code.
I never suggested that C++ was “a form of high level assembly”. I’ve written enough assembly and C to know better; you lose a bit of precision with C++. But now I can define (or not) the behavior I want in a way that is largely transparent. This has been a brilliant change to the language.
If you have a foundational library that makes different and/or explicit guarantees than std, it is pretty easy to police that in a code base with automation. Everyone doing high-performance and/or high-assurance systems is dragging in few if any dependencies, so this is practical. The kinds of things that C++ is really good at for new code are the kinds of things where this is what you would do regardless.
Developers don’t even have to be hardware experts, they just have to not use std for most things. That is a pretty low barrier. And std is a mess with the albatross of legacy support. Reimagined C++20 native “standard” libraries are much, much cleaner and safer (and faster).
Legacy C++ code bases aren’t going to be rewritten in a new language. New C++ code bases can take advantage of alternative foundations that ignore std and many do. Most things should not be written in C++, but for some things C++ is unmatched currently and safer in practice than is often suggested with basic hygiene.
Then all you need to do is also verify that the sending code adheres to the schema it specified.
This has very little to do with borrow checking. From the perspective of the borrow checker, a DMA call is no different from RPC or writing to a very wide pointer.
Having to drop down to intrinsics early is not a strength.
My biased opinion, from doing this full-time in C++, is that the C++ SIMD story is much further along, especially regarding mature libraries.
Says a lot about the people hiring imo. Good luck to them finding someone who can recite C++ spec from memory.
Which reminds me of something I hate more than header files: macros.
I don't need the stress anyway. The dough would've been nice, though...
For whatever reason, this is probably the biggest thing I've struggled with (aside from tooling... it makes me miss npm).
(How is that possible, someone may ask? It's the SCP! - see https://news.ycombinator.com/item?id=26998308)
I have come to find this category of error to be distressingly large.
If a proportional font is used for rendering, the most likely cause is that the user has not configured the default monospace font in the settings of the browser.
The font is selected by the HTML/CSS of the ACM site, not by Bjarne.
There may be a bug in the CSS of the ACM site, but I think that it is more likely that anyone who does not see correctly formatted code on that page has forgotten to open the settings of their browsers and select appropriate default fonts for "serif", "sans serif" and "monospace".
As installed, most browsers very seldom have appropriate default fonts, you normally must choose them yourself.
In this case, whoever does not see a monospace font must have a proportional font set in their browser as the default monospace font, and should correct this. A monospace font is mandatory for rendering the code on that page, because the indentation is done with spaces, which become too narrow when rendered with a proportional font.
On my Firefox on Linux, this HTML page is not rendered with any custom typefaces, but it uses those specified by me as defaults for serif/sans serif/monospace.
The C++ code is rendered in my browser with my default, i.e. with JetBrains Mono and there is nothing weird.
The code quoted by you is indented as expected, not as in your posting.
On my computer, I have mostly typefaces that I have bought myself and which are seldom encountered in most computers. I do not have any of the typefaces that are typically specified in CSS rules, i.e. none of the typefaces that can be found in default installations of Windows, Linux or MacOS.
So perhaps there is a bug in their CSS at the definition of "wp-block-code", which on other computers selects a bad typeface that is proportional, so that the narrow spaces make the indentation disappear. (Their wp-block-code says "font-family:inherit" and I have not searched further to see from where the wrong font-family may be inherited.)
Here, perhaps because that bad typeface cannot be found, the browser uses my default monospace font and the code is displayed fine.
Or else, perhaps you have not set in your browser a proper default for monospace fonts and it just takes Arial or other such inappropriate system font even for monospace.
I didn't see the author at first. However, immediately after seeing the code I checked for the author, because I was sure it was Stroustrup.
Normal people who have a modern environment would use std::println, but Bjarne insists on using the I/O streams from last century instead.
On my browser, all the code is properly indented, most likely because my browsers are configured correctly, i.e. with a monospace font set as the default for "monospace".
Whoever does not see indentation, most likely has not set the right default font in their browser.
My browser has an appropriate default monospace font (JetBrains Mono), so the code is formatted and indented correctly, as expected.
Where this does not happen, the setting for the default monospace font must be wrong, so it should be corrected.
Here the code is displayed with my default monospace font, as configured in browsers, so the formatting is fine.
There are only 2 possible reasons for the bad formatting: a bug in the CSS of the ACM site, which selects a bad font on certain computers or a bad configuration of your own browsers, where you have not selected appropriate default fonts.
Most of programming language conferences are organized by ACM.
It's just the first code snippet that's messed up. The rest is merely wonky.
https://news.ycombinator.com/newsguidelines.html
Those languages occupy different points in the design space than C++. And thus, in the general sense, neither of them, nor their combination, is "C++ with the problems solved". I know very little Rust and even less Zig. But I do know that there are various complaints about Rust, which are different than the kinds of complaints you get about C++ - not because Rust is bad, just because it's different in significant ways.
> It is so objectively horrible in every capacity
Oh, come now. You do protest too much... yes, it has a lot of warts. And it keeps them, since almost nothing is ever removed from the language. And still, it is not difficult to write very nice, readable, efficient, and safe C++ code.
That's a fine case of Stockholm Syndrome you've got there. In reality, it is hard. The language fights you every step of the way. That's because the point in the design space C++ occupies is a uniquely stupid one. It wants to have its cake and eat it too. The pipe-dream behind C++ is that you can write code in an expressive manner and magically have it also be performant. If you want fast code, you have to be explicit about many things. C++ ties itself in knots trying to be implicitly explicit about those things, and the result is just plain harder to reason about. If you want code that's safe and fast, you go with Rust. If you want code that's easy and fast, you go with Zig. If you want code that's easy and safe, you go with some GCed lang. Then if you want code that's easy, safe, and fast, you pick C++ and get code which might be fast. You cannot have all three things. Many other languages find an appropriate balance of these three traits to be worthwhile, but C++ does not. It's been 40 years since the birth of C++ and they are only just now trying to figure out how to make it compile well.
C++ will be here forever, at least in some manner.
edit: spelling
Total hyperbole and simply not true.
> but it still somehow managed to limp on for all these years
Before Rust became somewhat popular, there was simply no serious alternative to C++ in many domains.
LLVM's optimiser is more powerful, and it handles unwinding, so today most people want LLVM but actually I think LLVM's future might involve more Rust.
Similar to how much Python folks disregard PyPy's existence.
I doubt LLVM project would start accepting polyglot contributions, beyond what they already do for language specific frontends.
Also, the ongoing GCC support is dependent on C++ as well.
Opening braces should be inline with the expression or definition.
Comments can be above what they're referred to.
Combined, this makes any code snippet look like crap on mobile and almost impossible to follow as a result.