xiphias23 hours ago
It's great that bounds checking finally happened in C++, (mostly) by default.

The only thing that's less great is that this got so many fewer upvotes than all the Safe-C++ languages that never really had a chance to get into production in old code.

BinaryIgor3 hours ago
Interesting how C++ is still improving; it seems like changes of this kind may rival at least some of the Rust use cases. Time will tell.
galangalalgol1 hour ago
The issue with safer C++ and modern C++ mirrors the problem of migrating a code base from C++ to Rust: there is just so much unmodern and unsafe C++ out there. Mixing modern C++ into older codebases leaves uncertain assumptions everywhere, and sometimes awkward interop with the old C++. If there were a `c++23 {}` block that told the compiler only modern C++ and libc++ existed inside it, that would make a huge difference by making those boundaries clear, and you could document the assumptions at that boundary. Then move things over time. The optimizer would have an advantage in that code too. But they don't want to do that. The least they could do is settle on a standard C++ ABI to make interop with newer languages easier, but they don't want to do that either. They have us trapped with sunk cost on some giant projects. Or they think they do. The big players are still migrating to Rust, slowly but steadily.
kaz-inc1 minute ago
There kind of is. There's __cplusplus, which I'll grant you is quite janky.

  #if __cplusplus == 202302L
josephg1 hour ago
I’m not really sure how checks like this can rival Rust. Rust does an awful lot of checks at compile time - sometimes even to the point of forcing the developer to restructure their code or add special annotations just to help the compiler prove safety. You can’t trivially reproduce all those guardrails at runtime, certainly not without a large performance hit. Even debug-mode libstdc++ - with all checks enabled - still doesn’t protect against many bugs the Rust compiler can find and prevent.

I’m all for C++ making these changes. For a lot of people, adding a bit of safety to the language they’re going to use anyway is a big win. But in general, guarding against threading bugs, use-after-free, or a lot of more obscure memory issues requires either expensive GC-like runtime checks (Fil-C has 0.5x-4x performance overhead and a large memory overhead) or compile-time checks. And C++ will never get Rust’s extensive compile-time checks.

semiinfinitely1 hour ago
> Interesting how C++ is still improving

it's not

Conscat19 minutes ago
Do you read the Clang git commit log every day? C++ improves in many ways faster than any other language ecosystem.
fweimer1 hour ago
How does this compare to _GLIBCXX_ASSERTIONS in libstdc++ (on by default in Fedora since 2018)?
beached_whale1 hour ago
My understanding is that this is like that, but both libstdc++ and libc++ have been doing more since. Additionally, Google published a blog post not too long ago where they talked about the actual performance impact on their large C++ codebase; it averaged about 0.3%, I think: https://security.googleblog.com/2024/11/retrofitting-spatial...

Since then, libc++ has categorized the checks by cost and one can scale them back too.
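For reference, the cost-categorized checks mentioned here surface in current libc++ (an assumption about the reader's toolchain version) as the `_LIBCPP_HARDENING_MODE` configuration macro; a sketch of the dial, as a config fragment:

```cpp
// Selected at build time, e.g.:
//   clang++ -stdlib=libc++ -D_LIBCPP_HARDENING_MODE=_LIBCPP_HARDENING_MODE_FAST ...
//
// Modes, cheapest to most thorough:
//   _LIBCPP_HARDENING_MODE_NONE       - no checks
//   _LIBCPP_HARDENING_MODE_FAST       - security-critical checks only, low overhead
//   _LIBCPP_HARDENING_MODE_EXTENSIVE  - adds further checks with modest overhead
//   _LIBCPP_HARDENING_MODE_DEBUG      - all checks, intended for test builds
```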

ris3 hours ago
See also the "lite assertions" mode @ https://gcc.gnu.org/wiki/LibstdcxxDebugMode for libstdc++; however, these are less well documented and it's less clear what performance impact these measures are expected to have.
tialaramex4 hours ago
> those that lead to undefined behavior but aren't security-critical.

Once again C++ people imagining into existence Undefined Behaviour which isn't Security Critical as if somehow that's a thing.

Mostly I read the link because I was intrigued as to how this counted as "at scale", and it turns out that's misleading: the article's main body is about the (at scale) deployment at Google, not the actual hardening work itself, which wasn't in some special way "at scale".

AshamedCaptain4 hours ago
Of course there is undefined behavior that isn't security critical. Hell, most bugs aren't security critical. In fact, most software isn't security critical, at all. If you are writing software which is security critical, then I can understand this confusion; but you have to remember that most people don't.

The author of TFA actually makes another related assumption:

> A crash from a detected memory-safety bug is not a new failure. It is the early, safe, and high-fidelity detection of a failure that was already present and silently undermining the system.

Not at all? Most memory-safety issues will never even show up on the radar, while with "Hardening" you've converted all of them into crashes that for sure will, annoying customers. Surely there must be a middle ground, which leads us back to the "debug mode" that the article is failing to criticize.

AlotOfReading1 hour ago

    In fact, most software isn't security critical, at all. If you are writing software which is security critical, then I can understand this confusion; but you have to remember that most people don't.
No one knows what software will be security critical when it's written. We usually only find out after it's already too late.

Language maintainers have no idea what code will be written. The people writing libraries have no idea how their library will be used. The application developers often don't realize the security implications of their choices. Operating systems don't know much about what they're managing. Users may not even realize what software they're running at all, let alone the many differing assumptions about threat model implicitly encoded into different parts of the stack.

Decades of trying to limit the complexity of writing "security critical code" only to the components that are security critical has resulted in an ecosystem where virtually nothing that is security critical actually meets that bar. Take libxml2 as an example.

FWIW, I disagree with the position in the article that fail-stop is the best solution in general, but there's experimental evidence to support it at least. The industry has tried many different approaches to these problems in the past. We should use the lessons of that history.

criemen1 hour ago
> Of course there is undefined behavior that isn't security critical.

But undefined behavior is literally introduced as "the compiler is allowed to do anything, including deleting all your files". Of course that's security critical by definition?

charleslmunger4 hours ago
>Not at all? Most memory-safety issues will never even show up in the radar

Citation needed? There's all sorts of problems that don't "show up" but are bad. Obvious historical examples would be heartbleed and cloudbleed, or this ancient GTA bug [1].

1: https://cookieplmonster.github.io/2025/04/23/gta-san-andreas...

gishh3 hours ago
Most people around here are too busy evangelizing rust or some web framework.

Most people around here don’t have any reason to have strong opinions about safety-critical code.

Most people around here spend the majority of their time trying to make their company money via startup culture, the annals of async web programming, and how awful some type systems are in various languages.

Working on safety-critical code with formal verification is the most intense, exhausting, fascinating work I’ve ever done.

Most people don’t work at a company that either needs or can afford a safety-critical toolchain sufficient for formal, certified verification.

The goal of formal verification and safety critical code is _not_ to eliminate undefined behavior, it is to fail safely. This subtle point seems to have been lost a long time ago with “*end” developers trying to sell ads, or whatever.

AlotOfReading29 minutes ago

    The goal of formal verification and safety critical code is _not_ to eliminate undefined behavior, it is to fail safely.
Software safety cases depend on being able to link the executable semantics of the code to your software safety requirements.

You don't inherently need to eliminate UB to define the executable semantics of your code, but in practice you do. You could do binary analysis of the final image instead. You wouldn't even need a qualified toolchain this way. The semantics generated would only be valid for that exact build, and validation is one of the most expensive/time-consuming parts of safety critical development.

Most people instead work at the source code level, and rely on qualified toolchains to translate defined code into binaries with equivalent semantics. Trying to define the executable semantics of source code inherently requires eliminating UB, because the kind of "unrestricted UB" we're talking about has no executable semantics, nor does any code containing it. Qualified toolchains (e.g. CompCert, Green Hills, GCC qualified via Solid Sands, Diab) don't guarantee correct translation of code without defined semantics, and coding standards like MISRA also require eliminating it.

As a matter of actual practice, safety critical processes "optimistically ignore" some level of undefined behavior, but that's not because it's acceptable from a principled stance on UB.

kccqzy3 hours ago
I appreciate your insights about formal verification but they are irrelevant. Notice that GP was talking about security-critical and you substituted it for safety-critical. Your average web app can have security-critical issues but they probably won’t have safety-critical issues. Let’s say through a memory safety vulnerability your web app allowed anyone to run shell commands on your server; that’s a security-critical issue. But the compromise of your server won’t result in anyone being in danger, so it’s not a safety-critical issue.
gishh2 hours ago
Safety-critical systems aren’t connected to a MAC address you can ping. I didn’t move the goalposts.
AlotOfReading15 minutes ago
Individual past experiences aren't always representative of everything that's out there.

I've worked on safety critical systems with MAC addresses you can ping. Some of those systems were also air-gapped or partially isolated from the outside world. A rare few were even developed as safety critical.

josephg1 hour ago
Sure they are. Eg, 911 call centers. Flight control. These systems aren’t on the open internet, but they’re absolutely networked. Do they apply regular security patches? If they do, they open themselves up to new bugs. If not, there are known security vulnerabilities just waiting for someone to use to slip into their network and exploit.

And what makes you think buggy software only causes problems when hackers get in? Memory bugs cause memory corruption and crashes. I don’t want my pacemaker running somebody’s cowboy C++, even if the device is never connected to the internet.

gishh29 minutes ago
Ah. I was responding to:

> Your average web app can have security-critical issues but they probably won’t have safety-critical issues.

How many air-gapped systems have you worked on?

samdoesnothing3 hours ago
nooooo you don't understand, safety is the most important thing ever for every application, and everything else should be deprioritized compared to that!!!
forrestthewoods35 minutes ago
> Undefined Behaviour which isn't Security Critical as if somehow that's a thing

Undefined behavior in the (poorly written) spec doesn't mean undefined behavior in the real world. A given compiler is perfectly free to specify the behavior.

dana3211 hour ago
Imagine hardening the regex library; it's already as slow as molasses.
semiinfinitely1 hour ago
by deleting it?
on_the_train4 hours ago
std::optional is unsafe in idiomatic use cases? I'd like to challenge that.

Seems like the daily anti c++ post

steveklabnik3 hours ago
Two of the authors are libc++ maintainers and members of the committee, it would be pretty odd if they were anti C++.
maccard3 hours ago
I’m very much pro c++, but anti c++’s direction.

> optional is unsafe in idiomatic use cases? I’d like to challenge that.

    #include <optional>

    std::optional<int> x(std::nullopt); // an empty optional
    int val = *x;                       // UB: dereferences no value

Optional is by default unsafe - the above code is UB.
on_the_train3 hours ago
But using the deref op is deliberately unsafe, and it's never used without a check in practice. This would neither pass review nor static analysis.
canyp3 hours ago
GP picked the less useful of the two examples. The other one is a use-after-move, which static analysis won't catch beyond trivial cases where the relevant code is inside function scope.

I also agree with them: I am pro-C++ too, but the current standard is a fucking mess. Go and look at modules if you haven't, for example (don't).

mohinder2 hours ago
> This would neither pass a review, nor static analysis

I beg to differ. Humans are fallible. Static analysis of C++ cannot catch all cases and humans will often accept a change that passes the analyses.

einpoklum2 hours ago
> Static analysis of C++ cannot catch all cases

You're ignoring how static analysis can be made to err on the side of safety rather than promiscuity.

Specifically, for optional dereferencing, static analysis can be made to disallow it unless it can prove the optional has a value.

IshKebab2 hours ago
> never used without a check in practice

Ho ho ho good one.

TinkersW3 hours ago
That is actually memory safe, as null will always trigger an access violation.

Anyway, safety-checked modes are sufficient for many programs; this article claims otherwise but then contradicts itself by showing that they caught most issues using... safety-checked modes.

steveklabnik3 hours ago
It is undefined behavior. You cannot make a claim about what it will always do.
maccard3 hours ago
>null will always trigger access violation..

No, it won't. https://gcc.godbolt.org/z/Mz8sqKvad

TinkersW3 hours ago
Oh, my bad, I read that as nullptr. I use a custom optional that does not support such a silly mode as "disengaged".
canyp3 hours ago
How is that an optional then?

The problem is not nullopt, but that the client code can simply dereference the optional instead of being forced to pattern-match. And the next problem, like the other guy mentioned above, is that you cannot make any claims about what will happen when you do so because the standard just says "UB". Other languages like Haskell also have things like fromJust, but at least the behaviour is well-defined when the value is Nothing.

maccard58 minutes ago
What do you return if there is no value set? That’s the entire point of optional.
wild_pointer3 hours ago
You didn't read this, did you? https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/

It's not a pointer.

boulos3 hours ago
They linked directly to https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/ which did exactly what I'd guessed as its example:

> The following code for example, simply returns an uninitialized value:

  #include <optional>

  int f() {
    std::optional<int> x(std::nullopt);
    return *x;
  }
on_the_train3 hours ago
But that is not idiomatic at all. Idiomatic would be to use .value().
Maxatar2 hours ago
Just a cursory search on GitHub should put this idea to rest. You can do a code search for std::optional and .value() and see that only about 20% of uses of std::optional make use of .value(). The overwhelming majority of uses of std::optional use * to access the value.
electroly2 hours ago
Sadly I have lots of code that exclusively uses the dereference operator because there are older versions of macOS that shipped without support for .value(); the dereference operator was the only way to do it! To this day, if you target macOS 10.13, clang will error on use of .value(). Lots of this code is still out there because they either continue to support older macOS, or because the code hasn't been touched since.
IshKebab2 hours ago
Not only is this a silly No True Scotsman argument, but it's also absolute nonsense. It's perfectly idiomatic to use `*some_optional`.
canyp3 hours ago
It is discussed in the linked post: https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/

tl;dr: use-after-move, or dereferencing null.