New revision of the paper from Jon Bauman (Rust Foundation), Timur Doumler, Nevin Liber, Ryan McDougall, Pablo Halpern, Jeff Garland, and Jonathan Müller. Targeting EWG.
The paper tries to answer two questions: what does "memory-safe language" actually mean, and should C++ become one? The core argument is that "memory-safe" means systematically preventing all undefined behavior, not just reducing the rate of memory bugs. They argue that profiles, sanitizers, and library hardening are valuable but fundamentally insufficient for the "memory-safe" classification. The proposed path is a subset-of-superset approach - add language features that enable a UB-free subset, similar to how Rust's default mode is safe and unsafe is the opt-in.
Notably, the paper does not commit to a specific design. No borrow checker proposal, no syntax, no concrete feature. It's asking EWG to commit to the goal of becoming a memory-safe language first, then figure out the how.
R0 went through SG23 in Kona with consensus to forward (13-5-5-2-0 for encouraging design work, 10-4-8-2-1 for forwarding to EWG). The margins on that second poll are... interesting.
Paper: https://wg21.link/p3874r1
Reminder: be civil. Paper authors sometimes read these threads. Debate the ideas, not the people. If you can't distinguish between those two things, take a walk.
"Should C++ be a memory-safe language?" is the kind of paper title that guarantees 200 comments where exactly 4 people have read past the abstract.
sir this is a language with 45 years of undefined behavior and you're asking it to go on a diet
memory safety is when your program crashes deterministically instead of randomly
unironically yes. that's literally what they're proposing. defined behavior under all inputs. the horror.
every 6 months r/wg21 has the exact same memory safety thread and every time it devolves into the same argument. at least this time there's a paper number attached
see you in 6 months when they post R2 with the same conclusions and EWG says "needs more work"
I actually read the paper. The most consequential claim is buried in the definition section and most people here will miss it.
The paper argues that "memory-safe language" doesn't mean "a language that prevents memory bugs." It means "a language that systematically prevents all undefined behavior." They cite Ralf Jung in support of that framing.
This is a much stronger claim than what most people mean when they say "memory safe." Under this definition, signed integer overflow is a memory safety issue. Violating a `[[pre:]]` contract is a memory safety issue. Anything that triggers UB, regardless of whether it touches memory at all, disqualifies you.

The strategic question isn't whether this definition is correct - it's whether it's wise. By defining memory safety as "zero UB in the safe subset," you set a bar that's clear and defensible, but you also define away every incremental improvement as insufficient. Profiles? Not enough. Hardened containers? Not enough. Contracts? Not enough. Nothing short of a Rust-style safe/unsafe split qualifies.
I think the weird machine argument (any UB is exploitable, therefore all UB is equally dangerous from a security standpoint) is technically sound. But there's a gap between "technically sound" and "strategically useful for directing committee resources." You can agree that all UB is bad without agreeing that the right response is to build a parallel safe dialect of C++.
The paper knows this is the crux. The three options in "Should C++ become a memory-safe language?" are framed so that (1) is obviously insufficient and (3) is the paper's conclusion, making (2) the thing they're really arguing against. And the argument against (2) is essentially: incremental improvements can't reach the destination because the destination requires guarantees, not reductions.
That's the real debate. Not "is memory safety good" - obviously yes. The question is whether C++ should commit to guarantees when reductions might be enough to keep it viable.
The weird machine framing is the strongest part of the paper, though. Garen's point that "even one weird machine bug is a sufficient gadget to reprogram an entire process" isn't theoretical - it's what every CVE database shows. An integer overflow that never touches a pointer still creates the computational environment where an attacker can pivot. The distinction between "memory UB" and "other UB" is a C++ committee distinction, not an attacker's distinction.
I agree the scope is ambitious. But if you don't define the goal correctly, you end up optimizing for the wrong metric. Google's experience with hardened libc++ found thousands of bugs - great - and also proved that hardening alone doesn't change the classification. No government procurement officer is going to accept "we catch 90% of UB" when Rust offers "safe code can't trigger UB."
Fair. I'll concede the weird machine point - the security argument doesn't care about our taxonomies of UB.
But "no government procurement officer is going to accept 90% reduction" assumes the procurement decision is binary. In practice it's a risk assessment. If C++ with profiles and hardening gets you from "70% of CVEs are memory bugs" to "5% are," that changes the procurement calculus even without the "memory-safe" label. The question is whether the label itself has enough institutional power to justify the engineering cost of a safe subset.
I suspect the answer is "yes, because the label is what CISA cares about." But we should be honest that we're partly doing language design to satisfy a checklist.
wait, is signed integer overflow a memory safety issue now?
According to this paper's definition, yes. If it triggers UB, it's in scope. That's... ambitious. It also means the safe subset can't have signed integer overflow, which means either wrapping semantics, trapping, or making signed arithmetic checked by default. Each of those has performance implications.
Rust went with wrapping-on-overflow-in-release, panic-on-overflow-in-debug for the default arithmetic. That alone was years of debate.
the fact that a paper titled "should C++ be a memory-safe language" has a Rust Foundation employee as first author tells you everything you need to know about where C++ is right now
Having someone who deeply understands Rust on the author list is a feature, not a bug. The paper is about what "memory safe" means in the context where industry uses that term. Who better than someone working on Rust/C++ interop to know how the term is actually used?
Also, the co-author list includes Timur Doumler (SG21 co-chair, decades of C++), Pablo Halpern (allocator guy, WG21 forever), Jeff Garland (Boost.Date_Time, been around since before most of us were born), and Jonathan Müller (think-cell, foonathan, SG9 chair). This isn't a Rust paper wearing a C++ hat.
I wasn't attacking the authors. I'm saying the fact that C++ needs to import Rust expertise to figure out its own safety story says something about how far behind the curve C++ is.
or it says that cross-pollination between language communities is healthy and normal? the acknowledgements section literally thanks Mara Bos and Ralf Jung. the Rust folks are helping because they genuinely care about systems programming getting safer, not because they're running an infiltration campaign
"just use Rust" sure let me just rewrite the 30 million lines of avionics software that's been certified for 15 years
nobody said rewrite. google and microsoft are writing new components in Rust. the paper literally says "A memory-safe C++ is about new code." you're agreeing with it.
The paper is well-written and the definition is defensible, but there's a structural problem: it asks EWG to commit to crossing the river without showing them the bridge.
The three options are:
1. "Already safe enough" - the paper dismisses this
2. "Incrementally safe" - the paper argues this is insufficient
3. "Become memory-safe" - the paper's conclusion
But options (2) and (3) aren't mutually exclusive. You can pursue incremental improvements and explore a safe subset. The paper frames it as a choice to create urgency, but in practice the committee is already doing (2) and the question is whether to also do (3). That's a resource allocation question, not a philosophical one.
The SG23 polls tell the story. 13-5-5-2-0 for "encourage design work" - that's comfortable. 10-4-8-2-1 for "forward to EWG" - that's a lot of neutrals. The neutrals are saying "I'm not against this, but I want to see more before I commit."
You can't design the bridge without first agreeing you want to cross the river. That's literally what this paper is trying to establish.
P3390 showed a bridge - Circle's borrow-checking approach applied to C++. It got bogged down in SG23 partly because people kept asking "but do we even want this?" The design discussion can't proceed until the directional question is settled. This paper exists because the directional question was never answered.
Asking for a complete design before committing to a direction is a recipe for nothing ever happening. Which, to be fair, is a well-tested strategy in WG21.
Fair point about P3390. But I'd argue the SG23 experience showed something different: people weren't asking "do we want this?" - they were asking "is this particular bridge the right one?" The Circle approach imports Rust's borrow checker wholesale, with all the ergonomic constraints that come with it. Some committee members want to explore alternatives (ARC like Swift, ownership types, region-based memory management).
The 10-4-8-2-1 poll reflects that uncertainty. "Commit to becoming memory-safe" locks in a destination. The neutrals want to know: are there destinations with a cheaper path? What if the only viable bridge is borrow checking, and borrow checking is too expensive to retrofit onto C++?
The neutrals are exactly the people this paper is trying to convince. But here's the thing - you can't simultaneously demand design exploration and refuse to fund it. "Encourage design work" got 13-5. People want exploration. The question this paper asks is: exploration toward what goal?
If the answer is "explore whether a safe subset is feasible" - sure, but that's already happening with Circle. If the answer is "explore making C++ memory-safe" - that's the commitment this paper asks for, and it enables actual design work with a clear acceptance criterion.
Without the commitment, every design paper will face the same "but do we want this?" objection that P3390 faced. And we'll be having this same thread in 2028.
I don't disagree with the procedural argument. "Pick a direction or nothing moves" is real. My concern is that "C++ will become a memory-safe language" as a committee position creates expectations that may not be meetable on any reasonable timeline. If EWG says yes and then it takes 15 years to deliver, that's worse for C++'s reputation than never making the promise.
I'd vote for "explore this direction with the intent of becoming memory-safe" but not "we will become memory-safe." The paper arguably conflates the two.
Perspective from someone whose code runs on hardware with 256KB of flash and 64KB of RAM.
The paper mentions constexpr as an existing UB-free subset. That's true and it's great for what it does. But the paper also says "a subset that does not allow pointer dereference or arithmetic would not be useful." I agree - and that's exactly the tension. The features that make C++ useful for embedded are the features that introduce UB.
My concern with the subset-of-superset approach: if the safe subset can't express the low-level operations I need without escaping to unsafe, then most of my embedded code will be unsafe C++ with a safe wrapper. That's not a safe codebase - it's an unsafe codebase with a marketing label.
Rust has this same problem in embedded. Look at any `no_std` Rust crate and count the `unsafe` blocks. The ratio is much higher than application-level Rust.

This. I can't use half the standard library already because it allocates. Now they want me to use a different half that's safe? I'll end up with an intersection of "no-alloc" and "safe" that's approximately `constexpr int add(int a, int b)`.

The paper explicitly says the safe subset is about new code and is complementary to profiles. Your existing embedded code isn't going anywhere. The question is whether new embedded projects would benefit from a safe subset with well-defined escape hatches.

Also, the Rust embedded comparison cuts both ways. Yes, `no_std` code has more unsafe. But the bugs in that unsafe code are dramatically easier to find because the unsafe blocks are syntactically marked and auditable. That's not nothing.

I know. My concern isn't about existing code. It's about whether the safe subset will be expressive enough for the kind of new code I write. If every register access, every DMA transfer, every interrupt handler requires `unsafe`, then the safe subset is decorative for my use case.

The auditability point is fair though. I'd take "same code but unsafe blocks are marked" over what we have today.
The standard library section is where the paper's optimism bumps into reality. The paper claims the standard library can "soundly encapsulate" its internal unsafe operations behind safe APIs.

That claim is doing an enormous amount of work. Let's enumerate what "soundly encapsulate" actually means for some common types:
- `std::vector` - iterators are the problem. The paper notes that "the requirements that begin and end iterators reference the same container cannot be easily enforced for syntactically separate entities." This means a safe vector needs range-based iteration at minimum. That's a different API surface.
- `std::string_view` - the entire type is a dangling reference waiting to happen. A safe string_view needs lifetime tracking, which is the hardest unsolved problem in the paper.
- `std::optional<T&>` - same lifetime issue.
- Anything touching allocators - allocator-aware types store references to allocators that can outlive them. Sound encapsulation requires tracking allocator lifetimes.

The paper mentions `std::variant` as mostly declarable as safe. Sure. Now try `iostream`. Or `regex`. Or anything with locale dependencies.

I'm not saying it's impossible. Rust spent a decade building their standard library with safety in mind from day one. We'd be retrofitting safety onto 30 years of API surface. The effort is real and the paper should acknowledge its scale more honestly.
Counter: you don't need to make all of the stdlib safe to be useful. You need: vector (with range iteration), string (owning), optional, variant, tuple, span (with lifetime tracking), smart pointers, and the algorithms that operate on ranges. That covers 80% of what application developers actually use.
Nobody's asking for safe `iostream`. Let it live in unsafe C++ forever. It deserves its fate.

Fair, but "vector with range iteration" and "string (owning)" and "span with lifetime tracking" each independently require the lifetime safety problem to be solved first. The safe stdlib bootstraps on the hardest unsolved language problem. You can't ship one without the other.
wait, people still use iostream?
I think the paper's framing of the three options is misleading. It presents "incrementally become safe enough" and "become a memory-safe language" as competing strategies. They're not. One is a subset of the other.
P3081 (safety profiles) and P3589 (profiles framework) give us something now. They catch real bugs in real codebases today. Google's experience with hardened libc++ proved that library hardening alone eliminates a significant fraction of exploitable bugs in production.
The paper's all-or-nothing framing - either you guarantee zero UB or you're not memory-safe - ignores the practical impact of reducing vulnerability surface by 70-90%. That reduction saves real money and real lives while the committee figures out the long-term story.
I'm not against a safe subset eventually. I'm against defining "memory safe" in a way that makes everything we're doing right now sound worthless.
The paper explicitly says profiles and hardening should continue. Literally: "existing work to improve language- and functional-safety should continue." It's not "all or nothing." It's "do both."
The distinction the paper draws isn't "profiles are worthless" - it's "profiles don't make you a memory-safe language." And they're right. The term "memory-safe language" has a meaning in the industry conversation that C++ is losing. CISA doesn't list "languages that are sort of safe if you use the right compiler flags." They list languages with systematic guarantees.
Profiles give you "we tried our best to catch bugs." A safe subset gives you "this code cannot exhibit UB." Those are fundamentally different claims. One is engineering diligence. The other is a mathematical property.
"Mathematical property" - but only within the safe subset. The program as a whole still depends on unsafe code being correct. The paper's own appendix explains that soundness depends on "thoughtful design and abstraction boundaries" in the unsafe part. So the guarantee is: "this code cannot exhibit UB, assuming the unsafe code it transitively depends on was written correctly by humans."
That's a weaker guarantee than it sounds. It shifts the burden from "all code might have UB" to "only the unsafe code might have UB," which is genuinely useful for auditing. But it's not the iron-clad mathematical property people imagine when they hear "memory-safe language."
It's the same guarantee Rust provides. And Rust is classified as memory-safe. The paper addresses exactly this point - the guarantee is local, not global. The value is that the unsafe surface area is small, syntactically explicit, and auditable. That's not nothing - it's the entire reason Rust has a dramatically lower CVE rate than C++.
You're essentially arguing "the guarantee isn't perfect, so why bother." But "dramatically reduced and auditable attack surface" is exactly what the recommendations are asking for. Perfection was never the bar.
I'm not arguing "why bother." I'm arguing that the cost is enormous and the paper understates it. Adding a safe subset means carrying two mental models of C++ forever. Every feature, every library, every teaching resource needs to address both modes. The committee already can't keep up with the language's complexity.
I'd rather have one good language with excellent tooling than two halves of a Frankenstein that nobody fully understands.
Edit: to be clear, I'm voting "explore the direction" if this comes to EWG. I just want the exploration to happen with eyes open about the costs.
I work on a major C++ compiler frontend. Some reality on the implementation side.
The paper references Circle as existence proof that a safe subset can be added to C++. Circle is impressive work - Sean Baxter has built a functional borrow checker for C++ that runs on Compiler Explorer today. But Circle is one person's research compiler. The gap between "one person implemented this" and "three major compilers agree on the semantics and ship a conforming implementation" is measured in years and tens of millions of dollars.
Borrow checking in Rust depends on MIR - a mid-level IR that preserves control flow information that's been optimized away by the time you hit LLVM IR. Adding an equivalent pass to GCC, Clang, and MSVC means adding a new compilation phase that doesn't exist in any of them. It's not a flag you flip. It's a new layer of the compiler.
I'm not saying don't do it. I'm saying the paper should level with people about what "some significant new feature will be required" actually means in implementation terms. It means 3-5 years of compiler work after the committee agrees on a design. And that's the optimistic timeline.
Sean Baxter has been presenting this work to the committee for years. P3390 is a thorough paper with a working implementation. At least this paper acknowledges it instead of pretending the design space is unexplored.
But you're right that going from "Circle can do this" to "the standard requires this" is a different problem entirely. Modules are still barely shipping and they're simpler than a borrow checker.
Exactly. And Circle's approach isn't the only possible one. The paper mentions Swift's ARC as an alternative. There might be C++-specific approaches nobody's explored yet - ownership types without full borrow checking, escape analysis, region-based approaches. The paper wisely doesn't commit to a design, but the committee will eventually need to evaluate multiple options, each with different implementation costs.
The modules comparison is apt and terrifying. C++ modules were "done" in 2020 and we're still arguing about build system integration in 2026.
[removed by moderator]
what did they say?
something about the committee being captured by Rust evangelists and how accepting Rust's definition of safety is "ideological surrender." the usual.
Rule 3. Take a breath.
I love how the paper says "This proposal does not seek to make the perfect the enemy of the good" and then spends 5 pages arguing that every existing safety improvement is insufficient because it doesn't achieve perfection
it's not contradictory. they're saying profiles are good AND insufficient for the specific goal of "memory-safe language" classification. complementary, not competing. it's literally in the text: "existing work to improve language- and functional-safety should continue."
potato potato. the framing still positions the existing work as "option 2" and then argues for option 3. the implication is clear even if the words are diplomatic.
Meanwhile CISA: "please use memory-safe languages"
C++ committee: "first we must define what you mean by that. we'll get back to you in 2029."
You joke but CISA's definition is also vague. The paper literally quotes the ACM: "there remains no shared, technology-neutral terminology or framework with which to specify memory-safety requirements." The committee trying to nail down what the term actually means is doing useful work.
CISA's definition is "use Go, Rust, Java, or C#." it's not that deep. the fact that the academic definition is fuzzy doesn't change the procurement reality.
At my company we've already been told new projects can't use C++ without a waiver from the CISO. The writing is on the wall regardless of what the committee decides. The question isn't "should C++ become memory-safe" - it's "can C++ become memory-safe fast enough to matter."
this is the real motivation. not technical merit, procurement requirements. the committee is doing language design by compliance checkbox. and honestly? if that's what it takes to keep C++ relevant, fine. but let's be honest about it.
CISA literally acknowledged that "there remains no shared, technology-neutral terminology." The paper quotes it. Did anyone in this thread read the paper?
We measured. Bounds checking on our hot path costs 2-3ns per access. On a path processing 10M messages/sec, that adds up to a rounding error - unless you're competing against someone who doesn't pay that cost, in which case it's the difference between filling an order and missing it.
Any "safe subset" that mandates runtime checks in contexts where we need raw speed is a non-starter for us. The paper says the safe subset would restrict operations that "necessarily risk UB for the sake of performance, such as unchecked array access." If I can't do unchecked array access in the safe subset, then my entire critical path is in unsafe C++.
Which is fine - the paper says unsafe C++ continues to exist. But then the value proposition for us is "write your business logic in safe C++ and your hot path in unsafe C++." We already do that with careful engineering practices. The safe subset doesn't change our life much.
That's fair for HFT. But HFT is maybe 0.1% of C++ codebases. The safe subset isn't designed for the extreme tail of the performance distribution. It's designed for the other 99.9% where people are writing application logic, network services, data processing - code where a 3ns bounds check is noise in a 3ms request.
The paper's appendix shows how `get()` wraps `get_unchecked()` in Rust. Your hot path would use the equivalent of `get_unchecked()` in an unsafe block. Everyone else uses `get()`. That's exactly the design.

Sure. My point is just that every performance-sensitive domain will have the same pattern: safe wrappers around unsafe cores. Gaming, embedded, HFT, database engines, compiler backends. The "unsafe for perf" escape hatch will be much more common than "unsafe because legacy."

can we please just get networking in the standard before I retire. we're out here arguing about whether C++ should have a borrow checker while `std::socket` doesn't exist.

bold of you to assume you'll live long enough to see either
Perspective from someone who teaches C++ to ~200 undergrads a year.
The biggest win from a safe subset isn't the safety guarantees themselves - it's the error messages. Right now, my students write code with dangling references and the compiler says nothing. The program works on their machine, crashes on the grading server, and they spend 8 hours debugging what the compiler could have caught in 0.2 seconds.
If there's a safe subset where the compiler rejects dangling references at compile time, that's a pedagogical revolution. Not because students will write "safe C++" in industry - they might - but because compiler-enforced rules teach people what correct code looks like. Rust's borrow checker is famously frustrating for beginners, but every Rust programmer I know says it made them a better systems programmer. That's the real value.
The paper says "A memory-safe C++ is about new code." From a teaching perspective, all student code is new code.
can confirm. debugging UB at 2am is my entire semester. last week I had a dangling reference that only segfaulted when compiled with -O2. the TA said "that's just how C++ works." I am begging for the compiler to yell at me.
skill issue
my brother in christ the compiler didn't even warn me
laughs in compile times
yes let's add borrow checking to C++. I'm sure that won't add another 3x to my already 45-minute clean build
borrow checking is actually fast - it's a single pass over a mid-level IR. in Rust it's typically <5% of total compile time. the thing that kills your build is template instantiation, and borrow checking doesn't make that worse.
your 45-minute build is a build system problem, not a language problem.
tell that to my CI budget
For people who don't read poll results for fun:
The revision history tells the story. Three polls in SG23 at Kona:
1. "Encourage design work in line with P3874" - 13-5-5-2-0 (consensus for)
2. "Forward to EWG" - 10-4-8-2-1 (consensus for, but eight neutrals)
3. "Forward to EWG as part of a Standing Document" - 18-4-2-0-1 (strong consensus)
Poll 1 says "most people want this explored." Poll 2 says "many people aren't ready to send it to EWG yet." Poll 3 says "almost everyone wants this in a standing document."
The gap between poll 1 and poll 2 is the interesting part. People support exploration but are cautious about escalation. That's a committee saying "we like the direction but we're nervous about the commitment." Which is exactly the tension this thread is arguing about.
Also notable: poll 3 getting 18 SF is remarkable. "Incorporate into a standing document" means "make this part of the committee's official direction." That's a strong signal, even with the caveats.
great, another paper that will take 10 years to get through EWG. by then Rust will have edition 2033 and we'll still be debating whether undefined behavior means undefined behavior
reading this paper and thread makes me think C++ is going through its midlife crisis. buying a sports car named "Safe Subset" won't make you young again. but it might keep you out of the nursing home.
sir C++ is 45 years old, this is well past midlife. this is the "I should really update my will" crisis.
The section on data race safety is surprisingly optimistic and I think rightly so. The paper notes that Rust achieves thread safety through `Send` and `Sync` traits - not language-level constructs but unsafe traits that enforce synchronization requirements through the type system. Circle adopts the same approach.

C++ already has the building blocks: `std::mutex`, `std::atomic`, `std::shared_mutex`. What's missing is the type-level enforcement that prevents you from accessing shared data without holding the lock. That's a concepts/traits problem, not a borrow checker problem.

The hard part is making the compiler enforce that you can't access the inner value except through a locking accessor (call it `with_lock`). That requires some notion of "this field is only accessible through this API" - which is basically what Rust's ownership model provides.

Still, thread safety might be the easier half of the memory safety problem. Lifetimes are the hard part.
memory safety? in MY undefined behavior? it's more likely than you think
Unpopular opinion: the paper is right that profiles aren't enough, but the committee will never reach consensus for a safe subset because it would require admitting that 40 years of C++ design philosophy was wrong about UB being an acceptable trade-off for performance. The entire language is built on the premise that the programmer is responsible for correctness and the compiler is responsible for speed. A safe subset inverts that contract. Half the committee has built careers on the old contract. They'll vote it down not because the technical argument is wrong, but because accepting it means accepting that the thing they built was flawed from the start.
This paper will get polite nods in EWG, a "needs more work" result, and die the slow death of a thousand revisions. The committee is structurally incapable of this kind of change. They'll prove me right in Sofia.
I disagree with the cynicism but not the analysis. The committee can change - ranges, concepts, coroutines, and modules all represent fundamental shifts in how C++ is written. The safety question is harder because it touches identity rather than just features. But the external pressure is real and the committee members who vote are mostly the ones who ship code to production. They know what's at stake.
The appendix has a nice example of sound encapsulation that I think a lot of people are missing. The Rust `get()`/`get_unchecked()` pattern is directly translatable to C++.

The point is: `get()` is safe regardless of inputs. There is no combination of arguments that triggers UB. That's the difference between "the programmer must use this correctly" and "the API prevents misuse." We already write code like this in C++. The paper is saying: let the compiler enforce that safe code only calls safe APIs.

godbolt link if you want to play with it.

Nitpick: `optional<T&>` isn't in the standard yet (P2988 is still in flight). Which is a nice microcosm of the whole problem - we can't even return optional references yet and we're talking about a safe subset.

Schrödinger's C++: simultaneously dying because it's not memory safe and immortal because nothing can replace it
Has anyone thought about how a safe subset interacts with ABI? If we're adding new types, new calling conventions for safe references, new function coloring...
ABI is an implementation concern. The paper is about language semantics. Safe references wouldn't necessarily need different ABI - a borrow-checked reference could be the same as `T&` at the machine level, just with additional compile-time constraints.

tell that to the people who can't upgrade their compiler because their distro won't break ABI. "same at the machine level" doesn't matter if the mangling changes.
I've been thinking about this more and I want to revise my earlier take. I said the question is whether "guarantees vs reductions" matters. I think the real question is about composability.
With profiles and hardening, safety is a property of a build configuration. Turn on the right flags, link the right libraries, use the right analysis tools, and you get safety-ish. But that safety doesn't compose. Library A with profiles + library B with profiles doesn't give you a program with profiles-level safety, because the boundaries between them aren't checked.
With a safe subset, safety is a property of the code itself. Safe function A calling safe function B gives you a safe call chain, checked by the compiler, regardless of build flags. That composes. And composability is the thing that scales.
I still think the implementation cost is enormous. But the composability argument is what makes the paper's position stronger than "just use sanitizers."
Edit: this is what the paper means by "guarantees depend upon provable properties and automatic enforcement." Profiles are enforcement without provability. The safe subset is both.
[removed by moderator]
report and move on
I came from r/all. I have no idea what any of this means but it sounds like you guys are having a 300-comment argument about whether your programming language should stop crashing
that is a devastatingly accurate summary
The paper says constexpr is already a UB-free subset. So technically C++ is already a memory-safe language. You just have to run everything at compile time. Problem solved. Ship it.
my username has been waiting for this moment
The appendix is the most important part of the paper and almost nobody will read it. It explains something that both the "Rust is perfectly safe" and "Rust's safety is a lie" crowds get wrong.
The example with
get_uncheckedis key. Safe Rust code can cause UB through a chain of causality - you set an index to an invalid value in safe code, then unsafe code uses that index without checking. The UB happens in unsafe, but the "bug" is in safe code. The paper acknowledges this honestly, which is more than most safety advocates do.The practical value isn't "safe code can't cause bugs." It's "when a bug causes UB, there's an unsafe block in the call chain you can point to." That's auditability, not perfection. And auditability is what makes Rust's approach work at scale - you know where to look.
The same would be true for a C++ safe subset. The guarantee isn't "your program is correct." It's "if your program has UB, you know which escape hatches to audit." That's a much more honest pitch and I wish the paper led with it.
bold move, publishing a paper that's basically "hey C++ committee, I know you've been doing this for 45 years but have you considered... doing it differently"