Authors: Jon Bauman, Timur Doumler, Nevin Liber, Ryan McDougall, Pablo Halpern, Jeff Garland, Jonathan Müller
Document: P3874R1
Date: 2026-02-23
Target: EWG
Link: wg21.link/p3874r1
The big one. Seven authors - including Timur Doumler, Pablo Halpern, and Nevin Liber - are asking the committee to answer a question it has been dancing around since the CISA reports dropped: should C++ actually commit to becoming a memory-safe language, or keep patching around the edges?
The paper argues that "memory-safe language" means having a syntactically explicit, compiler-enforced subset free of undefined behavior - not just fewer memory bugs, but zero UB in the safe subset. By this definition, Rust qualifies because of its safe/unsafe split. C++ does not. The paper proposes a subset-of-superset strategy: add new features that create a usable UB-free subset, while leaving existing unsafe code untouched. The key claim: profiles, contracts, hardening, and sanitizers are necessary but insufficient. To actually count as "memory safe" under government and industry definitions, C++ needs something more fundamental.
They point to lifetime safety as the primary unsolved problem, reference Circle and Rust as existence proofs, and note that a significant portion of the standard library may need new safe interfaces. SG23 polled favorably in Kona (18-2 with 5 neutral on encouraging the design direction). Now it goes to EWG, where the real fight happens.
Reminder: be civil. The paper authors sometimes read these threads. Comments that violate Rule 2 will be removed.
*laughs in compile times*
this is why we can't have nice things, exhibit P3874
exhibit number matches the paper number, nice
at least it'll be easier to compile when we're all writing safe subset C++ with 47 lifetime annotations per function
seven authors to ask a question the internet has been screaming about for three years
to be fair the internet screams about everything. the committee asking is new.
There's something subtle happening in this paper that I think most people skimming the abstract will miss.
The entire argument hinges on the definition of "memory-safe language" in section 2. The paper defines it as a language with a subset that is "systematically free of undefined behavior" - not just buffer overflows and use-after-free, but ALL undefined behavior. This is a significantly broader definition than what CISA, Google, and the ACM are actually talking about when they recommend "memory-safe languages." Those recommendations are about the specific class of bugs that lead to exploitable vulnerabilities - spatial and temporal memory errors.
By choosing the broadest possible definition, the paper makes the incrementalist position (profiles + contracts + hardening) look inadequate by definition. If your standard for "memory safe" is "zero UB in the safe subset," then of course no amount of profiling or hardening gets you there. But that's a definitional choice, not an empirical finding.
The paper actually acknowledges this tension in the "weird machine" section - Ralf Jung's argument that ALL UB creates weird machines is compelling but also contested. Integer overflow UB and unsequenced modification UB are categorically different from buffer overflow UB in terms of exploitability.
This is a well-crafted direction paper, but the definition IS the argument. If you accept a narrower definition of memory safety - one focused on the specific bug classes that actually cause CVEs - the committee's current trajectory (profiles + P3100) might be adequate. The paper needs you to accept the broad definition before the conclusion follows.
Read section 2 carefully. The rest is downstream.
Edit: since people are misreading this - I'm not saying the paper is wrong to try to define the term. I'm saying the definition does more work than it appears to, and if EWG pushes back on the definition, the argument loses its foundation.
Hard disagree on the framing. The paper's definition isn't "choosing the broadest possible" - it's aligning with how the term is actually used in practice.
Ralf Jung isn't some random blogger. He literally built the formal model for Rust's safety guarantees (RustBelt). When he says "it doesn't matter why your program has UB, what matters is that a program with UB defies the basic abstractions of the language itself," that's not a rhetorical flourish - it's a statement about the computational properties of undefined behavior.
A program with ANY form of UB can be exploited through a weird machine. The exploitability distinction you're drawing between integer overflow and buffer overflow is an empirical observation about current exploit techniques, not a fundamental property of the UB itself.
If your "memory safe" subset still permits UB that creates weird machines, it's not safe - it's safer. There's a meaningful difference, and that's exactly what the government recommendations are responding to.
I take your point about Ralf Jung - he's among the best in the field. But even within PL theory, there isn't consensus that "memory safety" means "UB-free." RustBelt defines memory safety in terms of specific memory operations (reads, writes, deallocations), not all UB. The Rust Reference lists "Behavior Considered Undefined" separately from its memory safety guarantees. These are overlapping but not identical concepts.
The paper cites Jung's weird machine argument but uses it to collapse the distinction between "memory safety" and "UB-freedom." That's a rhetorical move, not a given. If the committee accepts the collapsed definition, the conclusion follows. If they don't, the argument needs more work.
I think the paper's GOAL is correct - C++ should pursue systematic guarantees. I just think the definitional move is doing more heavy lifting than the paper admits.
OK, fair point on the RustBelt distinction. But consider the practical consequence of your narrower definition: you end up with a "memory safe" subset that still has signed integer overflow UB, unsequenced evaluation UB, and a dozen other categories from P3100. The government says "use a memory safe language," your company ships your narrow-definition-safe C++, and then a CVE drops from signed overflow creating a weird machine. Was the subset "memory safe"? Technically yes under your definition. Does that help?
I think the paper is making a pragmatic argument: if you're going to build the machinery for a safe subset, build one that actually eliminates the weird machine problem entirely. Half-measures cost almost as much and deliver less.
Jumping in because I think you're both right but for different reasons.
The paper conflates two things: (1) the definition of "memory safety" as a formal language property, and (2) the goal C++ should pursue. You can accept the narrow definition of memory safety AND still argue for a UB-free subset. The paper doesn't need to redefine "memory safety" to make its case - it just needs to argue that a UB-free subset is desirable for security, which is independently true.
The conflation creates a vulnerability in the paper's argument: EWG members who know the PL literature will object to the definition, and that objection gets entangled with the goal. It would be stronger to separate them: "Here's what memory safety technically means. Here's why we should go further than that."
That's a good reframing. The definition is too broad in a technical sense - I'll concede that. But separating the definition from the goal doesn't change what the paper is actually asking for. The goal is a UB-free safe subset of C++, whether you call that "memory safety" or something else. Presentation issue, not substance.
imagine writing a paper that says "UB bad" and getting it past SG23
you're proving my point about misreading the abstract
This is the best comment in the thread. The definition IS the argument. Saving this for when EWG discussion starts.
I spent two years writing Rust professionally before coming back to C++ for a game engine role. Here's what the paper gets right and wrong, from someone who's lived in both worlds:
Right: Lifetime safety IS the primary challenge. The borrow checker is the reason Rust works, and it's the reason Rust is painful. You can't have one without the other. The paper correctly identifies that bounds checking is straightforward and lifetime checking is the hard part.
Right: The subset-of-superset strategy is the only viable path. You can't break existing C++ code. Period. A new language mode or subset that opts in is the way.
Wrong (or at least incomplete): The paper underestimates how much the borrow checker constrains API design. In Rust, entire design patterns exist BECAUSE of the borrow checker - Arc<Mutex<T>> is the canonical example. You don't just bolt a borrow checker onto existing C++ patterns. You need new patterns. The paper mentions Circle and safecpp.org but doesn't engage with how much C++ API design would need to change.
Missing: No mention of the interop story. Half the value of "safe C++" is calling existing unsafe C++ from the safe subset. Rust's FFI boundary with C is a constant pain point. C++ safe-to-unsafe interop would be the defining challenge and the paper doesn't discuss it at all.
Overall: I agree with the direction but the committee needs to understand this is a 10-year, all-hands effort comparable to adding templates in the first place. The paper should say that explicitly instead of leaving it to EWG to figure out.
of course the Rust person supports making C++ more like Rust
This matches my experience exactly. We evaluated Rust at my employer and the interop cost was the dealbreaker. A safe C++ subset with seamless unsafe interop would have saved us the evaluation entirely.
great breakdown, thanks. the interop gap is a really good catch.
I work on a major C++ compiler frontend. Some perspective on the implementation side.
The paper references Rust's MIR (mid-level intermediate representation) as the mechanism that makes borrow checking work. What it doesn't mention is that Rust's MIR was designed for borrow checking from the start. C++ has no equivalent IR at the right abstraction level. Clang's AST is too high-level and LLVM IR is too low-level - both have had lifetime information stripped or never represented.
Building the equivalent of MIR for C++ means creating a new compiler pass that preserves ownership and lifetime information through templates, overload resolution, implicit conversions, and NRVO. The complexity goes well beyond anything Rust had to do: Rust's type system was designed to make lifetime information recoverable, while C++ was designed for maximum flexibility, which is exactly the wrong property when you want to prove lifetime safety.
This is not a "can't be done" - Circle proves it can. But Circle started with the advantage of a single-person codebase where every compiler pass could be modified simultaneously. Doing this in GCC, Clang, and MSVC independently, while maintaining backward compatibility, is a different order of magnitude.
I estimate 5-8 years of compiler work per vendor after the design is finalized. That's the real timeline the committee should be discussing.
Edit: several people pointed out that Clang's CFG analysis is closer to what I'm describing than I implied. Fair - the infrastructure isn't zero. But the gap between "can detect some lifetime issues" and "can prove lifetime safety" is still enormous.
Can you elaborate on the MIR complexity? Specifically - does the template instantiation model make this fundamentally harder or just more work? My understanding was that by the time you're doing lifetime analysis, templates are already instantiated.
Both. Templates are instantiated, yes, but the borrow checker needs to verify the template definition generically - not just each instantiation. Otherwise you can't give library authors errors at definition time. Rust's generics with trait bounds allow this. C++ templates with concepts are close but not quite there - concepts constrain the interface but don't carry lifetime information.
The alternative is checking every instantiation, which means (a) you miss bugs in uninstantiated templates and (b) compile time scaling is multiplicative with the number of instantiations. Neither is acceptable for a safety guarantee.
so basically this is a decade of compiler work minimum?
if it takes a decade for GCC/Clang/MSVC, embedded vendors will add another decade after that. we'll see this in production in the 2040s.
just. use. Rust.
seriously, this paper is 15 pages of the committee slowly discovering what Mozilla figured out in 2010. you want a memory-safe systems language? it exists. it's called Rust. you don't need to rebuild C++ from the inside to get there.
wow what an original take, nobody has ever thought of that. you should write a paper.
some of us have 50 million lines of C++ and can't "just use" anything. the paper literally addresses this in the section on new code vs existing code.
that's a business problem not a language problem. the committee shouldn't redesign the language because organizations refuse to invest in migration.
spoken like someone who's never had to maintain a codebase older than their career
also love the implication that the C++ committee should just... give up? disband? what exactly is the alternative you're proposing?
Locking this sub-thread. You know the rules.
Embedded perspective, since this paper won't engage with it: we can't use exceptions. We can't use RTTI. We can't use dynamic allocation in most contexts. We barely got constexpr working across our toolchains last year.
Now a paper with seven authors is proposing a language subset that requires (at minimum) new lifetime annotation syntax, a new IR in the compiler, and potentially a new standard library. The paper says this is about "new code" but our new code still runs on STM32s with 256K flash. Every feature that makes the compiler heavier makes our builds slower and our binaries bigger.
I'm not against safety. I'm against the committee designing safety features with desktop/server C++ as the only use case. If the safe subset requires Arc<Mutex<T>>-style patterns, we can't use it. If it requires runtime bounds checks, we're out. The paper should have a section on embedded constraints. It doesn't even mention the word "embedded."
finally someone talking about the real-world constraints. the entire safety discourse treats C++ as if it only runs on x86 with 64GB RAM and an SSD.
this is the comment the paper authors need to read. if the safe subset is "desktop C++ with lifetime annotations," you've lost half the C++ community on day one.
Some context on the committee dynamics, since the paper doesn't discuss them and they matter more than the technical arguments:
The SG23 polls look strong - 18-2 with 5 neutral on encouraging the design direction. But SG23 is a study group that self-selects for people who care about safety. EWG is a different animal. EWG includes the people who killed P2771 ("Towards memory safety in C++") and the people who voted P3390 ("Safe C++") below profiles in the prioritization poll (19-9 with 11 for both).
The paper is carefully framed as "should we pursue this goal" rather than "here is the design." That's smart - it avoids the P3390 problem of asking the committee to evaluate a specific design before agreeing on the goal. But EWG may see it as asking for a blank check: "agree to pursue memory safety, and we'll figure out the details later." The committee has been burned by that pattern before.
Watch the poll language carefully when this comes to EWG. "Do we want C++ to be memory-safe" will poll very differently from "Do we want to add borrow checking to C++."
the SG23 polls are encouraging but EWG has killed more ambitious papers than this. the profiles crowd has been building momentum for two years and they're not going to roll over because a direction paper says profiles are "insufficient."
great so we'll get this in C++38
The section on the standard library is the weakest part of the paper and I wish the authors had spent more time on it.
The paper's one sentence about "soundly encapsulating" the STL is doing approximately all of the heavy lifting in the standard library discussion. In practice, "soundly encapsulating" the STL means:
1. Every container's iterator model needs rethinking - begin/end as separate objects is fundamentally incompatible with borrow checking
2. std::string's small buffer optimization interacts with move semantics in ways that a borrow checker can't reason about without special handling
3. The allocator model (stateful allocators, fancy pointers, propagation traits) is a lifetime nightmare
The paper handwaves this as "significant work" but doesn't give any estimate of what that means in committee-years. The range-v3 to std::ranges effort took 6+ years and that was ADDING features, not redesigning existing ones.
I support the direction in principle. But I'm tired of direction papers that treat the standard library as a "we'll figure it out later" problem.
You're looking at this from inside the committee. From the outside, here's what's happening: my employer is evaluating whether new projects should use Rust instead of C++. Not because Rust is better for our domain - it isn't - but because the security team can't justify C++ to auditors anymore.
The library cost is real. But the cost of NOT doing this is losing an entire generation of projects. Every year the committee says "we'll incrementally improve safety," another major organization announces a Rust migration. Google, Microsoft, Amazon - these aren't small players.
The paper doesn't need to solve the library problem. It needs to get the committee to agree the goal is worth pursuing. The library will follow.
I've been involved in LEWG for 8 years. I've watched three major library features go from "the direction is obvious" to "oh god what about allocators" to "let's ship a subset and iterate." Ranges. Executors. Networking.
I'm not arguing against the goal. I'm arguing against the paper's rhetorical strategy of treating the hardest part as an afterthought. If you want EWG to say yes, give them an honest estimate of the cost. Otherwise you get consensus on the direction and then the whole thing stalls in LEWG for a decade because nobody planned for the hard parts.
Fair point on the executors comparison. But the paper explicitly says most existing interfaces that are already UB-free can "simply be declared as part of the memory-safe subset." It's not a full rewrite. The new interfaces are needed for the parts that can't be made safe by declaration - iterators being the main one.
The paper says "in at least some cases, wholly different APIs will be required." The iterator model touches everything. It's not "some cases" - it's the fundamental abstraction layer of the entire standard library.
But I take your broader point. Perfect is the enemy of good, and I'd rather have a partially safe standard library than none. I just want the authors to be honest about the timeline.
maybe if some of you actually contributed to the committee instead of complaining on reddit the library would be done already. armchair standards experts everywhere.
Rule 2. Take a breath.
[removed by moderator]
what did they say?
something about C++ being a dead language and everyone who writes it being irresponsible. the usual.
I teach C++ to undergrads. Here's what I keep thinking about when I read papers like this:
My students already struggle with move semantics. They struggle with the difference between const& and &&. They write auto x = std::move(y); /* uses y */ and don't understand why it compiles. Adding lifetime annotations to this picture is going to be brutal.
BUT - and this is the part I don't hear from the profiles crowd - if there's a safe subset that prevents those mistakes at compile time, my students actually learn FASTER because the compiler catches their errors before runtime. The borrow checker in Rust is painful for experienced devs but genuinely helpful for learners because it surfaces misconceptions immediately.
The paper should have engaged with the pedagogical dimension. A safe subset isn't just a security feature - it's a teaching tool. But only if the error messages are good. And C++ template error messages are... a known quantity.
as a recent grad, can confirm move semantics already broke me. the idea that there's a mode where the compiler just says "no you can't use that after moving it" sounds like a dream honestly.
exactly. and we want to ADD lifetime annotations on top? either the safe subset makes this simpler (great!) or more complex (terrible). the paper doesn't tell us which.
love how the paper opens by citing CISA and government recommendations. nothing like federal agencies telling programmers what languages to use.
the government telling us what languages to use, what could go wrong. maybe they should mandate COBOL while they're at it, since they seem to love it so much.
to be fair CISA didn't say "don't use C++." They said "prefer memory-safe languages for new projects." There's a meaningful distinction between a recommendation and a mandate, and the paper is responding to the recommendation, not panicking about regulation.
the NSA said the same thing. when the intelligence community agrees with academia, agrees with CISA, and agrees with the largest tech companies in the world, maybe we should listen instead of reflexively dismissing it as "the government telling us what to do."
the paper references Circle and safecpp.org as an existence proof but doesn't engage with the design at all. Sean Baxter literally implemented borrow checking for C++ and showed it works. the committee saw the demo. what more do they need?
Sean showed it works, and then SG23 prioritized profiles over Safe C++ 19-9. The committee looked at a working implementation and said "nah, we'd rather do something less ambitious." That's the actual history and it's worth remembering when reading this paper's optimism about committee direction.
Circle is impressive but it's one person's compiler, not a production toolchain. Moving from "demo in a research compiler" to "implemented in GCC, Clang, and MSVC with full backward compatibility" is where most ambitious C++ features go to die or take a decade.
that's what they said about clang itself, which started as one person's research compiler at Apple
The strongest section of the paper is the weird machine argument. For those who haven't read it, the short version:
This is genuinely important and I don't think the r/cpp crowd appreciates it. When your program hits UB, the abstract machine's semantics no longer apply. The compiler optimized assuming no UB. The hardware executes whatever the optimizer produced. The result is a computational environment that is Turing-complete in ways neither the programmer nor the compiler intended. THAT is why UB leads to security vulnerabilities - not because of the specific kind of UB, but because UB breaks the semantic contract between programmer and machine.
The paper cites Geoff Garen's C++Now talk on WebKit memory safety: "even one weird machine bug is a sufficient gadget to reprogram an entire process." That's not hyperbole. It's a description of how real exploits work.
this is exactly what I was getting at in my comment above. the weird machine argument is the paper's strongest tool but it proves MORE than the paper needs. you don't need to redefine "memory safety" to use the weird machine argument - you just need to argue that UB-freedom is independently desirable.
we can't even agree on how modules work and now we want borrow checking. the committee's eyes are bigger than its stomach.
Some historical perspective for the "this will take 10 years" crowd: yes, it will, and that's actually fine.
Concepts were first proposed for C++0x, dropped, redesigned as Concepts Lite, and finally shipped in C++20. That's roughly 15 years from proposal to standard. Modules were discussed informally since 2004, formally proposed around 2012, and shipped (barely) in C++20. Ranges went from range-v3 to C++20 in about 6 years. Coroutines: ~7 years from first proposal to C++20.
If the committee agrees on the GOAL in 2026 and the first design proposals appear by 2028, a safe subset could conceivably ship in C++35 or C++38. That's 9-12 years. By committee standards, that's actually efficient for something this fundamental.
The paper's job isn't to solve the timeline problem. It's to get the committee to agree the destination is worth walking toward. Everything else is details.
modules took 15 years and we still don't have a build system story for them. that's the real cautionary tale. you can ship a feature in the standard and still not have it be usable in practice for another 5 years after that.
to be fair, concepts eventually shipped and are genuinely useful. the timeline was bad but the result was good. maybe safety follows the same path.
The paper makes an interesting point that I haven't seen discussed much: constant evaluation is already a UB-free subset of C++.
This is true and genuinely important. The constexpr evaluator is a proof-of-concept for a UB-free subset of C++. If your code triggers UB during constant evaluation, it's a compile error. No weird machines, no exploitation, just a diagnostic.
The paper correctly notes this is insufficient for general programming - you can't do I/O, heap allocation (in C++20), or anything with side effects. But it shows C++ already has the concept of "the compiler rejects UB in this mode." The question is whether we can extend that model to runtime code. That's... exactly what a borrow checker does.
the paper literally addresses this and says it's insufficient. you're restating the paper's own argument.
I know. I'm elaborating on WHY it's important as a conceptual bridge. Most people in this thread haven't connected "constexpr catches UB" to "we could catch UB at runtime too with the right model." The paper states it; I'm explaining it.
can we please just get networking in the standard before we pursue memory safety nirvana
networking and safety are actually related. half the CVEs that CISA is worried about come from network-facing code with buffer overflows. if you get the safe subset first, the networking API that eventually ships won't be a vulnerability factory.
sure, but one of these might actually happen in my lifetime
R0 was called "Safety Strategy Requirements for C++." R1 is "Should C++ be a memory-safe language?" At this rate R2 will be "C++ and the Meaning of Life" and R3 will just be a picture of a borrow checker.
tbf this is only R1. some papers are on R15. you ain't seen nothing yet.
wait, the paper says memory safety means freedom from ALL undefined behavior? So signed integer overflow is a memory safety issue? That doesn't match any definition of memory safety I've ever seen.
see my top-level comment. The paper's definition of memory safety is broader than the conventional one. That broadening is load-bearing for the argument.
I know I already posted in the library sub-thread but I want to add a top-level perspective from the application developer side.
I don't care about the definition of "memory safety." I don't care about weird machines or formal PL theory. I care about whether my team can ship code that auditors will sign off on. Right now, they won't sign off on C++ for new security-sensitive components. That's not a theoretical concern - it's a business constraint that directly affects what language we choose.
If the committee says "C++ is pursuing memory safety," that signal alone changes the conversation in my organization. The paper doesn't even need to deliver the feature. It needs to deliver the commitment. The feature can take a decade. The signal needs to happen now.
I feel this, but from the opposite side. If the committee says "C++ is pursuing memory safety" and the design that emerges doesn't work for embedded, we're stuck with a language that's "memory safe" in the same way that C++20 modules are "ready" - technically in the standard, practically unusable for half the user base.
The paper is right. Profiles are theater. The committee should have started this in 2020 instead of wasting three years on P3589 and the profiles framework, which cannot and will never deliver the guarantees that CISA is asking for. Everyone in SG23 knows this. The profiles prioritization poll was a political outcome, not a technical one. If you think compile-time annotations without a borrow checker can deliver lifetime safety, you don't understand the problem, full stop.
committee: "should C++ be memory safe?"
C++ developers: "should C++ compile in under 10 minutes?"
committee: "anyway about the memory safety..."
Something nobody is discussing: the paper cites the CISA roadmap and the ACM position paper, but doesn't mention that the EU's Cyber Resilience Act takes effect in 2027. That's not a recommendation - it's regulation. Software vendors will need to demonstrate due diligence on security for products sold in the EU.
The committee can debate definitions all it wants, but the regulatory environment is moving faster than the standards process. If C++ can't credibly claim to be on a path to memory safety, vendors selling products with C++ components will face increasing compliance burden. That's the real urgency behind papers like this.
you're basically saying the government should decide what C++ looks like.
no, I'm saying the market will decide and the government is signaling where the market goes. the committee can choose to lead or to be led. the paper is arguing for leading.
I didn't read the paper but isn't this just saying "add a borrow checker to C++"? Hasn't that been tried and rejected?
no. the paper explicitly does not propose a specific design. it asks whether the committee should commit to the GOAL of a memory-safe subset. borrow checking is one possible mechanism, ARC is another. the paper leaves the design open. maybe try reading it.
my senior lead just sent this paper to our whole team with the subject line "the future of C++". I think he's been waiting for this paper for a while.
your senior lead is not wrong
you people keep adding features nobody asked for. give me C with classes and leave me alone.
the constexpr subset being UB-free is such a good point though. imagine if we could just expand that model to runtime code. "constexpr but also at runtime" is basically what constexpr has been evolving toward anyway - constexpr new, constexpr vector, etc. maybe the safe subset is just the limit of that progression.
just read this whole thread. the encounter between former_pl_researcher and safety_skeptic_42 is better analysis than most actual papers I've read. someone send this thread to EWG.
[deleted]
the entire C++ committee should be disbanded and replaced with the Rust core team. they clearly know more about language design than people who've been doing this for 40 years.
[removed by moderator]
one of the lead authors works at the Rust Foundation. I'm sure this will produce a totally unbiased assessment of whether C++ should become more like Rust.
Boost/Asio/Folly already solved most of these problems in userspace. The difference is nobody audits against "uses Boost.Asio" - they audit against "uses a memory-safe language." Standards matter for compliance even when libraries solve the engineering problem.
I just read 15 pages about whether C++ should be safe and 95 comments about whether the 15 pages are right. I need a drink.
the paper says "C++ stands at a crossroads." buddy, C++ has been standing at the crossroads since C++11. we're just really good at standing.
Edit: RIP my inbox
Edit2: to the person DMing me about Rust, please stop
Here's my prediction: EWG will poll favorably on "should C++ pursue memory safety" because nobody wants to be the person who voted against safety. Then nothing concrete will happen for 3 years because the actual design questions (borrow checker vs profiles vs something else) are where the real disagreements are, and this paper deliberately avoids them.
Agree with the direction. Pessimistic about the execution.