Author: Bjarne Stroustrup
Document: P3984R0
Date: 2026-02-22
Target: EWG, SG12, SG20, SG23
Link: wg21.link/p3984r0
Bjarne's latest on profiles - this time with actual detail on what type-safety and resource-safety profiles would look like in practice. Four target audiences tells you the scope.
The pitch: "subset a superset" of C++. Add safe abstractions (hardened containers, not_null, runtime checks where needed), then ban the raw features those abstractions replace (pointer arithmetic, unchecked subscripts, uninitialized access). Opt-in per TU. Result is still ISO C++, linkable with everything else.
The paper walks through each class of type violation - initialization, ranges, nullptr, and the hard one: invalidation (dangling pointers). For invalidation it proposes static analysis plus a new [[not_invalidating]] attribute for functions that don't invalidate their pointer arguments. Section 2.4 on copies and moves is... one sentence. Section 5 outlines a plan for progress toward standardization.
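The `push_back` invalidation case the paper leads with looks roughly like this in compilable form (my sketch, not the paper's code) — using `capacity()` to observe the reallocation instead of dereferencing a stale pointer, which would be exactly the UB the profile exists to prevent:

```cpp
#include <cassert>
#include <vector>

// Sketch of the invalidation the paper targets: push_back may reallocate,
// after which any earlier pointer or iterator into the vector dangles.
// We observe the reallocation via capacity() rather than by dereferencing
// a stale pointer, which would be undefined behavior.
bool push_back_may_invalidate(std::vector<int>& v, int value) {
    const std::size_t cap_before = v.capacity();
    v.push_back(value);
    return v.capacity() != cap_before;  // true => old pointers into v dangle
}
```

The analyzer's job is to prove, at each use of a pointer into `v`, that no potentially-reallocating call happened in between.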
Explicitly described as "aimed at understanding, not a standards text" - so no proposed wording yet. Profiles have been in flight since the Core Guidelines era. This paper is the philosophical foundation; P3081 is the one with concrete wording targeting C++26.
Reminder: paper authors sometimes read these threads. Be civil, be specific, critique the paper not the person. Rule 2 violations will be removed.
profiles R0. we've been doing profiles R0 for five years now. this is like groundhog day but the groundhog is a WG21 paper number.
at this rate C++ will be memory safe by 2050
optimistic
I remember when profiles was just a slide in a Core Guidelines talk at CppCon 2015. we were so young. so naive.
This is the sentence that should have been in the abstract. The entire paper's guarantee has an asterisk, and the asterisk says "unless the code is hard."
I've spent the last four years working on lifetime analysis tooling. The problem isn't the easy cases - yeah, you can catch the `push_back` invalidation example in the paper. That's freshman stuff. The problem is the exponential blowup when you have:

1. Conditional returns of pointers with different origins
2. Callbacks that store references into containers
3. Type-erased handles (`any`, `function`, `coroutine_handle`)
4. Placement new into `aligned_storage` or successor types

The paper acknowledges this ("fundamentally impossible to statically analyze") but then handwaves the boundary. What happens at the edge? The paper says reject the code. Rejection means refactoring. Refactoring means cost. Nobody's quantifying that cost.
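Pattern 1 alone is enough to see the problem — a minimal sketch (names are mine, not from the paper):

```cpp
#include <cassert>
#include <vector>

// Sketch of pattern 1: a pointer whose origin depends on a branch.
// On one path the result aliases the caller's fallback object; on the
// other, the vector's heap storage. A sound analyzer must carry both
// origins through every caller, and the origin set multiplies with
// each additional branch and call layer.
const int* first_or(const std::vector<int>& v, const int& fallback) {
    return v.empty() ? &fallback : &v.front();
}
```

After one call the analyzer already has to ask: is the result invalidated by `push_back` on `v`, by the fallback going out of scope, or both?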
Rust solved this by making the boundary explicit in the type system - lifetimes are part of the signature. Profiles tries to infer the boundary from existing C++ semantics, and the paper admits the inference has limits. The question isn't whether profiles can handle the vector example. The question is what percentage of real-world code falls into the "too complex" bucket.
I'd bet it's larger than anyone on the committee wants to hear.
This is the thing that kills me about profiles. The paper almost gets it right - the invalidation rules in 2.6 are genuinely clever for the cases they cover. But the "too complex" escape hatch is where the guarantee lives or dies, and the paper treats it as an implementation detail instead of a design parameter.
How big is the escape hatch? That's the question the paper should answer and doesn't.
I implement lifetime analysis tooling and the "too complex" boundary is narrower than people think. We handle 85-90% of real-world patterns in our static analyzer. The remaining 10-15% isn't "fundamentally impossible" - it's "would require inter-procedural analysis that's expensive but feasible."
The paper is conservative because it's aimed at compiler-integrated analysis, not standalone tools. Compilers have tighter time budgets. But Clang's lifetime safety analysis already handles a lot of the patterns people assume are intractable. The gap between "what a compiler can check in reasonable time" and "what's fundamentally impossible" is where the engineering happens.
85-90% is doing a lot of work in that sentence. What's in the other 10-15%? Because in my experience that's where the CVEs live. The easy patterns are the ones nobody gets wrong anyway.
Show me the escape hatch boundaries for type-erased callbacks, coroutine frames, and placement new. Those aren't exotic - they're the backbone of every async framework, every signal/slot system, every ECS. If profiles can't handle the patterns that modern C++ actually uses, the guarantee is theoretical.
Fair challenge. Type-erased callbacks: if the type erasure goes through `std::function` or `std::move_only_function`, the analyzer can track through the known interface. Custom type erasure: that's in the "requires unsafe block" bucket, yes. Placement new: same bucket. Coroutine frames: genuinely hard because the frame lifetime is opaque to the caller.

I'll concede coroutines are the weak point. But "profiles can't handle coroutines perfectly" isn't "profiles are useless." It's "profiles need to mark coroutine frame access as unsafe by default until better analysis exists."
We agree on the real question then: is 85% coverage with explicit escape hatches for the rest good enough to satisfy the NSA/CISA pressure, or does the industry need Rust-level guarantees? I genuinely don't know the answer. But the paper should be honest about where the boundary is instead of deferring it to "too complex."
I'm not reading all that but I trust you
just use Rust lol. this entire paper is 17 pages of coping with the fact that C++ can't do what Rust does by default.
there it is. 23 minutes. new record for this sub.
some of us have codebases written before you were born that generate actual revenue. "just use Rust" isn't a migration plan, it's a bumper sticker.
skill issue
Rule 2. We get it.
From section 1:
From section 2.6:
The paper argues against annotations in section 1 and introduces a new annotation in section 2.6. I realize these aren't technically the same thing - `[[not_invalidating]]` is "just an optimization" per the paper - but in practice, any non-trivial API surface will need this on dozens of functions to avoid false-positive rejections.

The annotation burden for the standard library alone is non-trivial. `vector::size()`, `vector::empty()`, `vector::capacity()`, `map::find()`, every const accessor on every container - all need `[[not_invalidating]]` or the analyzer assumes the worst. And getting it wrong means either false rejections (annotation missing) or false safety (annotation wrong on a function that does invalidate).

The paper says the annotation "can be verified when the function definition is compiled" - but the implementation cost of re-analyzing every annotated function under profile rules is itself non-trivial. This is the kind of detail that separates a 17-page vision document from a specification.
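For scale, here's what the burden looks like on a toy type. The syntax is my guess - P3984 gives no wording - and since compilers ignore unknown attributes (usually with a warning), this compiles today with no profile semantics attached:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical annotated API (attribute syntax assumed, not standardized).
// Every observer needs the attribute or the analyzer must assume it can
// invalidate; every mutator correctly lacks it.
struct IntBuffer {
    std::vector<int> data;

    [[not_invalidating]] std::size_t size() const { return data.size(); }
    [[not_invalidating]] bool empty() const { return data.empty(); }

    void push(int x) { data.push_back(x); }  // may invalidate: no attribute
};
```

Multiply that by every const accessor in a real codebase and the "we don't want annotations" position from section 1 starts to strain.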
the irony is thick. "we don't want annotations." proceeds to add an annotation. "but this one is different." sure it is.
annotations all the way down. it's annotations the whole way.
great, another paper that will take 10 years to get through committee
targeted to EWG actually
even worse
The runtime checks concern me. The paper says "eliminating undefined behavior (supported by runtime checks where necessary)" and that's the part that makes embedded developers nervous.
We run on Cortex-M4 with 256K flash and 64K RAM. Every byte matters. Every branch matters. If profiles means my `span` access now has a bounds check that I can't elide, that's a non-starter for ISR code.

The paper claims no "inessential overheads" but doesn't define "inessential." In my world, a single branch in a tight loop is essential overhead. I need to know: can I opt into type-safety for my application layer and opt out for my HAL code within the same TU? The paper's TU-level granularity might be too coarse for us.
you guys have 64K?
We run safety-critical avionics code on bare metal. I'm cautiously interested but need to see the runtime check overhead quantified with actual numbers. "No inessential overhead" isn't a specification, it's a marketing claim. The zero-overhead principle was already a compromise - Bjarne said so himself in D&E. Layering profiles on top of that compromise doesn't resolve it.
For context, here's the profiles paper trail for anyone trying to follow along:
- P1179 - Lifetime safety (the original Sutter/Stroustrup proposal, 2019)
- P3038 - Stroustrup's initial profiles framework
- P3081 - Core safety profiles for C++26
- P3274 - Framework for profile development
- P3390 - Safe C++ (Baxter's competing approach)
- P3586 - "Problems with profiles" (the rebuttal paper)
- P3984 - This paper (type-safety profile detail)
That's seven papers on the same problem space and I'm probably missing a few. SG23 has been trying to navigate between profiles-first and safe-C++-first for two years now. The Wrocław meeting reportedly had polls where profiles won priority, but "won priority" and "achieved consensus" are very different things in committee-speak.
The fact that this paper targets four groups (EWG, SG12, SG20, SG23) suggests Bjarne wants to go wide rather than deep through SG23 alone. That's either strategic or an acknowledgment that SG23 isn't converging.
I need a flowchart just for the profiles papers alone. seven papers on the same problem space and you're "probably missing a few." there are papers about the papers at this point.
If you want the other side of the argument, read P3586 ("Problems with profiles"). It's the most detailed technical critique of the profiles approach. And P3081 for the concrete core safety profile that's actually targeting C++26 standardization. P3984 reads like the philosophical foundation paper; P3081 is the one with proposed wording.
[removed by moderator]
what did they say?
something about the inventor of C++ being out of touch with modern development. you know, the usual respectful discourse we're known for here.
Last warning. Rule 2.
I work on a major C++ compiler's frontend and my eye is twitching reading section 2.6. the invalidation analysis as described is... let's say "optimistic" about what compile-time analysis can achieve in reasonable time.
which compiler
nice try
Section 2.4. Copies and moves. The entirety:
That's it. That's the section. One sentence for the feature responsible for approximately 40% of all non-trivial C++ bugs I've seen in production. Move semantics alone has generated more committee papers than some entire programming languages have features.
I had to scroll back to check and I'm dying. one sentence. for copies and moves. the man really said "exercise left to the reader" on the hardest part.
Copies and moves are where half the production bugs in C++ live. Moved-from state, self-assignment, exception safety in copy constructors, the rule of five vs rule of zero tension. The paper spends half a page on the `push_back` invalidation example that any intern could debug, and one sentence on the move semantics footgun factory that has generated more CVEs than I can count.

The invalidation analysis from 2.6 catches the former. But what about moved-from objects?
Moved-from state is type-safe by the standard's definition but not safe by any reasonable definition. Where does the profile draw the line?
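The hazard in its smallest form (my sketch): every line is well-defined per the standard, yet reading the source object after the move yields a "valid but unspecified" value - correct to the type system, almost certainly a bug to the reader.

```cpp
#include <string>
#include <utility>

// Sketch of the moved-from hazard: consume() takes ownership of the
// string's contents. The caller's object is left in a valid but
// unspecified state - no UB, so a profile that only bans UB cannot
// flag a subsequent read of it.
std::string consume(std::string&& s) { return std::move(s); }
```

After `std::string name = "bjarne"; auto owned = consume(std::move(name));`, inspecting `name` is type-safe but its contents are unspecified - which is exactly the line-drawing question section 2.4 doesn't answer.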
16.67ms per frame. That's what I have. Runtime bounds checks on every container access in our ECS iteration? No.
But - and I need to be honest here - 90% of our codebase isn't the hot path. Our UI code, our asset pipeline, our network layer, our scripting bridge - all of that could benefit from profiles. The question is granularity. Can I profile-protect my asset pipeline code while leaving my particle system alone?
The paper talks about TU-level opt-in. Most game engines don't organize TUs by "safe" and "unsafe" - they organize by system. I'd need something finer.
frame budgets say no. 16.67ms or you're a slideshow.
In practice the problem is convincing a tech lead to adopt profiles when the exception list for performance-critical code is longer than the adoption guide. We need case studies. Take a real game engine module - say, the resource loading pipeline - apply profiles, measure the overhead. Until someone does that, it's all theory.
We measure in nanoseconds. Literally. Our hot path processes market data at 2.3 million messages/second and every conditional branch matters.
The paper says profiles insert "run-time checks to ensure that an error action is triggered rather than reaching the point of UB." In our world, we've spent years removing runtime checks. We know our data is valid because we control the entire pipeline. Adding bounds checks to our `span` access in the matching engine would cost us about 3ns per message. At our volume, that's 6.9 milliseconds per second of pure overhead.

I want type safety for our configuration layer, our logging, our REST API code. I do not want it anywhere near the matching engine. The paper's TU-level granularity is better than nothing, but what I really need is function-level or scope-level opt-in.
tell me you work in HFT without telling me you work in HFT
committee gonna committee. see you all in 2030 when we're debating the exact semantics of `[[not_invalidating]]` for the 47th time.

The fundamental question isn't "do profiles work" - it's "do profiles work well enough compared to the alternative."
Circle's Safe C++ gives you a borrow checker. Actual ownership tracking in the type system. The compiler knows who owns what and for how long. The cost: new syntax, new semantics, a learning curve.
Profiles gives you: "we'll reject code that's too complex and ban some features." The benefit: no new syntax, opt-in, links with everything. The cost: the guarantee has escape hatches.
Both are trying to answer the same question from the NSA and CISA: "prove your code is memory safe." But they answer it at different confidence levels. A profile says "we checked everything we could check." A borrow checker says "we checked everything."
The committee chose profiles. I'm not sure the committee chose correctly, but I understand the politics.
The thing people keep missing: they're not solving the same problem at the same depth. Circle adds lifetime tracking to the type system. That's a new capability - the compiler knows ownership and borrowing relationships. Profiles restricts what you can do with the existing type system and adds runtime checks. You can verify strictly more things with a richer type system than with a restricted subset of the same type system. That's not opinion, that's math.
That's technically correct but it ignores the adoption curve. Circle requires new syntax, new mental models, a complete rethink of ownership at API boundaries. Profiles requires banning some patterns and adopting hardened containers. For a company with 10 million lines of C++, which migration path doesn't require rewriting everything?
I was in the room for some of the SG23 discussions and the migration-cost argument is what tipped the balance.
Committee support and $5 gets you a coffee. The migration argument is real but it's also the argument against every improvement since the invention of the wheel. At some point the cost of NOT migrating exceeds the cost of migration. The NSA memo wasn't subtle. Companies that can't demonstrate memory safety guarantees are going to start losing government contracts. "We banned pointer arithmetic" is a weaker answer than "our type system enforces ownership."
Look, both approaches have genuine merit. The honest answer is probably that profiles covers the 80% case with less migration cost, and Circle-style ownership covers the 95% case with more migration cost. The question is which percentage the market will actually demand. If the answer turns out to be 95%, then profiles bought us time but didn't solve the underlying problem.
why not both.jpg
I teach intermediate C++ at a university. The paper opens with Bjarne's original motivation - combining C's hardware access with Simula's type safety. My students weren't born when Simula was relevant.
What I need from profiles is: "turn this on and 80% of the footguns disappear." What this paper gives me is: "turn this on and some footguns disappear but the boundary is complex and you need to understand `[[not_invalidating]]` and the invalidation analysis rules and which code patterns are 'too complex for the analyzer.'"

That's not a win for pedagogy. It's another thing to teach. I'd rather have fewer guarantees with a simpler mental model than more guarantees with escape hatches students don't understand.
my students can barely understand const correctness and you want me to add profiles to the curriculum. I'm going to need a bigger syllabus.
ok but real talk how does this interact with build systems. is `-fprofile=type_safety` a compiler flag? a pragma? a module attribute? the paper doesn't say and that's kind of important for anyone who actually has to ship this.

CMake support when
asking the real questions while everyone else debates theory
modules first, profiles second. you can't build on a foundation that doesn't compile.
Section 2.2 defines a resource as something that "must be acquired before use and released after its last use" - but then immediately narrows it to things with constructor/destructor pairs.
This means every C API resource (`FILE*`, `HANDLE`, socket fd, OpenGL context, Vulkan handles...) is invisible to the resource-safety profile until someone wraps it. The paper's answer is "encapsulate first." But the entire point of profiles is to help code that ISN'T already following best practices. If your code already wraps every C resource in RAII, you've already solved the problem profiles is trying to solve.

See Microsoft's GSL and the Core Guidelines - they've been trying to get people to wrap raw resources for a decade. Profiles adds enforcement but doesn't solve the adoption problem.
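For reference, the "encapsulate first" step in its minimal form - once the `FILE*` lives behind a destructor, the resource-safety profile has something to see. A `unique_ptr` with a custom deleter is the one-liner version:

```cpp
#include <cstdio>
#include <memory>

// Minimal RAII wrapper over the C stdio handle: the deleter guards
// against null so a moved-from or failed-open wrapper is harmless,
// and fclose runs exactly once when the wrapper is destroyed.
struct FileCloser {
    void operator()(std::FILE* f) const noexcept {
        if (f) std::fclose(f);
    }
};
using unique_file = std::unique_ptr<std::FILE, FileCloser>;
```

The point stands, though: writing this wrapper is exactly the discipline the profile presumes rather than provides.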
`FILE*` has entered the chat. and exited without being closed. as is tradition.

`fclose(NULL)` is straight UB per C17 7.21.5.1. not even implementation-defined. the profile can't help you if you never wrapped the handle in the first place. that's the point.

The invalidation example in the paper is the easy case. Here's the hard one:
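(The original snippet was lost in quoting; this is a reconstruction from the description below, assuming a map `m` and accessor `get_ref` as the follow-up text names them:)

```cpp
#include <map>
#include <string>
#include <string_view>

// Reconstructed sketch: get_ref returns a view aliasing a string owned
// by the map. The view is valid at the point of return; it dangles only
// when the map's nodes are destroyed later - e.g. by m.clear() in some
// entirely different scope.
std::map<int, std::string> m{{1, "hello"}};

std::string_view get_ref(int key) {
    return m.at(key);  // view into storage the map owns
}
```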
The `string_view` returned from `get_ref` aliases memory owned by the map. The map isn't modified through the same reference chain. How does the profile track this? The paper says "don't let a pointer to a deleted object out of the function" but `get_ref` returns a pointer to a VALID object - it only becomes invalid when `m.clear()` is called later, in a different scope.

This is the class of bug that actually kills people in production, and I don't see how the rules in section 2.6 handle it without whole-program analysis or Rust-style lifetime annotations.
For the curious: Clang trunk with `-Wdangling` already catches some of these patterns. The lifetime safety analysis docs cover the current state of the art. It's not the full profile as described here but it's the closest thing to a prototype we have.

I showed this example to my coworker. he didn't switch to Rust, he just sighed and said "we use `shared_ptr` for everything anyway." I think that's worse.

nobody's gonna talk about the elephant in the room? the NSA memo, the CISA guidance, the White House push for memory-safe languages. this paper exists because governments are about to start mandating this stuff and C++ doesn't have an answer yet.
NSA literally said "use memory-safe languages." they didn't say "use C++ with some static analysis on top."
and then everyone switched to Rust and lived happily ever after. that's not how billion-dollar codebases work.
fair, but "profiles might work in a few years" isn't going to satisfy a government auditor in 2027. the clock is ticking whether the committee wants it to or not.
[deleted]
I actually read the whole paper. Including section 5, the plan for progress.
The plan basically says: implement the type-safety and resource-safety profiles, get experience, iterate. It references prototypes ([HS21, CG, GDR25]) and suggests that the work is "sufficiently detailed to be translated in standardese with relatively few technical changes."
The word "relatively" is doing a lot of heavy lifting there. The gap between "here's a set of rules described in English prose" and "here's normative wording that three independent compilers can implement compatibly" is where profiles papers go to age. P1179 (lifetime safety) was 2019. We're in 2026. "Relatively few technical changes" in committee time could mean 2030.
The real question in section 5 isn't the plan - it's whether the committee has the patience for another iteration cycle on the same idea while the industry pressure for memory safety keeps growing.
you read all 17 pages? on purpose?
Section 5 is where the real question lives. The plan for progress is basically "implement it, get experience, iterate." That's been the plan since 2019. The question is whether "iterate" converges before the market moves on.
Bjarne wrote this on a typewriter, I'm convinced. the formatting screams "I use Word and I'm not sorry about it."
the numbered section headers, the bullet points that start mid-line, the references like [BS25b]. peak academic formatting. respect honestly.
can we please get networking in the standard before I retire. asking for a friend who started the petition in 2014.
[removed by moderator]
the paper references itself as [BS25b, BS22b, BS23]. Bjarne really said "for further reading, see: me." king behavior honestly.
lore master. when you invented the language, every prior work IS your work.
I just want to write a web server in C++ without worrying about use-after-free. is that too much to ask.
yes
compiles first try if you never use raw pointers. profiles just makes that the law. I'm here for it actually.
One thing that hasn't gotten enough attention: the paper explicitly says "a profile cannot change the semantics of a program beyond defining the meaning of some forms of undefined behavior" and then says profiles insert "run-time checks."
Runtime checks are overhead. The paper says profiles don't impose "inessential overheads." But who decides what's essential? A bounds check on `vector::operator[]` is essential for safety and inessential for performance. You can't have both.

The zero-overhead principle was the constraint that prevented Stroustrup from enforcing safety in the first place - he says so explicitly in the introduction. Profiles tries to thread the needle, but the needle hasn't gotten any wider in 40 years.
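The tension already exists in the standard library, one pair of calls apart - `at()` is the checked access a profile would effectively mandate, `operator[]` is the unchecked one the zero-overhead principle protects:

```cpp
#include <stdexcept>
#include <vector>

// Same element, one branch apart. at() throws std::out_of_range on a
// bad index; operator[] is UB on a bad index but costs no branch.
int checked(const std::vector<int>& v, std::size_t i)   { return v.at(i); }
int unchecked(const std::vector<int>& v, std::size_t i) { return v[i]; }
```

A profile that rewrites the second into the first has defined the UB away - and inserted the branch the paper calls "inessential."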
Edit: to be clear, I'm not saying profiles is wrong. I'm saying the paper should acknowledge the tension directly instead of papering over it with "inessential."
runtime checks are not zero overhead. the paper acknowledges this in one sentence and then moves on. pick one: safety or speed. you can't have both without a LOT more nuance than this paper provides.
imagine explaining profiles to a recruiter. "what are your safety profile preferences?" "I prefer type_safety with resource_safety but I opt out in my ISR handlers." recruiter: closes laptop
"do you know profiles?" "I know LinkedIn profiles." hired.
this paper has been my entire personality for the last month and I'm not ok
I don't understand why we need this. I've been writing C++ since '98 and I've never had a memory safety issue that couldn't be found with a debugger and two cups of coffee. This is a solution looking for a problem.