P3984R0 - A type-safety profile · r/wg21
Posted by u/constexpr_everything_2024 · 14 hr. ago

Author: Bjarne Stroustrup
Document: P3984R0
Date: 2026-02-22
Target: EWG, SG12, SG20, SG23
Link: wg21.link/p3984r0

Bjarne's latest on profiles - this time with actual detail on what type-safety and resource-safety profiles would look like in practice. Four target audiences tells you the scope.

The pitch: "subset a superset" of C++. Add safe abstractions (hardened containers, not_null, runtime checks where needed), then ban the raw features those abstractions replace (pointer arithmetic, unchecked subscripts, uninitialized access). Opt-in per TU. Result is still ISO C++, linkable with everything else.

The paper walks through each class of type violation - initialization, ranges, nullptr, and the hard one: invalidation (dangling pointers). For invalidation it proposes static analysis plus a new [[not_invalidating]] attribute for functions that don't invalidate their pointer arguments. Section 2.4 on copies and moves is... one sentence. Section 5 outlines a plan for progress toward standardization.

Explicitly described as "aimed at understanding, not a standards text" - so no proposed wording yet. Profiles have been in flight since the Core Guidelines era. This paper is the philosophical foundation; P3081 is the one with concrete wording targeting C++26.

▲ 1,247 points (89% upvoted) · 87 comments
sorted by: best
u/AutoModerator 1 point 14 hr. ago pinned comment

Paper: P3984R0 - A type-safety profile | Author: Bjarne Stroustrup | Date: 2026-02-22 | Audience: EWG, SG12, SG20, SG23 | Link: wg21.link/p3984r0

Reminder: paper authors sometimes read these threads. Be civil, be specific, critique the paper not the person. Rule 2 violations will be removed.

u/just_ship_it_already 340 points 13 hr. ago 🏆

profiles R0. we've been doing profiles R0 for five years now. this is like groundhog day but the groundhog is a WG21 paper number.

u/compile_time_enjoyer 189 points 13 hr. ago

at this rate C++ will be memory safe by 2050

u/template_pessimist 267 points 12 hr. ago 🏆

optimistic

u/former_boost_contributor 34 points 12 hr. ago

I remember when profiles was just a slide in a Core Guidelines talk at CppCon 2015. we were so young. so naive.

u/lifetime_lint_survivor 187 points 12 hr. ago 🏆

we must accept only code that's not too complex to analyze

This is the sentence that should have been in the abstract. The entire paper's guarantee has an asterisk, and the asterisk says "unless the code is hard."

I've spent the last four years working on lifetime analysis tooling. The problem isn't the easy cases - yeah, you can catch the push_back invalidation example in the paper. That's freshman stuff. The problem is the exponential blowup when you have:

1. Conditional returns of pointers with different origins
2. Callbacks that store references into containers
3. Type-erased handles (any, function, coroutine_handle)
4. Placement new into aligned_storage or successor types
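Case 1 is easy to sketch concretely (names here are illustrative, not from the paper): the returned pointer's provenance depends on a runtime value, so the analyzer has to carry two possible origins forward and treat a later mutation of either container as a potential invalidation.

```cpp
#include <string>
#include <vector>

// Conditional return of pointers with different origins: which container the
// result aliases is only known at runtime, so the analysis must track both
// provenances for the rest of the caller's scope.
const std::string* pick(const std::vector<std::string>& a,
                        const std::vector<std::string>& b,
                        bool use_a) {
    if (use_a) return a.empty() ? nullptr : &a.front();
    return b.empty() ? nullptr : &b.front();
}
```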

The paper acknowledges this ("fundamentally impossible to statically analyze") but then handwaves the boundary. What happens at the edge? The paper says reject the code. Rejection means refactoring. Refactoring means cost. Nobody's quantifying that cost.

Rust solved this by making the boundary explicit in the type system - lifetimes are part of the signature. Profiles tries to infer the boundary from existing C++ semantics, and the paper admits the inference has limits. The question isn't whether profiles can handle the vector example. The question is what percentage of real-world code falls into the "too complex" bucket.

I'd bet it's larger than anyone on the committee wants to hear.

u/static_analysis_enjoyer 78 points 11 hr. ago

This is the thing that kills me about profiles. The paper almost gets it right - the invalidation rules in 2.6 are genuinely clever for the cases they cover. But the "too complex" escape hatch is where the guarantee lives or dies, and the paper treats it as an implementation detail instead of a design parameter.

How big is the escape hatch? That's the question the paper should answer and doesn't.

u/async_skeptic_42 92 points 10 hr. ago

I implement lifetime analysis tooling and the "too complex" boundary is narrower than people think. We handle 85-90% of real-world patterns in our static analyzer. The remaining 10-15% isn't "fundamentally impossible" - it's "would require inter-procedural analysis that's expensive but feasible."

The paper is conservative because it's aimed at compiler-integrated analysis, not standalone tools. Compilers have tighter time budgets. But Clang's lifetime safety analysis already handles a lot of the patterns people assume are intractable. The gap between "what a compiler can check in reasonable time" and "what's fundamentally impossible" is where the engineering happens.

u/static_analysis_enjoyer 56 points 10 hr. ago

85-90% is doing a lot of work in that sentence. What's in the other 10-15%? Because in my experience that's where the CVEs live. The easy patterns are the ones nobody gets wrong anyway.

Show me the escape hatch boundaries for type-erased callbacks, coroutine frames, and placement new. Those aren't exotic - they're the backbone of every async framework, every signal/slot system, every ECS. If profiles can't handle the patterns that modern C++ actually uses, the guarantee is theoretical.

u/async_skeptic_42 67 points 9 hr. ago

Fair challenge. Type-erased callbacks: if the type erasure goes through std::function or std::move_only_function, the analyzer can track through the known interface. Custom type erasure: that's in the "requires unsafe block" bucket, yes. Placement new: same bucket. Coroutine frames: genuinely hard because the frame lifetime is opaque to the caller.

I'll concede coroutines are the weak point. But "profiles can't handle coroutines perfectly" isn't "profiles are useless." It's "profiles need to mark coroutine frame access as unsafe by default until better analysis exists."
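The type-erased case is easy to sketch, assuming nothing beyond std::function: once the lambda is erased, whether a capture dangles is invisible at the call site, which is exactly what makes the analysis hard.

```cpp
#include <functional>

// Type-erased callback hazard: the erased interface hides the capture's origin.
std::function<int()> make_counter_dangling() {
    int local = 0;
    return [&local] { return ++local; };   // captures a reference to a soon-dead local
}

// The safe variant captures by value; nothing for an analyzer to reject.
std::function<int()> make_counter_owned() {
    return [n = 0]() mutable { return ++n; };
}
```

Both functions have the same type; only the lambda body differs, and that body is precisely what type erasure makes opaque.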

u/static_analysis_enjoyer 43 points 9 hr. ago

We agree on the real question then: is 85% coverage with explicit escape hatches for the rest good enough to satisfy the NSA/CISA pressure, or does the industry need Rust-level guarantees? I genuinely don't know the answer. But the paper should be honest about where the boundary is instead of deferring it to "too complex."

u/intern_at_faang 12 points 11 hr. ago

I'm not reading all that but I trust you

u/definitely_not_rustacean -47 points 13 hr. ago

just use Rust lol. this entire paper is 17 pages of coping with the fact that C++ can't do what Rust does by default.

u/senior_cpp_dev_69 234 points 13 hr. ago 🏆

there it is. 23 minutes. new record for this sub.

u/legacy_codebase_survivor 89 points 12 hr. ago

some of us have codebases written before you were born that generate actual revenue. "just use Rust" isn't a migration plan, it's a bumper sticker.

u/UB_is_a_feature -12 points 12 hr. ago

skill issue

u/paper_trail_2019 1 point 12 hr. ago

Rule 2. We get it.

u/not_a_library_author 89 points 11 hr. ago

From section 1:

we avoid having to rely on developers following guidelines and sprinkling annotations all over the code

From section 2.6:

Add a [[not_invalidating]] attribute to be used to speed up analysis

The paper argues against annotations in section 1 and introduces a new annotation in section 2.6. I realize these aren't technically the same thing - [[not_invalidating]] is "just an optimization" per the paper - but in practice, any non-trivial API surface will need this on dozens of functions to avoid false positive rejections.

The annotation burden for the standard library alone is non-trivial. vector::size(), vector::empty(), vector::capacity(), map::find(), every const accessor on every container - all need [[not_invalidating]] or the analyzer assumes the worst. And getting it wrong means either false rejections (annotation missing) or false safety (annotation wrong on a function that does invalidate).

The paper says the annotation "can be verified when the function definition is compiled" - but the implementation cost of re-analyzing every annotated function under profile rules is itself non-trivial. This is the kind of detail that separates a 17-page vision document from a specification.
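A sketch of what that annotated surface might look like, assuming the attribute lands as proposed. Since [[not_invalidating]] doesn't exist in any compiler yet, the sketch spells it through a macro so it compiles today; the class and its members are illustrative.

```cpp
#include <cstddef>
#include <vector>

// [[not_invalidating]] is proposed in P3984 and implemented nowhere; spell it
// via a macro so this sketch compiles on current compilers.
#define NOT_INVALIDATING /* [[not_invalidating]] once the attribute exists */

class IntBuffer {
    std::vector<int> data_;
public:
    // Observers never invalidate outstanding references, so each would carry
    // the annotation -- one per const accessor, which is the burden described above.
    NOT_INVALIDATING std::size_t size() const { return data_.size(); }
    NOT_INVALIDATING bool empty() const { return data_.empty(); }

    // Mutator: may reallocate; left unannotated, so the analyzer assumes the worst.
    void push(int v) { data_.push_back(v); }
};
```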

u/pragma_once_and_done 45 points 10 hr. ago

the irony is thick. "we don't want annotations." proceeds to add an annotation. "but this one is different." sure it is.

u/annotation_skeptic 23 points 10 hr. ago

annotations all the way down. it's annotations the whole way.

u/another_lewg_paper 156 points 12 hr. ago

great, another paper that will take 10 years to get through committee

u/specs_not_my_job 8 points 12 hr. ago

targeted to EWG actually

u/just_a_grad_student 67 points 11 hr. ago

even worse

u/stm32_or_die 73 points 11 hr. ago

The runtime checks concern me. The paper says "eliminating undefined behavior (supported by runtime checks where necessary)" and that's the part that makes embedded developers nervous.

We run on Cortex-M4 with 256K flash and 64K RAM. Every byte matters. Every branch matters. If profiles means my span access now has a bounds check that I can't elide, that's a non-starter for ISR code.

The paper claims no "inessential overheads" but doesn't define "inessential." In my world, a single branch in a tight loop is essential overhead. I need to know: can I opt into type-safety for my application layer and opt out for my HAL code within the same TU? The paper's TU-level granularity might be too coarse for us.

u/not_a_real_cpp_dev 45 points 10 hr. ago

you guys have 64K?

u/embedded_by_necessity 32 points 10 hr. ago

We run safety-critical avionics code on bare metal. I'm cautiously interested but need to see the runtime check overhead quantified with actual numbers. "No inessential overhead" isn't a specification, it's a marketing claim. The zero-overhead principle was already a compromise - Bjarne said so himself in D&E. Layering profiles on top of that compromise doesn't resolve it.

u/sg23_lurker 62 points 11 hr. ago

For context, here's the profiles paper trail for anyone trying to follow along:

- P1179 - Lifetime safety (the original Sutter/Stroustrup proposal, 2019)
- P3038 - Stroustrup's initial profiles framework
- P3081 - Core safety profiles for C++26
- P3274 - Framework for profile development
- P3390 - Safe C++ (Baxter's competing approach)
- P3586 - "Problems with profiles" (the rebuttal paper)
- P3984 - This paper (type-safety profile detail)

That's seven papers on the same problem space and I'm probably missing a few. SG23 has been trying to navigate between profiles-first and safe-C++-first for two years now. The Wrocław meeting reportedly had polls where profiles won priority, but "won priority" and "achieved consensus" are very different things in committee-speak.

The fact that this paper targets four groups (EWG, SG12, SG20, SG23) suggests Bjarne wants to go wide rather than deep through SG23 alone. That's either strategic or an acknowledgment that SG23 isn't converging.

u/coroutine_casualty 87 points 10 hr. ago

I need a flowchart just for the profiles papers alone. seven papers on the same problem space and you're "probably missing a few." there are papers about the papers at this point.

u/paper_trail_lurker 28 points 10 hr. ago

If you want the other side of the argument, read P3586 ("Problems with profiles"). It's the most detailed technical critique of the profiles approach. And P3081 for the concrete core safety profile that's actually targeting C++26 standardization. P3984 reads like the philosophical foundation paper; P3081 is the one with proposed wording.

[deleted] 11 hr. ago

[removed by moderator]

u/move_semantics_hater 12 points 10 hr. ago

what did they say?

u/signed_overflow_fan 34 points 10 hr. ago

something about the inventor of C++ being out of touch with modern development. you know, the usual respectful discourse we're known for here.

u/not_on_the_committee 1 point 10 hr. ago

Last warning. Rule 2.

u/volatile_brain 112 points 11 hr. ago

I work on a major C++ compiler's frontend and my eye is twitching reading section 2.6. the invalidation analysis as described is... let's say "optimistic" about what compile-time analysis can achieve in reasonable time.

u/strictly_conforming 5 points 10 hr. ago

which compiler

u/volatile_brain 78 points 10 hr. ago

nice try

u/the_real_template_wizard 134 points 10 hr. ago 🏆

Section 2.4. Copies and moves. The entirety:

To be handled similar to constructors and destructors.

That's it. That's the section. One sentence for the feature responsible for approximately 40% of all non-trivial C++ bugs I've seen in production. Move semantics alone has generated more committee papers than some entire programming languages have features.

u/daily_segfault_42 67 points 10 hr. ago

I had to scroll back to check and I'm dying. one sentence. for copies and moves. the man really said "exercise left to the reader" on the hardest part.

u/copy_semantics_matter 56 points 9 hr. ago

Copies and moves are where half the production bugs in C++ live. Move-from state, self-assignment, exception safety in copy constructors, the rule of five vs rule of zero tension. The paper spends half a page on the push_back invalidation example that any intern could debug, and one sentence on the move semantics footgun factory that has generated more CVEs than I can count.

u/dangerous_moves 34 points 8 hr. ago

std::vector<std::unique_ptr<Widget>> widgets;
widgets.push_back(std::make_unique<Widget>());  // one element, so widgets[0] is valid
auto& w = widgets[0];
widgets.push_back(std::make_unique<Widget>());  // may reallocate
w->update();  // w may be dangling after reallocation

The invalidation analysis from 2.6 catches this. But what about:

auto w = std::move(widgets[0]);
widgets[0]->update();  // moved-from access

Moved-from state is type-safe by the standard's definition but not safe by any reasonable definition. Where does the profile draw the line?

u/frame_budget_bill 45 points 10 hr. ago

16.67ms per frame. That's what I have. Runtime bounds checks on every container access in our ECS iteration? No.

But - and I need to be honest here - 90% of our codebase isn't the hot path. Our UI code, our asset pipeline, our network layer, our scripting bridge - all of that could benefit from profiles. The question is granularity. Can I profile-protect my asset pipeline code while leaving my particle system alone?

The paper talks about TU-level opt-in. Most game engines don't organize TUs by "safe" and "unsafe" - they organize by system. I'd need something finer.

u/not_enough_frames 23 points 9 hr. ago

frame budgets say no. 16.67ms or you're a slideshow.

u/game_engine_pragmatist 31 points 9 hr. ago

In practice the problem is convincing a tech lead to adopt profiles when the exception list for performance-critical code is longer than the adoption guide. We need case studies. Take a real game engine module - say, the resource loading pipeline - apply profiles, measure the overhead. Until someone does that, it's all theory.

u/nanosecond_accountant 67 points 10 hr. ago

We measure in nanoseconds. Literally. Our hot path processes market data at 2.3 million messages/second and every conditional branch matters.

The paper says profiles insert "run-time checks to ensure that an error action is triggered rather than reaching the point of UB." In our world, we've spent years removing runtime checks. We know our data is valid because we control the entire pipeline. Adding bounds checks to our span access in the matching engine would cost us about 3ns per message. At our volume, that's 6.9 milliseconds per second of pure overhead.

I want type safety for our configuration layer, our logging, our REST API code. I do not want it anywhere near the matching engine. The paper's TU-level granularity is better than nothing, but what I really need is function-level or scope-level opt-in.

u/turbo_llama_9000 89 points 9 hr. ago

tell me you work in HFT without telling me you work in HFT

u/committee_gonna_committee 89 points 10 hr. ago

committee gonna committee. see you all in 2030 when we're debating the exact semantics of [[not_invalidating]] for the 47th time.

u/circle_curious 103 points 9 hr. ago

The fundamental question isn't "do profiles work" - it's "do profiles work well enough compared to the alternative."

Circle's Safe C++ gives you a borrow checker. Actual ownership tracking in the type system. The compiler knows who owns what and for how long. The cost: new syntax, new semantics, a learning curve.

Profiles gives you: "we'll reject code that's too complex and ban some features." The benefit: no new syntax, opt-in, links with everything. The cost: the guarantee has escape hatches.

Both are trying to answer the same question from the NSA and CISA: "prove your code is memory safe." But they answer it at different confidence levels. A profile says "we checked everything we could check." A borrow checker says "we checked everything."

The committee chose profiles. I'm not sure the committee chose correctly, but I understand the politics.

u/dangling_by_design 67 points 8 hr. ago

The thing people keep missing: they're not solving the same problem at the same depth. Circle adds lifetime tracking to the type system. That's a new capability - the compiler knows ownership and borrowing relationships. Profiles restricts what you can do with the existing type system and adds runtime checks. You can verify strictly more things with a richer type system than with a restricted subset of the same type system. That's not opinion, that's math.

u/sg23_lurker 45 points 8 hr. ago

That's technically correct but it ignores the adoption curve. Circle requires new syntax, new mental models, a complete rethink of ownership at API boundaries. Profiles requires banning some patterns and adopting hardened containers. For a company with 10 million lines of C++, which migration path doesn't require rewriting everything?

I was in the room for some of the SG23 discussions and the migration-cost argument is what tipped the balance.

u/dangling_by_design 78 points 7 hr. ago

Committee support and $5 gets you a coffee. The migration argument is real but it's also the argument against every improvement since the invention of the wheel. At some point the cost of NOT migrating exceeds the cost of migration. The NSA memo wasn't subtle. Companies that can't demonstrate memory safety guarantees are going to start losing government contracts. "We banned pointer arithmetic" is a weaker answer than "our type system enforces ownership."

u/sg23_lurker 34 points 7 hr. ago

Look, both approaches have genuine merit. The honest answer is probably that profiles covers the 80% case with less migration cost, and Circle-style ownership covers the 95% case with more migration cost. The question is which percentage the market will actually demand. If the answer turns out to be 95%, then profiles bought us time but didn't solve the underlying problem.

u/why_not_both_cpp 12 points 8 hr. ago

why not both.jpg

u/actually_teaches_cpp 58 points 9 hr. ago

I teach intermediate C++ at a university. The paper opens with Bjarne's original motivation - combining C's hardware access with Simula's type safety. My students weren't born when Simula was relevant.

What I need from profiles is: "turn this on and 80% of the footguns disappear." What this paper gives me is: "turn this on and some footguns disappear but the boundary is complex and you need to understand [[not_invalidating]] and the invalidation analysis rules and which code patterns are 'too complex for the analyzer.'"

That's not a win for pedagogy. It's another thing to teach. I'd rather have fewer guarantees with a simpler mental model than more guarantees with escape hatches students don't understand.

u/template_pupil 45 points 8 hr. ago

my students can barely understand const correctness and you want me to add profiles to the curriculum. I'm going to need a bigger syllabus.

u/cmake_victim_2025 34 points 8 hr. ago

ok but real talk how does this interact with build systems. is -fprofile=type_safety a compiler flag? a pragma? a module attribute? the paper doesn't say and that's kind of important for anyone who actually has to ship this.

u/modules_when 23 points 7 hr. ago

CMake support when

u/asking_real_questions 12 points 7 hr. ago

asking the real questions while everyone else debates theory

u/build_system_survivor 8 points 7 hr. ago

modules first, profiles second. you can't build on a foundation that doesn't compile.

u/resource_leak_detective 78 points 8 hr. ago

Section 2.2 defines a resource as something that "must be acquired before use and released after its last use" - but then immediately narrows it to things with constructor/destructor pairs.

This means every C API resource (FILE*, HANDLE, socket fd, OpenGL context, Vulkan handles...) is invisible to the resource-safety profile until someone wraps it. The paper's answer is "encapsulate first." But the entire point of profiles is to help code that ISN'T already following best practices. If your code already wraps every C resource in RAII, you've already solved the problem profiles is trying to solve.

See Microsoft's GSL and the Core Guidelines - they've been trying to get people to wrap raw resources for a decade. Profiles adds enforcement but doesn't solve the adoption problem.
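"Encapsulate first" is a one-liner with a custom deleter; a minimal sketch, nothing here being from the paper:

```cpp
#include <cstdio>
#include <memory>

// Wrap FILE* in RAII so "released after last use" becomes a destructor call
// that a constructor/destructor-based profile can actually see.
struct FileCloser {
    void operator()(std::FILE* f) const noexcept {
        if (f) std::fclose(f);   // guard: fclose(NULL) is UB per C17 7.21.5.1
    }
};
using unique_file = std::unique_ptr<std::FILE, FileCloser>;

unique_file open_file(const char* path, const char* mode) {
    return unique_file(std::fopen(path, mode));
}
```

Which is exactly the comment's point: this wrapper is the decade-old Core Guidelines advice, and the profile only adds teeth once someone has written it.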

u/fclose_nullptr_fan 56 points 7 hr. ago

FILE* has entered the chat. and exited without being closed. as is tradition.

u/pedantic_standards_nerd 23 points 6 hr. ago

fclose(NULL) is straight UB per C17 7.21.5.1. not even implementation-defined. the profile can't help you if you never wrapped the handle in the first place. that's the point.

u/dangling_by_design 92 points 8 hr. ago

The invalidation example in the paper is the easy case. Here's the hard one:

auto get_ref(std::map<int,std::string>& m) {
    auto [it, inserted] = m.try_emplace(42, "hello");
    return std::string_view{it->second};
}

void use() {
    std::map<int,std::string> m;
    auto sv = get_ref(m);
    m.clear();            // invalidates sv
    std::cout << sv;      // dangling string_view
}

The string_view returned from get_ref aliases memory owned by the map. The map isn't modified through the same reference chain. How does the profile track this? The paper says "don't let a pointer to a deleted object out of the function" but get_ref returns a pointer to a VALID object - it only becomes invalid when m.clear() is called later, in a different scope.

This is the class of bug that actually kills people in production, and I don't see how the rules in section 2.6 handle it without whole-program analysis or Rust-style lifetime annotations.

u/godbolt_addict 34 points 7 hr. ago

For the curious: Clang trunk with -Wdangling already catches some of these patterns. The lifetime safety analysis docs cover the current state of the art. It's not the full profile as described here but it's the closest thing to a prototype we have.

u/rewrite_in_rust_guy 45 points 7 hr. ago

I showed this example to my coworker. he didn't switch to Rust, he just sighed and said "we use shared_ptr for everything anyway." I think that's worse.

u/throwaway_wg21_84729 34 points 7 hr. ago

nobody's gonna talk about the elephant in the room? the NSA memo, the CISA guidance, the White House push for memory-safe languages. this paper exists because governments are about to start mandating this stuff and C++ doesn't have an answer yet.

u/memory_safe_or_bust 12 points 6 hr. ago

NSA literally said "use memory-safe languages." they didn't say "use C++ with some static analysis on top."

u/definitely_not_a_committee_member 67 points 6 hr. ago

and then everyone switched to Rust and lived happily ever after. that's not how billion-dollar codebases work.

u/policy_nerd_2026 23 points 6 hr. ago

fair, but "profiles might work in a few years" isn't going to satisfy a government auditor in 2027. the clock is ticking whether the committee wants it to or not.

[deleted] 6 hr. ago

[deleted]

u/read_all_17_pages 38 points 3 hr. ago

I actually read the whole paper. Including section 5, the plan for progress.

The plan basically says: implement the type-safety and resource-safety profiles, get experience, iterate. It references prototypes ([HS21, CG, GDR25]) and suggests that the work is "sufficiently detailed to be translated in standardese with relatively few technical changes."

The word "relatively" is doing a lot of heavy lifting there. The gap between "here's a set of rules described in English prose" and "here's normative wording that three independent compilers can implement compatibly" is where profiles papers go to age. P1179 (lifetime safety) was 2019. We're in 2026. "Relatively few technical changes" in committee time could mean 2030.

The real question in section 5 isn't the plan - it's whether the committee has the patience for another iteration cycle on the same idea while the industry pressure for memory safety keeps growing.

u/overworked_reviewer 56 points 2 hr. ago

you read all 17 pages? on purpose?

u/sg23_lurker 28 points 2 hr. ago

Section 5 is where the real question lives. The plan for progress is basically "implement it, get experience, iterate." That's been the plan since 2019. The question is whether "iterate" converges before the market moves on.

u/xX_consteval_Xx 67 points 5 hr. ago

Bjarne wrote this on a typewriter, I'm convinced. the formatting screams "I use Word and I'm not sorry about it."

u/paper_number_goes_brrr 34 points 5 hr. ago

the numbered section headers, the bullet points that start mid-line, the references like [BS25b]. peak academic formatting. respect honestly.

u/networking_ts_forever 45 points 5 hr. ago

can we please get networking in the standard before I retire. asking for a friend who started the petition in 2014.

[deleted] 4 hr. ago

[removed by moderator]

u/lord_of_the_vtable 78 points 6 hr. ago

the paper references itself as [BS25b, BS22b, BS23]. Bjarne really said "for further reading, see: me." king behavior honestly.

u/template_archaeologist 34 points 5 hr. ago

lore master. when you invented the language, every prior work IS your work.

u/i_just_want_a_web_server 56 points 4 hr. ago

I just want to write a web server in C++ without worrying about use-after-free. is that too much to ask.

u/someone_who_benchmarks 123 points 4 hr. ago

yes

u/compiles_first_try 23 points 4 hr. ago

compiles first try if you never use raw pointers. profiles just makes that the law. I'm here for it actually.

u/lifetime_lint_survivor 54 points 3 hr. ago

One thing that hasn't gotten enough attention: the paper explicitly says "a profile cannot change the semantics of a program beyond defining the meaning of some forms of undefined behavior" and then says profiles insert "run-time checks."

Runtime checks are overhead. The paper says profiles don't impose "inessential overheads." But who decides what's essential? A bounds check on vector::operator[] is essential for safety and inessential for performance. You can't have both.

The zero-overhead principle was the constraint that prevented Stroustrup from enforcing safety in the first place - he says so explicitly in the introduction. Profiles tries to thread the needle, but the needle hasn't gotten any wider in 40 years.

Edit: to be clear, I'm not saying profiles is wrong. I'm saying the paper should acknowledge the tension directly instead of papering over it with "inessential."
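The trade already exists in miniature in the standard library, which makes it easy to sketch: operator[] is the unchecked/UB side, at() is the checked/error-action side that a profile's runtime checks would generalize.

```cpp
#include <stdexcept>
#include <vector>

// Unchecked: no branch, no cost; out-of-range access is UB.
int last_unchecked(const std::vector<int>& v) {
    return v[v.size() - 1];
}

// Checked: one branch per access; out-of-range becomes a defined error action
// (std::out_of_range) instead of reaching UB.
int last_checked(const std::vector<int>& v) {
    return v.at(v.size() - 1);
}
```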

u/hot_path_or_die 34 points 2 hr. ago

runtime checks are not zero overhead. the paper acknowledges this in one sentence and then moves on. pick one: safety or speed. you can't have both without a LOT more nuance than this paper provides.

u/interview_ptsd 45 points 2 hr. ago

imagine explaining profiles to a recruiter. "what are your safety profile preferences?" "I prefer type_safety with resource_safety but I opt out in my ISR handlers." recruiter: closes laptop

u/recruiter_brain_syndrome 78 points 1 hr. ago

"do you know profiles?" "I know LinkedIn profiles." hired.

u/this_is_my_personality_now 12 points 1 hr. ago

this paper has been my entire personality for the last month and I'm not ok