Document: P2900R14
Authors: Joshua Berne, Timur Doumler, Andrzej Krzemieński
Date: 2025-02-13
Audience: CWG, LWG (plenary-approved)
Link: wg21.link/p2900r14
It happened. Contracts are in C++26. After the infamous removal from C++20, five years of work in SG21, fourteen revisions, and a Wroclaw EWG vote that had twelve Strongly Against, the committee approved P2900R14 at the Hagenberg plenary today.
The paper gives you three kinds of contract assertions: preconditions (pre(expr)), postconditions (post(r: expr)), and assertion statements (contract_assert(expr)). There are four evaluation semantics - ignore, observe, enforce, and quick_enforce - and which one your compiler uses is implementation-defined. A new <contracts> header provides std::contracts::contract_violation and lets you install a custom violation handler.
Notable design decisions in R14: predicates implicitly see locals and *this as const, virtual functions are excluded entirely (stripped at Hagenberg), and UB in contract predicates is... just UB. The committee voted against reducing it. There are related papers worth reading: P2899R1 (rationale), P3506R0 ("P2900 Is Still Not Ready"), P3265R3 ("Ship Contracts in a TS"), and P3573R0 ("Contract Concerns").
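For those who haven't read the paper, the three assertion kinds summarized above look roughly like this. This is a sketch of the P2900 syntax with invented names; no shipping compiler implements C++26 contracts yet, so treat it as illustrative:

```
// Illustrative only: P2900 syntax, not accepted by any shipping compiler yet.
int clamp_rpm(int rpm)
    pre(rpm >= 0)                 // precondition: what the caller must ensure
    post(r: r <= 12000)           // postcondition: r names the return value
{
    contract_assert(rpm >= 0);    // assertion statement, usable inside a body
    return rpm > 12000 ? 12000 : rpm;
}
```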
Reminder: the paper authors read this subreddit. Argue the design, not the people. Thread is being watched.
Contracts in the C++ standard. In my lifetime. I genuinely did not think I would see this day. What's next, networking?
Don't.
give it another 14 revisions and a couple more removals from the working draft
Fourteen revisions. The paper went through more iterations than my company's product roadmap and unlike our product roadmap it actually shipped something.
12 SA votes in EWG. Twelve. And they shipped it anyway. Committee gonna committee.
WG21 consensus doesn't mean unanimity. The Wroclaw vote was 25 SF, 17 F, 0 N, 3 A, 12 SA. That's 42 in favor vs 15 against. By WG21 rules, that's consensus - and the bar for "consensus against" on a removal poll at Hagenberg was not met either (9|8|3|19|41 against removing it).
You can disagree with the model but the 12 SA is not some procedural scandal. Those people filed their objections, the chair noted them, the direction stood. That's how it works.
12 SA out of 57 is 21%. One in five people in the room thought this was bad enough to register the strongest possible objection. That's not "consensus." That's a hostage situation where 42 people want to leave the building and 12 people are chained to the radiator.
For context, C++20 modules had comparable SA counts at various stages. Ranges had SA votes. Coroutines had SA votes. The bar is not zero dissent - it's whether dissent rises to the level where the chair determines no consensus. It didn't.
at least it's not sender/receiver
I said don't.
The fundamental tension in this paper is one nobody in this thread is going to talk about because the sarcasm-to-signal ratio is approximately 15:1, so let me be the signal.
P2900 standardizes contract predicates that are evaluated at runtime. The predicate in `pre(ptr != nullptr)` is a C++ expression evaluated in the caller's context. Here's the thing: if `ptr` is a dangling pointer, evaluating `ptr != nullptr` is undefined behavior. The contract predicate - the thing you wrote to detect the bug - triggers UB before it can catch anything.

The committee voted at Hagenberg (9|7|9|22|19) against reducing UB in contract predicates. The safety feature has undefined behavior, and the committee voted to keep it that way.
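A minimal sketch of that failure mode, using invented names and C++26 contracts syntax that no shipping compiler accepts yet:

```
// Illustrative only: C++26 contracts syntax.
void use(const int* ptr)
    pre(ptr != nullptr)   // the predicate is an ordinary C++ expression
{
    // ...
}

int main() {
    int* p = new int(42);
    delete p;
    use(p);  // p is dangling: even evaluating ptr != nullptr is UB,
             // so the check cannot reliably fire before the bug does
}
```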
The argument is coherent if you accept the premise: contract predicates are C++ expressions, C++ expressions can have UB, and special-casing contracts would require defining a "safe subset" of expressions that nobody agrees on. Fair enough.
But the consequence is that contracts don't improve the worst case. They improve the average case - you catch the nulls and the out-of-ranges and the precondition violations that are straightforward to check. The lurking UB - the dangling pointers, the data races, the type-punning - remains invisible to the contract system because evaluating the check for those conditions is itself UB.
This is probably the right tradeoff. But the marketing of "contracts improve safety" needs an asterisk the size of section 6.8.4.
Edit: some people are reading this as "contracts are bad." I'm not saying that. I'm saying the gap between what contracts promise and what they deliver is larger than the abstract suggests, and the committee made a deliberate choice to keep that gap.
the contract checking your null pointer triggers UB on the null pointer. peak C++. you genuinely cannot parody this language.
This is the best comment in the thread so thank you. To make the point even more concrete:
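A hypothetical sketch of the scenario (invented names; C++26 contracts syntax, illustrative only):

```
// Hypothetical: a span of pointers with two preconditions.
void drain(std::span<int* const> queue)
    pre(!queue.empty())
    pre(queue[0] != nullptr);  // reads through the span's storage:
                               // if that storage is dangling, this is UB
```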
If someone passes a `span` constructed from a dangling pointer, the second precondition dereferences invalid memory to compare against nullptr. The contract doesn't catch the bug - it joins the bug.

The UB argument is a category error. In my world - STM32, no heap, deterministic timing - the predicates I write are things like `pre(len <= BUFFER_SIZE)` and `pre(state == State::READY)`. Integer comparisons. Enum comparisons. No pointers to dangle. No allocations to corrupt. The UB surface in a well-written embedded precondition is zero.

The people worried about UB in predicates are the people who would write predicates with UB. Don't do that.
You're right for your domain and wrong about the general case. The standard doesn't ship to your domain. It ships to everyone. And "don't write UB in your predicates" is the "don't write bugs" school of software engineering - correct, useless.
The real cost isn't your STM32. It's the team at a large company that refactors a data structure, invalidates a pointer that a postcondition dereferences, and gets a heisenbug that reproduces only in `observe` mode because predicate evaluation order differs from `enforce`.

I hear you. But the alternative was (a) no contracts in C++26, (b) a "safe predicate" sublanguage that would have taken three more cycles, or (c) enforce-only, which the committee rejected 6|1|3|15|24 in Tokyo. Sometimes you ship the pragmatic thing.

The heisenbug you describe is real. But we already have that class of bugs with `assert()` and `NDEBUG`. Contracts don't make the situation worse - they formalize what people are already doing.

We agree on the tradeoff. We disagree on whether the paper is honest about it. The introduction says contracts "help programmers express and check function interfaces." It doesn't say "help programmers express and check function interfaces, unless the check itself has UB, in which case you get what you deserve."
An informative note in the standard would cost nothing and might save someone a day of debugging.
Fair. I'd support that note.
I give it two standards cycles before someone proposes P3900 "Contracts for Contracts" - a mechanism for asserting preconditions on your preconditions
P3901: Meta-Contracts: Preconditions for Your Preconditions
P3902: Pre-Meta-Contracts: Asserting That Your Meta-Contract Predicates Are Well-Formed
P3903: Revised Revised Contracts Revision Revision (R47)
P3904: Removing Contracts From C++29 Due To Insufficient Implementation Experience Of Meta-Contracts
Can someone ELI5 what this actually does? I started learning C++ eight months ago and this paper is like 200 pages and everyone here seems either furious or ecstatic and I can't tell which reaction is correct
Short version: you can now write rules about what a function expects and what it promises, and the compiler can check those rules at runtime.
If someone calls `divide(10, 0)`, the precondition fails and you get told about it instead of just getting a divide-by-zero. The controversy is over the details - when the checks run, what happens when they fail, what they're allowed to see.

Both reactions are correct. It's a genuinely useful feature with genuinely contentious design decisions.
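Concretely, the divide example being discussed would look something like this (sketch; C++26 contracts syntax, not compilable with today's compilers):

```
int divide(int a, int b)
    pre(b != 0)   // the rule about what the function expects
{
    return a / b;
}

// divide(10, 2): fine.
// divide(10, 0): if the chosen semantic checks the precondition, the
// violation handler is invoked instead of hitting a divide-by-zero.
```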
it lets you write "this should never be null" and then the compiler goes "lol maybe I'll check that, maybe I won't, depends on how I'm feeling today (implementation-defined)"
so it's like assert() but fancier?
Genuinely yes. But the "fancier" part includes: it's part of the function declaration (so tools can see it), it has four different checking modes, it has a customizable handler, and the compiler knows what it is rather than it being a macro. The "fancier" is doing real work.
Meanwhile in Rust, the type system prevents most of these bugs at compile time. No runtime checks needed. No "implementation-defined evaluation semantics." No UB in your safety checks. Just... types.
I know this gets said every thread but maybe the fact that it gets said every thread is trying to tell you something.
Rust's type system doesn't express "the input buffer must contain at most 64 bytes." It doesn't express "the motor RPM must be between 0 and 12000." It doesn't express "this function must only be called after init() has been called." These are semantic constraints. They live in the domain, not in the type system. Contracts address exactly this class of bug.
Also, Rust has `debug_assert!`, which is literally the same concept. So.

there it is. nine comments deep. right on schedule
every thread. every single thread. we could post "P9999R0: Adding Semicolons To C++" and someone would show up to explain how Rust's borrow checker makes semicolons unnecessary
Let me list what is not in the MVP:
1. No contracts on virtual functions (removed at Hagenberg, vote: 20|24|13|14|2)
2. No contracts on function pointers
3. No contracts on coroutines
4. No `assume` semantic (consensus against at St. Louis, 0|0|3|18|11)
5. No ODR for contract-bearing entities
6. No contracts in the standard library itself
What's left: free functions, non-virtual member functions, and templates. Which is genuinely useful! But people expecting to slap `pre()` on their virtual interface hierarchies are going to have a bad time.

The phasing argument - ship the core, iterate on the extensions - is reasonable. The risk is that "iterate later" in WG21 means "revisit in 2031."
no virtual functions? in an object-oriented language? in 2025? outstanding move
The virtual function question is genuinely hard. Do the derived class contracts have to be at least as loose as the base? Stricter? Checked by the compiler? At runtime? Liskov substitution is a can of worms when you try to specify it precisely. Cutting it from MVP was the right call - getting the core semantics stable matters more than coverage.
See P2899R1 for the rationale on what made it into the MVP and what didn't.
ah yes, the "we'll add it later" approach, famously reliable in C++ standardization. see also: modules (2020), still broken in 2025
One thing people miss: even without contracts in the standard library, nothing stops implementations from adding them as QoI. GCC and libstdc++ could absolutely add `pre(!empty())` to `std::vector::front()` as a non-normative extension. The mechanism is there.

For those wondering "didn't we already have this?" - a comparison:
Boost.Contract - library-based, uses RAII tricks and macros, supports class invariants, old values, subcontracting (virtual function Liskov checks). Works today. Verbose. Significant compile-time cost. Nobody uses it.
GSL `Expects()`/`Ensures()` - macros. Configurable behavior (terminate, throw, nothing). Widely used in codebases that follow the Core Guidelines. Dead simple. No compiler integration.

`assert()` - the OG. Binary: on or off via `NDEBUG`. No postconditions. No customization. Everyone uses it.

P2900 contracts - language feature. Four evaluation semantics. Compiler sees the annotations. Part of the function declaration. Customizable violation handler. No class invariants yet. No old values yet. No subcontracting.
So P2900 is roughly "GSL Expects/Ensures but in the language, with more modes, and the compiler knows about it." The things Boost.Contract does that P2900 doesn't (invariants, subcontracting, old values) are explicitly deferred to future papers.
The killer feature isn't any of the runtime behavior - it's that the contracts are visible in the declaration. Static analysis tools, IDEs, documentation generators, and eventually optimizers can read `pre(x > 0)` and do something with it. A macro is invisible to everything except the preprocessor.

From the implementation side: LLVM #127870 tracks the Clang P2900 work. GCC has experimental patches. The core feature is straightforward to lower - preconditions become checks at function entry, postconditions at exit, with a call to the violation handler on failure.
The interesting codegen question is what happens with `quick_enforce`. In theory, the compiler terminates immediately without calling the handler, which means it can be lowered to something like a conditional trap instruction on architectures that have one (ARM `brk`, x86 `ud2`). Zero function-call overhead. That's the "we measured this at Bloomberg and the overhead was acceptable in production" story.

The complicated part is `observe` - the handler runs, and then execution continues. The compiler has to emit a call, handle potential exceptions from the handler, and then keep going as if nothing happened. This has real implications for optimization - the compiler can't assume the precondition is true after an `observe`-mode check.

"the handler runs, execution continues" is a wild sentence. the precondition failed. the function's invariant is violated. and we just... keep going. what could go wrong.
Something that's going to surprise people. In P2900, postcondition predicates see `*this` as `const`. A postcondition that calls `begin()` and `end()` compiles, because those have const overloads. But a postcondition that calls an `is_valid()` member that is not const-qualified is ill-formed. The Hagenberg vote on removing constification was 9|7|6|37|14 - overwhelmingly against removing it. The rationale: predicates should be side-effect-free.

Reasonable. But it means you need const-correct APIs to use postconditions effectively. Which is... not the state of most C++ codebases I've worked in.
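The two cases might look like this (hypothetical member names; C++26 contracts syntax, illustrative only):

```
struct Buffer {
    const int* begin() const;
    const int* end() const;
    bool is_valid();   // note: not const-qualified

    void resize(std::size_t n)
        post(end() - begin() == static_cast<std::ptrdiff_t>(n));
        // OK: const overloads of begin()/end() exist

    void reset()
        post(is_valid());
        // ill-formed: *this is const inside the predicate,
        // but is_valid() is not a const member
};
```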
wait, `this` is const in a postcondition? I just finished learning about const correctness and now you're telling me the safety feature requires it?

The point is that a predicate that mutates state would mean your program behaves differently depending on whether contracts are checked. Imagine a postcondition that increments a counter - in `enforce` mode the counter goes up, in `ignore` mode it doesn't. That's an ODR-style nightmare. Constification kills this entire class of bugs.

I understand the rationale. I just think it's going to be a papercut for 80% of codebases that aren't religiously const-correct, which is most of them. The feature that requires you to fix your const-correctness before you can use it is the feature that won't get used.
The violation handler story is more complex than most people realize. The handler can throw. This was explicitly preserved - St. Louis voted 0|5|4|9|16 against banning exceptions from the handler.
So consider: you have a `noexcept` function with a precondition. The precondition fails. The handler throws. Now what? You're throwing out of a `noexcept` function. That's `std::terminate`. Which is maybe what you wanted, but it means the handler's "recovery" path is silently killed.

The interaction matrix between evaluation semantics, `noexcept`, and handler behavior is something I expect to generate a lot of Stack Overflow questions in 2027.

And it gets worse with coroutines (if contracts were allowed on them, which they aren't in the MVP). A coroutine's precondition would need to be checked before the first suspension point, in the caller's frame. But the coroutine might have already allocated its frame. Cleanup on precondition failure in that context is a whole separate design problem.
I suspect this is one of the real reasons coroutines were deferred, not just the Liskov concerns.
Let me make sure I understand this correctly. P2900 lets me write a function like `divide(int a, int b)` with `pre(b != 0)` in its declaration. And then the compiler is allowed to... not check it. At all. The `pre(b != 0)` is in the source. The compiler sees it. And it goes "nah." Because the evaluation semantic is `ignore`.

I understand why. Performance. Same argument as `NDEBUG` for `assert`. Fine. But `assert` is a macro - it's obviously conditional, you can see `#ifdef NDEBUG` in the header. With P2900, the syntax looks unconditional. You wrote a precondition. It looks like it's part of the function's interface. A new team member reads that code and thinks the division is protected.

That sentence is doing a LOT of work.
"implementation-defined" is the C++ standard's version of "it depends on the deployment"
To be fair, the alternative - mandating enforcement - was polled and decisively rejected (6|1|3|15|24 in Tokyo). The embedded and HFT people cannot afford unconditional runtime checks on every function call. Making enforcement optional is the only way to get both camps to agree.
The syntax question is real though. A precondition looks like it does something. That's a teachability issue worth solving, maybe in compiler warnings.
`quick_enforce` is the semantic that justifies this entire paper for us. We run contract checks in production. Not debug builds, production. On the hot path. When we detect a violated precondition, we want the process dead immediately - no handler, no stack unwinding, no logging. Just trap. Reboot. Failover handles it.

Our current mechanism is a custom `FATAL_CHECK` macro that compiles to `__builtin_trap()`. It works, but the compiler doesn't know it's a precondition, so it can't optimize around it. With `quick_enforce`, the compiler knows the predicate is expected to be true, and if it's false, the process is gone. That's exactly our model.

We can't use `quick_enforce` in a game engine. Process death is not an option when 500,000 players are in a session. We need `observe` - log the violation, continue running, fix it in the next patch. But `observe` means the compiler can't assume the precondition holds, so we lose the optimization benefit you're talking about.

It feels like contracts were designed for your use case (fail fast, restart) and awkwardly stretched to cover ours (fail gracefully, keep running).

You're thinking about it as one semantic for the whole binary. That's not the intent. You use `quick_enforce` on the hot path where you can't afford a handler call. You use `observe` on the non-critical path where you want logging. Different functions, different semantics. The paper doesn't force one mode per translation unit.

The real question for your case is whether your violation handler can be fast enough for frame budgets. If you're calling into a logging system on every violation, that's a different conversation than if you're incrementing an atomic counter.
That's fair. I was assuming per-TU semantics. If we can mix per-function, the story is much better. I'll need to read the paper more carefully on how semantic selection actually works in practice - whether it's a compiler flag or something more granular.
The thing that doesn't get enough discussion: should this have been a TS?
P3265R3 argued for a Contracts TS. The SG21 chairs responded with P3269R0 arguing against. The committee sided with shipping in the IS.
The TS argument: implementation experience is thin. Only experimental compiler branches exist. No production deployments at scale. The C++20 contracts failure should have taught us that shipping early leads to pulling features.
The IS argument: a TS delays adoption by 3-6 years. The design has been iterated for five years in SG21 with more committee review than most features. A TS for contracts would have been the first TS for a language feature since Concepts TS, and the Concepts TS showed that TS-based design exploration creates more churn, not less.
Concepts went through a TS. The design changed substantially. Terse syntax was removed. `bool` concepts became a thing. The TS arguably saved C++ from shipping a worse version of concepts.

Contracts went straight to IS. We'll find out in 2028 whether that was wisdom or hubris.
and concepts took 10 extra years. if contracts went through a TS we'd be looking at C++35 delivery. I'll be dead. My codebase will be dead. We'd all be writing Rust by then anyway.
Real question: how are evaluation semantics going to work in practice? Is this a compiler flag? Per-TU? Per-function? Does my CMakeLists.txt need a `target_contract_semantics()`? Does `-fcontracts=enforce` apply to everything including third-party headers?

Because I can already see the bug report: "my release build crashes because some library was compiled with `quick_enforce` and my code passes bad inputs that were fine when the library was compiled with `ignore`."

this is going to be `-fmodules` all over again isn't it

P2900 deliberately leaves this implementation-defined. GCC and Clang will presumably have `-fcontract-*` flags. Whether it's per-TU or more granular depends on the implementation. The standard intentionally doesn't constrain this because different vendors have different models.

The mixed-semantics-across-TUs scenario is a real operational concern, yes. The paper includes a discussion of it.
[removed by moderator]
what did they say?
something about the C++20 removal being a conspiracy and naming names. you know the usual.
Rule 3. Naming committee members to blame them personally is not civil discourse.
real talk: what's the compile time impact? every contract annotation is an additional expression the compiler has to parse, type-check, and potentially instantiate (in templates). Our codebase has ~40k source files. If contracts add even 2% to compile times I'm going to hear about it from management.
In `ignore` mode the predicate still has to be parsed and type-checked (it's not a macro that disappears), but no codegen happens. Expect something comparable to the cost of a `static_assert` expression that's never evaluated. For templates, the predicate is instantiated with the template arguments, so yes, there's a cost proportional to how complex your predicates are.

2% is probably pessimistic for most code. But if you put contracts on every function in a heavily-templated header-only library, you'll feel it.
laughs in compile times
friend, if 2% compile time increase is what makes management notice, you've got bigger problems than contracts
This is entirely redundant. `[[assume(expr)]]` already exists in C++23 and does the same thing. Why do we need a second mechanism? The committee is just reinventing wheels at this point.

`[[assume]]` tells the optimizer the expression is true, period. If it's false, you get UB. It's an optimization hint with zero safety value. Contracts are the opposite - they exist to check the expression. One enables optimizations, the other catches bugs. They are complementary, not redundant. The committee explicitly voted (0|0|3|18|11 at St. Louis) that the MVP should not leave the door open for an `assume` evaluation semantic. They're deliberately keeping contracts and assumptions separate.

For the new people: C++ has tried to standardize contracts before. P0542R5 was voted into the C++20 working draft at Rapperswil in 2018. Then the design started unraveling - disagreements over assume semantics, checking levels, interaction with optimization. By Cologne 2019 the design was so contested that it was pulled entirely from C++20.
SG21 was formed specifically to fix this. Five years of design iteration later, P2900 is a fundamentally different paper with different authors, different semantics, and different design philosophy. The only thing it shares with the C++20 attempt is the word "contracts."
Whether the committee learned the right lessons from 2019 depends on who you ask. The 12 SA voters would say no. The 42 in favor would say yes. We'll know for sure in about three years when implementations mature.
Has anyone read P3506R0? "P2900 Is Still Not Ready for C++26." The list of concerns is not crankery - it's from people who implement compilers.
And P3573R0 raised similar concerns. These aren't fringe positions. These are the people who have to make this work.
Read P3573R0 and then read P3591R0 which is the response. Most of the concerns were already addressed in committee. The TS/readiness debate was thoroughly litigated - the committee voted and the result was clear. At some point "I disagree with the result" needs to be distinguished from "the process was flawed."
Eiffel had this in 1986. Design by Contract with full class invariants, old-value capture, and Liskov-compliant subcontracting. Thirty-nine years later, C++ gets preconditions and postconditions without invariants, without old values, and without subcontracting. The "MVP" is a feature that was complete in another language before most of this subreddit was born.
I'm glad it's happening. I'm just... not going to pretend it's groundbreaking.
C++ speedrunning features from 1986 any%
Master C++26 Contracts in just 30 days! Our exclusive course covers everything from pre/post to violation handlers. Use code REDDIT20 for 20% off. Link in bio.
report and move on
The elephant in the room is the safety narrative. The White House told everyone to stop using memory-unsafe languages. The NSA published guidance. Google and Microsoft publish annual CVE counts showing C++ memory bugs. And the committee's response for C++26 is... a feature that lets you write `pre(ptr != nullptr)` and then the compiler might not check it, and if it does check it and the pointer is dangling, the check itself is UB.

I'm a contracts supporter. I think this is net positive. But let's not pretend this is the answer to the safety question. P3500R1 ("Are Contracts Safe?") asked exactly this and the answer is: contracts are orthogonal to memory safety. They don't prevent use-after-free. They don't prevent buffer overflows. They catch logic errors in the domain layer, which is valuable but different.
If someone tries to market C++26 contracts as "C++ is now safe," push back.
this is the most honest thing anyone has said in this thread
FWIW, Bloomberg has been using contract-like annotations internally for years. Their internal `BSLS_ASSERT` facility is essentially what P2900 standardizes, minus the postconditions. They have data on the overhead, the debugging value, and the false-positive rate across millions of lines of production code. This isn't a theoretical proposal - it's a standardization of established practice from multiple organizations.

The authors know what they're doing. Joshua Berne is at Bloomberg. The design reflects production experience. Whether the committee added the right amount of generality on top of that experience is the question.
scrolled through 87 comments expecting a consensus on whether this is good or bad. got sarcasm, a Rust tangent, a deleted conspiracy theory, two compiler devs disagreeing about trap instructions, and a spam bot. classic r/wg21.
anyway I'm cautiously optimistic. it's not perfect but it's something. better than writing `assert(false && "unreachable")` for another decade.