Document: P3666R3
Author: Jan Schultke
Date: 2026-02-21
Audience: EWG, LEWG
Revision: 3
Jan Schultke's proposal to bring C23's _BitInt to C++ as a set of fundamental types, with std::bit_int<N> and std::bit_uint<N> alias templates for the C++-friendly spelling. R3 removes std::simd and std::atomic support from the MVP scope, rebases wording on N5032, and tracks WG14's approval of _BitInt(1) via N3747.
The core pitch: C has had _BitInt since C23, Clang has shipped it as an extension for years, GCC 14+ supports it in C mode, and C++ currently has no portable way to call a C function taking _BitInt(128). The paper argues for fundamental types over a class template on the basis of bit-field usage, the unsigned combiner syntax, narrowing detection in list initialization, and the enormous volume of operator overloads a library type would need.
Key design decisions still in play: signed overflow stays UB (matching C), implicit conversions remain permissive (matching C), and BITINT_MAXWIDTH minimum is only 64 (matching C). Sensing a theme? This is a C compatibility proposal first, with C++ ergonomics as a secondary concern.
This thread has been automatically created for P3666R3 - Bit-precise integers.
Author: Jan Schultke · Audience: EWG, LEWG · Date: 2026-02-21
Reminder: be civil and constructive. Paper authors sometimes read these threads.
Bit-precise integers: for when you need your compile times to be precise down to the bit too.
"Minimal viable product" that touches 25+ standard library headers. Only in C++ is this considered minimal.
I don't care about fundamental vs library. I don't care about _BitInt(1). I don't care about the wb literal suffix. I just want std::bit_int<128> to be guaranteed to exist on every major platform. This paper can't even promise that.

You and 90% of the people who will ever use this feature. But getting 128 as a guaranteed minimum requires implementers to commit, and MSVC doesn't even have _BitInt yet. Baby steps.

The paper's own Section 3.4 makes the case for me:
Then Section 4.6 argues signed overflow should remain UB. Section 4.7 argues implicit conversions should stay permissive. So the paper identifies integer promotion and overflow as the motivating problem, proposes a new integer family, and then gives it the same UB and similar conversion semantics.
The actual fix is narrower than it sounds:
bit_uint<16> won't promote to int. That's the win. Everything else is inherited wholesale.

The "we can define it later" argument for overflow is the same argument that's kept signed overflow UB since C89. The committee will not come back and change it, because by then there'll be a decade of code relying on the optimization. Section 4.7.3 almost gets there - banning conversions to bool and character types - but that's the low-hanging fruit of low-hanging fruit.

This is the comment I came here for. Though the promotion fix alone is valuable: mixed-sign arithmetic with small unsigned types is a genuine source of bugs in embedded and safety-critical code. Having bit_uint<10> stay 10 bits wide through arithmetic is a real improvement even if everything else stays the same.

The promotion fix is valuable, agreed. But I'm more conflicted about the implicit conversion design. The paper's div_ceil example in Section 4.7.2 is revealing: every line of it breaks for unsigned _BitInt(8) with strict conversions. But isn't that kind of the point? The example has latent mixed-signedness issues, and strict conversions would find them. At minimum I want bool and character conversions banned, as Section 4.7.3 suggests. The cout << your_int_was_actually_a_char problem is real.

Have you actually maintained a C/C++ header that wraps a library using _BitInt? I do, for a crypto library. Mixed signedness everywhere.
unsigned _BitInt(128) passed to functions expecting unsigned long long for the low bits. It's ugly, but it compiles in C. If C++ adds conversion restrictions C doesn't have, I need #ifdef __cplusplus around every third line. The div_ceil template should be rewritten to be correct; the interop header works and should not be broken by a language boundary.

Fair point on the interop headers - I underestimated how much _BitInt code is already in the wild. I'll concede the general conversion case. Still holding the line on bool conversions, though. Nobody is writing bool b = my_bitint; in a C interop header.

I've worked on a compiler frontend with _BitInt support. A few notes:

1. Codegen is genuinely good up to 128 bits on x86-64. Native mul/div pairs, proper strength reduction, the works.
2. Beyond 128 bits, everything degenerates to software emulation. Users will be surprised when bit_int<256> / 10 compiles to a function call instead of a multiply-shift sequence.
3. Template argument deduction from _BitInt(N) will generate... interesting error messages. We already get complaints about array-bound deduction failures.
4. The BITINT_MAXWIDTH of 8 million in Clang is generous. I have seen people try _BitInt(1048576) in C and then wonder why their struct is 128 KB.

The fundamental type approach is correct. We already have the intrinsics internally. Wrapping them in a class template would just add compilation overhead for no benefit.
Does the strength reduction for x / 10 actually fire for _BitInt(128) today, or only up to 64-bit operands?

Rust has had i128 since 1.26 (2018). Just saying.

Rust also doesn't have to maintain ABI compatibility with a language standardized in 1989. Just saying.
Embedded developer here. Section 4.4 says the quiet part loud:
BITINT_MAXWIDTH being only 64 on some targets means you can't use this for the one thing hardware people actually want: portable 128-bit multiply for fixed-point DSP. The paper's own motivation in Section 3.1 mentions Q32.32, which requires 128-bit intermediates. If your target's MAXWIDTH is 64, you're back to compiler extensions or inline assembly.

That said, the no-promotion guarantee is genuinely useful. If I have a 10-bit ADC value, I want bit_uint<10> arithmetic to stay 10 bits wide. That's the real win for us, not the big numbers.

N1692 (2004), N1744 (2005), N4038 (2014), P3140 (2024), P3639 (2025), P3666 (2025). Multi-precision integers are the networking of numerics. Always five years away.
The make_signed/make_unsigned change in Section 5.13 is easy to miss but important. Without it, adding bit-precise integers silently changes make_unsigned_t<int> from unsigned int to _BitInt(32), because the latter has lower conversion rank.

The fix is correct: standard integer types in, standard integer types out. Bit-precise types come out only when you feed bit-precise types in. But this is exactly the kind of "existing code silently changes meaning" landmine that makes standardizing new integer types terrifying. How many codebases use make_unsigned_t in template metaprogramming and assume they get back a standard integer?

Can we talk about Section 4.6? The paper lists "wrapping" and "wrapping + erroneous" as alternatives for signed overflow and then picks... UB. Again.
The "it can be changed later" argument is wishful thinking. When has the committee ever gone back and defined previously-undefined behavior for an existing type?
The UB is useful for optimization. x + 1 > x folds to true with UB semantics; with wrapping it can't. For code where performance is the entire point - crypto, HFT, DSP - this matters.

The erroneous-behavior compromise exists exactly for this case. Wrap in release, trap in debug. Rust does it. It works. We just refuse to try.
Any word on MSVC support? The table in Section 3.3 is a sad little row of red crosses.
MSVC has __msvc_int128 internally but nothing public. They've been waiting for the standard to settle on the spelling before committing resources. Which is reasonable, honestly - the fundamental vs library question was only decided at Sofia.

[deleted]
what did they say?
something about _BitInt(1) being a better bool, the usual