Document: P3973R0
Author: Daniel Towner (Intel)
Date: 2026-01-19
Audience: LEWG, SG1
Proposes std::bit_cast_as<T>(simd_vec) for reinterpreting simd objects at different element granularities. Instead of std::bit_cast<vec<uint16_t, 8>>(bytes) with manually computed element counts, you write std::bit_cast_as<uint16_t>(bytes) and the count is inferred automatically. Brings std::simd to parity with platform intrinsics like _mm256_castps_si256. Generalizes naturally to std::array and std::span.
This is long overdue. Platform intrinsics have had _mm256_castps_si256 forever: same bits, different element type, zero overhead. std::simd made you spell out the entire target type with manual count computation. bit_cast_as<uint32_t>(floats) is the natural C++ equivalent, and it's embarrassing we didn't have it from the start.

The compile-time size verification is the key feature.
bit_cast_as<uint32_t>(vec<uint8_t, 15>) won't compile, because 15 bytes doesn't divide evenly into 4-byte elements. You catch the error at compile time instead of at runtime or in a code review.

The generalization to std::array is natural and useful. bit_cast_as<uint16_t>(std::array<uint8_t, 16>) returning std::array<uint16_t, 8> is exactly what protocol-parsing code needs. The std::span variant returning a view (not a copy) is the right semantic choice too.

The paper depends on P3983R0 for array-like layout guarantees in simd. Without guaranteed contiguous element storage, bit_cast between simd types isn't portable. The dependency is well documented and the prerequisite paper is in the pipeline.

On naming, bit_cast_as vs. as_elements: the paper chose bit_cast_as because the relationship to std::bit_cast should be obvious. I agree. When you see bit_cast_as you know this is bit-level reinterpretation, not a semantic conversion.

Intel's simd implementation has had this internally since early development. It's widely used both in the library internals (interfacing with compiler intrinsics) and in user code (wireless signal processing, packet handling). Production-tested.