Bigger, Faster, Stronger Types
Jul 14, 2025

We continue to make exciting progress developing new libraries for inclusion in Boost and expanding those already available.
New Libraries
int128
Int128 (https://github.com/cppalliance/int128) is a small library that more or less fell out of work on Decimal. It provides two types: an unsigned 128-bit integer and a signed 128-bit integer. Since my last post the library has become ready for beta, and subsequently production, use. Much effort was put into optimizing every operation across a multitude of architectures. The documentation includes bar charts comparing the performance of our types against Boost.Multiprecision and built-in types (where available). While originally envisioned as just an improvement to Decimal's backend arithmetic, I think it has much broader usefulness.
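To give a feel for the interface, here is a minimal sketch. The single <boost/int128.hpp> header, the boost::int128::uint128_t / int128_t type names, and the mixed-type operators are assumptions based on the repository docs, so check there for the exact spellings.

```cpp
#include <boost/int128.hpp>  // assumed umbrella header; see the repo docs
#include <cstdint>
#include <iostream>

int main()
{
    // Assumed type names: boost::int128::uint128_t and boost::int128::int128_t
    boost::int128::uint128_t u {UINT64_MAX};
    u *= 1000;  // arithmetic continues past 64 bits without overflow

    boost::int128::int128_t s {1};
    s <<= 100;  // shifts, comparisons, and bit operations mirror built-in integers

    std::cout << std::boolalpha
              << (u > UINT64_MAX) << ' '   // true
              << (s > 0) << '\n';          // true
    return 0;
}
```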
Decimal
Decimal (https://github.com/cppalliance/decimal) is a ground-up implementation of IEEE 754 Decimal Floating Point types in C++14, co-authored with Chris Kormanyos.
In January we had our formal review for inclusion in Boost.
As int128 became more production-ready, we integrated it into Decimal as its new backend integer type.
Not only do we now use int128 as a backend, but the integration also flushed out a few bugs thanks to Decimal's special-functions test suite.
The relationship during co-development has worked out really well.
We also recently merged a new 256-bit integer backend based on developments and lessons learned from int128.
Combined, these changes have yielded speedups of more than 100% for the 128-bit types, and they benefit the 64-bit types to a lesser degree as well.
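For anyone who has not tried the library yet, a short sketch of what usage looks like is below. The <boost/decimal.hpp> umbrella header, the decimal64 type name, and the (significand, exponent) constructor are assumptions taken from the project documentation rather than a guaranteed stable API.

```cpp
#include <boost/decimal.hpp>  // assumed umbrella header
#include <iostream>

int main()
{
    using boost::decimal::decimal64;  // assumed type name

    // Construct values from an integer significand and a base-10 exponent
    // so each value is represented exactly in decimal.
    const decimal64 a {1, -1};   // 0.1
    const decimal64 b {2, -1};   // 0.2
    const decimal64 c {3, -1};   // 0.3

    // Unlike binary floating point, 0.1 + 0.2 == 0.3 holds exactly here.
    std::cout << std::boolalpha << (a + b == c) << '\n';  // true
    return 0;
}
```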
Discussions are ongoing in the Cpplang Slack channel #boost-decimal.
Existing Libraries
Math
As posted a few versions ago, Boost.Math began offering support for running on GPUs with NVCC, NVRTC, and SYCL. We have received a few bug reports now that this functionality is seeing more use. For Boost 1.89 we have put considerable effort into fixes and internal restructuring to address the issues people have been having.
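As a rough illustration of the GPU support, the sketch below calls a special function from an NVCC-compiled kernel. It assumes boost::math::erf is among the GPU-enabled functions and that its header can be included directly in a .cu translation unit; treat it as a sketch under those assumptions rather than a verified build recipe.

```cpp
// example.cu -- compile with nvcc; assumes boost::math::erf has device support
#include <boost/math/special_functions/erf.hpp>
#include <cuda_runtime.h>
#include <cstdio>

__global__ void erf_kernel(const double* in, double* out, int n)
{
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
    {
        out[i] = boost::math::erf(in[i]);  // same call on device as on host
    }
}

int main()
{
    constexpr int n = 256;
    double h_in[n];
    double h_out[n];
    for (int i = 0; i < n; ++i) { h_in[i] = i / 64.0; }

    double* d_in {};
    double* d_out {};
    cudaMalloc(&d_in, n * sizeof(double));
    cudaMalloc(&d_out, n * sizeof(double));
    cudaMemcpy(d_in, h_in, n * sizeof(double), cudaMemcpyHostToDevice);

    erf_kernel<<<1, n>>>(d_in, d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(double), cudaMemcpyDeviceToHost);

    std::printf("erf(1.0) ~= %f\n", h_out[64]);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```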
Multiprecision
For the first time in a while, Boost.Multiprecision has a new backend type: cpp_double_fp_backend.
This project was started during GSoC 2021 and has finally come to full fruition.
The cpp_double_fp_backend back-end is the sum of two IEEE floating-point numbers combined to create a type having roughly twice the composite width of one of its parts. It is used in conjunction with number and acts as an entirely header-only C++ floating-point type.
If you need more precision than a double with less computational expense than arbitrary-precision types, look out for this in Boost 1.89.
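Here is a rough sketch of what wrapping the new back-end in number might look like. The header path and the backends namespace are assumptions based on the development repository; the names shipped in Boost 1.89 may differ, so consult the Multiprecision docs.

```cpp
#include <boost/multiprecision/cpp_double_fp.hpp>  // assumed header name
#include <boost/multiprecision/number.hpp>
#include <iomanip>
#include <iostream>
#include <limits>

int main()
{
    namespace mp = boost::multiprecision;

    // Wrap the back-end in number<>, as described above; template and
    // namespace names are assumptions based on the development repo.
    using double_double = mp::number<mp::backends::cpp_double_fp_backend<double>>;

    const double_double third = double_double(1) / 3;

    // Roughly twice the significand width of a plain double.
    std::cout << "digits10: "
              << std::numeric_limits<double_double>::digits10 << '\n';
    std::cout << std::setprecision(std::numeric_limits<double_double>::digits10)
              << third << '\n';
    return 0;
}
```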