Joaquín's Q4 2023 Update

Jan 10, 2024

During Q4 2023, I’ve been working (mostly in collaboration with Chris) on the following areas:

  • Implemented bulk visitation for boost::concurrent_flat_[map|set]. In short, bulk visitation visits a bunch of elements at once, so instead of writing:

std::array<int, N> keys;
for(const auto& key: keys) {
  m.visit(key, [](auto& x) { ++x.second; });
}
we can do this:

m.visit(keys.begin(), keys.end(), [](auto& x) { ++x.second; });

This functionality is not provided for mere syntactic convenience: Boost.Unordered speeds up the entire process by pipelining the different internal stages of each individual visitation, which results in performance improvements of 40% and more. The article “Bulk visitation in boost::concurrent_flat_map” discusses this new feature in much detail.
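To give a feel for why batching helps (this is a simplification, not Boost.Unordered’s actual internals), here’s a toy open-addressing table whose bulk_visit splits lookups into two stages: first compute every key’s home slot and prefetch it, then do the actual probing once those cache lines are already on their way. All the names here (ToyMap, bulk_visit) are made up for illustration, and __builtin_prefetch assumes GCC/Clang:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Toy open-addressing table with a two-stage bulk lookup (illustrative only).
struct ToyMap {
  struct Slot { int key{}; int value{}; bool used = false; };
  std::vector<Slot> slots;

  explicit ToyMap(std::size_t n) : slots(n) {}

  std::size_t home(int k) const { return std::hash<int>{}(k) % slots.size(); }

  void insert(int k, int v) {
    for (std::size_t i = home(k);; i = (i + 1) % slots.size()) {
      if (!slots[i].used || slots[i].key == k) { slots[i] = {k, v, true}; return; }
    }
  }

  template <class It, class F>
  void bulk_visit(It first, It last, F f) {
    // Stage 1: compute home slots for the whole batch and issue prefetches.
    std::vector<std::size_t> homes;
    for (It it = first; it != last; ++it) {
      homes.push_back(home(*it));
      __builtin_prefetch(&slots[homes.back()]);
    }
    // Stage 2: probe; the memory accesses of all lookups overlap, so each
    // individual cache miss is (partially) hidden behind the others.
    std::size_t j = 0;
    for (It it = first; it != last; ++it, ++j) {
      for (std::size_t i = homes[j];; i = (i + 1) % slots.size()) {
        if (!slots[i].used) break;                    // key not present
        if (slots[i].key == *it) { f(slots[i]); break; }
      }
    }
  }
};
```

Interleaving the memory accesses of many lookups like this is the essence of the pipelining trick; the real implementation is considerably more refined.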

  • Removed some unneeded using declarations, contributed some hardening code, and revamped the repo’s
  • Shipped Boost.Unordered 1.84.
  • Began exploratory work towards adding new containers based on perfect hashing. The key idea behind a perfect hash map is that its elements are known in advance at initialization time, which allows for the construction of an ad hoc hash function guaranteeing zero collisions (for the given set of elements). There’s a tradeoff between lookup times (which can be extremely fast thanks to the zero-collision guarantee) and construction times (typically much longer than for a classical hash map); moreover, elements can’t be inserted or deleted once the map is built. So far we have explored two well-known techniques from the literature for the generation of the associated perfect hash function: Hash, Displace and Compress (without the compress part) and the algorithm from Fredman, Komlós and Szemerédi (FKS), with promising results. Progress, however, has been slower than expected, so the target for new perfect containers in Boost.Unordered is Boost 1.86 (Aug 2024).
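As a sketch of the FKS flavor (my toy reconstruction of the classical scheme, not the code being considered for Boost.Unordered), the construction is two-level: a first-level hash scatters the n keys over n buckets, and each bucket of size b gets its own table of size b² plus a seed that is retried until the bucket’s keys land collision-free, which by the birthday bound takes only a couple of attempts on average:

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Toy FKS-style perfect hash set for a fixed set of int keys (illustrative).
struct FksTable {
  struct Bucket { std::uint64_t seed = 0; std::vector<std::optional<int>> slots; };
  std::vector<Bucket> buckets;

  // A simple seeded mixer; any decent hash family works here.
  static std::size_t mix(int k, std::uint64_t seed, std::size_t mod) {
    std::uint64_t h = seed ^ (static_cast<std::uint64_t>(k) * 0x9E3779B97F4A7C15ull);
    h ^= h >> 29; h *= 0xBF58476D1CE4E5B9ull; h ^= h >> 32;
    return static_cast<std::size_t>(h % mod);
  }

  explicit FksTable(const std::vector<int>& keys)
      : buckets(keys.empty() ? 1 : keys.size()) {
    // Level 1: distribute the keys over n buckets.
    std::vector<std::vector<int>> groups(buckets.size());
    for (int k : keys) groups[mix(k, 0, buckets.size())].push_back(k);
    // Level 2: per bucket, a table of size b^2 and a seed retried until
    // the bucket's keys map injectively (zero collisions).
    for (std::size_t i = 0; i < buckets.size(); ++i) {
      const auto& g = groups[i];
      if (g.empty()) continue;
      std::size_t m = g.size() * g.size();
      for (std::uint64_t seed = 1;; ++seed) {
        std::vector<std::optional<int>> slots(m);
        bool ok = true;
        for (int k : g) {
          auto& s = slots[mix(k, seed, m)];
          if (s) { ok = false; break; }
          s = k;
        }
        if (ok) { buckets[i] = {seed, std::move(slots)}; break; }
      }
    }
  }

  // Lookup never probes: one first-level hash, one second-level hash, done.
  bool contains(int k) const {
    const auto& b = buckets[mix(k, 0, buckets.size())];
    if (b.slots.empty()) return false;
    const auto& s = b.slots[mix(k, b.seed, b.slots.size())];
    return s && *s == k;
  }
};
```

Note how the tradeoff mentioned above shows up directly in the code: construction loops retrying seeds, while lookup is two hashes and a single comparison with no probing whatsoever.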
  • After our launch of boost::concurrent_flat_map, a new contender called ParlayHash has arisen. ParlayHash achieves very good performance in massively parallel scenarios (dozens of cores) thanks to its smart lock-free design, which relies on epochs for the reclamation of erased elements. This design imposes some limitations not present in boost::concurrent_flat_map, most notably that elements must be immutable, but its excellent performance has spurred Fernando and me to begin exploratory work towards adopting similar techniques in the open-addressing context we use. It’s currently too early to know whether this work will result in the addition of new concurrent containers to Boost.Unordered. As a spin-off of this activity, a variant of boost::concurrent_flat_map with almost lock-free insertion has been implemented; the decision on whether it will be officially merged is still pending.
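The epoch idea itself can be sketched independently of any hash table. What follows is a deliberately sequential toy (real epoch-based reclamation is concurrent and much subtler, and ParlayHash’s actual scheme differs in its details): readers announce the epoch in which they entered a critical section, erased nodes are parked in a limbo list stamped with the current epoch, and a node is freed only once every announced epoch is past its stamp:

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Sequential toy model of epoch-based reclamation (illustrative only).
struct EpochDomain {
  std::uint64_t global_epoch = 1;
  // Epoch each "thread" announced on entering a read-side critical
  // section (0 = not currently inside one).
  std::vector<std::uint64_t> announced;
  struct Retired { int* p; std::uint64_t epoch; };
  std::vector<Retired> limbo;   // erased but not yet freed
  std::size_t freed = 0;

  explicit EpochDomain(std::size_t threads) : announced(threads, 0) {}

  void enter(std::size_t t) { announced[t] = global_epoch; }
  void leave(std::size_t t) { announced[t] = 0; }

  // Erasure never frees immediately: the node is stamped and parked.
  void retire(int* p) { limbo.push_back({p, global_epoch}); }

  // Advance the epoch and free the nodes that no announced reader can
  // still be holding a reference to.
  void collect() {
    ++global_epoch;
    std::uint64_t oldest = global_epoch;
    for (auto e : announced)
      if (e != 0 && e < oldest) oldest = e;
    std::vector<Retired> keep;
    for (auto& r : limbo) {
      if (r.epoch < oldest) { delete r.p; ++freed; }
      else keep.push_back(r);
    }
    limbo = std::move(keep);
  }
};
```

The point of the scheme is that erasure never blocks on readers: the eraser simply defers the free, and reclamation catches up once the slow readers have moved on.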

New website

  • I’ve contributed a small section on tweet proposals. Although the presence of Boost on social media has increased notably in the last few years, I think much more needs to be done, and it has to be done with contributions from the entire community.

Looking back and forward

I began collaborating with the C++ Alliance almost two years ago, when I was gently hooked by Vinnie and Peter to work on the evolution project for Boost.Unordered alongside my colleague Chris Mazakas. The experience so far has been a joyous one, and I’ve had the opportunity to meet and work with a group of outstanding professionals from all over the globe. Braden Ganetsky recently joined the Boost.Unordered maintenance team, and it’s been my pleasure to guide him through the onboarding process.

Going forward, I feel that most of the goals for Boost.Unordered put forth by Peter Dimov in 2022 have been met, and it’s only natural that activity in this library will decrease over the course of this year. I’m totally open to new challenges for the evolution of Boost, particularly if they’re math-oriented and can advance the state of the art for C++ in general; drop me a line if you have an idea in mind!
