Jekyll2024-03-11T20:12:14+00:00http://cppalliance.org/feed.xmlThe C++ AllianceThe C++ Alliance is dedicated to helping the C++ programming language evolve. We see it developing as an ecosystem of open source libraries and as a growing community of those who contribute to those libraries.Alan’s Q4 Update 20232024-01-12T00:00:00+00:002024-01-12T00:00:00+00:00http://cppalliance.org/alan/2024/01/12/AlanQ4Update<h1 id="summary">Summary</h1>
<ul>
<li><a href="#mrdocs">MrDocs</a></li>
<li><a href="#handlebars">Handlebars</a></li>
<li><a href="#boost-libraries">Boost Libraries</a></li>
<li><a href="#boost-release-tools">Boost Release Tools</a></li>
<li><a href="#boost-website">Boost Website</a></li>
<li><a href="#c-github-actions">C++ Github Actions</a></li>
</ul>
<h2 id="mrdocs">MrDocs</h2>
<p>Over the last quarter, we continued to work intensely on <a href="https://github.com/cppalliance/mrdocs">MrDocs</a>, a
documentation generator for C++ projects. I’ve been overseeing and reviewing all the work done by the other contributors
in the project. Besides the work done by Krystian, Fernando also joined the project as a contributor.</p>
<p>We’ve been working on making the project at least as competitive as Doxygen + Docca, so we can start using it in a
subset of Boost libraries. In this process, we have also been improving usability and documentation, considering
feedback we received.</p>
<p>In particular, work in Q4 included:</p>
<ul>
<li>Improvements to the documentation, both in javadocs and exposition</li>
<li>Reducing technical debt, including extending APIs and fixing problems with Handlebars templates</li>
<li>Updating CI workflows in coordination with the <a href="#c-github-actions">C++ Github Actions</a> project</li>
<li>Usability improvements, including locating tools relative to the installed executable and many new features for templates</li>
<li>Fixes and improvements to installation procedure</li>
<li>Features such as symbols with links to repositories and many new sections in templates</li>
<li>Auxiliary utilities, such as a complete implementation of <code>std::expected</code> for the project</li>
<li>Generating binaries for the new LLVM version and updating the documentation</li>
<li>Improvements to build scripts with CMake presets and vcpkg manifest files</li>
</ul>
<p>In parallel, Boost.URL has integrated MrDocs, and is temporarily generating the documentation with both
Doxygen+Docca and Antora+MrDocs. The documentation includes an extension to collect reference pages
generated by MrDocs and include them in the Antora documentation. A preview of the documentation
generated by MrDocs is available at https://792.urlantora.prtest2.cppalliance.org/site/url/index.html.</p>
<p>In general, I’ve been responsible for:</p>
<ul>
<li>setting up and maintaining CI for the project;</li>
<li>MrDocs and LLVM release binaries;</li>
<li>build scripts;</li>
<li>setting up and integrating dependencies;</li>
<li>setting up and deploying the Antora toolchains and documentation to the project website;</li>
<li>working on supporting libraries; and</li>
<li>supervising and reviewing the work done by other contributors (Krystian and Fernando);</li>
<li>fixing bugs.</li>
</ul>
<h2 id="handlebars">Handlebars</h2>
<p>MrDocs includes a support library that reimplements the <a href="https://handlebarsjs.com/">Handlebars</a> template engine in C++.
This module is used to generate documentation from templates. This continued to be the MrDocs support
library in which I have been investing most of my time.</p>
<p>MrDocs also includes a support library called “DOM” that provides a C++ interface to type-erased property trees, such as
JSON and Javascript Objects. This module is used by MrDocs to create opaque representations of property trees that can
be used by the Handlebars engine and other library functions. Such representations can come from a variety of sources,
including JSON files, Javascript objects, and internal C++ objects with information parsed by MrDocs.</p>
<p>In Q4, a number of improvements relevant to MrDocs were made to the Handlebars and DOM libraries. The main
feature is the addition of complete bindings for Javascript objects, arrays, and functions. The C++/JavaScript
bindings work both ways, so that C++ objects can be used in Javascript and Javascript objects can be used in C++.
Unit tests for the DOM were also included to complement the Handlebars unit tests.</p>
<p>These changes enabled the original implementation of handlebars.js to be completely removed from the project.</p>
<h2 id="boost-libraries">Boost Libraries</h2>
<p>As in other quarters, the Boost library in which I have been investing most of my time
is <a href="https://github.com/boostorg/url">Boost.URL</a>. The library has been in maintenance mode since our focus shifted to MrDocs,
but there is constant demand for bug fixes and documentation improvements.</p>
<p>In Q4, Boost.URL has integrated MrDocs, and is temporarily generating the documentation with both
Doxygen+Docca and Antora+MrDocs:</p>
<ul>
<li>All the content in the QuickBook documentation has been rewritten in asciidoc format. Alternatives have been
developed to replace special QuickBook features, such as snippets.</li>
<li>The Boost.URL repository now includes an extension to automatically find or
download MrDocs and use it to collect reference pages generated by MrDocs. The reference pages are aggregated
as logical files in the Antora system, which includes them in the Antora documentation.</li>
</ul>
<p>A preview of the documentation generated by MrDocs is available
at https://792.urlantora.prtest2.cppalliance.org/site/url/index.html.</p>
<p>Besides the integration with MrDocs, work in Q4 included:</p>
<ul>
<li>Improvements, simplifications, and a number of extensions to CI</li>
<li>Fixes to previous design choices that were incompatible with the super-project</li>
<li>Fixes to build scripts to support more compilers</li>
<li>Extended integration tests to cover all cases described by the super-project CMake scripts</li>
<li>Fixes to algorithms, such as the <code>parse_query</code> and <code>format</code> functions</li>
<li>GDB pretty printers are now provided</li>
</ul>
<p>Many improvements had to be coordinated with the <a href="#c-github-actions">C++ Github Actions</a> project, which had new
features implemented for these use cases.</p>
<p>In general, I’ve been responsible for:</p>
<ul>
<li>upgrading CI, mostly coordinating with the <a href="#c-github-actions">C++ Github Actions</a>;</li>
<li>maintaining, simplifying, and updating build scripts;</li>
<li>integrating more spec tests, such as the Ada tests included more recently;</li>
<li>including more examples, such as the more recent sanitize-URL example;</li>
<li>fixing documentation content that is out of date; and</li>
<li>fixing bugs.</li>
</ul>
<p>Besides Boost.URL, as usual, I’ve been overseeing and fixing smaller issues with other Boost libraries. In Q4,
only some small work in Boost.StaticString was necessary.</p>
<h2 id="boost-release-tools">Boost Release Tools</h2>
<p>Over the last quarter, I’ve been working on integrating toolchains I developed into the Boost Release Tools to
add support for features desired for the new website. In Q3, I added Antora support to the release tools,
and this has already been deployed in the official Boost release 1.84.0.</p>
<p>In Q4, I opened a pull request to include support for archive variants in the release tools. This pull request is
currently under review and will be considered for inclusion in a period between Boost releases, when priority
shifts away from the <a href="#boost-website">Boost website</a>.</p>
<h2 id="boost-website">Boost Website</h2>
<p>Among the many support projects for the new Boost website, I’ve been helping the most on
<a href="https://github.com/cppalliance/site-docs"><code>cppalliance/site-docs</code></a>, which includes the Boost website documentation as
an Antora project.
Its components represent the “User Guide”, “Contributor Guide”, and “Formal Review” sections of the website.</p>
<p>Since the inception of the project, I’ve been overseeing and reviewing all the work done by the other contributors to
the project.</p>
<p>In general, I continue to be responsible for:</p>
<ul>
<li>reviewing and merging all pull requests to the project;</li>
<li>setting up and maintaining CI for the project;</li>
<li>coordinating with the website project on content uploaded to AWS buckets;</li>
<li>build scripts to be reused by the release tools and previews;</li>
<li>writing sections of the documentation that require technical knowledge;</li>
<li>developing custom Boost/Antora extensions, such as the Boost Macro extension;</li>
<li>maintaining the Antora toolchain and templates; and</li>
<li>adjusting Boost libraries to match formats expected by the website.</li>
</ul>
<h2 id="c-github-actions">C++ Github Actions</h2>
<p><a href="https://github.com/alandefreitas/cpp-actions">C++ Github Actions</a> is a project I created and have been maintaining
since Q3. It is a collection of composable, independent, and reusable Github Actions for any C++ project that needs to
be tested on a variety of compilers and environments.</p>
<p>Both MrDocs and Boost.URL are currently using these actions in their CI. Boost.Http is now also using these actions
in its CI, which gave us the opportunity to get more feedback and improve the actions.</p>
<p>The project includes actions to:</p>
<ul>
<li>Generate a Github Actions Matrix for C++ projects;</li>
<li>Setup C++ compilers;</li>
<li>Install and setup packages;</li>
<li>Clone Boost modules;</li>
<li>Run complete CMake and <code>b2</code> workflows;</li>
<li>Generate changelogs from conventional commits;</li>
<li>Generate summaries; and</li>
<li>Generate time-trace reports and flame graphs</li>
</ul>
<p>In particular, a number of new features have been added to the project in Q4.</p>
<ul>
<li>The matrix action has been extended over time with multiple extra flags and values.</li>
<li>Actions that generate summaries received improvements and new sections.</li>
<li>Actions can now identify many new types of warnings that should be emitted.</li>
<li>The CMake and B2 workflows gained more options and features.</li>
<li>The matrix action now supports composite factors, combinatorial factors, and custom containers.</li>
<li>Documentation has been improved to also use the collector Antora extension.</li>
</ul>
<p>Besides new features, most work went into fixing issues that were revealed by testing the actions in new environments.
About 50 bugs with various levels of severity have been fixed in Q4.</p>Krystian’s Q4 Update2024-01-12T00:00:00+00:002024-01-12T00:00:00+00:00http://cppalliance.org/krystian/2024/01/12/KrystianQ4Update<p>Much like the last, my primary focus this quarter was on MrDocs, with some additional work done on Boost.StaticString and clang.</p>
<h2 id="mrdocs">MrDocs</h2>
<p>The last significant feature MrDocs needed in order to produce documentation on par with Docca was the ability to render overload sets. At a glance this may appear trivial, but MrDocs was designed to render documentation on a per-declaration basis: each declaration would result in a single documentation section being emitted by a generator. This is problematic for overload sets, as overload sets are not declarations. I ended up implementing them as a “view” of the lookup table for a particular scope.</p>
<p>Besides implementing support for rendering overload sets, I further expanded the kinds of declarations supported by MrDocs to include friend declarations, deduction guides, and enumerators. Previously, enumerators were stored as a non-<code>Info</code> type, meaning they could not be found by the name lookup mechanism when referenced with <code>@ref</code>/<code>@copydoc</code>. Adding support for friend declarations also had its own set of challenges due to a lack of support by the clang USR generator. As an interim solution, I’m generating pseudo-USRs for friends by concatenating the USR of their lexical scope with the USR of the referenced declaration or type. While this is sufficient for now, it will ultimately be necessary to submit patches to clang to fix the plethora of bugs in USR generation, as well as add support for newer C++ features such as constraints.</p>
<p>Another problem area I addressed was the representation of qualified names for symbols that were not extracted. Previously, three different kinds of <code>TypeInfo</code> were used to represent “terminal” types (i.e. typenames): <code>BuiltinTypeInfo</code> for builtin types, <code>TagTypeInfo</code> for class and enumeration types, and <code>SpecializationTypeInfo</code> for class template specializations. These types were awkward to work with, required a non-trivial amount of boilerplate, and were incapable of representing a typename that was qualified by a nested-name-specifier that named a symbol that was not extracted. To remedy this, I created a <code>SymbolName</code> type that can represent a qualified-id naming any symbol and replaced the three terminal <code>TypeInfo</code> kinds with <code>NamedTypeInfo</code>.</p>
<h2 id="clang">Clang</h2>
<p>On the clang side of things, I continued work on fixing C++ conformance issues. This included diagnosing friend function specialization definitions (e.g. <code>friend void f<int>() { }</code>), diagnosing unexpanded packs in function template explicit specializations (e.g. <code>template<> void f<Ts>();</code> where <code>Ts</code> is a pack), and improving diagnostics for unexpanded packs in class/variable template partial/explicit specializations.</p>
<p>In terms of in-progress patches, I am currently working on a patch that will significantly improve dependent name lookup – both in terms of conformance and diagnostics. Currently, even obviously ill-formed constructs such as:</p>
<pre><code class="language-cpp">template<typename T>
struct A
{
auto f()
{
return this->x;
}
};
</code></pre>
<p>are not diagnosed until the template is instantiated. Although this behavior is conforming, in less contrived scenarios it would be far better to avoid an avalanche of diagnostic messages by diagnosing this at the point of definition. This is possible primarily due to <a href="http://eel.is/c++draft/temp.dep.type#6">[temp.dep.type] p6</a>:</p>
<blockquote>
<p>If, for a given set of template arguments, a specialization of a template is instantiated that refers to a member of the current instantiation with a qualified name, the name is looked up in the template instantiation context.
If the result of this lookup differs from the result of name lookup in the template definition context, name lookup is ambiguous.</p>
</blockquote>
<p>and <a href="http://eel.is/c++draft/temp.dep.type#5">[temp.dep.type] p5</a>:</p>
<blockquote>
<p>A qualified name is dependent if</p>
<ul>
<li>[…]</li>
<li>its lookup context is the current instantiation and has at least one dependent base class, and qualified name lookup for the name finds nothing</li>
</ul>
</blockquote>
<p>This guarantees that, within the definition of a template, if the
lookup context of a qualified name is that template:</p>
<ul>
<li>if lookup finds any member of the template, then the result of lookup in the instantiation context must also find that member, or</li>
<li>if lookup finds nothing, then the program is ill-formed unless the name is found in a dependent base class.</li>
</ul>Dmitry’s Q4 2023 Update2024-01-12T00:00:00+00:002024-01-12T00:00:00+00:00http://cppalliance.org/dmitry/2024/01/12/dmitrys-q4-update<p>In the fourth quarter of 2023 the functionality for direct parsing in JSON was
finalised and the code was merged into the mainline and then released in
Boost 1.84.0. Thus, I’ve moved to the natural opposite of direct parsing:
direct serialisation. Boost.JSON’s serialisation is less customisable than
parsing, since the demand for custom serialisation is significantly lower. As
a result, the design of the serialiser is quite different from that of the
parser, and hence a different approach had to be taken to implement direct
serialisation. That approach, in my opinion, has a big benefit for the user:
there’s no need for a dedicated direct serializer type, it can be done with
the regular <code>boost::json::serializer</code>. On the other hand, it presents a
different challenge: making changes to <code>serializer</code> in a way that does not
negatively affect its performance too much.</p>
<p>This fight for performance has occupied most of my time in the last quarter.
And it also provided me an opportunity to experiment with different potential
optimisations to the serializer. I would also like to comment that different
C++ implementations sometimes have directly opposite views on what’s better or
worse for performance, which poses quite a conundrum in this line of work. And
finally, this work was greatly influenced by the availability of continuous
integration infrastructure set up by the C++ Alliance, and automatic
benchmarking in particular.</p>
<p>Another positive effect of the C++ Alliance’s CI comes from coverage reporting.
As I was striving to never decrease the rate of code coverage, I discovered
code in the serializer that used to perform a function but has since become
unnecessary due to refactoring.</p>
<p>Overall, the work on the serializer has vastly increased my understanding of how
JSON’s serializer works under the hood. I plan to finish the feature of direct
serialization before the next Boost release.</p>Peter Turcan Q4 2023 Status2024-01-11T00:00:00+00:002024-01-11T00:00:00+00:00http://cppalliance.org/peter/2024/01/11/PeterTurcan-Q4-2023<ul>
<li>
<p>Wrote a section on the Super-project, based on some legacy Wiki pages that needed considerable updating, and on the steps for the boostdep tool, as these covered installing the necessary components. Covered four main topics: the Super-project layout, Getting Started, Library Maintenance, and Library Workflow.</p>
</li>
<li>
<p>Wrote a Best-Practices section on Beneficial Dependencies - the dependencies used by the most libraries of Boost. These include Boost.Config, Boost.Core, Boost.Assert, Boost.StaticAssert, Boost.ThrowException, and for metaprogramming Boost.MP11.</p>
</li>
<li>
<p>Added a section to the Boost History topic covering the BoostCon/C++ Now conference. The location of the conference in Aspen, Colorado comes with some colorful twists: “Mild-mannered black bears live in the area. Please close doors behind you in the evenings.”</p>
</li>
<li>
<p>Updated the Documentation and Requirements sections with some good information for organization requirements, library metadata and documentation guidelines.</p>
</li>
<li>
<p>Added a Site-docs Style Guide, based on an existing style guide that was outside of the documentation, covering some basic color, wording, URL and image guidelines.</p>
</li>
<li>
<p>Improved some navigation linking issues.</p>
</li>
<li>
<p>Improved some headline text, bringing the headlines into line with our standards.</p>
</li>
<li>
<p>Provided some detailed blog post feedback.</p>
</li>
<li>
<p>Working on a number of topics that have yet to be completed, including the B2 and CMake build systems, the developers FAQ, the Boost Test Matrix, Advanced Builds, and others.</p>
</li>
</ul>Christian’s Q4 2023 Update2024-01-10T00:00:00+00:002024-01-10T00:00:00+00:00http://cppalliance.org/christian/2024/01/10/ChristiansQ4Update<p>This last quarter has been an interesting one. Development on the Unordered
library has slowed down so I’ve been re-tasked with aiding in the development
of a whole new slew of HTTP/1-based libraries.</p>
<p>The new year is a common time for reflection on where one’s been and how far one
has come. When I first started working on Unordered, I knew relatively little
about hash tables. I was somewhat versed in C++ container design and
implementation but in hindsight, I knew little to nothing in actuality.</p>
<p>I’ve since become an expert in library minutiae. As an example, I spent no
less than an hour discussing the validity of allocator-constructing a
stack-local variable as part of an optimization technique for Unordered’s flat
maps.</p>
<p>It’s been quite a privilege to essentially study C++ under a couple of world
experts, Joaquín M López Muñoz and Peter Dimov. I’ll never be able to see
hash table design the way Joaquín does but his incredibly sharp and compact way
of solving complex problems has forever changed how I write C++ code. On the
other hand, Peter’s helped guide and shape how I think about testing and
actualizing it in code effectively.</p>
<p>My newfound aptitude for testing has led to a shift in how I develop software
going forward, something I’m calling “failure driven development”. While most
TDD workflows involve starting with a failing test case first, they don’t often
stress the importance of testing the failures themselves: for example, code that
tries to open a file that does not exist. I’ve applied the principles I learned
on the job to my hobby projects and because of this, I’ve actually found a bug
in the io_uring Linux kernel module. I also helped diagnose a performance
regression.</p>
<p>A principle I’ve learned is that you don’t really understand code or a system
until you test what kinds of errors it outputs and how it behaves under those
conditions.</p>
<p>I look forward to the future in helping deliver these HTTP/1 libraries as
they’re going to be dramatic improvements over the existing Beast library but
I’ll never forget what Unordered taught me.</p>
<p>It’s interesting working for a fully remote company like the Alliance because
my coworkers are scattered all over the globe, from Spain to Bulgaria and
beyond. Expertise is scattered all throughout the world and it’s amazing how
technology enables so much collaboration. It also enables me to hone my skills
in slow-cooking recipes and I’m now on a quest to completely master the dish
chile colorado.</p>Fernando’s Q4 Update2024-01-10T00:00:00+00:002024-01-10T00:00:00+00:00http://cppalliance.org/fernando/2024/01/10/Fernando43Update<p>As the year comes to a close, I reflect on the exciting and productive final quarter of 2023. My involvement has been primarily focused on the Boost Libraries and the development of MrDocs, both of which have offered unique challenges and opportunities for growth.</p>
<h1 id="boost-modularization-refining-and-collaborating">Boost Modularization: Refining and Collaborating</h1>
<h2 id="ongoing-boost-modularization-work">Ongoing Boost Modularization Work:</h2>
<p>My primary focus has been on advancing the modularization of Boost. The effort is directed towards creating individual Conan packages for each Boost library, an initiative that promises to significantly enhance user experience and integration.</p>
<h2 id="strategic-discussions-and-collaborative-efforts">Strategic Discussions and Collaborative Efforts:</h2>
<p>Discussions with the Conan team have been ongoing, focusing on the practical aspects of creating modularized Conan recipes. A new repository, separate from the Conan Center Index (CCI), is in the works to facilitate these efforts with fewer restrictions. I have also been developing a web application using HTML, CSS, JavaScript, and a graph management library to assist in visualizing the dependencies of Boost libraries.</p>
<h1 id="boost-unordered-collaborative-advancements">Boost Unordered: Collaborative Advancements</h1>
<h2 id="responsive-action-triggered-by-parlays-performance">Responsive Action Triggered by Parlay’s Performance:</h2>
<p>The impressive speed demonstrated by <a href="https://github.com/cmuparlay/parlayhash">Parlay</a> set the stage for our action. It highlighted the need to enhance the performance of our concurrent data structures, particularly in scenarios involving high thread counts.</p>
<h2 id="joaquíns-innovative-ideas-and-my-role-in-testing">Joaquín’s Innovative Ideas and My Role in Testing:</h2>
<p>Joaquín, leading the charge, conceptualized various strategies to improve Boost Unordered. His ideas were pivotal in shaping our approach towards creating more efficient concurrent data structures. My contribution primarily involved conducting extensive testing and benchmarking of these ideas on high-core-count machines. This collaboration was instrumental in validating and refining our approaches.</p>
<h2 id="learning-and-contributing-to-boost-flatmap-and-concurrent-flatmap">Learning and Contributing to Boost FlatMap and Concurrent FlatMap:</h2>
<p>Gradually delving deeper into the implementation of Boost FlatMap and Concurrent FlatMap, I am moving towards a position where I can actively contribute code and ideas. The learning curve is steep, but it is an exciting journey that promises significant contributions to the Boost ecosystem.</p>
<p>Joaquín’s work in developing a latch-free concurrent map and a variant of <code>boost::concurrent_flat_map</code> that performs almost-latch-free insertion for SIMD-powered architectures is a testament to our team’s commitment to pushing the boundaries of C++ performance. His insights and our collaborative efforts are paving the way for potential enhancements in concurrent data structures, which are fundamental to high-performance computing applications.</p>
<p>The developments in this domain are ongoing, and we are continuously working to identify and overcome points of contention. Our aim is to not only match but exceed the performance benchmarks set by competitors like Parlay, especially in high-thread environments. This journey, though challenging, is a remarkable opportunity for innovation and growth in the field of concurrent programming.</p>
<h1 id="deepening-involvement-in-mrdocs">Deepening Involvement in MrDocs</h1>
<h2 id="enhancing-user-experience-with-mrdocs">Enhancing User Experience with MrDocs:</h2>
<ul>
<li>
<p>Streamlining the Workflow:
I’ve dedicated significant effort to enhancing the user experience of MrDocs. A key development is enabling MrDocs to directly obtain the compile_commands.json file by invoking CMake. This improvement alleviates the need for users to generate this file manually, thus simplifying the process.</p>
</li>
<li>
<p>Intelligent Inference of System’s Default Include Paths:
Another crucial enhancement is the capability of MrDocs to intelligently infer the system’s default include paths from any arbitrary compile_commands.json. This is achieved by having MrDocs interact with the compiler to request information about the default include directories. These directories are then utilized in creating the “Compilation Database”, making MrDocs more intuitive and efficient for various C++ projects.</p>
</li>
</ul>
<h2 id="gaining-experience-with-clangs-libtooling">Gaining Experience with Clang’s LibTooling:</h2>
<ul>
<li>Deep Dive into LibTooling: As MrDocs extensively uses Clang’s LibTooling, I am focusing on gaining more experience with this powerful library. This involves understanding its intricacies and exploring its capabilities in parsing and analyzing C++ code. My journey with LibTooling is not only about enhancing MrDocs but also about enriching my own skills and understanding of compiler technologies.</li>
</ul>
<p>These advancements in MrDocs represent a significant step towards making the tool more accessible and efficient for users. By reducing complexity and enhancing functionality, I am contributing to a tool that is becoming increasingly vital for C++ developers, especially in the realm of documentation and code analysis. My exploration of LLVM and Clang’s LibTooling is playing a crucial role in this endeavor, opening up new possibilities for future enhancements and innovations.</p>
<h1 id="continued-collaboration-with-boost-beast">Continued Collaboration with Boost Beast</h1>
<p>While my direct involvement with Boost Beast has reduced, I continue to support the project, providing insights and assistance to Mohammad, who is doing an exceptional job as the maintainer.</p>
<p>This quarter has been a journey of technical exploration, collaboration, and innovation. My work in modularization, performance optimization, and tool development reflects my dedication to the continuous advancement of the Boost Libraries and MrDocs. I am excited about the potential impact of these projects and look forward to contributing further to these vibrant and dynamic communities.</p>Joaquín’s Q4 2023 Update2024-01-10T00:00:00+00:002024-01-10T00:00:00+00:00http://cppalliance.org/joaquin/2024/01/10/Joaquins2023Q4Update<p>During Q4 2023, I’ve been working (mostly in collaboration with Chris) in the following areas:</p>
<h3 id="boostunordered">Boost.Unordered</h3>
<ul>
<li>Implemented <em>bulk visitation</em> for <code>boost::concurrent_flat_[map|set]</code>. In short, bulk visitation
visits a bunch of elements at once, so instead of writing:</li>
</ul>
<pre>std::array<int, N> keys;
...
for(const auto& key: keys) {
  m.visit(key, [](auto& x) { ++x.second; });
}</pre>
<p>we can do this:</p>
<pre>m.visit(keys.begin(), keys.end(), [](auto& x) { ++x.second; });</pre>
<p>This functionality is not provided for mere syntactic convenience: Boost.Unordered speeds up
the entire process by pipelining the different internal stages of each individual visitation,
which results in performance improvements of 40% and more. The article
<a href="https://bannalia.blogspot.com/2023/10/bulk-visitation-in-boostconcurrentflatm.html">“Bulk visitation in <code>boost::concurrent_flat_map</code>”</a>
discusses this new feature in much detail.</p>
<ul>
<li><a href="https://github.com/boostorg/unordered/pull/218">Removed some unneeded using declarations</a>,
contributed some <a href="https://github.com/boostorg/unordered/commit/dbe93c765c56cb242c99a3801828f9d506fbb658">hardening code</a>,
and <a href="https://github.com/boostorg/unordered/pull/219">revamped the repo’s README.md</a>.</li>
<li>Shipped <a href="https://www.boost.org/doc/libs/1_84_0/libs/unordered/doc/html/unordered.html#changes_release_1_84_0_major_update">Boost.Unordered 1.84</a>.</li>
<li>Began exploratory work towards adding new containers based on
<a href="https://en.wikipedia.org/wiki/Perfect_hash_function"><em>perfect hashing</em></a>. The key idea behind
a perfect hashmap is that its elements are known in advance at initialization time, which
allows for the construction of an ad hoc hash function guaranteeing <em>zero collisions</em> (for
the given set of elements). There’s a tradeoff between lookup times (which can be extremely
fast based on the zero-collision assumption) and construction times (typically much larger
than for a classical hashmap), and moreover elements can’t be inserted and deleted once
the map is built. We have explored so far two well-known techniques from the literature for
the generation of the associated perfect hash function:
<a href="https://cmph.sourceforge.net/papers/esa09.pdf">Hash, Displace and Compress</a> (without the compress part)
and the algorithm from <a href="https://dl.acm.org/doi/pdf/10.1145/828.1884">Fredman, Komlós and Szemerédi</a>
(FKS), with promising results. Progress, however, has been slower than expected, so the
target for new perfect containers in Boost.Unordered is Boost 1.86 (Aug 2024).</li>
<li>After our launch of <code>boost::concurrent_flat_map</code>, a new contender
called <a href="https://github.com/cmuparlay/parlayhash">ParlayHash</a> has arisen. ParlayHash achieves
very good performance for massively parallel scenarios (dozens of cores) thanks to its
smart latch-free design based on <a href="http://csng.cs.toronto.edu/publication_files/0000/0159/jpdc07.pdf">epochs</a>
for the reclamation of erased elements. The design imposes some limitations not present
in <code>boost::concurrent_flat_map</code>, most notably that elements must be immutable, but
its excellent performance has spurred Fernando and me to begin exploratory work towards adopting similar
techniques in the open-addressing context we use. It’s currently too early to know if this
work will result in the addition of new concurrent containers to Boost.Unordered. As a
spin-off of this activity, a variant of <code>boost::concurrent_flat_map</code> with
<a href="https://github.com/boostorg/unordered/tree/feature/cfoa_alf_insert">almost-latch-free insertion</a>
has been implemented —the decision is pending whether this will be officially merged.</li>
</ul>
<h3 id="new-website">New website</h3>
<ul>
<li>I’ve contributed a small section on <a href="https://www.preview.boost.org/doc/contributor-guide/tweeting.html">tweet proposals</a>.
Although the presence of Boost in social media has increased notably in the last few years,
I think much more needs to be done, and it has to be done with contributions from the entire community.</li>
</ul>
<h3 id="looking-back-and-forward">Looking back and forward</h3>
<p>I began collaborating with the C++ Alliance almost two years ago, when I was gently hooked by
Vinnie and Peter to work on the evolution project for Boost.Unordered alongside my colleague
Chris Mazakas. The experience so far has been a joyous one, and I’ve had the opportunity
to meet and work with a group of outstanding professionals from all over the globe.
Braden Ganetsky recently joined the Boost.Unordered maintenance team,
and it’s been my pleasure to guide him through the onboarding process.</p>
<p>Going forward, I feel that most of the <a href="https://pdimov.github.io/articles/unordered_dev_plan.html">goals for Boost.Unordered</a>
put forth by Peter Dimov in 2022 have been met, and it’s only natural that the activity
in this library will decrease over the course of this year. I’m totally open to new challenges for
the evolution of Boost, particularly if they’re math-oriented and can advance the state of
the art for C++ in general —drop me a line if you have an idea in mind!</p>Matt’s Q4 2023 Update2024-01-10T00:00:00+00:002024-01-10T00:00:00+00:00http://cppalliance.org/matt/2024/01/10/MattsQ4Update<p>Over the past few months I have been working on a number of libraries both for proposal to, and currently in Boost.</p>
<h2 id="new-libraries">New Libraries</h2>
<h3 id="charconv">Charconv</h3>
<p>The Charconv (https://github.com/cppalliance/charconv) review period is scheduled for 15 - 25 Jan 2024.
Directions for use with B2, VCPKG, and Conan are provided to allow for testing and evaluation of the library.
All feedback is welcome and appreciated.
Reviews can be submitted to the mailing list or the Review Manager, Chris Kormanyos at e_float@yahoo.com.
Discussion of this library can be found on the Cpplang slack at <code>#boost-charconv</code>.</p>
<h3 id="decimal">Decimal</h3>
<p>Decimal (https://github.com/cppalliance/decimal) is a ground-up implementation of the IEEE 754 Decimal Floating Point types in C++14, and is co-authored by Chris Kormanyos.
The library has made significant progress this quarter with most of the features from IEEE 754-2019 and TR 24733 being implemented.
Looking to next quarter we will continue to implement more features, and begin working on optimization as we have been focusing first on correctness.
Discussion of this library can be found on the Cpplang slack at <code>#boost-decimal</code>.</p>
<h2 id="existing-libraries">Existing Libraries</h2>
<h3 id="math">Math</h3>
<p>A recent RFC in Scipy has led to the decision to begin replacing their existing Special Function implementations with C++ to enable CUDA support.
They will be using the existing code from Boost.Math and in return contribute bugfixes and CUDA implementations as needed.
This continues to deepen our mutually beneficial relationship with them.</p>
<h3 id="random">Random</h3>
<p>An implementation of Vigna’s Splitmix64 (https://prng.di.unimi.it/index.php) has been merged recently.
The next step is to complete the implementation of the xoshiro / xoroshiro PRNGs.
These new PRNGs can be faster and have fewer BigCrush failures than the PRNGs defined in the C++ Standard.</p>
<h3 id="numericodeint">Numeric.Odeint</h3>
<p>As of a few weeks ago Nick Thompson and I have been added as primary maintainers of Boost.Numeric.Odeint.
Our immediate goal is to modernize the library (e.g. removing dependencies that were only needed for C++03) and to fix existing issues to support both the Boost and R communities.</p>
<h2 id="a-year-in-review-at-the-c-alliance">A year in review at the C++ Alliance</h2>
<p>As of writing I have now worked at the C++ Alliance for a full year.
This is my first job working in Open Source Software, and first remote position.
I have thoroughly enjoyed collaborating with and meeting other developers from around the world.
I look forward to continuing development of Boost Libraries in the future!</p>Mohammad’s Q4 2023 Update2024-01-10T00:00:00+00:002024-01-10T00:00:00+00:00http://cppalliance.org/mohammad/2024/01/10/MohammadsQ4Update<p>Over the last few months I have been mainly working on Boost.Beast and Boost.PropertyTree.</p>
<h3 id="keeping-boostbeast-in-good-form">Keeping Boost.Beast in Good Form</h3>
<p>I’ve recently taken on a more active role in maintaining Boost.Beast. To begin, I reviewed all the open issues to gain a better understanding of the project’s current state. In the course of this process, I successfully addressed several issues that did not necessitate significant refactoring.</p>
<p>Here are a couple of contributions to the project that I find interesting:</p>
<h5 id="specializing-asioassociator-for-bind_wrapper-and-bind_front_wrapper">Specializing <code>asio::associator</code> for <code>bind_wrapper</code> and <code>bind_front_wrapper</code></h5>
<p>Because <code>bind_wrapper</code> and <code>bind_front_wrapper</code> wrap the user’s handlers, all of the associators with the original handler become invisible to Asio. In order to resolve that, Beast has been specializing each associator individually for bound wrappers. However, this makes Asio consistently assume the presence of an associated executor with the bound handlers.</p>
<p>Fortunately, the fix is easy; we only need to specialize <code>asio::associator</code> for the bound wrappers, and it can query all the associators from the wrapper handler. You can read more in <a href="https://github.com/boostorg/beast/pull/2782">this pull request</a>.</p>
<h5 id="replacing-internal-uses-of-beastbind_front_handler-with-asioprepend">Replacing internal uses of <code>beast::bind_front_handler</code> with <code>asio::prepend</code></h5>
<p><code>bind_front_handler</code> is a utility in Beast that allows binding a list of arguments to an existing handler and creating a new handler with a different signature. This is especially useful when we want to pass additional arguments to handlers. For example, the following code binds an error code to a handler, making it invokable without needing any argument:</p>
<pre><code class="language-C++">asio::dispatch(ex, beast::bind_front_handler(std::move(handler), ec));
</code></pre>
<p>With the introduction of <code>asio::prepend</code> in Boost 1.80, we can replace the previous code with:</p>
<pre><code class="language-C++">asio::dispatch(ex, asio::prepend(std::move(handler), ec));
</code></pre>
<p>However, <code>beast::bind_front_handler</code> has a specialized invoke function for member functions, which makes it possible to concisely turn member functions into handlers:</p>
<pre><code class="language-C++">ws_.async_read(
    buffer_,
    beast::bind_front_handler(
        &websocket_session::on_read,
        shared_from_this()));
</code></pre>
<p>This isn’t possible with <code>asio::prepend</code>; that’s why I decided to leave uses of <code>beast::bind_front_handler</code> in the examples intact.</p>
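<p>The member-function binding pattern can also be imitated in portable standard C++ with <code>std::bind</code>, which likewise dereferences a smart pointer before invoking the member function. A minimal sketch with a toy <code>session</code> type (hypothetical, not Beast’s):</p>
<pre><code class="language-cpp">#include &lt;cassert&gt;
#include &lt;functional&gt;
#include &lt;memory&gt;

// Toy session type: binding a member function to shared_from_this()
// yields a handler that keeps the session alive until it is destroyed.
struct session : std::enable_shared_from_this&lt;session&gt;
{
    int bytes_read = 0;
    void on_read(int bytes) { bytes_read += bytes; }

    std::function&lt;void(int)&gt; make_read_handler()
    {
        // std::bind, like beast::bind_front_handler, dereferences the
        // smart pointer and invokes the member function on the session.
        return std::bind(&amp;session::on_read, shared_from_this(),
                         std::placeholders::_1);
    }
};
</code></pre>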
<h3 id="trimming-dead-leaves-off-boostpropertytree">Trimming Dead Leaves off Boost.PropertyTree</h3>
<p><a href="https://github.com/boostorg/property_tree">PropertyTree</a> has served Boost users for almost two decades. However, it experienced a period without an active maintainer, resulting in a backlog of issues and pull requests in its repository.</p>
<p>Given PropertyTree’s shift to maintenance mode, we made the decision to refrain from introducing new features or making breaking changes to the interfaces. Consequently, I reviewed all the issues and pull requests, addressing them unless they involved adding a new feature. With these fixes implemented and an improved CI script, PropertyTree was prepared for the Boost 1.84 release, hopefully in a slightly better state.</p>Ruben’s January update: presenting Boost.MySQL’s new pool!2024-01-10T00:00:00+00:002024-01-10T00:00:00+00:00http://cppalliance.org/q4_update/2024/01/10/RubenQ4<p>In spite of it being winter here, Boost.MySQL is presenting a new pool this year!
This was one of the most requested features in the library, and will finally be generally
available as an experimental feature in Boost 1.85.</p>
<p>Connection pools manage tasks that are easy to get wrong, like reconnections,
health checks and session cleanup. As they reuse physical connections,
they also provide <a href="https://www.boost.org/doc/libs/master/libs/mysql/doc/html/mysql/connection_pool.html#mysql.connection_pool.benchmarks">a nice efficiency boost</a>.</p>
<p>Using a pool is as simple as:</p>
<pre><code>boost::asio::awaitable<std::int64_t> get_num_employees(boost::mysql::connection_pool& pool)
{
    // Get a fresh connection from the pool
    auto conn = co_await pool.async_get_connection(boost::asio::use_awaitable);

    // Use the connection. It will be returned to the pool on scope exit
    boost::mysql::results result;
    co_await conn->async_execute("SELECT COUNT(*) FROM employee", result, boost::asio::use_awaitable);
    co_return result.rows().at(0).at(0).as_int64();
}
</code></pre>
<p>Despite its simple interface, implementing <code>connection_pool</code> has been a long and hard
task that has taken me a considerable effort. In the process, I’ve been able to
learn in-depth many things about Boost.Asio which I thought I knew, but it turns out I did not.</p>
<p>For instance, I’ve finally understood the Asio property system, and wrote
<a href="https://anarthal.github.io/cppblog/asio-props.html">a blog post on it</a>, hoping
that others will find it easier to understand. I’ve also managed to make <code>connection_pool</code>
easy to use in multi-threaded contexts using strands. I’ve found many subtle pitfalls
here that are easy to get wrong. ThreadSanitizer for the win!</p>
<p>In any case, I’ve been able to get help from other knowledgeable Asio
developers, both from inside and outside the C++ Alliance, which has been
really helpful to get this done.</p>
<p>I’m also pretty happy with the role that <a href="https://github.com/anarthal/servertech-chat">Boost ServerTech chat</a>
has played in this task. ServerTech chat is a project to showcase how Boost libraries can be used
together, and a place to innovate. Thanks to it, I’ve had an almost real-world environment
to battle-test my API in.</p>
<h2 id="type-erased-connections">Type-erased connections</h2>
<p>With <code>connection_pool</code>, Boost.MySQL is getting a new type-erased connection
type, <code>any_connection</code>, with much easier connection establishment semantics.
Connection pooling relies on such semantics to provide better efficiency.</p>
<p><code>any_connection</code> is currently experimental, but I expect it to become the
default connection type once it becomes stable.</p>
<h2 id="boostredis-finally-released">Boost.Redis finally released!</h2>
<p>Boost.Redis finally got its first release with Boost 1.84. I’m proud to have
helped the author integrate this library into Boost.</p>
<p>Boost integration is not a trivial process. Most novice authors only have CMake
experience, but Boost builds require some parts to use B2. Even with CMake,
some parts need to adhere to certain conventions to integrate with the Boost
superproject. I’ve recently gone through this, so I’ve been able to help here.</p>
<p>I’m also pretty happy about my teammate’s efforts on documenting these
processes in the <a href="https://docs.cppalliance.org/contributor-guide/index.html">contributor guide</a>.</p>
<h2 id="sans-io-all-the-things">Sans-io all the things</h2>
<p>Up to Boost 1.84, all network algorithms in Boost.MySQL are internally
implemented as layered functions calling Boost.Asio primitives. As an overly
simplistic example, <code>connection::execute</code> calls two functions, <code>write_request</code>
and <code>read_response</code>, which end up in socket calls.</p>
<p>While this is the obvious way to implement such a library, it generates a lot of
duplication. There is <code>connection::execute</code> and <code>connection::async_execute</code>,
which yields two versions of every underlying function. It’s also slow
to compile (Asio async code is full of heavy templates) and hard to test.</p>
<p>Starting from Boost 1.85, all network algorithms are implemented as state machines.
Such algorithms are called sans-io, because they don’t directly invoke any I/O functions.
A thin layer of Asio code connects such algorithms to I/O, yielding the same interface
as before. These algorithms are currently an implementation detail, and not exposed to the public.
This change has made the library much simpler and more enjoyable to test. Even if
you’re not planning to support sync functions, consider going sans-io: your unit tests will thank you.</p>
<h2 id="next-steps-sql-query-formatting">Next steps: SQL query formatting</h2>
<p>The next big step is supporting client-side query composition. This makes use cases
like dynamic filters much easier to implement, and can increase efficiency by
saving round-trips to the server. I’m currently examining the great <a href="https://github.com/fmtlib/fmt">fmt</a>
library as a source of inspiration.</p>Sam’s Q4 2023 Update2024-01-10T00:00:00+00:002024-01-10T00:00:00+00:00http://cppalliance.org/sam/2024/01/10/SamsQ4Update<p>Here’s an overview of some projects I have been working on the last few months.</p>
<h3 id="doc-previews">Doc Previews</h3>
<p>A Jenkins server is building “pull request doc previews” for multiple boost repositories. If any Boost author is interested in “doc previews” let me know.</p>
<p>Specific previews: adding an Antora version to boostorg/url.</p>
<p>Upgraded the server packages, apps, and operating system to Ubuntu 22.04.</p>
<p>MrDocs has a dedicated server for docs testing. Debugged CI deployment issues there.</p>
<h3 id="json-benchmarks">JSON benchmarks</h3>
<p>https://benchmark.cppalliance.org/</p>
<p>Continued from the previous month, updated JSON benchmarks scripts to use a consistent output file location and revised jamfile.</p>
<h3 id="boostorg-website-cppalliancetemp-site">Boostorg website cppalliance/temp-site</h3>
<p>Added Prometheus and Nagios monitoring alerts. Checked on the status of db backup scripts. Upgraded CircleCI so release-tools will deploy to AWS S3, and temp-site will publish develop/master snapshots. Slack chat with Greg about library documentation. Tested/debugged slow load times of library docs on the site and reported findings. Discussions with Spencer about how URL maps ought to work. Sent Glen/Marshall info about deploying new Boost releases on preview.boost.org. Ran a sync from prod to stage, including database and S3 files, so that stage looks like production. Added Frank and Lacie in GCP. On the topic of Quickbook for release notes: probably migrate to AsciiDoc. Created docs at https://github.com/cppalliance/temp-site-documentation. Deployed more domain names for testing (boost.io). Sent Calendar API info to Lacie. Wowbagger: cron scripts to back up files.</p>
<h3 id="load-testing">Load testing</h3>
<p>Investigated https://github.com/locustio/locust/ and Bees with Machine Guns, and installed both tools. Sent a PR (merged) to locustio improving their terraform script. After a couple of days, for the sake of time, concluded we can solve ‘load testing’ problems by installing a CDN in front of the website, thus removing most traffic, and switched to that goal.</p>
<h3 id="cdn-fastly">CDN Fastly</h3>
<p>Set up a CDN front-end to the temp-site at Fastly. Extensive testing, many iterations of VCL. Added SSL certificates. Opened a case to discuss the existing conflict/overlap when acme-challenge is used on both the backend cluster, and the CDN. They are planning to implement a new RFC in the next year to improve the situation. The same hostname should be applied on all servers so that social auth works. Updated kubernetes from “Ingress” to “Gateway API”, to improve SSL requests. Deployed “Gateway” in each environment.</p>
<h3 id="boostorgrelease-tools">boostorg/release-tools</h3>
<p>Assisted Alan, who is adding Antora support to boost releases. Generated docker images with additional nodejs packages. Debug/test the main release scripts, which were modified.
Added packages to the images for Klemens. Added CI, code formatter ‘black’, similar to website. Updated boostorg/boost to use the new images.</p>
<h3 id="mailman-project">Mailman project</h3>
<p>Setting up test instances of mailman2 and mailman3, to test and document mailman users, members, passwords, especially after an upgrade/migration. Wrote an improved documentation section for mailman-suite (merged) at https://gitlab.com/mailman/mailman-suite-doc. Various updates to cppalliance/ansible-mailman3 codebase. Meetings with Boost Foundation about the mailing lists. Install ElasticSearch on all instances. Added kube variables in temp-site, pointing to mm instances. Mailman cli test.</p>
<h3 id="self-hosted-runners">Self-hosted runners</h3>
<p>Analyzing the large codebase at philips-labs/terraform-aws-github-runner (tagr). Sent them yet another bugfix. There is also an outstanding issue (still unimplemented) whereby the default runner labels could be entirely replaced/customized; if that is done, tagr could be rolled out to more repositories with less risk of unforeseen conflicts caused by label matching in the future. Ongoing. Installed LLVM on the windows-2022 image.</p>
<h3 id="drone">Drone</h3>
<p>Upgraded the Drone executable itself. New dark mode support! That’s the last commit from Drone, which is being transmogrified into “https://github.com/harness/gitness”. When gitness is eventually ready it may become a drop-in replacement, but that is not yet certain.</p>
<p>As requested by Alexander, install the latest “macOS 14 Sonoma” machines. Worked with boostorg/math to retire the oldest “macOS 10.13 High Sierra” that are now offline, and use 14. New drone 23.10 image for Peter. Sent a PR to boostorg/url to centralize the drone ‘generate’ function at cppalliance/ci-automation.</p>Krystian’s Q3 Update2023-10-31T00:00:00+00:002023-10-31T00:00:00+00:00http://cppalliance.org/krystian/2023/10/31/KrystianQ3Update<p>My primary focus this quarter was getting MrDocs into a state where it can serve as a drop-in replacement for Doxygen/Docca in Boost.URL. Before diving into that, there are a few smaller things I addressed in other projects:</p>
<h2 id="booststaticstring">Boost.StaticString</h2>
<ul>
<li>Added support for platforms lacking <code>wchar_t</code>/<code>wsnprintf</code></li>
</ul>
<h2 id="docca">Docca</h2>
<ul>
<li>Added backward compatibility for operator names. Doxygen 1.8.15 and older generate operator names containing a space between <code>operator</code> and the subsequent tokens. This behavior changed in newer versions, meaning that the new names must be converted to the old format to avoid breaking existing references to these functions.</li>
<li>Suppressed generation of private friends. This was necessary because such declarations would “hide” the primary declaration and result in broken links.</li>
<li>Stripped auto-generated links within code blocks due to incorrect rendering.</li>
</ul>
<h2 id="mrdocs">MrDocs</h2>
<p>Switching focus to MrDocs, I implemented many major features:</p>
<h3 id="dependency-extraction">Dependency extraction</h3>
<p>When symbols are referenced by a declaration, dependency extraction controls whether the referenced symbol will be extracted, irrespective of whether it was declared within the project directory. My initial naive implementation would extract such symbols unconditionally, but I later added a more refined mode where dependency extraction only occurs for:</p>
<ul>
<li>Local classes which are deduced as the return type of an extracted function, and</li>
<li>Base classes of an extracted class.</li>
</ul>
<p>These cases are the only ones in which a referenced symbol affects the “interface” of another, hence the term “dependency.” A final mode that disables dependency extraction completely was also added.</p>
<h3 id="safe-names">Safe names</h3>
<p>The “safe name” of a symbol is a prettier but unique name for a symbol that can be used as an alternative to the base16/base64 representation of a <code>SymbolID</code>. These names also have the property of being path/URL safe, as their intended purpose is for use as filenames when generating the output.
Broadly, safe names are generated by collecting all symbols with the same name in a given scope, and then appending digits from the base16 representation of the <code>SymbolID</code> until all names are unique. For example, the safe name for <code>void A::f();</code> will be <code>A-f</code> in the absence of other overloads. If there exists an overload <code>void A::f(int);</code>, then a possible set of safe names could be <code>A-f-0a</code> and <code>A-f-04</code>.</p>
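<p>The scheme described above can be sketched with a hypothetical helper (illustrative only, not MrDocs’ code): append base16 digits of the <code>SymbolID</code>, one byte at a time, until the name is unique among those generated so far:</p>
<pre><code class="language-cpp">#include &lt;cassert&gt;
#include &lt;cstddef&gt;
#include &lt;set&gt;
#include &lt;string&gt;

// Disambiguate a pretty name by appending base16 digits of the symbol ID
// (two hex digits per byte) until the result is unique in 'used'.
std::string make_safe_name(const std::string&amp; base, const std::string&amp; id_hex,
                           std::set&lt;std::string&gt;&amp; used)
{
    std::string name = base;
    std::size_t digits = 0;
    while (used.count(name) &amp;&amp; digits &lt; id_hex.size())
    {
        digits += 2; // take one more byte of the base16 SymbolID
        name = base + "-" + id_hex.substr(0, digits);
    }
    used.insert(name);
    return name;
}
</code></pre>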
<h3 id="symbol-filtering">Symbol filtering</h3>
<p>Symbol filtering permits the exclusion of symbols matching a pattern from being extracted. Filters are specified as C++ <em>id-expressions</em>, except that wildcards (<code>*</code>) may be used to match zero or more occurrences of any character. The primary purpose of filters is to exclude symbols from detail namespaces (e.g., using the pattern <code>*::detail</code>). In addition to excluded patterns, it is also possible to specify included patterns to override matches; these patterns are meaningless unless they match a subset of symbols matched by an excluded pattern. For example, the excluded pattern <code>A::B</code> combined with the included pattern <code>A::B::f*</code> means only the symbols in <code>A::B</code> beginning with <code>f</code> are to be extracted. Internally, filters are converted into a tree that is traversed alongside the AST; this avoids the need to check every pattern each time a new symbol is extracted.</p>
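<p>The wildcard semantics can be sketched with a classic recursive matcher (illustrative only; as noted, MrDocs compiles patterns into a tree rather than matching each pattern per symbol):</p>
<pre><code class="language-cpp">#include &lt;cassert&gt;

// '*' matches zero or more characters; everything else matches literally.
// Exponential in the worst case, which is fine for a sketch.
bool glob_match(const char* pat, const char* str)
{
    if (*pat == '\0')
        return *str == '\0';
    if (*pat == '*')
        // '*' either matches nothing, or consumes one more character.
        return glob_match(pat + 1, str) ||
               (*str != '\0' &amp;&amp; glob_match(pat, str + 1));
    return *pat == *str &amp;&amp; glob_match(pat + 1, str + 1);
}
</code></pre>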
<h3 id="symbol-lookup">Symbol lookup</h3>
<p>Symbol lookup is the mechanism by which the <code>@ref</code> and <code>@copydoc</code> commands are implemented; it performs a simplified version of C++ name lookup for the given <em>id-expression</em> within the set of all extracted symbols. The current implementation is far from complete (e.g., no ambiguity resolution is performed, and the semantics of constructs like inline namespaces, using declarations, using directives, and injected-class-names are not implemented), but it’s sufficient for Boost.URL’s documentation. Lookup is deferred until all symbols have been extracted to support cross-TU references without forward declarations.</p>
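<p>A simplified lookup of this kind can be sketched as a scope walk over the table of extracted symbols. This is an assumption-laden illustration (the symbol table and <code>lookup</code> helper are hypothetical, and real C++ name lookup is far more involved):</p>

```cpp
#include <map>
#include <string>

// Sketch: resolve an id-expression the way a simplified @ref lookup might,
// by trying the name in the context scope and then in each enclosing scope.
// `symbols` maps fully qualified names to SymbolIDs (here, just ints).
// Ambiguity, inline namespaces, using-directives, etc. are ignored.
int lookup(std::map<std::string, int> const& symbols,
           std::string context,          // e.g. "A::B"
           std::string const& name)      // e.g. "f" or "C::g"
{
    for (;;)
    {
        std::string qualified =
            context.empty() ? name : context + "::" + name;
        if (auto it = symbols.find(qualified); it != symbols.end())
            return it->second;
        if (context.empty())
            return -1; // not found
        // strip the innermost scope and retry in the enclosing one
        auto pos = context.rfind("::");
        context = pos == std::string::npos ? "" : context.substr(0, pos);
    }
}
```

<p>Because the table is only complete once every translation unit has been processed, deferring this walk until after extraction is what makes cross-TU references work without forward declarations.</p>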
<h2 id="clang">Clang</h2>
<p>The backbone of MrDocs is the clang compiler, which, given the nature of software, is not without its bugs. Working around them is only feasible to a certain extent, meaning that at some point it becomes necessary to fix them instead of waiting for others to do so. To address this, I have spent considerable time this quarter getting comfortable with hacking clang and familiarizing myself with the process of merging patches into LLVM. Thus far, I have submitted <a href="https://github.com/llvm/llvm-project/pull/66636">one PR that has been merged</a> which eliminates the <code>ClassScopeFunctionSpecializationDecl</code> AST node in favor of using <code>DependentFunctionTemplateSpecializationInfo</code> to represent dependent class scope explicit specializations of function templates. The primary motivation for this patch was to simplify <code>ASTVisitor::traverse</code> in MrDocs by using the same overload to handle all function declaration nodes. However, this patch also improves diagnostics for the following example, insofar that the lack of a primary template will be diagnosed prior to instantiation:</p>
<pre><code class="language-cpp">template<typename>
struct A
{
    template<>
    void f(int);
};
</code></pre>
<p>I have also been working on patches for other bugs related to function template specializations, e.g., diagnosing friend function template specializations which are definitions, ensuring that lookup for friend function template specializations considers inline namespaces, diagnosing unexpanded packs in class scope function template specializations, etc.</p>
<p>Another related aspect of explicit function template specializations I have been working on is template argument deduction. The current implementation of template argument deduction for function templates implicitly instantiates a specialization for the deduced arguments, which is undesirable (and non-conforming) when the deduction is done for the purpose of matching an explicit specialization to its primary template. I wrote a proof-of-concept implementation in which this implicit instantiation is eliminated, but I am not planning to pursue these changes until a later date, when I have more time available to propose them.</p>
<p>Finally, I have been working on some AST memory optimizations, namely for data common to all redeclarations of an entity. This is done by replacing <code>Redeclarable::First</code> (which stores a pointer to the first declaration in a redeclaration chain) with a pointer to a common base <code>Common</code>:</p>
<pre><code class="language-cpp">struct Common
{
    decl_type* First;
};
</code></pre>
<p>Allocated by calling <code>decl_type::newCommon</code>, which permits <code>decl_type</code> to allocate a <code>Redeclarable::Common</code> derived object to store additional common data. This can, for example, be used by <code>CXXRecordDecl</code> to store a single <code>DefinitionData</code> pointer for all redeclarations, as opposed to storing it in each <code>CXXRecordDecl</code> and propagating it upon allocation. This also eliminates the need for <code>RedeclarableTemplate</code>’s common pointer, as it can be merged into <code>Redeclarable::Common</code>.</p>My primary focus this quarter was getting MrDocs into a state where it can serve as a drop-in replacement for Doxygen/Docca in Boost.URL. Before diving into that, there are a few smaller things I addressed in other projects: Boost.StaticString Added support for platforms lacking wchar_t/wsnprintf Docca Added backward compatibility for operator names. Doxygen 1.8.15 and older generate operator names containing a space between operator and the subsequent tokens. This behavior changed in newer versions, meaning that the new names must be converted to the old format to avoid breaking existing references to these functions. Suppressed generation of private friends. This was necessary because such declarations would “hide” the primary declaration and result in broken links. Stripped auto-generated links within code blocks due to incorrect rendering. MrDocs Switching focus to MrDocs, I implemented many major features: Dependency extraction When symbols are referenced by a declaration, dependency extraction controls whether the referenced symbol will be extracted, irrespective of whether it was declared within the project directory. 
<h1 id="spencers-q3-2023-update"><a href="http://cppalliance.org/q3_update/2023/10/31/SpencerQ3Update">Spencer’s Q3 2023 Update</a> (2023-10-31)</h1>
<p>During Q3 2023, my work has been to get the new Boost website ready to launch. Though I’ve been a fullstack developer for most of my career, I came in to focus more on front-end development. I started right before Q3, so much of the site architecture was already in place. The site uses TailwindCSS, which I was happy about, as I’d been using it for quite a while.</p>
<p>The bulk of my work was to flesh out the sections of the site, clean up the styling, and to provide an overall better user experience. The team that had built the site to that point (Revsys) is still on board, and that’s been great - they’re Django & Python specialists, while I’d only built one site using Django in 2019.</p>
<p>The main things that jumped out at me when I started, that were keeping it from being in a launchable state, were that there were a number of pages with no content at all - and pages that did exist were inconsistent with each other in terms of the styling.</p>
<h3 id="getting-started">Getting started</h3>
<ul>
<li>Had a few long, fun, and very productive huddles with Vinnie early on, going section-by-section to talk about what we liked and what would look best. Our sense of what works, both from the aesthetics angle and the user experience, matches up well.</li>
<li>Reviewed the existing codebase while getting enough up-to-speed again with Django & Python.</li>
<li>Determined what was still missing from the site and, where I wasn’t creating it right away, made sure there was a workable plan for how to create it when ready.</li>
</ul>
<h3 id="customizing-the-site">Customizing the site</h3>
<ul>
<li>Creating consistency amongst all pages of the site, including standardizing the display of similar sections across the site.</li>
<li>Addressing existing style issues and items that hadn’t been looked at yet, such as text overflowing, and checking that text aligns properly and consistently.</li>
<li>Making the site mobile-ready. Typically I would do this first when building a new site, and expanding from there, but either way works for me!</li>
<li>Fleshing out / getting to an almost-ready state the homepage and the Learn, Community, and Releases sections.</li>
<li>The Libraries section had had the most work done when I started, but I worked on updating the styles, fleshing out the library detail page more by including information that wasn’t being displayed, and implementing the chart for Commits per Month.</li>
<li>Completely redesigned the Learn section after we had some amazing illustrations made for it.</li>
<li>Updated the homepage to be dynamic with the help of Revsys, created new elements that made sense, and brought in styling I’d worked on for the rest of the site to clean it up and bring it all together.</li>
<li>Too many tweaks to mention.</li>
</ul>
<h3 id="upgrade-guide">Upgrade Guide</h3>
<p>Many items are still works-in-progress, and being a website, that’s usually the case. Once we have fully launched, I would like to be able to spec out new features and sections ahead of time, as I think it makes development much easier to have everything planned and explicitly documented - though, always flexible.</p>
<h3 id="next-steps">Next steps</h3>
<p>Throughout the quarter, I ran into many features or elements that I’ve since seen newer ways of handling, or that I may have done differently myself in the past. Some of the work I had done, I had to make the decision to cut. Once we’ve launched, I am hoping for opportunities to revisit some of those changes. I also look forward to hearing from the Boost community once they’ve had a chance to use the site. Once we have the solid base that we’ve been building ready to go, it’s their opinions and the features they want that will matter most.</p>
<h1 id="dmitrys-q3-2023-update"><a href="http://cppalliance.org/dmitry/2023/10/30/dmitrys-q3-update">Dmitry’s Q3 2023 Update</a> (2023-10-30)</h1>
<p>In the third quarter my work was mostly focused on improvements to JSON’s
conversion features. In Boost.JSON conversion between user types and library
containers is done with <code>value_to</code> and <code>value_from</code> functions. Conversions of
composite types are attempted recursively. The library provides conversions for
several common groups of types, including sequences, associative containers,
tuples, numbers, and strings. Users also have the option to implement
conversion for other types. The function <code>value_to</code> can fail at runtime when
the structure of the JSON value differs from the one expected by the conversion
implementation. The function can report those errors in two separate ways:
exceptions and error codes. This is not only expressed in the
front-end—the available overloads for <code>value_to</code>—but is also supported on the
back-end. In other words, users can report errors from their conversions using
either exceptions or error codes, and the library turns error codes into
exceptions if exceptions were requested on the front-end, and even attempts the
opposite conversion where possible.</p>
<p>There was a problem, though: when exceptions were requested on the front-end,
and the conversion of the deepest types used exceptions too, but errors higher up
were reported via error codes, the original exception was likely to be swallowed
and replaced with an indiscriminate “some error has occurred” exception. In order
to fix this, I implemented a mechanism that communicates the user’s choice of
error reporting to the back-end.</p>
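<p>The general shape of such a mechanism can be sketched generically. This is not Boost.JSON’s actual internals — <code>conversion_context</code> and <code>report_failure</code> are hypothetical names illustrating the pattern of threading the caller’s error-reporting choice down to user conversions:</p>

```cpp
#include <stdexcept>
#include <string>
#include <system_error>

// Sketch of the pattern: the front-end records whether the caller asked
// for exceptions or error codes, and back-end conversions consult that
// choice instead of guessing, so an original exception is never swallowed
// and replaced by a generic one.
enum class error_handling { throw_exceptions, return_error_codes };

struct conversion_context
{
    error_handling mode;
};

// A user conversion reports failure through the context.
std::error_code
report_failure(conversion_context const& ctx, std::errc ec,
               std::string const& what)
{
    if (ctx.mode == error_handling::throw_exceptions)
        throw std::runtime_error(what); // caller wanted exceptions
    return std::make_error_code(ec);    // caller wanted error codes
}
```
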
<p>Another change to the <code>value_to/from</code> functions was the addition of
<code>is_optional_like</code> and <code>is_variant_like</code> traits. They determine if a type can
be classified as optional or variant, respectively. Previously the library
explicitly handled <code>std::optional</code> and <code>std::variant</code>, and
support for <code>boost::variant2::variant</code> was provided in its own library. With
these new traits all types that are sufficiently close to the standard optional
and variant are handled. In the case of optional it has an additional benefit:
conversion for described classes does not treat missing members as an error, if
their types are optionals.</p>
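<p>A trait of this kind is typically built with the detection idiom. The exact requirements Boost.JSON checks may differ; this sketch tests only that a type is boolean-testable, dereferenceable, and resettable:</p>

```cpp
#include <optional>
#include <string>
#include <type_traits>

// Sketch of an "optional-like" detection trait (the real requirements in
// Boost.JSON may differ): the type can be tested in a boolean context,
// dereferenced, and reset.
template <class T, class = void>
struct is_optional_like : std::false_type {};

template <class T>
struct is_optional_like<T, std::void_t<
    decltype(static_cast<bool>(std::declval<T const&>())), // bool test
    decltype(*std::declval<T&>()),                         // operator*
    decltype(std::declval<T&>().reset())                   // reset()
>> : std::true_type {};
```

<p>Any type meeting these requirements — <code>std::optional</code>, a third-party optional, or a hand-rolled one — is then handled uniformly by the conversion machinery.</p>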
<p>But the biggest amount of time was spent on a new feature: direct parsing.
Even during the Boost review of Boost.JSON, some people complained that
they would prefer to avoid going through JSON containers entirely and parse
directly into their types. This is now possible with the function <code>parse_into</code>.
Benchmarks also show that it can potentially double the performance. The design
and most of the implementation were provided by Peter Dimov a while ago, so
I mostly only had to refactor it to reduce code duplication and change the
behaviour of some functions so that the result is equivalent to that of <code>value_to/from</code>.
I also added an implementation for optionals, and provided a different
implementation for variants.</p>
<p>That last one deserves some explanation. Back when we were discussing different
ways to convert to and from variants, we chose a seamless approach, where the
variant itself doesn’t add anything to representation, but its current
alternative is represented directly. This is what most JSON files use in
practice, but it does complicate conversion back from JSON into variant.
<code>value_to</code> attempts conversions for each alternative and picks the first one
which succeeds. This approach is not immediately possible for direct parsing,
though, as the choice of alternative has to be made before the full source of
the value is available. Peter’s solution was to only support variants for
which it was easy to discriminate between alternatives. This does, however, rule out
whole classes of JSON documents in which variants of very similar
alternatives (usually objects) are used. So, I’ve dealt with variants
differently: the implementation instead records parser events and replays them
for the next alternative if the current one fails.</p>
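<p>The record-and-replay idea can be sketched in a few lines. This is not Boost.JSON’s implementation — <code>Event</code> and <code>EventRecorder</code> are illustrative stand-ins for the parser-event machinery:</p>

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Sketch: while the first variant alternative is tried, parser events are
// also appended to a buffer; if that alternative fails, the buffered
// events are replayed into the handler for the next alternative.
struct Event
{
    std::string kind;  // e.g. "on_int64", "on_string", ...
    std::string value;
};

struct EventRecorder
{
    std::vector<Event> buffer;

    void record(Event ev) { buffer.push_back(std::move(ev)); }

    // Feed the recorded prefix to another alternative's handler.
    // Returns false if the handler rejects an event, i.e. this
    // alternative fails as well.
    bool replay(std::function<bool(Event const&)> const& handler) const
    {
        for (auto const& ev : buffer)
            if (!handler(ev))
                return false;
        return true;
    }
};
```

<p>The buffer is exactly the dynamic allocation discussed below: it grows with the number of events the failing alternative consumed, which is why capping the replayable prefix was considered.</p>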
<p>That approach has a consequence: for variants we need to dynamically allocate
and keep a sequence of events. This can potentially eliminate all performance
benefits, if the variant is the topmost container. To mitigate this I also
implemented a way to limit the amount of parser events variant conversion can
replay. For some variants, a fairly small limit would be enough to completely
eliminate the need for additional allocations. Ultimately, I decided not to
merge that customisation and to wait for user feedback.</p>
<p>Finally, there was one notable change to Boost.JSON unrelated to conversion.
Different C++ implementations disagreed whether <code>value jv{ value() }</code>
copy-constructs <code>jv</code> or uses construction from <code>initializer_list</code>. This
resulted in code that behaves differently for different implementations.
Unfortunately, we couldn’t fix it on our end; this is just a discrepancy between
how implementations treat this syntax. But eventually I came to the realisation
that this case can be handled explicitly as a copy by the <code>initializer_list</code>
constructor. The constructor now treats <code>initializer_list</code>s of size 1 as an
attempt to copy. If you do want an array of size 1, you can use <code>value jv{
array{x} }</code>.</p>
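<p>The disambiguation rule can be demonstrated with a minimal stand-in type. This sketch is not the real <code>boost::json::value</code> — it only shows how a one-element <code>initializer_list</code> can be treated as a copy so <code>V{ V() }</code> behaves the same on every implementation:</p>

```cpp
#include <initializer_list>
#include <vector>

// Minimal sketch (not boost::json::value): an initializer_list
// constructor that treats a one-element list as a request to copy that
// element rather than wrap it in an array of size 1.
struct V
{
    std::vector<V> elems; // empty = scalar-like value in this sketch

    V() = default;
    V(std::initializer_list<V> init)
    {
        if (init.size() == 1)
            *this = *init.begin(); // size-1 list: copy, not wrap
        else
            elems.assign(init.begin(), init.end());
    }
};
```

<p>With this rule, <code>V a{ V() }</code> copies the inner value on every implementation, and an explicit nested list is still available when an array of size 1 is genuinely wanted.</p>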
<h1 id="alans-q3-update"><a href="http://cppalliance.org/alan/2023/10/27/AlanQ3Update">Alan’s Q3 Update</a> (2023-10-27)</h1>
<h1 id="summary">Summary</h1>
<ul>
<li><a href="#mrdocs">MrDocs</a>
<ul>
<li><a href="#handlebars">Handlebars</a></li>
<li><a href="#the-dom">The DOM</a></li>
<li><a href="#javascript-bindings">Javascript Bindings</a></li>
<li><a href="#unit-tests">Unit tests</a></li>
</ul>
</li>
<li><a href="#boost-website">Boost Website</a></li>
<li><a href="#boost-release-tools">Boost Release Tools</a></li>
<li><a href="#boost-libraries">Boost Libraries</a></li>
<li><a href="#c-github-actions">C++ Github Actions</a></li>
<li><a href="#gray-box-local-search">Gray-box local search</a></li>
</ul>
<h2 id="mrdocs">MrDocs</h2>
<p>Over the last quarter, we have been working intensely on <a href="https://github.com/cppalliance/mrdocs">MrDocs</a>, a documentation generator for C++ projects. I’ve been overseeing and reviewing all the work done by the other contributors in the project. I’ve also been responsible for:</p>
<ul>
<li>setting up and maintaining CI for the project;</li>
<li>MrDocs and LLVM release binaries;</li>
<li>build scripts;</li>
<li>setting up and integrating dependencies;</li>
<li>setting up and deploying the Antora toolchains and documentation to the project website;</li>
<li>working on supporting libraries; and</li>
<li>fixing bugs.</li>
</ul>
<p>These are some of the highlights of the work done in the last quarter:</p>
<ul>
<li>Refactor library layout and scripts so all the implementation files are in <a href="https://github.com/cppalliance/mrdocs/commit/04f75ddbeb666a65a3a8604b5cfaead1977e8c1c"><code>src</code></a></li>
<li><a href="https://github.com/cppalliance/mrdocs/commit/5bd2cc494d82e10189041f138efc6a9abf3bd55e">Automated deployment</a> of demo documentation to the project website. The website is updated on every commit to the <code>develop</code> or <code>master</code> branches. The action uses the actual release package to generate the documentation. This PR involved fixing all scripts and directory layouts for the release package to be properly usable. The new project layout uses the FHS standard. CMake modules to create a target to generate the documentation for projects were included in the installation. The MrDocs executable and libraries are installed as separate CMake package components. Any usage of FetchContent has been removed from the CMake scripts. The documentation was refactored to reflect the changes. Included complete instructions in the documentation pages for installation and usage, describing all commands and options.</li>
<li>Deployed <a href="https://mrdox.com/llvm+clang/">new LLVM binaries to the website</a> and <a href="https://github.com/cppalliance/mrdocs/commit/2584328ab143d50d4d8289ac83f019f622200fa4">updated CI</a>. The binaries were regenerated for all platforms in 4 modes: <code>Debug</code>, <code>Release</code>, <code>RelWithDebInfo</code>, and <code>DebWithOpt</code>. This involved fixing long-standing bugs related to conflicts in LLVM versions used by the project and providing new pre-built binaries on the website. In particular, the previous pre-built binaries used a special ReleaseWithDebInfo LLVM configuration that caused conflicts with MrDocs on MSVC when developers built with a Debug CMake configuration variant. This eliminated the need for the ad-hoc GitHub LLVM binaries release and for the special Docker container we had been using so far.</li>
<li>Added <a href="https://github.com/cppalliance/mrdocs/commit/ba63ed0f8cab4846dbff468b8d1f24d14f5d22c8">support for CMakePresets</a>. This allowed us to simplify the build process for MrDocs and its dependencies, which was previously counting on long command line commands and <code>CMakeUserPresets.json</code> files without a corresponding <code>CMakePresets.json</code> file. It was also a step towards the new installation instructions in the documentation. An example file for <code>CMakeUserPresets.json</code> including all compilers was provided. The base <code>CMakePresets.json</code> file included a special vendor configuration to hide base configurations from Visual Studio. <code>CMakeSettings.json</code> was deprecated.</li>
<li>Added complete <a href="https://github.com/cppalliance/mrdocs/commit/34912248fbbd006b163c6bd438e30ff52efc4fac">installation instructions</a> for the project. The instructions were adapted so that all commands are relative to an enclosing directory containing all the dependencies and MrDocs. Included instructions for all steps considering package managers, installed dependencies, or binaries provided on the project website. The CMake scripts were adapted to make it easier to build the project in a variety of environments according to the instructions.</li>
<li>Included a <a href="https://github.com/cppalliance/mrdocs/commit/2e554c8b636f31815fb80656717e910e097fbb77">polyfill implementation of <code>std::expected</code></a> as <code>mrdocs::Expected</code>. This implementation is currently being used by MrDocs and support libraries.</li>
<li><a href="https://github.com/cppalliance/mrdocs/commit/63ac382438b6fa78041210f67f0736d0977a924b">Refactored MrDocs generators</a> to use our custom <a href="#handlebars">C++ implementation</a> of the Handlebars template engine. Javascript helpers are loaded with <a href="https://duktape.org/">duktape</a> with our <a href="#javascript-bindings">Javascript Bindings</a> and compiled into functions in the Javascript engine.</li>
<li>Refactored the project name from MrDox to <a href="https://github.com/cppalliance/mrdocs/commit/12c027f4f1b449570ae58b601634b29f5fdbfd3f">MrDocs</a></li>
</ul>
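<p>One of the items above replaces exception-only error reporting with a <code>std::expected</code>-style type. As a rough, self-contained sketch of the pattern (this is not the actual <code>mrdocs::Expected</code> implementation, only a minimal illustration of the interface style):</p>

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <variant>

// Minimal illustration of a std::expected-style type for pre-C++23
// toolchains: the object holds either a value or an error, so callers
// can check for failure without try/catch.
template <class T, class E>
class Expected {
    std::variant<T, E> v_;
public:
    Expected(T value) : v_(std::move(value)) {}
    static Expected failure(E error) {
        Expected e{T{}};
        e.v_ = std::move(error);
        return e;
    }
    bool has_value() const { return v_.index() == 0; }
    explicit operator bool() const { return has_value(); }
    T& value() { return std::get<0>(v_); }
    E& error() { return std::get<1>(v_); }
};

// A fallible parse function returning Expected instead of throwing.
// (Range checking is omitted for brevity.)
Expected<int, std::string> parsePort(std::string const& s) {
    if (s.empty() || s.find_first_not_of("0123456789") != std::string::npos)
        return Expected<int, std::string>::failure("invalid port: " + s);
    return std::stoi(s);
}
```

The real library also provides throwing variants of fallible functions, so users can pick whichever style fits their codebase.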
<h3 id="handlebars">Handlebars</h3>
<p>MrDocs includes a support library that reimplements the <a href="https://handlebarsjs.com/">Handlebars</a> template engine in C++. This module is used to generate documentation from templates.</p>
<p>Over the last quarter, this is the MrDocs support library in which I invested most of my time. Although development only started in July, the library already supports all features from the original Handlebars test suite, including all mustache features. It is already integrated with MrDocs to generate the documentation for the project website.</p>
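<p>To illustrate the kind of transformation a template engine performs, here is a toy sketch that handles only <code>{{name}}</code> interpolation (the actual library implements the full Handlebars feature set, including blocks, partials, and helpers):</p>

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy illustration of mustache-style variable interpolation:
// replaces each "{{key}}" in the template with the value from
// the context map, emitting nothing for unknown keys.
std::string renderTemplate(
    std::string const& tmpl,
    std::map<std::string, std::string> const& ctx)
{
    std::string out;
    std::size_t pos = 0;
    while (pos < tmpl.size()) {
        std::size_t open = tmpl.find("{{", pos);
        if (open == std::string::npos) {
            out.append(tmpl, pos, std::string::npos);
            break;
        }
        out.append(tmpl, pos, open - pos);
        std::size_t close = tmpl.find("}}", open + 2);
        if (close == std::string::npos) {
            out.append(tmpl, open, std::string::npos);
            break;
        }
        std::string key = tmpl.substr(open + 2, close - open - 2);
        auto it = ctx.find(key);
        if (it != ctx.end())
            out += it->second;
        pos = close + 2;
    }
    return out;
}
```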
<ul>
<li><a href="https://github.com/cppalliance/mrdocs/commit/81a5b886d09999a0cd36e983349515e5d0ae6d27">Initial proposal</a> of the C++ Handlebars library.</li>
<li>Fixed and refactored code that relied on <a href="https://github.com/cppalliance/mrdocs/commit/353fe987825023c9a886411c76798e93e27adabb">references to temporaries</a> generated by the Dom.</li>
<li>Included support for <a href="https://github.com/cppalliance/mrdocs/commit/68491d0ee8a9d13088e2f0f96bd4aad6cfc78435">inverse blocks without helpers</a></li>
<li>All features, specs, and tests from the original Handlebars test suite were then ported to C++ and are now passing: <a href="https://github.com/cppalliance/mrdocs/commit/74fd1f357543e3097e58e1a1e5ed3992c918402b">basic specs</a>; <a href="https://github.com/cppalliance/mrdocs/commit/5a0409a032d99c2efef374f74f3e2cb7fc80d49c">partials and automatic indentation</a>; <a href="https://github.com/cppalliance/mrdocs/commit/7e5250ea35e5e5e30175d5cc533337739f3d46f9">whitespace control</a>; <a href="https://github.com/cppalliance/mrdocs/commit/95210f86884ff831d0a6a7d7f22aee05b6ec281b">block helpers, mustache blocks, and chained blocks</a>; <a href="https://github.com/cppalliance/mrdocs/commit/f3e686136212ade78c2b0573bb2e81b9edbc01bf">subexpressions</a>; <a href="https://github.com/cppalliance/mrdocs/commit/7510558dce3950059470d3ec9f11870f6b7354e2">builtin helpers</a>; <a href="https://github.com/cppalliance/mrdocs/commit/c1223af5f866351821f3dd69fa4ed7f0d6deb9a8">private data</a>; <a href="https://github.com/cppalliance/mrdocs/commit/8f674d5bb220f24539ec46430cae234ebeb832ee">helper formats</a>; <a href="https://github.com/cppalliance/mrdocs/commit/517d4d41e0456f134f79710326f2b05c0c213267">track-ids mode</a>; <a href="https://github.com/cppalliance/mrdocs/commit/deaa47c876e50e5018ac22ede69a79f35a611fa4">strict mode</a>; <a href="https://github.com/cppalliance/mrdocs/commit/d10a92142e7420d7af4656048e548ee314fd9ff9">util</a>; <a href="https://github.com/cppalliance/mrdocs/commit/f63df18387c3d2461ba8d2ae7dbe9b103cc69a10">mustache</a>. Many Handlebars features are undocumented in the original implementation, and our implementation was initially designed to handle only basic templates, so passing the full test suite involved multiple significant refactors and improvements to the entire codebase. The tests are a superset of the mustache specs, which are also passing.
The previously available <a href="https://github.com/cppalliance/mrdocs/commit/2f9fe70c30b0d8deeaf4bb104258802b6aa1f138">SafeString</a> type became a regular <code>dom::Value</code> with the implementation of the specs.</li>
<li>Support for <a href="https://github.com/cppalliance/mrdocs/commit/93a1bf991e4d754fdadb77dd3f1d175dfb77f60f"><code>dom::Function</code></a> in all components of the Handlebars engine instead of custom callback types. This also allows the engine context to contain functions that can work similarly to helpers. The engine callback object is passed as the last argument in a helper, similar to the Javascript implementation. Because most built-in helpers support variable arguments, a new <code>dom::Function</code> implementation type was provided to support this use case.</li>
<li>Support for <a href="https://github.com/cppalliance/mrdocs/commit/2e554c8b636f31815fb80656717e910e097fbb77">error handling via <code>Expected</code></a>. All functions that might fail have a variant that throws exceptions and another that returns an <code>Expected</code> value. Helper functions are also allowed to propagate errors via <code>mrdocs::Error</code>.</li>
<li>Fixed a <a href="https://github.com/cppalliance/mrdocs/commit/bf64028b4ec2cdb6cef34e53259fdd99bf199a98">bug</a> that caused MrDocs to emit <code>[object Object]</code> for <code>\n</code> after the <a href="https://github.com/cppalliance/mrdocs/commit/63ac382438b6fa78041210f67f0736d0977a924b">transition to C++ handlebars</a>.</li>
<li>The “find” helper was adjusted to <a href="https://github.com/cppalliance/mrdocs/commit/60d79ff116a4eb7532aef218860e48818c2b95e2">support strings, arrays, and objects</a>. This fixes a problem with the variants of similar but conflicting helpers that were categorized as container and string helpers.</li>
</ul>
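<p>The "engine callback object passed as the last argument" convention described above can be sketched as follows. The names here are hypothetical; the actual MrDocs helpers operate on <code>dom::Value</code> and <code>dom::Function</code> rather than these toy types:</p>

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of the "options object passed last" helper
// convention: a helper receives its (variable) positional arguments
// plus an options/callback object as the final parameter, mirroring
// the JavaScript Handlebars implementation.
struct HelperOptions {
    std::string name; // the name the helper was invoked under
};

using Helper = std::function<
    std::string(std::vector<std::string> const&, HelperOptions const&)>;

struct HelperRegistry {
    std::map<std::string, Helper> helpers;

    void registerHelper(std::string name, Helper h) {
        helpers[std::move(name)] = std::move(h);
    }

    // The engine builds the options object and appends it as the
    // last argument when invoking the helper.
    std::string invoke(
        std::string const& name,
        std::vector<std::string> const& args) const
    {
        return helpers.at(name)(args, HelperOptions{name});
    }
};
```

Because the options object is always last, helpers that accept a variable number of positional arguments remain easy to write.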
<h3 id="the-dom">The DOM</h3>
<p>MrDocs also includes a support library called “DOM” that provides a C++ interface to type-erased property trees, such as JSON and Javascript Objects. This module is used by MrDocs to create opaque representations of property trees that can be used by the Handlebars engine and other library functions. Such representations can come from a variety of sources, including JSON files, Javascript objects, and internal C++ objects with information parsed by MrDocs.</p>
<p>After completing the Handlebars implementation, I also included <a href="https://github.com/cppalliance/mrdocs/commit/cc9e397f25e9f969e0569a79327754bd2e6b26fa">complete unit tests</a> for the DOM. Tests were included for all DOM types and many bugs have been fixed.</p>
<p>All classes have been documented to reflect their intended behavior, which is loosely modeled after JavaScript types and data structures. The APIs have also been adjusted to be safer and more consistent with the model for reference types. Objects and Value types received functions for nested object lookup and objects were generalized to support non-enumerable properties and Javascript bindings.</p>
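<p>The idea of a type-erased property tree with nested lookup can be illustrated with a minimal value type. This is only a sketch of the concept, not the actual <code>dom::Value</code> API, which supports more kinds, lazy evaluation, and non-enumerable properties:</p>

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <utility>
#include <variant>
#include <vector>

// Minimal sketch of a type-erased property-tree value: a tagged
// union over null, boolean, number, string, and object, with
// nested lookup via a dotted path such as "a.b".
class Value {
public:
    using Object = std::vector<std::pair<std::string, Value>>;

private:
    std::variant<std::nullptr_t, bool, double, std::string, Object> v_;

public:
    Value() : v_(nullptr) {}
    Value(bool b) : v_(b) {}
    Value(double d) : v_(d) {}
    Value(std::string s) : v_(std::move(s)) {}
    Value(Object o) : v_(std::move(o)) {}

    bool isObject() const { return std::holds_alternative<Object>(v_); }
    double asNumber() const { return std::get<double>(v_); }

    // Nested lookup: find("a.b") looks up key "a", then "b" inside it.
    Value const* find(std::string const& path) const {
        Value const* cur = this;
        std::size_t pos = 0;
        while (true) {
            std::size_t dot = path.find('.', pos);
            std::string key = path.substr(pos, dot - pos);
            if (!cur->isObject())
                return nullptr;
            auto const& obj = std::get<Object>(cur->v_);
            auto it = std::find_if(obj.begin(), obj.end(),
                [&](auto const& kv) { return kv.first == key; });
            if (it == obj.end())
                return nullptr;
            cur = &it->second;
            if (dot == std::string::npos)
                return cur;
            pos = dot + 1;
        }
    }
};
```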
<h3 id="javascript-bindings">Javascript Bindings</h3>
<p>MrDocs includes a support library that wraps <a href="https://duktape.org/">duktape</a> to provide a C++ interface to the Javascript engine. This module is used by the MrDocs executable to evaluate user-defined helpers.</p>
<p>I also included <a href="https://github.com/cppalliance/mrdocs/commit/d5b7b3d1bf983cde57619314e49681e3c73c1a02">unit tests</a> for Javascript wrapper and bindings.</p>
<p>Throughout the process:</p>
<ul>
<li>the implementation was completed for classes that were placeholders, and existing bugs were fixed;</li>
<li>the API was documented;</li>
<li>the Javascript <code>Scope</code> object was extended to support all alternative syntaxes for evaluating expressions;</li>
<li>value types gained support for integers and floating-point numbers;</li>
<li>value types gained support for all <code>dom::Value</code> operations using the native <code>duktape</code> functions;</li>
<li>classes were provided to wrap Javascript Objects, Arrays, and Functions as <a href="#the-dom">DOM</a> values; and</li>
<li><code>Scope</code> functions that might fail were adjusted to return <code>Expected</code> values.</li>
</ul>
<h3 id="unit-tests">Unit Tests</h3>
<p>MrDocs includes a support library for unit tests. The library was initially adapted from the Boost.URL unit tests and extended to support the needs of MrDocs.</p>
<p>I had previously implemented a smaller system for the Handlebars unit tests, which was then <a href="https://github.com/cppalliance/mrdocs/commit/e14fe087c2ecb7884f0af21b94ce34414506b3ef">integrated with the Boost.URL test suite library</a>. Features from the Handlebars test suite library were ported to the Boost.URL test suite library, including the expression decomposer and the diff algorithm for golden-master tests.</p>
<p>With this integration, Handlebars tests were then listed among any other tests in the library. These tests were later complemented with regular MrDocs tests.</p>
<p>The decomposer was later improved to support <a href="https://github.com/cppalliance/mrdocs/commit/831a691957de8266788dd42b3a4c1116c8f46505">integral comparison operators</a>.</p>
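<p>An expression decomposer is the mechanism that lets a test macro report both operands of a failing comparison. The core technique, used by several C++ test frameworks, can be sketched as follows (simplified; the actual test suite library records failures rather than returning strings):</p>

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Sketch of the expression-decomposer technique: a seed object
// captures the left operand via a low-precedence operator, and the
// overloaded comparison then sees both sides, so a failing assertion
// can report the actual values of "lhs" and "rhs".
template <class L>
struct Lhs {
    L lhs;
    template <class R>
    std::string operator==(R const& rhs) const {
        std::ostringstream os;
        os << lhs << (lhs == rhs ? " == " : " != ") << rhs;
        return os.str();
    }
};

struct Decomposer {
    template <class L>
    Lhs<L> operator<=(L const& value) const { return Lhs<L>{value}; }
};

// DECOMPOSE(a == b) expands to (Decomposer() <= a == b), which parses
// as ((Decomposer() <= a) == b) because relational <= binds tighter
// than equality ==, so both operands are captured.
#define DECOMPOSE(expr) (Decomposer() <= expr)
```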
<h2 id="boost-website">Boost Website</h2>
<p>In this last quarter, the Boost website went beta on https://www.preview.boost.org/. Among the many support projects for the website, I’ve been helping the most on the <a href="https://github.com/cppalliance/site-docs"><code>cppalliance/site-docs</code></a>, which includes the Boost website documentation as an Antora project. Its components represent the “User Guide”, “Contributor Guide”, and “Formal Review” sections of the website.</p>
<p>Since the inception of the project, I’ve been overseeing and reviewing all the work done by the other contributors to the project. I’ve also been responsible for:</p>
<ul>
<li>setting up and maintaining CI for the project;</li>
<li>coordinating with <a href="https://github.com/cppalliance/temp-site"><code>cppalliance/temp-site</code></a> on content uploaded to AWS buckets;</li>
<li>developing build scripts reused by the release tools and previews;</li>
<li>writing sections of the documentation that require technical knowledge;</li>
<li>developing custom Boost/Antora extensions, such as the Boost Macro extension;</li>
<li>maintaining the Antora toolchain and templates; and</li>
<li>adjusting Boost libraries to match formats expected by the website.</li>
</ul>
<p>The Antora <a href="https://github.com/cppalliance/site-docs/commit/aefae2a6062cc19a731e007bc28c275180e290fd">playbooks were recently adjusted</a> to initially contain no content sources, now that the Antora-enabled build process, which I also implemented, has been deployed in the official Boost release process.</p>
<h2 id="boost-libraries">Boost Libraries</h2>
<p>As in other quarters, the Boost Library in which I have been investing most of my time is <a href="https://github.com/boostorg/url">Boost.URL</a>, since it’s our most recently accepted library. The library is in maintenance mode since our focus shifted to MrDocs, but considering how recent it is, there is a constant demand for work fixing bugs and improving the documentation. In Boost.URL, I’ve been responsible for:</p>
<ul>
<li>upgrading CI, mostly coordinating with the <a href="#c-github-actions">C++ Github Actions</a>;</li>
<li>maintaining, simplifying, and updating build scripts;</li>
<li>integrating more spec tests, such as the Ada tests included more recently;</li>
<li>including more examples, such as the more recent sanitize-URL example;</li>
<li>fixing documentation content that is out of date; and</li>
<li>fixing bugs.</li>
</ul>
<p>Besides bug fixes, the library was recently refactored to remove what was previously called a “header-only” mode and to deprecate several aliases, both of which caused some small friction this last quarter. These are some of the highlights of the work done in the last quarter:</p>
<ul>
<li>Extended <a href="https://github.com/boostorg/url/commit/516e0093c55271a6ec9b9f271292fc29bcd586cd">fuzz testing</a>. <code>Fuzz</code> was included as a new clang factor in CI. The process was adjusted so that the corpus is properly reused from and stored in GitHub action archives. CMake scripts were refactored to include CMake options that control the fuzzer parameters. Fuzzers were included for each of the grammar rules for URLs.</li>
<li><a href="https://github.com/boostorg/url/commit/f2bb191b902ab63fa2207c64cfe273bd516a719d">Support <code>IP-literal</code> as <code>IPv6addrz</code></a>. This is an <a href="https://github.com/boostorg/url/issues/711">issue</a> where a valid <code>IPv6addrz</code> wasn’t being considered an <code>IP-literal</code>. IPv6addrz includes a <code>ZoneID</code> at the end, delimited by an encoded <code>"%25"</code>. The <code>ipv6_address</code> class is unmodified, as the mapping from the <code>ZoneID</code> to a <code>std::uint32_t</code> is dependent on the application context. The original <code>ZoneID</code> can be obtained from the url_view but the library is agnostic about it.</li>
<li>Included <a href="https://github.com/boostorg/url/commit/f3fe229c9d349d06083f9cdf1ae163b84b1ad1d8">GDB pretty printers</a> and documentation. All available URL components are now pretty printed in GDB. A developer mode was also included which prints the URL components in a format corresponding to the internal URL string offsets.</li>
<li>Updated the <a href="https://github.com/boostorg/url/commit/7e47e9fef6fecce45f7c65277601b7e7ff38c365">content of both the documentation and README.adoc</a> so that they match current best practices. The documentation in README.md contained dated and incorrect information, while the quickbook documentation was missing important information and contained bad practices.</li>
</ul>
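<p>The <code>ZoneID</code> handling described above can be illustrated with a small sketch that splits an <code>IPv6addrz</code> host into its address and zone parts at the encoded <code>%25</code> delimiter from RFC 6874. This is an illustration only; Boost.URL performs full grammar validation on both parts:</p>

```cpp
#include <cassert>
#include <string>
#include <utility>

// Illustration of splitting an IPv6addrz host at the encoded "%25"
// delimiter (RFC 6874): "fe80::1%25eth0" has the address "fe80::1"
// and the zone identifier "eth0". A real parser must also validate
// the address and zone grammar.
std::pair<std::string, std::string>
splitZone(std::string const& host)
{
    std::size_t pos = host.find("%25");
    if (pos == std::string::npos)
        return {host, ""}; // no ZoneID present
    return {host.substr(0, pos), host.substr(pos + 3)};
}
```

As the report notes, mapping the zone string to a numeric interface index is application-specific, so the library leaves the <code>ZoneID</code> uninterpreted.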
<p>Some relevant bug fixes were:</p>
<ul>
<li>Enforced that appropriate CMake BOOST_INCLUDE_LIBRARIES are set <a href="https://github.com/boostorg/url/commit/d0746ebf941230d0c8a535859da2a0f7e6a747ca">according to the build options</a>. The previous implementation included these extra libraries whenever they were available, which caused problems for other libraries that depended on Boost.URL via <code>depinst.py</code>.</li>
<li>Fix and include unit-tests for issues <a href="https://github.com/boostorg/url/issues/755">#755</a> and <a href="https://github.com/boostorg/url/issues/446">#446</a></li>
<li>We <a href="https://github.com/boostorg/url/commit/20ab896ffede3c4ac9cbfb6740e6e97f321ccd87">disabled</a> and <a href="https://github.com/boostorg/url/commit/cac7c200e28e3559d4ea9ea43033bda5e8f66c39">re-enabled</a> drone caching. This is related to a bug where Drone would attempt to cache the <code>b2</code> binary, which would cause conflicts. This was fixed with <a href="https://github.com/boostorg/boost-ci/pull/213">PR #213 in boostorg/boost-ci</a>.</li>
<li>Remove quickbook <a href="https://github.com/boostorg/url/commit/b156eb230193e5f2d79980812106872f2d71c535">references to variant</a>. Links and references to <code>variant</code> in both the <code>.qbk</code> and <code>.xml</code> files were removed as the <code>variant</code> alias had been deprecated in <a href="https://github.com/boostorg/url/commit/96438f683e09e20183fab1b6059fa7f1b0ffe67d">96438f6</a>.</li>
<li>Updated <a href="https://github.com/boostorg/url/commit/705554ca127cc1deb5d66efcdbd16cc593e31950">javadoc deprecated references</a>. This fixed a mistake where the Javadoc for many deprecated aliases included references to the deprecated alias <code>boost::core::string_view</code> instead of the correct deprecated aliases <code>boost::optional</code>, <code>system::error_category</code>, <code>system::error_code</code>, <code>system::error_condition</code>, <code>system::system_error</code>, and <code>system::result</code>.</li>
<li>Replaced <code>@ref</code> prefixes <a href="https://github.com/boostorg/url/commit/3db1407cc9d792c1e192b401d31c2c12f607ec25">with backtick for references</a> in javadocs. This caused an issue in the documentation since several aliases were deprecated in favor of symbols from Boost.Core.</li>
<li><a href="https://github.com/boostorg/url/commit/a1181275d02a0a4c6ab8147354f752ec36e1dd98">Refactored variable name in url::set_params</a>. This issue was causing an error in the documentation for <code>url_base::set_params</code>.</li>
<li><a href="https://github.com/boostorg/url/commit/0ca58467a472c72e84405a7d991201f58ffdf327">Fix bug</a> where the <code>url_view</code>/<code>string_view</code> constructor would require a non-<code>url_view_base</code> as input. This <a href="https://github.com/boostorg/url/issues/756">caused the constructor to reparse the <code>string_view</code></a> from an already parsed <code>url_view_base</code>.</li>
<li><a href="https://github.com/boostorg/url/commit/c97bc2782cdd9b343ede1492863a672805c255cd">Fix bug</a> where <code>parse_query</code> would recreate <code>std::string_view</code>s. This would make the query <a href="https://github.com/boostorg/url/issues/757">include values in the underlying std::string beyond the expected string_view</a>.</li>
<li>Updated <a href="https://github.com/boostorg/website/commit/8c39fd223c5c9f74f0d70d611c35360415d862da">changelog</a> for boost release.</li>
</ul>
<p>Besides Boost.URL, as usual, I’ve been overseeing and fixing smaller issues with other Boost libraries, such as Boost.Docca and Boost.StaticString, and helping with libraries by other contributors when asked for assistance, as in a recent case with Boost.Outcome.</p>
<p>In particular, we had to fix a smaller issue in Boost.Docca that was also affecting Boost.URL. The issue involved Boost.Docca’s dependence on a deprecated version of Doxygen that is no longer supported by the Boost toolchain.</p>
<h2 id="boost-release-tools">Boost Release Tools</h2>
<p>Over the last quarter, I’ve been working on the integration of toolchains I developed into the Boost Release Tools to add support for features desired for the new website.</p>
<p>Some of the highlights of the work done in the last quarter:</p>
<ul>
<li>Introduced support for libraries with Antora documentation <a href="https://github.com/boostorg/release-tools/commit/66670dfcf4d1ac69a963aa74cd9c06ffade73d58">into the official Boost release process</a>. Deployed new docker containers that include NodeJS, Gulp, and the Antora toolchain. With this enhancement, each library can now function as an Antora component within an Antora master project hosted in a separate repository (https://github.com/cppalliance/site-docs). This master project repository also contains additional components, such as the user guide, contributor guide, and a dedicated component for the review process. In a subsequent phase of the release process, this Antora documentation is seamlessly merged with the pre-existing in-source documentation, which has been generated using various other tools. When a library is “Antora-enabled”, the release process automatically generates its Antora documentation and publishes it with the documentation of the other libraries. No <code>b2</code> scripts are required to generate the documentation for the library. All Antora-enabled libraries use the same master Antora UI template that matches the design of the Boost website. Antora <a href="https://github.com/cppalliance/site-docs/commit/aefae2a6062cc19a731e007bc28c275180e290fd">playbooks were adjusted</a> to initially contain no content sources, now that the Antora-enabled build process is deployed in the official Boost release process.</li>
<li>Deployed new container <a href="https://github.com/boostorg/boost/commit/30f0ef1de2d8f205502d2a557ee0c9cb5a3b4708">to <code>boostorg/boost</code></a> to support the new release process.</li>
<li>New <a href="https://github.com/boostorg/release-tools/pull/52">archive variants</a> for Boost. Added extra archive variants such as <code>boost-docs</code> and <code>boost-source</code>. These variants can reduce JFrog download-bandwidth expenses, provide users with archives that are simpler to use, and provide docs-only archives for the website. The new MakeBoostDistro.py script includes parameters to determine which types of files should be included in the distribution. All other functions were adapted to handle these requirements accordingly. Switching to source-only downloads would save Boost $1000 per month.</li>
</ul>
<h2 id="c-github-actions">C++ Github Actions</h2>
<p><a href="https://github.com/alandefreitas/cpp-actions">C++ Github Actions</a> is a project I created and have been maintaining since the second quarter of the year. It is a collection of reusable Github Actions for any C++ project that needs to be tested on a variety of compilers. Both MrDocs and Boost.URL currently use these actions in their CI.</p>
<p>The project includes actions to:</p>
<ul>
<li>Generate a Github Actions Matrix for C++ projects;</li>
<li>Setup C++ compilers;</li>
<li>Install and setup packages;</li>
<li>Clone Boost modules;</li>
<li>Run complete CMake and <code>b2</code> workflows;</li>
<li>Generate changelogs from conventional commits;</li>
<li>Generate summaries; and</li>
<li>Generate time-trace reports and flame graphs</li>
</ul>
<p>These actions include a myriad of options and features.</p>
<ul>
<li>The “setup-*” actions include logic to detect, install, and cache dependencies, which can be used by the CMake and <code>b2</code> actions.</li>
<li>Individual options and actions attempt to set up a wide variety of compilers on different platforms, including MSVC, GCC, Clang, MinGW, AppleClang, and Clang-CL.</li>
<li>Actions that generate reports include a multitude of tools and options to analyze changes, time traces, and coverage.</li>
</ul>
<p>Since then, these actions have been adapted as needed to support the needs of MrDocs and Boost.URL, which have also been using conventional commits. Here’s a recent summary report generated for Boost.URL by the CI workflow: https://github.com/boostorg/url/actions/runs/6512424067</p>
<p>The project documentation also uses the Antora UI template we have been maintaining for all other projects: https://alandefreitas.github.io/cpp-actions</p>
<h2 id="gray-box-local-search">Gray-box local search</h2>
<p>On 13 September 2023, the <a href="https://link.springer.com/article/10.1007/s00500-023-09129-1">following paper</a> I co-authored was published:</p>
<pre><code>Lopes, R.A., Freitas, A.R.R.
Gray-box local search with groups of step sizes.
Soft Computing.
p. 1-14
2023
https://doi.org/10.1007/s00500-023-09129-1
</code></pre>
<p>The paper was accepted on 18 August 2023 and published on 13 September 2023.</p>
<p>While the paper is more aligned with my educational background than my daily C++ Alliance tasks, it is one more paper that carries the C++ Alliance in the affiliation, contributing to its reputation.</p>Summary MrDocs Handlebars The DOM Javascript Bindings Unit tests Boost Website Boost Release Tools Boost Libraries C++ Github Actions Gray-box local search MrDocs Over the last quarter, we have been working intensely on MrDocs, a documentation generator for C++ projects. I’ve been overseeing and reviewing all the work done by the other contributors in the project. I’ve also been responsible for: setting up and maintaining CI for the project; MrDocs and LLVM release binaries; build scripts; setting up and integrating dependencies; setting up and deploying the Antora toolchains and documentation to the project website; working on supporting libraries; and fixing bugs. These are some of the highlights of the work done in the last quarter: Refactor library layout and scripts so all the implementation files are in src Automated deployment of demo documentation to the project website. The website is updated on every commit to the develop or master branches. The action uses the actual release package to generate the documentation. This PR involved fixing all scripts and directory layouts for the release package to be properly usable. The new project layout uses the FHS standard. CMake modules to create a target to generate the documentation for projects were included in the installation. The MrDocs executable and libraries are installed as separate CMake package components. Any usage of FetchContent has been removed from the CMake scripts. The documentation was refactored to reflect the changes. Included complete instructions in the documentation pages for installation and usage, describing all commands and options. Deployed new LLVM binaries to the website and updated CI. 
The binaries were regenerated for all platforms in 4 modes: Debug, Release, RelWithDebInfo, and DebWithOpt. This involved fixing long-standing bugs related to conflicts in LLVM versions used by the project and providing new pre-built binaries on the website. In particular, the previous pre-built binaries used a special ReleaseWithDebInfo LLVM configuration that caused conflicts with MrDocs on MSVC, being used with a Debug CMake configuration variant by developers. This eliminated the need for the ad-hoc GitHub LLVM binaries release and for the special docker container we had been using so far. Added support for CMakePresets. This allowed us to simplify the build process for MrDocs and its dependencies, which was previously counting on long command line commands and CMakeUserPresets.json files without a corresponding CMakePresets.json file. It was also a step towards the new installation instructions in the documentation. An example file for CMakeUserPresets.json including all compilers was provided. The base CMakePresets.json file included a special vendor configuration to hide base configurations from Visual Studio. CMakeSettings.json was deprecated. Added complete installation instructions for the project. The instructions were adapted so that all commands are relative to an enclosing directory containing all the dependencies and MrDocs. Included instructions for all steps considering package managers, installed dependencies, or binaries provided on the project website. The CMake scripts were adapted to make it easier to build the project in a variety of environments according to the instructions. Included a polyfill implementation of std::expected as mrdocs::Expected. This implementation is currently being used by MrDocs and support libraries. Refactored MrDocs generators to use our custom C++ implementation of the Handlebars template engine. 
Javascript helpers are loaded with duktape with our Javascript Bindings and compiled into functions in the Javascript engine. Refactored the project name from MrDox to MrDocs Handlebars MrDocs includes a support library that reimplements the Handlebars template engine in C++. This module is used to generate documentation from templates. Over the last quarter, this is the MrDocs support library in which I have been investing most of my time. All the development started in this same quarter in July. It already supports all features from the original Handlebars test suite, including all mustache features. The library is already integrated with MrDocs to generate the documentation for the project website. Initial proposal of the C++ Handlebars library. Fixed and refactored code that relied on references to temporaries generated by the Dom. Included support for inverse blocks without helpers All features, specs, and tests from the original Handlebars test suite were then ported to C++ and are now passing: basic specs; partials and automatic indentation; whitespace control; block helpers, mustache blocks, and chained blocks; subexpressions; builtin helpers; private data; helper formats; track-ids mode; strict mode; util; mustache. As many handlebars features were undocumented in the original implementation, adjusting our handlebars implementation, which was only designed to handle basic templates, to pass all tests from the original Handlebars test suite involved multiple significant refactors and improvements to the entire codebase. The tests are a superset of the mustache specs, which are also passing. The previously available SafeString type because a regular dom::Value with the implementation of the specs. Support for dom::Function in all components of the Handlebars engine instead of custom callback types. This also allows the engine context to contain functions that can work similarly to helpers. 
The engine callback object is passed as the last argument in a helper, similar to the Javascript implementation. Because most built-in helpers support variable arguments, a new dom::Function implementation type was provided to support this use case. Support for error handling via Expected. All functions that might fail have a variant that throws exceptions and another that returns an Expected value. Helpers functions are also allowed to propagate errors via mrdocs::Error. Fixed a bug that caused MrDocs to emit [object Object] for \n after the transition to C++ handlebars. The “find” helper was adjusted to support strings, arrays, and objects. This fixes a problem with the variants of similar but conflicting helpers that were categorized as container and string helpers. The DOM MrDocs also includes a support library called “DOM” that provides a C++ interface to type-erased property trees, such as JSON and Javascript Objects. This module is used by MrDocs to create opaque representations of property trees that can be used by the Handlebars engine and other library functions. Such representations can come from a variety of sources, including JSON files, Javascript objects, and internal C++ objects with information parsed by MrDocs. After completing the Handlebars implementation, I also included complete unit tests for the DOM. Tests were included for all DOM types and many bugs have been fixed. All classes have been documented to reflect their intended behavior, which is loosely modeled after JavaScript types and data structures. The APIs have also been adjusted to be safer and more consistent with the model for reference types. Objects and Value types received functions for nested object lookup and objects were generalized to support non-enumerable properties and Javascript bindings. Javascript Bindings MrDocs includes a support library that wraps duktape to provide a C++ interface to the Javascript engine. 
This module is used by the MrDocs executable to evaluate user-defined helpers. I also included unit tests for Javascript wrapper and bindings. Throughout the process, the implementation was completed for classes that were placeholders and existing bugs have been fixed the API was documented The javascript Scope object was extended to support all types of alternative syntax to evaluate expressions. Value types: support for integers and floating point numbers. Value types: support for all dom::Value operations using the native duktape functions Provided classes to wrap javascript Objects, Arrays, and Functions as DOM values Scope functions that might fail were adjusted to return Expected values Unit Tests MrDocs includes a support library for unit tests. The library was initially adapted from the Boost.URL unit tests and extended to support the needs of MrDocs. I had previously implemented a smaller system for the Handlebars unit tests which was then integrated with the boost.url test suite library. Features from the handlebars test suite library were ported to the boost.url test suite library, including the expression decomposer and the diff algorithm for golden master tests. With this integration, Handlebars tests were then listed among any other tests in the library. These tests were later complemented with regular MrDocs tests. The decomposer has later been improved for integral comparison operators. Boost Website In this last quarter, the Boost website went beta on https://www.preview.boost.org/. Among the many support projects for the website, I’ve been helping the most on the cppalliance/site-docs, which includes the Boost website documentation as an Antora project. Its components represent the “User Guide”, “Contributor Guide”, and “Formal Review” sections of the website. Since the inception of the project, I’ve been overseeing and reviewing all the work done by the other contributors to the project. 
<p>I’ve also been responsible for:</p>

<ul>
<li>setting up and maintaining CI for the project;</li>
<li>coordinating with cppalliance/temp-site on the content uploaded to AWS buckets;</li>
<li>maintaining build scripts that are reused by the release tools and previews;</li>
<li>writing the sections of the documentation that require technical knowledge;</li>
<li>developing custom Boost/Antora extensions, such as the Boost Macro extension;</li>
<li>maintaining the Antora toolchain and templates; and</li>
<li>adjusting Boost libraries to match the formats expected by the website.</li>
</ul>

<p>The Antora playbooks were recently adjusted to initially contain no content sources, now that the Antora-enabled build process, which I also implemented, has been deployed in the official Boost release process.</p>

<h2 id="boost-libraries">Boost Libraries</h2>

<p>As in other quarters, the Boost library in which I have been investing most of my time is Boost.URL, since it’s our most recently accepted library. The library has been in maintenance mode since our focus shifted to MrDocs but, considering how recent it is, there is constant demand for fixing bugs and improving the documentation.</p>

<p>In Boost.URL, I’ve been responsible for:</p>

<ul>
<li>upgrading CI, mostly in coordination with the <a href="#c-github-actions">C++ Github Actions</a> project;</li>
<li>maintaining, simplifying, and updating the build scripts;</li>
<li>integrating more spec tests, such as the recently included Ada tests;</li>
<li>including more examples, such as the recent sanitize-URL example;</li>
<li>fixing documentation content that is out of date; and</li>
<li>fixing bugs.</li>
</ul>

<p>Besides bugs, the library was recently refactored to remove what was previously called a “header-only” mode, and several aliases were deprecated, which caused some small friction in this last quarter.</p>

<p>These are some of the highlights of the work done in the last quarter:</p>

<ul>
<li>Extended fuzz testing. Fuzzing was included as a new clang factor in CI. The process was adjusted so that the corpus is properly reused from, and stored in, GitHub Actions archives. The CMake scripts were refactored to include CMake options that control the fuzzer parameters.</li>
</ul>
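<p>A fuzzer for a parsing rule follows the usual libFuzzer pattern: an entry point that feeds arbitrary bytes to the parser and lets the sanitizers catch misbehavior. A minimal self-contained sketch, with a toy stand-in parser rather than an actual Boost.URL grammar rule, assuming clang’s <code>-fsanitize=fuzzer</code> supplies the driver:</p>

```cpp
#include <cstddef>
#include <cstdint>
#include <string_view>

// Toy stand-in for a grammar rule (dotted decimal shape only); the real
// fuzzers call the corresponding Boost.URL parse functions instead.
static bool parse_ipv4_dotted(std::string_view s) {
    int dots = 0;
    int digits = 0;
    for (char c : s) {
        if (c == '.') {
            if (digits == 0) return false; // empty segment
            ++dots;
            digits = 0;
        } else if (c >= '0' && c <= '9') {
            if (++digits > 3) return false; // segment too long
        } else {
            return false;                   // invalid character
        }
    }
    return dots == 3 && digits > 0;
}

// libFuzzer entry point: one harness like this per grammar rule.
extern "C" int LLVMFuzzerTestOneInput(std::uint8_t const* data,
                                      std::size_t size) {
    parse_ipv4_dotted(
        std::string_view(reinterpret_cast<char const*>(data), size));
    return 0; // inputs that do not crash are simply discarded
}
```

<p>Wiring the corpus through CI archives, as described above, lets each run resume from the inputs previous runs found interesting instead of starting from scratch.</p>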
<p>Fuzzers were included for each of the grammar rules for URLs.</p>

<ul>
<li>Support IP-literal as IPv6addrz. This is an issue where a valid IPv6addrz wasn’t being considered an IP-literal. An IPv6addrz includes a ZoneID at the end, delimited by the encoded <code>"%25"</code>. The <code>ipv6_address</code> class is unmodified, as the mapping from the ZoneID to a <code>std::uint32_t</code> is dependent on the application context. The original ZoneID can be obtained from the <code>url_view</code>, but the library is otherwise agnostic about it.</li>
<li>Included GDB pretty printers and documentation. All available URL components are now pretty-printed in GDB. A developer mode was also included, which prints the URL components in a format corresponding to the internal URL string offsets.</li>
<li>Updated the content of both the documentation and README.adoc so that they match current best practices. The documentation in README.md contained dated and incorrect information, while the quickbook documentation was missing important information and contained bad practices.</li>
</ul>

<p>Some relevant bug fixes were:</p>

<ul>
<li>Enforced that the appropriate CMake <code>BOOST_INCLUDE_LIBRARIES</code> are set according to the build options. The previous implementation included these extra libraries whenever they were available, which caused problems for other libraries that depended on Boost.URL via depinst.py.</li>
<li>Fixed, and included unit tests for, issues #755 and #446.</li>
<li>Disabled and then re-enabled Drone caching. This is related to a bug where Drone would attempt to cache the b2 binary, which would cause conflicts. This was fixed with PR #123 in boostorg/boost-ci.</li>
<li>Removed quickbook references to <code>variant</code>. Links and references to <code>variant</code> in both the .qbk and .xml files were removed, as the <code>variant</code> alias had been deprecated in 96438f6.</li>
<li>Updated javadoc deprecated references.</li>
</ul>
<p>This fixed a mistake where the Javadoc for many deprecated aliases included references to the deprecated alias <code>boost::core::string_view</code> instead of the correct deprecated aliases <code>boost::optional</code>, <code>system::error_category</code>, <code>system::error_code</code>, <code>system::error_condition</code>, <code>system::system_error</code>, and <code>system::result</code>.</p>

<ul>
<li>Replaced <code>@ref</code> prefixes with backticks for references in javadocs. This was causing an issue in the documentation, since several aliases had been deprecated in favor of symbols from Boost.Core.</li>
<li>Refactored a variable name in <code>url::set_params</code>. This issue was causing an error in the documentation for <code>url_base::set_params</code>.</li>
<li>Fixed a bug where the <code>url_view</code>/<code>string_view</code> constructor would require a non-<code>url_view_base</code> as input. This caused the constructor to reparse the <code>string_view</code> from an already parsed <code>url_view_base</code>.</li>
<li>Fixed a bug where <code>parse_query</code> would recreate <code>std::string_view</code>s. This would make the query include values in the underlying <code>std::string</code> beyond the expected <code>string_view</code>.</li>
<li>Updated the changelog for the Boost release.</li>
</ul>

<p>Besides Boost.URL, as usual, I’ve been overseeing and fixing smaller issues in other Boost libraries, such as Boost.Docca and Boost.StaticString, and helping with libraries by other contributors when asked for assistance, as in a more recent case with Boost.Outcome. In particular, we had to fix a smaller issue in Boost.Docca that was also affecting Boost.URL. The issue involved Boost.Docca’s dependence on a deprecated version of Doxygen that is no longer supported by the Boost toolchain.</p>

<h2 id="boost-release-tools">Boost Release Tools</h2>

<p>Over the last quarter, I’ve been working on integrating toolchains I developed into the Boost Release Tools to add support for features desired for the new website.</p>

<p>Some of the highlights of the work done in the last quarter:</p>

<ul>
<li>Introduced support for libraries with Antora documentation into the official Boost release process. Deployed new docker containers that include NodeJS, Gulp, and the Antora toolchain.</li>
</ul>
<p>With this enhancement, each library can now function as an Antora component within an Antora master project hosted in a separate repository (<a href="https://github.com/cppalliance/site-docs">https://github.com/cppalliance/site-docs</a>). This master project repository also contains additional components, such as the user guide, the contributor guide, and a dedicated component for the review process. In a subsequent phase of the release process, this Antora documentation is seamlessly merged with the pre-existing in-source documentation, which is generated using various other tools.</p>

<p>When a library is “Antora-enabled”, the release process will automatically generate the Antora documentation and publish it with the documentation of the other libraries. No b2 scripts are required to generate the documentation for the library. All Antora-enabled libraries use the same master Antora UI template that matches the design of the Boost website. The Antora playbooks were adjusted to initially contain no content sources, now that the Antora-enabled build process is deployed in the official Boost release process.</p>

<ul>
<li>Deployed a new container to boostorg/boost to support the new release process.</li>
<li>New archive variants for Boost. Added extra archive variants, such as boost-docs and boost-source. These variants can reduce expenses with JFrog download bandwidth, provide users with archives that are simpler to use, and provide docs-only archives for the website. The new MakeBoostDistro.py script includes parameters to determine which types of files should be included in the distribution. All other functions are adapted to handle these requirements accordingly. Switching to source-only downloads would save Boost $1000 per month.</li>
</ul>

<h2 id="c-github-actions">C++ Github Actions</h2>

<p>C++ Github Actions is a project I created and have been maintaining since the second quarter of the year. It is a collection of reusable Github Actions for any C++ project that needs to be tested on a variety of compilers. Both MrDocs and Boost.URL are currently using these actions in their CI.</p>
<p>The project includes actions to:</p>

<ul>
<li>generate a Github Actions matrix for C++ projects;</li>
<li>set up C++ compilers;</li>
<li>install and set up packages;</li>
<li>clone Boost modules;</li>
<li>run complete CMake and b2 workflows;</li>
<li>generate changelogs from conventional commits;</li>
<li>generate summaries; and</li>
<li>generate time-trace reports and flame graphs.</li>
</ul>

<p>These actions include a myriad of options and features. The “setup-*” actions include logic to detect, install, and cache dependencies, which can be used by the CMake and b2 actions. Individual options and actions attempt to set up a wide variety of compilers on different platforms, including MSVC, GCC, Clang, MinGW, AppleClang, and Clang-CL. Actions that generate reports include a multitude of tools and options to analyze changes, time traces, and coverage.</p>

<p>Since then, these actions have been adapted as needed to support the needs of MrDocs and Boost.URL, which have also been using conventional commits. Here’s a recent summary report generated for Boost.URL by the CI workflow: <a href="https://github.com/boostorg/url/actions/runs/6512424067">https://github.com/boostorg/url/actions/runs/6512424067</a></p>

<p>The project documentation also uses the Antora UI template we have been maintaining for all the other projects: <a href="https://alandefreitas.github.io/cpp-actions">https://alandefreitas.github.io/cpp-actions</a></p>

<h2 id="gray-box-local-search">Gray-box local search</h2>

<p>On 13 September 2023, the following paper I co-authored was published:</p>

<p>Lopes, R.A., Freitas, A.R.R. Gray-box local search with groups of step sizes. Soft Computing, p. 1–14, 2023. <a href="https://doi.org/10.1007/s00500-023-09129-1">https://doi.org/10.1007/s00500-023-09129-1</a></p>

<p>The paper was accepted on 18 August 2023 and published on 13 September 2023. While the paper is more aligned with my educational background than my daily C++ Alliance tasks, it is one more paper that carries the C++ Alliance affiliation, contributing to its reputation.</p>