✨ Potentially refactor compute tables in DD package #376

@burgholzer

Description

What's the problem this feature will solve?

At the moment, each individual decision diagram operation has its own compute table. On the one hand, this is neat, since it allows fine-tuning the sizes of the individual compute tables. On the other hand, it adds considerable complexity across the package. The literature on BDD packages and the history of this DD package show that this is frequently handled differently: there is a single compute table, and the operation being performed is factored into the hash during lookup.
For BDDs, this is a no-brainer: since there is only one type of operand, it is fairly straightforward to devise an efficient compute table data structure. Typically, even unary operations are handled in the same table as binary operations.
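Conceptually, such a unified table keys each cached entry on the triplet (operation, operand1, operand2) and re-checks the operation kind on lookup. A minimal sketch of the idea, assuming a single node type as in the BDD case (all names here are hypothetical, not the package's actual API):

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Illustrative sketch only: OpKind, Node, and UnifiedComputeTable are
// hypothetical names, not part of the actual DD package.
enum class OpKind : std::uint8_t { Add, Multiply, Conjugate };

struct Node {}; // stands in for a DD node

class UnifiedComputeTable {
public:
  explicit UnifiedComputeTable(std::size_t nbuckets = 1U << 16U)
      : table(nbuckets) {}

  void insert(OpKind op, const Node* lhs, const Node* rhs, const Node* res) {
    table[hash(op, lhs, rhs) % table.size()] = Entry{op, lhs, rhs, res};
  }

  // Returns the cached result, or nullptr on a miss. Because the operation
  // kind is part of both the hash and the validity check, Add(a, b) and
  // Multiply(a, b) never alias each other's cached entries.
  const Node* lookup(OpKind op, const Node* lhs, const Node* rhs) const {
    const auto& e = table[hash(op, lhs, rhs) % table.size()];
    return (e.op == op && e.lhs == lhs && e.rhs == rhs) ? e.result : nullptr;
  }

private:
  struct Entry {
    OpKind op{};
    const Node* lhs = nullptr;
    const Node* rhs = nullptr;
    const Node* result = nullptr;
  };

  static std::size_t hash(OpKind op, const Node* lhs, const Node* rhs) {
    // fold the operation kind into the hash alongside the two operands
    std::size_t h = std::hash<const Node*>{}(lhs);
    h ^= std::hash<const Node*>{}(rhs) + 0x9e3779b9U + (h << 6U) + (h >> 2U);
    h ^= static_cast<std::size_t>(op) * 0x100000001b3ULL;
    return h;
  }

  std::vector<Entry> table;
};
```

Unary operations fit the same scheme by passing nullptr (or a sentinel) as the second operand.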
In the distant past, this was also possible (and implemented) for our DD package. Back then, there was only one node type: vectors and matrices shared the same node type, with vectors simply using only two of the four successors.
However, in the presence of vector, matrix, and density matrix nodes, this is no longer as easy. There are many variations of triplets of the form (Operand1, Operand2, Result), and it is unclear how these could be combined in a feasible and efficient way. One possibility would be to resort to variants (e.g., std::variant) for storing the individual types, but that would introduce a certain overhead in memory and runtime.
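For illustration, a variant-based table entry could look like the sketch below; the per-field discriminant (plus padding) is exactly the memory overhead mentioned above. All names are hypothetical stand-ins for the package's actual node types:

```cpp
#include <cstddef>
#include <functional>
#include <variant>

// Illustrative sketch only: vNode, mNode, and dNode stand in for the
// package's vector, matrix, and density-matrix node types.
struct vNode {};
struct mNode {};
struct dNode {};

// A variant can hold any of the three operand types in a single entry,
// but it carries a discriminant (plus padding) on top of the raw pointer.
using AnyNode = std::variant<const vNode*, const mNode*, const dNode*>;

// One entry type now covers every (Operand1, Operand2, Result) combination.
struct Entry {
  AnyNode lhs;
  AnyNode rhs;
  AnyNode result;
};

// Hash that distinguishes identical pointer values of different node types
// by mixing in the variant's alternative index.
std::size_t hashAny(const AnyNode& n) {
  const auto ptrHash =
      std::visit([](const auto* p) { return std::hash<const void*>{}(p); }, n);
  return ptrHash ^ (n.index() * 0x9e3779b97f4a7c15ULL);
}
```

On top of the extra storage, every lookup pays for the discriminant comparison, which is the runtime overhead alluded to.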

Describe the solution you'd like

Why bother?

  • Has the potential to save memory overall, since 10+ individual tables no longer need to be maintained.
  • DD operations only ever scale with the number of nodes when the compute table is large enough to store all intermediate results required during a computation. With multiple compute tables, reducing the size of some of them inherently limits the performance of the corresponding methods. A single large table might actually allow all operations to be performed efficiently.
  • Having a single compute table makes managing its growth easier. While this is not an issue right now (compute tables are statically sized), the overall goal is to have them be dynamically resizable. Maintaining and orchestrating the growth of a single table based on other package parameters is clearly much easier than managing 10+ of them.
  • It might make the compute table itself more future-proof and easier to extend for new operations and/or DD types.
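To make the growth point concrete, the usual policy would be to grow the single table once its load factor crosses a threshold. A minimal sketch of that bookkeeping (names hypothetical; rehashing of existing entries omitted):

```cpp
#include <cstddef>

// Illustrative sketch of load-factor-driven growth for a single table.
class ResizableTable {
public:
  explicit ResizableTable(std::size_t initialBuckets)
      : buckets(initialBuckets) {}

  std::size_t capacity() const { return buckets; }

  // Called after every insertion; doubles the bucket count once the table
  // is more than 75% full. With a single table, this one policy governs the
  // whole package (a real implementation would also rehash existing entries).
  void onInsert() {
    ++entries;
    if (entries * 4 > buckets * 3) {
      buckets *= 2;
    }
  }

private:
  std::size_t buckets;
  std::size_t entries = 0;
};
```

With per-operation tables, each table would need its own threshold and sizing parameters; a single table reduces this to one knob.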

As always, this has to be tested and evaluated thoroughly to guarantee that the resulting solution indeed fulfills its goals.

Metadata

Assignees

No one assigned

    Labels

    DD (Anything related to the DD package) · c++ (Anything related to C++ code) · enhancement (Improvement of existing feature) · refactor (Anything related to code refactoring)
