Hi @tomvandig and @aothms,
(cc @berlotti, @gschleusner1972)
Following discussions at the buildingSMART meeting in Chicago last week, and some research I have been doing into other similar formats, I would like to name 5 types or combinations of graphs that can be used to organize a model and then discuss the tradeoffs between explicit geometry vs semantic definitions.
Each of the components I describe below should be understood as something that can only be applied to an entity once. For example, it would be an error if two placement components are attached to a single entity.
In the below, I am not addressing how a model is composed from multiple authors, nor how library entities (Revit-style types) are inserted into a model. This is just a discussion of how a composed model is structured.
My apologies for dropping a wall of text here, it seems to be the right place for this discussion!
Classes of model graphs
Placement Tree / Scenegraph
A placement tree is one where all entities in a model have:
A single parent
An optional transformation / offset
An optional geometry
The geometry must be explicitly defined (no information needed from anywhere else in the graph).
The geometry is placed as per the product of all its ancestors' (parent, grandparent, great-grandparent, ...) transforms
This means:
The geometry of the model can be determined by simply traversing the tree while keeping track of the cumulative transforms.
The geometry of the model is entirely unambiguous, and can be rendered efficiently.
In an Entity-component world, this can be achieved by defining two kinds of components:
Placement: a component that has a reference to the entity's parent, and an optional transform
Geometry: a component that contains the geometry of the object.
Composed Placement Tree:

```mermaid
flowchart LR
    Wall --> Window1
    Wall --> Window2
    Window1 --> F1[Frame]
    Window1 --> G1[Glazing]
    Window2 --> F2[Frame]
    Window2 --> G2[Glazing]
```

The placement graph can be determined by looking only at the placement components in the model.
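The two components can be sketched in a minimal entity-component style. This is a hypothetical illustration, not any particular implementation: entities are plain ids, components live in per-type tables, and the transform is reduced to a translation offset (a real system would use full 4x4 matrices).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Placement:
    """Reference to the entity's parent, plus an optional transform.
    A translation offset stands in for a full transform matrix."""
    parent: Optional[str]               # entity id of the parent; None for the root
    offset: tuple = (0.0, 0.0, 0.0)

@dataclass
class Geometry:
    """Explicit geometry; needs no information from elsewhere in the graph."""
    mesh: str                           # stand-in for real geometry data

# One component table per component type (hypothetical model data).
placements = {
    "Wall":    Placement(None),
    "Window1": Placement("Wall", (1.0, 0.0, 0.0)),
    "Window2": Placement("Wall", (3.0, 0.0, 0.0)),
    "Frame1":  Placement("Window1", (0.1, 0.1, 0.0)),
}

def world_offset(entity: str) -> tuple:
    """Accumulate transforms up the placement tree (parent, grandparent, ...)."""
    p = placements[entity]
    if p.parent is None:
        return p.offset
    px, py, pz = world_offset(p.parent)
    ox, oy, oz = p.offset
    return (px + ox, py + oy, pz + oz)
```

Because every entity has a single parent, `world_offset` terminates and each entity gets exactly one unambiguous placement, which is what makes this layout cheap to traverse and render.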
You will no doubt have noticed that there is no semantic information in this type of graph...
Semantic Overlay on a Placement Graph
One approach to including semantic information in a model is to maintain all the requirements of a Placement Tree and simply tag certain entities as assemblies or classes.
For instance, a new component called AssemblyInfo can be defined, with two parameters:
kind: "assembly", "component" or "sub-component"
name: a unique or user-facing name for the assembly
This maintains all the benefits of the Placement Tree, but allows us to give the important entities names and hide non-assembly entities in a simplified tree view.
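A simplified tree view can be derived by reading only the `AssemblyInfo` components and collapsing untagged entities onto their nearest tagged ancestor. A minimal sketch, with hypothetical entity names and data:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssemblyInfo:
    kind: str    # "assembly", "component" or "sub-component"
    name: str    # unique or user-facing name

# Placement-tree parents (hypothetical model data).
parents = {
    "Wall": None, "Window1": "Wall", "Window2": "Wall",
    "Frame1": "Window1", "Glazing1": "Window1",
}

# Only the "important" entities carry an AssemblyInfo component.
assembly_info = {
    "Wall":    AssemblyInfo("assembly", "Wall A-01"),
    "Window1": AssemblyInfo("assembly", "Window W-01"),
    "Window2": AssemblyInfo("assembly", "Window W-02"),
}

def simplified_parent(entity: str) -> Optional[str]:
    """Walk up the placement tree to the nearest tagged ancestor."""
    p = parents[entity]
    while p is not None and p not in assembly_info:
        p = parents[p]
    return p

def simplified_tree() -> dict:
    """A tree view showing only the tagged entities."""
    return {e: simplified_parent(e) for e in assembly_info}
```

Note that the overlay never changes how geometry is resolved; it only filters which entities appear in the outline view.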
This is the approach USD takes: they call their entities Prims, and each Prim is transformed as per the product of its parents' transforms. A subset of the Prims are tagged and organized into a Model Hierarchy. They also require that all Prims tagged as assemblies must be children of assemblies - the semantic overlay covers a contiguous set of entities that includes the root node.

Placement Tree with Semantic Overlay:

```mermaid
flowchart LR
    Wall["Wall\n{Assembly}"] --> w1["Window1(WindowType)"]
    Wall --> w2["Window2(WindowType)"]
    WindowType["WindowType\n{Assembly}"] --> Frame
    WindowType --> Glazing
```

Composed Placement Tree with Semantic Overlay:

```mermaid
flowchart LR
    Wall["Wall\n{Assembly}"] --> w1["Window1\n{Assembly}"]
    Wall --> w2["Window2\n{Assembly}"]
    w1 --> f1[Frame]
    w1 --> g1[Glazing]
    w2 --> f2[Frame]
    w2 --> g2[Glazing]
```

Semantic Tree
Another option is to have two trees over the same entities: a placement tree and a semantic tree. The placement tree is as above, and the semantic tree is a completely separate organization. In this case, we define a component called "AssemblyInfo" a little bit differently. It will have three properties:
parent: the parent assembly of the entity
kind: "assembly", "component" or "sub-component"
name: a unique or user-facing name for the assembly
Semantic tree:

```mermaid
flowchart LR
    Wall["Wall\n{Assembly}"] --> w1["Window1\n{Assembly}"]
    Wall --> w2["Window2\n{Assembly}"]
    w1 --> f1[Frame]
    w1 --> g1[Glazing]
    w2 --> f2[Frame]
    w2 --> g2[Glazing]
```

Placement tree with all entities in global coordinates, which has no effect on the assembly hierarchy:

```mermaid
flowchart LR
    g["Global Axis"] --> Wall
    g --> Window1
    g --> Window2
    g --> Glazing1
    g --> Frame1
    g --> Glazing2
    g --> Frame2
```
A few observations:
This seems unwieldy. On the other hand, this is how Blender organizes its scenes. Every object can be in a collection and parented to another object. Blender parenting creates a placement tree, and the Blender collection hierarchy creates a semantic tree. The Blender outliner creates 'ghost' objects when the two hierarchies do not align.
There can be more than 2 hierarchies. Any hierarchy can be created by defining new kinds of components. Each tree can be assembled by simply reading the relevant components.
The semantic tree is still a tree, not a graph!!
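The observation that each tree can be assembled by simply reading the relevant components can be shown with one generic helper. A sketch, using hypothetical entities; each component table records only a parent reference, and the same function builds either hierarchy:

```python
def build_tree(parent_components: dict) -> dict:
    """Assemble one hierarchy by reading one kind of component.
    `parent_components` maps entity id -> parent entity id (None for roots).
    Returns a parent -> children mapping."""
    children: dict = {}
    for entity, parent in parent_components.items():
        children.setdefault(parent, []).append(entity)
    return children

# Two independent hierarchies over the same entities (hypothetical data):
# a placement tree with everything in global coordinates, and a semantic
# assembly tree read from the AssemblyInfo components.
placement = {"Wall": None, "Window1": None, "Glazing1": None}   # all global
assembly  = {"Wall": None, "Window1": "Wall", "Glazing1": "Window1"}

placement_tree = build_tree(placement)
assembly_tree  = build_tree(assembly)
```

Adding a third hierarchy (say, a systems breakdown) would just mean defining another component type and calling `build_tree` on its table.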
Parametric / Procedural Graph
In all three options above, the geometry is defined explicitly, and there is a hierarchy of transforms to place that geometry. But this is not how BIM software typically works. Instead of the geometry being drawn directly, the user is presented with a number of parameters to set, many of which are references to other entities in the model. The BIM software then computes the geometry.
For example, a wall in Revit has four basic parameters: thickness (from the type), plus plan layout, lower level, and upper level (from the instance). The lower level and upper level are references to other entities in the model.
There are two major differences in this approach versus the first three:
The geometry is calculated from the semantics. This is the parametric / procedural part.
There are multiple parents. In this case, instead of a single parent, the two levels both act as parents. We no longer have a tree, but a directed acyclic (I hope!) graph.
In these cases, the semantic graph is used to compute the geometry, and the geometry dependencies are often hidden from the user (in authoring applications).
There are two big points that come from the approach of computing geometry from semantics:
We must have a graph, not a tree. Many entities are defined as 'between entity a and entity b'.
It is much harder to agree on semantics than it is to agree on a geometry format.
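The Revit-style wall above can illustrate why this forms a graph rather than a tree. A sketch under stated assumptions (`Level`, `Wall`, and `compute_geometry` are hypothetical names, and the plan layout is reduced to a single length):

```python
from dataclasses import dataclass

@dataclass
class Level:
    elevation: float

@dataclass
class Wall:
    thickness: float    # from the type
    length: float       # stand-in for the plan layout
    lower: Level        # reference to another entity in the model
    upper: Level        # reference to another entity in the model

def compute_geometry(wall: Wall) -> tuple:
    """Geometry derived from semantics: the wall's vertical extent depends
    on the two referenced levels, so the wall has two 'parents' in the
    dependency graph rather than one."""
    height = wall.upper.elevation - wall.lower.elevation
    return (wall.length, wall.thickness, height)

ground, first = Level(0.0), Level(3.0)
wall = Wall(thickness=0.2, length=5.0, lower=ground, upper=first)
```

Moving a level changes the geometry of every wall that references it, which is exactly the dependency propagation an importer would have to implement if the format only shipped semantics.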
Placement Tree with Semantic Graph
Another option is to require that placement and geometric information be placed in a tree structure, but allow the semantic overlay to form a graph. This allows the model snapshot to be efficiently loaded and rendered, while the semantic properties of specific entities can be lazy loaded as required.
We can call this option the 'explicit geometry tree with semantic overlay graph'.
Implications for interoperability
Should IFC5 define geometry explicitly, or be a semantic standard from which geometry is computed? Another way to ask the question is, should we require IFC5 exporters to render out geometry, or should we require IFC5 importers to be able to compute geometry from all the semantics that IFC5 defines?
There is a lot to be said for the 'explicit geometry tree with semantic overlay graph' approach. This would require that all IFC readers be able to read a specific set of geometric types. Semantic information could be overlaid, and IFC readers that are able to understand those semantics could make use of them. The semantic overlay graph becomes an enrichment of the base data, not a requirement to be able to ingest the file in the first place.
For example, a model with a road alignment would be exported as a series of curves in Cartesian space (from OpenRoad or Civil3D). The alignment information would also be written. It could then be imported into Revit or Tekla Structures without those tools needing to perform any math on the alignment, but if it were imported into a tool that did support alignments, it could grab the alignment info (and check it against the rendered geometry).
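The road-alignment example can be summed up in a small sketch of such a reader. The component names (`Geometry`, `Alignment`) and the snapshot layout are hypothetical, but the split is the point: the explicit geometry is mandatory for every reader, while semantic components are consumed only if understood.

```python
# Hypothetical snapshot: explicit curves in Cartesian space, with the
# alignment semantics overlaid as a separate component.
snapshot = {
    "Road1": {
        "Geometry":  {"curves": "explicit polyline data"},
        "Alignment": {"horizontal": "alignment parameters"},
    },
}

def load(snapshot: dict, understood_semantics: set):
    """Every reader can render the scene from Geometry alone; semantic
    components it understands become optional enrichment."""
    scene, enrichments = [], []
    for entity, components in snapshot.items():
        scene.append((entity, components["Geometry"]))      # mandatory base
        for kind in components:
            if kind != "Geometry" and kind in understood_semantics:
                enrichments.append((entity, kind))          # optional extra
    return scene, enrichments
```

A tool like Tekla would call `load(snapshot, set())` and still get the full rendered geometry; a civil tool would pass `{"Alignment"}` and recover the richer model, checking it against the explicit curves.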