SDF workflow #21
Replies: 1 comment
-
My response :) Hey! I appreciate all the info. I've copied it down into GitHub for safekeeping!

Yeah, I've coded my render graph as a backwards call tree: only the end nodes are known, and each node calls all of its dependencies, which generally return 2D textures. I'd be extending it to allow for different return types. I'd love to make it split the graph into sub-shaders automatically, so that you can view the output of any node, but it compiles and runs at full speed as a single graph when you aren't inspecting anything. There's definitely a lot of complexity when you're balancing shader #defines, uniforms, functions, and samplers, all with different dependencies, haha.

Yeah, I've looked into vvvv before; I like how it directly represents the objects of the C# language. For my use cases, though, I want a solution that's embeddable in any context: plugins, running on a Raspberry Pi, low-level rendering efficiencies.
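A minimal sketch of that backwards-call-tree evaluation, with a shared cache so a node that several consumers depend on is computed only once (all names are hypothetical; plain Python stands in for the shader graph here):

```python
class Node:
    """A render-graph node that pulls values from its dependencies."""

    def __init__(self, name, deps=(), op=None):
        self.name = name
        self.deps = list(deps)           # upstream nodes this node calls
        self.op = op or (lambda *vals: vals)

    def evaluate(self, cache=None):
        # Backwards traversal: only the end node is known; it calls its
        # dependencies, which recursively call theirs.
        cache = {} if cache is None else cache
        if self.name not in cache:
            vals = [d.evaluate(cache) for d in self.deps]
            cache[self.name] = self.op(*vals)
        return cache[self.name]

# Example: "out" depends on "a" and "b"; "b" also depends on "a".
# Thanks to the cache, "a" is evaluated once.
a = Node("a", op=lambda: 2)
b = Node("b", [a], op=lambda x: x + 1)
out = Node("out", [a, b], op=lambda x, y: x * y)
```

Viewing an intermediate node is then just calling `b.evaluate()` directly, which mirrors the "inspect any node" idea without a separate compilation path.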
-
Notes from tekt:
Many of the design choices in raytk were based around working within TouchDesigner, but there's probably at least some stuff that could be useful.
RayTK is structured as a backwards call tree, where each node is a function, and it interacts with its inputs by calling their functions to produce values.
The limitation is that imperative-style sequential operations and side effects aren't really a natural fit.
I added workarounds like "variables" which are basically globals that get populated by a downstream op and can be used by upstream ops. https://t3kt.github.io/raytk/guide/variables/
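A rough sketch of that "variables" workaround, under the assumption that a downstream op populates a shared scope that its upstream dependencies read from instead of a wired input (all names hypothetical):

```python
# Shared scope standing in for the global-style variables.
variables = {}

def define_variable(name, value):
    # Called by the downstream op to populate the variable.
    variables[name] = value

def upstream_op(coord):
    # Reads the variable from the shared scope rather than
    # receiving it as a direct graph input.
    scale = variables.get("scale", 1.0)
    return coord * scale

define_variable("scale", 2.0)
```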
Then there are materials, where a material id is attached to an SDF result, and the renderer calls back into whatever secondary function the material registered for that id.
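That material-id callback could be sketched like this, assuming a simple registry keyed by id (a hypothetical simplification of the real mechanism):

```python
# Registry mapping material ids to shading callbacks.
material_funcs = {}

def register_material(mat_id, func):
    material_funcs[mat_id] = func

def shade(sdf_result):
    # The SDF result carries a distance plus a material id;
    # the renderer dispatches to the registered function.
    dist, mat_id = sdf_result
    return material_funcs[mat_id](dist)

register_material(1, lambda d: ("red", d))
```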
The primary format of ops is (coords, context) -> result.
The context is basically a way to pass along other info that isn't coordinates. For example nodes in a material get stuff like uv coordinates, light angles, etc.
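A sketch of the (coords, context) -> result shape, with a context object carrying the non-coordinate info (field names here are illustrative, not RayTK's actual ones):

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    # Extra per-sample info that isn't coordinates.
    uv: tuple = (0.0, 0.0)
    light_angle: float = 0.0
    extras: dict = field(default_factory=dict)

def checker_op(coords, context):
    # (coords, context) -> result: this op happens to use the uv
    # from the context rather than the coords themselves.
    u, v = context.uv
    return (int(u * 2) + int(v * 2)) % 2
```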
There are a lot of places where "coordinate" is stretched to mean other things. Most field operators take in an input that can replace their coordinates with some other value. For example basing a color ramp on a sine wave instead of just on the x axis.
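The coordinate-replacement idea can be shown in miniature: feed a "ramp" the output of another field instead of the raw x axis (a toy 1D version, not RayTK code):

```python
import math

def ramp(coord):
    # A 1D "color ramp": maps a coordinate to a clamped grey level.
    return max(0.0, min(1.0, coord))

def sine_field(x):
    # A sine wave remapped into [0, 1].
    return 0.5 + 0.5 * math.sin(x)

def ramp_over_sine(x):
    # Coordinate replacement: the ramp samples the sine field's
    # output instead of the raw x coordinate.
    return ramp(sine_field(x))
```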
For return values, there's Sdf (which should probably be called SdfResult) which is a distance with some extra properties, and float and vec4, and other things like Ray for cameras. I decided to avoid using vec2/vec3 as return types to keep things somewhat simpler.
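Those return types could be modeled roughly like this (hypothetical field names; `SdfResult` follows the renaming suggested above):

```python
from dataclasses import dataclass

@dataclass
class SdfResult:
    # A distance with extra properties, e.g. a material id.
    distance: float
    material_id: int = 0

@dataclass
class Ray:
    # Return type used by cameras.
    origin: tuple
    direction: tuple

def sphere(coords, radius=1.0):
    # A simple SDF op returning an SdfResult rather than a bare float.
    x, y, z = coords
    return SdfResult((x * x + y * y + z * z) ** 0.5 - radius)
```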
Type inference gets kinda intense too.
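One reason it gets intense: some ops declare a fixed return type while others adapt to whatever their input returns, so types have to be propagated through the graph. A toy version of that propagation, with a made-up node encoding:

```python
def infer_types(nodes):
    # nodes maps name -> (kind, dep), where kind is either
    # "fixed:<Type>" or "passthrough" (copies its dependency's type).
    types = {}

    def infer(name):
        if name not in types:
            kind, dep = nodes[name]
            if kind.startswith("fixed:"):
                types[name] = kind.split(":")[1]
            else:
                types[name] = infer(dep)
        return types[name]

    for name in nodes:
        infer(name)
    return types

graph = {
    "wave": ("fixed:float", None),
    "ramp": ("passthrough", "wave"),
    "sphere": ("fixed:Sdf", None),
}
```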