Labels: breaking (breaks compatibility with existing API/usage patterns), enhancement (New feature or request), low priority (Maybe do this one later...)
Description
Motivation
Currently, GPU buffer objects must be specified with any structuring information (i.e., descriptors) at compile time. Additionally, all specialized buffers (such as index and vertex buffers) use a common GenericBuffer type as a base to reuse common functionality. The problem is that this greatly increases the work required to implement new buffer usages, which leads to unnecessary complexity and inflexibility should I decide to implement runtime-determined buffer structuring.
Example
Below are examples of the current API, with comments on what I see as its shortfalls.
Generic Specialization
const SomeStruct = struct {
    age: f32,
    id: u32,
    num_children: usize,
};

// This presents a challenge for uploading data via staging buffers, since any
// compatible staging buffer must also be specialized from the GenericBuffer,
// which unfortunately bloats the generic buffer in my implementation.
// All of this is also comptime, which means that structuring buffers at runtime
// (necessary for generating uniform data from shader sources) is basically impossible.
const SomeStructuredBuffer = api.GenericBuffer(SomeStruct, .{
    .memory = .{ .device_local_bit = true },
    .usage = .{
        .transfer_dst_bit = true,
        .vertex_buffer_bit = true,
    },
});

Wrapper Type Specialization
// Note how the specialized wrapper itself also needs to be generic
// in order to be structured.
pub fn SomeSpecializedBuffer(comptime T: type) type {
    // Inner buffer type -- required since GenericBuffer itself can't really
    // represent different use cases for buffers on its own
    const Inner = api.GenericBuffer(T, .{
        .memory = .{ .device_local_bit = true },
        .usage = .{
            .transfer_dst_bit = true,
            .vertex_buffer_bit = true,
        },
    });

    return struct {
        const Self = @This();

        buf: Inner,

        // ... Buffer functions ...
        // All of these wrapper-type functions follow the same pattern:
        // do some specialized work (usually staging transfers) -> call the GenericBuffer base function
        pub fn setData(ctx: *anyopaque, data: *const anyopaque) !void {
            const self: *Self = @ptrCast(@alignCast(ctx));
            const elem: []const T = @as([*]const T, @ptrCast(@alignCast(data)))[0..self.buf.size];
            // Staging buffers are supported, sort of... via some more comptime junk in the GenericBuffer
            var staging = try self.buf.createStaging();
            defer staging.deinit();
            const staging_mem = try staging.mapMemory();
            defer staging.unmapMemory();
            @memcpy(staging_mem, elem);
            try self.buf.copy(staging.buffer(), self.buffer(), self.buf.dev);
        }

        // This is how buffers are used interchangeably; it follows the pattern of Zig's
        // allocator interface. There's nothing inherently wrong with this, but it makes
        // buffer operations such as copy a lot clunkier and places more burden on the
        // buffer's specialization code.
        pub fn buffer(self: *Self) AnyBuffer {
            return AnyBuffer{
                .cfg = &Inner.cfg,
                .handle = self.buf.h_buf,
                .ptr = self,
                .size = self.buf.bytesSize(),
                .vtable = &.{
                    .bind = bind,
                    .setData = setData,
                    .deinit = deinit,
                },
            };
        }
    };
}

Issues
- Hard to Specialize: Generic buffers don't do enough to encapsulate different functionality, which basically requires me to embed them in a specialized type like IndexBuffer, with extra information slapped on in the containing type.
- Limited to Compile-Time Structuring: Specifying structure at runtime (which is necessary if I want assets to be more data driven) is basically impossible and would require a completely separate implementation (see the sketch after this list).
- Hard to Use Interchangeably: Specialized buffers are wrappers around the GenericBuffer type, meaning that they can't be reliably type erased and require a VTable interface to be used interchangeably.
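For reference, a minimal sketch of what runtime-determined structuring might look like. This is purely hypothetical and not part of the current API; every name in it (BufferDesc, MemoryFlags, UsageFlags, Buffer, byteSize) is made up for illustration, and the backend handle is stubbed out. The point is only the shape: everything that is currently a comptime parameter of GenericBuffer becomes plain data, so the descriptor can be built at load time (e.g. from shader reflection) and one concrete Buffer type can stand in for vertex, index, and uniform buffers without wrapper types or a vtable.

const std = @import("std");

// Hypothetical flag sets, mirroring the Vulkan-style bits used above.
pub const MemoryFlags = packed struct {
    device_local_bit: bool = false,
    host_visible_bit: bool = false,
};

pub const UsageFlags = packed struct {
    transfer_src_bit: bool = false,
    transfer_dst_bit: bool = false,
    vertex_buffer_bit: bool = false,
    index_buffer_bit: bool = false,
};

// Hypothetical runtime descriptor: what GenericBuffer currently takes at comptime.
pub const BufferDesc = struct {
    stride: usize, // element size, e.g. derived from shader reflection at load time
    memory: MemoryFlags,
    usage: UsageFlags,
};

// One concrete buffer type; "index buffer", "vertex buffer", etc. differ only
// in the descriptor they carry, so they can be passed around interchangeably
// without type erasure.
pub const Buffer = struct {
    desc: BufferDesc,
    len: usize,
    handle: u64 = 0, // placeholder for the backend (e.g. Vulkan) handle

    pub fn byteSize(self: Buffer) usize {
        return self.desc.stride * self.len;
    }
};

test "runtime-structured vertex buffer" {
    // Descriptor built at runtime; nothing about the layout is baked into the type.
    const vertex_desc = BufferDesc{
        .stride = @sizeOf(f32) * 8,
        .memory = .{ .device_local_bit = true },
        .usage = .{ .transfer_dst_bit = true, .vertex_buffer_bit = true },
    };
    const vbuf = Buffer{ .desc = vertex_desc, .len = 1024 };
    try std.testing.expectEqual(@as(usize, 32 * 1024), vbuf.byteSize());
}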