A lightweight GPU compute library built on wgpu, providing a simple and ergonomic API for GPU-accelerated computing.
- 🚀 Simple API: Easy-to-use interface for GPU compute operations
- 🔧 Type-safe buffers: Generic buffer types with compile-time safety
- 📝 WGSL shaders: Support for WebGPU Shading Language
- ⚡ Async operations: Async/await support for GPU operations
- 🎯 Zero-cost abstractions: Minimal overhead over raw wgpu
Add this to your `Cargo.toml`:

```toml
[dependencies]
oxgpu = "0.1.0"
```

```rust
use oxgpu::{Context, Buffer};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create GPU context
    let ctx = Context::new().await?;

    // Create buffers
    let data = vec![1.0f32, 2.0, 3.0, 4.0, 5.0];
    let buffer = Buffer::from_slice(&ctx, &data).await;

    // Read data back
    let result = buffer.read(&ctx).await?;
    println!("Result: {:?}", result);

    Ok(())
}
```

```rust
use oxgpu::{Context, Buffer, ComputeKernel, BindingType, KernelBinding};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ctx = Context::new().await?;

    // Create input/output buffers
    let x = Buffer::from_slice(&ctx, &[1.0f32, 2.0, 3.0]).await;
    let y = Buffer::from_slice(&ctx, &[2.0f32, 4.0, 6.0]).await;

    // WGSL shader
    let shader = r#"
        @group(0) @binding(0) var<storage, read> x: array<f32>;
        @group(0) @binding(1) var<storage, read_write> y: array<f32>;

        @compute @workgroup_size(64)
        fn main(@builtin(global_invocation_id) id: vec3<u32>) {
            // Guard against out-of-range invocations: the workgroup size (64)
            // exceeds the element count (3).
            if (id.x < arrayLength(&y)) {
                y[id.x] = x[id.x] + y[id.x];
            }
        }
    "#;

    // Build and run kernel
    let kernel = ComputeKernel::builder()
        .source(shader)
        .entry_point("main")
        .bind(KernelBinding::new(0, BindingType::Storage { read_only: true }))
        .bind(KernelBinding::new(1, BindingType::Storage { read_only: false }))
        .build(&ctx)
        .await?;

    kernel.run(&ctx, (1, 1, 1), &[&x, &y]);

    let result = y.read(&ctx).await?;
    println!("Result: {:?}", result); // [3.0, 6.0, 9.0]

    Ok(())
}
```

You can find more examples in the examples/ directory:
- Basic Buffer: Basic buffer creation, writing, and reading.
- Vector Add: Simple vector addition using a compute shader.
- Matrix Multiplication: Matrix multiplication using a compute shader.
To run an example:
```bash
cargo run --example vector_add
```

- `Context`: GPU context managing the device and queue
- `Buffer<T>`: Typed GPU buffer for data storage
- `BufferUsage`: Flags for buffer usage (storage, uniform, etc.)
- `ComputeKernel`: Compiled compute shader
- `ComputeKernelBuilder`: Builder for creating compute kernels
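`ComputeKernel` dispatches a tuple of workgroup counts, while `@workgroup_size` fixes how many invocations each workgroup contains. Covering `n` elements therefore takes a ceiling division; a small plain-Rust sketch of that arithmetic (the helper name `workgroups_for` is illustrative, not part of the oxgpu API):

```rust
/// Number of workgroups needed to cover `n` elements when the shader
/// declares the given @workgroup_size (ceiling division).
fn workgroups_for(n: u32, workgroup_size: u32) -> u32 {
    n.div_ceil(workgroup_size)
}

fn main() {
    // 3 elements with @workgroup_size(64): a single workgroup suffices,
    // which is why the vector-add example dispatches (1, 1, 1).
    println!("{}", workgroups_for(3, 64));
    // 200 elements would need 4 workgroups (ceil(200 / 64)).
    println!("{}", workgroups_for(200, 64));
}
```

The bounds check in the shader is the complement of this: the last workgroup generally overshoots `n`, so out-of-range invocations must return early.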
- `Buffer::new()` - Create buffer with custom usage
- `Buffer::from_slice()` - Create from slice
- `Buffer::zeros()` - Create zero-initialized buffer
- `buffer.read()` - Read data from GPU
- `buffer.write()` - Write data to GPU
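The Matrix Multiplication example computes a standard row-major product; a plain-Rust CPU reference of the same indexing is handy for checking GPU results read back with `buffer.read()` (no GPU needed; `matmul_ref` is a hypothetical helper, not part of oxgpu):

```rust
/// CPU reference for row-major matrix multiplication:
/// C[i][j] = sum over p of A[i][p] * B[p][j],
/// where A is m x k, B is k x n, and C is m x n, each stored flat.
/// A WGSL kernel typically computes one C[i][j] per invocation.
fn matmul_ref(a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32> {
    let mut c = vec![0.0f32; m * n];
    for i in 0..m {
        for j in 0..n {
            let mut acc = 0.0f32;
            for p in 0..k {
                acc += a[i * k + p] * b[p * n + j];
            }
            c[i * n + j] = acc;
        }
    }
    c
}

fn main() {
    // 2x2 example: [[1,2],[3,4]] x [[5,6],[7,8]] = [[19,22],[43,50]]
    let a = [1.0f32, 2.0, 3.0, 4.0];
    let b = [5.0f32, 6.0, 7.0, 8.0];
    let c = matmul_ref(&a, &b, 2, 2, 2);
    println!("{:?}", c); // [19.0, 22.0, 43.0, 50.0]
}
```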
- Rust 2024 edition
- A GPU with WebGPU support
Contributions are welcome! Please feel free to submit a Pull Request.