oxgpu

A lightweight GPU compute library built on wgpu, providing a simple and ergonomic API for GPU-accelerated computing.

Features

  • 🚀 Simple API: Easy-to-use interface for GPU compute operations
  • 🔧 Type-safe buffers: Generic buffer types with compile-time safety
  • 📝 WGSL shaders: Support for WebGPU Shading Language
  • ⚡ Async operations: async/await support for GPU operations
  • 🎯 Zero-cost abstractions: Minimal overhead over raw wgpu

Installation

Add this to your Cargo.toml:

[dependencies]
oxgpu = "0.1.0"

Quick Start

use oxgpu::{Context, Buffer};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create GPU context
    let ctx = Context::new().await?;

    // Create buffers
    let data = vec![1.0f32, 2.0, 3.0, 4.0, 5.0];
    let buffer = Buffer::from_slice(&ctx, &data).await;

    // Read data back
    let result = buffer.read(&ctx).await?;
    println!("Result: {:?}", result);

    Ok(())
}

Compute Shader Example

use oxgpu::{Context, Buffer, ComputeKernel, BindingType, KernelBinding};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ctx = Context::new().await?;

    // Create input/output buffers
    let x = Buffer::from_slice(&ctx, &[1.0f32, 2.0, 3.0]).await;
    let y = Buffer::from_slice(&ctx, &[2.0f32, 4.0, 6.0]).await;

    // WGSL shader
    let shader = r#"
        @group(0) @binding(0) var<storage, read> x: array<f32>;
        @group(0) @binding(1) var<storage, read_write> y: array<f32>;

        @compute @workgroup_size(64)
        fn main(@builtin(global_invocation_id) id: vec3<u32>) {
            // Guard against threads beyond the array length.
            if (id.x >= arrayLength(&y)) {
                return;
            }
            y[id.x] = x[id.x] + y[id.x];
        }
    "#;

    // Build and run kernel
    let kernel = ComputeKernel::builder()
        .source(shader)
        .entry_point("main")
        .bind(KernelBinding::new(0, BindingType::Storage { read_only: true }))
        .bind(KernelBinding::new(1, BindingType::Storage { read_only: false }))
        .build(&ctx)
        .await?;

    // One workgroup of 64 threads covers all 3 elements
    kernel.run(&ctx, (1, 1, 1), &[&x, &y]);

    let result = y.read(&ctx).await?;
    println!("Result: {:?}", result); // [3.0, 6.0, 9.0]

    Ok(())
}
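For larger inputs, the dispatch size must cover every element. A minimal sketch of the ceiling division commonly used to compute the workgroup count (plain Rust, independent of the oxgpu API; `workgroup_count` is an illustrative helper, not part of the library):

```rust
// Number of workgroups needed so that `count` elements are covered by
// workgroups of `size` threads each (ceiling division).
fn workgroup_count(count: u32, size: u32) -> u32 {
    count.div_ceil(size)
}

fn main() {
    // 3 elements fit in a single workgroup of 64 threads.
    assert_eq!(workgroup_count(3, 64), 1);
    // 65 elements spill into a second workgroup.
    assert_eq!(workgroup_count(65, 64), 2);
    // 1_000_000 elements need 15_625 workgroups of 64.
    assert_eq!(workgroup_count(1_000_000, 64), 15_625);
    println!("ok");
}
```

The result would be passed as the first component of the dispatch tuple, e.g. `(workgroup_count(n, 64), 1, 1)`, paired with a bounds check in the shader so the extra threads in the final workgroup do nothing.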

Examples

You can find more examples in the examples/ directory.

To run an example:

cargo run --example vector_add

API Overview

Core Types

  • Context: GPU context managing device and queue
  • Buffer<T>: Typed GPU buffer for data storage
  • BufferUsage: Flags for buffer usage (storage, uniform, etc.)
  • ComputeKernel: Compiled compute shader
  • ComputeKernelBuilder: Builder for creating compute kernels

Buffer Operations

  • Buffer::new() - Create buffer with custom usage
  • Buffer::from_slice() - Create from slice
  • Buffer::zeros() - Create zero-initialized buffer
  • buffer.read() - Read data from GPU
  • buffer.write() - Write data to GPU
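Under the hood, a typed buffer's contents travel to and from the GPU as raw bytes. A minimal sketch (plain Rust, no oxgpu types) of how an f32 slice round-trips through little-endian bytes, the layout WebGPU assumes for storage buffer data:

```rust
// Convert an f32 slice to raw little-endian bytes and back,
// mirroring how a typed GPU buffer is serialized for upload/readback.
fn to_bytes(data: &[f32]) -> Vec<u8> {
    data.iter().flat_map(|v| v.to_le_bytes()).collect()
}

fn from_bytes(bytes: &[u8]) -> Vec<f32> {
    bytes
        .chunks_exact(4)
        .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect()
}

fn main() {
    let data = vec![1.0f32, 2.0, 3.0];
    let bytes = to_bytes(&data);
    assert_eq!(bytes.len(), 12); // 3 elements × 4 bytes each
    assert_eq!(from_bytes(&bytes), data);
    println!("round-trip ok");
}
```

A generic `Buffer<T>` hides this conversion behind `from_slice()` and `read()`, which is where the compile-time safety in the feature list comes from.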

Requirements

  • Rust 2024 edition
  • A GPU with WebGPU support

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
