Status: Open
Labels: enhancement (New feature or request), good first issue (Good for newcomers), help wanted (Extra attention is needed), optimization (Performance optimization)
Description
Summary
Generate monomorphic ABI encode functions at comptime for known function signatures, matching alloy's sol! macro approach.
Current Performance
| Benchmark | eth.zig | alloy.rs | Gap |
|---|---|---|---|
| abi_encode_transfer | 33 ns | 30 ns | 1.10x loss |
| abi_encode_static | 32 ns | 50 ns | 1.72x win |
| abi_encode_dynamic | 117 ns | 175 ns | 1.54x win |
We win on generic encoding but lose on the specific transfer(address,uint256) case because alloy's sol! macro generates a specialized encoder at compile time with no union dispatch.
Root Cause
Current encoding goes through AbiValue union dispatch:
```zig
const args = [_]AbiValue{
    .{ .address = addr },
    .{ .uint256 = amount },
};
const result = encodeFunctionCall(allocator, selector, &args);
```

Each value requires a runtime switch on the union tag. For a 2-argument function call with known types, this overhead is ~3 ns (the entire gap).
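For reference, the target calldata layout is fixed by the Solidity ABI: a 4-byte selector followed by one 32-byte word per static argument, so `transfer(address,uint256)` always produces 68 bytes. A language-agnostic sketch in Python (the recipient and amount are hypothetical values; the selector `a9059cbb` is the real one for `transfer(address,uint256)`):

```python
# Solidity ABI layout for transfer(address,uint256): selector + two 32-byte words.
selector = bytes.fromhex("a9059cbb")   # first 4 bytes of keccak256("transfer(address,uint256)")
recipient = bytes.fromhex("11" * 20)   # hypothetical 20-byte address
amount = 1_000_000                     # hypothetical token amount

calldata = (
    selector
    + recipient.rjust(32, b"\x00")     # address left-padded with 12 zero bytes
    + amount.to_bytes(32, "big")       # uint256 as a big-endian 32-byte word
)
assert len(calldata) == 68             # 4 + 2 * 32
```

This is exactly the buffer a specialized encoder can fill with straight-line code, since every offset is known at compile time.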
Proposed Approach
Add a comptime function that generates type-specific encoders:
```zig
/// Generate a specialized encoder for a known function signature.
/// Returns a fixed-size buffer (no allocator needed).
pub fn comptimeEncode(
    comptime selector: [4]u8,
    comptime types: []const AbiType,
) type {
    const payload_size = 4 + types.len * 32; // Static types only
    return struct {
        pub fn encode(values: anytype) [payload_size]u8 {
            var buf: [payload_size]u8 = undefined;
            @memcpy(buf[0..4], &selector);
            inline for (types, 0..) |t, i| {
                const offset = 4 + i * 32;
                switch (t) {
                    .address => {
                        @memset(buf[offset..][0..12], 0);
                        @memcpy(buf[offset + 12 ..][0..20], &values[i]);
                    },
                    .uint256 => {
                        const bytes = std.mem.toBytes(std.mem.nativeToBig(u256, values[i]));
                        @memcpy(buf[offset..][0..32], &bytes);
                    },
                    // ... other static types
                    else => @compileError("comptimeEncode: unsupported type"),
                }
            }
            return buf;
        }
    };
}
```
```zig
// Usage:
const TransferEncoder = comptimeEncode(
    .{ 0xa9, 0x05, 0x9c, 0xbb }, // transfer(address,uint256)
    &.{ .address, .uint256 },
);

// Zero-alloc, no union dispatch, returns a stack buffer
const calldata = TransferEncoder.encode(.{ recipient_addr, amount });
```

Why This Works
- `inline for` unrolls the loop at comptime -- no runtime iteration
- Direct type access with no union tag check
- Fixed-size return buffer -- no allocator needed
- LLVM can see the entire function as straight-line code
Scope
Phase 1: Static-only types (address, uint256, bool, bytes32) -- covers transfer, approve, balanceOf
Phase 2: Dynamic types (string, bytes, arrays) with comptime offset calculation
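Phase 2 is where comptime offset calculation earns its keep: for dynamic types the ABI uses a head/tail layout, where the head word holds a byte offset into the tail, and the tail holds a length word followed by right-padded data. A rough Python sketch of that layout for a single dynamic argument (a simplified illustration of the ABI rules, not the project's API):

```python
def encode_dynamic_arg(data: bytes) -> bytes:
    """ABI head/tail encoding for one dynamic argument (string/bytes), selector omitted."""
    head = (32).to_bytes(32, "big")           # offset of the tail, relative to the start of args
    length = len(data).to_bytes(32, "big")    # length word
    padded = data + b"\x00" * (-len(data) % 32)  # data right-padded to a 32-byte boundary
    return head + length + padded

encoded = encode_dynamic_arg(b"hi")
assert len(encoded) == 96  # head word + length word + one data word
```

With the argument types known at comptime, the head offsets are constants, so even the dynamic case can be reduced to a few fixed stores plus one variable-length copy.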
This Is a Good First Issue Because
- The existing `abi_encode.zig` is well-structured and easy to understand
- The comptime pattern is a natural extension of existing code
- Benchmarks already exist to measure improvement
- Zig's comptime makes this elegant (the 3 ns gap is small, but the pattern is valuable)
References
- `src/encoding/abi_encode.zig` -- current implementation
- `bench/bench.zig:251-259` -- benchmark for transfer encoding
- alloy `sol!` macro -- what we're matching