Releases: caffeinelabs/openai-client
openai-client 0.1.3 — serde-core 0.1.1 type-table cycle fix
Fixed
- Bumps `serde-core` to 0.1.1 to pull in the Candid type-table cycle fix.
  `JSON.toText(to_candid(req), …)` no longer traps with `IC0502: Canister trapped: stack overflow` when `req` carries a `?Map<K, V>` field. In practice this affected every `ChatApi.createChatCompletion` call — the request type has both `metadata : ?Map<Text, Text>` and `logit_bias : ?Map<Text, Int>`, whose self-referential RBT type entries used to send the decoder into unbounded re-expansion before any byte of the value was inspected.
- `ChatApi.createChatCompletion` is now functional end-to-end through the connector (Wed 2026-04-29 Caffeine demo).
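For reference, a minimal sketch of the request shape involved. The two field names come from the notes above; the surrounding type definition and the concrete `Map` type the generator uses are assumptions:

```motoko
// Sketch only: fields other than the two named in the release are elided,
// and the exact Map type is an assumption.
public type CreateChatCompletionRequest = {
  model : Text;
  metadata : ?Map<Text, Text>;   // previously enough to trap the decoder
  logit_bias : ?Map<Text, Int>;  // via unbounded type-table re-expansion
  // ... remaining request fields ...
};
```

With serde-core 0.1.1, `JSON.toText(to_candid(req), …)` on such a value completes instead of overflowing the stack.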
v0.1.2 — ChatApi works end-to-end (string-flatten fix)
Regenerated from caffeinelabs/openapi-generator@d009dcad56f, which adds support for `anyOf` / `oneOf` schemas whose every branch produces a JSON string. These are now emitted as a flat `type X = Text` alias instead of the empty record `{}` that 0.1.1's diagnostic flagged.
Fixed
- `ChatApi.createChatCompletion` is now functional end-to-end. No more "ModelIdsShared has no synthesisable JSON form" diagnostic — the generator now produces a real type. Consumer code: `model = "gpt-4o-mini"`. Wire: `"model": "gpt-4o-mini"`. OpenAI: 200.
- Symmetric fix for the other two string-flatten schemas in OpenAI's spec:
  - `ModelIdsShared = Text` (chat models)
  - `CreateCompletionRequestModel = Text` (legacy completions)
  - `VoiceIdsShared = Text` (TTS voice IDs)
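Illustratively, the flatten turns these schemas into plain aliases. The names come from the list above; the exact generated output may differ:

```motoko
// 0.1.1 emitted an empty record for these schemas; 0.1.2 flattens them:
public type ModelIdsShared = Text;               // chat models
public type CreateCompletionRequestModel = Text; // legacy completions
public type VoiceIdsShared = Text;               // TTS voice IDs

// A plain string now type-checks and serialises as a string on the wire:
let model : ModelIdsShared = "gpt-4o-mini";
```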
What's still open
- Mixed-shape `oneOf` / `anyOf` (e.g. `oneOf<string, ComplexObject>`) still falls through to the old empty-fallback path with the diagnostic. Future work — none on the OpenAI focusApis surface today.
- Pre-existing template caveats from 0.1.0 (oneOf-array tag sanitisation in Completions/Embeddings, slash-in-record-fields in Moderations) unchanged.
- `mops publish` still needs `--no-docs`.
🤖 Generated with Claude Code
v0.1.1 — diagnostics regen
Regenerated from caffeinelabs/openapi-generator@383d3ae with `--additional-properties=diagnostics=true`.
Added
- `validate(value : T) : ?Text` per model; API methods call it before serialisation and `throw Error.reject(msg)` on a non-null result. Surfaces the `ModelIdsShared` empty-fallback (`oneOf<string, …>` the codegen could not tag) to the caller's `catch (e) { e.message() }`, instead of letting bad JSON reach OpenAI as `"model": {}` and come back as an opaque 4xx misread upstream as "Invalid OpenAI API key".
- Reply-side error context — every reply-side `throw Error.reject` (UTF-8 decode, JSON tokenise, "Failed to deserialize", "Failed to convert", declared error code with no body schema) appends `— server returned: <body>` (or byte count for binary). Closes the black-box where the server's actual response was hidden from the user on parse failures.
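The validate-before-serialise flow described above might look like this. Only `validate`'s signature is given by the release notes; the method shape, type names, and everything after the switch are assumptions:

```motoko
import Error "mo:base/Error";

// Sketch of the validate-then-reject pattern; Request/Response are placeholders.
public func createChatCompletion(req : Request) : async Response {
  // validate(value : T) : ?Text — null means OK, ?msg carries the diagnostic.
  switch (validate(req)) {
    case (?msg) { throw Error.reject(msg) }; // surfaced to the caller's catch
    case (null) {};
  };
  // ... serialise req and perform the HTTPS outcall ...
};
```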
Notes
- Same focusApis surface as 0.1.0 (Chat / Completions / Models / Embeddings / Images / Audio / Moderations / Files; 220 .mo files).
- Same generator-template caveats as 0.1.0; `mops publish` still needs `--no-docs` until upstream sanitisation gaps are closed.
v0.1.0
Initial release of the generated Motoko client for the OpenAI API.
Scope
Curated subset of the full OpenAI API surface — 8 APIs kept, models pruned to their transitive closure:
- Chat, Completions, Models, Embeddings, Images, Audio, Moderations, Files
Totals: 8 API modules, 211 Model modules, 1 Config, 220 .mo files.
Status
- `mops build`: green.
- Per-file `moc --check`: 208 pass, 12 fail — all in Completions / Embeddings / Moderations models (two template-sanitisation gaps documented as `caveats:` in the generator config). The Chat surface and `mops build` are unaffected.
Usage
import { createChatCompletion } "mo:openai-client/Apis/ChatApi";
import { defaultConfig } "mo:openai-client/Config";
let config = { defaultConfig with auth = ?#bearer "YOUR_API_KEY" };
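Building on the snippet above, errors from both the 0.1.1 validation hooks and the reply-side context land in the caller's catch. A sketch, with the method's argument shape and `request` assumed:

```motoko
import Debug "mo:base/Debug";
import Error "mo:base/Error";

// `request` is a placeholder for a fully-populated chat request value.
try {
  let resp = await createChatCompletion(config, request);
} catch (e) {
  // Validation diagnostics and "— server returned: <body>" context arrive here.
  Debug.print(Error.message(e));
};
```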