Description
Problem statement
I’m trying to use LFM2-2.6B-Exp (mlx-community) inside Osaurus, and the model itself is genuinely impressive. For a 2.6B model, the instruction following and agent-style reasoning are on another level. It feels like one of the best small “agent brains” available, and its IFBench results against models hundreds of times larger match what I see in real usage.
The problem I’m hitting is purely around tool calling. LFM2 emits tool calls as custom textual tokens inside the assistant output (e.g. <|tool_call_start|> … <|tool_call_end|>). Because Osaurus exposes tools via its OpenAI-compatible APIs and MCP, these inline tool-call spans are passed through as plain assistant text rather than being surfaced as executable actions, which creates friction when running the model inside the Osaurus ecosystem.
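For illustration only, here is a minimal Swift sketch of what a bridging parser on the server side could look like (Swift is an assumption about where this would live in Osaurus; the marker strings come from the model output above, and the JSON handling of the payload is an assumption that would need to be checked against the LFM2 chat template):

```swift
import Foundation

// Hypothetical sketch: scan assistant output for LFM2 tool-call spans and
// surface them as structured calls instead of plain text.
// The payload format between the markers (JSON vs. some other call syntax)
// is an assumption, not confirmed against the LFM2 chat template.
struct ParsedToolCall {
    let rawPayload: String          // text found between the markers
    let arguments: [String: Any]?   // populated only if the payload parses as a JSON object
}

func extractToolCalls(from output: String,
                      startToken: String = "<|tool_call_start|>",
                      endToken: String = "<|tool_call_end|>") -> [ParsedToolCall] {
    var calls: [ParsedToolCall] = []
    var searchRange = output.startIndex..<output.endIndex

    // Walk the output, collecting every start/end marker pair in order.
    while let start = output.range(of: startToken, range: searchRange),
          let end = output.range(of: endToken, range: start.upperBound..<output.endIndex) {
        let payload = String(output[start.upperBound..<end.lowerBound])
            .trimmingCharacters(in: .whitespacesAndNewlines)

        // Try to decode the payload as a JSON object; otherwise keep it raw
        // so a later stage can handle other encodings.
        let json = payload.data(using: .utf8).flatMap {
            try? JSONSerialization.jsonObject(with: $0) as? [String: Any]
        }
        calls.append(ParsedToolCall(rawPayload: payload, arguments: json))
        searchRange = end.upperBound..<output.endIndex
    }
    return calls
}
```

The idea would be that the parsed calls could then be mapped onto the same OpenAI-style tool_calls response shape (and MCP dispatch path) that models with native JSON tool calling already go through, so nothing downstream has to know about the LFM2 markers.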
This isn’t a complaint about the model at all — it’s excellent. I’m opening this because LFM2-2.6B-Exp feels like a perfect fit for Osaurus (local, fast, agentic), and this output mismatch is currently the only thing stopping it from working smoothly with Osaurus’ tool + MCP system.
Proposed solution
No response
Alternatives considered
No response
Additional context
No response