
LFM2-2.6B-Exp tool-calling output doesn’t plug cleanly into Osaurus tool system #310

@EphraimElgrabli

Description


Problem statement

I’m trying to use LFM2-2.6B-Exp (mlx-community) inside Osaurus, and the model itself is genuinely impressive. For a 2.6B model, the instruction following and agent-style reasoning are on another level. It feels like one of the best small “agent brains” out there, and its IFBench results against models hundreds of times larger match my real-world usage.
The problem I’m hitting is purely around tool calling. LFM2 emits tool calls as custom textual tokens inside the assistant output (e.g. <|tool_call_start|> … <|tool_call_end|>). Because Osaurus exposes tools via OpenAI-compatible APIs and MCP, these inline tool calls aren’t parsed into structured, executable tool invocations, which creates friction when running the model inside the Osaurus ecosystem.
This isn’t a complaint about the model at all — it’s excellent. I’m opening this because LFM2-2.6B-Exp feels like a perfect fit for Osaurus (local, fast, agentic), and this output mismatch is currently the only thing stopping it from working smoothly with Osaurus’ tool + MCP system.
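To make the mismatch concrete, here is a minimal sketch of the kind of adapter that could bridge the two formats: it scans assistant text for the `<|tool_call_start|>` / `<|tool_call_end|>` markers and converts what it finds into OpenAI-style `tool_calls` entries. This is not Osaurus code; the assumption that the payload between the markers is JSON with `name` and `arguments` keys would need to be checked against the actual LFM2 chat template.

```python
import json
import re

# Markers LFM2 uses to delimit tool calls in assistant output (per this issue).
TOOL_CALL_RE = re.compile(r"<\|tool_call_start\|>(.*?)<\|tool_call_end\|>", re.DOTALL)

def extract_tool_calls(text: str):
    """Split assistant output into plain text plus OpenAI-style tool_calls.

    Assumes the payload between the markers is a JSON object (or list of
    objects) with "name" and "arguments" keys -- an assumption, not a
    documented guarantee of the LFM2 format.
    """
    tool_calls = []
    for match in TOOL_CALL_RE.finditer(text):
        payload = json.loads(match.group(1))
        calls = payload if isinstance(payload, list) else [payload]
        for call in calls:
            tool_calls.append({
                "id": f"call_{len(tool_calls)}",  # synthetic id; OpenAI clients expect one
                "type": "function",
                "function": {
                    "name": call["name"],
                    # OpenAI-compatible APIs carry arguments as a JSON *string*
                    "arguments": json.dumps(call.get("arguments", {})),
                },
            })
    # Remaining text (with marker spans stripped) becomes the message content.
    content = TOOL_CALL_RE.sub("", text).strip()
    return content, tool_calls

content, calls = extract_tool_calls(
    'Checking the weather. <|tool_call_start|>'
    '{"name": "get_weather", "arguments": {"city": "Paris"}}'
    '<|tool_call_end|>'
)
```

Something along these lines, run over the raw model output before it reaches the OpenAI-compatible response layer, would let the existing tool + MCP plumbing treat LFM2 tool calls like any other model's.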

Proposed solution

No response

Alternatives considered

No response

Additional context

No response

Metadata

Assignees

No one assigned

    Labels

    enhancement (New feature or request)

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests
