Closed as duplicate of #4
Labels: bug (Something isn't working)
Description
Device & OS
- Hardware: M1 Pro MacBook Pro UVM
- OS: Ubuntu 24.04
- Compiler: gcc 13.3
Model
- Model file: tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf
- Quantization: Q4_K_M
What happened?
Calling picoclaw produces no output:
~/dev/picoclaw$ ./build/picoclaw agent -m "What is photosynthesis?"
2026/02/25 10:54:01 [2026-02-25T02:54:01Z] [INFO] agent: Created implicit main agent (no agents.list configured)
2026/02/25 10:54:01 [2026-02-25T02:54:01Z] [INFO] agent: Agent initialized {tools_count=12, skills_total=6, skills_available=6}
2026/02/25 10:54:01 [2026-02-25T02:54:01Z] [INFO] agent: Processing message from cli:cron: What is photosynthesis? {channel=cli, chat_id=direct, sender_id=cron, session_key=cli:default}
2026/02/25 10:54:01 [2026-02-25T02:54:01Z] [INFO] agent: Routed message {agent_id=main, session_key=agent:main:main, matched_by=default}
2026/02/25 11:03:22 [2026-02-25T03:03:22Z] [INFO] agent: LLM response without tool calls (direct answer) {iteration=1, content_chars=0, agent_id=main}
2026/02/25 11:03:22 [2026-02-25T03:03:22Z] [INFO] agent: Response: I've completed processing but have no response to give. {final_length=55, agent_id=main, session_key=agent:main:main, iterations=1}
🦞 I've completed processing but have no response to give.

Command you ran
./build/picoclaw agent -m "What is photosynthesis?"

When executed directly:
~/picolm/picolm/picolm ~/picolm/picolm/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf -p "What is photosynthesis?"
Loading model: /home/yh0/picolm/picolm/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf
Model config:
n_embd=2048, n_ffn=5632, n_heads=32, n_kv_heads=4
n_layers=22, vocab_size=32000, max_seq=2048
head_dim=64, rope_base=10000.0
Allocating 1.17 MB for runtime state (+ 44.00 MB FP16 KV cache)
Tokenizer loaded: 32000 tokens, bos=1, eos=2
Prompt: 8 tokens, generating up to 256 (temp=0.80, top_p=0.90, threads=4)
---
</s>
---
Prefill: 8 tokens in 0.82s (9.8 tok/s)
Generation: 1 tokens in 0.00s (276.4 tok/s)
Total: 0.82s
Memory: 45.17 MB runtime state (FP16 KV cache)

Config
{
"agents": {
"defaults": {
"provider": "picolm",
"model": "picolm-local"
}
},
"providers": {
"picolm": {
"binary": "~/picolm/picolm/picolm",
"model": "~/picolm/picolm/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",
"max_tokens": 256,
"threads": 4,
"template": "chatml"
}
}
}

Expected output
The agent prints an actual answer to the question ("What is photosynthesis?").
Actual output
See the logs above: picoclaw prints only the fallback message "I've completed processing but have no response to give." (content_chars=0), and running picolm directly generates a single </s> token and stops.
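The direct picolm run reports "Generation: 1 tokens" and the only output between the `---` markers is `</s>`, i.e. the very first sampled token was EOS (id 2, per the log line "Tokenizer loaded: 32000 tokens, bos=1, eos=2"). A minimal generation-loop sketch (an illustration, not picolm's actual code) shows why that yields an empty response:

```python
EOS_ID = 2  # from the picolm log: "Tokenizer loaded: 32000 tokens, bos=1, eos=2"

def generate(sample_next, max_tokens=256):
    """Minimal sketch of a token-generation loop (hypothetical helper,
    not picolm's implementation): stop as soon as EOS is sampled."""
    out = []
    for _ in range(max_tokens):
        tok = sample_next()
        if tok == EOS_ID:  # EOS as the very first sample -> empty output
            break
        out.append(tok)
    return out

# If the model's first prediction is EOS (as in the log above),
# the caller receives an empty token list:
print(generate(lambda: EOS_ID))  # → []
```

So the agent-side message "LLM response without tool calls (direct answer) {content_chars=0}" is consistent with the backend returning zero tokens rather than picoclaw dropping the text.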
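One possible cause worth checking (an assumption, not a confirmed diagnosis): the config sets "template": "chatml", but the TinyLlama-1.1B-Chat model card documents a Zephyr-style template (`<|user|>` / `<|assistant|>` markers), so ChatML markers may be meaningless tokens to this model and could push it to emit EOS immediately. The helper functions below are illustrations of the two formats, not picoclaw or picolm APIs:

```python
# Sketch of the template-mismatch hypothesis; both functions are
# hypothetical helpers, named here only for illustration.
def format_chatml(user_msg: str) -> str:
    """ChatML-style wrapping (what "template": "chatml" appears to request)."""
    return (f"<|im_start|>user\n{user_msg}<|im_end|>\n"
            f"<|im_start|>assistant\n")

def format_zephyr(user_msg: str) -> str:
    """Zephyr-style wrapping (what TinyLlama-1.1B-Chat was fine-tuned on,
    per its model card)."""
    return f"<|user|>\n{user_msg}</s>\n<|assistant|>\n"

msg = "What is photosynthesis?"
print(format_chatml(msg))
print(format_zephyr(msg))
```

Re-running the direct picolm test with a Zephyr-formatted prompt would confirm or rule this out.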