62 changes: 62 additions & 0 deletions crates/synapse-cli/src/commands.rs
@@ -97,6 +97,68 @@ pub async fn init(path: &str) -> Result<()> {
Ok(())
}

/// Run a Thought Loop cycle.
pub async fn thought(input: &str, use_ort: bool) -> Result<()> {
println!("💭 Synapse Thought Loop");
println!("──────────────────────");
println!("📝 Input: \"{}\"", input);

// 1. Initialize Adapters
println!("\n🧠 Loading Cognitive Components...");

let data_dir = dirs::data_dir()
.unwrap_or_else(|| std::path::PathBuf::from("."))
.join("synapse_data");

// Initialize Models
let manager = synapse_infra::adapters::model_manager::ModelManager::new(data_dir.join("models"))?;
let paths = manager.ensure_models_exist()?;

// LLM Selection
use synapse_core::ports::CognitivePort;
use synapse_cognition::thought_loop::ThoughtLoop;
use synapse_core::core::genesis::EnneadMatrix;
use synapse_core::perception::HolographicRetina;
use std::sync::Arc;

let cognitive: Arc<dyn CognitivePort> = if use_ort {
println!(" Using ORT (ONNX) Cognitive Adapter...");
let model_path = paths.phi3_onnx_path.context("Phi-3 ONNX model not found")?;
let tok_path = paths.phi3_tokenizer_path.context("Phi-3 tokenizer not found")?;

let ort_llm = Arc::new(synapse_infra::adapters::ort_adapter::OrtAdapter::new(model_path, tok_path)?);
Arc::new(synapse_infra::adapters::ort_cognitive_adapter::OrtCognitiveAdapter::new(ort_llm))
} else {
println!(" Using Candle (GGUF) Cognitive Adapter...");
let model_path = paths.llm_path.context("TinyLlama GGUF model not found")?;
let tok_path = paths.llm_tokenizer_path.context("TinyLlama tokenizer not found")?;

let candle_llm = Arc::new(tokio::sync::Mutex::new(synapse_infra::adapters::candle_adapter::CandleAdapter::new(
model_path.to_str().unwrap().to_string(),
Some(tok_path.to_str().unwrap().to_string())
Comment on lines +137 to +138

Severity: high

The use of `.unwrap()` on `to_str()` can cause the program to panic if the model or tokenizer path contains non-UTF-8 characters. This is possible on some filesystems and would crash the application. It is safer to handle this potential error gracefully by propagating it:

Suggested change
-model_path.to_str().unwrap().to_string(),
-Some(tok_path.to_str().unwrap().to_string())
+model_path.to_str().context("Model path contains invalid UTF-8")?.to_string(),
+Some(tok_path.to_str().context("Tokenizer path contains invalid UTF-8")?.to_string())

)?));
Comment on lines +136 to +139

⚠️ Potential issue | 🟡 Minor
Avoid panic on non-UTF-8 filesystem paths in Candle adapter initialization.

Lines 137–138 use `to_str().unwrap()`, which will panic if the path contains invalid UTF-8 sequences. Replace with `to_string_lossy().into_owned()` to safely convert OS paths that may not be valid UTF-8:

Suggested fix
         let candle_llm = Arc::new(tokio::sync::Mutex::new(synapse_infra::adapters::candle_adapter::CandleAdapter::new(
-            model_path.to_str().unwrap().to_string(),
-            Some(tok_path.to_str().unwrap().to_string())
+            model_path.to_string_lossy().into_owned(),
+            Some(tok_path.to_string_lossy().into_owned())
         )?));
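Both findings above target the same hazard and differ only in remedy. A minimal, standard-library-only sketch (names hypothetical, plain `Result` standing in for the `anyhow::Context` sugar used in the suggestion) contrasts the fallible and lossy conversions:

```rust
use std::path::Path;

// Fallible conversion: surfaces an error instead of panicking,
// as in the `.context(...)` suggestion above.
fn path_to_string_strict(p: &Path) -> Result<String, String> {
    p.to_str()
        .map(|s| s.to_string())
        .ok_or_else(|| format!("path contains invalid UTF-8: {}", p.display()))
}

// Lossy conversion: always succeeds, replacing invalid bytes with U+FFFD,
// as in the `to_string_lossy()` suggestion above.
fn path_to_string_lossy(p: &Path) -> String {
    p.to_string_lossy().into_owned()
}

fn main() {
    let p = Path::new("models/phi3.onnx");
    assert_eq!(path_to_string_strict(p).unwrap(), "models/phi3.onnx");
    assert_eq!(path_to_string_lossy(p), "models/phi3.onnx");
    println!("ok");
}
```

The strict variant is preferable when a mangled path should abort model loading; the lossy variant is preferable when a best-effort string is acceptable.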

Arc::new(synapse_cognition::CandleCognitiveAdapter::with_llm_adapter(candle_llm))
};

let ennead = EnneadMatrix::new();
let retina = Arc::new(HolographicRetina::new(ennead));

// 2. Initialize ThoughtLoop
let loop_orchestrator = ThoughtLoop::new(retina, cognitive);

// 3. Execute Cycle
println!("\n🔄 Running Thought Cycle...");
let result = loop_orchestrator.cycle(input).await?;

// 4. Print Results
println!("\n✨ Thought Generated:");
println!(" Content: {}", result.content.trim());
println!(" Confidence: {:.2}", result.confidence);
println!(" Entropy Reduction: {:.2}", result.entropy_reduction);

Ok(())
}

/// Manually dial a peer.
pub async fn dial(addr: &str) -> Result<()> {
info!("Sending dial command for peer: {}", addr);
13 changes: 13 additions & 0 deletions crates/synapse-cli/src/main.rs
@@ -90,6 +90,16 @@ enum Commands {
peer: Option<String>,
},

/// Run a Thought Loop cycle (Perception -> Judgment -> Cognition -> Simulation)
Thought {
/// Input to think about
input: String,

/// Use ORT (ONNX) instead of Candle (GGUF)
#[arg(short, long)]
ort: bool,
},

/// Translate a message with emotional empathy (Modo Espejo)
Translate {
/// The message to translate
@@ -215,6 +225,9 @@ async fn main() -> anyhow::Result<()> {
Commands::Transmit { message, peer } => {
commands::transmit(&message, peer).await?;
}
Commands::Thought { input, ort } => {
commands::thought(&input, ort).await?;
}
Commands::Translate { message, force } => {
commands::translate(&message, force).await?;
}
1 change: 1 addition & 0 deletions crates/synapse-cognition/src/lib.rs
@@ -128,6 +128,7 @@ use tokio::sync::broadcast;

pub mod bootstrap;
pub mod commands;
pub mod thought_loop;
pub mod tokenomics_service;
pub mod dojo_service;
pub mod sovereign_service;
75 changes: 75 additions & 0 deletions crates/synapse-cognition/src/thought_loop.rs
@@ -0,0 +1,75 @@
//! Thought Loop Orchestration.
//!
//! Implements the cognitive cycle: Perception -> Judgment -> Cognition -> Simulation.

use synapse_core::{
error::Result,
perception::{HolographicRetina, EntropyLevel},
CognitivePort, Thought,
};
use std::sync::Arc;
use tracing::{info, warn, debug};

/// The Thought Loop service.
pub struct ThoughtLoop {
/// The retina for perception.
retina: Arc<HolographicRetina>,
/// The cognitive layer for thinking.
cognitive: Arc<dyn CognitivePort>,
}

impl ThoughtLoop {
/// Create a new ThoughtLoop.
pub fn new(retina: Arc<HolographicRetina>, cognitive: Arc<dyn CognitivePort>) -> Self {
Self { retina, cognitive }
}

/// Process a single "thought cycle".
pub async fn cycle(&self, input: &str) -> Result<Thought> {
info!("Starting Thought Loop cycle for input...");

// 1. Perception: Input -> Latent Vector
// (Simulating raw input as f32 vector from text for this POC)
let raw_input: Vec<f32> = input.as_bytes().iter().map(|&b| b as f32 / 255.0).collect();
let tensor = self.retina.perceive(&raw_input, "thought_loop_input", Some(input.to_string()))?;

// 2. Judgment: Analyze Entropy/Suffering (Genesis Matrix)
let (is_suffering, entropy_level) = self.retina.detect_suffering(tensor.latent())?;

let ethical_state = match entropy_level {
EntropyLevel::Harmonious => "The system is in a state of high moral harmony.",
EntropyLevel::Neutral => "The system is stable.",
EntropyLevel::Chaotic => "Warning: High entropy detected. Restore order.",
EntropyLevel::Suffering => "CRITICAL: Suffering detected. You MUST act to reduce entropy and preserve life.",
};

debug!("Ethical State: {}", ethical_state);

// 3. Cognition: Generate response based on dynamic system prompt
let system_prompt = format!(
"You are the Cognitive Cortex of a Synapse Node. \
Current Ethical State: {}. \
Acknowledge this state in your reasoning. \
Your goal is to reduce entropy and align with the Ennead Matrix.",
ethical_state
);

let mut thought = self.cognitive.think(input, &system_prompt).await?;

// 4. Simulation: Verify entropy reduction
let simulated_entropy = self.cognitive.analyze(&thought.content).await?;

info!("Thought Loop: Original Entropy Score: {}, Simulated Response Entropy: {}",
tensor.metadata().entropy_score, simulated_entropy);

if simulated_entropy > tensor.metadata().entropy_score && is_suffering {
warn!("Simulation detected entropy increase during suffering! Applying corrective guidance.");
// In a real implementation, we might re-run generation with stronger guidance.
thought.content = format!("(Guided) {}", thought.content);
}
Comment on lines +65 to +69
⚠️ Potential issue | 🟠 Major

Simulation gate does not actually block entropy regressions.

When entropy rises, the code still returns the thought (only content is prefixed), so higher-entropy output can pass validation.

🧭 Suggested correction pattern
-        if simulated_entropy > tensor.metadata().entropy_score && is_suffering {
-            warn!("Simulation detected entropy increase during suffering! Applying corrective guidance.");
-            // In a real implementation, we might re-run generation with stronger guidance.
-            thought.content = format!("(Guided) {}", thought.content);
-        }
-
-        thought.entropy_reduction = (tensor.metadata().entropy_score - simulated_entropy).max(0.0);
+        if simulated_entropy > tensor.metadata().entropy_score {
+            warn!("Simulation detected entropy increase; rejecting output for regeneration.");
+            return Err(synapse_core::error::Error::System(
+                "Thought rejected: simulation increased entropy".to_string(),
+            ));
+        }
+
+        thought.entropy_reduction = tensor.metadata().entropy_score - simulated_entropy;

Also applies to line 71.

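To make the reject-versus-annotate distinction concrete, here is a self-contained sketch of the hard gate the comment proposes (hypothetical function names, not the crate's API): a regression returns an error instead of a prefixed thought, so the entropy reduction is non-negative by construction and needs no `.max(0.0)` clamp.

```rust
// Hard entropy gate: a higher-entropy output never reaches the caller.
// Returns the accepted content together with its entropy reduction.
fn gate_entropy(prior: f32, simulated: f32, content: String) -> Result<(String, f32), String> {
    if simulated > prior {
        // Reject instead of annotating with a "(Guided)" prefix.
        return Err(format!(
            "thought rejected: entropy rose from {prior:.2} to {simulated:.2}"
        ));
    }
    Ok((content, prior - simulated))
}

fn main() {
    // A regression is refused outright.
    assert!(gate_entropy(0.8, 0.9, "noisy".into()).is_err());
    // A reduction passes through with its delta.
    let (content, delta) = gate_entropy(0.8, 0.5, "calm".into()).unwrap();
    assert_eq!(content, "calm");
    assert!((delta - 0.3).abs() < 1e-6);
    println!("ok");
}
```

A caller that must always produce output could loop on the `Err` branch, regenerating with stronger guidance up to a retry limit.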


thought.entropy_reduction = (tensor.metadata().entropy_score - simulated_entropy).max(0.0);

Ok(thought)
}
}
6 changes: 4 additions & 2 deletions crates/synapse-infra/src/adapters/mod.rs
@@ -3,7 +3,8 @@
pub mod surrealdb_adapter;
pub mod sled_adapter;
pub mod sled_memory_adapter;
-// pub mod ort_adapter; // TODO: File missing, needs to be created or imported from feature branch
+pub mod ort_adapter;
+pub mod ort_cognitive_adapter;
pub mod context_adapter;
pub mod immune_adapter;
pub mod mock_llm_adapter;
@@ -34,7 +35,8 @@ pub mod libp2p_sync_adapter;
pub use surrealdb_adapter::*;
pub use sled_adapter::*;
pub use sled_memory_adapter::*;
-// pub use ort_adapter::*;
+pub use ort_adapter::*;
+pub use ort_cognitive_adapter::*;
pub use mock_llm_adapter::*;
pub use mock_embedding_adapter::*;
pub use embedding_adapter::*;
27 changes: 27 additions & 0 deletions crates/synapse-infra/src/adapters/model_manager.rs
@@ -14,6 +14,10 @@ pub const DEFAULT_LLM_REPO: &str = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF";
pub const DEFAULT_LLM_TOKENIZER_REPO: &str = "TinyLlama/TinyLlama-1.1B-Chat-v1.0";
pub const DEFAULT_LLM_FILE: &str = "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf";

pub const PHI3_ONNX_REPO: &str = "microsoft/Phi-3-mini-4k-instruct-onnx";
pub const PHI3_ONNX_FILE: &str = "cpu_and_mobile/cpu-int4-rtn-block-32/phi3-mini-4k-instruct-cpu-int4-rtn-block-32.onnx";
pub const PHI3_ONNX_TOKENIZER: &str = "cpu_and_mobile/cpu-int4-rtn-block-32/tokenizer.json";

pub const DEFAULT_EMBEDDING_REPO: &str = "sentence-transformers/all-MiniLM-L6-v2";
pub const DEFAULT_EMBEDDING_MODEL: &str = "model.safetensors";
pub const DEFAULT_EMBEDDING_TOKENIZER: &str = "tokenizer.json";
@@ -39,6 +43,10 @@ pub struct ModelPaths {
pub unet_path: Option<PathBuf>,
/// Path to the Genesis Embedder
pub genesis_embedder_path: Option<PathBuf>,
/// Path to Phi-3 ONNX model
pub phi3_onnx_path: Option<PathBuf>,
/// Path to Phi-3 ONNX tokenizer
pub phi3_tokenizer_path: Option<PathBuf>,
}

/// Information about a cached model
@@ -153,13 +161,32 @@
}
};

// 4. Phi-3 ONNX
let phi3_onnx_path = match self.ensure_model(PHI3_ONNX_REPO, PHI3_ONNX_FILE) {
Ok(path) => Some(path),
Err(e) => {
warn!("Failed to download Phi-3 ONNX: {}", e);
None
}
};

let phi3_tokenizer_path = match self.ensure_model(PHI3_ONNX_REPO, PHI3_ONNX_TOKENIZER) {
Ok(path) => Some(path),
Err(e) => {
warn!("Failed to download Phi-3 Tokenizer: {}", e);
None
}
};
Comment on lines +165 to +179

Severity: medium

This block for downloading the Phi-3 model and tokenizer repeats a pattern used earlier in the function for other models. The duplication makes the function harder to maintain; consider extracting the logic for downloading an optional model into a private helper function.

For example, you could add a helper method to ModelManager:

fn download_optional_model(&self, repo: &str, file: &str, model_name: &str) -> Option<PathBuf> {
    match self.ensure_model(repo, file) {
        Ok(path) => Some(path),
        Err(e) => {
            warn!("Failed to download {}: {}", model_name, e);
            None
        }
    }
}

This would make the call sites much cleaner and reduce redundancy.
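The same idea can be sketched standalone with a closure-based variant (names illustrative; the real helper would call `self.ensure_model` and `warn!` rather than a passed-in closure and `eprintln!`):

```rust
use std::path::PathBuf;

// Generic optional-download helper: any fallible fetch step becomes a
// one-liner at the call site, logging a warning and yielding None on failure.
fn optional_model<E: std::fmt::Display>(
    name: &str,
    fetch: impl FnOnce() -> Result<PathBuf, E>,
) -> Option<PathBuf> {
    match fetch() {
        Ok(path) => Some(path),
        Err(e) => {
            eprintln!("warning: failed to download {name}: {e}");
            None
        }
    }
}

fn main() {
    // Successful fetch: the path is kept.
    let hit = optional_model("Phi-3 ONNX", || Ok::<_, String>(PathBuf::from("models/phi3.onnx")));
    // Failed fetch: warning logged, None returned, startup continues.
    let miss = optional_model("Phi-3 Tokenizer", || Err::<PathBuf, _>("HTTP 404".to_string()));
    assert!(hit.is_some());
    assert!(miss.is_none());
    println!("ok");
}
```

This keeps the "optional model" policy (warn and continue) in one place, so adding the next model is a single call rather than another match block.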


Ok(ModelPaths {
llm_path,
llm_tokenizer_path,
embedding_dir,
tokenizer_path,
unet_path,
genesis_embedder_path,
phi3_onnx_path,
phi3_tokenizer_path,
})
}
