Framework for safely evaluating injected expressions in dynamic, AI-generated outputs. It provides sandboxed execution environments for evaluating code expressions from LLM responses, governed by configurable security policies.
- Sandboxed code evaluation
- Configurable security policies
- Expression validation and sanitization
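The three features above can be sketched as a single AST-whitelisting evaluator: parse the expression, reject any syntax node outside an allowed set (the security policy), then execute with builtins stripped. This is a minimal illustrative sketch; the function names, the node whitelist, and the policy shape are assumptions, not this framework's actual API.

```python
import ast

# Hypothetical security policy: the set of AST node types an expression
# may contain. Anything else (calls, attribute access, imports, etc.)
# is rejected before evaluation.
_ALLOWED_NODES = (
    ast.Expression, ast.Constant, ast.BinOp, ast.UnaryOp,
    ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Mod, ast.Pow,
    ast.USub, ast.UAdd, ast.Compare, ast.Lt, ast.LtE, ast.Gt,
    ast.GtE, ast.Eq, ast.NotEq, ast.BoolOp, ast.And, ast.Or,
    ast.Name, ast.Load,
)

def validate(expr: str) -> ast.Expression:
    """Parse an expression and reject any node outside the whitelist."""
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, _ALLOWED_NODES):
            raise ValueError(f"disallowed syntax: {type(node).__name__}")
    return tree

def safe_eval(expr: str, variables=None):
    """Evaluate a validated expression with builtins removed."""
    tree = validate(expr)
    code = compile(tree, "<llm-expr>", "eval")
    # Empty __builtins__ blocks access to open, __import__, etc.
    return eval(code, {"__builtins__": {}}, dict(variables or {}))
```

For example, `safe_eval("2 + 3 * x", {"x": 4})` returns `14`, while `safe_eval("__import__('os').system('ls')")` raises `ValueError` during validation because `Call` and `Attribute` nodes are not whitelisted. A whitelist (deny-by-default) is the safer design choice here: new Python syntax is rejected until explicitly permitted.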
License: MIT