Merged
14 changes: 13 additions & 1 deletion .flake8
@@ -9,9 +9,21 @@ exclude =
.git,
dist,
doc,
.github,
*lib/python*,
*egg,
build
build,
pyrit/cli/pyrit_shell.py,
pyrit/prompt_converter/morse_converter.py,
pyrit/prompt_converter/emoji_converter.py,
pyrit/scenarios/printer/console_printer.py,
tests/unit/converter/test_prompt_converter.py,
tests/unit/converter/test_unicode_confusable_converter.py,
tests/unit/converter/test_first_letter_converter.py,
tests/unit/converter/test_base2048_converter.py,
tests/unit/converter/test_ecoji_converter.py,
tests/unit/converter/test_bin_ascii_converter.py,
tests/unit/models/test_seed.py
per-file-ignores =
./pyrit/score/gpt_classifier.py:E501,W291

2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -61,7 +61,7 @@ repos:
- id: flake8
additional_dependencies: ['flake8-copyright']
types: [python]
exclude: (doc/|.github/|pyrit/prompt_converter/morse_converter.py|tests/unit/converter/test_prompt_converter.py|pyrit/prompt_converter/emoji_converter.py|tests/unit/models/test_seed.py|tests/unit/converter/test_unicode_confusable_converter.py|tests/unit/converter/test_first_letter_converter.py|tests/unit/converter/test_base2048_converter.py|tests/unit/converter/test_ecoji_converter.py|tests/unit/converter/test_bin_ascii_converter.py|pyrit/scenarios/printer/console_printer.py)
exclude: ^doc/

- repo: local
hooks:
5 changes: 4 additions & 1 deletion doc/_toc.yml
@@ -129,7 +129,10 @@ chapters:
sections:
- file: code/auxiliary_attacks/1_gcg_azure_ml
- file: code/scenarios/scenarios
- file: code/front_end/0_cli
- file: code/front_end/0_front_end
sections:
- file: code/front_end/1_pyrit_scan
- file: code/front_end/2_pyrit_shell
- file: deployment/README
sections:
- file: deployment/deploy_hf_model_aml
40 changes: 40 additions & 0 deletions doc/code/front_end/0_front_end.md
@@ -0,0 +1,40 @@
# PyRIT Command-Line Frontends

PyRIT provides two command-line interfaces for running AI red teaming scenarios:

## pyrit_scan - Single-Command Execution

`pyrit_scan` is designed for **automated, non-interactive scenario execution**. It's ideal for:
- CI/CD pipelines and automated testing workflows
- Batch processing multiple scenarios with scripts
- One-time security assessments
- Reproducible test runs with exact parameters

Each invocation runs a single scenario with specified parameters and exits, making it perfect for automation where you need clean, scriptable execution with predictable exit codes.

**Key characteristics:**
- Loads PyRIT modules fresh for each execution
- Runs one scenario per command
- Exits with status code (0 for success, non-zero for errors)
- Output can be easily captured and parsed

**Documentation:** [1_pyrit_scan.ipynb](1_pyrit_scan.ipynb)

## pyrit_shell - Interactive Session

`pyrit_shell` is an **interactive REPL (Read-Eval-Print Loop)** for exploratory testing. It's ideal for:
- Interactive scenario development and debugging
- Rapid iteration and experimentation
- Exploring multiple scenarios without reload overhead
- Session-based result tracking and comparison

The shell loads PyRIT modules once at startup and maintains a persistent session, allowing you to run multiple scenarios quickly and review their results interactively.

**Key characteristics:**
- Fast subsequent executions (modules loaded once)
- Session history of all runs
- Interactive result exploration and printing
- Persistent context across multiple scenario runs
- Tab completion and command help

**Documentation:** [2_pyrit_shell.md](2_pyrit_shell.md)
272 changes: 164 additions & 108 deletions doc/code/front_end/0_cli.ipynb → doc/code/front_end/1_pyrit_scan.ipynb

Large diffs are not rendered by default.

129 changes: 79 additions & 50 deletions doc/code/front_end/0_cli.py → doc/code/front_end/1_pyrit_scan.py
@@ -6,12 +6,16 @@
# format_name: percent
# format_version: '1.3'
# jupytext_version: 1.17.3
# kernelspec:
# display_name: pyrit-dev
# language: python
# name: python3
# ---

# %% [markdown]
# # The PyRIT CLI
# # 1. PyRIT Scan
#
# The PyRIT cli tool that allows you to run automated security testing and red teaming attacks against AI systems using [scenarios](../scenarios/scenarios.ipynb) for strategies and [configuration](../setup/1_configuration.ipynb).
# `pyrit_scan` allows you to run automated security testing and red teaming attacks against AI systems using [scenarios](../scenarios/scenarios.ipynb) for strategies and [configuration](../setup/1_configuration.ipynb).
#
# Note: in this doc, `!` prefixes each terminal command so that it can be run from within a Jupyter Notebook.
#
@@ -62,7 +66,13 @@
# Basic usage will look something like:
#
# ```shell
# pyrit_scan <scenario> --initializers <initializer1> <initializer2> --scenario-strategies <stragegy1> <strategy2>
# pyrit_scan <scenario> --initializers <initializer1> <initializer2> --scenario-strategies <strategy1> <strategy2>
# ```
#
# You can also override scenario parameters directly from the CLI:
#
# ```shell
# pyrit_scan <scenario> --max-concurrency 10 --max-retries 3 --memory-labels '{"experiment": "test1", "version": "v2"}'
# ```
#
# Or concretely:
@@ -74,7 +84,7 @@
# Example with a basic configuration that runs the Foundry scenario against the objective target defined in `openai_objective_target` (which is simply an OpenAIChatTarget configured with `DEFAULT_OPENAI_FRONTEND_ENDPOINT` and `DEFAULT_OPENAI_FRONTEND_KEY`).

# %%
# !pyrit_scan foundry_scenario --initializers openai_objective_target --scenario-strategies base64
# !pyrit_scan foundry_scenario --initializers openai_objective_target --strategies base64

# %% [markdown]
# Or with all options and multiple initializers and multiple strategies:
@@ -83,67 +93,86 @@
# pyrit_scan foundry_scenario --database InMemory --initializers simple objective_target objective_list --scenario-strategies easy crescendo
# ```
#
# You can also override scenario execution parameters:
#
# ```shell
# # Override concurrency and retry settings
# pyrit_scan foundry_scenario --initializers simple objective_target --max-concurrency 10 --max-retries 3
#
# # Add custom memory labels for tracking (must be valid JSON)
# pyrit_scan foundry_scenario --initializers simple objective_target --memory-labels '{"experiment": "test1", "version": "v2", "researcher": "alice"}'
# ```
#
# Available CLI parameter overrides:
# - `--max-concurrency <int>`: Maximum number of concurrent attack executions
# - `--max-retries <int>`: Maximum number of automatic retries if the scenario raises an exception
# - `--memory-labels <json>`: Additional labels to apply to all attack runs (must be a JSON string with string keys and values)
#
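As a sketch of the contract the `--memory-labels` flag implies (an assumption about, not a copy of, PyRIT's actual parsing code), the value must decode to a JSON object whose keys and values are all strings:

```python
import json

raw = '{"experiment": "test1", "version": "v2", "researcher": "alice"}'
labels = json.loads(raw)  # raises ValueError on malformed JSON

# Enforce the documented contract: string keys and string values only.
assert isinstance(labels, dict)
assert all(isinstance(k, str) and isinstance(v, str) for k, v in labels.items())
print(labels["experiment"])  # -> test1
```

Validating the string this way before invoking the CLI gives a clearer error than waiting for the scan command to reject it.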
# You can also use custom initialization scripts by passing file paths. Relative paths are resolved against your current working directory, so to avoid confusion, absolute paths are safer:
#
# ```shell
# pyrit_scan encoding_scenario --initialization-scripts ./my_custom_config.py
# ```
#
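One way to sidestep that working-directory ambiguity is to resolve the script path to an absolute one before handing it to the CLI (a small helper sketch; `my_custom_config.py` is the example file from the command above):

```python
from pathlib import Path

# Resolve the relative path against the current working directory once,
# so the resulting command works regardless of where it is later invoked from.
script = Path("./my_custom_config.py").resolve()
assert script.is_absolute()
print(["pyrit_scan", "encoding_scenario", "--initialization-scripts", str(script)])
```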

# %% [markdown]
# #### Using Custom Scenarios
#
# You can define your own scenarios in initialization scripts. The CLI will automatically discover any `Scenario` subclasses and make them available:
#
# ```python
# # my_custom_scenarios.py
# from pyrit.scenarios import Scenario
# from pyrit.common.apply_defaults import apply_defaults
#
# @apply_defaults
# class MyCustomScenario(Scenario):
# """My custom scenario that does XYZ."""
#
# def __init__(self, objective_target=None):
# super().__init__(name="My Custom Scenario", version="1.0")
# self.objective_target = objective_target
# # ... your initialization code
#
# async def initialize_async(self):
# # Load your atomic attacks
# pass
#
# # ... implement other required methods
# ```
#

from pyrit.common.apply_defaults import apply_defaults

# %%
# my_custom_scenarios.py
from pyrit.scenarios import Scenario
from pyrit.scenarios.scenario_strategy import ScenarioStrategy


class MyCustomStrategy(ScenarioStrategy):
"""Strategies for my custom scenario."""

ALL = ("all", {"all"})
Strategy1 = ("strategy1", set[str]())
Strategy2 = ("strategy2", set[str]())


@apply_defaults
class MyCustomScenario(Scenario):
"""My custom scenario that does XYZ."""

@classmethod
def get_strategy_class(cls):
return MyCustomStrategy

@classmethod
def get_default_strategy(cls):
return MyCustomStrategy.ALL

def __init__(self, *, scenario_result_id=None, **kwargs):
# Scenario-specific configuration only - no runtime parameters
super().__init__(
name="My Custom Scenario",
version=1,
strategy_class=MyCustomStrategy,
default_aggregate=MyCustomStrategy.ALL,
scenario_result_id=scenario_result_id,
)
# ... your scenario-specific initialization code

async def _get_atomic_attacks_async(self):
# Build and return your atomic attacks
return []


# %% [markdown]
# Then discover and run it:
#
# ```shell
# # List to see it's available
# pyrit_scan --list-scenarios --initialization-scripts ./my_custom_scenarios.py
#
# # Run it
# pyrit_scan my_custom_scenario --initialization-scripts ./my_custom_scenarios.py
# # Run it with parameter overrides
# pyrit_scan my_custom_scenario --initialization-scripts ./my_custom_scenarios.py --max-concurrency 10
# ```
#
# The scenario name is automatically converted from the class name (e.g., `MyCustomScenario` becomes `my_custom_scenario`).
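That CamelCase-to-snake_case conversion can be sketched with a one-line regex (an illustrative reimplementation, not PyRIT's actual code, which may handle edge cases such as acronyms differently):

```python
import re

def scenario_name(cls_name: str) -> str:
    # Insert "_" before every capital letter except the first, then lowercase.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", cls_name).lower()

print(scenario_name("MyCustomScenario"))  # -> my_custom_scenario
```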
#
#
# ## When to Use the Scanner
#
# The scanner is ideal for:
#
# - **Automated testing pipelines**: CI/CD integration for continuous security testing
# - **Batch testing**: Running multiple attack scenarios against various targets
# - **Repeatable tests**: Standardized testing with consistent configurations
# - **Team collaboration**: Shareable configuration files for consistent testing approaches
# - **Quick testing**: Fast execution without writing Python code
#
#
# ## Complete Documentation
#
# For comprehensive documentation about initialization files and setting defaults see:
#
# - **Configuration**: See [configuration](../setup/1_configuration.ipynb)
# - **Setting Default Values**: See [default values](../setup/default_values.md)
# - **Writing Initializers**: See [Initializers](../setup/pyrit_initializer.ipynb)
#
# Or visit the [PyRIT documentation website](https://azure.github.io/PyRIT/)