diff --git a/LOCALIZATION.md b/LOCALIZATION.md new file mode 100644 index 000000000..92f5d72c9 --- /dev/null +++ b/LOCALIZATION.md @@ -0,0 +1,38 @@ +# Localization & Translation Strategy + +This document outlines the formal plan for handling localization and translation within the Plexus project. The goal is to ensure a consistent, high-quality, multi-language user experience. + +## Core Principles + +1. **English as Source of Truth**: The English message file (`dashboard/messages/en.json`) is the canonical source for all text in the application. All new text and changes must start here. +2. **Automated Translation**: To ensure efficiency, translations for other languages will be automatically generated from the English source file. +3. **Brand Consistency**: A "Brand Glossary" is maintained to govern the translation of specific terms, ensuring that brand names, trademarks, and key concepts are handled consistently across all languages. + +## Current Implementation + +- **Technology**: The dashboard utilizes the [`next-intl`](https://next-intl.dev/) library, integrated with the Next.js App Router. +- **Message Files**: Translation messages are stored as JSON files in the `dashboard/messages/` directory. +- **Routing**: Internationalized routing is handled via a `[locale]` dynamic segment in the `dashboard/app/` directory. + +## Translation Status + +- **English (`en`)**: Complete. This is the source of truth. +- **Spanish (`es`)**: Draft translations exist in `dashboard/messages/es.json`. These were generated before the formal glossary was established and have **not been proofread**. They should not be considered final and are subject to change based on the rules defined in this document. + +## Translation Workflow + +1. **Adding New Text**: All new user-facing text must be added as a key-value pair to `dashboard/messages/en.json`. +2. 
**Automatic Generation**: The localization pipeline will automatically detect changes in `en.json` and generate corresponding draft translations for all other supported languages. +3. **Applying Glossary Rules**: The pipeline will consult the Brand Glossary to apply specific rules, such as preventing the translation of certain words or enforcing specific translations. +4. **Proofreading**: Generated translations must be proofread by a native speaker before they are considered ready for production. + +## Brand Glossary + +This glossary defines how specific terms should be handled by the translation process. + +| Term | Language | Instruction | Translation | Notes | +| :------ | :------- | :------------------ | :---------- | :--------------------------------------------------------------------------------------------------------------------- | +| `Plexus` | `All` | Do not translate | `Plexus` | This is a brand name. | +| `item` | `es` | Force translation | `ítem` | The Spanish word "ítem" is preferred for consistency, as advised by our Spanish language consultants. The accent is important. 
| + + \ No newline at end of file diff --git a/dashboard/app/activity/page.tsx b/dashboard/app/[locale]/activity/page.tsx similarity index 100% rename from dashboard/app/activity/page.tsx rename to dashboard/app/[locale]/activity/page.tsx diff --git a/dashboard/app/alerts/page.tsx b/dashboard/app/[locale]/alerts/page.tsx similarity index 100% rename from dashboard/app/alerts/page.tsx rename to dashboard/app/[locale]/alerts/page.tsx diff --git a/dashboard/app/batches/page.tsx b/dashboard/app/[locale]/batches/page.tsx similarity index 100% rename from dashboard/app/batches/page.tsx rename to dashboard/app/[locale]/batches/page.tsx diff --git a/dashboard/app/dashboard/layout.tsx b/dashboard/app/[locale]/dashboard/layout.tsx similarity index 100% rename from dashboard/app/dashboard/layout.tsx rename to dashboard/app/[locale]/dashboard/layout.tsx diff --git a/dashboard/app/dashboard/page.tsx b/dashboard/app/[locale]/dashboard/page.tsx similarity index 100% rename from dashboard/app/dashboard/page.tsx rename to dashboard/app/[locale]/dashboard/page.tsx diff --git a/dashboard/app/[locale]/datasets/page.tsx b/dashboard/app/[locale]/datasets/page.tsx new file mode 100644 index 000000000..9cc8ded94 --- /dev/null +++ b/dashboard/app/[locale]/datasets/page.tsx @@ -0,0 +1,8 @@ +"use client"; + +import { redirect } from 'next/navigation' + +export default function Datasets() { + // Redirect to the lab version + redirect('/lab/datasets') +} diff --git a/dashboard/app/documentation/advanced/cli/page.tsx b/dashboard/app/[locale]/documentation/advanced/cli/page.tsx similarity index 73% rename from dashboard/app/documentation/advanced/cli/page.tsx rename to dashboard/app/[locale]/documentation/advanced/cli/page.tsx index babde36ed..cd3e21bce 100644 --- a/dashboard/app/documentation/advanced/cli/page.tsx +++ b/dashboard/app/[locale]/documentation/advanced/cli/page.tsx @@ -1,6 +1,188 @@ 'use client'; +import { useTranslationContext } from '@/app/contexts/TranslationContext' + export 
default function CliPage() { + const { locale } = useTranslationContext(); + + if (locale === 'es') { + return ( +
+ + +

+ Herramienta CLI plexus +

+

+ Domina la interfaz de línea de comandos para gestionar tu implementación de Plexus. +

+ +
+
+

Resumen

+

+ La herramienta CLI de Plexus proporciona una poderosa interfaz de línea de comandos para gestionar tu implementación de Plexus, + con enfoque en evaluar y monitorear el rendimiento de cuadros de puntuación. +

+
+ +
+

Instalación

+

+ Instala la herramienta CLI de Plexus usando pip: +

+
+              
+ pip install plexus-cli +
+
+
+ +
+

Sistema de Identificadores Flexible

+

+ La CLI de Plexus usa un sistema de identificadores flexible que te permite referenciar recursos usando diferentes tipos de identificadores. Esto hace que los comandos sean más intuitivos y reduce la necesidad de buscar IDs específicos. +

+ +
+
+

Identificadores de Cuadros de Puntuación

+

+ Al usar el parámetro --scorecard, puedes proporcionar cualquiera de los siguientes: +

+
    +
  • ID de DynamoDB: El identificador único de la base de datos (ej., e51cd5ec-1940-4d8e-abcc-faa851390112)
  • +
  • Nombre: El nombre legible para humanos (ej., "Aseguramiento de Calidad")
  • +
  • Clave: La clave amigable para URLs (ej., aseguramiento-calidad)
  • +
  • ID Externo: Tu identificador externo personalizado (ej., qa-2023)
  • +
+
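Como ilustración, los cuatro tipos de identificadores anteriores pueden usarse de forma intercambiable; los valores son los ejemplos de la lista, no recursos reales:

```shell
# Las cuatro invocaciones refieren al mismo cuadro de puntuación de ejemplo:
plexus scorecards info --scorecard e51cd5ec-1940-4d8e-abcc-faa851390112   # ID de DynamoDB
plexus scorecards info --scorecard "Aseguramiento de Calidad"             # Nombre
plexus scorecards info --scorecard aseguramiento-calidad                  # Clave
plexus scorecards info --scorecard qa-2023                                # ID Externo
```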
+ +
+

Identificadores de Puntuaciones

+

+ Similar a los cuadros de puntuación, las puntuaciones pueden referenciarse usando varios identificadores: +

+ +
    +
  • ID de DynamoDB: El UUID único asignado a la puntuación
  • +
  • Nombre: El nombre legible para humanos de la puntuación
  • +
  • Clave: La clave amigable para máquinas de la puntuación
  • +
  • ID Externo: Un identificador externo opcional para la puntuación
  • +
+
+ +
+

Identificadores de Cuenta

+

+ Al usar el parámetro --account, puedes proporcionar cualquiera de los siguientes: +

+
    +
  • ID de DynamoDB: El identificador único de la base de datos
  • +
  • Nombre: El nombre legible para humanos
  • +
  • Clave: La clave amigable para URLs
  • +
+
+
+
+ +
+

Comandos Comunes de Cuadros de Puntuación

+

+ Aquí tienes algunos comandos comunes para gestionar cuadros de puntuación: +

+ +
+              
+ {`# Listar todos los cuadros de puntuación +plexus scorecards list + +# Obtener información detallada sobre un cuadro específico +plexus scorecards info --scorecard ejemplo1 + +# Listar todas las puntuaciones en un cuadro +plexus scores list --scorecard ejemplo1 + +# Extraer configuración del cuadro a YAML +plexus scorecards pull --scorecard ejemplo1 --output ./mis-cuadros + +# Subir configuración del cuadro desde YAML +plexus scorecards push --scorecard ejemplo1 --file ./mi-cuadro.yaml --note "Configuración actualizada" + +# Eliminar un cuadro +plexus scorecards delete --scorecard ejemplo1`} +
+
+
+ +
+

Ejecutar Evaluaciones

+

+ La forma principal de evaluar el rendimiento de tu cuadro es usando el comando evaluate accuracy: +

+ +
+              
+ {`plexus \\ + evaluate \\ + accuracy \\ + --scorecard "Leads Entrantes" \\ + --number-of-samples 100 \\ + --visualize`} +
+
+ +
+

--scorecard: Cuadro a evaluar (acepta ID, nombre, clave o ID externo)

+

--number-of-samples: Número de muestras a evaluar (recomendado: 100+)

+

--visualize: Generar visualizaciones de los resultados

+
+ +

+ Este comando evaluará tu cuadro contra muestras etiquetadas y proporcionará métricas detalladas de exactitud, + incluyendo precisión, exhaustividad (recall) y matrices de confusión cuando la visualización esté habilitada. +

+
+ +
+

Recursos Adicionales

+

+ Para información más detallada sobre características específicas: +

+
    +
  • Visita nuestra Guía de Evaluaciones
  • +
  • Consulta la ayuda integrada con plexus --help
  • +
  • Obtén ayuda específica de comandos con plexus evaluate accuracy --help
  • +
+
+
+
+ ); + } return (
+ +

Usar el Servidor MCP de Plexus

+

+ Conecta asistentes de IA como Claude a tus datos y funcionalidad de Plexus usando el servidor del Protocolo de Contexto de Modelo (MCP). +

+ +
+
+

¿Qué es MCP?

+

+ El Protocolo de Contexto de Modelo (MCP) es un estándar abierto diseñado por Anthropic que permite a los modelos de IA, como Claude, + interactuar de forma segura con herramientas y fuentes de datos externas. Para un asistente de IA, un servidor MCP actúa como una puerta de enlace, + permitiéndole acceder y usar capacidades de otros sistemas. En el contexto de Plexus, esto significa que puedes + empoderar a una IA para trabajar con tus cuadros de puntuación, evaluaciones y reportes directamente. Esto permite formas más dinámicas y + poderosas de interactuar con tu instancia de Plexus. + Para una inmersión más profunda en el protocolo mismo, consulta el anuncio oficial del Protocolo de Contexto de Modelo de Anthropic. +

+
+ +
+

Resumen del Servidor MCP de Plexus

+

+ El servidor MCP de Plexus es una herramienta pre-construida que puedes ejecutar en tu sistema. Una vez ejecutándose, permite a los asistentes de IA + que admiten MCP (como la aplicación de escritorio de Claude) conectarse a tu entorno de Plexus. Esta conexión permite a la IA + realizar varias acciones dentro de Plexus en tu nombre, como listar cuadros de puntuación, recuperar detalles de reportes, o + incluso iniciar nuevas evaluaciones. El servidor típicamente se ejecuta a través de un script wrapper (plexus_fastmcp_wrapper.py) + que maneja la configuración del entorno y asegura una comunicación fluida con el cliente de IA. +

+
+ +
+

Obtener el Código del Servidor

+

+ Para ejecutar el servidor MCP de Plexus, primero necesitarás obtener el código del servidor. Esto está disponible en el repositorio principal de GitHub de Plexus. + Puedes clonarlo o descargarlo desde: https://github.com/AnthusAI/Plexus. + Los scripts necesarios (plexus_fastmcp_wrapper.py y plexus_fastmcp_server.py) están típicamente ubicados en MCP/ dentro del repositorio. + Principalmente necesitarás estos archivos y asegurar que sus dependencias puedan cumplirse en tu entorno de Python. +

+
+ +
+

Configurar un Cliente MCP (ej., Aplicación de Escritorio de Claude)

+

+ Para usar el servidor MCP de Plexus, necesitas un cliente MCP. Por ejemplo, si estás usando la aplicación de escritorio de Claude, + la configurarías creando o editando un archivo mcp.json. Este archivo le dice a Claude (u otro cliente) + cómo encontrar y comunicarse con tu servidor MCP de Plexus en ejecución. +

+

+ Aquí hay una configuración de ejemplo para tu archivo mcp.json. Necesitarás reemplazar las rutas de marcador de posición + (/path/to/...) con las rutas reales relevantes a tu sistema y donde has clonado el repositorio de Plexus. +

+
+              
+{`{ + "mcpServers": { + "plexus-mcp-service": { + "command": "/path/to/your/conda/envs/py311/bin/python", + "args": [ + "/path/to/your/Plexus/MCP/plexus_fastmcp_wrapper.py", + "--host", "127.0.0.1", + "--port", "8002", + "--transport", "stdio", + "--env-file", "/path/to/your/Plexus/.env", + "--target-cwd", "/path/to/your/Plexus/" + ], + "env": { + "PYTHONUNBUFFERED": "1", + "PYTHONPATH": "/path/to/your/Plexus" + } + } + } +}`} +
+
+

Partes clave de esta configuración:

+
    +
  • command: La ruta completa al intérprete de Python dentro de tu entorno conda de Plexus (ej., py311).
  • +
  • args: Especifica el script wrapper a ejecutar (plexus_fastmcp_wrapper.py) y sus parámetros. + Los argumentos --host y --port configuran los ajustes del servidor local. + El argumento --transport stdio es estándar para comunicación cliente-servidor. + El argumento --env-file debe apuntar directamente a tu archivo .env (que contiene claves API). + El --target-cwd debe apuntar a tu directorio raíz del proyecto Plexus.
  • +
  • env.PYTHONPATH: Debe apuntar a la raíz de tu directorio del proyecto Plexus para asegurar que el servidor pueda encontrar todos los módulos de Python necesarios.
  • +
+

+ La ubicación del archivo mcp.json puede variar dependiendo del cliente. Para la aplicación de escritorio de Claude, consulta su documentación para la ubicación correcta (a menudo en un directorio de configuración dentro de tu perfil de usuario). +

+
+ +
+

Herramientas y Capacidades Disponibles

+

Una vez que el servidor MCP de Plexus esté ejecutándose (a través del script wrapper) y tu asistente de IA esté conectado, puedes instruir al asistente para usar las siguientes herramientas:

+ +
+

Gestión de Cuadros de Puntuación

+
    +
  • + list_plexus_scorecards: Pide a la IA que liste los cuadros de puntuación disponibles en tu Dashboard de Plexus. + Opcionalmente puedes decirle que filtre por un nombre/clave de cuenta, un nombre parcial de cuadro de puntuación, o una clave de cuadro de puntuación. Por ejemplo: "Lista los cuadros de puntuación de Plexus para la cuenta 'Ventas' que incluyan 'Q3' en el nombre." +
  • +
  • + get_plexus_scorecard_info: Solicita información detallada sobre un cuadro de puntuación específico. + Proporciona a la IA un identificador para el cuadro de puntuación (como su nombre, clave, o ID). Devolverá la descripción del cuadro de puntuación, secciones, y las puntuaciones dentro de cada sección. Por ejemplo: "Obtén información para el cuadro de puntuación 'Satisfacción del Cliente Q3'." +
  • +
  • + get_plexus_score_details: Obtén detalles específicos para una puntuación particular dentro de un cuadro de puntuación, incluyendo su configuración e historial de versiones. + Necesitarás especificar tanto el cuadro de puntuación como la puntuación. También puedes pedir una versión específica de la puntuación. Por ejemplo: "Muéstrame los detalles para la puntuación 'Capacidad de Respuesta' en el cuadro de puntuación 'Tickets de Soporte', especialmente su versión campeón." +
  • +
+
+ +
+

Herramientas de Evaluación

+
    +
  • + run_plexus_evaluation: Instruye a la IA para iniciar una nueva evaluación de cuadro de puntuación. + Necesitas proporcionar el nombre del cuadro de puntuación y opcionalmente un nombre de puntuación específico y el número de muestras. El servidor enviará esta tarea a tu backend de Plexus. Nota que el servidor MCP en sí no rastrea el progreso; monitorearías la evaluación en el Dashboard de Plexus como siempre. Por ejemplo: "Ejecuta una evaluación de Plexus para el cuadro de puntuación 'Calidad de Leads' usando 100 muestras." +
  • +
+
+ +
+

Herramientas de Reportes

+
    +
  • + list_plexus_reports: Pide una lista de reportes generados. Puedes filtrar por cuenta o por un ID de configuración de reporte específico si lo conoces. + La IA devolverá una lista mostrando nombres de reportes, IDs, y cuándo fueron creados. Por ejemplo: "Lista los últimos reportes de Plexus para la cuenta principal." +
  • +
  • + get_plexus_report_details: Recupera información detallada sobre un reporte específico proporcionando su ID. + Esto incluye los parámetros del reporte, salida, y cualquier bloque generado. Por ejemplo: "Obtén los detalles para el reporte de Plexus ID '123-abc-456'." +
  • +
  • + get_latest_plexus_report: Una forma conveniente de obtener los detalles del reporte generado más recientemente. + Opcionalmente puedes filtrar por cuenta o ID de configuración de reporte. Por ejemplo: "Muéstrame el último reporte generado desde la configuración 'Rendimiento Semanal'." +
  • +
  • + list_plexus_report_configurations: Obtén una lista de todas las configuraciones de reporte disponibles para una cuenta. + Esto es útil para saber qué reportes *puedes* generar. Por ejemplo: "¿Qué configuraciones de reporte están disponibles para la cuenta 'Marketing'?" +
  • +
+
+ +
+

Herramientas de Utilidad

+
    +
  • + think: Una herramienta de planificación usada internamente por la IA para estructurar el razonamiento antes de usar otras herramientas. + Esto ayuda a la IA a organizar su enfoque para tareas complejas que pueden requerir múltiples pasos o llamadas de herramientas. +
  • +
+
+
+ +
+

Requisitos de Entorno para Ejecutar el Servidor

+
+
+

Software

+
    +
  • Python 3.11 o más nuevo (requerido por la librería fastmcp que usa el servidor).
  • +
  • Una instalación existente de Plexus y acceso a sus credenciales del dashboard.
  • +
  • El paquete Python python-dotenv (usado por el servidor para cargar tus claves API desde el archivo .env).
  • +
+
+
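Como esbozo de lo que hace la dependencia python-dotenv al arrancar el servidor, el siguiente fragmento autónomo imita la carga de un archivo .env. El analizador simplificado y los valores son hipotéticos; el servidor real llama a load_dotenv de python-dotenv:

```python
import os
import tempfile

def load_env_file(path):
    """Analizador simplificado de lineas CLAVE=VALOR, al estilo de load_dotenv."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Omitir lineas en blanco, comentarios y lineas sin asignacion
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

# Contenido .env de ejemplo con credenciales de marcador de posicion:
example = "PLEXUS_API_URL=https://api.example.com\nPLEXUS_API_KEY=clave-de-ejemplo\n"
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write(example)
    path = fh.name

env = load_env_file(path)
print(env["PLEXUS_API_URL"])  # → https://api.example.com
os.unlink(path)
```

A diferencia de este esbozo, load_dotenv de python-dotenv además exporta los valores analizados a os.environ, que es lo que permite al servidor leerlos con os.environ.get.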
+

Archivo .env con Credenciales de Plexus

+

+ El servidor necesita acceder a tu API de Plexus. Crea un archivo llamado .env. El parámetro --env-file en tu mcp.json debe apuntar directamente a este archivo. + Típicamente se ubica en tu directorio raíz del proyecto Plexus principal (ej., Plexus/.env). +

+

Variables Requeridas en .env:

+
    +
  • PLEXUS_API_URL: La URL del endpoint de API para tu instancia de Plexus.
  • +
  • PLEXUS_API_KEY: Tu clave API para autenticar con Plexus.
  • +
  • PLEXUS_DASHBOARD_URL: La URL principal de tu dashboard de Plexus (usada para generar enlaces).
  • +
+

Variables Opcionales en .env:

+
    +
  • PLEXUS_ACCOUNT_KEY: Si trabajas con múltiples cuentas, puedes establecer una clave de cuenta predeterminada aquí.
  • +
  • LOG_LEVEL: Puedes establecer esto a DEBUG, INFO, WARNING, o ERROR para controlar la verbosidad del registro del servidor.
  • +
+
+
+
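Juntando lo anterior, un archivo .env mínimo podría verse así (todos los valores son marcadores de posición; sustitúyelos por los de tu propia instancia de Plexus):

```shell
# Requeridas
PLEXUS_API_URL=https://api.example.com
PLEXUS_API_KEY=tu-clave-api
PLEXUS_DASHBOARD_URL=https://dashboard.example.com

# Opcionales
PLEXUS_ACCOUNT_KEY=tu-clave-de-cuenta-predeterminada
LOG_LEVEL=INFO
```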
+ +
+

Ejecutar el Servidor

+

+ Una vez que tengas el código y tu archivo .env esté configurado, debes ejecutar el servidor usando el script plexus_fastmcp_wrapper.py como se configura en tu archivo mcp.json. + El cliente MCP (ej., Aplicación de Escritorio de Claude) ejecutará el comando especificado en mcp.json cuando intente conectarse al "plexus-mcp-service". +

+

+ Típicamente no ejecutas el script plexus_fastmcp_wrapper.py manualmente desde la terminal para uso del cliente. En su lugar, asegúrate de que tu mcp.json esté configurado correctamente, y la aplicación cliente iniciará el proceso del servidor según sea necesario. +

+

+ Asegúrate de que tu entorno Python de Plexus (ej., conda activate py311) esté correctamente referenciado por la ruta completa a python en el campo command de tu mcp.json. + El script wrapper maneja el paso de las variables de entorno y rutas necesarias al plexus_fastmcp_server.py subyacente. +

+
+ +
+

Solución de Problemas Comunes

+
    +
  • Errores de Conexión: Verifica dos veces todas las rutas en tu archivo mcp.json (command, args, env.PYTHONPATH). Asegúrate de que apunten con precisión a tu ejecutable de Python, el script plexus_fastmcp_wrapper.py, tu archivo .env, y tu directorio del proyecto.
  • +
  • Errores de Autenticación: Verifica que la ruta --env-file en mcp.json apunte correctamente a tu archivo .env y que este archivo contenga el PLEXUS_API_URL y PLEXUS_API_KEY correctos.
  • +
+
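Una comprobación rápida desde la terminal puede confirmar que las rutas configuradas existen antes de reiniciar el cliente (son los mismos marcadores de posición de la configuración de ejemplo; sustitúyelos por tus rutas reales):

```shell
/path/to/your/conda/envs/py311/bin/python --version
ls /path/to/your/Plexus/MCP/plexus_fastmcp_wrapper.py
ls /path/to/your/Plexus/.env
```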
+ +
+

Registros del Servidor

+

+ La configuración del servidor MCP de Plexus (a través de plexus_fastmcp_wrapper.py) dirige los registros operacionales y mensajes de error a stderr. + Los clientes MCP como la aplicación de escritorio de Claude típicamente capturan y muestran estos registros stderr, o los almacenan en un archivo de registro dedicado. +

+

+ Por ejemplo, la aplicación de escritorio de Claude a menudo almacena registros de interacción MCP en ~/Library/Logs/Claude/mcp.log en macOS. Monitorear este archivo es clave para diagnosticar problemas si el cliente no los muestra directamente. +

+
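Para seguir ese archivo de registro en vivo mientras depuras una conexión (la ruta es la indicada arriba; el archivo solo existe una vez que el cliente ha ejecutado un servidor MCP):

```shell
tail -f ~/Library/Logs/Claude/mcp.log
```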
+
+
+ ); + } + return ( +
+ + +

Using the Plexus MCP Server

+

+ Connect AI assistants like Claude to your Plexus data and functionality using the Model Context Protocol (MCP) server. +

+ +
+
+

What is MCP?

+

+ The Model Context Protocol (MCP) is an open standard designed by Anthropic that allows AI models, such as Claude, + to securely interact with external tools and data sources. For an AI assistant, an MCP server acts as a gateway, + enabling it to access and use capabilities from other systems. In the context of Plexus, this means you can + empower an AI to work with your scorecards, evaluations, and reports directly. This allows for more dynamic and + powerful ways to interact with your Plexus instance. + For a deeper dive into the protocol itself, see the official Anthropic Model Context Protocol announcement. +

+
+ +
+

Plexus MCP Server Overview

+

+ The Plexus MCP server is a pre-built tool that you can run on your system. Once running, it allows AI assistants + that support MCP (like the Claude desktop app) to connect to your Plexus environment. This connection lets the AI + perform various actions within Plexus on your behalf, such as listing scorecards, retrieving report details, or + even initiating new evaluations. The server is typically run via a wrapper script (plexus_fastmcp_wrapper.py) + which handles environment setup and ensures smooth communication with the AI client. +

+
+ +
+

Getting the Server Code

+

+ To run the Plexus MCP server, you'll first need to obtain the server code. This is available in the main Plexus GitHub repository. + You can clone or download it from: https://github.com/AnthusAI/Plexus. + The necessary scripts (plexus_fastmcp_wrapper.py and plexus_fastmcp_server.py) are typically located at MCP/ within the repository. + You will primarily need these files and to ensure their dependencies can be met in your Python environment. +

+
+ +
+

Setting Up an MCP Client (e.g., Claude Desktop App)

+

+ To use the Plexus MCP server, you need an MCP client. For example, if you are using the Claude desktop application, + you would configure it by creating or editing an mcp.json file. This file tells Claude (or another client) + how to find and communicate with your running Plexus MCP server. +

+

+ Here is an example configuration for your mcp.json file. You will need to replace the placeholder paths + (/path/to/...) with the actual paths relevant to your system and where you have cloned the Plexus repository. +

+
+            
+{`{ + "mcpServers": { + "plexus-mcp-service": { + "command": "/path/to/your/conda/envs/py311/bin/python", + "args": [ + "/path/to/your/Plexus/MCP/plexus_fastmcp_wrapper.py", + "--host", "127.0.0.1", + "--port", "8002", + "--transport", "stdio", + "--env-file", "/path/to/your/Plexus/.env", + "--target-cwd", "/path/to/your/Plexus/" + ], + "env": { + "PYTHONUNBUFFERED": "1", + "PYTHONPATH": "/path/to/your/Plexus" + } + } + } +}`} +
+
+

Key parts of this configuration:

+
    +
  • command: The full path to the Python interpreter within your Plexus conda environment (e.g., py311).
  • +
  • args: Specifies the wrapper script to run (plexus_fastmcp_wrapper.py) and its parameters. + The --host and --port arguments configure the local server settings. + The --transport stdio argument is standard for client-server communication. + The --env-file argument must point directly to your .env file (which contains API keys). + The --target-cwd should point to your Plexus project root directory.
  • +
  • env.PYTHONPATH: Should point to the root of your Plexus project directory to ensure the server can find all necessary Python modules.
  • +
+

+ The location of the mcp.json file can vary depending on the client. For the Claude desktop app, consult its documentation for the correct location (often in a configuration directory within your user profile). +

+
+ +
+

Available Tools & Capabilities

+

Once the Plexus MCP server is running (via the wrapper script) and your AI assistant is connected, you can instruct the assistant to use the following tools:

+ +
+

Scorecard Management

+
    +
  • + list_plexus_scorecards: Ask the AI to list available scorecards in your Plexus Dashboard. + You can optionally tell it to filter by an account name/key, a partial scorecard name, or a scorecard key. For example: "List Plexus scorecards for the 'Sales' account that include 'Q3' in the name." +
  • +
  • + get_plexus_scorecard_info: Request detailed information about a specific scorecard. + Provide the AI with an identifier for the scorecard (like its name, key, or ID). It will return the scorecard's description, sections, and the scores within each section. For example: "Get info for the 'Customer Satisfaction Q3' scorecard." +
  • +
  • + get_plexus_score_details: Get specific details for a particular score within a scorecard, including its configuration and version history. + You'll need to specify both the scorecard and the score. You can also ask for a specific version of the score. For example: "Show me the details for the 'Responsiveness' score in the 'Support Tickets' scorecard, especially its champion version." +
  • +
+
+ +
+

Evaluation Tools

+
    +
  • + run_plexus_evaluation: Instruct the AI to start a new scorecard evaluation. + You need to provide the scorecard name and optionally a specific score name and the number of samples. The server will dispatch this task to your Plexus backend. Note that the MCP server itself doesn't track the progress; you would monitor the evaluation in the Plexus Dashboard as usual. For example: "Run a Plexus evaluation for the 'Lead Quality' scorecard using 100 samples." +
  • +
+
+ +
+

Reporting Tools

+
    +
  • + list_plexus_reports: Ask for a list of generated reports. You can filter by account or by a specific report configuration ID if you know it. + The AI will return a list showing report names, IDs, and when they were created. For example: "List the latest Plexus reports for the main account." +
  • +
  • + get_plexus_report_details: Retrieve detailed information about a specific report by providing its ID. + This includes the report's parameters, output, and any generated blocks. For example: "Get the details for Plexus report ID '123-abc-456'." +
  • +
  • + get_latest_plexus_report: A convenient way to get the details of the most recently generated report. + You can optionally filter by account or report configuration ID. For example: "Show me the latest report generated from the 'Weekly Performance' configuration." +
  • +
  • + list_plexus_report_configurations: Get a list of all available report configurations for an account. + This is useful for knowing what reports you *can* generate. For example: "What report configurations are available for the 'Marketing' account?" +
  • +
+
+ +
+

Utility Tools

+
    +
  • + think: A planning tool used internally by the AI to structure reasoning before using other tools. + This helps the AI organize its approach to complex tasks that may require multiple steps or tool calls. +
  • +
+
+
+ +
+

Environment Requirements for Running the Server

+
+
+

Software

+
    +
  • Python 3.11 or newer (required by the fastmcp library the server uses).
  • +
  • An existing Plexus installation and access to its dashboard credentials.
  • +
  • The python-dotenv Python package (used by the server to load your API keys from the .env file).
  • +
+
+
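As a sketch of what the python-dotenv dependency does at server startup, the self-contained fragment below imitates loading a .env file. The simplified parser and the values are hypothetical stand-ins; the real server calls python-dotenv's load_dotenv:

```python
import os
import tempfile

def load_env_file(path):
    """Simplified KEY=VALUE line parser, in the spirit of load_dotenv."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

# Example .env contents with placeholder credentials:
example = "PLEXUS_API_URL=https://api.example.com\nPLEXUS_API_KEY=placeholder-key\n"
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write(example)
    path = fh.name

env = load_env_file(path)
print(env["PLEXUS_API_URL"])  # → https://api.example.com
os.unlink(path)
```

Unlike this sketch, python-dotenv's load_dotenv additionally exports the parsed values into os.environ, which is what lets the server read them with os.environ.get.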
+

.env File with Plexus Credentials

+

+ The server needs to access your Plexus API. Create a file named .env. The --env-file parameter in your mcp.json should point directly to this file. + It's typically located in your main Plexus project root directory (e.g., Plexus/.env). +

+

Required Variables in .env:

+
    +
  • PLEXUS_API_URL: The API endpoint URL for your Plexus instance.
  • +
  • PLEXUS_API_KEY: Your API key for authenticating with Plexus.
  • +
  • PLEXUS_DASHBOARD_URL: The main URL of your Plexus dashboard (used for generating links).
  • +
+

Optional Variables in .env:

+
    +
  • PLEXUS_ACCOUNT_KEY: If you work with multiple accounts, you can set a default account key here.
  • +
  • LOG_LEVEL: You can set this to DEBUG, INFO, WARNING, or ERROR to control the server's logging verbosity.
  • +
+
+
+
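Putting those together, a minimal .env file might look like this (every value is a placeholder; substitute the details of your own Plexus instance):

```shell
# Required
PLEXUS_API_URL=https://api.example.com
PLEXUS_API_KEY=your-api-key
PLEXUS_DASHBOARD_URL=https://dashboard.example.com

# Optional
PLEXUS_ACCOUNT_KEY=your-default-account-key
LOG_LEVEL=INFO
```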
+ +
+

Running the Server

+

+ Once you have the code and your .env file is set up, you should run the server using the plexus_fastmcp_wrapper.py script as configured in your mcp.json file. + The MCP client (e.g., Claude Desktop App) will execute the command specified in mcp.json when it attempts to connect to the "plexus-mcp-service". +

+

+ You typically don't run the plexus_fastmcp_wrapper.py script manually from the terminal for client use. Instead, ensure your mcp.json is correctly configured, and the client application will start the server process as needed. +

+

+ Make sure your Plexus Python environment (e.g., conda activate py311) is correctly referenced by the full path to python in the command field of your mcp.json. + The wrapper script handles passing the necessary environment variables and paths to the underlying plexus_fastmcp_server.py. +

+
+ +
+

Troubleshooting Common Issues

+
    +
  • Connection Errors: Double-check all paths in your mcp.json file (command, args, env.PYTHONPATH). Ensure they accurately point to your Python executable, the plexus_fastmcp_wrapper.py script, your .env file, and your project directory.
  • +
  • Authentication Errors: Verify that the --env-file path in mcp.json correctly points to your .env file and that this file contains the correct PLEXUS_API_URL and PLEXUS_API_KEY.
  • +
+
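A quick terminal sanity check can confirm the configured paths exist before restarting the client (these are the same placeholder paths as in the example configuration; substitute your real ones):

```shell
/path/to/your/conda/envs/py311/bin/python --version
ls /path/to/your/Plexus/MCP/plexus_fastmcp_wrapper.py
ls /path/to/your/Plexus/.env
```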
+ +
+

Server Logs

+

+ The Plexus MCP server setup (via plexus_fastmcp_wrapper.py) directs operational logs and error messages to stderr. + MCP clients like the Claude desktop app typically capture and display these stderr logs, or store them in a dedicated log file. +

+

+ For instance, the Claude desktop app often stores MCP interaction logs in ~/Library/Logs/Claude/mcp.log on macOS. Monitoring this file is key for diagnosing issues if the client doesn't display them directly. +

+
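To follow that log file live while debugging a connection (the path is the one noted above; the file only exists once the client has run an MCP server):

```shell
tail -f ~/Library/Logs/Claude/mcp.log
```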
+
+
+ ) +} \ No newline at end of file diff --git a/dashboard/app/[locale]/documentation/advanced/page.tsx b/dashboard/app/[locale]/documentation/advanced/page.tsx new file mode 100644 index 000000000..e3e8c3b9a --- /dev/null +++ b/dashboard/app/[locale]/documentation/advanced/page.tsx @@ -0,0 +1,164 @@ +'use client'; + +import { Button as DocButton } from "@/components/ui/button" +import { useTranslationContext } from '@/app/contexts/TranslationContext' +import Link from "next/link" + +export default function AdvancedPage() { + const { locale } = useTranslationContext(); + + if (locale === 'es') { + return ( +
+

Herramientas y Conceptos Avanzados

+

+ Explora herramientas y conceptos avanzados que permiten una integración más profunda y personalización de Plexus + para usuarios técnicos y desarrolladores. +

+ +
+
+

Interfaz de Línea de Comandos

+
+

+ La herramienta CLI plexus proporciona acceso potente por línea de comandos a toda la funcionalidad de Plexus, + perfecta para automatización y flujos de trabajo avanzados. +

+ + Explorar Herramienta CLI + +
+
+ +
+

Infraestructura de Nodos de Trabajo

+
+

+ Aprende cómo configurar y gestionar nodos de trabajo de Plexus para procesar tareas de manera eficiente + en tu infraestructura. +

+ + Aprender sobre Nodos de Trabajo + +
+
+ +
+

SDK de Python

+
+

+ Integra Plexus directamente en tus aplicaciones Python con nuestro SDK integral, + habilitando acceso programático a todas las características de la plataforma. +

+ + Explorar Referencia SDK + +
+
+ +
+

Fragmentos de Código Universal

+
+

+ Aprende sobre el formato de código YAML universal de Plexus diseñado para comunicación perfecta + entre humanos, modelos de IA y otros sistemas. +

+ + Explorar Fragmentos de Código Universal + +
+
+ +
+

Servidor MCP de Plexus

+
+

+ Habilita agentes de IA y herramientas para interactuar con la funcionalidad de Plexus usando el Model Context Protocol (MCP). +

+ + Explorar Servidor MCP + +
+
+
+
+ ); + } + + // English content (default) + return ( +
+

Advanced Tools & Concepts

+

+ Explore advanced tools and concepts that enable deeper integration and customization of Plexus + for technical users and developers. +

+ +
+
+

Command Line Interface

+
+

+ The plexus CLI tool provides powerful command-line access to all Plexus functionality, + perfect for automation and advanced workflows. +

+ + Explore CLI Tool + +
+
+ +
+

Worker Infrastructure

+
+

+ Learn how to set up and manage Plexus worker nodes to process tasks efficiently + across your infrastructure. +

+ + Learn About Workers + +
+
+ +
+

Python SDK

+
+

+ Integrate Plexus directly into your Python applications with our comprehensive SDK, + enabling programmatic access to all platform features. +

+ + Browse SDK Reference + +
+
+ +
+

Universal Code Snippets

+
+

+ Learn about Plexus's universal YAML code format designed for seamless communication + between humans, AI models, and other systems. +

+ + Explore Universal Code Snippets + +
+
+ +
+

Plexus MCP Server

+
+

+ Enable AI agents and tools to interact with Plexus functionality using the Model Context Protocol (MCP). +

+ + Explore MCP Server + +
+
+
+
+ ) +} \ No newline at end of file diff --git a/dashboard/app/[locale]/documentation/advanced/sdk/page.tsx b/dashboard/app/[locale]/documentation/advanced/sdk/page.tsx new file mode 100644 index 000000000..f17fe2d69 --- /dev/null +++ b/dashboard/app/[locale]/documentation/advanced/sdk/page.tsx @@ -0,0 +1,156 @@ +'use client'; + +import { useTranslationContext } from '@/app/contexts/TranslationContext' + +export default function SdkPage() { + const { locale } = useTranslationContext(); + + if (locale === 'es') { + return ( +
+

Referencia del SDK de Python

+

+ Explora el SDK de Python para acceso programático a la funcionalidad de Plexus. +

+ +
+
+

Resumen

+

+ El SDK de Python de Plexus proporciona una forma simple e intuitiva de interactuar con Plexus + programáticamente. Úsalo para automatizar flujos de trabajo, gestionar recursos, e integrar + Plexus en tus aplicaciones. +

+
+ +
+

Instalación

+

+ Instala el SDK de Plexus usando pip: +

+
+              pip install plexus-sdk
+            
+
+ +
+

Inicio Rápido

+

+ Aquí tienes un ejemplo simple para comenzar: +

+
+              {`from plexus import Plexus
+
+# Inicializar el cliente
+plexus = Plexus(api_key="tu-api-key")
+
+# Crear una nueva fuente
+source = plexus.sources.create(
+    name="Mi Fuente",
+    type="text",
+    data="Contenido de ejemplo"
+)
+
+# Ejecutar una evaluación
+evaluation = plexus.evaluations.create(
+    source_id=source.id,
+    scorecard_id="tu-scorecard-id"
+)`}
+            
+
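Las evaluaciones suelen ejecutarse de forma asíncrona, así que un patrón habitual es sondear hasta que terminen. Este es un boceto mínimo con la biblioteca estándar; la llamada `plexus.evaluations.get(...)` y el campo `status` del comentario son suposiciones ilustrativas, no API confirmada del SDK:

```python
import time
from typing import Callable

def esperar_hasta(condicion: Callable[[], bool],
                  timeout_s: float = 60.0,
                  intervalo_s: float = 1.0) -> bool:
    """Sondea condicion() hasta que devuelva True o se agote el tiempo."""
    limite = time.monotonic() + timeout_s
    while time.monotonic() < limite:
        if condicion():
            return True
        time.sleep(intervalo_s)
    return False

# Uso hipotético con el SDK anterior (nombres no confirmados):
# listo = esperar_hasta(lambda: plexus.evaluations.get(evaluation.id).status == "completed")
```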
+ +
+

Documentación Completa

+

+ Para la referencia completa de la API, guías de autenticación, ejemplos de uso avanzado y mejores prácticas, + visita nuestra documentación integral del SDK de Python: +

+
+ + Ver Documentación Completa del SDK → + +
+
+
+
+ ); + } + return ( +
+

Python SDK Reference

+

+ Explore the Python SDK for programmatic access to Plexus functionality. +

+ +
+
+

Overview

+

+ The Plexus Python SDK provides a simple and intuitive way to interact with Plexus + programmatically. Use it to automate workflows, manage resources, and integrate + Plexus into your applications. +

+
+ +
+

Installation

+

+ Install the Plexus SDK using pip: +

+
+            pip install plexus-sdk
+          
+
+ +
+

Quick Start

+

+ Here's a simple example to get you started: +

+
+            {`from plexus import Plexus
+
+# Initialize the client
+plexus = Plexus(api_key="your-api-key")
+
+# Create a new source
+source = plexus.sources.create(
+    name="My Source",
+    type="text",
+    data="Sample content"
+)
+
+# Run an evaluation
+evaluation = plexus.evaluations.create(
+    source_id=source.id,
+    scorecard_id="your-scorecard-id"
+)`}
+          
+
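Hard-coding API keys as in the snippet above is fine for a demo, but in practice you would typically read them from the environment. A minimal sketch; the `PLEXUS_API_KEY` variable name is an assumption for illustration, not a documented convention:

```python
import os

def get_api_key(env_var: str = "PLEXUS_API_KEY") -> str:
    """Read the API key from the environment, failing loudly if it's missing."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before constructing the client")
    return key

# Hypothetical usage with the SDK above:
# plexus = Plexus(api_key=get_api_key())
```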
+ +
+

Complete Documentation

+

+ For complete API reference, authentication guides, advanced usage examples, and best practices, + visit our comprehensive Python SDK documentation: +

+
+ + View Full SDK Documentation → + +
+
+
+
+ ) +} \ No newline at end of file diff --git a/dashboard/app/documentation/advanced/universal-code/page.tsx b/dashboard/app/[locale]/documentation/advanced/universal-code/page.tsx similarity index 61% rename from dashboard/app/documentation/advanced/universal-code/page.tsx rename to dashboard/app/[locale]/documentation/advanced/universal-code/page.tsx index dde85b2b9..63701644c 100644 --- a/dashboard/app/documentation/advanced/universal-code/page.tsx +++ b/dashboard/app/[locale]/documentation/advanced/universal-code/page.tsx @@ -3,8 +3,10 @@ import { MessageSquareCode } from 'lucide-react'; import { CodeSnippet } from '@/components/ui/code-snippet'; import FeedbackAnalysis from '@/components/blocks/FeedbackAnalysis'; +import { useTranslationContext } from '@/app/contexts/TranslationContext' export default function YAMLCodeStandardPage() { + const { locale } = useTranslationContext(); // Create the YAML data but also parse it into the object structure for the component const sampleYAMLCode = `# Sales Lead Routing Analysis Report Output # @@ -186,6 +188,136 @@ scores: indexed_items_file: "lead_routing_analysis_55125_items.json"`; + if (locale === 'es') { + return ( +
+
+

Fragmentos de Código Universal

+

+ Interfaz de código universal para humanos, modelos de IA y sistemas +

+
+ +
+
+

El Icono de Código Universal

+
+ +
+
+

+ En todo Plexus, este icono significa que puedes obtener datos estructurados que funcionan en cualquier lugar. Haz clic, copia la salida, + y pégala directamente en ChatGPT, Claude, tu editor de código, o compártela con otros miembros del equipo. + El formato YAML incluye contexto integrado para que cualquiera (humano o IA) entienda inmediatamente lo que está viendo. +

+

+ No más luchar con JSON denso o perder contexto cuando mueves datos entre herramientas. + Simplemente funciona, en cualquier lugar. +

+
+
+ +
+

Reporte Visual → Código Universal

+

+ Así es como funciona: cada reporte gráfico en Plexus tiene una representación de código correspondiente. + A continuación hay un análisis real de enrutamiento de leads de ventas. El reporte visual muestra puntuaciones de acuerdo, matrices de confusión e insights de manera hermosa. + El botón de Código revela los mismos datos como YAML contextual que funciona en cualquier lugar. +

+ +
+

Prueba el Botón de Código

+

+ Usa el botón de Código en la esquina superior derecha para ver cómo los insights visuales se transforman en YAML estructurado que funciona con cualquier herramienta de IA, sistema de documentación o repositorio de código. +

+
+ +
+
+
+
Código Universal
+
+
+
+
+ +
+
+ +
+

+ 💡 Prueba esto: Usa el botón de Código para revelar YAML contextual con comentarios explicativos. + Haz clic en el botón Copiar para copiar el código a tu portapapeles. + Pégalo en ChatGPT o Claude y pregunta: "¿Qué puntuaciones de enrutamiento de leads de ventas muestran el mayor desacuerdo entre revisores?" o "¿Qué recomendaciones de entrenamiento mejorarían la confiabilidad del enrutamiento de leads?" + La IA entenderá inmediatamente el contexto y te dará recomendaciones estratégicas. +

+
+
+ +
+

Disponible en Todas Partes

+

+ Cada bloque de reporte en Plexus genera automáticamente Fragmentos de Código Universal. Ya sea que estés trabajando con + análisis de temas, análisis de retroalimentación, matrices de confusión, o cualquier otra salida analítica, el distintivo + icono de código te da acceso instantáneo a datos estructurados y contextuales. +

+ +

+ También encontrarás Fragmentos de Código Universal en: +

+ +
+
+

📊 Bloques de Reporte

+

+ Cada salida analítica incluye el Icono de Código Universal para acceso instantáneo a datos +

+
+
+

🎯 Evaluaciones

+

+ Resultados de evaluación con matrices de confusión, métricas de precisión y datos de rendimiento +

+
+
+

📈 Analíticas

+

+ Análisis estadístico, puntuaciones de acuerdo e insights de rendimiento +

+
+
+

🔧 Configuraciones

+

+ Configuraciones de cuadros de puntuación y puntuaciones exportadas en formato universal +

+
+
+
+ +
+

Por Qué Esto Importa

+

+ Las exportaciones de datos tradicionales carecen de contexto cuando las mueves. + Los Fragmentos de Código Universal resuelven esto empaquetando tus datos con explicaciones integradas que viajan con ellos. +

+

+ Esto significa que puedes mover insights sin problemas entre Plexus, tus herramientas de IA, documentación, repositorios de código, + y conversaciones de equipo sin perder significado o requerir explicación adicional. +

+
+
+
+ ); + } + return (
diff --git a/dashboard/app/[locale]/documentation/advanced/worker-nodes/page.tsx b/dashboard/app/[locale]/documentation/advanced/worker-nodes/page.tsx new file mode 100644 index 000000000..4df01e6a9 --- /dev/null +++ b/dashboard/app/[locale]/documentation/advanced/worker-nodes/page.tsx @@ -0,0 +1,370 @@ +'use client'; + +import { useTranslationContext } from '@/app/contexts/TranslationContext' + +export default function WorkerNodesPage() { + const { locale } = useTranslationContext(); + + if (locale === 'es') { + return ( +
+ + +

Nodos Trabajadores

+

+ Aprende cómo desplegar y gestionar nodos trabajadores de Plexus en cualquier infraestructura para procesar tus tareas de evaluación. +

+ +
+
+

Resumen

+

+ Los nodos trabajadores de Plexus son procesos daemon de larga duración que manejan tareas de evaluación y otras operaciones. + Puedes ejecutar estos trabajadores en cualquier computadora con Python instalado - ya sea en la nube (AWS, Azure, GCP) + o en tus propias instalaciones. +

+

+ Los trabajadores se gestionan usando la herramienta CLI de Plexus, que facilita iniciar, configurar y monitorear procesos + trabajadores en tu infraestructura. +

+
+ +
+

Iniciar un Trabajador

+

+ Usa el comando plexus command worker para iniciar un proceso trabajador. Aquí tienes un ejemplo básico: +

+ +
+              
{`plexus command worker \\
+  --concurrency 4 \\
+  --queue celery \\
+  --loglevel INFO`}
+
+ +
+

--concurrency: Número de procesos trabajadores (predeterminado: 4)

+

--queue: Cola a procesar (predeterminado: celery)

+

--loglevel: Nivel de registro (predeterminado: INFO)

+
+
+ +
+

Especialización de Trabajadores

+

+ Los trabajadores pueden especializarse para manejar tipos específicos de tareas usando patrones objetivo. Esto te permite + dedicar ciertos trabajadores a cargas de trabajo particulares: +

+ +
+              
{`# Trabajador que solo procesa tareas relacionadas con conjuntos de datos
+plexus command worker \\
+  --target-patterns "datasets/*" \\
+  --concurrency 4
+
+# Trabajador para tareas intensivas en GPU
+plexus command worker \\
+  --target-patterns "*/gpu-required" \\
+  --concurrency 2
+
+# Trabajador que maneja múltiples tipos de tareas
+plexus command worker \\
+  --target-patterns "datasets/*,training/*" \\
+  --concurrency 8`}
+
+ +

+ Los patrones objetivo usan el formato dominio/subdominio y admiten comodines. Algunos ejemplos: +

+
    +
  • datasets/call-criteria - Solo procesar tareas de conjunto de datos de criterios de llamada
  • +
  • training/call-criteria - Solo manejar tareas de entrenamiento de criterios de llamada
  • +
  • */gpu-required - Procesar cualquier tarea que requiera recursos de GPU
  • +
  • datasets/* - Manejar todas las tareas relacionadas con conjuntos de datos
  • +
+
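La implementación real del emparejamiento de Plexus no se muestra aquí; un boceto mínimo que asume semántica de comodines estilo glob (como `fnmatch` de Python) ilustra cómo un trabajador podría decidir si acepta una tarea:

```python
from fnmatch import fnmatch

def acepta_tarea(objetivo: str, patrones: list[str]) -> bool:
    """True si el objetivo dominio/subdominio de la tarea coincide con algún patrón."""
    return any(fnmatch(objetivo, p) for p in patrones)

# Trabajador iniciado con --target-patterns "datasets/*,training/*"
patrones = "datasets/*,training/*".split(",")
print(acepta_tarea("datasets/call-criteria", patrones))  # True
print(acepta_tarea("reports/mensual", patrones))         # False
```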
+ +
+

Ejemplos de Despliegue

+

+ Aquí tienes algunos escenarios de despliegue comunes: +

+ +
+
+

AWS EC2

+
+                  
{`# Ejecutar en una sesión screen para persistencia
+screen -S plexus-worker
+plexus command worker \\
+  --concurrency 8 \\
+  --loglevel INFO
+# Ctrl+A, D para desconectar`}
+
+
+ +
+

Desarrollo Local

+
+                  
{`# Ejecutar con registro aumentado para depuración
+plexus command worker \\
+  --concurrency 2 \\
+  --loglevel DEBUG`}
+
+
+ +
+

Trabajador GPU

+
+                  
{`# Trabajador GPU dedicado con objetivo específico
+plexus command worker \\
+  --concurrency 1 \\
+  --target-patterns "*/gpu-required" \\
+  --loglevel INFO`}
+
+
+
+
+ +
+

Mejores Prácticas

+
    +
  • Usar un gestor de procesos (como systemd, supervisor, o screen) para mantener los trabajadores funcionando
  • +
  • Establecer concurrencia basada en núcleos de CPU y memoria disponibles
  • +
  • Usar patrones objetivo para optimizar la utilización de recursos
  • +
  • Monitorear registros de trabajadores para errores y problemas de rendimiento
  • +
  • Desplegar trabajadores cerca de tus fuentes de datos cuando sea posible
  • +
  • Considerar usar grupos de auto-escalado en entornos cloud
  • +
+
+ +
+

Recursos Adicionales

+

+ Para más información sobre despliegue y gestión de trabajadores: +

+
    +
  • Consulta la documentación CLI para referencia detallada de comandos
  • +
  • Revisa la ayuda integrada con plexus command worker --help
  • +
  • Ver registros de trabajadores con --loglevel DEBUG para solución de problemas
  • +
+
+
+
+ ); + } + return ( +
+ + +

Worker Nodes

+

+ Learn how to deploy and manage Plexus worker nodes across any infrastructure to process your evaluation tasks. +

+ +
+
+

Overview

+

+ Plexus worker nodes are long-running daemon processes that handle evaluation tasks and other operations. + You can run these workers on any computer with Python installed - whether it's in the cloud (AWS, Azure, GCP) + or on your own premises. +

+

+ Workers are managed using the Plexus CLI tool, which makes it easy to start, configure, and monitor worker + processes across your infrastructure. +

+
+ +
+

Starting a Worker

+

+ Use the plexus command worker command to start a worker process. Here's a basic example: +

+ +
+            
+ {`plexus command worker \\ + --concurrency 4 \\ + --queue celery \\ + --loglevel INFO`} +
+
+ +
+

--concurrency: Number of worker processes (default: 4)

+

--queue: Queue to process (default: celery)

+

--loglevel: Logging level (default: INFO)

+
+
+ +
+

Worker Specialization

+

+ Workers can be specialized to handle specific types of tasks using target patterns. This allows you to + dedicate certain workers to particular workloads: +

+ +
+            
+ {`# Worker that only processes dataset-related tasks +plexus command worker \\ + --target-patterns "datasets/*" \\ + --concurrency 4 + +# Worker for GPU-intensive tasks +plexus command worker \\ + --target-patterns "*/gpu-required" \\ + --concurrency 2 + +# Worker handling multiple task types +plexus command worker \\ + --target-patterns "datasets/*,training/*" \\ + --concurrency 8`} +
+
+ +

+ Target patterns use the format domain/subdomain and support wildcards. Some examples: +

+
    +
  • datasets/call-criteria - Only process call criteria dataset tasks
  • +
  • training/call-criteria - Only handle call criteria training tasks
  • +
  • */gpu-required - Process any tasks requiring GPU resources
  • +
  • datasets/* - Handle all dataset-related tasks
  • +
+
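Plexus's actual matcher isn't shown here, but assuming glob-style wildcard semantics (as in Python's `fnmatch`), a worker's accept/reject decision can be sketched like this:

```python
from fnmatch import fnmatch

def worker_accepts(task_target: str, patterns: list[str]) -> bool:
    """True if the task's domain/subdomain target matches any configured pattern."""
    return any(fnmatch(task_target, p) for p in patterns)

# Worker started with --target-patterns "datasets/*,training/*"
patterns = "datasets/*,training/*".split(",")
print(worker_accepts("datasets/call-criteria", patterns))  # True
print(worker_accepts("reports/monthly", patterns))         # False
```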
+ +
+

Deployment Examples

+

+ Here are some common deployment scenarios: +

+ +
+
+

AWS EC2

+
+                
+ {`# Run in a screen session for persistence +screen -S plexus-worker +plexus command worker \\ + --concurrency 8 \\ + --loglevel INFO +# Ctrl+A, D to detach`} +
+
+
+ +
+

Local Development

+
+                
+ {`# Run with increased logging for debugging +plexus command worker \\ + --concurrency 2 \\ + --loglevel DEBUG`} +
+
+
+ +
+

GPU Worker

+
+                
+ {`# Dedicated GPU worker with specific targeting +plexus command worker \\ + --concurrency 1 \\ + --target-patterns "*/gpu-required" \\ + --loglevel INFO`} +
+
+
+
+
+ +
+

Best Practices

+
    +
  • Use a process manager (like systemd, supervisor, or screen) to keep workers running
  • +
  • Set concurrency based on available CPU cores and memory
  • +
  • Use target patterns to optimize resource utilization
  • +
  • Monitor worker logs for errors and performance issues
  • +
  • Deploy workers close to your data sources when possible
  • +
  • Consider using auto-scaling groups in cloud environments
  • +
+
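The "size concurrency to your hardware" advice can be automated when you template worker launch commands. A small sketch using only the flags documented above; the leave-one-core-free heuristic is an assumption for illustration, not a Plexus recommendation:

```python
import os

def worker_command(queue: str = "celery", loglevel: str = "INFO") -> str:
    """Build a plexus worker invocation sized to the host's CPU count."""
    # Leave one core free for the OS and other processes (illustrative heuristic).
    concurrency = max(1, (os.cpu_count() or 2) - 1)
    return (
        f"plexus command worker "
        f"--concurrency {concurrency} --queue {queue} --loglevel {loglevel}"
    )

print(worker_command())
```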
+ +
+

Additional Resources

+

+ For more information about worker deployment and management: +

+
    +
  • See the CLI documentation for detailed command reference
  • +
  • Check the built-in help with plexus command worker --help
  • +
  • View worker logs with --loglevel DEBUG for troubleshooting
  • +
+
+
+
+ ) +} \ No newline at end of file diff --git a/dashboard/app/documentation/basics/evaluations/page.tsx b/dashboard/app/[locale]/documentation/basics/evaluations/page.tsx similarity index 100% rename from dashboard/app/documentation/basics/evaluations/page.tsx rename to dashboard/app/[locale]/documentation/basics/evaluations/page.tsx diff --git a/dashboard/app/documentation/components/breadcrumb.tsx b/dashboard/app/[locale]/documentation/components/breadcrumb.tsx similarity index 100% rename from dashboard/app/documentation/components/breadcrumb.tsx rename to dashboard/app/[locale]/documentation/components/breadcrumb.tsx diff --git a/dashboard/app/documentation/components/doc-button.tsx b/dashboard/app/[locale]/documentation/components/doc-button.tsx similarity index 100% rename from dashboard/app/documentation/components/doc-button.tsx rename to dashboard/app/[locale]/documentation/components/doc-button.tsx diff --git a/dashboard/app/documentation/components/documentation-layout.tsx b/dashboard/app/[locale]/documentation/components/documentation-layout.tsx similarity index 58% rename from dashboard/app/documentation/components/documentation-layout.tsx rename to dashboard/app/[locale]/documentation/components/documentation-layout.tsx index 98775cf68..a75ee2c94 100644 --- a/dashboard/app/documentation/components/documentation-layout.tsx +++ b/dashboard/app/[locale]/documentation/components/documentation-layout.tsx @@ -10,7 +10,9 @@ import { useTheme } from "next-themes" import { Button, type ButtonProps } from "@/components/ui/button" import { ScrollArea } from "@/components/ui/scroll-area" import { Tooltip, TooltipContent, TooltipProvider, TooltipTrigger } from "@/components/ui/tooltip" +import { PublicLanguageSelector } from "@/components/ui/public-language-selector" import SquareLogo, { LogoVariant } from '@/components/logo-square' +import { useTranslationContext } from '@/app/contexts/TranslationContext' const useMediaQuery = (query: string): boolean => { const 
[matches, setMatches] = useState(false) @@ -76,63 +78,128 @@ interface DocSidebarItem { }>; } -const docSections: DocSidebarItem[] = [ - { - name: "Introduction", - href: "/documentation", - }, - { - name: "Concepts", - href: "/documentation/concepts", - items: [ - { name: "Items", href: "/documentation/concepts/items" }, - { name: "Sources", href: "/documentation/concepts/sources" }, - { name: "Scores", href: "/documentation/concepts/scores" }, - { name: "Scorecards", href: "/documentation/concepts/scorecards" }, - { name: "Score Results", href: "/documentation/concepts/score-results" }, - { name: "Evaluations", href: "/documentation/concepts/evaluations" }, - { name: "Tasks", href: "/documentation/concepts/tasks" }, - { name: "Reports", href: "/documentation/concepts/reports" }, - ], - }, - { - name: "Methods", - href: "/documentation/methods", - items: [ - { name: "Add/Edit a Source", href: "/documentation/methods/add-edit-source" }, - { name: "Profile a Source", href: "/documentation/methods/profile-source" }, - { name: "Add/Edit a Scorecard", href: "/documentation/methods/add-edit-scorecard" }, - { name: "Add/Edit a Score", href: "/documentation/methods/add-edit-score" }, - { name: "Evaluate a Score", href: "/documentation/methods/evaluate-score" }, - { name: "Monitor Tasks", href: "/documentation/methods/monitor-tasks" }, - ], - }, - { - name: "Evaluation Metrics", - href: "/documentation/evaluation-metrics", - items: [ - { name: "Gauges with Context", href: "/documentation/evaluation-metrics/gauges-with-context" }, - { name: "Agreement", href: "/documentation/evaluation-metrics/gauges/agreement" }, - { name: "Accuracy", href: "/documentation/evaluation-metrics/gauges/accuracy" }, - { name: "Precision", href: "/documentation/evaluation-metrics/gauges/precision" }, - { name: "Recall", href: "/documentation/evaluation-metrics/gauges/recall" }, - { name: "Class Number Impact", href: "/documentation/evaluation-metrics/gauges/class-number" }, - { name: "Class 
Imbalance", href: "/documentation/evaluation-metrics/gauges/class-imbalance" }, - { name: "Examples", href: "/documentation/evaluation-metrics/examples" }, - ], - }, - { - name: "Advanced", - href: "/documentation/advanced", - items: [ - { name: "plexus CLI Tool", href: "/documentation/advanced/cli" }, - { name: "Worker Nodes", href: "/documentation/advanced/worker-nodes" }, - { name: "Python SDK Reference", href: "/documentation/advanced/sdk" }, - { name: "Universal Code Snippets", href: "/documentation/advanced/universal-code" }, - { name: "MCP Server", href: "/documentation/advanced/mcp-server" }, - ], - }, -] +const getDocSections = (locale: string): DocSidebarItem[] => { + const localePrefix = `/${locale}`; + + if (locale === 'es') { + return [ + { + name: "Introducción", + href: `${localePrefix}/documentation`, + }, + { + name: "Conceptos", + href: `${localePrefix}/documentation/concepts`, + items: [ + { name: "Items", href: `${localePrefix}/documentation/concepts/items` }, + { name: "Fuentes", href: `${localePrefix}/documentation/concepts/sources` }, + { name: "Puntuaciones", href: `${localePrefix}/documentation/concepts/scores` }, + { name: "Cuadros", href: `${localePrefix}/documentation/concepts/scorecards` }, + { name: "Resultados de Puntuación", href: `${localePrefix}/documentation/concepts/score-results` }, + { name: "Evaluaciones", href: `${localePrefix}/documentation/concepts/evaluations` }, + { name: "Tareas", href: `${localePrefix}/documentation/concepts/tasks` }, + { name: "Reportes", href: `${localePrefix}/documentation/concepts/reports` }, + ], + }, + { + name: "Métodos", + href: `${localePrefix}/documentation/methods`, + items: [ + { name: "Agregar/Editar Fuente", href: `${localePrefix}/documentation/methods/add-edit-source` }, + { name: "Perfilar Fuente", href: `${localePrefix}/documentation/methods/profile-source` }, + { name: "Agregar/Editar Cuadro", href: `${localePrefix}/documentation/methods/add-edit-scorecard` }, + { name: "Agregar/Editar 
Puntuación", href: `${localePrefix}/documentation/methods/add-edit-score` }, + { name: "Evaluar Puntuación", href: `${localePrefix}/documentation/methods/evaluate-score` }, + { name: "Monitorear Tareas", href: `${localePrefix}/documentation/methods/monitor-tasks` }, + ], + }, + { + name: "Métricas de Evaluación", + href: `${localePrefix}/documentation/evaluation-metrics`, + items: [ + { name: "Indicadores con Contexto", href: `${localePrefix}/documentation/evaluation-metrics/gauges-with-context` }, + { name: "Acuerdo", href: `${localePrefix}/documentation/evaluation-metrics/gauges/agreement` }, + { name: "Precisión", href: `${localePrefix}/documentation/evaluation-metrics/gauges/accuracy` }, + { name: "Exactitud", href: `${localePrefix}/documentation/evaluation-metrics/gauges/precision` }, + { name: "Sensibilidad", href: `${localePrefix}/documentation/evaluation-metrics/gauges/recall` }, + { name: "Impacto del Número de Clases", href: `${localePrefix}/documentation/evaluation-metrics/gauges/class-number` }, + { name: "Desbalance de Clases", href: `${localePrefix}/documentation/evaluation-metrics/gauges/class-imbalance` }, + { name: "Ejemplos", href: `${localePrefix}/documentation/evaluation-metrics/examples` }, + ], + }, + { + name: "Avanzado", + href: `${localePrefix}/documentation/advanced`, + items: [ + { name: "Herramienta CLI plexus", href: `${localePrefix}/documentation/advanced/cli` }, + { name: "Nodos de Trabajo", href: `${localePrefix}/documentation/advanced/worker-nodes` }, + { name: "Referencia SDK Python", href: `${localePrefix}/documentation/advanced/sdk` }, + { name: "Fragmentos de Código Universal", href: `${localePrefix}/documentation/advanced/universal-code` }, + { name: "Servidor MCP", href: `${localePrefix}/documentation/advanced/mcp-server` }, + ], + }, + ]; + } + + // English default + return [ + { + name: "Introduction", + href: `${localePrefix}/documentation`, + }, + { + name: "Concepts", + href: `${localePrefix}/documentation/concepts`, + 
items: [ + { name: "Items", href: `${localePrefix}/documentation/concepts/items` }, + { name: "Sources", href: `${localePrefix}/documentation/concepts/sources` }, + { name: "Scores", href: `${localePrefix}/documentation/concepts/scores` }, + { name: "Scorecards", href: `${localePrefix}/documentation/concepts/scorecards` }, + { name: "Score Results", href: `${localePrefix}/documentation/concepts/score-results` }, + { name: "Evaluations", href: `${localePrefix}/documentation/concepts/evaluations` }, + { name: "Tasks", href: `${localePrefix}/documentation/concepts/tasks` }, + { name: "Reports", href: `${localePrefix}/documentation/concepts/reports` }, + ], + }, + { + name: "Methods", + href: `${localePrefix}/documentation/methods`, + items: [ + { name: "Add/Edit a Source", href: `${localePrefix}/documentation/methods/add-edit-source` }, + { name: "Profile a Source", href: `${localePrefix}/documentation/methods/profile-source` }, + { name: "Add/Edit a Scorecard", href: `${localePrefix}/documentation/methods/add-edit-scorecard` }, + { name: "Add/Edit a Score", href: `${localePrefix}/documentation/methods/add-edit-score` }, + { name: "Evaluate a Score", href: `${localePrefix}/documentation/methods/evaluate-score` }, + { name: "Monitor Tasks", href: `${localePrefix}/documentation/methods/monitor-tasks` }, + ], + }, + { + name: "Evaluation Metrics", + href: `${localePrefix}/documentation/evaluation-metrics`, + items: [ + { name: "Gauges with Context", href: `${localePrefix}/documentation/evaluation-metrics/gauges-with-context` }, + { name: "Agreement", href: `${localePrefix}/documentation/evaluation-metrics/gauges/agreement` }, + { name: "Accuracy", href: `${localePrefix}/documentation/evaluation-metrics/gauges/accuracy` }, + { name: "Precision", href: `${localePrefix}/documentation/evaluation-metrics/gauges/precision` }, + { name: "Recall", href: `${localePrefix}/documentation/evaluation-metrics/gauges/recall` }, + { name: "Class Number Impact", href: 
`${localePrefix}/documentation/evaluation-metrics/gauges/class-number` }, + { name: "Class Imbalance", href: `${localePrefix}/documentation/evaluation-metrics/gauges/class-imbalance` }, + { name: "Examples", href: `${localePrefix}/documentation/evaluation-metrics/examples` }, + ], + }, + { + name: "Advanced", + href: `${localePrefix}/documentation/advanced`, + items: [ + { name: "plexus CLI Tool", href: `${localePrefix}/documentation/advanced/cli` }, + { name: "Worker Nodes", href: `${localePrefix}/documentation/advanced/worker-nodes` }, + { name: "Python SDK Reference", href: `${localePrefix}/documentation/advanced/sdk` }, + { name: "Universal Code Snippets", href: `${localePrefix}/documentation/advanced/universal-code` }, + { name: "MCP Server", href: `${localePrefix}/documentation/advanced/mcp-server` }, + ], + }, + ]; +}; interface DocumentationLayoutProps { children: React.ReactNode; @@ -147,9 +214,12 @@ export default function DocumentationLayout({ children, tableOfContents }: Docum const [isLeftSidebarOpen, setIsLeftSidebarOpen] = useState(true) const [isRightSidebarOpen, setIsRightSidebarOpen] = useState(true) const { theme, setTheme } = useTheme() + const { locale } = useTranslationContext() const isDesktop = useMediaQuery("(min-width: 1024px)") const isMobile = useMediaQuery("(max-width: 1023px)") const pathname = usePathname() + + const docSections = getDocSections(locale) useEffect(() => { if (isDesktop) { @@ -232,6 +302,11 @@ export default function DocumentationLayout({ children, tableOfContents }: Docum
+ {isLeftSidebarOpen && ( +
+ +
+ )}
@@ -245,7 +320,10 @@ export default function DocumentationLayout({ children, tableOfContents }: Docum - {isLeftSidebarOpen ? "Toggle sidebar" : "Expand sidebar"} + {isLeftSidebarOpen + ? (locale === 'es' ? 'Alternar barra lateral' : 'Toggle sidebar') + : (locale === 'es' ? 'Expandir barra lateral' : 'Expand sidebar') + } @@ -273,7 +351,10 @@ export default function DocumentationLayout({ children, tableOfContents }: Docum - Toggle {theme === "dark" ? "Light" : theme === "light" ? "System" : "Dark"} Mode + {locale === 'es' + ? `Cambiar a modo ${theme === "dark" ? "Claro" : theme === "light" ? "Sistema" : "Oscuro"}` + : `Toggle ${theme === "dark" ? "Light" : theme === "light" ? "System" : "Dark"} Mode` + } @@ -290,7 +371,9 @@ export default function DocumentationLayout({ children, tableOfContents }: Docum return (
-

On this page

+

+ {locale === 'es' ? 'En esta página' : 'On this page'} +

+ ); +} \ No newline at end of file diff --git a/dashboard/app/documentation/layout.tsx b/dashboard/app/[locale]/documentation/layout.tsx similarity index 100% rename from dashboard/app/documentation/layout.tsx rename to dashboard/app/[locale]/documentation/layout.tsx diff --git a/dashboard/app/[locale]/documentation/methods/add-edit-score/page.tsx b/dashboard/app/[locale]/documentation/methods/add-edit-score/page.tsx new file mode 100644 index 000000000..2a0728026 --- /dev/null +++ b/dashboard/app/[locale]/documentation/methods/add-edit-score/page.tsx @@ -0,0 +1,536 @@ +'use client'; + +import { useTranslationContext } from '@/app/contexts/TranslationContext' + +export default function AddEditScorePage() { + const { locale } = useTranslationContext(); + + if (locale === 'es') { + return ( +
+

Agregar/Editar una Puntuación

+

+ Aprende cómo crear y gestionar puntuaciones individuales dentro de cuadros de puntuación usando la interfaz del dashboard de Plexus. +

+ +
+
+

Agregar Puntuaciones en el Dashboard

+

+ Las puntuaciones son criterios de evaluación individuales dentro de un cuadro de puntuación. El dashboard proporciona + una interfaz intuitiva para crear y configurar puntuaciones. +

+ +
+
+

Guía Paso a Paso

+
    +
  1. + Acceder a la Creación de Puntuaciones: +

    Abre tu cuadro de puntuación y haz clic en "Agregar Puntuación" o edita un cuadro de puntuación existente.

    +
  2. +
  3. + Elegir Tipo de Puntuación: +

    Selecciona entre los tipos de puntuación disponibles:

    +
      +
    • Análisis de Sentimientos
    • +
    • Calidad de Contenido
    • +
    • Verificación Gramatical
    • +
    • Métricas Personalizadas
    • +
    +
  4. +
  5. + Configurar Parámetros: +

    Configura la puntuación:

    +
      +
    • Nombre y descripción de la puntuación
    • +
    • Peso (importancia en el cuadro de puntuación general)
    • +
    • Umbral (puntuación mínima aceptable)
    • +
    • Parámetros personalizados específicos al tipo de puntuación
    • +
    +
  6. +
  7. + Vista Previa y Prueba: +

    Usa la función de vista previa para probar la puntuación contra contenido de muestra.

    +
  8. +
  9. + Guardar Puntuación: +

    Haz clic en "Agregar Puntuación" para incluirla en tu cuadro de puntuación.

    +
  10. +
+
+ +
+

Editar Puntuaciones Existentes

+
    +
  1. + Localizar la Puntuación: +

    Encuentra la puntuación que deseas modificar dentro de tu cuadro de puntuación.

    +
  2. +
  3. + Acceder al Modo de Edición: +

    Haz clic en el ícono de edición junto a la puntuación.

    +
  4. +
  5. + Modificar Configuraciones: +

    Actualiza la configuración de la puntuación según sea necesario.

    +
  6. +
  7. + Guardar Cambios: +

    Haz clic en "Guardar" para aplicar tus modificaciones.

    +
  8. +
+
+
+
+ +
+

Gestión de Versiones de Puntuaciones

+

+ Las puntuaciones en Plexus soportan versionado, permitiéndote rastrear cambios y gestionar diferentes implementaciones: +

+ +
+
+

Crear Nuevas Versiones

+

+ Cuando editas una puntuación y guardas cambios, se crea automáticamente una nueva versión. + Puedes agregar notas para documentar los cambios realizados en cada versión. +

+
+
+

Versiones Campeón

+

+ Cada puntuación tiene una versión "campeón" designada que se usa para evaluaciones. + Puedes promover cualquier versión a estado campeón cuando estés satisfecho con su rendimiento. +

+
+
+

Versiones Destacadas

+

+ Marca versiones importantes como "destacadas" para resaltarlas en el historial de versiones. + Esto ayuda a rastrear hitos significativos en el desarrollo de tu puntuación. +

+
+
+
+ +
+

Consejos de Configuración de Puntuaciones

+
+
+

Balance de Pesos

+

+ Considera cuidadosamente la importancia relativa de cada puntuación al establecer pesos. + El total de todos los pesos en un cuadro de puntuación debe ser igual a 1.0. +

+
+
+

Establecimiento de Umbrales

+

+ Establece umbrales apropiados basados en tus requisitos de calidad y prueba + con muestras de contenido representativas. +

+
+
+

Tipos de Puntuaciones

+

+ Elige tipos de puntuación que se alineen con tus objetivos de evaluación. Combina diferentes + tipos para crear evaluaciones integrales. +

+
+
+
+ +
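El consejo de balance de pesos anterior (el total debe ser igual a 1.0) puede verificarse con un pequeño script antes de guardar un cuadro de puntuación. Este es un boceto hipotético: la función `validar_pesos` no forma parte del SDK de Plexus, solo ilustra la regla.

```python
# Boceto hipotético: verificar que los pesos de las puntuaciones
# de un cuadro de puntuación sumen 1.0 antes de guardarlo.
# (validar_pesos NO es parte del SDK de Plexus.)
def validar_pesos(puntuaciones, tolerancia=1e-6):
    """puntuaciones: lista de diccionarios con la clave 'weight'."""
    total = sum(p["weight"] for p in puntuaciones)
    if abs(total - 1.0) > tolerancia:
        raise ValueError(f"Los pesos suman {total}, se esperaba 1.0")
    return total

validar_pesos([
    {"name": "Verificación Gramatical", "weight": 0.5},
    {"name": "Calidad de Contenido", "weight": 0.3},
    {"name": "Análisis de Sentimientos", "weight": 0.2},
])
```

La tolerancia absorbe el error de redondeo de punto flotante al sumar los pesos.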
+

Usar la CLI

+

+ Para la gestión automatizada de puntuaciones, puedes usar la CLI de Plexus: +

+ +
+            {`# Ver información detallada sobre una puntuación
+plexus scorecards score "Nombre de Puntuación" --account "nombre-cuenta"
+plexus scorecards score "clave-puntuacion" --account "nombre-cuenta"
+
+# Mostrar historial de versiones y configuración
+plexus scorecards score "Nombre de Puntuación" --account "nombre-cuenta" --show-versions --show-config
+
+# Listar todas las puntuaciones para un cuadro de puntuación específico
+plexus scorecards list-scores --scorecard-id "id-cuadro-puntuacion"
+
+# Próximamente:
+# Ver historial de versiones para una puntuación
+plexus scorecards history --account-key "clave-cuenta" --score-key "clave-puntuacion"
+
+# Promover una versión a campeón
+plexus scorecards promote --account-key "clave-cuenta" --score-id "id-puntuacion" --version-id "id-version"
+
+# Agregar una nueva puntuación a un cuadro de puntuación
+plexus scores add --scorecard-id "id-cuadro" --name "Puntuación de Calidad" --type quality --weight 0.5
+
+# Listar todas las puntuaciones en un cuadro de puntuación
+plexus scores list --scorecard "Aseguramiento de Calidad"
+
+# Ver configuración de puntuación
+plexus scores info --score "Verificación Gramatical"`}
+          
+ +
+
+

Búsqueda Eficiente de Puntuaciones

+

+ El comando score soporta múltiples métodos de búsqueda: +

+
    +
  • Por ID: plexus scorecards score "id-puntuacion"
  • +
  • Por clave: plexus scorecards score "clave-puntuacion"
  • +
  • Por nombre: plexus scorecards score "Nombre de Puntuación"
  • +
  • Por ID externo: plexus scorecards score "id-externo"
  • +
+

+ Puedes limitar la búsqueda a una cuenta específica o cuadro de puntuación para resultados más rápidos. +

+
+
+
+ +
+

Referencia del SDK de Python

+

+ Para la gestión programática de puntuaciones, puedes usar el SDK de Python: +

+ +
+            {`from plexus import Plexus
+
+plexus = Plexus(api_key="tu-clave-api")
+
+# Obtener un cuadro de puntuación usando cualquier identificador (nombre, clave, ID, o ID externo)
+scorecard = plexus.scorecards.get("Aseguramiento de Calidad")
+
+# Obtener una puntuación usando cualquier identificador
+score = plexus.scores.get("Verificación Gramatical")
+
+# Obtener todas las puntuaciones en un cuadro de puntuación
+scores = scorecard.get_scores()
+
+# Obtener configuración de puntuación
+config = score.get_configuration()
+
+# Obtener resultados de evaluación de puntuación
+results = score.get_results(limit=10)`}
+          
+ +

+ Al igual que la CLI, el SDK de Python también soporta el sistema de identificadores flexible, permitiéndote referenciar recursos usando diferentes tipos de identificadores. +

+
+ +
+

Configuración YAML

+

+ Las puntuaciones pueden configurarse usando YAML para personalización avanzada: +

+ +
+            {`name: Puntuación de Calidad
+key: puntuacion-calidad
+externalId: score_123
+type: LangGraphScore
+parameters:
+  check_grammar: true
+  check_style: true
+  min_word_count: 100
+threshold: 0.8
+weight: 0.5`}
+          
+ +

+ Próximamente: La capacidad de extraer y subir configuraciones YAML usando la CLI para edición offline y control de versiones. +

+
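Mientras llega la sincronización por CLI, una configuración YAML puede revisarse localmente antes de subirla. Boceto bajo suposiciones: usa PyYAML (se asume instalado), y el conjunto de campos obligatorios solo refleja el ejemplo anterior, no un esquema oficial de Plexus.

```python
import yaml  # PyYAML (suposición: instalado con `pip install pyyaml`)

# Boceto hipotético: validar campos básicos de la configuración YAML
# de una puntuación. Los campos obligatorios siguen el ejemplo
# anterior; no son un esquema oficial de Plexus.
CAMPOS_OBLIGATORIOS = {"name", "key", "type"}

def validar_configuracion(texto_yaml):
    config = yaml.safe_load(texto_yaml)
    faltantes = CAMPOS_OBLIGATORIOS - config.keys()
    if faltantes:
        raise ValueError(f"Faltan campos: {sorted(faltantes)}")
    if not 0.0 <= config.get("threshold", 0.0) <= 1.0:
        raise ValueError("threshold debe estar entre 0.0 y 1.0")
    return config

config = validar_configuracion("""
name: Puntuación de Calidad
key: puntuacion-calidad
type: LangGraphScore
threshold: 0.8
weight: 0.5
""")
```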
+ +
+

Próximamente

+

+ Se están desarrollando características adicionales para puntuaciones. Regresa pronto para: +

+
    +
  • Nuevos tipos de puntuaciones y métricas
  • +
  • Algoritmos de puntuación avanzados
  • +
  • Parámetros de evaluación personalizados
  • +
  • Analíticas de rendimiento de puntuaciones
  • +
  • Operaciones masivas de puntuaciones
  • +
  • Sincronización YAML para edición offline
  • +
+
+
+
+ ); + } + + // English content (default) + return ( +
+

Add/Edit a Score

+

+ Learn how to create and manage individual scores within scorecards using the Plexus dashboard interface. +

+ +
+
+

Adding Scores in the Dashboard

+

+ Scores are individual evaluation criteria within a scorecard. The dashboard provides + an intuitive interface for creating and configuring scores. +

+ +
+
+

Step-by-Step Guide

+
    +
  1. + Access Score Creation: +

    Open your scorecard and click "Add Score" or edit an existing scorecard.

    +
  2. +
  3. + Choose Score Type: +

    Select from available score types:

    +
      +
    • Sentiment Analysis
    • +
    • Content Quality
    • +
    • Grammar Check
    • +
    • Custom Metrics
    • +
    +
  4. +
  5. + Configure Parameters: +

    Set up the score configuration:

    +
      +
    • Score name and description
    • +
    • Weight (importance in overall scorecard)
    • +
    • Threshold (minimum acceptable score)
    • +
    • Custom parameters specific to the score type
    • +
    +
  6. +
  7. + Preview and Test: +

    Use the preview feature to test the score against sample content.

    +
  8. +
  9. + Save Score: +

    Click "Add Score" to include it in your scorecard.

    +
  10. +
+
+ +
+

Editing Existing Scores

+
    +
  1. + Locate the Score: +

    Find the score you want to modify within your scorecard.

    +
  2. +
  3. + Access Edit Mode: +

    Click the edit icon next to the score.

    +
  4. +
  5. + Modify Settings: +

    Update the score's configuration as needed.

    +
  6. +
  7. + Save Changes: +

    Click "Save" to apply your modifications.

    +
  8. +
+
+
+
+ +
+

Score Version Management

+

+ Scores in Plexus support versioning, allowing you to track changes and manage different implementations: +

+ +
+
+

Creating New Versions

+

+ When you edit a score and save changes, a new version is automatically created. + You can add notes to document the changes made in each version. +

+
+
+

Champion Versions

+

+ Each score has a designated "champion" version that is used for evaluations. + You can promote any version to champion status when you're satisfied with its performance. +

+
+
+

Featured Versions

+

+ Mark important versions as "featured" to highlight them in the version history. + This helps track significant milestones in your score's development. +

+
+
+
+ +
+

Score Configuration Tips

+
+
+

Weight Balancing

+

+ Carefully consider the relative importance of each score when setting weights. + The total of all weights in a scorecard should equal 1.0. +

+
+
+

Threshold Setting

+

+ Set appropriate thresholds based on your quality requirements and test + with representative content samples. +

+
+
+

Score Types

+

+ Choose score types that align with your evaluation goals. Combine different + types to create comprehensive assessments. +

+
+
+
+ +
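The weight-balancing tip above (weights must total 1.0) can be enforced with a short check before saving a scorecard. This is a hypothetical sketch: `validate_weights` is not part of the Plexus SDK, only an illustration of the rule.

```python
# Hypothetical sketch: verify that a scorecard's score weights
# sum to 1.0 before saving. (validate_weights is NOT part of
# the Plexus SDK.)
def validate_weights(scores, tolerance=1e-6):
    """scores: list of dicts with a 'weight' key."""
    total = sum(s["weight"] for s in scores)
    if abs(total - 1.0) > tolerance:
        raise ValueError(f"Weights sum to {total}, expected 1.0")
    return total

validate_weights([
    {"name": "Grammar Check", "weight": 0.5},
    {"name": "Content Quality", "weight": 0.3},
    {"name": "Sentiment Analysis", "weight": 0.2},
])
```

The tolerance absorbs floating-point rounding error when summing the weights.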
+

Using the CLI

+

+ For automated score management, you can use the Plexus CLI: +

+ +
+            {`# View detailed information about a score
+plexus scorecards score "Score Name" --account "account-name"
+plexus scorecards score "score-key" --account "account-name"
+
+# Show version history and configuration
+plexus scorecards score "Score Name" --account "account-name" --show-versions --show-config
+
+# List all scores for a specific scorecard
+plexus scorecards list-scores --scorecard-id "scorecard-id"
+
+# Coming soon:
+# View version history for a score
+plexus scorecards history --account-key "account-key" --score-key "score-key"
+
+# Promote a version to champion
+plexus scorecards promote --account-key "account-key" --score-id "score-id" --version-id "version-id"
+
+# Add a new score to a scorecard
+plexus scores add --scorecard-id "card-id" --name "Quality Score" --type quality --weight 0.5
+
+# List all scores in a scorecard
+plexus scores list --scorecard "Quality Assurance"
+
+# View score configuration
+plexus scores info --score "Grammar Check"`}
+          
+ +
+
+

Efficient Score Lookup

+

+ The score command supports multiple lookup methods: +

+
    +
  • By ID: plexus scorecards score "score-id"
  • +
  • By key: plexus scorecards score "score-key"
  • +
  • By name: plexus scorecards score "Score Name"
  • +
  • By external ID: plexus scorecards score "external-id"
  • +
+

+ You can scope the search to a specific account or scorecard for faster results. +

+
+
+
+ +
+

Python SDK Reference

+

+ For programmatic score management, you can use the Python SDK: +

+ +
+            {`from plexus import Plexus
+
+plexus = Plexus(api_key="your-api-key")
+
+# Get a scorecard using any identifier (name, key, ID, or external ID)
+scorecard = plexus.scorecards.get("Quality Assurance")
+
+# Get a score using any identifier
+score = plexus.scores.get("Grammar Check")
+
+# Get all scores in a scorecard
+scores = scorecard.get_scores()
+
+# Get score configuration
+config = score.get_configuration()
+
+# Get score evaluation results
+results = score.get_results(limit=10)`}
+          
+ +

+ Like the CLI, the Python SDK also supports the flexible identifier system, allowing you to reference resources using different types of identifiers. +

+
+ +
+

YAML Configuration

+

+ Scores can be configured using YAML for advanced customization: +

+ +
+            {`name: Quality Score
+key: quality-score
+externalId: score_123
+type: LangGraphScore
+parameters:
+  check_grammar: true
+  check_style: true
+  min_word_count: 100
+threshold: 0.8
+weight: 0.5`}
+          
+ +

+ Coming soon: The ability to pull and push YAML configurations using the CLI for offline editing and version control. +

+
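Until CLI pull/push lands, a score's YAML configuration can be sanity-checked locally before uploading. A minimal sketch under stated assumptions: it uses PyYAML (assumed installed), and the required-field set only mirrors the example above, not an official Plexus schema.

```python
import yaml  # PyYAML (assumption: installed via `pip install pyyaml`)

# Hypothetical sketch: validate basic fields of a score's YAML
# configuration. The required fields follow the example above;
# they are not an official Plexus schema.
REQUIRED_FIELDS = {"name", "key", "type"}

def validate_score_config(yaml_text):
    config = yaml.safe_load(yaml_text)
    missing = REQUIRED_FIELDS - config.keys()
    if missing:
        raise ValueError(f"Missing fields: {sorted(missing)}")
    if not 0.0 <= config.get("threshold", 0.0) <= 1.0:
        raise ValueError("threshold must be between 0.0 and 1.0")
    return config

config = validate_score_config("""
name: Quality Score
key: quality-score
type: LangGraphScore
threshold: 0.8
weight: 0.5
""")
```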
+ +
+

Coming Soon

+

+ Additional score features are being developed. Check back soon for: +

+
    +
  • New score types and metrics
  • +
  • Advanced scoring algorithms
  • +
  • Custom evaluation parameters
  • +
  • Score performance analytics
  • +
  • Bulk score operations
  • +
  • YAML synchronization for offline editing
  • +
+
+
+
+ ) +} \ No newline at end of file diff --git a/dashboard/app/[locale]/documentation/methods/add-edit-scorecard/page.tsx b/dashboard/app/[locale]/documentation/methods/add-edit-scorecard/page.tsx new file mode 100644 index 000000000..8d2f4d4c2 --- /dev/null +++ b/dashboard/app/[locale]/documentation/methods/add-edit-scorecard/page.tsx @@ -0,0 +1,438 @@ +'use client'; + +import { useTranslationContext } from '@/app/contexts/TranslationContext' + +export default function AddEditScorecardPage() { + const { locale } = useTranslationContext(); + + if (locale === 'es') { + return ( +
+

Agregar/Editar un Cuadro de Puntuación

+

+ Aprende cómo crear y administrar cuadros de puntuación usando la interfaz del dashboard de Plexus. +

+ +
+
+

Crear un Cuadro de Puntuación en el Dashboard

+

+ Los cuadros de puntuación definen los criterios para evaluar tu contenido. El dashboard proporciona + una interfaz intuitiva para crear y administrar cuadros de puntuación. +

+ +
+
+

Guía Paso a Paso

+
    +
  1. + Acceder a Cuadros de Puntuación: +

    Navega a la sección "Cuadros de Puntuación" en el menú de navegación principal.

    +
  2. +
  3. + Crear Nuevo Cuadro de Puntuación: +

    Haz clic en el botón "Nuevo Cuadro de Puntuación" en la esquina superior derecha.

    +
  4. +
  5. + Información Básica: +

    Completa los detalles del cuadro de puntuación:

    +
      +
    • Nombre del cuadro de puntuación
    • +
    • Descripción
    • +
    • Categoría/etiquetas (opcional)
    • +
    +
  6. +
  7. + Agregar Puntuaciones: +

    Haz clic en "Agregar Puntuación" para incluir criterios de evaluación:

    +
      +
    • Seleccionar tipo de puntuación
    • +
    • Configurar parámetros de puntuación
    • +
    • Establecer peso y umbral
    • +
    +
  8. +
  9. + Guardar Cuadro de Puntuación: +

    Haz clic en "Crear" para guardar tu nuevo cuadro de puntuación.

    +
  10. +
+
+ +
+

Editar un Cuadro de Puntuación

+
    +
  1. + Encontrar el Cuadro de Puntuación: +

    Localiza el cuadro de puntuación que deseas modificar en la lista de Cuadros de Puntuación.

    +
  2. +
  3. + Entrar en Modo de Edición: +

    Haz clic en el ícono de editar o selecciona "Editar" del menú de acciones.

    +
  4. +
  5. + Realizar Cambios: +

    Modifica los detalles del cuadro de puntuación, agrega/elimina puntuaciones, o ajusta pesos.

    +
  6. +
  7. + Guardar Actualizaciones: +

    Haz clic en "Guardar Cambios" para aplicar tus modificaciones.

    +
  8. +
+
+
+
+ +
+

Consejos para la Gestión de Cuadros de Puntuación

+
+
+

Organización

+

+ Usa nombres y descripciones significativos para mantener tus cuadros de puntuación organizados. + Considera usar etiquetas para agrupar cuadros de puntuación relacionados. +

+
+
+

Pesos de Puntuación

+

+ Equilibra los pesos de las puntuaciones para reflejar la importancia relativa de cada criterio + en tu proceso de evaluación. +

+
+
+

Plantillas

+

+ Guarda configuraciones de cuadros de puntuación comúnmente utilizadas como plantillas para reutilización rápida. +

+
+
+
+ +
+

Usar la CLI

+

+ Para la gestión automatizada de cuadros de puntuación, puedes usar la CLI de Plexus: +

+ +
+            {`# Listar cuadros de puntuación con rendimiento optimizado
+plexus scorecards list "nombre-cuenta" --fast
+
+# Ver un cuadro de puntuación específico por filtrado
+plexus scorecards list "nombre-cuenta" --name "Calidad de Contenido"
+
+# Ver información detallada sobre una puntuación
+plexus scorecards score "nombre-puntuacion" --account "nombre-cuenta" --show-versions
+
+# Próximamente:
+# Crear un nuevo cuadro de puntuación
+plexus scorecards create --name "Calidad de Contenido" --description "Evalúa la calidad del contenido"
+
+# Obtener información detallada sobre un cuadro de puntuación específico
+plexus scorecards info --scorecard "Calidad de Contenido"
+
+# Listar todas las puntuaciones en un cuadro de puntuación
+plexus scorecards list-scores --scorecard "Calidad de Contenido"
+
+# Extraer configuración del cuadro de puntuación a YAML
+plexus scorecards pull --scorecard "Calidad de Contenido" --output ./mis-cuadros-puntuacion
+
+# Subir configuración del cuadro de puntuación desde YAML
+plexus scorecards push --scorecard "Calidad de Contenido" --file ./mi-cuadro-puntuacion.yaml --note "Configuración actualizada"
+
+# Eliminar un cuadro de puntuación
+plexus scorecards delete --scorecard "Calidad de Contenido"`}
+          
+ +
+
+

Consideraciones de Rendimiento

+

+ La CLI ahora usa consultas GraphQL optimizadas para obtener datos de cuadros de puntuación de manera eficiente: +

+
    +
  • + Enfoque de Consulta Única: En lugar de hacer consultas separadas para las secciones y puntuaciones de cada cuadro de puntuación, + el sistema ahora obtiene todos los datos en una sola consulta GraphQL completa. +
  • +
  • + Modo Rápido: Usa la opción --fast para omitir la obtención de secciones y puntuaciones cuando solo necesitas información básica del cuadro de puntuación. +
  • +
  • + Ocultar Puntuaciones: Usa --hide-scores para excluir detalles de puntuación de la salida mientras aún obtienes datos básicos del cuadro de puntuación. +
  • +
+
+
+
+ +
+

Referencia del SDK de Python

+

+ Para la gestión programática de cuadros de puntuación, puedes usar el SDK de Python: +

+ +
+            {`from plexus import Plexus
+
+plexus = Plexus(api_key="tu-clave-api")
+
+# Obtener un cuadro de puntuación usando cualquier identificador (nombre, clave, ID, o ID externo)
+scorecard = plexus.scorecards.get("Calidad de Contenido")
+
+# Listar todos los cuadros de puntuación
+scorecards = plexus.scorecards.list()
+
+# Obtener todas las puntuaciones en un cuadro de puntuación
+scores = scorecard.get_scores()
+
+# Exportar cuadro de puntuación a YAML
+yaml_config = scorecard.to_yaml()
+with open("cuadro-puntuacion.yaml", "w") as f:
+    f.write(yaml_config)
+
+# Importar cuadro de puntuación desde YAML
+with open("cuadro-puntuacion.yaml", "r") as f:
+    yaml_content = f.read()
+    
+nuevo_scorecard = plexus.scorecards.from_yaml(yaml_content)`}
+          
+ +

+ Al igual que la CLI, el SDK de Python también soporta el sistema de identificadores flexible, permitiéndote referenciar cuadros de puntuación usando diferentes tipos de identificadores. +

+
+ +
+

Próximamente

+

+ Se están desarrollando características adicionales para cuadros de puntuación. Regresa pronto para: +

+
    +
  • Opciones avanzadas de configuración de puntuación
  • +
  • Control de versiones de cuadros de puntuación
  • +
  • Características de edición colaborativa
  • +
  • Analíticas de rendimiento
  • +
  • Sincronización YAML para edición offline
  • +
+
+
+
+ ); + } + + // English content (default) + return ( +
+

Add/Edit a Scorecard

+

+ Learn how to create and manage scorecards using the Plexus dashboard interface. +

+ +
+
+

Creating a Scorecard in the Dashboard

+

+ Scorecards define the criteria for evaluating your content. The dashboard provides + an intuitive interface for creating and managing scorecards. +

+ +
+
+

Step-by-Step Guide

+
    +
  1. + Access Scorecards: +

    Navigate to the "Scorecards" section in the main navigation menu.

    +
  2. +
  3. + Create New Scorecard: +

    Click the "New Scorecard" button in the top-right corner.

    +
  4. +
  5. + Basic Information: +

    Fill in the scorecard details:

    +
      +
    • Scorecard name
    • +
    • Description
    • +
    • Category/tags (optional)
    • +
    +
  6. +
  7. + Add Scores: +

    Click "Add Score" to include evaluation criteria:

    +
      +
    • Select score type
    • +
    • Configure score parameters
    • +
    • Set weight and threshold
    • +
    +
  8. +
  9. + Save Scorecard: +

    Click "Create" to save your new scorecard.

    +
  10. +
+
+ +
+

Editing a Scorecard

+
    +
  1. + Find the Scorecard: +

    Locate the scorecard you want to modify in the Scorecards list.

    +
  2. +
  3. + Enter Edit Mode: +

    Click the edit icon or select "Edit" from the actions menu.

    +
  4. +
  5. + Make Changes: +

    Modify scorecard details, add/remove scores, or adjust weights.

    +
  6. +
  7. + Save Updates: +

    Click "Save Changes" to apply your modifications.

    +
  8. +
+
+
+
+ +
+

Scorecard Management Tips

+
+
+

Organization

+

+ Use meaningful names and descriptions to keep your scorecards organized. + Consider using tags to group related scorecards. +

+
+
+

Score Weights

+

+ Balance score weights to reflect the relative importance of each criterion + in your evaluation process. +

+
+
+

Templates

+

+ Save commonly used scorecard configurations as templates for quick reuse. +

+
+
+
+ +
+

Using the CLI

+

+ For automated scorecard management, you can use the Plexus CLI: +

+ +
+            {`# List scorecards with optimized performance
+plexus scorecards list "account-name" --fast
+
+# View a specific scorecard by filtering
+plexus scorecards list "account-name" --name "Content Quality"
+
+# View detailed information about a score
+plexus scorecards score "score-name" --account "account-name" --show-versions
+
+# Coming soon:
+# Create a new scorecard
+plexus scorecards create --name "Content Quality" --description "Evaluates content quality"
+
+# Get detailed information about a specific scorecard
+plexus scorecards info --scorecard "Content Quality"
+
+# List all scores in a scorecard
+plexus scorecards list-scores --scorecard "Content Quality"
+
+# Pull scorecard configuration to YAML
+plexus scorecards pull --scorecard "Content Quality" --output ./my-scorecards
+
+# Push scorecard configuration from YAML
+plexus scorecards push --scorecard "Content Quality" --file ./my-scorecard.yaml --note "Updated configuration"
+
+# Delete a scorecard
+plexus scorecards delete --scorecard "Content Quality"`}
+          
+ +
+
+

Performance Considerations

+

+ The CLI now uses optimized GraphQL queries to efficiently fetch scorecard data: +

+
    +
  • + Single Query Approach: Instead of making separate queries for each scorecard's sections and scores, + the system now fetches all data in one comprehensive GraphQL query. +
  • +
  • + Fast Mode: Use the --fast option to skip fetching sections and scores when you only need basic scorecard info. +
  • +
  • + Hide Scores: Use --hide-scores to exclude score details from output while still getting basic scorecard data. +
  • +
+
+
+
+ +
+

Python SDK Reference

+

+ For programmatic scorecard management, you can use the Python SDK: +

+ +
+            {`from plexus import Plexus
+
+plexus = Plexus(api_key="your-api-key")
+
+# Get a scorecard using any identifier (name, key, ID, or external ID)
+scorecard = plexus.scorecards.get("Content Quality")
+
+# List all scorecards
+scorecards = plexus.scorecards.list()
+
+# Get all scores in a scorecard
+scores = scorecard.get_scores()
+
+# Export scorecard to YAML
+yaml_config = scorecard.to_yaml()
+with open("scorecard.yaml", "w") as f:
+    f.write(yaml_config)
+
+# Import scorecard from YAML
+with open("scorecard.yaml", "r") as f:
+    yaml_content = f.read()
+    
+new_scorecard = plexus.scorecards.from_yaml(yaml_content)`}
+          
+ +

+ Like the CLI, the Python SDK also supports the flexible identifier system, allowing you to reference scorecards using different types of identifiers. +

+
+ +
+

Coming Soon

+

+ Additional scorecard features are being developed. Check back soon for: +

+
    +
  • Advanced score configuration options
  • +
  • Scorecard version control
  • +
  • Collaborative editing features
  • +
  • Performance analytics
  • +
  • YAML synchronization for offline editing
  • +
+
+
+
+ ) +} \ No newline at end of file diff --git a/dashboard/app/[locale]/documentation/methods/add-edit-source/page.tsx b/dashboard/app/[locale]/documentation/methods/add-edit-source/page.tsx new file mode 100644 index 000000000..12da5c7b3 --- /dev/null +++ b/dashboard/app/[locale]/documentation/methods/add-edit-source/page.tsx @@ -0,0 +1,308 @@ +'use client'; + +import { useTranslationContext } from '@/app/contexts/TranslationContext' + +export default function AddEditSourcePage() { + const { locale } = useTranslationContext(); + + if (locale === 'es') { + return ( +
+

Agregar/Editar una Fuente

+

+ Aprende cómo crear y gestionar fuentes en Plexus usando la interfaz del dashboard. +

+ +
+
+

Agregar una Fuente en el Dashboard

+

+ El dashboard de Plexus proporciona una interfaz intuitiva para crear y gestionar tus fuentes. + Sigue estos pasos para agregar una nueva fuente: +

+ +
+
+

Guía Paso a Paso

+
    +
  1. + Navegar a Fuentes: +

    Haz clic en "Fuentes" en el menú de navegación principal para acceder a la página de gestión de fuentes.

    +
  2. +
  3. + Crear Nueva Fuente: +

    Haz clic en el botón "Agregar Fuente" en la esquina superior derecha de la página.

    +
  4. +
  5. + Elegir Tipo de Fuente: +

    Selecciona el tipo de fuente que deseas crear (ej. Texto, Audio).

    +
  6. +
  7. + Configurar Ajustes: +

    Completa la información requerida:

    +
      +
    • Nombre de la fuente
    • +
    • Descripción (opcional)
    • +
    • Contenido o carga de archivo
    • +
    • Ajustes adicionales específicos del tipo de fuente
    • +
    +
  8. +
  9. + Guardar: +

    Haz clic en "Crear" para guardar tu nueva fuente.

    +
  10. +
+
+ +
+

Editar una Fuente Existente

+
    +
  1. + Localizar la Fuente: +

    Encuentra la fuente que deseas editar en la lista de Fuentes.

    +
  2. +
  3. + Acceder al Modo de Edición: +

    Haz clic en el ícono de edición (lápiz) junto al nombre de la fuente.

    +
  4. +
  5. + Realizar Cambios: +

    Actualiza la información de la fuente según sea necesario.

    +
  6. +
  7. + Guardar Cambios: +

    Haz clic en "Guardar" para aplicar tus actualizaciones.

    +
  8. +
+
+
+
+ +
+

Consejos de Gestión de Fuentes

+
+
+

Organización

+

+ Usa nombres claros y descriptivos junto con etiquetas opcionales para mantener tus fuentes organizadas + y fácilmente buscables. +

+
+
+

Operaciones por Lote

+

+ Selecciona múltiples fuentes para realizar operaciones por lote como eliminación o actualización de etiquetas. +

+
+
+
+ +
+

Usar la CLI

+

+ Para automatización y scripts, puedes usar la CLI de Plexus para gestionar fuentes: +

+ +
+              {`# Crear una nueva fuente
+plexus sources create --name "Mi Fuente" --type text --content "Contenido de ejemplo"
+
+# Actualizar una fuente existente
+plexus sources update id-fuente --name "Nombre Actualizado" --content "Contenido actualizado"`}
+            
+
+ +
+

Referencia del SDK de Python

+

+ Para acceso programático, puedes usar el SDK de Python: +

+ +
+              {`from plexus import Plexus
+
+plexus = Plexus(api_key="tu-clave-api")
+
+# Crear una nueva fuente
+source = plexus.sources.create(
+    name="Mi Fuente",
+    type="text",
+    data="Contenido de ejemplo"
+)
+
+# Actualizar una fuente existente
+source = plexus.sources.update(
+    source_id="id-fuente",
+    name="Nombre de Fuente Actualizado",
+    data="Contenido actualizado"
+)`}
+            
+
+ +
+

Próximamente

+

+ Se están desarrollando documentación y características adicionales. Regresa pronto para: +

+
    +
  • Técnicas avanzadas de gestión de fuentes
  • +
  • Capacidades de importación/exportación masiva
  • +
  • Plantillas de fuentes personalizadas
  • +
  • Ejemplos de integración
  • +
+
+
+
+ ); + } + + // English content (default) + return ( +
+

Add/Edit a Source

+

+ Learn how to create and manage sources in Plexus using the dashboard interface. +

+ +
+
+

Adding a Source in the Dashboard

+

+ The Plexus dashboard provides an intuitive interface for creating and managing your sources. + Follow these steps to add a new source: +

+ +
+
+

Step-by-Step Guide

+
    +
  1. + Navigate to Sources: +

    Click on "Sources" in the main navigation menu to access the sources management page.

    +
  2. +
  3. + Create New Source: +

    Click the "Add Source" button in the top-right corner of the page.

    +
  4. +
  5. + Choose Source Type: +

    Select the type of source you want to create (e.g., Text, Audio).

    +
  6. +
  7. + Configure Settings: +

    Fill in the required information:

    +
      +
    • Source name
    • +
    • Description (optional)
    • +
    • Content or file upload
    • +
    • Additional settings specific to the source type
    • +
    +
  8. +
  9. + Save: +

    Click "Create" to save your new source.

    +
  10. +
+
+ +
+

Editing an Existing Source

+
    +
  1. + Locate the Source: +

    Find the source you want to edit in the Sources list.

    +
  2. +
  3. + Access Edit Mode: +

    Click the edit icon (pencil) next to the source name.

    +
  4. +
  5. + Make Changes: +

    Update the source's information as needed.

    +
  6. +
  7. + Save Changes: +

    Click "Save" to apply your updates.

    +
  8. +
+
+
+
+ +
+

Source Management Tips

+
+
+

Organization

+

+ Use clear, descriptive names and optional tags to keep your sources organized + and easily searchable. +

+
+
+

Batch Operations

+

+ Select multiple sources to perform batch operations like deletion or tag updates. +

+
+
+
+ +
+

Using the CLI

+

+ For automation and scripting, you can use the Plexus CLI to manage sources: +

+ +
+            {`# Create a new source
+plexus sources create --name "My Source" --type text --content "Sample content"
+
+# Update an existing source
+plexus sources update source-id --name "Updated Name" --content "Updated content"`}
+          
+
+ +
+

Python SDK Reference

+

+ For programmatic access, you can use the Python SDK: +

+ +
+            {`from plexus import Plexus
+
+plexus = Plexus(api_key="your-api-key")
+
+# Create a new source
+source = plexus.sources.create(
+    name="My Source",
+    type="text",
+    data="Sample content"
+)
+
+# Update an existing source
+source = plexus.sources.update(
+    source_id="source-id",
+    name="Updated Source Name",
+    data="Updated content"
+)`}
+          
+
+ +
+

Coming Soon

+

+ Additional documentation and features are being developed. Check back soon for: +

+
    +
  • Advanced source management techniques
  • +
  • Bulk import/export capabilities
  • +
  • Custom source templates
  • +
  • Integration examples
  • +
+
+
+
+ ) +} \ No newline at end of file diff --git a/dashboard/app/[locale]/documentation/methods/evaluate-score/page.tsx b/dashboard/app/[locale]/documentation/methods/evaluate-score/page.tsx new file mode 100644 index 000000000..89cabe66f --- /dev/null +++ b/dashboard/app/[locale]/documentation/methods/evaluate-score/page.tsx @@ -0,0 +1,298 @@ +'use client'; + +import { useTranslationContext } from '@/app/contexts/TranslationContext' + +export default function EvaluateScorePage() { + const { locale } = useTranslationContext(); + + if (locale === 'es') { + return ( +
+

Evaluar una Puntuación

+

+ Aprende cómo ejecutar evaluaciones usando puntuaciones individuales o cuadros de puntuación completos. +

+ +
+
+

Ejecutar una Evaluación

+

+ Puedes evaluar contenido usando puntuaciones individuales o cuadros de puntuación completos. El proceso de evaluación + analiza tu contenido contra los criterios definidos y proporciona resultados detallados. +

+ +
+
+

Usar el Dashboard

+
    +
  1. Selecciona tu contenido fuente
  2. +
  3. Elige un cuadro de puntuación o puntuación individual
  4. +
  5. Haz clic en "Ejecutar Evaluación"
  6. +
  7. Monitorea el progreso de la evaluación
  8. +
  9. Revisa los resultados
  10. +
+
+ +
+

Usar el SDK

+
+                {`from plexus import Plexus
+
+plexus = Plexus(api_key="tu-clave-api")
+
+# Evaluar usando una puntuación específica (acepta ID, nombre, clave, o ID externo)
+evaluation = plexus.evaluations.create(
+    source_id="id-fuente",
+    score="Verificación Gramatical"  # Puede usar nombre, clave, ID, o ID externo
+)
+
+# O evaluar usando un cuadro de puntuación completo (acepta ID, nombre, clave, o ID externo)
+evaluation = plexus.evaluations.create(
+    source_id="id-fuente",
+    scorecard="Calidad de Contenido"  # Puede usar nombre, clave, ID, o ID externo
+)
+
+# Obtener resultados de evaluación
+results = evaluation.get_results()
+
+# Imprimir valores de puntuación
+for score in results.scores:
+    print(f"{score.name}: {score.value}")`}
+              
+ +

+ El SDK soporta el sistema de identificadores flexible, permitiéndote referenciar cuadros de puntuación y puntuaciones usando diferentes tipos de identificadores (nombre, clave, ID, o ID externo). +

+
+ +
+

Usar la CLI

+
+                {`# Evaluar usando un cuadro de puntuación
+plexus evaluate accuracy --scorecard "Calidad de Contenido" --number-of-samples 100
+
+# Listar resultados de evaluación
+plexus evaluations list
+
+# Ver resultados detallados para una evaluación específica
+plexus evaluations list-results --evaluation id-evaluacion`}
+              
+ +

+ La CLI soporta el sistema flexible de identificadores, que te permite referenciar cuadros de puntuación usando diferentes tipos de identificadores (nombre, clave, ID o ID externo). +

+
+
+
+ +
+

Entender los Resultados

+
+
+

Valores de Puntuación

+

+ Resultados numéricos o categóricos para cada criterio evaluado. +

+
+
+

Explicaciones

+

+ Razonamiento detallado detrás del resultado de evaluación de cada puntuación. +

+
+
+

Sugerencias

+

+ Recomendaciones para mejora basadas en los resultados de evaluación. +

+
+
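Los tres componentes de resultados descritos arriba (valores, explicaciones y sugerencias) pueden ilustrarse con un pequeño esbozo hipotético; la forma del diccionario `resultados` es un supuesto ilustrativo, no el esquema real del SDK:

```python
# Forma hipotética de los resultados; no es el esquema real del SDK de Plexus.
resultados = {
    "puntuaciones": [
        {"nombre": "Verificación Gramatical", "valor": 0.92,
         "explicacion": "Se encontraron pocos problemas gramaticales.",
         "sugerencias": ["Revisar el uso de comas en la sección 2."]},
        {"nombre": "Claridad", "valor": "Buena",
         "explicacion": "Las oraciones son concisas.",
         "sugerencias": []},
    ]
}

def resumir(resultados):
    """Construye una línea de resumen por puntuación: valor, explicación y sugerencias."""
    lineas = []
    for p in resultados["puntuaciones"]:
        linea = f"{p['nombre']}: {p['valor']} ({p['explicacion']})"
        if p["sugerencias"]:
            linea += " Sugerencias: " + "; ".join(p["sugerencias"])
        lineas.append(linea)
    return lineas
```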
+
+ +
+

Evaluaciones por Lotes

+

+ Puedes evaluar múltiples fuentes a la vez usando procesamiento por lotes: +

+ +
+            {`# Crear una evaluación por lotes
+batch = plexus.evaluations.create_batch(
+    source_ids=["fuente-1", "fuente-2", "fuente-3"],
+    scorecard="Aseguramiento de Calidad"  # Puede usar nombre, clave, ID, o ID externo
+)
+
+# Monitorear progreso del lote
+status = batch.get_status()
+
+# Obtener resultados cuando esté completo
+results = batch.get_results()`}
+          
+ +

+ Al igual que las evaluaciones individuales, las evaluaciones por lotes también soportan el sistema flexible de identificadores para cuadros de puntuación y puntuaciones. +
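El flujo de lote anterior (crear, sondear el estado, obtener resultados) puede esbozarse con un bucle de sondeo. La clase `LoteSimulado` es un sustituto hipotético del objeto real del SDK, con los mismos métodos `get_status()` y `get_results()` que muestra el ejemplo de arriba:

```python
# Esbozo hipotético de un bucle de sondeo para una evaluación por lotes.
# LoteSimulado sustituye al objeto de lote real del SDK de Plexus.

class LoteSimulado:
    """Simula un lote que pasa por varios estados hasta completarse."""
    def __init__(self, estados):
        self._estados = iter(estados)
        self._actual = None

    def get_status(self):
        self._actual = next(self._estados, "COMPLETED")
        return self._actual

    def get_results(self):
        if self._actual != "COMPLETED":
            raise RuntimeError("El lote aún no está completo")
        return {"fuente-1": 0.9, "fuente-2": 0.8, "fuente-3": 0.95}

def esperar_lote(lote, max_intentos=10):
    """Sondea el estado hasta COMPLETED y devuelve los resultados."""
    for _ in range(max_intentos):
        if lote.get_status() == "COMPLETED":
            return lote.get_results()
    raise TimeoutError("El lote no se completó a tiempo")
```

En un sistema real convendría añadir una pausa entre sondeos (por ejemplo, `time.sleep`) para no saturar la API.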

+
+ +
+

Próximamente

+

+ Se está desarrollando documentación detallada sobre evaluaciones. Regresa pronto para: +

+
    +
  • Opciones avanzadas de evaluación
  • +
  • Formato personalizado de resultados
  • +
  • Optimización de rendimiento de evaluaciones
  • +
  • Técnicas de análisis de resultados
  • +
+
+
+
+ ); + } + + // English content (default) + return ( +
+

Evaluate a Score

+

+ Learn how to run evaluations using individual scores or complete scorecards. +

+ +
+
+

Running an Evaluation

+

+ You can evaluate content using individual scores or entire scorecards. The evaluation + process analyzes your content against the defined criteria and provides detailed results. +

+ +
+
+

Using the Dashboard

+
    +
  1. Select your source content
  2. +
  3. Choose a scorecard or individual score
  4. +
  5. Click "Run Evaluation"
  6. +
  7. Monitor the evaluation progress
  8. +
  9. Review the results
  10. +
+
+ +
+

Using the SDK

+
+                {`from plexus import Plexus
+
+plexus = Plexus(api_key="your-api-key")
+
+# Evaluate using a specific score (accepts ID, name, key, or external ID)
+evaluation = plexus.evaluations.create(
+    source_id="source-id",
+    score="Grammar Check"  # Can use name, key, ID, or external ID
+)
+
+# Or evaluate using an entire scorecard (accepts ID, name, key, or external ID)
+evaluation = plexus.evaluations.create(
+    source_id="source-id",
+    scorecard="Content Quality"  # Can use name, key, ID, or external ID
+)
+
+# Get evaluation results
+results = evaluation.get_results()
+
+# Print score values
+for score in results.scores:
+    print(f"{score.name}: {score.value}")`}
+              
+ +

+ The SDK supports the flexible identifier system, allowing you to reference scorecards and scores using different types of identifiers (name, key, ID, or external ID). +
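A minimal, hypothetical sketch of how flexible identifier resolution might work; the `resolve_identifier` name and the example catalog are illustrative assumptions, not actual Plexus SDK code:

```python
# Hypothetical sketch: NOT actual Plexus SDK code.
# Illustrates how a flexible identifier (name, key, ID, or external ID)
# might resolve to a single scorecard record.

CATALOG = [
    {"id": "sc-001", "key": "content-quality",
     "name": "Content Quality", "external_id": "EXT-42"},
    {"id": "sc-002", "key": "grammar-check",
     "name": "Grammar Check", "external_id": "EXT-43"},
]

def resolve_identifier(identifier: str) -> dict:
    """Try each identifier field in priority order until one matches."""
    for field in ("id", "key", "name", "external_id"):
        for record in CATALOG:
            if record[field] == identifier:
                return record
    raise KeyError(f"Unknown identifier: {identifier}")
```

Under this scheme, `resolve_identifier("sc-001")` and `resolve_identifier("Content Quality")` return the same record.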

+
+ +
+

Using the CLI

+
+                {`# Evaluate using a scorecard
+plexus evaluate accuracy --scorecard "Content Quality" --number-of-samples 100
+
+# List evaluation results
+plexus evaluations list
+
+# View detailed results for a specific evaluation
+plexus evaluations list-results --evaluation evaluation-id`}
+              
+ +

+ The CLI supports the flexible identifier system, allowing you to reference scorecards using different types of identifiers (name, key, ID, or external ID). +

+
+
+
+ +
+

Understanding Results

+
+
+

Score Values

+

+ Numerical or categorical results for each evaluated criterion. +

+
+
+

Explanations

+

+ Detailed reasoning behind each score's evaluation result. +

+
+
+

Suggestions

+

+ Recommendations for improvement based on the evaluation results. +

+
+
+
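The three result components described above (score values, explanations, and suggestions) can be pictured with a small hypothetical sketch; the shape of the `results` dictionary is an illustrative assumption, not the SDK's actual schema:

```python
# Hypothetical result shape; not the actual Plexus SDK schema.
results = {
    "scores": [
        {"name": "Grammar Check", "value": 0.92,
         "explanation": "Few grammatical issues found.",
         "suggestions": ["Review comma usage in section 2."]},
        {"name": "Clarity", "value": "Good",
         "explanation": "Sentences are concise.",
         "suggestions": []},
    ]
}

def summarize(results):
    """Build one summary line per score: value, reasoning, and suggestions."""
    lines = []
    for score in results["scores"]:
        line = f"{score['name']}: {score['value']} ({score['explanation']})"
        if score["suggestions"]:
            line += " Suggestions: " + "; ".join(score["suggestions"])
        lines.append(line)
    return lines
```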
+ +
+

Batch Evaluations

+

+ You can evaluate multiple sources at once using batch processing: +

+ +
+            {`# Create a batch evaluation
+batch = plexus.evaluations.create_batch(
+    source_ids=["source-1", "source-2", "source-3"],
+    scorecard="Quality Assurance"  # Can use name, key, ID, or external ID
+)
+
+# Monitor batch progress
+status = batch.get_status()
+
+# Get results when complete
+results = batch.get_results()`}
+          
+ +

+ Like individual evaluations, batch evaluations also support the flexible identifier system for scorecards and scores. +
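The batch flow shown above (create, poll status, fetch results) can be sketched as a polling loop. `SimulatedBatch` is a hypothetical stand-in for the real SDK object, exposing the same `get_status()` and `get_results()` methods the example uses:

```python
# Hypothetical polling sketch; SimulatedBatch stands in for the real Plexus batch object.

class SimulatedBatch:
    """Simulates a batch that moves through states until completion."""
    def __init__(self, states):
        self._states = iter(states)
        self._current = None

    def get_status(self):
        self._current = next(self._states, "COMPLETED")
        return self._current

    def get_results(self):
        if self._current != "COMPLETED":
            raise RuntimeError("Batch is not complete yet")
        return {"source-1": 0.9, "source-2": 0.8, "source-3": 0.95}

def wait_for_batch(batch, max_polls=10):
    """Poll get_status() until COMPLETED, then return the results."""
    for _ in range(max_polls):
        if batch.get_status() == "COMPLETED":
            return batch.get_results()
    raise TimeoutError("Batch did not complete in time")
```

A real client would also sleep between polls (e.g. `time.sleep`) to avoid hammering the API.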

+
+ +
+

Coming Soon

+

+ Detailed documentation about evaluations is currently being developed. Check back soon for: +

+
    +
  • Advanced evaluation options
  • +
  • Custom result formatting
  • +
  • Evaluation performance optimization
  • +
  • Result analysis techniques
  • +
+
+
+
+ ) +} \ No newline at end of file diff --git a/dashboard/app/[locale]/documentation/methods/monitor-tasks/page.tsx b/dashboard/app/[locale]/documentation/methods/monitor-tasks/page.tsx new file mode 100644 index 000000000..dbfda92ed --- /dev/null +++ b/dashboard/app/[locale]/documentation/methods/monitor-tasks/page.tsx @@ -0,0 +1,278 @@ +'use client'; + +import { useTranslationContext } from '@/app/contexts/TranslationContext' + +export default function MonitorTasksPage() { + const { locale } = useTranslationContext(); + + if (locale === 'es') { + return ( +
+

Monitorear Tareas

+

+ Aprende cómo rastrear y gestionar tareas en tu implementación de Plexus. +

+ +
+
+

Monitoreo de Tareas

+

+ Las tareas representan unidades individuales de trabajo en Plexus, como evaluaciones, + procesamiento de fuentes o entrenamiento de modelos. Puedes monitorear tareas a través del + panel de control web y la interfaz de línea de comandos. +

+ +
+
+

Usar el Panel de Control

+

+ El panel de control web proporciona una interfaz visual para monitorear tareas: +

+
    +
  1. Navega a la sección de Tareas en el panel de control
  2. +
  3. Ve tareas activas y completadas en tiempo real
  4. +
  5. Usa filtros para encontrar tareas específicas por tipo o estado
  6. +
  7. Monitorea el progreso de tareas con barras de progreso visuales
  8. +
  9. Ve información detallada de tareas incluyendo etapas y registros
  10. +
  11. Rastrea el rendimiento de tareas y uso de recursos
  12. +
+
+ +
+

Usar la CLI

+

+ La CLI de Plexus proporciona herramientas poderosas para monitorear tareas directamente desde tu terminal: +

+
+                {`# Listar tareas para una cuenta (muestra las 10 más recientes por defecto)
+plexus tasks list --account tu-clave-cuenta
+
+# Mostrar todas las tareas en lugar de solo las más recientes
+plexus tasks list --account tu-clave-cuenta --all
+
+# Filtrar tareas por estado
+plexus tasks list --account tu-clave-cuenta --status RUNNING
+plexus tasks list --account tu-clave-cuenta --status COMPLETED
+plexus tasks list --account tu-clave-cuenta --status FAILED
+
+# Filtrar tareas por tipo
+plexus tasks list --account tu-clave-cuenta --type evaluation
+
+# Combinar filtros
+plexus tasks list --account tu-clave-cuenta --status RUNNING --type evaluation
+
+# Limitar el número de tareas mostradas
+plexus tasks list --account tu-clave-cuenta --limit 5`}
+              
+

+ La salida de la CLI muestra información completa de las tareas en una vista bien formateada: +

+
    +
  • Detalles básicos de tarea (ID, tipo, estado, objetivo, comando)
  • +
  • IDs asociados (cuenta, cuadro de puntuación, puntuación)
  • +
  • Etapa actual e información del trabajador
  • +
  • Información completa de tiempo (creado, iniciado, completado, estimado)
  • +
  • Indicadores de estado codificados por color (azul para ejecutándose, verde para completado, rojo para fallido)
  • +
  • Mensajes de error y detalles cuando estén disponibles
  • +
  • Metadatos de tarea y registros de salida
  • +
+
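Los indicadores de estado codificados por color que se describen arriba pueden esbozarse como un mapeo simple de estado a color; el mapeo proviene de la lista anterior y el nombre de la función es solo ilustrativo:

```python
# Mapeo de estado a color según las notas de salida de la CLI de arriba.
COLORES_ESTADO = {
    "RUNNING": "azul",
    "COMPLETED": "verde",
    "FAILED": "rojo",
}

def color_estado(estado: str) -> str:
    """Devuelve el color de visualización para un estado de tarea; 'blanco' por defecto."""
    return COLORES_ESTADO.get(estado.upper(), "blanco")
```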
+
+
+ +
+

Zona de Peligro: Eliminación de Tareas

+
+

+ ⚠️ Advertencia: La eliminación de tareas es una operación permanente. Las tareas eliminadas no pueden recuperarse. + Solo usa estos comandos cuando estés absolutamente seguro de querer eliminarlas. +

+ +
+

+ La CLI proporciona comandos para eliminación de tareas con medidas de seguridad integradas: +

+ +
+                {`# Eliminar una tarea específica por ID
+plexus tasks delete --account tu-clave-cuenta --task-id "id-tarea"
+
+# Eliminar todas las tareas fallidas para una cuenta
+plexus tasks delete --account tu-clave-cuenta --status FAILED
+
+# Eliminar todas las tareas de un tipo específico para una cuenta
+plexus tasks delete --account tu-clave-cuenta --type evaluation
+
+# Eliminar TODAS las tareas para una cuenta específica
+plexus tasks delete --account tu-clave-cuenta --all
+
+# Eliminar TODAS las tareas en TODAS las cuentas (USAR CON EXTREMA PRECAUCIÓN)
+plexus tasks delete --all
+
+# Omitir confirmación con -y/--yes (USAR CON EXTREMA PRECAUCIÓN)
+plexus tasks delete --all -y`}
+              
+ +
+

Características de Seguridad:

+
    +
  • La bandera --all es requerida para eliminación masiva
  • +
  • El alcance de la cuenta está claramente indicado en las confirmaciones
  • +
  • La confirmación se muestra por defecto (puede omitirse con -y)
  • +
  • La vista previa de tareas a eliminar siempre se muestra
  • +
  • Las etapas de tarea asociadas se limpian automáticamente
  • +
  • La barra de progreso muestra el estado de eliminación
  • +
+ +
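El patrón de confirmar antes de eliminar descrito en las características de seguridad puede esbozarse así; `confirmar_eliminacion` es una función ilustrativa, no la implementación real de la CLI:

```python
# Esbozo hipotético del patrón de confirmar antes de eliminar.
# confirmar_eliminacion es ilustrativa; no es la implementación real de la CLI.

def confirmar_eliminacion(tareas, omitir_confirmacion=False, respuesta="n"):
    """Muestra siempre la vista previa y elimina solo con confirmación explícita.

    `respuesta` sustituye la entrada interactiva ('s' para continuar);
    `omitir_confirmacion` modela la bandera -y/--yes de la CLI.
    """
    vista_previa = [f"se eliminaría la tarea {t['id']} ({t['estado']})" for t in tareas]
    if omitir_confirmacion or respuesta.lower() == "s":
        eliminadas = [t["id"] for t in tareas]
    else:
        eliminadas = []
    return vista_previa, eliminadas
```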

Antes de eliminar tareas, considera:

+
    +
  • ¿Hay operaciones dependientes que podrían verse afectadas?
  • +
  • ¿Necesitas mantener los registros de tareas para propósitos de auditoría?
  • +
  • ¿Has respaldado algún resultado importante de tareas?
  • +
  • ¿Estás apuntando a las tareas correctas con tus filtros?
  • +
  • Si usas --all sin --account, ¿estás seguro de que quieres eliminar tareas en TODAS las cuentas?
  • +
+
+
+
+
+
+
+ ); + } + + // English content (default) + return ( +
+

Monitor Tasks

+

+ Learn how to track and manage tasks in your Plexus deployment. +

+ +
+
+

Task Monitoring

+

+ Tasks represent individual units of work in Plexus, such as evaluations, + source processing, or model training. You can monitor tasks through both + the web dashboard and the command line interface. +

+ +
+
+

Using the Dashboard

+

+ The web dashboard provides a visual interface for monitoring tasks: +

+
    +
  1. Navigate to the Tasks section in the dashboard
  2. +
  3. View active and completed tasks in real-time
  4. +
  5. Use filters to find specific tasks by type or status
  6. +
  7. Monitor task progress with visual progress bars
  8. +
  9. View detailed task information including stages and logs
  10. +
  11. Track task performance and resource usage
  12. +
+
+ +
+

Using the CLI

+

+ The Plexus CLI provides powerful tools for monitoring tasks directly from your terminal: +

+
+                {`# List tasks for an account (shows 10 most recent by default)
+plexus tasks list --account your-account-key
+
+# Show all tasks instead of just the most recent
+plexus tasks list --account your-account-key --all
+
+# Filter tasks by status
+plexus tasks list --account your-account-key --status RUNNING
+plexus tasks list --account your-account-key --status COMPLETED
+plexus tasks list --account your-account-key --status FAILED
+
+# Filter tasks by type
+plexus tasks list --account your-account-key --type evaluation
+
+# Combine filters
+plexus tasks list --account your-account-key --status RUNNING --type evaluation
+
+# Limit the number of tasks shown
+plexus tasks list --account your-account-key --limit 5`}
+              
+

+ The CLI output displays comprehensive task information in a well-formatted view: +

+
    +
  • Basic task details (ID, type, status, target, command)
  • +
  • Associated IDs (account, scorecard, score)
  • +
  • Current stage and worker information
  • +
  • Complete timing information (created, started, completed, estimated)
  • +
  • Color-coded status indicators (blue for running, green for completed, red for failed)
  • +
  • Error messages and details when available
  • +
  • Task metadata and output logs
  • +
+
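The color-coded status indicators described above can be sketched as a simple status-to-color mapping; the mapping comes from the list above, while the function name is purely illustrative:

```python
# Status-to-color mapping as described in the CLI output notes above.
STATUS_COLORS = {
    "RUNNING": "blue",
    "COMPLETED": "green",
    "FAILED": "red",
}

def status_color(status: str) -> str:
    """Return the display color for a task status, defaulting to 'white'."""
    return STATUS_COLORS.get(status.upper(), "white")
```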
+
+
+ +
+

Danger Zone: Task Deletion

+
+

+ ⚠️ Warning: Task deletion is a permanent operation. Deleted tasks cannot be recovered. + Only use these commands when you are absolutely certain about the deletion. +

+ +
+

+ The CLI provides commands for task deletion with built-in safety measures: +

+ +
+                {`# Delete a specific task by ID
+plexus tasks delete --account your-account-key --task-id "task-id"
+
+# Delete all failed tasks for an account
+plexus tasks delete --account your-account-key --status FAILED
+
+# Delete all tasks of a specific type for an account
+plexus tasks delete --account your-account-key --type evaluation
+
+# Delete ALL tasks for a specific account
+plexus tasks delete --account your-account-key --all
+
+# Delete ALL tasks across ALL accounts (USE WITH EXTREME CAUTION)
+plexus tasks delete --all
+
+# Skip confirmation prompt with -y/--yes (USE WITH EXTREME CAUTION)
+plexus tasks delete --all -y`}
+              
+ +
+

Safety Features:

+
    +
  • The --all flag is required for bulk deletion
  • +
  • Account scope is clearly indicated in confirmations
  • +
  • Confirmation prompt is shown by default (can be skipped with -y)
  • +
  • Preview of tasks to be deleted is always shown
  • +
  • Associated task stages are automatically cleaned up
  • +
  • Progress bar shows deletion status
  • +
+ +
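The confirm-before-delete pattern described in the safety features can be sketched as follows; `confirm_deletion` is an illustrative function, not the actual CLI implementation:

```python
# Hypothetical sketch of the confirm-before-delete safety pattern.
# confirm_deletion is illustrative; it is not the actual CLI implementation.

def confirm_deletion(tasks, skip_confirmation=False, answer="n"):
    """Always build the preview, and delete only on explicit confirmation.

    `answer` stands in for interactive input ('y' to proceed);
    `skip_confirmation` models the CLI's -y/--yes flag.
    """
    preview = [f"would delete task {t['id']} ({t['status']})" for t in tasks]
    if skip_confirmation or answer.lower() == "y":
        deleted = [t["id"] for t in tasks]
    else:
        deleted = []
    return preview, deleted
```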

Before deleting tasks, consider:

+
    +
  • Are there any dependent operations that might be affected?
  • +
  • Do you need to keep the task records for auditing purposes?
  • +
  • Have you backed up any important task results?
  • +
  • Are you targeting the correct tasks with your filters?
  • +
  • If using --all without --account, are you certain you want to delete tasks across ALL accounts?
  • +
+
+
+
+
+
+
+ ) +} \ No newline at end of file diff --git a/dashboard/app/[locale]/documentation/methods/page.tsx b/dashboard/app/[locale]/documentation/methods/page.tsx new file mode 100644 index 000000000..bf9831c3f --- /dev/null +++ b/dashboard/app/[locale]/documentation/methods/page.tsx @@ -0,0 +1,214 @@ +'use client'; + +import { Button as DocButton } from "@/components/ui/button" +import { useTranslationContext } from '@/app/contexts/TranslationContext' +import Link from "next/link" + +export default function MethodsPage() { + const { locale } = useTranslationContext(); + + if (locale === 'es') { + return ( +
+

Métodos

+

+ Bienvenido a nuestra sección de guías paso a paso. Aquí encontrarás instrucciones detalladas y prácticas para todas las operaciones comunes en Plexus. Ya sea que estés configurando tu primera fuente, creando cuadros de puntuación o ejecutando evaluaciones, estas guías te acompañarán en cada proceso, paso a paso. +

+ +
+
+

Gestión de Fuentes

+
+
+

Agregar y Editar Fuentes

+

+ Aprende cómo crear nuevas fuentes y gestionar las existentes a través del panel de control. +

+ + Ver Guía de Gestión de Fuentes + +
+ +
+

Perfilado de Fuentes

+

+ Entiende cómo analizar tus fuentes para obtener insights sobre sus características. +

+ + Aprender sobre Perfilado + +
+
+
+ +
+

Configuración de Evaluaciones

+
+
+

Crear Cuadros de Puntuación

+

+ Configura criterios de evaluación completos con cuadros de puntuación personalizados. +

+ + Explorar Creación de Cuadros + +
+ +
+

Configurar Puntuaciones

+

+ Define métricas de evaluación individuales y sus parámetros. +

+ + Configurar Ajustes de Puntuación + +
+
+
+ +
+

Ejecutar Evaluaciones

+
+
+

Evaluar Contenido

+

+ Procesa tus fuentes usando cuadros de puntuación para generar insights. +

+ + Comenzar a Evaluar Contenido + +
+ +
+

Gestión de Tareas

+

+ Rastrea y gestiona tareas de evaluación a través de su ciclo de vida. +

+ + Monitorear tus Tareas + +
+
+
+ +
+

Próximos Pasos

+

+ ¿Listo para comenzar? Empieza con la gestión de fuentes para configurar tu contenido para evaluación. +

+
+ + Comenzar Gestión de Fuentes + + + Revisar Conceptos Fundamentales + +
+
+
+
+ ); + } + + // English content (default) + return ( +
+

Methods

+

+ Welcome to our step-by-step guides section. Here you'll find detailed, practical instructions for all common operations in Plexus. Whether you're setting up your first source, creating scorecards, or running evaluations, these guides will walk you through each process step by step. +

+ +
+
+

Source Management

+
+
+

Adding and Editing Sources

+

+ Learn how to create new sources and manage existing ones through the dashboard. +

+ + View Source Management Guide + +
+ +
+

Source Profiling

+

+ Understand how to analyze your sources to gain insights into their characteristics. +

+ + Learn About Profiling + +
+
+
+ +
+

Evaluation Setup

+
+
+

Creating Scorecards

+

+ Set up comprehensive evaluation criteria with custom scorecards. +

+ + Explore Scorecard Creation + +
+ +
+

Configuring Scores

+

+ Define individual evaluation metrics and their parameters. +

+ + Configure Score Settings + +
+
+
+ +
+

Running Evaluations

+
+
+

Evaluating Content

+

+ Process your sources using scorecards to generate insights. +

+ + Start Evaluating Content + +
+ +
+

Task Management

+

+ Track and manage evaluation tasks through their lifecycle. +

+ + Monitor Your Tasks + +
+
+
+ +
+

Next Steps

+

+ Ready to get started? Begin with source management to set up your content for evaluation. +

+
+ + Start Managing Sources + + + Review Core Concepts + +
+
+
+
+ ) +} \ No newline at end of file diff --git a/dashboard/app/[locale]/documentation/methods/profile-source/page.tsx b/dashboard/app/[locale]/documentation/methods/profile-source/page.tsx new file mode 100644 index 000000000..a241fd74f --- /dev/null +++ b/dashboard/app/[locale]/documentation/methods/profile-source/page.tsx @@ -0,0 +1,322 @@ +'use client'; + +import { useTranslationContext } from '@/app/contexts/TranslationContext' + +export default function ProfileSourcePage() { + const { locale } = useTranslationContext(); + + if (locale === 'es') { + return ( +
+

Perfilar una Fuente

+

+ Aprende cómo analizar y perfilar tus fuentes usando la interfaz del panel de control de Plexus. +

+ +
+
+

Perfilado de Fuentes en el Panel de Control

+

+ El perfilado de fuentes te ayuda a entender las características y patrones en tus datos + antes de ejecutar evaluaciones. El panel de control proporciona herramientas completas para analizar + tus fuentes. +

+ +
+
+

Guía Paso a Paso

+
    +
  1. + Acceder a Detalles de Fuente: +

    Navega a tu fuente en la lista de Fuentes y haz clic en ella para ver detalles.

    +
  2. +
  3. + Iniciar Perfilado: +

    Haz clic en el botón "Perfilar Fuente" en la vista de detalles de la fuente.

    +
  4. +
  5. + Configurar Análisis: +

    Selecciona las opciones de perfilado que deseas ejecutar:

    +
      +
    • Análisis de contenido
    • +
    • Detección de patrones
    • +
    • Métricas de calidad
    • +
    • Opciones de análisis personalizado
    • +
    +
  6. +
  7. + Ejecutar Perfil: +

    Haz clic en "Iniciar Análisis" para comenzar el proceso de perfilado.

    +
  8. +
  9. + Revisar Resultados: +

    Una vez completo, examina los resultados detallados del perfilado en el panel de control.

    +
  10. +
+
+
+
+ +
+

Entendiendo los Resultados del Perfil

+
+
+

Análisis de Contenido

+

+ Ve desgloses detallados del contenido de tu fuente, incluyendo estructura, formato + y características clave. El panel de control presenta esta información a través de + visualizaciones interactivas y reportes detallados. +

+
+
+

Detección de Patrones

+

+ Explora patrones identificados y anomalías a través de la vista de análisis de patrones + del panel de control. Esto te ayuda a entender temas comunes y problemas potenciales + en tu contenido. +

+
+
+

Métricas de Calidad

+

+ Revisa mediciones de calidad completas a través de gráficos intuitivos y + desgloses detallados de métricas en la interfaz del panel de control. +

+
+
+
+ +
+

Consejos de Gestión de Perfiles

+
+
+

Guardar Perfiles

+

+ Guarda configuraciones de perfil como plantillas para reutilización rápida en múltiples fuentes. +

+
+
+

Comparar Resultados

+

+ Usa la vista de comparación del panel de control para analizar resultados de perfil entre diferentes + fuentes o períodos de tiempo. +

+
+
+
+ +
+

Usar la CLI

+

+ Para flujos de trabajo automatizados de perfilado, puedes usar la CLI de Plexus: +

+ +
+              {`# Ejecutar un perfil en una fuente
+plexus sources profile source-id --analysis-type full
+
+# Obtener resultados del perfil
+plexus sources profile-results source-id`}
+            
+
+ +
+

Referencia del SDK de Python

+

+ Para perfilado programático, puedes usar el SDK de Python: +

+ +
+              {`from plexus import Plexus
+
+plexus = Plexus(api_key="tu-clave-api")
+
+# Ejecutar un perfil en una fuente
+profile = plexus.sources.profile(
+    source_id="source-id",
+    options={
+        "content_analysis": True,
+        "pattern_detection": True,
+        "quality_metrics": True
+    }
+)
+
+# Obtener resultados del perfil
+results = profile.get_results()`}
+            
+
+ +
+

Próximamente

+

+ Se están desarrollando características adicionales de perfilado. Vuelve pronto para: +

+
    +
  • Opciones avanzadas de visualización
  • +
  • Plantillas de perfilado personalizadas
  • +
  • Generación automatizada de insights
  • +
  • Compartir perfiles y colaboración
  • +
+
+
+
+ ); + } + + // English content (default) + return ( +
+

Profile a Source

+

+ Learn how to analyze and profile your sources using the Plexus dashboard interface. +

+ +
+
+

Profiling Sources in the Dashboard

+

+ Source profiling helps you understand the characteristics and patterns in your data + before running evaluations. The dashboard provides comprehensive tools for analyzing + your sources. +

+ +
+
+

Step-by-Step Guide

+
    +
  1. + Access Source Details: +

    Navigate to your source in the Sources list and click on it to view details.

    +
  2. +
  3. + Start Profiling: +

    Click the "Profile Source" button in the source details view.

    +
  4. +
  5. + Configure Analysis: +

    Select the profiling options you want to run:

    +
      +
    • Content analysis
    • +
    • Pattern detection
    • +
    • Quality metrics
    • +
    • Custom analysis options
    • +
    +
  6. +
  7. + Run Profile: +

    Click "Start Analysis" to begin the profiling process.

    +
  8. +
  9. + Review Results: +

    Once complete, examine the detailed profiling results in the dashboard.

    +
  10. +
+
+
+
+ +
+

Understanding Profile Results

+
+
+

Content Analysis

+

+ View detailed breakdowns of your source content, including structure, format, + and key characteristics. The dashboard presents this information through + interactive visualizations and detailed reports. +

+
+
+

Pattern Detection

+

+ Explore identified patterns and anomalies through the dashboard's pattern + analysis view. This helps you understand common themes and potential issues + in your content. +

+
+
+

Quality Metrics

+

+ Review comprehensive quality measurements through intuitive charts and + detailed metric breakdowns in the dashboard interface. +

+
+
+
+ +
+

Profile Management Tips

+
+
+

Saving Profiles

+

+ Save profile configurations as templates for quick reuse across multiple sources. +

+
+
+

Comparing Results

+

+ Use the dashboard's comparison view to analyze profile results across different + sources or time periods. +

+
+
+
+ +
+

Using the CLI

+

+ For automated profiling workflows, you can use the Plexus CLI: +

+ +
+            {`# Run a profile on a source
+plexus sources profile source-id --analysis-type full
+
+# Get profile results
+plexus sources profile-results source-id`}
+          
+
+ +
+

Python SDK Reference

+

+ For programmatic profiling, you can use the Python SDK: +

+ +
+            {`from plexus import Plexus
+
+plexus = Plexus(api_key="your-api-key")
+
+# Run a profile on a source
+profile = plexus.sources.profile(
+    source_id="source-id",
+    options={
+        "content_analysis": True,
+        "pattern_detection": True,
+        "quality_metrics": True
+    }
+)
+
+# Get profile results
+results = profile.get_results()`}
+          
+
+ +
+

Coming Soon

+

+ Additional profiling features are being developed. Check back soon for: +

+
    +
  • Advanced visualization options
  • +
  • Custom profiling templates
  • +
  • Automated insights generation
  • +
  • Profile sharing and collaboration
  • +
+
+
+
+ ) +} \ No newline at end of file diff --git a/dashboard/app/[locale]/documentation/page.tsx b/dashboard/app/[locale]/documentation/page.tsx new file mode 100644 index 000000000..eb8d2bdf8 --- /dev/null +++ b/dashboard/app/[locale]/documentation/page.tsx @@ -0,0 +1,241 @@ +'use client'; + +import { Button as DocButton } from "@/components/ui/button" +import { useTranslations, useTranslationContext } from '@/app/contexts/TranslationContext' +import Link from "next/link" + +export default function DocumentationPage() { + const t = useTranslations('documentation'); + const { locale } = useTranslationContext(); + + if (locale === 'es') { + return ( +
+

Documentación

+

+ Bienvenido a la documentación de Plexus. Aquí encontrarás guías completas y documentación + para ayudarte a comenzar a trabajar con Plexus lo más rápido posible. +

+ +
+
+

Primeros Pasos

+
+
+

Conceptos Fundamentales

+

+ Aprende sobre los conceptos y componentes fundamentales que impulsan Plexus. +

+ + Explorar Fundamentos + +
+ +
+

Guías Paso a Paso

+

+ Sigue guías detalladas para operaciones y flujos de trabajo comunes. +

+ + Ver Métodos + +
+
+
+ +
+

Componentes de la Plataforma

+
+
+

Nodos de Trabajo

+

+ Configura y gestiona nodos de trabajo para procesar tu contenido a escala. +

+ + Aprender sobre Workers + +
+ +
+

+ Herramienta CLI plexus +

+

+ Utiliza la interfaz de línea de comandos para gestionar tu implementación de Plexus. +

+ + Explorar CLI + +
+ +
+

SDK de Python

+

+ Integra Plexus en tus aplicaciones Python de manera programática. +

+ + Explorar Referencia SDK + +
+
+
+ +
+

Inicio Rápido

+

+ La forma más rápida de comenzar con Plexus es: +

+
    +
  1. + Revisar los Fundamentos +

    Comprende los conceptos básicos que conforman Plexus.

    +
  2. +
  3. + Crear tu Primera Fuente +

    Agrega contenido para analizar usando el panel de control.

    +
  4. +
  5. + Configurar un Cuadro de Puntuación +

    Define cómo quieres evaluar tu contenido.

    +
  6. +
  7. + Ejecutar una Evaluación +

    Procesa tu contenido y visualiza los resultados.

    +
  8. +
+
+ +
+

Próximos Pasos

+

+ ¿Listo para comenzar? Empieza con los fundamentos para entender los conceptos básicos de Plexus. +

+
+ + Comenzar con Fundamentos + + + Ir a Creación de Fuentes + +
+
+
+
+ ); + } + + // English content (default) + return ( +
+

Documentation

+

+ Welcome to the Plexus documentation. Here you'll find comprehensive guides and documentation + to help you start working with Plexus as quickly as possible. +

+ +
+
+

Getting Started

+
+
+

Core Concepts

+

+ Learn about the fundamental concepts and components that power Plexus. +

+ + Explore Basics + +
+ +
+

Step-by-Step Guides

+

+ Follow detailed guides for common operations and workflows. +

+ + View Methods + +
+
+
+ +
+

Platform Components

+
+
+

Worker Nodes

+

+ Set up and manage worker nodes to process your content at scale. +

+ + Learn About Workers + +
+ +
+

+ plexus CLI Tool +

+

+ Use the command-line interface to manage your Plexus deployment. +

+ + Explore CLI + +
+ +
+

Python SDK

+

+ Integrate Plexus into your Python applications programmatically. +

+ + Browse SDK Reference + +
+
+
+ +
+

Quick Start

+

+ The fastest way to get started with Plexus is to: +

+
    +
  1. + Review the Basics +

    Understand the core concepts that make up Plexus.

    +
  2. +
  3. + Create Your First Source +

    Add some content to analyze using the dashboard.

    +
  4. +
  5. + Set Up a Scorecard +

    Define how you want to evaluate your content.

    +
  6. +
  7. + Run an Evaluation +

    Process your content and view the results.

    +
  8. +
+
+ +
+

Next Steps

+

+ Ready to get started? Begin with the basics to understand Plexus's core concepts. +

+
+ + Start with Basics + + + Jump to Source Creation + +
+
+
+
+ ) +} \ No newline at end of file diff --git a/dashboard/app/evaluations/[id]/__tests__/page.test.tsx b/dashboard/app/[locale]/evaluations/[id]/__tests__/page.test.tsx similarity index 100% rename from dashboard/app/evaluations/[id]/__tests__/page.test.tsx rename to dashboard/app/[locale]/evaluations/[id]/__tests__/page.test.tsx diff --git a/dashboard/app/evaluations/[id]/client-layout.tsx b/dashboard/app/[locale]/evaluations/[id]/client-layout.tsx similarity index 100% rename from dashboard/app/evaluations/[id]/client-layout.tsx rename to dashboard/app/[locale]/evaluations/[id]/client-layout.tsx diff --git a/dashboard/app/evaluations/[id]/layout.tsx b/dashboard/app/[locale]/evaluations/[id]/layout.tsx similarity index 100% rename from dashboard/app/evaluations/[id]/layout.tsx rename to dashboard/app/[locale]/evaluations/[id]/layout.tsx diff --git a/dashboard/app/evaluations/[id]/page.tsx b/dashboard/app/[locale]/evaluations/[id]/page.tsx similarity index 100% rename from dashboard/app/evaluations/[id]/page.tsx rename to dashboard/app/[locale]/evaluations/[id]/page.tsx diff --git a/dashboard/app/evaluations/page.tsx b/dashboard/app/[locale]/evaluations/page.tsx similarity index 100% rename from dashboard/app/evaluations/page.tsx rename to dashboard/app/[locale]/evaluations/page.tsx diff --git a/dashboard/app/feedback-queues/page.tsx b/dashboard/app/[locale]/feedback-queues/page.tsx similarity index 100% rename from dashboard/app/feedback-queues/page.tsx rename to dashboard/app/[locale]/feedback-queues/page.tsx diff --git a/dashboard/app/feedback/page.tsx b/dashboard/app/[locale]/feedback/page.tsx similarity index 100% rename from dashboard/app/feedback/page.tsx rename to dashboard/app/[locale]/feedback/page.tsx diff --git a/dashboard/app/items/page.tsx b/dashboard/app/[locale]/items/page.tsx similarity index 100% rename from dashboard/app/items/page.tsx rename to dashboard/app/[locale]/items/page.tsx diff --git a/dashboard/app/lab/README-metadata.md 
b/dashboard/app/[locale]/lab/README-metadata.md similarity index 100% rename from dashboard/app/lab/README-metadata.md rename to dashboard/app/[locale]/lab/README-metadata.md diff --git a/dashboard/app/lab/activity/layout.tsx b/dashboard/app/[locale]/lab/activity/layout.tsx similarity index 100% rename from dashboard/app/lab/activity/layout.tsx rename to dashboard/app/[locale]/lab/activity/layout.tsx diff --git a/dashboard/app/lab/activity/page.tsx b/dashboard/app/[locale]/lab/activity/page.tsx similarity index 100% rename from dashboard/app/lab/activity/page.tsx rename to dashboard/app/[locale]/lab/activity/page.tsx diff --git a/dashboard/app/lab/alerts/page.tsx b/dashboard/app/[locale]/lab/alerts/page.tsx similarity index 100% rename from dashboard/app/lab/alerts/page.tsx rename to dashboard/app/[locale]/lab/alerts/page.tsx diff --git a/dashboard/app/lab/analysis/page.tsx b/dashboard/app/[locale]/lab/analysis/page.tsx similarity index 100% rename from dashboard/app/lab/analysis/page.tsx rename to dashboard/app/[locale]/lab/analysis/page.tsx diff --git a/dashboard/app/lab/batches/[id]/layout.tsx b/dashboard/app/[locale]/lab/batches/[id]/layout.tsx similarity index 100% rename from dashboard/app/lab/batches/[id]/layout.tsx rename to dashboard/app/[locale]/lab/batches/[id]/layout.tsx diff --git a/dashboard/app/lab/batches/[id]/page.tsx b/dashboard/app/[locale]/lab/batches/[id]/page.tsx similarity index 100% rename from dashboard/app/lab/batches/[id]/page.tsx rename to dashboard/app/[locale]/lab/batches/[id]/page.tsx diff --git a/dashboard/app/lab/batches/page.tsx b/dashboard/app/[locale]/lab/batches/page.tsx similarity index 100% rename from dashboard/app/lab/batches/page.tsx rename to dashboard/app/[locale]/lab/batches/page.tsx diff --git a/dashboard/app/lab/data/page.tsx b/dashboard/app/[locale]/lab/data/page.tsx similarity index 100% rename from dashboard/app/lab/data/page.tsx rename to dashboard/app/[locale]/lab/data/page.tsx diff --git 
a/dashboard/app/[locale]/lab/datasets/layout.tsx b/dashboard/app/[locale]/lab/datasets/layout.tsx new file mode 100644 index 000000000..2b41d8be2 --- /dev/null +++ b/dashboard/app/[locale]/lab/datasets/layout.tsx @@ -0,0 +1,23 @@ +import React from 'react' +import type { Metadata } from 'next' + +export const metadata: Metadata = { + title: "Datasets", + description: "Manage and explore your datasets for AI evaluation.", + openGraph: { + title: "Datasets", + description: "Manage and explore your datasets for AI evaluation.", + }, + twitter: { + title: "Datasets", + description: "Manage and explore your datasets for AI evaluation.", + } +} + +export default function DatasetsLayout({ + children, +}: { + children: React.ReactNode +}) { + return children +} \ No newline at end of file diff --git a/dashboard/app/[locale]/lab/datasets/page.tsx b/dashboard/app/[locale]/lab/datasets/page.tsx new file mode 100644 index 000000000..8a882f701 --- /dev/null +++ b/dashboard/app/[locale]/lab/datasets/page.tsx @@ -0,0 +1,7 @@ +"use client"; + +import DatasetsDashboard from '@/components/datasets-dashboard'; + +export default function DatasetsPage() { + return <DatasetsDashboard />; +} \ No newline at end of file diff --git a/dashboard/app/lab/evaluations/[id]/layout.tsx b/dashboard/app/[locale]/lab/evaluations/[id]/layout.tsx similarity index 100% rename from dashboard/app/lab/evaluations/[id]/layout.tsx rename to dashboard/app/[locale]/lab/evaluations/[id]/layout.tsx diff --git a/dashboard/app/lab/evaluations/[id]/page.tsx b/dashboard/app/[locale]/lab/evaluations/[id]/page.tsx similarity index 100% rename from dashboard/app/lab/evaluations/[id]/page.tsx rename to dashboard/app/[locale]/lab/evaluations/[id]/page.tsx diff --git a/dashboard/app/lab/evaluations/[id]/score-results/[scoreResultId]/layout.tsx b/dashboard/app/[locale]/lab/evaluations/[id]/score-results/[scoreResultId]/layout.tsx similarity index 100% rename from dashboard/app/lab/evaluations/[id]/score-results/[scoreResultId]/layout.tsx
rename to dashboard/app/[locale]/lab/evaluations/[id]/score-results/[scoreResultId]/layout.tsx diff --git a/dashboard/app/lab/evaluations/[id]/score-results/[scoreResultId]/page.tsx b/dashboard/app/[locale]/lab/evaluations/[id]/score-results/[scoreResultId]/page.tsx similarity index 100% rename from dashboard/app/lab/evaluations/[id]/score-results/[scoreResultId]/page.tsx rename to dashboard/app/[locale]/lab/evaluations/[id]/score-results/[scoreResultId]/page.tsx diff --git a/dashboard/app/lab/evaluations/[id]/score-results/layout.tsx b/dashboard/app/[locale]/lab/evaluations/[id]/score-results/layout.tsx similarity index 100% rename from dashboard/app/lab/evaluations/[id]/score-results/layout.tsx rename to dashboard/app/[locale]/lab/evaluations/[id]/score-results/layout.tsx diff --git a/dashboard/app/lab/evaluations/[id]/score-results/page.tsx b/dashboard/app/[locale]/lab/evaluations/[id]/score-results/page.tsx similarity index 100% rename from dashboard/app/lab/evaluations/[id]/score-results/page.tsx rename to dashboard/app/[locale]/lab/evaluations/[id]/score-results/page.tsx diff --git a/dashboard/app/lab/evaluations/layout.tsx b/dashboard/app/[locale]/lab/evaluations/layout.tsx similarity index 100% rename from dashboard/app/lab/evaluations/layout.tsx rename to dashboard/app/[locale]/lab/evaluations/layout.tsx diff --git a/dashboard/app/lab/evaluations/page.tsx b/dashboard/app/[locale]/lab/evaluations/page.tsx similarity index 100% rename from dashboard/app/lab/evaluations/page.tsx rename to dashboard/app/[locale]/lab/evaluations/page.tsx diff --git a/dashboard/app/lab/feedback-queues/page.tsx b/dashboard/app/[locale]/lab/feedback-queues/page.tsx similarity index 100% rename from dashboard/app/lab/feedback-queues/page.tsx rename to dashboard/app/[locale]/lab/feedback-queues/page.tsx diff --git a/dashboard/app/lab/items/[id]/page.tsx b/dashboard/app/[locale]/lab/items/[id]/page.tsx similarity index 100% rename from dashboard/app/lab/items/[id]/page.tsx rename to 
dashboard/app/[locale]/lab/items/[id]/page.tsx diff --git a/dashboard/app/lab/items/page.tsx b/dashboard/app/[locale]/lab/items/page.tsx similarity index 100% rename from dashboard/app/lab/items/page.tsx rename to dashboard/app/[locale]/lab/items/page.tsx diff --git a/dashboard/app/lab/layout.tsx b/dashboard/app/[locale]/lab/layout.tsx similarity index 100% rename from dashboard/app/lab/layout.tsx rename to dashboard/app/[locale]/lab/layout.tsx diff --git a/dashboard/app/lab/metadata-template.txt b/dashboard/app/[locale]/lab/metadata-template.txt similarity index 100% rename from dashboard/app/lab/metadata-template.txt rename to dashboard/app/[locale]/lab/metadata-template.txt diff --git a/dashboard/app/lab/reports/[id]/page.tsx b/dashboard/app/[locale]/lab/reports/[id]/page.tsx similarity index 100% rename from dashboard/app/lab/reports/[id]/page.tsx rename to dashboard/app/[locale]/lab/reports/[id]/page.tsx diff --git a/dashboard/app/lab/reports/edit/[id]/page.tsx b/dashboard/app/[locale]/lab/reports/edit/[id]/page.tsx similarity index 100% rename from dashboard/app/lab/reports/edit/[id]/page.tsx rename to dashboard/app/[locale]/lab/reports/edit/[id]/page.tsx diff --git a/dashboard/app/lab/reports/edit/page.tsx b/dashboard/app/[locale]/lab/reports/edit/page.tsx similarity index 100% rename from dashboard/app/lab/reports/edit/page.tsx rename to dashboard/app/[locale]/lab/reports/edit/page.tsx diff --git a/dashboard/app/lab/reports/page.tsx b/dashboard/app/[locale]/lab/reports/page.tsx similarity index 100% rename from dashboard/app/lab/reports/page.tsx rename to dashboard/app/[locale]/lab/reports/page.tsx diff --git a/dashboard/app/lab/scorecards/[id]/layout.tsx b/dashboard/app/[locale]/lab/scorecards/[id]/layout.tsx similarity index 100% rename from dashboard/app/lab/scorecards/[id]/layout.tsx rename to dashboard/app/[locale]/lab/scorecards/[id]/layout.tsx diff --git a/dashboard/app/lab/scorecards/[id]/page.tsx 
b/dashboard/app/[locale]/lab/scorecards/[id]/page.tsx similarity index 100% rename from dashboard/app/lab/scorecards/[id]/page.tsx rename to dashboard/app/[locale]/lab/scorecards/[id]/page.tsx diff --git a/dashboard/app/lab/scorecards/[id]/scores/[scoreId]/layout.tsx b/dashboard/app/[locale]/lab/scorecards/[id]/scores/[scoreId]/layout.tsx similarity index 100% rename from dashboard/app/lab/scorecards/[id]/scores/[scoreId]/layout.tsx rename to dashboard/app/[locale]/lab/scorecards/[id]/scores/[scoreId]/layout.tsx diff --git a/dashboard/app/lab/scorecards/[id]/scores/[scoreId]/page.tsx b/dashboard/app/[locale]/lab/scorecards/[id]/scores/[scoreId]/page.tsx similarity index 100% rename from dashboard/app/lab/scorecards/[id]/scores/[scoreId]/page.tsx rename to dashboard/app/[locale]/lab/scorecards/[id]/scores/[scoreId]/page.tsx diff --git a/dashboard/app/lab/scorecards/[id]/scores/layout.tsx b/dashboard/app/[locale]/lab/scorecards/[id]/scores/layout.tsx similarity index 100% rename from dashboard/app/lab/scorecards/[id]/scores/layout.tsx rename to dashboard/app/[locale]/lab/scorecards/[id]/scores/layout.tsx diff --git a/dashboard/app/lab/scorecards/layout.tsx b/dashboard/app/[locale]/lab/scorecards/layout.tsx similarity index 100% rename from dashboard/app/lab/scorecards/layout.tsx rename to dashboard/app/[locale]/lab/scorecards/layout.tsx diff --git a/dashboard/app/lab/scorecards/page.tsx b/dashboard/app/[locale]/lab/scorecards/page.tsx similarity index 100% rename from dashboard/app/lab/scorecards/page.tsx rename to dashboard/app/[locale]/lab/scorecards/page.tsx diff --git a/dashboard/app/lab/settings/account/page.tsx b/dashboard/app/[locale]/lab/settings/account/page.tsx similarity index 85% rename from dashboard/app/lab/settings/account/page.tsx rename to dashboard/app/[locale]/lab/settings/account/page.tsx index d3cbc7226..bfe1fd615 100644 --- a/dashboard/app/lab/settings/account/page.tsx +++ b/dashboard/app/[locale]/lab/settings/account/page.tsx @@ -12,6 +12,8 @@ 
import { Switch } from "@/components/ui/switch" import { Label } from "@/components/ui/label" import { Button } from "@/components/ui/button" import { useToast } from "@/components/ui/use-toast" +import { LanguageSelector } from "@/components/ui/language-selector" +import { useTranslations } from '@/app/contexts/TranslationContext' import { useAccount } from "@/app/contexts/AccountContext" type Account = Schema["Account"]["type"] @@ -39,6 +41,8 @@ const MENU_ITEMS = [ ] export default function LabAccountSettings() { + const t = useTranslations('settings.account') + const tCommon = useTranslations('common') const { authStatus } = useAuthenticator((context) => [context.authStatus]) const router = useRouter() const { toast } = useToast() @@ -85,15 +89,15 @@ export default function LabAccountSettings() { await refreshAccount() toast({ - title: "Success", - description: "Account settings saved successfully" + title: tCommon('success'), + description: t('settingsSaved') }) router.push("/lab/settings") } catch (error) { console.error("Error saving settings:", error) toast({ - title: "Error", - description: "Failed to save account settings", + title: tCommon('error'), + description: t('settingsSaveError'), variant: "destructive" }) } finally { @@ -108,7 +112,7 @@ export default function LabAccountSettings() { if (!selectedAccount) { return (
-

No account selected

+

{t('noAccountSelected')}

) } @@ -116,17 +120,17 @@ export default function LabAccountSettings() { return (
-

Account Settings

+

{t('title')}

- Customize your account menu visibility settings. + {t('description')}

- Menu Visibility for {selectedAccount.name} + {t('menuVisibilityTitle', { accountName: selectedAccount.name })} - Choose which menu items to show or hide in the sidebar. + {t('menuVisibilityDescription')} @@ -146,7 +150,7 @@ export default function LabAccountSettings() { onClick={handleSave} disabled={isSaving} > - {isSaving ? "Saving..." : "Save Changes"} + {isSaving ? t('saving') : t('saveChanges')} diff --git a/dashboard/app/[locale]/lab/settings/page.tsx b/dashboard/app/[locale]/lab/settings/page.tsx new file mode 100644 index 000000000..d1dc46f15 --- /dev/null +++ b/dashboard/app/[locale]/lab/settings/page.tsx @@ -0,0 +1,53 @@ +'use client' + +import { Card, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card" +import { LanguageSelector } from "@/components/ui/language-selector" +import { useTranslations } from '@/app/contexts/TranslationContext' +import Link from 'next/link' + +export default function LabSettings() { + const t = useTranslations('settings'); + const tCommon = useTranslations('common'); + + return ( +
+
+

{t('title')}

+

+ {t('description')} +

+
+ + + + {t('user')} + {t('customize')} + + + +

{t('userDescription')}

+
+ + {t('manageVisibility')} + +
+
+
+ + + + {t('account.title')} + {t('account.description')} + + +

{t('organizationDescription')}

+
+ + {t('account.title')} + +
+
+
+
+ ) +} \ No newline at end of file diff --git a/dashboard/app/lab/tasks/[id]/page.tsx b/dashboard/app/[locale]/lab/tasks/[id]/page.tsx similarity index 100% rename from dashboard/app/lab/tasks/[id]/page.tsx rename to dashboard/app/[locale]/lab/tasks/[id]/page.tsx diff --git a/dashboard/app/lab/tasks/layout.tsx b/dashboard/app/[locale]/lab/tasks/layout.tsx similarity index 100% rename from dashboard/app/lab/tasks/layout.tsx rename to dashboard/app/[locale]/lab/tasks/layout.tsx diff --git a/dashboard/app/[locale]/layout.tsx b/dashboard/app/[locale]/layout.tsx new file mode 100644 index 000000000..678c1a8a2 --- /dev/null +++ b/dashboard/app/[locale]/layout.tsx @@ -0,0 +1,82 @@ +import type { Metadata } from "next"; +import { Inter } from "next/font/google"; +import { Jersey_20 } from "next/font/google"; +import "../globals.css"; +import ClientLayout from "../client-layout"; +import { HydrationOverlay } from "@builder.io/react-hydration-overlay"; +import "@aws-amplify/ui-react/styles.css"; +import { AccountProvider } from "../contexts/AccountContext" +import { SidebarProvider } from "../contexts/SidebarContext" +import { TranslationProvider } from "../contexts/TranslationContext" +import {notFound} from 'next/navigation'; +import {locales} from '../../i18n'; + +const inter = Inter({ subsets: ["latin"] }); +const jersey20 = Jersey_20({ + subsets: ["latin"], + weight: "400", + variable: "--font-jersey-20" +}); + +export const metadata: Metadata = { + title: "Plexus - No-Code AI Agents at Scale", + description: "Run AI agents over your data with no code. Plexus is a battle-tested platform for building agent-based AI workflows that analyze streams of content and take action.", + openGraph: { + title: "Plexus - No-Code AI Agents at Scale", + description: "Run AI agents over your data with no code. 
Plexus is a battle-tested platform for building agent-based AI workflows that analyze streams of content and take action.", + url: "https://plexus.anth.us", + siteName: "Plexus", + images: [ + { + url: "/og-image.png", + width: 1200, + height: 630, + alt: "Plexus - No-Code AI Agents at Scale" + } + ], + locale: "en_US", + type: "website", + }, + twitter: { + card: "summary_large_image", + title: "Plexus - No-Code AI Agents at Scale", + description: "Run AI agents over your data with no code. Plexus is a battle-tested platform for building agent-based AI workflows that analyze streams of content and take action.", + creator: "@Anthus_AI", + images: ["/og-image.png"], + } +}; + +export default async function LocaleLayout({ + children, + params: {locale} +}: { + children: React.ReactNode; + params: {locale: string}; +}) { + // Validate that the incoming `locale` parameter is valid + if (!locales.includes(locale as any)) { + notFound(); + } + + // Load messages for the locale synchronously + const messages = locale === 'es' + ? 
(await import('../../messages/es.json')).default + : (await import('../../messages/en.json')).default; + + return ( + + + + + + + + {children} + + + + + + + ); +} \ No newline at end of file diff --git a/dashboard/app/menu-items.ts b/dashboard/app/[locale]/menu-items.ts similarity index 100% rename from dashboard/app/menu-items.ts rename to dashboard/app/[locale]/menu-items.ts diff --git a/dashboard/app/page.module.css b/dashboard/app/[locale]/page.module.css similarity index 100% rename from dashboard/app/page.module.css rename to dashboard/app/[locale]/page.module.css diff --git a/dashboard/app/[locale]/page.tsx b/dashboard/app/[locale]/page.tsx new file mode 100644 index 000000000..568eb0e66 --- /dev/null +++ b/dashboard/app/[locale]/page.tsx @@ -0,0 +1,281 @@ +'use client' + +import React from 'react' +import { StandardSection } from '@/components/landing/StandardSection' +import { UseCases } from '@/components/landing/UseCases' +import { CTASection } from '@/components/landing/CTASection' +import { Footer } from '@/components/landing/Footer' +import { Layout } from '@/components/landing/Layout' +import { Download, Brain, Workflow as WorkflowIcon, ArrowRight, Cpu, FlaskRoundIcon as Flask, Cloud, Network } from 'lucide-react' +import dynamic from 'next/dynamic' +import ItemListWorkflow from '@/components/workflow/layouts/item-list-workflow' +import MetricsGauges from '@/components/MetricsGauges' +import { Button } from '@/components/ui/button' +import Link from 'next/link' + +const CLOCKWISE_SEQUENCE = [0, 1, 3, 2] // accuracy -> precision -> specificity -> sensitivity + +const MultiModelWorkflowClient = dynamic( + () => import('@/components/workflow/layouts/multi-model-workflow'), + { ssr: false } +) + +const WorkflowClient = dynamic( + () => import('@/components/workflow/base/workflow-base'), + { ssr: false } +) + +const MultiTypeWorkflowClient = dynamic( + () => import('@/components/workflow/layouts/multi-type-workflow'), + { ssr: false } +) + +const 
ItemListWorkflowClient = dynamic( + () => import('@/components/workflow/layouts/item-list-workflow'), + { ssr: false } +) + +export default function LandingPage() { + const [selectedMetricIndex, setSelectedMetricIndex] = React.useState(0) + const [rotationIndex, setRotationIndex] = React.useState(0) + + React.useEffect(() => { + const interval = setInterval(() => { + setRotationIndex(prev => (prev + 1) % CLOCKWISE_SEQUENCE.length) + }, 2000) + + return () => clearInterval(interval) + }, []) + + // Map rotation index to actual gauge index for clockwise movement + const selectedIndex = CLOCKWISE_SEQUENCE[rotationIndex] + + return ( + + +

+ Plexus is a battle-tested platform for building AI workflows that analyze streams of content and take action. +

+

+ Your team can use your data to set up step-by-step processes that automate information management. Without dealing with code development and deployment. +

+
+ +
+ + } + rightContent={ +
+ +
+ } + /> + + } + rightContent={ +
+

+ Run a scorecard on each item of your data, with multiple scores per scorecard. +

+

+ Are your agents saying the right things? Are your inbound leads qualified? +

+

+ Classify, predict, extract, and act on your data. +

+
+ } + /> + + + AI changes every week! Don't lock yourself into one solution. + Plexus is a workbench for applying any newfangled AI model to + solve your problems. Or simpler and cheaper ML models. Or + logical rules -- anything your solution requires. + {"\n\n"} + OpenAI, Anthropic, Google, Deepseek, Azure, AWS Bedrock, Hugging Face, PyTorch, TensorFlow — + Plexus supports them all. +

+ } + rightContent={ +
+ +
+ } + /> + + + } + rightContent={ +
+

+ You can't just write prompts and put them into production and hope they work, you need a way to evaluate them quantitatively to see if they meet your needs. You can't optimize a metric you're not measuring. +

+

+ Each use case demands its own success metrics: Is this a regulatory compliance question where we need high sensitivity? Do we need to use balanced accuracy because the data is unbalanced? Plexus gives you the gauges you need. +

+
+ } + /> + + + Your answers should match your questions. Sometimes a simple yes/no will do, + other times you need a 5-star rating, a percentage score, or just a thumbs up. + Plexus gives you the flexibility to express your results in the format that makes sense + for your use case. + {"\n\n"} + Binary classifiers, multi-class classifiers, scalar values, entity extraction, quote extraction, + and more. The framework is flexible enough to support anything your solution requires. +

+ } + rightContent={ +
+ +
+ } + /> + + +
+

+ Built by practitioners on the front lines of AI deployment. Our features + evolve as rapidly as AI itself, delivering battle-tested tools that + transform cutting-edge capabilities into real business value. +

+
+
+
+ +

+ Multi-model +

+

+ Use any AI/ML model, from GPT-4 or Claude, to your own fine-tuned local Llama, to custom BERT-based classifiers. +

+
+
+ +

+ Lab workflow +

+

+ Create and align your own custom classifiers using sophisticated tools for analyzing datasets and evaluating results. +

+
+
+ +

+ Serverless +

+

+ Plexus is a lightning-fast, fully DevOps / IaC / NoSQL project that doesn't depend on servers or databases. +

+
+
+ +

+ Task dispatch +

+

+ Connect any node as a worker for running agents, evaluations, or reports, from AWS to Azure to local computers. +

+
+
+
+ + + +
+ + ) +} \ No newline at end of file diff --git a/dashboard/app/platform/page.tsx b/dashboard/app/[locale]/platform/page.tsx similarity index 100% rename from dashboard/app/platform/page.tsx rename to dashboard/app/[locale]/platform/page.tsx diff --git a/dashboard/app/reports/[id]/client-layout.tsx b/dashboard/app/[locale]/reports/[id]/client-layout.tsx similarity index 100% rename from dashboard/app/reports/[id]/client-layout.tsx rename to dashboard/app/[locale]/reports/[id]/client-layout.tsx diff --git a/dashboard/app/reports/[id]/layout.tsx b/dashboard/app/[locale]/reports/[id]/layout.tsx similarity index 100% rename from dashboard/app/reports/[id]/layout.tsx rename to dashboard/app/[locale]/reports/[id]/layout.tsx diff --git a/dashboard/app/reports/[id]/page.tsx b/dashboard/app/[locale]/reports/[id]/page.tsx similarity index 100% rename from dashboard/app/reports/[id]/page.tsx rename to dashboard/app/[locale]/reports/[id]/page.tsx diff --git a/dashboard/app/reports/layout.tsx b/dashboard/app/[locale]/reports/layout.tsx similarity index 100% rename from dashboard/app/reports/layout.tsx rename to dashboard/app/[locale]/reports/layout.tsx diff --git a/dashboard/app/reports/page.tsx b/dashboard/app/[locale]/reports/page.tsx similarity index 100% rename from dashboard/app/reports/page.tsx rename to dashboard/app/[locale]/reports/page.tsx diff --git a/dashboard/app/scorecards/[scorecardId]/scores/[scoreId]/edit/page.tsx b/dashboard/app/[locale]/scorecards/[scorecardId]/scores/[scoreId]/edit/page.tsx similarity index 100% rename from dashboard/app/scorecards/[scorecardId]/scores/[scoreId]/edit/page.tsx rename to dashboard/app/[locale]/scorecards/[scorecardId]/scores/[scoreId]/edit/page.tsx diff --git a/dashboard/app/scorecards/page.tsx b/dashboard/app/[locale]/scorecards/page.tsx similarity index 100% rename from dashboard/app/scorecards/page.tsx rename to dashboard/app/[locale]/scorecards/page.tsx diff --git a/dashboard/app/settings/account/page.tsx 
b/dashboard/app/[locale]/settings/account/page.tsx similarity index 100% rename from dashboard/app/settings/account/page.tsx rename to dashboard/app/[locale]/settings/account/page.tsx diff --git a/dashboard/app/settings/page.tsx b/dashboard/app/[locale]/settings/page.tsx similarity index 100% rename from dashboard/app/settings/page.tsx rename to dashboard/app/[locale]/settings/page.tsx diff --git a/dashboard/app/solutions/call-center-qa/page.tsx b/dashboard/app/[locale]/solutions/call-center-qa/page.tsx similarity index 100% rename from dashboard/app/solutions/call-center-qa/page.tsx rename to dashboard/app/[locale]/solutions/call-center-qa/page.tsx diff --git a/dashboard/app/solutions/enterprise/page.tsx b/dashboard/app/[locale]/solutions/enterprise/page.tsx similarity index 100% rename from dashboard/app/solutions/enterprise/page.tsx rename to dashboard/app/[locale]/solutions/enterprise/page.tsx diff --git a/dashboard/app/solutions/optimizer-agents/page.tsx b/dashboard/app/[locale]/solutions/optimizer-agents/page.tsx similarity index 100% rename from dashboard/app/solutions/optimizer-agents/page.tsx rename to dashboard/app/[locale]/solutions/optimizer-agents/page.tsx diff --git a/dashboard/app/solutions/platform/page.tsx b/dashboard/app/[locale]/solutions/platform/page.tsx similarity index 100% rename from dashboard/app/solutions/platform/page.tsx rename to dashboard/app/[locale]/solutions/platform/page.tsx diff --git a/dashboard/app/solutions/resources/page.tsx b/dashboard/app/[locale]/solutions/resources/page.tsx similarity index 100% rename from dashboard/app/solutions/resources/page.tsx rename to dashboard/app/[locale]/solutions/resources/page.tsx diff --git a/dashboard/app/contexts/TranslationContext.tsx b/dashboard/app/contexts/TranslationContext.tsx new file mode 100644 index 000000000..346a10dcc --- /dev/null +++ b/dashboard/app/contexts/TranslationContext.tsx @@ -0,0 +1,77 @@ +"use client"; + +import React, { createContext, useContext, ReactNode } from 
'react'; + +interface TranslationContextType { + t: (key: string, variables?: Record<string, any>) => string; + locale: string; +} + +const TranslationContext = createContext<TranslationContextType | undefined>(undefined); + +interface TranslationProviderProps { + children: ReactNode; + messages: Record<string, any>; + locale: string; +} + +export function TranslationProvider({ children, messages, locale }: TranslationProviderProps) { + const t = (key: string, variables?: Record<string, any>): string => { + const keys = key.split('.'); + let value = messages; + + for (const k of keys) { + if (value && typeof value === 'object' && k in value) { + value = value[k]; + } else { + return key; // Return key if translation not found + } + } + + let result = typeof value === 'string' ? value : key; + + // Handle variable interpolation + if (variables && typeof result === 'string') { + Object.keys(variables).forEach(varKey => { + const placeholder = `{${varKey}}`; + result = result.replace(new RegExp(placeholder, 'g'), String(variables[varKey])); + }); + } + + return result; + }; + + return ( + <TranslationContext.Provider value={{ t, locale }}> + {children} + </TranslationContext.Provider> + ); +} + +export function useTranslations(namespace?: string) { + const context = useContext(TranslationContext); + if (!context) { + throw new Error('useTranslations must be used within a TranslationProvider'); + } + + return (key: string, variables?: Record<string, any>) => { + const fullKey = namespace ?
`${namespace}.${key}` : key; + return context.t(fullKey, variables); + }; +} + +export function useLocale() { + const context = useContext(TranslationContext); + if (!context) { + throw new Error('useLocale must be used within a TranslationProvider'); + } + return context.locale; +} + +export function useTranslationContext() { + const context = useContext(TranslationContext); + if (!context) { + throw new Error('useTranslationContext must be used within a TranslationProvider'); + } + return context; +} \ No newline at end of file diff --git a/dashboard/app/documentation/advanced/mcp-server/page.tsx b/dashboard/app/documentation/advanced/mcp-server/page.tsx deleted file mode 100644 index 6fee9a88b..000000000 --- a/dashboard/app/documentation/advanced/mcp-server/page.tsx +++ /dev/null @@ -1,255 +0,0 @@ -'use client'; - -import Link from "next/link"; - -export default function McpServerPage() { - return ( -
- - -

Using the Plexus MCP Server

-

- Connect AI assistants like Claude to your Plexus data and functionality using the Model Context Protocol (MCP) server. -

- -
-
-

What is MCP?

-

- The Model Context Protocol (MCP) is an open standard designed by Anthropic that allows AI models, such as Claude, - to securely interact with external tools and data sources. For an AI assistant, an MCP server acts as a gateway, - enabling it to access and use capabilities from other systems. In the context of Plexus, this means you can - empower an AI to work with your scorecards, evaluations, and reports directly. This allows for more dynamic and - powerful ways to interact with your Plexus instance. - For a deeper dive into the protocol itself, see the official Anthropic Model Context Protocol announcement. -

-
- -
-

Plexus MCP Server Overview

-

- The Plexus MCP server is a pre-built tool that you can run on your system. Once running, it allows AI assistants - that support MCP (like the Claude desktop app) to connect to your Plexus environment. This connection lets the AI - perform various actions within Plexus on your behalf, such as listing scorecards, retrieving report details, or - even initiating new evaluations. The server is typically run via a wrapper script (plexus_fastmcp_wrapper.py) - which handles environment setup and ensures smooth communication with the AI client. -

-
- -
-

Getting the Server Code

-

- To run the Plexus MCP server, you'll first need to obtain the server code. This is available in the main Plexus GitHub repository. - You can clone or download it from: https://github.com/AnthusAI/Plexus. - The necessary scripts (plexus_fastmcp_wrapper.py and plexus_fastmcp_server.py) are typically located at MCP/ within the repository. - You will primarily need these files and to ensure their dependencies can be met in your Python environment. -

-
- -
-

Setting Up an MCP Client (e.g., Claude Desktop App)

-

- To use the Plexus MCP server, you need an MCP client. For example, if you are using the Claude desktop application, - you would configure it by creating or editing an mcp.json file. This file tells Claude (or another client) - how to find and communicate with your running Plexus MCP server. -

-

- Here is an example configuration for your mcp.json file. You will need to replace the placeholder paths - (/path/to/...) with the actual paths relevant to your system and where you have cloned the Plexus repository. -

-
-            
-{`{ - "mcpServers": { - "plexus-mcp-service": { - "command": "/path/to/your/conda/envs/py311/bin/python", - "args": [ - "/path/to/your/Plexus/MCP/plexus_fastmcp_wrapper.py", - "--host", "127.0.0.1", - "--port", "8002", - "--transport", "stdio", - "--env-file", "/path/to/your/Plexus/.env", - "--target-cwd", "/path/to/your/Plexus/" - ], - "env": { - "PYTHONUNBUFFERED": "1", - "PYTHONPATH": "/path/to/your/Plexus" - } - } - } -}`} -
-
-

Key parts of this configuration:

-
    -
  • command: The full path to the Python interpreter within your Plexus conda environment (e.g., py311).
  • -
  • args: Specifies the wrapper script to run (plexus_fastmcp_wrapper.py) and its parameters. - The --host and --port arguments configure the local server settings. - The --transport stdio argument is standard for client-server communication. - The --env-file argument must point directly to your .env file (which contains API keys). - The --target-cwd should point to your Plexus project root directory.
  • -
  • env.PYTHONPATH: Should point to the root of your Plexus project directory to ensure the server can find all necessary Python modules.
  • -
-

- The location of the mcp.json file can vary depending on the client. For the Claude desktop app, consult its documentation for the correct location (often in a configuration directory within your user profile). -

-
- -
-

Available Tools & Capabilities

-

Once the Plexus MCP server is running (via the wrapper script) and your AI assistant is connected, you can instruct the assistant to use the following tools:

- -
-

Scorecard Management

-
    -
  • - list_plexus_scorecards: Ask the AI to list available scorecards in your Plexus Dashboard. - You can optionally tell it to filter by an account name/key, a partial scorecard name, or a scorecard key. For example: "List Plexus scorecards for the 'Sales' account that include 'Q3' in the name." -
  • -
  • - get_plexus_scorecard_info: Request detailed information about a specific scorecard. - Provide the AI with an identifier for the scorecard (like its name, key, or ID). It will return the scorecard's description, sections, and the scores within each section. For example: "Get info for the 'Customer Satisfaction Q3' scorecard." -
  • -
  • - get_plexus_score_details: Get specific details for a particular score within a scorecard, including its configuration and version history. - You'll need to specify both the scorecard and the score. You can also ask for a specific version of the score. For example: "Show me the details for the 'Responsiveness' score in the 'Support Tickets' scorecard, especially its champion version." -
  • -
-
- -
-

Evaluation Tools

-
    -
  • - run_plexus_evaluation: Instruct the AI to start a new scorecard evaluation. - You need to provide the scorecard name and optionally a specific score name and the number of samples. The server will dispatch this task to your Plexus backend. Note that the MCP server itself doesn't track the progress; you would monitor the evaluation in the Plexus Dashboard as usual. For example: "Run a Plexus evaluation for the 'Lead Quality' scorecard using 100 samples." -
  • -
-
- -
-

Reporting Tools

  • list_plexus_reports: Ask for a list of generated reports, optionally filtered by account or by a specific report configuration ID if you know it. The AI will return a list showing report names, IDs, and when they were created. For example: "List the latest Plexus reports for the main account."
  • get_plexus_report_details: Retrieve detailed information about a specific report by providing its ID, including the report's parameters, output, and any generated blocks. For example: "Get the details for Plexus report ID '123-abc-456'."
  • get_latest_plexus_report: A convenient way to get the details of the most recently generated report, optionally filtered by account or report configuration ID. For example: "Show me the latest report generated from the 'Weekly Performance' configuration."
  • list_plexus_report_configurations: Get a list of all available report configurations for an account. This is useful for knowing what reports you *can* generate. For example: "What report configurations are available for the 'Marketing' account?"

Utility Tools

  • think: A planning tool used internally by the AI to structure its reasoning before using other tools. This helps the AI organize its approach to complex tasks that may require multiple steps or tool calls.

Environment Requirements for Running the Server


Software

  • Python 3.11 or newer (required by the fastmcp library the server uses).
  • An existing Plexus installation and access to its dashboard credentials.
  • The python-dotenv Python package (used by the server to load your API keys from the .env file).

.env File with Plexus Credentials

The server needs to access your Plexus API. Create a file named .env; the --env-file parameter in your mcp.json should point directly to this file. It's typically located in your main Plexus project root directory (e.g., Plexus/.env).

Required Variables in .env:

  • PLEXUS_API_URL: The API endpoint URL for your Plexus instance.
  • PLEXUS_API_KEY: Your API key for authenticating with Plexus.
  • PLEXUS_DASHBOARD_URL: The main URL of your Plexus dashboard (used for generating links).

Optional Variables in .env:

  • PLEXUS_ACCOUNT_KEY: If you work with multiple accounts, you can set a default account key here.
  • LOG_LEVEL: Set this to DEBUG, INFO, WARNING, or ERROR to control the server's logging verbosity.
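Putting those variables together, a complete .env might look like the following. All values here are placeholders for illustration; substitute your own instance's URL and key:

```
PLEXUS_API_URL=https://your-plexus-instance.example.com/api
PLEXUS_API_KEY=your-api-key-here
PLEXUS_DASHBOARD_URL=https://your-plexus-instance.example.com

# Optional
PLEXUS_ACCOUNT_KEY=your-default-account-key
LOG_LEVEL=INFO
```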

Running the Server

Once you have the code and your .env file is set up, the server is run via the plexus_fastmcp_wrapper.py script, as configured in your mcp.json file. The MCP client (e.g., the Claude desktop app) will execute the command specified in mcp.json when it attempts to connect to the "plexus-mcp-service".

You typically don't run the plexus_fastmcp_wrapper.py script manually from the terminal for client use. Instead, ensure your mcp.json is correctly configured, and the client application will start the server process as needed.

Make sure your Plexus Python environment (e.g., conda activate py311) is correctly referenced by the full path to python in the command field of your mcp.json. The wrapper script handles passing the necessary environment variables and paths to the underlying plexus_fastmcp_server.py.
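As a sketch only, an mcp.json entry for the service might look like the following. The exact schema depends on your MCP client, and every path here is a placeholder you must adjust for your installation:

```json
{
  "mcpServers": {
    "plexus-mcp-service": {
      "command": "/path/to/conda/envs/py311/bin/python",
      "args": [
        "/path/to/Plexus/plexus_fastmcp_wrapper.py",
        "--env-file", "/path/to/Plexus/.env"
      ],
      "env": {
        "PYTHONPATH": "/path/to/Plexus"
      }
    }
  }
}
```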


Troubleshooting Common Issues

  • Connection Errors: Double-check all paths in your mcp.json file (command, args, env.PYTHONPATH). Ensure they accurately point to your Python executable, the plexus_fastmcp_wrapper.py script, your .env file, and your project directory.
  • Authentication Errors: Verify that the --env-file path in mcp.json correctly points to your .env file and that this file contains the correct PLEXUS_API_URL and PLEXUS_API_KEY.

Server Logs

The Plexus MCP server setup (via plexus_fastmcp_wrapper.py) directs operational logs and error messages to stderr. MCP clients like the Claude desktop app typically capture and display these stderr logs, or store them in a dedicated log file.

For instance, the Claude desktop app often stores MCP interaction logs in ~/Library/Logs/Claude/mcp.log on macOS. Monitoring this file is key for diagnosing issues if the client doesn't display them directly.
- ) -} \ No newline at end of file diff --git a/dashboard/app/documentation/advanced/page.tsx b/dashboard/app/documentation/advanced/page.tsx deleted file mode 100644 index c34a5809a..000000000 --- a/dashboard/app/documentation/advanced/page.tsx +++ /dev/null @@ -1,86 +0,0 @@ -import { Button as DocButton } from "@/components/ui/button" -import Link from "next/link" -import { Metadata } from "next" - -export const metadata: Metadata = { - title: "Advanced - Plexus Documentation", - description: "Advanced tools and concepts for power users of the Plexus platform." -} - -export default function AdvancedPage() { - return ( -
-

Advanced Tools & Concepts

Explore advanced tools and concepts that enable deeper integration and customization of Plexus for technical users and developers.

Command Line Interface

The plexus CLI tool provides powerful command-line access to all Plexus functionality, perfect for automation and advanced workflows.

Explore CLI Tool
- -
-

Worker Infrastructure

Learn how to set up and manage Plexus worker nodes to process tasks efficiently across your infrastructure.

Learn About Workers
- -
-

Python SDK

Integrate Plexus directly into your Python applications with our comprehensive SDK, enabling programmatic access to all platform features.

Browse SDK Reference
- -
-

Universal Code Snippets

Learn about Plexus's universal YAML code format designed for seamless communication between humans, AI models, and other systems.

Explore Universal Code Snippets
- -
-

Plexus MCP Server

Enable AI agents and tools to interact with Plexus functionality using the Model Context Protocol (MCP).

Explore MCP Server
- ) -} \ No newline at end of file diff --git a/dashboard/app/documentation/advanced/sdk/page.tsx b/dashboard/app/documentation/advanced/sdk/page.tsx deleted file mode 100644 index ca3ec73de..000000000 --- a/dashboard/app/documentation/advanced/sdk/page.tsx +++ /dev/null @@ -1,77 +0,0 @@ -'use client'; - -export default function SdkPage() { - return ( -
-

Python SDK Reference

Explore the Python SDK for programmatic access to Plexus functionality.

Overview

The Plexus Python SDK provides a simple and intuitive way to interact with Plexus programmatically. Use it to automate workflows, manage resources, and integrate Plexus into your applications.

Installation

Install the Plexus SDK using pip:

```
pip install plexus-sdk
```
-

Quick Start

Here's a simple example to get you started:

```python
from plexus import Plexus

# Initialize the client
plexus = Plexus(api_key="your-api-key")

# Create a new source
source = plexus.sources.create(
    name="My Source",
    type="text",
    data="Sample content"
)

# Run an evaluation
evaluation = plexus.evaluations.create(
    source_id=source.id,
    scorecard_id="your-scorecard-id"
)
```
- -
-

Complete Documentation

For complete API reference, authentication guides, advanced usage examples, and best practices, visit our comprehensive Python SDK documentation.
-
- ) -} \ No newline at end of file diff --git a/dashboard/app/documentation/advanced/worker-nodes/page.tsx b/dashboard/app/documentation/advanced/worker-nodes/page.tsx deleted file mode 100644 index e9eeaf9d5..000000000 --- a/dashboard/app/documentation/advanced/worker-nodes/page.tsx +++ /dev/null @@ -1,184 +0,0 @@ -'use client'; - -export default function WorkerNodesPage() { - return ( -
- - -

Worker Nodes

Learn how to deploy and manage Plexus worker nodes across any infrastructure to process your evaluation tasks.
-

Overview

Plexus worker nodes are long-running daemon processes that handle evaluation tasks and other operations. You can run these workers on any computer with Python installed, whether it's in the cloud (AWS, Azure, GCP) or on your own premises.

Workers are managed using the Plexus CLI tool, which makes it easy to start, configure, and monitor worker processes across your infrastructure.
- -
-

Starting a Worker

Use the plexus command worker command to start a worker process. Here's a basic example:

```
plexus command worker \
  --concurrency 4 \
  --queue celery \
  --loglevel INFO
```

  • --concurrency: Number of worker processes (default: 4)
  • --queue: Queue to process (default: celery)
  • --loglevel: Logging level (default: INFO)

Worker Specialization

Workers can be specialized to handle specific types of tasks using target patterns. This allows you to dedicate certain workers to particular workloads:

```
# Worker that only processes dataset-related tasks
plexus command worker \
  --target-patterns "datasets/*" \
  --concurrency 4

# Worker for GPU-intensive tasks
plexus command worker \
  --target-patterns "*/gpu-required" \
  --concurrency 2

# Worker handling multiple task types
plexus command worker \
  --target-patterns "datasets/*,training/*" \
  --concurrency 8
```

Target patterns use the format domain/subdomain and support wildcards. Some examples:

  • datasets/call-criteria - Only process call criteria dataset tasks
  • training/call-criteria - Only handle call criteria training tasks
  • */gpu-required - Process any tasks requiring GPU resources
  • datasets/* - Handle all dataset-related tasks
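The matching behavior described above can be illustrated with a small sketch. This is not Plexus's actual implementation; it simply uses Python's standard fnmatch module (an assumption on our part) to show which tasks a worker's patterns would accept:

```python
from fnmatch import fnmatch

def worker_accepts(task_target: str, patterns: list[str]) -> bool:
    """Return True if a task's domain/subdomain target matches any worker pattern."""
    return any(fnmatch(task_target, pattern) for pattern in patterns)

# A worker started with --target-patterns "datasets/*,training/*"
patterns = ["datasets/*", "training/*"]

print(worker_accepts("datasets/call-criteria", patterns))  # True
print(worker_accepts("training/call-criteria", patterns))  # True
print(worker_accepts("reports/weekly", patterns))          # False
```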
- -
-

Deployment Examples

Here are some common deployment scenarios:

AWS EC2

```
# Run in a screen session for persistence
screen -S plexus-worker
plexus command worker \
  --concurrency 8 \
  --loglevel INFO
# Ctrl+A, D to detach
```

Local Development

```
# Run with increased logging for debugging
plexus command worker \
  --concurrency 2 \
  --loglevel DEBUG
```
-

GPU Worker

```
# Dedicated GPU worker with specific targeting
plexus command worker \
  --concurrency 1 \
  --target-patterns "*/gpu-required" \
  --loglevel INFO
```
- -
-

Best Practices

  • Use a process manager (like systemd, supervisor, or screen) to keep workers running
  • Set concurrency based on available CPU cores and memory
  • Use target patterns to optimize resource utilization
  • Monitor worker logs for errors and performance issues
  • Deploy workers close to your data sources when possible
  • Consider using auto-scaling groups in cloud environments
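For the process-manager suggestion, a minimal systemd unit might look like the following sketch. The paths, working directory, and service account are all assumptions; adjust them for your environment:

```ini
[Unit]
Description=Plexus worker
After=network.target

[Service]
User=plexus
WorkingDirectory=/opt/plexus
ExecStart=/opt/conda/envs/py311/bin/plexus command worker --concurrency 4 --loglevel INFO
Restart=on-failure

[Install]
WantedBy=multi-user.target
```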
- -
-

Additional Resources

For more information about worker deployment and management:

  • See the CLI documentation for detailed command reference
  • Check the built-in help with plexus command worker --help
  • View worker logs with --loglevel DEBUG for troubleshooting
- ) -} \ No newline at end of file diff --git a/dashboard/app/documentation/concepts/items/page.tsx b/dashboard/app/documentation/concepts/items/page.tsx deleted file mode 100644 index 45b137e83..000000000 --- a/dashboard/app/documentation/concepts/items/page.tsx +++ /dev/null @@ -1,105 +0,0 @@ -export default function ItemsPage() { - return ( -
-

Items

Learn about Items, the core content units that Plexus analyzes and scores.

What are Items?

Items are individual pieces of content that you want to analyze or evaluate using Plexus. They can be any type of content that your AI, ML, or logical scoring techniques can process, such as:

  • Call center transcripts for quality assurance
  • Customer emails or support tickets
  • Case files or documents
  • Code repositories for analysis
  • Images or videos for content moderation
- -
-

How Items Work

Items are the foundation of Plexus's evaluation system:

1. Organization

Each Item belongs to an Account and can be referenced by multiple Scorecards. This allows you to evaluate the same content using different criteria or scoring methods.

2. Scoring

When you apply a Scorecard to an Item, Plexus creates a ScoringJob to process it. The results are stored as ScoreResults, which contain the scores, confidence levels, and any additional metadata from the scoring process.

3. Evaluation

Items can be part of Evaluations, where their scoring results are compared against known correct answers to measure the accuracy and effectiveness of your scoring methods.
- -
-

Item Properties

-
-
-

Core Properties

  • Name: A unique identifier for the Item
  • Description: Optional details about the Item's content or purpose
  • Account: The Account that owns this Item
-
-

Relationships

  • Scorecards: Scorecards that reference this Item
  • ScoringJobs: Records of scoring operations performed on this Item
  • ScoreResults: Results from scoring operations
  • Evaluation: Optional link to an Evaluation this Item is part of
-
-
- -
-

Best Practices

  • Use clear, descriptive names for your Items to make them easy to identify
  • Include relevant metadata in the description to provide context
  • Organize Items logically within your Account structure
  • Keep track of which Items are used in Evaluations for quality control
  • Regularly review ScoreResults to monitor scoring effectiveness
- -
-

Next Steps

Now that you understand Items, you can:

  • Create Scorecards to evaluate your Items
  • Set up scoring criteria using Scores
  • Run Evaluations to measure scoring accuracy
  • Monitor results through the dashboard
-
-
- ) -} \ No newline at end of file diff --git a/dashboard/app/documentation/concepts/page.tsx b/dashboard/app/documentation/concepts/page.tsx deleted file mode 100644 index 876f44ade..000000000 --- a/dashboard/app/documentation/concepts/page.tsx +++ /dev/null @@ -1,152 +0,0 @@ -import { Button as DocButton } from "@/components/ui/button" -import Link from "next/link" -import { Metadata } from "next" - -export const metadata: Metadata = { - title: "Basics - Plexus Documentation", - description: "Learn about the core concepts in Plexus" -} - -export default function BasicsPage() { - return ( -
-

Core Concepts

Learn about the fundamental building blocks that make up Plexus.
-
-

Core Concepts

-
-
-

Items

Individual pieces of content that you want to analyze or evaluate using Plexus. Items are the core units that get scored.

Learn about Items
-

Sources

Input data for evaluation, including text and audio content. Sources are the foundation of content analysis in Plexus.

Learn about Sources
-

Scores

Individual evaluation criteria that define what to measure. Scores are the building blocks of scorecards and can range from simple questions to complex metrics.

Learn about Scores
-

Scorecards

Collections of scores that form a complete evaluation framework. Scorecards organize related evaluation criteria into meaningful groups.

Learn about Scorecards
-

Evaluations

The process of analyzing sources using scorecards to generate insights and quality metrics.

Understand Evaluations
-

Tasks

Individual units of work in Plexus, representing operations like source processing and evaluations.

Discover Tasks
-

Reports

Flexible, template-driven analyses and summaries generated from your Plexus data using reusable components.

Learn about Reports
-

Evaluation Metrics

Specialized visualization tools that help interpret agreement and accuracy metrics, especially when dealing with imbalanced data.

Understand Evaluation Metrics
-
- -
-

How It All Works Together

The Plexus workflow follows a simple pattern:

  1. Create Sources: Upload or connect your content for analysis.
  2. Define Scorecards: Set up evaluation criteria and scoring rules.
  3. Run Evaluations: Process sources using your scorecards.
  4. Monitor Tasks & View Reports: Track progress of evaluations and report generation, then review the results and generated reports.
- -
-

Next Steps

Start with Sources to learn how to add content to Plexus, then explore Scorecards to understand how to evaluate your content effectively.

Get Started with Sources
View Step-by-Step Guides
-
-
- ) -} \ No newline at end of file diff --git a/dashboard/app/documentation/concepts/scorecards/page.tsx b/dashboard/app/documentation/concepts/scorecards/page.tsx deleted file mode 100644 index 23e8d8871..000000000 --- a/dashboard/app/documentation/concepts/scorecards/page.tsx +++ /dev/null @@ -1,203 +0,0 @@ -import { Button as DocButton } from "@/components/ui/button" -import Link from "next/link" -import { Metadata } from "next" - -export const metadata: Metadata = { - title: "Scorecards - Plexus Documentation", - description: "Learn about Scorecards in Plexus - the framework for evaluating content quality and performance" -} - -export default function ScorecardsPage() { - return ( -
-

Scorecards

Understand how to create and manage Scorecards to evaluate your content effectively.
-
-

What are Scorecards?

Scorecards are collections of evaluation criteria that define how your content should be analyzed. They help ensure consistent evaluation across all your sources by providing a structured framework for assessment.

Think of a scorecard as a comprehensive evaluation template that contains all the metrics and criteria you want to measure for a specific type of content. Scorecards can be tailored to different content types, business objectives, or quality standards.
- -
-

Scorecard Structure

-
-
-

Sections

Scorecards are organized into logical sections that group related evaluation criteria. For example, a customer service scorecard might have sections for "Greeting", "Problem Resolution", and "Closing".
- -
-

Scores

Individual evaluation criteria that assess specific aspects of your content. Each score can be customized with its own evaluation logic and requirements. Scores are the building blocks of your evaluation framework.

Examples of scores include:

  • Grammar and spelling accuracy
  • Sentiment analysis (positive/negative/neutral)
  • Compliance with specific regulations
  • Presence of required information
  • Custom business-specific metrics
- -
-

Sections

Logical groupings of related scores within a scorecard. Sections help organize scores into categories for better management and understanding.
-
-

Weights

Importance factors that determine how much each score contributes to the overall evaluation result. Weights allow you to prioritize certain criteria over others based on their importance to your business objectives.
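To make the weighting idea concrete, here is a minimal sketch of weighted aggregation. This is not Plexus's internal formula, just an illustration of how weights let one criterion contribute more than another; the score names and numbers are invented:

```python
def weighted_overall(results: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-score results (each 0.0-1.0) into one overall value using weights."""
    total_weight = sum(weights.values())
    return sum(results[name] * weight for name, weight in weights.items()) / total_weight

# "Problem Resolution" is weighted 3x as important as the other criteria
results = {"Greeting": 1.0, "Problem Resolution": 0.5, "Closing": 1.0}
weights = {"Greeting": 1.0, "Problem Resolution": 3.0, "Closing": 1.0}
print(weighted_overall(results, weights))  # 0.7
```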
-
-

Versions

Score configurations are versioned, allowing you to track changes over time, compare different implementations, and promote specific versions to champion status.
-
-
- -
-

CLI Management

The Plexus CLI provides powerful commands for managing scorecards:
-
-

Listing Scorecards

```
# List all scorecards for an account
plexus scorecards list "account-name"

# List with filtering
plexus scorecards list "account-name" --name "Scorecard Name"
plexus scorecards list "account-name" --key "scorecard-key"

# Performance options
plexus scorecards list "account-name" --fast  # Skip fetching scores for faster results
plexus scorecards list "account-name" --hide-scores  # Don't display scores in output
```

The list command uses an optimized single GraphQL query to fetch scorecards, sections, and scores in one request, providing significantly faster performance.
-
-

Viewing Score Details

```
# View a specific score by name, key, ID, or external ID
plexus scorecards score "Score Name" --account "account-name"
plexus scorecards score "score-key" --account "account-name"
plexus scorecards score "score-id" --show-versions --show-config

# Scope to a specific scorecard
plexus scorecards score "Score Name" --scorecard "Scorecard Name"
```
-

Listing Scores in a Scorecard

To list all scores within a scorecard, use the scores list command:

```
# List all scores in a scorecard
plexus scores list --scorecard "Scorecard Name"

# You can also use the singular form
plexus score list --scorecard "Scorecard Name"
```

This command displays all scores organized by section, including their IDs, keys, and external IDs.
-
-

Version Management

```
# View version history (coming soon)
plexus scorecards history --account-key "account-key" --score-key "score-key"

# Promote a version to champion (coming soon)
plexus scorecards promote --account-key "account-key" --score-id "score-id" --version-id "version-id"

# Pull latest champion versions (coming soon)
plexus scorecards pull --account-key "account-key"

# Push local changes as new versions
plexus scorecards push --scorecard "scorecard-name" --note "Updated configuration"
```
-
- -
-

Best Practices

-
-
-

Scorecard Organization

Group related scores into logical sections to improve clarity and maintainability. Use consistent naming conventions for scorecards, sections, and scores.
-
-

Version Management

Add descriptive notes to new versions to document changes. Test new versions thoroughly before promoting them to champion status.
-
-

Performance Considerations

Use the --fast option when listing many scorecards to improve performance. This skips fetching score details when you only need basic scorecard information.
-
-
- -
-

Coming Soon

Additional scorecard features are being developed. Check back soon for:

  • Advanced score configuration options
  • Collaborative editing features
  • Performance analytics
  • Bulk operations for scorecard management
-
-
- ) -} \ No newline at end of file diff --git a/dashboard/app/documentation/concepts/sources/page.tsx b/dashboard/app/documentation/concepts/sources/page.tsx deleted file mode 100644 index 6d7efeb4e..000000000 --- a/dashboard/app/documentation/concepts/sources/page.tsx +++ /dev/null @@ -1,53 +0,0 @@ -export default function SourcesPage() { - return ( -
-

Sources

Learn about Sources in Plexus and how they form the foundation of your evaluation workflows.
-
-

What are Sources?

Sources are the input data that you want to evaluate using Plexus. They can be text, audio files, or other supported formats that you want to analyze using AI models.
- -
-

Types of Sources

-
-
-

Text Sources

Plain text content that can be evaluated for various metrics like sentiment, quality, or compliance with specific criteria.
-
-

Audio Sources

Audio recordings that can be transcribed and analyzed for content, quality, or specific patterns.
-
-
- -
-

Coming Soon

Detailed documentation about Sources is currently being developed. Check back soon for:

  • Source creation and management
  • Supported formats and limitations
  • Best practices for organizing sources
  • Advanced source configurations
-
-
- ) -} \ No newline at end of file diff --git a/dashboard/app/documentation/evaluation-metrics/page.tsx b/dashboard/app/documentation/evaluation-metrics/page.tsx deleted file mode 100644 index 6b3c7d309..000000000 --- a/dashboard/app/documentation/evaluation-metrics/page.tsx +++ /dev/null @@ -1,480 +0,0 @@ -import { Button as DocButton } from "@/components/ui/button" -import Link from "next/link" -import { Metadata } from "next" -import { GaugeThresholdComputer } from "@/utils/gauge-thresholds" -import EvaluationCard from '@/components/EvaluationCard' -import { Segment } from "@/components/gauge" - -export const metadata: Metadata = { - title: "Interpreting Evaluation Metrics - Plexus Documentation", - description: "Understanding the challenges of interpreting classifier accuracy and an overview of Plexus solutions." -} - -// Helper function to create sample score data for examples -const createExampleScore = ( - id: string, - name: string, - ac1: number, - accuracy: number, - itemCount: number, - mismatches: number, - labelDistribution?: Record -) => ({ - id, - score_name: name, - cc_question_id: `example-${id}`, - ac1, - item_count: itemCount, - mismatches, - accuracy, - label_distribution: labelDistribution -}) - -// Define fixed segments for the illustrative accuracy gauges in the initial scenarios (kept for initial coin flip examples if those are retained in narrative) -const fixedAccuracyGaugeSegments: Segment[] = [ - { start: 0, end: 50, color: 'var(--gauge-inviable)' }, - { start: 50, end: 70, color: 'var(--gauge-converging)' }, - { start: 70, end: 80, color: 'var(--gauge-almost)' }, - { start: 80, end: 90, color: 'var(--gauge-viable)' }, - { start: 90, end: 100, color: 'var(--gauge-great)' }, -]; - -export default function EvaluationMetricsPage() { - // Article Topic Labeler - Our consistent example through the document - const articleTopicLabelerExampleData = { - id: 'article-topic-labeler', - score_name: 'Article Topic Labeler Performance', - 
cc_question_id: 'example-topic-labeler', - accuracy: 62.0, - item_count: 100, - mismatches: 38, // 100 - 62 - gwetAC1: 0.512, // Lower AC1 reflecting 62% accuracy - label_distribution: { - 'News': 40, - 'Sports': 15, - 'Business': 15, - 'Technology': 15, - 'Lifestyle': 15 - } - }; - - const articleTopicLabelerClassDistribution = [ - { label: "News", count: 40 }, - { label: "Sports", count: 15 }, - { label: "Business", count: 15 }, - { label: "Technology", count: 15 }, - { label: "Lifestyle", count: 15 } - ]; - - const articleTopicLabelerConfusionMatrix = { - labels: ["News", "Sports", "Business", "Technology", "Lifestyle"], - matrix: [ - { actualClassLabel: "News", predictedClassCounts: { "News": 28, "Sports": 3, "Business": 3, "Technology": 3, "Lifestyle": 3 } }, - { actualClassLabel: "Sports", predictedClassCounts: { "News": 3, "Sports": 9, "Business": 1, "Technology": 1, "Lifestyle": 1 } }, - { actualClassLabel: "Business", predictedClassCounts: { "News": 3, "Sports": 1, "Business": 8, "Technology": 2, "Lifestyle": 1 } }, - { actualClassLabel: "Technology", predictedClassCounts: { "News": 3, "Sports": 1, "Business": 2, "Technology": 8, "Lifestyle": 1 } }, - { actualClassLabel: "Lifestyle", predictedClassCounts: { "News": 3, "Sports": 1, "Business": 1, "Technology": 1, "Lifestyle": 9 } }, - ], - }; - - const articleTopicLabelerPredictedDistribution = [ - { label: "News", count: 40 }, - { label: "Sports", count: 15 }, - { label: "Business", count: 15 }, - { label: "Technology", count: 15 }, - { label: "Lifestyle", count: 15 } - ]; - - // Segments for the final Article Topic Labeler example (fully contextualized) - const articleTopicLabelerFullContextSegments = GaugeThresholdComputer.createSegments( - GaugeThresholdComputer.computeThresholds(articleTopicLabelerExampleData.label_distribution) - ); - - // Coin flip examples for the narrative - const fairCoinData = createExampleScore( - 'fair-coin', - 'Randomly Guessing Coin Flips (50/50)', - -0.04, - 48.0, - 100, - 
52, - { 'Heads': 50, 'Tails': 50 } - ) - - const alwaysHeadsData = createExampleScore( - 'always-heads', - 'Always Guessing "Heads" (50/50)', - 0.02, - 51.0, - 100, - 49, - { 'Heads': 51, 'Tails': 49 } - ) - - const fairCoinDistribution = [ - { label: "Heads", count: 51 }, - { label: "Tails", count: 49 } - ]; - - const predictedFairCoinData = [ - { label: "Heads", count: 50 }, - { label: "Tails", count: 50 } - ]; - - const predictedAlwaysHeadsData = [ - { label: "Heads", count: 100 }, - { label: "Tails", count: 0 } - ]; - - const fairCoinConfusionMatrix = { - labels: ["Heads", "Tails"], - matrix: [ - { actualClassLabel: "Heads", predictedClassCounts: { "Heads": 24, "Tails": 26 } }, - { actualClassLabel: "Tails", predictedClassCounts: { "Heads": 26, "Tails": 24 } }, - ], - }; - - const alwaysHeadsConfusionMatrix = { - labels: ["Heads", "Tails"], - matrix: [ - { actualClassLabel: "Heads", predictedClassCounts: { "Heads": 51, "Tails": 0 } }, - { actualClassLabel: "Tails", predictedClassCounts: { "Heads": 49, "Tails": 0 } }, - ], - }; - - // Card Suit Guessing Example Data for narrative - const cardSuitData = createExampleScore( - 'card-suit-guessing', - 'Predicting a Card Suit (4 Classes, Random Guessing)', - -0.03, - 23.0, - 208, - 160, - { '♥️': 52, '♦️': 52, '♣️': 52, '♠️': 52 } - ); - - const cardSuitActualDistribution = [ - { label: "♥️", count: 52 }, - { label: "♦️", count: 52 }, - { label: "♣️", count: 52 }, - { label: "♠️", count: 52 } - ]; - - const cardSuitConfusionMatrix = { - labels: ["♥️", "♦️", "♣️", "♠️"], - matrix: [ - { actualClassLabel: "♥️", predictedClassCounts: { "♥️": 12, "♦️": 13, "♣️": 13, "♠️": 14 } }, - { actualClassLabel: "♦️", predictedClassCounts: { "♥️": 13, "♦️": 12, "♣️": 14, "♠️": 13 } }, - { actualClassLabel: "♣️", predictedClassCounts: { "♥️": 13, "♦️": 14, "♣️": 12, "♠️": 13 } }, - { actualClassLabel: "♠️", predictedClassCounts: { "♥️": 14, "♦️": 13, "♣️": 13, "♠️": 12 } }, - ], - }; - - const cardSuitPredictedDistribution = [ - { 
label: "♥️", count: 12+13+13+14 }, - { label: "♦️", count: 13+12+14+13 }, - { label: "♣️", count: 13+14+12+13 }, - { label: "♠️", count: 14+13+13+12 } - ]; - - return ( -
-

Interpreting Evaluation Metrics: The Challenge

Understanding metrics like accuracy is key to evaluating AI performance. However, raw numbers can be deceptive without proper context. This page explores common pitfalls and introduces Plexus's approach to clearer, more reliable evaluation.
-
-

The Big Question: Is This Classifier Good?

When developing an AI system, we need gauges to tell if our model is performing well. Let's consider an "Article Topic Labeler" that classifies articles into five categories: News, Sports, Business, Technology, and Lifestyle. Evaluated on 100 articles, it achieves 62% accuracy.
-

Is 62% accuracy good?

This number seems mediocre. The uncontextualized gauge suggests it's just 'converging'. But is this poor performance, or is there more to the story?

Intuitively, 62% seems somewhat weak: nearly 4 out of 10 articles are wrong. But to judge this, we need a baseline: what accuracy would random guessing achieve?
- -
-

Pitfall 1: Ignoring the Baseline (Chance Agreement)

Raw accuracy is meaningless without knowing the chance agreement rate. Consider predicting 100 coin flips:
- -

~50% accuracy achieved.

But is this good guessing without knowing the chance baseline?
- } - /> - - -

~51% accuracy achieved.

Slightly better, but still hovering around the 50% chance rate.
- } - /> -
- -
-

Key Insight: The Baseline Problem

Both strategies hover around 50% accuracy. This is the base random-chance agreement rate for a binary task. Without understanding this baseline, raw accuracy numbers are uninterpretable. Any reported accuracy must be compared against what random chance would yield for that specific problem.
- - -
-

Pitfall 2: The Moving Target of Multiple Classes

The chance agreement rate isn't fixed; it changes with the number of classes. For example, consider guessing the suit of a randomly drawn card from a standard 4-suit deck:
-

~23% accuracy in this run.

The fixed gauge makes this look terrible. Is it?
-
-

Misleading Raw View

For a 4-class problem, 25% is the actual random chance baseline. The raw gauge is deceptive here.
-

Key Insight: Number of Classes Shifts the Baseline

The baseline random-chance agreement rate dropped from 50% (for 2 classes like coin flips) to 25% (for 4 classes like card suits). This is a critical concept: as the number of equally likely options increases, the accuracy you'd expect from random guessing decreases. Therefore, a 30% accuracy is much better for a 10-class problem (10% chance) than for a 2-class problem (50% chance).
-
- -
-

Pitfall 3: The Illusion of Class Imbalance

-

- The distribution of classes in your data (class balance) adds another layer of complexity. If a dataset is imbalanced, a classifier can achieve high accuracy by simply always predicting the majority class, even if it has no real skill. -

-
- -

~52% accuracy.

-

Strategy doesn't exploit the deck's known 75/25 imbalance.

-
-

75% accuracy!

-
-
-

Deceptively High!

-

- This 75% is achieved by exploiting the imbalance (always guessing majority), not by skill. -
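The 75/25 deck can be simulated in a few lines to show both strategies side by side (a sketch with made-up data; "red" stands in for the majority class):

```python
import random

random.seed(7)

# An imbalanced "deck": 75% red cards, 25% black cards.
deck = ["red"] * 75 + ["black"] * 25
draws = [random.choice(deck) for _ in range(10_000)]

def accuracy(guesses, truth):
    return sum(g == t for g, t in zip(guesses, truth)) / len(truth)

# Uniform random guessing ignores the imbalance.
uniform = [random.choice(["red", "black"]) for _ in range(10_000)]

# Always guessing the majority class exploits the imbalance.
majority = ["red"] * 10_000

print(accuracy(uniform, draws))   # ~0.50
print(accuracy(majority, draws))  # ~0.75
```

The ~75% requires no skill at all: it is simply the majority class proportion read back as accuracy.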

-
-

- A more extreme example: an email filter claims 97% accuracy at detecting prohibited content. However, if only 3% of emails actually contain such content, a filter that labels *every single email* as "safe" (catching zero violations) will achieve 97% accuracy. -

- -
-

97% accuracy! Sounds great?

-
-
-

CRITICAL FLAW!

-

- This model detects ZERO prohibited content. It's worse than useless, providing a false sense of security. -

-
-

Key Insight: Imbalance Inflates Naive Accuracy

-

- Raw accuracy scores are deeply misleading without considering class imbalance. A high accuracy might simply reflect the majority class proportion, not actual predictive power. A 97% accuracy could be excellent for a balanced problem, mediocre for a moderately imbalanced one, or indicative of complete failure in rare event detection. -
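The email-filter failure mode is easy to demonstrate: looking beyond accuracy to recall exposes what the headline number hides. A sketch with synthetic labels (3% of 1,000 emails prohibited, matching the scenario above):

```python
# Synthetic ground truth: 3% of 1,000 emails contain prohibited content.
truth = ["prohibited"] * 30 + ["safe"] * 970

# A "filter" that labels every single email as safe.
predictions = ["safe"] * 1000

accuracy = sum(p == t for p, t in zip(predictions, truth)) / len(truth)

# Recall: of the actually prohibited emails, how many did we catch?
true_positives = sum(p == "prohibited" and t == "prohibited"
                     for p, t in zip(predictions, truth))
recall = true_positives / truth.count("prohibited")

print(f"accuracy: {accuracy:.0%}")  # accuracy: 97%
print(f"recall:   {recall:.0%}")    # recall: 0%
```

The 97% accuracy and 0% recall describe the same model: it detects nothing, yet its raw accuracy mirrors the class imbalance.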

-
- - -
-

Plexus's Solution: A Unified Approach to Clarity

-

- To overcome these common pitfalls and provide a true understanding of classifier performance, Plexus employs a two-pronged strategy that combines contextualized raw metrics with inherently context-aware agreement scores: -

-
    -
  1. - Contextualized Accuracy Gauges: We don't just show raw accuracy; we show it on a dynamic visual scale. The colored segments of our Accuracy gauges adapt based on the number of classes *and* their distribution in your specific data. This immediately helps you interpret if an accuracy score is good, bad, or indifferent *for that particular problem context*. -
  2. -
  3. - Inherently Context-Aware Agreement Gauges: Alongside accuracy, we prominently feature an Agreement gauge (typically using Gwet's AC1). This metric is specifically designed to calculate a chance-corrected measure of agreement. It *internally* accounts for the number of classes and their distribution, providing a standardized score (0 = chance, 1 = perfect) that reflects skill beyond random guessing. This score is directly comparable across different problems and datasets. -
  4. -
-

- Let's see how this unified approach clarifies the performance of our Article Topic Labeler (which had 62% raw accuracy, 5 classes, and an imbalanced distribution with 40% "News"): -

- - - -
-

The Power of Two Gauges

-

- This combined approach offers robust and intuitive understanding: -

-
    -
  • The Contextualized Accuracy Gauge clarifies what the raw 62% accuracy means for *this specific task's complexities* (5 classes, imbalanced).
  • -
  • The Agreement Gauge provides a single, standardized score (AC1 of 0.512) measuring performance *above chance*, directly comparable across different problems.
  • -
-

- Together, they prevent misinterpretations of raw accuracy and offer true insight into a classifier's performance. -
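For reference, Gwet's AC1 for two raters can be sketched directly from its standard formula. This is a minimal illustration, not Plexus's internal implementation, and the example labels are hypothetical:

```python
from collections import Counter

def gwet_ac1(rater_a, rater_b):
    """Gwet's AC1 chance-corrected agreement for two raters.

    AC1 = (Po - Pe) / (1 - Pe), where Pe uses average marginal
    proportions: Pe = (1 / (Q - 1)) * sum_q pi_q * (1 - pi_q).
    """
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    q = len(categories)

    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement from average marginal proportions per category.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(
        ((counts_a[c] + counts_b[c]) / (2 * n))
        * (1 - (counts_a[c] + counts_b[c]) / (2 * n))
        for c in categories
    ) / (q - 1)

    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels for 10 items across 3 topic classes.
a = ["news", "news", "sports", "tech", "news", "tech", "sports", "news", "tech", "news"]
b = ["news", "sports", "sports", "tech", "news", "tech", "news", "news", "tech", "tech"]
print(round(gwet_ac1(a, b), 3))  # 0.56
```

Like the Agreement gauge, the result reads as 0 for chance-level labeling and 1 for perfect agreement, regardless of how many classes there are or how they are distributed.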

-
- -
-

Dive Deeper into the Solutions

-

- To understand the detailed mechanics of how Plexus contextualizes Accuracy gauges and how the Agreement gauge works across various scenarios, explore our dedicated guide: -

- Understanding Gauges with Context -
-
- -
-

Next Steps

-

- Explore further documentation to enhance your understanding: -

-
- Detailed: Gauges with Context
- View More Examples
- Learn about Evaluations
- Explore Reports
-
- -
-
- ); -} \ No newline at end of file diff --git a/dashboard/app/documentation/methods/add-edit-score/page.tsx b/dashboard/app/documentation/methods/add-edit-score/page.tsx deleted file mode 100644 index 4ebfc5e0f..000000000 --- a/dashboard/app/documentation/methods/add-edit-score/page.tsx +++ /dev/null @@ -1,264 +0,0 @@ -export default function AddEditScorePage() { - return ( -
-

Add/Edit a Score

-

- Learn how to create and manage individual scores within scorecards using the Plexus dashboard interface. -

- -
-
-

Adding Scores in the Dashboard

-

- Scores are individual evaluation criteria within a scorecard. The dashboard provides - an intuitive interface for creating and configuring scores. -

- -
-
-

Step-by-Step Guide

-
    -
  1. - Access Score Creation: -

    Open your scorecard and click "Add Score" or edit an existing scorecard.

    -
  2. -
  3. - Choose Score Type: -

    Select from available score types:

    -
      -
    • Sentiment Analysis
    • -
    • Content Quality
    • -
    • Grammar Check
    • -
    • Custom Metrics
    • -
    -
  4. -
  5. - Configure Parameters: -

    Set up the score configuration:

    -
      -
    • Score name and description
    • -
    • Weight (importance in overall scorecard)
    • -
    • Threshold (minimum acceptable score)
    • -
    • Custom parameters specific to the score type
    • -
    -
  6. -
  7. - Preview and Test: -

    Use the preview feature to test the score against sample content.

    -
  8. -
  9. - Save Score: -

    Click "Add Score" to include it in your scorecard.

    -
  10. -
-
- -
-

Editing Existing Scores

-
    -
  1. - Locate the Score: -

    Find the score you want to modify within your scorecard.

    -
  2. -
  3. - Access Edit Mode: -

    Click the edit icon next to the score.

    -
  4. -
  5. - Modify Settings: -

    Update the score's configuration as needed.

    -
  6. -
  7. - Save Changes: -

    Click "Save" to apply your modifications.

    -
  8. -
-
-
-
- -
-

Score Version Management

-

- Scores in Plexus support versioning, allowing you to track changes and manage different implementations: -

- -
-
-

Creating New Versions

-

- When you edit a score and save changes, a new version is automatically created. - You can add notes to document the changes made in each version. -

-
-
-

Champion Versions

-

- Each score has a designated "champion" version that is used for evaluations. - You can promote any version to champion status when you're satisfied with its performance. -

-
-
-

Featured Versions

-

- Mark important versions as "featured" to highlight them in the version history. - This helps track significant milestones in your score's development. -

-
-
-
- -
-

Score Configuration Tips

-
-
-

Weight Balancing

-

- Carefully consider the relative importance of each score when setting weights. - The total of all weights in a scorecard should equal 1.0. -
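One way to sanity-check a scorecard's weights, and to see how they combine into an overall result, is a small weighted-aggregation sketch (score names and values here are hypothetical, not a Plexus API):

```python
# Hypothetical scorecard configuration: weights must sum to 1.0.
scores = {
    "Grammar Check":   {"weight": 0.3, "value": 0.9},
    "Content Quality": {"weight": 0.5, "value": 0.7},
    "Sentiment":       {"weight": 0.2, "value": 0.8},
}

total_weight = sum(s["weight"] for s in scores.values())
assert abs(total_weight - 1.0) < 1e-9, f"weights sum to {total_weight}, not 1.0"

# Overall score is the weight-averaged value of the individual scores.
overall = sum(s["weight"] * s["value"] for s in scores.values())
print(f"overall scorecard score: {overall:.2f}")  # overall scorecard score: 0.78
```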

-
-
-

Threshold Setting

-

- Set appropriate thresholds based on your quality requirements and test - with representative content samples. -

-
-
-

Score Types

-

- Choose score types that align with your evaluation goals. Combine different - types to create comprehensive assessments. -

-
-
-
- -
-

Using the CLI

-

- For automated score management, you can use the Plexus CLI: -

- -
-            {`# View detailed information about a score
-plexus scorecards score "Score Name" --account "account-name"
-plexus scorecards score "score-key" --account "account-name"
-
-# Show version history and configuration
-plexus scorecards score "Score Name" --account "account-name" --show-versions --show-config
-
-# List all scores for a specific scorecard
-plexus scorecards list-scores --scorecard-id "scorecard-id"
-
-# Coming soon:
-# View version history for a score
-plexus scorecards history --account-key "account-key" --score-key "score-key"
-
-# Promote a version to champion
-plexus scorecards promote --account-key "account-key" --score-id "score-id" --version-id "version-id"
-
-# Add a new score to a scorecard
-plexus scores add --scorecard-id "card-id" --name "Quality Score" --type quality --weight 0.5
-
-# List all scores in a scorecard
-plexus scores list --scorecard "Quality Assurance"
-
-# View score configuration
-plexus scores info --score "Grammar Check"`}
-          
- -
-
-

Efficient Score Lookup

-

- The score command supports multiple lookup methods: -

-
    -
  • By ID: plexus scorecards score "score-id"
  • -
  • By key: plexus scorecards score "score-key"
  • -
  • By name: plexus scorecards score "Score Name"
  • -
  • By external ID: plexus scorecards score "external-id"
  • -
-

- You can scope the search to a specific account or scorecard for faster results. -

-
-
-
- -
-

Python SDK Reference

-

- For programmatic score management, you can use the Python SDK: -

- -
-            {`from plexus import Plexus
-
-plexus = Plexus(api_key="your-api-key")
-
-# Get a scorecard using any identifier (name, key, ID, or external ID)
-scorecard = plexus.scorecards.get("Quality Assurance")
-
-# Get a score using any identifier
-score = plexus.scores.get("Grammar Check")
-
-# Get all scores in a scorecard
-scores = scorecard.get_scores()
-
-# Get score configuration
-config = score.get_configuration()
-
-# Get score evaluation results
-results = score.get_results(limit=10)`}
-          
- -

- Like the CLI, the Python SDK also supports the flexible identifier system, allowing you to reference resources using different types of identifiers. -

-
- -
-

YAML Configuration

-

- Scores can be configured using YAML for advanced customization: -

- -
-            {`name: Quality Score
-key: quality-score
-externalId: score_123
-type: LangGraphScore
-parameters:
-  check_grammar: true
-  check_style: true
-  min_word_count: 100
-threshold: 0.8
-weight: 0.5`}
-          
- -

- Coming soon: The ability to pull and push YAML configurations using the CLI for offline editing and version control. -

-
- -
-

Coming Soon

-

- Additional score features are being developed. Check back soon for: -

-
    -
  • New score types and metrics
  • -
  • Advanced scoring algorithms
  • -
  • Custom evaluation parameters
  • -
  • Score performance analytics
  • -
  • Bulk score operations
  • -
  • YAML synchronization for offline editing
  • -
-
-
-
- ) -} \ No newline at end of file diff --git a/dashboard/app/documentation/methods/add-edit-scorecard/page.tsx b/dashboard/app/documentation/methods/add-edit-scorecard/page.tsx deleted file mode 100644 index 1f66d942f..000000000 --- a/dashboard/app/documentation/methods/add-edit-scorecard/page.tsx +++ /dev/null @@ -1,215 +0,0 @@ -export default function AddEditScorecardPage() { - return ( -
-

Add/Edit a Scorecard

-

- Learn how to create and manage scorecards using the Plexus dashboard interface. -

- -
-
-

Creating a Scorecard in the Dashboard

-

- Scorecards define the criteria for evaluating your content. The dashboard provides - an intuitive interface for creating and managing scorecards. -

- -
-
-

Step-by-Step Guide

-
    -
  1. - Access Scorecards: -

    Navigate to the "Scorecards" section in the main navigation menu.

    -
  2. -
  3. - Create New Scorecard: -

    Click the "New Scorecard" button in the top-right corner.

    -
  4. -
  5. - Basic Information: -

    Fill in the scorecard details:

    -
      -
    • Scorecard name
    • -
    • Description
    • -
    • Category/tags (optional)
    • -
    -
  6. -
  7. - Add Scores: -

    Click "Add Score" to include evaluation criteria:

    -
      -
    • Select score type
    • -
    • Configure score parameters
    • -
    • Set weight and threshold
    • -
    -
  8. -
  9. - Save Scorecard: -

    Click "Create" to save your new scorecard.

    -
  10. -
-
- -
-

Editing a Scorecard

-
    -
  1. - Find the Scorecard: -

    Locate the scorecard you want to modify in the Scorecards list.

    -
  2. -
  3. - Enter Edit Mode: -

    Click the edit icon or select "Edit" from the actions menu.

    -
  4. -
  5. - Make Changes: -

    Modify scorecard details, add/remove scores, or adjust weights.

    -
  6. -
  7. - Save Updates: -

    Click "Save Changes" to apply your modifications.

    -
  8. -
-
-
-
- -
-

Scorecard Management Tips

-
-
-

Organization

-

- Use meaningful names and descriptions to keep your scorecards organized. - Consider using tags to group related scorecards. -

-
-
-

Score Weights

-

- Balance score weights to reflect the relative importance of each criterion - in your evaluation process. -

-
-
-

Templates

-

- Save commonly used scorecard configurations as templates for quick reuse. -

-
-
-
- -
-

Using the CLI

-

- For automated scorecard management, you can use the Plexus CLI: -

- -
-            {`# List scorecards with optimized performance
-plexus scorecards list "account-name" --fast
-
-# View a specific scorecard by filtering
-plexus scorecards list "account-name" --name "Content Quality"
-
-# View detailed information about a score
-plexus scorecards score "score-name" --account "account-name" --show-versions
-
-# Coming soon:
-# Create a new scorecard
-plexus scorecards create --name "Content Quality" --description "Evaluates content quality"
-
-# Get detailed information about a specific scorecard
-plexus scorecards info --scorecard "Content Quality"
-
-# List all scores in a scorecard
-plexus scorecards list-scores --scorecard "Content Quality"
-
-# Pull scorecard configuration to YAML
-plexus scorecards pull --scorecard "Content Quality" --output ./my-scorecards
-
-# Push scorecard configuration from YAML
-plexus scorecards push --scorecard "Content Quality" --file ./my-scorecard.yaml --note "Updated configuration"
-
-# Delete a scorecard
-plexus scorecards delete --scorecard "Content Quality"`}
-          
- -
-
-

Performance Considerations

-

- The CLI now uses optimized GraphQL queries to fetch scorecard data efficiently: -

-
    -
  • - Single Query Approach: Instead of making separate queries for each scorecard's sections and scores, - the system now fetches all data in a single comprehensive GraphQL query. -
  • -
  • - Fast Mode: Use the --fast option to skip fetching sections and scores when you only need basic scorecard information. -
  • -
  • - Hide Scores: Use --hide-scores to exclude score details from the output while still fetching basic scorecard data. -
  • -
-
-
-
- -
-

Python SDK Reference

-

- For programmatic scorecard management, you can use the Python SDK: -

- -
-            {`from plexus import Plexus
-
-plexus = Plexus(api_key="your-api-key")
-
-# Get a scorecard using any identifier (name, key, ID, or external ID)
-scorecard = plexus.scorecards.get("Content Quality")
-
-# List all scorecards
-scorecards = plexus.scorecards.list()
-
-# Get all scores in a scorecard
-scores = scorecard.get_scores()
-
-# Export scorecard to YAML
-yaml_config = scorecard.to_yaml()
-with open("scorecard.yaml", "w") as f:
-    f.write(yaml_config)
-
-# Import scorecard from YAML
-with open("scorecard.yaml", "r") as f:
-    yaml_content = f.read()
-    
-new_scorecard = plexus.scorecards.from_yaml(yaml_content)`}
-          
- -

- Like the CLI, the Python SDK also supports the flexible identifier system, allowing you to reference scorecards using different types of identifiers. -

-
- -
-

Coming Soon

-

- Additional scorecard features are being developed. Check back soon for: -

-
    -
  • Advanced score configuration options
  • -
  • Scorecard version control
  • -
  • Collaborative editing features
  • -
  • Performance analytics
  • -
  • YAML synchronization for offline editing
  • -
-
-
-
- ) -} \ No newline at end of file diff --git a/dashboard/app/documentation/methods/add-edit-source/page.tsx b/dashboard/app/documentation/methods/add-edit-source/page.tsx deleted file mode 100644 index 35ad0c8b7..000000000 --- a/dashboard/app/documentation/methods/add-edit-source/page.tsx +++ /dev/null @@ -1,150 +0,0 @@ -export default function AddEditSourcePage() { - return ( -
-

Add/Edit a Source

-

- Learn how to create and manage sources in Plexus using the dashboard interface. -

- -
-
-

Adding a Source in the Dashboard

-

- The Plexus dashboard provides an intuitive interface for creating and managing your sources. - Follow these steps to add a new source: -

- -
-
-

Step-by-Step Guide

-
    -
  1. - Navigate to Sources: -

    Click on "Sources" in the main navigation menu to access the sources management page.

    -
  2. -
  3. - Create New Source: -

    Click the "Add Source" button in the top-right corner of the page.

    -
  4. -
  5. - Choose Source Type: -

    Select the type of source you want to create (e.g., Text, Audio).

    -
  6. -
  7. - Configure Settings: -

    Fill in the required information:

    -
      -
    • Source name
    • -
    • Description (optional)
    • -
    • Content or file upload
    • -
    • Additional settings specific to the source type
    • -
    -
  8. -
  9. - Save: -

    Click "Create" to save your new source.

    -
  10. -
-
- -
-

Editing an Existing Source

-
    -
  1. - Locate the Source: -

    Find the source you want to edit in the Sources list.

    -
  2. -
  3. - Access Edit Mode: -

    Click the edit icon (pencil) next to the source name.

    -
  4. -
  5. - Make Changes: -

    Update the source's information as needed.

    -
  6. -
  7. - Save Changes: -

    Click "Save" to apply your updates.

    -
  8. -
-
-
-
- -
-

Source Management Tips

-
-
-

Organization

-

- Use clear, descriptive names and optional tags to keep your sources organized - and easily searchable. -

-
-
-

Batch Operations

-

- Select multiple sources to perform batch operations like deletion or tag updates. -

-
-
-
- -
-

Using the CLI

-

- For automation and scripting, you can use the Plexus CLI to manage sources: -

- -
-            {`# Create a new source
-plexus sources create --name "My Source" --type text --content "Sample content"
-
-# Update an existing source
-plexus sources update source-id --name "Updated Name" --content "Updated content"`}
-          
-
- -
-

Python SDK Reference

-

- For programmatic access, you can use the Python SDK: -

- -
-            {`from plexus import Plexus
-
-plexus = Plexus(api_key="your-api-key")
-
-# Create a new source
-source = plexus.sources.create(
-    name="My Source",
-    type="text",
-    data="Sample content"
-)
-
-# Update an existing source
-source = plexus.sources.update(
-    source_id="source-id",
-    name="Updated Source Name",
-    data="Updated content"
-)`}
-          
-
- -
-

Coming Soon

-

- Additional documentation and features are being developed. Check back soon for: -

-
    -
  • Advanced source management techniques
  • -
  • Bulk import/export capabilities
  • -
  • Custom source templates
  • -
  • Integration examples
  • -
-
-
-
- ) -} \ No newline at end of file diff --git a/dashboard/app/documentation/methods/evaluate-score/page.tsx b/dashboard/app/documentation/methods/evaluate-score/page.tsx deleted file mode 100644 index 038847260..000000000 --- a/dashboard/app/documentation/methods/evaluate-score/page.tsx +++ /dev/null @@ -1,145 +0,0 @@ -export default function EvaluateScorePage() { - return ( -
-

Evaluate a Score

-

- Learn how to run evaluations using individual scores or complete scorecards. -

- -
-
-

Running an Evaluation

-

- You can evaluate content using individual scores or entire scorecards. The evaluation - process analyzes your content against the defined criteria and provides detailed results. -

- -
-
-

Using the Dashboard

-
    -
  1. Select your source content
  2. -
  3. Choose a scorecard or individual score
  4. -
  5. Click "Run Evaluation"
  6. -
  7. Monitor the evaluation progress
  8. -
  9. Review the results
  10. -
-
- -
-

Using the SDK

-
-                {`from plexus import Plexus
-
-plexus = Plexus(api_key="your-api-key")
-
-# Evaluate using a specific score (accepts ID, name, key, or external ID)
-evaluation = plexus.evaluations.create(
-    source_id="source-id",
-    score="Grammar Check"  # Can use name, key, ID, or external ID
-)
-
-# Or evaluate using an entire scorecard (accepts ID, name, key, or external ID)
-evaluation = plexus.evaluations.create(
-    source_id="source-id",
-    scorecard="Content Quality"  # Can use name, key, ID, or external ID
-)
-
-# Get evaluation results
-results = evaluation.get_results()
-
-# Print score values
-for score in results.scores:
-    print(f"{score.name}: {score.value}")`}
-              
- -

- The SDK supports the flexible identifier system, allowing you to reference scorecards and scores using different types of identifiers (name, key, ID, or external ID). -

-
- -
-

Using the CLI

-
-                {`# Evaluate using a scorecard
-plexus evaluate accuracy --scorecard "Content Quality" --number-of-samples 100
-
-# List evaluation results
-plexus evaluations list
-
-# View detailed results for a specific evaluation
-plexus evaluations list-results --evaluation evaluation-id`}
-              
- -

- The CLI supports the flexible identifier system, allowing you to reference scorecards using different types of identifiers (name, key, ID, or external ID). -

-
-
-
- -
-

Understanding Results

-
-
-

Score Values

-

- Numerical or categorical results for each evaluated criterion. -

-
-
-

Explanations

-

- Detailed reasoning behind each score's evaluation result. -

-
-
-

Suggestions

-

- Recommendations for improvement based on the evaluation results. -

-
-
-
- -
-

Batch Evaluations

-

- You can evaluate multiple sources at once using batch processing: -

- -
-            {`# Create a batch evaluation
-batch = plexus.evaluations.create_batch(
-    source_ids=["source-1", "source-2", "source-3"],
-    scorecard="Quality Assurance"  # Can use name, key, ID, or external ID
-)
-
-# Monitor batch progress
-status = batch.get_status()
-
-# Get results when complete
-results = batch.get_results()`}
-          
- -

- Like individual evaluations, batch evaluations also support the flexible identifier system for scorecards and scores. -
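A common pattern is to poll a batch until it reaches a terminal state before fetching results. A sketch of such a helper; the status strings and polling interval are assumptions, not documented API values, so check them against your deployment:

```python
import time

def wait_for_batch(batch, poll_seconds=5, timeout_seconds=600):
    """Poll a batch evaluation until it finishes, then return its final status.

    Assumes `batch.get_status()` returns a string; the terminal status
    names below ("COMPLETED", "FAILED") are hypothetical.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = batch.get_status()
        if status in ("COMPLETED", "FAILED"):
            return status
        time.sleep(poll_seconds)  # avoid hammering the API
    raise TimeoutError("batch evaluation did not finish in time")
```

Used with the batch example above, this might look like: `if wait_for_batch(batch) == "COMPLETED": results = batch.get_results()`.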

-
- -
-

Coming Soon

-

- Detailed documentation about evaluations is currently being developed. Check back soon for: -

-
    -
  • Advanced evaluation options
  • -
  • Custom result formatting
  • -
  • Evaluation performance optimization
  • -
  • Result analysis techniques
  • -
-
-
-
- ) -} \ No newline at end of file diff --git a/dashboard/app/documentation/methods/monitor-tasks/page.tsx b/dashboard/app/documentation/methods/monitor-tasks/page.tsx deleted file mode 100644 index 8ab13eb34..000000000 --- a/dashboard/app/documentation/methods/monitor-tasks/page.tsx +++ /dev/null @@ -1,135 +0,0 @@ -export default function MonitorTasksPage() { - return ( -
-

Monitor Tasks

-

- Learn how to track and manage tasks in your Plexus deployment. -

- -
-
-

Task Monitoring

-

- Tasks represent individual units of work in Plexus, such as evaluations, - source processing, or model training. You can monitor tasks through both - the web dashboard and the command line interface. -

- -
-
-

Using the Dashboard

-

- The web dashboard provides a visual interface for monitoring tasks: -

-
    -
  1. Navigate to the Tasks section in the dashboard
  2. -
  3. View active and completed tasks in real-time
  4. -
  5. Use filters to find specific tasks by type or status
  6. -
  7. Monitor task progress with visual progress bars
  8. -
  9. View detailed task information including stages and logs
  10. -
  11. Track task performance and resource usage
  12. -
-
- -
-

Using the CLI

-

- The Plexus CLI provides powerful tools for monitoring tasks directly from your terminal: -

-
-                {`# List tasks for an account (shows 10 most recent by default)
-plexus tasks list --account your-account-key
-
-# Show all tasks instead of just the most recent
-plexus tasks list --account your-account-key --all
-
-# Filter tasks by status
-plexus tasks list --account your-account-key --status RUNNING
-plexus tasks list --account your-account-key --status COMPLETED
-plexus tasks list --account your-account-key --status FAILED
-
-# Filter tasks by type
-plexus tasks list --account your-account-key --type evaluation
-
-# Combine filters
-plexus tasks list --account your-account-key --status RUNNING --type evaluation
-
-# Limit the number of tasks shown
-plexus tasks list --account your-account-key --limit 5`}
-              
-

- The CLI output displays comprehensive task information in a well-formatted view: -

-
    -
  • Basic task details (ID, type, status, target, command)
  • -
  • Associated IDs (account, scorecard, score)
  • -
  • Current stage and worker information
  • -
  • Complete timing information (created, started, completed, estimated)
  • -
  • Color-coded status indicators (blue for running, green for completed, red for failed)
  • -
  • Error messages and details when available
  • -
  • Task metadata and output logs
  • -
-
-
-
- -
-

Danger Zone: Task Deletion

-
-

- ⚠️ Warning: Task deletion is a permanent operation. Deleted tasks cannot be recovered. - Only use these commands when you are absolutely certain about the deletion. -

- -
-

- The CLI provides commands for task deletion with built-in safety measures: -

- -
-                {`# Delete a specific task by ID
-plexus tasks delete --account your-account-key --task-id "task-id"
-
-# Delete all failed tasks for an account
-plexus tasks delete --account your-account-key --status FAILED
-
-# Delete all tasks of a specific type for an account
-plexus tasks delete --account your-account-key --type evaluation
-
-# Delete ALL tasks for a specific account
-plexus tasks delete --account your-account-key --all
-
-# Delete ALL tasks across ALL accounts (USE WITH EXTREME CAUTION)
-plexus tasks delete --all
-
-# Skip confirmation prompt with -y/--yes (USE WITH EXTREME CAUTION)
-plexus tasks delete --all -y`}
-              
- -
-

Safety Features:

-
    -
  • The --all flag is required for bulk deletion
  • -
  • Account scope is clearly indicated in confirmations
  • -
  • Confirmation prompt is shown by default (can be skipped with -y)
  • -
  • Preview of tasks to be deleted is always shown
  • -
  • Associated task stages are automatically cleaned up
  • -
  • Progress bar shows deletion status
  • -
- -

Before deleting tasks, consider:

-
    -
  • Are there any dependent operations that might be affected?
  • -
  • Do you need to keep the task records for auditing purposes?
  • -
  • Have you backed up any important task results?
  • -
  • Are you targeting the correct tasks with your filters?
  • -
  • If using --all without --account, are you certain you want to delete tasks across ALL accounts?
  • -
-
-
-
-
-
-
- ) -} \ No newline at end of file diff --git a/dashboard/app/documentation/methods/page.tsx b/dashboard/app/documentation/methods/page.tsx deleted file mode 100644 index ed1ca7a6c..000000000 --- a/dashboard/app/documentation/methods/page.tsx +++ /dev/null @@ -1,111 +0,0 @@ -import { Button as DocButton } from "@/components/ui/button" -import Link from "next/link" -import { Metadata } from "next" - -export const metadata: Metadata = { - title: "Methods - Plexus Documentation", - description: "Step-by-step guides for common operations and workflows in Plexus." -} - -export default function MethodsPage() { - return ( -
-

Methods

-

- Welcome to our step-by-step guides section. Here you'll find detailed, practical instructions for all common operations in Plexus. Whether you're setting up your first source, creating scorecards, or running evaluations, these guides will walk you through each process step by step. -

- -
-
-

Source Management

-
-
-

Adding and Editing Sources

-

- Learn how to create new sources and manage existing ones through the dashboard. -

- - View Source Management Guide - -
- -
-

Source Profiling

-

- Understand how to analyze your sources to gain insights into their characteristics. -

- - Learn About Profiling - -
-
-
- -
-

Evaluation Setup

-
-
-

Creating Scorecards

-

- Set up comprehensive evaluation criteria with custom scorecards. -

- - Explore Scorecard Creation - -
- -
-

Configuring Scores

-

- Define individual evaluation metrics and their parameters. -

- - Configure Score Settings - -
-
-
- -
-

Running Evaluations

-
-
-

Evaluating Content

-

- Process your sources using scorecards to generate insights. -

- - Start Evaluating Content - -
- -
-

Task Management

-

- Track and manage evaluation tasks through their lifecycle. -

- - Monitor Your Tasks - -
-
-
- -
-

Next Steps

-

- Ready to get started? Begin with source management to set up your content for evaluation. -

-
- Start Managing Sources
- Review Core Concepts
-
-
-
- ) -} \ No newline at end of file diff --git a/dashboard/app/documentation/methods/profile-source/page.tsx b/dashboard/app/documentation/methods/profile-source/page.tsx deleted file mode 100644 index 501a577a9..000000000 --- a/dashboard/app/documentation/methods/profile-source/page.tsx +++ /dev/null @@ -1,157 +0,0 @@ -export default function ProfileSourcePage() { - return ( -
-

Profile a Source

-

- Learn how to analyze and profile your sources using the Plexus dashboard interface. -

- -
-
-

Profiling Sources in the Dashboard

-

- Source profiling helps you understand the characteristics and patterns in your data - before running evaluations. The dashboard provides comprehensive tools for analyzing - your sources. -

- -
-
-

Step-by-Step Guide

-
    -
  1. - Access Source Details: -

    Navigate to your source in the Sources list and click on it to view details.

    -
  2. -
  3. - Start Profiling: -

    Click the "Profile Source" button in the source details view.

    -
  4. -
  5. - Configure Analysis: -

    Select the profiling options you want to run:

    -
      -
    • Content analysis
    • -
    • Pattern detection
    • -
    • Quality metrics
    • -
    • Custom analysis options
    • -
    -
  6. -
  7. - Run Profile: -

    Click "Start Analysis" to begin the profiling process.

    -
  8. -
  9. - Review Results: -

    Once complete, examine the detailed profiling results in the dashboard.

    -
  10. -
-
-
-
- -
-

Understanding Profile Results

-
-
-

Content Analysis

-

- View detailed breakdowns of your source content, including structure, format, - and key characteristics. The dashboard presents this information through - interactive visualizations and detailed reports. -

-
-
-

Pattern Detection

-

- Explore identified patterns and anomalies through the dashboard's pattern - analysis view. This helps you understand common themes and potential issues - in your content. -

-
-
-

Quality Metrics

-

- Review comprehensive quality measurements through intuitive charts and - detailed metric breakdowns in the dashboard interface. -

-
-
-
- -
-

Profile Management Tips

-
-
-

Saving Profiles

-

- Save profile configurations as templates for quick reuse across multiple sources. -

-
-
-

Comparing Results

-

- Use the dashboard's comparison view to analyze profile results across different - sources or time periods. -

-
-
-
- -
-

Using the CLI

-

- For automated profiling workflows, you can use the Plexus CLI: -

- -
-            {`# Run a profile on a source
-plexus sources profile source-id --analysis-type full
-
-# Get profile results
-plexus sources profile-results source-id`}
-          
-
- -
-

Python SDK Reference

-

- For programmatic profiling, you can use the Python SDK: -

- -
-            {`from plexus import Plexus
-
-plexus = Plexus(api_key="your-api-key")
-
-# Run a profile on a source
-profile = plexus.sources.profile(
-    source_id="source-id",
-    options={
-        "content_analysis": True,
-        "pattern_detection": True,
-        "quality_metrics": True
-    }
-)
-
-# Get profile results
-results = profile.get_results()`}
-          
-
- -
-

Coming Soon

-

- Additional profiling features are being developed. Check back soon for: -

• Advanced visualization options
• Custom profiling templates
• Automated insights generation
• Profile sharing and collaboration
diff --git a/dashboard/app/documentation/page.tsx b/dashboard/app/documentation/page.tsx
deleted file mode 100644
index df54fca22..000000000

Documentation

Welcome to the Plexus documentation. Here you'll find comprehensive guides and documentation to help you start working with Plexus as quickly as possible.


Getting Started


Core Concepts

Learn about the fundamental concepts and components that power Plexus.

Explore Basics

Step-by-Step Guides

Follow detailed guides for common operations and workflows.

View Methods

Platform Components


Worker Nodes

Set up and manage worker nodes to process your content at scale.

Learn About Workers

plexus CLI Tool

Use the command-line interface to manage your Plexus deployment.

Explore CLI

Python SDK

Integrate Plexus into your Python applications programmatically.

Browse SDK Reference

Quick Start

The fastest way to get started with Plexus is to:

1. Review the Basics: understand the core concepts that make up Plexus.
2. Create Your First Source: add some content to analyze using the dashboard.
3. Set Up a Scorecard: define how you want to evaluate your content.
4. Run an Evaluation: process your content and view the results.

Next Steps

Ready to get started? Begin with the basics to understand Plexus's core concepts.

Start with Basics
Jump to Source Creation
diff --git a/dashboard/app/lab/settings/page.tsx b/dashboard/app/lab/settings/page.tsx
deleted file mode 100644
index 29e4dc05b..000000000

Settings

Manage your account and application settings.

Account Settings

Customize your account and preferences. Update your profile, change notification preferences, and manage security settings.

Manage Menu Visibility
diff --git a/dashboard/app/layout.tsx b/dashboard/app/layout.tsx
index 8a75dbdf1..da8842492 100644

The old root layout (removed) configured the Inter font, global styles, the hydration overlay, and the account and sidebar context providers, and exported full Metadata with viewport, Open Graph, and Twitter card settings for "Plexus - Distributed Machine Learning Operating System" (site https://plexus.anth.us, creator @Anthus_AI). The new root layout is minimal: because the app now uses a [locale] route structure, the middleware redirects visitors at the root to the default locale, so this layout only renders its children.

diff --git a/dashboard/app/page.tsx b/dashboard/app/page.tsx
index be045e5b3..3777f111c 100644

The old landing page (removed) was a client component: it cycled a metrics gauge clockwise through accuracy, precision, specificity, and sensitivity on a two-second interval, and rendered the marketing sections below using dynamically imported workflow components (multi-model, multi-type, and item-list layouts) with SSR disabled. It is replaced by a redirect to the default locale via next/navigation.

Plexus is a battle-tested platform for building AI workflows that analyze streams of content and take action.

Your team can use your data to set up step-by-step processes that automate information management, without dealing with code development and deployment.


Run a scorecard on each item of your data, with multiple scores per scorecard.

Are your agents saying the right things? Are your inbound leads qualified?

Classify, predict, extract, and act on your data.

AI changes every week! Don't lock yourself into one solution. Plexus is a workbench for applying any newfangled AI model to solve your problems. Or simpler and cheaper ML models. Or logical rules: anything your solution requires.

OpenAI, Anthropic, Google, Deepseek, Azure, AWS Bedrock, Hugging Face, PyTorch, TensorFlow. Plexus supports them all.

You can't just write prompts, put them into production, and hope they work. You need a way to evaluate them quantitatively to see if they meet your needs. You can't optimize a metric you're not measuring.

Each use case demands its own success metrics: Is this a regulatory compliance question where we need high sensitivity? Do we need to use balanced accuracy because the data is unbalanced? Plexus gives you the gauges you need.
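The balanced-accuracy point is easy to demonstrate with a toy calculation using the standard definitions (generic arithmetic, not Plexus-specific code):

```python
# Standard definitions: on unbalanced data, a classifier that always predicts
# the majority class scores high plain accuracy but only 0.5 balanced accuracy.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def balanced_accuracy(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return (sensitivity + specificity) / 2

# 95 negatives, 5 positives; the model predicts "negative" every time
tp, tn, fp, fn = 0, 95, 0, 5
print(accuracy(tp, tn, fp, fn))           # 0.95 looks great...
print(balanced_accuracy(tp, tn, fp, fn))  # ...but 0.5 is coin-flip level
```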

Your answers should match your questions. Sometimes a simple yes/no will do; other times you need a 5-star rating, a percentage score, or just a thumbs up. Plexus gives you the flexibility to express your results in the format that makes sense for your use case.

Binary classifiers, multi-class classifiers, scalar values, entity extraction, quote extraction, and more. The framework is flexible enough to support anything your solution requires.


Built by practitioners on the front lines of AI deployment. Our features evolve as rapidly as AI itself, delivering battle-tested tools that transform cutting-edge capabilities into real business value.


Multi-model

Use any AI/ML model, from GPT-4 or Claude, to your own fine-tuned local Llama, to custom BERT-based classifiers.


Lab workflow

Create and align your own custom classifiers using sophisticated tools for analyzing datasets and evaluating results.


Serverless

Plexus is a lightning-fast, fully DevOps / IaC / NoSQL project that doesn't depend on servers or databases.


Task dispatch

Connect any node as a worker for running agents, evaluations, or reports, from AWS to Azure to local computers.
