diff --git a/.env b/.env deleted file mode 100644 index ba9d8f280..000000000 --- a/.env +++ /dev/null @@ -1,15 +0,0 @@ -# Telegram Bot -TELEGRAM_BOT_TOKEN=8242264750:AAExylQW59P_xE15NVsTjZOosn_mgtJ_A4w -GOOGLE_API_KEY = AIzaSyAyCc8hPLSIBG7dBEPoXd1QbqtUT77vUrU - -ADMIN_GROUP_ID = -4945697120 - -# Дополнительные настройки (необязательные) -# DATA_DIRECTORY=./data -# DATABASE_PATH=chattractive.db -# GEMINI_MODEL=gemini-2.0-flash-exp -# AUDIO_MODEL_DIR=/models/chatterbox - -VOICE_PATH=voices/voice1.wav -VOICE_DEVICE=cuda - \ No newline at end of file diff --git a/.gitignore b/.gitignore index 9a46dfc33..f21dd0666 100644 --- a/.gitignore +++ b/.gitignore @@ -1,17 +1,20 @@ .vscode -# Pylance pyrightconfig.json -# Byte-compiled / optimized / DLL files __pycache__/ *.py[cod] *$py.class -# C extensions *.so -# Distribution / packaging +.env +.env.local +.env.production +.env.staging +.env.development + +*.so .Python build/ develop-eggs/ @@ -30,13 +33,19 @@ wheels/ *.egg MANIFEST -# PyInstaller -# Usually these files are written by a python script from a template -# before PyInstaller builds the exe, so as to inject date/other infos into it. +venv/ +env/ +ENV/ +env.bak/ +venv.bak/ +.venv/ +.virtualenvs/ +.env/ +.venv + *.manifest *.spec -# Installer logs pip-log.txt pip-delete-this-directory.txt @@ -44,8 +53,8 @@ syn_out/ checkpoints/ .gradio -# Ignore generated sample .wav files -**/*.wav chattractive.bd chattractive.db models/ +!chattractive/vendor/chatterbox/models/ +!chattractive/vendor/chatterbox/models/** diff --git a/README.md b/README.md index 36265907a..f943d1f27 100644 --- a/README.md +++ b/README.md @@ -1,27 +1,33 @@ # Chattractive -Интерактивный Telegram‑бот, который отвечает на вопросы об ИТМО, используя локальную базу знаний из папки `data`. Бот поддерживает работу в автоматическом режиме (ответы от Gemini), ручной режим общения с администраторами и генерацию голосовых ответов на основе модели Chatterbox. 
+Chattractive — Telegram‑бот, который отвечает на вопросы об ИТМО и умеет озвучивать ответы.
+Он сочетает локальную базу знаний, обращение к Gemini и мультиязычную озвучку Chatterbox.
 
 ## Возможности
 
-- 📚 **RAG-пайплайн**: вопросы пользователя сопоставляются с документами из `data`, а релевантные фрагменты передаются в Gemini.
-- 💬 **Диалог с памятью**: история сообщений хранится в SQLite и используется при формировании ответов.
-- 👨‍👩‍👧 **Ручной режим**: сообщения пересылаются в группу администраторов, ответы из группы доставляются пользователю (включая голосовые).
-- 🔊 **Голосовые сообщения**: бот может озвучивать ответы через модуль `app/audio` (опционально).
-- 🧹 **Управление с клавиатуры**: перезапуск диалога, переключение режимов и голосовых ответов в один клик.
+- 📚 **Векторный поиск по базе знаний**: документы из `data/` разбиваются на фрагменты,
+  для них строится TF‑IDF‑эмбеддинг, а ответы подбираются по косинусной близости.
+- 💬 **Диалог с памятью**: история сообщений хранится в SQLite и передаётся в Gemini.
+- 👨‍👩‍👧 **Ручной режим**: сообщения можно пересылать операторам и получать ответы из группы.
+- 🔊 **Голосовые сообщения**: по желанию бот проговаривает ответы через Chatterbox.
+- 🧹 **Управление с клавиатуры**: перезапуск диалога, переключение режимов и голоса одной кнопкой.
 
-## Быстрый старт
+## Установка и запуск
 
-1. **Создайте виртуальное окружение** и установите зависимости:
+1. **Создайте виртуальное окружение и установите зависимости**:
 
    ```bash
-   python -m venv .venv
-   source .venv/bin/activate
-   pip install --upgrade pip
+   py -3.11 -m venv venv
+   venv/Scripts/Activate
    pip install -e .
+
+   # Для использования CUDA укажите VOICE_DEVICE=cuda в окружении и дополнительно выполните команды:
+
+   pip uninstall torch torchaudio
+   pip install --index-url https://download.pytorch.org/whl/cu124 torch==2.6.0 torchaudio==2.6.0
    ```
 
-2. **Подготовьте переменные окружения**. Скопируйте `env.example` в `.env` и заполните значения:
+2. 
**Заполните переменные окружения**. Скопируйте `env.example` в `.env` и пропишите значения: ```bash cp env.example .env @@ -31,54 +37,61 @@ - `TELEGRAM_BOT_TOKEN` — токен вашего бота. - `GOOGLE_API_KEY` — ключ доступа к Gemini. - - `ADMIN_GROUP_ID` — идентификатор Telegram-группы администраторов. + - `ADMIN_GROUP_ID` — идентификатор Telegram‑группы операторов. Необязательные параметры: - `DATA_DIRECTORY` — путь к папке с текстами (по умолчанию `data`). - - `DATABASE_PATH` — путь к файлу SQLite (по умолчанию `chattractive.db`). + - `DATABASE_PATH` — путь к SQLite‑файлу (по умолчанию `chattractive.db`). - `GEMINI_MODEL` — модель Gemini (по умолчанию `gemini-2.0-flash-exp`). - - `AUDIO_MODEL_DIR` - ??????? ? ?????? Chatterbox (?? ????????? ./models). - - `VOICE_DEVICE` - ?????????? ??? ????????? ????? (`cpu`, `cuda` ? ?.?.). - - `VOICE_LANGUAGE` - ???????? ??? ??? ??????? (????????, `ru`, `en`). + - `AUDIO_MODEL_DIR` — каталог с весами Chatterbox (по умолчанию `./models`). + - `VOICE_DEVICE` — устройство для TTS (`cpu`, `cuda`, `mps`). + - `VOICE_LANGUAGE` — язык синтеза (`ru`, `en`, и т.д.). +3. **Подготовьте данные**. Поместите в `data/` текстовые файлы (`.txt`, `.md`, `.rst`) с актуальной информацией. -3. **Заполните папку `data/`** текстовыми файлами (`.txt`, `.md`, `.rst`) с актуальной информацией. - -4. **Запустите бота**: +4. **Скачайте веса TTS (опционально)**: ```bash - python load_model.py # downloads Chatterbox multilingual TTS weights if missing - python main.py + python load_model.py ``` -By default multilingual weights are cached in ./models; keep AUDIO_MODEL_DIR unset to reuse that folder automatically. -Supported language ids include: ar, da, de, el, en, es, fi, fr, he, hi, it, ja, ko, ms, nl, no, pl, pt, ru, sv, sw, tr, zh. - -## Архитектура - -- `app/AI/knowledge_base.py` — загрузка и поиск по локальной базе знаний. -- `app/AI/chat_service.py` — формирование промптов и вызовы Gemini. -- `app/db/storage.py` — простой слой работы с SQLite. 
-- `app/bot/bot.py` — обработка сообщений Telegram и управление режимами. -- `app/audio/voice_service.py` — обертка над Chatterbox TTS. -- `main.py` — точка входа, сборка всех компонентов. - -## Голосовые ответы + Команда сохранит мультиязычную модель Chatterbox в каталог `models/`. Можно задать собственный путь через `AUDIO_MODEL_DIR`. -Модуль `VoiceSynthesizer` использует Chatterbox. Для работы скачайте веса модели и укажите путь в `AUDIO_MODEL_DIR`. Если модель не загружена, бот будет отвечать только текстом. +5. **Запустите бота**: -## Ручной режим + ```bash + python main.py + ``` -При активации ручного режима сообщения пользователя пересылаются в группу администраторов (`ADMIN_GROUP_ID`). Ответы из этой группы, отправленные в режиме «ответить», автоматически доставляются пользователю. Поддерживаются текст, голосовые и любые вложения. + После запуска бот автоматически проиндексирует базу знаний, подключится к Gemini и начнёт принимать сообщения. -## Сброс истории +## Структура проекта -Нажмите «🔄 Перезапуск», чтобы очистить историю чата. Записи удаляются из базы данных, и диалог начинается заново. +``` +chattractive/ +├── ai/ # работа с Gemini и локальной БЗ +├── audio/ # обёртка над Chatterbox и голосовым сервисом +├── bot/ # Telegram-логика и сценарии общения +├── db/ # слой хранения истории и служебных настроек +└── vendor/chatterbox/ # вендорные модели TTS (Resemble AI) +``` ---- +- `chattractive/ai/knowledge_base.py` — загрузка документов и векторный поиск. +- `chattractive/ai/chat_service.py` — формирование промптов и диалог с Gemini. +- `chattractive/db/storage.py` — работа с SQLite и состоянием бота. +- `chattractive/bot/bot.py` — обработка сообщений Telegram и маршрутизация режимов. +- `chattractive/audio/voice_service.py` — озвучивание с нормализацией текста. +- `main.py` — точка входа и сборка всех компонентов. -Проект предназначен как отправная точка для внутренних ассистентов на базе Gemini. 
Расширяйте и дополняйте функциональность под свои сценарии. +## Голосовые ответы +`VoiceSynthesizer` динамически подгружает модель Chatterbox и использует Gemini только для нормализации текста. +В логах сохраняются лишь укороченные превью ответов, поэтому чувствительный текст не попадает в INFO‑логи. +Чтобы задать собственный голос, поместите эталон в файл и укажите путь в переменной `VOICE_PATH`. +## Ручной режим +При включении ручного режима сообщения пользователя пересылаются в группу `ADMIN_GROUP_ID`. +Ответ из группы, отправленный в режиме «ответить», автоматически доставляется пользователю +(поддерживаются текст, голос и вложения). История синхронизируется с базой для последующих обращений. diff --git a/app/AI/chat_service.py b/app/AI/chat_service.py deleted file mode 100644 index 391cea0a9..000000000 --- a/app/AI/chat_service.py +++ /dev/null @@ -1,118 +0,0 @@ -"""High level conversational service built on top of Gemini API.""" - -from __future__ import annotations - -import logging -from dataclasses import dataclass -from pathlib import Path -from typing import Iterable, List, Optional, Sequence, Tuple - -from google import genai -from requests.exceptions import RequestException - -from .knowledge_base import DocumentChunk, LocalKnowledgeBase - - -logger = logging.getLogger(__name__) - - -@dataclass -class ChatTurn: - role: str - content: str - - -def _format_documents(documents: Sequence[DocumentChunk]) -> str: - parts: List[str] = [] - for idx, doc in enumerate(documents, start=1): - parts.append(f"Документ {idx} ({doc.source})\n{doc.text}") - return "\n\n".join(parts) - - -class GeminiChatService: - """Conversational interface around Gemini with document retrieval.""" - - def __init__( - self, - *, - api_key: str, - data_dir: Path, - system_prompt: Optional[str] = None, - model: str = "gemini-2.0-flash-exp", - history_limit: int = 12, - ) -> None: - if not api_key: - raise ValueError("Gemini API key must be provided") - self._client = 
genai.Client(api_key=api_key) - self._model = model - self._kb = LocalKnowledgeBase(data_dir) - self._system_prompt = system_prompt or ( - "Ты — дружелюбный ассистент LISA INFO. Отвечай только на русском языке и помогай кратко и по делу. " - "Если вопрос требует данных из базы знаний, используй факты из документов без перечисления источников. " - "Когда точного ответа нет, честно сообщи об этом и предложи дальнейшие шаги. " - "Держи каждый ответ короче 2000 символов, начинай с конкретных действий и избегай лишней воды." - ) - self._history_limit = max(2, history_limit) - - def build_prompt( - self, - history: Sequence[ChatTurn], - user_message: str, - documents: Sequence[DocumentChunk], - ) -> List[dict]: - payload: List[dict] = [] - - system_text = ( - "[SYSTEM]\n" - f"{self._system_prompt}\n\n" - "Всегда отвечай на русском языке. Если информации недостаточно, объясни это и предложи, что можно сделать дальше." - "\nИспользуй для форматирование только эти символы, которые понимает Telegram: **жирный**, __курсив__, ~~зачёркнутый~~, `моно`, ```моно-блок``` и ничего сверх этого." - "\nKeep replies focused on concrete actions, stay under 2000 characters, and avoid filler." - ) - payload.append({"role": "user", "parts": [{"text": system_text}]}) - - for turn in history[-self._history_limit :]: - payload.append({"role": turn.role, "parts": [{"text": turn.content}]}) - - if documents: - docs_text = ( - "[DOCUMENTS]\n" - f"{_format_documents(documents)}\n\n" - "Используй эти выдержки по смыслу, но не перечисляй источники в ответе." - ) - payload.append({"role": "user", "parts": [{"text": docs_text}]}) - - user_text = ( - "[USER]\n" - f"{user_message}\n\n" - "Ответ сформируй по-русски, без явного перечисления источников." 
- ) - payload.append({"role": "user", "parts": [{"text": user_text}]}) - - return payload - - def _generate(self, payload: List[dict]) -> str: - try: - response = self._client.models.generate_content(model=self._model, contents=payload) - except RequestException as exc: - logger.warning("Gemini request failed: %s", exc) - return "Сейчас не получается получить ответ. Попробуйте ещё раз чуть позже." - except Exception as exc: - logger.exception("Gemini request crashed: %s", exc) - return "Произошла ошибка при обращении к модели. Попробуйте повторить запрос позднее." - text = getattr(response, "text", None) - if not text: - text = "К сожалению, ответ не был сформирован. Попробуйте переформулировать вопрос." - return text.strip() - - def answer( - self, - history: Sequence[ChatTurn], - user_message: str, - *, - top_k: int = 4, - ) -> Tuple[str, List[DocumentChunk]]: - documents = self._kb.search(user_message, top_k=top_k) - payload = self.build_prompt(history, user_message, documents) - reply = self._generate(payload) - return reply, list(documents) diff --git a/app/audio/__init__.py b/app/audio/__init__.py deleted file mode 100644 index 190cfbf23..000000000 --- a/app/audio/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -try: - from importlib.metadata import version -except ImportError: - from importlib_metadata import version # For Python <3.8 - -__version__ = version("chatterbox-tts") - - -from .tts import ChatterboxTTS -from .vc import ChatterboxVC -from .mtl_tts import ChatterboxMultilingualTTS, SUPPORTED_LANGUAGES \ No newline at end of file diff --git a/chattractive/__init__.py b/chattractive/__init__.py new file mode 100644 index 000000000..2514e94b1 --- /dev/null +++ b/chattractive/__init__.py @@ -0,0 +1,3 @@ +"""Chattractive — Telegram assistant powered by Gemini and Chatterbox.""" + +__all__ = ["bot", "ai", "audio", "db", "vendor"] diff --git a/app/AI/__init__.py b/chattractive/ai/__init__.py similarity index 100% rename from app/AI/__init__.py rename to 
chattractive/ai/__init__.py diff --git a/chattractive/ai/chat_service.py b/chattractive/ai/chat_service.py new file mode 100644 index 000000000..f7b7d1628 --- /dev/null +++ b/chattractive/ai/chat_service.py @@ -0,0 +1,221 @@ +"""High level conversational service built on top of Gemini API.""" + +from __future__ import annotations + +import logging +import time +from dataclasses import dataclass +from pathlib import Path +from typing import Any, Iterable, List, Optional, Sequence, Tuple + +from google import genai +from google.genai.errors import ClientError +from requests.exceptions import RequestException + +from .knowledge_base import DocumentChunk, LocalKnowledgeBase + + +logger = logging.getLogger(__name__) + + +@dataclass +class ChatTurn: + role: str + content: str + + +def _format_documents(documents: Sequence[DocumentChunk]) -> str: + parts: List[str] = [] + for idx, doc in enumerate(documents, start=1): + parts.append(f"Документ {idx} ({doc.source})\n{doc.text}") + return "\n\n".join(parts) + + +def _retry_delay_seconds(details: Any) -> Optional[float]: + """Best-effort extraction of retry delay (seconds) from Gemini error payload.""" + entries: Sequence[Any] = () + if isinstance(details, dict): + error_payload = details.get("error") + if isinstance(error_payload, dict): + entries = error_payload.get("details") or () + else: + entries = details.get("details") or () + elif isinstance(details, list): + entries = details + else: + return None + + for entry in entries or (): + if not isinstance(entry, dict): + continue + retry_info = entry.get("retryDelay") + if retry_info is not None: + if isinstance(retry_info, (int, float)) and retry_info > 0: + return float(retry_info) + if isinstance(retry_info, str): + cleaned = retry_info.strip().lower() + if cleaned.endswith('s'): + cleaned = cleaned[:-1] + try: + value = float(cleaned) + except ValueError: + pass + else: + if value > 0: + return value + seconds = entry.get("seconds") + nanos = entry.get("nanos") + if 
seconds is not None or nanos is not None: + try: + total = float(seconds or 0) + float(nanos or 0) / 1_000_000_000 + except (TypeError, ValueError): + continue + if total > 0: + return total + return None + +def _quota_hint(details: Any) -> str: + """Extract a human-friendly quota hint from Gemini error details.""" + if not isinstance(details, dict): + return "" + error_payload = details.get("error") + if isinstance(error_payload, dict): + entries = error_payload.get("details") or () + else: + entries = details.get("details") or () + for entry in entries or (): + if not isinstance(entry, dict): + continue + if entry.get("@type") != "type.googleapis.com/google.rpc.QuotaFailure": + continue + violations = entry.get("violations") or () + for violation in violations: + if not isinstance(violation, dict): + continue + metric = violation.get("quotaMetric") + limit = violation.get("quotaValue") + if metric and limit: + return f" Limit {metric} allows {limit} requests." + break + return "" + +class GeminiChatService: + """Conversational interface around Gemini with document retrieval.""" + + def __init__( + self, + *, + api_key: str, + data_dir: Path, + system_prompt: Optional[str] = None, + model: str = "gemini-2.0-flash-exp", + history_limit: int = 12, + ) -> None: + if not api_key: + raise ValueError("Gemini API key must be provided") + self._client = genai.Client(api_key=api_key) + self._model = model + self._kb = LocalKnowledgeBase(data_dir) + self._system_prompt = system_prompt or ( + "Ты — дружелюбный ассистент LISA INFO. Отвечай только на русском языке в мужском роде и помогай кратко и по делу. " + "Если вопрос требует данных из базы знаний, используй факты из документов без перечисления источников. " + "Когда точного ответа нет, честно сообщи об этом и предложи дальнейшие шаги. " + "Держи каждый ответ короче 2000 символов, начинай с конкретных действий и избегай лишней воды." 
+        )
+        self._history_limit = max(2, history_limit)
+
+    def build_prompt(
+        self,
+        history: Sequence[ChatTurn],
+        user_message: str,
+        documents: Sequence[DocumentChunk],
+    ) -> List[dict]:
+        payload: List[dict] = []
+
+        system_text = (
+            "[SYSTEM]\n"
+            f"{self._system_prompt}\n\n"
+            "Всегда отвечай на русском языке. Если информации недостаточно, объясни это и предложи, что можно сделать дальше."
+            "\nИспользуй для форматирования только эти символы, которые понимает Telegram: **жирный**, __курсив__, ~~зачёркнутый~~, `моно`, ```моно-блок``` и ничего сверх этого."
+            "\nKeep replies focused on concrete actions, stay under 2000 characters, and avoid filler."
+        )
+        payload.append({"role": "user", "parts": [{"text": system_text}]})
+
+        for turn in history[-self._history_limit :]:
+            payload.append({"role": turn.role, "parts": [{"text": turn.content}]})
+
+        if documents:
+            docs_text = (
+                "[DOCUMENTS]\n"
+                f"{_format_documents(documents)}\n\n"
+                "Используй эти выдержки по смыслу, но не перечисляй источники в ответе."
+            )
+            payload.append({"role": "user", "parts": [{"text": docs_text}]})
+
+        user_text = (
+            "[USER]\n"
+            f"{user_message}\n\n"
+            "Ответ сформируй по-русски, без явного перечисления источников."
+ ) + payload.append({"role": "user", "parts": [{"text": user_text}]}) + + return payload + + def _generate(self, payload: List[dict]) -> str: + max_attempts = 3 + response = None + for attempt in range(1, max_attempts + 1): + try: + response = self._client.models.generate_content( + model=self._model, contents=payload + ) + break + except ClientError as exc: + if exc.code == 429: + retry_delay = _retry_delay_seconds(getattr(exc, "details", None)) + if retry_delay and attempt < max_attempts: + logger.warning( + "Gemini quota reached, retrying in %.2fs (attempt %d/%d)", + retry_delay, + attempt, + max_attempts, + ) + time.sleep(retry_delay) + continue + quota_hint = _quota_hint(getattr(exc, "details", None)) + logger.warning("Gemini quota exhausted: %s", exc) + message = ( + "Gemini API quota exhausted for model " + f"{self._model}. Wait for the quota to reset or upgrade your Gemini plan." + ) + if quota_hint: + message += quota_hint + return message + logger.exception("Gemini client error: %s", exc) + return ( + "Gemini returned a client error. Check the request payload and logs, then try again." + ) + except RequestException as exc: + logger.warning("Gemini request failed: %s", exc) + return "Gemini request failed. Check your network connection and try again." + except Exception as exc: + logger.exception("Gemini request crashed: %s", exc) + return "Gemini request crashed unexpectedly. Please review the logs and try again." + if response is None: + return "Gemini did not return a response. Please retry shortly." + text_response = getattr(response, "text", None) + if not text_response: + text_response = "Gemini returned an empty response. Please retry or contact support if the issue persists." 
+ return text_response.strip() + + def answer( + self, + history: Sequence[ChatTurn], + user_message: str, + *, + top_k: int = 4, + ) -> Tuple[str, List[DocumentChunk]]: + documents = self._kb.search(user_message, top_k=top_k) + payload = self.build_prompt(history, user_message, documents) + reply = self._generate(payload) + return reply, list(documents) diff --git a/app/AI/knowledge_base.py b/chattractive/ai/knowledge_base.py similarity index 50% rename from app/AI/knowledge_base.py rename to chattractive/ai/knowledge_base.py index dc159fec5..dab66bcf2 100644 --- a/app/AI/knowledge_base.py +++ b/chattractive/ai/knowledge_base.py @@ -3,12 +3,14 @@ from __future__ import annotations import logging -import math import re +from collections import Counter from dataclasses import dataclass from pathlib import Path from typing import Iterable, List, Sequence +import numpy as np + logger = logging.getLogger(__name__) @@ -52,6 +54,10 @@ def __init__( self._chunk_overlap = chunk_overlap self._encoding = encoding self._documents: List[DocumentChunk] = [] + self._tokenized_documents: List[List[str]] = [] + self._vocab: dict[str, int] = {} + self._idf: np.ndarray = np.zeros(0, dtype=np.float32) + self._document_embeddings: np.ndarray = np.zeros((0, 0), dtype=np.float32) self._load_documents() @property @@ -74,10 +80,17 @@ def _load_documents(self) -> None: clean_chunk = chunk.strip() if not clean_chunk: continue + tokens = _tokenize(clean_chunk) + if not tokens: + continue order += 1 self._documents.append( DocumentChunk(text=clean_chunk, source=str(file_path.name), order=order) ) + self._tokenized_documents.append(tokens) + + if self._documents: + self._build_embeddings() def _split_into_chunks(self, text: str) -> Iterable[str]: tokens = text.split() @@ -100,22 +113,85 @@ def search(self, query: str, *, top_k: int = 5) -> List[DocumentChunk]: if not query.strip(): return [] + if not self._documents or self._document_embeddings.size == 0: + return [] + query_tokens = 
_tokenize(query) if not query_tokens: return [] - query_token_set = set(query_tokens) - results: List[DocumentChunk] = [] - for doc in self._documents: - doc_tokens = _tokenize(doc.text) - if not doc_tokens: - continue - doc_token_set = set(doc_tokens) - overlap = len(query_token_set & doc_token_set) - if overlap == 0: + query_counts = Counter(query_tokens) + total = float(sum(query_counts.values())) + if total == 0.0: + return [] + + query_vector = np.zeros(len(self._vocab), dtype=np.float32) + for token, count in query_counts.items(): + index = self._vocab.get(token) + if index is None: continue - score = overlap / math.sqrt(len(query_token_set) * len(doc_token_set)) - results.append(DocumentChunk(text=doc.text, source=doc.source, order=doc.order, score=score)) + query_vector[index] = (count / total) * self._idf[index] - results.sort(key=lambda item: (item.score, -item.order), reverse=True) - return results[:top_k] + norm = float(np.linalg.norm(query_vector)) + if norm == 0.0: + return [] + query_vector /= norm + + scores = self._document_embeddings @ query_vector + if scores.size == 0: + return [] + + ranked_indices = np.argsort(scores)[::-1] + results: List[DocumentChunk] = [] + for idx in ranked_indices[:top_k]: + score = float(scores[idx]) + if score <= 0.0: + break + doc = self._documents[idx] + results.append( + DocumentChunk(text=doc.text, source=doc.source, order=doc.order, score=score) + ) + + return results + + def _build_embeddings(self) -> None: + vocab: dict[str, int] = {} + document_frequency: Counter[str] = Counter() + for tokens in self._tokenized_documents: + document_frequency.update(set(tokens)) + for token in tokens: + if token not in vocab: + vocab[token] = len(vocab) + + if not vocab: + self._vocab = {} + self._idf = np.zeros(0, dtype=np.float32) + self._document_embeddings = np.zeros((len(self._documents), 0), dtype=np.float32) + return + + vocab_size = len(vocab) + doc_count = len(self._tokenized_documents) + tf_matrix = 
np.zeros((doc_count, vocab_size), dtype=np.float32) + + for row, tokens in enumerate(self._tokenized_documents): + token_counts = Counter(tokens) + total_tokens = float(sum(token_counts.values())) + if total_tokens == 0.0: + continue + for token, count in token_counts.items(): + index = vocab[token] + tf_matrix[row, index] = count / total_tokens + + idf_values = np.array( + [document_frequency[token] for token in vocab], dtype=np.float32 + ) + idf = np.log((1.0 + doc_count) / (1.0 + idf_values)) + 1.0 + embeddings = tf_matrix * idf + + norms = np.linalg.norm(embeddings, axis=1, keepdims=True) + norms[norms == 0.0] = 1.0 + embeddings = embeddings / norms + + self._vocab = vocab + self._idf = idf + self._document_embeddings = embeddings diff --git a/chattractive/antisleep.py b/chattractive/antisleep.py new file mode 100644 index 000000000..6cf8615ab --- /dev/null +++ b/chattractive/antisleep.py @@ -0,0 +1,126 @@ +from __future__ import annotations + +import ctypes +import logging +import platform +import subprocess +from typing import Optional + + +logger = logging.getLogger(__name__) + + +class AntiSleepGuard: + """Best-effort guard that keeps the host machine awake.""" + + def __init__(self) -> None: + self._platform = platform.system() + self._active = False + self._blocker_proc: Optional[subprocess.Popen] = None + + def enable(self) -> None: + if self._active: + return + + supported = False + try: + if self._platform == "Windows": + self._enable_windows() + supported = True + elif self._platform == "Darwin": + self._enable_macos() + supported = True + elif self._platform == "Linux": + supported = self._enable_linux() + else: + logger.warning("Anti-sleep guard is not implemented for %s", self._platform) + except Exception: # pragma: no cover - platform specific + logger.exception("Failed to enable anti-sleep guard") + return + + if supported: + self._active = True + logger.info("Anti-sleep guard enabled") + else: + logger.info("Anti-sleep guard could not be 
enabled on this platform") + + def disable(self) -> None: + if not self._active: + return + + try: + if self._platform == "Windows": + self._disable_windows() + elif self._platform == "Darwin": + self._disable_macos() + elif self._platform == "Linux": + self._disable_linux() + except Exception: # pragma: no cover - platform specific + logger.exception("Failed to disable anti-sleep guard") + finally: + self._active = False + logger.info("Anti-sleep guard disabled") + + def __enter__(self) -> "AntiSleepGuard": + self.enable() + return self + + def __exit__(self, exc_type, exc, tb) -> None: + self.disable() + + def _enable_windows(self) -> None: + es_continuous = 0x80000000 + es_system_required = 0x00000001 + es_display_required = 0x00000002 + flags = es_continuous | es_system_required | es_display_required + result = ctypes.windll.kernel32.SetThreadExecutionState(flags) + if result == 0: + raise OSError("SetThreadExecutionState failed") + + def _disable_windows(self) -> None: + es_continuous = 0x80000000 + if ctypes.windll.kernel32.SetThreadExecutionState(es_continuous) == 0: + raise OSError("SetThreadExecutionState reset failed") + + def _enable_macos(self) -> None: + self._blocker_proc = subprocess.Popen(["caffeinate"]) + + def _disable_macos(self) -> None: + if self._blocker_proc: + self._terminate_blocker() + + def _enable_linux(self) -> bool: + try: + self._blocker_proc = subprocess.Popen( + [ + "systemd-inhibit", + "--what=idle:sleep", + "--mode=block", + "--who=Chattractive", + "--why=Prevent system sleep while Chattractive runs", + "sleep", + "infinity", + ] + ) + except FileNotFoundError: + logger.warning("systemd-inhibit is not available; anti-sleep guard cannot run") + return False + + if self._blocker_proc.poll() is not None: + raise RuntimeError("systemd-inhibit exited unexpectedly") + return True + + def _disable_linux(self) -> None: + if self._blocker_proc: + self._terminate_blocker() + + def _terminate_blocker(self) -> None: + if not 
self._blocker_proc:
+            return
+        self._blocker_proc.terminate()
+        try:
+            self._blocker_proc.wait(timeout=5)
+        except subprocess.TimeoutExpired:
+            self._blocker_proc.kill()
+        finally:
+            self._blocker_proc = None
diff --git a/chattractive/audio/__init__.py b/chattractive/audio/__init__.py
new file mode 100644
index 000000000..1ae6e109c
--- /dev/null
+++ b/chattractive/audio/__init__.py
@@ -0,0 +1,12 @@
+"""Audio helpers built around the bundled Chatterbox models."""
+
+from .mtl_tts import ChatterboxMultilingualTTS, SUPPORTED_LANGUAGES
+from .tts import ChatterboxTTS
+from .vc import ChatterboxVC
+
+__all__ = [
+    "ChatterboxMultilingualTTS",
+    "ChatterboxTTS",
+    "ChatterboxVC",
+    "SUPPORTED_LANGUAGES",
+]
diff --git a/app/audio/mtl_tts.py b/chattractive/audio/mtl_tts.py
similarity index 94%
rename from app/audio/mtl_tts.py
rename to chattractive/audio/mtl_tts.py
index 7b5dd39bc..12fcd10dc 100644
--- a/app/audio/mtl_tts.py
+++ b/chattractive/audio/mtl_tts.py
@@ -9,13 +9,13 @@
 from safetensors.torch import load_file as load_safetensors
 from huggingface_hub import snapshot_download
 
-from .models.t3 import T3
-from .models.t3.modules.t3_config import T3Config
-from .models.s3tokenizer import S3_SR, drop_invalid_tokens
-from .models.s3gen import S3GEN_SR, S3Gen
-from .models.tokenizers import MTLTokenizer
-from .models.voice_encoder import VoiceEncoder
-from .models.t3.modules.cond_enc import T3Cond
+from chattractive.vendor.chatterbox.models.t3 import T3
+from chattractive.vendor.chatterbox.models.t3.modules.t3_config import T3Config
+from chattractive.vendor.chatterbox.models.s3tokenizer import S3_SR, drop_invalid_tokens
+from chattractive.vendor.chatterbox.models.s3gen import S3GEN_SR, S3Gen
+from chattractive.vendor.chatterbox.models.tokenizers import MTLTokenizer
+from chattractive.vendor.chatterbox.models.voice_encoder import VoiceEncoder
+from chattractive.vendor.chatterbox.models.t3.modules.cond_enc import T3Cond
 
 REPO_ID = "ResembleAI/chatterbox"
diff --git a/app/audio/tts.py b/chattractive/audio/tts.py
similarity index 95%
rename from app/audio/tts.py
rename to chattractive/audio/tts.py
index 6d9b5ad54..7eed2769e 100644
--- a/app/audio/tts.py
+++ b/chattractive/audio/tts.py
@@ -8,12 +8,12 @@
 from huggingface_hub import hf_hub_download
 from safetensors.torch import load_file
 
-from .models.t3 import T3
-from .models.s3tokenizer import S3_SR, drop_invalid_tokens
-from .models.s3gen import S3GEN_SR, S3Gen
-from .models.tokenizers import EnTokenizer
-from .models.voice_encoder import VoiceEncoder
-from .models.t3.modules.cond_enc import T3Cond
+from chattractive.vendor.chatterbox.models.t3 import T3
+from chattractive.vendor.chatterbox.models.s3tokenizer import S3_SR, drop_invalid_tokens
+from chattractive.vendor.chatterbox.models.s3gen import S3GEN_SR, S3Gen
+from chattractive.vendor.chatterbox.models.tokenizers import EnTokenizer
+from chattractive.vendor.chatterbox.models.voice_encoder import VoiceEncoder
+from chattractive.vendor.chatterbox.models.t3.modules.cond_enc import T3Cond
 
 REPO_ID = "ResembleAI/chatterbox"
diff --git a/app/audio/vc.py b/chattractive/audio/vc.py
similarity index 96%
rename from app/audio/vc.py
rename to chattractive/audio/vc.py
index a9c32ed35..137767fe6 100644
--- a/app/audio/vc.py
+++ b/chattractive/audio/vc.py
@@ -6,8 +6,8 @@
 from huggingface_hub import hf_hub_download
 from safetensors.torch import load_file
 
-from .models.s3tokenizer import S3_SR
-from .models.s3gen import S3GEN_SR, S3Gen
+from chattractive.vendor.chatterbox.models.s3tokenizer import S3_SR
+from chattractive.vendor.chatterbox.models.s3gen import S3GEN_SR, S3Gen
 
 REPO_ID = "ResembleAI/chatterbox"
diff --git a/app/audio/voice_service.py b/chattractive/audio/voice_service.py
similarity index 96%
rename from app/audio/voice_service.py
rename to chattractive/audio/voice_service.py
index 946b4f4b6..3afc87228 100644
--- a/app/audio/voice_service.py
+++ b/chattractive/audio/voice_service.py
@@ -83,6 +83,15 @@
 def _strip_spurious_stress_marks(text: str) -> str:
     return _STRESS_APOSTROPHE_RE.sub('', text)
 
 
+def _preview(text: str, *, limit: int = 120) -> str:
+    if not text:
+        return ""
+    compact = " ".join(text.split())
+    if len(compact) <= limit:
+        return compact
+    return f"{compact[: limit - 1]}…"
+
+
 class VoiceSynthesizer:
     """Lazily loads the multilingual Chatterbox TTS model and exposes a helper method."""
 
@@ -238,8 +247,8 @@ def synthesize(
             logger.warning("Подготовленный текст для озвучки пуст; используем оригинальный ответ")
             prepared_text = text.strip()
 
-        logger.info("Gemini reply text: %s", text)
-        logger.info("Gemini TTS text: %s", prepared_text)
+        logger.debug("Gemini reply preview: %s", _preview(text))
+        logger.debug("Gemini TTS preview: %s", _preview(prepared_text))
 
         chunks = _split_text_for_tts(prepared_text)
         if not chunks:
diff --git a/app/bot/bot.py b/chattractive/bot/bot.py
similarity index 98%
rename from app/bot/bot.py
rename to chattractive/bot/bot.py
index ea9d5b101..d85cbc430 100644
--- a/app/bot/bot.py
+++ b/chattractive/bot/bot.py
@@ -15,9 +15,9 @@
 from aiogram.filters import Command
 from aiogram.types import FSInputFile, KeyboardButton, Message, ReplyKeyboardMarkup
 
-from app.AI.chat_service import ChatTurn, GeminiChatService
-from app.audio.voice_service import VoiceSynthesizer
-from app.db.storage import ChatDatabase
+from chattractive.ai.chat_service import ChatTurn, GeminiChatService
+from chattractive.audio.voice_service import VoiceSynthesizer
+from chattractive.db.storage import ChatDatabase
 
 logger = logging.getLogger(__name__)
diff --git a/app/db/__init__.py b/chattractive/db/__init__.py
similarity index 100%
rename from app/db/__init__.py
rename to chattractive/db/__init__.py
diff --git a/app/db/storage.py b/chattractive/db/storage.py
similarity index 100%
rename from app/db/storage.py
rename to chattractive/db/storage.py
diff --git a/chattractive/vendor/__init__.py b/chattractive/vendor/__init__.py
new file mode 100644
index 000000000..dcbbf2cef
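The `_preview` helper added to `voice_service.py` above keeps log lines bounded instead of dumping full Gemini replies. Outside the diff, the same pattern can be sketched and exercised standalone (the name `preview` is illustrative):

```python
def preview(text: str, limit: int = 120) -> str:
    """Collapse runs of whitespace and cap the result at `limit` characters."""
    if not text:
        return ""
    compact = " ".join(text.split())  # newlines and tabs become single spaces
    if len(compact) <= limit:
        return compact
    # Reserve one character for the ellipsis so the result never exceeds `limit`.
    return compact[: limit - 1] + "…"
```

With this, a multi-line reply like `"a\n  b"` collapses to `"a b"`, and anything longer than the limit is cut to exactly `limit` characters ending in `…`.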
--- /dev/null +++ b/chattractive/vendor/__init__.py @@ -0,0 +1 @@ +"""Third-party bundles vendored with Chattractive.""" diff --git a/chattractive/vendor/chatterbox/__init__.py b/chattractive/vendor/chatterbox/__init__.py new file mode 100644 index 000000000..60274fb65 --- /dev/null +++ b/chattractive/vendor/chatterbox/__init__.py @@ -0,0 +1,3 @@ +"""Vendored subset of ResembleAI Chatterbox models.""" + +__all__ = ["models"] diff --git a/app/audio/models/__init__.py b/chattractive/vendor/chatterbox/models/__init__.py similarity index 100% rename from app/audio/models/__init__.py rename to chattractive/vendor/chatterbox/models/__init__.py diff --git a/chattractive/vendor/chatterbox/models/__pycache__/__init__.cpython-311.pyc b/chattractive/vendor/chatterbox/models/__pycache__/__init__.cpython-311.pyc new file mode 100644 index 000000000..9c5895672 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/__pycache__/__init__.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/__pycache__/utils.cpython-311.pyc b/chattractive/vendor/chatterbox/models/__pycache__/utils.cpython-311.pyc new file mode 100644 index 000000000..56b265b18 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/__pycache__/utils.cpython-311.pyc differ diff --git a/app/audio/models/s3gen/__init__.py b/chattractive/vendor/chatterbox/models/s3gen/__init__.py similarity index 100% rename from app/audio/models/s3gen/__init__.py rename to chattractive/vendor/chatterbox/models/s3gen/__init__.py diff --git a/chattractive/vendor/chatterbox/models/s3gen/__pycache__/__init__.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/__init__.cpython-311.pyc new file mode 100644 index 000000000..c98b61f4b Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/__init__.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/__pycache__/configs.cpython-311.pyc 
b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/configs.cpython-311.pyc new file mode 100644 index 000000000..3b98544d2 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/configs.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/__pycache__/const.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/const.cpython-311.pyc new file mode 100644 index 000000000..8ba342df4 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/const.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/__pycache__/decoder.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/decoder.cpython-311.pyc new file mode 100644 index 000000000..17dd77cea Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/decoder.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/__pycache__/f0_predictor.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/f0_predictor.cpython-311.pyc new file mode 100644 index 000000000..7e2908c1c Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/f0_predictor.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/__pycache__/flow.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/flow.cpython-311.pyc new file mode 100644 index 000000000..b47fb668b Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/flow.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/__pycache__/flow_matching.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/flow_matching.cpython-311.pyc new file mode 100644 index 000000000..a71e42738 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/flow_matching.cpython-311.pyc differ diff --git 
a/chattractive/vendor/chatterbox/models/s3gen/__pycache__/hifigan.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/hifigan.cpython-311.pyc new file mode 100644 index 000000000..524151755 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/hifigan.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/__pycache__/s3gen.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/s3gen.cpython-311.pyc new file mode 100644 index 000000000..d3aaa0c7f Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/s3gen.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/__pycache__/xvector.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/xvector.cpython-311.pyc new file mode 100644 index 000000000..6815f899f Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/__pycache__/xvector.cpython-311.pyc differ diff --git a/app/audio/models/s3gen/configs.py b/chattractive/vendor/chatterbox/models/s3gen/configs.py similarity index 100% rename from app/audio/models/s3gen/configs.py rename to chattractive/vendor/chatterbox/models/s3gen/configs.py diff --git a/app/audio/models/s3gen/const.py b/chattractive/vendor/chatterbox/models/s3gen/const.py similarity index 100% rename from app/audio/models/s3gen/const.py rename to chattractive/vendor/chatterbox/models/s3gen/const.py diff --git a/app/audio/models/s3gen/decoder.py b/chattractive/vendor/chatterbox/models/s3gen/decoder.py similarity index 100% rename from app/audio/models/s3gen/decoder.py rename to chattractive/vendor/chatterbox/models/s3gen/decoder.py diff --git a/app/audio/models/s3gen/f0_predictor.py b/chattractive/vendor/chatterbox/models/s3gen/f0_predictor.py similarity index 100% rename from app/audio/models/s3gen/f0_predictor.py rename to chattractive/vendor/chatterbox/models/s3gen/f0_predictor.py diff --git 
a/app/audio/models/s3gen/flow.py b/chattractive/vendor/chatterbox/models/s3gen/flow.py similarity index 100% rename from app/audio/models/s3gen/flow.py rename to chattractive/vendor/chatterbox/models/s3gen/flow.py diff --git a/app/audio/models/s3gen/flow_matching.py b/chattractive/vendor/chatterbox/models/s3gen/flow_matching.py similarity index 100% rename from app/audio/models/s3gen/flow_matching.py rename to chattractive/vendor/chatterbox/models/s3gen/flow_matching.py diff --git a/app/audio/models/s3gen/hifigan.py b/chattractive/vendor/chatterbox/models/s3gen/hifigan.py similarity index 100% rename from app/audio/models/s3gen/hifigan.py rename to chattractive/vendor/chatterbox/models/s3gen/hifigan.py diff --git a/chattractive/vendor/chatterbox/models/s3gen/matcha/__pycache__/decoder.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/matcha/__pycache__/decoder.cpython-311.pyc new file mode 100644 index 000000000..791be4c3e Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/matcha/__pycache__/decoder.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/matcha/__pycache__/flow_matching.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/matcha/__pycache__/flow_matching.cpython-311.pyc new file mode 100644 index 000000000..5bf09fa89 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/matcha/__pycache__/flow_matching.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/matcha/__pycache__/transformer.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/matcha/__pycache__/transformer.cpython-311.pyc new file mode 100644 index 000000000..2f0e280a2 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/matcha/__pycache__/transformer.cpython-311.pyc differ diff --git a/app/audio/models/s3gen/matcha/decoder.py b/chattractive/vendor/chatterbox/models/s3gen/matcha/decoder.py similarity index 100% rename from 
app/audio/models/s3gen/matcha/decoder.py rename to chattractive/vendor/chatterbox/models/s3gen/matcha/decoder.py diff --git a/app/audio/models/s3gen/matcha/flow_matching.py b/chattractive/vendor/chatterbox/models/s3gen/matcha/flow_matching.py similarity index 100% rename from app/audio/models/s3gen/matcha/flow_matching.py rename to chattractive/vendor/chatterbox/models/s3gen/matcha/flow_matching.py diff --git a/app/audio/models/s3gen/matcha/text_encoder.py b/chattractive/vendor/chatterbox/models/s3gen/matcha/text_encoder.py similarity index 100% rename from app/audio/models/s3gen/matcha/text_encoder.py rename to chattractive/vendor/chatterbox/models/s3gen/matcha/text_encoder.py diff --git a/app/audio/models/s3gen/matcha/transformer.py b/chattractive/vendor/chatterbox/models/s3gen/matcha/transformer.py similarity index 100% rename from app/audio/models/s3gen/matcha/transformer.py rename to chattractive/vendor/chatterbox/models/s3gen/matcha/transformer.py diff --git a/app/audio/models/s3gen/s3gen.py b/chattractive/vendor/chatterbox/models/s3gen/s3gen.py similarity index 100% rename from app/audio/models/s3gen/s3gen.py rename to chattractive/vendor/chatterbox/models/s3gen/s3gen.py diff --git a/app/audio/models/s3gen/transformer/__init__.py b/chattractive/vendor/chatterbox/models/s3gen/transformer/__init__.py similarity index 100% rename from app/audio/models/s3gen/transformer/__init__.py rename to chattractive/vendor/chatterbox/models/s3gen/transformer/__init__.py diff --git a/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/__init__.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/__init__.cpython-311.pyc new file mode 100644 index 000000000..9934eb984 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/__init__.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/activation.cpython-311.pyc 
b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/activation.cpython-311.pyc new file mode 100644 index 000000000..837575ecf Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/activation.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/attention.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/attention.cpython-311.pyc new file mode 100644 index 000000000..33a80b98a Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/attention.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/convolution.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/convolution.cpython-311.pyc new file mode 100644 index 000000000..189e540d0 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/convolution.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/embedding.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/embedding.cpython-311.pyc new file mode 100644 index 000000000..a6d44c632 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/embedding.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/encoder_layer.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/encoder_layer.cpython-311.pyc new file mode 100644 index 000000000..ac1873fea Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/encoder_layer.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/positionwise_feed_forward.cpython-311.pyc 
b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/positionwise_feed_forward.cpython-311.pyc new file mode 100644 index 000000000..31884bf65 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/positionwise_feed_forward.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/subsampling.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/subsampling.cpython-311.pyc new file mode 100644 index 000000000..0c0716676 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/subsampling.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/upsample_encoder.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/upsample_encoder.cpython-311.pyc new file mode 100644 index 000000000..24a2215ba Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/transformer/__pycache__/upsample_encoder.cpython-311.pyc differ diff --git a/app/audio/models/s3gen/transformer/activation.py b/chattractive/vendor/chatterbox/models/s3gen/transformer/activation.py similarity index 100% rename from app/audio/models/s3gen/transformer/activation.py rename to chattractive/vendor/chatterbox/models/s3gen/transformer/activation.py diff --git a/app/audio/models/s3gen/transformer/attention.py b/chattractive/vendor/chatterbox/models/s3gen/transformer/attention.py similarity index 100% rename from app/audio/models/s3gen/transformer/attention.py rename to chattractive/vendor/chatterbox/models/s3gen/transformer/attention.py diff --git a/app/audio/models/s3gen/transformer/convolution.py b/chattractive/vendor/chatterbox/models/s3gen/transformer/convolution.py similarity index 100% rename from app/audio/models/s3gen/transformer/convolution.py rename to chattractive/vendor/chatterbox/models/s3gen/transformer/convolution.py diff --git 
a/app/audio/models/s3gen/transformer/embedding.py b/chattractive/vendor/chatterbox/models/s3gen/transformer/embedding.py similarity index 100% rename from app/audio/models/s3gen/transformer/embedding.py rename to chattractive/vendor/chatterbox/models/s3gen/transformer/embedding.py diff --git a/app/audio/models/s3gen/transformer/encoder_layer.py b/chattractive/vendor/chatterbox/models/s3gen/transformer/encoder_layer.py similarity index 100% rename from app/audio/models/s3gen/transformer/encoder_layer.py rename to chattractive/vendor/chatterbox/models/s3gen/transformer/encoder_layer.py diff --git a/app/audio/models/s3gen/transformer/positionwise_feed_forward.py b/chattractive/vendor/chatterbox/models/s3gen/transformer/positionwise_feed_forward.py similarity index 100% rename from app/audio/models/s3gen/transformer/positionwise_feed_forward.py rename to chattractive/vendor/chatterbox/models/s3gen/transformer/positionwise_feed_forward.py diff --git a/app/audio/models/s3gen/transformer/subsampling.py b/chattractive/vendor/chatterbox/models/s3gen/transformer/subsampling.py similarity index 100% rename from app/audio/models/s3gen/transformer/subsampling.py rename to chattractive/vendor/chatterbox/models/s3gen/transformer/subsampling.py diff --git a/app/audio/models/s3gen/transformer/upsample_encoder.py b/chattractive/vendor/chatterbox/models/s3gen/transformer/upsample_encoder.py similarity index 100% rename from app/audio/models/s3gen/transformer/upsample_encoder.py rename to chattractive/vendor/chatterbox/models/s3gen/transformer/upsample_encoder.py diff --git a/chattractive/vendor/chatterbox/models/s3gen/utils/__pycache__/class_utils.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/utils/__pycache__/class_utils.cpython-311.pyc new file mode 100644 index 000000000..1ef400c47 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/utils/__pycache__/class_utils.cpython-311.pyc differ diff --git 
a/chattractive/vendor/chatterbox/models/s3gen/utils/__pycache__/mask.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/utils/__pycache__/mask.cpython-311.pyc new file mode 100644 index 000000000..111d7f022 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/utils/__pycache__/mask.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3gen/utils/__pycache__/mel.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3gen/utils/__pycache__/mel.cpython-311.pyc new file mode 100644 index 000000000..a46c9ac94 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3gen/utils/__pycache__/mel.cpython-311.pyc differ diff --git a/app/audio/models/s3gen/utils/class_utils.py b/chattractive/vendor/chatterbox/models/s3gen/utils/class_utils.py similarity index 100% rename from app/audio/models/s3gen/utils/class_utils.py rename to chattractive/vendor/chatterbox/models/s3gen/utils/class_utils.py diff --git a/app/audio/models/s3gen/utils/mask.py b/chattractive/vendor/chatterbox/models/s3gen/utils/mask.py similarity index 100% rename from app/audio/models/s3gen/utils/mask.py rename to chattractive/vendor/chatterbox/models/s3gen/utils/mask.py diff --git a/app/audio/models/s3gen/utils/mel.py b/chattractive/vendor/chatterbox/models/s3gen/utils/mel.py similarity index 100% rename from app/audio/models/s3gen/utils/mel.py rename to chattractive/vendor/chatterbox/models/s3gen/utils/mel.py diff --git a/app/audio/models/s3gen/xvector.py b/chattractive/vendor/chatterbox/models/s3gen/xvector.py similarity index 100% rename from app/audio/models/s3gen/xvector.py rename to chattractive/vendor/chatterbox/models/s3gen/xvector.py diff --git a/app/audio/models/s3tokenizer/__init__.py b/chattractive/vendor/chatterbox/models/s3tokenizer/__init__.py similarity index 100% rename from app/audio/models/s3tokenizer/__init__.py rename to chattractive/vendor/chatterbox/models/s3tokenizer/__init__.py diff --git 
a/chattractive/vendor/chatterbox/models/s3tokenizer/__pycache__/__init__.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3tokenizer/__pycache__/__init__.cpython-311.pyc new file mode 100644 index 000000000..215bcd5f8 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3tokenizer/__pycache__/__init__.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/s3tokenizer/__pycache__/s3tokenizer.cpython-311.pyc b/chattractive/vendor/chatterbox/models/s3tokenizer/__pycache__/s3tokenizer.cpython-311.pyc new file mode 100644 index 000000000..ce22c3071 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/s3tokenizer/__pycache__/s3tokenizer.cpython-311.pyc differ diff --git a/app/audio/models/s3tokenizer/s3tokenizer.py b/chattractive/vendor/chatterbox/models/s3tokenizer/s3tokenizer.py similarity index 100% rename from app/audio/models/s3tokenizer/s3tokenizer.py rename to chattractive/vendor/chatterbox/models/s3tokenizer/s3tokenizer.py diff --git a/app/audio/models/t3/__init__.py b/chattractive/vendor/chatterbox/models/t3/__init__.py similarity index 100% rename from app/audio/models/t3/__init__.py rename to chattractive/vendor/chatterbox/models/t3/__init__.py diff --git a/chattractive/vendor/chatterbox/models/t3/__pycache__/__init__.cpython-311.pyc b/chattractive/vendor/chatterbox/models/t3/__pycache__/__init__.cpython-311.pyc new file mode 100644 index 000000000..a80aedc81 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/t3/__pycache__/__init__.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/t3/__pycache__/llama_configs.cpython-311.pyc b/chattractive/vendor/chatterbox/models/t3/__pycache__/llama_configs.cpython-311.pyc new file mode 100644 index 000000000..98c6c87ba Binary files /dev/null and b/chattractive/vendor/chatterbox/models/t3/__pycache__/llama_configs.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/t3/__pycache__/t3.cpython-311.pyc 
b/chattractive/vendor/chatterbox/models/t3/__pycache__/t3.cpython-311.pyc new file mode 100644 index 000000000..ccd874429 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/t3/__pycache__/t3.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/t3/inference/__pycache__/alignment_stream_analyzer.cpython-311.pyc b/chattractive/vendor/chatterbox/models/t3/inference/__pycache__/alignment_stream_analyzer.cpython-311.pyc new file mode 100644 index 000000000..82a954cf2 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/t3/inference/__pycache__/alignment_stream_analyzer.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/t3/inference/__pycache__/t3_hf_backend.cpython-311.pyc b/chattractive/vendor/chatterbox/models/t3/inference/__pycache__/t3_hf_backend.cpython-311.pyc new file mode 100644 index 000000000..608e4ba40 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/t3/inference/__pycache__/t3_hf_backend.cpython-311.pyc differ diff --git a/app/audio/models/t3/inference/alignment_stream_analyzer.py b/chattractive/vendor/chatterbox/models/t3/inference/alignment_stream_analyzer.py similarity index 100% rename from app/audio/models/t3/inference/alignment_stream_analyzer.py rename to chattractive/vendor/chatterbox/models/t3/inference/alignment_stream_analyzer.py diff --git a/app/audio/models/t3/inference/t3_hf_backend.py b/chattractive/vendor/chatterbox/models/t3/inference/t3_hf_backend.py similarity index 100% rename from app/audio/models/t3/inference/t3_hf_backend.py rename to chattractive/vendor/chatterbox/models/t3/inference/t3_hf_backend.py diff --git a/app/audio/models/t3/llama_configs.py b/chattractive/vendor/chatterbox/models/t3/llama_configs.py similarity index 100% rename from app/audio/models/t3/llama_configs.py rename to chattractive/vendor/chatterbox/models/t3/llama_configs.py diff --git a/chattractive/vendor/chatterbox/models/t3/modules/__pycache__/cond_enc.cpython-311.pyc 
b/chattractive/vendor/chatterbox/models/t3/modules/__pycache__/cond_enc.cpython-311.pyc new file mode 100644 index 000000000..8485ec833 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/t3/modules/__pycache__/cond_enc.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/t3/modules/__pycache__/learned_pos_emb.cpython-311.pyc b/chattractive/vendor/chatterbox/models/t3/modules/__pycache__/learned_pos_emb.cpython-311.pyc new file mode 100644 index 000000000..32ed32bb4 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/t3/modules/__pycache__/learned_pos_emb.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/t3/modules/__pycache__/perceiver.cpython-311.pyc b/chattractive/vendor/chatterbox/models/t3/modules/__pycache__/perceiver.cpython-311.pyc new file mode 100644 index 000000000..de9b5d7ba Binary files /dev/null and b/chattractive/vendor/chatterbox/models/t3/modules/__pycache__/perceiver.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/t3/modules/__pycache__/t3_config.cpython-311.pyc b/chattractive/vendor/chatterbox/models/t3/modules/__pycache__/t3_config.cpython-311.pyc new file mode 100644 index 000000000..5dc11a3f0 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/t3/modules/__pycache__/t3_config.cpython-311.pyc differ diff --git a/app/audio/models/t3/modules/cond_enc.py b/chattractive/vendor/chatterbox/models/t3/modules/cond_enc.py similarity index 100% rename from app/audio/models/t3/modules/cond_enc.py rename to chattractive/vendor/chatterbox/models/t3/modules/cond_enc.py diff --git a/app/audio/models/t3/modules/learned_pos_emb.py b/chattractive/vendor/chatterbox/models/t3/modules/learned_pos_emb.py similarity index 100% rename from app/audio/models/t3/modules/learned_pos_emb.py rename to chattractive/vendor/chatterbox/models/t3/modules/learned_pos_emb.py diff --git a/app/audio/models/t3/modules/perceiver.py 
b/chattractive/vendor/chatterbox/models/t3/modules/perceiver.py similarity index 100% rename from app/audio/models/t3/modules/perceiver.py rename to chattractive/vendor/chatterbox/models/t3/modules/perceiver.py diff --git a/app/audio/models/t3/modules/t3_config.py b/chattractive/vendor/chatterbox/models/t3/modules/t3_config.py similarity index 100% rename from app/audio/models/t3/modules/t3_config.py rename to chattractive/vendor/chatterbox/models/t3/modules/t3_config.py diff --git a/app/audio/models/t3/t3.py b/chattractive/vendor/chatterbox/models/t3/t3.py similarity index 100% rename from app/audio/models/t3/t3.py rename to chattractive/vendor/chatterbox/models/t3/t3.py diff --git a/app/audio/models/tokenizers/__init__.py b/chattractive/vendor/chatterbox/models/tokenizers/__init__.py similarity index 100% rename from app/audio/models/tokenizers/__init__.py rename to chattractive/vendor/chatterbox/models/tokenizers/__init__.py diff --git a/chattractive/vendor/chatterbox/models/tokenizers/__pycache__/__init__.cpython-311.pyc b/chattractive/vendor/chatterbox/models/tokenizers/__pycache__/__init__.cpython-311.pyc new file mode 100644 index 000000000..a6d9dd047 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/tokenizers/__pycache__/__init__.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/tokenizers/__pycache__/tokenizer.cpython-311.pyc b/chattractive/vendor/chatterbox/models/tokenizers/__pycache__/tokenizer.cpython-311.pyc new file mode 100644 index 000000000..52f898817 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/tokenizers/__pycache__/tokenizer.cpython-311.pyc differ diff --git a/app/audio/models/tokenizers/tokenizer.py b/chattractive/vendor/chatterbox/models/tokenizers/tokenizer.py similarity index 100% rename from app/audio/models/tokenizers/tokenizer.py rename to chattractive/vendor/chatterbox/models/tokenizers/tokenizer.py diff --git a/app/audio/models/utils.py 
b/chattractive/vendor/chatterbox/models/utils.py similarity index 100% rename from app/audio/models/utils.py rename to chattractive/vendor/chatterbox/models/utils.py diff --git a/app/audio/models/voice_encoder/__init__.py b/chattractive/vendor/chatterbox/models/voice_encoder/__init__.py similarity index 100% rename from app/audio/models/voice_encoder/__init__.py rename to chattractive/vendor/chatterbox/models/voice_encoder/__init__.py diff --git a/chattractive/vendor/chatterbox/models/voice_encoder/__pycache__/__init__.cpython-311.pyc b/chattractive/vendor/chatterbox/models/voice_encoder/__pycache__/__init__.cpython-311.pyc new file mode 100644 index 000000000..ad39bbfea Binary files /dev/null and b/chattractive/vendor/chatterbox/models/voice_encoder/__pycache__/__init__.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/voice_encoder/__pycache__/config.cpython-311.pyc b/chattractive/vendor/chatterbox/models/voice_encoder/__pycache__/config.cpython-311.pyc new file mode 100644 index 000000000..f59eb0d7a Binary files /dev/null and b/chattractive/vendor/chatterbox/models/voice_encoder/__pycache__/config.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/voice_encoder/__pycache__/melspec.cpython-311.pyc b/chattractive/vendor/chatterbox/models/voice_encoder/__pycache__/melspec.cpython-311.pyc new file mode 100644 index 000000000..53d7db764 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/voice_encoder/__pycache__/melspec.cpython-311.pyc differ diff --git a/chattractive/vendor/chatterbox/models/voice_encoder/__pycache__/voice_encoder.cpython-311.pyc b/chattractive/vendor/chatterbox/models/voice_encoder/__pycache__/voice_encoder.cpython-311.pyc new file mode 100644 index 000000000..90094ce96 Binary files /dev/null and b/chattractive/vendor/chatterbox/models/voice_encoder/__pycache__/voice_encoder.cpython-311.pyc differ diff --git a/app/audio/models/voice_encoder/config.py 
b/chattractive/vendor/chatterbox/models/voice_encoder/config.py
similarity index 100%
rename from app/audio/models/voice_encoder/config.py
rename to chattractive/vendor/chatterbox/models/voice_encoder/config.py
diff --git a/app/audio/models/voice_encoder/melspec.py b/chattractive/vendor/chatterbox/models/voice_encoder/melspec.py
similarity index 100%
rename from app/audio/models/voice_encoder/melspec.py
rename to chattractive/vendor/chatterbox/models/voice_encoder/melspec.py
diff --git a/app/audio/models/voice_encoder/voice_encoder.py b/chattractive/vendor/chatterbox/models/voice_encoder/voice_encoder.py
similarity index 100%
rename from app/audio/models/voice_encoder/voice_encoder.py
rename to chattractive/vendor/chatterbox/models/voice_encoder/voice_encoder.py
diff --git a/env.example b/env.example
index f7bdbb099..55e2e48dd 100644
--- a/env.example
+++ b/env.example
@@ -1,16 +1,14 @@
-# Telegram Bot
 TELEGRAM_BOT_TOKEN=your_telegram_bot_token_here
-# Примечание: В Docker контейнере бот использует SERVER_URL=http://server:8000
-
 GOOGLE_API_KEY = google_api_key_here
-
 ADMIN_GROUP_ID = admin_group_id_here
 
 # Дополнительные настройки (необязательные)
+# ANTISLEEP=TRUE
 # DATA_DIRECTORY=./data
 # DATABASE_PATH=chattractive.db
 # GEMINI_MODEL=gemini-2.0-flash-exp
-# AUDIO_MODEL_DIR=./my_custom_models # optional override; defaults to ./models
+# AUDIO_MODEL_DIR=./my_custom_models
 # VOICE_DEVICE=cpu
 # VOICE_LANGUAGE=ru
+VOICE_PATH=voices/voice_example.wav
diff --git a/examples/example_for_mac.py b/examples/example_for_mac.py
deleted file mode 100644
index 585430919..000000000
--- a/examples/example_for_mac.py
+++ /dev/null
@@ -1,28 +0,0 @@
-import torch
-import torchaudio as ta
-from audio.tts import ChatterboxTTS
-
-# Detect device (Mac with M1/M2/M3/M4)
-device = "mps" if torch.backends.mps.is_available() else "cpu"
-map_location = torch.device(device)
-
-torch_load_original = torch.load
-def patched_torch_load(*args, **kwargs):
-    if 'map_location' not in kwargs:
-        kwargs['map_location'] = map_location
-    return torch_load_original(*args, **kwargs)
-
-torch.load = patched_torch_load
-
-model = ChatterboxTTS.from_pretrained(device=device)
-text = "Today is the day. I want to move like a titan at dawn, sweat like a god forging lightning. No more excuses. From now on, my mornings will be temples of discipline. I am going to work out like the gods… every damn day."
-
-# If you want to synthesize with a different voice, specify the audio prompt
-AUDIO_PROMPT_PATH = "YOUR_FILE.wav"
-wav = model.generate(
-    text,
-    audio_prompt_path=AUDIO_PROMPT_PATH,
-    exaggeration=2.0,
-    cfg_weight=0.5
-)
-ta.save("test-2.wav", wav, model.sr)
diff --git a/examples/example_tts.py b/examples/example_tts.py
deleted file mode 100644
index ac0df77f6..000000000
--- a/examples/example_tts.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import torchaudio as ta
-import torch
-from audio.tts import ChatterboxTTS
-from audio.mtl_tts import ChatterboxMultilingualTTS
-
-# Automatically detect the best available device
-if torch.cuda.is_available():
-    device = "cuda"
-elif torch.backends.mps.is_available():
-    device = "mps"
-else:
-    device = "cpu"
-
-print(f"Using device: {device}")
-
-model = ChatterboxTTS.from_pretrained(device=device)
-
-text = "Ezreal and Jinx teamed up with Ahri, Yasuo, and Teemo to take down the enemy's Nexus in an epic late-game pentakill."
-wav = model.generate(text)
-ta.save("test-1.wav", wav, model.sr)
-
-multilingual_model = ChatterboxMultilingualTTS.from_pretrained(device=device)
-text = "Bonjour, comment ça va? Ceci est le modèle de synthèse vocale multilingue Chatterbox, il prend en charge 23 langues."
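The deleted `example_for_mac.py` above monkeypatched `torch.load` so every call received a `map_location` default. That wrapping pattern is independent of torch; a minimal generic sketch (the helper name `with_default_kwargs` is illustrative) might look like:

```python
import functools

def with_default_kwargs(func, **defaults):
    """Return a wrapper that fills in missing keyword arguments with defaults."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for key, value in defaults.items():
            kwargs.setdefault(key, value)  # explicit call-site kwargs still win
        return func(*args, **kwargs)
    return wrapper
```

Applied to the deleted example, `torch.load = with_default_kwargs(torch.load, map_location=map_location)` would reproduce its behavior in one line.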
-wav = multilingual_model.generate(text, language_id="fr") -ta.save("test-2.wav", wav, multilingual_model.sr) - - -# If you want to synthesize with a different voice, specify the audio prompt -AUDIO_PROMPT_PATH = "YOUR_FILE.wav" -wav = model.generate(text, audio_prompt_path=AUDIO_PROMPT_PATH) -ta.save("test-3.wav", wav, model.sr) diff --git a/examples/example_vc.py b/examples/example_vc.py deleted file mode 100644 index c65f590a1..000000000 --- a/examples/example_vc.py +++ /dev/null @@ -1,24 +0,0 @@ -import torch -import torchaudio as ta - -from audio.vc import ChatterboxVC - -# Automatically detect the best available device -if torch.cuda.is_available(): - device = "cuda" -elif torch.backends.mps.is_available(): - device = "mps" -else: - device = "cpu" - -print(f"Using device: {device}") - -AUDIO_PATH = "YOUR_FILE.wav" -TARGET_VOICE_PATH = "YOUR_FILE.wav" - -model = ChatterboxVC.from_pretrained(device) -wav = model.generate( - audio=AUDIO_PATH, - target_voice_path=TARGET_VOICE_PATH, -) -ta.save("testvc.wav", wav, model.sr) diff --git a/examples/gradio_tts_app.py b/examples/gradio_tts_app.py deleted file mode 100644 index eeb746425..000000000 --- a/examples/gradio_tts_app.py +++ /dev/null @@ -1,93 +0,0 @@ -import random -import numpy as np -import torch -import gradio as gr -from audio.tts import ChatterboxTTS - - -DEVICE = "cuda" if torch.cuda.is_available() else "cpu" - - -def set_seed(seed: int): - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - random.seed(seed) - np.random.seed(seed) - - -def load_model(): - model = ChatterboxTTS.from_pretrained(DEVICE) - return model - - -def generate(model, text, audio_prompt_path, exaggeration, temperature, seed_num, cfgw, min_p, top_p, repetition_penalty): - if model is None: - model = ChatterboxTTS.from_pretrained(DEVICE) - - if seed_num != 0: - set_seed(int(seed_num)) - - wav = model.generate( - text, - audio_prompt_path=audio_prompt_path, - exaggeration=exaggeration, - 
temperature=temperature, - cfg_weight=cfgw, - min_p=min_p, - top_p=top_p, - repetition_penalty=repetition_penalty, - ) - return (model.sr, wav.squeeze(0).numpy()) - - -with gr.Blocks() as demo: - model_state = gr.State(None) # Loaded once per session/user - - with gr.Row(): - with gr.Column(): - text = gr.Textbox( - value="Now let's make my mum's favourite. So three mars bars into the pan. Then we add the tuna and just stir for a bit, just let the chocolate and fish infuse. A sprinkle of olive oil and some tomato ketchup. Now smell that. Oh boy this is going to be incredible.", - label="Text to synthesize (max chars 300)", - max_lines=5 - ) - ref_wav = gr.Audio(sources=["upload", "microphone"], type="filepath", label="Reference Audio File", value=None) - exaggeration = gr.Slider(0.25, 2, step=.05, label="Exaggeration (Neutral = 0.5, extreme values can be unstable)", value=.5) - cfg_weight = gr.Slider(0.0, 1, step=.05, label="CFG/Pace", value=0.5) - - with gr.Accordion("More options", open=False): - seed_num = gr.Number(value=0, label="Random seed (0 for random)") - temp = gr.Slider(0.05, 5, step=.05, label="temperature", value=.8) - min_p = gr.Slider(0.00, 1.00, step=0.01, label="min_p || Newer Sampler. Recommend 0.02 > 0.1. Handles Higher Temperatures better. 0.00 Disables", value=0.05) - top_p = gr.Slider(0.00, 1.00, step=0.01, label="top_p || Original Sampler. 1.0 Disables(recommended). 
Original 0.8", value=1.00) - repetition_penalty = gr.Slider(1.00, 2.00, step=0.1, label="repetition_penalty", value=1.2) - - run_btn = gr.Button("Generate", variant="primary") - - with gr.Column(): - audio_output = gr.Audio(label="Output Audio") - - demo.load(fn=load_model, inputs=[], outputs=model_state) - - run_btn.click( - fn=generate, - inputs=[ - model_state, - text, - ref_wav, - exaggeration, - temp, - seed_num, - cfg_weight, - min_p, - top_p, - repetition_penalty, - ], - outputs=audio_output, - ) - -if __name__ == "__main__": - demo.queue( - max_size=50, - default_concurrency_limit=1, - ).launch(share=True) diff --git a/examples/gradio_vc_app.py b/examples/gradio_vc_app.py deleted file mode 100644 index bfff9f1eb..000000000 --- a/examples/gradio_vc_app.py +++ /dev/null @@ -1,27 +0,0 @@ -import torch -import gradio as gr -from audio.vc import ChatterboxVC - - -DEVICE = "cuda" if torch.cuda.is_available() else "cpu" - - -model = ChatterboxVC.from_pretrained(DEVICE) -def generate(audio, target_voice_path): - wav = model.generate( - audio, target_voice_path=target_voice_path, - ) - return model.sr, wav.squeeze(0).numpy() - - -demo = gr.Interface( - generate, - [ - gr.Audio(sources=["upload", "microphone"], type="filepath", label="Input audio file"), - gr.Audio(sources=["upload", "microphone"], type="filepath", label="Target voice audio file (if none, the default voice is used)", value=None), - ], - "audio", -) - -if __name__ == "__main__": - demo.launch() diff --git a/examples/multilingual_app.py b/examples/multilingual_app.py deleted file mode 100644 index 449c462a3..000000000 --- a/examples/multilingual_app.py +++ /dev/null @@ -1,317 +0,0 @@ -import random -import numpy as np -import torch -from audio.mtl_tts import ChatterboxMultilingualTTS, SUPPORTED_LANGUAGES -import gradio as gr - -DEVICE = "cuda" if torch.cuda.is_available() else "cpu" -print(f"🚀 Running on device: {DEVICE}") - -# --- Global Model Initialization --- -MODEL = None - -LANGUAGE_CONFIG = { - 
"ar": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/ar_f/ar_prompts2.flac", - "text": "في الشهر الماضي، وصلنا إلى معلم جديد بمليارين من المشاهدات على قناتنا على يوتيوب." - }, - "da": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/da_m1.flac", - "text": "Sidste måned nåede vi en ny milepæl med to milliarder visninger på vores YouTube-kanal." - }, - "de": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/de_f1.flac", - "text": "Letzten Monat haben wir einen neuen Meilenstein erreicht: zwei Milliarden Aufrufe auf unserem YouTube-Kanal." - }, - "el": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/el_m.flac", - "text": "Τον περασμένο μήνα, φτάσαμε σε ένα νέο ορόσημο με δύο δισεκατομμύρια προβολές στο κανάλι μας στο YouTube." - }, - "en": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/en_f1.flac", - "text": "Last month, we reached a new milestone with two billion views on our YouTube channel." - }, - "es": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/es_f1.flac", - "text": "El mes pasado alcanzamos un nuevo hito: dos mil millones de visualizaciones en nuestro canal de YouTube." - }, - "fi": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/fi_m.flac", - "text": "Viime kuussa saavutimme uuden virstanpylvään kahden miljardin katselukerran kanssa YouTube-kanavallamme." - }, - "fr": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/fr_f1.flac", - "text": "Le mois dernier, nous avons atteint un nouveau jalon avec deux milliards de vues sur notre chaîne YouTube." - }, - "he": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/he_m1.flac", - "text": "בחודש שעבר הגענו לאבן דרך חדשה עם שני מיליארד צפיות בערוץ היוטיוב שלנו." 
- }, - "hi": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/hi_f1.flac", - "text": "पिछले महीने हमने एक नया मील का पत्थर छुआ: हमारे YouTube चैनल पर दो अरब व्यूज़।" - }, - "it": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/it_m1.flac", - "text": "Il mese scorso abbiamo raggiunto un nuovo traguardo: due miliardi di visualizzazioni sul nostro canale YouTube." - }, - "ja": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/ja/ja_prompts1.flac", - "text": "先月、私たちのYouTubeチャンネルで二十億回の再生回数という新たなマイルストーンに到達しました。" - }, - "ko": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/ko_f.flac", - "text": "지난달 우리는 유튜브 채널에서 이십억 조회수라는 새로운 이정표에 도달했습니다." - }, - "ms": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/ms_f.flac", - "text": "Bulan lepas, kami mencapai pencapaian baru dengan dua bilion tontonan di saluran YouTube kami." - }, - "nl": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/nl_m.flac", - "text": "Vorige maand bereikten we een nieuwe mijlpaal met twee miljard weergaven op ons YouTube-kanaal." - }, - "no": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/no_f1.flac", - "text": "Forrige måned nådde vi en ny milepæl med to milliarder visninger på YouTube-kanalen vår." - }, - "pl": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/pl_m.flac", - "text": "W zeszłym miesiącu osiągnęliśmy nowy kamień milowy z dwoma miliardami wyświetleń na naszym kanale YouTube." - }, - "pt": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/pt_m1.flac", - "text": "No mês passado, alcançámos um novo marco: dois mil milhões de visualizações no nosso canal do YouTube." 
- }, - "ru": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/ru_m.flac", - "text": "В прошлом месяце мы достигли нового рубежа: два миллиарда просмотров на нашем YouTube-канале." - }, - "sv": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/sv_f.flac", - "text": "Förra månaden nådde vi en ny milstolpe med två miljarder visningar på vår YouTube-kanal." - }, - "sw": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/sw_m.flac", - "text": "Mwezi uliopita, tulifika hatua mpya ya maoni ya bilioni mbili kweny kituo chetu cha YouTube." - }, - "tr": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/tr_m.flac", - "text": "Geçen ay YouTube kanalımızda iki milyar görüntüleme ile yeni bir dönüm noktasına ulaştık." - }, - "zh": { - "audio": "https://storage.googleapis.com/chatterbox-demo-samples/mtl_prompts/zh_f2.flac", - "text": "上个月,我们达到了一个新的里程碑. 我们的YouTube频道观看次数达到了二十亿次,这绝对令人难以置信。" - }, -} - -# --- UI Helpers --- -def default_audio_for_ui(lang: str) -> str | None: - return LANGUAGE_CONFIG.get(lang, {}).get("audio") - - -def default_text_for_ui(lang: str) -> str: - return LANGUAGE_CONFIG.get(lang, {}).get("text", "") - - -def get_supported_languages_display() -> str: - """Generate a formatted display of all supported languages.""" - language_items = [] - for code, name in sorted(SUPPORTED_LANGUAGES.items()): - language_items.append(f"**{name}** (`{code}`)") - - # Split into 2 lines - mid = len(language_items) // 2 - line1 = " • ".join(language_items[:mid]) - line2 = " • ".join(language_items[mid:]) - - return f""" -### 🌍 Supported Languages ({len(SUPPORTED_LANGUAGES)} total) -{line1} - -{line2} -""" - - -def get_or_load_model(): - """Loads the ChatterboxMultilingualTTS model if it hasn't been loaded already, - and ensures it's on the correct device.""" - global MODEL - if MODEL is None: - print("Model not loaded, initializing...") - try: - MODEL = 
ChatterboxMultilingualTTS.from_pretrained(DEVICE) - if hasattr(MODEL, 'to') and str(MODEL.device) != DEVICE: - MODEL.to(DEVICE) - print(f"Model loaded successfully. Internal device: {getattr(MODEL, 'device', 'N/A')}") - except Exception as e: - print(f"Error loading model: {e}") - raise - return MODEL - -# Attempt to load the model at startup. -try: - get_or_load_model() -except Exception as e: - print(f"CRITICAL: Failed to load model on startup. Application may not function. Error: {e}") - -def set_seed(seed: int): - """Sets the random seed for reproducibility across torch, numpy, and random.""" - torch.manual_seed(seed) - if DEVICE == "cuda": - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - random.seed(seed) - np.random.seed(seed) - -def resolve_audio_prompt(language_id: str, provided_path: str | None) -> str | None: - """ - Decide which audio prompt to use: - - If user provided a path (upload/mic/url), use it. - - Else, fall back to language-specific default (if any). - """ - if provided_path and str(provided_path).strip(): - return provided_path - return LANGUAGE_CONFIG.get(language_id, {}).get("audio") - - -def generate_tts_audio( - text_input: str, - language_id: str, - audio_prompt_path_input: str = None, - exaggeration_input: float = 0.5, - temperature_input: float = 0.8, - seed_num_input: int = 0, - cfgw_input: float = 0.5 -) -> tuple[int, np.ndarray]: - """ - Generate high-quality speech audio from text using Chatterbox Multilingual model with optional reference audio styling. - Supported languages: English, French, German, Spanish, Italian, Portuguese, and Hindi. - - This tool synthesizes natural-sounding speech from input text. When a reference audio file - is provided, it captures the speaker's voice characteristics and speaking style. The generated audio - maintains the prosody, tone, and vocal qualities of the reference speaker, or uses default voice if no reference is provided. 
- - Args: - text_input (str): The text to synthesize into speech (maximum 300 characters) - language_id (str): The language code for synthesis (eg. en, fr, de, es, it, pt, hi) - audio_prompt_path_input (str, optional): File path or URL to the reference audio file that defines the target voice style. Defaults to None. - exaggeration_input (float, optional): Controls speech expressiveness (0.25-2.0, neutral=0.5, extreme values may be unstable). Defaults to 0.5. - temperature_input (float, optional): Controls randomness in generation (0.05-5.0, higher=more varied). Defaults to 0.8. - seed_num_input (int, optional): Random seed for reproducible results (0 for random generation). Defaults to 0. - cfgw_input (float, optional): CFG/Pace weight controlling generation guidance (0.2-1.0). Defaults to 0.5, 0 for language transfer. - - Returns: - tuple[int, np.ndarray]: A tuple containing the sample rate (int) and the generated audio waveform (numpy.ndarray) - """ - current_model = get_or_load_model() - - if current_model is None: - raise RuntimeError("TTS model is not loaded.") - - if seed_num_input != 0: - set_seed(int(seed_num_input)) - - print(f"Generating audio for text: '{text_input[:50]}...'") - - # Handle optional audio prompt - chosen_prompt = audio_prompt_path_input or default_audio_for_ui(language_id) - - generate_kwargs = { - "exaggeration": exaggeration_input, - "temperature": temperature_input, - "cfg_weight": cfgw_input, - } - if chosen_prompt: - generate_kwargs["audio_prompt_path"] = chosen_prompt - print(f"Using audio prompt: {chosen_prompt}") - else: - print("No audio prompt provided; using default voice.") - - wav = current_model.generate( - text_input[:300], # Truncate text to max chars - language_id=language_id, - **generate_kwargs - ) - print("Audio generation complete.") - return (current_model.sr, wav.squeeze(0).numpy()) - -with gr.Blocks() as demo: - gr.Markdown( - """ - # Chatterbox Multilingual Demo - Generate high-quality multilingual speech from 
text with reference audio styling, supporting 23 languages. - """ - ) - - # Display supported languages - gr.Markdown(get_supported_languages_display()) - with gr.Row(): - with gr.Column(): - initial_lang = "fr" - text = gr.Textbox( - value=default_text_for_ui(initial_lang), - label="Text to synthesize (max chars 300)", - max_lines=5 - ) - - language_id = gr.Dropdown( - choices=list(ChatterboxMultilingualTTS.get_supported_languages().keys()), - value=initial_lang, - label="Language", - info="Select the language for text-to-speech synthesis" - ) - - ref_wav = gr.Audio( - sources=["upload", "microphone"], - type="filepath", - label="Reference Audio File (Optional)", - value=default_audio_for_ui(initial_lang) - ) - - gr.Markdown( - "💡 **Note**: Ensure that the reference clip matches the specified language tag. Otherwise, language transfer outputs may inherit the accent of the reference clip's language. To mitigate this, set the CFG weight to 0.", - elem_classes=["audio-note"] - ) - - exaggeration = gr.Slider( - 0.25, 2, step=.05, label="Exaggeration (Neutral = 0.5, extreme values can be unstable)", value=.5 - ) - cfg_weight = gr.Slider( - 0.2, 1, step=.05, label="CFG/Pace", value=0.5 - ) - - with gr.Accordion("More options", open=False): - seed_num = gr.Number(value=0, label="Random seed (0 for random)") - temp = gr.Slider(0.05, 5, step=.05, label="Temperature", value=.8) - - run_btn = gr.Button("Generate", variant="primary") - - with gr.Column(): - audio_output = gr.Audio(label="Output Audio") - - def on_language_change(lang, current_ref, current_text): - return default_audio_for_ui(lang), default_text_for_ui(lang) - - language_id.change( - fn=on_language_change, - inputs=[language_id, ref_wav, text], - outputs=[ref_wav, text], - show_progress=False - ) - - run_btn.click( - fn=generate_tts_audio, - inputs=[ - text, - language_id, - ref_wav, - exaggeration, - temp, - seed_num, - cfg_weight, - ], - outputs=[audio_output], - ) - -demo.launch(mcp_server=True) diff --git 
a/main.py b/main.py
index 84811154c..0eb61ebe4 100644
--- a/main.py
+++ b/main.py
@@ -7,7 +7,7 @@
 from dotenv import load_dotenv
 
-from app.bot.bot import BotConfig, TelegramBot
+from chattractive.bot.bot import BotConfig, TelegramBot
 from load_model import ensure_model_present, missing_required_files, resolve_model_dir
 
@@ -24,9 +24,17 @@ def _get_env(name: str, *, required: bool = True) -> str:
     return value or ""
 
 
+def _parse_bool_env(value: str | None) -> bool:
+    if not value:
+        return False
+    return value.strip().upper() in {"TRUE", "1", "YES", "ON"}
+
+
 async def main() -> None:
     load_dotenv()
 
+    antisleep_enabled = _parse_bool_env(_get_env("ANTISLEEP", required=False))
+
     token = _get_env("TELEGRAM_BOT_TOKEN")
     api_key = _get_env("GOOGLE_API_KEY")
     admin_group_id_str = _get_env("ADMIN_GROUP_ID")
@@ -67,11 +75,24 @@ async def main() -> None:
         voice_language=voice_language,
     )
 
+    guard = None
+    if antisleep_enabled:
+        try:
+            from chattractive.antisleep import AntiSleepGuard
+
+            guard = AntiSleepGuard()
+            guard.enable()
+        except Exception:  # pragma: no cover - platform dependent
+            logging.exception("Failed to enable anti-sleep guard")
+            guard = None
+
     bot = TelegramBot(config, api_key)
     try:
         await bot.start()
     finally:
         await bot.close()
+        if guard:
+            guard.disable()
 
 
 if __name__ == "__main__":
diff --git a/pyproject.toml b/pyproject.toml
index 119bdd689..0c4d15f8d 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,12 +1,12 @@
 [project]
-name = "chatterbox-tts"
-version = "0.1.4"
-description = "Chatterbox: Open Source TTS and Voice Conversion by Resemble AI"
+name = "chattractive"
+version = "0.1.0"
+description = "Chattractive Telegram assistant with Gemini RAG and Chatterbox TTS"
 readme = "README.md"
 requires-python = ">=3.10"
 license = {file = "LICENSE"}
 authors = [
-    {name = "resemble-ai", email = "engineering@resemble.ai"}
+    {name = "Chattractive Team"}
 ]
 dependencies = [
     "numpy>=1.24.0,<1.26.0",
@@ -32,13 +32,12 @@ dependencies = [
 gradio = ["gradio==3.50.2"]
 
 [project.urls]
-Homepage = "https://github.com/resemble-ai/chatterbox"
-Repository = "https://github.com/resemble-ai/chatterbox"
+Repository = "https://github.com/Chattractive/Chattractive"
 
 [build-system]
 requires = ["setuptools>=61.0"]
 build-backend = "setuptools.build_meta"
 
 [tool.setuptools.packages.find]
-where = ["app"]
+where = ["."]
diff --git a/venv311/CHANGES.rst b/venv311/CHANGES.rst
deleted file mode 100644
index 05cca100e..000000000
--- a/venv311/CHANGES.rst
+++ /dev/null
@@ -1,76 +0,0 @@
-CHANGES
-=======
-
-0.4.0 (2024-7-26)
--------------------
-- Add stub files according to PEP 561 for mypy (thanks @ernix)
-
-0.3.4 (2023-2-18)
--------------------
-- Fix to support Python2.7 ~ 3.4 (thanks @manjuu-eater)
-- Support Python 3.11
-
-0.3.3 (2022-12-31)
--------------------
-- Support Python 3.10
-- Re-support Python2.7 ~ 3.4 (thanks @manjuu-eater)
-- Fix z2h, h2z all flag off bug (thanks @manjuu-eater)
-
-0.3.1 (2022-12-14)
--------------------
-- Fix alpha2kana infinite loop bug (thanks @frog42)
-
-0.3 (2021-03-29)
--------------------
-- Fix bug (alphabet2kana) thanks @Cuddlemuffin007
-- Support Python 3.8 and 3.9
-- Add handy functions: alphabet2kata and kata2alphabet.
thanks @kokimame -- Add function for julius: hiragana2julius - -0.2.4 (2018-02-04) -------------------- -- Fix bug (kana2alphabet) -- Support Python 3.7 -- No longer support Python 2.6 -- Add aliases of z2h -> zenkaku2hankaku and h2z -> hankaku2zenkaku - -0.2.3 (2018-02-03) -------------------- -- Fix bugs (alphabet2kana, kana2alphabet) thanks @letuananh - -0.2.2 (2018-01-22) -------------------- -- Fix bug (kana2alphabet) thanks @kokimame -- Support Python 3.6 - -0.2.1 (2017-09-14) -------------------- -- Fix bugs (alphabet2kana, kana2alphabet) - -0.2 (2015-04-02) ------------------- - -- Change module name jctconv -> jaconv -- Add alphabet and hiragana interconvert (alphabet2kana, kana2alphabet) - -0.1.1 (2015-03-12) ------------------- - -- Support Windows -- Support Python 3.5 - - -0.1 (2014-11-24) ------------------- - -- Add some Japanese characters to convert table (ゝゞ・「」。、) -- Decresing memory usage -- Some function names are deprecated (hankaku2zenkaku, zenkaku2hankaku, H2K, H2hK, K2H) - - -0.0.7 (2014-03-22) ------------------- - -z2h and h2z allow mojimoji-like target character type determination. -Bug fix about Half Kana conversion. - diff --git a/venv311/README.rst b/venv311/README.rst deleted file mode 100644 index 81da1ebeb..000000000 --- a/venv311/README.rst +++ /dev/null @@ -1,161 +0,0 @@ -jaconv -========== -|coveralls| |pyversion| |version| |license| |download| - -jaconv (Japanese Converter) is interconverter for Hiragana, Katakana, Hankaku (half-width character) and Zenkaku (full-width character) - -`Japanese README `_ is available. - -INSTALLATION -============== - -:: - - $ pip install jaconv - - -USAGE -============ - -See also `document `_ - -.. 
code:: python - - import jaconv - - # Hiragana to Katakana - jaconv.hira2kata('ともえまみ') - # => 'トモエマミ' - - # Hiragana to half-width Katakana - jaconv.hira2hkata('ともえまみ') - # => 'トモエマミ' - - # Katakana to Hiragana - jaconv.kata2hira('巴マミ') - # => '巴まみ' - - # half-width character to full-width character - # default parameters are followings: kana=True, ascii=False, digit=False - jaconv.h2z('ティロ・フィナーレ') - # => 'ティロ・フィナーレ' - - # half-width character to full-width character - # but only ascii characters - jaconv.h2z('abc', kana=False, ascii=True, digit=False) - # => 'abc' - - # half-width character to full-width character - # but only digit characters - jaconv.h2z('123', kana=False, ascii=False, digit=True) - # => '123' - - # half-width character to full-width character - # except half-width Katakana - jaconv.h2z('アabc123', kana=False, digit=True, ascii=True) - # => 'アabc123' - - # an alias of h2z - jaconv.hankaku2zenkaku('ティロ・フィナーレabc123') - # => 'ティロ・フィナーレabc123' - - # full-width character to half-width character - # default parameters are followings: kana=True, ascii=False, digit=False - jaconv.z2h('ティロ・フィナーレ') - # => 'ティロ・フィナーレ' - - # full-width character to half-width character - # but only ascii characters - jaconv.z2h('abc', kana=False, ascii=True, digit=False) - # => 'abc' - - # full-width character to half-width character - # but only digit characters - jaconv.z2h('123', kana=False, ascii=False, digit=True) - # => '123' - - # full-width character to half-width character - # except full-width Katakana - jaconv.z2h('アabc123', kana=False, digit=True, ascii=True) - # => 'アabc123' - - # an alias of z2h - jaconv.zenkaku2hankaku('ティロ・フィナーレabc123') - # => 'ティロ・フィナーレabc123' - - # normalize - jaconv.normalize('ティロ・フィナ〜レ', 'NFKC') - # => 'ティロ・フィナーレ' - - # Hiragana to alphabet - jaconv.kana2alphabet('じゃぱん') - # => 'japan' - - # Alphabet to Hiragana - jaconv.alphabet2kana('japan') - # => 'じゃぱん' - - # Katakana to Alphabet - jaconv.kata2alphabet('ケツイ') - # => 'ketsui' - - # 
Alphabet to Katakana - jaconv.alphabet2kata('namba') - # => 'ナンバ' - - # Hiragana to Julius's phoneme format - jaconv.hiragana2julius('てんきすごくいいいいいい') - # => 't e N k i s u g o k u i:' - - -NOTE -============ - -jaconv.normalize method expand unicodedata.normalize for Japanese language processing. - -.. code:: - - '〜' => 'ー' - '~' => 'ー' - "’" => "'" - '”'=> '"' - '“' => '``' - '―' => '-' - '‐' => '-' - '˗' => '-' - '֊' => '-' - '‐' => '-' - '‑' => '-' - '‒' => '-' - '–' => '-' - '⁃' => '-' - '⁻' => '-' - '₋' => '-' - '−' => '-' - '﹣' => 'ー' - '-' => 'ー' - '—' => 'ー' - '―' => 'ー' - '━' => 'ー' - '─' => 'ー' - - - - -.. |coveralls| image:: https://coveralls.io/repos/ikegami-yukino/jaconv/badge.svg?branch=master&service=github - :target: https://coveralls.io/github/ikegami-yukino/jaconv?branch=master - :alt: coveralls.io - -.. |pyversion| image:: https://img.shields.io/pypi/pyversions/jaconv.svg - -.. |version| image:: https://img.shields.io/pypi/v/jaconv.svg - :target: http://pypi.python.org/pypi/jaconv/ - :alt: latest version - -.. |license| image:: https://img.shields.io/pypi/l/jaconv.svg - :target: http://pypi.python.org/pypi/jaconv/ - :alt: license - -.. |download| image:: https://static.pepy.tech/personalized-badge/neologdn?period=total&units=international_system&left_color=black&right_color=blue&left_text=Downloads - :target: https://pepy.tech/project/neologdn - :alt: download diff --git a/venv311/Scripts/Activate.ps1 b/venv311/Scripts/Activate.ps1 deleted file mode 100644 index f694cda08..000000000 --- a/venv311/Scripts/Activate.ps1 +++ /dev/null @@ -1,502 +0,0 @@ -<# -.Synopsis -Activate a Python virtual environment for the current PowerShell session. - -.Description -Pushes the python executable for a virtual environment to the front of the -$Env:PATH environment variable and sets the prompt to signify that you are -in a Python virtual environment. 
Makes use of the command line switches as -well as the `pyvenv.cfg` file values present in the virtual environment. - -.Parameter VenvDir -Path to the directory that contains the virtual environment to activate. The -default value for this is the parent of the directory that the Activate.ps1 -script is located within. - -.Parameter Prompt -The prompt prefix to display when this virtual environment is activated. By -default, this prompt is the name of the virtual environment folder (VenvDir) -surrounded by parentheses and followed by a single space (ie. '(.venv) '). - -.Example -Activate.ps1 -Activates the Python virtual environment that contains the Activate.ps1 script. - -.Example -Activate.ps1 -Verbose -Activates the Python virtual environment that contains the Activate.ps1 script, -and shows extra information about the activation as it executes. - -.Example -Activate.ps1 -VenvDir C:\Users\MyUser\Common\.venv -Activates the Python virtual environment located in the specified location. - -.Example -Activate.ps1 -Prompt "MyPython" -Activates the Python virtual environment that contains the Activate.ps1 script, -and prefixes the current prompt with the specified string (surrounded in -parentheses) while the virtual environment is active. - -.Notes -On Windows, it may be required to enable this Activate.ps1 script by setting the -execution policy for the user. 
You can do this by issuing the following PowerShell -command: - -PS C:\> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser - -For more information on Execution Policies: -https://go.microsoft.com/fwlink/?LinkID=135170 - -#> -Param( - [Parameter(Mandatory = $false)] - [String] - $VenvDir, - [Parameter(Mandatory = $false)] - [String] - $Prompt -) - -<# Function declarations --------------------------------------------------- #> - -<# -.Synopsis -Remove all shell session elements added by the Activate script, including the -addition of the virtual environment's Python executable from the beginning of -the PATH variable. - -.Parameter NonDestructive -If present, do not remove this function from the global namespace for the -session. - -#> -function global:deactivate ([switch]$NonDestructive) { - # Revert to original values - - # The prior prompt: - if (Test-Path -Path Function:_OLD_VIRTUAL_PROMPT) { - Copy-Item -Path Function:_OLD_VIRTUAL_PROMPT -Destination Function:prompt - Remove-Item -Path Function:_OLD_VIRTUAL_PROMPT - } - - # The prior PYTHONHOME: - if (Test-Path -Path Env:_OLD_VIRTUAL_PYTHONHOME) { - Copy-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME -Destination Env:PYTHONHOME - Remove-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME - } - - # The prior PATH: - if (Test-Path -Path Env:_OLD_VIRTUAL_PATH) { - Copy-Item -Path Env:_OLD_VIRTUAL_PATH -Destination Env:PATH - Remove-Item -Path Env:_OLD_VIRTUAL_PATH - } - - # Just remove the VIRTUAL_ENV altogether: - if (Test-Path -Path Env:VIRTUAL_ENV) { - Remove-Item -Path env:VIRTUAL_ENV - } - - # Just remove VIRTUAL_ENV_PROMPT altogether. 
-    if (Test-Path -Path Env:VIRTUAL_ENV_PROMPT) {
-        Remove-Item -Path env:VIRTUAL_ENV_PROMPT
-    }
-
-    # Just remove the _PYTHON_VENV_PROMPT_PREFIX altogether:
-    if (Get-Variable -Name "_PYTHON_VENV_PROMPT_PREFIX" -ErrorAction SilentlyContinue) {
-        Remove-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Scope Global -Force
-    }
-
-    # Leave deactivate function in the global namespace if requested:
-    if (-not $NonDestructive) {
-        Remove-Item -Path function:deactivate
-    }
-}
-
-<#
-.Description
-Get-PyVenvConfig parses the values from the pyvenv.cfg file located in the
-given folder, and returns them in a map.
-
-For each line in the pyvenv.cfg file, if that line can be parsed into exactly
-two strings separated by `=` (with any amount of whitespace surrounding the =)
-then it is considered a `key = value` line. The left hand string is the key,
-the right hand is the value.
-
-If the value starts with a `'` or a `"` then the first and last character is
-stripped from the value before being captured.
-
-.Parameter ConfigDir
-Path to the directory that contains the `pyvenv.cfg` file.
-#>
-function Get-PyVenvConfig(
-    [String]
-    $ConfigDir
-) {
-    Write-Verbose "Given ConfigDir=$ConfigDir, obtain values in pyvenv.cfg"
-
-    # Ensure the file exists, and issue a warning if it doesn't (but still allow the function to continue).
-    $pyvenvConfigPath = Join-Path -Resolve -Path $ConfigDir -ChildPath 'pyvenv.cfg' -ErrorAction Continue
-
-    # An empty map will be returned if no config file is found.
-    $pyvenvConfig = @{ }
-
-    if ($pyvenvConfigPath) {
-
-        Write-Verbose "File exists, parse `key = value` lines"
-        $pyvenvConfigContent = Get-Content -Path $pyvenvConfigPath
-
-        $pyvenvConfigContent | ForEach-Object {
-            $keyval = $PSItem -split "\s*=\s*", 2
-            if ($keyval[0] -and $keyval[1]) {
-                $val = $keyval[1]
-
-                # Remove extraneous quotations around a string value.
-                if ("'""".Contains($val.Substring(0, 1))) {
-                    $val = $val.Substring(1, $val.Length - 2)
-                }
-
-                $pyvenvConfig[$keyval[0]] = $val
-                Write-Verbose "Adding Key: '$($keyval[0])'='$val'"
-            }
-        }
-    }
-    return $pyvenvConfig
-}
-
-
-<# Begin Activate script --------------------------------------------------- #>
-
-# Determine the containing directory of this script
-$VenvExecPath = Split-Path -Parent $MyInvocation.MyCommand.Definition
-$VenvExecDir = Get-Item -Path $VenvExecPath
-
-Write-Verbose "Activation script is located in path: '$VenvExecPath'"
-Write-Verbose "VenvExecDir Fullname: '$($VenvExecDir.FullName)"
-Write-Verbose "VenvExecDir Name: '$($VenvExecDir.Name)"
-
-# Set values required in priority: CmdLine, ConfigFile, Default
-# First, get the location of the virtual environment, it might not be
-# VenvExecDir if specified on the command line.
-if ($VenvDir) {
-    Write-Verbose "VenvDir given as parameter, using '$VenvDir' to determine values"
-}
-else {
-    Write-Verbose "VenvDir not given as a parameter, using parent directory name as VenvDir."
-    $VenvDir = $VenvExecDir.Parent.FullName.TrimEnd("\\/")
-    Write-Verbose "VenvDir=$VenvDir"
-}
-
-# Next, read the `pyvenv.cfg` file to determine any required value such
-# as `prompt`.
-$pyvenvCfg = Get-PyVenvConfig -ConfigDir $VenvDir
-
-# Next, set the prompt from the command line, or the config file, or
-# just use the name of the virtual environment folder.
-if ($Prompt) {
-    Write-Verbose "Prompt specified as argument, using '$Prompt'"
-}
-else {
-    Write-Verbose "Prompt not specified as argument to script, checking pyvenv.cfg value"
-    if ($pyvenvCfg -and $pyvenvCfg['prompt']) {
-        Write-Verbose "  Setting based on value in pyvenv.cfg='$($pyvenvCfg['prompt'])'"
-        $Prompt = $pyvenvCfg['prompt'];
-    }
-    else {
-        Write-Verbose "  Setting prompt based on parent's directory's name. (Is the directory name passed to venv module when creating the virtual environment)"
-        Write-Verbose "  Got leaf-name of $VenvDir='$(Split-Path -Path $venvDir -Leaf)'"
-        $Prompt = Split-Path -Path $venvDir -Leaf
-    }
-}
-
-Write-Verbose "Prompt = '$Prompt'"
-Write-Verbose "VenvDir='$VenvDir'"
-
-# Deactivate any currently active virtual environment, but leave the
-# deactivate function in place.
-deactivate -nondestructive
-
-# Now set the environment variable VIRTUAL_ENV, used by many tools to determine
-# that there is an activated venv.
-$env:VIRTUAL_ENV = $VenvDir
-
-if (-not $Env:VIRTUAL_ENV_DISABLE_PROMPT) {
-
-    Write-Verbose "Setting prompt to '$Prompt'"
-
-    # Set the prompt to include the env name
-    # Make sure _OLD_VIRTUAL_PROMPT is global
-    function global:_OLD_VIRTUAL_PROMPT { "" }
-    Copy-Item -Path function:prompt -Destination function:_OLD_VIRTUAL_PROMPT
-    New-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Description "Python virtual environment prompt prefix" -Scope Global -Option ReadOnly -Visibility Public -Value $Prompt
-
-    function global:prompt {
-        Write-Host -NoNewline -ForegroundColor Green "($_PYTHON_VENV_PROMPT_PREFIX) "
-        _OLD_VIRTUAL_PROMPT
-    }
-    $env:VIRTUAL_ENV_PROMPT = $Prompt
-}
-
-# Clear PYTHONHOME
-if (Test-Path -Path Env:PYTHONHOME) {
-    Copy-Item -Path Env:PYTHONHOME -Destination Env:_OLD_VIRTUAL_PYTHONHOME
-    Remove-Item -Path Env:PYTHONHOME
-}
-
-# Add the venv to the PATH
-Copy-Item -Path Env:PATH -Destination Env:_OLD_VIRTUAL_PATH
-$Env:PATH = "$VenvExecDir$([System.IO.Path]::PathSeparator)$Env:PATH"
-
-# SIG # Begin signature block
-# SIG # End signature block
diff --git a/venv311/Scripts/activate b/venv311/Scripts/activate
deleted file mode 100644
index 4ab2f2e14..000000000
--- a/venv311/Scripts/activate
+++ /dev/null
@@ -1,63 +0,0 @@
-# This file must be used with "source bin/activate" *from bash*
-# you cannot run it directly
-
-deactivate () {
-    # reset old environment variables
-    if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then
-        PATH="${_OLD_VIRTUAL_PATH:-}"
-        export PATH
-        unset _OLD_VIRTUAL_PATH
-    fi
-    if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then
-        PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}"
-        export PYTHONHOME
-        unset _OLD_VIRTUAL_PYTHONHOME
-    fi
-
-    # Call hash to forget past commands. Without forgetting
-    # past commands the $PATH changes we made may not be respected
-    hash -r 2> /dev/null
-
-    if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then
-        PS1="${_OLD_VIRTUAL_PS1:-}"
-        export PS1
-        unset _OLD_VIRTUAL_PS1
-    fi
-
-    unset VIRTUAL_ENV
-    unset VIRTUAL_ENV_PROMPT
-    if [ ! "${1:-}" = "nondestructive" ] ; then
-    # Self destruct!
-        unset -f deactivate
-    fi
-}
-
-# unset irrelevant variables
-deactivate nondestructive
-
-VIRTUAL_ENV="C:\Users\JGSnapp\Desktop\Chattractive\venv311"
-export VIRTUAL_ENV
-
-_OLD_VIRTUAL_PATH="$PATH"
-PATH="$VIRTUAL_ENV/Scripts:$PATH"
-export PATH
-
-# unset PYTHONHOME if set
-# this will fail if PYTHONHOME is set to the empty string (which is bad anyway)
-# could use `if (set -u; : $PYTHONHOME) ;` in bash
-if [ -n "${PYTHONHOME:-}" ] ; then
-    _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}"
-    unset PYTHONHOME
-fi
-
-if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then
-    _OLD_VIRTUAL_PS1="${PS1:-}"
-    PS1="(venv311) ${PS1:-}"
-    export PS1
-    VIRTUAL_ENV_PROMPT="(venv311) "
-    export VIRTUAL_ENV_PROMPT
-fi
-
-# Call hash to forget past commands. Without forgetting
-# past commands the $PATH changes we made may not be respected
-hash -r 2> /dev/null
diff --git a/venv311/Scripts/activate.bat b/venv311/Scripts/activate.bat
deleted file mode 100644
index bcbf4d66a..000000000
--- a/venv311/Scripts/activate.bat
+++ /dev/null
@@ -1,34 +0,0 @@
-@echo off
-
-rem This file is UTF-8 encoded, so we need to update the current code page while executing it
-for /f "tokens=2 delims=:." %%a in ('"%SystemRoot%\System32\chcp.com"') do (
-    set _OLD_CODEPAGE=%%a
-)
-if defined _OLD_CODEPAGE (
-    "%SystemRoot%\System32\chcp.com" 65001 > nul
-)
-
-set VIRTUAL_ENV=C:\Users\JGSnapp\Desktop\Chattractive\venv311
-
-if not defined PROMPT set PROMPT=$P$G
-
-if defined _OLD_VIRTUAL_PROMPT set PROMPT=%_OLD_VIRTUAL_PROMPT%
-if defined _OLD_VIRTUAL_PYTHONHOME set PYTHONHOME=%_OLD_VIRTUAL_PYTHONHOME%
-
-set _OLD_VIRTUAL_PROMPT=%PROMPT%
-set PROMPT=(venv311) %PROMPT%
-
-if defined PYTHONHOME set _OLD_VIRTUAL_PYTHONHOME=%PYTHONHOME%
-set PYTHONHOME=
-
-if defined _OLD_VIRTUAL_PATH set PATH=%_OLD_VIRTUAL_PATH%
-if not defined _OLD_VIRTUAL_PATH set _OLD_VIRTUAL_PATH=%PATH%
-
-set PATH=%VIRTUAL_ENV%\Scripts;%PATH%
-set VIRTUAL_ENV_PROMPT=(venv311)
-
-:END
-if defined _OLD_CODEPAGE (
-    "%SystemRoot%\System32\chcp.com" %_OLD_CODEPAGE% > nul
-    set _OLD_CODEPAGE=
-)
diff --git a/venv311/Scripts/backend-test-tools.exe b/venv311/Scripts/backend-test-tools.exe
deleted file mode 100644
index 582454ef2..000000000
Binary files a/venv311/Scripts/backend-test-tools.exe and /dev/null differ
diff --git a/venv311/Scripts/check-model.exe b/venv311/Scripts/check-model.exe
deleted file mode 100644
index 4dfef7a40..000000000
Binary files a/venv311/Scripts/check-model.exe and /dev/null differ
diff --git a/venv311/Scripts/check-node.exe b/venv311/Scripts/check-node.exe
deleted file mode 100644
index 30b3430e2..000000000
Binary files a/venv311/Scripts/check-node.exe and /dev/null differ
diff --git a/venv311/Scripts/deactivate.bat b/venv311/Scripts/deactivate.bat
deleted file mode 100644
index 62a39a758..000000000
--- a/venv311/Scripts/deactivate.bat
+++ /dev/null
@@ -1,22 +0,0 @@
-@echo off
-
-if defined _OLD_VIRTUAL_PROMPT (
-    set "PROMPT=%_OLD_VIRTUAL_PROMPT%"
-)
-set _OLD_VIRTUAL_PROMPT=
-
-if defined _OLD_VIRTUAL_PYTHONHOME (
-    set "PYTHONHOME=%_OLD_VIRTUAL_PYTHONHOME%"
-    set _OLD_VIRTUAL_PYTHONHOME=
-)
-
-if defined _OLD_VIRTUAL_PATH (
-    set "PATH=%_OLD_VIRTUAL_PATH%"
-)
-
-set _OLD_VIRTUAL_PATH=
-
-set VIRTUAL_ENV=
-set VIRTUAL_ENV_PROMPT=
-
-:END
diff --git a/venv311/Scripts/diffusers-cli.exe b/venv311/Scripts/diffusers-cli.exe
deleted file mode 100644
index d823d9aa1..000000000
Binary files a/venv311/Scripts/diffusers-cli.exe and /dev/null differ
diff --git a/venv311/Scripts/dotenv.exe b/venv311/Scripts/dotenv.exe
deleted file mode 100644
index 4cbe66806..000000000
Binary files a/venv311/Scripts/dotenv.exe and /dev/null differ
diff --git a/venv311/Scripts/f2py.exe b/venv311/Scripts/f2py.exe
deleted file mode 100644
index c703b9ec5..000000000
Binary files a/venv311/Scripts/f2py.exe and /dev/null differ
diff --git a/venv311/Scripts/hf.exe b/venv311/Scripts/hf.exe
deleted file mode 100644
index a9e5f94e7..000000000
Binary files a/venv311/Scripts/hf.exe and /dev/null differ
diff --git a/venv311/Scripts/huggingface-cli.exe b/venv311/Scripts/huggingface-cli.exe
deleted file mode 100644
index 463c96e41..000000000
Binary files a/venv311/Scripts/huggingface-cli.exe and /dev/null differ
diff --git a/venv311/Scripts/identify-cli.exe b/venv311/Scripts/identify-cli.exe
deleted file mode 100644
index 200b4ef79..000000000
Binary files a/venv311/Scripts/identify-cli.exe and /dev/null differ
diff --git a/venv311/Scripts/isympy.exe b/venv311/Scripts/isympy.exe
deleted file mode 100644
index 0d754b50c..000000000
Binary files a/venv311/Scripts/isympy.exe and /dev/null differ
diff --git a/venv311/Scripts/kakasi.exe b/venv311/Scripts/kakasi.exe
deleted file mode 100644
index 07b405fe4..000000000
Binary files a/venv311/Scripts/kakasi.exe and /dev/null differ
diff --git a/venv311/Scripts/nodeenv.exe b/venv311/Scripts/nodeenv.exe
deleted file mode 100644
index 5f938c00e..000000000
Binary files a/venv311/Scripts/nodeenv.exe and /dev/null differ
diff --git a/venv311/Scripts/normalizer.exe b/venv311/Scripts/normalizer.exe
deleted file mode 100644
index 9297f670d..000000000
Binary files a/venv311/Scripts/normalizer.exe and /dev/null differ
diff --git a/venv311/Scripts/num2words b/venv311/Scripts/num2words
deleted file mode 100644
index c56382d83..000000000
--- a/venv311/Scripts/num2words
+++ /dev/null
@@ -1,95 +0,0 @@
-#!C:\Users\JGSnapp\Desktop\Chattractive\venv311\Scripts\python.exe
-# -*- coding: utf-8 -*-
-# Copyright (c) 2003, Taro Ogawa. All Rights Reserved.
-# Copyright (c) 2013, Savoir-faire Linux inc. All Rights Reserved.
-
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
-# MA 02110-1301 USA
-
-"""num2words: convert numbers into words.
-
-Usage:
-    num2words [options] <number>
-    num2words --list-languages
-    num2words --list-converters
-    num2words --help
-
-Arguments:
-    <number>                Number you want to convert into words
-
-Options:
-    -L --list-languages     Show all languages.
-    -C --list-converters    Show all converters.
-    -l --lang=<lang>        Output language [default: en].
-    -t --to=<to>            Output converter [default: cardinal].
-    -h --help               Show this message.
-    -v --version            Show version.
-
-Examples:
-    $ num2words 10001
-    ten thousand and one
-
-    $ num2words 24,120.10
-    twenty-four thousand, one hundred and twenty point one
-
-    $ num2words 24,120.10 -l es
-    veinticuatro mil ciento veinte punto uno
-
-    $num2words 2.14 -l es --to currency
-    dos euros con catorce céntimos
-"""
-
-from __future__ import print_function, unicode_literals
-import os
-import sys
-from docopt import docopt
-import num2words
-
-__version__ = "0.5.14"
-__license__ = "LGPL"
-
-
-def get_languages():
-    return sorted(list(num2words.CONVERTER_CLASSES.keys()))
-
-
-def get_converters():
-    return sorted(list(num2words.CONVERTES_TYPES))
-
-
-def main():
-    version = "{}=={}".format(os.path.basename(__file__), __version__)
-    args = docopt(__doc__, argv=None, help=True, version=version, options_first=False)
-    if args["--list-languages"]:
-        for lang in get_languages():
-            sys.stdout.write(lang)
-            sys.stdout.write(os.linesep)
-        sys.exit(0)
-    if args["--list-converters"]:
-        for cvt in get_converters():
-            sys.stdout.write(cvt)
-            sys.stdout.write(os.linesep)
-        sys.exit(0)
-    try:
-        words = num2words.num2words(args['<number>'], lang=args['--lang'], to=args['--to'])
-        sys.stdout.write(words + os.linesep)
-        sys.exit(0)
-    except Exception as err:
-        sys.stderr.write(str(args['<number>']))
-        sys.stderr.write(str(err) + os.linesep)
-        sys.stderr.write(__doc__)
-        sys.exit(1)
-
-
-if __name__ == '__main__':
-    main()
diff --git a/venv311/Scripts/numba b/venv311/Scripts/numba
deleted file mode 100644
index 2f14ff19c..000000000
--- a/venv311/Scripts/numba
+++ /dev/null
@@ -1,8 +0,0 @@
-#!C:\Users\JGSnapp\Desktop\Chattractive\venv311\Scripts\python.exe
-# -*- coding: UTF-8 -*-
-from __future__ import print_function, division, absolute_import
-
-from numba.misc.numba_entry import main
-
-if __name__ == "__main__":
-    main()
diff --git a/venv311/Scripts/pathy.exe b/venv311/Scripts/pathy.exe
deleted file mode 100644
index b899fb014..000000000
Binary files a/venv311/Scripts/pathy.exe and /dev/null differ
diff --git a/venv311/Scripts/perth.exe b/venv311/Scripts/perth.exe
deleted file mode 100644
index aee1f7dec..000000000
Binary files a/venv311/Scripts/perth.exe and /dev/null differ
diff --git a/venv311/Scripts/pip.exe b/venv311/Scripts/pip.exe
deleted file mode 100644
index 8a5709fbb..000000000
Binary files a/venv311/Scripts/pip.exe and /dev/null differ
diff --git a/venv311/Scripts/pip3.11.exe b/venv311/Scripts/pip3.11.exe
deleted file mode 100644
index 8a5709fbb..000000000
Binary files a/venv311/Scripts/pip3.11.exe and /dev/null differ
diff --git a/venv311/Scripts/pip3.exe b/venv311/Scripts/pip3.exe
deleted file mode 100644
index 8a5709fbb..000000000
Binary files a/venv311/Scripts/pip3.exe and /dev/null differ
diff --git a/venv311/Scripts/pre-commit.exe b/venv311/Scripts/pre-commit.exe
deleted file mode 100644
index 0e2ff2c6f..000000000
Binary files a/venv311/Scripts/pre-commit.exe and /dev/null differ
diff --git a/venv311/Scripts/pymorphy.exe b/venv311/Scripts/pymorphy.exe
deleted file mode 100644
index fa389499a..000000000
Binary files a/venv311/Scripts/pymorphy.exe and /dev/null differ
diff --git a/venv311/Scripts/pyrsa-decrypt.exe b/venv311/Scripts/pyrsa-decrypt.exe
deleted file mode 100644
index 6f0fd938b..000000000
Binary files a/venv311/Scripts/pyrsa-decrypt.exe and /dev/null differ
diff --git a/venv311/Scripts/pyrsa-encrypt.exe b/venv311/Scripts/pyrsa-encrypt.exe
deleted file mode 100644
index 4d559da71..000000000
Binary files a/venv311/Scripts/pyrsa-encrypt.exe and /dev/null differ
diff --git a/venv311/Scripts/pyrsa-keygen.exe b/venv311/Scripts/pyrsa-keygen.exe
deleted file mode 100644
index e7d4f2c25..000000000
Binary files a/venv311/Scripts/pyrsa-keygen.exe and /dev/null differ
diff --git a/venv311/Scripts/pyrsa-priv2pub.exe b/venv311/Scripts/pyrsa-priv2pub.exe
deleted file mode 100644
index 7ee6e628b..000000000
Binary files a/venv311/Scripts/pyrsa-priv2pub.exe and /dev/null differ
diff --git a/venv311/Scripts/pyrsa-sign.exe b/venv311/Scripts/pyrsa-sign.exe
deleted file mode 100644
index 25f75891f..000000000
Binary files a/venv311/Scripts/pyrsa-sign.exe and /dev/null differ
diff --git a/venv311/Scripts/pyrsa-verify.exe b/venv311/Scripts/pyrsa-verify.exe
deleted file mode 100644
index 0e3f33884..000000000
Binary files a/venv311/Scripts/pyrsa-verify.exe and /dev/null differ
diff --git a/venv311/Scripts/python.exe b/venv311/Scripts/python.exe
deleted file mode 100644
index 38d286b8e..000000000
Binary files a/venv311/Scripts/python.exe and /dev/null differ
diff --git a/venv311/Scripts/pythonw.exe b/venv311/Scripts/pythonw.exe
deleted file mode 100644
index 95c64ef66..000000000
Binary files a/venv311/Scripts/pythonw.exe and /dev/null differ
diff --git a/venv311/Scripts/s3tokenizer.exe b/venv311/Scripts/s3tokenizer.exe
deleted file mode 100644
index 6e4abd2db..000000000
Binary files a/venv311/Scripts/s3tokenizer.exe and /dev/null differ
diff --git a/venv311/Scripts/spacy.exe b/venv311/Scripts/spacy.exe
deleted file mode 100644
index e73cba122..000000000
Binary files a/venv311/Scripts/spacy.exe and /dev/null differ
diff --git a/venv311/Scripts/tiny-agents.exe b/venv311/Scripts/tiny-agents.exe
deleted file mode 100644
index f9f5e1ffb..000000000
Binary files a/venv311/Scripts/tiny-agents.exe and /dev/null differ
diff --git a/venv311/Scripts/torchfrtrace.exe b/venv311/Scripts/torchfrtrace.exe
deleted file mode 100644
index 0563ae064..000000000
Binary files a/venv311/Scripts/torchfrtrace.exe and /dev/null differ
diff --git a/venv311/Scripts/torchrun.exe b/venv311/Scripts/torchrun.exe
deleted file mode 100644
index e539840c4..000000000
Binary files a/venv311/Scripts/torchrun.exe and /dev/null differ
diff --git a/venv311/Scripts/tqdm.exe b/venv311/Scripts/tqdm.exe
deleted file mode 100644
index 39577ff0f..000000000
Binary files a/venv311/Scripts/tqdm.exe and /dev/null differ
diff --git a/venv311/Scripts/transformers-cli.exe b/venv311/Scripts/transformers-cli.exe
deleted file mode 100644
index b650932d9..000000000
Binary files a/venv311/Scripts/transformers-cli.exe and /dev/null differ
diff --git a/venv311/Scripts/virtualenv.exe b/venv311/Scripts/virtualenv.exe
deleted file mode 100644
index 034501df6..000000000
Binary files a/venv311/Scripts/virtualenv.exe and /dev/null differ
diff --git a/venv311/pyvenv.cfg b/venv311/pyvenv.cfg
deleted file mode 100644
index 8a330e828..000000000
--- a/venv311/pyvenv.cfg
+++ /dev/null
@@ -1,5 +0,0 @@
-home = C:\Users\JGSnapp\AppData\Local\Programs\Python\Python311
-include-system-site-packages = false
-version = 3.11.7
-executable = C:\Users\JGSnapp\AppData\Local\Programs\Python\Python311\python.exe
-command = C:\Users\JGSnapp\AppData\Local\Programs\Python\Python311\python.exe -m venv C:\Users\JGSnapp\Desktop\Chattractive\venv311
diff --git a/venv311/share/man/man1/isympy.1 b/venv311/share/man/man1/isympy.1
deleted file mode 100644
index 0ff966158..000000000
--- a/venv311/share/man/man1/isympy.1
+++ /dev/null
@@ -1,188 +0,0 @@
-'\" -*- coding: us-ascii -*-
-.if \n(.g .ds T< \\FC
-.if \n(.g .ds T> \\F[\n[.fam]]
-.de URL
-\\$2 \(la\\$1\(ra\\$3
-..
-.if \n(.g .mso www.tmac
-.TH isympy 1 2007-10-8 "" ""
-.SH NAME
-isympy \- interactive shell for SymPy
-.SH SYNOPSIS
-'nh
-.fi
-.ad l
-\fBisympy\fR \kx
-.if (\nx>(\n(.l/2)) .nr x (\n(.l/5)
-'in \n(.iu+\nxu
-[\fB-c\fR | \fB--console\fR] [\fB-p\fR ENCODING | \fB--pretty\fR ENCODING] [\fB-t\fR TYPE | \fB--types\fR TYPE] [\fB-o\fR ORDER | \fB--order\fR ORDER] [\fB-q\fR | \fB--quiet\fR] [\fB-d\fR | \fB--doctest\fR] [\fB-C\fR | \fB--no-cache\fR] [\fB-a\fR | \fB--auto\fR] [\fB-D\fR | \fB--debug\fR] [ -- | PYTHONOPTIONS]
-'in \n(.iu-\nxu
-.ad b
-'hy
-'nh
-.fi
-.ad l
-\fBisympy\fR \kx
-.if (\nx>(\n(.l/2)) .nr x (\n(.l/5)
-'in \n(.iu+\nxu
-[
-{\fB-h\fR | \fB--help\fR}
-|
-{\fB-v\fR | \fB--version\fR}
-]
-'in \n(.iu-\nxu
-.ad b
-'hy
-.SH DESCRIPTION
-isympy is a Python shell for SymPy. It is just a normal python shell
-(ipython shell if you have the ipython package installed) that executes
-the following commands so that you don't have to:
-.PP
-.nf
-\*(T<
->>> from __future__ import division
->>> from sympy import *
->>> x, y, z = symbols("x,y,z")
->>> k, m, n = symbols("k,m,n", integer=True)
- \*(T>
-.fi
-.PP
-So starting isympy is equivalent to starting python (or ipython) and
-executing the above commands by hand. It is intended for easy and quick
-experimentation with SymPy. For more complicated programs, it is recommended
-to write a script and import things explicitly (using the "from sympy
-import sin, log, Symbol, ..." idiom).
-.SH OPTIONS
-.TP
-\*(T<\fB\-c \fR\*(T>\fISHELL\fR, \*(T<\fB\-\-console=\fR\*(T>\fISHELL\fR
-Use the specified shell (python or ipython) as
-console backend instead of the default one (ipython
-if present or python otherwise).
-
-Example: isympy -c python
-
-\fISHELL\fR could be either
-\&'ipython' or 'python'
-.TP
-\*(T<\fB\-p \fR\*(T>\fIENCODING\fR, \*(T<\fB\-\-pretty=\fR\*(T>\fIENCODING\fR
-Setup pretty printing in SymPy. By default, the most pretty, unicode
-printing is enabled (if the terminal supports it). You can use less
-pretty ASCII printing instead or no pretty printing at all.
-
-Example: isympy -p no
-
-\fIENCODING\fR must be one of 'unicode',
-\&'ascii' or 'no'.
-.TP
-\*(T<\fB\-t \fR\*(T>\fITYPE\fR, \*(T<\fB\-\-types=\fR\*(T>\fITYPE\fR
-Setup the ground types for the polys. By default, gmpy ground types
-are used if gmpy2 or gmpy is installed, otherwise it falls back to python
-ground types, which are a little bit slower. You can manually
-choose python ground types even if gmpy is installed (e.g., for testing purposes).
-
-Note that sympy ground types are not supported, and should be used
-only for experimental purposes.
-
-Note that the gmpy1 ground type is primarily intended for testing; it forces the
-use of gmpy even if gmpy2 is available.
- -This is the same as setting the environment variable -SYMPY_GROUND_TYPES to the given ground type (e.g., -SYMPY_GROUND_TYPES='gmpy') - -The ground types can be determined interactively from the variable -sympy.polys.domains.GROUND_TYPES inside the isympy shell itself. - -Example: isympy -t python - -\fITYPE\fR must be one of 'gmpy', -\&'gmpy1' or 'python'. -.TP -\*(T<\fB\-o \fR\*(T>\fIORDER\fR, \*(T<\fB\-\-order=\fR\*(T>\fIORDER\fR -Setup the ordering of terms for printing. The default is lex, which -orders terms lexicographically (e.g., x**2 + x + 1). You can choose -other orderings, such as rev-lex, which will use reverse -lexicographic ordering (e.g., 1 + x + x**2). - -Note that for very large expressions, ORDER='none' may speed up -printing considerably, with the tradeoff that the order of the terms -in the printed expression will have no canonical order - -Example: isympy -o rev-lax - -\fIORDER\fR must be one of 'lex', 'rev-lex', 'grlex', -\&'rev-grlex', 'grevlex', 'rev-grevlex', 'old', or 'none'. -.TP -\*(T<\fB\-q\fR\*(T>, \*(T<\fB\-\-quiet\fR\*(T> -Print only Python's and SymPy's versions to stdout at startup, and nothing else. -.TP -\*(T<\fB\-d\fR\*(T>, \*(T<\fB\-\-doctest\fR\*(T> -Use the same format that should be used for doctests. This is -equivalent to '\fIisympy -c python -p no\fR'. -.TP -\*(T<\fB\-C\fR\*(T>, \*(T<\fB\-\-no\-cache\fR\*(T> -Disable the caching mechanism. Disabling the cache may slow certain -operations down considerably. This is useful for testing the cache, -or for benchmarking, as the cache can result in deceptive benchmark timings. - -This is the same as setting the environment variable SYMPY_USE_CACHE -to 'no'. -.TP -\*(T<\fB\-a\fR\*(T>, \*(T<\fB\-\-auto\fR\*(T> -Automatically create missing symbols. Normally, typing a name of a -Symbol that has not been instantiated first would raise NameError, -but with this option enabled, any undefined name will be -automatically created as a Symbol. This only works in IPython 0.11. 
- -Note that this is intended only for interactive, calculator style -usage. In a script that uses SymPy, Symbols should be instantiated -at the top, so that it's clear what they are. - -This will not override any names that are already defined, which -includes the single character letters represented by the mnemonic -QCOSINE (see the "Gotchas and Pitfalls" document in the -documentation). You can delete existing names by executing "del -name" in the shell itself. You can see if a name is defined by typing -"'name' in globals()". - -The Symbols that are created using this have default assumptions. -If you want to place assumptions on symbols, you should create them -using symbols() or var(). - -Finally, this only works in the top level namespace. So, for -example, if you define a function in isympy with an undefined -Symbol, it will not work. -.TP -\*(T<\fB\-D\fR\*(T>, \*(T<\fB\-\-debug\fR\*(T> -Enable debugging output. This is the same as setting the -environment variable SYMPY_DEBUG to 'True'. The debug status is set -in the variable SYMPY_DEBUG within isympy. -.TP --- \fIPYTHONOPTIONS\fR -These options will be passed on to \fIipython (1)\fR shell. -Only supported when ipython is being used (standard python shell not supported). - -Two dashes (--) are required to separate \fIPYTHONOPTIONS\fR -from the other isympy options. - -For example, to run iSymPy without startup banner and colors: - -isympy -q -c ipython -- --colors=NoColor -.TP -\*(T<\fB\-h\fR\*(T>, \*(T<\fB\-\-help\fR\*(T> -Print help output and exit. -.TP -\*(T<\fB\-v\fR\*(T>, \*(T<\fB\-\-version\fR\*(T> -Print isympy version information and exit. -.SH FILES -.TP -\*(T<\fI${HOME}/.sympy\-history\fR\*(T> -Saves the history of commands when using the python -shell as backend. -.SH BUGS -The upstreams BTS can be found at \(lahttps://github.com/sympy/sympy/issues\(ra -Please report all bugs that you find in there, this will help improve -the overall quality of SymPy. 
-.SH "SEE ALSO" -\fBipython\fR(1), \fBpython\fR(1) diff --git a/voices/voice_example.wav b/voices/voice_example.wav new file mode 100644 index 000000000..4c27fa2a8 Binary files /dev/null and b/voices/voice_example.wav differ
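
The bulk of this diff simply removes a virtualenv (`venv311/`, including its `Scripts/*.exe` binaries and vendored man pages) that had been committed to the repository, and the `.gitignore` changes keep it out of future commits. As a rough sketch of the recipe this corresponds to (the commit message is illustrative, not the one actually used in this PR):

```shell
# Stop tracking the committed virtualenv, but keep it on disk
git rm -r --cached venv311/

# Ignore it from now on
echo "venv311/" >> .gitignore
git add .gitignore

git commit -m "Remove committed virtualenv from version control"
```

Note that `git rm --cached` only removes the files from the index and future commits; the binaries remain in earlier history unless the branch is rewritten (e.g. with a history-filtering tool).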