vCon Server is a powerful conversation processing and storage system that enables advanced analysis and management of conversation data. It provides a flexible pipeline for processing, storing, and analyzing conversations through various modules and integrations.
Prerequisites:

- Docker and Docker Compose
- Git
- Python 3.12 or higher (for local development)
- Poetry (for local development)
For a quick start using the automated installation script:

```bash
# Download the installation script
curl -O https://raw.githubusercontent.com/vcon-dev/vcon-server/main/scripts/install_conserver.sh
chmod +x install_conserver.sh

# Run the installation script
sudo ./install_conserver.sh --domain your-domain.com --email your-email@example.com
```
- Clone the repository:

  ```bash
  git clone https://github.com/vcon-dev/vcon-server.git
  cd vcon-server
  ```

- Create and configure the environment file:

  ```bash
  cp .env.example .env
  # Edit .env with your configuration
  ```

- Create the Docker network:

  ```bash
  docker network create conserver
  ```

- Build and start the services:

  ```bash
  docker compose build
  docker compose up -d
  ```
The repository includes an automated installation script that handles the complete setup process. The script:
- Installs required dependencies
- Sets up Docker and Docker Compose
- Configures the environment
- Deploys the services
- Sets up monitoring
To use the automated installation:
```bash
./scripts/install_conserver.sh --domain your-domain.com --email your-email@example.com [--token YOUR_API_TOKEN]
```
Options:

- `--domain`: Your domain name (required)
- `--email`: Email for DNS registration (required)
- `--token`: API token (optional; a random token is generated if not provided)
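If you prefer to supply your own token rather than rely on the script's random fallback, any high-entropy random string works. A minimal sketch using Python's standard `secrets` module (the exact scheme `install_conserver.sh` uses internally is an assumption, not documented here):

```python
# Generate a random API token suitable for --token / CONSERVER_API_TOKEN.
# Illustrative only; the install script's own fallback format may differ.
import secrets

def generate_api_token(nbytes: int = 32) -> str:
    """Return a hex-encoded random token with nbytes of entropy."""
    return secrets.token_hex(nbytes)

print(f"CONSERVER_API_TOKEN={generate_api_token()}")
```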
Create a `.env` file in the root directory with the following variables:

```bash
REDIS_URL=redis://redis
CONSERVER_API_TOKEN=your_api_token
CONSERVER_CONFIG_FILE=./config.yml
GROQ_API_KEY=your_groq_api_key
DNS_HOST=your-domain.com
DNS_REGISTRATION_EMAIL=your-email@example.com
```
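A missing variable is a common source of startup failures, so it can help to pre-flight the file before bringing the stack up. A small sketch that checks the variables listed above are all defined (this helper is illustrative and not part of vcon-server):

```python
# Pre-flight check: confirm .env defines every variable listed above.
# Illustrative only; vcon-server does not ship this helper.
REQUIRED = [
    "REDIS_URL",
    "CONSERVER_API_TOKEN",
    "CONSERVER_CONFIG_FILE",
    "GROQ_API_KEY",
    "DNS_HOST",
    "DNS_REGISTRATION_EMAIL",
]

def missing_vars(env_text: str) -> list[str]:
    """Return required variable names absent from the given .env contents."""
    defined = {
        line.split("=", 1)[0].strip()
        for line in env_text.splitlines()
        if "=" in line and not line.lstrip().startswith("#")
    }
    return [name for name in REQUIRED if name not in defined]

print(missing_vars("REDIS_URL=redis://redis"))  # lists the five still-missing names
```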
The `config.yml` file defines the processing pipeline, storage options, and chain configurations. Here's an example configuration:

```yaml
links:
  webhook_store_call_log:
    module: links.webhook
    options:
      webhook-urls:
        - https://example.com/conserver
  deepgram_link:
    module: links.deepgram_link
    options:
      DEEPGRAM_KEY: your_deepgram_key
      minimum_duration: 30
      api:
        model: "nova-2"
        smart_format: true
        detect_language: true
  summarize:
    module: links.analyze
    options:
      OPENAI_API_KEY: your_openai_key
      prompt: "Summarize this transcript..."
      analysis_type: summary
      model: 'gpt-4'

storages:
  postgres:
    module: storage.postgres
    options:
      user: postgres
      password: your_password
      host: your_host
      port: "5432"
      database: postgres
  s3:
    module: storage.s3
    options:
      aws_access_key_id: your_key
      aws_secret_access_key: your_secret
      aws_bucket: your_bucket

chains:
  main_chain:
    links:
      - deepgram_link
      - summarize
      - webhook_store_call_log
    storages:
      - postgres
      - s3
    enabled: 1
```
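A chain only works if every link and storage it names is defined in the corresponding top-level section. A small sketch of that cross-reference check, operating on a parsed config dict (this helper is illustrative, not a vcon-server API):

```python
# Sanity-check a parsed config: every link/storage referenced by a chain
# must be defined in the top-level links/storages sections.
def validate_config(cfg: dict) -> list[str]:
    """Return a list of cross-reference errors found in the config."""
    links = set(cfg.get("links", {}))
    storages = set(cfg.get("storages", {}))
    errors: list[str] = []
    for name, chain in cfg.get("chains", {}).items():
        errors += [f"chain {name!r}: undefined link {link!r}"
                   for link in chain.get("links", []) if link not in links]
        errors += [f"chain {name!r}: undefined storage {store!r}"
                   for store in chain.get("storages", []) if store not in storages]
    return errors

# Mirror of the example configuration above (options omitted for brevity).
example = {
    "links": {"deepgram_link": {}, "summarize": {}, "webhook_store_call_log": {}},
    "storages": {"postgres": {}, "s3": {}},
    "chains": {
        "main_chain": {
            "links": ["deepgram_link", "summarize", "webhook_store_call_log"],
            "storages": ["postgres", "s3"],
            "enabled": 1,
        }
    },
}
print(validate_config(example))  # [] — all references resolve
```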
The system is containerized using Docker and can be deployed using Docker Compose:
```bash
# Build the containers
docker compose build

# Start the services
docker compose up -d

# Scale the conserver service
docker compose up --scale conserver=4 -d
```
The system is designed to scale horizontally. The conserver service can be scaled to handle increased load:
```bash
docker compose up --scale conserver=4 -d
```
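Scaling works because the conserver instances are competing consumers over shared Redis state: each job is claimed by exactly one worker, so adding replicas increases throughput without duplicating work. A minimal illustration of the pattern, simulated with a thread-safe in-process queue standing in for Redis:

```python
# Competing-consumers in miniature: several workers drain one shared queue,
# and each job is processed exactly once. Scaled conserver instances behave
# analogously against shared Redis state (simulated here; no Redis needed).
import queue
import threading

jobs: "queue.Queue[str]" = queue.Queue()
for vcon_id in range(12):
    jobs.put(f"vcon-{vcon_id}")

processed: dict[str, str] = {}   # job -> worker that handled it
lock = threading.Lock()

def worker(name: str) -> None:
    while True:
        try:
            job = jobs.get_nowait()  # analogous to popping from a Redis list
        except queue.Empty:
            return
        with lock:
            processed[job] = name

workers = [threading.Thread(target=worker, args=(f"conserver-{i}",)) for i in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(processed))  # 12 — every job handled exactly once across 4 workers
```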
PostgreSQL storage:

```yaml
storages:
  postgres:
    module: storage.postgres
    options:
      user: postgres
      password: your_password
      host: your_host
      port: "5432"
      database: postgres
```
S3 storage:

```yaml
storages:
  s3:
    module: storage.s3
    options:
      aws_access_key_id: your_key
      aws_secret_access_key: your_secret
      aws_bucket: your_bucket
```
Elasticsearch storage:

```yaml
storages:
  elasticsearch:
    module: storage.elasticsearch
    options:
      cloud_id: "your_cloud_id"
      api_key: "your_api_key"
      index: vcon_index
```
For semantic search capabilities:
```yaml
storages:
  milvus:
    module: storage.milvus
    options:
      host: "localhost"
      port: "19530"
      collection_name: "vcons"
      embedding_model: "text-embedding-3-small"
      embedding_dim: 1536
      api_key: "your-openai-api-key"
      organization: "your-org-id"
      create_collection_if_missing: true
```
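At query time, a vector store like Milvus ranks stored embeddings by similarity to the query embedding; the `embedding_dim: 1536` setting matches the output size of OpenAI's `text-embedding-3-small` model. A toy sketch of cosine-similarity ranking (3-dimensional vectors for readability; real embeddings have 1536 components):

```python
# What semantic search does in miniature: rank stored embeddings by cosine
# similarity to a query embedding. Toy 3-D vectors; real ones are 1536-D.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

stored = {
    "billing call": [0.9, 0.1, 0.0],
    "support call": [0.2, 0.9, 0.1],
}
query = [0.8, 0.2, 0.1]  # embedding of e.g. "invoice question"
best = max(stored, key=lambda doc: cosine(stored[doc], query))
print(best)  # billing call — its vector is closest to the query
```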
The system includes built-in monitoring through Datadog. Configure monitoring by setting the following environment variables:
```bash
DD_API_KEY=your_datadog_api_key
DD_SITE=datadoghq.com
```
View logs using:
```bash
docker compose logs -f
```
Common issues and solutions:

- **Redis Connection Issues**
  - Check whether the Redis container is running: `docker ps | grep redis`
  - Verify the Redis URL in the `.env` file
  - Check the Redis logs: `docker compose logs redis`
- **Service Scaling Issues**
  - Ensure sufficient system resources
  - Check network connectivity between containers
  - Verify the Redis connection from all instances
- **Storage Module Issues**
  - Verify credentials and connection strings
  - Check storage service availability
  - Review the storage module logs
For additional help, check the logs:
```bash
docker compose logs -f [service_name]
```
This project is licensed under the terms specified in the LICENSE file.
- Install as a non-root user: Create a dedicated user (e.g., `vcon`) for running the application and Docker containers.
- Clone repositories to `/opt`: Place `vcon-admin` and `vcon-server` in `/opt` for system-wide, non-root access.
- Use persistent Docker volumes: Map Redis and other stateful service data to `/opt/vcon-data` for durability.
- Follow the updated install script: Use `scripts/install_conserver.sh`, which now implements these best practices.
Recommended directory layout:

```
/opt/vcon-admin
/opt/vcon-server
/opt/vcon-data/redis
```

Redis data is mapped to a persistent volume in `docker-compose.yml`:

```yaml
volumes:
  - /opt/vcon-data/redis:/data
```
The install script creates the `vcon` user and sets permissions for all necessary directories.