Updates made here are for Ubuntu 24 on an AMD64 (x86-64) device (see the separate branch for ARM/aarch64 specifics).
- Ubuntu Server (preferred)
- Nginx installed
- Docker installed
- Git installed
- Fully SYNCED and INDEXED Ergo node running on the same machine (accessible at 127.0.0.1:9053; for a remote node, change the explorer's config files)
Note: Ensure that all the above prerequisites are fully completed before proceeding. Syncing a node can take a day or more, and the Explorer can take a few additional days to sync on top of that.
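Before moving on, you can sanity-check that the node is reachable and synced. A minimal check, assuming the node API is on its default port; the /info endpoint reports the node's current heights:

```bash
# Query the local Ergo node's /info endpoint (default API port 9053).
# A fully synced node reports fullHeight/headersHeight close to the current network height.
curl -s http://127.0.0.1:9053/info
```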
Open your terminal and run the following command:
git clone https://github.com/andrehafner/p2p-explorer.git
This will download the repository into a folder named p2p-explorer. To navigate into it:
cd p2p-explorer
There are typically three passwords you can change (or leave them as they are). The first build uses whatever passwords are set at that time; changing these files after the initial build requires a database rebuild or a manual password change.
p2p-explorer/explorer-backend-9.17.4/docker-compose.yaml contains a PostgreSQL password and an IP address you may need to change. p2p-explorer/db/db.secret contains the database and PostgreSQL passwords.
You can edit these files in the terminal with nano. For example:
sudo nano p2p-explorer/explorer-backend-9.17.4/docker-compose.yaml
sudo nano p2p-explorer/db/db.secret
If you do edit, press Ctrl+O and then Enter to save, then Ctrl+X to exit.
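If you do change the passwords, a quick way to generate strong random values to paste into both files (this command is just a suggestion, not part of the explorer setup):

```bash
# Generate a random 24-byte, base64-encoded password for the PostgreSQL/database entries
openssl rand -base64 24
```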
Since we are load balancing, the frontend must rely on the local database. If we use the load-balanced DNS, it will not do that, so we must use the external IP address of this machine. This means you need to open port 8080 and forward it to this machine (ping us in chat if you need help).
Next, open the following file and find this line "API: http://yourexternalIP:8080" and replace 'yourexternalIP' with your external IP address:
sudo nano p2p-explorer/docker-compose.yml
After editing, press Ctrl+O and then Enter to save, then Ctrl+X to exit.
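For reference, the edited line should end up looking roughly like the excerpt below. The surrounding structure is illustrative (key names may differ slightly in your copy of the file), and 203.0.113.10 is a placeholder for your external IP:

```yaml
# Illustrative excerpt of p2p-explorer/docker-compose.yml
environment:
  API: http://203.0.113.10:8080   # your external IP, with port 8080 forwarded to this machine
```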
We first need to create two things for Docker:
sudo docker network create ergo-node
sudo docker volume create --name ergo_redis
Next, make sure there is a postgres_data directory with appropriate permissions:
mkdir -p postgres_data
sudo chown 999:999 postgres_data
Let's build the project. Make sure you are in the main p2p-explorer folder!
sudo docker compose up --build
This can take some time (approximately 10 minutes). If you chose a minimal Ubuntu Server install, you will probably be missing some libraries and the build will fail. Look at the terminal output where it fails, copy that error into chat, and ask how to install the missing library. Then repeat the build command; rinse and repeat.
Once built with no errors, start it up (same directory):
sudo docker compose up -d
The -d flag starts it in detached mode. It is sometimes helpful to start it the first time with just sudo docker compose up so you can watch the logs; if you do this, press Ctrl+C to stop it and start it again with -d so you can exit your terminal without quitting the explorer.
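Whichever way you start it, these standard Docker Compose commands are handy for keeping an eye on the stack:

```bash
# List the explorer containers and their current state
sudo docker compose ps

# Follow the logs of the whole stack (Ctrl+C stops following, not the containers)
sudo docker compose logs -f
```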
If you need to stop it:
sudo docker compose down
You should now be able to access your explorer at http://yourIPaddress:3000.
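A quick way to confirm the frontend is answering before opening a browser (run this on the server itself):

```bash
# Expect an HTTP 200 (or a redirect) from the frontend container
curl -I http://localhost:3000
```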
If you do not want to be part of the load balancer, you can simply install Certbot (a Python tool) and get a certificate that will automatically configure Nginx. After that, replace every instance of the p2p domain in the Nginx file below with your own domain, and update the docker-compose.yml URL from earlier (port 3000 is not needed if Nginx and a domain name are set up properly).
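As a rough sketch of that standalone route on Ubuntu, assuming your DNS already points at this machine (yourdomain.example is a placeholder):

```bash
# Install Certbot with the Nginx plugin, then request a certificate;
# the --nginx plugin obtains the cert and edits the Nginx config for you.
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d yourdomain.example
```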
If you want to load balance, open the following file:
sudo nano /etc/nginx/sites-enabled/default
Erase the contents and paste in the following:
server {
    listen 443 ssl http2;
    server_name explorer-p2p.ergoplatform.com;
    ssl_certificate /etc/ssl/cloudflare/ergoplatform-p2p.pem;
    ssl_certificate_key /etc/ssl/cloudflare/ergoplatform-p2p.key;
    ssl_trusted_certificate /etc/ssl/cloudflare/origin_ca_rsa_root.pem;
    location / { proxy_pass http://localhost:3000; }
}

server {
    listen 443 ssl http2;
    server_name api-p2p.ergoplatform.com;
    ssl_certificate /etc/ssl/cloudflare/ergoplatform-p2p.pem;
    ssl_certificate_key /etc/ssl/cloudflare/ergoplatform-p2p.key;
    ssl_trusted_certificate /etc/ssl/cloudflare/origin_ca_rsa_root.pem;
    location / {
        proxy_pass http://localhost:8080;
        rewrite ^(?!/api/v[0-9])(/.*)$ /api/v0$1 break;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 443 ssl http2;
    server_name graphql-p2p.ergoplatform.com;
    ssl_certificate /etc/ssl/cloudflare/ergoplatform-p2p.pem;
    ssl_certificate_key /etc/ssl/cloudflare/ergoplatform-p2p.key;
    ssl_trusted_certificate /etc/ssl/cloudflare/origin_ca_rsa_root.pem;
    location / { proxy_pass http://localhost:3001; }
}

server {
    listen 443 ssl http2;
    server_name node-p2p.ergoplatform.com;
    ssl_certificate /etc/ssl/cloudflare/ergoplatform-p2p.pem;
    ssl_certificate_key /etc/ssl/cloudflare/ergoplatform-p2p.key;
    ssl_trusted_certificate /etc/ssl/cloudflare/origin_ca_rsa_root.pem;
    location / { proxy_pass http://localhost:9053; }
}
Lastly, we need the .pem and .key files for the HTTPS domain names to work. Reach out to the Ergo Infra DAO to become a member and receive these files: PAIDEIA.
Once obtained, open them in a plain-text editor (such as Notepad++ or Sublime Text) so you can copy their contents, then run the following commands:
Create directories as needed:
sudo mkdir -p /etc/ssl/cloudflare/
Now, run each command, paste in the contents of the corresponding file, and press Ctrl+O and Enter to save, then Ctrl+X to exit each time:
sudo nano /etc/ssl/cloudflare/ergoplatform-p2p.pem
sudo nano /etc/ssl/cloudflare/ergoplatform-p2p.key
sudo nano /etc/ssl/cloudflare/origin_ca_rsa_root.pem
Once done, test the Nginx file:
sudo nginx -t
Read the output here; it will tell you what's wrong if there is an issue.
Reload and restart:
sudo systemctl reload nginx
sudo systemctl restart nginx
You should now be able to reach the explorer at yourdomain.whatever or, if you are part of the load balancer, at explorer-p2p.ergoplatform.com.
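If DNS (or the load balancer) is not pointed at your machine yet, you can still test the Nginx/SSL setup locally by forcing the hostname to resolve to this server (YOUR_SERVER_IP is a placeholder):

```bash
# -k skips chain verification, which is useful with Cloudflare origin certificates
curl -kI --resolve explorer-p2p.ergoplatform.com:443:YOUR_SERVER_IP \
  https://explorer-p2p.ergoplatform.com
```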
If you want to run the Ergo Explorer on one machine but connect to an Ergo node running on a different machine within your home network (useful for splitting storage requirements or using a dedicated node machine), you'll need to update two configuration files.
- Ensure your Ergo node machine is accessible from the machine running the Explorer
- Verify the Ergo node is running and accessible on port 9053
- Know the local IP address of your Ergo node machine (e.g., 192.168.1.100)
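Note that the node's API often listens only on localhost by default, so the node machine usually needs its REST API bound to an address the Explorer machine can reach. A minimal sketch, assuming the stock reference-client config layout (double-check the key path against your node's own config file), followed by a reachability test run from the Explorer machine:

```bash
# On the node machine, in the node's .conf, bind the REST API to all interfaces:
#   scorex.restApi.bindAddress = "0.0.0.0:9053"
# Then, from the Explorer machine, confirm the API answers:
curl -s http://192.168.1.100:9053/info
```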
Edit the GraphQL service configuration in docker-compose.yml:
sudo nano docker-compose.yml
Find this line in the GraphQL service section:
ERGO_NODE_ADDRESS: http://172.17.0.1:9053
Change it to your Ergo node machine's IP address:
ERGO_NODE_ADDRESS: http://192.168.1.100:9053
Edit the backend configuration file:
sudo nano explorer-backend.conf
Find this line:
master-nodes = ["http://172.17.0.1:9053"]
Change it to your Ergo node machine's IP address:
master-nodes = ["http://192.168.1.100:9053"]
If you're using Nginx with SSL/domain setup, you'll also need to update the Nginx configuration to point to your remote Ergo node instead of localhost.
Edit the Nginx configuration file:
sudo nano /etc/nginx/sites-enabled/default
Find the server block for your node domain (e.g., node-p2p.ergoplatform.com) and change this line:
location / { proxy_pass http://localhost:9053; }
To point to your Ergo node machine's IP address:
location / { proxy_pass http://192.168.1.100:9053; }
After updating Nginx, test and reload the configuration:
sudo nginx -t
sudo systemctl reload nginx
After making all changes, restart all services:
sudo docker compose down
sudo docker compose up -d
Keep in mind which service reads which configuration:
- GraphQL service: Reads from docker-compose.yml environment variables
- Backend services (grabber, utx-tracker, api): Read from explorer-backend.conf
- Both must point to the same Ergo node address for proper functionality
If the Explorer cannot reach the node, check the following:
- Test connectivity: ping <YOUR_NODE_IP>
- Verify port access: telnet <YOUR_NODE_IP> 9053 (an alternative check is shown after this list)
- Check firewall settings on both machines
- Ensure both machines are on the same network segment
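telnet is often not installed on a minimal Ubuntu Server; netcat (usually present, or installable as netcat-openbsd) performs the same port check:

```bash
# -v verbose, -z scan without sending data; succeeds if port 9053 is open on the node machine
nc -vz 192.168.1.100 9053
```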
When running the Ergo node and p2p-explorer on different machines, you'll need to configure port forwarding on your router/gateway for proper connectivity.
- Port 9053 - Main Ergo node API (required for explorer connectivity)
- Port 9030 - P2P network communication (required for node syncing)
- Port 443 - HTTPS access (if using SSL/domain)
- Port 3000 - Frontend UI access
- Port 3001 - GraphQL service access
- Port 8080 - Backend API access
- Access your router/gateway (e.g., Xfinity Gateway app, router admin panel)
- Navigate to Port Forwarding section (may be called "Port Forwarding," "Port Mapping," or "Virtual Server")
- Add rules for each port:
- External Port: Same as internal port (e.g., 9053 → 9053)
- Internal IP: The local IP address of the target machine
- Protocol: TCP (or TCP/UDP for 9030)
- Save and apply the port forwarding rules
Note: The exact steps vary by router manufacturer. For Xfinity Gateway users, use the Xfinity app to configure port forwarding as it's often easier than the web interface.
Security Consideration: Only forward the ports you actually need. Ports 9053 and 9030 should only be accessible from your local network, while the explorer ports (3000, 3001, 8080, 443) may need external access depending on your setup.
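If you use ufw on the explorer machine, rules along these lines implement that split (192.168.1.0/24 is a placeholder and should match your own LAN):

```bash
# Node-facing ports: allow only from the local network
sudo ufw allow from 192.168.1.0/24 to any port 9053 proto tcp
sudo ufw allow from 192.168.1.0/24 to any port 9030 proto tcp

# Public-facing explorer ports (adjust to your setup)
sudo ufw allow 443/tcp
sudo ufw allow 3000/tcp
```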
The p2p-explorer is a comprehensive blockchain exploration system built with a microservices architecture. Here's how all the components work together:
USER INTERFACE LAYER
- Frontend UI (Port 3000): web interface, user interactions, responsive design
- GraphQL Service (Port 3001): query language for data, real-time subscriptions, efficient data fetching
- Nginx (Port 443): SSL termination, load balancing, domain routing
        ↓
API & SERVICES LAYER
- Backend API (Port 8080): REST API endpoints, business logic, data processing
- Chain Grabber: block sync, indexing, validation
- UTX Tracker: mempool, UTXO state, monitoring
- UTX Broadcaster: transaction broadcasting to the network
        ↓
DATA & STORAGE LAYER
- PostgreSQL Database (Port 5433): blockchain data, transaction history, address balances
- Redis Cache (Port 6379): API responses, sessions, real-time data
- Redis Request Cache (internal): query caching, performance optimization, reduced database load
        ↓
BLOCKCHAIN LAYER
- Ergo Node (Port 9053): full blockchain sync, transaction pool, state management, API endpoints
- P2P Network (Port 9030): peer discovery, block propagation, network consensus, decentralized communication
Frontend UI (Port 3000)
- Purpose: User-friendly web interface for blockchain exploration
- Features: Block browser, transaction lookup, address search, network stats
- Technology: Modern JavaScript framework with responsive design
GraphQL Service (Port 3001)
- Purpose: Efficient data querying and real-time updates
- Features: Single endpoint, flexible queries, subscription support
- Benefits: Reduced over-fetching, optimized data retrieval
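As an illustration of the single-endpoint, ask-for-what-you-need model, a query like the one below could be sent to the GraphQL service. The endpoint path and field names are assumptions based on the public Ergo GraphQL schema and may differ in your deployment:

```bash
# Hypothetical query: fetch the height and id of the latest block header
curl -s -X POST http://localhost:3001/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ blockHeaders(take: 1) { height id } }"}'
```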
Backend API (Port 8080)
- Purpose: Core business logic and data processing
- Features: REST endpoints, data validation, service coordination
- Responsibilities: Block processing, transaction analysis, address management
Chain Grabber
- Purpose: Continuous blockchain synchronization and indexing
- Process: Monitors node → Fetches new blocks → Processes data → Updates database
- Output: Indexed blockchain data for fast queries
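To look at the indexed output directly, you could query the PostgreSQL container. The service, database, user, and table names below are assumptions about the explorer schema and may vary between backend versions:

```bash
# Hypothetical check: list the five most recently indexed block headers
sudo docker compose exec db \
  psql -U postgres -d explorer \
  -c "SELECT height, id FROM node_headers ORDER BY height DESC LIMIT 5;"
```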
UTX Tracker
- Purpose: Monitor unspent transaction outputs (UTXOs)
- Process: Tracks mempool changes → Updates UTXO state → Maintains consistency
- Output: Real-time UTXO status and balance updates
UTX Broadcaster
- Purpose: Broadcast transactions to the Ergo network
- Process: Receives transactions → Validates → Sends to node → Updates status
- Output: Transaction propagation confirmation
PostgreSQL Database (Port 5433)
- Purpose: Persistent storage for all blockchain data
- Features: ACID compliance, complex queries, indexing, partitioning
- Data: Blocks, transactions, addresses, balances, network statistics
Redis Cache (Port 6379)
- Purpose: High-performance caching layer
- Features: API response caching, session management, real-time data
- Benefits: Faster response times, reduced database load
Ergo Node (Port 9053)
- Purpose: Source of truth for blockchain data
- Features: Full blockchain sync, transaction pool, state management
- Requirements: Significant storage space, continuous internet connection
How data flows through the system:
- Blockchain Updates: New blocks arrive at Ergo Node
- Data Processing: Chain Grabber fetches and processes new data
- Storage: Processed data is stored in PostgreSQL with Redis caching
- API Access: Backend API provides structured access to data
- Frontend Delivery: GraphQL and REST APIs serve data to frontend
- User Experience: Frontend presents data in user-friendly format
- Internal Communication: Services communicate via Docker network
- External Access: Port forwarding enables remote access
- Load Balancing: Nginx distributes traffic across services
- Security: SSL termination and proper port management
- Scalability: Microservices can be scaled independently
- Reliability: Redundant services with failover capabilities
- Efficiency: Optimized data storage and retrieval patterns
- Monitoring: Built-in health checks and logging
This architecture ensures a robust, scalable, and maintainable blockchain exploration system that can handle the demands of a production environment while providing an excellent user experience.