4 changes: 2 additions & 2 deletions .github/workflows/create-release.yml
@@ -39,7 +39,7 @@ jobs:
tar czvf joystream-node-macos.tar.gz -C ./target/release joystream-node

- name: Temporarily save node binary
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: joystream-node-macos-${{ steps.compute_shasum.outputs.shasum }}
path: joystream-node-macos.tar.gz
@@ -80,7 +80,7 @@ jobs:
tar -czvf joystream-node-$VERSION_AND_COMMIT-arm64-linux-gnu.tar.gz joystream-node

- name: Retrieve saved MacOS binary
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: joystream-node-macos-${{ steps.compute_shasum.outputs.shasum }}

2 changes: 1 addition & 1 deletion .github/workflows/deploy-node-network.yml
@@ -168,7 +168,7 @@ jobs:
7z a -p${{ steps.network_config.outputs.encryptionKey }} chain-data.7z deploy_artifacts/*

- name: Save the output as an artifact
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: data-chainspec-auth
path: devops/ansible/chain-data.7z
4 changes: 2 additions & 2 deletions .github/workflows/deploy-playground.yml
@@ -34,7 +34,7 @@ on:
description: 'SURI of treasury account'
required: false
default: '//Alice'
initialBalances:
initialBalances:
description: 'JSON string or http URL to override initial balances and vesting config'
default: ''
required: false
@@ -112,7 +112,7 @@ jobs:
--verbose

- name: Save the endpoints file as an artifact
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: endpoints
path: devops/ansible/endpoints.json
9 changes: 6 additions & 3 deletions .github/workflows/run-network-tests.yml
@@ -141,7 +141,7 @@ jobs:
if: steps.check_files.outputs.files_exists == 'false'

- name: Save joystream/node image to Artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ${{ steps.compute_shasum.outputs.shasum }}-joystream-node-docker-image.tar.gz
path: joystream-node-docker-image.tar.gz
@@ -152,21 +152,23 @@
runs-on: ubuntu-latest
strategy:
matrix:
scenario: ['full', 'setupNewChain', 'setupNewChainMultiStorage', 'bonding', 'storageSync']
scenario: ['full', 'setupNewChain', 'setupNewChainMultiStorage', 'bonding', 'storage']
include:
- scenario: 'full'
no_storage: 'false'
- scenario: 'setupNewChain'
no_storage: 'true'
- scenario: 'setupNewChainMultiStorage'
no_storage: 'true'
- scenario: 'storage'
cleanup_interval: '1'
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '18.x'
- name: Get artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: ${{ needs.build_images.outputs.use_artifact }}
- name: Install artifacts
@@ -182,4 +182,5 @@ jobs:
run: |
export RUNTIME=${{ needs.build_images.outputs.runtime }}
export NO_STORAGE=${{ matrix.no_storage }}
export CLEANUP_INTERVAL=${{ matrix.cleanup_interval }}
tests/network-tests/run-tests.sh ${{ matrix.scenario }}
2 changes: 1 addition & 1 deletion devops/extrinsic-ordering/tx-ordering.yml
@@ -74,7 +74,7 @@ jobs:
run: pkill polkadot

- name: Save output as artifact
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ${{ env.CHAIN }}
path: |
83 changes: 46 additions & 37 deletions docker-compose.yml
@@ -8,8 +8,8 @@ services:
- chain-data:/data
environment:
- CHAIN=${CHAIN}
command: "--chain ${CHAIN:-dev} --alice --validator --pruning=archive --unsafe-ws-external --unsafe-rpc-external
--rpc-methods Safe --rpc-cors=all --log runtime --base-path /data --no-hardware-benchmarks"
command: '--chain ${CHAIN:-dev} --alice --validator --pruning=archive --unsafe-ws-external --unsafe-rpc-external
--rpc-methods Safe --rpc-cors=all --log runtime --base-path /data --no-hardware-benchmarks'
ports:
- 9944:9944
- 9933:9933
@@ -35,16 +35,25 @@ services:
- ACCOUNT_URI=${COLOSSUS_1_TRANSACTOR_URI}
- OTEL_EXPORTER_OTLP_ENDPOINT=${TELEMETRY_ENDPOINT}
- OTEL_RESOURCE_ATTRIBUTES=service.name=colossus-1,deployment.environment=production
- CLEANUP
- CLEANUP_INTERVAL
- CLEANUP_NEW_OBJECT_EXPIRATION_PERIOD
- CLEANUP_MIN_REPLICATION_THRESHOLD
entrypoint: ['/joystream/entrypoints/storage.sh']
command: [
'server', '--worker=${COLOSSUS_1_WORKER_ID}', '--port=3333', '--uploads=/data/uploads/',
'--sync', '--syncInterval=1',
'--storageSquidEndpoint=${COLOSSUS_STORAGE_SQUID_URL}',
'--apiUrl=${JOYSTREAM_NODE_WS}',
'--logFilePath=/logs',
'--tempFolder=/data/temp/',
'--pendingFolder=/data/pending/'
]
command:
[
'server',
'--worker=${COLOSSUS_1_WORKER_ID}',
'--port=3333',
'--uploads=/data/uploads/',
'--sync',
'--syncInterval=1',
'--storageSquidEndpoint=${COLOSSUS_STORAGE_SQUID_URL}',
'--apiUrl=${JOYSTREAM_NODE_WS}',
'--logFilePath=/logs',
'--tempFolder=/data/temp/',
'--pendingFolder=/data/pending/',
]

distributor-1:
image: node:18
@@ -68,7 +77,7 @@ services:
environment:
JOYSTREAM_DISTRIBUTOR__ID: distributor-1
JOYSTREAM_DISTRIBUTOR__ENDPOINTS__STORAGE_SQUID: ${DISTRIBUTOR_STORAGE_SQUID_URL}
JOYSTREAM_DISTRIBUTOR__KEYS: "[{\"suri\":\"${DISTRIBUTOR_1_ACCOUNT_URI}\"}]"
JOYSTREAM_DISTRIBUTOR__KEYS: '[{"suri":"${DISTRIBUTOR_1_ACCOUNT_URI}"}]'
JOYSTREAM_DISTRIBUTOR__WORKER_ID: ${DISTRIBUTOR_1_WORKER_ID}
JOYSTREAM_DISTRIBUTOR__PUBLIC_API__PORT: 3334
JOYSTREAM_DISTRIBUTOR__OPERATOR_API__PORT: 4334
@@ -105,22 +114,25 @@ services:
environment:
# ACCOUNT_URI overrides command line arg --accountUri
- ACCOUNT_URI=${COLOSSUS_2_TRANSACTOR_URI}
# Env that allows testing cleanup
- CLEANUP_NEW_OBJECT_EXPIRATION_PERIOD=10
- CLEANUP_MIN_REPLICATION_THRESHOLD=1
- CLEANUP
- CLEANUP_INTERVAL
- CLEANUP_NEW_OBJECT_EXPIRATION_PERIOD
- CLEANUP_MIN_REPLICATION_THRESHOLD
entrypoint: ['yarn', 'storage-node']
command: [
'server', '--worker=${COLOSSUS_2_WORKER_ID}', '--port=3333', '--uploads=/data/uploads',
'--sync', '--syncInterval=1',
'--storageSquidEndpoint=${COLOSSUS_STORAGE_SQUID_URL}',
'--apiUrl=${JOYSTREAM_NODE_WS}',
'--logFilePath=/logs',
'--tempFolder=/data/temp/',
'--pendingFolder=/data/pending/',
# Use cleanup on colossus-2 for testing purposes
'--cleanup',
'--cleanupInterval=1'
]
command:
[
'server',
'--worker=${COLOSSUS_2_WORKER_ID}',
'--port=3333',
'--uploads=/data/uploads',
'--sync',
'--syncInterval=1',
'--storageSquidEndpoint=${COLOSSUS_STORAGE_SQUID_URL}',
'--apiUrl=${JOYSTREAM_NODE_WS}',
'--logFilePath=/logs',
'--tempFolder=/data/temp/',
'--pendingFolder=/data/pending/',
]

distributor-2:
image: node:18
@@ -144,7 +156,7 @@ services:
environment:
JOYSTREAM_DISTRIBUTOR__ID: distributor-2
JOYSTREAM_DISTRIBUTOR__ENDPOINTS__STORAGE_SQUID: ${DISTRIBUTOR_STORAGE_SQUID_URL}
JOYSTREAM_DISTRIBUTOR__KEYS: "[{\"suri\":\"${DISTRIBUTOR_2_ACCOUNT_URI}\"}]"
JOYSTREAM_DISTRIBUTOR__KEYS: '[{"suri":"${DISTRIBUTOR_2_ACCOUNT_URI}"}]'
JOYSTREAM_DISTRIBUTOR__WORKER_ID: ${DISTRIBUTOR_2_WORKER_ID}
JOYSTREAM_DISTRIBUTOR__PUBLIC_API__PORT: 3334
JOYSTREAM_DISTRIBUTOR__OPERATOR_API__PORT: 4334
@@ -192,8 +204,8 @@ services:
- OTEL_EXPORTER_OTLP_ENDPOINT=${TELEMETRY_ENDPOINT}
- OTEL_RESOURCE_ATTRIBUTES=service.name=query-node,deployment.environment=production
ports:
- "${GRAPHQL_SERVER_PORT}:${GRAPHQL_SERVER_PORT}"
- "127.0.0.1:${PROCESSOR_STATE_APP_PORT}:${PROCESSOR_STATE_APP_PORT}"
- '${GRAPHQL_SERVER_PORT}:${GRAPHQL_SERVER_PORT}'
- '127.0.0.1:${PROCESSOR_STATE_APP_PORT}:${PROCESSOR_STATE_APP_PORT}'
depends_on:
- db
volumes:
@@ -275,7 +287,7 @@ services:
- PORT=${HYDRA_INDEXER_GATEWAY_PORT}
- PGSSLMODE=disable
ports:
- "${HYDRA_INDEXER_GATEWAY_PORT}:${HYDRA_INDEXER_GATEWAY_PORT}"
- '${HYDRA_INDEXER_GATEWAY_PORT}:${HYDRA_INDEXER_GATEWAY_PORT}'
depends_on:
- db
- redis
@@ -285,7 +297,7 @@ services:
container_name: redis
restart: unless-stopped
ports:
- "127.0.0.1:6379:6379"
- '127.0.0.1:6379:6379'

faucet:
image: joystream/faucet:carthage
@@ -304,7 +316,7 @@ services:
- BALANCE_CREDIT=${BALANCE_CREDIT}
- BALANCE_LOCKED=${BALANCE_LOCKED}
ports:
- "3002:3002"
- '3002:3002'

# PostgreSQL database for Orion
orion-db:
@@ -437,10 +449,7 @@ services:
environment:
DATABASE_MAX_CONNECTIONS: 5
RUST_LOG: 'actix_web=info,actix_server=info'
command: [
'--database-url',
'postgres://postgres:postgres@orion_archive_db:${ARCHIVE_DB_PORT}/squid-archive',
]
command: ['--database-url', 'postgres://postgres:postgres@orion_archive_db:${ARCHIVE_DB_PORT}/squid-archive']
ports:
- '127.0.0.1:${ARCHIVE_GATEWAY_PORT}:8000'
- '[::1]:${ARCHIVE_GATEWAY_PORT}:8000'
28 changes: 24 additions & 4 deletions storage-node/CHANGELOG.md
@@ -1,10 +1,30 @@
### 4.5.0

#### Features

- New commands to help with storage bag / bucket management:

- `leader:set-replication` - allows adjusting bag-to-bucket assignments in order to achieve a target replication rate.
- `leader:copy-bags` - allows copying all bags from one bucket / set of buckets to a different bucket / set of buckets.
- `leader:empty-bucket` - allows removing all bags from a given bucket.

All of those commands support generating detailed summaries of planned / executed changes in the storage system thanks to the new `BagsUpdateCreator` and `BagsUpdateSummaryCreator` services.

- Adds the ability to set `CLEANUP` and `CLEANUP_INTERVAL` via environment variables in the `server` command.
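
As an illustration only (not taken from the Colossus source), the sketch below shows one common way such env overrides can be wired into the existing `--cleanup` / `--cleanupInterval` server flags. The flag names appear in `docker-compose.yml`; the precedence (env over flag) and the example default are assumptions:

```ts
// Hypothetical sketch: map CLEANUP / CLEANUP_INTERVAL env vars onto the
// server's --cleanup / --cleanupInterval flags. Flag names are taken from
// docker-compose.yml; precedence and defaults here are assumptions.
interface CleanupConfig {
  cleanup: boolean
  cleanupInterval: number
}

function resolveCleanupConfig(flagCleanup: boolean, flagInterval: number): CleanupConfig {
  const envCleanup = process.env.CLEANUP
  const envInterval = process.env.CLEANUP_INTERVAL
  return {
    // assume the env var, when present, takes precedence over the CLI flag
    cleanup: envCleanup !== undefined ? envCleanup === 'true' : flagCleanup,
    cleanupInterval: envInterval !== undefined ? parseInt(envInterval, 10) : flagInterval,
  }
}

// e.g. CLEANUP=true CLEANUP_INTERVAL=1 yarn storage-node server ...
// (360 below is just an arbitrary example default for the flag value)
console.log(resolveCleanupConfig(false, 360))
```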

#### Small / internal changes

- Fixes Colossus docker build by removing a deprecated [`@types/winston`](https://www.npmjs.com/package/@types/winston) package.
- Adds a few new utility functions (`stringifyBagId`, `cmpBagId`, `isEvent`, `asStorageSize`, `getBatchResults`).
- Updates `updateStorageBucketsForBags` to rely on the new `getBatchResults` utility function.

### 4.4.0

- **Optimizations:** The way data objects / data object ids are queried and processed during sync and cleanup has been optimized:
- Sync and cleanup services now process tasks in batches of configurable size (`--syncBatchSize`, `--cleanupBatchSize`) to avoid overflowing the memory.
- Synchronous operations like `sort` or `filter` on larger arrays of data objects have been optimized (for example, by replacing `.filter(Array.includes(...))` with `.filter(Set.has(...))`).
- Enforced a limit of max. results per single GraphQL query to `10,000` and max input arguments per query to `1,000`.
- Added `--cleanupWorkersNumber` flag to limit the number of concurrent async requests during cleanup.
- Sync and cleanup services now process tasks in batches of configurable size (`--syncBatchSize`, `--cleanupBatchSize`) to avoid overflowing the memory.
- Synchronous operations like `sort` or `filter` on larger arrays of data objects have been optimized (for example, by replacing `.filter(Array.includes(...))` with `.filter(Set.has(...))`).
- Enforced a limit of max. results per single GraphQL query to `10,000` and max input arguments per query to `1,000`.
- Added `--cleanupWorkersNumber` flag to limit the number of concurrent async requests during cleanup.
- A safety mechanism was added to avoid removing "deleted" objects for which a related `DataObjectDeleted` event cannot be found in storage squid.
- Improved logging during sync and cleanup.
