The Fediverse has 1M+ monthly active users. Decentralized social media works. But it can't grow while new users face empty timelines and niche creators remain invisible. Chronological feeds don't scale.
"No algorithms" has been a core promise of decentralized social media. The intent is right: platforms shouldn't control what you see. But chronological feeds are not neutral. They privilege the frequent over the thoughtful, the recent over the relevant, the loud over the good. Algorithms are tools. And like any tool, they can serve or exploit. The question is not whether to use them, but whether you can see, modify, and trust them. So we shouldn't ban them. We should make them a public good, shaped by the communities that depend on them.
Fediway builds this. Not as a way to define which algorithms should be used, but as a framework that gives every instance the tools to design, test, and deploy their own. Server-side, integrated through nginx into existing Mastodon instances.
NOTE: This project is currently a work in progress and may be in an unstable state. Features may be incomplete or subject to change. Use with caution.
Most alternatives implement algorithmic feeds for Mastodon as client-side tools. But client-side solutions are poorly suited to distributing content to the right audience, and they push complexity onto users who are unfamiliar with how decentralized social networks work: to use them, a user must create an API key, install a browser extension, or install a desktop app.
The complexity of the Fediverse is a significant problem that limits user growth, attracting mostly technical users. Making the Fediverse accessible to everyone requires solutions that hide complexity from the user. Server-side recommendations shift this complexity from the user to the instance itself. Server-side also enables more advanced recommendation logic, improving content discovery and increasing the likelihood of new visitors registering to an instance.
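As a rough illustration of the nginx integration mentioned above, an instance could route only the home-timeline endpoint to Fediway while leaving everything else on the regular Mastodon backend. The path and port here are assumptions for the sketch, not Fediway's documented configuration:

```nginx
# Illustrative only: proxy Mastodon's home-timeline API to the Fediway
# service; all other requests continue to hit the Mastodon backend.
location /api/v1/timelines/home {
    proxy_pass http://127.0.0.1:8000;  # assumed Fediway FastAPI address
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```

Because the routing happens server-side, clients keep speaking the standard Mastodon API and need no changes at all.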
The algorithm follows a multi-stage pipeline with the following main stages:
- Candidate Sourcing: ~1000 posts are fetched from various sources that aim to preselect the best candidates from recent posts.
- Ranking: The candidates are ranked by a machine learning model that estimates the likelihood of user interaction with each candidate.
- Sampling: In the final stage, heuristics are applied to diversify the recommendations, which are sampled based on the engagement scores estimated in the ranking step.
Fediway includes a recommendation engine that makes it easy to build custom recommendation pipelines:
from modules.fediway.feed import Feed
from modules.fediway.feed.sampling import TopKSampler
from modules.fediway.rankers import SimpleStatsRanker
from modules.fediway.sources.statuses import (
    MostInteractedByMutualFollowsSource,
    CommunityBasedRecommendationsSource,
)

pipeline = (
    Feed()
    .select('status_id')
    .source(MostInteractedByMutualFollowsSource(account_id), 100)
    .source(CommunityBasedRecommendationsSource(account_id, language='en'), 100)
    .rank(SimpleStatsRanker())
    .diversify(by='status:account_id', penalty=0.1)
    .sample(20, sampler=TopKSampler())
    .paginate(20, offset=0)
)
recommendations = pipeline.execute()
for r in recommendations:
    print(r.id, r.score)

| Project | Server/Client-Side | Description |
|---|---|---|
| Fediway | Server-Side | Advanced recommendation framework that can be installed into an existing Mastodon server. |
| BYOTA | Client-Side | Mozilla’s research project for user-controlled timeline ranking. |
| fediview | Client-Side | Web-based client that provides a digest of popular posts and boosts from your Mastodon timeline. |
| fedialgo | Client-Side | Local Mastodon client that reorders your chronological timeline, with customization options. |
| Mastodon Digest | Client-Side | Generates a digest of popular posts from your Mastodon timeline, with customizable scoring algorithms and themes. |
| Fedi-Feed | Client-Side | Web-based client that displays Mastodon posts in a curated feed with a user-customizable algorithm. |
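To make the pipeline's `diversify` step concrete, here is a simplified, self-contained sketch of the idea: repeated posts from the same account receive a multiplicative score penalty before sampling. This is an illustration of the heuristic, not Fediway's actual implementation, and the function name and data shape are assumptions.

```python
def diversify(candidates, penalty=0.1):
    """Downweight repeated accounts.

    candidates: list of (status_id, account_id, score) tuples.
    Each additional status kept from the same account is penalized by
    a factor of (1 - penalty) per prior occurrence.
    """
    seen = {}  # account_id -> number of statuses already kept
    result = []
    for status_id, account_id, score in sorted(
        candidates, key=lambda c: c[2], reverse=True
    ):
        n = seen.get(account_id, 0)
        result.append((status_id, account_id, score * (1 - penalty) ** n))
        seen[account_id] = n + 1
    return sorted(result, key=lambda c: c[2], reverse=True)

candidates = [
    (1, 'alice', 0.9),
    (2, 'alice', 0.8),
    (3, 'bob', 0.7),
]
# With a heavy penalty, alice's second post drops below bob's.
print(diversify(candidates, penalty=0.5))
# → [(1, 'alice', 0.9), (3, 'bob', 0.7), (2, 'alice', 0.4)]
```

A small penalty (like the `0.1` in the pipeline above) nudges the feed toward variety without burying a prolific account entirely.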
- Python 3.10+
- uv (recommended) or pip
- PostgreSQL (Mastodon's database)
- Redis
- RisingWave (for real-time features)
Optional services:
- Qdrant (for content-based recommendations)
- Kafka (for streaming)
# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Clone the repository
git clone https://github.com/fediway/fediway.git
cd fediway
# Install core dependencies
uv sync
# Or install with specific extras
uv sync --extra vectors # + Qdrant support
uv sync --extra embeddings # + embedding models
uv sync --extra streaming # + Kafka support
uv sync --extra ml # + ML models
uv sync --all-extras   # everything

# Install with dev dependencies
uv sync --extra dev
# Run linter
uv run ruff check .
# Run formatter
uv run ruff format .
# Run tests
uv run pytest

Create a .env file based on .env.example:
cp .env.example .env

Required environment variables:
- `APP_SECRET` - Application secret key
- `APP_HOST` - Your instance hostname
- `API_URL` - Full URL to your API endpoint
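For illustration, a minimal .env could look like the following. All values are placeholders invented for this example, not defaults shipped with Fediway; consult .env.example for the authoritative list.

```shell
# Placeholder values -- replace with your instance's real settings.
APP_SECRET=replace-with-a-long-random-secret
APP_HOST=social.example.com
API_URL=https://social.example.com/api
```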
# Run FastAPI server
uv run uvicorn apps.api.main:app --reload
# Run Kafka stream consumer (if using streaming)
uv run faststream run apps.streaming.main:app
# Run Celery worker for background tasks
uv run celery -A apps.worker.main worker --loglevel=info
# Run Celery beat scheduler
uv run celery -A apps.worker.main beat --loglevel=info

# Build image
docker build -t fediway .
# Run container
docker run -p 8000:8000 --env-file .env fediway

docker-compose.yaml for local development
version: '3.8'

services:
  postgres:
    image: postgres:16
    shm_size: 256mb
    environment:
      - POSTGRES_USER=mastodon
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=mastodon_development
    command:
      - "postgres"
      - "-c"
      - "wal_level=logical"
    volumes:
      - ./data/postgres16:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U mastodon -d mastodon_development"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

  risingwave:
    image: risingwavelabs/risingwave:latest
    depends_on:
      postgres:
        condition: service_healthy
    ports:
      - "4566:4566"
      - "5691:5691"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5691/metrics"]
      interval: 5s
      timeout: 5s
      retries: 5

  # Optional: Qdrant for vector search
  qdrant:
    image: qdrant/qdrant:latest
    ports:
      - "6333:6333"
      - "6334:6334"
    profiles:
      - vectors

  # Optional: Kafka for streaming
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    ports:
      - "9092:9092"
      - "29092:29092"
    profiles:
      - streaming

  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    profiles:
      - streaming

Create a PostgreSQL user for CDC (Change Data Capture):
-- psql -U postgres
CREATE USER risingwave REPLICATION LOGIN CREATEDB;
ALTER USER risingwave WITH PASSWORD 'password';
GRANT CONNECT ON DATABASE mastodon_development TO risingwave;
GRANT USAGE ON SCHEMA public TO risingwave;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO risingwave;
GRANT CREATE ON DATABASE mastodon_development TO risingwave;
CREATE PUBLICATION risingwave FOR ALL TABLES;
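With the user and publication in place, RisingWave can ingest Mastodon tables through its postgres-cdc connector. A rough sketch is shown below; the source name is invented, the connection values mirror the docker-compose file above, and the parameter names should be verified against the RisingWave documentation for your version:

```sql
CREATE SOURCE mastodon_cdc WITH (
    connector = 'postgres-cdc',
    hostname = 'postgres',
    port = '5432',
    username = 'risingwave',
    password = 'password',
    database.name = 'mastodon_development',
    schema.name = 'public',
    publication.name = 'risingwave'
);
```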
