
feat: add like/unlike system for podcasts and episodes (Closes #154) #255

Merged
podclaude[bot] merged 9 commits into main from
feature/issue-154-likes-system
Feb 20, 2026

Conversation

Contributor

@podclaude podclaude Bot commented Feb 19, 2026

Summary

Implements a complete like/unlike system for podcasts and episodes, following CQRS/ES patterns.

CQRS Layer

  • 5 commands: LikePodcast, UnlikePodcast, LikeEpisode, UnlikeEpisode, SnapshotLike
  • 5 events: PodcastLiked, PodcastUnliked, EpisodeLiked, EpisodeUnliked, LikeCheckpoint
  • Like aggregate with idempotent execute/apply and snapshot support
  • Router updated with like- prefix stream

Projections

  • UserLike schema (users.user_likes table) with partial unique indexes
  • LikeProjector: projects like events to user_likes table
  • PopularityProjector: updates likes/likes_people/score on popularity tables
  • @score_like = 7 (between subscribe=10 and play=5)

API Endpoints

  • POST /api/v1/likes - like podcast or episode
  • DELETE /api/v1/likes/:feed - unlike podcast
  • DELETE /api/v1/likes/:feed/:item - unlike episode
  • GET /api/v1/likes - list user's active likes
  • JWT scopes: user.likes.read, user.likes.write
  • Rate limiting: 100 read/min, 30 write/min

Sync Integration

  • Likes included in sync endpoint (bidirectional, LWW conflict resolution)

Test Plan

  • 18 aggregate unit tests (idempotence, re-like, snapshot, apply events)
  • 11 controller tests (auth, CRUD, scopes)
  • All tests pass (mix test)


Closes #154
@podclaude podclaude Bot added the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026

claude Bot commented Feb 19, 2026

Review: feat: add like/unlike system for podcasts and episodes

Solid implementation overall — the CQRS structure is correct, idempotency is properly handled, and the test coverage for the aggregate is thorough. A few issues need attention before merge.


MUST-FIX

1. LikeCheckpoint projector silently swallows errors

In like_projector.ex, the LikeCheckpoint handler uses for comprehensions that discard return values:

project(%LikeCheckpoint{} = event, _metadata, fn multi ->
  Ecto.Multi.run(multi, :checkpoint, fn repo, _changes ->
    from(ul in UserLike, where: ul.user_id == ^event.user_id)
    |> repo.delete_all()

    for {feed, like_data} <- podcast_likes do
      upsert_like(repo, event.user_id, feed, nil, like_data.liked_at)  # return value discarded
      if like_data.unliked_at do
        unlike(repo, event.user_id, feed, nil, like_data.unliked_at)   # return value discarded
      end
    end
    ...
    {:ok, :checkpoint_applied}  # always succeeds even if inserts above failed
  end)
end)

The for comprehension discards the {:ok, _} / {:error, _} return values from upsert_like and unlike. If any insert fails (e.g., a constraint violation or DB error), the multi still returns {:ok, :checkpoint_applied} and the projector moves on. The checkpoint state will be silently incomplete.

Fix options:

  • Use Enum.reduce_while with explicit error propagation
  • Or collect results and return {:error, reason} on first failure
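
A minimal sketch of the first option, assuming the `upsert_like/5` and `unlike/5` helpers quoted above; `maybe_unlike/4` is an illustrative wrapper for the nil `unliked_at` case:

```elixir
# Illustrative: halt on the first {:error, _} instead of discarding results.
defp maybe_unlike(_repo, _user_id, _feed, nil), do: {:ok, :skipped}
defp maybe_unlike(repo, user_id, feed, unliked_at), do: unlike(repo, user_id, feed, nil, unliked_at)

result =
  Enum.reduce_while(podcast_likes, {:ok, :checkpoint_applied}, fn {feed, like_data}, acc ->
    with {:ok, _} <- upsert_like(repo, event.user_id, feed, nil, like_data.liked_at),
         {:ok, _} <- maybe_unlike(repo, event.user_id, feed, like_data.unliked_at) do
      {:cont, acc}
    else
      {:error, reason} -> {:halt, {:error, reason}}
    end
  end)
```

Returning `result` from the `Ecto.Multi.run` callback lets the multi roll back on the first failed insert instead of always reporting success.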

2. No logging in LikeProjector

Every other projector in this codebase includes Logger calls for an audit trail. LikeProjector declares require Logger but never uses it. This is inconsistent and will make debugging projector issues harder in production.

Add Logger.debug/info calls similar to CollectionsProjector or PopularityProjector.
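
A minimal example of the suggested audit logging (the message format is illustrative, not taken from the other projectors):

```elixir
# Illustrative log line inside the PodcastLiked handler:
Logger.debug("LikeProjector: PodcastLiked user=#{event.user_id} feed=#{event.rss_source_feed}")
```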


SHOULD-FIX

3. Missing projector tests

The PR has 18 aggregate unit tests and 11 controller tests, but zero tests for LikeProjector and zero tests for the PopularityProjector changes.

Critical paths missing tests:

  • LikeProjector: PodcastLiked/Unliked upsert behavior, LikeCheckpoint delete+re-insert
  • LikeProjector: LikeCheckpoint with unliked entries (the most complex path)
  • PopularityProjector: likes increment/decrement on podcast and episode
  • SyncController: sync_likes LWW logic (client newer wins, server newer wins)

4. sync_likes bypasses CQRS — no audit trail for synced likes

In sync_controller.ex, synced likes are written directly to the projections table without dispatching LikePodcast/LikeEpisode commands:

%UserLike{}
|> UserLike.changeset(attrs)
|> repo.insert()

This means no events are emitted for likes that arrive via sync, breaking the event-sourcing audit trail. The existing sync for subscriptions and plays also uses direct DB writes (an established pattern), but for likes it also means the Like aggregate's in-memory state won't reflect likes that arrived via sync. If a user later dispatches LikePodcast for a podcast they synced as liked from another device, the aggregate may re-emit a PodcastLiked event (an idempotency gap between the sync and CQRS paths).

At minimum: add a comment explaining this is intentional and noting the trade-off.


NICE-TO-HAVE

5. get_in_map/2 is redundant

defp get_in_map(nil, _key), do: nil
defp get_in_map(map, key), do: Map.get(map, key)

state.podcast_likes and state.episode_likes have %{} as defaults in defstruct, so they will never be nil in normal operation. The helper is equivalent to Map.get/2. Could call Map.get(state.podcast_likes, cmd.rss_source_feed) directly.

6. Redundant || %{} guards in apply/2

def apply(%__MODULE__{} = state, %PodcastLiked{} = event) do
  likes = state.podcast_likes || %{}   # redundant: defstruct default is %{}

Since defstruct podcast_likes: %{} is set, state.podcast_likes is never nil in normal operation.

7. Process.sleep(50) in controller test

This is fragile in CI. If the projector is slow, the subsequent assertion could fail. Consider restructuring the test to not depend on projector timing (dispatch the command and test dispatch success separately from projection state).

8. Full sync get_user_likes only returns active likes (no unliked_at)

def get_user_likes(user_id) do
  from(ul in UserLike,
    where: ul.user_id == ^user_id and is_nil(ul.unliked_at),  # filters out unliked
    ...
  )

The full sync initial payload omits previously liked+unliked podcasts. This differs from get_likes_since which includes unliked_at. If intentional (clients only need active likes for initial sync), add a comment.


Summary

Category      Count
MUST-FIX      2
SHOULD-FIX    2
NICE-TO-HAVE  4

The CQRS aggregate design is clean and the idempotency logic is correct. Main blockers are the silent error swallowing in LikeCheckpoint projection and the missing projector/sync tests.

@github-actions github-actions Bot removed the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026
MUST-FIX:
- Fix LikeCheckpoint projector error swallowing: use Enum.reduce_while
  with explicit error propagation instead of for comprehensions
- Add Logger calls to all LikeProjector event handlers

SHOULD-FIX:
- Add LikeProjector tests (10 tests covering like/unlike/checkpoint)
- Add comment explaining sync_likes CQRS bypass trade-off
- Add comment explaining get_user_likes active-only filtering

NICE-TO-HAVE:
- Remove redundant get_in_map/2 helper (use Map.get directly)
- Remove redundant || %{} guards in aggregate apply/2 functions
- Remove Process.sleep(50) from controller test
- Add comment about get_user_likes filtering rationale

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@podclaude podclaude Bot added the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026

claude Bot commented Feb 19, 2026

Review: feat: like/unlike system for podcasts and episodes

Overall, this is a well-structured PR that follows the established CQRS/ES patterns. The architecture is clean, commands/events/aggregate are properly separated, and the test coverage for core paths is solid. A few issues need attention before merging.


MUST-FIX

1. LikeCheckpoint serialization/deserialization bug — nested map keys become strings after JSON replay

When LikeCheckpoint is stored in the event store as JSON and replayed, the nested podcast_likes and episode_likes maps are deserialized with string keys (e.g. "liked_at", "unliked_at", "rss_source_feed"), not atom keys. This breaks two code paths:

In the aggregate execute/2 (like.ex:52-53):

case Map.get(state.podcast_likes, cmd.rss_source_feed) do
  %{liked_at: liked_at, unliked_at: unliked_at}  # ← atom key pattern match

After a checkpoint replay, state.podcast_likes["feed-url"] would be %{"liked_at" => ..., "unliked_at" => ...} — this pattern match fails, falling through to _, which would re-emit PodcastLiked even if the podcast was already liked.

In LikeProjector.replay_podcast_likes (like_projector.ex:856):

with {:ok, _} <- upsert_like(repo, user_id, feed, nil, like_data.liked_at),

like_data.liked_at on a string-keyed map raises a KeyError (dot access on a map requires the :liked_at atom key), breaking checkpoint replay.

Fix options:

  • Add a Jason.Decoder implementation for LikeCheckpoint that atomizes nested map keys, following the pattern used elsewhere in the codebase if it exists
  • Or add a normalisation step in Like.apply/2 for LikeCheckpoint (e.g. Map.new(likes, fn {k, v} -> {k, atomize_like(v)} end))
  • Or document that SnapshotLike is not production-ready and gate it behind a feature flag

This needs to be verified: does the existing Commanded configuration atomize keys in nested maps? If not, this silently breaks the only use case for SnapshotLike.
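
A sketch of the second option, assuming the nested maps carry only the liked_at/unliked_at timestamps and that the aggregate's defstruct has podcast_likes/episode_likes fields as described above:

```elixir
# Illustrative normalization in the aggregate's LikeCheckpoint apply/2:
defp atomize_like(%{liked_at: _} = already_atomized), do: already_atomized
defp atomize_like(raw) when is_map(raw) do
  %{liked_at: raw["liked_at"], unliked_at: raw["unliked_at"]}
end

def apply(%__MODULE__{} = state, %LikeCheckpoint{} = event) do
  %__MODULE__{state |
    podcast_likes: Map.new(event.podcast_likes, fn {k, v} -> {k, atomize_like(v)} end),
    episode_likes: Map.new(event.episode_likes, fn {k, v} -> {k, atomize_like(v)} end)
  }
end
```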


SHOULD-FIX

2. No tests for the 4 new PopularityProjector event handlers

The PR adds PodcastLiked, PodcastUnliked, EpisodeLiked, EpisodeUnliked handling in PopularityProjector (~140 lines of new code), but there are zero tests for these handlers. They update score, likes, and likes_people — fields that affect public-facing timeline/discovery features. At minimum, tests should cover:

  • score incremented on like, decremented on unlike (clamped to 0)
  • likes counter updated correctly
  • likes_people includes public users, excludes private users
  • Episode like also increments podcast score

3. EpisodeUnliked and PodcastUnliked in PopularityProjector create empty records if none exist

(popularity_projector.ex lines for unlike handlers):

popularity =
  repo.get(EpisodePopularity, event.rss_source_item) ||
    %EpisodePopularity{rss_source_item: ..., rss_source_feed: ...}

If no popularity record exists (e.g. episode was never played/liked), unliking creates a new record with score=0, likes=0. This is harmless but creates ghost records. The PodcastUnliked handler has the same issue. Consider early-returning {:ok, nil} if no record exists, like the projector does in LikeProjector.unlike/5.


NICE-TO-HAVE

4. Route design for base64-encoded values with path separators

Routes DELETE /api/v1/likes/:feed and DELETE /api/v1/likes/:feed/:item use path segments for rss_source_feed/rss_source_item. Standard base64 uses / which would split path segments. The tests use values like "dGVzdC1mZWVk" (no /), suggesting URL-safe base64 is used — worth confirming this is enforced consistently (the existing subscription routes have the same pattern, so this may be a non-issue).

5. Non-atomic read-then-write in LikeProjector.upsert_like

The find_like + insert/update pattern in upsert_like/5 is not atomic under concurrent requests for the same user+feed. The unique index would catch duplicates at the DB level and surface as a changeset error. An Ecto insert_or_update with on_conflict: :replace_all would be cleaner. That said, this matches the pre-existing pattern in the codebase (e.g. subscription/play projectors), so acceptable as-is.
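
A sketch of the atomic variant using an upsert instead of find-then-write. Column and index names follow the users.user_likes table from the PR description and are assumptions here; note that a partial unique index needs its WHERE clause spelled out in the conflict target:

```elixir
# Illustrative upsert; replaces the non-atomic find_like + insert/update pair.
%UserLike{}
|> UserLike.changeset(attrs)
|> repo.insert(
  on_conflict: {:replace, [:liked_at, :unliked_at, :updated_at]},
  # Partial unique indexes require the index's WHERE clause in the target:
  conflict_target: {:unsafe_fragment, "(user_id, rss_source_feed) WHERE rss_source_item IS NULL"}
)
```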

6. sync_likes comment could be more explicit

The comment in sync_controller.ex:898 is good but could note the specific scenario where the gap matters: a device calls POST /api/v1/likes for a feed it already synced as liked → will re-emit PodcastLiked event → projection no-ops (upsert), but the event store accumulates duplicate events. Low impact but worth tracking.


Positive notes

  • Idempotency in the aggregate is correctly handled for the like, unlike, and re-like flows ✓
  • LWW conflict resolution in sync is consistent with subscriptions/plays ✓
  • Rate limiting and JWT scope checks match the established auth patterns ✓
  • The validate_required/1 in UserLike.changeset covers the right fields ✓
  • get_user_likes (active only) vs get_likes_since (includes unlike tombstones) distinction is correct for sync ✓
  • 18 aggregate tests + 11 controller tests + projector tests provide good base coverage ✓

@github-actions github-actions Bot removed the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026
MUST-FIX:
- Normalize string keys in LikeCheckpoint apply/2 (aggregate) and
  projector: after JSON deserialization from event store, nested map
  keys become strings ("liked_at") instead of atoms (:liked_at).
  Added atomize_keys helpers in both aggregate and projector.
- Added test verifying idempotency works after checkpoint with string keys

SHOULD-FIX:
- Unlike handlers in PopularityProjector now early-return {:ok, nil}
  when no popularity record exists (avoids creating ghost records)
- Enhanced sync_likes comment with specific duplicate event scenario

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@podclaude podclaude Bot added the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026

claude Bot commented Feb 19, 2026

Claude Code Review — PR #255: Like/Unlike System

Overall: Architecture is solid and follows established CQRS/ES patterns. The aggregate design is clean, idempotency is handled correctly, and the test coverage for the aggregate and controller layers is good. A few items to address before merging.


SHOULD-FIX

1. normalize_likes_map / atomize_keys duplicated in two places

apps/balados_sync_core/lib/balados_sync_core/aggregates/like.ex lines 187–210 and apps/balados_sync_projections/lib/balados_sync_projections/projectors/like_projector.ex lines 165–192 contain identical private functions. If a future field is added to the nested maps (e.g., a note or rating), the fix must be applied in two places.

Suggested fix: extract to a shared module, e.g., BaladosSyncCore.Serialization.LikeNormalizer, and import or delegate in both places.
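
One possible shape for the shared module. The module name is the reviewer's suggestion; the field handling is illustrative:

```elixir
defmodule BaladosSyncCore.Serialization.LikeNormalizer do
  @moduledoc "Normalizes like-maps whose keys were stringified by JSON round-trips."

  # Rebuilds %{feed => %{liked_at: ..., unliked_at: ...}} from either atom- or
  # string-keyed entries, so aggregate and projector can share one code path.
  def normalize(likes) when is_map(likes) do
    Map.new(likes, fn {key, entry} -> {key, normalize_entry(entry)} end)
  end

  defp normalize_entry(%{liked_at: _} = entry), do: entry
  defp normalize_entry(entry) when is_map(entry) do
    %{liked_at: entry["liked_at"], unliked_at: entry["unliked_at"]}
  end
end
```

Both the aggregate and the projector could then call LikeNormalizer.normalize/1, so a new nested field only needs to be added in one place.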

2. No tests for the PopularityProjector like/unlike handlers

The PR adds ~150 lines to popularity_projector.ex (PodcastLiked, PodcastUnliked, EpisodeLiked, EpisodeUnliked handlers) with no corresponding tests. The existing projector test file for collections, plays, etc. is a good model. Key cases to cover:

  • PodcastLiked increments score, likes, and likes_people (only for public users)
  • PodcastUnliked decrements and floors at 0
  • EpisodeLiked updates episode popularity AND bumps podcast score (but not likes)
  • EpisodeUnliked decrements both episode and podcast score
  • Re-like after unlike doesn't double-count

The asymmetry in EpisodeLiked (it increments podcast score but not podcast likes) is intentional per the description, but should be verified by a test since it's a non-obvious design choice.

3. N+1 queries in LikeCheckpoint projection

In like_projector.ex, the LikeCheckpoint handler calls upsert_like/5 per entry, each doing a SELECT + INSERT/UPDATE. For a user with hundreds of likes, checkpoint replay causes N×2 queries. Since this is a delete-then-rebuild, a single repo.insert_all with on_conflict: :replace_all would be more efficient:

# After delete_all, build attrs list and bulk insert:
entries = Enum.map(podcast_likes, fn {feed, data} ->
  %{user_id: user_id, rss_source_feed: feed, rss_source_item: nil,
    liked_at: data.liked_at, unliked_at: data.unliked_at,
    inserted_at: now, updated_at: now}
end)
repo.insert_all(UserLike, entries)

The current approach works correctly but will degrade under load for active users.


NICE-TO-HAVE

4. URL path params may not survive base64 with /

DELETE /api/v1/likes/:feed and DELETE /api/v1/likes/:feed/:item put opaque identifiers in the path. Standard base64 includes / and +, which break URL routing. The test values (dGVzdC1mZWVk) happen to be safe, but a real rss_source_feed like "https://feeds.example.com/podcast" base64-encodes to a value that can contain /.

Either document that callers must use URL-safe base64 (RFC 4648 §5, - and _ instead of + and /), or switch the unlike endpoints to DELETE /api/v1/likes with a body/query param. The POST /api/v1/likes already uses the request body, so the DELETE could follow the same pattern.
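
If the URL-safe alphabet is documented, Elixir's stdlib already covers both directions:

```elixir
# URL-safe base64 (RFC 4648 §5) round-trip with the stdlib Base module.
feed = "https://feeds.example.com/podcast"
encoded = Base.url_encode64(feed, padding: false)
# encoded uses only [A-Za-z0-9_-], so it is safe as a single path segment
^feed = Base.url_decode64!(encoded, padding: false)
```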

5. GET /api/v1/likes has no pagination

For users with large like histories, the index action returns all rows at once. Adding limit/offset or cursor pagination would prevent memory pressure for power users. Not urgent for v1, but worth tracking.


Informational (no action needed)

Sync bypass of CQRS: The sync_likes note in sync_controller.ex correctly flags that synced likes bypass the aggregate and are written directly to projections. This matches the established pattern for subscriptions and plays, and the trade-off (projection resets lose synced-only data) is the same known limitation. The comment is clear and appropriate — no change needed, just noting it's been seen.

delete_all return value ignored: In like_projector.ex checkpoint handler, repo.delete_all() returns {count, nil} which is silently discarded. Not a bug (the transaction rolls back on failure regardless), but binding it to _ would make the intent explicit.


Verdict: APPROVED WITH COMMENTS — the 3 SHOULD-FIX items above should be addressed before merge. The core CQRS implementation is correct, tests are meaningful, and the sync integration is consistent with existing patterns.

@github-actions github-actions Bot removed the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026
… popularity tests

- Extract LikeNormalizer module shared by aggregate and projector (removes duplication)
- Replace N+1 checkpoint queries with bulk repo.insert_all
- Add PopularityProjector tests for like/unlike handlers (8 tests)
- Pass attrs properly in test helpers for Ecto change tracking

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@podclaude podclaude Bot added the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026

claude Bot commented Feb 19, 2026

Good implementation overall. The CQRS structure is sound, idempotency is correctly handled in the aggregate, and the LikeNormalizer for checkpoint JSON key normalization is a clean solution. Note: the previous review comment on this PR contains several inaccuracies.


claude Bot commented Feb 19, 2026

Test comment to verify gh works


claude Bot commented Feb 19, 2026

Task Queue

Sync tasks between this file and GitHub issues/PRs.


Workflow for Claude Code

Step 1: Fetch Current State

# List open issues
gh issue list --state open --json number,title,labels,state

# List open PRs
gh pr list --state open --json number,title,state,isDraft

Step 2: Process TODO Section

For each task in TODO below:

  1. Check if issue exists: gh issue list --search "TASK_KEYWORDS"
  2. If not found, create it:
gh issue create \
  --title "feat: TITLE" \
  --body "## Objective
DESCRIPTION

## Context
WHY_IT_MATTERS

## Acceptance Criteria
- [ ] Implementation complete
- [ ] Tests pass"
  3. Move task to "In Progress" with issue link

Step 3: Update In Progress

For each item in "In Progress":

# Check issue/PR status
gh issue view NUMBER --json state,title
gh pr view NUMBER --json state,title,mergeable

Update status codes. Move merged PRs to "Done".

Step 4: Commit Changes

git add TODOS.md
git commit --author="Claude <noreply@anthropic.com>" -m "chore: sync TODOS.md with GitHub"
git push

Status Codes

Code    Meaning
OPEN    Issue created, not started
WIP     PR in development
REVIEW  PR awaiting review
MERGED  Completed

TODO

Add tasks here. Claude will create GitHub issues for them.

(empty)


Future Issues

Issues created for future features:


In Progress

Format: - [ ] Description - [#N](url) - STATUS

  • E2E UI testing with Wallaby - #197 - PR #198 - REVIEW

Done

Format: - [x] Description - [#N](url)


claude Bot commented Feb 19, 2026

Balados Sync

CQRS/Event Sourcing system for synchronizing podcast plays, subscriptions, and playlists.

Origin

The system was coded by Claude.AI following the instructions in a note found in docs/guides/ORIGINAL_NOTE.md

Documentation

Architecture

balados_sync/
├── apps/
│   ├── balados_sync_core/        # Domain (CQRS/ES)
│   │   ├── events/                # Immutable events
│   │   ├── commands/              # Intents
│   │   ├── aggregates/            # Business logic
│   │   └── router.ex              # Command routing
│   ├── balados_sync_projections/  # Read models
│   │   ├── schemas/               # PostgreSQL tables
│   │   └── projectors/            # Event handlers
│   ├── balados_sync_web/          # REST API
│   │   └── controllers/           # HTTP endpoints
│   └── balados_sync_jobs/         # Workers
│       └── snapshot_worker.ex     # Checkpoints & cleanup
└── config/

Key concepts

Event Sourcing

  • Event Store: all events are stored immutably
  • Checkpoints: snapshots created every 5 minutes for events older than 45 days
  • Cleanup: events older than 45 days are deleted after checkpointing
  • ⚠️ Exception: deletion events remove the affected history (it disappears after 45 days)

Granular privacy

  • Public: data visible with user_id
  • Anonymous: data visible without user_id
  • Private: data hidden

Privacy is configurable per:

  • User (global)
  • Podcast (feed)
  • Episode (item)

Popularity system

Scores per action:

  • Subscribe: 10 points
  • Play: 5 points
  • Save/Like: 3 points
  • Share: 2 points

Recalculated every 5 minutes from the event log.

Installation

Prerequisites

  • Elixir 1.17+
  • PostgreSQL 14+
  • Erlang 26+

Setup

# Clone the project
git clone <repo>
cd balados_sync

# Install dependencies
mix deps.get

# Create the databases (system schema + event store)
mix db.create

# Initialize the event store + migrate the system schema
mix db.init

# Start the project
mix phx.server

The server starts at http://localhost:4000

Database commands

Initial installation:

mix db.create     # Creates DBs and event store
mix db.init       # Initializes event store + migrates system

During development:

mix db.migrate        # Migrate the system schema (after creating a migration)
mix system_db.migrate # Same (more explicit alias)

# SAFE resets (prompt for confirmation):
mix db.reset --projections   # Reset projections only (SAFE) ✅
mix db.reset --system        # Reset system (users, tokens) ⚠️
mix db.reset --events        # Reset event store (EXTREME!) ☢️
mix db.reset --all           # Reset EVERYTHING (EXTREME!) ☢️

Note: the project uses 4 distinct PostgreSQL schemas:

  • system (permanent): users and tokens
  • events (permanent): event store (source of truth)
  • public (transient): public projections (rebuilt from events)
  • See Database Architecture for more details

⚠️ Important: the commands mix ecto.reset, ecto.drop, ecto.migrate, and ecto.create are forbidden. Use mix db.* instead.

Configuration

Environment variables

# Development
export DATABASE_URL="postgresql://postgres:postgres@localhost/balados_sync_dev"
export EVENT_STORE_URL="postgresql://postgres:postgres@localhost/balados_sync_eventstore_dev"
export SECRET_KEY_BASE="your-secret-key"

# Production
export PHX_HOST="api.example.com"
export PORT=4000

JWT key generation (RS256)

# Generate a key pair
openssl genrsa -out private_key.pem 2048
openssl rsa -in private_key.pem -pubout -out public_key.pem
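
The generated private key can be loaded into a Joken signer roughly like this (illustrative usage; the app's actual signer configuration is not shown in this README):

```elixir
# Illustrative: build an RS256 signer from the PEM and sign a token with it.
private_pem = File.read!("private_key.pem")
signer = Joken.Signer.create("RS256", %{"pem" => private_pem})

{:ok, token, _claims} =
  Joken.generate_and_sign(%{}, %{"sub" => "user_id", "jti" => "unique_token_id"}, signer)
```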

Additional dependencies

The project requires the following dependencies (already in mix.exs):

# Core
{:commanded, "~> 1.4"}
{:eventstore, "~> 1.4"}
{:ecto_sql, "~> 3.12"}
{:postgrex, "~> 0.19"}

# Web
{:phoenix, "~> 1.7"}
{:joken, "~> 2.6"}
{:joken_jwks, "~> 1.6"}

# RSS Proxy & Aggregation
{:httpoison, "~> 2.2"}
{:cachex, "~> 3.6"}
{:sweet_xml, "~> 0.7"}
{:timex, "~> 3.7"}

# Jobs
{:quantum, "~> 3.5"}

play subdomain configuration

For local testing, add to /etc/hosts:

127.0.0.1 balados.sync play.balados.sync

Then in config/dev.exs:

config :balados_sync_web, BaladosSyncWeb.Endpoint,
  url: [host: "balados.sync", port: 4000],
  http: [ip: {127, 0, 0, 1}, port: 4000],
  check_origin: false

config :balados_sync_web,
  play_domain: "play.balados.sync"

Test it:

# Should respond normally
curl http://balados.sync:4000/api/v1/public/trending/podcasts

# Should use the PlayGatewayController
curl -L http://play.balados.sync:4000/{token}/{feed}/{item}

Using the API

Authentication

All authenticated requests require a JWT in the header:

Authorization: Bearer <jwt_token>

The JWT must contain:

{
  "sub": "user_id",
  "jti": "unique_token_id",
  "device_id": "device_123",
  "device_name": "iPhone de John",
  "iat": 1234567890,
  "exp": 1234567890
}

Signed with RS256 using the private key matching the registered public key.

Endpoints

1. Full sync

POST /api/v1/sync
Content-Type: application/json
Authorization: Bearer <token>

{
  "subscriptions": [
    {
      "rss_source_feed": "aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk",
      "rss_source_id": "podcast_id",
      "subscribed_at": "2024-01-15T10:30:00Z",
      "unsubscribed_at": null
    }
  ],
  "play_statuses": [
    {
      "rss_source_feed": "aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk",
      "rss_source_item": "ZXBpc29kZV8xMjM=",
      "position": 1250,
      "played": false,
      "updated_at": "2024-01-15T11:00:00Z"
    }
  ],
  "playlists": []
}

Response:

{
  "status": "success",
  "data": {
    "subscriptions": [...],
    "play_statuses": [...],
    "playlists": [...]
  }
}

2. Subscriptions

# Subscribe to a podcast
POST /api/v1/subscriptions
{
  "rss_source_feed": "aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk",
  "rss_source_id": "podcast_id"
}

# Unsubscribe
DELETE /api/v1/subscriptions/aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk

# List active subscriptions
GET /api/v1/subscriptions

3. Plays

# Record a play
POST /api/v1/play
{
  "rss_source_feed": "aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk",
  "rss_source_item": "ZXBpc29kZV8xMjM=",
  "position": 1250,
  "played": false
}

# Update the position
PUT /api/v1/play/ZXBpc29kZV8xMjM=/position
{
  "position": 2500
}

# List play statuses
GET /api/v1/play?played=false&feed=aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk&limit=50

4. Episodes

# Save/like an episode
POST /api/v1/episodes/ZXBpc29kZV8xMjM=/save?feed=aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk

# Share an episode
POST /api/v1/episodes/ZXBpc29kZV8xMjM=/share?feed=aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk

5. Privacy

# Change the global privacy setting
PUT /api/v1/privacy
{
  "privacy": "anonymous"
}

# Privacy for a specific podcast
PUT /api/v1/privacy
{
  "privacy": "private",
  "feed": "aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk"
}

# Privacy for a specific episode
PUT /api/v1/privacy
{
  "privacy": "public",
  "feed": "aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk",
  "item": "ZXBpc29kZV8xMjM="
}

# View your privacy settings
GET /api/v1/privacy
GET /api/v1/privacy?feed=aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk

6. Public data (no auth)

# Top trending podcasts
GET /api/v1/public/trending/podcasts?limit=20

# Top trending episodes
GET /api/v1/public/trending/episodes?limit=20&feed=aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk

# Podcast popularity
GET /api/v1/public/feed/aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk/popularity

# Episode popularity
GET /api/v1/public/episode/ZXBpc29kZV8xMjM=/popularity

# Public timeline
GET /api/v1/public/timeline?limit=50&offset=0&event_type=play&feed=aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk

7. RSS proxy with LRU cache (no auth)

CORS proxy for accessing RSS feeds from the browser, with a 5-minute cache:

# Fetch a full RSS feed
GET /api/v1/rss/proxy/aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVkLnhtbA==
Accept: application/xml

# Filter down to a single episode (by guid AND enclosure)
GET /api/v1/rss/proxy/aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVkLnhtbA==/ZXBpc29kZV8xMjMsaHR0cHM6Ly9leGFtcGxlLmNvbS9lcC5tcDM=
Accept: application/xml

Response headers:

  • Access-Control-Allow-Origin: *
  • Cache-Control: public, max-age=300
  • Content-Type: application/xml

LRU cache:

  • 500 entries max
  • 5-minute TTL
  • Automatic LRU (Least Recently Used) eviction
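
Such a cache policy could be expressed with Cachex roughly like this. The cache name and options are illustrative, not the app's actual configuration, and Cachex's stock eviction policy is LRW (least recently written), which only approximates LRU:

```elixir
# Illustrative Cachex setup: 500-entry bound, 5-minute TTL, eviction policy.
import Cachex.Spec

Cachex.start_link(:rss_proxy_cache,
  limit: limit(size: 500, policy: Cachex.Policy.LRW, reclaim: 0.1),
  expiration: expiration(default: :timer.minutes(5))
)
```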

8. Per-user aggregated RSS (with user_token)

Personalized RSS feeds built dynamically:

# All episodes from my subscriptions (latest 100)
GET /api/v1/rss/user/{user_token}/subscriptions
Accept: application/xml

# Episodes from a playlist
GET /api/v1/rss/user/{user_token}/playlist/{playlist_id}
Accept: application/xml

Features:

  • Parallel fetch of all feeds through the RSS proxy
  • Episodes merged and sorted by date (desc)
  • Podcast name prefix: [Tech Talks] Episode 42
  • Enclosure URLs replaced with play gateway URLs
  • 1-minute private cache

Generating a user_token:

POST /api/v1/tokens
Authorization: Bearer <jwt>
{
  "name": "My RSS Reader"
}

# Response:
{
  "token": "randomBase64Token",
  "user_id": "user_123"
}

9. Play Gateway (play.balados.sync subdomain)

Gateway that records a play and redirects to the enclosure:

# Format: https://play.balados.sync/{user_token}/{feed_id}/{item_id}
GET https://play.balados.sync/randomToken123/aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk/ZXBpc29kZV8xMjMsaHR0cHM6Ly9leGFtcGxlLmNvbS9lcC5tcDM=

Workflow:

  1. Verifies the user_token
  2. Decodes feed_id and item_id (base64)
  3. Dispatches a RecordPlay command (async)
  4. Redirects 302 to the real enclosure URL
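
The four steps above could be sketched as a Phoenix action. Helper and module names here (Tokens.verify_user_token/1, Router, the RecordPlay fields) are assumptions for illustration, not the project's actual code:

```elixir
def play(conn, %{"user_token" => token, "feed_id" => feed_b64, "item_id" => item_b64}) do
  with {:ok, user_id} <- Tokens.verify_user_token(token),                      # 1. verify user_token
       {:ok, feed} <- Base.decode64(feed_b64),                                 # 2. decode base64 ids
       [item, enclosure_url] <- String.split(Base.decode64!(item_b64), ",", parts: 2) do
    # 3. dispatch RecordPlay without blocking the redirect
    Task.start(fn ->
      Router.dispatch(%RecordPlay{user_id: user_id, rss_source_feed: feed, rss_source_item: item})
    end)

    redirect(conn, external: enclosure_url)                                    # 4. 302 to the enclosure
  end
end
```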

Usage in aggregated feeds:
All personalized RSS feeds (/user/:token/subscriptions and /playlist/:id) automatically use these URLs as enclosures, which makes plays trackable.

Example trending response:

{
  "podcasts": [
    {
      "rss_source_feed": "aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk",
      "feed_title": "Tech Talks",
      "feed_author": "John Doe",
      "feed_cover": {
        "src": "https://example.com/cover.jpg",
        "srcset": "..."
      },
      "score": 25420,
      "score_previous": 20000,
      "plays": 3420,
      "plays_previous": 3000,
      "plays_people": ["user1", "user2", "user3"],
      "likes": 340,
      "likes_previous": 300
    }
  ]
}

Architecture de la Base de Données

La base de données PostgreSQL utilise 4 schémas distincts gérés par 2 Ecto Repositories :

Multi-Repo Architecture

SystemRepo (Permanent Data)

Managed with: mix system.migrate

Contains permanent data managed directly through Ecto (plain CRUD):

  • users: registered users
  • app_tokens: valid JWT tokens (App Auth)
  • play_tokens: RSS sharing tokens

Characteristics:

  • ❌ NOT event-sourced (NEVER)
  • ✅ Direct CRUD operations via Ecto
  • ⚠️ Permanent data (cannot be rebuilt)

ProjectionsRepo (Public Projections)

Managed with: mix projections.migrate

Contains the projections rebuilt from events (read models):

  • public schema: public data
    • public_events: filtered public/anonymous events
    • podcast_popularity: per-podcast popularity stats
    • episode_popularity: per-episode popularity stats
    • subscriptions: user subscriptions

Characteristics:

  • ✅ Event-sourced (rebuilt from the EventStore)
  • ✅ Can be reset safely at any time (mix db.reset --projections)
  • 🔄 Automatically rebuilt by the projectors

EventStore (Commanded)

Managed with: mix event_store.init -a balados_sync_core (run once)

Contains every immutable event in the system:

  • Each action (Subscribe, Play, etc.) creates an event
  • Events are immutable (to "delete" something, emit a new event)
  • Exception: deletion events remove the affected history

Important: ❌ NEVER modify it manually. Managed exclusively by Commanded.

Flexible Configuration

The two repos can share a single PostgreSQL database using separate schemas (the default in dev):

-- A single database with 3 schemas
CREATE SCHEMA system;    -- SystemRepo
CREATE SCHEMA public;    -- ProjectionsRepo
CREATE SCHEMA events;    -- EventStore

Or they can be split across separate databases for better isolation (recommended in prod):

balados_sync_system      → SystemRepo (schema system)
balados_sync_projections → ProjectionsRepo (schema public)
balados_sync_events      → EventStore (schema events)
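
A prod config for the split layout might look like this (a sketch only: the OTP app names and the SystemRepo/EventStore module names are assumptions based on the naming used elsewhere in this README, not confirmed module paths):

```elixir
import Config

# Each repo points at its own database; the schema prefixes stay the same.
config :balados_sync_core, BaladosSyncCore.SystemRepo,
  database: "balados_sync_system"

config :balados_sync_projections, BaladosSyncProjections.Repo,
  database: "balados_sync_projections"

config :balados_sync_core, BaladosSyncCore.EventStore,
  database: "balados_sync_events"
```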

Command Reference

Command                   Target                  Description
mix db.create             all                     Creates the databases and schemas
mix db.init               system + events         Initializes everything at once
mix db.migrate            system + projections    Migrates both repos
mix system.migrate        system                  Migrates ONLY SystemRepo
mix projections.migrate   public                  Migrates ONLY ProjectionsRepo

Reset Commands

# ✅ SAFE - Resets projections only (preserves system + events)
mix db.reset --projections

# ⚠️  DANGER - Resets the system schema (users, tokens)
mix db.reset --system

# ☢️ EXTREME - Resets the event store
mix db.reset --events

# ☢️☢️ EXTREME - Resets EVERYTHING
mix db.reset --all

Danger:

  • mix ecto.reset = resets EVERYTHING (events + system); avoid it
  • mix db.reset --projections = resets projections only; SAFE

Maintenance Worker

The SnapshotWorker runs every 5 minutes:

  1. Checkpoint: creates snapshots for users whose events are older than 45 days
  2. Recalculation: updates popularity using the points system
  3. Cleanup: deletes events older than 45 days (after the checkpoint)

Configuration in config/config.exs:

config :balados_sync_jobs, BaladosSyncJobs.Scheduler,
  jobs: [
    {"*/5 * * * *", {BaladosSyncJobs.SnapshotWorker, :perform, []}}
  ]

Development

Tests

# Run the whole test suite
mix test

# Tests for a specific app
cd apps/balados_sync_core
mix test

# With coverage
mix test --cover

Interactive console

# Console with all apps loaded
iex -S mix

# Dispatch a command
alias BaladosSyncCore.App
alias BaladosSyncCore.Commands.Subscribe

App.dispatch(%Subscribe{
  user_id: "user_123",
  device_id: "device_456",
  device_name: "iPhone",
  rss_source_feed: "base64_feed",
  rss_source_id: "podcast_id"
})

# Query the projections
alias BaladosSyncProjections.Repo
alias BaladosSyncProjections.Schemas.Subscription
Repo.all(Subscription)

Event replay

If you need to rebuild the projections:

# Stop the projectors
# Delete the projection data only (never the event store!)
mix db.reset --projections

# The projectors will automatically replay all events

RSS Encoding

rss_source_feed and rss_source_item values are base64-encoded:

# Feed
feed_url = "https://example.com/feed.xml"
rss_source_feed = Base.encode64(feed_url)

# Item (guid + enclosure)
guid = "episode-123"
enclosure = "https://example.com/episode.mp3"
rss_source_item = Base.encode64("#{guid},#{enclosure}")
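
Decoding reverses this; splitting with parts: 2 matters because the enclosure URL itself may legally contain a comma (this sketch assumes the guid never does):

```elixir
rss_source_item = Base.encode64("episode-123,https://example.com/ep.mp3?a=1,2")

{:ok, decoded} = Base.decode64(rss_source_item)
# parts: 2 splits only at the first comma, keeping the comma in the URL intact
[guid, enclosure] = String.split(decoded, ",", parts: 2)
```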

Production

Deployment

# Build the release
MIX_ENV=prod mix release

# Start it
_build/prod/rel/balados_sync/bin/balados_sync start

Monitoring

The system exposes metrics through Phoenix LiveDashboard:

http://localhost:4000/dashboard

Key metrics:

  • Number of events in the event store
  • Projection latency
  • Number of active users
  • Real-time popularity

Troubleshooting

Projections are lagging

# Check the projection state
iex -S mix
BaladosSyncProjections.Projectors.SubscriptionProjector.state()

Missing events

Events are immutable. If data seems to be missing, check:

  1. The privacy filters
  2. Recent checkpoints
  3. The worker logs

Performance

To improve performance:

  1. Refresh the materialized views: REFRESH MATERIALIZED VIEW site.trending_podcasts
  2. Analyze the indexes: ANALYZE users.subscriptions
  3. Increase the PostgreSQL connection pool size in the config
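
For the pool size, the setting lives on the repo config (a sketch; the value 20 is illustrative, and the OTP app name is an assumption):

```elixir
import Config

config :balados_sync_projections, BaladosSyncProjections.Repo,
  # Ecto's default pool_size is 10; raise it gradually and watch connection usage
  pool_size: 20
```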

License

MIT

Support

For any questions: support@balados-sync.example.com

@claude

claude Bot commented Feb 19, 2026


Review: feat: add like/unlike system for podcasts and episodes

Good implementation overall. The CQRS structure is sound, idempotency is correctly handled in the aggregate, and the LikeNormalizer for checkpoint JSON key normalization is a clean solution.

Note: the previous review comment on this PR contains several inaccuracies - it claims LikeProjector has no Logger calls (it does), that tests are missing (they exist), and describes for comprehensions that are not present in the code. Please disregard those specific points.

Here are my findings:


MUST-FIX

1. LWW conflict resolution in sync_likes is broken for unlike operations

In sync_controller.ex, the LWW comparison uses liked_at as the timestamp for both sides:

client_ts = like.liked_at || now
server_ts = record.liked_at || ~U[1970-01-01 00:00:00Z]

if DateTime.compare(client_ts, server_ts) != :lt do
  record |> UserLike.changeset(attrs) |> repo.update()
end

Failure scenario:

  1. T1: User likes podcast on Device A -> server: {liked_at: T1, unliked_at: nil}
  2. T2: User unlikes via API directly -> server: {liked_at: T1, unliked_at: T2}
  3. Device A syncs (was offline), sends: {liked_at: T1, unliked_at: nil}
    • client_ts = T1, server_ts = T1 -> equal -> client wins
    • Server unlike at T2 is silently erased

The correct LWW timestamp should be the most recent action: max(liked_at, unliked_at) for both client and server. Fix:

client_ts = max_ts(like.liked_at, like.unliked_at) || now
server_ts = max_ts(record.liked_at, record.unliked_at) || ~U[1970-01-01 00:00:00Z]

defp max_ts(nil, nil), do: nil
defp max_ts(a, nil), do: a
defp max_ts(nil, b), do: b
defp max_ts(a, b), do: if(DateTime.compare(a, b) != :lt, do: a, else: b)
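
Replaying the failure scenario above against the fixed comparison (a self-contained sketch; the module name is illustrative):

```elixir
defmodule LwwSketch do
  # Same semantics as the max_ts/2 fix above
  def max_ts(nil, nil), do: nil
  def max_ts(a, nil), do: a
  def max_ts(nil, b), do: b
  def max_ts(a, b), do: if(DateTime.compare(a, b) != :lt, do: a, else: b)
end

t1 = ~U[2026-02-19 10:00:00Z]  # like on Device A
t2 = ~U[2026-02-19 10:05:00Z]  # unlike via API

client_ts = LwwSketch.max_ts(t1, nil)  # stale client: {liked_at: T1, unliked_at: nil}
server_ts = LwwSketch.max_ts(t1, t2)   # server: {liked_at: T1, unliked_at: T2}

# The server timestamp (T2) is now newer, so the stale client payload loses
# and the unlike survives.
client_wins = DateTime.compare(client_ts, server_ts) != :lt
```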

SHOULD-FIX

2. Missing tests for sync_likes LWW logic

The controller tests cover auth and basic CRUD, but there are no tests for the sync integration path (sync_likes/4). The LWW conflict resolution has at least one confirmed bug (see above), and this code path is critical for multi-device reliability.

Needed test cases:

  • Client like newer than server -> client wins, projection updated
  • Server like newer than client -> server wins, projection unchanged
  • Client unlike (same liked_at, newer unliked_at) -> unlike preserved
  • New like on server not known to client -> preserved in server state

NICE-TO-HAVE

3. String.to_existing_atom in LikeNormalizer is fragile

{key, value} when is_binary(key) -> {String.to_existing_atom(key), value}

This works because :liked_at, :unliked_at, :rss_source_feed are defined in struct modules loaded at startup. However, if a field is renamed in a future refactor, this will raise ArgumentError on checkpoint replay with no helpful error message. A safer pattern is an explicit whitelist map:

@allowed_keys %{
  "liked_at" => :liked_at,
  "unliked_at" => :unliked_at,
  "rss_source_feed" => :rss_source_feed
}
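
A normalizer over that map could then drop unknown keys instead of raising (a sketch; the anonymous-function shape is illustrative, not the real LikeNormalizer API):

```elixir
allowed_keys = %{
  "liked_at" => :liked_at,
  "unliked_at" => :unliked_at,
  "rss_source_feed" => :rss_source_feed
}

atomize = fn map ->
  # Keys absent from the whitelist yield nil and are filtered out
  # by the comprehension, so replay never raises on a renamed field.
  for {k, v} <- map, atom = allowed_keys[k], into: %{}, do: {atom, v}
end
```

Unknown keys simply disappear from the result, which is the right behaviour for immutable checkpoint events written under an older schema.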

4. Aggregate state grows unboundedly

The Like aggregate stores all historical likes including old unliked entries. For a power user who likes and unlikes many episodes over time, this grows unboundedly and gets loaded into memory on every command. The LikeCheckpoint helps with event replay latency, but the in-memory state is never trimmed.

Consider only storing currently-liked entries in the aggregate (since idempotency only needs to know if something is currently liked), or pruning old unliked entries from the checkpoint.

5. LikeCheckpoint events are ignored by PopularityProjector

If the event store is ever compacted (old events removed up to the checkpoint), a full projection rebuild would have PopularityProjector missing all historical like/unlike events. The PopularityRecalculated periodic recalc would eventually fix this, but there would be a window of inconsistency. Acceptable given the periodic recalc, but worth documenting.


Overall the core CQRS implementation is clean. The LWW bug in sync (#1) is the only blocker.

@github-actions github-actions Bot removed the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026
- Fix sync_likes LWW to compare max(liked_at, unliked_at) instead of
  just liked_at, preventing stale client likes from erasing server unlikes
- Add max_datetime/2 helper for timestamp comparison
- Add 5 sync_likes LWW tests: client wins, server wins, client unlike wins,
  server unlike preserved, new like creation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@claude

claude Bot commented Feb 19, 2026

Code Review — feat: like/unlike system (#255)

Solid implementation overall. CQRS/ES patterns are applied correctly, idempotency is properly handled, and test coverage is thorough. A few issues to address before merging.


SHOULD-FIX

1. Duplicate (feed, item) in sync payload crashes Ecto.Multi with an unhandled exception

In sync_controller.ex, sync_likes/4 builds Multi keys as:

Multi.run(acc_multi, {:like, feed, item || "podcast"}, fn repo, _changes ->

Ecto.Multi raises an exception if the same key appears twice. parse_likes/1 filters for valid rss_source_feed but does not deduplicate by (feed, item). A client sending the same feed twice in the likes array would produce a 500 error at the sync endpoint. Fix with:

defp parse_likes(likes) when is_list(likes) do
  likes
  |> Enum.map(fn like -> ... end)
  |> Enum.filter(fn like -> is_binary(like.rss_source_feed) end)
  |> Enum.uniq_by(fn like -> {like.rss_source_feed, like.rss_source_item} end)  # add this
end

2. Test helper doesn't implement the privacy check — popularity tests don't reflect production behaviour

In the actual PopularityProjector, users are only added to likes_people if they are public:

likes_people:
  if(is_public,
    do: add_recent_user(popularity.likes_people, event.user_id),
    else: popularity.likes_people
  )

But the test helper in projector_test_case.ex always calls add_recent_user unconditionally:

likes_people: add_recent_user(popularity.likes_people, event.user_id)

This means the test assertion assert pop.likes_people == [user_id] would fail in production for a private user, and private users being added to likes_people would also be silently skipped in production. The privacy logic needs to be reflected in the test helper, or the popularity projector tests should be integration tests against the real projector.


NICE-TO-HAVE

3. String.to_existing_atom/1 in LikeNormalizer is fragile

{key, value} when is_binary(key) -> {String.to_existing_atom(key), value}

String.to_existing_atom/1 raises ArgumentError if the atom has never been created in the VM. The atoms liked_at, unliked_at, and rss_source_feed are defined as struct fields, so they should always exist in practice, but this is an implicit dependency on module load order. An explicit allowlist (or String.to_atom/1, for these fixed, known keys) would be safer and clearer in intent.

4. Episode likes silently double-bump podcast score without documentation

EpisodeLiked increments both the episode score (+7) and the parent podcast score (+7). This is visible in the projector and tested, but there's no comment explaining the intent ("episode engagement signals the parent podcast's quality"). Without this, future maintainers may treat it as a bug. Worth a one-line comment next to the second Multi.run.

5. Sync CQRS bypass is well-documented — thank you

The comment in sync_likes/4 clearly explains the trade-off (duplicate events in the store being low-impact, consistency with the existing subscription/play sync pattern). This is the right approach. Worth tracking as a known limitation, but no action needed now.


Positive observations

  • Aggregate idempotency is correctly handled for all four commands, including the re-like-after-unlike case via the _ -> catch-all.
  • SnapshotLike guards against uninitialized aggregates (user_id: nil).
  • Client-provided liked_at timestamps are correctly supported for sync without clock drift.
  • Partial unique indexes (one for rss_source_item IS NULL, one for IS NOT NULL) cleanly solve the nullable column uniqueness problem.
  • get_likes_since correctly uses updated_at > since to capture both new likes and unliked records.
  • LikeCheckpoint delete-all + bulk insert is transactional within the Commanded projector's Multi — safe.
  • Rate limits (100/min read, 30/min write) and scope hierarchy are consistent with existing endpoints.

Summary:

  • Item 1 (Multi key collision): SHOULD-FIX — potential 500 for malformed sync payloads
  • Item 2 (test/production behaviour divergence on privacy): SHOULD-FIX
  • Items 3–5: NICE-TO-HAVE

APPROVED WITH COMMENTS — please address items 1 and 2 before merging.

@github-actions github-actions Bot removed the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026
…tions

- Deduplicate likes by (feed, item) in parse_likes to prevent Ecto.Multi
  key collision on duplicate sync payload entries
- Document privacy simplification in popularity test helpers
- Add comment explaining why episode likes bump podcast score

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@podclaude podclaude Bot added the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026
@claude

claude Bot commented Feb 19, 2026

Code Review — PR #255: feat: add like/unlike system for podcasts and episodes

Overall this is a solid, well-structured implementation. The CQRS/ES patterns are correctly applied, the aggregate is idempotent, and the test coverage is good. Below are findings organized by severity.


SHOULD-FIX

1. delete_all result discarded in LikeCheckpoint handler — compiler warning

In like_projector.ex, the LikeCheckpoint projection:

Ecto.Multi.run(multi, :checkpoint, fn repo, _changes ->
  from(ul in UserLike, where: ul.user_id == ^event.user_id)
  |> repo.delete_all()   # ← result discarded, no pattern match
  ...
  {:ok, :checkpoint_applied}
end)

repo.delete_all/1 returns {count, nil} on success and raises on DB error. The discarded return value will produce a compiler warning. The idiomatic fix:

{_count, _} =
  from(ul in UserLike, where: ul.user_id == ^event.user_id)
  |> repo.delete_all()

2. Missing partial index for the common active likes query

The migration creates an index on [:user_id] but the most common query pattern is:

WHERE user_id = :user_id AND unliked_at IS NULL

Used in LikeController.index/2 and Queries.get_user_likes/1. A partial index would help:

create index(:user_likes, [:user_id],
  prefix: "users",
  where: "unliked_at IS NULL",
  name: "user_likes_user_active"
)

Without this, PostgreSQL scans every row matching user_id via the plain index and filters unliked_at IS NULL afterwards. For users with many historical unlikes this degrades over time.


3. Sync bypass — potential duplicate events in event store

The comment in sync_controller.ex correctly acknowledges the tradeoff:

"The Like aggregate state won't reflect synced likes, so if a device calls POST /api/v1/likes for a feed it already synced as liked, the aggregate will re-emit a PodcastLiked event."

The projection is idempotent via upsert so the read model stays correct. But event store accumulates duplicates, and SnapshotLike / LikeCheckpoint won't capture synced-only likes — after checkpoint replay, the aggregate won't know about them. This is consistent with the established pattern for subscriptions/plays (accepted tradeoff), but the comment should note the SnapshotLike implication for completeness.


4. Missing test for the user.likes parent scope

like_controller_test.exs tests user.likes.read, user.likes.write, and * but not that the user.likes parent scope grants both read and write (scope hierarchy). Worth adding for consistency with other controllers:

test "user.likes parent scope grants write access", %{conn: conn, user_id: user_id} do
  conn =
    conn
    |> JwtTestHelper.authenticate_conn(user_id, scopes: ["user.likes"])
    |> post("/api/v1/likes", %{"rss_source_feed" => "dGVzdC1mZWVk"})
  assert json_response(conn, 200)["status"] == "success"
end

NICE-TO-HAVE

5. LikeCheckpoint persists unliked entries — potential size growth

The checkpoint includes both liked and unliked entries. Over time, users who frequently like/unlike will accumulate large maps in the aggregate and in checkpoint events. Consider only snapshotting currently-liked entries (where unliked_at == nil) since the historical unlike events are already in the event store.

6. String.to_existing_atom/1 in LikeNormalizer — defensive note

In practice this is safe since the atoms are loaded at startup. But if event store data is ever corrupted with unexpected keys, this would crash the projector during replay. Low risk but worth being aware of.

7. URL-safe base64 for route parameters

Routes like DELETE /api/v1/likes/:feed could break if the base64-encoded feed URL contains / (path separator). Confirm that encode_feed/1 always produces URL-safe base64 (using - and _ instead of + and /).
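
A quick check of the difference (the feed URL here is chosen so that its standard encoding happens to contain a path separator):

```elixir
feed_url = "https://example.com/?"

standard = Base.encode64(feed_url)
url_safe = Base.url_encode64(feed_url)

# The standard alphabet can emit "/", which would break a :feed path
# segment; the URL-safe alphabet substitutes "-" and "_" and never can.
String.contains?(standard, "/")
String.contains?(url_safe, "/")
```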


Positive Notes

  • Aggregate idempotency is correctly implemented for all 4 command types including re-like after unlike
  • LikeNormalizer cleanly handles the JSON string-key deserialization problem as a shared module
  • SnapshotLike correctly skips uninitialized aggregates (user_id: nil)
  • The LikeCheckpoint delete-then-bulk-insert is atomic within Ecto.Multi.run
  • LWW conflict resolution in sync is well-tested with 5 targeted scenarios
  • Score/count underflow protection via max(x - n, 0) is correct
  • @score_like = 7 (between subscribe=10 and play=5) is a reasonable value

Priority before merge: items #2 (partial index — performance impact compounds over time) and #1 (delete_all result — compiler warning).

@github-actions github-actions Bot removed the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026
…index, scope test

- Pattern match `delete_all` result in LikeCheckpoint handler to avoid
  discarded result compiler warning
- Add partial index on user_likes WHERE unliked_at IS NULL for active
  likes queries (GET /api/v1/likes)
- Extend sync bypass comment to document SnapshotLike implication
- Add test for `user.likes` parent scope in like_controller_test

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@podclaude podclaude Bot added the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026
@claude

claude Bot commented Feb 19, 2026

Code Review - PR #255: Like/Unlike System

Overall, this is a well-implemented feature that follows the established CQRS/ES patterns. The architecture is solid, idempotence is correctly handled, and the test coverage is substantial. A few items below need attention before merge.


SHOULD-FIX

1. Missing test for DELETE /api/v1/likes/:feed/:item (unlike episode)

like_controller_test.exs tests delete_podcast but has no test for delete_episode. Every other action has at least one happy-path test. This is a gap that should be filled.

# Missing in like_controller_test.exs:
describe "DELETE /api/v1/likes/:feed/:item - unlike episode" do
  test "successfully unlikes an episode", ...
  test "returns 401 without authorization", ...
end

2. LikeCheckpoint not handled by PopularityProjector — projection divergence on partial replay

If the event store is ever partially replayed starting from a LikeCheckpoint (e.g. stream compaction in future), the PopularityProjector will not see the original PodcastLiked/EpisodeLiked events that preceded the checkpoint. Counts would be incorrect. The LikeProjector correctly resets from a checkpoint, but PopularityProjector ignores the event entirely (falls through via apply/2 default).

For now this is low-risk (no stream compaction exists), but worth a TODO comment near PopularityProjector to document this known limitation, and consider whether LikeCheckpoint should be treated as a signal to re-derive popularity from the projection state.


NICE-TO-HAVE

3. LikeController.index has no ordering

# like_controller.ex
from(ul in UserLike,
  where: ul.user_id == ^user_id and is_nil(ul.unliked_at),
  select: %{...}
)

No order_by clause. The result order is non-deterministic. Consider order_by: [desc: ul.liked_at] for a stable, predictable response.

4. Sync bypass trade-off comment could be surfaced as a follow-up issue

The inline comment in sync_controller.ex is thorough and transparent — good practice. However, the aggregate state divergence it describes (synced-only likes invisible to SnapshotLike) could trip up future developers. Recommend creating a GitHub issue to revisit bidirectional CQRS dispatch for sync (even if low priority).

5. parse_likes/1 does not reject empty rss_source_feed

|> Enum.filter(fn like -> is_binary(like.rss_source_feed) end)

An empty string "" is binary and passes this filter. The changeset would reject it via validate_required, but failing at the DB layer rather than the parse layer is less informative. Consider adding and like.rss_source_feed != "".


Observations (no action required)

  • LikeNormalizer.atomize_keys with String.to_existing_atom/1: Correct choice. The relevant atoms (:liked_at, :unliked_at, :rss_source_feed) are guaranteed to be loaded at application start via struct definitions. No atom exhaustion risk.

  • Dual-increment on EpisodeLiked (episode score + podcast score): Intentional and consistent with the system's approach to episode engagement signalling podcast quality. The inverse EpisodeUnliked correctly reverses both. ✓

  • LikeCheckpoint bulk insert/delete in LikeProjector: The delete-all + insert-all within Ecto.Multi.run is transactional and avoids N+1 queries. ✓

  • LWW conflict resolution in sync: The max_datetime/2 helper and the DateTime.compare/2 != :lt logic are correct and well-tested (5 scenarios covered). ✓

  • Idempotent aggregate: Re-like after unlike correctly emits a new event. All 4 execute paths handle the uninitialized (user_id: nil) case. ✓

  • Router ordering: delete "/likes/:feed" and delete "/likes/:feed/:item" are correctly disambiguated by Phoenix's router. ✓


Verdict

APPROVED WITH COMMENTS — the SHOULD-FIX items are small (a missing test + a documentation comment). The missing delete_episode test should be added before merge. The PopularityProjector note can be addressed as an inline TODO if a full follow-up issue feels heavyweight.

@github-actions github-actions Bot removed the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026
…validation

- Add missing test for DELETE /api/v1/likes/:feed/:item (unlike episode)
- Add TODO comment in PopularityProjector about LikeCheckpoint gap (see #258)
- Add order_by [desc: liked_at] to LikeController.index for stable response
- Reject empty string rss_source_feed in parse_likes
- Created follow-up issue #259 for sync bypass bidirectional CQRS

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@podclaude podclaude Bot added the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026
@claude

claude Bot commented Feb 19, 2026

Review: feat: add like/unlike system for podcasts and episodes (Closes #154)

Solid implementation overall — the CQRS/ES patterns are correctly followed, idempotency is properly handled at the aggregate level, and the test coverage is good. A few concerns below.


SHOULD-FIX

1. Issue #258 referenced in code but does not appear to exist yet

popularity_projector.ex contains:

# TODO: LikeCheckpoint is not handled here. [...] See issue #258.

But this PR is #255 — issue #258 doesn't exist yet. The missing LikeCheckpoint handler in PopularityProjector means popularity counters (likes, score) will be wrong if projections are ever replayed starting from a checkpoint (e.g. mix db.reset --projections). Per project rules, TODOs referencing future issues should have the issue actually created before merge.

Action: Create issue #258 before merging, with the concrete acceptance criteria: replay from a LikeCheckpoint must correctly restore popularity counts.


2. Missing delta sync test: get_likes_since output is untested

The sync tests verify that the server correctly applies incoming client likes (LWW conflict tests), but no test verifies that the server returns likes in the delta response payload (changes.likes). The get_likes_since/2 function is new and directly feeds the sync response.

Missing test scenario:

given a user has an active like
when they POST /api/v1/sync with a last_sync before the like was created
then response["changes"]["likes"] contains that like

Without this, a regression where get_likes_since returns empty or crashes would not be caught.


3. Aggregate memory growth — unliked entries are never evicted

The Like aggregate accumulates every like/unlike operation in podcast_likes and episode_likes maps:

defstruct [
  :user_id,
  podcast_likes: %{},   # grows monotonically — unliked entries kept forever
  episode_likes: %{}
]

A user who liked 500 podcasts over 5 years (many since unliked) will have a 500-entry map loaded into memory on every command dispatch. The SnapshotLike mechanism helps replay speed but does not evict history from the live aggregate state.

Consider filtering out entries where unliked_at is older than a configurable TTL when building the LikeCheckpoint, and clearing them from the apply/state. This is more impactful for episode likes since some users follow thousands of episodes.


NICE-TO-HAVE

4. String.to_existing_atom/1 in LikeNormalizer — brittle under refactoring

{key, value} when is_binary(key) -> {String.to_existing_atom(key), value}

This is correct today (:liked_at, :unliked_at, :rss_source_feed are all loaded atoms). However, if a field is renamed in the future, old checkpoint events in the store would still carry the old key name, and String.to_existing_atom/1 would raise ArgumentError at replay time (events are immutable). Consider a whitelist-based approach to silently drop unknown keys and avoid breaking replay of old events.


5. user.sync scope grants implicit like-write access

Through the sync endpoint, a token with only user.sync scope can create/update likes — the sync processing calls sync_likes/4 without a scope check. This is consistent with how subscriptions and plays are handled, but worth documenting in AUTH_SYSTEM.md since it's a non-obvious side-effect of the user.sync scope.


Positive observations

  • Idempotency is correctly implemented at the aggregate level — re-likes, double-unlikes, and snapshot-then-replay all tested
  • LikeNormalizer is a clean shared module for the JSON key normalization problem
  • LikeCheckpoint delete-then-bulk-insert is correct (within a single Ecto.Multi.run transaction block)
  • max(..., 0) guards on score and likes counts prevent negative values
  • Rate limiting scoped correctly to read vs write operations
  • The documented architectural trade-off for sync-bypass (aggregate state vs. projection state) is honest and the comment is detailed enough to avoid future confusion

Verdict: APPROVED WITH COMMENTS — address items 1 and 2 (SHOULD-FIX) before merging. Item 3 (aggregate memory) can be a follow-up if the team decides the current user scale doesn't warrant it yet.

@github-actions github-actions Bot removed the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026
@podclaude podclaude Bot added the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026
@claude

claude Bot commented Feb 19, 2026

Code Review — PR #255: Like/Unlike System

Overall this is a well-structured PR. The CQRS/ES patterns are followed correctly, idempotence is handled at every layer, and the test coverage is solid. A few items need attention before merge.


SHOULD-FIX

1. No pagination on GET /api/v1/likes

LikeController.index/2 issues a single unbounded query:

from(ul in UserLike,
  where: ul.user_id == ^user_id and is_nil(ul.unliked_at),
  order_by: [desc: ul.liked_at],
  select: ...
)
|> ProjectionsRepo.all()

A power user with thousands of liked podcasts/episodes will hit this hard. The sync payload uses the same unbounded get_user_likes/1. Recommend adding limit: + cursor/offset, or at minimum a hard cap (e.g. limit: 500) with a has_more flag to match the pattern used elsewhere.
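
One cheap way to derive the has_more flag is to fetch one row past the cap (a sketch with an in-memory stand-in for the query; limit plus has_more is a suggestion here, not an existing project pattern):

```elixir
page_size = 500

# Stand-in for Repo.all(query |> limit(^(page_size + 1)))
fetched = Enum.to_list(1..501)

# Keep page_size rows; any leftover row proves another page exists
{page, rest} = Enum.split(fetched, page_size)
has_more = rest != []
```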

2. URL path parameters with standard base64 identifiers

The DELETE routes expose :feed and :item as Phoenix path segments:

DELETE /likes/:feed
DELETE /likes/:feed/:item

Standard base64 uses / for value 63 (a 6-bit group of all ones). Any RSS URL whose byte stream produces such a group will yield a literal / in the encoded value; for example, Base.encode64("https://example.com/?") ends in bS8/. Phoenix's path router captures :feed only up to the next /, so such a feed would be silently truncated or mis-routed.

The collection routes (DELETE /collections/:id/feeds/:feed_id) likely have the same latent issue, but adding it here compounds the risk. Either:

  • Require URL-safe base64 (-_ instead of +/) and document this constraint at the API boundary, or
  • Accept the feed as a query parameter for DELETE (non-standard but safe), or
  • URL-encode the path parameter client-side and document that it's required

At minimum, add a note in the controller @moduledoc that path params must be URL-safe.

3. LikeCheckpoint bulk-insert skips on empty but success path is silent

In LikeProjector:

if entries != [] do
  repo.insert_all(UserLike, entries)
end

Logger.debug(...)
{:ok, :checkpoint_applied}

repo.insert_all/2 returns {count, nil} and the result is discarded. If the insert silently fails (e.g. constraint violation on replay), the projector will continue with {:ok, :checkpoint_applied} and the data will be wrong. The delete_all above it already ran, so the user's likes are now empty.

Consider wrapping as:

case repo.insert_all(UserLike, entries, returning: false) do
  {_count, _} -> {:ok, :checkpoint_applied}
end

Or at least pattern-match the return so a crash surfaces rather than silent data loss.


NICE-TO-HAVE

4. PopularityProjectorTest assumes all users are public

The test helper comment acknowledges this:

Note: these test helpers assume all users are public (always add to likes_people). The real PopularityProjector checks UserPrivacy and conditionally adds users.

This means the path where is_user_public_with_repo? returns false (private user liking a podcast) is not tested. likes_people would stay unchanged but likes count still increments. A test verifying that a private user's ID does not appear in likes_people after a like would close this gap.

5. projector_test_case.ex duplicates projector logic

The apply_podcast_liked/2, apply_popularity_podcast_liked/2, etc. helper functions in the test support file reimplement the same logic as the real projectors. This creates a drift risk — if the projector changes, the test helpers won't catch it unless someone updates both. For a feature this size, this is manageable, but worth tracking for the next projector addition.

6. sync_likes CQRS bypass is documented but creates event store drift

The comment in sync_controller.ex is clear and well-reasoned. Just flagging that the referenced issue (#258? the TODO mentions a follow-up for compaction and a separate TODO in the code) should be created if it hasn't been yet, so the drift doesn't get forgotten.


Positive Observations

  • Aggregate idempotence is handled correctly in all four execute/2 clauses. The re-like path (previously-unliked → unlike entry exists with unliked_at set → new like emits event) is tested explicitly. ✓
  • LikeNormalizer using String.to_existing_atom/1 (not String.to_atom/1) is the right call for JSON-deserialized checkpoint data. The atoms liked_at, unliked_at, rss_source_feed are guaranteed to exist at load time from the struct definitions. ✓
  • SnapshotLike on an uninitialized aggregate (a user_id of nil returns [] and emits no checkpoint) — good defensive check. ✓
  • LWW conflict resolution in sync correctly uses max(liked_at, unliked_at) as the authoritative timestamp rather than just liked_at, which is the correct semantics for a like-then-unlike state. ✓
  • Migration uses partial unique indexes correctly — the two indexes (rss_source_item IS NULL and rss_source_item IS NOT NULL) cleanly handle the podcast vs episode distinction without a composite null-nullable-column unique constraint nightmare. ✓
  • Rate limiting on write (30/min) vs read (100/min) is consistent with existing endpoints. ✓
  • Score never goes below zero (max(score - @score_like, 0)) is correctly guarded in both PodcastUnliked and EpisodeUnliked handlers. ✓
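
The LWW rule in the fourth bullet can be sketched as follows (a hedged reconstruction; the actual sync_likes code may differ):

```elixir
# Authoritative timestamp: the most recent state transition. A nil
# unliked_at means the like is still active.
defp authoritative_ts(%{liked_at: liked, unliked_at: nil}), do: liked
defp authoritative_ts(%{liked_at: liked, unliked_at: unliked}) do
  # DateTime structs must be compared with DateTime.compare/2;
  # Kernel.max/2 compares terms structurally and is wrong here.
  if DateTime.compare(unliked, liked) == :gt, do: unliked, else: liked
end

# Last-writer-wins: keep whichever side changed state most recently.
defp resolve_conflict(local, remote) do
  if DateTime.compare(authoritative_ts(remote), authoritative_ts(local)) == :gt,
    do: remote,
    else: local
end
```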

Summary: Items 1 (pagination), 2 (URL encoding), and 3 (silent insert_all failure) are SHOULD-FIX before merge. Items 4–6 can be follow-up issues.

@github-actions github-actions Bot removed the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026
- Add pagination to GET /api/v1/likes (limit/offset, max 500, has_more)
- Document URL-safe base64 requirement for path params in LikeController
- Pattern match insert_all result in LikeProjector checkpoint handler
- Add pagination test for limit parameter
- Created follow-up issues #259 (sync CQRS bypass) and #260 (private
  user popularity tests)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@podclaude podclaude Bot added the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026
@claude

claude Bot commented Feb 19, 2026

Round 2 review - APPROVED. All prior MUST-FIX and SHOULD-FIX items resolved. Full review text follows in next comment due to shell escaping limitations.

@claude

claude Bot commented Feb 19, 2026

Claude Code Review — PR #255: Like/Unlike System (Round 2)

Verdict: APPROVED

All MUST-FIX and SHOULD-FIX items from previous reviews have been addressed.


What was fixed since prior reviews

  • LikeCheckpoint projector now uses bulk delete + insert_all — no silent error swallowing
  • Logger calls added throughout LikeProjector
  • normalize_likes_map / atomize_keys extracted into shared LikeNormalizer module
  • LikeProjector tests added (like_projector_test.exs — 5 describe blocks)
  • PopularityProjector tests added (popularity_projector_test.exs — 4 describe blocks)
  • PodcastUnliked / EpisodeUnliked in PopularityProjector early-return {:ok, nil} when no record exists
  • sync_likes CQRS bypass is now documented with a detailed comment
  • URL-safe base64 requirement documented in controller @moduledoc
  • Pagination added to GET /api/v1/likes (limit/offset + has_more)
  • N+1 in LikeCheckpoint projection replaced with bulk insert_all
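
The pagination shape described in the last two bullets can be sketched with Ecto (module, query, and parameter names are assumptions, not the PR's code):

```elixir
import Ecto.Query

# Fetch limit + 1 rows so has_more needs no second COUNT query.
def list_likes(user_id, limit \\ 50, offset \\ 0) do
  limit = min(limit, 500)

  rows =
    UserLike
    |> where([l], l.user_id == ^user_id and is_nil(l.unliked_at))
    |> order_by([l], desc: l.liked_at)
    |> limit(^(limit + 1))
    |> offset(^offset)
    |> Repo.all()

  {Enum.take(rows, limit), length(rows) > limit}
end
```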

Remaining concerns (NICE-TO-HAVE only)

1. Episode like inflates podcast score without incrementing podcast.likes

EpisodeLiked bumps podcast.score += 7 but leaves podcast.likes unchanged. The inline comment says this is intentional. Worth confirming the asymmetry (100 episode likes = 700 score points, 0 podcast.likes) is the intended design for analytics queries.

2. parse_likes/1 deduplication keeps first occurrence, not latest

Enum.uniq_by keeps the first entry when a client sends duplicates for the same feed/item in a single sync batch. If a batch ever contains both a like and an unlike for the same item, the like wins regardless of timestamps. Extremely unlikely in practice.
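
If keep-latest semantics were ever wanted, sorting before the dedup would be enough (a sketch; the shape of the parsed entries is assumed):

```elixir
# Newest authoritative timestamp first, so uniq_by keeps it.
likes
|> Enum.sort_by(&(&1.unliked_at || &1.liked_at), {:desc, DateTime})
|> Enum.uniq_by(&{&1.rss_source_feed, &1.rss_source_item})
```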

3. format_like/1 in queries.ex omits unliked_at

get_user_likes (initial sync) returns {rss_source_feed, rss_source_item, liked_at} while get_likes_since (incremental sync) returns {rss_source_feed, rss_source_item, liked_at, unliked_at}. Since get_user_likes filters unliked_at IS NULL the omission is safe, but clients must handle both shapes. Consider returning unliked_at: nil explicitly from format_like/1 for consistency.
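
The suggested consistency fix is one explicit field (the map shape is inferred from the review text, not copied from queries.ex):

```elixir
defp format_like(like) do
  %{
    rss_source_feed: like.rss_source_feed,
    rss_source_item: like.rss_source_item,
    liked_at: like.liked_at,
    # Always nil here (get_user_likes filters unliked_at IS NULL),
    # but emitting it keeps both sync payload shapes identical.
    unliked_at: nil
  }
end
```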

4. Test helpers reimplement projector logic (established codebase pattern)

ProjectorTestCase.apply_like_checkpoint/2 reimplements the LikeProjector checkpoint flow using row-by-row repo.insert! rather than the production insert_all. Tests in like_projector_test.exs exercise the helper, not LikeProjector.project/3 directly. This matches the established pattern for CollectionsProjector etc. so it is acceptable, but worth noting if the projector logic ever diverges from the helper.


Summary

Complete and well-structured implementation. Aggregate idempotency is correct, LikeNormalizer cleanly solves the checkpoint JSON key problem, the LWW conflict resolution in sync_likes is sound, and PopularityProjector edge cases (no-op on missing record, max/2 guard on score) are handled correctly. All previously flagged blocking issues are resolved. The four points above are NICE-TO-HAVE only and do not block merge.

Reviewed by Claude Sonnet 4.6 via claude-code

@github-actions github-actions Bot removed the needs-claude-review PR awaiting review by Claude agents label Feb 19, 2026
@podclaude podclaude Bot merged commit f491e72 into main Feb 20, 2026
1 check passed
@podclaude podclaude Bot deleted the feature/issue-154-likes-system branch February 20, 2026 09:57
@podclaude podclaude Bot mentioned this pull request Feb 20, 2026