feat: add like/unlike system for podcasts and episodes (Closes #154) #255

podclaude[bot] merged 9 commits into main.
Conversation
CQRS layer:
- 5 commands: LikePodcast, UnlikePodcast, LikeEpisode, UnlikeEpisode, SnapshotLike
- 5 events: PodcastLiked, PodcastUnliked, EpisodeLiked, EpisodeUnliked, LikeCheckpoint
- Like aggregate with idempotent execute/apply and snapshot support
- Router updated with like- prefix stream

Projections:
- UserLike schema (users.user_likes table) with partial unique indexes
- LikeProjector: projects like events to user_likes table
- PopularityProjector: updates likes/likes_people/score on popularity tables
- @score_like = 7 (between subscribe=10 and play=5)

API:
- POST /api/v1/likes - like podcast or episode
- DELETE /api/v1/likes/:feed - unlike podcast
- DELETE /api/v1/likes/:feed/:item - unlike episode
- GET /api/v1/likes - list user's active likes
- JWT scopes: user.likes.read, user.likes.write
- Rate limiting: 100 read/min, 30 write/min
- Likes included in sync endpoint (bidirectional)

Tests: 18 aggregate tests + 11 controller tests

Closes #154
Review: feat: add like/unlike system for podcasts and episodes

Solid implementation overall — the CQRS structure is correct, idempotency is properly handled, and the test coverage for the aggregate is thorough. A few issues need attention before merge.

MUST-FIX

1. LikeCheckpoint projector silently swallows errors

In project(%LikeCheckpoint{} = event, _metadata, fn multi ->
Ecto.Multi.run(multi, :checkpoint, fn repo, _changes ->
from(ul in UserLike, where: ul.user_id == ^event.user_id)
|> repo.delete_all()
for {feed, like_data} <- podcast_likes do
upsert_like(repo, event.user_id, feed, nil, like_data.liked_at) # return value discarded
if like_data.unliked_at do
unlike(repo, event.user_id, feed, nil, like_data.unliked_at) # return value discarded
end
end
...
{:ok, :checkpoint_applied} # always succeeds even if inserts above failed
end)
end)

Fix options:
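A minimal sketch of the propagation pattern this review asks for (halting on the first failed write rather than discarding results in a for comprehension); the module and function names here are illustrative, not the PR's actual code:

```elixir
defmodule CheckpointSketch do
  # Apply a write function to every checkpoint entry, halting on the
  # first error so the surrounding Ecto.Multi.run can roll back.
  def apply_all(entries, write_fun) do
    Enum.reduce_while(entries, {:ok, :checkpoint_applied}, fn entry, acc ->
      case write_fun.(entry) do
        {:ok, _} -> {:cont, acc}
        {:error, reason} -> {:halt, {:error, reason}}
      end
    end)
  end
end
```

Returning the reduce result directly from the Multi.run callback means a single failed upsert aborts the whole checkpoint instead of reporting {:ok, :checkpoint_applied} unconditionally.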
2. No logging in LikeProjector

Every other projector in this codebase includes Logger calls in its event handlers; add the same to all LikeProjector handlers.

SHOULD-FIX

3. Missing projector tests

The PR has 18 aggregate unit tests and 11 controller tests, but zero tests for the projectors themselves. Critical paths missing tests:
4. Missing comment explaining the sync_likes CQRS bypass trade-off

Summary:
| Category | Count |
|---|---|
| MUST-FIX | 2 |
| SHOULD-FIX | 2 |
| NICE-TO-HAVE | 4 |
The CQRS aggregate design is clean and the idempotency logic is correct. Main blockers are the silent error swallowing in LikeCheckpoint projection and the missing projector/sync tests.
MUST-FIX:
- Fix LikeCheckpoint projector error swallowing: use Enum.reduce_while
with explicit error propagation instead of for comprehensions
- Add Logger calls to all LikeProjector event handlers
SHOULD-FIX:
- Add LikeProjector tests (10 tests covering like/unlike/checkpoint)
- Add comment explaining sync_likes CQRS bypass trade-off
- Add comment explaining get_user_likes active-only filtering
NICE-TO-HAVE:
- Remove redundant get_in_map/2 helper (use Map.get directly)
- Remove redundant || %{} guards in aggregate apply/2 functions
- Remove Process.sleep(50) from controller test
- Add comment about get_user_likes filtering rationale
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Review: feat: like/unlike system for podcasts and episodes

Overall, this is a well-structured PR that follows the established CQRS/ES patterns. The architecture is clean, commands/events/aggregate are properly separated, and the test coverage for core paths is solid. A few issues need attention before merging.

MUST-FIX

1. LikeCheckpoint apply/2 breaks on string keys after JSON deserialization
MUST-FIX:
- Normalize string keys in LikeCheckpoint apply/2 (aggregate) and
projector: after JSON deserialization from event store, nested map
keys become strings ("liked_at") instead of atoms (:liked_at).
Added atomize_keys helpers in both aggregate and projector.
- Added test verifying idempotency works after checkpoint with string keys
SHOULD-FIX:
- Unlike handlers in PopularityProjector now early-return {:ok, nil}
when no popularity record exists (avoids creating ghost records)
- Enhanced sync_likes comment with specific duplicate event scenario
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
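The early-return change described in this commit can be sketched as follows (a simplified, hypothetical handler; the real projector works on Ecto schemas):

```elixir
defmodule UnlikeSketch do
  # No popularity record exists yet: nothing to decrement, and returning
  # {:ok, nil} avoids creating a ghost record just to count down from zero.
  def handle_unlike(nil), do: {:ok, nil}

  # Otherwise decrement the like counter, clamped so it never goes negative.
  def handle_unlike(%{likes: likes} = popularity) do
    {:ok, %{popularity | likes: max(likes - 1, 0)}}
  end
end
```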
Claude Code Review — PR #255: Like/Unlike System

Overall: Architecture is solid and follows established CQRS/ES patterns. The aggregate design is clean, idempotency is handled correctly, and the test coverage for the aggregate and controller layers is good. A few items to address before merging.

SHOULD-FIX

1. Normalization logic is duplicated between the aggregate and the projector
…tions

- Extract LikeNormalizer module shared by aggregate and projector (removes duplication)
- Replace N+1 checkpoint queries with bulk repo.insert_all
- Add PopularityProjector tests for like/unlike handlers (8 tests)
- Pass attrs properly in test helpers for Ecto change tracking

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Good implementation overall. The CQRS structure is sound, idempotency is correctly handled in the aggregate, and the LikeNormalizer for checkpoint JSON key normalization is a clean solution. Note: the previous review comment on this PR contains several inaccuracies.
Test comment to verify gh works
Task Queue

Sync tasks between this file and GitHub issues/PRs.

Workflow for Claude Code

Step 1: Fetch Current State

# List open issues
gh issue list --state open --json number,title,labels,state
# List open PRs
gh pr list --state open --json number,title,state,isDraft

Step 2: Process TODO Section

For each task in TODO below:
gh issue create \
--title "feat: TITLE" \
--body "## Objective
DESCRIPTION
## Context
WHY_IT_MATTERS
## Acceptance Criteria
- [ ] Implementation complete
- [ ] Tests pass"
Step 3: Update In Progress

For each item in "In Progress":

# Check issue/PR status
gh issue view NUMBER --json state,title
gh pr view NUMBER --json state,title,mergeable

Update status codes. Move merged PRs to "Done".

Step 4: Commit Changes

git add TODOS.md
git commit --author="Claude <noreply@anthropic.com>" -m "chore: sync TODOS.md with GitHub"
git push

Status Codes
TODO

Add tasks here. Claude will create GitHub issues for them.

(empty)

Future Issues

Issues created for future features:
In Progress

Format:

Done

Format:
Balados Sync

CQRS/Event Sourcing system for synchronizing podcast plays, subscriptions, and playlists.

Origin

The system was written by Claude.AI following the instructions in a note kept in docs/guides/ORIGINAL_NOTE.md.

Documentation
Architecture

Key concepts

Event Sourcing
Granular privacy

Privacy configurable by:
Popularity system

Scores per action:
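The weights given in the PR description are subscribe = 10, like = 7, play = 5. As a sketch of how such per-action weights roll up into a score (the function shape is assumed, not the project's actual recalculation code):

```elixir
defmodule ScoreSketch do
  # Per-action weights from the PR description: @score_like = 7 sits
  # between subscribe (10) and play (5).
  @weights %{subscribe: 10, like: 7, play: 5}

  # Sum the weight of every recorded action; unknown actions score 0.
  def score(actions) when is_list(actions) do
    Enum.reduce(actions, 0, fn action, acc -> acc + Map.get(@weights, action, 0) end)
  end
end
```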
Recalculated every 5 minutes from the event log.

Installation

Prerequisites
Setup

# Clone the project
git clone <repo>
cd balados_sync

# Install dependencies
mix deps.get

# Create the databases (system schema + event store)
mix db.create

# Initialize the event store + migrate the system schema
mix db.init

# Start the project
mix phx.server

The server starts on port 4000.

Database commands

Initial installation:

mix db.create        # Creates the databases and the event store
mix db.init          # Initializes the event store + migrates system

During development:

mix db.migrate         # Migrate the system schema (after creating a migration)
mix system_db.migrate  # Same (more verbose alias)

# SAFE resets (these ask for confirmation):
mix db.reset --projections  # Reset projections only (SAFE) ✅
mix db.reset --system       # Reset system (users, tokens) ⚠️
mix db.reset --events       # Reset event store (EXTREME!) ☢️
mix db.reset --all          # Reset EVERYTHING (EXTREME!) ☢️

Note: the project uses 4 distinct PostgreSQL schemas:
Configuration

Environment variables

# Development
export DATABASE_URL="postgresql://postgres:postgres@localhost/balados_sync_dev"
export EVENT_STORE_URL="postgresql://postgres:postgres@localhost/balados_sync_eventstore_dev"
export SECRET_KEY_BASE="your-secret-key"
# Production
export PHX_HOST="api.example.com"
export PORT=4000

JWT key generation (RS256)

# Generate a key pair
openssl genrsa -out private_key.pem 2048
openssl rsa -in private_key.pem -pubout -out public_key.pem

Additional dependencies

The project requires the following dependencies (already in mix.exs):

# Core
{:commanded, "~> 1.4"}
{:eventstore, "~> 1.4"}
{:ecto_sql, "~> 3.12"}
{:postgrex, "~> 0.19"}
# Web
{:phoenix, "~> 1.7"}
{:joken, "~> 2.6"}
{:joken_jwks, "~> 1.6"}
# RSS Proxy & Aggregation
{:httpoison, "~> 2.2"}
{:cachex, "~> 3.6"}
{:sweet_xml, "~> 0.7"}
{:timex, "~> 3.7"}
# Jobs
{:quantum, "~> 3.5"}

play subdomain configuration

To test locally, add the domains to your hosts file. Then in config:

config :balados_sync_web, BaladosSyncWeb.Endpoint,
url: [host: "balados.sync", port: 4000],
http: [ip: {127, 0, 0, 1}, port: 4000],
check_origin: false
config :balados_sync_web,
play_domain: "play.balados.sync"

Test:

# Should respond normally
curl http://balados.sync:4000/api/v1/public/trending/podcasts

# Should go through the PlayGatewayController
curl -L http://play.balados.sync:4000/{token}/{feed}/{item}

Using the API

Authentication

All authenticated requests require a JWT in the header:

Authorization: Bearer <jwt_token>

The JWT must contain:

{
"sub": "user_id",
"jti": "unique_token_id",
"device_id": "device_123",
"device_name": "iPhone de John",
"iat": 1234567890,
"exp": 1234567890
}

Signed with RS256 using the private key matching the registered public key.

Endpoints

1. Full synchronization

POST /api/v1/sync
Content-Type: application/json
Authorization: Bearer <token>
{
"subscriptions": [
{
"rss_source_feed": "aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk",
"rss_source_id": "podcast_id",
"subscribed_at": "2024-01-15T10:30:00Z",
"unsubscribed_at": null
}
],
"play_statuses": [
{
"rss_source_feed": "aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk",
"rss_source_item": "ZXBpc29kZV8xMjM=",
"position": 1250,
"played": false,
"updated_at": "2024-01-15T11:00:00Z"
}
],
"playlists": []
}

Response:

{
"status": "success",
"data": {
"subscriptions": [...],
"play_statuses": [...],
"playlists": [...]
}
}

2. Subscriptions

# Subscribe to a podcast
POST /api/v1/subscriptions
{
"rss_source_feed": "aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk",
"rss_source_id": "podcast_id"
}
# Unsubscribe
DELETE /api/v1/subscriptions/aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk

# List active subscriptions
GET /api/v1/subscriptions

3. Plays

# Record a play
POST /api/v1/play
{
"rss_source_feed": "aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk",
"rss_source_item": "ZXBpc29kZV8xMjM=",
"position": 1250,
"played": false
}
# Update the position
PUT /api/v1/play/ZXBpc29kZV8xMjM=/position
{
"position": 2500
}
# List play statuses
GET /api/v1/play?played=false&feed=aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk&limit=50

4. Episodes

# Save/like an episode
POST /api/v1/episodes/ZXBpc29kZV8xMjM=/save?feed=aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk
# Share an episode
POST /api/v1/episodes/ZXBpc29kZV8xMjM=/share?feed=aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk

5. Privacy

# Change the global privacy setting
PUT /api/v1/privacy
{
"privacy": "anonymous"
}
# Privacy for a specific podcast
PUT /api/v1/privacy
{
"privacy": "private",
"feed": "aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk"
}
# Privacy for a specific episode
PUT /api/v1/privacy
{
"privacy": "public",
"feed": "aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk",
"item": "ZXBpc29kZV8xMjM="
}
# View your privacy settings
GET /api/v1/privacy
GET /api/v1/privacy?feed=aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk

6. Public data (no auth)

# Top trending podcasts
GET /api/v1/public/trending/podcasts?limit=20
# Top trending episodes
GET /api/v1/public/trending/episodes?limit=20&feed=aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk
# Podcast popularity
GET /api/v1/public/feed/aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk/popularity
# Episode popularity
GET /api/v1/public/episode/ZXBpc29kZV8xMjM=/popularity
# Public timeline
GET /api/v1/public/timeline?limit=50&offset=0&event_type=play&feed=aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk

7. RSS proxy with LRU cache (no auth)

CORS proxy for accessing RSS feeds from the browser, with a 5-minute cache:

# Fetch a complete RSS feed
GET /api/v1/rss/proxy/aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVkLnhtbA==
Accept: application/xml
# Filter down to a single episode (by guid AND enclosure)
GET /api/v1/rss/proxy/aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVkLnhtbA==/ZXBpc29kZV8xMjMsaHR0cHM6Ly9leGFtcGxlLmNvbS9lcC5tcDM=
Accept: application/xml

Response headers:

LRU cache:
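The TTL decision behind the proxy cache can be sketched like this (a plain map stands in for the real store; the project's actual cache is Cachex-backed):

```elixir
defmodule RssCacheSketch do
  # Entries older than the 5-minute TTL count as misses and get refetched.
  @ttl_seconds 300

  def lookup(cache, url, now) do
    case Map.get(cache, url) do
      {body, cached_at} when now - cached_at < @ttl_seconds -> {:hit, body}
      _ -> :miss
    end
  end
end
```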
8. Per-user aggregated RSS (with user_token)

Personalized RSS feeds built dynamically:

# All episodes from my subscriptions (latest 100)
GET /api/v1/rss/user/{user_token}/subscriptions
Accept: application/xml
# Episodes from a playlist
GET /api/v1/rss/user/{user_token}/playlist/{playlist_id}
Accept: application/xml

Features:

Generating a user_token:

POST /api/v1/tokens
Authorization: Bearer <jwt>
{
"name": "My RSS Reader"
}
# Response:
{
"token": "randomBase64Token",
"user_id": "user_123"
}9. Play Gateway (subdomain play.balados.sync)Passerelle qui enregistre une écoute et redirige vers l'enclosure : # Format: https://play.balados.sync/{user_token}/{feed_id}/{item_id}
GET https://play.balados.sync/randomToken123/aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk/ZXBpc29kZV8xMjMsaHR0cHM6Ly9leGFtcGxlLmNvbS9lcC5tcDM=Workflow :
Usage dans les flux agrégés : Exemple de réponse trending : {
"podcasts": [
{
"rss_source_feed": "aHR0cHM6Ly9leGFtcGxlLmNvbS9mZWVk",
"feed_title": "Tech Talks",
"feed_author": "John Doe",
"feed_cover": {
"src": "https://example.com/cover.jpg",
"srcset": "..."
},
"score": 25420,
"score_previous": 20000,
"plays": 3420,
"plays_previous": 3000,
"plays_people": ["user1", "user2", "user3"],
"likes": 340,
"likes_previous": 300
}
]
}

Database Architecture

The PostgreSQL database uses 4 distinct schemas managed by 2 Ecto repositories:

Multi-Repo Architecture

SystemRepo (Permanent Data)

Management: contains permanent data handled directly through Ecto (direct CRUD):

Characteristics:

ProjectionsRepo (Public Projections)

Management: contains projections rebuilt from events (read models):

Characteristics:

EventStore (Commanded)

Management: contains all of the system's immutable events:

Important: ❌ NEVER modify manually. Managed exclusively by Commanded.

Flexible Configuration

These two repos can live in the same PostgreSQL database with different schemas (the default in dev):

-- A single database with 3 schemas
CREATE SCHEMA system; -- SystemRepo
CREATE SCHEMA public; -- ProjectionsRepo
CREATE SCHEMA events;  -- EventStore

Or split across separate databases for better isolation (recommended in prod):

Command Reference

Reset Commands

# ✅ SAFE - Reset projections only (preserves system + events)
mix db.reset --projections
# ⚠️ DANGER - Reset system schema (users, tokens)
mix db.reset --system
# ☢️ EXTREME - Reset event store
mix db.reset --events
# ☢️☢️ EXTREME - Reset EVERYTHING
mix db.reset --all

Danger:
Maintenance worker

The SnapshotWorker runs on a Quantum schedule.

Configuration:

config :balados_sync_jobs, BaladosSyncJobs.Scheduler,
jobs: [
{"*/5 * * * *", {BaladosSyncJobs.SnapshotWorker, :perform, []}}
]

Development

Tests

# Run all tests
mix test

# Tests for a specific app
cd apps/balados_sync_core
mix test

# With coverage
mix test --cover

Interactive console

# Console with all apps loaded
iex -S mix
# Dispatch a command
alias BaladosSyncCore.App
alias BaladosSyncCore.Commands.Subscribe
App.dispatch(%Subscribe{
user_id: "user_123",
device_id: "device_456",
device_name: "iPhone",
rss_source_feed: "base64_feed",
rss_source_id: "podcast_id"
})
# Query the projections
alias BaladosSyncProjections.Repo
alias BaladosSyncProjections.Schemas.Subscription
Repo.all(Subscription)

Event replay

If you need to rebuild the projections:

# Stop the projectors
# Delete the projection data (not the event store!)
mix ecto.reset
# The projectors will automatically replay all events

RSS encoding

The rss_source_feed and rss_source_item values are Base64-encoded:

# Feed
feed_url = "https://example.com/feed.xml"
rss_source_feed = Base.encode64(feed_url)
# Item (guid + enclosure)
guid = "episode-123"
enclosure = "https://example.com/episode.mp3"
rss_source_item = Base.encode64("#{guid},#{enclosure}")

Production

Deployment

# Build release
MIX_ENV=prod mix release
# Run
_build/prod/rel/balados_sync/bin/balados_sync start

Monitoring

The system exposes metrics via Phoenix LiveDashboard.

Important metrics:
Troubleshooting

Projections are lagging

# Check the state of the projections
iex -S mix
BaladosSyncProjections.Projectors.SubscriptionProjector.state()

Missing events

Events are immutable. If data seems to be missing, check:

Performance

To improve performance:

License

MIT

Support

For any questions: support@balados-sync.example.com
Review: feat: add like/unlike system for podcasts and episodes

Good implementation overall. The CQRS structure is sound, idempotency is correctly handled in the aggregate, and the LikeNormalizer for checkpoint JSON key normalization is a clean solution.

Note: the previous review comment on this PR contains several inaccuracies - it claims LikeProjector has no Logger calls (it does), that tests are missing (they exist), and describes for comprehensions that are not present in the code. Please disregard those specific points.

Here are my findings:

MUST-FIX

1. LWW conflict resolution in sync_likes is broken for unlike operations

In sync_controller.ex, the LWW comparison uses liked_at as the timestamp for both sides:

client_ts = like.liked_at || now
server_ts = record.liked_at || ~U[1970-01-01 00:00:00Z]
if DateTime.compare(client_ts, server_ts) != :lt do
record |> UserLike.changeset(attrs) |> repo.update()
end

Failure scenario:

The correct LWW timestamp should be the most recent action: max(liked_at, unliked_at) for both client and server. Fix:

client_ts = max_ts(like.liked_at, like.unliked_at) || now
server_ts = max_ts(record.liked_at, record.unliked_at) || ~U[1970-01-01 00:00:00Z]
defp max_ts(nil, nil), do: nil
defp max_ts(a, nil), do: a
defp max_ts(nil, b), do: b
defp max_ts(a, b), do: if(DateTime.compare(a, b) != :lt, do: a, else: b)

SHOULD-FIX

2. Missing tests for sync_likes LWW logic

The controller tests cover auth and basic CRUD, but there are no tests for the sync integration path (sync_likes/4). The LWW conflict resolution has at least one confirmed bug (see above), and this code path is critical for multi-device reliability. Needed test cases:
NICE-TO-HAVE

3. String.to_existing_atom in LikeNormalizer is fragile

{key, value} when is_binary(key) -> {String.to_existing_atom(key), value}

This works because :liked_at, :unliked_at, :rss_source_feed are defined in struct modules loaded at startup. However, if a field is renamed in a future refactor, this will raise ArgumentError on checkpoint replay with no helpful error message. A safer pattern is an explicit whitelist map:

@allowed_keys %{
"liked_at" => :liked_at,
"unliked_at" => :unliked_at,
"rss_source_feed" => :rss_source_feed
}

4. Aggregate state grows unboundedly

The Like aggregate stores all historical likes including old unliked entries. For a power user who likes and unlikes many episodes over time, this grows unboundedly and gets loaded into memory on every command. The LikeCheckpoint helps with event replay latency, but the in-memory state is never trimmed. Consider only storing currently-liked entries in the aggregate (since idempotency only needs to know if something is currently liked), or pruning old unliked entries from the checkpoint.

5. LikeCheckpoint events are ignored by PopularityProjector

If the event store is ever compacted (old events removed up to the checkpoint), a full projection rebuild would have PopularityProjector missing all historical like/unlike events. The PopularityRecalculated periodic recalc would eventually fix this, but there would be a window of inconsistency. Acceptable given the periodic recalc, but worth documenting.

Overall the core CQRS implementation is clean. The LWW bug in sync (#1) is the only blocker.
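Wired into a normalizer, the whitelist map suggested in item 3 could look like this sketch (module name assumed):

```elixir
defmodule WhitelistNormalizeSketch do
  # Unknown keys are silently dropped instead of raising ArgumentError
  # the way String.to_existing_atom/1 would after a field rename.
  @allowed_keys %{
    "liked_at" => :liked_at,
    "unliked_at" => :unliked_at,
    "rss_source_feed" => :rss_source_feed
  }

  def normalize(map) when is_map(map) do
    for {key, value} <- map, atom = @allowed_keys[key], into: %{} do
      {atom, value}
    end
  end
end
```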
- Fix sync_likes LWW to compare max(liked_at, unliked_at) instead of just liked_at, preventing stale client likes from erasing server unlikes
- Add max_datetime/2 helper for timestamp comparison
- Add 5 sync_likes LWW tests: client wins, server wins, client unlike wins, server unlike preserved, new like creation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
test
Code Review — feat: like/unlike system (#255)

Solid implementation overall. CQRS/ES patterns are applied correctly, idempotency is properly handled, and test coverage is thorough. A few issues to address before merging.

SHOULD-FIX

1. Duplicate sync payload entries collide on Ecto.Multi keys

In sync_likes, each like is registered under a key derived from the payload:

Multi.run(acc_multi, {:like, feed, item || "podcast"}, fn repo, _changes ->
defp parse_likes(likes) when is_list(likes) do
likes
|> Enum.map(fn like -> ... end)
|> Enum.filter(fn like -> is_binary(like.rss_source_feed) end)
|> Enum.uniq_by(fn like -> {like.rss_source_feed, like.rss_source_item} end) # add this
end

2. Test helper doesn't implement the privacy check — popularity tests don't reflect production behaviour

In the actual projector, likes_people is only updated when the user is public:
if(is_public,
do: add_recent_user(popularity.likes_people, event.user_id),
else: popularity.likes_people
)

But the test helper does:

likes_people: add_recent_user(popularity.likes_people, event.user_id)

This means the test assertions never exercise the privacy branch.

NICE-TO-HAVE

3. String.to_existing_atom on deserialized keys

{key, value} when is_binary(key) -> {String.to_existing_atom(key), value}
4. Episode likes silently double-bump podcast score without documentation
5. Sync CQRS bypass is well-documented — thank you

The comment on sync_likes explains the trade-off clearly.

Positive observations
Summary:
APPROVED WITH COMMENTS — please address items 1 and 2 before merging.
…tions

- Deduplicate likes by (feed, item) in parse_likes to prevent Ecto.Multi key collision on duplicate sync payload entries
- Document privacy simplification in popularity test helpers
- Add comment explaining why episode likes bump podcast score

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Code Review — PR #255: feat: add like/unlike system for podcasts and episodes

Overall this is a solid, well-structured implementation. The CQRS/ES patterns are correctly applied, the aggregate is idempotent, and the test coverage is good. Below are findings organized by severity.

SHOULD-FIX

1. delete_all result discarded in the LikeCheckpoint handler

In the checkpoint handler:

Ecto.Multi.run(multi, :checkpoint, fn repo, _changes ->
from(ul in UserLike, where: ul.user_id == ^event.user_id)
|> repo.delete_all() # ← result discarded, no pattern match
...
{:ok, :checkpoint_applied}
end)
Pattern match the result instead:

{_count, _} =
from(ul in UserLike, where: ul.user_id == ^event.user_id)
|> repo.delete_all()

2. Missing partial index for the common active likes query

The migration creates an index for the uniqueness constraint, but the hot query filters on:

WHERE user_id = :user_id AND unliked_at IS NULL

Used in GET /api/v1/likes. Suggested:

create index(:user_likes, [:user_id],
prefix: "users",
where: "unliked_at IS NULL",
name: "user_likes_user_active"
)

Without this, PostgreSQL does a full scan over the user's user_likes rows for that query.

3. Sync bypass — potential duplicate events in event store

The comment on sync_likes documents the bypass.
The projection is idempotent via upsert so the read model stays correct. But the event store accumulates duplicates, and any later replay or snapshot has to tolerate them.

4. Missing test for the user.likes parent scope
test "user.likes parent scope grants write access", %{conn: conn, user_id: user_id} do
conn =
conn
|> JwtTestHelper.authenticate_conn(user_id, scopes: ["user.likes"])
|> post("/api/v1/likes", %{"rss_source_feed" => "dGVzdC1mZWVk"})
assert json_response(conn, 200)["status"] == "success"
end

NICE-TO-HAVE

5. Checkpoint includes unliked entries

The checkpoint includes both liked and unliked entries. Over time, users who frequently like/unlike will accumulate large maps in the aggregate and in checkpoint events. Consider only snapshotting currently-liked entries (where unliked_at is nil).

6. String.to_existing_atom during replay

In practice this is safe since the atoms are loaded at startup. But if event store data is ever corrupted with unexpected keys, this would crash the projector during replay. Low risk but worth being aware of.

7. URL-safe base64 for route parameters

Routes like DELETE /api/v1/likes/:feed take base64 path parameters, which should use the URL-safe alphabet.

Positive Notes
Priority before merge: items #2 (partial index — performance impact compounds over time) and #1 (pattern matching the delete_all result).
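On item 7: standard Base64 can emit + and /, which corrupt URL path segments, while Elixir's Base.url_encode64/2 uses - and _. A quick illustration:

```elixir
# Encode a feed URL for use as a path parameter such as /api/v1/likes/:feed.
url = "https://example.com/feed.xml"

# URL-safe alphabet: "-" and "_" instead of "+" and "/".
feed_param = Base.url_encode64(url, padding: false)

# Round-trips cleanly back to the original URL.
{:ok, decoded} = Base.url_decode64(feed_param, padding: false)
```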
…index, scope test

- Pattern match `delete_all` result in LikeCheckpoint handler to avoid discarded result compiler warning
- Add partial index on user_likes WHERE unliked_at IS NULL for active likes queries (GET /api/v1/likes)
- Extend sync bypass comment to document SnapshotLike implication
- Add test for `user.likes` parent scope in like_controller_test

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Code Review - PR #255: Like/Unlike System

Overall, this is a well-implemented feature that follows the established CQRS/ES patterns. The architecture is solid, idempotence is correctly handled, and the test coverage is substantial. A few items below need attention before merge.

SHOULD-FIX

1. Missing test for DELETE /api/v1/likes/:feed/:item (unlike episode)
# Missing in like_controller_test.exs:
describe "DELETE /api/v1/likes/:feed/:item - unlike episode" do
test "successfully unlikes an episode", ...
test "returns 401 without authorization", ...
end

2. PopularityProjector ignores LikeCheckpoint

If the event store is ever partially replayed starting from a LikeCheckpoint, PopularityProjector would miss the events it summarizes. For now this is low-risk (no stream compaction exists), but worth a TODO comment near the handler.

NICE-TO-HAVE

3. No ordering on the active-likes query

# like_controller.ex
from(ul in UserLike,
where: ul.user_id == ^user_id and is_nil(ul.unliked_at),
select: %{...}
)

No order_by means the response order is unstable between requests.

4. Sync bypass trade-off comment could be surfaced as a follow-up issue

The inline comment in sync_likes explains the trade-off; tracking it as an issue would make it actionable.

5. Empty strings pass the rss_source_feed filter

|> Enum.filter(fn like -> is_binary(like.rss_source_feed) end)

An empty string is still a binary, so it passes validation.

Observations (no action required)
Verdict

APPROVED WITH COMMENTS — the SHOULD-FIX items are small (a missing test + a documentation comment).
…validation

- Add missing test for DELETE /api/v1/likes/:feed/:item (unlike episode)
- Add TODO comment in PopularityProjector about LikeCheckpoint gap (see #258)
- Add order_by [desc: liked_at] to LikeController.index for stable response
- Reject empty string rss_source_feed in parse_likes
- Created follow-up issue #259 for sync bypass bidirectional CQRS

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Review: feat: add like/unlike system for podcasts and episodes (Closes #154)

Solid implementation overall — the CQRS/ES patterns are correctly followed, idempotency is properly handled at the aggregate level, and the test coverage is good. A few concerns below.

SHOULD-FIX

1. Issue #258 referenced in code but does not appear to exist yet
# TODO: LikeCheckpoint is not handled here. [...] See issue #258.

But this PR is #255 — issue #258 doesn't exist yet. Action: create issue #258 before merging, with concrete acceptance criteria: replay from a LikeCheckpoint must leave the popularity tables consistent.

2. Missing delta sync test:
Code Review — PR #255: Like/Unlike System

Overall this is a well-structured PR. The CQRS/ES patterns are followed correctly, idempotence is handled at every layer, and the test coverage is solid. A few items need attention before merge.

SHOULD-FIX

1. No pagination on GET /api/v1/likes
- Add pagination to GET /api/v1/likes (limit/offset, max 500, has_more)
- Document URL-safe base64 requirement for path params in LikeController
- Pattern match insert_all result in LikeProjector checkpoint handler
- Add pagination test for limit parameter
- Created follow-up issues #259 (sync CQRS bypass) and #260 (private user popularity tests)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Round 2 review - APPROVED. All prior MUST-FIX and SHOULD-FIX items resolved. Full review text follows in next comment due to shell escaping limitations.
Claude Code Review — PR #255: Like/Unlike System (Round 2)

Verdict: APPROVED

All MUST-FIX and SHOULD-FIX items from previous reviews have been addressed.

What was fixed since prior reviews
Remaining concerns (NICE-TO-HAVE only)

1. Episode like inflates podcast score without incrementing podcast.likes

EpisodeLiked bumps podcast.score += 7 but leaves podcast.likes unchanged. The inline comment says this is intentional. Worth confirming the asymmetry (100 episode likes = 700 score points, 0 podcast.likes) is the intended design for analytics queries.

2. parse_likes/1 deduplication keeps first occurrence, not latest

Enum.uniq_by keeps the first entry when a client sends duplicates for the same feed/item in a single sync batch. If a batch ever contains both a like and an unlike for the same item, the like wins regardless of timestamps. Extremely unlikely in practice.

3. format_like/1 in queries.ex omits unliked_at

get_user_likes (initial sync) returns {rss_source_feed, rss_source_item, liked_at} while get_likes_since (incremental sync) returns {rss_source_feed, rss_source_item, liked_at, unliked_at}. Since get_user_likes filters unliked_at IS NULL the omission is safe, but clients must handle both shapes. Consider returning unliked_at: nil explicitly from format_like/1 for consistency.

4. Test helpers reimplement projector logic (established codebase pattern)

ProjectorTestCase.apply_like_checkpoint/2 reimplements the LikeProjector checkpoint flow using row-by-row repo.insert! rather than the production insert_all. Tests in like_projector_test.exs exercise the helper, not LikeProjector.project/3 directly. This matches the established pattern for CollectionsProjector etc. so it is acceptable, but worth noting if the projector logic ever diverges from the helper.

Summary

Complete and well-structured implementation. Aggregate idempotency is correct, LikeNormalizer cleanly solves the checkpoint JSON key problem, the LWW conflict resolution in sync_likes is sound, and PopularityProjector edge cases (no-op on missing record, max/2 guard on score) are handled correctly. All previously flagged blocking issues are resolved.
The four points above are NICE-TO-HAVE only and do not block merge.

Reviewed by Claude Sonnet 4.6 via claude-code
Summary
Implements a complete like/unlike system for podcasts and episodes, following CQRS/ES patterns.
CQRS Layer
- Commands: LikePodcast, UnlikePodcast, LikeEpisode, UnlikeEpisode, SnapshotLike
- Events: PodcastLiked, PodcastUnliked, EpisodeLiked, EpisodeUnliked, LikeCheckpoint
- Like aggregate with idempotent execute/apply and snapshot support
- like- prefix stream
- UserLike schema (users.user_likes table) with partial unique indexes
- LikeProjector: projects like events to the user_likes table
- PopularityProjector: updates likes/likes_people/score on popularity tables
- @score_like = 7 (between subscribe=10 and play=5)
- POST /api/v1/likes - like podcast or episode
- DELETE /api/v1/likes/:feed - unlike podcast
- DELETE /api/v1/likes/:feed/:item - unlike episode
- GET /api/v1/likes - list user's active likes
- JWT scopes: user.likes.read, user.likes.write

Sync Integration
Test Plan
- All tests pass (mix test)
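The idempotent execute/apply pair mentioned above can be illustrated with a minimal sketch; the struct fields, command, and event shapes here are assumptions for illustration, not the PR's actual modules:

```elixir
defmodule LikeAggregateSketch do
  defstruct liked: MapSet.new()

  # execute/2: liking an already-liked feed emits no event, so replaying
  # or retrying the same command cannot double-count a like.
  def execute(%__MODULE__{liked: liked}, {:like_podcast, feed}) do
    if MapSet.member?(liked, feed), do: [], else: [{:podcast_liked, feed}]
  end

  # apply/2: fold an emitted event back into the aggregate state.
  def apply(%__MODULE__{liked: liked} = state, {:podcast_liked, feed}) do
    %{state | liked: MapSet.put(liked, feed)}
  end
end
```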