jan-havlin-dev/featureflag-api
Feature Flag & Experiment Management API

A Go-based backend service for managing feature flags and A/B experiments, featuring role-based access control, audit logging, and rollout strategies. Built with clean architecture principles, GraphQL (gqlgen), PostgreSQL, and JWT-based authentication.

Motivation

This project marks my first deliberate adoption of an agentic AI workflow, implemented using the Cursor IDE. It was created primarily as a structured learning exercise, focused on evaluating how modern AI-assisted development can be integrated into a disciplined engineering process.

Rather than treating AI as a novelty, I approached it as a tooling shift with architectural implications. The productivity gains were significant, but more importantly, the workflow challenged established assumptions about how software is designed, implemented, and iterated on. It requires clearer intent, stronger contextual framing, and more explicit communication of constraints.

From the outset, I placed strong emphasis on initial project configuration: establishing high-quality context for the AI proved essential, and that context was iteratively refined throughout development. With sufficient structure and systematic thinking, continuity across sessions can be maintained effectively.

One notable advantage of working with Cursor is its responsiveness to developer feedback and its ability to provide deeper technical explanations when needed. Used correctly, it functions as a capable implementation partner—augmenting, not replacing, engineering judgment.

AI Workflow Perspective

Part of the workflow design was informed by structured outputs generated by another LLM. Using one model (optimized for retrieval or summarization) to prepare inputs for another highlights an important pattern: AI systems can be composed. This layered usage has practical implications for accelerating research, reducing cognitive load, and tackling complex domains more efficiently.

The key is not blind adoption, but calibrated usage—understanding when AI meaningfully improves leverage and when traditional approaches remain preferable.

Technology Scope and Intentional Expansion

This project also served a second purpose: broadening my technological range. Designing systems responsibly requires at least a working understanding of the tools involved, including their trade-offs and operational characteristics.

Language

This project was intentionally selected to support my expansion into Go. The decision was professionally motivated and carefully considered, though I will not elaborate further here.

Prior to this, my primary focus had been Rust, which provided strong foundations across abstraction layers and system-level reasoning. However, no single language is optimal across all domains. Go was chosen for its different design philosophy and ecosystem characteristics, which align well with certain categories of backend and distributed systems development.

The application is therefore implemented in Go by design, not by convenience.

API Design

While REST remains widespread and practical, there are scenarios where its structural model becomes limiting. Certain use cases require greater flexibility in shaping data contracts and query behavior.

For this reason, the project incorporates GraphQL. The objective was not trend adoption, but architectural exposure—understanding how alternative API paradigms influence system design, client interaction patterns, and schema evolution.

Database

My earlier experience was centered primarily on SQLite. While suitable for many use cases, it does not fully represent the operational realities of client–server or distributed environments.

This project therefore uses PostgreSQL, selected for its maturity, feature depth, and suitability for production-grade systems. The goal was to gain hands-on experience with a database system designed for concurrency, scaling considerations, and more complex deployment topologies.

Application overview

The system is a Feature Flag & Experiment Management API: a backend that allows creating and managing feature flags (with rollout strategies), A/B experiments with variants and user assignments, and an audit trail of changes. Access is controlled via JWT authentication and role-based permissions.

Purpose and scope

The API is intended for applications that need:

  • Feature flags – to turn features on or off and to roll them out gradually (by percentage or by user attributes) across environments such as dev, staging, and prod.
  • A/B experiments – to define experiments with multiple variants, assign weights, and deterministically assign users to variants, with assignments stored for consistency.
  • Auditability – to record who changed what and when for feature flags (and, where applicable, other entities).
  • Secure, role-aware access – so that only authorised users (admins, developers, viewers) can perform the right operations.

The primary interface is GraphQL over HTTPS. The service is structured in layers so that API, business logic, and data access stay clearly separated and testable.
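As an illustration, a flag-evaluation request might look like the following GraphQL query. The operation, argument, and field names here are assumptions for the sketch, not necessarily the repository's actual schema:

```graphql
query EvaluateFlag {
  evaluateFlag(
    key: "new-checkout"
    context: { userId: "user-123", email: "a@company.com" }
  ) {
    enabled
  }
}
```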

Features and behaviour

Feature flags

  • Create, update, and delete feature flags.
  • Enable or disable a flag.
  • Support multiple environments (e.g. dev, staging, prod).
  • Rollout strategies (one per flag):
    • Percentage-based: deterministic rollout by user ID (e.g. 30% of users); the same user always gets the same result (e.g. via hashing into buckets).
    • Attribute-based: enable/disable based on user attributes (e.g. user ID allowlist, email domain such as @company.com); rules refer to attributes and conditions and are evaluated against a context (user ID and any provided attributes).
  • Evaluation: the API can evaluate whether a flag is “on” for a given key and evaluation context (e.g. userId, email).
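The percentage-based strategy above can be sketched in a few lines of Go. This is a minimal illustration of hashing a user into one of 100 stable buckets; the function names and the choice of FNV-1a are assumptions for the sketch, not necessarily what the repository uses:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bucket hashes a flag key plus user ID into one of 100 buckets.
// Including the flag key in the hash input decorrelates rollouts
// of different flags for the same user.
func bucket(flagKey, userID string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(flagKey + ":" + userID))
	return h.Sum32() % 100
}

// isEnabled reports whether the flag is on for this user at the
// given rollout percentage (0-100). Because the hash is a pure
// function of its inputs, the same user always gets the same answer.
func isEnabled(flagKey, userID string, percentage uint32) bool {
	return bucket(flagKey, userID) < percentage
}

func main() {
	fmt.Println(isEnabled("new-checkout", "user-123", 30))
}
```

A design note: hashing into fixed buckets means raising the percentage only ever adds users to the enabled set; nobody who was enabled at 30% is dropped at 50%.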

Experiments (A/B)

  • Define experiments with multiple variants (A, B, C, …).
  • Assign weights to variants (e.g. 50/50, 90/10).
  • Deterministic user-to-variant assignment so the same user consistently receives the same variant.
  • Persist user–experiment–variant assignments in the database.
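Deterministic weighted assignment can be sketched similarly: hash the experiment key and user ID into the total weight range, then walk the cumulative weights. The type and function names here are illustrative assumptions:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Variant pairs a name with an integer weight; weights need not
// sum to 100, only their proportions matter.
type Variant struct {
	Name   string
	Weight uint32
}

// assign deterministically maps a user to a variant by hashing
// into [0, totalWeight) and walking the cumulative weights. The
// same user always receives the same variant for an experiment.
// Requires at least one variant with a positive weight.
func assign(experimentKey, userID string, variants []Variant) string {
	var total uint32
	for _, v := range variants {
		total += v.Weight
	}
	h := fnv.New32a()
	h.Write([]byte(experimentKey + ":" + userID))
	point := h.Sum32() % total
	var cum uint32
	for _, v := range variants {
		cum += v.Weight
		if point < cum {
			return v.Name
		}
	}
	return variants[len(variants)-1].Name // unreachable when total > 0
}

func main() {
	variants := []Variant{{"A", 50}, {"B", 50}}
	fmt.Println(assign("checkout-test", "user-123", variants))
}
```

Persisting the resulting assignment (as the README notes) additionally protects consistency against later changes to the weights.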

Audit log

  • Record every change to feature flags (and, as designed, other critical entities).
  • Store who made the change, when, and what was changed (entity, entity ID, action, actor, timestamp).
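A minimal shape for such an audit record, mirroring the fields listed above, might look like this in Go. The struct and field names are assumptions for illustration; the repository's actual schema may differ:

```go
package main

import (
	"fmt"
	"time"
)

// AuditEntry captures who changed what and when: entity type,
// entity ID, action, actor, and timestamp.
type AuditEntry struct {
	Entity    string    // e.g. "feature_flag"
	EntityID  string
	Action    string    // e.g. "create", "update", "delete"
	Actor     string    // user ID taken from the verified JWT
	Timestamp time.Time
}

// newAuditEntry builds a record at the service layer before it is
// persisted alongside the change it describes.
func newAuditEntry(entity, entityID, action, actor string) AuditEntry {
	return AuditEntry{
		Entity:    entity,
		EntityID:  entityID,
		Action:    action,
		Actor:     actor,
		Timestamp: time.Now().UTC(),
	}
}

func main() {
	e := newAuditEntry("feature_flag", "flag-42", "update", "user-7")
	fmt.Printf("%s %s %s by %s\n", e.Action, e.Entity, e.EntityID, e.Actor)
}
```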

Authentication and authorization

  • JWT-based authentication: clients obtain a token (e.g. via a login mutation) and send it with requests.
  • Role-based access control: roles such as admin, developer, and viewer govern which operations a user can perform.
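The role check itself can be as simple as a role-to-permission table consulted after the JWT is verified. The concrete permission names below are assumptions for the sketch, not the repository's actual set:

```go
package main

import "fmt"

// permissions maps each role to the operations it may perform.
// Unknown roles resolve to a nil inner map, so every lookup on
// them safely returns false (deny by default).
var permissions = map[string]map[string]bool{
	"admin":     {"flag:read": true, "flag:write": true, "flag:delete": true},
	"developer": {"flag:read": true, "flag:write": true},
	"viewer":    {"flag:read": true},
}

// can reports whether a role is allowed to perform an operation.
// In the real service this check would run in middleware or the
// service layer, using the role extracted from the JWT claims.
func can(role, operation string) bool {
	return permissions[role][operation]
}

func main() {
	fmt.Println(can("viewer", "flag:write")) // viewers cannot mutate flags
	fmt.Println(can("admin", "flag:delete"))
}
```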

High-level architecture

The codebase follows a layered, clean-architecture style:

  • Transport layer (transport/graphql) – GraphQL server, resolvers, and middleware (JWT auth, logging, error handling). Resolvers are thin: they adapt GraphQL input to service calls and map results back to the API. The transport layer depends only on service contracts, not on domain entities or repositories.

  • Service layer (internal/flags, internal/experiments, internal/auth, …) – Business logic: flag lifecycle, rollout evaluation, experiment assignment, user and auth operations. Services are independent of the transport and of how data is stored.

  • Repository layer – Database access only: queries and writes against PostgreSQL. Repositories are used by services and hide persistence details. Transaction boundaries are defined here.

  • Domain entities – Core types (flags, rules, experiments, variants, users, audit entries) used by services and repositories. They are not exposed directly to the transport.

  • Graph and generated code (graph/) – GraphQL schema definitions and gqlgen-generated types and resolver scaffolding. The schema is the single source of truth for the API shape.

  • Infrastructure – Database connection and schema management (internal/db), SQL migrations (migrations/), and local development setup (e.g. Docker Compose for PostgreSQL).

Directory-wise, the layout looks like:

  • transport/graphql/ – server, resolvers, middleware
  • internal/flags/, internal/experiments/, internal/auth/, internal/users/, internal/db/ – services, repositories, and shared infra
  • graph/ – schema and gqlgen output
  • migrations/ – SQL migrations
  • tests/ or test/ – test suites

The API is served over HTTPS; GraphQL queries and mutations use the standard HTTP request/response model for compatibility with clients, proxies, and caches.

Technology stack

  • Go – implementation language (type safety, concurrency, clarity).
  • gqlgen – schema-first GraphQL: schema drives generated types and resolver interfaces.
  • PostgreSQL – primary store for users, flags, rules, experiments, variants, assignments, and audit logs.
  • JWT + RBAC – authentication and authorization.
  • Docker Compose – local PostgreSQL and development environment.

Testing

The project relies on unit tests and integration tests. Unit tests focus on the service layer (flag and experiment logic, rollout and assignment rules) using mocked repositories so that behaviour can be checked in isolation. Integration tests run against a real database (e.g. PostgreSQL via testcontainers or Docker) and hit the GraphQL API over HTTP to verify end-to-end behaviour, including authentication and role enforcement. The aim is to keep core business logic well covered and to validate that the API, services, and database work together correctly.
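The "mocked repositories" pattern boils down to the service depending on a small persistence interface that an in-memory fake can satisfy. A minimal sketch, with illustrative names rather than the repository's actual interfaces:

```go
package main

import (
	"errors"
	"fmt"
)

// FlagRepo is the persistence contract the service depends on; in
// unit tests it is replaced by an in-memory fake instead of PostgreSQL.
type FlagRepo interface {
	IsEnabled(key string) (bool, error)
}

// fakeRepo is the test double: a map standing in for the database.
type fakeRepo struct {
	flags map[string]bool
}

func (f fakeRepo) IsEnabled(key string) (bool, error) {
	on, ok := f.flags[key]
	if !ok {
		return false, errors.New("flag not found")
	}
	return on, nil
}

// FlagService holds the business logic under test, isolated from
// storage details by the FlagRepo interface.
type FlagService struct {
	repo FlagRepo
}

// Evaluate treats unknown or failed lookups as "off", so a missing
// flag can never accidentally enable a feature.
func (s FlagService) Evaluate(key string) bool {
	on, err := s.repo.IsEnabled(key)
	return err == nil && on
}

func main() {
	svc := FlagService{repo: fakeRepo{flags: map[string]bool{"beta": true}}}
	fmt.Println(svc.Evaluate("beta"), svc.Evaluate("missing"))
}
```

In an actual `_test.go` file the fake would be constructed per test case; the integration suite then swaps the fake for a real database to exercise the full stack.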
