A fast and declarative JSON serialization library for Elixir, inspired by Alba for Ruby. NbSerializer provides a powerful DSL for defining serializers with compile-time optimizations, making it both developer-friendly and performant.
- 🚀 Compile-time optimizations - DSL compiles to efficient runtime code
- 🎯 Declarative DSL - Clean, readable serializer definitions
- 🔒 Type safety - Explicit type annotations required for all fields with compile-time validation
- 🔌 Framework integration - Built-in support for Phoenix, Ecto, and Plug
- 🐫 Automatic camelization - Convert snake_case to camelCase for JavaScript/TypeScript (configurable)
- 🔄 Circular reference handling - Smart detection and prevention of infinite loops
- 📊 Metadata & Pagination - Built-in support for API metadata
- 🏗️ Telemetry ready - Built-in telemetry events for performance monitoring
- 🛡️ Error handling - Comprehensive error management with custom exceptions
- 🔍 Auto-discovery - Automatic serializer registration and inference
- 🌊 Stream support - Memory-efficient streaming for large datasets
- 🔌 Protocol-based extensibility - Extend formatting and transformation for custom types
- ⚡ Parallel processing - Automatic parallelization of relationship loading
- ✅ Compile-time validation - Struct field validation at compile time
- 🔧 Credo checks - 8 custom Credo checks for serializer code quality
- 📁 Namespace support - Organize TypeScript files with namespace prefixes
Add nb_serializer to your list of dependencies in mix.exs:
def deps do
[
{:nb_serializer, github: "nordbeam/nb_serializer"}
]
end

Important: All fields must have explicit type annotations. Typeless fields will cause a compile-time error. This ensures type safety and enables TypeScript generation.
defmodule UserSerializer do
use NbSerializer.Serializer
schema do
field :id, :number
field :name, :string
field :email, :string
end
end
# Usage
user = %{id: 1, name: "John Doe", email: "john@example.com"}
{:ok, result} = NbSerializer.serialize(UserSerializer, user)
# => {:ok, %{id: 1, name: "John Doe", email: "john@example.com"}}
# Direct JSON encoding
json = NbSerializer.to_json!(UserSerializer, user)
# => "{\"id\":1,\"name\":\"John Doe\",\"email\":\"john@example.com\"}"Use the :for option to automatically register a serializer for a struct type:
defmodule UserSerializer do
use NbSerializer.Serializer, for: User # Auto-register for User struct
schema do
field :id, :number
field :name, :string
field :email, :string
end
end
# Now you can use inferred serialization
user = %User{id: 1, name: "Alice", email: "alice@example.com"}
NbSerializer.serialize_inferred!(user)
# => %{id: 1, name: "Alice", email: "alice@example.com"}
# Automatically uses UserSerializer!

All fields require explicit type annotations. Available types:
| Type | Description | Example |
|---|---|---|
| :string | Text values | field :name, :string |
| :number | Numeric values (int or float) | field :id, :number |
| :integer | Integer values only | field :count, :integer |
| :boolean | True/false values | field :active, :boolean |
| :decimal | Decimal values | field :price, :decimal |
| :uuid | UUID strings | field :uuid, :uuid |
| :date | Date values | field :birthday, :date |
| :datetime | DateTime values | field :created_at, :datetime |
| :any | Dynamic/flexible content | field :metadata, :any |
For advanced TypeScript type generation, use the ~TS sigil:
field :metadata, type: ~TS"{ enabled: boolean; count: number }"# Nullable fields (can be null)
field :email, :string, nullable: true
# Optional fields (may be omitted from output)
field :phone, :string, optional: true

The unified syntax supports typed lists, including lists of primitives, enums, and serializers:
# Lists of primitives
field :tags, list: :string # TypeScript: string[]
field :scores, list: :number # TypeScript: number[]
field :flags, list: :boolean # TypeScript: boolean[]
# Lists of serializers (nested objects)
field :users, list: UserSerializer # TypeScript: User[]
field :items, list: ItemSerializer # TypeScript: Item[]
# Lists can be optional
field :notes, list: :string, optional: true # TypeScript: notes?: string[]

Define fields with restricted values using enums:
# Simple enum
field :status, enum: ["active", "inactive", "pending"]
# TypeScript: status: "active" | "inactive" | "pending"
# Optional enum
field :priority, enum: ["low", "medium", "high"], optional: true
# TypeScript: priority?: "low" | "medium" | "high"
# Nullable enum
field :category, enum: ["news", "blog", "update"], nullable: true
# TypeScript: category: "news" | "blog" | "update" | null
# List of enums
field :roles, list: [enum: ["admin", "user", "guest"]]
# TypeScript: roles: ("admin" | "user" | "guest")[]defmodule ProductSerializer do
use NbSerializer.Serializer
schema do
# Primitives
field :id, :number
field :name, :string
field :active, :boolean
# Lists of primitives
field :tags, list: :string
field :scores, list: :number
# Enums
field :status, enum: ["draft", "published", "archived"]
field :priority, enum: ["low", "high"], optional: true
# List of enums
field :categories, list: [enum: ["electronics", "books", "clothing"]]
# Nested serializers
field :users, list: UserSerializer
field :config, serializer: ConfigSerializer
# Optional and nullable
field :description, :string, optional: true
field :metadata, :any, nullable: true
end
end

NbSerializer automatically converts snake_case keys to camelCase to match JavaScript/TypeScript conventions (enabled by default):
defmodule UserSerializer do
use NbSerializer.Serializer
schema do
field :user_name, :string
field :email_address, :string
field :is_active, :boolean
field :created_at, :datetime
end
end
user = %{user_name: "John", email_address: "john@example.com", is_active: true, created_at: "2024-01-01"}
NbSerializer.serialize!(UserSerializer, user)
# => %{userName: "John", emailAddress: "john@example.com", isActive: true, createdAt: "2024-01-01"}

Configuration:
# config/config.exs
config :nb_serializer,
camelize_props: true # Default: true

Override per-request:
# Force camelCase
NbSerializer.serialize(UserSerializer, user, camelize: true)
# Keep snake_case
NbSerializer.serialize(UserSerializer, user, camelize: false)

The serializer registry allows automatic discovery of serializers based on struct types.
# Register a serializer for a struct type
defmodule UserSerializer do
use NbSerializer.Serializer, for: User # Auto-registers at compile time
schema do
field :id, :number
field :name, :string
end
end
# Inferred serialization - no need to specify serializer
user = %User{id: 1, name: "Alice"}
{:ok, result} = NbSerializer.serialize_inferred(user)
# Works with lists too
users = [%User{id: 1}, %User{id: 2}]
NbSerializer.serialize_inferred!(users)
# Manual registration
NbSerializer.Registry.register(Post, PostSerializer)

Efficiently serialize large datasets without loading everything into memory:
# Stream from database
users_query
|> Repo.stream()
|> NbSerializer.serialize_stream(UserSerializer)
|> Stream.map(&NbSerializer.encode!/1)
|> Stream.into(File.stream!("users.jsonl"))
|> Stream.run()
# With inferred serializers
posts
|> Stream.map(&load_associations/1)
|> NbSerializer.serialize_stream_inferred(view: :detailed)
|> Enum.to_list()
# Process in chunks
large_dataset
|> NbSerializer.serialize_stream(ItemSerializer, chunk_size: 100)
|> Stream.each(&process_chunk/1)
|> Stream.run()

Extend formatting and transformation for your custom types using protocols:
# Define a custom type
defmodule Money do
defstruct [:amount, :currency]
end
# Implement the Formatter protocol
defimpl NbSerializer.Formatter, for: Money do
def format(%Money{amount: amount, currency: currency}, opts) do
precision = Keyword.get(opts, :precision, 2)
symbol = Keyword.get(opts, :symbol, currency)
formatted = :erlang.float_to_binary(amount / 1.0, decimals: precision)
"#{symbol}#{formatted}"
end
end
# Now Money values are automatically formatted
defmodule ProductSerializer do
use NbSerializer.Serializer
schema do
field :id, :number
field :name, :string
field :price, :any # Will use Money's formatter when use_protocol: true
end
end
product = %{id: 1, name: "Widget", price: %Money{amount: 19.99, currency: "USD"}}
NbSerializer.serialize!(ProductSerializer, product, use_protocol: true)
# => %{id: 1, name: "Widget", price: "$19.99"}Available Protocols:
- NbSerializer.Formatter - Format values for output (DateTime, Date, Decimal, custom types)
- NbSerializer.Transformer - Transform values before formatting (String, List, custom types); see the sketch below
Note: Protocols are opt-in via the use_protocol: true option to maintain backwards compatibility.
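An implementation of NbSerializer.Transformer would look similar to the Formatter example above. The sketch below is only illustrative: the transform/2 callback name and arity are assumed by analogy with Formatter.format/2 and should be checked against the actual protocol definition, and the Slug struct is a made-up example type.

# Define a custom type
defmodule Slug do
  defstruct [:value]
end

# Assumed callback: normalize the wrapped value before formatting runs
defimpl NbSerializer.Transformer, for: Slug do
  def transform(%Slug{value: value}, _opts) do
    value
    |> String.downcase()
    |> String.replace(~r/\s+/, "-")
  end
end

As with the Formatter example, this would only take effect when serializing with use_protocol: true.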
Validate struct fields at compile time to catch errors early:
defmodule UserSerializer do
use NbSerializer.Serializer, for: User # Automatically enables validation
schema do
field :id, :number
field :full_name, :string, from: :name # Validates :name exists in User struct
field :contact, :string, from: :email # Validates :email exists in User struct
end
end
# If User struct doesn't have a :name field, you'll get a compile warning:
# warning: Field `full_name` uses `from: :name` but :name does not exist in User

Use the improved within syntax for cleaner circular reference management:
alias NbSerializer.Within
import NbSerializer.Within
# Path-based syntax
NbSerializer.serialize(PostSerializer, post, within: build([
~w(author books)a,
~w(author posts)a,
~w(comments user posts)a
]))
# Generate from serializer relationships
within_opts = Within.from_serializer(PostSerializer)
NbSerializer.serialize(PostSerializer, post, within: within_opts)
# Merge multiple within options
within1 = [author: [books: []]]
within2 = [author: [posts: []], comments: []]
merged = Within.merge(within1, within2)
# => [author: [books: [], posts: []], comments: []]

When serializing data that should keep its original key case (like airport codes, currency codes, or API responses from external services), use preserve_case/1:
defmodule FlightSerializer do
use NbSerializer.Serializer
import NbSerializer, only: [preserve_case: 1]
schema do
field :id, :number
field :departure, :string
field :arrival, :string
field :airport_data, :map, compute: :format_airports
end
def format_airports(%{airports: airports}, _opts) do
# Keys like "JFK", "LAX", "SFO" won't be camelized
preserve_case(airports)
end
end
# Without preserve_case: %{jfk: "...", lax: "..."}
# With preserve_case: %{JFK: "...", LAX: "..."}

Use cases:
- Airport codes (JFK, LAX, SFO)
- Currency codes (USD, EUR, GBP)
- External API response data
- Case-sensitive identifiers
Organize generated TypeScript files into directories using the namespace macro:
defmodule MyApp.Admin.UserSerializer do
use NbSerializer.Serializer
namespace "Admin" # Files generated in types/Admin/
schema do
field :id, :number
field :name, :string
end
end

This generates types/Admin/User.ts instead of types/User.ts.
Override the default interface name with typescript_name:
defmodule MyApp.API.V2.UserResponseSerializer do
use NbSerializer.Serializer
typescript_name "UserV2Response" # Instead of "UserResponseSerializer"
schema do
field :data, :map
field :meta, :map
end
end

Relationships are automatically processed in parallel when there are 3 or more:
defmodule PostSerializer do
use NbSerializer.Serializer
schema do
field :id, :number
field :title, :string
# These 4 relationships will be loaded in parallel
has_one :author, AuthorSerializer
has_many :comments, CommentSerializer
has_many :tags, TagSerializer
has_many :categories, CategorySerializer
end
end
# Parallel loading happens automatically
NbSerializer.serialize!(PostSerializer, post)
# Configure the threshold
NbSerializer.serialize!(PostSerializer, post,
parallel_threshold: 2, # Start parallel at 2 relationships
relationship_timeout: 30_000 # Timeout per relationship
)
# Use System.schedulers_online() for max concurrency

Computed fields derive values at serialization time using named module functions:

defmodule PostSerializer do
use NbSerializer.Serializer
schema do
field :id, :number
field :title, :string
field :excerpt, :string, compute: :generate_excerpt
field :reading_time, :number, compute: :calculate_reading_time
end
def generate_excerpt(%{body: body}, _opts) do
String.slice(body, 0, 150) <> "..."
end
def calculate_reading_time(%{body: body}, _opts) do
word_count = String.split(body) |> length()
div(word_count, 200) # Assumes 200 words per minute
end
end

Relationships are declared with has_one and has_many, optionally guarded by a condition:

defmodule BlogSerializer do
use NbSerializer.Serializer
schema do
field :id, :number
field :title, :string
field :body, :string
has_one :author, serializer: AuthorSerializer
has_many :comments, serializer: CommentSerializer
has_many :tags, serializer: TagSerializer, if: :include_tags?
end
def include_tags?(_data, opts) do
opts[:include_tags] == true
end
end

Fields can be included conditionally with the if: and unless: options:

defmodule UserDetailSerializer do
use NbSerializer.Serializer
schema do
field :id, :number
field :name, :string
field :email, :string, if: :show_email?
field :admin_notes, :string, if: :is_admin?
field :private_data, :string, unless: :is_public_view?
end
def show_email?(user, opts) do
opts[:current_scope] && opts[:current_scope].id == user.id
end
def is_admin?(_user, opts) do
opts[:current_scope] && opts[:current_scope].role == "admin"
end
def is_public_view?(_user, opts) do
opts[:view] == :public
end
end

Values can be transformed or formatted with the transform: and format: options:

defmodule ProductSerializer do
use NbSerializer.Serializer
schema do
field :id, :number
field :name, :string, transform: :titleize
field :price, :number, format: :currency
field :created_at, :datetime, format: :iso8601
field :sku, :string, transform: :upcase_sku
end
def titleize(value) do
value
|> String.split()
|> Enum.map(&String.capitalize/1)
|> Enum.join(" ")
end
def upcase_sku(value) do
String.upcase(value)
end
end

# Prevent infinite recursion in circular references
NbSerializer.serialize(BookSerializer, book,
within: [
author: [books: []], # Serialize author and their books, but stop there
comments: [user: []], # Serialize comments and users, but not user's comments
tags: [] # Serialize tags with no nested associations
]
)
# Set maximum nesting depth
NbSerializer.serialize(PostSerializer, post, max_depth: 3)

# Add root key
NbSerializer.serialize(UserSerializer, users, root: "users")
# => {:ok, %{"users" => [...]}}
# Add metadata
NbSerializer.serialize(UserSerializer, users,
root: "users",
meta: %{version: "1.0", generated_at: DateTime.utc_now()}
)
# => {:ok, %{"users" => [...], "meta" => %{...}}}
# Pagination metadata
NbSerializer.serialize(UserSerializer, users,
page: 2,
per_page: 20,
total: 100
)
# => {:ok, %{data: [...], meta: %{pagination: %{page: 2, per_page: 20, total: 100, total_pages: 5}}}}

Phoenix JSON views can delegate rendering to serializers:

defmodule MyAppWeb.UserJSON do
use NbSerializer.Phoenix
alias MyApp.Serializers.UserSerializer
def index(%{users: users}) do
%{users: render_many(users, UserSerializer)}
end
def show(%{user: user}) do
%{user: render_one(user, UserSerializer)}
end
def create(%{user: user}) do
%{user: render_one(user, UserSerializer, view: :detailed)}
end
def error(%{changeset: changeset}) do
render_errors(changeset)
end
end

Controllers render these views as usual:

defmodule MyAppWeb.UserController do
use MyAppWeb, :controller
def index(conn, _params) do
users = Users.list_users()
render(conn, :index, users: users)
end
def show(conn, %{"id" => id}) do
user = Users.get_user!(id)
render(conn, :show, user: user)
end
end

Automatically serialize controller assigns:
# In your router or controller
plug NbSerializer.Plug,
serializers: %{
user: UserSerializer,
users: UserSerializer,
post: PostSerializer,
posts: PostSerializer
},
meta: %{api_version: "1.0"},
cache: true,
cache_ttl: 300

NbSerializer automatically handles Ecto schemas and associations:
defmodule PostWithEctoSerializer do
use NbSerializer.Serializer
schema do
field :id, :number
field :title, :string
field :body, :string
# Only serialize if association is loaded
has_one :author, serializer: AuthorSerializer, if: :author_loaded?
has_many :comments, serializer: CommentSerializer
end
def author_loaded?(post, _opts) do
# Check if Ecto association is loaded
NbSerializer.Ecto.loaded?(post.author)
end
end

NbSerializer provides comprehensive error handling:
defmodule SafeSerializer do
use NbSerializer.Serializer
schema do
field :id, :number
field :name, :string
# Handle errors gracefully
field :risky_field, :string, compute: :compute_risky, on_error: :null # Returns nil on error
field :important_field, :string, compute: :compute_important, on_error: {:default, "N/A"} # Returns default value
field :skippable_field, :string, compute: :compute_skippable, on_error: :skip # Omits field from output
field :critical_field, :string, compute: :compute_critical, on_error: :reraise # Raises SerializationError with context
end
def compute_risky(_data, _opts) do
# This might fail
raise "Something went wrong"
end
end

NbSerializer includes a telemetry module for future performance monitoring integration. While telemetry events are not currently emitted during serialization, the module structure is in place for adding instrumentation.
The DSL compiles to efficient runtime code:
- No anonymous functions in hot paths
- Optimized field access patterns
- Minimal runtime overhead
# config/config.exs
config :nb_serializer,
encoder: Jason, # JSON encoder (defaults to Jason if available)
camelize_props: true, # Auto-convert to camelCase (default: true)
default_view: :public,
max_depth: 10
# config/dev.exs
config :nb_serializer,
# Enable compile-time struct field validation warnings
validate_struct_fields: true # default: true in dev/test

All serialization functions accept these options:
NbSerializer.serialize(UserSerializer, user,
# Circular reference control
within: [author: [books: []]],
max_depth: 5,
# Protocol-based formatting (opt-in)
use_protocol: true,
# Parallel relationship loading
parallel_threshold: 3,
relationship_timeout: 30_000,
# View and scope
view: :detailed,
current_scope: current_user,
# Output formatting
camelize: true,
root: "users",
meta: %{version: "1.0"},
# Pagination
page: 1,
per_page: 20,
total: 100
)

# Run tests
mix test
# Run tests with coverage
mix coveralls
# Run benchmarks
mix run bench/serialization_bench.exs
mix run bench/quick_bench.exs

lib/
├── nb_serializer.ex # Main entry point
├── nb_serializer/
│ ├── serializer.ex # Core serializer module
│ ├── compiler.ex # DSL compiler
│ ├── dsl.ex # DSL macros
│ ├── ecto.ex # Ecto integration
│ ├── phoenix.ex # Phoenix integration
│ ├── plug.ex # Plug middleware
│ ├── formatters.ex # Built-in formatters
│ ├── telemetry.ex # Telemetry events
│ └── utils.ex # Utility functions
- No Anonymous Functions in DSL - All functions must be named module functions for compile-time safety (see the sketch after this list)
- Compile-Time Optimization - DSL compiles to efficient runtime code via macros
- Explicit Field Definition - Serializers must explicitly define included fields
- Ecto-First Design - Built-in handling for Ecto associations and schemas
- Protocol-Based Extensibility - Use Elixir protocols for custom type formatting
- Idiomatic Elixir - Follows Elixir best practices (behaviours, protocols, function capturing, with statements)
- Performance-Conscious - Automatic parallelization, streaming support, and efficient compilation
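As an illustration of the first principle, computed fields, conditions, and transforms reference named module functions by atom; the commented-out line shows the anonymous-function style the DSL is designed to reject (the serializer itself is just an illustrative sketch):

defmodule ExcerptSerializer do
  use NbSerializer.Serializer

  schema do
    field :id, :number
    # Named module function, referenced by atom - the supported style
    field :excerpt, :string, compute: :generate_excerpt
    # Anonymous functions are not supported in the DSL (per the principles above):
    # field :excerpt, :string, compute: fn data, _opts -> ... end
  end

  def generate_excerpt(%{body: body}, _opts) do
    String.slice(body, 0, 150) <> "..."
  end
end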
NbSerializer includes 8 custom Credo checks for code quality:
| Check ID | Check | Priority | Description |
|---|---|---|---|
| EX6010 | InvalidNestedSerializerType | HIGH | Detects invalid nested serializer references |
| EX6011 | OptionalVsNullable | HIGH | Warns about confusion between optional and nullable |
| EX6012 | InconsistentNumericTypes | NORMAL | Detects mixing :number and :integer types |
| EX6013 | DatetimeAsString | NORMAL | Warns when datetime fields use :string type |
| EX6014 | MissingDatetimeFormat | NORMAL | Warns when datetime fields lack a format specification |
| EX6015 | MissingModuledoc | LOW | Detects serializers without module documentation |
| EX6016 | LargeSchema | LOW | Warns when a schema has too many fields (configurable) |
| EX6017 | SimpleFieldCompute | LOW | Suggests simpler alternatives for trivial computed fields |
Enable in .credo.exs:
%{
configs: [
%{
checks: [
{NbSerializer.Credo.InvalidNestedSerializerType, []},
{NbSerializer.Credo.OptionalVsNullable, []},
{NbSerializer.Credo.InconsistentNumericTypes, []},
{NbSerializer.Credo.LargeSchema, [max_fields: 20]},
# ... other checks
]
}
]
}

NbSerializer focuses on core serialization functionality. TypeScript and Inertia.js integrations have been extracted into separate libraries:
Automatic TypeScript interface generation from NbSerializer schemas:
# mix.exs
def deps do
[
{:nb_ts, github: "nordbeam/nb_ts"}
]
end

Features:
- Generate TypeScript interfaces from serializers
- Support for nullable, arrays, enums, and custom types
- Automatic camelCase conversion
- Runtime type validation with OXC
- Real-time type regeneration during development (via compile hooks)
When nb_ts is installed, serializers automatically trigger TypeScript type regeneration
when recompiled during development. This provides real-time type updates without
manually running mix nb_ts.gen.types.
Configure automatic generation in config/dev.exs:
config :nb_ts,
output_dir: "assets/js/types",
auto_generate: true # Enable real-time type updates (default in dev)

See: github.com/nordbeam/nb_ts
Seamless Inertia.js integration with automatic serialization:
# mix.exs
def deps do
[
{:nb_inertia, github: "nordbeam/nb_inertia"}
]
end

Features:
- Controller helpers for Inertia responses
- Lazy, deferred, and merge props support
- Shared props management
- TypeScript prop type generation
- Automatic camelCase conversion
See: github.com/nordbeam/nb_inertia
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Write tests for your changes
- Ensure all tests pass (mix test)
- Check code formatting (mix format)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Inspired by Alba for Ruby
- Built with love for the Elixir community