Commits
19 commits
36760a7
chore: add AI memory files for code standards analysis session
web3dev1337 Mar 8, 2026
9ff1175
docs: add server architecture analysis report
web3dev1337 Mar 8, 2026
8be8c26
docs: add client architecture analysis report
web3dev1337 Mar 8, 2026
d93c2ee
docs: add protocol layer analysis report
web3dev1337 Mar 8, 2026
c70568d
docs: add coding style conventions report
web3dev1337 Mar 8, 2026
4447e38
docs: add code smells analysis report
web3dev1337 Mar 8, 2026
a2de034
docs: add SOLID principles and clean code analysis report
web3dev1337 Mar 8, 2026
aa2cb7f
docs: compile CODING_STANDARDS.md from 6 analysis reports
web3dev1337 Mar 8, 2026
54a2ced
docs: lower pattern duplication threshold from 5 to 3
web3dev1337 Mar 8, 2026
6514d38
docs: add contribution process, backwards compat, and data-driven rules
web3dev1337 Mar 8, 2026
7f0a832
docs: remove game-specific references from coding standards
web3dev1337 Mar 8, 2026
fa1e236
docs: split coding standards into two focused documents
web3dev1337 Mar 8, 2026
89e91e6
docs: update progress tracking
web3dev1337 Mar 8, 2026
7633847
docs: clarify God Class guidance — no new responsibilities, not untou…
web3dev1337 Mar 8, 2026
c827383
docs: add Code Clarity section with refactoring guidance
web3dev1337 Mar 8, 2026
e3aa5ef
docs: add Performance Impact section to CONTRIBUTING.md
web3dev1337 Mar 8, 2026
34ec1b1
docs: add games-tested requirement to PRs
web3dev1337 Mar 8, 2026
6e0986b
docs: expand games-tested to include device and repro steps
web3dev1337 Mar 8, 2026
a97f69c
docs: split AI review into general (bugs) and standards (style)
web3dev1337 Mar 8, 2026
913 changes: 913 additions & 0 deletions CODING_STANDARDS.md

Large diffs are not rendered by default.

195 changes: 195 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,195 @@
# Contributing to Hytopia

For code style, naming conventions, and technical standards, see [CODING_STANDARDS.md](CODING_STANDARDS.md).

---

## PR Requirements

Every pull request must include:

1. **Description** — What changed, why, and what it affects
2. **Test evidence** — How you verified it works (screenshots, test output, repro steps)
3. **Games tested** — Which games (SDK examples, your own game, etc.), on what devices (desktop browser, mobile browser, specific OS), and what you did in-game to exercise the change
4. **Breaking change flag** — If defaults, public API signatures, or wire format changed, say so explicitly
5. **Performance impact** — For runtime code changes: what targets were tested, any before/after numbers

---

## Backwards Compatibility

### Defaults Are Sacred

Changing a default value is a breaking change. Existing games depend on current defaults without specifying them explicitly.

```typescript
// A game using the SDK today:
const world = new World({ name: 'My World' });
// This implicitly depends on:
// tickRate = 60
// gravity = { x: 0, y: -32, z: 0 }
// ambientLightColor defaults
// particle alpha = 1.0

// If you change gravity default to -9.8, every existing game
// that doesn't explicitly set gravity will break silently.
```

**Rules:**
- Never change a default value without a deprecation path
- If a default must change, require explicit opt-in via options
- Document the migration in the PR description
- Consider adding a console warning for one release cycle
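The deprecation-path rules above can be sketched as code. This is a hypothetical illustration, not the actual SDK: `WorldOptions`, `resolveGravity`, `LEGACY_GRAVITY`, and the warning text are all made up for the example.

```typescript
type Vector3 = { x: number; y: number; z: number };

interface WorldOptions {
  name: string;
  gravity?: Vector3; // optional — omitting it keeps the old default
}

// The default existing games silently rely on
const LEGACY_GRAVITY: Vector3 = { x: 0, y: -32, z: 0 };

function resolveGravity(options: WorldOptions): Vector3 {
  if (options.gravity === undefined) {
    // Preserve the old default; warn for one release cycle so games
    // can opt in to the new value explicitly before it flips.
    console.warn(
      '[deprecation] the gravity default will change in the next major ' +
      'version; set options.gravity explicitly to silence this warning.'
    );
    return LEGACY_GRAVITY;
  }
  return options.gravity;
}
```

The key property: games that never set `gravity` keep their current behavior and get a nudge, while the new default only applies once they opt in.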

### Public API Contract

These are part of the API contract and must not change without a major version bump:
- Method signatures on exported classes
- Event string values (`'PLAYER.JOINED_WORLD'`)
- Options interface field names and their defaults
- Wire format packet structure
- Constructor parameter shapes
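One way to keep event string values honest is to freeze them in a `const` object, so an accidental rename becomes a type error rather than a silent wire-format break. A sketch — the `JOINED_WORLD` value mirrors the convention quoted above, but `LEFT_WORLD` and the `PlayerEvent` object itself are hypothetical, not the real SDK type:

```typescript
// Event string values are part of the API contract — freeze them.
const PlayerEvent = {
  JOINED_WORLD: 'PLAYER.JOINED_WORLD',
  LEFT_WORLD: 'PLAYER.LEFT_WORLD', // hypothetical example value
} as const;

// Derived union type: consumers can only reference contract values
type PlayerEventName = (typeof PlayerEvent)[keyof typeof PlayerEvent];

function emit(event: PlayerEventName): string {
  return event; // any typo or renamed value fails to compile
}
```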

---

## Performance Impact

Every change should be considered from a performance perspective. Hytopia runs on both desktop and mobile, on both client and server — what's cheap on a desktop GPU can be a bottleneck on a mobile browser, and what's fine for one player can collapse at 50.

### Think Across All Targets

| Target | Constraints to consider |
|--------|------------------------|
| Server | Tick budget (~16ms at 60Hz), per-world memory, cost scaling with player count |
| Desktop client | GPU draw calls, texture memory, physics step time |
| Mobile client | Thermal throttling, limited GPU/RAM, battery drain, smaller bandwidth |
| High player count | Per-player serialization cost, event fan-out, network packet size |

### What to Ask Yourself

Before submitting a PR that touches runtime code:

- **Does this scale?** Will it still work with 50 players? 200 entities? What's the growth curve — linear, quadratic, constant?
- **Does this allocate?** Any `new`, string concatenation, or array creation in a per-tick or per-frame path adds GC pressure. Worse on mobile.
- **Does this add draw calls?** New visual elements, materials, or render passes affect mobile frame rate disproportionately.
- **Does this add network traffic?** Extra packets or larger payloads affect mobile users on limited connections. Check if the data can be delta-compressed or batched.
- **Does this affect startup time?** New asset loading, initialization, or validation that runs on connect/join impacts mobile users most.
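The "does this allocate?" question above has a standard fix: write results into a preallocated scratch object instead of constructing a new one every tick. A minimal sketch with hypothetical names — nothing here is SDK API:

```typescript
interface Vec3 { x: number; y: number; z: number }

// DON'T: allocates a fresh object every tick for every entity — GC pressure
function scaleNaive(v: Vec3, dt: number): Vec3 {
  return { x: v.x * dt, y: v.y * dt, z: v.z * dt };
}

// DO: reuse a preallocated scratch vector — zero allocation in the hot path
const scratch: Vec3 = { x: 0, y: 0, z: 0 };

function scale(v: Vec3, dt: number, out: Vec3 = scratch): Vec3 {
  out.x = v.x * dt;
  out.y = v.y * dt;
  out.z = v.z * dt;
  return out;
}
```

The trade-off: callers must not hold the returned reference across ticks, since the scratch object is overwritten on the next call.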

### When Performance Evidence Is Required

If your change touches any of these, include before/after measurements in the PR:

- Tick loop or frame loop code
- Serialization / deserialization
- Network packet handling
- Entity creation, destruction, or sync
- Asset loading or caching
- Physics simulation setup
- Anything called per-entity or per-player per-tick

"It works on my machine" is not sufficient — consider the lowest-spec target.

---

## Review Process

### Quality Gate Stack

PRs pass through these layers in order. A failure at any layer blocks merge.

| Layer | What | Automated? |
|-------|------|:----------:|
| 1. Type checks | `tsc --noEmit` catches type errors | Yes (CI) |
| 2. Lint | ESLint enforces style rules | Yes (CI) |
| 3. Unit tests | Verify isolated behavior | Yes (CI) |
| 4. Performance tests | No regressions in hot paths | Yes (CI) |
| 5. AI review (general) | Bug detection, logic errors, edge cases, merge readiness | Yes |
| 6. AI review (standards) | CODING_STANDARDS.md compliance check | Yes |
| 7. Human review | Maintainer reviews architecture, intent, edge cases | No |
| 8. Manual testing | Run affected systems, verify behavior | No |
| 9. Game regression | Test against existing games to catch silent breakage | No |

### AI Review — General (Layer 5)

A general-purpose AI review with fresh context. Use multiple tools (e.g. Claude Code, Codex) for independent perspectives. The reviewer checks:
- Bugs, logic errors, off-by-one mistakes
- Edge cases and failure modes
- Missing validation or error handling
- Whether the change is actually ready to merge

Prompt should be open-ended: *"Review this PR for bugs, edge cases, and merge readiness"* — not limited to style.

### AI Review — Standards (Layer 6)

A separate, focused check against CODING_STANDARDS.md:
- Hard rule violations
- Naming convention mismatches
- Backwards compatibility concerns
- Missing cleanup or event listener pairing

This is intentionally separate from the general review so style concerns don't crowd out bug detection.

### Fresh Context for Both

Each AI review must run with fresh context — no carry-over from previous reviews. This prevents the reviewer from developing blind spots about the codebase.

### Human Review (Layer 7)

Human reviewers focus on what automation cannot catch:
- Does the change make architectural sense?
- Are there edge cases the tests don't cover?
- Will this be maintainable in 6 months?
- Does the PR description accurately reflect the change?

---

## Testing Expectations

### Unit Tests

New public methods should have corresponding tests. Tests verify behavior, not implementation:

```typescript
// DO: Test behavior
it('rejects invalid packet format', () => { ... });
it('emits JOINED_WORLD when player enters', () => { ... });

// DON'T: Test implementation details
it('calls _internalMethod three times', () => { ... });
```
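The same behavior-first style, runnable without a test framework. `validatePacket` and its validation rule are hypothetical stand-ins for illustration:

```typescript
interface Packet { type?: string; payload?: unknown }

// Hypothetical function under test
function validatePacket(p: Packet): boolean {
  // Behavior: a packet must carry a non-empty string type
  return typeof p.type === 'string' && p.type.length > 0;
}

// Assertions target observable behavior, not internals
console.assert(validatePacket({ type: 'PLAYER.JOINED_WORLD' }) === true);
console.assert(validatePacket({ payload: {} }) === false);
```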

### Performance Tests

Changes to hot paths (tick loops, serialization, network sync) must include before/after benchmarks. Key metrics:
- Tick processing time (ms per tick)
- Serialization throughput (entities per second)
- Memory allocation rate in hot paths (should be zero)
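A minimal before/after harness for the ms-per-tick metric might look like this. `benchmarkTick` is a sketch, not project tooling; `tick` stands in for whatever hot-path function the PR changed:

```typescript
function benchmarkTick(tick: () => void, iterations = 10_000): number {
  // Warm up so JIT compilation doesn't skew the measurement
  for (let i = 0; i < 1_000; i++) tick();

  const start = performance.now();
  for (let i = 0; i < iterations; i++) tick();
  const elapsed = performance.now() - start;

  return elapsed / iterations; // mean ms per tick
}
```

Run it against both the base branch and your branch with the same workload, and report both numbers in the PR.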

### Game Regression

Before merging changes that affect defaults, physics, networking, or entity behavior:
1. Run at least one existing game against the branch — SDK examples, your own game, or both
2. Verify no visual or behavioral differences
3. In the PR, describe:
- **Which games** — e.g. `examples/payload-game`, your own game
- **Which devices** — e.g. Chrome desktop, Safari iOS, Android Chrome
- **What you did** — e.g. "spawned 20 entities, walked around, triggered physics collisions, tested on mobile with 3 players"
4. Note any intentional behavior changes in the PR description

---

## Robustness Checklist

Before submitting a PR, verify:

- [ ] No new God Class additions (see CODING_STANDARDS.md section 11)
- [ ] Event listeners have matching cleanup
- [ ] Error paths use ErrorHandler, not raw throw
- [ ] No defaults were changed (or change is flagged as breaking)
- [ ] Public API types don't use `any`
- [ ] Hot path changes don't allocate (no `new` in tick loops)
- [ ] Options pattern used for configurable values
- [ ] Configuration arrays marked `readonly`
- [ ] Considered performance on mobile, not just desktop
- [ ] Changes that scale with player/entity count have been stress-tested
- [ ] No unnecessary network traffic added (batching, delta compression considered)
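For the "event listeners have matching cleanup" item, one pattern worth considering is having `on` return its own unsubscribe function, so the cleanup exists the moment the listener does. A generic sketch — `Emitter` here is illustrative, not the SDK's event system:

```typescript
type Listener = (...args: unknown[]) => void;

class Emitter {
  private listeners = new Map<string, Set<Listener>>();

  on(event: string, fn: Listener): () => void {
    if (!this.listeners.has(event)) this.listeners.set(event, new Set());
    this.listeners.get(event)!.add(fn);
    // Return the matching cleanup so callers can't forget it exists
    return () => this.off(event, fn);
  }

  off(event: string, fn: Listener): void {
    this.listeners.get(event)?.delete(fn);
  }

  count(event: string): number {
    return this.listeners.get(event)?.size ?? 0;
  }
}
```

Storing the returned unsubscribe and calling it on teardown keeps the on/off pairing explicit and easy to audit in review.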
17 changes: 17 additions & 0 deletions ai-memory/analysis/code-standards-guidelines-44f2a42/init.md
@@ -0,0 +1,17 @@
# Code Standards Analysis Task

## Request
Analyze the entire Hytopia codebase in extreme detail to:
1. Document coding style, design patterns, and architecture
2. Create comprehensive guidelines/principles for AI code contributions
3. Create linting rules enforceable by AI review
4. Identify areas where current code could be improved (so new contributions are held to a higher standard)

## Approach
Using multiple agent teams to analyze different aspects in parallel:
- Team 1: Server architecture & patterns
- Team 2: Client architecture & patterns
- Team 3: Protocol layer analysis
- Team 4: Code smells & improvement opportunities
- Team 5: Style & conventions extraction
- Team 6: Enterprise patterns & clean code analysis
15 changes: 15 additions & 0 deletions ai-memory/analysis/code-standards-guidelines-44f2a42/progress.md
@@ -0,0 +1,15 @@
# Progress

- [x] Server architecture analysis (server-architecture.md)
- [x] Client architecture analysis (client-architecture.md)
- [x] Protocol layer analysis (protocol-architecture.md)
- [x] Style conventions analysis (style-conventions.md)
- [x] Code smells analysis (code-smells.md)
- [x] SOLID principles analysis (principles-analysis.md)
- [x] Compile final CODING_STANDARDS.md from all 6 reports
- [x] Add mega-sappy guidelines (backwards compat, data-driven, review layers)
- [x] Remove game-specific content (damage/weapon examples)
- [x] Critical review and restructure into two documents
- [x] CODING_STANDARDS.md — code quality only (13 hard rules, clean numbering)
- [x] CONTRIBUTING.md — process/governance (PR reqs, review layers, backwards compat)
- [x] Update PR #13 description