Releases: snapsynapse/ai-capability-reference
v2.0.0 - AI Capability Reference
This release is the real product reframe.
What started as a plan-by-plan feature tracker is now a capability-first, ontology-driven reference for understanding what today’s AI tools can actually do, where they work, what they cost, and what constraints apply.
What’s new in v2
- **Capabilities are now the front door.** The homepage is no longer organized around vendor feature branding first. It starts from plain-English capabilities people actually care about.
- **Detailed availability is still here.** The feature-first view remains available for plan, surface, and entitlement lookup.
- **A new constraints view.** Access, limits, platform caveats, and gating now have their own dedicated view.
- **A real shared data model under the site.** The repo now includes first-class records for capabilities, providers, products, implementations, model access, and evidence.
- **Broader and cleaner coverage.** Coverage expanded across ChatGPT, Claude, Gemini, Copilot, Grok, Perplexity, and open/self-hosted model ecosystems.
- **Ontology-first repo direction.** New design docs, migration strategy, validation tooling, and evidence-sync workflows now support the project as a maintained reference system, not just a static tracker.
- **Cross-platform skills.** The repo now includes reusable skills and bundle tooling for Claude, Perplexity, and Codex workflows.
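To make the "shared data model" concrete, here is a rough sketch of what a capability-first record and a capability-first grouping could look like. All field names below are illustrative assumptions, not the repo's actual schema:

```javascript
// Hypothetical shape for a record linking a capability to a concrete
// product implementation. Field names are illustrative only.
const implementation = {
  capability: "agentic-browsing",   // plain-English capability id
  provider: "anthropic",
  product: "claude",
  plans: ["pro", "max"],            // tiers where it is available
  surfaces: ["web", "desktop"],     // where it actually works
  constraints: { usageLimit: "50/mo" },
  evidence: [{ url: "https://example.com/docs", checkedAt: "2025-01-01" }],
};

// A capability-first view is then just a grouping over such records.
function byCapability(records) {
  const index = {};
  for (const r of records) {
    (index[r.capability] ??= []).push(r);
  }
  return index;
}
```

The same records can back the feature-first and constraints views by grouping on `product` or `constraints` instead.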
Why this is v2.0.0
Because this is not just a UI refresh.
It changes the project’s:
- core mental model
- public information architecture
- data model
- maintenance workflow
- long-term roadmap
The old question was:
- “What features does this vendor say it has?”
The new question is:
- “What can this AI actually do for me, on my plan, on my surface, with these constraints?”
That is a real version boundary.
Also in this release
- new hero and social preview assets
- improved About page and README
- better provider filtering and UI polish
- fixed theme, anchor, spacing, and layout issues
- resolved large batches of verification issues
- refreshed roadmap around ontology, capability-first UX, and future agent-readable exports
- hardened skill-source hygiene and documented `skill-provenance` as a companion dependency
Thanks
Built with human review and multi-agent collaboration across Claude, Codex, Gemini, and Perplexity-assisted research workflows.
v1.0.0 — 7 platforms, automated verification, live dashboard, WCAG 2.1 AA
AI Feature Tracker v1.0.0
A community-maintained, single source of truth for AI feature availability across subscription tiers. Answer questions like "Is Agent Mode on the free plan?" or "Which platforms support voice on Linux?" in seconds — without hunting through marketing pages.
What's covered
7 platforms tracked
| Platform | Vendor | Sample features tracked |
|---|---|---|
| ChatGPT | OpenAI | Agent Mode, Custom GPTs, Voice, Deep Research, Codex, DALL-E |
| Claude | Anthropic | Code, Cowork, Projects, Artifacts, Extended Thinking, Vision |
| Copilot | Microsoft | Office Integration, Designer, Vision, Voice |
| Gemini | Google | Advanced, NotebookLM, AI Studio, Deep Research, Gems, Imagen, Live |
| Perplexity | Perplexity AI | Comet, Agent Mode, Pro Search, Focus, Collections, Voice |
| Grok | xAI | Chat, Aurora, DeepSearch, Think Mode, Voice |
| Local Models | Various | Llama, Mistral, DeepSeek, Qwen, Codestral |
What each entry tracks
- Plan-by-plan availability — exactly which tier unlocks each feature
- Platform support — Windows, macOS, Linux, iOS, Android, web, terminal, API
- Status — GA, Beta, Preview, Deprecated
- Talking points — presenter-ready sentences with click-to-copy
- Source links — every data point backed by a citation
Dashboard features
- Filter by category — Voice, Coding, Research, Agents, and more
- Filter by price tier — find what's available at your budget
- Provider toggles — focus on specific platforms
- Permalinks & shareable URLs — filter state preserved in URL parameters
- Dark/light mode
- WCAG 2.1 AA accessibility — full keyboard navigation, skip links, ARIA live regions, reduced motion support, 4.5:1 contrast minimums
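Keeping filter state in URL parameters is what makes the permalinks shareable. A minimal sketch of the round trip (parameter names here are assumptions, not necessarily the ones the dashboard uses):

```javascript
// Minimal sketch of round-tripping dashboard filter state through the URL.
// Parameter names ("category", "tier", "providers") are illustrative.
function filtersToQuery(filters) {
  const params = new URLSearchParams();
  if (filters.category) params.set("category", filters.category);
  if (filters.tier) params.set("tier", filters.tier);
  if (filters.providers?.length) params.set("providers", filters.providers.join(","));
  return params.toString();
}

function queryToFilters(queryString) {
  const params = new URLSearchParams(queryString);
  return {
    category: params.get("category") ?? null,
    tier: params.get("tier") ?? null,
    providers: params.get("providers")?.split(",") ?? [],
  };
}
```

Because the state lives entirely in the query string, any filtered view can be bookmarked or pasted into chat and will reproduce exactly.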
Automated verification system
Feature data is kept current by a multi-model verification pipeline:
- Multi-model cascade — queries Gemini, Perplexity, Grok, and Claude in parallel
- Bias prevention — skips same-provider models (won't ask Gemini about Google features)
- Consensus required — 3 models must agree before flagging a change
- Auto-changelog — confirmed changes logged to each feature's record
- Human review gate — creates issues/PRs, never auto-merges
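The bias-prevention and consensus rules above can be sketched roughly as follows. This is an illustration of the decision logic, not the actual script's API; the function and field names are assumptions:

```javascript
// Rough sketch of the verification cascade's decision rule.
// `verdicts` would come from querying each model in parallel.
const MODEL_PROVIDER = { gemini: "google", grok: "xai", claude: "anthropic" };

function shouldFlagChange(featureVendor, verdicts, minConsensus = 3) {
  // Bias prevention: drop verdicts from models made by the feature's vendor.
  const unbiased = verdicts.filter(
    (v) => MODEL_PROVIDER[v.model] !== featureVendor.toLowerCase()
  );
  // Consensus: only flag when enough independent models agree a change happened.
  const agreeing = unbiased.filter((v) => v.changed).length;
  return agreeing >= minConsensus;
}
```

With this rule, a Google feature change reported by Gemini plus two other models would not be flagged: Gemini's verdict is discarded, leaving only two independent agreements.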
```bash
# Run verification
node scripts/verify-features.js

# Check a single platform
node scripts/verify-features.js --platform claude

# Stale-only check (>30 days)
node scripts/verify-features.js --stale-only
```
For contributors
Data lives in data/platforms/ as plain markdown files. Adding or correcting a feature is a one-file edit + PR:
```markdown
## Feature Name

| Property | Value |
|----------|-------|
| Category | agent |
| Status | ga |

### Availability

| Plan | Available | Limits | Notes |
|------|-----------|--------|-------|
| Free | ❌ | — | Not available |
| Plus | ✅ | 40/mo | Message limit |

### Talking Point

> "Your presenter-ready sentence with **key details bolded**."

### Sources

- [Official docs](https://example.com)
```
See [`data/_schema.md`](data/_schema.md) for the full spec and [`CONTRIBUTING.md`](CONTRIBUTING.md) to get started.
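A contribution can be sanity-checked locally before opening a PR. Here is a minimal sketch of pulling the availability rows out of a feature file; this is not the repo's actual validator, just an assumption of how one could work:

```javascript
// Minimal sketch: extract plan availability rows from a feature's
// "### Availability" markdown table. Not the repo's real parser.
function parseAvailability(markdown) {
  const section = markdown.split("### Availability")[1];
  if (!section) return [];
  const rows = [];
  for (const line of section.split("\n")) {
    if (line.trim().startsWith("#")) break; // reached the next section
    const cells = line.split("|").map((c) => c.trim()).filter(Boolean);
    // Skip the header and separator rows; keep real data rows.
    if (cells.length < 2 || cells[0] === "Plan" || /^-+$/.test(cells[0])) continue;
    rows.push({ plan: cells[0], available: cells[1] === "✅" });
  }
  return rows;
}
```

A check like this could, for example, assert that every feature file lists a `Free` row, so availability gaps are caught before review.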
---
## Local development
```bash
git clone https://github.com/snapsynapse/ai-feature-tracker.git
cd ai-feature-tracker
node scripts/build.js
open docs/index.html
```

The site auto-deploys via GitHub Actions on every push to main.