HashRouter fragments aren't crawlable, so every route showed identical meta tags and sitemap entries. Switch to BrowserRouter for clean URLs; public/404.html stashes the intended pathname in sessionStorage and redirects to '/', which main.tsx replays before the router boots. .nojekyll disables Jekyll processing so dotfiles (e.g. .well-known) ship in the Pages artifact.
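The handshake can be sketched as below. This is illustrative, not the shipped code: the storage key name and both helper names are assumptions, and the real logic lives inline in public/404.html and in main.tsx.

```typescript
// Illustrative sketch of the 404.html -> main.tsx handshake; the key
// name "spa-redirect" and both function names are assumptions.

// Inlined in public/404.html: compute the full requested location to
// stash in sessionStorage before bouncing to "/".
function redirectTarget(loc: { pathname: string; search: string; hash: string }): string {
  return loc.pathname + loc.search + loc.hash;
}

// Run in main.tsx before the router mounts: if a target was stashed,
// replay it (history.replaceState in the browser) so BrowserRouter
// boots on the URL the visitor actually requested.
function replayRedirect(stored: string | null, current: string): string {
  return stored && stored !== current ? stored : current;
}
```

In the browser these wrap `sessionStorage.setItem`/`getItem` and `history.replaceState`; the pure versions above only show the path round-trip.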
Add src/components/SEO.tsx, a small runtime component each page renders to update <title>, description/keywords/robots meta, canonical, Open Graph, Twitter cards, and a page-specific JSON-LD block. Wire it into all six pages with route-specific copy (TechArticle for docs pages, SoftwareSourceCode for the landing page). Enrich index.html with a site-wide @graph JSON-LD (Organization, WebSite, SoftwareSourceCode, TechArticle), canonical, theme-color, font preconnect, and a <noscript> pointer to the machine-readable surfaces for non-JS crawlers.
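The head updates the component performs can be sketched framework-free as below; the `PageSeo` shape, selectors, and `headUpdates` name are assumptions, not the actual SEO.tsx API, and a React effect would apply each entry via `document.title` or `setAttribute`.

```typescript
// Not the actual SEO.tsx: a framework-free sketch of the per-page
// document-head mutations it performs. Field names are assumptions.
interface PageSeo {
  title: string;
  description: string;
  canonical: string;
  jsonLdType: "TechArticle" | "SoftwareSourceCode";
}

// Compute the mutations for one route; each selector targets a tag
// already present in the document head.
function headUpdates(seo: PageSeo): Array<{ selector: string; value: string }> {
  return [
    { selector: "title", value: seo.title },
    { selector: 'meta[name="description"]', value: seo.description },
    { selector: 'link[rel="canonical"]', value: seo.canonical },
    { selector: 'meta[property="og:title"]', value: seo.title },
    { selector: 'meta[name="twitter:title"]', value: seo.title },
    {
      selector: 'script[type="application/ld+json"]',
      value: JSON.stringify({
        "@context": "https://schema.org",
        "@type": seo.jsonLdType,
        headline: seo.title,
      }),
    },
  ];
}
```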
Add a Vite plugin that mirrors spec/SPECIFICATION.md and spec/grammar/kndl.ebnf from the repo root into /spec/* on both dev and build, and regenerates /llms-full.txt (spec + EBNF + example index) at build time so agents can slurp everything in one request. Ship the discovery stack under public/:

- /llms.txt — concise index in the llmstxt.org format
- /robots.txt — explicit allows for GPTBot, ClaudeBot, PerplexityBot, Google-Extended, CCBot, Applebot-Extended, Meta-ExternalAgent, et al.
- /sitemap.xml — all six SPA routes plus the raw machine-readable URLs
- /.well-known/security.txt — security contact per securitytxt.org
- /examples/*.kndl — eight curated snippets (basic-building, intent-overheat, process-shipment, query-aggregation, healthcare-observation, fintech-transaction, robotics-pose, logistics-trace) plus index.md and a standalone index.html so /examples/ returns 200

The Vite plugin needs Node types, so add @types/node and set tsconfig.node.json "types": ["node"].
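A minimal sketch of the build-time aggregation, assuming section headers and names of my own invention (`buildLlmsFullTxt`, the plugin name, the bundle title); the real plugin reads the files from the repo root and also serves them on /spec/* in dev.

```typescript
// Sketch only: concatenate the spec, grammar, and example index into
// the single /llms-full.txt payload. Headers and names are assumptions.
function buildLlmsFullTxt(spec: string, ebnf: string, exampleIndex: string): string {
  return [
    "# KNDL: full machine-readable bundle",
    "",
    "## Specification",
    spec.trim(),
    "",
    "## Grammar (EBNF)",
    ebnf.trim(),
    "",
    "## Example index",
    exampleIndex.trim(),
    "",
  ].join("\n");
}

// Minimal Vite plugin shape: the dev middleware and asset emission are
// left as comments since they need the Vite/Rollup runtime.
function kndlSpecPlugin() {
  return {
    name: "kndl-spec-mirror",
    // configureServer(server) { /* middleware serving /spec/* from the repo root */ }
    // generateBundle() { this.emitFile({ type: "asset", fileName: "llms-full.txt", source: buildLlmsFullTxt(...) }) }
  };
}
```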
GitHub Pages has no server-side SPA fallback, so direct hits on /spec, /spec/full, /workflow, /mcp, /explorer would return the 404.html with a 404 status — poor for SEO even though the content renders after the JS redirect. Stamp one HTML shell per route at build time so each URL is served with status 200 and the right <title>, <meta>, canonical, Open Graph, Twitter cards, and route-specific JSON-LD already in the markup. At runtime the <SEO> component finds the same tags (via data-seo selectors) and overwrites them in place with matching values, so there is no duplication or flash of old meta. Wire as the final `build` step and a standalone `prerender` script.
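The stamping step can be sketched as a string transform over the built index.html; the `data-seo` attribute values and the `stampRoute` name here are assumptions, but the idea matches the description: the prerender script writes into the same marked slots the runtime `<SEO>` component later queries.

```typescript
// Sketch of per-route stamping over the built HTML shell. Attribute
// names (data-seo="title" etc.) are assumptions; the runtime component
// would query the same selectors and overwrite the values in place.
function stampRoute(
  template: string,
  route: { title: string; description: string; canonical: string }
): string {
  return template
    .replace(
      /<title data-seo="title">.*?<\/title>/,
      `<title data-seo="title">${route.title}</title>`
    )
    .replace(
      /(<meta data-seo="description"[^>]*content=")[^"]*(")/,
      `$1${route.description}$2`
    )
    .replace(
      /(<link data-seo="canonical"[^>]*href=")[^"]*(")/,
      `$1${route.canonical}$2`
    );
}
```

The real script would run this once per route and write the result to e.g. dist/spec/index.html so Pages serves it with status 200.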