Radio Nemiers 1.0.1

LLM Access

Feed the documentation to AI assistants and coding agents as machine-readable text.

These docs are also served as machine-readable text so an LLM-backed tool (Claude, ChatGPT, Cursor, your own RAG pipeline) can consume them without HTML parsing.

The site is public — no auth, no token, no headers required.

Endpoints

Three endpoints expose the same docs in three shapes:

| Endpoint | Returns | Use it for |
| --- | --- | --- |
| `/llms.txt` | Page index — every doc title and URL | Discovery / link maps |
| `/llms-full.txt` | All pages concatenated as one markdown blob | One-shot context dump |
| `/<path>.md` | Single page as markdown (`/getting-started.md`, …) | Pulling a specific page |

The /<path>.md form mirrors the live site. Any URL you can open in the browser has a .md twin you can hand to an agent.
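That mapping can be sketched as a small helper. `mdTwin` is a hypothetical name, and the rule it encodes (strip any trailing slash, append `.md`) is inferred from the pattern above, not a documented API:

```typescript
// Hypothetical helper: derive the .md twin of a live docs URL.
// The strip-slash-then-append-".md" rule is an assumption based on
// the /<path>.md pattern described above.
function mdTwin(pageUrl: string): string {
  const url = new URL(pageUrl);
  const path = url.pathname.replace(/\/$/, "");
  return `${url.origin}${path}.md`;
}

// mdTwin("https://docs.radionemiers.com/getting-started")
//   → "https://docs.radionemiers.com/getting-started.md"
```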

Examples

```bash
# index
curl https://docs.radionemiers.com/llms.txt

# everything
curl https://docs.radionemiers.com/llms-full.txt

# one page
curl https://docs.radionemiers.com/getting-started.md
```

Feeding Claude / ChatGPT manually

Run the llms-full.txt curl, pipe the output into a file, drag it into the chat as context. The whole doc set is one markdown blob optimised for model intake.

Cursor, Continue, Cline, etc.

Most coding assistants accept a URL as a documentation source. Point them at https://docs.radionemiers.com/llms-full.txt — no auth configuration needed.

Programmatic RAG ingestion

```js
const res = await fetch("https://docs.radionemiers.com/llms-full.txt");
const markdown = await res.text();
// chunk, embed, store...
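The `chunk` step above can be sketched as a heading-aware splitter. `chunkMarkdown` and the size cap are illustrative choices for this page, not part of any shipped API:

```typescript
// Illustrative sketch: split the llms-full.txt blob into chunks at
// markdown headings, falling back to paragraph groups when a single
// section exceeds maxChars. chunkMarkdown is a hypothetical name.
function chunkMarkdown(markdown: string, maxChars = 2000): string[] {
  // Split before every ATX heading so each section stays intact.
  const sections = markdown.split(/\n(?=#{1,6} )/);
  const chunks: string[] = [];
  for (const section of sections) {
    if (section.length <= maxChars) {
      if (section.trim()) chunks.push(section);
      continue;
    }
    // Oversized section: accumulate paragraphs up to the cap.
    let buf = "";
    for (const para of section.split(/\n\n+/)) {
      if (buf && buf.length + para.length + 2 > maxChars) {
        chunks.push(buf);
        buf = "";
      }
      buf = buf ? `${buf}\n\n${para}` : para;
    }
    if (buf.trim()) chunks.push(buf);
  }
  return chunks;
}
```

Each chunk then goes through your embedding model and into the vector store of your choice.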

Caching

The endpoints are statically rendered (revalidate = false), so each deploy produces a fresh snapshot. If you cache the response on your side, invalidate on every docs release.
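One way to handle that on the client is a conditional GET against a locally cached snapshot. Whether the host actually emits `ETag` headers depends on the hosting platform, so treat this as a sketch under that assumption:

```typescript
// Sketch: revalidate a cached snapshot with a conditional GET.
// ETag support is an assumption about the hosting platform here,
// not something these docs guarantee.
type Cached = { etag: string; body: string };

async function fetchDocs(
  cached: Cached | null,
  fetchImpl: typeof fetch = fetch,
): Promise<Cached> {
  const headers: Record<string, string> = {};
  if (cached) headers["If-None-Match"] = cached.etag;

  const res = await fetchImpl("https://docs.radionemiers.com/llms-full.txt", { headers });
  if (res.status === 304 && cached) return cached; // snapshot unchanged

  return { etag: res.headers.get("etag") ?? "", body: await res.text() };
}
```

`fetchImpl` is injectable only so the sketch is easy to exercise without a network; in practice you would call `fetchDocs(previous)` on each docs release.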

What's included

Only pages under content/docs/ are exposed. Drafts, internal notes, and source code never reach these endpoints — what you see in the sidebar is exactly what an LLM gets.
