LLM Access
Feed the documentation to AI assistants and coding agents as machine-readable text.
Every page on this site is also served as plain markdown, so an LLM-backed tool (Claude, ChatGPT, Cursor, your own RAG pipeline) can consume the docs without HTML parsing.
The site is public — no auth, no token, no headers required.
Endpoints
Three endpoints expose the same docs in three shapes:
| Endpoint | Returns | Use it for |
|---|---|---|
| `/llms.txt` | Page index — every doc title and URL | Discovery / link maps |
| `/llms-full.txt` | All pages concatenated into one markdown blob | One-shot context dump |
| `/<path>.md` | Single page as markdown (`/getting-started.md`, …) | Pulling a specific page |
The /<path>.md form mirrors the live site. Any URL you can open in
the browser has a .md twin you can hand to an agent.
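As a sketch of that mirroring, turning a live docs URL into its `.md` twin is a plain string rewrite. The helper name below is ours, not part of the site, and the root page (`/`) is deliberately out of scope:

```javascript
// Hypothetical helper: map a live docs URL to its markdown twin,
// following the /<path>.md convention described above.
function markdownTwin(pageUrl) {
  const url = new URL(pageUrl);
  // Drop any trailing slash; the root page ("/") is not handled here.
  const path = url.pathname.replace(/\/+$/, "");
  return `${url.origin}${path}.md`;
}

console.log(markdownTwin("https://docs.radionemiers.com/getting-started"));
// → https://docs.radionemiers.com/getting-started.md
```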
Examples
```
curl https://docs.radionemiers.com/llms.txt
curl https://docs.radionemiers.com/llms-full.txt
curl https://docs.radionemiers.com/getting-started.md
```

Feeding Claude / ChatGPT manually
Run the llms-full.txt curl, pipe the output into a file, and drag that
file into the chat as context. The whole doc set is one markdown blob
optimised for model intake.
Cursor, Continue, Cline, etc.
Most coding assistants accept a URL as a documentation source. Point
them at https://docs.radionemiers.com/llms-full.txt — no auth
configuration needed.
Programmatic RAG ingestion
```js
const res = await fetch("https://docs.radionemiers.com/llms-full.txt");
const markdown = await res.text();
// chunk, embed, store...
```

Caching
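The "chunk, embed, store" step is left open above. A minimal fixed-size chunker with overlap might look like the sketch below; the sizes are arbitrary illustrative values, not recommendations, and embedding/storage are omitted:

```javascript
// Sketch: split the markdown blob into overlapping fixed-size chunks.
// chunkSize and overlap are arbitrary defaults, tune for your embedder.
function chunkMarkdown(markdown, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  const step = chunkSize - overlap;
  for (let start = 0; start < markdown.length; start += step) {
    chunks.push(markdown.slice(start, start + chunkSize));
  }
  return chunks;
}
```

Overlap keeps a sentence that straddles a chunk boundary fully visible in at least one chunk, which helps retrieval quality at the cost of some duplicated tokens.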
The endpoints are statically rendered (`revalidate = false`), so each
deploy produces a fresh snapshot. If you cache the response on your
side, invalidate it on every docs release.
What's included
Only pages under `content/docs/` are exposed. Drafts, internal notes,
and source code never reach these endpoints — what you see in the
sidebar is exactly what an LLM gets.