Four audiences, one framework

Most frameworks serve two audiences: browser and API. Forge serves four. The reading layer and the operations layer are complementary -- and that is what makes Forge AI-native rather than AI-compatible.

Most frameworks are built for two audiences: a browser that wants HTML, and an API client that wants JSON. The same URL, different Accept headers, different responses.

That model made sense when the only agents reading your content were humans and crawlers. It does not cover what is happening now.

Forge is built for four audiences.

Browser       → HTML
API client    → JSON
AI reading    → /llms.txt, /{slug}.aidoc, /llms-full.txt
AI operating  → MCP tools

The reading layer and the operations layer are complementary, not overlapping. An agent that reads your content to answer a question uses different endpoints than an agent that creates, publishes, or archives content. Both are first-class.
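To make the four-way split concrete, here is a minimal routing sketch. The function and return labels are illustrative, not Forge's actual API; the point is that the reading layer hangs off dedicated paths while browser and API clients share URLs and negotiate on the Accept header. The fourth audience, AI operating, arrives via MCP tools rather than HTTP content negotiation, so it does not appear in this router.

```python
def route(path: str, accept: str) -> str:
    """Return which surface serves a given request (illustrative labels)."""
    # AI reading layer: dedicated paths, independent of Accept headers.
    if path == "/llms.txt":
        return "ai-reading:index"
    if path == "/llms-full.txt":
        return "ai-reading:full-corpus"
    if path.endswith(".aidoc"):
        return "ai-reading:single-item"
    # Browser vs. API client: same URL, negotiated on the Accept header.
    if "application/json" in accept:
        return "api:json"
    return "browser:html"  # text/html, or no Accept header at all
```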

This is what AI-native means in practice -- not "we added an AI feature," but "we designed for AI agents as a distinct audience from the start."

The reading layer

Three formats. Three agent contexts. Each matched to a real use case.

Format            The question the agent is asking       Data volume
/llms.txt         "What exists on this site?"            Small -- one line per item
/{slug}.aidoc     "Give me this one piece of content"    Medium -- one article
/llms-full.txt    "Give me everything"                   Large -- full corpus

A research agent indexing your site does not need to fetch every article. A question-answering agent responding to one query does not need the full corpus. The design matches how agents actually work, not a hypothetical general case.

/llms.txt

Compact content index in llmstxt.org format. One line per published item: title, URL, summary. An agent can determine what your site contains and which URLs to follow without fetching any content pages.
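An agent consuming this index only needs to extract the link-list lines. The sketch below parses the `- [Title](url): summary` lines defined by the llmstxt.org format; the sample content is invented for illustration.

```python
import re

# One llmstxt.org link-list entry: "- [Title](url): summary".
LINE = re.compile(r"-\s*\[(?P<title>[^\]]+)\]\((?P<url>[^)]+)\)(?::\s*(?P<summary>.*))?")

def parse_llms_txt(text: str) -> list[dict]:
    """Extract title/URL/summary triples from an llms.txt index."""
    items = []
    for line in text.splitlines():
        m = LINE.match(line.strip())
        if m:
            items.append({
                "title": m.group("title"),
                "url": m.group("url"),
                "summary": m.group("summary") or "",
            })
    return items

# Hypothetical index for illustration.
sample = """# Example Site
> Short description of the site.

## Posts
- [Hello World](https://example.com/hello-world.aidoc): First post.
"""
items = parse_llms_txt(sample)
```

From this one small file, the agent knows what exists and which URLs to follow, without fetching any content pages.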

/{slug}.aidoc

Token-efficient format for a single content item. Contains: title, URL, summary, and markdown body. Returned as text/plain. Curated for what an agent needs to understand and cite content -- not a raw dump of all database fields.

This is what distinguishes .aidoc from application/json on the same URL. JSON gives you the full API representation, including internal fields. .aidoc is curated for agent consumption: enough to understand, enough to cite, no more.
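The curation step can be sketched as a projection: the full record (with internal fields) goes in, and only the agent-facing fields come out as plain text. The field set (title, URL, summary, markdown body) is taken from the description above; the exact .aidoc layout and the record shape here are assumptions for illustration.

```python
def to_aidoc(record: dict) -> str:
    """Project a full content record down to the agent-facing fields."""
    return "\n".join([
        f"# {record['title']}",
        f"URL: {record['url']}",
        "",
        record["summary"],
        "",
        record["body_markdown"],
    ])

# Hypothetical record: the last three fields are what a JSON API client
# would see, and what .aidoc deliberately leaves out.
record = {
    "title": "Hello World",
    "url": "https://example.com/hello-world",
    "summary": "A first post.",
    "body_markdown": "Welcome to the site.",
    "id": 42,
    "revision": 7,
    "author_id": "u_123",
}
```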

/llms-full.txt

Full markdown corpus of all published content. One request, everything. For agents that need to reason across the full body of your content rather than navigate it item by item.
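Conceptually, /llms-full.txt is just the published items concatenated into one response. This sketch assumes a simple separator between items; the real delimiter and item layout are Forge's choice, not shown here.

```python
def build_llms_full(items: list[dict]) -> str:
    """Join every published item's markdown into one corpus."""
    sections = [
        f"# {it['title']}\n\n{it['body_markdown']}"
        for it in items
        if it.get("published")
    ]
    return "\n\n---\n\n".join(sections)  # separator is an assumption

# Hypothetical content set: drafts stay out of the corpus.
posts = [
    {"title": "A", "body_markdown": "Alpha.", "published": True},
    {"title": "B", "body_markdown": "Beta.", "published": False},
    {"title": "C", "body_markdown": "Gamma.", "published": True},
]
corpus = build_llms_full(posts)
```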

The operations layer

The reading layer covers agents that consume your content. The operations layer covers agents that manage it.

That is forge-mcp -- described in detail in D7. The short version: every content type you define generates a complete set of typed MCP tools. The agent creates, updates, publishes, and archives content through the same access rules as any other authenticated operator.

The two layers are designed to work together. An agent can read your content index via /llms.txt, identify what needs updating, fetch the relevant item via /{slug}.aidoc, and then update it via the MCP tool -- all in one session, all within the access boundaries you set.
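That read-then-operate loop can be sketched end to end with stubbed transport. Everything here is hypothetical: `fetch` stands in for an HTTP client, and `update_post` stands in for one of the generated MCP tools, whose real names come from your content-type definitions.

```python
def fetch(path: str) -> str:
    # Stub for an HTTP GET against the site; swap in a real client.
    site = {
        "/llms.txt": "- [Old Post](https://example.com/old-post.aidoc): Outdated pricing.",
        "/old-post.aidoc": "# Old Post\nPricing: $10/mo",
    }
    return site[path]

def update_post(slug: str, body: str) -> dict:
    # Stub for the MCP tool call; in Forge this runs under the same
    # access rules as any other authenticated operator.
    return {"slug": slug, "status": "updated", "body": body}

# 1. Read the index and spot what needs updating.
index = fetch("/llms.txt")
# 2. Fetch just the relevant item, not the full corpus.
doc = fetch("/old-post.aidoc")
# 3. Push the fix through the operations layer.
result = update_post("old-post", doc.replace("$10/mo", "$12/mo"))
```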

What browser and API clients get

The browser and API paths have not changed. Accept: text/html (or no header) returns a rendered HTML page. Accept: application/json returns the structured JSON representation. These are handled by the same routes that serve all other content -- no duplication.

The AI-specific formats are additive, not a replacement for anything that was already there.