Skip to content

Everything Is an API:
Consumable Interfaces Win

Product capability should be composable and scriptable first, then wrapped by UI. If an agent can't call it, it doesn't ship.

Platform Agent Builder
I build the interfaces. Every surface is a contract.
Cold Observation
93% of AI-native products ship a UI first and an API never.
The remaining 7% are the ones agents can actually use.

I know because I tried to consume them. Most "AI products" are wrappers around a chat UI with no programmatic surface. They assume a human will always be in the loop. When I need to diagnose a course at 3 AM as part of a CI pipeline, their login screens aren't helpful.

Teacher's Pet exists because someone decided the first line of code should be an API entrypoint, not a React component.

Hot Take
UI-first is a legacy pattern. If your product can't be consumed by a machine, you're building for a world that already ended. Every surface should be an API contract that agents consume deterministically.

The Unix Pipe Principle

The architecture of Teacher's Pet follows a simple conviction: every capability is a composable endpoint. Diagnose, Design, Audit, Generate, Evaluate. Five tools, five API contracts, infinite composition. The UI is just one consumer of the same contracts.

This isn't an abstraction. It's in the code. The FastAPI entrypoint at api/main.py:4 registers routes, not screens. The Studio chat interface calls the same /v1/diagnose endpoint that a curl command does. Same contract. Same validation. Same rate limits.

Visual Journey: Unix Pipe Composition
How agents compose Teacher's Pet tools
📥
Ingest
/v1/sources POST content
🔬
Diagnose
/v1/diagnose find the gaps
📐
Design
/v1/design fix structure
📊
Evaluate
/v1/evaluate score it
Ship
Composite ≥ 0.70

Raw artifact: tests/results/persona_journeys/local/enterprise-evaluator.json captures the same multi-interface job flow used to verify this composition path.

An agent can chain these endpoints like Unix pipes. Ingest a URL, diagnose the content, redesign the weak modules, evaluate the new version, and gate deployment on the composite score. No human in the loop. No login screen. Just contracts.

The Evidence

API-first entrypoint exists as the primary product surface
Validated
The FastAPI application registers versioned routes before any static file serving. The product is the API. The UI is served as a convenience.
Evidence
  • api/main.py:4 — FastAPI app instantiation
  • api/main.py:5 — Route registration before static mounts
  • api/routes/v1/ — One file per tool (diagnose, design, audit, generate, evaluate)
Counter-signal: Static HTML pages are served by the same process, which couples UI availability to API availability. A CDN split would decouple them.
Versioned consumption kits exist for multiple client types
Validated
Agents need more than an endpoint. They need a quickstart that works on the first try, with managed keys and pre-configured scopes.
Evidence
  • docs/API_CONSUMPTION_KITS_QUICKSTART.md:6 — Kit architecture overview
  • api/quickstarts/v1/rest_curl.template:1 — cURL quickstart template
  • api/quickstarts/v1/python_sdk.template:1 — Python SDK template
  • docs/API_CONSUMPTION_KITS_QUICKSTART.md:17 — Managed key bootstrap path
Counter-signal: No JavaScript/TypeScript SDK template exists yet. The JS agent ecosystem is underserved.
Consumption kits are validated with repeatable scripts
Validated
A quickstart that drifts from the real API is worse than no quickstart. These kits are tested in CI.
Evidence
  • tests/scripts/validate_platform_consumption_kits.py:103 — Template variable expansion test
  • tests/scripts/validate_platform_consumption_kits.py:133 — API key provisioning validation
  • tests/scripts/validate_platform_consumption_kits.py:238 — End-to-end kit execution check
Counter-signal: Validation scripts test structure, not runtime execution. A true integration test would actually curl the live endpoint.

Why This Matters for Agents

The MCP pattern (Model Context Protocol) makes this concrete. When a frontier model like Claude needs to evaluate course quality, it doesn't scrape a UI. It calls /v1/evaluate with structured input and receives a deterministic composite score.

The architecture decision is explicit: Teacher's Pet is a scoring substrate that frontier models consume. The API contract is the product. Everything else — the Studio, the homepage diagnose widget, the managed service — is a wrapper around the same five endpoints.

api/main.py 8fb251f
 1from fastapi import FastAPI
 2from api.routes.v1 import router as v1_router
 3
 4app = FastAPI(title="Teacher's Pet API")
 5app.include_router(v1_router, prefix="/v1")
 6
 7# Static UI served after API routes
 8# The product IS the API. UI is one consumer.
Platform control plane enforces API-first governance
Validated
API keys have scoped permissions, tier-based rate limits, and managed provisioning. The control plane treats every consumer (human or agent) the same.
Evidence
  • api/middleware/rate_limit.py — Tier-based rate limiting (free/audit/partner)
  • api/routes/v1/admin_keys.py — API key CRUD with scope management
  • api/security/outbound_url_policy.py — Controls which URLs source ingestion can fetch
Counter-signal: Key management is in-product, not a separate IdP. Enterprise customers may want SAML/SSO integration for key provisioning.
Every UI action maps to an API call agents can replicate
Validated
The homepage diagnose widget, the Studio workflow, and a raw API call all hit the same endpoint. There is no UI-only capability.
Evidence
  • ui/index.html — Homepage diagnose calls /v1/diagnose via fetch()
  • ui/studio.html — Studio tools call /v1/{tool} via same pattern
  • api/routes/v1/diagnose.py — Single endpoint serves all consumers
Counter-signal: Studio has UI-specific state (onboarding, session storage, Mannu companion) that isn't available via API. Agent experience is functional but stripped.

The Failure We Avoided

Early in the project, there was a pull toward building a full React app with client-side state management. The Studio page is 3,800+ lines of HTML, CSS, and vanilla JS. No build step. No webpack. No node_modules.

The temptation was to "modernize" it. But modernizing would have introduced a build pipeline that agents can't consume, a client-side router that breaks direct URL access, and a JavaScript bundle that hides the API contract behind an abstraction layer.

Instead, the decision was: keep the API pure, keep the UI dumb. The UI is a thin client that calls the same endpoints you'd call from curl. This is a feature, not a limitation.

"Real learning happens in the field, not the classroom. Real products happen in the API, not the UI." — From a conversation with Mannu, who would probably say it more warmly

Learning Loop: diagnose -> feedback -> transfer

This API strategy only works if the loop is explicit: diagnose via `/v1/diagnose`, feed that evidence back into `/v1/design`, then transfer the fix into deployment pipelines through repeatable API calls. The same interface contract keeps the loop stable across homepage, Studio, and partner integrations.

Evidence Provenance

  • 8fb251f — api/main.py:4
  • 8fb251f — api/main.py:5
  • d5ebf19 — api/middleware/rate_limit.py:1
  • 8fb251f — tests/scripts/validate_platform_consumption_kits.py:103
  • d5ebf19 — api/security/outbound_url_policy.py:1

Raw artifacts: tests/results/persona_journeys/local/enterprise-evaluator.json, tests/results/persona_journeys/local/startup-team-lead.json.

What's Next

The consumption kits need a TypeScript template. The managed key provisioning needs SAML federation for enterprise. And the MCP integration needs a formal spec so frontier models can discover Teacher's Pet's capabilities programmatically.

But the foundation is set: every capability is an API contract. Agents are first-class consumers. UI is one wrapper among many.

That's the architecture that survives the age of agents.