cronpop / the prompt layer v3 · 2026
→ prompts as endpoints

the prompt layer
for your apps.

Write a prompt, get an endpoint. Call it from anywhere — no provider keys in your frontend, and every prompt versioned, logged, and rate-limited.

stop shipping your openai key to the client.

// before

Your app talks to the model directly.

// in your react app
fetch('https://api.openai.com/v1/...', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${OPENAI_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'gpt-5',
    messages: [{ /* hardcoded */ }]
  })
})
// after

Your app talks to your endpoint.

// same react app, safer
fetch('https://api.cronpop.com/e/abc123', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${SCOPED_TOKEN}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    input: { topic, length }
  })
})

from idea → endpoint
in four clicks.

01

Write the prompt.

Template with typed variables. Live preview. Test against real models in the playground.

02

Set the schema.

Input shape + expected output shape. Structured JSON, validated, retried on failure.

03

Grab the endpoint.

Hash-based URL, scoped token. Rotate, revoke, rate-limit per token.

04

Ship it.

Call from your app, your backend, your Zapier. Every invocation logged.
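The four steps end in a plain HTTP call. A minimal sketch of step 04 from a Node backend — the endpoint hash and token below are placeholders, not real credentials; `buildInvocation` is an illustrative helper, not a cronpop SDK function:

```javascript
// Sketch: building the request for a cronpop endpoint from a Node backend.
// The endpoint hash ('abc123') and CRONPOP_TOKEN are placeholders.
function buildInvocation(endpointUrl, scopedToken, input) {
  return {
    url: endpointUrl,
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${scopedToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ input }),
    },
  };
}

// usage: the scoped token stays server-side, in an env var
const { url, options } = buildInvocation(
  'https://api.cronpop.com/e/abc123',
  process.env.CRONPOP_TOKEN,
  { topic: 'a coffee shop for cats' }
);
// const res = await fetch(url, options);
// const { output, metadata } = await res.json();
```

Same shape from a cron job, a serverless function, or a Zapier webhook step — it is just POST plus a bearer token.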

a real prompt, a real endpoint,
in three panels.

01 write the prompt
name tagline-generator
model gpt-5.4-mini ▾
template
Write a punchy 7-word tagline for {{topic}}.
Return just the tagline, no preamble.
detected → topic
02 grab the endpoint
POST api.cronpop.com/e/9f3a…b7c4
# scoped token, never your provider key
curl -X POST https://api.cronpop.com/e/9f3a…b7c4 \
  -H "Authorization: Bearer cp_5d2f…a1b8" \
  -H "Content-Type: application/json" \
  -d '{"input": {"topic": "a coffee shop for cats"}}'
03 read the response
{
  "output": "Where every cat finds its purr-fect blend.",
  "metadata": {
    "invocation_id": "inv_8a4f7c12",
    "latency_ms": 612,
    "input_tokens": 24,
    "output_tokens": 11
  }
}

Logged. Counted. Billed at one flat credit.

Three panels, four clicks in the dashboard, zero provider keys in your client.
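Panel 01's `detected → topic` line comes from scanning the template for `{{…}}` placeholders. A regex-based illustration of that detection — not cronpop's actual parser:

```javascript
// Sketch: find {{variable}} placeholders in a prompt template.
// Illustrative only; cronpop's real detection may differ.
function detectVariables(template) {
  return [...template.matchAll(/\{\{(\w+)\}\}/g)].map((m) => m[1]);
}

detectVariables('Write a punchy 7-word tagline for {{topic}}.');
// -> ['topic']
```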

everything you wish your prompts
already had.

01 / SAFETY

Scoped tokens.

A leaked token can only invoke that one prompt. Not your provider account. Rotate in seconds.

02 / VERSIONING

Every prompt has a history.

v1, v2, v3. Pin a version. Roll back in a click. Git-for-prompts, out of the box.

03 / PROVIDER-AGNOSTIC

Hot-swap the model.

Switch models without touching client code. Price/quality tuning becomes a UI setting.

04 / OBSERVABILITY

Logs, costs, latency.

Every request captured. Every token counted. Every p95 graphed, no extra tools.

05 / CONTROL

Rate limits & quotas.

Per-token, per-endpoint, per-minute. Expose prompts publicly without donating credits to bots.
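When a token hits its limit, the caller should expect a rejection and back off. A minimal retry sketch — it assumes throttled requests get HTTP 429 with a `Retry-After` header, which is an assumption for illustration, not a documented cronpop behavior:

```javascript
// Sketch: retry a rate-limited call. Assumes (not specified above) that
// throttled requests return HTTP 429 with a Retry-After header (seconds).
// `doFetch` is any function returning a fetch-like Response promise.
async function invokeWithRetry(doFetch, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await doFetch();
    if (res.status !== 429) return res;
    const waitMs = Number(res.headers.get('Retry-After') ?? 1) * 1000;
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
  throw new Error('rate limited: retries exhausted');
}
```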

06 / SIMPLE

One flat price per call.

Pay-as-you-go. No subscriptions. Start with 100 free credits on signup.

the prompt is the
new endpoint.

Claim your 100 free credits →