cronpop / the prompt layer v3 · 2026
→ prompts as endpoints

the prompt layer
for your apps.

Write a prompt, get an endpoint. Call it from anywhere: no provider keys in your frontend, with versioning, logging, and rate limiting built in.

stop shipping your openai key to the client.

// before

Your app talks to the model directly.

// in your react app
fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${OPENAI_KEY}`, // shipped to every visitor's browser
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'gpt-5',
    messages: [{ /* hardcoded */ }]
  })
})
// after

Your app talks to your endpoint.

// same react app, safer
fetch('https://api.cronpop.com/e/abc123', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${SCOPED_TOKEN}`, // can only invoke this one prompt
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    input: { topic, length }
  })
})

from idea → endpoint
in four clicks.

01

Write the prompt.

Template with typed variables. Live preview. Test against real models in the playground.

02

Set the schema.

Input shape + expected output shape. Structured JSON, validated, retried on failure.
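The input side of that check can be sketched client-side. This is a hypothetical mirror of what the server validates, assuming a flat map of field names to JS types — cronpop's actual schema format isn't shown here:

```javascript
// Hypothetical declared input shape for the endpoint above.
const inputSchema = { topic: 'string', length: 'number' };

// Minimal structural check: every declared field present with the right type.
function validateInput(schema, input) {
  return Object.entries(schema).every(
    ([key, type]) => typeof input[key] === type
  );
}

validateInput(inputSchema, { topic: 'ai', length: 300 }); // valid
validateInput(inputSchema, { topic: 'ai' });              // rejected: length missing
```

The real service validates the model's *output* against the expected shape too, and retries on failure — that half lives server-side.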

03

Grab the endpoint.

Hash-based URL, scoped token. Rotate, revoke, rate-limit per token.

04

Ship it.

Call from your app, your backend, your Zapier. Every invocation logged.
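In practice you'd wrap the call once so app code never handles the URL or token directly. A minimal sketch — the endpoint URL and `{ output }` response shape are assumptions carried over from the snippet above, and `fetchImpl` is injectable so the helper can be tested without a network:

```javascript
// Hypothetical helper: one place for the endpoint URL, token, and error handling.
async function invokePrompt(input, { token, fetchImpl = fetch } = {}) {
  const res = await fetchImpl('https://api.cronpop.com/e/abc123', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ input })
  });
  if (!res.ok) throw new Error(`endpoint returned ${res.status}`);
  return res.json();
}
```

The same helper works in a React app, a Node backend, or a serverless function.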

everything you wish your prompts
already had.

01 / SAFETY

Scoped tokens.

A leaked token can only invoke that one prompt. Not your provider account. Rotate in seconds.

02 / VERSIONING

Every prompt has a history.

v1, v2, v3. Pin a version. Roll back in a click. Git-for-prompts, out of the box.
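From the caller's side, pinning could look like adding a version to the URL. The `?v=` query parameter below is purely a hypothetical shape for illustration, not documented cronpop syntax:

```javascript
// Hypothetical: build an endpoint URL pinned to a specific prompt version.
function endpointUrl(id, version) {
  const base = `https://api.cronpop.com/e/${id}`;
  return version ? `${base}?v=${version}` : base; // no version → track latest
}

endpointUrl('abc123', 2); // pinned to v2
endpointUrl('abc123');    // always the newest version
```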

03 / PROVIDER-AGNOSTIC

Hot-swap the model.

Switch models without touching client code. Price/quality tuning becomes a UI setting.

04 / OBSERVABILITY

Logs, costs, latency.

Every request captured. Every token counted. Every p95 graphed, no extra tools.

05 / CONTROL

Rate limits & quotas.

Per-token, per-endpoint, per-minute. Expose prompts publicly without donating credits to bots.
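On the caller's side, a hit limit surfaces as HTTP 429, so clients typically back off and retry. A minimal, generic sketch — nothing cronpop-specific, `callFn` stands in for any endpoint call:

```javascript
// Retry a call that may be rate-limited, with exponential backoff.
async function withRetry(callFn, { retries = 3, baseDelayMs = 200 } = {}) {
  for (let attempt = 0; ; attempt++) {
    const res = await callFn();
    if (res.status !== 429 || attempt >= retries) return res;
    // Backoff doubles each attempt: 200ms, 400ms, 800ms, ...
    await new Promise(r => setTimeout(r, baseDelayMs * 2 ** attempt));
  }
}
```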

06 / SIMPLE

One flat price per call.

Pay-as-you-go. No subscriptions. Start with 100 free credits on signup.

the prompt is the
new endpoint.

Claim your 100 free credits →