the prompt layer
for your apps.
Write a prompt, get an endpoint. Call it from anywhere: no provider keys in your frontend, and every prompt versioned, logged, and rate-limited.
stop shipping your openai key to the client.
Your app talks to the model directly.
// in your react app
fetch('https://openai.com/v1/...', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${OPENAI_KEY}`, // provider key ships to every browser
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-5',
    messages: [{ /* hardcoded */ }],
  }),
})
Your app talks to your endpoint.
// same react app, safer
fetch('https://api.cronpop.com/e/abc123', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${SCOPED_TOKEN}`, // can only invoke this one prompt
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ input: { topic, length } }),
})
from idea → endpoint
in four clicks.
Write the prompt.
Template with typed variables. Live preview. Test against real models in the playground.
Set the schema.
Input shape + expected output shape. Structured JSON, validated, retried on failure.
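A minimal sketch of what a schema might look like — field names and syntax here are illustrative, not the actual schema format:

```javascript
// Hypothetical schema — names and shape are illustrative only.
const schema = {
  input: {
    topic:  { type: "string", required: true },
    length: { type: "number", default: 300 }, // target word count
  },
  output: {
    title: { type: "string" },
    body:  { type: "string" },
  },
};
// Outputs that fail validation against the output shape are retried.
```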
Grab the endpoint.
Hash-based URL, scoped token. Rotate, revoke, rate-limit per token.
Ship it.
Call from your app, your backend, your Zapier. Every invocation logged.
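The same call works server-side. A hedged sketch of building an invocation from Node — the URL shape, `buildInvocation` helper, and `CRONPOP_TOKEN` env var are placeholders, not confirmed API details:

```javascript
// Hypothetical helper: builds the request for a cronpop endpoint call.
function buildInvocation(endpointId, token, input) {
  return {
    url: `https://api.cronpop.com/e/${endpointId}`,
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ input }),
    },
  };
}

const req = buildInvocation("abc123", process.env.CRONPOP_TOKEN, {
  topic: "launch day",
  length: 300,
});
// pass to fetch(req.url, req.options) from any server, worker, or cron job
```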
everything you wish your prompts
already had.
Scoped tokens.
A leaked token can only invoke that one prompt. Not your provider account. Rotate in seconds.
Every prompt has a history.
v1, v2, v3. Pin a version. Roll back in a click. Git-for-prompts, out of the box.
Hot-swap the model.
Switch models without touching client code. Price/quality tuning becomes a UI setting.
Logs, costs, latency.
Every request captured. Every token counted. Every p95 graphed, no extra tools.
Rate limits & quotas.
Per-token, per-endpoint, per-minute. Expose prompts publicly without donating credits to bots.
One flat price per call.
Pay-as-you-go. No subscriptions. Start with 100 free credits on signup.