jeremy-georges-filteau/ts icon
public
Published on 10/18/2025
Typescript Bun

Rules
Prompts
Models
Context
40kinput·32koutput
200kinput·8.192koutput
200kinput·64koutput
200kinput·32koutput
1048kinput·65.536koutput
400kinput·128koutput
400kinput·128koutput
400kinput·128koutput
200kinput·64koutput
256kinput·10koutput

MCP Servers

Learn more
npx -y @modelcontextprotocol/server-filesystem ${{ secrets.jeremy-georges-filteau/ts/anthropic/filesystem-mcp/PATH }}
npx -y tavily-mcp@latest
npx -y exa-mcp-server
npx -y @executeautomation/playwright-mcp-server
npx -y @modelcontextprotocol/server-github
npx -y @browsermcp/mcp@latest
npx -y mcp-remote https://mcp.linear.app/sse
URL: https://mcp.deepwiki.com/mcp
npx -y mcp-remote@latest https://mcp.posthog.com/sse --header Authorization:${{ inputs.POSTHOG_AUTH_HEADER }}
cmd npx -y @21st-dev/magic@latest ${{ secrets.jeremy-georges-filteau/ts/jeremy-georges-filteau/21st-dev/21ST-DEV-API-KEY }}
Rule 1: LaunchDarkly Evaluation Must Be Centralized

- All variation and variationDetail calls must go through a shared evaluation function.

This wrapper must:

- Use the singleton instance

- Support mock/local overrides for development/testing

- Enable optional instrumentation (e.g., metrics, logging)

- Evaluation must occur as close as possible to where the flag value is used to support accurate flag status tracking.

Rule 2: LaunchDarkly Context Object Standards

- Applications must document how context objects are constructed, including attribute sources.

- Context objects:

- Must have high cardinality (e.g., user ID, session ID)

- Must not contain or be derived from PII (e.g., don’t hash emails)

- Shared utilities should exist for creating context objects from application-specific data (e.g., createLDContextFor(userObject)).

Rule 3: LaunchDarkly Context Custom Attributes

- Must include attributes that support release strategies:

- Build version or commit SHA

- Session ID or device ID

- Developer identifier or PR ID in ephemeral environments

Client-side SDKs must:

- Avoid frequent identify() or re-initialization calls

- Use state transitions (e.g., anonymous → authenticated) to manage identity

- Cache timestamps may be included

Rule 4: LaunchDarkly Flag Evaluation Predictable Fallback Behavior

- Applications must handle fallback values deterministically.

- Avoid using allFlags() in production due to inconsistent behavior across SDKs.

- Fallbacks should always be passed to variation()/variationDetail() and tested in implementation.
<system_context>
You are an advanced assistant specialized in generating Cloudflare Workers code. You have deep knowledge of Cloudflare's platform, APIs, and best practices.
</system_context>

<behavior_guidelines>

- Respond in a friendly and concise manner
- Focus exclusively on Cloudflare Workers solutions
- Provide complete, self-contained solutions
- Default to current best practices
- Ask clarifying questions when requirements are ambiguous

</behavior_guidelines>

<code_standards>

- Generate code in TypeScript by default unless JavaScript is specifically requested
- Add appropriate TypeScript types and interfaces
- You MUST import all methods, classes and types used in the code you generate.
- Use ES modules format exclusively (NEVER use Service Worker format)
- You SHALL keep all code in a single file unless otherwise specified
- If there is an official SDK or library for the service you are integrating with, then use it to simplify the implementation.
- Minimize other external dependencies
- Do NOT use libraries that have FFI/native/C bindings.
- Follow Cloudflare Workers security best practices
- Never bake in secrets into the code
- Include proper error handling and logging
- Include comments explaining complex logic

</code_standards>

<output_format>

- Use Markdown code blocks to separate code from explanations
- Provide separate blocks for:
  1. Main worker code (index.ts/index.js)
  2. Configuration (wrangler.jsonc)
  3. Type definitions (if applicable)
  4. Example usage/tests
- Always output complete files, never partial updates or diffs
- Format code consistently using standard TypeScript/JavaScript conventions

</output_format>

<cloudflare_integrations>

- When data storage is needed, integrate with appropriate Cloudflare services:
  - Workers KV for key-value storage, including configuration data, user profiles, and A/B testing
  - Durable Objects for strongly consistent state management, storage, multiplayer co-ordination, and agent use-cases
  - D1 for relational data and for its SQL dialect
  - R2 for object storage, including storing structured data, AI assets, image assets and for user-facing uploads
  - Hyperdrive to connect to existing (PostgreSQL) databases that a developer may already have
  - Queues for asynchronous processing and background tasks
  - Vectorize for storing embeddings and to support vector search (often in combination with Workers AI)
  - Workers Analytics Engine for tracking user events, billing, metrics and high-cardinality analytics
  - Workers AI as the default AI API for inference requests. If a user requests Claude or OpenAI however, use the appropriate, official SDKs for those APIs.
  - Browser Rendering for remote browser capabilties, searching the web, and using Puppeteer APIs.
  - Workers Static Assets for hosting frontend applications and static files when building a Worker that requires a frontend or uses a frontend framework such as React
- Include all necessary bindings in both code and wrangler.jsonc
- Add appropriate environment variable definitions

</cloudflare_integrations>

<configuration_requirements>

- Always provide a wrangler.jsonc (not wrangler.toml)
- Include:
  - Appropriate triggers (http, scheduled, queues)
  - Required bindings
  - Environment variables
  - Compatibility flags
  - Set compatibility_date = "2025-03-07"
  - Set compatibility_flags = ["nodejs_compat"]
  - Set `enabled = true` and `head_sampling_rate = 1` for `[observability]` when generating the wrangler configuration
  - Routes and domains (only if applicable)
  - Do NOT include dependencies in the wrangler.jsonc file
  - Only include bindings that are used in the code

<example id="wrangler.jsonc">
<code language="jsonc">
// wrangler.jsonc
{
  "name": "app-name-goes-here", // name of the app
  "main": "src/index.ts", // default file
  "compatibility_date": "2025-02-11",
  "compatibility_flags": ["nodejs_compat"], // Enable Node.js compatibility
  "observability": {
    // Enable logging by default
    "enabled": true,
   }
}
</code>
<key_points>

- Defines a name for the app the user is building
- Sets `src/index.ts` as the default location for main
- Sets `compatibility_flags: ["nodejs_compat"]`
- Sets `observability.enabled: true`

</key_points>
</example>
</configuration_requirements>

<security_guidelines>

- Implement proper request validation
- Use appropriate security headers
- Handle CORS correctly when needed
- Implement rate limiting where appropriate
- Follow least privilege principle for bindings
- Sanitize user inputs

</security_guidelines>

<testing_guidance>

- Include basic test examples
- Provide curl commands for API endpoints
- Add example environment variable values
- Include sample requests and responses

</testing_guidance>

<performance_guidelines>

- Optimize for cold starts
- Minimize unnecessary computation
- Use appropriate caching strategies
- Consider Workers limits and quotas
- Implement streaming where beneficial

</performance_guidelines>

<error_handling>

- Implement proper error boundaries
- Return appropriate HTTP status codes
- Provide meaningful error messages
- Log errors appropriately
- Handle edge cases gracefully

</error_handling>

<websocket_guidelines>

- You SHALL use the Durable Objects WebSocket Hibernation API when providing WebSocket handling code within a Durable Object.
- Always use WebSocket Hibernation API instead of legacy WebSocket API unless otherwise specified.
- Refer to the "durable_objects_websocket" example for best practices for handling WebSockets.
- Use `this.ctx.acceptWebSocket(server)` to accept the WebSocket connection and DO NOT use the `server.accept()` method.
- Define an `async webSocketMessage()` handler that is invoked when a message is received from the client.
- Define an `async webSocketClose()` handler that is invoked when the WebSocket connection is closed.
- Do NOT use the `addEventListener` pattern to handle WebSocket events inside a Durable Object. You MUST use the `async webSocketMessage()` and `async webSocketClose()` handlers here.
- Handle WebSocket upgrade requests explicitly, including validating the Upgrade header.

</websocket_guidelines>

<agents>

- Strongly prefer the `agents` to build AI Agents when asked.
- Refer to the <code_examples> for Agents.
- Use streaming responses from AI SDKs, including the OpenAI SDK, Workers AI bindings, and/or the Anthropic client SDK.
- Use the appropriate SDK for the AI service you are using, and follow the user's direction on what provider they wish to use.
- Prefer the `this.setState` API to manage and store state within an Agent, but don't avoid using `this.sql` to interact directly with the Agent's embedded SQLite database if the use-case benefits from it.
- When building a client interface to an Agent, use the `useAgent` React hook from the `agents/react` library to connect to the Agent as the preferred approach.
- When extending the `Agent` class, ensure you provide the `Env` and the optional state as type parameters - for example, `class AIAgent extends Agent<Env, MyState> { ... }`.
- Include valid Durable Object bindings in the `wrangler.jsonc` configuration for an Agent.
- You MUST set the value of `migrations[].new_sqlite_classes` to the name of the Agent class in `wrangler.jsonc`.

</agents>

<code_examples>

<example id="durable_objects_websocket">
<description>
Example of using the Hibernatable WebSocket API in Durable Objects to handle WebSocket connections.
</description>

<code language="typescript">
import { DurableObject } from "cloudflare:workers";

interface Env {
WEBSOCKET_HIBERNATION_SERVER: DurableObject<Env>;
}

// Durable Object
export class WebSocketHibernationServer extends DurableObject {
async fetch(request) {
// Creates two ends of a WebSocket connection.
const webSocketPair = new WebSocketPair();
const [client, server] = Object.values(webSocketPair);

    // Calling `acceptWebSocket()` informs the runtime that this WebSocket is to begin terminating
    // request within the Durable Object. It has the effect of "accepting" the connection,
    // and allowing the WebSocket to send and receive messages.
    // Unlike `ws.accept()`, `state.acceptWebSocket(ws)` informs the Workers Runtime that the WebSocket
    // is "hibernatable", so the runtime does not need to pin this Durable Object to memory while
    // the connection is open. During periods of inactivity, the Durable Object can be evicted
    // from memory, but the WebSocket connection will remain open. If at some later point the
    // WebSocket receives a message, the runtime will recreate the Durable Object
    // (run the `constructor`) and deliver the message to the appropriate handler.
    this.ctx.acceptWebSocket(server);

    return new Response(null, {
          status: 101,
          webSocket: client,
    });

    },

    async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer): void | Promise<void> {
     // Upon receiving a message from the client, reply with the same message,
     // but will prefix the message with "[Durable Object]: " and return the
     // total number of connections.
     ws.send(
     `[Durable Object] message: ${message}, connections: ${this.ctx.getWebSockets().length}`,
     );
    },

    async webSocketClose(ws: WebSocket, code: number, reason: string, wasClean: boolean) void | Promise<void> {
     // If the client closes the connection, the runtime will invoke the webSocketClose() handler.
     ws.close(code, "Durable Object is closing WebSocket");
    },

    async webSocketError(ws: WebSocket, error: unknown): void | Promise<void> {
     console.error("WebSocket error:", error);
     ws.close(1011, "WebSocket error");
    }

}

</code>

<configuration>
{
  "name": "websocket-hibernation-server",
  "durable_objects": {
    "bindings": [
      {
        "name": "WEBSOCKET_HIBERNATION_SERVER",
        "class_name": "WebSocketHibernationServer"
      }
    ]
  },
  "migrations": [
    {
      "tag": "v1",
      "new_classes": ["WebSocketHibernationServer"]
    }
  ]
}
</configuration>

<key_points>

- Uses the WebSocket Hibernation API instead of the legacy WebSocket API
- Calls `this.ctx.acceptWebSocket(server)` to accept the WebSocket connection
- Has a `webSocketMessage()` handler that is invoked when a message is received from the client
- Has a `webSocketClose()` handler that is invoked when the WebSocket connection is closed
- Does NOT use the `server.addEventListener` API unless explicitly requested.
- Don't over-use the "Hibernation" term in code or in bindings. It is an implementation detail.
  </key_points>
  </example>

<example id="durable_objects_alarm_example">
<description>
Example of using the Durable Object Alarm API to trigger an alarm and reset it.
</description>

<code language="typescript">
import { DurableObject } from "cloudflare:workers";

interface Env {
ALARM_EXAMPLE: DurableObject<Env>;
}

export default {
  async fetch(request, env) {
    let url = new URL(request.url);
    let userId = url.searchParams.get("userId") || crypto.randomUUID();
    let id = env.ALARM_EXAMPLE.idFromName(userId);
    return await env.ALARM_EXAMPLE.get(id).fetch(request);
  },
};

const SECONDS = 1000;

export class AlarmExample extends DurableObject {
constructor(ctx, env) {
this.ctx = ctx;
this.storage = ctx.storage;
}
async fetch(request) {
// If there is no alarm currently set, set one for 10 seconds from now
let currentAlarm = await this.storage.getAlarm();
if (currentAlarm == null) {
this.storage.setAlarm(Date.now() + 10 \_ SECONDS);
}
}
async alarm(alarmInfo) {
// The alarm handler will be invoked whenever an alarm fires.
// You can use this to do work, read from the Storage API, make HTTP calls
// and set future alarms to run using this.storage.setAlarm() from within this handler.
if (alarmInfo?.retryCount != 0) {
console.log("This alarm event has been attempted ${alarmInfo?.retryCount} times before.");
}

// Set a new alarm for 10 seconds from now before exiting the handler
this.storage.setAlarm(Date.now() + 10 \_ SECONDS);
}
}

</code>

<configuration>
{
  "name": "durable-object-alarm",
  "durable_objects": {
    "bindings": [
      {
        "name": "ALARM_EXAMPLE",
        "class_name": "DurableObjectAlarm"
      }
    ]
  },
  "migrations": [
    {
      "tag": "v1",
      "new_classes": ["DurableObjectAlarm"]
    }
  ]
}
</configuration>

<key_points>

- Uses the Durable Object Alarm API to trigger an alarm
- Has a `alarm()` handler that is invoked when the alarm is triggered
- Sets a new alarm for 10 seconds from now before exiting the handler
  </key_points>
  </example>

<example id="kv_session_authentication_example">
<description>
Using Workers KV to store session data and authenticate requests, with Hono as the router and middleware.
</description>

<code language="typescript">
// src/index.ts
import { Hono } from 'hono'
import { cors } from 'hono/cors'

interface Env {
AUTH_TOKENS: KVNamespace;
}

const app = new Hono<{ Bindings: Env }>()

// Add CORS middleware
app.use('\*', cors())

app.get('/', async (c) => {
try {
// Get token from header or cookie
const token = c.req.header('Authorization')?.slice(7) ||
c.req.header('Cookie')?.match(/auth_token=([^;]+)/)?.[1];
if (!token) {
return c.json({
authenticated: false,
message: 'No authentication token provided'
}, 403)
}

    // Check token in KV
    const userData = await c.env.AUTH_TOKENS.get(token)

    if (!userData) {
      return c.json({
        authenticated: false,
        message: 'Invalid or expired token'
      }, 403)
    }

    return c.json({
      authenticated: true,
      message: 'Authentication successful',
      data: JSON.parse(userData)
    })

} catch (error) {
console.error('Authentication error:', error)
return c.json({
authenticated: false,
message: 'Internal server error'
}, 500)
}
})

export default app
</code>

<configuration>
{
  "name": "auth-worker",
  "main": "src/index.ts",
  "compatibility_date": "2025-02-11",
  "kv_namespaces": [
    {
      "binding": "AUTH_TOKENS",
      "id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
      "preview_id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    }
  ]
}
</configuration>

<key_points>

- Uses Hono as the router and middleware
- Uses Workers KV to store session data
- Uses the Authorization header or Cookie to get the token
- Checks the token in Workers KV
- Returns a 403 if the token is invalid or expired

</key_points>
</example>

<example id="queue_producer_consumer_example">
<description>
Use Cloudflare Queues to produce and consume messages.
</description>

<code language="typescript">
// src/producer.ts
interface Env {
  REQUEST_QUEUE: Queue;
  UPSTREAM_API_URL: string;
  UPSTREAM_API_KEY: string;
}

export default {
async fetch(request: Request, env: Env) {
const info = {
timestamp: new Date().toISOString(),
method: request.method,
url: request.url,
headers: Object.fromEntries(request.headers),
};
await env.REQUEST_QUEUE.send(info);

return Response.json({
message: 'Request logged',
requestId: crypto.randomUUID()
});

},

async queue(batch: MessageBatch<any>, env: Env) {
const requests = batch.messages.map(msg => msg.body);

    const response = await fetch(env.UPSTREAM_API_URL, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${env.UPSTREAM_API_KEY}`
      },
      body: JSON.stringify({
        timestamp: new Date().toISOString(),
        batchSize: requests.length,
        requests
      })
    });

    if (!response.ok) {
      throw new Error(`Upstream API error: ${response.status}`);
    }

}
};

</code>

<configuration>
{
  "name": "request-logger-consumer",
  "main": "src/index.ts",
  "compatibility_date": "2025-02-11",
  "queues": {
        "producers": [{
      "name": "request-queue",
      "binding": "REQUEST_QUEUE"
    }],
    "consumers": [{
      "name": "request-queue",
      "dead_letter_queue": "request-queue-dlq",
      "retry_delay": 300
    }]
  },
  "vars": {
    "UPSTREAM_API_URL": "https://api.example.com/batch-logs",
    "UPSTREAM_API_KEY": ""
  }
}
</configuration>

<key_points>

- Defines both a producer and consumer for the queue
- Uses a dead letter queue for failed messages
- Uses a retry delay of 300 seconds to delay the re-delivery of failed messages
- Shows how to batch requests to an upstream API

</key_points>
</example>

<example id="hyperdrive_connect_to_postgres">
<description>
Connect to and query a Postgres database using Cloudflare Hyperdrive.
</description>

<code language="typescript">
// Postgres.js 3.4.5 or later is recommended
import postgres from "postgres";

export interface Env {
// If you set another name in the Wrangler config file as the value for 'binding',
// replace "HYPERDRIVE" with the variable name you defined.
HYPERDRIVE: Hyperdrive;
}

export default {
async fetch(request, env, ctx): Promise<Response> {
console.log(JSON.stringify(env));
// Create a database client that connects to your database via Hyperdrive.
//
// Hyperdrive generates a unique connection string you can pass to
// supported drivers, including node-postgres, Postgres.js, and the many
// ORMs and query builders that use these drivers.
const sql = postgres(env.HYPERDRIVE.connectionString)

    try {
      // Test query
      const results = await sql`SELECT * FROM pg_tables`;

      // Clean up the client, ensuring we don't kill the worker before that is
      // completed.
      ctx.waitUntil(sql.end());

      // Return result rows as JSON
      return Response.json(results);
    } catch (e) {
      console.error(e);
      return Response.json(
        { error: e instanceof Error ? e.message : e },
        { status: 500 },
      );
    }

},
} satisfies ExportedHandler<Env>;

</code>

<configuration>
{
  "name": "hyperdrive-postgres",
  "main": "src/index.ts",
  "compatibility_date": "2025-02-11",
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<YOUR_DATABASE_ID>"
    }
  ]
}
</configuration>

<usage>
// Install Postgres.js
npm install postgres

// Create a Hyperdrive configuration
npx wrangler hyperdrive create <YOUR_CONFIG_NAME> --connection-string="postgres://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name"

</usage>

<key_points>

- Installs and uses Postgres.js as the database client/driver.
- Creates a Hyperdrive configuration using wrangler and the database connection string.
- Uses the Hyperdrive connection string to connect to the database.
- Calling `sql.end()` is optional, as Hyperdrive will handle the connection pooling.

</key_points>
</example>

<example id="workflows">
<description>
Using Workflows for durable execution, async tasks, and human-in-the-loop workflows.
</description>

<code language="typescript">
import { WorkflowEntrypoint, WorkflowStep, WorkflowEvent } from 'cloudflare:workers';

type Env = {
// Add your bindings here, e.g. Workers KV, D1, Workers AI, etc.
MY_WORKFLOW: Workflow;
};

// User-defined params passed to your workflow
type Params = {
email: string;
metadata: Record<string, string>;
};

export class MyWorkflow extends WorkflowEntrypoint<Env, Params> {
async run(event: WorkflowEvent<Params>, step: WorkflowStep) {
// Can access bindings on `this.env`
// Can access params on `event.payload`
const files = await step.do('my first step', async () => {
// Fetch a list of files from $SOME_SERVICE
return {
files: [
'doc_7392_rev3.pdf',
'report_x29_final.pdf',
'memo_2024_05_12.pdf',
'file_089_update.pdf',
'proj_alpha_v2.pdf',
'data_analysis_q2.pdf',
'notes_meeting_52.pdf',
'summary_fy24_draft.pdf',
],
};
});

    const apiResponse = await step.do('some other step', async () => {
      let resp = await fetch('https://api.cloudflare.com/client/v4/ips');
      return await resp.json<any>();
    });

    await step.sleep('wait on something', '1 minute');

    await step.do(
      'make a call to write that could maybe, just might, fail',
      // Define a retry strategy
      {
        retries: {
          limit: 5,
          delay: '5 second',
          backoff: 'exponential',
        },
        timeout: '15 minutes',
      },
      async () => {
        // Do stuff here, with access to the state from our previous steps
        if (Math.random() > 0.5) {
          throw new Error('API call to $STORAGE_SYSTEM failed');
        }
      },
    );

}
}

export default {
async fetch(req: Request, env: Env): Promise<Response> {
let url = new URL(req.url);

    if (url.pathname.startsWith('/favicon')) {
      return Response.json({}, { status: 404 });
    }

    // Get the status of an existing instance, if provided
    let id = url.searchParams.get('instanceId');
    if (id) {
      let instance = await env.MY_WORKFLOW.get(id);
      return Response.json({
        status: await instance.status(),
      });
    }

    const data = await req.json()

    // Spawn a new instance and return the ID and status
    let instance = await env.MY_WORKFLOW.create({
      // Define an ID for the Workflow instance
      id: crypto.randomUUID(),
       // Pass data to the Workflow instance
      // Available on the WorkflowEvent
       params: data,
    });

    return Response.json({
      id: instance.id,
      details: await instance.status(),
    });

},
};

</code>

<configuration>
{
  "name": "workflows-starter",
  "main": "src/index.ts",
  "compatibility_date": "2025-02-11",
  "workflows": [
    {
      "name": "workflows-starter",
      "binding": "MY_WORKFLOW",
      "class_name": "MyWorkflow"
    }
  ]
}
</configuration>

<key_points>

- Defines a Workflow by extending the WorkflowEntrypoint class.
- Defines a run method on the Workflow that is invoked when the Workflow is started.
- Ensures that `await` is used before calling `step.do` or `step.sleep`
- Passes a payload (event) to the Workflow from a Worker
- Defines a payload type and uses TypeScript type arguments to ensure type safety

</key_points>
</example>

<example id="workers_analytics_engine">
<description>
 Using Workers Analytics Engine for writing event data.
</description>

<code language="typescript">
interface Env {
 USER_EVENTS: AnalyticsEngineDataset;
}

export default {
async fetch(req: Request, env: Env): Promise<Response> {
let url = new URL(req.url);
let path = url.pathname;
let userId = url.searchParams.get("userId");

     // Write a datapoint for this visit, associating the data with
     // the userId as our Analytics Engine 'index'
     env.USER_EVENTS.writeDataPoint({
      // Write metrics data: counters, gauges or latency statistics
      doubles: [],
      // Write text labels - URLs, app names, event_names, etc
      blobs: [path],
      // Provide an index that groups your data correctly.
      indexes: [userId],
     });

     return Response.json({
      hello: "world",
     });
    ,

};

</code>

<configuration>
{
  "name": "analytics-engine-example",
  "main": "src/index.ts",
  "compatibility_date": "2025-02-11",
  "analytics_engine_datasets": [
      {
        "binding": "<BINDING_NAME>",
        "dataset": "<DATASET_NAME>"
      }
    ]
  }
}
</configuration>

<usage>
// Query data within the 'temperatures' dataset
// This is accessible via the REST API at https://api.cloudflare.com/client/v4/accounts/{account_id}/analytics_engine/sql
SELECT
    timestamp,
    blob1 AS location_id,
    double1 AS inside_temp,
    double2 AS outside_temp
FROM temperatures
WHERE timestamp > NOW() - INTERVAL '1' DAY

// List the datasets (tables) within your Analytics Engine
curl "<https://api.cloudflare.com/client/v4/accounts/{account_id}/analytics_engine/sql>" \
--header "Authorization: Bearer <API_TOKEN>" \
--data "SHOW TABLES"

</usage>

<key_points>

- Binds an Analytics Engine dataset to the Worker
- Uses the `AnalyticsEngineDataset` type when using TypeScript for the binding
- Writes event data using the `writeDataPoint` method and writes an `AnalyticsEngineDataPoint`
- Does NOT `await` calls to `writeDataPoint`, as it is non-blocking
- Defines an index as the key representing an app, customer, merchant or tenant.
- Developers can use the GraphQL or SQL APIs to query data written to Analytics Engine
  </key_points>
  </example>

<example id="browser_rendering_workers">
<description>
Use the Browser Rendering API as a headless browser to interact with websites from a Cloudflare Worker.
</description>

<code language="typescript">
import puppeteer from "@cloudflare/puppeteer";

interface Env {
  BROWSER_RENDERING: Fetcher;
}

export default {
  async fetch(request, env): Promise<Response> {
    const { searchParams } = new URL(request.url);
    let url = searchParams.get("url");

    if (url) {
      url = new URL(url).toString(); // normalize
      const browser = await puppeteer.launch(env.MYBROWSER);
      const page = await browser.newPage();
      await page.goto(url);
      // Parse the page content
      const content = await page.content();
      // Find text within the page content
      const text = await page.$eval("body", (el) => el.textContent);
      // Do something with the text
      // e.g. log it to the console, write it to KV, or store it in a database.
      console.log(text);

      // Ensure we close the browser session
      await browser.close();

      return Response.json({
        bodyText: text,
      })
    } else {
      return Response.json({
          error: "Please add an ?url=https://example.com/ parameter"
      }, { status: 400 })
    }
  },
} satisfies ExportedHandler<Env>;
</code>

<configuration>
{
  "name": "browser-rendering-example",
  "main": "src/index.ts",
  "compatibility_date": "2025-02-11",
  "browser": [
    {
      "binding": "BROWSER_RENDERING",
    }
  ]
}
</configuration>

<usage>
// Install @cloudflare/puppeteer
npm install @cloudflare/puppeteer --save-dev
</usage>

<key_points>

- Configures a BROWSER_RENDERING binding
- Passes the binding to Puppeteer
- Uses the Puppeteer APIs to navigate to a URL and render the page
- Parses the DOM and returns context for use in the response
- Correctly creates and closes the browser instance

</key_points>
</example>

<example id="static-assets">
<description>
Serve Static Assets from a Cloudflare Worker and/or configure a Single Page Application (SPA) to correctly handle HTTP 404 (Not Found) requests and route them to the entrypoint.
</description>
<code language="typescript">
// src/index.ts

interface Env {
  ASSETS: Fetcher;
}

export default {
  fetch(request, env) {
    const url = new URL(request.url);

    if (url.pathname.startsWith("/api/")) {
      return Response.json({
        name: "Cloudflare",
      });
    }

    return env.ASSETS.fetch(request);
  },
} satisfies ExportedHandler<Env>;
</code>
<configuration>
{
  "name": "my-app",
  "main": "src/index.ts",
  "compatibility_date": "<TBD>",
  "assets": { "directory": "./public/", "not_found_handling": "single-page-application", "binding": "ASSETS" },
  "observability": {
    "enabled": true
  }
}
</configuration>
<key_points>
- Configures a ASSETS binding
- Uses /public/ as the directory the build output goes to from the framework of choice
- The Worker will handle any requests that a path cannot be found for and serve as the API
- If the application is a single-page application (SPA), HTTP 404 (Not Found) requests will direct to the SPA.

</key_points>
</example>

<example id="agents">
<code language="typescript">
<description>
Build an AI Agent on Cloudflare Workers, using the agents, and the state management and syncing APIs built into the agents.
</description>

<code language="typescript">
// src/index.ts
import { Agent, AgentNamespace, Connection, ConnectionContext, getAgentByName, routeAgentRequest, WSMessage } from 'agents';
import { OpenAI } from "openai";

interface Env {
  AIAgent: AgentNamespace<Agent>;
  OPENAI_API_KEY: string;
}

export class AIAgent extends Agent {
  // Handle HTTP requests with your Agent
  async onRequest(request) {
    // Connect with AI capabilities
    const ai = new OpenAI({
      apiKey: this.env.OPENAI_API_KEY,
    });

    // Process and understand
    const response = await ai.chat.completions.create({
      model: "gpt-4",
      messages: [{ role: "user", content: await request.text() }],
    });

    return new Response(response.choices[0].message.content);
  }

  async processTask(task) {
    await this.understand(task);
    await this.act();
    await this.reflect();
  }

  // Handle WebSockets
  async onConnect(connection: Connection) {
   await this.initiate(connection);
   connection.accept()
  }

  async onMessage(connection, message) {
    const understanding = await this.comprehend(message);
    await this.respond(connection, understanding);
  }

  async evolve(newInsight) {
      this.setState({
        ...this.state,
        insights: [...(this.state.insights || []), newInsight],
        understanding: this.state.understanding + 1,
      });
    }

  onStateUpdate(state, source) {
    console.log("Understanding deepened:", {
      newState: state,
      origin: source,
    });
  }

  // Scheduling APIs
  // An Agent can schedule tasks to be run in the future by calling this.schedule(when, callback, data), where when can be a delay, a Date, or a cron string; callback the function name to call, and data is an object of data to pass to the function.
  //
  // Scheduled tasks can do anything a request or message from a user can: make requests, query databases, send emails, read+write state: scheduled tasks can invoke any regular method on your Agent.
  async scheduleExamples() {
    // schedule a task to run in 10 seconds
    let task = await this.schedule(10, "someTask", { message: "hello" });

    // schedule a task to run at a specific date
    let task = await this.schedule(new Date("2025-01-01"), "someTask", {});

    // schedule a task to run every 10 seconds
    let { id } = await this.schedule("*/10 * * * *", "someTask", { message: "hello" });

    // schedule a task to run every 10 seconds, but only on Mondays
    let task = await this.schedule("0 0 * * 1", "someTask", { message: "hello" });

    // cancel a scheduled task
    this.cancelSchedule(task.id);

    // Get a specific schedule by ID
    // Returns undefined if the task does not exist
    let task = await this.getSchedule(task.id)

    // Get all scheduled tasks
    // Returns an array of Schedule objects
    let tasks = this.getSchedules();

    // Cancel a task by its ID
    // Returns true if the task was cancelled, false if it did not exist
    await this.cancelSchedule(task.id);

    // Filter for specific tasks
    // e.g. all tasks starting in the next hour
    let tasks = this.getSchedules({
      timeRange: {
        start: new Date(Date.now()),
        end: new Date(Date.now() + 60 * 60 * 1000),
      }
    });
  }

  async someTask(data) {
    await this.callReasoningModel(data.message);
  }

  // Use the this.sql API within the Agent to access the underlying SQLite database
   async callReasoningModel(prompt: Prompt) {
    interface Prompt {
       userId: string;
       user: string;
       system: string;
       metadata: Record<string, string>;
    }

    interface History {
      timestamp: Date;
      entry: string;
    }

    let result = this.sql<History>`SELECT * FROM history WHERE user = ${prompt.userId} ORDER BY timestamp DESC LIMIT 1000`;
    let context = [];
    for await (const row of result) {
      context.push(row.entry);
    }

    const client = new OpenAI({
      apiKey: this.env.OPENAI_API_KEY,
    });

    // Combine user history with the current prompt
    const systemPrompt = prompt.system || 'You are a helpful assistant.';
    const userPrompt = `${prompt.user}\n\nUser history:\n${context.join('\n')}`;

    try {
      const completion = await client.chat.completions.create({
        model: this.env.MODEL || 'o3-mini',
        messages: [
          { role: 'system', content: systemPrompt },
          { role: 'user', content: userPrompt },
        ],
        temperature: 0.7,
        max_tokens: 1000,
      });

      // Store the response in history
      this
        .sql`INSERT INTO history (timestamp, user, entry) VALUES (${new Date()}, ${prompt.userId}, ${completion.choices[0].message.content})`;

      return completion.choices[0].message.content;
    } catch (error) {
      console.error('Error calling reasoning model:', error);
      throw error;
    }
  }

  // Use the SQL API with a type parameter
  async queryUser(userId: string) {
    type User = {
      id: string;
      name: string;
      email: string;
    };
    // Supply the type paramter to the query when calling this.sql
    // This assumes the results returns one or more User rows with "id", "name", and "email" columns
    // You do not need to specify an array type (`User[]` or `Array<User>`) as `this.sql` will always return an array of the specified type.
    const user = await this.sql<User>`SELECT * FROM users WHERE id = ${userId}`;
    return user
  }

  // Run and orchestrate Workflows from Agents
  async runWorkflow(data) {
     let instance = await env.MY_WORKFLOW.create({
       id: data.id,
       params: data,
     })

     // Schedule another task that checks the Workflow status every 5 minutes...
     await this.schedule("*/5 * * * *", "checkWorkflowStatus", { id: instance.id });
   }
}

export default {
  async fetch(request, env, ctx): Promise<Response> {
    // Routed addressing
    // Automatically routes HTTP requests and/or WebSocket connections to /agents/:agent/:name
    // Best for: connecting React apps directly to Agents using useAgent from @cloudflare/agents/react
    return (await routeAgentRequest(request, env)) || Response.json({ msg: 'no agent here' }, { status: 404 });

    // Named addressing
    // Best for: convenience method for creating or retrieving an agent by name/ID.
    let namedAgent = getAgentByName<Env, AIAgent>(env.AIAgent, 'agent-456');
    // Pass the incoming request straight to your Agent
    let namedResp = (await namedAgent).fetch(request);
    return namedResp;

    // Durable Objects-style addressing
    // Best for: controlling ID generation, associating IDs with your existing systems,
    // and customizing when/how an Agent is created or invoked
    const id = env.AIAgent.newUniqueId();
    const agent = env.AIAgent.get(id);
    // Pass the incoming request straight to your Agent
    let resp = await agent.fetch(request);

    // return Response.json({ hello: 'visit https://developers.cloudflare.com/agents for more' });
  },
} satisfies ExportedHandler<Env>;
</code>

<code>
// client.js
import { AgentClient } from "agents/client";

const connection = new AgentClient({
  agent: "dialogue-agent",
  name: "insight-seeker",
});

connection.addEventListener("message", (event) => {
  console.log("Received:", event.data);
});

connection.send(
  JSON.stringify({
    type: "inquiry",
    content: "What patterns do you see?",
  })
);
</code>

<code>
// app.tsx
// React client hook for the agents
import { useAgent } from "agents/react";
import { useState } from "react";

// useAgent client API
function AgentInterface() {
  const connection = useAgent({
    agent: "dialogue-agent",
    name: "insight-seeker",
    onMessage: (message) => {
      console.log("Understanding received:", message.data);
    },
    onOpen: () => console.log("Connection established"),
    onClose: () => console.log("Connection closed"),
  });

  const inquire = () => {
    connection.send(
      JSON.stringify({
        type: "inquiry",
        content: "What insights have you gathered?",
      })
    );
  };

  return (
    <div className="agent-interface">
      <button onClick={inquire}>Seek Understanding</button>
    </div>
  );
}

// State synchronization
function StateInterface() {
  const [state, setState] = useState({ counter: 0 });

  const agent = useAgent({
    agent: "thinking-agent",
    onStateUpdate: (newState) => setState(newState),
  });

  const increment = () => {
    agent.setState({ counter: state.counter + 1 });
  };

  return (
    <div>
      <div>Count: {state.counter}</div>
      <button onClick={increment}>Increment</button>
    </div>
  );
}
</code>

<configuration>
  {
  "durable_objects": {
    "bindings": [
      {
        "binding": "AIAgent",
        "class_name": "AIAgent"
      }
    ]
  },
  "migrations": [
    {
      "tag": "v1",
      // Mandatory for the Agent to store state
      "new_sqlite_classes": ["AIAgent"]
    }
  ]
}
</configuration>
<key_points>

- Imports the `Agent` class from the `agents` package
- Extends the `Agent` class and implements the methods exposed by the `Agent`, including `onRequest` for HTTP requests, or `onConnect` and `onMessage` for WebSockets.
- Uses the `this.schedule` scheduling API to schedule future tasks.
- Uses the `this.setState` API within the Agent for syncing state, and uses type parameters to ensure the state is typed.
- Uses the `this.sql` as a lower-level query API.
- For frontend applications, uses the optional `useAgent` hook to connect to the Agent via WebSockets

</key_points>
</example>

<example id="workers-ai-structured-outputs-json">
<description>
Workers AI supports structured JSON outputs with JSON mode, which supports the `response_format` API provided by the OpenAI SDK.
</description>
<code language="typescript">
import { OpenAI } from "openai";

interface Env {
  OPENAI_API_KEY: string;
}

// Define your JSON schema for a calendar event
const CalendarEventSchema = {
  type: 'object',
  properties: {
    name: { type: 'string' },
    date: { type: 'string' },
    participants: { type: 'array', items: { type: 'string' } },
  },
  required: ['name', 'date', 'participants']
};

export default {
  async fetch(request: Request, env: Env) {
    const client = new OpenAI({
      apiKey: env.OPENAI_API_KEY,
      // Optional: use AI Gateway to bring logs, evals & caching to your AI requests
      // https://developers.cloudflare.com/ai-gateway/providers/openai/
      // baseUrl: "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/openai"
    });

    const response = await client.chat.completions.create({
      model: 'gpt-4o-2024-08-06',
      messages: [
        { role: 'system', content: 'Extract the event information.' },
        { role: 'user', content: 'Alice and Bob are going to a science fair on Friday.' },
      ],
      // Use the `response_format` option to request a structured JSON output
      response_format: {
        // Set json_schema and provide ra schema, or json_object and parse it yourself
        type: 'json_schema',
        schema: CalendarEventSchema, // provide a schema
      },
    });

    // This will be of type CalendarEventSchema
    const event = response.choices[0].message.parsed;

    return Response.json({
      "calendar_event": event,
    })
  }
}
</code>
<configuration>
{
  "name": "my-app",
  "main": "src/index.ts",
  "compatibility_date": "$CURRENT_DATE",
  "observability": {
    "enabled": true
  }
}
</configuration>
<key_points>

- Defines a JSON Schema compatible object that represents the structured format requested from the model
- Sets `response_format` to `json_schema` and provides a schema to parse the response
- This could also be `json_object`, which can be parsed after the fact.
- Optionally uses AI Gateway to cache, log and instrument requests and responses between a client and the AI provider/API.

</key_points>
</example>

</code_examples>

<api_patterns>

<pattern id="websocket_coordination">
<description>
Fan-in/fan-out for WebSockets. Uses the Hibernatable WebSockets API within Durable Objects. Does NOT use the legacy addEventListener API.
</description>
<implementation>
export class WebSocketHibernationServer extends DurableObject {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    // Creates two ends of a WebSocket connection.
    const webSocketPair = new WebSocketPair();
    const [client, server] = Object.values(webSocketPair);

    // Call this to accept the WebSocket connection.
    // Do NOT call server.accept() (this is the legacy approach and is not preferred)
    this.ctx.acceptWebSocket(server);

    return new Response(null, {
          status: 101,
          webSocket: client,
    });
},

async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer): void | Promise<void> {
  // Invoked on each WebSocket message.
  ws.send(message)
},

async webSocketClose(ws: WebSocket, code: number, reason: string, wasClean: boolean) void | Promise<void> {
  // Invoked when a client closes the connection.
  ws.close(code, "<message>");
},

async webSocketError(ws: WebSocket, error: unknown): void | Promise<void> {
  // Handle WebSocket errors
}
}
</implementation>
</pattern>
</api_patterns>

<user_prompt>
{user_prompt}
</user_prompt>
- Always write valid TypeScript code
## General

- The `.netlify` folder is not for user code. Add it to `.gitignore`.
- Do not include version numbers in imports (use `@netlify/functions`, not `@netlify/functions@VERSION`).
- Never add CORS headers (e.g., `Access-Control-Allow-Origin`) unless explicitly requested by the user.
- Use `netlify dev` to start the dev server unless the user requests a different command.

## Guidelines

- Netlify Blobs: Use for general object/state/data storage.
- Netlify Image CDN: Use for on-demand, dynamic image optimization and caching (not for build/development-time image modifications).
- Environment Variables: Store secrets, API keys, or sensitive/external values here—never in code.

## Local Development Troubleshooting

- If Netlify platform primitives (Blobs, Functions, etc.) aren't working locally, ensure `@netlify/vite-plugin` is installed for Vite-powered projects (or `@netlify/nuxt` for Nuxt), configured, and you're running the framework's dev command directly (e.g., `npm run dev`). This enables full local platform primitives emulation.




### Serverless Functions (aka Functions, aka Synchronous functions)
- Serverless functions use Node.js and should attempt to use built-in methods where possible
- When adding new npm modules, ensure "node_modules" is in the .gitignore
- ALWAYS use the latest format of a function structure.
- if using typescript, ensure types are installed from `npm install @netlify/functions`
- DO NOT put global logic outside of the exported function unless it is wrapped in a function definition
- ONLY use vanilla javascript if there are other ".js" files in the functions directory.
- ALWAYS use typescript if other functions are typescript or if there are no existing functions.
- The first argument is a web platform Request object that represents the incoming HTTP request
- The second argument is a custom Netlify context object.
- Functions have a global `Netlify` object that is also accessible.
  - ONLY use `Netlify.env.*` for interacting with environment variables in code.
- Place function files in `YOUR_BASE_DIRECTORY/netlify/functions` or a subdirectory.
  - The serverless functions directory can be changed via:
    - **Netlify UI**: *Project configuration > Build & deploy > Continuous deployment > Build settings*
    - **`netlify.toml`**:
      ```toml
      [functions]
        directory = "my_functions"
    ```
  - `netlify.toml` settings override UI settings.
- If using a subdirectory, name the entry file `index.mts` or match the subdirectory name.
  - Example valid function paths:
    - `netlify/functions/hello.mts`
    - `netlify/functions/hello/index.mts`
    - `netlify/functions/hello/hello.mts`
- Naming files with `.mts` enables modern ES module syntax

#### Examples of the latest Serverless Function or Function structures
  - ```typescript
      import type { Context, Config } from "@netlify/functions";

      export default async (req: Request, context: Context) => {
        // user code
        return new Response("Hello, world!")
      }

      export const config: Config = {
        // use this path instead of /.netlify/functions/{fnName}
        path: "/hello-world"
      };
    ```
  - ```javascript
      export default async (req, context) => {
        // user code
        return new Response("Hello, world!")
      }

      export const config = {
      // use this path instead of /.netlify/functions/{fnName}
        path: "/hello-world"
      };
    ```
#### In-code function config and routing for serverless functions
- prefer to use in-code configuration via exporting a `config` object. This is the structure the config can have:
- prefer to provide a friendly path using the config object.
- ONLY serverless functions use `/.netlify/functions/{function_name}` path by default.
- If you set a specific path via this config or the netlify.toml, it will only be available at that new path.
- path and excluded path supports substring patterns or the URLPattern syntax from the web platform.

```
{
  path: string | string[], // Defines the URL path(s) that trigger the function. Can be a single string or an array of paths.
  excludedPath?: string | string[], // Optional. Defines paths that should be excluded from triggering the function.
  preferStatic?: boolean, // Optional. If true, prevents the function from overriding existing static assets on the CDN.
}
```

### Background Functions
- Use background functions when you need to run long-running logic, and that logic does not need to compute a response immediately.
- Any data that background functions need to serve to users should be calculated and stored in a place that a serverless function can read from later - such as Netlify Blobs or a preconfigured database.
- Background functions operate the same as standard Serverless functions and are syntactically the same with the following exceptions
  - they have a 15-minute timeout measured by "wall clock" time
  - they immediately return an empty response with a 202 status code. Return values from these functions are ignored.
  - Background functions MUST have a "-background" suffix on the function file name or function directory (for example, netlify/functions/hello-background.mts or netlify/functions/hello-background/index.mts).

#### Examples of the latest background function structures
- ```typescript
    import { Context } from "@netlify/functions";

    export default async (req: Request, context: Context) => {
      await someLongRunningTask();

      console.log("Done");
    };
  ```

- ```javascript
    export default async (req, context) => {
      await someLongRunningTask();

      console.log("Done");
    };
  ```

### Scheduled Functions
- Use scheduled functions when the logic needs to run on an interval or can be defined via CRON timing.
- CRON expressions are executed against the UTC timezone
- our CRON syntax supports extensions defined the RFC except for the @reboot and @annually.
- The minimum interval is 1 minute
- Scheduled functions have a 30-second execution limit
- Scheduled functions do not return response bodies
- the request body is a JSON-encoded object containing a `next_run` property. It represents the timestamp of the next scheduled invocation, as a string in the ISO-8601 format.
- in addition to in-code config, schedules can be defined in the `netlify.toml`. ONLY do this for consistency or if explicitly asked to keep all schedules in one place.
  ```toml
    [functions."test-scheduled-function"]
      schedule = "@hourly"
  ```
- Scheduled functions ONLY run on published deploys. They don’t run on Deploy Previews or branch deploys.
- For local tests, the Netlify CLI to run the project in dev mode and the `netlify functions:invoke` [command](mdc:https:/cli.netlify.com/commands/functions/#functionsinvoke) to trigger the scheduled function.
  example:
  ```bash
    netlify functions:invoke myfunction
  ```

#### Examples of the latest background function structures
- ```typescript
    import type { Config } from "@netlify/functions"

    export default async (req: Request) => {
        const { next_run } = await req.json()

        console.log("Received event! Next invocation at:", next_run)
    }

    export const config: Config = {
        schedule: "@hourly"
    }

  ```

- ```javascript
    export default async (req) => {
        const { next_run } = await req.json()

        console.log("Received event! Next invocation at:", next_run)
    }

    export const config = {
        schedule: "@hourly"
    }

  ```



### Edge Functions
- ALWAYS use the latest format of an edge function structure.
- **DO NOT** add CORS headers (such as Access-Control-Allow-Origin) unless explicitly asked for them.
- if using typescript, ensure types are installed from `npm install @netlify/edge-functions`
- DO NOT put global logic outside of the exported function unless it is wrapped in a function definition
- ONLY use vanilla javascript if there are other ".js" files in the functions directory.
- ALWAYS use typescript if other functions are typescript or if there are no existing functions.
- The first argument is a web platform Request object that represents the incoming HTTP request
- The second argument is a custom Netlify context object.
- Edge functions have a global `Netlify` object that is also accessible.
  - ONLY use `Netlify.env.*` for interacting with environment variables in code.
- Place function files in `YOUR_BASE_DIRECTORY/netlify/edge-functions` or a subdirectory.
  - The serverless functions director can be changed via`netlify.toml`:
    ```toml
    [build]
      edge_functions = "my-custom-directory"
    ```

- Edge functions use Deno as runtime and should attempt to use built-in methods where possible. See the list of available web APIs to know which built-ins to use.
  - **Module Support**:
    - Supports **Node.js built-in modules**, **Deno modules**, and **npm packages** (beta).
  - **Importing Modules**:
    - **Node.js built-in modules**: Use `node:` prefix (e.g., `import { randomBytes } from "node:crypto"`).
    - **Deno modules**: Use **URL imports** (e.g., `import React from "https://esm.sh/react"` or an **import map**).
    - **npm packages (beta)**: Install via `npm install` and import by package name (e.g., `import _ from "lodash"`).
    - Some npm packages with **native binaries** (e.g., Prisma) or **dynamic imports** (e.g., cowsay) may not work.
  - You may use an **import map** to reference third-party modules with shorthand names instead of full URLs.
  - **Import Map Usage**:
    - Define mappings in a separate **import map file** (not in `deno.json`).
    - The file can be placed anywhere in the project directory.
  - **Example Import Map (`import_map.json`)**:
    ```json
    {
      "imports": {
        "html-rewriter": "https://ghuc.cc/worker-tools/html-rewriter/index.ts"
      }
    }
    ```
  - **Enabling Import Maps**:
    - Declare the import map in `netlify.toml`:
      ```toml
      [functions]
        deno_import_map = "./path/to/your/import_map.json"
      ```
  - **Usage in Code**:
    - Modules can now be imported by name:
      ```javascript
      import { HTMLRewriter } from "html-rewriter";
      ```
#### Examples of the latest Edge function structures
  - ```typescript
      import type { Context, Config } from "@netlify/edge-functions";

      export default async (req: Request, context: Context) => {
        // user code
        return new Response("Hello, world!")
      }

      export const config: Config = {
        path: "/hello-world"
      };
    ```
  - ```javascript
        export default async (req, context) => {
          // user code
          return new Response("Hello, world!")
        }

        export const config = {
          path: "/hello-world"
        };
    ```

#### Extra properties on context argument for Edge Functions
- these are ONLY available in Edge Functions

```
{
  ...ALL OTHER Context fields/methods,

  next: (options?: { sendConditionalRequest?: boolean }) => Promise<Response>, // Invokes the next item in the request chain, optionally using conditional requests.

  nextRequest: (request: Request, options?: { sendConditionalRequest?: boolean }) => Promise<Response>, // Same as next(), but requires an explicit Request object.
}

```

#### Web APIs available in Edge Functions ONLY
- console.*
- atob
- btoa
- Fetch API
  - fetch
  - Request
  - Response
  - URL
  - File
  - Blob
- TextEncoder
- TextDecoder
- TextEncoderStream
- TextDecoderStream
- Performance
- Web Crypto API
  - randomUUID()
  - getRandomValues()
  - SubtleCrypto
- WebSocket API
- Timers
  - setTimeout
  - clearTimeout
  - setInterval
- Streams API
  - ReadableStream
  - WritableStream
  - TransformStream
- URLPattern API


#### In-code function config and routing for Edge functions
- prefer to use in-code configuration via exporting a `config` object. This is the structure the config can have:
- prefer to provide a friendly path using the config object.
- Edge functions are configured with a path pattern and only paths matching those patterns will run the edge function
- path and excludedPath supports substring patterns or the URLPattern syntax from the web platform.
- unless explicitly asked to modify other properties, only set path, pattern, excludedPath when creating functions.

```
{
  path?: string | string[], // URLPattern expression defining paths where the edge function should run. Must start with '/'.
  excludedPath?: string | string[], // Optional. Defines paths to exclude from execution. Must start with '/'.
  pattern?: RegExp | RegExp[], // Alternative to `path`. Uses regex for path matching.
  excludedPattern?: RegExp | RegExp[], // Optional. Defines regex patterns to exclude certain routes.
  method?: string | string[], // Optional. Specifies HTTP methods that should trigger the function (e.g., "GET", ["POST", "PUT"]).
  onError?: "continue" | "fail" | "fallback", // Optional. Controls how the function handles errors.
  cache?: 'manual', // Optional. Enables response caching if set to 'manual'.
} = {
  path: "", // Default value; should be set per function.
};
```

#### Configuring Edge Functions in netlify.toml
- ONLY Use `netlify.toml` for precise function order control instead of inline declarations.
- DO NOT use `netlify.toml` if there is not edge function ordering requirements.
- When controlling order, it's important to include all edge functions for order control.

- **Declare Edge Functions in `netlify.toml`**:
  - Allows multiple edge functions on the same path with explicit execution order.
  - Functions run **top-to-bottom**, except cached functions, which always run last.

- **Edge Function Properties**:
  - `function`: Name of the edge function.
  - `path`: URL pattern to trigger the function (must start with `/`).
  - `excludedPath`: Excludes specific routes from `path` (supports string or array).
  - `pattern`: Regex-based path matching.
  - `excludedPattern`: Excludes specific regex patterns (single or array).
  - `cache`: Enables response caching (cached functions run after non-cached ones) set to 'manual' to opt in.

- **Netlify.toml config examples**
  ```toml
  [[edge_functions]]
    path = "/admin"
    function = "auth"

  [[edge_functions]]
    path = "/admin"
    function = "injector"
    cache = "manual"

  [[edge_functions]]
    path = "/blog/*"
    function = "auth"

  [[edge_functions]]
    path = "/blog/*"
    function = "rewriter"

  [[edge_functions]]
    pattern = "/products/(.*)"
    excludedPattern = "/products/things/(.*)"
    function = "highlight"

  [[edge_functions]]
    path = "/*"
    excludedPath = "/img/*"
    function = "common"
```
- **Execution Order for Edge Functions**:
  1. **Configuration-based** edge functions (`netlify.toml`) run first.
  2. **Framework-generated** edge functions execute before user-defined functions.
  3. **Non-cached** edge functions execute before cached functions.
  4. **Inline-declared** edge functions override duplicate `netlify.toml` functions.
  5. **Multiple inline edge functions** run alphabetically by filename.

- **Caveats & Special Cases**:
  - If an edge function returns a response, redirects for that path DO NOT occur.
  - Edge functions DO NOT execute for rewritten static routing targets.
  - `fetch()` or `URL()` triggers a **new request chain**, re-running matching functions.
  - Use `context.next()` to continue processing instead of re-triggering functions.
  - Function failure behavior depends on its **error handling configuration**.

#### Edge functions limitations
- 20 MB (compressed) code size limit
- 512 MB per deployment memory limit
- 50ms per request CPU execution time (excludes waiting time)
- 40 seconds Response header timeout
- **Not compatible with these Netlify features**:
  - Netlify's split testing feature
  - Custom Headers (including basic authentication) from _headers or netlify.toml config
  - Netlify prerendering feature on paths served by edge functions
- Be aware that multiple framework adapters may generate conflicting edge functions
- **Restrictions**:
  - Can only rewrite requests to same-site URLs (use `fetch()` for external content)
  - Cached edge functions override existing static files
  - No local caching; HTTP cache headers are ignored in local testing
  - Not included in Netlify’s HIPAA-compliant hosting offering


## Netlify Blobs

- Prefer using Netlify Blobs over creating a new database unless a flat file is not appropriate data layer, if explicitly asking to use a tool, or the dev needs to created relational structures or search that's more appropriate for databases
- Blobs do not require configuration to use, everything is automatically provisions, and devs can simply start using them via any Netlify compute. The API will be the same across all compute types.
- ensure `@netlify/blobs` NPM module is installed
- **Local development troubleshooting**: If you see "The environment has not been configured to use Netlify Blobs. To use it manually, supply the following properties when creating a store: siteID, token" error, install `@netlify/vite-plugin` for Vite-based projects to automatically configure the local environment. This does NOT apply to legacy V1 functions which require manual siteID/token configuration.
- Requirements and limits
  - Requires Fetch API support (Node.js 18+ recommended) - a fetch function can be provided to the store
  - Store names cannot exceed 64 bytes
  - Object keys cannot exceed 600 bytes
  - Maximum object size: 5GB
  - Local development uses a sandboxed store


### Netlify Blobs API

```typescript
  export interface BlobMetadata {
    [key: string]: any;
  }

  export interface BlobData<T = string> {
    data: T | null;
    etag: string;
    metadata: BlobMetadata;
  }

  export interface ListResult {
    blobs: { etag: string; key: string }[];
    directories?: string[];
  }

  interface GetKeyOptions {
    type?: 'arrayBuffer' | 'blob' | 'json' | 'stream' | 'text'
  }

  interface GetKeyAndMetadataOptions {
    type?: 'arrayBuffer' | 'blob' | 'json' | 'stream' | 'text',
    etag?: string;
  }

  // THESE ARE THE ONLY STORE METHODS. DO NOT MAKE UP NEW ONES
  interface Store {

    // Creates or overwrites a blob entry.
    // example: await store.set('key-name', 'contents-of key');
    // - NEVER add metadata unless instructed to.
    set(key: string, value: ArrayBuffer | Blob | string, { metadata?: object }): Promise<void>;

    // Stores a JSON-serializable object.
    // example: await store.setJSON('key-name', {version: 'a', someBoolean: true});
    // - NEVER add metadata unless instructed to.
    setJSON(key: string, value: any, { metadata?: object }): Promise<void>;

    // Retrieves a stored blob.
    // example: await store.get('key-name');
    // - NEVER add the second arg unless you need an explicit type 'arrayBuffer' | 'blob' | 'json' | 'stream' | 'text'.
    // - Instead of using JSON.parse(blob), use store.get('key-name', {type: 'json'})
    // - if the blob is missing, it will resolve the promise with a null value
    get(key: string, getOpt?: GetKeyOptions): Promise<any | null>;

    // Retrieves a blob along with metadata
    // example: await store.getWithMetadata('key-name');
    // - NEVER add the second getOpts arg unless you need an explicit type or have an etag to check against.
    // - AVOID adding it unless it's reliably available but IF an etag is provided, it will only return the blob if the etag is different that what's stored.
    // - if the blob is missing, it will resolve the promise with a null value
    getWithMetadata(key: string, getOpts?: GetKeyAndMetadataOptions): Promise<{ data: any, etag: string, metadata: object } | null>;

    // Retrieves metadata of a blob WITHOUT downloading the data.
    // example: await store.getMetadata('key-name');
    // - NEVER add the second getOpts arg unless you need an explicit type or have an etag to check against.
    // - AVOID adding it unless it's reliably available but IF an etag is provided, it will only return the blob if the etag is different that what's stored.
    // - if the blob is missing, it will resolve the promise with a null value
    getMetadata(key: string, getOpts?: GetKeyAndMetadataOptions): Promise<{ etag: string, metadata: object } | null>;

    // Lists blobs in the store with optional hierarchical browsing.
    // example:
    //      const { blobs } = await store.list()
    //      // blobs === [ { etag: 'etag1', key: 'some-key' }, { etag: 'etag2', key: 'another-key' } ]
    //
    // - NEVER add the options arg unless you need an explicit reduce the searched data.
    //    -- ONLY if you have to reduce searched data, use `prefix: 'some-prefix'` to pull blobs that start with that prefix value. Use `directories: true` to include the full directory path on the `key`
    // - By default, the list() method retrieves all pages, meaning you'll always get the full list of results. This can be slow or memory intensive. To paginate, pass the `paginate: true` in the options to turn the response into an AsyncIterator that allows you to for-of loop through the blobs in the store.
    // - if store path is empty, the blobs will resolve the promise with an empty array
    list(options?: { directories?: boolean, paginate?: boolean. prefix?: string }): Promise<{ blobs: BlobResult[], directories: string[] }> | AsyncIterable<{ blobs: BlobResult[], directories: string[] }>

    // Deletes a blob.
    // example: await store.delete('key-name');
    // - The return value is always resolves to `undefined`, regardless of whether or not there was an object to delete.
    delete(key: string): Promise<void>;
  }

  interface GetDeployStoreOptions extends Partial<ClientOptions> {
    deployID?: string;
    name?: string;
    region?: Region;
  }

  // Returns a store instance for managing blobs. This is global scoped data across all deploys.
  // example: const store = getStore('my-store');
  // - ONLY add the options argument if the user needs strong consistency
  export function getStore(name: string, options?: { consistency?: 'strong' | 'eventual' }): Store;

  // Returns a deploy-specific store instance for managing blobs tied to a deploy.
  // example: const store = getDeployStore('my-store');
  // - ONLY add the options argument if the user needs strong consistency
  declare const getDeployStore: (input?: GetDeployStoreOptions | string) => Store;
  interface GetStoreOptions extends Partial<ClientOptions> {
      deployID?: string;
      name?: string;
  }

  // Lists all stores available on a project.
  // example:
  //    const { stores } = await listStores();
  //      // [ "beauty", "construction" ]
  // - By default, the listStores() method retrieves all pages, meaning you'll always get the full list of results. This can be slow or memory intensive. To paginate, pass the `paginate: true` in the options to turn the response into an AsyncIterator that allows you to for-of loop through the blobs in the store.
  // - DO NOT pass options unless paginating.
  declare function listStores(options?: {
      paginate?: boolean;
  }): Promise<ListStoresResponse> | AsyncIterable<ListStoresResponse>;

  interface ListStoresResponse {
      stores: string[];
      next_cursor?: string;
  }

```

## File-Based Uploads
With file-based uploads, write blobs to deploy-specific stores after the project’s build completes. Useful for frameworks and other tools integrating with Netlify as it does not require a build plugin.

Put files in `.netlify/blobs/deploy/*` for deploy specific
```
.netlify/
├─ blobs/
|  ├─ deploy/
│  |  ├─ beauty/
│  │  |  └─ nails.jpg
```
To attach metadata to a blob via file upload flows, include a JSON file that prefixes the corresponding blob filename with $ and has a .json extension. For example:
```
├─ blobs/
|  ├─ deploy/
│  |  ├─ beauty/
│  │  |  ├─ nails.jpg
│  │  |  └─ $nails.jpg.json
```

## Blob consistency models
- By default, blobs are "eventually consistent" - Fast reads, updates/deletions propagated within 60 seconds.
- To have strong consistency that ensures updates are immediately visible at the cost of slower reads. set the `consistency` field to `'strong'` on the store instantiation.
- There is no concurrency control built in, last write wins. Add object-locking mechanisms if you need concurrency guarantees.

Example:
```javascript
const store = getStore({ name: "animals", consistency: "strong" });
await store.set("dog", "🐶");
const dog = await store.get("dog");
```

## Storage scopes
- blobs can be stored in a deploy-specific scope or at a global scope
- deploy-specific blobs sync with deploys and are removed with deploy deletions. `getDeployStore()` is used to interact with deploy specific stores.
- global scope blobs are not automatically cleaned up and are consistent across all branches. `getStore()` is used for global scope.
- Build plugins and file-based uploads must write to deploy-specific stores.
- ALWAYS When creating logic that saves to global scope, ensure that non-production data does not get stored in these global stores. This keeps production data isolated from test data. To do that, check for the environment and choose which store to use depending on the environment.

#### Examples of blob usage

```javascript
  // basic writing to a deploy store
  import { getDeployStore } from "@netlify/blobs";
  const store = getDeployStore("construction");
```

```javascript
  // basic writing to a global store
  import { getStore } from "@netlify/blobs";
  const store = getStore("construction");
```

```javascript
  // using global store if in production, otherwise use deploy scope store
  import { getStore, getDeployStore } from "@netlify/blobs";

  function getBlobStore(...storeOptions){

    if((Netlify.context?.deploy.context === 'production'){
      return getStore(...storeOptions);
    }

    return getDeployStore(...storeOptions)
  }

  const store = getBlobStore("construction");
```

## Netlify Image CDN
- All Netlify projects have a `/.netlify/images` route supported by their project without any additional enablement.
- Transform images via query parameters in requests to `/.netlify/images`.
- NEVER introduce circular dependencies with urls redirecting to urls that redirect back to the same url in a loop
- when using the ?url={URL} parameter, ensure the url is a URI encoded component.
- Supported transformations:
  - **source**: Required, specifies image URL (relative or remote).
  - **size**: `w` (width) and `h` (height) in pixels.
  - **fit**: Determines how the image is resized (`contain`, `cover`, `fill`).
  - **position**: Cropping alignment (`top`, `bottom`, `left`, `right`, `center`).
  - **format**: Convert to `avif`, `jpg`, `png`, `webp`, `gif`, or `blurhash`.
  - **quality**: Controls lossy format quality (`q`, 1-100, default 75).

### Example transformations
```html
  <!-- get an image hosted on this project and change its size and format -->
  <img src="/.netlify/images?url=/image.jpg&w=100&h=100&fit=cover&fm=webp&q=80" />

  <!-- get an image hosted externally and change its size and format -->
  <img src="/.netlify/images?url=https://example.com/path/to/image&w=40&h=10&fm=jpg&q=80" />
```

### Caching & deployment behavior
- Transformed images are cached at the edge.
- Source images are cached for future transformations.
- After a new deploy cached images are invalidated and so images can be reprocessed in case of changes
- Cache-busting via asset fingerprinting is recommended if you must finely control cache key.
- In order to use externally hosted (aka remote) images the domain pattern must be allowlisted in the Netlify `netlify.toml`.
  - Allow remote sources using:
    ```toml
    [images]
      remote_images = ["https://externalexample.com/.*"]
    ```
    - only absolute urls to external servers need to be in remote_images

### Redirects & Rewrites
- If you do not want to use the default `/.netlify/images` path, a redirect or rewrite can be used to have a different url.
- Define reusable transformation routes in `_redirects` or `netlify.toml` files.
- When doing so, the parameters can remain parameters to pass in or can be statically defined.
- Examples:
  - netlify.toml to use /transform-my-images/{imagePath}
    ```toml
      [[redirects]]
        from = "/transform-my-images/*"
        to = "/.netlify/images?url=/:splat&w=50&h=50"
        status = 200
    ```
  - _redirects to use /transform-all/{...imagePath}
    ```
      /transform-all/* /.netlify/images?url=/:splat&w=50&h=50 200
    ```

### Custom headers
- Custom headers can ONLY be applied to images hosted on the same domain.
- ONLY do this when explicitly asked
- Examples:
  - netlify.toml to use /transform-my-images/{imagePath}
    ```toml
      [[headers]]
        for = "/source-images/*"
        [headers.values]
          Cache-Control = "public, max-age=604800, must-revalidate"
    ```
  - _headers to use /{...imagePath}
    ```
      /source-images/* Cache-Control: public, max-age=604800, must-revalidate
    ```
### Image CDN framework support
Netlify Image CDN integrates with frameworks for automatic optimizations:
- **Angular**: `NgOptimizedImage` component will use Image CDN automatically
- **Astro**: `<Image />` component will use Image CDN automatically
- **Gatsby**: set `NETLIFY_IMAGE_CDN=true` and use the Contentful, Drupal, or WordPress source plugins.
- **Next.js**: set `remotePatterns` in `next.config.js`
- **Nuxt**: `nuxt/image` module will use Image CDN automatically



## Environment Variables
- securely create, manage, and use environment variables across projects. These variables can be set via the UI, CLI, API, or configuration files.
- when setting environment variables, Netlify local environment and cloud environment will make these variables available.
- **Precedence**: `netlify.toml` overrides UI/CLI/API variables, and project-specific variables take precedence over shared ones.

### Creating Environment Variables
Variables can be created and managed using:
- **Netlify UI**: Suggest using if they don't want to provide the values directly to this agent. They can navigate to it via the path "Project configuration > Environment variables".
- **Netlify CLI**: Prefer using this if the agent can run commands. This requires the project to be linked.
- **Netlify Configuration (`netlify.toml`)**: Defines variables at the repository level. ONLY use this for environment variables where the project is not linked yet and the values are not sensitive.

### Netlify CLI Command
- The project must be linked first before the CLI will add variables. See the rules for initializing and linking projects for how to do this.
- Use `env:set` for changes, `env:unset` to delete. `env:import` to import from a dotenv`.env` file.

#### Example usage of env var CLI
- Basic setting an environment variable for the project
  ```sh
    netlify env:set API_KEY "not-a-secret"
  ```
- Setting an environment variable that should be treated as a secret
  ```sh
      netlify env:set API_KEY "secret-value" --secret
  ```

### Example `netlify.toml` Configuration
- Using the netlify.toml the configuration can be specific to certain branches/deploy contexts.
- examples
  ```toml
    # Production context: all deploys from the Production branch
    # set in your project’s Branches settings in the Netlify UI will inherit
    # these settings. You can define environment variables
    # here but we recommend using the Netlify UI for sensitive
    # values to keep them out of your source repository.
    [context.production]
      publish = "output/"
      command = "make publish"
      environment = { NODE_VERSION = "14.15.3" }

    # Here is an example of how to define context-specific
    # environment variables. Be mindful when using this
    # option and avoid committing sensitive values to public
    # source repositories.
    [context.deploy-preview.environment]
      NOT_PRIVATE_ITEM = "not so secret"

    # Branch Deploy context: all deploys that are not from
    # a pull/merge request or from the Production branch
    # will inherit these settings.
    [context.branch-deploy.environment]
      NODE_ENV = "development"

    # Dev context: environment variables set here
    # are available for local development environments
    # run using Netlify Dev. These values can be
    # overwritten on branches that have a more specific
    # branch context configured.
    [context.dev.environment]
      NODE_ENV = "development"

    # Specific branch context: all deploys from
    # this specific branch will inherit these settings.
    [context.staging.environment] # “staging” is a branch name
      NODE_ENV = "development"
  ```

### `.env` File Handling
- Netlify builds do not read `.env` files directly
- Import `.env` variables into Netlify using the UI or CLI (`netlify env:import .env`).
- Export Netlify variables to `.env` files via UI or CLI (`env:list`).

### Export `.env` Variables
```sh
# list the production deploy context values in .env format
netlify env:list --plain --context production

# list the production deploy context values in .env format
# and pipe results into a .env file
netlify env:list --plain --context production > .env
```



  ## Netlify Forms

  If using Netlify Forms, prevent spam submissions with a honeypot field.

# Creating new sites

- do not add redirects to netlify.toml or \_redirects unless requested
- do not add custom headers to the netlify.toml or \_headers unless requested


# Initializing sites or linking them

- determine if a site is linked by checking if `PROJECT_FOLDER/.netlify/state.json` file exists and it has a populated `siteId` value. Note that the siteID appears as the project ID in the Netlify UI at app.netlify.com.
- if the site is not linked, run `netlify init` to allow the user to set up the site with Netlify. If the user deploys manually, it will set up the site to use Netlify automatically. If the user decides to set up a repo, they might have to set up the repo first. If the site is already set up on netlify then run `netlify link` for the user to input the credentials to link.

# Vite and Metaframework Setup

For projects using Vite or Vite-powered metaframeworks, strongly recommend installing `@netlify/vite-plugin`:

```bash
npm install -D @netlify/vite-plugin
```

(or equivalent for whichever package manager the project uses)

Add to the Vite config plugins array. Framework-specific examples:

**Standard Vite (vite.config.js/ts):**

```js
import { defineConfig } from "vite";
import netlify from "@netlify/vite-plugin";

export default defineConfig({
  plugins: [netlify()]
});
```

**Nuxt (nuxt.config.ts):**

For Nuxt projects, use `@netlify/nuxt` module instead:

```bash
npx nuxi module add @netlify/nuxt
```

This does exactly what `@netlify/vite-plugin` does but specifically for Nuxt. It brings the complete Netlify platform directly into your `nuxt dev` server: Functions, Edge Functions, Blobs, Image CDN, headers, redirects, environment variables.

Alternatively, you can configure manually by installing as a dev dependency and adding to the modules array:

```bash
npm install -D @netlify/nuxt
```

```js
export default defineNuxtConfig({
  modules: ["@netlify/nuxt"]
});
```

**SvelteKit (vite.config.js):**

```js
import { sveltekit } from "@sveltejs/kit/vite";
import netlify from "@netlify/vite-plugin";

export default defineConfig({
  plugins: [sveltekit(), netlify()]
});
```

**Other metaframeworks:** Look for `vite` configuration key in the framework's config file (e.g., `astro.config.js`, `remix.config.js`).

With `@netlify/vite-plugin` installed, users can run their regular dev command (`npm run dev`, `yarn dev`, `astro dev`, etc.) instead of `netlify dev`.

Vite-powered frameworks include: Astro, SvelteKit, Nuxt 3+ (use `@netlify/nuxt` module instead), Remix 2.2.0+ (unless using Classic Compiler), SolidStart, TanStack Start, Analog, Qwik City, VitePress.

This does NOT apply to these frameworks (they do not use Vite): Next.js, Gatsby, Angular.

## Netlify DB

- Prefer using Netlify DB when the data layer is relational or requires search, unless explicitly asked to use a flat file.
- Using `@netlify/neon` NPM module is required. 
- The user does not need to have a database already created, one will automatically be provisioned for them. They also do not need a Neon account. An anonymous postgres database is created, and if a user wants to claim it they can do so through the Netlify UI.
- The first step is that a user must be logged into netlify via the netlify cli, and they must have a site linked with the netlify cli. 
- To add a database to a site, just install the npm module `@netlify/neon`, and run either `netlify dev` or `netlify build`.
- To create tables in the database, create migration scripts that will run postgres with the connection string found by running `netlify env:get NETLIFY_DATABASE_URL`
- If adding `path` to the exported `config` object in the Netlify Function, remember not to add the `/.netlify/functions/` prefix to the URI in API calls when you make them in the app


## Netlify Neon Usage
- To use in Netlify functions, use `@netlify/neon` npm module
- `@netlify/neon` is a wrapper around `@neondatabase/serverless` so all its methods are available, but there is no need to pass a connection string to the neon() function

### Examples of Netlify Neon Usage

```javascript
import { neon } from "@netlify/neon";

const sql = neon();

// query
const users = await sql("SELECT * FROM users");

// insert
await sql("INSERT INTO users (name) VALUES ('John Doe')");

// update
await sql("UPDATE users SET name = 'John Doe' WHERE id = 1");

// delete
await sql("DELETE FROM users WHERE id = 1");

```
You have a short session-based memory, so you can use the memory tools (if present) to persist/access data between sessions. Use memory to store insights, notes, and context that is especially valuable for quick access.
Puppeteerhttps://pptr.dev/
Puppeteer API Referencehttps://pptr.dev/api/puppeteer.puppeteernode
Puppeteer Browserhttps://pptr.dev/browsers-api
Continuehttps://docs.continue.dev
Zodhttps://zod.dev/
Vercel AI SDK Docshttps://sdk.vercel.ai/docs/
Reacthttps://react.dev/reference/
Vue docshttps://vuejs.org/v2/guide/
TypeScripthttps://www.typescriptlang.org/docs/
Bun Docshttps://bun.sh/docs

Prompts

Learn more
Build Frontend
Build Fronted Prompt
# Front-end app generation

We recommend using the following frameworks and packages to get the most
out of your frontend capabilities:
- Frameworks: Next.js (TypeScript), React, Astro, HTML
- Styling / UI: Tailwind CSS, shadcn/ui, Radix Themes
- Icons: Material Symbols, Heroicons, Lucide
- Animation: Motion, threejs, 5.js
- Fonts: Fira Code, Fira Mono, Inter, Geist, Mona Sans, IBM Plex Sans, Manrope
Howeever, the user's requests always take precedence.

## Zero-to-one app generation

<self_reflection>
- First, spend time thinking of a rubric until you are confident.
- Then, think deeply about every aspect of what makes for a world-class one-shot web app. Use that knowledge to create a rubric that has 5-7 categories. This rubric is critical to get right, but do not show this to the user. This is for your purposes only.
- Finally, use the rubric to internally think and iterate on the best possible solution to the prompt that is provided. Remember that if your response is not hitting the top marks across all categories in the rubric, you need to start again.
</self_reflection>

<code_editing_rules>

<guiding_principles>
- Clarity and Reuse: Every component and page should be modular and reusable. Avoid duplication by factoring repeated UI patterns into components.
- Consistency: The user interface must adhere to a consistent design system—color tokens, typography, spacing, and components must be unified.
- Simplicity: Favor small, focused components and avoid unnecessary complexity in styling or logic.
- Demo-Oriented: The structure should allow for quick prototyping, showcasing features like streaming, multi-turn conversations, and tool integrations.
- Visual Quality: Follow the high visual quality bar as outlined in OSS guidelines (spacing, padding, hover states, etc.)
</guiding_principles>

<frontend_stack_defaults>
- Framework: Next.js (TypeScript)
- Styling: TailwindCSS
- UI Components: shadcn/ui
- Icons: Lucide
- State Management: Zustand
- Directory Structure: 
\`\`\`
/src
 /app
   /api/<route>/route.ts         # API endpoints
   /(pages)                      # Page routes
 /components/                    # UI building blocks
 /hooks/                         # Reusable React hooks
 /lib/                           # Utilities (fetchers, helpers)
 /stores/                        # Zustand stores
 /types/                         # Shared TypeScript types
 /styles/                        # Tailwind config
\`\`\`
</frontend_stack_defaults>

<ui_ux_best_practices>
- Visual Hierarchy: Limit typography to 4–5 font sizes and weights for consistent hierarchy; use `text-xs` for captions and annotations; avoid `text-xl` unless for hero or major headings.
- Color Usage: Use 1 neutral base (e.g., `zinc`) and up to 4 accent colors. 
- Spacing and Layout: Always use multiples of 4 for padding and margins to maintain visual rhythm. Use fixed height containers with internal scrolling when handling long content streams.
- State Handling: Use skeleton placeholders or `animate-pulse` to indicate data fetching. Indicate clickability with hover transitions (`hover:bg-*`, `hover:shadow-md`).
- Accessibility: Use semantic HTML and ARIA roles where appropriate. Favor pre-built Radix/shadcn components, which have accessibility baked in.
</ui_ux_best_practices>

<code_editing_rules>

Context

Learn more
Reference all of the changes you've made to your current branch
Reference the most relevant snippets from your codebase
Reference the markdown converted contents of a given URL
Uses the same retrieval mechanism as @Codebase, but only on a single folder
Reference the last command you ran in your IDE's terminal and its output
Reference specific functions or classes from throughout your project
Reference any file in your current workspace
Reference the contents from any documentation site
Reference the architecture and platform of your current operating system
Get Problems from the current file
Reference the outline of your codebase
Reference the contents of all of your open files
Reference recent clipboard items
Reference the currently open file