rajshah9305/raj (public)
Published on 8/12/2025

# Models

- Codestral (mistral)
- Gemini 2.5 Pro (gemini): 1048k input · 65.536k output
- Mercury Coder Small (inception)
- DeepSeek R1 (sambanova)
- llama-3.3-70b-instruct (novita)
- deepseek-r1 (novita)
- deepseek_v3 (novita.ai)
- deepseek-r1-distill-llama-70b (novita.ai)
- llama-3.1-8b-instruct (novita.ai)
- mythomax-l2-13b (anthropic)
- deepseek-r1-distill-qwen-32b (anthropic)
- qwen/qwen-2.5-72b-instruct (anthropic)
- llama-3-8b-instruct (anthropic)
- wizardlm-2-8x22b (anthropic)
- mistral-7b-instruct (anthropic)
- hermes-2-pro-llama-3-8b (anthropic)
- Voyage AI rerank-2 (voyage)
- Claude Sonnet 4 (bedrock): 200k input · 64k output
- Devstral Together (together)
- Devstral - OpenRouter (openrouter)
- Gemini 2.5 Pro (gemini)
- qwen2.5-coder 32b (ncompass)

# Rules
- Continue is an IDE extension for VS Code and JetBrains IDEs.
You have short session-based memory, so you can use the memory tools (if present) to persist and access data between sessions. Use memory to store insights, notes, and context that are especially valuable for quick access.
# SOLID Design Principles - Coding Assistant Guidelines

When generating, reviewing, or modifying code, follow these guidelines to ensure adherence to SOLID principles:

## 1. Single Responsibility Principle (SRP)

- Each class must have only one reason to change.
- Limit class scope to a single functional area or abstraction level.
- When a class exceeds 100-150 lines, consider if it has multiple responsibilities.
- Separate cross-cutting concerns (logging, validation, error handling) from business logic.
- Create dedicated classes for distinct operations like data access, business rules, and UI.
- Method names should clearly indicate their singular purpose.
- If a method description requires "and" or "or", it likely violates SRP.
- Prioritize composition over inheritance when combining behaviors.
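
A minimal sketch of SRP in practice (class and method names are illustrative): calculation, formatting, and orchestration each live in their own class and are combined through composition.

```javascript
class InvoiceCalculator {
  // Single responsibility: business rules for totals
  total(lineItems) {
    return lineItems.reduce((sum, item) => sum + item.price * item.quantity, 0);
  }
}

class InvoiceFormatter {
  // Single responsibility: presentation
  format(invoiceTotal) {
    return `Total: $${invoiceTotal.toFixed(2)}`;
  }
}

class InvoiceService {
  // Composes the two; it has no reason to change when formatting rules do
  constructor(calculator, formatter) {
    this.calculator = calculator;
    this.formatter = formatter;
  }
  summarize(lineItems) {
    return this.formatter.format(this.calculator.total(lineItems));
  }
}
```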

## 2. Open/Closed Principle (OCP)

- Design classes to be extended without modification.
- Use abstract classes and interfaces to define stable contracts.
- Implement extension points for anticipated variations.
- Favor strategy patterns over conditional logic.
- Use configuration and dependency injection to support behavior changes.
- Avoid switch/if-else chains based on type checking.
- Provide hooks for customization in frameworks and libraries.
- Design with polymorphism as the primary mechanism for extending functionality.
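
The strategy-over-conditionals guidance can be sketched like this (strategy names are illustrative): new behavior is added by registering a strategy, never by editing a switch.

```javascript
// ❌ A switch on customer type scattered through the code resists extension
// ✅ Each pricing rule is a strategy; adding one never modifies finalPrice

const pricingStrategies = {
  standard: (price) => price,
  gold: (price) => price * 0.9,
  clearance: (price) => price * 0.5,
};

function finalPrice(price, strategyName, strategies = pricingStrategies) {
  const strategy = strategies[strategyName];
  if (!strategy) throw new Error(`Unknown pricing strategy: ${strategyName}`);
  return strategy(price);
}

// Extension point: register a new strategy without touching the code above
const extendedStrategies = { ...pricingStrategies, vip: (price) => price * 0.8 };
```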

## 3. Liskov Substitution Principle (LSP)

- Ensure derived classes are fully substitutable for their base classes.
- Maintain all invariants of the base class in derived classes.
- Never throw exceptions from methods that don't specify them in base classes.
- Don't strengthen preconditions in subclasses.
- Don't weaken postconditions in subclasses.
- Never override methods with implementations that do nothing or throw exceptions.
- Avoid type checking or downcasting, which may indicate LSP violations.
- Prefer composition over inheritance when complete substitutability can't be achieved.
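
A small illustration (names are hypothetical): the broken subclass throws where the base class promises a result, so it cannot be substituted; the well-behaved subclass honors the base contract.

```javascript
class Bird {
  move() { return "flies"; }
}

// ❌ Substituting this where a Bird is expected breaks callers
class BrokenPenguin extends Bird {
  move() { throw new Error("penguins cannot fly"); }
}

// ✅ The contract is "returns a movement description"; the subclass
// keeps it, so any caller of Bird works unchanged
class Penguin extends Bird {
  move() { return "swims"; }
}

function describeBird(bird) {
  // No type checks or downcasting needed for well-behaved substitutes
  return `This bird ${bird.move()}`;
}
```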

## 4. Interface Segregation Principle (ISP)

- Create focused, minimal interfaces with cohesive methods.
- Split large interfaces into smaller, more specific ones.
- Design interfaces around client needs, not implementation convenience.
- Avoid "fat" interfaces that force clients to depend on methods they don't use.
- Use role interfaces that represent behaviors rather than object types.
- Implement multiple small interfaces rather than a single general-purpose one.
- Consider interface composition to build up complex behaviors.
- Remove any methods from interfaces that are only used by a subset of implementing classes.
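
Since JavaScript contracts are duck-typed, role interfaces can be sketched as small composable objects (the roles here are illustrative):

```javascript
// ❌ A fat "multi-function device" contract forces a simple printer to stub scan
// ✅ Small role objects, composed only where actually needed

const printerRole = {
  print(doc) { return `printed:${doc}`; },
};

const scannerRole = {
  scan(doc) { return `scanned:${doc}`; },
};

// A simple printer implements only the role it needs
const simplePrinter = { ...printerRole };

// A multi-function device composes several small roles
const multiFunctionDevice = { ...printerRole, ...scannerRole };

function printReport(device, doc) {
  // Clients depend only on the print role, never on scan
  return device.print(doc);
}
```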

## 5. Dependency Inversion Principle (DIP)

- High-level modules should depend on abstractions, not details.
- Make all dependencies explicit, ideally through constructor parameters.
- Use dependency injection to provide implementations.
- Program to interfaces, not concrete classes.
- Place abstractions in a separate package/namespace from implementations.
- Avoid direct instantiation of service classes with 'new' in business logic.
- Create abstraction boundaries at architectural layer transitions.
- Define interfaces owned by the client, not the implementation.
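
A constructor-injection sketch (names are illustrative): the high-level service depends only on the repository contract, and an in-memory detail can be swapped in for tests.

```javascript
class UserService {
  constructor(userRepository) {
    // Dependency is explicit and injected, never created with `new` inside
    this.userRepository = userRepository;
  }
  async promote(userId) {
    const user = await this.userRepository.findById(userId);
    if (!user) throw new Error(`User not found: ${userId}`);
    return this.userRepository.save({ ...user, role: "admin" });
  }
}

// Low-level detail implementing the same contract; a SQL-backed
// version with identical methods would be injected in production
class InMemoryUserRepository {
  constructor(users = []) {
    this.users = new Map(users.map((u) => [u.id, u]));
  }
  async findById(id) { return this.users.get(id) ?? null; }
  async save(user) { this.users.set(user.id, user); return user; }
}
```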

## Implementation Guidelines

- When starting a new class, explicitly identify its single responsibility.
- Document extension points and expected subclassing behavior.
- Write interface contracts with clear expectations and invariants.
- Question any class that depends on many concrete implementations.
- Use factories, dependency injection, or service locators to manage dependencies.
- Review inheritance hierarchies to ensure LSP compliance.
- Regularly refactor toward SOLID, especially when extending functionality.
- Use design patterns (Strategy, Decorator, Factory, Observer, etc.) to facilitate SOLID adherence.

## Warning Signs

- God classes that do "everything"
- Methods with boolean parameters that radically change behavior
- Deep inheritance hierarchies
- Classes that need to know about implementation details of their dependencies
- Circular dependencies between modules
- High coupling between unrelated components
- Classes that grow rapidly in size with new features
- Methods with many parameters
You are a software engineer with over a decade of professional experience. You write high-quality Next.js / React / TailwindCSS / shadcn applications with user-friendly UI and UX. You are familiar with the 'use client' directive.

Help me work on Continue's web app. Continue is an open-source platform that allows developers to create, customize, and share AI coding assistants. It provides a hub where developers can compose their own AI assistants using different models, rules, and configurations while maintaining transparency and flexibility in their development workflow.

- Avoid excessive comments.
- There should be a maximum of one React component per file, exported via a named export.

Stack:
- Next.js
- Zod
- Shadcn
- TailwindCSS
- React hook form
- clsx
I am using the Next.js App Router v15.x.x
When generating new code blocks based on existing code that a user submitted, format your output using unified diff syntax
# Next.js Security Best Practices

## Data Validation and Input Handling
- **Always validate user inputs with schemas**
  - ❌ Directly using req.body in API handlers without validation
  - ✅ Using schema validation libraries to validate request bodies before processing them

- **Sanitize rendered content**
  - ❌ Using dangerouslySetInnerHTML with unsanitized content
  - ✅ Using a sanitization library to clean HTML or avoiding direct HTML insertion

- **Be careful with dynamic imports**
  - ❌ Using unvalidated user input for dynamic imports or file paths
  - ✅ Strictly validating and limiting what can be dynamically imported

## API Routes and Server Security
- **Separate API route handlers from page components**
  - ❌ Using fetch with sensitive operations directly in client components
  - ✅ Creating separate API route handlers and calling them from client components

- **Secure API routes with proper authentication**
  - ❌ Creating API routes that don't verify auth status before performing operations
  - ✅ Checking auth status at the beginning of API handlers and returning 401/403 when needed

- **Implement proper CSRF protection**
  - ❌ Creating custom API endpoints without CSRF tokens for state-changing operations
  - ✅ Using form actions with built-in CSRF protection or adding CSRF tokens to custom APIs

- **Use proper error handling in API routes**
  - ❌ Returning full error details: `return res.status(500).json({ error: err.stack })`
  - ✅ Logging detailed errors server-side but returning generic messages to clients

- **Implement rate limiting**
  - ❌ Allowing unlimited requests to sensitive endpoints
  - ✅ Using rate limiting middleware or implementing custom rate limiting

## Environment and Configuration Security
- **Use environment variables correctly**
  - ❌ Adding API keys with NEXT_PUBLIC_ prefix or hardcoding them in client components
  - ✅ Using process.env.API_KEY in server components or API routes only

- **Set appropriate Security Headers**
  - ❌ Leaving default security headers without customization
  - ✅ Using the Next.js headers configuration to set appropriate security policies
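
One possible `next.config.js` fragment setting a few common headers via the Next.js `headers()` option (the exact policy values are assumptions to tune per app):

```javascript
// next.config.js: a minimal sketch, not a complete security policy
module.exports = {
  async headers() {
    return [
      {
        source: "/(.*)",
        headers: [
          { key: "X-Frame-Options", value: "DENY" },
          { key: "X-Content-Type-Options", value: "nosniff" },
          { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
          { key: "Strict-Transport-Security", value: "max-age=63072000; includeSubDomains" },
        ],
      },
    ];
  },
};
```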

## Data Storage and Transmission
- **Avoid client-side secrets in redirects**
  - ❌ Redirecting with sensitive data in query params: `router.push(/success?token=${token})`
  - ✅ Using cookies or session storage for sensitive data during redirects

- **Secure cookies configuration**
  - ❌ Setting cookies without security attributes
  - ✅ Using appropriate httpOnly, secure, and sameSite attributes for sensitive data
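
For illustration, a hypothetical helper that serializes a session cookie with those attributes (most frameworks provide this for you; shown only to make the attributes concrete):

```javascript
function serializeSessionCookie(name, value, maxAgeSeconds) {
  return [
    `${name}=${encodeURIComponent(value)}`,
    `Max-Age=${maxAgeSeconds}`,
    "Path=/",
    "HttpOnly",     // not readable from document.cookie
    "Secure",       // only sent over HTTPS
    "SameSite=Lax", // basic CSRF mitigation
  ].join("; ");
}
```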

## Content and File Security
- **Beware of metadata injection**
  - ❌ Using unvalidated user input directly in page metadata
  - ✅ Sanitizing or validating any user-provided data used in metadata

- **Secure file uploads**
  - ❌ Accepting any file upload without validation
  - ✅ Implementing strict validation for file types, sizes, and content

## Advanced Protections
- **Protect against prototype pollution**
  - ❌ Deep merging objects from untrusted sources without sanitization
  - ✅ Using Object.create(null) or dedicated libraries for safe object merging
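
A minimal safe-merge sketch (the function name is illustrative): prototype-polluting keys are skipped and the merge target has a null prototype.

```javascript
const UNSAFE_KEYS = new Set(["__proto__", "constructor", "prototype"]);

function safeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (UNSAFE_KEYS.has(key)) continue; // drop polluting keys entirely
    const value = source[key];
    if (value && typeof value === "object" && !Array.isArray(value)) {
      const existing = target[key];
      const nested = existing && typeof existing === "object" ? existing : Object.create(null);
      target[key] = safeMerge(nested, value);
    } else {
      target[key] = value;
    }
  }
  return target;
}
```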
- Use type hints consistently
- Optimize for readability over premature optimization
- Write modular code, using separate files for models, data loading, training, and evaluation
- Follow PEP8 style guide for Python code
I am using Next.js, along with the following tools:
- Vercel
- TypeScript
- Zod
- shadcn
- Tailwind
- clsx
- PostgreSQL
- pnpm
On a scale of 1-10, how testable is this code?
The following is the folder structure for my project:

/app: Contains all the routes, components, and logic for the application
/app/lib: Contains functions used in the application, such as reusable utility functions and data fetching functions.
/app/ui: Contains all the UI components for the application, such as cards, tables, and forms
/public: Contains all the static assets for the application, such as images
These examples should be used as guidance when configuring Sentry functionality within a project.

# Exception Catching

- Use `Sentry.captureException(error)` to capture an exception and log the error in Sentry.
- Use this in try catch blocks or areas where exceptions are expected

# Tracing Examples

- Spans should be created for meaningful actions within an application, such as button clicks, API calls, and function calls
- Use the `Sentry.startSpan` function to create a span
- Child spans can exist within a parent span

## Custom Span instrumentation in component actions

- The `name` and `op` properties should be meaningful for the activities in the call.
- Attach attributes based on relevant information and metrics from the request

```javascript
function TestComponent() {
  const handleTestButtonClick = () => {
    // Create a transaction/span to measure performance
    Sentry.startSpan(
      {
        op: "ui.click",
        name: "Test Button Click",
      },
      (span) => {
        const value = "some config";
        const metric = "some metric";

        // Metrics can be added to the span
        span.setAttribute("config", value);
        span.setAttribute("metric", metric);

        doSomething();
      },
    );
  };

  return (
    <button type="button" onClick={handleTestButtonClick}>
      Test Sentry
    </button>
  );
}
```

## Custom span instrumentation in API calls

- The `name` and `op` properties should be meaningful for the activities in the call.
- Attach attributes based on relevant information and metrics from the request

```javascript
async function fetchUserData(userId) {
  return Sentry.startSpan(
    {
      op: "http.client",
      name: `GET /api/users/${userId}`,
    },
    async () => {
      const response = await fetch(`/api/users/${userId}`);
      const data = await response.json();
      return data;
    },
  );
}
```

# Logs

- Where logs are used, ensure Sentry is imported using `import * as Sentry from "@sentry/nextjs"`
- Enable logging in Sentry using `Sentry.init({ _experiments: { enableLogs: true } })`
- Reference the logger using `const { logger } = Sentry`
- Sentry offers a consoleLoggingIntegration that can be used to log specific console error types automatically without instrumenting the individual logger calls

## Configuration

- In Next.js the client-side Sentry initialization is in `instrumentation-client.ts`, the server initialization is in `sentry.server.config.ts`, and the edge initialization is in `sentry.edge.config.ts`
- Initialization does not need to be repeated in other files; it only needs to happen in the files mentioned above. You should use `import * as Sentry from "@sentry/nextjs"` to reference Sentry functionality

### Baseline

```javascript
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",

  _experiments: {
    enableLogs: true,
  },
});
```

### Logger Integration

```javascript
Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
  integrations: [
    // send console.log, console.error, and console.warn calls as logs to Sentry
    Sentry.consoleLoggingIntegration({ levels: ["log", "error", "warn"] }),
  ],
});
```

## Logger Examples

`logger.fmt` is a template literal function that should be used to bring variables into the structured logs.

```javascript
logger.trace("Starting database connection", { database: "users" });
logger.debug(logger.fmt`Cache miss for user: ${userId}`);
logger.info("Updated profile", { profileId: 345 });
logger.warn("Rate limit reached for endpoint", {
  endpoint: "/api/results/",
  isEnterprise: false,
});
logger.error("Failed to process payment", {
  orderId: "order_123",
  amount: 99.99,
});
logger.fatal("Database connection pool exhausted", {
  database: "users",
  activeConnections: 100,
});
```
The Conventional Commits specification is a lightweight convention on top of commit messages. It provides an easy set of rules for creating an explicit commit history; which makes it easier to write automated tools on top of. This convention dovetails with [SemVer](http://semver.org), by describing the features, fixes, and breaking changes made in commit messages.

The commit message should be structured as follows:

```
<type>[optional scope]: <description>

[optional body]

[optional footer(s)]
```

## Structural Elements

The commit contains the following structural elements, to communicate intent to the consumers of your library:

1. **fix**: a commit of the type `fix` patches a bug in your codebase (this correlates with `PATCH` in Semantic Versioning).

2. **feat**: a commit of the type `feat` introduces a new feature to the codebase (this correlates with `MINOR` in Semantic Versioning).

3. **BREAKING CHANGE**: a commit that has a footer `BREAKING CHANGE:`, or appends a `!` after the type/scope, introduces a breaking API change (correlating with `MAJOR` in Semantic Versioning). A BREAKING CHANGE can be part of commits of any type.

4. **types** other than `fix:` and `feat:` are allowed, for example [@commitlint/config-conventional](https://github.com/conventional-changelog/commitlint/tree/master/%40commitlint/config-conventional) (based on the [Angular convention](https://github.com/angular/angular/blob/22b96b9/CONTRIBUTING.md#-commit-message-guidelines)) recommends `build:`, `chore:`, `ci:`, `docs:`, `style:`, `refactor:`, `perf:`, `test:`, and others.

5. **footers** other than `BREAKING CHANGE: <description>` may be provided and follow a convention similar to [git trailer format](https://git-scm.com/docs/git-interpret-trailers).

Additional types are not mandated by the Conventional Commits specification, and have no implicit effect in Semantic Versioning (unless they include a BREAKING CHANGE).

A scope may be provided to a commit's type to give additional contextual information; it is contained within parentheses, e.g., `feat(parser): add ability to parse arrays`.

## Examples

### Commit message with description and breaking change footer
```
feat: allow provided config object to extend other configs

BREAKING CHANGE: `extends` key in config file is now used for extending other config files
```

### Commit message with `!` to draw attention to breaking change
```
feat!: send an email to the customer when a product is shipped
```

### Commit message with scope and `!` to draw attention to breaking change
```
feat(api)!: send an email to the customer when a product is shipped
```

### Commit message with both `!` and BREAKING CHANGE footer
```
chore!: drop support for Node 6

BREAKING CHANGE: use JavaScript features not available in Node 6.
```

### Commit message with no body
```
docs: correct spelling of CHANGELOG
```

### Commit message with scope
```
feat(lang): add Polish language
```

### Commit message with multi-paragraph body and multiple footers
```
fix: prevent racing of requests

Introduce a request id and a reference to latest request. Dismiss
incoming responses other than from latest request.

Remove timeouts which were used to mitigate the racing issue but are
obsolete now.

Reviewed-by: Z
Refs: #123
```
# Pull Request Description Rules

These rules define how to analyze commit history and generate comprehensive PR descriptions. These same rules should work consistently across GitHub, GitLab, Bitbucket, and any other platform.

## Template Structure

```markdown
## Summary
Brief overview of what this PR accomplishes

## Changes Made
- Key change 1
- Key change 2  
- Key change 3

## Breaking Changes
- Breaking change 1 (if any)
- Migration steps

## Testing
- Test approach
- Coverage notes
- Manual testing performed

## Additional Notes
- Performance implications
- Security considerations
- Follow-up tasks
```

## Analysis Patterns

### Extracting Summary from Commits

Look for patterns in commit messages:
- **feat commits** → New functionality being added
- **fix commits** → Problems being resolved  
- **refactor commits** → Code improvements
- **Multiple related commits** → Larger feature implementation

Generate summary that captures the **why** and **what** at a high level.

### Categorizing Changes

Group commits by impact area:

**Features**
- New user-facing functionality
- API endpoints
- UI components
- Business logic

**Bug Fixes**  
- Error handling improvements
- Logic corrections
- Edge case handling
- Performance fixes

**Infrastructure**
- Build system changes
- CI/CD updates
- Dependencies
- Configuration

**Code Quality**
- Refactoring
- Documentation
- Testing improvements
- Style/formatting

### Identifying Breaking Changes

Look for patterns that indicate breaking changes:
- Removed public APIs or endpoints
- Changed function signatures
- Modified response formats
- Removed or renamed configuration options
- Database schema changes
- Updated dependencies with breaking changes

### Testing Analysis

Analyze test-related changes:
- New test files → Describe testing approach
- Modified tests → Note coverage changes
- Performance tests → Mention benchmarks
- Integration tests → Describe scenarios

## Content Guidelines

### Summary Section
- **One paragraph** explaining the core purpose
- **Focus on user impact** or business value
- **Avoid technical jargon** when possible
- **Include motivation** - why was this needed?

### Changes Made Section
- **Group related changes** together
- **Use action verbs** (Added, Updated, Removed, Fixed)
- **Be specific** but not exhaustive
- **Focus on significant changes**, not every line

### Breaking Changes Section
- **List all breaking changes** explicitly
- **Provide migration guidance** when possible
- **Include version information** if relevant
- **Highlight deprecation timelines**

### Testing Section
- **Describe test strategy** for new features
- **Note manual testing performed**
- **Mention performance testing** if applicable
- **Include edge cases covered**

## Analysis Rules for Commit History

When processing commits between base and head:

### 1. Commit Message Analysis
```
feat(auth): add OAuth integration → Feature addition
fix(api): resolve timeout issues → Bug fix  
refactor(db): optimize query performance → Code improvement
test(auth): add OAuth test coverage → Testing improvement
```

### 2. File Change Analysis
```
New files in src/ → New feature
Modified existing src/ → Enhancement or fix
Changes in tests/ → Testing improvements  
Changes in docs/ → Documentation updates
Changes in config/ → Infrastructure changes
```

### 3. Scope and Impact Assessment
```
Single component changes → Focused feature
Multiple component changes → Large feature
Cross-cutting concerns → Architecture change
Performance-related changes → Optimization
Security-related changes → Security improvement
```

## Quality Indicators

### Good PR Descriptions Include:
- Clear business justification
- Comprehensive change summary
- Testing approach description
- Breaking change documentation
- Performance/security notes

### Red Flags to Avoid:
- Vague summaries ("Fixed stuff")
- Missing breaking change documentation
- No testing information
- Technical jargon without explanation
- Missing context for large changes

## Context-Specific Rules

### Feature PRs
- Emphasize user value and use cases
- Include screenshots/demos if UI changes
- Document configuration changes
- Explain feature flags or rollout strategy

### Bug Fix PRs  
- Describe the problem being solved
- Explain root cause if complex
- Include reproduction steps if helpful
- Note if hotfix or requires backporting

### Refactoring PRs
- Explain motivation for refactoring
- Highlight benefits (performance, maintainability)
- Note that behavior shouldn't change
- Include before/after metrics if relevant

### Infrastructure PRs
- Explain impact on development workflow
- Note any required environment changes
- Include rollback procedures
- Document new tools or dependencies

## Automation Guidelines

For tools generating PR descriptions:

1. **Parse commit messages** for conventional commit types
2. **Analyze file changes** to understand scope of impact
3. **Identify patterns** in commits (all tests, all docs, etc.)
4. **Group related changes** logically
5. **Extract key information** from commit bodies
6. **Preserve important details** from individual commits
7. **Generate appropriate sections** based on change types
8. **Include relevant links** to issues or documentation

## Template Variations

### Simple Bug Fix
```markdown
## Summary
Fixes [brief description of bug] that was causing [impact].

## Changes Made
- [Specific fix implemented]

## Testing
- [How the fix was verified]
```

### Major Feature
```markdown
## Summary
Implements [feature name] to [business value/user benefit].

## Changes Made
- **Core Feature**: [main functionality]
- **API Changes**: [new endpoints/modifications]  
- **UI Updates**: [user interface changes]
- **Database**: [schema changes if any]

## Breaking Changes
- [Any breaking changes with migration notes]

## Testing
- Unit tests for [coverage areas]
- Integration tests for [scenarios]
- Manual testing: [specific test cases]

## Performance Impact
- [Any performance considerations]

## Security Considerations  
- [Security implications if any]
```

### Documentation Update
```markdown
## Summary
Updates documentation for [area] to [improvement].

## Changes Made
- [Specific documentation changes]

## Additional Notes
- [Context or follow-up items]
```
Please look at the current `git diff` and then write a new rule in the .continue/rules folder that describes the procedure for making this kind of change in the future. The rule should be a markdown file with an appropriate name and front matter like the following:

```md
---
name: <NAME>
alwaysApply: false
---
... rule goes here ...
```

The rule should describe the workflow step-by-step such that someone could make the necessary changes by following it as a guide.
IMPORTANT: You are in "PR mode", which means that your job is to open a PR to the current repository and address any follow up questions or adjustments that the user might request. Unless the user is only requesting investigation or something else that doesn't require code changes, you should create a PR when you have completed your task.

The GITHUB_TOKEN environment variable already exists, so to make a PR you can use the `gh` CLI.

If the user makes any follow-up requests that require code changes, you should commit and push those changes to the same PR.

DO NOT FORGET to do this after you have finished your work.
Purpose: 
- When engaged, help a user set up a basic conversational AI assistant for a specific use case to show Rasa Pro and Rime working together

Overview:
- Use Rasa Pro to create a conversational AI assistant
- Use Rime’s API endpoints for Text-to-Speech

Before you get started:
1. Search this Rasa Pro documentation link to understand how to install it: https://rasa.com/docs/pro/installation/python
2. Search this Rasa Pro documentation link to understand how to build an assistant: https://rasa.com/docs/pro/tutorial
3. Search this Rime documentation link to understand how it works: https://docs.rime.ai/api-reference/quickstart
4. Ask the user to describe their use case

General Instructions:
- Install uv and then create a virtual environment with `uv venv --python 3.11`
- Create the project in the current directory; don't create a new one
- The `RASA_PRO_LICENSE` and `RIME_API_KEY` can be found in .env file and should be used from there

Rasa Pro Instructions:
- Use Rasa Pro 3.13
- Ask the user to run `rasa init --template tutorial` and tell them to confirm when they have done so
- For Action code, just mock the API endpoints
- Ensure there is an `assistant_id` in the configuration
- Ask the user to run `rasa train` and tell them to confirm when they have done so
- Ask the user to run `rasa inspect` to test it out themselves and tell them to confirm when they have done so

Rime Instructions:
- Connect to the Rasa assistant, adding Text-to-Speech in front of it with a Python script
- Automatically play audio using macOS's built-in afplay command
- Search this Rime documentation link to understand the structure of the response: https://docs.rime.ai/api-reference/endpoint/json-wav
- The API returns the audio directly as base64 in the JSON response
- Use `arcana` as the model
- Read the base64 audio from the `audioContent` field of the response, not `audio`
- Use `model` parameter, not `modelID` parameter
- Use `speaker` parameter, not `voice` parameter
- Use `allison` as the `speaker`
- Have the user test the integration by running the script

# Docs

- shadcn: https://ui.shadcn.com/docs
- Continue: https://docs.continue.dev
- TipTap Docs: https://tiptap.dev/docs
- Redux Docs: https://redux.js.org/
- Vercel AI SDK Docs: https://sdk.vercel.ai/docs/
- Vercel Edge Runtime: https://edge-runtime.vercel.app/
- Angular Material Docs: https://material.angular.io/
- Vercel: https://vercel.com/docs

# Prompts

## Redux style guide review

Review code based on the Redux style guide.
Below is the Redux style guide. Read it carefully.

  ----
  # Redux Style Guide

  ## Introduction

  This is the official style guide for writing Redux code. **It lists our recommended patterns, best practices, and suggested approaches for writing Redux applications.**

  Both the Redux core library and most of the Redux documentation are unopinionated. There are many ways to use Redux, and much of the time there is no single "right" way to do things.

  However, time and experience have shown that for some topics, certain approaches work better than others. In addition, many developers have asked us to provide official guidance to reduce decision fatigue.

  With that in mind, **we've put together this list of recommendations to help you avoid errors, bikeshedding, and anti-patterns**. We also understand that team preferences vary and different projects have different requirements, so no style guide will fit all sizes. **You are encouraged to follow these recommendations, but take the time to evaluate your own situation and decide if they fit your needs**.

  Finally, we'd like to thank the Vue documentation authors for writing the [Vue Style Guide page](https://vuejs.org/v2/style-guide/), which was the inspiration for this page.

  ## Rule Categories

  We've divided these rules into three categories:

  ### Priority A: Essential

  **These rules help prevent errors, so learn and abide by them at all costs**. Exceptions may exist, but should be very rare and only be made by those with expert knowledge of both JavaScript and Redux.

  ### Priority B: Strongly Recommended

  These rules have been found to improve readability and/or developer experience in most projects. Your code will still run if you violate them, but violations should be rare and well-justified. **Follow these rules whenever it is reasonably possible**.

  ### Priority C: Recommended

  Where multiple, equally good options exist, an arbitrary choice can be made to ensure consistency. In these rules, **we describe each acceptable option and suggest a default choice**. That means you can feel free to make a different choice in your own codebase, as long as you're consistent and have a good reason. Please do have a good reason though!

  ## Priority A Rules: Essential

  ### Do Not Mutate State

  Mutating state is the most common cause of bugs in Redux applications, including components failing to re-render properly, and will also break time-travel debugging in the Redux DevTools. **Actual mutation of state values should always be avoided**, both inside reducers and in all other application code.

  Use tools such as [`redux-immutable-state-invariant`](https://github.com/leoasis/redux-immutable-state-invariant) to catch mutations during development, and [Immer](https://immerjs.github.io/immer/) to avoid accidental mutations in state updates.

  > **Note**: it is okay to modify _copies_ of existing values - that is a normal part of writing immutable update logic. Also, if you are using the Immer library for immutable updates, writing "mutating" logic is acceptable because the real data isn't being mutated - Immer safely tracks changes and generates immutably-updated values internally.
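
  For instance, an immutable update copies state rather than mutating it (reducer and action names are illustrative):

  ```javascript
  const initialState = { todos: [], filter: "all" };

  function todosReducer(state = initialState, action) {
    switch (action.type) {
      case "todos/added":
        // ✅ new array + new object; ❌ would be state.todos.push(action.payload)
        return { ...state, todos: [...state.todos, action.payload] };
      case "filter/changed":
        return { ...state, filter: action.payload };
      default:
        return state;
    }
  }
  ```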

  ### Reducers Must Not Have Side Effects

  Reducer functions should _only_ depend on their `state` and `action` arguments, and should only calculate and return a new state value based on those arguments. **They must not execute any kind of asynchronous logic (AJAX calls, timeouts, promises), generate random values ( `Date.now()`, `Math.random()`), modify variables outside the reducer, or run other code that affects things outside the scope of the reducer function**.

  > **Note**: It is acceptable to have a reducer call other functions that are defined outside of itself, such as imports from libraries or utility functions, as long as they follow the same rules.

  #### Detailed Explanation

  The purpose of this rule is to guarantee that reducers will behave predictably when called. For example, if you are doing time-travel debugging, reducer functions may be called many times with earlier actions to produce the "current" state value. If a reducer has side effects, this would cause those effects to be executed during the debugging process, and result in the application behaving in unexpected ways.

  There are some gray areas to this rule. Strictly speaking, code such as `console.log(state)` is a side effect, but in practice has no effect on how the application behaves.
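  A hedged sketch of how to keep a reducer pure when a value like a timestamp is needed: generate it where the action is created, so the reducer depends only on its `state` and `action` arguments (the `todos/todoAdded` names here are illustrative):

  ```js
  // The impure work (Date.now()) happens in the action creator,
  // outside the reducer.
  function todoAdded(text) {
    return {
      type: 'todos/todoAdded',
      payload: { text, createdAt: Date.now() }
    }
  }

  // The reducer is pure: the same state and action always
  // produce the same result, which keeps replay predictable.
  function todosReducer(state = [], action) {
    if (action.type === 'todos/todoAdded') {
      const { text, createdAt } = action.payload
      return [...state, { text, createdAt, completed: false }]
    }
    return state
  }
  ```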

  ### Do Not Put Non-Serializable Values in State or Actions

  **Avoid putting non-serializable values such as Promises, Symbols, Maps/Sets, functions, or class instances into the Redux store state or dispatched actions**. This ensures that capabilities such as debugging via the Redux DevTools will work as expected. It also ensures that the UI will update as expected.

  > **Exception**: you may put non-serializable values in actions _if_ the action will be intercepted and stopped by a middleware before it reaches the reducers. Middleware such as `redux-thunk` and `redux-promise` are examples of this.
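  A quick sketch of why this matters: values like `Map` silently lose their contents under JSON serialization, which is roughly what the DevTools rely on to persist and replay state (the `filters` field here is a made-up example):

  ```js
  // A Map serializes to an empty object - its entries are lost.
  const withMap = { filters: new Map([['status', 'active']]) }
  const roundTripped = JSON.parse(JSON.stringify(withMap))
  // roundTripped.filters is now just {}

  // A plain-object equivalent survives the round trip intact.
  const withObject = { filters: { status: 'active' } }
  const survives = JSON.parse(JSON.stringify(withObject))
  ```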

  ### Only One Redux Store Per App

  **A standard Redux application should only have a single Redux store instance, which will be used by the whole application**. It should typically be defined in a separate file such as `store.js`.

  Ideally, no app logic will import the store directly. It should be passed to a React component tree via `<Provider>`, or referenced indirectly via middleware such as thunks. In rare cases, you may need to import it into other logic files, but this should be a last resort.

  ## Priority B Rules: Strongly Recommended

  ### Use Redux Toolkit for Writing Redux Logic

  **[Redux Toolkit](https://redux.js.org/redux-toolkit/overview) is our recommended toolset for using Redux**. It has functions that build in our suggested best practices, including setting up the store to catch mutations and enable the Redux DevTools Extension, simplifying immutable update logic with Immer, and more.

  You are not required to use RTK with Redux, and you are free to use other approaches if desired, but **using RTK will simplify your logic and ensure that your application is set up with good defaults**.

  ### Use Immer for Writing Immutable Updates

  Writing immutable update logic by hand is frequently difficult and prone to errors. [Immer](https://immerjs.github.io/immer/) allows you to write simpler immutable updates using "mutative" logic, and even freezes your state in development to catch mutations elsewhere in the app. **We recommend using Immer for writing immutable update logic, preferably as part of [Redux Toolkit](https://redux.js.org/redux-toolkit/overview)**.

  ### Structure Files as Feature Folders with Single-File Logic

  Redux itself does not care about how your application's folders and files are structured. However, co-locating logic for a given feature in one place typically makes it easier to maintain that code.

  Because of this, **we recommend that most applications should structure files using a "feature folder" approach** (all files for a feature in the same folder). Within a given feature folder, **the Redux logic for that feature should be written as a single "slice" file**, preferably using the Redux Toolkit `createSlice` API. (This is also known as the ["ducks" pattern](https://github.com/erikras/ducks-modular-redux)). While older Redux codebases often used a "folder-by-type" approach with separate folders for "actions" and "reducers", keeping related logic together makes it easier to find and update that code.

  #### Detailed Explanation: Example Folder Structure

  An example folder structure might look something like:

  - `/src`
    - `index.tsx`: Entry point file that renders the React component tree
    - `/app`
      - `store.ts`: store setup
      - `rootReducer.ts`: root reducer (optional)
      - `App.tsx`: root React component
    - `/common`: hooks, generic components, utils, etc
    - `/features`: contains all "feature folders"
      - `/todos`: a single feature folder
        - `todosSlice.ts`: Redux reducer logic and associated actions
        - `Todos.tsx`: a React component

  `/app` contains app-wide setup and layout that depends on all the other folders.

  `/common` contains truly generic and reusable utilities and components.

  `/features` has folders that contain all functionality related to a specific feature. In this example, `todosSlice.ts` is a "duck"-style file that contains a call to RTK's `createSlice()` function, and exports the slice reducer and action creators.

  ### Put as Much Logic as Possible in Reducers

  Wherever possible, **try to put as much of the logic for calculating a new state into the appropriate reducer, rather than in the code that prepares and dispatches the action** (like a click handler). This helps ensure that more of the actual app logic is easily testable, enables more effective use of time-travel debugging, and helps avoid common mistakes that can lead to mutations and bugs.

  There are valid cases where some or all of the new state should be calculated first (such as generating a unique ID), but that should be kept to a minimum.

  #### Detailed Explanation

  The Redux core does not actually care whether a new state value is calculated in the reducer or in the action creation logic. For example, for a todo app, the logic for a "toggle todo" action requires immutably updating an array of todos. It is legal to have the action contain just the todo ID and calculate the new array in the reducer:

  ```js
  // Click handler:
  const onTodoClicked = (id) => {
      dispatch({type: "todos/toggleTodo", payload: {id}})
  }

  // Reducer:
  case "todos/toggleTodo": {
      return state.map(todo => {
          if(todo.id !== action.payload.id) return todo;

          return {...todo, completed: !todo.completed };
      })
  }

  ```

  And also to calculate the new array first and put the entire new array in the action:

  ```js
  // Click handler:
  const onTodoClicked = id => {
    const newTodos = todos.map(todo => {
      if (todo.id !== id) return todo

      return { ...todo, completed: !todo.completed }
    })

    dispatch({ type: 'todos/toggleTodo', payload: { todos: newTodos } })
  }

  // Reducer:
  case "todos/toggleTodo":
      return action.payload.todos;

  ```

  However, doing the logic in the reducer is preferable for several reasons:

  - Reducers are always easy to test, because they are pure functions - you just call `const result = reducer(testState, action)`, and assert that the result is what you expected. So, the more logic you can put in a reducer, the more logic you have that is easily testable.
  - Redux state updates must always follow [the rules of immutable updates](https://redux.js.org/usage/structuring-reducers/immutable-update-patterns). Most Redux users realize they have to follow the rules inside a reducer, but it's not obvious that you _also_ have to do this if the new state is calculated _outside_ the reducer. This can easily lead to mistakes like accidental mutations, or even reading a value from the Redux store and passing it right back inside an action. Doing all of the state calculations in a reducer avoids those mistakes.
  - If you are using Redux Toolkit or Immer, it is much easier to write immutable update logic in reducers, and Immer will freeze the state and catch accidental mutations.
  - Time-travel debugging works by letting you "undo" a dispatched action, then either do something different or "redo" the action. In addition, hot-reloading of reducers normally involves re-running the new reducer with the existing actions. If you have a correct action but a buggy reducer, you can edit the reducer to fix the bug, hot-reload it, and you should get the correct state right away. If the action itself was wrong, you'd have to re-run the steps that led to that action being dispatched. So, it's easier to debug if more logic is in the reducer.
  - Finally, putting logic in reducers means you know where to look for the update logic, instead of having it scattered in random other parts of the application code.

  ### Reducers Should Own the State Shape

  The Redux root state is owned and calculated by the single root reducer function. For maintainability, that reducer is intended to be split up by key/value "slices", with **each "slice reducer" being responsible for providing an initial value and calculating the updates to that slice of the state**.

  In addition, slice reducers should exercise control over what other values are returned as part of the calculated state. **Minimize the use of "blind spreads/returns"** like `return action.payload` or `return {...state, ...action.payload}`, because those rely on the code that dispatched the action to correctly format the contents, and the reducer effectively gives up its ownership of what that state looks like. That can lead to bugs if the action contents are not correct.

  > **Note**: A "spread return" reducer may be a reasonable choice for scenarios like editing data in a form, where writing a separate action type for each individual field would be time-consuming and of little benefit.

  #### Detailed Explanation

  Picture a "current user" reducer that looks like:

  ```js
  const initialState = {
      firstName: null,
      lastName: null,
      age: null,
  };

  export default function usersReducer(state = initialState, action) {
      switch(action.type) {
          case "users/userLoggedIn": {
              return action.payload;
          }
          default: return state;
      }
  }
  ```

  In this example, the reducer completely assumes that `action.payload` is going to be a correctly formatted object.

  However, imagine if some part of the code were to dispatch a "todo" object inside the action, instead of a "user" object:

  ```js
  dispatch({
    type: 'users/userLoggedIn',
    payload: {
      id: 42,
      text: 'Buy milk'
    }
  })

  ```

  The reducer would blindly return the todo, and now the rest of the app would likely break when it tries to read the user from the store.

  This could be at least partly fixed if the reducer has some validation checks to ensure that `action.payload` actually has the right fields, or tries to read the right fields out by name. That does add more code, though, so it's a question of trading off more code for safety.

  Use of static typing does make this kind of code safer and somewhat more acceptable. If the reducer knows that `action` is a `PayloadAction<User>`, then it _should_ be safe to do `return action.payload`.
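  A sketch of one way to keep ownership of the shape in plain JavaScript: read only the fields the reducer knows about, instead of blindly returning the payload (same hypothetical "current user" reducer as above):

  ```js
  const initialState = {
    firstName: null,
    lastName: null,
    age: null
  }

  function usersReducer(state = initialState, action) {
    switch (action.type) {
      case 'users/userLoggedIn': {
        // Pull out only the named fields; anything else in the
        // payload is ignored, and missing fields default to null.
        const { firstName = null, lastName = null, age = null } = action.payload
        return { firstName, lastName, age }
      }
      default:
        return state
    }
  }
  ```

  With this version, even a badly formatted action (like the "todo" payload above) still produces a correctly shaped user state.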

  ### Name State Slices Based On the Stored Data

  As mentioned in [Reducers Should Own the State Shape](https://redux.js.org/style-guide/#reducers-should-own-the-state-shape), the standard approach for splitting reducer logic is based on "slices" of state. Correspondingly, `combineReducers` is the standard function for joining those slice reducers into a larger reducer function.

  The key names in the object passed to `combineReducers` will define the names of the keys in the resulting state object. Be sure to name these keys after the data that is kept inside, and avoid use of the word "reducer" in the key names. Your object should look like `{users: {}, posts: {}}`, rather than `{usersReducer: {}, postsReducer: {}}`.

  #### Detailed Explanation

  Object literal shorthand makes it easy to define a key name and a value in an object at the same time:

  ```js
  const data = 42
  const obj = { data }
  // same as: {data: data}

  ```

  `combineReducers` accepts an object full of reducer functions, and uses that to generate state objects that have the same key names. This means that the key names in the functions object define the key names in the state object.

  This results in a common mistake, where a reducer is imported using "reducer" in the variable name, and then passed to `combineReducers` using the object literal shorthand:

  ```js
  import usersReducer from 'features/users/usersSlice'

  const rootReducer = combineReducers({
    usersReducer
  })

  ```

  In this case, use of the object literal shorthand created an object like `{usersReducer: usersReducer}`. So, "reducer" is now in the state key name. This is redundant and useless.

  Instead, define key names that only relate to the data inside. We suggest using explicit `key: value` syntax for clarity:

  ```js
  import usersReducer from 'features/users/usersSlice'
  import postsReducer from 'features/posts/postsSlice'

  const rootReducer = combineReducers({
    users: usersReducer,
    posts: postsReducer
  })

  ```

  It's a bit more typing, but it results in the most understandable code and state definition.

  ### Organize State Structure Based on Data Types, Not Components

  Root state slices should be defined and named based on the major data types or areas of functionality in your application, not based on which specific components you have in your UI. This is because there is not a strict 1:1 correlation between data in the Redux store and components in the UI, and many components may need to access the same data. Think of the state tree as a sort of global database that any part of the app can access to read just the pieces of state needed in that component.

  For example, a blogging app might need to track who is logged in, information on authors and posts, and perhaps some info on what screen is active. A good state structure might look like `{auth, posts, users, ui}`. A bad structure would be something like `{loginScreen, usersList, postsList}`.

  ### Treat Reducers as State Machines

  Many Redux reducers are written "unconditionally". They only look at the dispatched action and calculate a new state value, without basing any of the logic on what the current state might be. This can cause bugs, as some actions may not be "valid" conceptually at certain times depending on the rest of the app logic. For example, a "request succeeded" action should only have a new value calculated if the state says that it's already "loading", or an "update this item" action should only be dispatched if there is an item marked as "being edited".

  To fix this, **treat reducers as "state machines", where the combination of both the current state _and_ the dispatched action determines whether a new state value is actually calculated**, not just the action itself unconditionally.

  #### Detailed Explanation

  A [finite state machine](https://en.wikipedia.org/wiki/Finite-state_machine) is a useful way of modeling something that should only be in one of a finite number of "finite states" at any time. For example, if you have a `fetchUserReducer`, the finite states can be:

  - `"idle"` (fetching not started yet)
  - `"loading"` (currently fetching the user)
  - `"success"` (user fetched successfully)
  - `"failure"` (user failed to fetch)

  To make these finite states clear and [make impossible states impossible](https://kentcdodds.com/blog/make-impossible-states-impossible), you can specify a property that holds this finite state:

  ```js
  const initialUserState = {
    status: 'idle', // explicit finite state
    user: null,
    error: null
  }

  ```

  With TypeScript, this also makes it easy to use [discriminated unions](https://basarat.gitbook.io/typescript/type-system/discriminated-unions) to represent each finite state. For instance, if `state.status === 'success'`, then you would expect `state.user` to be defined and wouldn't expect `state.error` to be truthy. You can enforce this with types.

  Typically, reducer logic is written by taking the action into account first. When modeling logic with state machines, it's important to take the state into account first. Creating "finite state reducers" for each state helps encapsulate behavior per state:

  ```js
  import {
    FETCH_USER,
    // ...
  } from './actions'

  const IDLE_STATUS = 'idle';
  const LOADING_STATUS = 'loading';
  const SUCCESS_STATUS = 'success';
  const FAILURE_STATUS = 'failure';

  const fetchIdleUserReducer = (state, action) => {
    // state.status is "idle"
    switch (action.type) {
      case FETCH_USER:
        return {
          ...state,
          status: LOADING_STATUS
        }
      default:
        return state;
    }
  }

  // ... other reducers

  const fetchUserReducer = (state, action) => {
    switch (state.status) {
      case IDLE_STATUS:
        return fetchIdleUserReducer(state, action);
      case LOADING_STATUS:
        return fetchLoadingUserReducer(state, action);
      case SUCCESS_STATUS:
        return fetchSuccessUserReducer(state, action);
      case FAILURE_STATUS:
        return fetchFailureUserReducer(state, action);
      default:
        // this should never be reached
        return state;
    }
  }
  ```

  Now, since you're defining behavior per state instead of per action, you also prevent impossible transitions. For instance, a `FETCH_USER` action should have no effect when `status === LOADING_STATUS`, and you can enforce that, instead of accidentally introducing edge-cases.
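  The guarded transitions can be exercised with a condensed, self-contained version of the same pattern (string literals stand in for the action constants above):

  ```js
  const initialUserState = { status: 'idle', user: null, error: null }

  // Behavior while idle: only a fetch request starts loading.
  function fetchIdle(state, action) {
    return action.type === 'FETCH_USER' ? { ...state, status: 'loading' } : state
  }

  // Behavior while loading: success/failure resolve the request;
  // a duplicate FETCH_USER is deliberately ignored.
  function fetchLoading(state, action) {
    switch (action.type) {
      case 'FETCH_USER_SUCCESS':
        return { status: 'success', user: action.payload, error: null }
      case 'FETCH_USER_FAILURE':
        return { status: 'failure', user: null, error: action.payload }
      default:
        return state
    }
  }

  function fetchUserReducer(state = initialUserState, action) {
    switch (state.status) {
      case 'idle':
        return fetchIdle(state, action)
      case 'loading':
        return fetchLoading(state, action)
      default:
        // 'success' and 'failure' ignore fetch actions in this sketch
        return state
    }
  }
  ```

  Dispatching `FETCH_USER` while the status is already `'loading'` returns the state unchanged, so the impossible transition simply never happens.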

  ### Normalize Complex Nested/Relational State

  Many applications need to cache complex data in the store. That data is often received in a nested form from an API, or has relations between different entities in the data (such as a blog that contains Users, Posts, and Comments).

  **Prefer storing that data in [a "normalized" form](https://redux.js.org/usage/structuring-reducers/normalizing-state-shape) in the store**. This makes it easier to look up items based on their ID and update a single item in the store, and ultimately leads to better performance patterns.
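  A sketch of the normalized shape: items keyed by ID, plus an array of IDs to preserve ordering. (Redux Toolkit's `createEntityAdapter` generates this kind of logic for you; the `posts` data here is invented for illustration.)

  ```js
  const postsState = {
    ids: ['post1', 'post2'],
    entities: {
      post1: { id: 'post1', title: 'First', author: 'user1' },
      post2: { id: 'post2', title: 'Second', author: 'user2' }
    }
  }

  // Lookup by ID is a direct key access, not an array search:
  const post = postsState.entities['post2']

  // Updating one item touches only that entry; every other
  // entity keeps its existing reference.
  function postUpdated(state, { id, changes }) {
    return {
      ...state,
      entities: {
        ...state.entities,
        [id]: { ...state.entities[id], ...changes }
      }
    }
  }
  ```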

  ### Keep State Minimal and Derive Additional Values

  Whenever possible, **keep the actual data in the Redux store as minimal as possible, and _derive_ additional values from that state as needed**. This includes things like calculating filtered lists or summing up values. As an example, a todo app would keep an original list of todo objects in state, but derive a filtered list of todos outside the state whenever the state is updated. Similarly, a check for whether all todos have been completed, or number of todos remaining, can be calculated outside the store as well.

  This has several benefits:

  - The actual state is easier to read
  - Less logic is needed to calculate those additional values and keep them in sync with the rest of the data
  - The original state is still there as a reference and isn't being replaced

  Deriving data is often done in "selector" functions, which can encapsulate the logic for doing the derived data calculations. In order to improve performance, these selectors can be _memoized_ to cache previous results, using libraries like `reselect` and `proxy-memoize`.
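  A hand-rolled sketch of what libraries like `reselect` provide: recompute the derived value only when the input changes, and otherwise return the cached result (the `memoizeOne` helper and todo data are illustrative, not a real library API):

  ```js
  // Caches the last argument and result; recomputes only on a new input.
  function memoizeOne(fn) {
    let lastArg, lastResult, called = false
    return arg => {
      if (!called || arg !== lastArg) {
        lastArg = arg
        lastResult = fn(arg)
        called = true
      }
      return lastResult
    }
  }

  // The store keeps only the raw todos array...
  const state = {
    todos: [
      { text: 'Buy milk', completed: true },
      { text: 'Walk dog', completed: false }
    ]
  }

  // ...and selectors derive filtered lists and counts on demand.
  const selectIncompleteTodos = memoizeOne(s => s.todos.filter(t => !t.completed))
  const selectRemainingCount = s => selectIncompleteTodos(s).length
  ```

  Returning the cached array for an unchanged state also matters for UI performance: a component comparing selector results by reference will skip re-rendering.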

  ### Model Actions as Events, Not Setters

  Redux does not care what the contents of the `action.type` field are - it just has to be defined. It is legal to write action types in present tense ( `"users/update"`), past tense ( `"users/updated"`), described as an event ( `"upload/progress"`), or treated as a "setter" ( `"users/setUserName"`). It is up to you to determine what a given action means in your application, and how you model those actions.

  However, **we recommend trying to treat actions more as "describing events that occurred", rather than "setters"**. Treating actions as "events" generally leads to more meaningful action names, fewer total actions being dispatched, and a more meaningful action log history. Writing "setters" often results in too many individual action types, too many dispatches, and an action log that is less meaningful.

  #### Detailed Explanation

  Imagine you've got a restaurant app, and someone orders a pizza and a bottle of Coke. You could dispatch an action like:

  ```js
  { type: "food/orderAdded", payload: { pizza: 1, coke: 1 } }

  ```

  Or you could dispatch:

  ```js
  {
      type: "orders/setPizzasOrdered",
      payload: {
          amount: getState().orders.pizza + 1,
      }
  }

  {
      type: "orders/setCokesOrdered",
      payload: {
          amount: getState().orders.coke + 1,
      }
  }

  ```

  The first example would be an "event". "Hey, someone ordered a pizza and a pop, deal with it somehow".

  The second example is a "setter". "I _know_ there are fields for 'pizzas ordered' and 'pops ordered', and I am commanding you to set their current values to these numbers".

  The "event" approach only really needed a single action to be dispatched, and it's more flexible. It doesn't matter how many pizzas were already ordered. Maybe there are no cooks available, so the order gets ignored.

  With the "setter" approach, the client code needed to know more about what the actual structure of the state is, what the "right" values should be, and ended up actually having to dispatch multiple actions to finish the "transaction".

  ### Write Meaningful Action Names

  The `action.type` field serves two main purposes:

  - Reducer logic checks the action type to see if this action should be handled to calculate a new state
  - Action types are shown in the Redux DevTools history log for you to read

  Per [Model Actions as "Events"](https://redux.js.org/style-guide/#model-actions-as-events-not-setters), the actual contents of the `type` field do not matter to Redux itself. However, the `type` value _does_ matter to you, the developer. **Actions should be written with meaningful, informative, descriptive type fields**. Ideally, you should be able to read through a list of dispatched action types, and have a good understanding of what happened in the application without even looking at the contents of each action. Avoid using very generic action names like `"SET_DATA"` or `"UPDATE_STORE"`, as they don't provide meaningful information on what happened.

  ### Allow Many Reducers to Respond to the Same Action

  Redux reducer logic is intended to be split into many smaller reducers, each independently updating their own portion of the state tree, and all composed back together to form the root reducer function. When a given action is dispatched, it might be handled by all, some, or none of the reducers.

  As part of this, you are encouraged to **have many reducer functions all handle the same action separately** if possible. In practice, experience has shown that most actions are typically only handled by a single reducer function, which is fine. But, modeling actions as "events" and allowing many reducers to respond to those actions will typically allow your application's codebase to scale better, and minimize the number of times you need to dispatch multiple actions to accomplish one meaningful update.
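  A sketch of a single "event" action handled independently by two slice reducers, so no multi-dispatch "transaction" is needed (the slice names and a hand-rolled stand-in for `combineReducers` are illustrative):

  ```js
  // Slice 1: clears the logged-in user on logout.
  function authReducer(state = { currentUserId: 'user1' }, action) {
    if (action.type === 'users/userLoggedOut') {
      return { currentUserId: null }
    }
    return state
  }

  // Slice 2: independently clears unsaved drafts on the same event.
  function draftsReducer(state = { byUser: { user1: 'unsaved text' } }, action) {
    if (action.type === 'users/userLoggedOut') {
      return { byUser: {} }
    }
    return state
  }

  // Roughly what combineReducers does: delegate each key to its slice reducer.
  function rootReducer(state = {}, action) {
    return {
      auth: authReducer(state.auth, action),
      drafts: draftsReducer(state.drafts, action)
    }
  }
  ```

  One dispatched `users/userLoggedOut` event updates both slices at once, where a "setter" style would have required two separate dispatches.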

  ### Avoid Dispatching Many Actions Sequentially

  **Avoid dispatching many actions in a row to accomplish a larger conceptual "transaction"**. This is legal, but will usually result in multiple relatively expensive UI updates, and some of the intermediate states could be potentially invalid by other parts of the application logic. Prefer dispatching a single "event"-type action that results in all of the appropriate state updates at once, or consider use of action batching addons to dispatch multiple actions with only a single UI update at the end.

  #### Detailed Explanation

  There is no limit on how many actions you can dispatch in a row. However, each dispatched action does result in execution of all store subscription callbacks (typically one or more per Redux-connected UI component), and will usually result in UI updates.

  While UI updates queued from React event handlers will usually be batched into a single React render pass, updates queued _outside_ of those event handlers are not. This includes dispatches from most `async` functions, timeout callbacks, and non-React code. In those situations, each dispatch will result in a complete synchronous React render pass before the dispatch is done, which will decrease performance.

  In addition, multiple dispatches that are conceptually part of a larger "transaction"-style update sequence will result in intermediate states that might not be considered valid. For example, if actions `"UPDATE_A"`, `"UPDATE_B"`, and `"UPDATE_C"` are dispatched in a row, and some code is expecting all three of `a`, `b`, and `c` to be updated together, the state after the first two dispatches will effectively be incomplete because only one or two of them have been updated.

  If multiple dispatches are truly necessary, consider batching the updates in some way. Depending on your use case, this may just be batching React's own renders (possibly using [`batch()` from React-Redux](https://react-redux.js.org/api/batch)), debouncing the store notification callbacks, or grouping many actions into a larger single dispatch that only results in one subscriber notification. See [the FAQ entry on "reducing store update events"](https://redux.js.org/faq/performance#how-can-i-reduce-the-number-of-store-update-events) for additional examples and links to related addons.

  ### Evaluate Where Each Piece of State Should Live

  The ["Three Principles of Redux"](https://redux.js.org/understanding/thinking-in-redux/three-principles) says that "the state of your whole application is stored in a single tree". This phrasing has been over-interpreted. It does not mean that literally _every_ value in the entire app _must_ be kept in the Redux store. Instead, **there should be a single place to find all values that _you_ consider to be global and app-wide**. Values that are "local" should generally be kept in the nearest UI component instead.

  Because of this, it is up to you as a developer to decide what state should actually live in the Redux store, and what should stay in component state. **[Use these rules of thumb to help evaluate each piece of state and decide where it should live](https://redux.js.org/faq/organizing-state#do-i-have-to-put-all-my-state-into-redux-should-i-ever-use-reacts-usestate-or-usereducer)**.

  ### Use the React-Redux Hooks API

  **Prefer using [the React-Redux hooks API ( `useSelector` and `useDispatch`)](https://react-redux.js.org/api/hooks) as the default way to interact with a Redux store from your React components**. While the classic `connect` API still works fine and will continue to be supported, the hooks API is generally easier to use in several ways. The hooks have less indirection, less code to write, and are simpler to use with TypeScript than `connect` is.

  The hooks API does introduce some different tradeoffs than `connect` does in terms of performance and data flow, but we now recommend them as the default.

  #### Detailed Explanation

  The [classic `connect` API](https://react-redux.js.org/api/connect) is a [Higher Order Component](https://reactjs.org/docs/higher-order-components.html). It generates a new wrapper component that subscribes to the store, renders your own component, and passes down data from the store and action creators as props.

  This is a deliberate level of indirection, and allows you to write "presentational"-style components that receive all their values as props, without being specifically dependent on Redux.

  The introduction of hooks has changed how most React developers write their components. While the "container/presentational" concept is still valid, hooks push you to write components that are responsible for requesting their own data internally by calling an appropriate hook. This leads to different approaches in how we write and test components and logic.

  The indirection of `connect` has always made it a bit difficult for some users to follow the data flow. In addition, `connect`'s complexity has made it very difficult to type correctly with TypeScript, due to the multiple overloads, optional parameters, merging of props from `mapState` / `mapDispatch` / parent component, and binding of action creators and thunks.

  `useSelector` and `useDispatch` eliminate the indirection, so it's much more clear how your own component is interacting with Redux. Since `useSelector` just accepts a single selector, it's much easier to define with TypeScript, and the same goes for `useDispatch`.

  For more details, see Redux maintainer Mark Erikson's post and conference talk on the tradeoffs between hooks and HOCs:

  - [Thoughts on React Hooks, Redux, and Separation of Concerns](https://blog.isquaredsoftware.com/2019/07/blogged-answers-thoughts-on-hooks/)
  - [ReactBoston 2019: Hooks, HOCs, and Tradeoffs](https://blog.isquaredsoftware.com/2019/09/presentation-hooks-hocs-tradeoffs/)

  Also see the [React-Redux hooks API docs](https://react-redux.js.org/api/hooks) for info on how to correctly optimize components and handle rare edge cases.

  ### Connect More Components to Read Data from the Store

  Prefer having more UI components subscribed to the Redux store and reading data at a more granular level. This typically leads to better UI performance, as fewer components will need to render when a given piece of state changes.

  For example, rather than just connecting a `<UserList>` component and reading the entire array of users, have `<UserList>` retrieve a list of all user IDs, render list items as `<UserListItem userId={userId}>`, and have `<UserListItem>` be connected and extract its own user entry from the store.

  This applies for both the React-Redux `connect()` API and the `useSelector()` hook.

  ### Use the Object Shorthand Form of `mapDispatch` with `connect` [​](https://redux.js.org/style-guide/\#use-the-object-shorthand-form-of-mapdispatch-with-connect "Direct link to heading")

  The `mapDispatch` argument to `connect` can be defined as either a function that receives `dispatch` as an argument, or an object containing action creators. **We recommend always using [the "object shorthand" form of `mapDispatch`](https://react-redux.js.org/using-react-redux/connect-mapdispatch#defining-mapdispatchtoprops-as-an-object)**, as it simplifies the code considerably. There is almost never a real need to write `mapDispatch` as a function.

  ### Call `useSelector` Multiple Times in Function Components [​](https://redux.js.org/style-guide/\#call-useselector-multiple-times-in-function-components "Direct link to heading")

  **When retrieving data using the `useSelector` hook, prefer calling `useSelector` many times and retrieving smaller amounts of data, instead of having a single larger `useSelector` call that returns multiple results in an object**. Unlike `mapState`, `useSelector` is not required to return an object, and having selectors read smaller values means it is less likely that a given state change will cause this component to render.

  However, try to find an appropriate balance of granularity. If a single component does need all fields in a slice of the state, just write one `useSelector` that returns that whole slice instead of separate selectors for each individual field.
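
  The reason is reference equality: `useSelector` re-renders the component when the selected value changes by reference, so a selector that builds a new object on every call defeats that check. The difference is visible even without React (the `RootState` shape below is hypothetical, for illustration only):

```typescript
// Hypothetical state shape, for illustration only
interface RootState {
  todos: { ids: string[]; filter: string };
}

const state: RootState = {
  todos: { ids: ["a", "b"], filter: "active" },
};

// Small, focused selectors return stable references or primitives
const selectTodoIds = (s: RootState) => s.todos.ids;
const selectFilter = (s: RootState) => s.todos.filter;

// A single "grab everything" selector builds a fresh object on every
// call, so a reference-equality check always reports a change
const selectAll = (s: RootState) => ({
  ids: s.todos.ids,
  filter: s.todos.filter,
});

console.log(selectTodoIds(state) === selectTodoIds(state)); // true
console.log(selectFilter(state) === selectFilter(state)); // true
console.log(selectAll(state) === selectAll(state)); // false
```

  This is why several small `useSelector` calls typically re-render less often than one call returning a combined object.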

  ### Use Static Typing [​](https://redux.js.org/style-guide/\#use-static-typing "Direct link to heading")

  **Use a static type system like TypeScript or Flow rather than plain JavaScript**. The type systems will catch many common mistakes, improve the documentation of your code, and ultimately lead to better long-term maintainability. While Redux and React-Redux were originally designed with plain JS in mind, both work well with TS and Flow. Redux Toolkit is specifically written in TS and is designed to provide good type safety with a minimal amount of additional type declarations.

  ### Use the Redux DevTools Extension for Debugging [​](https://redux.js.org/style-guide/\#use-the-redux-devtools-extension-for-debugging "Direct link to heading")

  **Configure your Redux store to enable [debugging with the Redux DevTools Extension](https://github.com/reduxjs/redux-devtools/tree/main/extension)**. It allows you to view:

  - The history log of dispatched actions
  - The contents of each action
  - The final state after an action was dispatched
  - The diff in the state after an action
  - The [function stack trace showing the code where the action was actually dispatched](https://github.com/reduxjs/redux-devtools/blob/main/extension/docs/Features/Trace.md)

  In addition, the DevTools allows you to do "time-travel debugging", stepping back and forth in the action history to see the entire app state and UI at different points in time.

  **Redux was specifically designed to enable this kind of debugging, and the DevTools are one of the most powerful reasons to use Redux**.

  ### Use Plain JavaScript Objects for State [​](https://redux.js.org/style-guide/\#use-plain-javascript-objects-for-state "Direct link to heading")

  Prefer using plain JavaScript objects and arrays for your state tree, rather than specialized libraries like Immutable.js. While there are some potential benefits to using Immutable.js, most of the commonly stated goals such as easy reference comparisons are a property of immutable updates in general, and do not require a specific library. This also keeps bundle sizes smaller and reduces complexity from data type conversions.

  As mentioned above, we specifically recommend using Immer if you want to simplify immutable update logic, ideally as part of Redux Toolkit.

  #### Detailed Explanation

  Immutable.js has been semi-frequently used in Redux apps since the beginning. There are several common reasons stated for using Immutable.js:

  - Performance improvements from cheap reference comparisons
  - Performance improvements from making updates thanks to specialized data structures
  - Prevention of accidental mutations
  - Easier nested updates via APIs like `setIn()`

  There are some valid aspects to those reasons, but in practice, the benefits aren't as good as stated, and there are multiple negatives to using it:

  - Cheap reference comparisons are a property of any immutable updates, not just Immutable.js
  - Accidental mutations can be prevented via other mechanisms, such as using Immer (which eliminates accident-prone manual copying logic, and deep-freezes state in development by default) or `redux-immutable-state-invariant` (which checks state for mutations)
  - Immer allows simpler update logic overall, eliminating the need for `setIn()`
  - Immutable.js has a very large bundle size
  - The API is fairly complex
  - The API "infects" your application's code. All logic must know whether it's dealing with plain JS objects or Immutable objects
  - Converting from Immutable objects to plain JS objects is relatively expensive, and always produces completely new deep object references
  - Lack of ongoing maintenance of the library

  The strongest remaining reason to use Immutable.js is fast updates of _very_ large objects (tens of thousands of keys). Most applications won't deal with objects that large.

  Overall, Immutable.js adds too much overhead for too little practical benefit. Immer is a much better option.

  ## Priority C Rules: Recommended [​](https://redux.js.org/style-guide/\#priority-c-rules-recommended "Direct link to heading")

  ### Write Action Types as `domain/eventName` [​](https://redux.js.org/style-guide/\#write-action-types-as-domaineventname "Direct link to heading")

  The original Redux docs and examples generally used a "SCREAMING\_SNAKE\_CASE" convention for defining action types, such as `"ADD_TODO"` and `"INCREMENT"`. This matches typical conventions in most programming languages for declaring constant values. The downside is that the uppercase strings can be hard to read.

  Other communities have adopted other conventions, usually with some indication of the "feature" or "domain" the action is related to, and the specific action type. The NgRx community typically uses a pattern like `"[Domain] Action Type"`, such as `"[Login Page] Login"`. Other patterns like `"domain:action"` have been used as well.

  Redux Toolkit's `createSlice` function currently generates action types that look like `"domain/action"`, such as `"todos/addTodo"`. Going forward, **we suggest using the `"domain/action"` convention for readability**.
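
  Written out by hand, the convention is just a `"domain/eventName"` string per action type. `createSlice` derives these strings automatically from the slice name and each reducer name; the names in this sketch are illustrative:

```typescript
// The "domain/eventName" convention, written out by hand.
// createSlice derives these strings automatically from the slice
// name and reducer names; the names below are illustrative.
const addTodoType = "todos/addTodo";
const loginType = "auth/loginSucceeded";

// A tiny helper shows the shape the convention produces
const actionType = (domain: string, eventName: string) =>
  `${domain}/${eventName}`;

console.log(actionType("todos", "addTodo") === addTodoType); // true
console.log(actionType("auth", "loginSucceeded") === loginType); // true
```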

  ### Write Actions Using the Flux Standard Action Convention [​](https://redux.js.org/style-guide/\#write-actions-using-the-flux-standard-action-convention "Direct link to heading")

  The original "Flux Architecture" documentation only specified that action objects should have a `type` field, and did not give any further guidance on what kinds of fields or naming conventions should be used for fields in actions. To provide consistency, Andrew Clark created a convention called ["Flux Standard Actions"](https://github.com/redux-utilities/flux-standard-action) early in Redux's development. Summarized, the FSA convention says that actions:

  - Should always put their data into a `payload` field
  - May have a `meta` field for additional info
  - May have an `error` field to indicate the action represents a failure of some kind

  Many libraries in the Redux ecosystem have adopted the FSA convention, and Redux Toolkit generates action creators that match the FSA format.

  **Prefer using FSA-formatted actions for consistency**.

  > **Note**: The FSA spec says that "error" actions should set `error: true`, and use the same action type as the "valid" form of the action. In practice, most developers write separate action types for the "success" and "error" cases. Either is acceptable.
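
  The shape can be sketched in TypeScript (an illustrative interface, not an official type from the FSA package):

```typescript
// Sketch of the Flux Standard Action shape (illustrative types)
interface FluxStandardAction<P = unknown, M = unknown> {
  type: string;
  payload?: P;
  error?: boolean;
  meta?: M;
}

// A normal action puts its data in `payload`
const added: FluxStandardAction<{ id: string; text: string }> = {
  type: "todos/addTodo",
  payload: { id: "1", text: "Buy milk" },
};

// Per the FSA spec, the failure case reuses the same type, sets
// `error: true`, and carries an Error as the payload
const failed: FluxStandardAction<Error> = {
  type: "todos/addTodo",
  payload: new Error("Network request failed"),
  error: true,
};

console.log(added.payload?.text); // "Buy milk"
console.log(failed.error); // true
```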

  ### Use Action Creators [​](https://redux.js.org/style-guide/\#use-action-creators "Direct link to heading")

  "Action creator" functions started with the original "Flux Architecture" approach. With Redux, action creators are not strictly required. Components and other logic can always call `dispatch({type: "some/action"})` with the action object written inline.

  However, using action creators provides consistency, especially in cases where some kind of preparation or additional logic is needed to fill in the contents of the action (such as generating a unique ID).

  **Prefer using action creators for dispatching any actions**. However, rather than writing action creators by hand, **we recommend using the `createSlice` function from Redux Toolkit, which will generate action creators and action types automatically**.
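
  A hand-written action creator with preparation logic might look like the sketch below; `createSlice` can generate the equivalent via a `prepare` callback, and the names here are illustrative:

```typescript
// A hand-written action creator with "prepare" logic (generating a
// unique ID before the action reaches the reducer). createSlice can
// generate the equivalent via a prepare callback; names are illustrative.
let nextTodoId = 0;

const addTodo = (text: string) => ({
  type: "todos/addTodo" as const,
  payload: { id: `todo-${nextTodoId++}`, text },
});

const action = addTodo("Buy milk");
console.log(action.type); // "todos/addTodo"
console.log(action.payload.id); // "todo-0"
```

  Components then dispatch the result (`dispatch(addTodo("Buy milk"))`) instead of assembling the action object inline.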

  ### Use RTK Query for Data Fetching [​](https://redux.js.org/style-guide/\#use-rtk-query-for-data-fetching "Direct link to heading")

  In practice, **the single most common use case for side effects in a typical Redux app is fetching and caching data from the server**.

  Because of this, **we recommend using [RTK Query](https://redux.js.org/tutorials/essentials/part-7-rtk-query-basics) as the default approach for data fetching and caching in a Redux app**. RTK Query has been designed to correctly manage the logic for fetching data from the server as needed, caching it, deduplicating requests, updating components, and much more. We recommend _against_ writing data fetching logic by hand in almost all cases.

  ### Use Thunks and Listeners for Other Async Logic [​](https://redux.js.org/style-guide/\#use-thunks-and-listeners-for-other-async-logic "Direct link to heading")

  Redux was designed to be extensible, and the middleware API was specifically created to allow different forms of async logic to be plugged into the Redux store. That way, users wouldn't be forced to learn a specific library like RxJS if it wasn't appropriate for their needs.

  This led to a wide variety of Redux async middleware addons being created, and that in turn has caused confusion and questions over which async middleware should be used.

  **We recommend using [the Redux thunk middleware](https://redux.js.org/usage/writing-logic-thunks) for imperative logic**, such as complex sync logic that needs access to `dispatch` or `getState`, and moderately complex async logic. This includes use cases like moving logic out of components.

  **We recommend using [the RTK "listener" middleware](https://redux-toolkit.js.org/api/createListenerMiddleware) for "reactive" logic that needs to respond to dispatched actions or state changes**, such as longer-running async workflows and "background thread"-type behavior.
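
  Conceptually, a thunk is just a function that receives `dispatch` and `getState`, which is what lets it read state before deciding what to dispatch. The pattern can be sketched without the middleware itself (the store below is a hand-rolled stand-in, not the real Redux API):

```typescript
// A thunk is a function of (dispatch, getState). The "store" here is
// a minimal hand-rolled stand-in for illustration, not the Redux API.
interface Action {
  type: string;
  payload?: unknown;
}
interface State {
  count: number;
}
type Thunk = (
  dispatch: (a: Action) => void,
  getState: () => State,
) => void;

let state: State = { count: 0 };
const dispatch = (action: Action) => {
  if (action.type === "counter/incremented") {
    state = { count: state.count + 1 };
  }
};
const getState = () => state;

// Imperative logic that reads state before deciding what to dispatch
const incrementIfOdd: Thunk = (dispatch, getState) => {
  if (getState().count % 2 === 1) {
    dispatch({ type: "counter/incremented" });
  }
};

dispatch({ type: "counter/incremented" }); // count: 1
incrementIfOdd(dispatch, getState); // count: 2
incrementIfOdd(dispatch, getState); // still 2 (count is even)
```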

  We recommend _against_ using the more complex Redux-Saga and Redux-Observable libraries in most cases, especially for async data fetching. Only use these libraries if no other tool is powerful enough to handle your use case.

  ### Move Complex Logic Outside Components [​](https://redux.js.org/style-guide/\#move-complex-logic-outside-components "Direct link to heading")

  We have traditionally suggested keeping as much logic as possible outside components. That was partly due to encouraging the "container/presentational" pattern, where many components simply accept data as props and display UI accordingly, but also because dealing with async logic in class component lifecycle methods can become difficult to maintain.

  **We still encourage moving complex synchronous or async logic outside components, usually into thunks**. This is especially true if the logic needs to read from the store state.

  However, **the use of React hooks does make it somewhat easier to manage logic like data fetching directly inside a component**, and this may replace the need for thunks in some cases.

  ### Use Selector Functions to Read from Store State [​](https://redux.js.org/style-guide/\#use-selector-functions-to-read-from-store-state "Direct link to heading")

  "Selector functions" are a powerful tool for encapsulating reading values from the Redux store state and deriving further data from those values. In addition, libraries like Reselect enable creating memoized selector functions that only recalculate results when the inputs have changed, which is an important aspect of optimizing performance.

  **We strongly recommend using memoized selector functions for reading store state whenever possible**, and recommend creating those selectors with Reselect.

  However, don't feel that you _must_ write selector functions for every field in your state. Find a reasonable balance for granularity, based on how often fields are accessed and updated, and how much actual benefit the selectors are providing in your application.

  ### Name Selector Functions as `selectThing` [​](https://redux.js.org/style-guide/\#name-selector-functions-as-selectthing "Direct link to heading")

  **We recommend prefixing selector function names with the word `select`**, combined with a description of the value being selected. Examples of this would be `selectTodos`, `selectVisibleTodos`, and `selectTodoById`.
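
  Combined with the previous rule, a memoized selector only recalculates when its inputs change by reference. A minimal sketch in the spirit of Reselect's `createSelector` (illustrative only, not the library's implementation; use the real library in practice):

```typescript
// Minimal sketch of a memoized selector in the spirit of Reselect's
// createSelector (illustrative, not the library's implementation)
interface Todo {
  id: string;
  text: string;
  completed: boolean;
}

interface RootState {
  todos: Todo[];
  showCompleted: boolean;
}

function createSelector<S, A, B, R>(
  inputA: (s: S) => A,
  inputB: (s: S) => B,
  combine: (a: A, b: B) => R,
): (s: S) => R {
  let last: { a: A; b: B; result: R } | undefined;
  return (s: S) => {
    const a = inputA(s);
    const b = inputB(s);
    // Recompute only when an input reference changes
    if (!last || a !== last.a || b !== last.b) {
      last = { a, b, result: combine(a, b) };
    }
    return last.result;
  };
}

// Named with the `select` prefix, per the convention above
const selectTodos = (s: RootState) => s.todos;
const selectShowCompleted = (s: RootState) => s.showCompleted;

const selectVisibleTodos = createSelector(
  selectTodos,
  selectShowCompleted,
  (todos, showCompleted) =>
    showCompleted ? todos : todos.filter((t) => !t.completed),
);

const state: RootState = {
  todos: [
    { id: "1", text: "Write the guide", completed: false },
    { id: "2", text: "Publish it", completed: true },
  ],
  showCompleted: false,
};

const first = selectVisibleTodos(state);
const second = selectVisibleTodos(state);
console.log(first === second); // true: same inputs, cached result returned
```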

  ### Avoid Putting Form State In Redux [​](https://redux.js.org/style-guide/\#avoid-putting-form-state-in-redux "Direct link to heading")

  **Most form state shouldn't go in Redux**. In most use cases, the data is not truly global, is not being cached, and is not being used by multiple components at once. In addition, connecting forms to Redux often involves dispatching actions on every single change event, which causes performance overhead and provides no real benefit. (You probably don't need to time-travel backwards one character from `name: "Mark"` to `name: "Mar"`.)

  Even if the data ultimately ends up in Redux, prefer keeping the form edits themselves in local component state, and only dispatching an action to update the Redux store once the user has completed the form.

  There are use cases when keeping form state in Redux does actually make sense, such as WYSIWYG live previews of edited item attributes. But, in most cases, this isn't necessary.

  --------

  Now, based on this style guide, provide feedback on the following code, making sure to reference the specific sections of the style guide that you mention using markdown hyperlinks:
Review TypeORM Migration
Please review the current @diff, which includes updates to the TypeORM entities and an accompanying migration. Verify that the migration correctly represents the changes in both its up and down methods, and that it follows best practices. Keep in mind that it was auto-generated by TypeORM, which is an imperfect process and can sometimes miss certain changes (e.g. cascade behavior).
Service Test Prompt
Write Service Test
Please write a suite of Jest tests for this service. In the `beforeAll` hook, initialize any services that are needed by calling `Services.get(true)`. In the `beforeEach` hook, clear any tables that need to be cleared before each test. Finally, write the tests themselves. Here's an example:

```typescript
describe("OrganizationSecretService", () => {
  let testOrgId: string;
  let secureKeyValueService: ISecureKeyValueService;

  beforeAll(async () => {
    const services = await Services.get(true);
    secureKeyValueService = services.secureKeyValueService;

    // Create a test organization
    const orgRepo = getAppDataSource().getRepository(Organization);
    const org = orgRepo.create({
      workOsId: "12345",
      name: "Test Organization",
      slug: "test-org",
    });
    const savedOrg = await orgRepo.save(org);
    testOrgId = savedOrg.id;
  });

  beforeEach(async () => {
    // Clear the OrganizationSecret table
    await getAppDataSource().getRepository(OrganizationSecret).clear();
  });

  // ... tests ...
});
```

The tests should be complete, covering any reasonable edge cases, but should not be excessively long. The test file should be adjacent to the service file with the same name, except with a `.test.ts` extension.
Check Code Quality
On a scale of 1-10, how testable is this code?
Please analyze the provided code and rate it on a scale of 1-10 for how well it follows the Single Responsibility Principle (SRP), where:

1 = The code completely violates SRP, with many unrelated responsibilities mixed together
10 = The code perfectly follows SRP, with each component having exactly one well-defined responsibility

In your analysis, please consider:

1. Primary responsibility: Does each class/function have a single, well-defined purpose?
2. Cohesion: How closely related are the methods and properties within each class?
3. Reason to change: Are there multiple distinct reasons why the code might need to be modified?
4. Dependency relationships: Does the code mix different levels of abstraction or concerns?
5. Naming clarity: Do the names of classes/functions clearly indicate their single responsibility?

Please provide:
- Numerical rating (1-10)
- Brief justification for the rating
- Specific examples of SRP violations (if any)
- Suggestions for improving SRP adherence
- Any positive aspects of the current design

Rate more harshly if you find:
- Business logic mixed with UI code
- Data access mixed with business rules
- Multiple distinct operations handled by one method
- Classes that are trying to do "everything"
- Methods that modify the system in unrelated ways

Rate more favorably if you find:
- Clear separation of concerns
- Classes/functions with focused, singular purposes
- Well-defined boundaries between different responsibilities
- Logical grouping of related functionality
- Easy-to-test components due to their single responsibility
Check SOLID
Evaluate how well code adheres to the SOLID principles
Please analyze the provided code and evaluate how well it adheres to each of the SOLID principles on a scale of 1-10, where:

1 = Completely violates the principle
10 = Perfectly implements the principle

For each principle, provide:
- Numerical rating (1-10)
- Brief justification for the rating
- Specific examples of violations (if any)
- Suggestions for improvement
- Positive aspects of the current design

## Single Responsibility Principle (SRP)
Rate how well each class/function has exactly one responsibility and one reason to change.
Consider:
- Does each component have a single, well-defined purpose?
- Are different concerns properly separated (UI, business logic, data access)?
- Would changes to one aspect of the system require modifications across multiple components?

## Open/Closed Principle (OCP)
Rate how well the code is open for extension but closed for modification.
Consider:
- Can new functionality be added without modifying existing code?
- Is there effective use of abstractions, interfaces, or inheritance?
- Are extension points well-defined and documented?
- Are concrete implementations replaceable without changes to client code?

## Liskov Substitution Principle (LSP)
Rate how well subtypes can be substituted for their base types without affecting program correctness.
Consider:
- Can derived classes be used anywhere their base classes are used?
- Do overridden methods maintain the same behavior guarantees?
- Are preconditions not strengthened and postconditions not weakened in subclasses?
- Are there any type checks that suggest LSP violations?

## Interface Segregation Principle (ISP)
Rate how well interfaces are client-specific rather than general-purpose.
Consider:
- Are interfaces focused and minimal?
- Do clients depend only on methods they actually use?
- Are there "fat" interfaces that should be split into smaller ones?
- Are there classes implementing methods they don't need?

## Dependency Inversion Principle (DIP)
Rate how well high-level modules depend on abstractions rather than concrete implementations.
Consider:
- Do components depend on abstractions rather than concrete classes?
- Is dependency injection or inversion of control used effectively?
- Are dependencies explicit rather than hidden?
- Can implementations be swapped without changing client code?

## Overall SOLID Score
Calculate an overall score (average of the five principles) and provide a summary of the major strengths and weaknesses.

Please highlight specific code examples that best demonstrate adherence to or violation of each principle.
Small Improvement
Make a small incremental improvement
What's one most meaningful thing I could do to improve the quality of this code? It shouldn't be too drastic but should still improve the code.
Playwright e2e test
Please write an e2e test using Playwright, following these guidelines:
- Tests live in the app/tests directory
- Tests are split into 3 parts: selectors, actions, and tests
- Selectors live in app/tests/_selectors and are responsible only for getting elements on a given page. Here is an example:
```BlockForm.selectors.ts
import { Page } from "@playwright/test";

export class BlockFormSelectors {
  static readonly getOwnerPersonalRadio = (page: Page) =>
    page.getByLabel("Personal");

  static readonly getSlugInput = (page: Page) =>
    page.getByRole("textbox", { name: "Slug" });

  static readonly getDescriptionInput = (page: Page) =>
    page.getByRole("textbox", { name: "Description" });
}
```

- Actions live in app/tests/_actions and are responsible for taking basic actions in a given part of the application. Here is an example:
```Organization.actions.ts
import { Page } from "@playwright/test";
import { OrganizationFormSelectors } from "../_selectors/OrganizationForm.selectors";
import { NavBarSelectors } from "../_selectors/NavBar.selectors";

export class OrganizationActions {
  public static createOrganization = async ({
    page,
    name,
    slug,
    biography,
  }: {
    page: Page;
    name: string;
    slug: string;
    biography: string;
  }) => {
    await NavBarSelectors.getCreateButton(page).click();
    await NavBarSelectors.getCreateOrganizationButton(page).click();

    await OrganizationFormSelectors.getNameInput(page).fill(name);
    await OrganizationFormSelectors.getSlugInput(page).fill(slug);
    await OrganizationFormSelectors.getBiographyInput(page).fill(biography);

    await OrganizationFormSelectors.getCreateOrganizationButton(page).click();
  };
}
```

- Tests live in app/tests and are the full tests, written with the Playwright test framework. Here is an example:
```block.spec.ts
import { expect, test } from "@playwright/test";

test("Can create a new block", async ({ page }) => {
  const blockName = "Test Block";
  const blockDescription = "Test block description";
  const blockSlug = "test-block";
  const blockRule = "This is a test block rule";

  await BlockActions.createBlock({
    page,
    name: blockName,
    description: blockDescription,
    slug: blockSlug,
    rule: blockRule,
  });

  await GlobalActions.expectPath({
    page,
    path: ROUTES.PACKAGE({
      ownerSlug: TEST_USER_EXPECTED_SLUG,
      packageSlug: blockSlug,
    }),
  });

  await expect(page.getByText(blockName).first()).toBeVisible();
  await expect(page.getByText(blockDescription)).toBeVisible();
  await expect(page.getByText(blockRule)).toBeVisible();
});
```

Please write tests that cover the basic functionality described below, making sure to correctly create the corresponding selectors, actions, and tests in their respective files:
Commit
Generate commit message
@diff

Generate a commit message for the above set of changes. First, give a single sentence, no more than 80 characters. Then, after 2 line breaks, give a list of no more than 5 short bullet points, each no more than 40 characters. Output nothing except for the commit message, and don't surround it in quotes.
New TypeORM Entity
Please create a new TypeORM entity in the `services/control-plane/src/db/entity` directory. Please follow these guidelines:
- Use the @Entity decorator on the class
- Always include the generated UUID id column and the timestamp column
- Use class-validator decorators wherever appropriate on columns, for example @IsDate, @IsString, etc.
- Use TypeORM decorators to describe the column's data type, whether it is nullable, max length, and other important properties like cascade relationships
- For references to other tables, you should use ManyToMany, ManyToOne, or OneToMany where appropriate. Make sure to wrap other tables' types in Relation<...>.
- If you add a relation, make sure to update the entity file for that other table
- Once you are done, you should also add the entity to `services/control-plane/src/db/dataSource.ts`
- Lastly, run `cd services/control-plane && npm run typeorm migration:generate -- ./src/db/migrations/<ENTITY_NAME>` in order to generate a migration

If you need to view any existing tables, you can look into the contents of `services/control-plane/src/db/entity` and read any of the relevant files.

This is an example entity file:
```OrgProxyKey.ts
import {
  Column,
  CreateDateColumn,
  Entity,
  ManyToOne,
  PrimaryColumn,
  type Relation,
} from "typeorm";

import {
  IsDate,
  IsOptional,
  IsString,
  IsUUID,
  Length,
  ValidateNested,
} from "class-validator";
import { Organization } from "../Organization.js";

@Entity()
export class OrgProxyKey {
  @IsUUID(4)
  @PrimaryColumn("uuid", { generated: "uuid" })
  id: string;

  @IsDate()
  @CreateDateColumn()
  timestamp: Date;

  @IsString()
  @Length(64, 64)
  @Column({ unique: true, length: 64 })
  key: string;

  @IsOptional()
  @IsDate()
  @Column({ nullable: true })
  lastConnected: Date;

  @ValidateNested()
  @ManyToOne(() => Organization, (org) => org.proxyKeys)
  organization: Relation<Organization>;
}
```

Please create an entity with the following description:
Next.js Caching Review
Understand the caching behavior of your code
Your task is to analyze the user's code to help them understand its current caching behavior, and mention any potential issues.
Be concise, only mentioning what is necessary.
Use the following as a starting point for your review:

1. Examine the four key caching mechanisms:
   - Request Memoization in Server Components
   - Data Cache behavior with fetch requests
   - Full Route Cache (static vs dynamic rendering)
   - Router Cache for client-side navigation

2. Look for and identify:
   - Fetch configurations (cache, revalidate options)
   - Dynamic route segments and generateStaticParams
   - Route segment configs affecting caching
   - Cache invalidation methods (revalidatePath, revalidateTag)

3. Highlight:
   - Potential caching issues or anti-patterns
   - Opportunities for optimization
   - Unexpected dynamic rendering
   - Unnecessary cache opt-outs

4. Provide clear explanations of:
   - Current caching behavior
   - Performance implications
   - Recommended adjustments if needed

Lastly, point them to the following link to learn more: https://nextjs.org/docs/app/building-your-application/caching
Next.js Optimizations Review
Check for any potential optimizations that are missing in your code
Please review my Next.js code with a focus on optimization areas.

Use the below as a starting point, but consider any other potential areas for improvement.

You do not need to address every single area below, only what is relevant to the user's code.

1. Images: Check for proper usage of next/image, responsive sizing, priority loading for LCP, and correct image formats.

2. Font Loading: Verify next/font implementation, font subsetting, and proper loading strategies.

3. Component Loading: Identify opportunities for lazy loading using next/dynamic, especially for client components and heavy libraries.

4. Metadata: Ensure proper metadata implementation for SEO using either config-based or file-based approaches.

5. Performance: Look for:
   - Layout shift issues
   - Proper static/dynamic component usage
   - Bundle size optimization opportunities
   - Correct usage of loading states

Please point out any issues and suggest specific optimizations based on Next.js best practices.
Your task is to create a new page. Keep the following guidelines in mind.

- Add 'use client' if using hooks/interactivity
- Export a default React component
- Add layout.tsx in same folder if shared UI is needed for nested routes.
- Add loading.tsx in same folder if there is any loading state needed and build it with skeleton-style loaders
Next.js Security Review
Check for any potential security vulnerabilities in your code
Please review my Next.js code with a focus on security issues.

Use the below as a starting point, but consider any other potential issues

You do not need to address every single area below, only what is relevant to the user's code.

1. Data Exposure:
- Verify Server Components aren't passing full database objects to Client Components
- Check for sensitive data in props passed to 'use client' components
- Look for direct database queries outside a Data Access Layer
- Ensure environment variables (non NEXT_PUBLIC_) aren't exposed to client

2. Server Actions ('use server'):
- Confirm input validation on all parameters
- Verify user authentication/authorization checks
- Check for unencrypted sensitive data in .bind() calls

3. Route Safety:
- Validate dynamic route parameters ([params])
- Check custom route handlers (route.ts) for proper CSRF protection
- Review middleware.ts for security bypass possibilities

4. Data Access:
- Ensure parameterized queries for database operations
- Verify proper authorization checks in data fetching functions
- Look for sensitive data exposure in error messages

Key files to focus on: files with 'use client', 'use server', route.ts, middleware.ts, and data access functions.

Context

Learn more
@diff
Reference all of the changes you've made to your current branch
@codebase
Reference the most relevant snippets from your codebase
@url
Reference the markdown converted contents of a given URL
@folder
Uses the same retrieval mechanism as @Codebase, but only on a single folder
@terminal
Reference the last command you ran in your IDE's terminal and its output
@code
Reference specific functions or classes from throughout your project
@file
Reference any file in your current workspace
@gitlab-mr
Reference an open MR for this branch on GitLab
@jira
Reference the conversation in a Jira issue
@commit
@currentFile
Reference the currently open file
@problems
Get Problems from the current file
@postgres
Reference the schema of a table and sample rows
@web
Reference relevant pages from across the web
@open
Reference the contents of all of your open files
@docs
Reference the contents from any documentation site
@discord
Reference the messages in a Discord channel
@greptile
Query a Greptile index of the current repo/branch
@clipboard
Reference recent clipboard items

New Relic

https://log-api.newrelic.com/log/v1

MCP Servers

Learn more

Playwright

npx -y @executeautomation/playwright-mcp-server

Memory

npx -y @modelcontextprotocol/server-memory

Continue Docs MCP

npx -y @continuedev/docs.continue.dev-mcp

Dallin's Memory MCP

npx -y @modelcontextprotocol/server-memory