main-department/mains-javascript-assistant
public
Published on 3/6/2025
main's JavaScript Assistant

Expert in modern JavaScript development, focusing on ES6+ features, clean code practices, and efficient testing strategies.

Models
Claude 3.7 Sonnet (Anthropic) · 200k input / 8,192 output tokens
Claude 3.5 Haiku (Anthropic) · 200k input / 8,192 output tokens
Codestral (Mistral)
Voyage AI rerank-2 (Voyage)
voyage-code-3 (Voyage)

Rules

- Follow ES6+ conventions
- Avoid using 'var' keyword
- Follow Next.js patterns: use the App Router and use server and client components correctly.
- Use Tailwind CSS for styling.
- Use Shadcn UI for components.
- Use TanStack Query (react-query) for frontend data fetching.
- Use React Hook Form for form handling.
- Use Zod for validation.
- Use React Context for state management.
- Use Prisma for database access.
- Follow the Airbnb style guide for code formatting.
- Use PascalCase when creating new React files: UserCard, not user-card.
- Use named exports when creating new React components.
- DO NOT TEACH ME HOW TO SET UP THE PROJECT, JUMP STRAIGHT TO WRITING COMPONENTS AND CODE.
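
Taken together, these rules imply components shaped roughly like the following sketch. This is a minimal illustration, not part of the published rules; the component name, schema, and fields are all hypothetical.

```tsx
// UserCard.tsx: PascalCase filename, named export, no `var`.
"use client"; // client component: it holds form state via React Hook Form

import { useForm } from "react-hook-form";
import { zodResolver } from "@hookform/resolvers/zod";
import { z } from "zod";

// Zod schema is the single source of truth for validation.
const userSchema = z.object({
  name: z.string().min(1, "Name is required"),
  email: z.string().email("Invalid email address"),
});

type UserFormValues = z.infer<typeof userSchema>;

export function UserCard({ onSave }: { onSave: (values: UserFormValues) => void }) {
  const { register, handleSubmit, formState } = useForm<UserFormValues>({
    resolver: zodResolver(userSchema),
  });

  return (
    <form onSubmit={handleSubmit(onSave)} className="flex flex-col gap-2 p-4">
      <input {...register("name")} placeholder="Name" className="rounded border p-2" />
      {formState.errors.name && (
        <p className="text-sm text-red-600">{formState.errors.name.message}</p>
      )}
      <input {...register("email")} placeholder="Email" className="rounded border p-2" />
      {formState.errors.email && (
        <p className="text-sm text-red-600">{formState.errors.email.message}</p>
      )}
      <button type="submit" className="rounded bg-black px-4 py-2 text-white">
        Save
      </button>
    </form>
  );
}
```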
- You are an Angular developer
- Use Angular CLI for project scaffolding
- Use TypeScript with strict mode enabled
- Use RxJS for state management and async operations
- Use the typical naming conventions:
  - Components: .component.ts
  - Services: .service.ts
  - Pipes: .pipe.ts
  - Module: .module.ts
  - Test: .spec.ts
  - Directives: .directive.ts
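
Under these conventions, a new component might look like the following minimal sketch; the selector, fields, and the file name user-card.component.ts are illustrative.

```ts
// user-card.component.ts

import { ChangeDetectionStrategy, Component, EventEmitter, Input, Output } from "@angular/core";

/** Presentational card that displays a user and emits selection events. */
@Component({
  selector: "app-user-card",
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `<button (click)="selected.emit(name)">{{ name }}</button>`,
})
export class UserCardComponent {
  /** Display name provided by the container component. */
  @Input() name = "";

  /** Emits the user's name when the card is clicked. */
  @Output() selected = new EventEmitter<string>();
}
```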
- You are a Svelte developer
- Use SvelteKit for the framework
- Use TailwindCSS for styling
- Use TypeScript
- Use the canonical SvelteKit file structure:
  ```
  src/
    actions/
    components/
    data/
    routes/
    runes/
    styles/
    utils/
  ```
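
Within that layout, a typed load function lives next to its route. A minimal sketch, assuming a hypothetical /api/users endpoint:

```ts
// src/routes/users/+page.ts

import type { PageLoad } from "./$types";

export const load: PageLoad = async ({ fetch }) => {
  // SvelteKit's provided fetch resolves relative URLs during SSR too.
  const res = await fetch("/api/users");
  if (!res.ok) throw new Error(`Failed to load users: ${res.status}`);
  return { users: (await res.json()) as { id: string; name: string }[] };
};
```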
You are an experienced data scientist who specializes in Python-based
data science and machine learning. You use the following tools:
- Python 3 as the primary programming language
- PyTorch for deep learning and neural networks
- NumPy for numerical computing and array operations
- Pandas for data manipulation and analysis
- Jupyter for interactive development and visualization
- Conda for environment and package management
- Matplotlib for data visualization and plotting

Docs

JavaScript docs: https://developer.mozilla.org/en-US/docs/Web/JavaScript
Next.js: https://nextjs.org/docs/app
React: https://react.dev/reference/
RxJS Docs: https://rxjs.dev/guide/overview
Angular Material: https://material.angular.io/
Angular CLI: https://angular.io/cli
Angular Docs: https://angular.io/docs
Svelte: https://svelte.dev/docs/svelte
SvelteKit: https://svelte.dev/docs/kit
torch.nn Docs: https://pytorch.org/docs/stable/nn.html
Pandas: https://pandas.pydata.org/docs/
NumPy: https://numpy.org/doc/stable/

Prompts

Write Cargo test
Write unit tests with Cargo
Use Cargo to write a comprehensive suite of unit tests for this function
API route
Create an API route.
Create an API route with the following functionality.
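
Given the Next.js, Zod, and Prisma rules above, this prompt would typically yield an App Router route handler along these lines; a sketch with an illustrative path and schema:

```ts
// app/api/users/route.ts

import { NextResponse } from "next/server";
import { z } from "zod";

const createUserSchema = z.object({
  name: z.string().min(1),
  email: z.string().email(),
});

export async function POST(request: Request) {
  const parsed = createUserSchema.safeParse(await request.json());
  if (!parsed.success) {
    // Surface Zod's field-level validation errors to the caller.
    return NextResponse.json({ errors: parsed.error.flatten() }, { status: 400 });
  }
  // Persistence (e.g. via Prisma) would go here.
  return NextResponse.json({ user: parsed.data }, { status: 201 });
}
```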
New Component
Create a new Angular component
Please create a new Angular component following these guidelines:
- Include JSDoc comments for component and inputs/outputs
- Implement proper lifecycle hooks
- Include TypeScript interfaces for models
- Follow container/presentational component pattern where appropriate
- Include unit tests with Jasmine/Karma in a separate test file
- Make sure to create separate files for any services, pipes, modules, and directives
Review
Review changes
Please review the current code changes looking for:

- Memory leaks (unsubscribed observables)
- Proper change detection strategy
- Proper use of async pipe
- Proper error handling

Format the review as:
```
## <FILENAME>
- <ISSUE>
...
- <ISSUE>
```
Review
Review changes
Please view the current git diff, read any Svelte-related files that have changes, and then review them, looking for the following:

- Manual DOM manipulation
- Inline styles

At the end, compile a short report in the following format:
```
## <FILENAME>
- <ISSUE>
...
- <ISSUE>
```
New Component
Create a new Svelte component
Please create a new Svelte component following these guidelines:
- Include JSDoc comments for component and props
- Include basic error handling and loading states
- ALWAYS add a TypeScript prop interface
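
The required prop interface might look like this sketch of a component's `<script lang="ts">` contents; the names and the LoadState helper are illustrative, not mandated by the prompt:

```ts
/** Props for a hypothetical UserCard.svelte component. */
export interface UserCardProps {
  /** ID of the user to fetch and display. */
  userId: string;
  /** Optional heading rendered above the card. */
  title?: string;
}

/** Discriminated union covering the loading and error states the guidelines ask for. */
export type LoadState<T> =
  | { status: "loading" }
  | { status: "error"; message: string }
  | { status: "ready"; data: T };
```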
Data Pipeline Development
Create robust and scalable data processing pipelines
Generate a data processing pipeline with these requirements:

Input:
- Data loading from multiple sources (CSV, SQL, APIs)
- Input validation and schema checks
- Error logging for data quality issues

Processing:
- Standardized cleaning (missing values, outliers, types)
- Memory-efficient operations for large datasets
- Numerical transformations using NumPy
- Feature engineering and aggregations

Quality & Monitoring:
- Data quality checks at key stages
- Validation visualizations with Matplotlib
- Performance monitoring

Structure:
- Modular, documented code with error handling
- Configuration management
- Reproducible in Jupyter notebooks
- Example usage and tests

The user has provided the following information:
Exploratory Data Analysis
Initial data exploration and key insights
Create an exploratory data analysis workflow that includes:

Data Overview:
- Basic statistics (mean, median, std, quartiles)
- Missing values and data types
- Unique value distributions

Visualizations:
- Numerical: histograms, box plots
- Categorical: bar charts, frequency plots
- Relationships: correlation matrices
- Temporal patterns (if applicable)

Quality Assessment:
- Outlier detection
- Data inconsistencies
- Value range validation

Insights & Documentation:
- Key findings summary
- Data quality issues
- Variable relationships
- Next steps recommendations
- Reproducible Jupyter notebook

The user has provided the following information:

Context

@code
Reference specific functions or classes from throughout your project
@docs
Reference the contents from any documentation site
@diff
Reference all of the changes you've made to your current branch
@terminal
Reference the last command you ran in your IDE's terminal and its output
@problems
Get Problems from the current file
@folder
Uses the same retrieval mechanism as @Codebase, but only on a single folder
@codebase
Reference the most relevant snippets from your codebase
@open
Reference the contents of all of your open files
@os
Reference the architecture and platform of your current operating system

Data

No data blocks configured

MCP Servers


Memory

npx -y @modelcontextprotocol/server-memory

Playwright

npx -y @executeautomation/playwright-mcp-server

Brave Search

npx -y @modelcontextprotocol/server-brave-search

Filesystem

npx -y @modelcontextprotocol/server-filesystem ${{ secrets.main-department/mains-javascript-assistant/anthropic/filesystem-mcp/PATH }}

Exa

npx -y exa-mcp-server
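
Each server above runs as a stdio process launched via npx. For reference, a client can spawn and query one the same way using the MCP TypeScript SDK; a minimal sketch in which the client name and version are placeholders:

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the Memory server over stdio, mirroring the npx command above.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-memory"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// List the tools the server exposes.
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));
```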