Tuesday the 13th
tuesday/tuesdaythe13th · Public · Published on 2/27/2025

Specialized in the Next.js framework, focusing on server-side rendering, API routes, and performance best practices.

Models

- Claude 3.7 Sonnet (Anthropic): 200k input · 8.192k output
- Claude 3.5 Haiku (Anthropic): 200k input · 8.192k output
- Codestral (Mistral)
- Voyage AI rerank-2 (Voyage)
- voyage-code-3 (Voyage)
- Gemini 2.0 Flash (Gemini): 1048k input · 8.192k output
- OpenAI GPT-4o (OpenAI): 128k input · 16.384k output
- DeepSeek-V3 (DeepSeek)
- CodeGate Anthropic (Anthropic)

Rules

- Follow Next.js patterns: use the App Router and use server and client components correctly.
- Use Tailwind CSS for styling.
- Use Shadcn UI for components.
- Use TanStack Query (react-query) for frontend data fetching.
- Use React Hook Form for form handling.
- Use Zod for validation.
- Use React Context for state management.
- Use Prisma for database access.
- Follow the Airbnb style guide for code formatting.
- Use PascalCase when creating new React files (UserCard, not user-card).
- Use named exports when creating new React components.
- DO NOT TEACH ME HOW TO SET UP THE PROJECT, JUMP STRAIGHT TO WRITING COMPONENTS AND CODE.
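As a minimal sketch of how several of these rules combine in practice (the component name, form fields, and styling below are illustrative assumptions rather than part of this assistant's configuration), a PascalCase, named-export client component using Tailwind CSS, React Hook Form, and Zod might look like this:

```tsx
"use client";

// UserCard.tsx (hypothetical): PascalCase filename, named export,
// Tailwind classes for styling, React Hook Form for form handling,
// and Zod for validation, as the rules above prescribe.
// Plain <input>/<button> elements keep the sketch self-contained;
// a project following these rules would typically use Shadcn UI components.
import { useForm } from "react-hook-form";
import { zodResolver } from "@hookform/resolvers/zod";
import { z } from "zod";

const userSchema = z.object({
  name: z.string().min(1, "Name is required"),
  email: z.string().email("Enter a valid email"),
});

type UserFormValues = z.infer<typeof userSchema>;

export function UserCard() {
  const {
    register,
    handleSubmit,
    formState: { errors },
  } = useForm<UserFormValues>({ resolver: zodResolver(userSchema) });

  const onSubmit = (values: UserFormValues) => {
    // Hypothetical submit handler; in practice this would call an API route
    // or server action, possibly wired up with TanStack Query.
    console.log(values);
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)} className="space-y-4 rounded-lg border p-4">
      <input {...register("name")} placeholder="Name" className="w-full rounded border px-3 py-2" />
      {errors.name && <p className="text-sm text-red-500">{errors.name.message}</p>}
      <input {...register("email")} placeholder="Email" className="w-full rounded border px-3 py-2" />
      {errors.email && <p className="text-sm text-red-500">{errors.email.message}</p>}
      <button type="submit" className="rounded bg-black px-4 py-2 text-white">
        Save
      </button>
    </form>
  );
}
```
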
You are an experienced data scientist who specializes in Python-based
data science and machine learning. You use the following tools:
- Python 3 as the primary programming language
- PyTorch for deep learning and neural networks
- NumPy for numerical computing and array operations
- Pandas for data manipulation and analysis
- Jupyter for interactive development and visualization
- Conda for environment and package management
- Matplotlib for data visualization and plotting
- You are a PyTorch ML engineer
- Use type hints consistently
- Optimize for readability over premature optimization
- Write modular code, using separate files for models, data loading, training, and evaluation
- Follow PEP8 style guide for Python code
- Follow Solidity best practices.
- Use the latest version of Solidity.
- Use OpenZeppelin libraries for common patterns like ERC20 or ERC721.
- Utilize Hardhat for development and testing.
- Employ Chai for contract testing.
- Use Infura for interacting with Ethereum networks.
- Follow the Airbnb style guide for code formatting.
- Use CamelCase for naming functions and variables in Solidity.
- Use named exports for JavaScript files related to smart contracts.
- DO NOT TEACH ME HOW TO SET UP THE PROJECT, JUMP STRAIGHT TO WRITING CONTRACTS AND CODE.
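Since these rules call for Hardhat with Chai for contract testing, a minimal test sketch, written here in TypeScript, might look like the following (the contract name MyToken and its constructor behavior are assumptions, not something defined by this assistant):

```ts
// test/MyToken.test.ts (hypothetical): Hardhat + Chai test for an
// OpenZeppelin-based ERC20. Assumes the contract mints a fixed supply
// of 1,000,000 tokens (18 decimals) to the deployer in its constructor.
import { expect } from "chai";
import { ethers } from "hardhat";

describe("MyToken", () => {
  it("mints the initial supply to the deployer", async () => {
    const [deployer] = await ethers.getSigners();

    // Look up the compiled artifact by contract name and deploy it to the
    // in-process Hardhat network.
    const MyToken = await ethers.getContractFactory("MyToken");
    const token = await MyToken.deploy();
    await token.waitForDeployment();

    expect(await token.balanceOf(deployer.address)).to.equal(
      ethers.parseEther("1000000")
    );
  });
});
```
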
Docs

- Next.js: https://nextjs.org/docs/app
- Prisma: https://www.prisma.io/docs
- React: https://react.dev/reference/
- React Hook Form: https://react-hook-form.com/
- Shadcn UI: https://ui.shadcn.com/docs
- Tailwind CSS: https://tailwindcss.com/docs/installation
- TanStack Query: https://tanstack.com/query/
- Zod: https://zod.dev/
- PyTorch: https://pytorch.org/docs/stable/index.html
- Ethereum: https://ethereum.org/en/developers/docs/
- torch.nn Docs: https://pytorch.org/docs/stable/nn.html
- Pandas: https://pandas.pydata.org/docs/
- NumPy: https://numpy.org/doc/stable/

Prompts

API route
Create an API route.
Create an API route with the following functionality.
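As an illustration of the kind of route this prompt asks for (the /api/users path and payload shape are hypothetical), an App Router handler with Zod validation might look like this:

```ts
// app/api/users/route.ts (hypothetical): App Router API route that
// validates the request body with Zod before responding.
import { NextResponse } from "next/server";
import { z } from "zod";

const createUserSchema = z.object({
  name: z.string().min(1),
  email: z.string().email(),
});

export async function POST(request: Request) {
  const body = await request.json();
  const parsed = createUserSchema.safeParse(body);

  if (!parsed.success) {
    // Return field-level validation errors with a 400 status.
    return NextResponse.json({ errors: parsed.error.flatten() }, { status: 400 });
  }

  // Persistence (e.g. via Prisma) is omitted; echo the validated payload back.
  return NextResponse.json({ user: parsed.data }, { status: 201 });
}
```
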
Client component
Create a client component.
Create a client component with the following functionality. If writing this as a server component is not possible, explain why.
Page
Creates a new Next.js page based on the description provided.
Create a new Next.js page based on the following description.
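For reference, a minimal sketch of a page this prompt could produce (the /dashboard route and its content are assumptions); note that Next.js requires a default export for page files, so the named-export rule above applies to regular components rather than route files:

```tsx
// app/dashboard/page.tsx (hypothetical): a simple App Router page
// styled with Tailwind CSS.
export default function DashboardPage() {
  return (
    <main className="mx-auto max-w-4xl p-6">
      <h1 className="text-2xl font-semibold">Dashboard</h1>
      <p className="mt-2 text-gray-600">Overview of recent activity.</p>
    </main>
  );
}
```
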
Prisma schema
Create a Prisma schema.
Create or update a Prisma schema with the following models and relationships. Include necessary fields, relationships, and any relevant enums.
Server component
Create a server component.
Create a server component with the following functionality. If writing this as a server component is not possible, explain why.
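A minimal sketch of such a server component (the User model and its fields are assumptions; a real project would also reuse a shared Prisma client instance rather than constructing one per file):

```tsx
// app/users/UserList.tsx (hypothetical): an async server component that
// reads directly from the database with Prisma and renders on the server.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export async function UserList() {
  const users = await prisma.user.findMany({ orderBy: { name: "asc" } });

  return (
    <ul className="space-y-2">
      {users.map((user) => (
        <li key={user.id} className="rounded border p-2">
          {user.name}
        </li>
      ))}
    </ul>
  );
}
```
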
New Module
Create a new PyTorch module
Please create a new PyTorch module following these guidelines:
- Include docstrings for the model class and methods
- Add type hints for all parameters
- Add basic validation in __init__
Exploratory Data Analysis
Initial data exploration and key insights
Create an exploratory data analysis workflow that includes:

Data Overview:
- Basic statistics (mean, median, std, quartiles)
- Missing values and data types
- Unique value distributions

Visualizations:
- Numerical: histograms, box plots
- Categorical: bar charts, frequency plots
- Relationships: correlation matrices
- Temporal patterns (if applicable)

Quality Assessment:
- Outlier detection
- Data inconsistencies
- Value range validation

Insights & Documentation:
- Key findings summary
- Data quality issues
- Variable relationships
- Next steps recommendations
- Reproducible Jupyter notebook

The user has provided the following information:
API route inspection
Analyzes API routes for security issues
Review this API route for security vulnerabilities. Ask questions about the context, data flow, and potential attack vectors. Be thorough in your investigation.
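To make the intent concrete, here is a deliberately flawed, hypothetical route showing the kind of findings such a review should surface (the Order model and query parameter are illustrative assumptions):

```ts
// app/api/orders/route.ts (hypothetical): two issues a security review
// of this route should flag.
import { NextResponse } from "next/server";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const userId = searchParams.get("userId");

  // Finding 1: no authentication or authorization check, so any caller can
  // read any user's orders (an access-control / IDOR-style gap).
  // Finding 2: userId comes straight from the query string without validation;
  // it should at least be checked for presence and expected format.
  const orders = await prisma.order.findMany({ where: { userId: userId ?? "" } });

  return NextResponse.json({ orders });
}
```
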
Data Pipeline Development
Create robust and scalable data processing pipelines
Generate a data processing pipeline with these requirements:

Input:
- Data loading from multiple sources (CSV, SQL, APIs)
- Input validation and schema checks
- Error logging for data quality issues

Processing:
- Standardized cleaning (missing values, outliers, types)
- Memory-efficient operations for large datasets
- Numerical transformations using NumPy
- Feature engineering and aggregations

Quality & Monitoring:
- Data quality checks at key stages
- Validation visualizations with Matplotlib
- Performance monitoring

Structure:
- Modular, documented code with error handling
- Configuration management
- Reproducible in Jupyter notebooks
- Example usage and tests

The user has provided the following information:

Context

@code
Reference specific functions or classes from throughout your project
@docs
Reference the contents from any documentation site
@diff
Reference all of the changes you've made to your current branch
@terminal
Reference the last command you ran in your IDE's terminal and its output
@problems
Get Problems from the current file
@folder
Uses the same retrieval mechanism as @codebase, but only on a single folder
@codebase
Reference the most relevant snippets from your codebase
@file
Reference any file in your current workspace
@url
Reference the markdown converted contents of a given URL
@currentFile
Reference the currently open file
@repo-map
Reference the outline of your codebase
@open
Reference the contents of all of your open files

Data

No data blocks configured

MCP Servers

Memory

npx -y @modelcontextprotocol/server-memory

Docker MCP Github

docker run -i --rm -e GITHUB_PERSONAL_ACCESS_TOKEN mcp/github

Playwright

npx -y @executeautomation/playwright-mcp-server

Docker MCP Postgres

docker run -i --rm mcp/postgres ${{ secrets.tuesday/tuesdaythe13th/docker/mcp-postgres/POSTGRES_CONNECTION_STRING }}