Recursion Lab
# SOLID Design Principles - Coding Assistant Guidelines
When generating, reviewing, or modifying code, follow these guidelines to ensure adherence to SOLID principles:
## 1. Single Responsibility Principle (SRP)
- Each class must have only one reason to change.
- Limit class scope to a single functional area or abstraction level.
- When a class exceeds 100-150 lines, consider if it has multiple responsibilities.
- Separate cross-cutting concerns (logging, validation, error handling) from business logic.
- Create dedicated classes for distinct operations like data access, business rules, and UI.
- Method names should clearly indicate their singular purpose.
- If a method description requires "and" or "or", it likely violates SRP.
- Prioritize composition over inheritance when combining behaviors.
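As an illustrative sketch (Python here; the class and field names are assumptions, not prescriptions), persistence and pricing are kept in separate classes so each has exactly one reason to change:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Order:
    order_id: str
    subtotal: float


class OrderRepository(Protocol):
    """Data access only: storage concerns are its sole reason to change."""

    def save(self, order: Order) -> None: ...


class OrderPricingService:
    """Business rule only: pricing policy is its sole reason to change."""

    TAX_RATE = 0.07  # assumed rate, for illustration only

    def total_with_tax(self, order: Order) -> float:
        return round(order.subtotal * (1 + self.TAX_RATE), 2)
```

A class that both saved orders and computed totals would need to change for storage reasons and for pricing reasons, which is the signal to split it.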
## 2. Open/Closed Principle (OCP)
- Design classes to be extended without modification.
- Use abstract classes and interfaces to define stable contracts.
- Implement extension points for anticipated variations.
- Favor strategy patterns over conditional logic.
- Use configuration and dependency injection to support behavior changes.
- Avoid switch/if-else chains based on type checking.
- Provide hooks for customization in frameworks and libraries.
- Design with polymorphism as the primary mechanism for extending functionality.
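A brief sketch of the strategy-over-conditionals point (Python; the discount domain is only an example):

```python
from typing import Protocol


class DiscountStrategy(Protocol):
    def apply(self, amount: float) -> float: ...


class NoDiscount:
    def apply(self, amount: float) -> float:
        return amount


class PercentageDiscount:
    def __init__(self, rate: float) -> None:
        self.rate = rate

    def apply(self, amount: float) -> float:
        return amount * (1 - self.rate)


class Checkout:
    """Open for extension (new strategies), closed for modification."""

    def __init__(self, discount: DiscountStrategy) -> None:
        self.discount = discount

    def total(self, amount: float) -> float:
        return self.discount.apply(amount)


# Adding a new discount type means writing a new class, not editing Checkout.
print(Checkout(PercentageDiscount(0.10)).total(200.0))  # 180.0
```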
## 3. Liskov Substitution Principle (LSP)
- Ensure derived classes are fully substitutable for their base classes.
- Maintain all invariants of the base class in derived classes.
- Don't throw exception types from overridden methods that the base class contract does not declare.
- Don't strengthen preconditions in subclasses.
- Don't weaken postconditions in subclasses.
- Never override methods with implementations that do nothing or throw exceptions.
- Avoid type checking or downcasting, which may indicate LSP violations.
- Prefer composition over inheritance when complete substitutability can't be achieved.
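The classic Rectangle/Square pairing shows why the last point matters: forcing width == height in a Square subclass breaks callers that resize one side independently, so composition is the safer model (Python sketch, illustrative names):

```python
class Rectangle:
    def __init__(self, width: float, height: float) -> None:
        self.width, self.height = width, height

    def area(self) -> float:
        return self.width * self.height


class Square:
    """Not a Rectangle subclass: it cannot honor independent width/height."""

    def __init__(self, side: float) -> None:
        self._rect = Rectangle(side, side)  # composition instead of inheritance

    def area(self) -> float:
        return self._rect.area()
```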
## 4. Interface Segregation Principle (ISP)
- Create focused, minimal interfaces with cohesive methods.
- Split large interfaces into smaller, more specific ones.
- Design interfaces around client needs, not implementation convenience.
- Avoid "fat" interfaces that force clients to depend on methods they don't use.
- Use role interfaces that represent behaviors rather than object types.
- Implement multiple small interfaces rather than a single general-purpose one.
- Consider interface composition to build up complex behaviors.
- Remove any methods from interfaces that are only used by a subset of implementing classes.
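A small role-interface sketch (Python Protocols stand in for interfaces; the names are illustrative):

```python
from typing import Protocol


class Readable(Protocol):
    def read(self) -> bytes: ...


class Writable(Protocol):
    def write(self, data: bytes) -> None: ...


class ReadOnlyBlob:
    """Implements only the role it supports; no stubbed-out write()."""

    def __init__(self, data: bytes) -> None:
        self._data = data

    def read(self) -> bytes:
        return self._data


def copy(source: Readable, sink: Writable) -> None:
    # Each client depends only on the role interface it actually uses.
    sink.write(source.read())
```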
## 5. Dependency Inversion Principle (DIP)
- High-level modules should not depend on low-level modules; both should depend on abstractions, not details.
- Make all dependencies explicit, ideally through constructor parameters.
- Use dependency injection to provide implementations.
- Program to interfaces, not concrete classes.
- Place abstractions in a separate package/namespace from implementations.
- Avoid direct instantiation of service classes with 'new' in business logic.
- Create abstraction boundaries at architectural layer transitions.
- Define interfaces owned by the client, not the implementation.
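A minimal constructor-injection sketch (Python; the notification domain and names are assumed for illustration):

```python
from typing import Protocol


class Notifier(Protocol):
    """Abstraction owned by the high-level module."""

    def send(self, recipient: str, message: str) -> None: ...


class EmailNotifier:
    def send(self, recipient: str, message: str) -> None:
        print(f"email to {recipient}: {message}")  # stand-in for a real mailer


class OrderService:
    """High-level policy depends on the Notifier abstraction, not on email."""

    def __init__(self, notifier: Notifier) -> None:  # explicit constructor injection
        self._notifier = notifier

    def place_order(self, customer: str) -> None:
        self._notifier.send(customer, "Your order was placed.")


OrderService(EmailNotifier()).place_order("ada@example.com")
```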
## Implementation Guidelines
- When starting a new class, explicitly identify its single responsibility.
- Document extension points and expected subclassing behavior.
- Write interface contracts with clear expectations and invariants.
- Question any class that depends on many concrete implementations.
- Use factories, dependency injection, or service locators to manage dependencies.
- Review inheritance hierarchies to ensure LSP compliance.
- Regularly refactor toward SOLID, especially when extending functionality.
- Use design patterns (Strategy, Decorator, Factory, Observer, etc.) to facilitate SOLID adherence.
## Warning Signs
- God classes that do "everything"
- Methods with boolean parameters that radically change behavior
- Deep inheritance hierarchies
- Classes that need to know about implementation details of their dependencies
- Circular dependencies between modules
- High coupling between unrelated components
- Classes that grow rapidly in size with new features
- Methods with many parameters
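One of these smells in miniature: a boolean parameter that switches between unrelated behaviors usually reads better as two functions (Python, illustrative):

```python
import json


# Smell: one flag, two unrelated behaviors.
def render(report: dict, as_json: bool) -> str:
    if as_json:
        return json.dumps(report)
    return "\n".join(f"{key}: {value}" for key, value in report.items())


# Clearer: two intention-revealing functions, each with a single behavior.
def render_json(report: dict) -> str:
    return json.dumps(report)


def render_text(report: dict) -> str:
    return "\n".join(f"{key}: {value}" for key, value in report.items())
```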
## Project Tech Stack
- Use Turborepo for monorepo scaling and as the build system.
- Use pnpm for package management.
- Use Vite as the build tool.
- Use Vitest for unit testing.
- Follow React and TypeScript patterns.
- Use Tailwind CSS v4 for styling.
- Use shadcn/ui for components.
- Use TanStack Query (react-query) for frontend data fetching.
- Use React Hook Form for form handling.
- Use Zod for validation.
- Use Zustand and React Context for state management.
- Follow the eslint-plugin-react and Prettier guides for code formatting.
- Use PascalCase when creating new React files: UserCard, not user-card.
- Use named exports when creating new React components.
- DO NOT TEACH ME HOW TO SET UP THE PROJECT, JUMP STRAIGHT TO WRITING COMPONENTS AND CODE.
Create an exploratory data analysis workflow that includes:
Data Overview:
- Basic statistics (mean, median, std, quartiles)
- Missing values and data types
- Unique value distributions
Visualizations:
- Numerical: histograms, box plots
- Categorical: bar charts, frequency plots
- Relationships: correlation matrices
- Temporal patterns (if applicable)
Quality Assessment:
- Outlier detection
- Data inconsistencies
- Value range validation
Insights & Documentation:
- Key findings summary
- Data quality issues
- Variable relationships
- Next steps recommendations
- Reproducible Jupyter notebook
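A minimal sketch of the overview and visualization stages, assuming a pandas DataFrame loaded from a hypothetical data.csv (the library choice, file name, and column handling are assumptions):

```python
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("data.csv")  # assumed input file

# Data overview: basic statistics, missing values, types, unique counts
print(df.describe(include="all"))  # mean, std, quartiles for numeric columns
print(df.isna().sum())             # missing values per column
print(df.dtypes)                   # data types
print(df.nunique())                # unique value distributions

# Quick visual pass over numeric columns
df.select_dtypes("number").hist(bins=30)
plt.tight_layout()
plt.show()

# Relationships: correlation matrix for numeric columns
print(df.select_dtypes("number").corr())
```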
The user has provided the following information:
<!-- Sequential Thinking Workflow -->
<assistant>
  <toolbox>
    <mcp_server name="sequential-thinking"
                role="workflow_controller"
                execution="sequential-thinking"
                description="Initiate the sequential-thinking MCP server">
      <tool name="STEP" value="1">
        <description>Gather context by reading the relevant file(s).</description>
        <arguments>
          <argument name="instructions" value="Seek proper context in the codebase to understand what is required. If you are unsure, ask the user." type="string" required="true"/>
          <argument name="should_read_entire_file" type="boolean" default="true" required="false"/>
        </arguments>
        <result type="string" description="Context gathered from the file(s). Output can be passed to subsequent steps."/>
      </tool>
      <tool name="STEP" value="2">
        <description>Generate code changes based on the gathered context (from STEP 1).</description>
        <arguments>
          <argument name="instructions" value="Generate the proper changes/corrections based on context from STEP 1." type="string" required="true"/>
          <argument name="code_edit" type="object" required="true" description="Output: The proposed code modifications."/>
        </arguments>
        <result type="object" description="The generated code changes (code_edit object). Output can be passed to subsequent steps."/>
      </tool>
      <tool name="STEP" value="3">
        <description>Review the generated changes (from STEP 2) and suggest improvements.</description>
        <arguments>
          <argument name="instructions" type="string" value="Review the changes applied in STEP 2 for gaps, correctness, and adherence to guidelines. Suggest improvements or identify any additional steps needed." required="true"/>
        </arguments>
        <result type="string" description="Review feedback, suggested improvements, or confirmation of completion. Final output of the workflow."/>
      </tool>
    </mcp_server>
  </toolbox>
</assistant>
Design a RAG (Retrieval-Augmented Generation) system with:
Document Processing:
- Text extraction strategy
- Chunking approach with size and overlap parameters
- Metadata extraction and enrichment
- Document hierarchy preservation
Vector Store Integration:
- Embedding model selection and rationale
- Vector database architecture
- Indexing strategy
- Query optimization
Retrieval Strategy:
- Hybrid search (vector + keyword)
- Re-ranking methodology
- Metadata filtering capabilities
- Multi-query reformulation
LLM Integration:
- Context window optimization
- Prompt engineering for retrieval
- Citation and source tracking
- Hallucination mitigation strategies
Evaluation Framework:
- Retrieval relevance metrics
- Answer accuracy measures
- Ground truth comparison
- End-to-end benchmarking
Deployment Architecture:
- Caching strategies
- Scaling considerations
- Latency optimization
- Monitoring approach
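As one concrete piece of the document-processing stage, a sliding-window chunker with explicit size and overlap parameters could look like the sketch below (word-based windows and the 500/50 defaults are assumptions to tune per corpus):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[dict]:
    """Split text into overlapping word windows, keeping positional metadata."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        window = words[start:start + chunk_size]
        if not window:
            break
        chunks.append({
            "text": " ".join(window),
            "start_word": start,               # metadata for citation / source tracking
            "end_word": start + len(window),
        })
        if start + chunk_size >= len(words):
            break
    return chunks
```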
The user's knowledge base has the following characteristics:
Generate a data processing pipeline with these requirements:
Input:
- Data loading from multiple sources (CSV, SQL, APIs)
- Input validation and schema checks
- Error logging for data quality issues
Processing:
- Standardized cleaning (missing values, outliers, types)
- Memory-efficient operations for large datasets
- Numerical transformations using NumPy
- Feature engineering and aggregations
Quality & Monitoring:
- Data quality checks at key stages
- Validation visualizations with Matplotlib
- Performance monitoring
Structure:
- Modular, documented code with error handling
- Configuration management
- Reproducible in Jupyter notebooks
- Example usage and tests
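A skeletal version of the loading, validation, and cleaning stages (the column names, schema, and CSV source are assumptions; the real pipeline would be driven by configuration):

```python
import logging

import numpy as np
import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

EXPECTED_COLUMNS = {"id", "amount", "created_at"}  # assumed schema


def load_csv(path: str) -> pd.DataFrame:
    """Load a CSV source and fail fast on missing columns."""
    df = pd.read_csv(path)
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {missing}")
    return df


def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Standardized cleaning: drop bad rows, coerce types, fence outliers."""
    df = df.dropna(subset=["id"]).copy()
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    n_bad = int(df["amount"].isna().sum())
    if n_bad:
        log.warning("dropped %d rows with non-numeric amount", n_bad)
        df = df.dropna(subset=["amount"])
    # Clip extreme outliers with an IQR fence (NumPy for the numeric work)
    q1, q3 = np.percentile(df["amount"], [25, 75])
    fence = 1.5 * (q3 - q1)
    df["amount"] = df["amount"].clip(q1 - fence, q3 + fence)
    return df
```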
The user has provided the following information:
On a scale of 1-10, how testable is this code?
Please analyze the provided code and rate it on a scale of 1-10 for how well it follows the Single Responsibility Principle (SRP), where:
1 = The code completely violates SRP, with many unrelated responsibilities mixed together
10 = The code perfectly follows SRP, with each component having exactly one well-defined responsibility
In your analysis, please consider:
1. Primary responsibility: Does each class/function have a single, well-defined purpose?
2. Cohesion: How closely related are the methods and properties within each class?
3. Reason to change: Are there multiple distinct reasons why the code might need to be modified?
4. Dependency relationships: Does the code mix different levels of abstraction or concerns?
5. Naming clarity: Do the names of classes/functions clearly indicate their single responsibility?
Please provide:
- Numerical rating (1-10)
- Brief justification for the rating
- Specific examples of SRP violations (if any)
- Suggestions for improving SRP adherence
- Any positive aspects of the current design
Rate more harshly if you find:
- Business logic mixed with UI code
- Data access mixed with business rules
- Multiple distinct operations handled by one method
- Classes that are trying to do "everything"
- Methods that modify the system in unrelated ways
Rate more favorably if you find:
- Clear separation of concerns
- Classes/functions with focused, singular purposes
- Well-defined boundaries between different responsibilities
- Logical grouping of related functionality
- Easy-to-test components due to their single responsibility
What is the single most meaningful thing I could do to improve the quality of this code? It shouldn't be too drastic, but it should still improve the code.
URL: https://mcp.context7.com/mcp
npx -y @modelcontextprotocol/server-sequential-thinking
npx -y @modelcontextprotocol/server-filesystem ${{ secrets.recursionmeta/recursionlab/anthropic/filesystem-mcp/key }}
npx -y @continuedev/docs.continue.dev-mcp
npx -y @modelcontextprotocol/server-memory
npx -y @upstash/context7-mcp@latest