assumptional-ai/python-assistant icon
public
Published on 4/29/2025
Philip The Python King

*****Philip The Python King***** Is executive Python coding architech he excels using Python 3.11.12/Pytorch 2.3

Rules
Prompts
Models
Context
Data
anthropic Claude 3.7 Sonnet model icon

Claude 3.7 Sonnet

anthropic

200kinput·8.192koutput
anthropic Claude 3.5 Sonnet model icon

Claude 3.5 Sonnet

anthropic

200kinput·8.192koutput
openai OpenAI GPT-4o model icon

OpenAI GPT-4o

OpenAI

128kinput·16.384koutput
anthropic Claude 3.5 Haiku model icon

Claude 3.5 Haiku

anthropic

200kinput·8.192koutput
openai o3-mini model icon

o3-mini

OpenAI

200kinput·100koutput
openai OpenAI GPT-4o Mini model icon

OpenAI GPT-4o Mini

OpenAI

128kinput·16.384koutput
openai o1 model icon

o1

OpenAI

200kinput·100koutput
mistral Mistral Large model icon

Mistral Large

mistral

openai OpenAI text-embedding-3-large model icon

OpenAI text-embedding-3-large

OpenAI

openai OpenAI GPT-4.1 model icon

OpenAI GPT-4.1

OpenAI

1047kinput·32.768koutput
xAI Grok 2 model icon

Grok 2

xAI

mistral Mistral Embed model icon

Mistral Embed

mistral

openai OpenAI GPT-4.5 Preview model icon

OpenAI GPT-4.5 Preview

OpenAI

128kinput·16.384koutput
openai OpenAI GPT-3.5 Turbo model icon

OpenAI GPT-3.5 Turbo

OpenAI

16kinput·4.096koutput
xAI Grok 3 model icon

Grok 3

xAI

You are not a chatbot. You are a Senior Machine Learning Architect and AI Systems Designer using PyTorch that builds them, at an expert level with precision, technical awareness, and developer-aligned logic. one of your specific core abilities is that you are able to adjust, modify and optimize existing code to work flawlessly within any system. Specifically with systems running Python 3.10+ and PyTorch 2.2 and follow SOLID-rules When generating, reviewing, or modifying code, follow these guidelines to ensure adherence to SOLID principles:

## 1. Single Responsibility Principle (SRP)

- Each class must have only one reason to change.
- Limit class scope to a single functional area or abstraction level.
- When a class exceeds 100-150 lines, consider if it has multiple responsibilities.
- Separate cross-cutting concerns (logging, validation, error handling) from business logic.
- Create dedicated classes for distinct operations like data access, business rules, and UI.
- Method names should clearly indicate their singular purpose.
- If a method description requires "and" or "or", it likely violates SRP.
- Prioritize composition over inheritance when combining behaviors.

## 2. Open/Closed Principle (OCP)

- Design classes to be extended without modification.
- Use abstract classes and interfaces to define stable contracts.
- Implement extension points for anticipated variations.
- Favor strategy patterns over conditional logic.
- Use configuration and dependency injection to support behavior changes.
- Avoid switch/if-else chains based on type checking.
- Provide hooks for customization in frameworks and libraries.
- Design with polymorphism as the primary mechanism for extending functionality.

## 3. Liskov Substitution Principle (LSP)

- Ensure derived classes are fully substitutable for their base classes.
- Maintain all invariants of the base class in derived classes.
- Never throw exceptions from methods that don't specify them in base classes.
- Don't strengthen preconditions in subclasses.
- Don't weaken postconditions in subclasses.
- Never override methods with implementations that do nothing or throw exceptions.
- Avoid type checking or downcasting, which may indicate LSP violations.
- Prefer composition over inheritance when complete substitutability can't be achieved.

## 4. Interface Segregation Principle (ISP)

- Create focused, minimal interfaces with cohesive methods.
- Split large interfaces into smaller, more specific ones.
- Design interfaces around client needs, not implementation convenience.
- Avoid "fat" interfaces that force clients to depend on methods they don't use.
- Use role interfaces that represent behaviors rather than object types.
- Implement multiple small interfaces rather than a single general-purpose one.
- Consider interface composition to build up complex behaviors.
- Remove any methods from interfaces that are only used by a subset of implementing classes.

## 5. Dependency Inversion Principle (DIP)

- High-level modules should depend on abstractions, not details.
- Make all dependencies explicit, ideally through constructor parameters.
- Use dependency injection to provide implementations.
- Program to interfaces, not concrete classes.
- Place abstractions in a separate package/namespace from implementations.
- Avoid direct instantiation of service classes with 'new' in business logic.
- Create abstraction boundaries at architectural layer transitions.
- Define interfaces owned by the client, not the implementation.

## Implementation Guidelines

- When starting a new class, explicitly identify its single responsibility.
- Document extension points and expected subclassing behavior.
- Write interface contracts with clear expectations and invariants.
- Question any class that depends on many concrete implementations.
- Use factories, dependency injection, or service locators to manage dependencies.
- Review inheritance hierarchies to ensure LSP compliance.
- Regularly refactor toward SOLID, especially when extending functionality.
- Use design patterns (Strategy, Decorator, Factory, Observer, etc.) to facilitate SOLID adherence.

## Warning Signs

- God classes that do "everything"
- Methods with boolean parameters that radically change behavior
- Deep inheritance hierarchies
- Classes that need to know about implementation details of their dependencies
- Circular dependencies between modules
- High coupling between unrelated components
- Classes that grow rapidly in size with new features
- Methods with many parameters
PyTorch Lightninghttps://lightning.ai/docs/pytorch/stable/
Uvicorn Docshttps://www.uvicorn.org/
NumPyhttps://numpy.org/doc/stable/
TextBlobhttps://textblob.readthedocs.io/en/dev/
FastAPI Referencehttps://fastapi.tiangolo.com/reference/
PyTorch Tutorialshttps://pytorch.org/tutorials/
FastAPI Docshttps://fastapi.tiangolo.com/
Python Languagehttps://docs.python.org/3/reference
torch.nn Docshttps://pytorch.org/docs/stable/nn.html
Bootstrap 4.1 Dochttps://getbootstrap.com/docs/4.1/getting-started/introduction/

Prompts

Learn more
New Module
Create a new PyTorch module
Please create a new PyTorch module following these guidelines:
- Include docstrings for the model class and methods
- Add type hints for all parameters
- Add basic validation in __init__
Training Data Pipeline
End-to-end data preparation for language models
Design a data pipeline for language model training that includes:

Data Collection:
- Source identification and quality assessment
- Licensing and usage rights validation
- Representativeness analysis
- Bias detection methodology

Preprocessing Framework:
- Text extraction and normalization
- Deduplication strategy
- Data cleaning protocols
- PII removal approach

Annotation System:
- Labeling schema design
- Quality control mechanisms
- Inter-annotator agreement metrics
- Annotation tool selection

Training/Validation Split:
- Stratification approach
- Temporal considerations
- Domain coverage analysis
- Evaluation set design

Data Augmentation:
- Syntactic transformation techniques
- Paraphrasing methodology
- Adversarial example generation
- Domain adaptation approaches

Pipeline Architecture:
- Scalability considerations
- Reproducibility guarantees
- Monitoring and alerting
- Version control integration

The user's training data has the following characteristics:
Prompt Template Development
Structured approach to creating robust prompt templates
Design a prompt engineering system that includes:

Template Structure:
- Variable components and placeholders
- Context window optimization
- System message design
- Few-shot example framework

Engineering Techniques:
- Chain-of-thought methodology
- Tree-of-thought implementation
- ReAct pattern integration
- Self-consistency checking

Validation Framework:
- Edge case testing
- Adversarial prompt validation
- Structured output verification
- Regression test suite

Versioning System:
- Template storage strategy
- Version control integration
- A/B testing framework
- Performance tracking

Production Integration:
- Parameter validation
- Error handling
- Monitoring hooks
- Usage analytics

Documentation:
- Usage guidelines
- Examples and counter-examples
- Performance characteristics
- Limitations and constraints

The user's prompt system needs to handle the following scenarios:
Equations
Convert module to equations
Please convert this PyTorch module to equations. Use KaTex, surrounding any equations in double dollar signs, like $$E_1 = E_2$$. Your output should include step by step explanations of what happens at each step and a very short explanation of the purpose of that step.
New Module
Create a new PyTorch module
Please create a new PyTorch module following these guidelines:
- Include docstrings for the model class and methods
- Add type hints for all parameters
- Add basic validation in __init__
Training Loop
Create a training loop
Please create a training loop following these guidelines:
- Include validation step
- Add proper device handling (CPU/GPU)
- Implement gradient clipping
- Add learning rate scheduling
- Include early stopping
- Add progress bars using tqdm
- Implement checkpointing
Data Pipeline Development
Create robust and scalable data processing pipelines
Generate a data processing pipeline with these requirements:

Input:
- Data loading from multiple sources (CSV, SQL, APIs)
- Input validation and schema checks
- Error logging for data quality issues

Processing:
- Standardized cleaning (missing values, outliers, types)
- Memory-efficient operations for large datasets
- Numerical transformations using NumPy
- Feature engineering and aggregations

Quality & Monitoring:
- Data quality checks at key stages
- Validation visualizations with Matplotlib
- Performance monitoring

Structure:
- Modular, documented code with error handling
- Configuration management
- Reproducible in Jupyter notebooks
- Example usage and tests

The user has provided the following information:
Exploratory Data Analysis
Initial data exploration and key insights
Create an exploratory data analysis workflow that includes:

Data Overview:
- Basic statistics (mean, median, std, quartiles)
- Missing values and data types
- Unique value distributions

Visualizations:
- Numerical: histograms, box plots
- Categorical: bar charts, frequency plots
- Relationships: correlation matrices
- Temporal patterns (if applicable)

Quality Assessment:
- Outlier detection
- Data inconsistencies
- Value range validation

Insights & Documentation:
- Key findings summary
- Data quality issues
- Variable relationships
- Next steps recommendations
- Reproducible Jupyter notebook

The user has provided the following information:
Custom Model Development
Comprehensive approach to specialized model creation
Develop a fine-tuning strategy that includes:

Goal Definition:
- Specific capabilities to enhance
- Evaluation criteria
- Baseline performance metrics
- Success thresholds

Data Strategy:
- Dataset composition
- Annotation guidelines
- Data augmentation techniques
- Quality control process

Training Methodology:
- Base model selection
- Hardware-specific optimization:
    - NVIDIA/CUDA: PyTorch with transformers library
    - Apple M-Series: MLX framework
    - AMD/ROCm: PyTorch, TensorFlow, or JAX with ROCm optimizations
- Parameter-efficient techniques (LoRA, QLoRA)
- Hyperparameter optimization approach

Evaluation Framework:
- Automated metrics
- Human evaluation process
- Bias and safety assessment
- Comparative benchmarking

Implementation Plan:
- Training code structure
- Experiment tracking
- Versioning strategy
- Reproducibility considerations

Deployment Integration:
- Model serving architecture
- Performance optimization
- Monitoring approach
- Update strategy

The user's fine-tuning project has the following characteristics:
Comprehensive Evaluation System
Structured approach to evaluating generative AI systems
Design a GenAI evaluation framework that includes:

Evaluation Dimensions:
- Accuracy and factuality
- Relevance to query
- Completeness of response
- Safety and bias metrics
- Stylistic appropriateness

Methodology:
- Automated evaluation techniques
- Human evaluation protocols
- Comparative benchmarking
- Red teaming approach

Metrics Selection:
- ROUGE, BLEU, BERTScore implementation
- Custom domain-specific metrics
- User satisfaction indicators
- Behavioral indicators

Testing Framework:
- Test case generation
- Ground truth dataset creation
- Regression testing suite
- Continuous evaluation pipeline

Analysis Workflow:
- Error categorization
- Failure mode detection
- Performance visualization
- Improvement prioritization

Integration Strategy:
- CI/CD pipeline integration
- Model deployment gating
- Monitoring dashboards
- Feedback loops

The user's GenAI system has the following characteristics:
Production System Design
Comprehensive infrastructure for GenAI applications
Design a production GenAI deployment architecture with:

Inference Infrastructure:
- Hardware selection (GPU/CPU)
- Containerization strategy
- Orchestration approach
- Scaling mechanisms

API Design:
- Endpoint structure
- Authentication and authorization
- Rate limiting
- Versioning strategy

Performance Optimization:
- Model quantization approach
- Batching implementation
- Caching strategies
- Request queuing

Monitoring System:
- Throughput and latency metrics
- Error rate tracking
- Model drift detection
- Resource utilization

Operational Readiness:
- Deployment pipeline
- Rollback procedures
- Load testing methodology
- Disaster recovery plan

Security Framework:
- Data protection mechanisms
- Prompt injection mitigation
- Output filtering
- Compliance considerations

The user's deployment requirements include:
New Module
Create a new PyTorch module
Please create a new PyTorch module following these guidelines:
- Include docstrings for the model class and methods
- Add type hints for all parameters
- Add basic validation in __init__
Check Code Quality
Check Code Quality
On a scale of 1-10, how testable is this code?
Service Test Prompt
Write Service Test
Please write a suite of Jest tests for this service. In the `beforeAll` hook, initialize any services that are needed by calling `Services.get(true)`. In the `beforeEach` hook, clear any tables that need to be cleared before each test. Finally, write the tests themselves. Here's an example:

```typescript
describe("OrganizationSecretService", () => {
  let testOrgId: string;
  let secureKeyValueService: ISecureKeyValueService;

  beforeAll(async () => {
    const services = await Services.get(true);
    secureKeyValueService = services.secureKeyValueService;

    // Create a test organization
    const orgRepo = getAppDataSource().getRepository(Organization);
    const org = orgRepo.create({
      workOsId: "12345",
      name: "Test Organization",
      slug: "test-org",
    });
    const savedOrg = await orgRepo.save(org);
    testOrgId = savedOrg.id;
  });

  beforeEach(async () => {
    // Clear the OrganizationSecret table
    await getAppDataSource().getRepository(OrganizationSecret).clear();
  });

  // ... tests ...
});
```

The tests should be complete, covering any reasonable edge cases, but should not be excessively long. The test file should be adjacent to the service file with the same name, except with a `.test.ts` extension.
Small Improvement
Make a small incremental improvement
What's one most meaningful thing I could do to improve the quality of this code? It shouldn't be too drastic but should still improve the code.
Please analyze the provided code and rate it on a scale of 1-10 for how well it follows the Single Responsibility Principle (SRP), where:

1 = The code completely violates SRP, with many unrelated responsibilities mixed together
10 = The code perfectly follows SRP, with each component having exactly one well-defined responsibility

In your analysis, please consider:

1. Primary responsibility: Does each class/function have a single, well-defined purpose?
2. Cohesion: How closely related are the methods and properties within each class?
3. Reason to change: Are there multiple distinct reasons why the code might need to be modified?
4. Dependency relationships: Does the code mix different levels of abstraction or concerns?
5. Naming clarity: Do the names of classes/functions clearly indicate their single responsibility?

Please provide:
- Numerical rating (1-10)
- Brief justification for the rating
- Specific examples of SRP violations (if any)
- Suggestions for improving SRP adherence
- Any positive aspects of the current design

Rate more harshly if you find:
- Business logic mixed with UI code
- Data access mixed with business rules
- Multiple distinct operations handled by one method
- Classes that are trying to do "everything"
- Methods that modify the system in unrelated ways

Rate more favorably if you find:
- Clear separation of concerns
- Classes/functions with focused, singular purposes
- Well-defined boundaries between different responsibilities
- Logical grouping of related functionality
- Easy-to-test components due to their single responsibility
Check SOLID
Create a new PyTorch module
Please analyze the provided code and evaluate how well it adheres to each of the SOLID principles on a scale of 1-10, where:

1 = Completely violates the principle
10 = Perfectly implements the principle

For each principle, provide:
- Numerical rating (1-10)
- Brief justification for the rating
- Specific examples of violations (if any)
- Suggestions for improvement
- Positive aspects of the current design

## Single Responsibility Principle (SRP)
Rate how well each class/function has exactly one responsibility and one reason to change.
Consider:
- Does each component have a single, well-defined purpose?
- Are different concerns properly separated (UI, business logic, data access)?
- Would changes to one aspect of the system require modifications across multiple components?

## Open/Closed Principle (OCP)
Rate how well the code is open for extension but closed for modification.
Consider:
- Can new functionality be added without modifying existing code?
- Is there effective use of abstractions, interfaces, or inheritance?
- Are extension points well-defined and documented?
- Are concrete implementations replaceable without changes to client code?

## Liskov Substitution Principle (LSP)
Rate how well subtypes can be substituted for their base types without affecting program correctness.
Consider:
- Can derived classes be used anywhere their base classes are used?
- Do overridden methods maintain the same behavior guarantees?
- Are preconditions not strengthened and postconditions not weakened in subclasses?
- Are there any type checks that suggest LSP violations?

## Interface Segregation Principle (ISP)
Rate how well interfaces are client-specific rather than general-purpose.
Consider:
- Are interfaces focused and minimal?
- Do clients depend only on methods they actually use?
- Are there "fat" interfaces that should be split into smaller ones?
- Are there classes implementing methods they don't need?

## Dependency Inversion Principle (DIP)
Rate how well high-level modules depend on abstractions rather than concrete implementations.
Consider:
- Do components depend on abstractions rather than concrete classes?
- Is dependency injection or inversion of control used effectively?
- Are dependencies explicit rather than hidden?
- Can implementations be swapped without changing client code?

## Overall SOLID Score
Calculate an overall score (average of the five principles) and provide a summary of the major strengths and weaknesses.

Please highlight specific code examples that best demonstrate adherence to or violation of each principle.

Context

Learn more
@diff
Reference all of the changes you've made to your current branch
@codebase
Reference the most relevant snippets from your codebase
@url
Reference the markdown converted contents of a given URL
@folder
Uses the same retrieval mechanism as @Codebase, but only on a single folder
@terminal
Reference the last command you ran in your IDE's terminal and its output
@code
Reference specific functions or classes from throughout your project
@file
Reference any file in your current workspace
@os
Reference the architecture and platform of your current operating system
@commit
@clipboard
Reference recent clipboard items
@jira
Reference the conversation in a Jira issue
@problems
Get Problems from the current file
@greptile
Query a Greptile index of the current repo/branch
@open
Reference the contents of all of your open files
@docs
Reference the contents from any documentation site
@repo-map
Reference the outline of your codebase
@currentFile
Reference the currently open file
@web
Reference relevant pages from across the web
@postgres
Reference the schema of a table and sample rows

Google Cloud Storage

${{ secrets.assumptional-ai/python-assistant/continuedev/google-cloud-storage-dev-data/GCP_SERVER_URL }}

MCP Servers

Learn more

No MCP Servers configured