ctan-dev/rustymodel
Public · Published on 7/15/2025
RustyModel

Models

Claude 3.7 Sonnet (anthropic): 200k input · 8,192 output tokens
Claude 3.5 Sonnet (anthropic): 200k input · 8,192 output tokens
voyage-code-3 (voyage)
Claude 4 Opus (anthropic): 200k input · 32k output tokens
nomic-embed-text:latest (ollama)
llama3.1:8b (ollama)

Rules

You have short, session-based memory, so use the memory tools (if present) to persist and retrieve data between sessions. Use memory to store insights, notes, and context that are especially valuable for quick access.
- You are a PyTorch ML engineer
- Use type hints consistently
- Prioritize readability over premature optimization
- Write modular code, using separate files for models, data loading, training, and evaluation
- Follow PEP8 style guide for Python code
- Follow Rust idioms
- Avoid using unsafe blocks
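As a quick illustration of the Python side of these rules (type hints throughout, PEP8 naming, one model per file), here is a minimal sketch; the module path, class name, and layer sizes are illustrative and not part of this assistant's configuration.

# models/mlp.py (illustrative layout only; names and dimensions are made up)
import torch
from torch import nn


class MlpClassifier(nn.Module):
    """Small feed-forward classifier, shown only to demonstrate the style rules."""

    def __init__(self, input_dim: int, hidden_dim: int, num_classes: int) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Return unnormalized class logits for a batch of flattened features."""
        return self.net(x)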
Docs

Rust docs: https://doc.rust-lang.org/book/
React: https://react.dev/reference/
Conda: https://docs.conda.io/en/latest/
Bevy docs: https://docs.rs/bevy/latest/bevy/
ModelContextProtocol LLMs: https://modelcontextprotocol.io/llms-full.txt
MCP Docs: https://modelcontextprotocol.io/introduction
Matplotlib: https://matplotlib.org/stable/

Prompts

Restructure
Restructures the codebase for modularity, clearer naming, documentation, and structured logging (a JSONL logging sketch follows the plan)
We analyze and improve the given code according to this plan:
1. Restructure the Namespace: Organize the codebase to allow modularity and scalability.
   - Break down large entities into smaller, well-clustered units.
   - Extract reusable components into separate files or modules.

2. Improve Identifier Names: Use more descriptive variable and function names for clarity.
3. Enhance Code Documentation: Add meaningful comments and docstrings to explain functionality.
4. Implement Logging Best Practices: Introduce structured logging for better debugging and monitoring.
   - Use JSONL format for logs.
   - Define log levels (INFO, DEBUG, ERROR) for better traceability.

5. Finally: Combine the improvements into a single, cohesive solution.
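For step 4, here is a minimal JSONL logging sketch using only the Python standard library; the payload field names are an assumed schema, not one mandated by this prompt.

# jsonl_logging.py (hedged sketch: payload fields and handler wiring are illustrative)
import json
import logging


class JsonlFormatter(logging.Formatter):
    """Render every log record as one JSON object per line (JSONL)."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)


handler = logging.StreamHandler()
handler.setFormatter(JsonlFormatter())
logging.basicConfig(level=logging.DEBUG, handlers=[handler])
logging.getLogger("restructure").info("namespace reorganization started")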
Data Pipeline Development
Create robust and scalable data processing pipelines (a validation and chunked-loading sketch follows the prompt)
Generate a data processing pipeline with these requirements:

Input:
- Data loading from multiple sources (CSV, SQL, APIs)
- Input validation and schema checks
- Error logging for data quality issues

Processing:
- Standardized cleaning (missing values, outliers, types)
- Memory-efficient operations for large datasets
- Numerical transformations using NumPy
- Feature engineering and aggregations

Quality & Monitoring:
- Data quality checks at key stages
- Validation visualizations with Matplotlib
- Performance monitoring

Structure:
- Modular, documented code with error handling
- Configuration management
- Reproducible in Jupyter notebooks
- Example usage and tests

The user has provided the following information:
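To make the input-validation and memory-efficiency requirements above concrete, here is a minimal sketch assuming pandas is available; the expected schema, column names, and chunk size are placeholders, not values implied by this prompt.

# pipeline_sketch.py (illustrative only; schema and chunk size are assumptions)
import logging
from typing import Iterator

import pandas as pd

logger = logging.getLogger("pipeline")

EXPECTED_DTYPES = {"id": "int64", "value": "float64"}  # placeholder schema


def validate_chunk(chunk: pd.DataFrame) -> pd.DataFrame:
    """Check required columns, log data-quality issues, and coerce dtypes."""
    missing_cols = set(EXPECTED_DTYPES) - set(chunk.columns)
    if missing_cols:
        raise ValueError(f"missing columns: {sorted(missing_cols)}")
    n_null = int(chunk[list(EXPECTED_DTYPES)].isna().sum().sum())
    if n_null:
        logger.error("dropping %d null values in this chunk", n_null)
    return chunk.dropna(subset=list(EXPECTED_DTYPES)).astype(EXPECTED_DTYPES)


def load_csv(path: str, chunksize: int = 100_000) -> Iterator[pd.DataFrame]:
    """Stream a large CSV in chunks so memory use stays bounded."""
    for chunk in pd.read_csv(path, chunksize=chunksize):
        yield validate_chunk(chunk)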
RAG Pipeline Design
Comprehensive retrieval-augmented generation system design (a chunking sketch follows the prompt)
Design a RAG (Retrieval-Augmented Generation) system with:

Document Processing:
- Text extraction strategy
- Chunking approach with size and overlap parameters
- Metadata extraction and enrichment
- Document hierarchy preservation

Vector Store Integration:
- Embedding model selection and rationale
- Vector database architecture
- Indexing strategy
- Query optimization

Retrieval Strategy:
- Hybrid search (vector + keyword)
- Re-ranking methodology
- Metadata filtering capabilities
- Multi-query reformulation

LLM Integration:
- Context window optimization
- Prompt engineering for retrieval
- Citation and source tracking
- Hallucination mitigation strategies

Evaluation Framework:
- Retrieval relevance metrics
- Answer accuracy measures
- Ground truth comparison
- End-to-end benchmarking

Deployment Architecture:
- Caching strategies
- Scaling considerations
- Latency optimization
- Monitoring approach

The user's knowledge base has the following characteristics:
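As one concrete reading of the chunking requirement above, here is a minimal fixed-size chunker with overlap; the character-based splitting and the 800/200 defaults are assumptions for illustration, not parameters recommended by this prompt.

# chunking_sketch.py (hedged example: splitting strategy and defaults are assumptions)
from typing import List


def chunk_text(text: str, chunk_size: int = 800, overlap: int = 200) -> List[str]:
    """Split text into overlapping character windows ready for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = [text[start:start + chunk_size] for start in range(0, len(text), step)]
    # Drop a trailing fragment that is already fully contained in the previous chunk.
    if len(chunks) > 1 and len(chunks[-1]) <= overlap:
        chunks.pop()
    return chunks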

Context

@diff: Reference all of the changes you've made to your current branch
@codebase: Reference the most relevant snippets from your codebase
@url: Reference the markdown-converted contents of a given URL
@folder: Uses the same retrieval mechanism as @codebase, but only on a single folder
@terminal: Reference the last command you ran in your IDE's terminal and its output
@code: Reference specific functions or classes from throughout your project
@file: Reference any file in your current workspace
@os: Reference the architecture and platform of your current operating system
@docs: Reference the contents of any documentation site

Data

None configured

MCP Servers

Memory: npx -y @modelcontextprotocol/server-memory
Filesystem: npx -y @modelcontextprotocol/server-filesystem ${{ secrets.ctan-dev/rustymodel/anthropic/filesystem-mcp/PATH }}
Browser MCP: npx -y @browsermcp/mcp@latest
Sequential Thinking: docker run --rm -i mcp/sequentialthinking
GitHub: npx -y @modelcontextprotocol/server-github
Playwright: npx -y @executeautomation/playwright-mcp-server
Tavily Search: npx -y tavily-mcp@latest
Knowledge Graph: npx -y @modelcontextprotocol/server-memory
Dallin's Memory MCP: npx -y @modelcontextprotocol/server-memory
Memory: docker run --rm -i mcp/memory
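Each entry above is a local process that speaks MCP over stdio using newline-delimited JSON-RPC 2.0. As a rough sketch of what a client does with one of these commands, the snippet below launches the memory server and sends an initialize request; the protocol version string and capability fields follow the published MCP spec and may need updating for newer server releases.

# mcp_handshake_sketch.py (illustrative handshake; fields per the MCP spec, versions may drift)
import json
import subprocess

proc = subprocess.Popen(
    ["npx", "-y", "@modelcontextprotocol/server-memory"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "rustymodel-sketch", "version": "0.1"},
    },
}

proc.stdin.write(json.dumps(initialize_request) + "\n")  # one JSON-RPC message per line
proc.stdin.flush()
print(proc.stdout.readline())  # the server's initialize result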