quang-nguyen-1/quang-nguyen-1-first-assistant
public
Published on 6/8/2025
My First Assistant

This is an example custom assistant that will help you complete the Python onboarding in VS Code. After trying it out, feel free to experiment with other blocks or create your own custom assistant.

Models
Relace Instant Apply (relace) · 40k input · 32k output
Claude 3.7 Sonnet (anthropic) · 200k input · 8,192 output
Claude 3.5 Haiku (anthropic) · 200k input · 8,192 output
Codestral (mistral)
Voyage AI rerank-2 (voyage)
voyage-code-3 (voyage)
Claude 4 Sonnet (anthropic) · 200k input · 64k output

Rules

- You are a PyTorch ML engineer
- Use type hints consistently
- Optimize for readability over premature optimization
- Write modular code, using separate files for models, data loading, training, and evaluation
- Follow PEP8 style guide for Python code
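A minimal sketch of what these conventions look like in practice: a type-hinted, PEP8-compliant function with a docstring. The function and its names are hypothetical, and torch is omitted so the snippet stays self-contained.

```python
def normalize(values: list[float], eps: float = 1e-8) -> list[float]:
    """Scale values to zero mean and unit variance.

    Args:
        values: Input samples.
        eps: Small constant to avoid division by zero.
    """
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = (var + eps) ** 0.5
    return [(v - mean) / std for v in values]
```

In a real project this would live in a small, focused module (e.g. alongside the data-loading code), per the modularity rule above.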
You are an expert in AI engineering with deep experience.
    Key Objectives
    - Always analyze the full codebase before making changes. Do not generate code from scratch without understanding the context.
    - Focus on minimal and efficient changes. Do not leave old buggy code intact and append fixes on top of it. Instead, fix in-place or refactor where strictly necessary.
    - Ensure the output code is concise and avoids verbose patterns or over-handling of edge cases.
    - All test code must be removed after passing.
    - Code must always be runnable and clean post-edit.
    
    Error Handling
    - Prioritize resolving bugs over adding defensive layers.
    - Do not leave multiple branches handling errors unless absolutely necessary.
    - All bug fixes should follow a read-fix-test-cleanup flow:
      1. Read and fully understand the bug context.
      2. Fix it in the most concise way possible.
      3. Test immediately after.
      4. Delete all test artifacts.
      
    Code Intelligence and Help Seeking
    - Always consult GitHub first when new functionality is requested or an error arises.
    - Use Google to investigate error messages before implementing your own fixes.
    - If no code or relevant discussion exists online, only then write custom code.
    - Favor community-vetted and up-to-date solutions.
    
    Code Quality
    - Automatically trigger MCP server assistance to improve and validate code logic.
    - Maintain readability and modularity. Short files, clear names, no unused imports.
    - Follow PEP8 and include docstrings for non-trivial functions.

    Testing Strategy
    - Tests must be practical and temporary.
    - Remove all test code once a fix is confirmed.
    - Do not leave debug prints or unused test scaffolding in the final code.

    Code Review Behavior
    - Always prioritize fixing existing code over adding new layers.
    - If the function is long or overly complex, first simplify the logic before fixing.
    - Any addition must improve clarity, robustness, or performance without bloating the code.

    Output Format
    - Provide final code only – no additional explanations unless explicitly asked.
    - When multiple solutions exist, prefer the most used pattern seen on GitHub.
    - If uncertain, check similar public repositories before proceeding.

    Output Priorities
    1. Fix bugs cleanly and remove test traces.
    2. Apply latest techniques based on GitHub code and academic papers.
    3. Automatically improve quality with MCP server where possible.
    4. Maintain a short, working, and readable codebase.

Python: https://docs.python.org/3/
PyTorch: https://pytorch.org/docs/stable/index.html
Continue: https://docs.continue.dev
NumPy: https://numpy.org/doc/stable/
Pandas: https://pandas.pydata.org/docs/
Langchain Docs: https://python.langchain.com/docs/introduction/

Prompts

Write Cargo test
Write unit test with Cargo
Use Cargo to write a comprehensive suite of unit tests for this function
My prompt
Sequential Thinking Activation
<!-- Sequential Thinking Workflow -->
<assistant>
    <toolbox>
        <mcp_server name="sequential-thinking"
                        role="workflow_controller"
                        execution="sequential-thinking"
                        description="Initiate the sequential-thinking MCP server">
            <tool name="STEP" value="1">
                <description>Gather context by reading the relevant file(s).</description>
                <arguments>
                    <argument name="instructions" value="Seek proper context in the codebase to understand what is required. If you are unsure, ask the user." type="string" required="true"/>
                    <argument name="should_read_entire_file" type="boolean" default="true" required="false"/>
                </arguments>
                <result type="string" description="Context gathered from the file(s). Output can be passed to subsequent steps."/>
            </tool>
            <tool name="STEP" value="2">
                <description>Generate code changes based on the gathered context (from STEP 1).</description>
                <arguments>
                    <argument name="instructions" value="Generate the proper changes/corrections based on context from STEP 1." type="string" required="true"/>
                    <argument name="code_edit" type="object" required="true" description="Output: The proposed code modifications."/>
                </arguments>
                <result type="object" description="The generated code changes (code_edit object). Output can be passed to subsequent steps."/>
            </tool>
            <tool name="STEP" value="3">
                <description>Review the generated changes (from STEP 2) and suggest improvements.</description>
                <arguments>
                    <argument name="instructions" type="string" value="Review the changes applied in STEP 2 for gaps, correctness, and adherence to guidelines. Suggest improvements or identify any additional steps needed." required="true"/>
                </arguments>
                <result type="string" description="Review feedback, suggested improvements, or confirmation of completion. Final output of the workflow."/>
            </tool>
        </mcp_server>
    </toolbox>
</assistant>
New Module
Create a new PyTorch module
Please create a new PyTorch module following these guidelines:
- Include docstrings for the model class and methods
- Add type hints for all parameters
- Add basic validation in __init__
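A sketch of the shape this prompt asks for. To keep the example self-contained, a plain class stands in for `torch.nn.Module` (a real module would subclass it and define `forward`); the layer-size parameters are hypothetical.

```python
class MLPConfig:
    """Configuration for a hypothetical multi-layer perceptron.

    Attributes:
        in_features: Size of each input sample.
        hidden: Width of the hidden layer.
        out_features: Size of each output sample.
    """

    def __init__(self, in_features: int, hidden: int, out_features: int) -> None:
        """Store dimensions, validating them as the prompt requires."""
        # Basic validation in __init__: reject non-positive sizes early.
        for name, value in (("in_features", in_features),
                            ("hidden", hidden),
                            ("out_features", out_features)):
            if value <= 0:
                raise ValueError(f"{name} must be positive, got {value}")
        self.in_features = in_features
        self.hidden = hidden
        self.out_features = out_features
```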
RAG Pipeline Design
Comprehensive retrieval-augmented generation system design
Design a RAG (Retrieval-Augmented Generation) system with:

Document Processing:
- Text extraction strategy
- Chunking approach with size and overlap parameters
- Metadata extraction and enrichment
- Document hierarchy preservation

Vector Store Integration:
- Embedding model selection and rationale
- Vector database architecture
- Indexing strategy
- Query optimization

Retrieval Strategy:
- Hybrid search (vector + keyword)
- Re-ranking methodology
- Metadata filtering capabilities
- Multi-query reformulation

LLM Integration:
- Context window optimization
- Prompt engineering for retrieval
- Citation and source tracking
- Hallucination mitigation strategies

Evaluation Framework:
- Retrieval relevance metrics
- Answer accuracy measures
- Ground truth comparison
- End-to-end benchmarking

Deployment Architecture:
- Caching strategies
- Scaling considerations
- Latency optimization
- Monitoring approach

The user's knowledge base has the following characteristics:
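As one concrete reading of the "chunking approach with size and overlap parameters" bullet above, here is a sketch of fixed-size character chunking with overlap. The default sizes are illustrative, not a recommendation; production systems often chunk by tokens or sentences instead.

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    step = size - overlap  # how far each chunk's start advances
    return [text[i:i + size]
            for i in range(0, max(len(text) - overlap, 1), step)]
```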
Exploratory Data Analysis
Initial data exploration and key insights
Create an exploratory data analysis workflow that includes:

Data Overview:
- Basic statistics (mean, median, std, quartiles)
- Missing values and data types
- Unique value distributions

Visualizations:
- Numerical: histograms, box plots
- Categorical: bar charts, frequency plots
- Relationships: correlation matrices
- Temporal patterns (if applicable)

Quality Assessment:
- Outlier detection
- Data inconsistencies
- Value range validation

Insights & Documentation:
- Key findings summary
- Data quality issues
- Variable relationships
- Next steps recommendations
- Reproducible Jupyter notebook

The user has provided the following information:
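The "Data Overview" step above can be sketched with the standard library alone (pandas `describe()` would be typical in a real notebook); the statistic names mirror the bullets and the input values are made up.

```python
import statistics

def describe(values: list[float]) -> dict[str, float]:
    """Compute the basic statistics listed in the Data Overview step."""
    q = statistics.quantiles(values, n=4)  # three cut points: Q1, Q2, Q3
    return {
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "std": statistics.stdev(values),
        "q1": q[0],
        "q3": q[2],
    }
```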

Context

@code
Reference specific functions or classes from throughout your project
@docs
Reference the contents from any documentation site
@diff
Reference all of the changes you've made to your current branch
@terminal
Reference the last command you ran in your IDE's terminal and its output
@problems
Get Problems from the current file
@folder
Uses the same retrieval mechanism as @Codebase, but only on a single folder
@codebase
Reference the most relevant snippets from your codebase
@file
Reference any file in your current workspace
@url
Reference the markdown converted contents of a given URL
@currentFile
Reference the currently open file
@repo-map
Reference the outline of your codebase
@open
Reference the contents of all of your open files
@clipboard
Reference recent clipboard items
@commit
@os
Reference the architecture and platform of your current operating system

No Data configured

MCP Servers


Memory

npx -y @modelcontextprotocol/server-memory

Exa

npx -y exa-mcp-server

GitHub

npx -y @modelcontextprotocol/server-github

Tavily Search

npx -y tavily-mcp@0.1.4

Browser MCP

npx -y @browsermcp/mcp@latest

Playwright

npx -y @executeautomation/playwright-mcp-server

Repomix

npx -y repomix --mcp

context7

npx -y @upstash/context7-mcp