AVA
Published on 6/28/2025

Artifact Virtual Assistant is an extension of the Artifact Virtual Ecosystem, providing seamless handheld support for workspace and general management.

Models

qwen2.5-coder 1.5b (ollama)
32k input · 8.192k output

Rules
- You are a PyTorch ML engineer
- Use type hints consistently
- Optimize for readability over premature optimization
- Write modular code, using separate files for models, data loading, training, and evaluation
- Follow PEP8 style guide for Python code
- Follow Solidity best practices.
- Use the latest version of Solidity.
- Use OpenZeppelin libraries for common patterns like ERC20 or ERC721.
- Utilize Hardhat for development and testing.
- Employ Chai for contract testing.
- Use Infura for interacting with Ethereum networks.
- Follow the Airbnb style guide for code formatting.
- Use CamelCase for naming functions and variables in Solidity.
- Use named exports for JavaScript files related to smart contracts.
- DO NOT TEACH ME HOW TO SET UP THE PROJECT, JUMP STRAIGHT TO WRITING CONTRACTS AND CODE.
You are an experienced game developer who specializes in Unity and C# game development.
# Development Principles
- Propose single-component changes only
- Prioritize testable, self-contained implementations
- Always consider performance implications
- Separate data from behavior when possible
# Code Guidelines
- XML docs for public members
- Error handling and null checks
- Follow Unity component lifecycle best practices
- Use `[SerializeField]` for editor-exposed private fields
# Response Format
- First assess implementation complexity
- For complex tasks, break down into subtasks
- Provide only one implementation per response
- Max 30-50 lines of code per response
- Include test strategy for implementation
- Always specify affected files
# Architecture Principles
- Composition over inheritance
- ScriptableObjects for shared data
- Events for loose coupling
- Consider SOLID principles

Docs

- PyTorch: https://pytorch.org/docs/stable/index.html
- Continue: https://docs.continue.dev
- Langchain Docs: https://python.langchain.com/docs/introduction/
- Vercel AI SDK Docs: https://sdk.vercel.ai/docs/
- Rust docs: https://doc.rust-lang.org/book/
- Ethereum: https://ethereum.org/en/developers/docs/
- Solidity: https://docs.soliditylang.org/en/v0.8.0/
- Obsidian Developer Docs: https://raw.githubusercontent.com/obsidianmd/obsidian-api/refs/heads/master/obsidian.d.ts
- Conda: https://docs.conda.io/en/latest/
- Matplotlib: https://matplotlib.org/stable/
- Jupyter: https://docs.jupyter.org/en/latest/
- Mermaid: https://mermaid.js.org/intro/
- OpenAPI: https://spec.openapis.org/oas/latest.html

Prompts

New Module
Create a new PyTorch module
Please create a new PyTorch module following these guidelines:
- Include docstrings for the model class and methods
- Add type hints for all parameters
- Add basic validation in __init__
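For illustration, a minimal sketch of a module in this style (the class name and layer sizes are hypothetical, not part of the prompt):

```python
import torch
import torch.nn as nn


class MLPBlock(nn.Module):
    """A small feed-forward block with one hidden layer.

    Args:
        in_features: Size of each input sample.
        hidden_features: Size of the hidden layer.
        out_features: Size of each output sample.
    """

    def __init__(self, in_features: int, hidden_features: int, out_features: int) -> None:
        super().__init__()
        # Basic validation in __init__, as the guidelines require.
        if min(in_features, hidden_features, out_features) <= 0:
            raise ValueError("all feature sizes must be positive")
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden_features),
            nn.ReLU(),
            nn.Linear(hidden_features, out_features),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Run the forward pass on a batch of shape (N, in_features)."""
        return self.net(x)
```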
Prisma schema
Create a Prisma schema.
Create or update a Prisma schema with the following models and relationships. Include necessary fields, relationships, and any relevant enums.
My prompt
Sequential Thinking Activation
<!-- Sequential Thinking Workflow -->
<assistant>
    <toolbox>
        <mcp_server name="sequential-thinking"
                        role="workflow_controller"
                        execution="sequential-thinking"
                        description="Initiate the sequential-thinking MCP server">
            <tool name="STEP" value="1">
                <description>Gather context by reading the relevant file(s).</description>
                <arguments>
                    <argument name="instructions" value="Seek proper context in the codebase to understand what is required. If you are unsure, ask the user." type="string" required="true"/>
                    <argument name="should_read_entire_file" type="boolean" default="true" required="false"/>
                </arguments>
                <result type="string" description="Context gathered from the file(s). Output can be passed to subsequent steps."/>
            </tool>
            <tool name="STEP" value="2">
                <description>Generate code changes based on the gathered context (from STEP 1).</description>
                <arguments>
                    <argument name="instructions" value="Generate the proper changes/corrections based on context from STEP 1." type="string" required="true"/>
                    <argument name="code_edit" type="object" required="true" description="Output: The proposed code modifications."/>
                </arguments>
                <result type="object" description="The generated code changes (code_edit object). Output can be passed to subsequent steps."/>
            </tool>
            <tool name="STEP" value="3">
                <description>Review the generated changes (from STEP 2) and suggest improvements.</description>
                <arguments>
                    <argument name="instructions" type="string" value="Review the changes applied in STEP 2 for gaps, correctness, and adherence to guidelines. Suggest improvements or identify any additional steps needed." required="true"/>
                </arguments>
                <result type="string" description="Review feedback, suggested improvements, or confirmation of completion. Final output of the workflow."/>
            </tool>
        </mcp_server>
    </toolbox>
</assistant>
AWS Terraform Module Best Practices
Create scalable, reusable AWS Terraform modules
Generate a structured, reusable Terraform module for deploying AWS infrastructure components. The module must include:

Module Structure:
- Clearly defined input variables with descriptions and defaults
- Outputs with meaningful resource information
- Secure handling of sensitive inputs (like IAM credentials or secrets)
- Compliance with Terraform best practices for scalability and readability
- Proper file organization (main.tf, variables.tf, outputs.tf)

AWS Infrastructure Components:
- Example using common AWS services (EKS, EC2, S3, IAM roles/policies, security groups, and VPCs)
- Include resource tagging and standard naming conventions

Documentation:
- README with module usage examples
- Inline code comments to clarify configurations and decisions
- Suggestions for module testing and validation

The user has provided the following requirements:
Add login required decorator
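A minimal, framework-agnostic sketch of what this prompt might produce (the `user` keyword argument is a hypothetical stand-in for however the app tracks sessions):

```python
import functools
from typing import Any, Callable


def login_required(func: Callable[..., Any]) -> Callable[..., Any]:
    """Reject the call unless an authenticated user is supplied."""

    @functools.wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        if kwargs.get("user") is None:
            raise PermissionError("login required")
        return func(*args, **kwargs)

    return wrapper


@login_required
def view_dashboard(*, user: str) -> str:
    return f"dashboard for {user}"
```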
Data Pipeline Development
Create robust and scalable data processing pipelines
Generate a data processing pipeline with these requirements:

Input:
- Data loading from multiple sources (CSV, SQL, APIs)
- Input validation and schema checks
- Error logging for data quality issues

Processing:
- Standardized cleaning (missing values, outliers, types)
- Memory-efficient operations for large datasets
- Numerical transformations using NumPy
- Feature engineering and aggregations

Quality & Monitoring:
- Data quality checks at key stages
- Validation visualizations with Matplotlib
- Performance monitoring

Structure:
- Modular, documented code with error handling
- Configuration management
- Reproducible in Jupyter notebooks
- Example usage and tests

The user has provided the following information:
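To make the requested shape concrete, a minimal sketch of the loading and cleaning stages (the percentile thresholds and median fill are illustrative assumptions, not requirements):

```python
import logging

import numpy as np
import pandas as pd

logger = logging.getLogger("pipeline")


def load_csv(path: str) -> pd.DataFrame:
    """Load one source; SQL and API loaders would follow the same contract."""
    df = pd.read_csv(path)
    if df.empty:
        logger.error("Loaded empty dataset from %s", path)
    return df


def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Standardized cleaning: fill missing numerics, clip outliers."""
    numeric = df.select_dtypes(include=np.number).columns
    df[numeric] = df[numeric].fillna(df[numeric].median())
    # Clip to the 1st-99th percentile as a simple, memory-friendly outlier rule.
    df[numeric] = df[numeric].clip(
        lower=df[numeric].quantile(0.01),
        upper=df[numeric].quantile(0.99),
        axis=1,
    )
    return df
```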
RAG Pipeline Design
Comprehensive retrieval-augmented generation system design
Design a RAG (Retrieval-Augmented Generation) system with:

Document Processing:
- Text extraction strategy
- Chunking approach with size and overlap parameters
- Metadata extraction and enrichment
- Document hierarchy preservation

Vector Store Integration:
- Embedding model selection and rationale
- Vector database architecture
- Indexing strategy
- Query optimization

Retrieval Strategy:
- Hybrid search (vector + keyword)
- Re-ranking methodology
- Metadata filtering capabilities
- Multi-query reformulation

LLM Integration:
- Context window optimization
- Prompt engineering for retrieval
- Citation and source tracking
- Hallucination mitigation strategies

Evaluation Framework:
- Retrieval relevance metrics
- Answer accuracy measures
- Ground truth comparison
- End-to-end benchmarking

Deployment Architecture:
- Caching strategies
- Scaling considerations
- Latency optimization
- Monitoring approach

The user's knowledge base has the following characteristics:
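As one concrete element of this design, a chunking helper exposing the size and overlap parameters might look like this (the defaults are illustrative, not tuned recommendations):

```python
def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into overlapping character windows.

    Overlap keeps sentences that straddle a chunk boundary
    retrievable from at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i : i + chunk_size] for i in range(0, max(len(text), 1), step)]
```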
Exploratory Data Analysis
Initial data exploration and key insights
Create an exploratory data analysis workflow that includes:

Data Overview:
- Basic statistics (mean, median, std, quartiles)
- Missing values and data types
- Unique value distributions

Visualizations:
- Numerical: histograms, box plots
- Categorical: bar charts, frequency plots
- Relationships: correlation matrices
- Temporal patterns (if applicable)

Quality Assessment:
- Outlier detection
- Data inconsistencies
- Value range validation

Insights & Documentation:
- Key findings summary
- Data quality issues
- Variable relationships
- Next steps recommendations
- Reproducible Jupyter notebook

The user has provided the following information:
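A minimal sketch of the overview and visualization steps, assuming the data is already in a pandas DataFrame:

```python
import matplotlib.pyplot as plt
import pandas as pd


def overview(df: pd.DataFrame) -> None:
    """Print the basic statistics the workflow asks for, then plot numerics."""
    print(df.describe())    # mean, std, quartiles for numeric columns
    print(df.isna().sum())  # missing values per column
    print(df.dtypes)        # data types
    print(df.nunique())     # unique value counts

    # One figure per numeric column: histogram plus box plot.
    for col in df.select_dtypes(include="number").columns:
        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
        df[col].plot.hist(ax=ax1, title=f"{col} histogram")
        df[col].plot.box(ax=ax2, title=f"{col} box plot")
        fig.tight_layout()
    plt.show()
```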
Write Unit Test
Write Laravel Unit Tests for attached code
Use Laravel to write a comprehensive suite of unit tests for the attached code.
Ensure that your responses are concise and technical, providing precise PHP examples that adhere to Laravel best practices and conventions. Apply object-oriented programming principles with a focus on SOLID design, prioritizing code iteration and modularization over duplication.
When writing unit tests, select descriptive names for test methods and variables, and use directories in lowercase with dashes following Laravel's conventions (e.g., app/Http/Controllers). Prioritize the use of dependency injection and service containers to create maintainable code that leverages PHP 8.1+ features.
Conform to PSR-12 coding standards and enforce strict typing using declare(strict_types=1);. Utilize Laravel's testing tools, particularly PHPUnit, to efficiently construct tests that validate the code functionality. Implement error handling and logging in your tests using Laravel's built-in features, and employ middleware testing techniques for request filtering and modification validation.
Ensure that your test cases cover the interactions using Laravel's Eloquent ORM and query builder, applying suitable practices for database migrations and seeders in a testing environment. Manage dependencies using the latest stable versions of Laravel and Composer, and rely on Eloquent ORM over raw SQL queries wherever applicable.
Adopt the Repository pattern for testing the data access layer, utilize Laravel's built-in authentication and authorization features in your tests, and implement job queue scenarios for long-running task verifications. Incorporate API versioning checks for endpoint tests and use Laravel's localization features to simulate multi-language support.
Use Laravel Mix in your testing workflow for asset handling and ensure efficient indexing for database operations tested within your suite. Leverage Laravel's pagination features and implement comprehensive error logging and monitoring in your test scenarios. Follow Laravel's MVC architecture, ensure route definitions are verified through tests, and employ Form Requests for validating request data.
Utilize Laravel's Blade engine during the testing of view components and confirm the establishment of database relationships through Eloquent. Implement API resource transformations and mock event and listener systems to maintain decoupled code functionality in your tests. Finally, utilize database transactions during tests to ensure data integrity, and use Laravel's scheduling features to validate recurring tasks.

Context

@diff
Reference all of the changes you've made to your current branch
@codebase
Reference the most relevant snippets from your codebase
@url
Reference the markdown converted contents of a given URL
@folder
Uses the same retrieval mechanism as @codebase, but only on a single folder
@terminal
Reference the last command you ran in your IDE's terminal and its output
@code
Reference specific functions or classes from throughout your project
@file
Reference any file in your current workspace
@os
Reference the architecture and platform of your current operating system
@commit
@problems
Get Problems from the current file
@clipboard
Reference recent clipboard items
@open
Reference the contents of all of your open files
@repo-map
Reference the outline of your codebase
@docs
Reference the contents from any documentation site
@currentFile
Reference the currently open file

Data

New Relic

https://log-api.newrelic.com/log/v1

MCP Servers


Playwright

npx -y @executeautomation/playwright-mcp-server

Browser MCP

npx -y @browsermcp/mcp@latest

Repomix

npx -y repomix --mcp

My server

uvx --from mysql-mcp-server mysql_mcp_server

GIT

docker run -i --rm -v .:/local-directory mcp/git

Ollama MCP v3
