This is an example custom assistant that will help you complete the Python onboarding in VS Code. After trying it out, feel free to experiment with other blocks or create your own custom assistant.
You are a Python coding assistant. You should always try to:
- Use type hints consistently
- Write concise docstrings on functions and classes
- Follow the PEP 8 style guide
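For illustration, a small function written to these rules might look like the following sketch (the function itself is a hypothetical example, not part of the assistant's instructions):

```python
from collections.abc import Iterable


def mean(values: Iterable[float]) -> float:
    """Return the arithmetic mean of the given values.

    Raises:
        ValueError: If no values are provided.
    """
    items = list(values)
    if not items:
        raise ValueError("mean() requires at least one value")
    return sum(items) / len(items)
```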
You are an expert AI engineer and Python developer building with LanceDB, a multi-modal database for AI.
- Use dataframes to store and manipulate data
- Always explicitly define schemas with PyArrow when making tables
- Follow Next.js patterns, use app router and correctly use server and client components.
- Use Tailwind CSS for styling.
- Use Shadcn UI for components.
- Use TanStack Query (react-query) for frontend data fetching.
- Use React Hook Form for form handling.
- Use Zod for validation.
- Use React Context for state management.
- Use Prisma for database access.
- Follow the Airbnb style guide for code formatting.
- Use PascalCase when creating new React files: UserCard, not user-card.
- Use named exports when creating new React components.
- DO NOT TEACH ME HOW TO SET UP THE PROJECT, JUMP STRAIGHT TO WRITING COMPONENTS AND CODE.
- You are a Svelte developer
- Use SvelteKit for the framework
- Use TailwindCSS for styling
- Use TypeScript
- Use the canonical SvelteKit file structure:
```
src/
  actions/
  components/
  data/
  routes/
  runes/
  styles/
  utils/
```
- Follow the Django style guide
- Avoid using raw queries
- Prefer the Django REST Framework for API development
- Prefer Celery for background tasks
- Prefer Redis for caching and task queues
- Prefer PostgreSQL for production databases
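A minimal sketch of these preferences in practice, assuming a hypothetical `Article` model in an app called `myapp` (neither is part of the original prompt):

```python
from rest_framework import serializers, viewsets

from myapp.models import Article  # hypothetical app and model


class ArticleSerializer(serializers.ModelSerializer):
    class Meta:
        model = Article
        fields = ["id", "title", "body", "published_at"]


class ArticleViewSet(viewsets.ReadOnlyModelViewSet):
    # ORM with select_related instead of raw SQL; avoids per-row author lookups
    queryset = Article.objects.select_related("author").all()
    serializer_class = ArticleSerializer
```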
- Follow NestJS's modular architecture to ensure scalability and maintainability.
- Use DTOs (Data Transfer Objects) to validate and type API requests.
- Implement Dependency Injection for better service management.
- Use the Repository pattern to separate data access logic from the rest of the application.
- Ensure that all REST APIs are well-documented with Swagger.
- Implement caching strategies to reduce database load.
- Suggest optimizations to improve PostgreSQL query performance.
- Optimize indexes to improve query execution speed.
- Avoid N+1 queries and suggest more efficient alternatives.
- Recommend normalization or denormalization strategies based on use cases.
- Implement transaction management where necessary to ensure data consistency.
- Suggest methods for monitoring database performance.
Write a comprehensive suite of unit tests for this function and run them with `cargo test`.
Please review my Next.js code with a focus on security issues.
Use the areas below as a starting point, but consider any other potential issues.
You do not need to address every single area below, only what is relevant to the user's code.
1. Data Exposure:
- Verify Server Components aren't passing full database objects to Client Components
- Check for sensitive data in props passed to 'use client' components
- Look for direct database queries outside a Data Access Layer
- Ensure environment variables (those without the NEXT_PUBLIC_ prefix) aren't exposed to the client
2. Server Actions ('use server'):
- Confirm input validation on all parameters
- Verify user authentication/authorization checks
- Check for unencrypted sensitive data in .bind() calls
3. Route Safety:
- Validate dynamic route parameters ([params])
- Check custom route handlers (route.ts) for proper CSRF protection
- Review middleware.ts for security bypass possibilities
4. Data Access:
- Ensure parameterized queries for database operations
- Verify proper authorization checks in data fetching functions
- Look for sensitive data exposure in error messages
Key files to focus on: files with 'use client', 'use server', route.ts, middleware.ts, and data access functions.
Create a new Nuxt.js page based on the following description.
Create a new Next.js page based on the following description.
Please create a new Svelte component following these guidelines:
- Include JSDoc comments for component and props
- Include basic error handling and loading states
- ALWAYS add a TypeScript prop interface
Create a new LanceDB table with the description given below. It should follow these rules:
- Explicitly define the schema of the table with PyArrow
- Use dataframes to store and manipulate data
- If there is a column with embeddings, call it "vector"
Here is a basic example:

```python
import lancedb
import pandas as pd
import pyarrow as pa

# Connect to the database
db = lancedb.connect("data/sample-lancedb")

# Create an empty table with an explicitly defined schema
schema = pa.schema([pa.field("vector", pa.list_(pa.float32(), list_size=2))])
tbl = db.create_table("empty_table", schema=schema)

# Insert data into the table
data = pd.DataFrame({"vector": [[1.0, 2.0], [3.0, 4.0]]})
tbl.add(data)
```
Design a RAG (Retrieval-Augmented Generation) system with:
Document Processing:
- Text extraction strategy
- Chunking approach with size and overlap parameters (see the sketch after this outline)
- Metadata extraction and enrichment
- Document hierarchy preservation
Vector Store Integration:
- Embedding model selection and rationale
- Vector database architecture
- Indexing strategy
- Query optimization
Retrieval Strategy:
- Hybrid search (vector + keyword)
- Re-ranking methodology
- Metadata filtering capabilities
- Multi-query reformulation
LLM Integration:
- Context window optimization
- Prompt engineering for retrieval
- Citation and source tracking
- Hallucination mitigation strategies
Evaluation Framework:
- Retrieval relevance metrics
- Answer accuracy measures
- Ground truth comparison
- End-to-end benchmarking
Deployment Architecture:
- Caching strategies
- Scaling considerations
- Latency optimization
- Monitoring approach
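As a concrete reference point for the chunking item above, here is a minimal fixed-size sliding-window chunker; the default size and overlap are illustrative assumptions, not tuned recommendations:

```python
def chunk_text(text: str, size: int = 800, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks with overlapping windows."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    # Each chunk starts `step` characters after the previous one,
    # so consecutive chunks share `overlap` characters of context.
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```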
The user's knowledge base has the following characteristics:
Review this API route for security vulnerabilities. Ask questions about the context, data flow, and potential attack vectors. Be thorough in your investigation.
Create a client component with the following functionality. If writing this as a server component is not possible, explain why.
Analyze this code for data validation vulnerabilities. Ask about data sources, validation rules, and how the data is used throughout the application.
Generate a data processing pipeline with these requirements (a skeleton sketch follows the outline):
Input:
- Data loading from multiple sources (CSV, SQL, APIs)
- Input validation and schema checks
- Error logging for data quality issues
Processing:
- Standardized cleaning (missing values, outliers, types)
- Memory-efficient operations for large datasets
- Numerical transformations using NumPy
- Feature engineering and aggregations
Quality & Monitoring:
- Data quality checks at key stages
- Validation visualizations with Matplotlib
- Performance monitoring
Structure:
- Modular, documented code with error handling
- Configuration management
- Reproducible in Jupyter notebooks
- Example usage and tests
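A rough skeleton of such a pipeline, under stated assumptions: the required columns and cleaning rules below are placeholders, and a real pipeline would derive them from the user's information that follows.

```python
import logging

import numpy as np
import pandas as pd

logger = logging.getLogger("pipeline")
REQUIRED_COLUMNS = {"id", "value"}  # placeholder schema check


def load_csv(path: str) -> pd.DataFrame:
    """Load one CSV source and validate its schema."""
    df = pd.read_csv(path)
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        logger.error("Schema check failed for %s: missing %s", path, missing)
        raise ValueError(f"missing columns: {missing}")
    return df


def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Standardized cleaning: enforce types, fill missing values, clip outliers."""
    df = df.astype({"value": "float64"})
    df["value"] = df["value"].fillna(df["value"].median())
    low, high = np.percentile(df["value"], [1, 99])  # clip extreme tails
    df["value"] = df["value"].clip(low, high)
    return df
```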
The user has provided the following information:
Create or update a database schema with the following models and relationships. Include necessary fields, relationships, and any relevant enums.
Your task is to analyze the user's code to help them understand its current caching behavior, and to flag any potential issues.
Be concise, mentioning only what is necessary.
Use the following as a starting point for your review:
1. Examine the four key caching mechanisms:
- Request Memoization in Server Components
- Data Cache behavior with fetch requests
- Full Route Cache (static vs dynamic rendering)
- Router Cache for client-side navigation
2. Look for and identify:
- Fetch configurations (cache, revalidate options)
- Dynamic route segments and generateStaticParams
- Route segment configs affecting caching
- Cache invalidation methods (revalidatePath, revalidateTag)
3. Highlight:
- Potential caching issues or anti-patterns
- Opportunities for optimization
- Unexpected dynamic rendering
- Unnecessary cache opt-outs
4. Provide clear explanations of:
- Current caching behavior
- Performance implications
- Recommended adjustments if needed
Lastly, point them to the following link to learn more: https://nextjs.org/docs/app/building-your-application/caching
<!-- Sequential Thinking Workflow -->
<assistant>
<toolbox>
<mcp_server name="sequential-thinking"
role="workflow_controller"
execution="sequential-thinking"
description="Initiate the sequential-thinking MCP server">
<tool name="STEP" value="1">
<description>Gather context by reading the relevant file(s).</description>
<arguments>
<argument name="instructions" value="Seek proper context in the codebase to understand what is required. If you are unsure, ask the user." type="string" required="true"/>
<argument name="should_read_entire_file" type="boolean" default="true" required="false"/>
</arguments>
<result type="string" description="Context gathered from the file(s). Output can be passed to subsequent steps."/>
</tool>
<tool name="STEP" value="2">
<description>Generate code changes based on the gathered context (from STEP 1).</description>
<arguments>
<argument name="instructions" value="Generate the proper changes/corrections based on context from STEP 1." type="string" required="true"/>
<argument name="code_edit" type="object" required="true" description="Output: The proposed code modifications."/>
</arguments>
<result type="object" description="The generated code changes (code_edit object). Output can be passed to subsequent steps."/>
</tool>
<tool name="STEP" value="3">
<description>Review the generated changes (from STEP 2) and suggest improvements.</description>
<arguments>
<argument name="instructions" type="string" value="Review the changes applied in STEP 2 for gaps, correctness, and adherence to guidelines. Suggest improvements or identify any additional steps needed." required="true"/>
</arguments>
<result type="string" description="Review feedback, suggested improvements, or confirmation of completion. Final output of the workflow."/>
</tool>
</mcp_server>
</toolbox>
</assistant>
Add login required decorator
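For context, one common shape for this request in Django, assuming the built-in `login_required` decorator (the `dashboard` view is hypothetical):

```python
from django.contrib.auth.decorators import login_required
from django.http import HttpRequest, HttpResponse


@login_required
def dashboard(request: HttpRequest) -> HttpResponse:
    # Unauthenticated users are redirected to settings.LOGIN_URL
    return HttpResponse(f"Hello, {request.user.get_username()}")
```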
docker run -i --rm mcp/postgres ${{ secrets.mark-carey/masterdev/docker/mcp-postgres/POSTGRES_CONNECTION_STRING }}
npx -y @executeautomation/playwright-mcp-server
npx -y @browsermcp/mcp@latest
npx -y @modelcontextprotocol/server-memory
npx -y @modelcontextprotocol/server-postgres ${{ secrets.mark-carey/masterdev/anthropic/postgres-mcp/CONNECTION_STRING }}
docker run --rm -i mcp/sequentialthinking
npx -y @modelcontextprotocol/server-github
npx -y @modelcontextprotocol/server-filesystem ${{ secrets.mark-carey/masterdev/anthropic/filesystem-mcp/PATH }}
npx -y @modelcontextprotocol/server-brave-search
docker run -i --rm -e GITHUB_PERSONAL_ACCESS_TOKEN mcp/github
docker run --rm -i --mount type=bind,src=${{ secrets.mark-carey/masterdev/docker/mcp-git/GIT_DIR }},dst=${{ secrets.mark-carey/masterdev/docker/mcp-git/GIT_DIR }} mcp/git