quantum-energyai/qxay
public · Published on 8/17/2025

The quantumxai platform

Models
Relace Instant Apply (relace)
40k input · 32k output

Claude 4 Sonnet (anthropic)
200k input · 64k output

Claude 4.1 Opus (anthropic)
200k input · 32k output

Codestral (mistral)

Rules

No Rules configured

Docs

Jupyter: https://docs.jupyter.org/en/latest/
Ethereum: https://ethereum.org/en/developers/docs/
LanceDB Open Source Docs: https://lancedb.github.io/lancedb/
Uvicorn Docs: https://www.uvicorn.org/
Obsidian Developer Docs: https://raw.githubusercontent.com/obsidianmd/obsidian-api/refs/heads/master/obsidian.d.ts
Conda: https://docs.conda.io/en/latest/
Terraform Docs: https://developer.hashicorp.com/terraform/docs
PyTorch: https://pytorch.org/docs/stable/index.html
Zod: https://zod.dev/
Pandas: https://pandas.pydata.org/docs/
Langchain Docs: https://python.langchain.com/docs/introduction/
React: https://react.dev/reference/
Rust docs: https://doc.rust-lang.org/book/
Python: https://docs.python.org/3/
Kubernetes Docs: https://kubernetes.io/docs/home/

Prompts

Page
Creates a new Next.js page based on the description provided.
Create a new Next.js page based on the following description.
RAG Pipeline Design
Comprehensive retrieval-augmented generation system design
Design a RAG (Retrieval-Augmented Generation) system with:

Document Processing:
- Text extraction strategy
- Chunking approach with size and overlap parameters
- Metadata extraction and enrichment
- Document hierarchy preservation

Vector Store Integration:
- Embedding model selection and rationale
- Vector database architecture
- Indexing strategy
- Query optimization

Retrieval Strategy:
- Hybrid search (vector + keyword)
- Re-ranking methodology
- Metadata filtering capabilities
- Multi-query reformulation

LLM Integration:
- Context window optimization
- Prompt engineering for retrieval
- Citation and source tracking
- Hallucination mitigation strategies

Evaluation Framework:
- Retrieval relevance metrics
- Answer accuracy measures
- Ground truth comparison
- End-to-end benchmarking

Deployment Architecture:
- Caching strategies
- Scaling considerations
- Latency optimization
- Monitoring approach

The user's knowledge base has the following characteristics:
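
As an aside, a minimal sketch of the "chunking approach with size and overlap parameters" this prompt asks for; the character-based sizes (512/64) are placeholder assumptions, not values taken from the prompt:

```python
def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into fixed-size character chunks that overlap by `overlap` characters."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]
```
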
Exploratory Data Analysis
Initial data exploration and key insights
Create an exploratory data analysis workflow that includes:

Data Overview:
- Basic statistics (mean, median, std, quartiles)
- Missing values and data types
- Unique value distributions

Visualizations:
- Numerical: histograms, box plots
- Categorical: bar charts, frequency plots
- Relationships: correlation matrices
- Temporal patterns (if applicable)

Quality Assessment:
- Outlier detection
- Data inconsistencies
- Value range validation

Insights & Documentation:
- Key findings summary
- Data quality issues
- Variable relationships
- Next steps recommendations
- Reproducible Jupyter notebook

The user has provided the following information:
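
As an aside, a minimal pandas sketch of the "Data Overview" step above; `data.csv` is a hypothetical placeholder for the user's dataset:

```python
import pandas as pd

df = pd.read_csv("data.csv")  # hypothetical input file

print(df.describe(include="all"))  # mean, std, quartiles, top categories
print(df.isna().sum())             # missing values per column
print(df.dtypes)                   # data types
print(df.nunique())                # unique value distributions
```
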
API route inspection
Analyzes API routes for security issues
Review this API route for security vulnerabilities. Ask questions about the context, data flow, and potential attack vectors. Be thorough in your investigation.
Client component
Create a client component.
Create a client component with the following functionality. If writing this as a server component is not possible, explain why.
Data Pipeline Development
Create robust and scalable data processing pipelines
Generate a data processing pipeline with these requirements:

Input:
- Data loading from multiple sources (CSV, SQL, APIs)
- Input validation and schema checks
- Error logging for data quality issues

Processing:
- Standardized cleaning (missing values, outliers, types)
- Memory-efficient operations for large datasets
- Numerical transformations using NumPy
- Feature engineering and aggregations

Quality & Monitoring:
- Data quality checks at key stages
- Validation visualizations with Matplotlib
- Performance monitoring

Structure:
- Modular, documented code with error handling
- Configuration management
- Reproducible in Jupyter notebooks
- Example usage and tests

The user has provided the following information:
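
As an aside, a minimal sketch of the input-and-cleaning stages described above, assuming a CSV source and a hypothetical three-column schema:

```python
import logging

import numpy as np
import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

EXPECTED_COLUMNS = {"id", "timestamp", "value"}  # hypothetical schema


def load_and_clean(path: str) -> pd.DataFrame:
    """Load a CSV source, validate its schema, and apply standard cleaning."""
    df = pd.read_csv(path)

    # Input validation and schema checks
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {missing}")

    # Standardized cleaning: fill missing values and log the data quality issue
    n_null = int(df["value"].isna().sum())
    if n_null:
        log.warning("%d null values in 'value'; filling with the column median", n_null)
        df["value"] = df["value"].fillna(df["value"].median())

    # Numerical transformation with NumPy: clip outliers to the 1st/99th percentiles
    lo, hi = np.percentile(df["value"], [1, 99])
    df["value"] = df["value"].clip(lo, hi)
    return df
```
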
Next.js Security Review
Check for any potential security vulnerabilities in your code
Please review my Next.js code with a focus on security issues.

Use the list below as a starting point, but consider any other potential issues as well.

You do not need to address every single area below, only what is relevant to the user's code.

1. Data Exposure:
- Verify Server Components aren't passing full database objects to Client Components
- Check for sensitive data in props passed to 'use client' components
- Look for direct database queries outside a Data Access Layer
- Ensure environment variables without the NEXT_PUBLIC_ prefix aren't exposed to the client

2. Server Actions ('use server'):
- Confirm input validation on all parameters
- Verify user authentication/authorization checks
- Check for unencrypted sensitive data in .bind() calls

3. Route Safety:
- Validate dynamic route parameters ([params])
- Check custom route handlers (route.ts) for proper CSRF protection
- Review middleware.ts for security bypass possibilities

4. Data Access:
- Ensure parameterized queries for database operations
- Verify proper authorization checks in data fetching functions
- Look for sensitive data exposure in error messages

Key files to focus on: files with 'use client', 'use server', route.ts, middleware.ts, and data access functions.
New Module
Create a new PyTorch module
Please create a new PyTorch module following these guidelines:
- Include docstrings for the model class and methods
- Add type hints for all parameters
- Add basic validation in __init__
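
As an aside, a minimal sketch of a module written to these guidelines; the block itself (MLPBlock) and its layer sizes are hypothetical:

```python
import torch
from torch import nn


class MLPBlock(nn.Module):
    """A small feed-forward block: Linear -> ReLU -> Linear.

    Args:
        in_features: Size of each input sample.
        hidden_features: Size of the hidden layer.
        out_features: Size of each output sample.
    """

    def __init__(self, in_features: int, hidden_features: int, out_features: int) -> None:
        super().__init__()
        # Basic validation in __init__
        if min(in_features, hidden_features, out_features) <= 0:
            raise ValueError("all feature sizes must be positive integers")
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden_features),
            nn.ReLU(),
            nn.Linear(hidden_features, out_features),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Apply the block to a batch of shape (N, in_features)."""
        return self.net(x)
```
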
New Component
Create a new Svelte component
Please create a new Svelte component following these guidelines:
- Include JSDoc comments for component and props
- Include basic error handling and loading states
- ALWAYS add a TypeScript prop interface
New LanceDB
Create a new LanceDB table
Create a new LanceDB table with the description given below. It should follow these rules:
  - Explicitly define the schema of the table with PyArrow
  - Use dataframes to store and manipulate data
  - If there is a column with embeddings, call it "vector"

Here is a basic example:

```python
import lancedb
import pandas as pd
import pyarrow as pa

# Connect to the database
db = lancedb.connect("data/sample-lancedb")

# Create an empty table with an explicit PyArrow schema
schema = pa.schema([pa.field("vector", pa.list_(pa.float32(), list_size=2))])
tbl = db.create_table("empty_table", schema=schema)

# Insert data into the table
data = pd.DataFrame({"vector": [[1.0, 2.0], [3.0, 4.0]]})
tbl.add(data)
```
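
Once the table exists it can be queried by vector; a short follow-up sketch (the query vector here is arbitrary):

```python
# Nearest-neighbour search against the "vector" column
results = tbl.search([1.0, 2.0]).limit(1).to_pandas()
print(results)
```
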

Context

@diff
Reference all of the changes you've made to your current branch
@terminal
Reference the last command you ran in your IDE's terminal and its output
@file
Reference any file in your current workspace

Data

Logstash

${{ secrets.quantum-energyai/qxay/continuedev/logstash-dev-data/LOGSTASH_URL }}

MCP Servers


Context7 MCP

URL: https://mcp.context7.com/mcp

Exa

npx -y exa-mcp-server

Postgres

docker run -i --rm mcp/postgres ${{ secrets.quantum-energyai/qxay/docker/mcp-postgres/POSTGRES_CONNECTION_STRING }}

Brave Search

npx -y @modelcontextprotocol/server-brave-search

Filesystem

npx -y @modelcontextprotocol/server-filesystem ${{ secrets.quantum-energyai/qxay/anthropic/filesystem-mcp/PATH }}

Playwright

npx -y @executeautomation/playwright-mcp-server