zaim98269/slug
public
Published on 8/5/2025
Chatbot

It bundles a large set of features: models, rules, prompts, docs, context providers, data destinations, and MCP servers.

Rules
Prompts
Models
Context
Data
Models

| Model | Provider | Context (input / output) |
| --- | --- | --- |
| Relace Instant Apply | relace | 40k / 32k |
| Claude 3.7 Sonnet | anthropic | 200k / 8.192k |
| Claude 3.5 Sonnet | anthropic | 200k / 8.192k |
| Codestral | mistral | |
| voyage-code-3 | voyage | |
| Voyage AI rerank-2 | voyage | |
| Claude 4 Sonnet | anthropic | 200k / 64k |
| OpenAI GPT-4.1 | OpenAI | 1047k / 32.768k |
| Llama 4 Maverick Instruct (17Bx128E) | together | |
| Gemini 2.5 Pro | gemini | 1048k / 65.536k |
| Claude 4 Opus | anthropic | 200k / 32k |
| Morph Fast Apply | OpenAI | |
| Grok 2 | xAI | |
| OpenAI GPT-4o | OpenAI | 128k / 16.384k |
| Llama 4 Scout Instruct (17Bx16E) | together | |
| Gemini 2.0 Flash | gemini | 1048k / 8.192k |
| o3-mini | OpenAI | 200k / 100k |
| Claude 3.5 Haiku | anthropic | 200k / 8.192k |
| Mercury Coder Small | inception | |
| Morph Rerank v2 | cohere | |
| voyage-code-2 | voyage | |
| qwen2.5-coder 1.5b | ollama | |
| OpenAI GPT-4o Mini | OpenAI | 128k / 16.384k |
| o1 | OpenAI | 200k / 100k |
| Mistral Embed | mistral | |
| nomic-embed-text latest | ollama | |
| deepseek-r1 8b | lmstudio | |
| hermes-2-pro-llama-3-8b | anthropic | |
| Qwen2.5 Coder 32B Instruct | deepinfra | |
| Mistral Large | mistral | |
| Morph Embedding v2 | OpenAI | |
| Qwen 2.5 Coder 32b | ncompass | |
| qwen2.5-coder 1.5b | lmstudio | |
| deepseek-r1 8b | ollama | |
| DeepSeek R1 | sambanova | |
| midnight-rose-70b | anthropic | |
| DeepSeek R1 | deepinfra | |
| CodeGate Anthropic | anthropic | |
| deepseek-r1 | novita | |
| OpenAI text-embedding-3-large | OpenAI | |

Rules

- You are an Angular developer
- Use Angular CLI for project scaffolding
- Use TypeScript with strict mode enabled
- Use RxJS for state management and async operations
- Use the typical naming conventions:
  - Components: .component.ts
  - Services: .service.ts
  - Pipes: .pipe.ts
  - Module: .module.ts
  - Test: .spec.ts
  - Directives: .directive.ts
- Follow Nuxt.js 3 patterns and correctly use server and client components.
- Use Nuxt UI for components and styling (built on top of Tailwind CSS).
- Use VueUse for utility composables.
- Use Pinia for state management.
- Use Vee-Validate + Zod for form handling and validation.
- Use Nuxt DevTools for debugging.
- Use Vue Query (TanStack) for complex data fetching scenarios.
- Use Prisma for database access.
- Follow Vue.js Style Guide for code formatting.
- Use script setup syntax for components.
- DO NOT TEACH ME HOW TO SET UP THE PROJECT, JUMP STRAIGHT TO WRITING COMPONENTS AND CODE.
- You are a PyTorch ML engineer
- Use type hints consistently
- Optimize for readability over premature optimization
- Write modular code, using separate files for models, data loading, training, and evaluation
- Follow PEP8 style guide for Python code
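A minimal sketch of a module written to these rules (type hints, docstrings, basic validation in `__init__`); the class name and layer sizes are illustrative assumptions, not a required API:
```python
# Illustrative module only; names and sizes are placeholders.
import torch
from torch import nn


class MLPClassifier(nn.Module):
    """Small feed-forward classifier."""

    def __init__(self, in_features: int, hidden: int, num_classes: int) -> None:
        super().__init__()
        if in_features <= 0 or num_classes <= 0:
            raise ValueError("in_features and num_classes must be positive")
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Return unnormalized class logits."""
        return self.net(x)
```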
- You are a Svelte developer
- Use SvelteKit for the framework
- Use TailwindCSS for styling
- Use TypeScript
- Use the canonical SvelteKit file structure:
  ```
  src/
    actions/
    components/
    data/
    routes/
    runes/
    styles/
    utils/
  ```
- Follow Next.js patterns, use app router and correctly use server and client components.
- Use Tailwind CSS for styling.
- Use Shadcn UI for components.
- Use TanStack Query (react-query) for frontend data fetching.
- Use React Hook Form for form handling.
- Use Zod for validation.
- Use React Context for state management.
- Use Prisma for database access.
- Follow AirBnB style guide for code formatting.
- Use PascalCase when creating new React files. UserCard, not user-card.
- Use named exports when creating new React components.
- DO NOT TEACH ME HOW TO SET UP THE PROJECT, JUMP STRAIGHT TO WRITING COMPONENTS AND CODE.
You are an expert AI engineer and Python developer building with LanceDB, a multi-modal database for AI.
  - Use dataframes to store and manipulate data
  - Always explicitly define schemas with PyArrow when making tables
You are an experienced game developer who specializes in Unity and C# game development.
# Development Principles
- Propose single-component changes only
- Prioritize testable, self-contained implementations
- Always consider performance implications
- Separate data from behavior when possible
# Code Guidelines
- XML docs for public members
- Error handling and null checks
- Follow Unity component lifecycle best practices
- Use `[SerializeField]` for editor-exposed private fields
# Response Format
- First assess implementation complexity
- For complex tasks, break down into subtasks
- Provide only one implementation per response
- Max 30-50 lines of code per response
- Include test strategy for implementation
- Always specify affected files
# Architecture Principles
- Composition over inheritance
- ScriptableObjects for shared data
- Events for loose coupling
- Consider SOLID principles
- Follow Django style guide
- Avoid using raw queries
- Prefer the Django REST Framework for API development
- Prefer Celery for background tasks
- Prefer Redis for caching and task queues
- Prefer PostgreSQL for production databases
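A compact sketch of how these preferences fit together (a Django REST Framework view over the ORM plus a Celery task); the `Article` model and its fields are hypothetical:
```python
# Hypothetical Article model; shows DRF over the ORM plus a Celery background task.
from celery import shared_task
from rest_framework import serializers, viewsets

from .models import Article  # assumed to exist in this app


class ArticleSerializer(serializers.ModelSerializer):
    class Meta:
        model = Article
        fields = ["id", "title", "body"]


class ArticleViewSet(viewsets.ModelViewSet):
    queryset = Article.objects.all()  # ORM query, no raw SQL
    serializer_class = ArticleSerializer


@shared_task
def reindex_article(article_id: int) -> None:
    """Handed to Celery so the request cycle stays fast."""
    Article.objects.filter(pk=article_id).update(needs_reindex=False)
```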
You are an experienced data scientist who specializes in Python-based
data science and machine learning. You use the following tools:
- Python 3 as the primary programming language
- PyTorch for deep learning and neural networks
- NumPy for numerical computing and array operations
- Pandas for data manipulation and analysis
- Jupyter for interactive development and visualization
- Conda for environment and package management
- Matplotlib for data visualization and plotting
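A quick sketch of how these tools hand data to one another in a notebook; `train.csv` and its column names are placeholders:
```python
# Placeholder file and columns; shows the Pandas -> NumPy -> PyTorch handoff.
import numpy as np
import pandas as pd
import torch

df = pd.read_csv("train.csv")                            # Pandas for tabular data
features = df[["x1", "x2"]].to_numpy(dtype=np.float32)   # NumPy arrays
labels = df["y"].to_numpy(dtype=np.float32)
X = torch.from_numpy(features)                           # shares memory with NumPy
y = torch.from_numpy(labels)
print(X.shape, y.shape)
```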
# dlt rules
## Basics
1. dlt means "data load tool". It is an open source Python library installable via `pip install dlt`.
2. To create a new pipeline, use `dlt init <source> <destination>`.
3. The dlt library comes with the `dlt` CLI. Add the `--help` flag to any command to verify its specs.
4. The preferred way to configure dlt (sources, resources, destinations, etc.) is to use `.dlt/config.toml` and `.dlt/secrets.toml`. Make sure to fill required fields when adding a source or resource.
5. During development, always set `dev_mode=True` when creating a dlt Pipeline: `pipeline = dlt.pipeline(..., dev_mode=True)`. This lets you reset the pipeline's schema and state between iterations.
6. Use type annotations only if you're certain you're properly importing the types.
7. Use dlt's REST API source if loading data from the web.
8. Use dlt's SQL source when loading data from an SQL database or backend.
9. Use dlt's filesystem source if loading data from files (CSV, PDF, Parquet, JSON, and more). This works for local filesystems and cloud buckets (AWS, Azure, GCP, Minio, etc.).
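A minimal sketch of the pipeline conventions above (notably rule 5's `dev_mode=True`); the resource, its sample rows, and the duckdb destination are placeholder choices for local development:
```python
# Placeholder resource and destination; a real pipeline would use dlt's
# REST API, SQL, or filesystem sources (rules 7-9).
import dlt


@dlt.resource(name="users", write_disposition="replace")
def users():
    yield [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]


pipeline = dlt.pipeline(
    pipeline_name="users_pipeline",
    destination="duckdb",
    dataset_name="raw",
    dev_mode=True,  # reset schema and state between iterations (rule 5)
)
print(pipeline.run(users()))
```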

Docs

- Jupyter: https://docs.jupyter.org/en/latest/
- Matplotlib: https://matplotlib.org/stable/
- Angular Docs: https://angular.io/docs
- Nuxt.js: https://nuxt.com/docs
- Continue: https://docs.continue.dev
- PyTorch: https://pytorch.org/docs/stable/index.html
- Zod: https://zod.dev/
- Next.js: https://nextjs.org/docs/app
- Svelte: https://svelte.dev/docs/svelte
- Symfony Docs: https://symfony.com/doc/current/index.html
- Pandas: https://pandas.pydata.org/docs/
- Langchain Docs: https://python.langchain.com/docs/introduction/
- NumPy: https://numpy.org/doc/stable/
- React: https://react.dev/reference/
- Rust docs: https://doc.rust-lang.org/book/
- Vercel AI SDK Docs: https://sdk.vercel.ai/docs/
- Python: https://docs.python.org/3/
- Ethereum: https://ethereum.org/en/developers/docs/
- Kubernetes Docs: https://kubernetes.io/docs/home/
- Streamlit: https://docs.streamlit.io
- SvelteKit: https://svelte.dev/docs/kit
- LanceDB Open Source Docs: https://lancedb.github.io/lancedb/
- Uvicorn Docs: https://www.uvicorn.org/
- Gradle Documentation: https://docs.gradle.org/current/userguide/
- SQLAlchemy: https://docs.sqlalchemy.org/en/20
- Solidity: https://docs.soliditylang.org/en/v0.8.0/
- Obsidian Developer Docs: https://raw.githubusercontent.com/obsidianmd/obsidian-api/refs/heads/master/obsidian.d.ts
- Vue docs: https://vuejs.org/v2/guide/
- React Testing Library Docs: https://testing-library.com/docs/react-testing-library/intro/
- Conda: https://docs.conda.io/en/latest/
- LanceDB Enterprise Docs: https://docs.lancedb.com/enterprise/introduction
- Terraform Docs: https://developer.hashicorp.com/terraform/docs
- Clerk: https://clerk.com/docs/
- Better Auth: https://www.better-auth.com/docs
- MDN JavaScript Documentation: https://developer.mozilla.org/en-US/docs/Web/JavaScript
- Apollo GraphQL: https://www.apollographql.com/docs/
- service-desk Docs: https://developer.atlassian.com/cloud/jira/service-desk/rest/

Prompts

New Component
Create a new Angular component
Please create a new Angular component following these guidelines:
- Include JSDoc comments for component and inputs/outputs
- Implement proper lifecycle hooks
- Include TypeScript interfaces for models
- Follow container/presentational component pattern where appropriate
- Include unit tests with Jasmine/Karma in a separate test file
- Make sure to create separate files for any services, pipes, modules, and directives
Next.js Security Review
Check for any potential security vulnerabilities in your code
Please review my Next.js code with a focus on security issues.

Use the below as a starting point, but consider any other potential issues

You do not need to address every single area below, only what is relevant to the user's code.

1. Data Exposure:
- Verify Server Components aren't passing full database objects to Client Components
- Check for sensitive data in props passed to 'use client' components
- Look for direct database queries outside a Data Access Layer
- Ensure environment variables (non NEXT_PUBLIC_) aren't exposed to client

2. Server Actions ('use server'):
- Confirm input validation on all parameters
- Verify user authentication/authorization checks
- Check for unencrypted sensitive data in .bind() calls

3. Route Safety:
- Validate dynamic route parameters ([params])
- Check custom route handlers (route.ts) for proper CSRF protection
- Review middleware.ts for security bypass possibilities

4. Data Access:
- Ensure parameterized queries for database operations
- Verify proper authorization checks in data fetching functions
- Look for sensitive data exposure in error messages

Key files to focus on: files with 'use client', 'use server', route.ts, middleware.ts, and data access functions.
Page
Creates a new Nuxt.js page based on the description provided.
Create a new Nuxt.js page based on the following description.
New Module
Create a new PyTorch module
Please create a new PyTorch module following these guidelines:
- Include docstrings for the model class and methods
- Add type hints for all parameters
- Add basic validation in __init__
New Component
Create a new Svelte component
Please create a new Svelte component following these guidelines:
- Include JSDoc comments for component and props
- Include basic error handling and loading states
- ALWAYS add a TypeScript prop interface
Page
Creates a new Next.js page based on the description provided.
Create a new Next.js page based on the following description.
Review
Review changes
Please review the current code changes looking for:

- Memory leaks (unsubscribed observables)
- Proper change detection strategy
- Proper use of async pipe
- Proper error handling

Format the review as:
```
## <FILENAME>
- <ISSUE>
...
- <ISSUE>
```
New LanceDB
Create a new LanceDB table
Create a new LanceDB table with the description given below. It should follow these rules:
  - Explicitly define the schema of the table with PyArrow
  - Use dataframes to store and manipulate data
  - If there is a column with embeddings, call it "vector"

Here is a basic example:

```python
import lancedb
import pandas as pd
import pyarrow as pa

# Connect to the database
db = lancedb.connect("data/sample-lancedb")

# Create an empty table with an explicit schema
schema = pa.schema([pa.field("vector", pa.list_(pa.float32(), list_size=2))])
tbl = db.create_table("empty_table", schema=schema)

# Insert data into the table
data = pd.DataFrame({"vector": [[1.0, 2.0], [3.0, 4.0]]})
tbl.add(data)
```
RAG Pipeline Design
Comprehensive retrieval-augmented generation system design
Design a RAG (Retrieval-Augmented Generation) system with:

Document Processing:
- Text extraction strategy
- Chunking approach with size and overlap parameters
- Metadata extraction and enrichment
- Document hierarchy preservation

Vector Store Integration:
- Embedding model selection and rationale
- Vector database architecture
- Indexing strategy
- Query optimization

Retrieval Strategy:
- Hybrid search (vector + keyword)
- Re-ranking methodology
- Metadata filtering capabilities
- Multi-query reformulation

LLM Integration:
- Context window optimization
- Prompt engineering for retrieval
- Citation and source tracking
- Hallucination mitigation strategies

Evaluation Framework:
- Retrieval relevance metrics
- Answer accuracy measures
- Ground truth comparison
- End-to-end benchmarking

Deployment Architecture:
- Caching strategies
- Scaling considerations
- Latency optimization
- Monitoring approach

The user's knowledge base has the following characteristics:
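As a rough, framework-agnostic illustration of the retrieval core such a design converges on (chunking, embedding, cosine-similarity ranking), here is a sketch; `embed()` is a stand-in for whatever embedding model is selected, and the chunk sizes are arbitrary:
```python
# Sketch only: embed() is a placeholder for a real embedding model call,
# and chunk size/overlap are arbitrary defaults.
import numpy as np


def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


def embed(texts: list[str]) -> np.ndarray:
    rng = np.random.default_rng(0)          # placeholder vectors
    return rng.normal(size=(len(texts), 8))


def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    vectors = embed(chunks)
    q = embed([query])[0]
    scores = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q) + 1e-9)
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]
```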
Exploratory Data Analysis
Initial data exploration and key insights
Create an exploratory data analysis workflow that includes:

Data Overview:
- Basic statistics (mean, median, std, quartiles)
- Missing values and data types
- Unique value distributions

Visualizations:
- Numerical: histograms, box plots
- Categorical: bar charts, frequency plots
- Relationships: correlation matrices
- Temporal patterns (if applicable)

Quality Assessment:
- Outlier detection
- Data inconsistencies
- Value range validation

Insights & Documentation:
- Key findings summary
- Data quality issues
- Variable relationships
- Next steps recommendations
- Reproducible Jupyter notebook

The user has provided the following information:
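A condensed sketch of the overview and visualization steps above; the CSV path and columns are placeholders:
```python
# Placeholder dataset; covers basic stats, missingness, distributions, correlations.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("dataset.csv")

print(df.describe(include="all"))        # basic statistics
print(df.isna().sum())                   # missing values
print(df.dtypes)                         # data types

numeric = df.select_dtypes(include="number")
numeric.hist(bins=30, figsize=(10, 6))   # numerical distributions

plt.figure()
plt.imshow(numeric.corr(), cmap="coolwarm")
plt.colorbar(label="correlation")
plt.show()
```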
Write Unit Test
Write Laravel Unit Tests for attached code
Use Laravel to write a comprehensive suite of unit tests for the attached code.
Ensure that your responses are concise and technical, providing precise PHP examples that adhere to Laravel best practices and conventions. Apply object-oriented programming principles with a focus on SOLID design, prioritizing code iteration and modularization over duplication.
When writing unit tests, select descriptive names for test methods and variables, and use directories in lowercase with dashes following Laravel's conventions (e.g., app/Http/Controllers). Prioritize the use of dependency injection and service containers to create maintainable code that leverages PHP 8.1+ features.
Conform to PSR-12 coding standards and enforce strict typing using declare(strict_types=1);. Utilize Laravel's testing tools, particularly PHPUnit, to efficiently construct tests that validate the code functionality. Implement error handling and logging in your tests using Laravel's built-in features, and employ middleware testing techniques for request filtering and modification validation.
Ensure that your test cases cover the interactions using Laravel's Eloquent ORM and query builder, applying suitable practices for database migrations and seeders in a testing environment. Manage dependencies using the latest stable versions of Laravel and Composer, and rely on Eloquent ORM over raw SQL queries wherever applicable.
Adopt the Repository pattern for testing the data access layer, utilize Laravel's built-in authentication and authorization features in your tests, and implement job queue scenarios for long-running task verifications. Incorporate API versioning checks for endpoint tests and use Laravel's localization features to simulate multi-language support.
Use Laravel Mix in your testing workflow for asset handling and ensure efficient indexing for database operations tested within your suite. Leverage Laravel's pagination features and implement comprehensive error logging and monitoring in your test scenarios. Follow Laravel's MVC architecture, ensure route definitions are verified through tests, and employ Form Requests for validating request data.
Utilize Laravel's Blade engine during the testing of view components and confirm the establishment of database relationships through Eloquent. Implement API resource transformations and mock event and listener systems to maintain decoupled code functionality in your tests. Finally, utilize database transactions during tests to ensure data integrity, and use Laravel's scheduling features to validate recurring tasks.
Prisma schema
Create a Prisma schema.
Create or update a Prisma schema with the following models and relationships. Include necessary fields, relationships, and any relevant enums.
API route inspection
Analyzes API routes for security issues
Review this API route for security vulnerabilities. Ask questions about the context, data flow, and potential attack vectors. Be thorough in your investigation.
Client component
Create a client component.
Create a client component with the following functionality. If writing this as a server component is not possible, explain why.
Data validation check
Checks input validation and sanitization
Analyze this code for data validation vulnerabilities. Ask about data sources, validation rules, and how the data is used throughout the application.
Database schema
Create a database schema.
Create or update a database schema with the following models and relationships. Include necessary fields, relationships, and any relevant enums.
Data Pipeline Development
Create robust and scalable data processing pipelines
Generate a data processing pipeline with these requirements:

Input:
- Data loading from multiple sources (CSV, SQL, APIs)
- Input validation and schema checks
- Error logging for data quality issues

Processing:
- Standardized cleaning (missing values, outliers, types)
- Memory-efficient operations for large datasets
- Numerical transformations using NumPy
- Feature engineering and aggregations

Quality & Monitoring:
- Data quality checks at key stages
- Validation visualizations with Matplotlib
- Performance monitoring

Structure:
- Modular, documented code with error handling
- Configuration management
- Reproducible in Jupyter notebooks
- Example usage and tests

The user has provided the following information:
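A skeleton of the load, validate, clean, transform flow described above; the file name, required columns, and thresholds are placeholders:
```python
# Placeholder columns and thresholds; shows validation, logging, cleaning, transforms.
import logging

import numpy as np
import pandas as pd

logging.basicConfig(level=logging.INFO)
REQUIRED = ["id", "value"]


def load(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    missing = set(REQUIRED) - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {missing}")  # schema check
    return df


def clean(df: pd.DataFrame) -> pd.DataFrame:
    before = len(df)
    df = df.dropna(subset=REQUIRED)
    logging.info("dropped %d incomplete rows", before - len(df))
    low, high = df["value"].quantile([0.01, 0.99])
    df["value"] = df["value"].clip(low, high)            # tame outliers
    return df


def transform(df: pd.DataFrame) -> pd.DataFrame:
    df["log_value"] = np.log1p(df["value"])              # NumPy transform
    return df
```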
Next.js Caching Review
Understand the caching behavior of your code
Your task is to analyze the user's code to help them understand its current caching behavior, and mention any potential issues.
Be concise, only mentioning what is necessary.
Use the following as a starting point for your review:

1. Examine the four key caching mechanisms:
   - Request Memoization in Server Components
   - Data Cache behavior with fetch requests
   - Full Route Cache (static vs dynamic rendering)
   - Router Cache for client-side navigation

2. Look for and identify:
   - Fetch configurations (cache, revalidate options)
   - Dynamic route segments and generateStaticParams
   - Route segment configs affecting caching
   - Cache invalidation methods (revalidatePath, revalidateTag)

3. Highlight:
   - Potential caching issues or anti-patterns
   - Opportunities for optimization
   - Unexpected dynamic rendering
   - Unnecessary cache opt-outs

4. Provide clear explanations of:
   - Current caching behavior
   - Performance implications
   - Recommended adjustments if needed

Lastly, point them to the following link to learn more: https://nextjs.org/docs/app/building-your-application/caching
AWS Terraform Module Best Practices
Create scalable, reusable AWS Terraform modules
Generate a structured, reusable Terraform module for deploying AWS infrastructure components. The module must include:

Module Structure:
- Clearly defined input variables with descriptions and defaults
- Outputs with meaningful resource information
- Secure handling of sensitive inputs (like IAM credentials or secrets)
- Compliance with Terraform best practices for scalability and readability
- Proper file organization (main.tf, variables.tf, outputs.tf)

AWS Infrastructure Components:
- Example using common AWS services (EKS, EC2, S3, IAM roles/policies, security groups, and VPCs)
- Include resource tagging and standard naming conventions

Documentation:
- README with module usage examples
- Inline code comments to clarify configurations and decisions
- Suggestions for module testing and validation

The user has provided the following requirements:
Add login required decorator
Add login required decorator
Add login required decorator
My prompt
Sequential Thinking Activation
<!-- Sequential Thinking Workflow -->
<assistant>
    <toolbox>
        <mcp_server name="sequential-thinking"
                        role="workflow_controller"
                        execution="sequential-thinking"
                        description="Initiate the sequential-thinking MCP server">
            <tool name="STEP" value="1">
                <description>Gather context by reading the relevant file(s).</description>
                <arguments>
                    <argument name="instructions" value="Seek proper context in the codebase to understand what is required. If you are unsure, ask the user." type="string" required="true"/>
                    <argument name="should_read_entire_file" type="boolean" default="true" required="false"/>
                </arguments>
                <result type="string" description="Context gathered from the file(s). Output can be passed to subsequent steps."/>
            </tool>
            <tool name="STEP" value="2">
                <description>Generate code changes based on the gathered context (from STEP 1).</description>
                <arguments>
                    <argument name="instructions" value="Generate the proper changes/corrections based on context from STEP 1." type="string" required="true"/>
                    <argument name="code_edit" type="object" required="true" description="Output: The proposed code modifications."/>
                </arguments>
                <result type="object" description="The generated code changes (code_edit object). Output can be passed to subsequent steps."/>
            </tool>
            <tool name="STEP" value="3">
                <description>Review the generated changes (from STEP 2) and suggest improvements.</description>
                <arguments>
                    <argument name="instructions" type="string" value="Review the changes applied in STEP 2 for gaps, correctness, and adherence to guidelines. Suggest improvements or identify any additional steps needed." required="true"/>
                </arguments>
                <result type="string" description="Review feedback, suggested improvements, or confirmation of completion. Final output of the workflow."/>
            </tool>
        </mcp_server>
    </toolbox>
</assistant>

Context

@diff
Reference all of the changes you've made to your current branch
@codebase
Reference the most relevant snippets from your codebase
@url
Reference the markdown converted contents of a given URL
@folder
Uses the same retrieval mechanism as @Codebase, but only on a single folder
@terminal
Reference the last command you ran in your IDE's terminal and its output
@code
Reference specific functions or classes from throughout your project
@file
Reference any file in your current workspace
@currentFile
Reference the currently open file
@docs
Reference the contents from any documentation site
@repo-map
Reference the outline of your codebase
@gitlab-mr
Reference an open MR for this branch on GitLab
@open
Reference the contents of all of your open files
@greptile
Query a Greptile index of the current repo/branch
@jira
Reference the conversation in a Jira issue
@clipboard
Reference recent clipboard items
@problems
Get Problems from the current file
@commit
@os
Reference the architecture and platform of your current operating system

Data

S3

${{ secrets.zaim98269/slug/continuedev/s3-dev-data/AWS_SERVER_URL }}

New Relic

https://log-api.newrelic.com/log/v1

Azure Blob Storage

${{ secrets.zaim98269/slug/continuedev/azure-blob-storage-dev-data/AZURE_SERVER_URL }}

Google Cloud Storage

${{ secrets.zaim98269/slug/continuedev/google-cloud-storage-dev-data/GCP_SERVER_URL }}

Logstash

${{ secrets.zaim98269/slug/continuedev/logstash-dev-data/LOGSTASH_URL }}

MCP Servers


Exa

npx -y exa-mcp-server

Postgres

docker run -i --rm mcp/postgres ${{ secrets.zaim98269/slug/docker/mcp-postgres/POSTGRES_CONNECTION_STRING }}

Playwright

npx -y @executeautomation/playwright-mcp-server

Memory

npx -y @modelcontextprotocol/server-memory

Slack

docker run -i --rm -e SLACK_BOT_TOKEN -e SLACK_TEAM_ID mcp/slack

Browser MCP

npx -y @browsermcp/mcp@latest

Postgres

npx -y @modelcontextprotocol/server-postgres ${{ secrets.zaim98269/slug/anthropic/postgres-mcp/CONNECTION_STRING }}

Sequential Thinking

docker run --rm -i mcp/sequentialthinking

Repomix

npx -y repomix --mcp

GitHub

npx -y @modelcontextprotocol/server-github

Git

docker run --rm -i --mount type=bind,src=${{ secrets.zaim98269/slug/docker/mcp-git/GIT_DIR }},dst=${{ secrets.zaim98269/slug/docker/mcp-git/GIT_DIR }} mcp/git

Filesystem

npx -y @modelcontextprotocol/server-filesystem ${{ secrets.zaim98269/slug/anthropic/filesystem-mcp/PATH }}

Tavily Search

npx -y tavily-mcp@latest

Stakpak

npx @stakpak/mcp@latest --output=text

Gitlab

docker run -e GITLAB_PERSONAL_ACCESS_TOKEN -e GITLAB_API_URL mcp/gitlab

Brave Search

npx -y @modelcontextprotocol/server-brave-search

Github

docker run -i --rm -e GITHUB_PERSONAL_ACCESS_TOKEN mcp/github