jenes/jenes-first-assistant (public)
Published on 4/19/2025
My First Assistant

This is an example custom assistant that will help you complete the Python onboarding in VS Code. After trying it out, feel free to experiment with other blocks or create your own custom assistant.

Models

Claude 3.7 Sonnet · anthropic · 200k input · 8.192k output
Codestral · mistral
Voyage AI rerank-2 · voyage
voyage-code-3 · voyage
Gemini 2.5 Pro · gemini · 1048k input · 65.536k output
Llama 4 Maverick Instruct (17Bx128E) · together
Grok 2 · xAI
o1 · OpenAI · 200k input · 100k output
OpenAI GPT-4o · OpenAI · 128k input · 16.384k output

Rules

You are a Python coding assistant. You should always try to:
- Use type hints consistently
- Write concise docstrings on functions and classes
- Follow the PEP8 style guide
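
A minimal sketch of these defaults in practice (the function and its logic are invented for illustration):

   def average(values: list[float]) -> float:
       """Return the arithmetic mean of a non-empty list of numbers."""
       # Guard clause keeps the failure mode explicit instead of a ZeroDivisionError.
       if not values:
           raise ValueError("values must not be empty")
       return sum(values) / len(values)
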
##########################
# AI SUSTAINABLE CODING RULESET
##########################

## BUILD & DEVELOPMENT COMMANDS

1. All build commands must be script-based and reproducible.
   - Example: scripts/dev.sh, scripts/build.sh, scripts/lint.sh
   - Each script must include:
     - Required environment
     - Setup steps
     - Comments explaining key options

2. Environment variables must be defined in .env.example with inline comments.
   OPENAI_API_KEY=    # required
   DEBUG_MODE=false   # optional

3. Use a Makefile or unified command interface:
   make dev
   make build
   make lint
   make test

4. Dependencies must be locked using poetry.lock, package-lock.json, or Pipfile.lock

## TESTING GUIDELINES

1. No code should be written without corresponding tests (TDD principle).
2. Maintain test coverage ≥ 90%. Use tools like pytest --cov, coverage.
3. Include:
   - Positive and negative test cases
   - Edge case handling
   - Failure condition simulation

4. Name tests behaviorally (see the sketch after this list):
   def test_login_fails_with_wrong_password(): ...
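
A minimal pytest sketch tying these points together; the auth module, login function, and expected behaviors are hypothetical:

   # test_auth.py -- run with `pytest --cov` to check the coverage target
   import pytest

   from auth import login  # hypothetical module under test

   def test_login_succeeds_with_valid_credentials():
       assert login("alice@example.com", "correct-password") is True

   def test_login_fails_with_wrong_password():
       assert login("alice@example.com", "wrong-password") is False

   def test_login_rejects_empty_email():
       # Edge case: empty input should raise instead of failing silently.
       with pytest.raises(ValueError):
           login("", "irrelevant")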

## CODE STYLE & STANDARDS

1. Use formatters and linters:
   - Python: black, flake8, isort, mypy
   - JavaScript: prettier, eslint

2. Every function should follow Single Responsibility Principle (SRP).
   - Split if over 30 lines.

3. Function and variable names must be descriptive and meaningful:
   def fetch_user_profile(): ...
   def calculate_total_price(): ...

4. Enforce docstrings and type hints:
   def register_user(email: str, password: str) -> bool:
       """Registers user. Returns False if failed."""

5. No magic numbers or strings:
   MAX_TIMEOUT = 10

## DOCUMENTATION STANDARDS

1. README must include:
   - Overview
   - Installation
   - How to run
   - Examples
   - Stack
   - License

2. Code comments should explain why, not what.

3. Each module must start with a header comment:
   """
   Auth module - handles JWT token generation and validation.
   """

4. Mark AI-specific risks with [AI Review Point]:
   # [AI Review Point] Make sure this API param is not null

## SELF-REFLECTIVE LOOP (FOR SUSTAINABLE AUTONOMY)

Before finalizing any implementation, the AI must walk through a Self Q&A Loop:

SELF Q&A LOOP (APPLY TO EVERY FUNCTION AND MODULE)
- What are the preconditions and postconditions?
- What are the possible edge cases or failure modes?
- Is this design future-proof or tightly coupled?
- What would a human reviewer most likely critique?
- Could this cause unintended side effects in a larger system?
- How can I prove this code does what it claims?
- If I had to write a test for this, what would it look like?

Add the following comment block to every major function:
# -- Self Review --
# Preconditions: ...
# Postconditions: ...
# Edge Cases: ...
# Reviewer Questions: ...
# Test Ideas: ...
# ------------------
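
For illustration, here is the block attached to a hypothetical function (the pricing logic is invented for the example):

   # -- Self Review --
   # Preconditions: prices contains non-negative floats; discount is in [0, 1].
   # Postconditions: returns the discounted total; never negative for valid input.
   # Edge Cases: empty list returns 0.0; discount of exactly 0 or 1.
   # Reviewer Questions: should currency rounding happen here or at the caller?
   # Test Ideas: empty list, full discount, discount outside [0, 1] raises ValueError.
   # ------------------
   def calculate_total_price(prices: list[float], discount: float = 0.0) -> float:
       """Sum the prices and apply a fractional discount."""
       if not 0.0 <= discount <= 1.0:
           raise ValueError("discount must be between 0 and 1")
       return sum(prices) * (1.0 - discount)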

LOOP CONTINUATION:
- After finishing one full Self Q&A Loop, the AI must:
  1. Re-validate all prior answers in light of any new changes.
  2. If any change affects a related module, trigger the same loop recursively.
  3. Repeat the loop until no new risks or uncertainties are discovered.
  4. Re-initiate loop automatically on new commits or feature branches.

## AI HALLUCINATION PREVENTION

1. Do not assume APIs, schemas, or structures — always verify with documentation or examples.
2. No guessing error messages or response formats.
3. All assumptions must be marked and test-covered (see the sketch after this list).
4. When rewriting existing logic, compare the new version with the old and state the advantages and trade-offs.
5. Avoid destructive edits — always preserve system integrity, unless change is 100% confirmed.
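
A sketch of rule 3 in practice, using the [AI Review Point] marker from the documentation section; the response shape is an assumption, not a verified API contract:

   # [AI Review Point] Assumption: the job API returns JSON with a "status" field.
   def is_job_finished(response_json: dict) -> bool:
       """Return True when the reported job status is 'done'."""
       return response_json.get("status") == "done"

   def test_is_job_finished_handles_missing_status():
       # Covers the assumption above: a missing field must not raise.
       assert is_job_finished({}) is False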

## FINAL SAFETY CHECK BEFORE MERGE

- [ ] Have all assumptions been validated?
- [ ] Are all outputs tested?
- [ ] Is this change compatible with other modules?
- [ ] Are changes reversible if needed?
- [ ] Are AI-generated parts clearly marked?

## END OF RULESET

Docs

Python: https://docs.python.org/3/
Next.js: https://nextjs.org/docs/app
Langchain Docs: https://python.langchain.com/docs/introduction/
NumPy: https://numpy.org/doc/stable/
React: https://react.dev/reference/
Vercel AI SDK Docs: https://sdk.vercel.ai/docs/
SvelteKit: https://svelte.dev/docs/kit
Vue docs: https://vuejs.org/v2/guide/
PyTorch: https://pytorch.org/docs/stable/index.html
Angular Docs: https://angular.io/docs
Nuxt.js: https://nuxt.com/docs
Uvicorn Docs: https://www.uvicorn.org/
Streamlit: https://docs.streamlit.io
Zod: https://zod.dev/

Prompts

Write Cargo test: Write unit tests with Cargo
Prompt: Use Cargo to write a comprehensive suite of unit tests for this function

Context

@code: Reference specific functions or classes from throughout your project
@docs: Reference the contents from any documentation site
@diff: Reference all of the changes you've made to your current branch
@terminal: Reference the last command you ran in your IDE's terminal and its output
@problems: Get Problems from the current file
@folder: Uses the same retrieval mechanism as @Codebase, but only on a single folder
@codebase: Reference the most relevant snippets from your codebase
@file: Reference any file in your current workspace
@currentFile: Reference the currently open file

Data

No Data configured

MCP Servers


Memory: npx -y @modelcontextprotocol/server-memory
GitHub: npx -y @modelcontextprotocol/server-github