jenes/loop
Published on 4/19/2025

Rules

##########################

AI SUSTAINABLE CODING RULESET

##########################

BUILD & DEVELOPMENT COMMANDS

  1. All build commands must be script-based and reproducible.

    • Example: scripts/dev.sh, scripts/build.sh, scripts/lint.sh
    • Each script must include:
      • Required environment
      • Setup steps
      • Comments explaining key options
  2. Environment variables must be defined in .env.example with inline comments:

       OPENAI_API_KEY=    # required
       DEBUG_MODE=false   # optional

  3. Use a Makefile or unified command interface:

       make dev
       make build
       make lint
       make test

  4. Dependencies must be locked using poetry.lock, package-lock.json, or Pipfile.lock.
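A script-based build entry point satisfying rule 1 might look like the sketch below. The file path, environment variable name, and commented-out setup command are illustrative assumptions, not prescriptions.

```shell
#!/usr/bin/env sh
# scripts/build.sh - reproducible build entry point (illustrative sketch).
# Required environment: BUILD_ENV (optional, defaults to "production").
set -eu                         # fail fast on errors and unset variables

: "${BUILD_ENV:=production}"    # document defaults inline

echo "Setup: installing locked dependencies for ${BUILD_ENV}"
# poetry install --no-root      # a real setup step would run here

echo "Build complete."
```

The `set -eu` line is what makes the script reproducible in practice: a missing variable or failing step aborts the build instead of silently continuing.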

TESTING GUIDELINES

  1. No code should be written without corresponding tests (TDD principle).

  2. Maintain test coverage ≥ 90%. Use tools such as pytest --cov or coverage.py.

  3. Include:

    • Positive and negative test cases
    • Edge case handling
    • Failure condition simulation
  4. Name tests behaviorally:

       def test_login_fails_with_wrong_password(): ...
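The testing rules above can be sketched together in one minimal example: positive, negative, and edge-case tests with behavioral names. The `login` function, the `USERS` store, and the credential values are hypothetical, invented only to make the tests concrete.

```python
# Illustrative sketch: a minimal login check plus behaviorally named tests.
USERS = {"alice@example.com": "s3cret"}  # hypothetical credential store

def login(email: str, password: str) -> bool:
    """Return True only when the stored password matches exactly."""
    return USERS.get(email) == password

def test_login_succeeds_with_correct_password():  # positive case
    assert login("alice@example.com", "s3cret")

def test_login_fails_with_wrong_password():       # negative case
    assert not login("alice@example.com", "wrong")

def test_login_fails_for_unknown_user():          # edge case: missing key
    assert not login("bob@example.com", "s3cret")
```

Each test name states the behavior under test, so a failing test reads as a sentence describing what broke.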

CODE STYLE & STANDARDS

  1. Use formatters and linters:

    • Python: black, flake8, isort, mypy
    • JavaScript: prettier, eslint
  2. Every function should follow the Single Responsibility Principle (SRP).

    • Split any function that exceeds 30 lines.
  3. Function and variable names must be descriptive and meaningful:

       def fetch_user_profile(): ...
       def calculate_total_price(): ...

  4. Enforce docstrings and type hints:

       def register_user(email: str, password: str) -> bool:
           """Register a user. Returns False on failure."""

  5. No magic numbers or strings:

       MAX_TIMEOUT = 10
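Rules 3-5 combine naturally in a single function. The sketch below reuses the document's own example name calculate_total_price; the tax rate and item cap are invented for illustration only.

```python
# Named constants instead of magic numbers (illustrative values).
TAX_RATE = 0.10   # assumed rate for the example, not a real requirement
MAX_ITEMS = 100   # assumed cap for the example

def calculate_total_price(prices: list[float]) -> float:
    """Return the tax-inclusive total for a list of item prices."""
    if len(prices) > MAX_ITEMS:
        raise ValueError(f"at most {MAX_ITEMS} items supported")
    subtotal = sum(prices)
    return subtotal * (1 + TAX_RATE)
```

A reader can change the rate or cap in one place, and the type hints plus docstring make the contract checkable with mypy.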

DOCUMENTATION STANDARDS

  1. README must include:

    • Overview
    • Installation
    • How to run
    • Examples
    • Stack
    • License
  2. Code comments should explain why, not what.

  3. Each module must start with a header comment:

       """
       Auth module - handles JWT token generation and validation.
       """

  4. Mark AI-specific risks with [AI Review Point]:

    [AI Review Point] Make sure this API param is not null
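The documentation rules above can be shown together in one short module sketch: a header comment, a "why" comment, and an [AI Review Point] marker. The module name, retry logic, and gateway stand-in are all hypothetical.

```python
"""
Payments module - handles charge creation (illustrative sketch).
"""

RETRY_LIMIT = 3  # assumed value for the example

def charge(amount_cents: int) -> bool:
    # Why, not what: we retry because a hypothetical upstream gateway can
    # race its own ledger update, so a first attempt may fail transiently.
    for _ in range(RETRY_LIMIT):
        if _attempt_charge(amount_cents):
            return True
    # [AI Review Point] Make sure amount_cents is validated as non-negative
    # before this call in real code.
    return False

def _attempt_charge(amount_cents: int) -> bool:
    return amount_cents > 0  # stand-in for a real gateway call
```

Note that the comments explain motivation and risk; the code itself already says what happens.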

SELF-REFLECTIVE LOOP (FOR SUSTAINABLE AUTONOMY)

Before finalizing any implementation, the AI must walk through a Self Q&A Loop:

SELF Q&A LOOP (APPLY TO EVERY FUNCTION AND MODULE)

  • What are the preconditions and postconditions?
  • What are the possible edge cases or failure modes?
  • Is this design future-proof or tightly coupled?
  • What would a human reviewer most likely critique?
  • Could this cause unintended side effects in a larger system?
  • How can I prove this code does what it claims?
  • If I had to write a test for this, what would it look like?

Add the following comment block to every major function:

-- Self Review --
Preconditions: ...
Postconditions: ...
Edge Cases: ...
Reviewer Questions: ...
Test Ideas: ...
------------------
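Applied to a concrete function, the Self Review block above might look like this; the function normalize_email is a hypothetical example, not mandated by the ruleset.

```python
def normalize_email(raw: str) -> str:
    # -- Self Review --
    # Preconditions: raw is a string, possibly with surrounding whitespace.
    # Postconditions: returns a lowercase, whitespace-trimmed address.
    # Edge Cases: mixed case, surrounding spaces, empty or blank input.
    # Reviewer Questions: should syntactically invalid addresses raise too?
    # Test Ideas: assert normalize_email("  A@B.COM ") == "a@b.com"
    # ------------------
    if not raw.strip():
        raise ValueError("email must be non-empty")
    return raw.strip().lower()
```

The block doubles as review notes and as a seed for the test suite: each "Test Ideas" line should become a real test.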

LOOP CONTINUATION:

  • After finishing one full Self Q&A Loop, the AI must:
    1. Re-validate all prior answers in light of any new changes.
    2. If any change affects a related module, trigger the same loop recursively.
    3. Repeat the loop until no new risks or uncertainties are discovered.
    4. Re-initiate loop automatically on new commits or feature branches.

AI HALLUCINATION PREVENTION

  1. Do not assume APIs, schemas, or structures — always verify with documentation or examples.
  2. No guessing error messages or response formats.
  3. All assumptions must be marked and test-covered.
  4. When rewriting existing logic, compare the new version with the old and state the advantages and trade-offs.
  5. Avoid destructive edits; preserve system integrity unless the change is fully confirmed.
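Rule 3 above (assumptions marked and test-covered) can be sketched as follows; the payload shape and field names here are assumptions invented for the example, exactly the kind of thing the marker is meant to flag.

```python
def extract_user_id(payload: dict) -> str:
    # [AI Review Point] Assumption: the response nests the id under
    # payload["data"]["id"]. Verify against the real API docs before merge.
    return payload["data"]["id"]

def test_extract_user_id_matches_assumed_shape():
    # The test pins the assumption: if the real shape differs, this fails
    # loudly instead of the code silently returning wrong data.
    assert extract_user_id({"data": {"id": "u-42"}}) == "u-42"
```

Marking the assumption in a comment makes it reviewable; covering it with a test makes it falsifiable.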

FINAL SAFETY CHECK BEFORE MERGE

  • [ ] Have all assumptions been validated?
  • [ ] Are all outputs tested?
  • [ ] Is this change compatible with other modules?
  • [ ] Are changes reversible if needed?
  • [ ] Are AI-generated parts clearly marked?

END OF RULESET