##########################
AI SUSTAINABLE CODING RULESET
##########################
BUILD & DEVELOPMENT COMMANDS
- All build commands must be script-based and reproducible.
  - Example: scripts/dev.sh, scripts/build.sh, scripts/lint.sh
- Each script must include:
  - The required environment
  - Setup steps
  - Comments explaining key options
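A minimal sketch of what such a script could look like; the placeholder commands and the DEV_PORT option are illustrative assumptions, not prescribed tooling.

```shell
#!/usr/bin/env bash
# scripts/dev.sh - a sketch of the structure each script should follow;
# the placeholder commands below stand in for real project tooling.

# Required environment: bash; python3 is assumed to be on PATH.
set -euo pipefail  # fail fast on errors, unset variables, and pipe failures

# Setup steps: prepare the environment before running anything.
echo "[setup] creating virtual environment (placeholder)"

# Key options are explained in comments:
DEV_PORT=8000  # port the (placeholder) dev server binds to
echo "[run] starting dev server on port ${DEV_PORT} (placeholder)"
```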
- Environment variables must be defined in .env.example with inline comments:
  OPENAI_API_KEY=      # required
  DEBUG_MODE=false     # optional
- Use a Makefile or another unified command interface:
  make dev
  make build
  make lint
  make test
- Dependencies must be locked with a lockfile such as poetry.lock, package-lock.json, or Pipfile.lock.
TESTING GUIDELINES
- No code should be written without corresponding tests (TDD principle).
- Maintain test coverage ≥ 90%. Measure it with tools such as pytest --cov or coverage.py.
- Every test suite must include:
  - Positive and negative test cases
  - Edge case handling
  - Failure condition simulation
- Name tests after the behavior they verify:
  def test_login_fails_with_wrong_password(): ...
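Taken together, the testing rules above might look like this pytest sketch; the login function, its USERS table, and the test names are illustrative assumptions, not part of any real codebase.

```python
import pytest

# Hypothetical code under test: a minimal login check, used only to
# illustrate behavioral naming and positive/negative/failure coverage.
USERS = {"alice@example.com": "s3cret"}

def login(email: str, password: str) -> bool:
    if not email or not password:
        raise ValueError("email and password are required")
    return USERS.get(email) == password

def test_login_succeeds_with_correct_password():  # positive case
    assert login("alice@example.com", "s3cret") is True

def test_login_fails_with_wrong_password():  # negative case
    assert login("alice@example.com", "wrong") is False

def test_login_raises_on_empty_credentials():  # failure condition
    with pytest.raises(ValueError):
        login("", "")
```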
CODE STYLE & STANDARDS
- Use formatters and linters:
  - Python: black, flake8, isort, mypy
  - JavaScript: prettier, eslint
- Every function should follow the Single Responsibility Principle (SRP).
- Function and variable names must be descriptive and meaningful:
  def fetch_user_profile(): ...
  def calculate_total_price(): ...
- Enforce docstrings and type hints:
  def register_user(email: str, password: str) -> bool:
      """Register a user. Return False if registration fails."""
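Expanded into a fuller sketch, the same rule might look like this; the in-memory _registry and the validation details are illustrative assumptions.

```python
# Illustrative stand-in for real persistence; a production version
# would use a database and hashed passwords.
_registry: dict[str, str] = {}

def register_user(email: str, password: str) -> bool:
    """Register a user by email.

    Args:
        email: Unique address identifying the user.
        password: Plaintext here only for illustration; hash in real code.

    Returns:
        True on success, False if the email is already registered.
    """
    if email in _registry:
        return False
    _registry[email] = password
    return True
```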
- No magic numbers or strings; extract them into named constants:
  MAX_TIMEOUT = 10
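A minimal sketch of the named-constants rule; RETRY_LIMIT, DEFAULT_ROLE, and should_retry are illustrative names, not prescribed ones.

```python
MAX_TIMEOUT = 10         # seconds before a request is abandoned
RETRY_LIMIT = 3          # attempts before giving up
DEFAULT_ROLE = "viewer"  # replaces a repeated magic string

def should_retry(attempt: int) -> bool:
    # The bare number 3 never appears at the call site.
    return attempt < RETRY_LIMIT
```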
DOCUMENTATION STANDARDS
- The README must include:
  - Overview
  - Installation
  - How to run
  - Examples
  - Stack
  - License
- Code comments should explain why, not what.
- Each module must start with a header comment:
  """
  Auth module - handles JWT token generation and validation.
  """
- Mark AI-specific risks with an [AI Review Point] comment:
  # [AI Review Point] Make sure this API param is not null
SELF-REFLECTIVE LOOP (FOR SUSTAINABLE AUTONOMY)
Before finalizing any implementation, the AI must walk through a Self Q&A Loop:
SELF Q&A LOOP (APPLY TO EVERY FUNCTION AND MODULE)
- What are the preconditions and postconditions?
- What are the possible edge cases or failure modes?
- Is this design future-proof or tightly coupled?
- What would a human reviewer most likely critique?
- Could this cause unintended side effects in a larger system?
- How can I prove this code does what it claims?
- If I had to write a test for this, what would it look like?
Add the following comment block to every major function:
-- Self Review --
Preconditions: ...
Postconditions: ...
Edge Cases: ...
Reviewer Questions: ...
Test Ideas: ...
------------------
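Applied to a concrete function, the block might look like this sketch; the function and the answers in its review block are illustrative only.

```python
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, rounded to 2 decimals."""
    # -- Self Review --
    # Preconditions: price >= 0; 0 <= percent <= 100.
    # Postconditions: result is >= 0 and <= price.
    # Edge Cases: percent == 0 (no change), percent == 100 (free), price == 0.
    # Reviewer Questions: should an invalid percent be clamped instead of raising?
    # Test Ideas: test_apply_discount_returns_zero_at_100_percent, ...
    # ------------------
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)
```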
LOOP CONTINUATION:
After finishing one full Self Q&A Loop, the AI must:
- Re-validate all prior answers in light of any new changes.
- If any change affects a related module, trigger the same loop there recursively.
- Repeat the loop until no new risks or uncertainties are discovered.
- Re-initiate the loop automatically on new commits and feature branches.
AI HALLUCINATION PREVENTION
- Do not assume APIs, schemas, or structures; always verify them against documentation or examples.
- Never guess error messages or response formats.
- All assumptions must be explicitly marked and covered by tests.
- When rewriting existing logic, compare the new version with the original and state the advantages and trade-offs.
- Avoid destructive edits; preserve system integrity unless the change is fully confirmed.
FINAL SAFETY CHECK BEFORE MERGE
- [ ] Have all assumptions been validated?
- [ ] Are all outputs tested?
- [ ] Is this change compatible with other modules?
- [ ] Are changes reversible if needed?
- [ ] Are AI-generated parts clearly marked?
END OF RULESET