GenAI Evaluation Framework

Prompt for assistants to develop comprehensive evaluation strategies for GenAI applications

Prompts
Comprehensive Evaluation System
Structured approach to evaluating generative AI systems
Design a GenAI evaluation framework that includes:

Evaluation Dimensions:
- Accuracy and factuality
- Relevance to query
- Completeness of response
- Safety and bias
- Stylistic appropriateness
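
By way of illustration (the dimension names, 0-1 scale, and weights below are assumptions to adapt, not a standard), per-response scores can be recorded in a structured record and rolled up into a weighted total so results stay comparable across runs:

```python
from dataclasses import dataclass

# Illustrative weights; tune per application and domain.
DEFAULT_WEIGHTS = {
    "accuracy": 0.30,
    "relevance": 0.25,
    "completeness": 0.20,
    "safety": 0.15,
    "style": 0.10,
}

@dataclass
class DimensionScores:
    """Per-response scores on a 0.0-1.0 scale, one field per dimension."""
    accuracy: float
    relevance: float
    completeness: float
    safety: float
    style: float

    def weighted_total(self, weights=None) -> float:
        w = weights or DEFAULT_WEIGHTS
        return sum(getattr(self, name) * weight for name, weight in w.items())

scores = DimensionScores(accuracy=0.9, relevance=0.8, completeness=0.7,
                         safety=1.0, style=0.85)
print(f"overall: {scores.weighted_total():.3f}")
```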

Methodology:
- Automated evaluation techniques
- Human evaluation protocols
- Comparative benchmarking
- Red-teaming approach
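
For the comparative-benchmarking piece, a minimal sketch: `judge` here is a stand-in for whichever comparator the framework settles on (human annotators, an LLM judge, or an automated metric), and win rate is computed with ties counting half:

```python
from typing import Callable, Sequence

def pairwise_win_rate(
    prompts: Sequence[str],
    system_a: Callable[[str], str],
    system_b: Callable[[str], str],
    judge: Callable[[str, str, str], int],  # 1 if A wins, -1 if B wins, 0 for tie
) -> float:
    """Fraction of prompts on which system A beats system B (ties count half)."""
    wins = ties = 0
    for prompt in prompts:
        verdict = judge(prompt, system_a(prompt), system_b(prompt))
        if verdict > 0:
            wins += 1
        elif verdict == 0:
            ties += 1
    return (wins + 0.5 * ties) / len(prompts)
```

The same harness works for red teaming if `prompts` is an adversarial suite and `judge` checks for safety violations rather than quality preference.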

Metrics Selection:
- ROUGE, BLEU, BERTScore implementation
- Custom domain-specific metrics
- User satisfaction indicators
- Behavioral signals (e.g., retries, edits, abandonment)
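
The standard overlap and embedding metrics can be computed with the widely used rouge-score, sacrebleu, and bert-score packages; the reference and candidate texts below are just illustrative:

```python
# pip install rouge-score sacrebleu bert-score
from rouge_score import rouge_scorer
import sacrebleu
from bert_score import score as bert_score

reference = "The cat sat on the mat."
candidate = "A cat was sitting on the mat."

# ROUGE-1 and ROUGE-L F-measures (n-gram and longest-common-subsequence overlap).
rouge = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge_scores = rouge.score(reference, candidate)

# Sentence-level BLEU via sacrebleu (0-100 scale).
bleu = sacrebleu.sentence_bleu(candidate, [reference]).score

# BERTScore F1: embedding-based semantic similarity (downloads a model on first use).
_, _, f1 = bert_score([candidate], [reference], lang="en")

print(rouge_scores["rougeL"].fmeasure, bleu, f1.item())
```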

Testing Framework:
- Test case generation
- Ground truth dataset creation
- Regression testing suite
- Continuous evaluation pipeline
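
A regression suite might look like the pytest sketch below, assuming a hypothetical `generate` entry point and a JSONL ground-truth file; both are placeholders for the user's own system and data:

```python
# test_regression.py -- regression-suite sketch; `generate` and the
# dataset path are placeholders, not part of any real system.
import json
import pytest
from rouge_score import rouge_scorer

def generate(prompt: str) -> str:  # placeholder for the system under test
    raise NotImplementedError

def load_cases(path="ground_truth.jsonl"):
    with open(path) as f:
        return [json.loads(line) for line in f]

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

@pytest.mark.parametrize("case", load_cases())
def test_meets_quality_floor(case):
    output = generate(case["prompt"])
    f = scorer.score(case["reference"], output)["rougeL"].fmeasure
    assert f >= case.get("min_rouge_l", 0.4), f"regression on: {case['prompt']!r}"
```

Running this in CI turns every ground-truth example into a regression check, so quality floors are enforced on each change.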

Analysis Workflow:
- Error categorization
- Failure mode detection
- Performance visualization
- Improvement prioritization
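
One way to connect error categorization to improvement prioritization is to rank failure categories by frequency times severity; the category names and severity weights below are illustrative assumptions:

```python
from collections import Counter

# Illustrative severity weights per failure category.
SEVERITY = {"hallucination": 3, "refusal": 2, "off_topic": 2, "formatting": 1}

def prioritize(failures: list[str]) -> list[tuple[str, int]]:
    """Rank failure categories by count x severity, so the most common
    and most costly failure modes are addressed first."""
    counts = Counter(failures)
    ranked = [(cat, n * SEVERITY.get(cat, 1)) for cat, n in counts.items()]
    return sorted(ranked, key=lambda kv: kv[1], reverse=True)

print(prioritize(["hallucination", "formatting", "hallucination", "off_topic"]))
```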

Integration Strategy:
- CI/CD pipeline integration
- Model deployment gating
- Monitoring dashboards
- Feedback loops
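
A deployment gate can be a small script that CI runs after the evaluation step, failing the build when a candidate's metrics regress past a tolerance; the file names and tolerance here are assumptions to adapt:

```python
# gate.py -- deployment-gate sketch for CI; metric names, file paths,
# and the tolerance are assumptions, not a fixed convention.
import json
import sys

TOLERANCE = 0.02  # allow small metric fluctuations; tune per metric

def main(baseline_path="baseline_metrics.json", candidate_path="candidate_metrics.json"):
    with open(baseline_path) as f:
        baseline = json.load(f)
    with open(candidate_path) as f:
        candidate = json.load(f)
    failures = [
        name for name, base in baseline.items()
        if candidate.get(name, 0.0) < base - TOLERANCE
    ]
    if failures:
        print(f"GATE FAILED, regressed metrics: {failures}")
        sys.exit(1)  # nonzero exit blocks the deploy step
    print("Gate passed.")

if __name__ == "__main__":
    main()
```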

The user's GenAI system has the following characteristics: