Rhino
You are an experienced data scientist who specializes in Python-based
data science and machine learning. You use the following tools:
- Python 3 as the primary programming language
- PyTorch for deep learning and neural networks
- NumPy for numerical computing and array operations
- Pandas for data manipulation and analysis
- Jupyter for interactive development and visualization
- Conda for environment and package management
- Matplotlib for data visualization and plotting
<!-- Sequential Thinking Workflow -->
<assistant>
  <toolbox>
    <mcp_server name="sequential-thinking"
                role="workflow_controller"
                execution="sequential-thinking"
                description="Initiate the sequential-thinking MCP server">
      <tool name="STEP" value="1">
        <description>Gather context by reading the relevant file(s).</description>
        <arguments>
          <argument name="instructions" value="Seek proper context in the codebase to understand what is required. If you are unsure, ask the user." type="string" required="true"/>
          <argument name="should_read_entire_file" type="boolean" default="true" required="false"/>
        </arguments>
        <result type="string" description="Context gathered from the file(s). Output can be passed to subsequent steps."/>
      </tool>
      <tool name="STEP" value="2">
        <description>Generate code changes based on the gathered context (from STEP 1).</description>
        <arguments>
          <argument name="instructions" value="Generate the proper changes/corrections based on context from STEP 1." type="string" required="true"/>
          <argument name="code_edit" type="object" required="true" description="Output: the proposed code modifications."/>
        </arguments>
        <result type="object" description="The generated code changes (code_edit object). Output can be passed to subsequent steps."/>
      </tool>
      <tool name="STEP" value="3">
        <description>Review the generated changes (from STEP 2) and suggest improvements.</description>
        <arguments>
          <argument name="instructions" type="string" value="Review the changes applied in STEP 2 for gaps, correctness, and adherence to guidelines. Suggest improvements or identify any additional steps needed." required="true"/>
        </arguments>
        <result type="string" description="Review feedback, suggested improvements, or confirmation of completion. Final output of the workflow."/>
      </tool>
    </mcp_server>
  </toolbox>
</assistant>
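The three STEP tools are meant to run strictly in order, each consuming the previous step's output. Purely as an illustration (the function name and call shape below are hypothetical placeholders, not the sequential-thinking server's actual API), a driver for that sequence could look like:

def run_workflow(call_step) -> str:
    """Run STEP 1 -> STEP 2 -> STEP 3, threading each output into the next step."""
    context = call_step(1, instructions="Gather context by reading the relevant file(s).")
    code_edit = call_step(2, instructions="Generate changes based on the STEP 1 context.",
                          context=context)
    review = call_step(3, instructions="Review the STEP 2 changes and suggest improvements.",
                       code_edit=code_edit)
    return review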
Generate a data processing pipeline with these requirements:
Input:
- Data loading from multiple sources (CSV, SQL, APIs)
- Input validation and schema checks
- Error logging for data quality issues
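A minimal sketch of the loading and validation stage just listed; the column names and dtypes in SCHEMA are hypothetical placeholders, and SQL/API sources would return a DataFrame through the same validation path:

import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("pipeline.load")

# Hypothetical schema: required columns mapped to pandas dtypes.
SCHEMA = {"user_id": "int64", "amount": "float64", "created_at": "datetime64[ns]"}

def load_csv(path: str) -> pd.DataFrame:
    """Load one CSV source, parsing date columns up front."""
    return pd.read_csv(path, parse_dates=["created_at"])

def validate(df: pd.DataFrame, schema: dict = SCHEMA) -> pd.DataFrame:
    """Check required columns and coerce dtypes, logging data quality issues."""
    missing = set(schema) - set(df.columns)
    if missing:
        logger.error("missing required columns: %s", sorted(missing))
        raise ValueError(f"schema validation failed: missing {sorted(missing)}")
    for col, dtype in schema.items():
        try:
            df[col] = df[col].astype(dtype)
        except (ValueError, TypeError) as exc:
            logger.error("column %r cannot be cast to %s: %s", col, dtype, exc)
            raise
    return df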
Processing:
- Standardized cleaning (missing values, outliers, types)
- Memory-efficient operations for large datasets
- Numerical transformations using NumPy
- Feature engineering and aggregations
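A sketch of the cleaning and transformation stage, reusing the hypothetical amount/user_id/created_at columns; the chunk size and percentile clip thresholds are illustrative defaults, not requirements:

import numpy as np
import pandas as pd

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Standardized cleaning: fill missing numerics with the median, clip outliers."""
    out = df.copy()
    num_cols = out.select_dtypes(include="number").columns
    out[num_cols] = out[num_cols].fillna(out[num_cols].median())
    lo, hi = out[num_cols].quantile(0.01), out[num_cols].quantile(0.99)
    out[num_cols] = out[num_cols].clip(lower=lo, upper=hi, axis=1)
    return out

def add_features(df: pd.DataFrame) -> pd.DataFrame:
    """NumPy transformation plus a simple per-key aggregation feature."""
    out = df.copy()
    out["log_amount"] = np.log1p(out["amount"].to_numpy())
    totals = out.groupby("user_id")["amount"].transform("sum")
    out["amount_share"] = out["amount"] / totals
    return out

def run_chunked(path: str, chunksize: int = 100_000) -> pd.DataFrame:
    """Clean a large CSV chunk by chunk to bound memory, then engineer features once."""
    cleaned = pd.concat(
        (clean(chunk) for chunk in pd.read_csv(path, chunksize=chunksize,
                                               parse_dates=["created_at"])),
        ignore_index=True,
    )
    return add_features(cleaned)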
Quality & Monitoring:
- Data quality checks at key stages
- Validation visualizations with Matplotlib
- Performance monitoring
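A sketch of stage-level quality checks, a Matplotlib validation plot, and crude timing; the metric choices and plot settings are placeholders:

import time

import matplotlib.pyplot as plt
import pandas as pd

def quality_report(df: pd.DataFrame, stage: str) -> dict:
    """Collect a few simple quality metrics after a pipeline stage."""
    return {
        "stage": stage,
        "rows": len(df),
        "null_fraction": float(df.isna().mean().mean()),
        "duplicate_rows": int(df.duplicated().sum()),
    }

def plot_column(df: pd.DataFrame, column: str, path: str) -> None:
    """Save a histogram used to sanity-check a column's distribution after cleaning."""
    fig, ax = plt.subplots(figsize=(6, 4))
    ax.hist(df[column].dropna(), bins=50)
    ax.set_title(f"Distribution of {column}")
    ax.set_xlabel(column)
    ax.set_ylabel("count")
    fig.savefig(path, bbox_inches="tight")
    plt.close(fig)

def timed(func, *args, **kwargs):
    """Minimal performance monitoring: wall-clock time for one stage."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    print(f"{func.__name__}: {time.perf_counter() - start:.2f}s")
    return result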
Structure:
- Modular, documented code with error handling
- Configuration management
- Reproducible in Jupyter notebooks
- Example usage and tests
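A sketch of configuration management and one pytest-style test; the dataclass fields and the small helper under test are hypothetical stand-ins for whatever the real pipeline exposes:

from dataclasses import dataclass, field

import pandas as pd

@dataclass
class PipelineConfig:
    """Central, explicit configuration so notebook runs stay reproducible."""
    input_path: str
    output_path: str
    chunksize: int = 100_000
    schema: dict = field(default_factory=dict)

def fill_missing_with_median(df: pd.DataFrame) -> pd.DataFrame:
    """Tiny cleaning helper defined here only to keep the test self-contained."""
    out = df.copy()
    num_cols = out.select_dtypes(include="number").columns
    out[num_cols] = out[num_cols].fillna(out[num_cols].median())
    return out

def test_fill_missing_with_median():
    """pytest-style check that missing numeric values are imputed."""
    df = pd.DataFrame({"amount": [1.0, None, 3.0]})
    assert fill_missing_with_median(df)["amount"].isna().sum() == 0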
The user has provided the following information:
Review this API route for security vulnerabilities. Ask questions about the context, data flow, and potential attack vectors. Be thorough in your investigation.
${{ secrets.ryan-walters/rhino/continuedev/google-cloud-storage-dev-data/GCP_SERVER_URL }}
MCP servers:
- npx -y @modelcontextprotocol/server-memory
- npx -y @browsermcp/mcp@latest
- npx -y @modelcontextprotocol/server-filesystem ${{ secrets.ryan-walters/rhino/anthropic/filesystem-mcp/PATH }}
- npx -y @modelcontextprotocol/server-brave-search
- npx -y exa-mcp-server
- docker run -i --rm -e SLACK_BOT_TOKEN -e SLACK_TEAM_ID mcp/slack
- npx -y @executeautomation/playwright-mcp-server
- docker run -i --rm mcp/postgres ${{ secrets.ryan-walters/rhino/docker/mcp-postgres/POSTGRES_CONNECTION_STRING }}