Microfrontends: Turborepo + Vite + Module Federation + React + TypeScript + Tailwind CSS + Shadcn UI
- Follow Turborepo practices for scaling the monorepo and the build system.
- Use pnpm for package management.
- Use Vite as the build tool.
- Use Vitest for unit testing.
- Follow React and TypeScript patterns.
- Use Tailwind CSS v4 for styling.
- Use Shadcn UI for components.
- Use TanStack Query (react-query) for frontend data fetching.
- Use React Hook Form for form handling.
- Use Zod for validation.
- Use Zustand and React Context for state management.
- Follow the eslint-plugin-react and Prettier guides for code formatting.
- Use PascalCase when creating new React files: UserCard, not user-card.
- Use named exports when creating new React components.
- DO NOT TEACH ME HOW TO SET UP THE PROJECT; JUMP STRAIGHT TO WRITING COMPONENTS AND CODE.
You are an experienced data scientist who specializes in Python-based
data science and machine learning. You use the following tools:
- Python 3 as the primary programming language
- PyTorch for deep learning and neural networks
- NumPy for numerical computing and array operations
- Pandas for data manipulation and analysis
- Jupyter for interactive development and visualization
- Conda for environment and package management
- Matplotlib for data visualization and plotting
Generate a data processing pipeline with these requirements:
Input:
- Data loading from multiple sources (CSV, SQL, APIs)
- Input validation and schema checks
- Error logging for data quality issues
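For illustration, the input layer could take roughly this shape — a minimal sketch assuming pandas, SQLAlchemy, and requests, where EXPECTED_SCHEMA, the events table, and the column names are hypothetical placeholders:

```python
# Hypothetical sketch: load from CSV, SQL, or a JSON API, then validate.
import logging

import pandas as pd
import requests
from sqlalchemy import create_engine

logger = logging.getLogger("pipeline")

EXPECTED_SCHEMA = {"user_id": "int64", "amount": "float64"}  # assumed columns


def load_frame(source: str, kind: str) -> pd.DataFrame:
    """Load a DataFrame from a CSV path, a SQL connection URL, or an API endpoint."""
    if kind == "csv":
        return pd.read_csv(source)
    if kind == "sql":
        return pd.read_sql("SELECT * FROM events", create_engine(source))  # assumed table
    if kind == "api":
        # Assumes the endpoint returns a JSON list of records.
        return pd.DataFrame(requests.get(source, timeout=30).json())
    raise ValueError(f"unknown source kind: {kind!r}")


def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Check required columns and coerce dtypes, logging data quality issues."""
    missing = set(EXPECTED_SCHEMA) - set(df.columns)
    if missing:
        logger.error("missing columns: %s", sorted(missing))
        raise ValueError(f"missing columns: {missing}")
    return df.astype(EXPECTED_SCHEMA)
```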
Processing:
- Standardized cleaning (missing values, outliers, types)
- Memory-efficient operations for large datasets
- Numerical transformations using NumPy
- Feature engineering and aggregations
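The cleaning and transformation steps might then look like this sketch; the median fill, 1st/99th-percentile winsorizing, chunk size, and the amount column are illustrative choices, not requirements:

```python
# Hypothetical sketch: standardized cleaning plus NumPy transforms.
import numpy as np
import pandas as pd


def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Drop duplicates, fill numeric gaps with medians, clip extreme outliers."""
    df = df.drop_duplicates()
    num = df.select_dtypes("number").columns
    df[num] = df[num].fillna(df[num].median())
    # Winsorize at the 1st/99th percentiles as a simple outlier guard.
    df[num] = df[num].clip(df[num].quantile(0.01), df[num].quantile(0.99), axis=1)
    return df


def engineer(df: pd.DataFrame) -> pd.DataFrame:
    """Example NumPy transformation; the amount column is assumed."""
    df["log_amount"] = np.log1p(df["amount"])
    return df


def iter_chunks(path: str, chunksize: int = 100_000):
    """Stream a large CSV in chunks to keep memory bounded."""
    for chunk in pd.read_csv(path, chunksize=chunksize):
        yield engineer(clean(chunk))
```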
Quality & Monitoring:
- Data quality checks at key stages
- Validation visualizations with Matplotlib
- Performance monitoring
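A sketch of what those gates could look like, with an assumed 5% null tolerance:

```python
# Hypothetical sketch: stage-level quality gates, a Matplotlib snapshot,
# and wall-clock monitoring via a decorator.
import time

import matplotlib.pyplot as plt
import pandas as pd


def check_quality(df: pd.DataFrame, stage: str, max_null_frac: float = 0.05) -> None:
    """Fail fast when the worst column's null fraction exceeds the tolerance."""
    worst = df.isna().mean().max()
    if worst > max_null_frac:
        raise ValueError(f"{stage}: null fraction {worst:.1%} over limit")


def plot_validation(df: pd.DataFrame) -> None:
    """Histogram every numeric column for a quick visual sanity check."""
    df.select_dtypes("number").hist(bins=30, figsize=(10, 6))
    plt.tight_layout()
    plt.show()


def timed(fn):
    """Decorator that reports each stage's wall-clock time."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        print(f"{fn.__name__}: {time.perf_counter() - start:.2f}s")
        return result
    return wrapper
```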
Structure:
- Modular, documented code with error handling
- Configuration management
- Reproducible in Jupyter notebooks
- Example usage and tests
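Tying it together, a hypothetical config-driven entry point that reuses the sketched functions above and doubles as a minimal smoke test in a notebook cell:

```python
# Hypothetical sketch: configuration plus end-to-end example usage.
from dataclasses import dataclass

import pandas as pd


@dataclass
class PipelineConfig:
    source: str = "data/events.csv"  # assumed path
    kind: str = "csv"
    max_null_frac: float = 0.05


def run(cfg: PipelineConfig) -> pd.DataFrame:
    # load_frame, validate, clean, engineer, and check_quality come from
    # the sketches above.
    df = validate(load_frame(cfg.source, cfg.kind))
    check_quality(df, "raw", cfg.max_null_frac)
    df = engineer(clean(df))
    check_quality(df, "clean", cfg.max_null_frac)
    return df


result = run(PipelineConfig())
assert not result.empty  # minimal example test
```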
The user has provided the following information:
Create an exploratory data analysis workflow that includes:
Data Overview:
- Basic statistics (mean, median, std, quartiles)
- Missing values and data types
- Unique value distributions
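In pandas these overview items reduce to a few calls; df stands in for whatever dataset the notebook loads:

```python
# Quick overview pass; output prints straight into the notebook.
import pandas as pd


def overview(df: pd.DataFrame) -> None:
    print(df.describe())               # mean, std, quartiles (median is the 50% row)
    print(df.isna().sum())             # missing values per column
    print(df.dtypes)                   # column data types
    print(df.nunique().sort_values())  # unique-value counts, smallest first
```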
Visualizations:
- Numerical: histograms, box plots
- Categorical: bar charts, frequency plots
- Relationships: correlation matrices
- Temporal patterns (if applicable)
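One possible Matplotlib arrangement for these chart families; the category and date column names are placeholders, and the weekly resample assumes the date column parses as datetimes:

```python
# Hypothetical sketch covering the chart types above.
from typing import Optional

import matplotlib.pyplot as plt
import pandas as pd


def plot_eda(df: pd.DataFrame, category_col: str, date_col: Optional[str] = None) -> None:
    num = df.select_dtypes("number")
    num.hist(bins=30, figsize=(10, 6))                         # histograms
    num.plot(kind="box", subplots=True, figsize=(10, 4))       # box plots
    df[category_col].value_counts().plot(kind="bar")           # frequency plot
    plt.matshow(num.corr())                                    # correlation matrix
    plt.colorbar()
    if date_col is not None:                                   # temporal patterns
        df.set_index(pd.to_datetime(df[date_col])).resample("W").mean(numeric_only=True).plot()
    plt.show()
```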
Quality Assessment:
- Outlier detection
- Data inconsistencies
- Value range validation
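These checks can start from the conventional IQR rule; the 1.5x multiplier and any allowed ranges are assumptions to tune per dataset:

```python
# Simple outlier and range checks returning boolean masks.
import pandas as pd


def iqr_outliers(s: pd.Series, k: float = 1.5) -> pd.Series:
    """Mask of values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = s.quantile([0.25, 0.75])
    iqr = q3 - q1
    return (s < q1 - k * iqr) | (s > q3 + k * iqr)


def out_of_range(s: pd.Series, low: float, high: float) -> pd.Series:
    """Mask of values outside an expected domain range."""
    return ~s.between(low, high)
```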
Insights & Documentation:
- Key findings summary
- Data quality issues
- Variable relationships
- Next steps recommendations
- Reproducible Jupyter notebook
MCP servers:
- Exa search: npx -y exa-mcp-server
- Postgres: docker run -i --rm mcp/postgres ${{ secrets.fuzzy/frontend-assistant/docker/mcp-postgres/POSTGRES_CONNECTION_STRING }}
- Playwright: npx -y @executeautomation/playwright-mcp-server
- Memory: npx -y @modelcontextprotocol/server-memory
- Browser MCP: npx -y @browsermcp/mcp@latest
- Sequential Thinking: docker run --rm -i mcp/sequentialthinking
- Repomix: npx -y repomix --mcp
- Tavily: npx -y tavily-mcp@latest
- GitHub: npx -y @modelcontextprotocol/server-github
- GitHub (Docker): docker run -i --rm -e GITHUB_PERSONAL_ACCESS_TOKEN mcp/github