praxent/langgraph-max (public)
Published on 6/12/2025

LangGraph

Models
Claude 3.7 Sonnet (Anthropic) · 200k input · 8,192 output tokens
Claude 3.5 Sonnet (Anthropic) · 200k input · 8,192 output tokens
Mercury Coder Small (Inception)
OpenAI GPT-4.1 mini (OpenAI) · 1,047k input · 32,768 output tokens
OpenAI GPT-4.1 (OpenAI) · 1,047k input · 32,768 output tokens
Claude 3.7 Sonnet - Bedrock (Bedrock)
Claude 3.5 Sonnet - Bedrock (Bedrock)
DeepSeek Coder 6.7B (Ollama)
Claude 4 Sonnet (Anthropic) · 200k input · 64k output tokens
Claude 4 Sonnet - Bedrock (Bedrock) · 200k input · 8.5k output tokens
devstral - remote (Ollama)
Morph Fast Apply (OpenAI)

Rules

Design queries to optimize performance and scalability when handling complex data relationships.
Ensure node and edge definitions strictly comply with the project schema.
Parameterize queries to prevent injection risks and promote reusability.
Include comments to clarify complex query logic and optimization strategies.
Utilize LangGraph's debugging and monitoring tools to maintain graph integrity during development.
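The node and edge rules above are easiest to see in code. Below is a minimal, hypothetical sketch of a LangGraph StateGraph whose nodes and edges are declared against an explicit state schema; the state fields and node names are placeholders for illustration, not part of this project's schema.

from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class QueryState(TypedDict):
    """Hypothetical state schema; every node reads and writes only these fields."""
    question: str
    documents: list[str]
    answer: str


def retrieve(state: QueryState) -> dict:
    # Placeholder retrieval step; returns a partial state update.
    return {"documents": [f"doc matching: {state['question']}"]}


def generate(state: QueryState) -> dict:
    # Placeholder generation step built from the retrieved documents.
    return {"answer": f"Answer based on {len(state['documents'])} document(s)"}


# Nodes and edges are declared explicitly against the typed state schema.
builder = StateGraph(QueryState)
builder.add_node("retrieve", retrieve)
builder.add_node("generate", generate)
builder.add_edge(START, "retrieve")
builder.add_edge("retrieve", "generate")
builder.add_edge("generate", END)

graph = builder.compile()
result = graph.invoke({"question": "What does this assistant configure?", "documents": [], "answer": ""})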
You are an expert AI engineer and Python developer building with LanceDB, a multi-modal database for AI.
  - Use dataframes to store and manipulate data.
  - Always explicitly define schemas with PyArrow when creating tables.
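As a hedged illustration of the LanceDB rules above, the following sketch defines an explicit PyArrow schema before creating a table and then adds rows from a dataframe; the table name, field names, and embedding dimension are assumptions made for this example.

import lancedb
import pandas as pd
import pyarrow as pa

# Explicit PyArrow schema; column names and the vector dimension are placeholders.
schema = pa.schema(
    [
        pa.field("id", pa.string()),
        pa.field("text", pa.string()),
        pa.field("vector", pa.list_(pa.float32(), 384)),  # fixed-size embedding column
    ]
)

db = lancedb.connect("./lancedb")  # local database directory (assumed path)
table = db.create_table("documents", schema=schema, mode="overwrite")  # schema declared up front

# Rows are added from a dataframe, matching the "use dataframes" rule.
df = pd.DataFrame([{"id": "1", "text": "hello", "vector": [0.0] * 384}])
table.add(df)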
Docs

LangGraph Docs: https://langchain-ai.github.io/langgraph/
LangGraph: https://langchain-ai.github.io/langgraph/llms.txt
LanceDB Open Source Docs: https://lancedb.github.io/lancedb/

Prompts

My prompt
Sequential Thinking Activation
<!-- Sequential Thinking Workflow -->
<assistant>
    <toolbox>
        <mcp_server name="sequential-thinking"
                        role="workflow_controller"
                        execution="sequential-thinking"
                        description="Initiate the sequential-thinking MCP server">
            <tool name="STEP" value="1">
                <description>Gather context by reading the relevant file(s).</description>
                <arguments>
                    <argument name="instructions" value="Seek proper context in the codebase to understand what is required. If you are unsure, ask the user." type="string" required="true"/>
                    <argument name="should_read_entire_file" type="boolean" default="true" required="false"/>
                </arguments>
                <result type="string" description="Context gathered from the file(s). Output can be passed to subsequent steps."/>
            </tool>
            <tool name="STEP" value="2">
                <description>Generate code changes based on the gathered context (from STEP 1).</description>
                <arguments>
                    <argument name="instructions" value="Generate the proper changes/corrections based on context from STEP 1." type="string" required="true"/>
                    <argument name="code_edit" type="object" required="true" description="Output: The proposed code modifications."/>
                </arguments>
                <result type="object" description="The generated code changes (code_edit object). Output can be passed to subsequent steps."/>
            </tool>
            <tool name="STEP" value="3">
                <description>Review the generated changes (from STEP 2) and suggest improvements.</description>
                <arguments>
                    <argument name="instructions" type="string" value="Review the changes applied in STEP 2 for gaps, correctness, and adherence to guidelines. Suggest improvements or identify any additional steps needed." required="true"/>
                </arguments>
                <result type="string" description="Review feedback, suggested improvements, or confirmation of completion. Final output of the workflow."/>
            </tool>
        </mcp_server>
    </toolbox>
</assistant>
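Purely as an illustration of the data flow this prompt enforces (STEP 1 context string → STEP 2 code_edit object → STEP 3 review string), here is a hypothetical Python skeleton; the function names are invented for this sketch and do not correspond to real MCP tools or Continue APIs.

def step_1_gather_context(instructions: str, should_read_entire_file: bool = True) -> str:
    """STEP 1: gather context by reading the relevant file(s)."""
    return f"<context gathered per: {instructions}>"


def step_2_generate_changes(instructions: str, context: str) -> dict:
    """STEP 2: generate a code_edit object from the STEP 1 context."""
    return {"code_edit": f"<changes derived from {context!r}>"}


def step_3_review(instructions: str, code_edit: dict) -> str:
    """STEP 3: review the STEP 2 changes and return feedback or confirmation."""
    return f"Reviewed {code_edit['code_edit']!r}: no gaps found."


# The workflow is strictly sequential: each step consumes the previous step's output.
context = step_1_gather_context("Seek proper context in the codebase to understand what is required.")
code_edit = step_2_generate_changes("Generate the proper changes/corrections based on context from STEP 1.", context)
review = step_3_review("Review the changes applied in STEP 2 for gaps, correctness, and adherence to guidelines.", code_edit)
print(review)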

Context

@diff: Reference all of the changes you've made to your current branch
@codebase: Reference the most relevant snippets from your codebase
@url: Reference the markdown-converted contents of a given URL
@folder: Uses the same retrieval mechanism as @codebase, but only on a single folder
@terminal: Reference the last command you ran in your IDE's terminal and its output
@code: Reference specific functions or classes from throughout your project
@file: Reference any file in your current workspace

No Data configured

MCP Servers


taskmaster-ai

npx -y --package=task-master-ai task-master-ai

Brave Search

npx -y @modelcontextprotocol/server-brave-search

Memory

npx -y @modelcontextprotocol/server-memory

server-sequential-thinking

npx -y @modelcontextprotocol/server-sequential-thinking
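Any of the stdio servers above can be launched and inspected programmatically. The sketch below uses the official mcp Python SDK to start the sequential-thinking server and list its tools; the "sequentialthinking" tool name and its arguments are assumptions based on the reference server and should be confirmed against the list_tools() output.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the server over stdio with the same command the assistant config uses.
server_params = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-sequential-thinking"],
)


async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the server actually exposes before calling anything.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # "sequentialthinking" and its arguments are assumptions based on the
            # reference server; verify against the list_tools() output above.
            result = await session.call_tool(
                "sequentialthinking",
                {
                    "thought": "Outline the steps needed for this change.",
                    "thoughtNumber": 1,
                    "totalThoughts": 3,
                    "nextThoughtNeeded": True,
                },
            )
            print(result)


if __name__ == "__main__":
    asyncio.run(main())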