@jesse-naiman
Expert in modern JavaScript development, focusing on ES6+ features, clean code practices, and efficient testing strategies.
- Retrieves context7 documentation, or GitHub source code as a fallback. - Synchronizes data with Mem0 using the MCP tool. - Can search and sync individual knowledge entries when requested.
Responsible for high-level planning and updating PLANNING.md via File I/O Assistant, using context from mem0.
Provides domain knowledge on RPGs/TTRPGs
Central coordinator. Initiates doc sync, decomposes tasks, delegates to Continue/AutoGen agents, retrieves context from mem0, synthesizes results.
Writes, modifies, and debugs code based on assigned tasks stored in Obsidian. Syncs context from mem0, using the MCP tool to write task changes.
Writes, modifies, and debugs code based on an assigned GitHub issue. Verifies the results of changes before checking them in and closing the GitHub issue.
Every code environment includes a RAG system, usually storing memory in a vector database. You use that environment's memory tool to retrieve or save information. Your job is to maintain the knowledge of how to accurately retrieve and store memories.
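To make the store/retrieve cycle concrete, here is a minimal sketch of a vector-backed memory in plain Python. The class name `VectorMemory`, the `save`/`search` methods, and the toy character-frequency "embedding" are all illustrative assumptions, not any real memory tool's API; a real system would use a learned embedding model and a proper vector database.

```python
import math

class VectorMemory:
    """Toy in-memory vector store: save texts with embeddings, retrieve by cosine similarity."""

    def __init__(self, embed):
        self.embed = embed    # function: str -> list[float]
        self.entries = []     # list of (vector, text) pairs

    def save(self, text):
        self.entries.append((self.embed(text), text))

    def search(self, query, top_k=1):
        q = self.embed(query)

        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm if norm else 0.0

        ranked = sorted(self.entries, key=lambda e: cosine(q, e[0]), reverse=True)
        return [text for _, text in ranked[:top_k]]

# A trivial stand-in "embedding": letter-frequency vector (a real system would use a model).
def char_embed(text):
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

memory = VectorMemory(char_embed)
memory.save("mem0 stores project context")
memory.save("the weather is sunny today")
print(memory.search("project context", top_k=1))  # -> ['mem0 stores project context']
```

The key design point is that retrieval is approximate: the query is embedded into the same vector space as the stored memories, and the closest entries are returned.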
Handles basic file reading, writing, and listing operations within the project workspace using the File System MCP tool.
Performs specific searches or retrieves entries directly from the mem0 memory store using the Mem0 MCP tool.
Helps you build AutoGen agents. We recommend an LLM with strong tool-calling abilities and prompt adherence for the best experience.
Rules to guide the creation of AutoGen components: agents, tools, workflows, etc.
AutoGen core offers an easy way to quickly build event-driven, distributed, scalable, resilient AI agent systems. Agents are developed using the Actor model. You can build and run your agent system locally and easily move it to a distributed system in the cloud when you are ready.
An agent is a software entity that communicates via messages, maintains its own state, and performs actions in response to received messages or changes in its state. These actions may modify the agent’s state and produce external effects, such as updating message logs, sending new messages, executing code, or making API calls.
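This definition can be sketched in a few lines of plain Python (not the AutoGen API): an object that reacts to messages, mutates its own state, and produces an external effect. The class name `CounterAgent` and its message vocabulary are illustrative assumptions.

```python
class CounterAgent:
    """Minimal agent: reacts to messages, maintains state, produces an external effect (a log)."""

    def __init__(self):
        self.count = 0   # internal state
        self.log = []    # external effect: an append-only message log

    def on_message(self, message):
        if message == "increment":
            self.count += 1                          # action modifies the agent's state
            self.log.append(f"count={self.count}")   # ...and produces an external effect
            return self.count
        return None      # unrecognized messages are ignored

agent = CounterAgent()
agent.on_message("increment")
agent.on_message("increment")
print(agent.count)  # 2
print(agent.log)    # ['count=1', 'count=2']
```

In AutoGen core the same shape appears with asynchronous message handlers and a runtime that routes messages, but the essence is identical: state plus message-driven behavior.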
AutoGen provides a suite of built-in model clients for using the ChatCompletion API. All model clients implement the ChatCompletionClient protocol class, and several built-in implementations are currently supported.
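The value of a shared protocol class is that any conforming client is interchangeable. The sketch below uses Python's `typing.Protocol` to show the idea; the `ChatClient` name, `create` signature, and `EchoClient` stand-in are illustrative assumptions, not the real `ChatCompletionClient` interface.

```python
from typing import Protocol

class ChatClient(Protocol):
    """Sketch of a chat-completion client protocol (not the actual ChatCompletionClient API)."""
    def create(self, messages: list[dict]) -> str: ...

class EchoClient:
    """A stand-in implementation that echoes the last user message."""
    def create(self, messages: list[dict]) -> str:
        return "echo: " + messages[-1]["content"]

def run(client: ChatClient) -> str:
    # Structural typing: any object with a matching `create` method satisfies the protocol,
    # so callers never depend on a concrete model provider.
    return client.create([{"role": "user", "content": "hello"}])

print(run(EchoClient()))  # echo: hello
```

Swapping model providers then means swapping the client object, with no changes to agent code.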
A model context supports storage and retrieval of Chat Completion messages. It is always used together with a model client to generate LLM-based responses. For example, BufferedChatCompletionContext is a most-recent-used (MRU) context that stores the most recent buffer_size number of messages. This is useful to avoid context overflow in many LLMs. Let’s see an example that uses BufferedChatCompletionContext.
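As a conceptual sketch of that buffering behavior in plain Python (mimicking what BufferedChatCompletionContext does, not its actual API), a bounded deque is enough: once the buffer is full, the oldest message is silently dropped.

```python
from collections import deque

class BufferedContext:
    """Sketch of a buffered chat context: keeps only the buffer_size most recent messages."""

    def __init__(self, buffer_size):
        self.messages = deque(maxlen=buffer_size)

    def add_message(self, message):
        self.messages.append(message)  # oldest message is evicted once the buffer is full

    def get_messages(self):
        return list(self.messages)

ctx = BufferedContext(buffer_size=2)
ctx.add_message({"role": "user", "content": "first"})
ctx.add_message({"role": "assistant", "content": "second"})
ctx.add_message({"role": "user", "content": "third"})
print(ctx.get_messages())  # only the two most recent messages survive
```

The real class additionally integrates with the model client's message types, but the eviction policy shown here is the core of how context overflow is avoided.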
AutoGen is a framework for creating multi-agent AI applications that can act autonomously or work alongside humans.
An agent in AutoGen core can react to, send, and publish messages, and messages are the only means through which agents can communicate with each other.
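The publish side of this model can be sketched with a toy publish/subscribe bus in plain Python (not the AutoGen runtime): agents never call each other directly; one publishes to a topic, and subscribers react. The `MessageBus` name and its methods are illustrative assumptions.

```python
class MessageBus:
    """Toy publish/subscribe bus: agents communicate only via messages, never direct calls."""

    def __init__(self):
        self.subscribers = {}  # topic -> list of handler callables

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        # Every handler subscribed to the topic reacts to the published message.
        for handler in self.subscribers.get(topic, []):
            handler(message)

received = []
bus = MessageBus()
bus.subscribe("tasks", lambda msg: received.append(msg))  # agent A reacts to "tasks"
bus.publish("tasks", "review PR")                         # agent B publishes a task
print(received)  # ['review PR']
```

Because the bus is the only coupling between publisher and subscriber, agents can be added, removed, or distributed without changing one another.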
A Workbench provides a collection of tools that share state and resources. Unlike Tool, which provides an interface to a single tool, a workbench provides an interface for calling different tools and receiving results of the same types.
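A minimal sketch of that idea in plain Python (not the AutoGen Workbench API): one object fronts several tools, keeps state shared across calls, and normalizes every result to a single type. The `Workbench` internals here are illustrative assumptions.

```python
class Workbench:
    """Sketch of a workbench: one interface over several tools, with shared state
    and a uniform result type (here, plain strings)."""

    def __init__(self):
        self.calls = 0  # state shared across all tool invocations
        self.tools = {
            "add": lambda a, b: a + b,
            "upper": lambda s: s.upper(),
        }

    def call_tool(self, name, **kwargs):
        self.calls += 1
        result = self.tools[name](**kwargs)
        return str(result)  # every tool's result comes back as the same type

wb = Workbench()
print(wb.call_tool("add", a=2, b=3))  # '5'
print(wb.call_tool("upper", s="hi"))  # 'HI'
print(wb.calls)                       # 2
```

The uniform return type is what lets an agent treat heterogeneous tools through one calling convention.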
Tools are code that can be executed by an agent to perform actions. A tool can be a simple function such as a calculator, or an API call to a third-party service such as stock price lookup or weather forecast. In the context of AI agents, tools are designed to be executed by agents in response to model-generated function calls. AutoGen provides the autogen_core.tools module with a suite of built-in tools and utilities for creating and running custom tools.
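The calculator case above can be sketched as follows in plain Python (not `autogen_core.tools`): a function serves as the tool, and a dispatch table plays the role of the agent invoking it in response to a model-generated function call. The call-payload shape shown in the comment is an illustrative assumption.

```python
def calculator(expression: str) -> str:
    """A simple calculator tool: evaluates a basic arithmetic expression."""
    # Restrict eval to arithmetic characters only; a real tool would parse properly.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported characters in expression")
    return str(eval(expression))

# The agent invokes the tool in response to a model-generated function call,
# assumed here to arrive as {"name": ..., "arguments": {...}}.
tools = {"calculator": calculator}
call = {"name": "calculator", "arguments": {"expression": "2 * (3 + 4)"}}
result = tools[call["name"]](**call["arguments"])
print(result)  # 14
```

In AutoGen, `autogen_core.tools` wraps functions like this with schemas so the model knows each tool's name, parameters, and types, but the execution path is the same: model emits a call, agent runs the matching tool, result flows back.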