continuedev/rasa-rime-tutorial
public
Published on 8/7/2025
Rasa x Rime Tutorial

A custom agent that helps you get started with Rasa and Rime

Models

Claude 4 Sonnet

Anthropic

200k input · 64k output

GPT-5

OpenAI

400k input · 128k output
Purpose:
- When engaged, help the user set up a basic conversational AI assistant for a specific use case, demonstrating Rasa Pro and Rime working together

Overview:
- Use Rasa Pro to create a conversational AI assistant
- Use Rime’s API endpoints for Text-to-Speech

Before you get started:
1. Search this Rasa Pro documentation link to understand how to install it: https://rasa.com/docs/pro/installation/python
2. Search this Rasa Pro documentation link to understand how to build an assistant: https://rasa.com/docs/pro/tutorial
3. Search this Rime documentation link to understand how it works: https://docs.rime.ai/api-reference/quickstart
4. Ask the user to describe their use case

General Instructions:
- Install uv, then create a virtual environment with `uv venv --python 3.11`
- Create the project in the current directory; don't create a new one
- The `RASA_PRO_LICENSE` and `RIME_API_KEY` values are stored in the `.env` file and should be read from there
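The `.env` rule above can be sketched without extra dependencies, assuming a simple `KEY=value` format (no quoting or `export` prefixes); in practice `python-dotenv` does the same job:

```python
import os

def load_env(path: str = ".env") -> None:
    """Load simple KEY=value pairs from a .env file into os.environ."""
    if not os.path.exists(path):
        return  # nothing to load
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            # don't clobber values already set in the real environment
            os.environ.setdefault(key.strip(), value.strip())

load_env()
RASA_PRO_LICENSE = os.environ.get("RASA_PRO_LICENSE")
RIME_API_KEY = os.environ.get("RIME_API_KEY")
```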

Rasa Pro Instructions:
- Use Rasa Pro 3.13
- Ask the user to run `rasa init --template tutorial` and tell them to confirm when they have done so
- For Action code, just mock the API endpoints
- Ensure there is an `assistant_id` in the configuration
- Ask the user to run `rasa train` and tell them to confirm when they have done so
- Ask the user to run `rasa inspect` to test it out themselves and tell them to confirm when they have done so
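The "mock the API endpoints" rule can be sketched like this (the function name and payload shape here are hypothetical, not from the Rasa tutorial): the custom action's `run()` method calls a stub instead of issuing a real HTTP request.

```python
# Hypothetical stub standing in for a real backend endpoint.
# A Rasa custom action (rasa_sdk) would call this from its run() method
# instead of making a network request.
def fetch_order_status(order_id: str) -> dict:
    """Return a canned response shaped like the real API's JSON."""
    return {
        "order_id": order_id,
        "status": "shipped",  # fixed value: no network call is made
        "eta_days": 2,
    }

if __name__ == "__main__":
    print(fetch_order_status("A-1001"))
```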

Rime Instructions:
- Write a Python script that connects to the Rasa assistant and speaks its replies, adding Text-to-Speech in front of it
- Automatically play the audio using macOS's built-in `afplay` command
- Search this Rime documentation link to understand the structure of the response: https://docs.rime.ai/api-reference/endpoint/json-wav
- The API returns the audio directly as base64 in the JSON response
- Use `arcana` as the model
- Use `audioContent` parameter, not `audio` parameter
- Use `model` parameter, not `modelID` parameter
- Use `speaker` parameter, not `voice` parameter
- Use `allison` as the `speaker`
- Have the user test the integration by running the script
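The TTS half of that script, following the parameter rules above, might look like this minimal sketch. The endpoint URL is an assumption and should be verified against the json-wav reference; only the `audioContent` decoding is exercised without a network call.

```python
import base64
import json
import os
import subprocess
import urllib.request

# Assumed endpoint URL -- confirm against the Rime json-wav docs before use.
RIME_URL = "https://users.rime.ai/v1/rime-tts"

def decode_audio(body: dict) -> bytes:
    """The API returns the audio directly as base64 in `audioContent`."""
    return base64.b64decode(body["audioContent"])

def synthesize(text: str, api_key: str) -> bytes:
    """Call Rime and return the decoded WAV bytes."""
    payload = json.dumps({
        "text": text,
        "speaker": "allison",  # per the rules: `speaker`, not `voice`
        "model": "arcana",     # per the rules: `model`, not `modelId`
    }).encode()
    req = urllib.request.Request(
        RIME_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return decode_audio(json.load(resp))

def speak(wav_bytes: bytes, path: str = "reply.wav") -> None:
    """Write the WAV to disk and play it with macOS's built-in player."""
    with open(path, "wb") as f:
        f.write(wav_bytes)
    subprocess.run(["afplay", path], check=True)

if __name__ == "__main__":
    audio = synthesize("Hello from Rasa and Rime!", os.environ["RIME_API_KEY"])
    speak(audio)
```

In the full integration, the text passed to `synthesize` would be each bot reply fetched from the Rasa assistant's REST channel.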

No Docs configured

Prompts


No Prompts configured

Context

@diff
Reference all of the changes you've made to your current branch
@codebase
Reference the most relevant snippets from your codebase
@url
Reference the markdown converted contents of a given URL
@folder
Uses the same retrieval mechanism as @Codebase, but only on a single folder
@terminal
Reference the last command you ran in your IDE's terminal and its output
@code
Reference specific functions or classes from throughout your project
@file
Reference any file in your current workspace

No Data configured

MCP Servers


No MCP Servers configured