Purpose:
- When engaged, help a user set up a basic conversational AI assistant for a specific use case to show Rasa Pro and Rime working together
Overview:
- Use Rasa Pro to create a conversational AI assistant
- Use Rime’s API endpoints for Text-to-Speech
Before you get started:
- Search this Rasa Pro documentation link to understand how to install it: https://rasa.com/docs/pro/installation/python
- Search this Rasa Pro documentation link to understand how to build an assistant: https://rasa.com/docs/pro/tutorial
- Search this Rime documentation link to understand how it works: https://docs.rime.ai/api-reference/quickstart
- Ask the user to describe their use case
General Instructions:
- Install uv and then create a virtual environment with `uv venv --python 3.11`
- Create the project in the current directory; don't create a new one
- The `RASA_PRO_LICENSE` and `RIME_API_KEY` can be found in the .env file and should be loaded from there (see the sketch below)
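A minimal sketch of reading the keys in Python, assuming `python-dotenv` has been installed into the venv (e.g. `uv pip install python-dotenv`):

```python
# Sketch: load the keys from .env in the current directory.
import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the environment

RASA_PRO_LICENSE = os.environ["RASA_PRO_LICENSE"]
RIME_API_KEY = os.environ["RIME_API_KEY"]
```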
Rasa Pro Instructions:
- Use Rasa Pro 3.13
- Ask the user to run `rasa init --template tutorial` and tell them to confirm when they have done so
- For Action code, just mock the API endpoints rather than calling real services (see the sketch after this list)
- Ensure there is an `assistant_id` in the configuration (`config.yml`; see the excerpt after this list)
- Ask the user to run `rasa train` and tell them to confirm when they have done so
- Ask the user to run `rasa inspect` to test it out themselves and tell them to confirm when they have done so
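For the mocked Action code, a minimal sketch; the action name `action_check_balance` and the hard-coded reply are placeholder assumptions, not part of the tutorial template:

```python
# actions/actions.py: sketch of a custom action that mocks an API endpoint.
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionCheckBalance(Action):
    """Stands in for a real API call; returns canned data instead."""

    def name(self) -> Text:
        return "action_check_balance"  # placeholder name

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # Mocked response: no network call is made.
        dispatcher.utter_message(text="Your current balance is $42.00.")
        return []
```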
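The `assistant_id` sits at the top level of `config.yml`; a sketch of the relevant excerpt (the value `rime-rasa-demo` is an arbitrary placeholder):

```yaml
# config.yml (excerpt): assistant_id is a top-level key;
# any unique identifier works as the value.
recipe: default.v1
assistant_id: rime-rasa-demo
language: en
```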
Rime Instructions:
- Write a Python script that connects to the Rasa assistant and adds Text-to-Speech in front of it
- Automatically play the generated audio using macOS's built-in `afplay` command
- Search this Rime documentation link to understand the structure of the response: https://docs.rime.ai/api-reference/endpoint/json-wav
- The API returns the audio directly as base64 in the JSON response
- Use `arcana` as the model
- Use the `audioContent` parameter, not the `audio` parameter
- Use the `model` parameter, not the `modelID` parameter
- Use the `speaker` parameter, not the `voice` parameter
- Use `allison` as the speaker
- Have the user test the integration by running the script; a sketch of such a script follows below
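A sketch of such a script, under these assumptions: the assistant is being served by `rasa run` with the default REST channel enabled in `credentials.yml`; the Rime endpoint is `https://users.rime.ai/v1/rime-tts` with Bearer authentication (verify both against the Rime quickstart); and `requests` plus `python-dotenv` are installed in the venv. The filename `voice_assistant.py` is illustrative:

```python
# voice_assistant.py: send user messages to the Rasa assistant over its
# REST channel, then speak each reply with Rime TTS and play it with afplay.
import base64
import os
import subprocess
import tempfile

import requests
from dotenv import load_dotenv

load_dotenv()  # provides RIME_API_KEY

# Default Rasa REST channel endpoint (served by `rasa run`).
RASA_URL = "http://localhost:5005/webhooks/rest/webhook"
# Assumed Rime endpoint; verify against the Rime documentation.
RIME_URL = "https://users.rime.ai/v1/rime-tts"


def ask_rasa(message: str) -> list[str]:
    """Send one user message to Rasa and collect the text replies."""
    response = requests.post(RASA_URL, json={"sender": "demo-user", "message": message})
    response.raise_for_status()
    return [event["text"] for event in response.json() if "text" in event]


def speak(text: str) -> None:
    """Synthesize text with Rime and play the resulting audio."""
    response = requests.post(
        RIME_URL,
        headers={
            "Authorization": f"Bearer {os.environ['RIME_API_KEY']}",
            "Accept": "application/json",
            "Content-Type": "application/json",
        },
        json={
            "text": text,
            "model": "arcana",     # `model`, not `modelID`
            "speaker": "allison",  # `speaker`, not `voice`
        },
    )
    response.raise_for_status()
    # The audio comes back as base64 in the `audioContent` field.
    audio = base64.b64decode(response.json()["audioContent"])
    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
        f.write(audio)
        path = f.name
    subprocess.run(["afplay", path], check=True)
    os.remove(path)


if __name__ == "__main__":
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in {"quit", "exit"}:
            break
        for reply in ask_rasa(user_input):
            print(f"Bot: {reply}")
            speak(reply)
```

With `rasa run` serving the assistant in another terminal, the user can test the full loop by running the script from the activated venv.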