
A simple Next.js demo showing how to integrate a Rive avatar and control its lip-sync animation using a boolean state driven by audio playback or streaming.


Eyuvaraj/Interactive-Avatar-Rive


🎭 Rive Avatar in Next.js (Lip Sync Demo)

Live demo: 👉 https://ag-avatar-rive.vercel.app/

This project demonstrates how to integrate a Rive avatar into a Next.js application and control its lip-sync animation using a boolean state (isTalking) that can be driven by audio playback or streaming.

When isTalking is:

  • true → the avatar talks 🗣️
  • false → the avatar is idle 😴

The setup is designed to work with audio playback today and can easily be adapted to streaming audio (e.g. Gemini Live API, OpenAI Realtime Agents, etc.) for lower latency and more natural interaction.


✨ Features

  • ✅ Rive avatar rendered with @rive-app/react-canvas
  • ✅ Lip-sync controlled via Rive State Machine input
  • ✅ Simple isSpeaking boolean API
  • ✅ Caption overlay for current spoken text
  • ✅ Ready for audio streaming or TTS playback integration
  • ✅ Works with Next.js (App Router / Client Components)

🧱 Tech Stack

  • Next.js
  • React
  • Rive (@rive-app/react-canvas)
  • HTML5 Audio / Streaming-ready architecture

📦 Installation

Install dependencies:

npm install @rive-app/react-canvas

📁 Rive Asset Setup

  1. Download the Rive file: 👉 chatbot.riv

  2. Place it in your public folder:

/public/chatbot.riv

This allows Next.js to serve it statically at:

/chatbot.riv

🎛️ Rive Asset Configuration

Rive File Details:

  Property        Value
  State Machine   Lip Sync
  Input Name      isTalking (boolean)

Behavior:

  • isTalking = true → Avatar talks
  • isTalking = false → Avatar is idle

🧩 Rive Avatar Component

components/RiveAvatar.js

"use client";

import { useEffect } from "react";
import {
  useRive,
  useStateMachineInput,
  Layout,
  Fit,
  Alignment,
} from "@rive-app/react-canvas";

const RIVE_FILE = "/chatbot.riv";
const STATE_MACHINE_NAME = "Lip Sync";
const INPUT_NAME = "isTalking";

export function RiveAvatar({ isSpeaking }) {
  const { rive, RiveComponent } = useRive({
    src: RIVE_FILE,
    stateMachines: STATE_MACHINE_NAME,
    autoplay: true,
    layout: new Layout({
      fit: Fit.Cover,
      alignment: Alignment.Center,
    }),
  });

  const isTalkingInput = useStateMachineInput(
    rive,
    STATE_MACHINE_NAME,
    INPUT_NAME
  );

  useEffect(() => {
    if (isTalkingInput) {
      isTalkingInput.value = isSpeaking;
    }
  }, [isSpeaking, isTalkingInput]);

  return <RiveComponent style={{ width: "100%", height: "100%" }} />;
}

💬 Caption Component

components/Caption.js

"use client";

export default function Caption({ text }) {
  return (
    <div
      style={{
        position: "absolute",
        bottom: "120px",
        left: "50%",
        transform: "translateX(-50%)",
        padding: "12px 20px",
        background: "rgba(0, 0, 0, 0.7)",
        color: "white",
        fontSize: "18px",
        borderRadius: "12px",
        maxWidth: "80%",
        textAlign: "center",
        zIndex: 10,
      }}
    >
      {text}
    </div>
  );
}

🖥️ Example Usage

import { RiveAvatar } from "../components/RiveAvatar";
import Caption from "../components/Caption";

export default function ChatUI({ isSpeaking, messages }) {
  return (
    <div
      style={{
        flexGrow: 1,
        display: "flex",
        flexDirection: "column",
        height: "100vh",
        backgroundColor: "#000",
        position: "relative",
      }}
    >
      <RiveAvatar isSpeaking={isSpeaking} />
      {isSpeaking && (
        <Caption text={messages[messages.length - 1]?.content || ""} />
      )}
    </div>
  );
}

🔊 Audio Integration Logic

To sync the avatar with audio:

  • When audio playback or a stream starts: setIsSpeaking(true)
  • When audio stops or silence is detected: setIsSpeaking(false)

This boolean drives the Rive State Machine input isTalking.
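For file or TTS playback, this toggling can be wired to the HTMLAudioElement lifecycle events. A minimal sketch — bindSpeakingState is an illustrative helper name, not part of the demo:

```javascript
// Keep isSpeaking in sync with an HTMLAudioElement's lifecycle events.
// `audioEl` is any EventTarget that fires "play"/"pause"/"ended" (a real
// <audio> element in the browser); `setIsSpeaking` is a React state setter
// or any callback taking a boolean.
function bindSpeakingState(audioEl, setIsSpeaking) {
  const start = () => setIsSpeaking(true);
  const stop = () => setIsSpeaking(false);
  audioEl.addEventListener("play", start);
  audioEl.addEventListener("pause", stop);
  audioEl.addEventListener("ended", stop);
  // Return an unbind function for cleanup (e.g. a useEffect teardown).
  return () => {
    audioEl.removeEventListener("play", start);
    audioEl.removeEventListener("pause", stop);
    audioEl.removeEventListener("ended", stop);
  };
}
```

In a client component you would call this inside a useEffect and return the unbind function so listeners are removed on unmount.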


⚡ Performance & Realtime Notes

The current demo uses audio file playback. For lower latency and more natural interactions, consider:

  • Gemini Live API
  • OpenAI Realtime Agents
  • WebRTC / streaming TTS pipelines

The avatar already supports streaming—just toggle isSpeaking based on stream activity.
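One way to derive isSpeaking from a chunked stream (assuming the realtime API delivers audio in discrete chunks) is a silence timeout: mark the avatar as talking whenever a chunk arrives and fall back to idle after a short quiet window. createStreamActivityTracker is a hypothetical helper, not part of this repo:

```javascript
// Flip isSpeaking on while audio chunks keep arriving, and back off after
// `silenceMs` of inactivity. Call the returned function once per chunk.
function createStreamActivityTracker(setIsSpeaking, silenceMs = 300) {
  let timer = null;
  return function onAudioChunk() {
    setIsSpeaking(true);
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => setIsSpeaking(false), silenceMs);
  };
}
```

Tune silenceMs to the chunk cadence of your provider: too short and the mouth flickers between chunks, too long and the avatar keeps talking after the audio ends.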


🚀 Future Ideas

  • Word-level or phoneme-level lip sync
  • Emotion / expression state machines
  • Viseme-driven animation
  • Live microphone input
  • Multi-avatar scenes
