Feature Suggestion: Add AI subtitles #147

Open
janwilmake opened this issue Dec 20, 2024 · 2 comments


janwilmake commented Dec 20, 2024

Having subtitles would be great for language learning...

According to the documentation, you can receive real-time transcripts of the audio through response.audio_transcript.delta server events, which arrive concurrently with the audio stream.

For WebRTC connections, the documentation mentions that during a session you'll receive:

  • input_audio_buffer.speech_started events when input starts
  • input_audio_buffer.speech_stopped events when input stops
  • response.audio_transcript.delta events for the in-progress audio transcript
  • response.done event when the model has completed transcribing and sending a response

This means you can get word-by-word transcription updates as the audio is being processed, allowing you to build features like real-time captions or text displays alongside the voice interaction.

The transcription events are part of the standard event lifecycle whether you're using WebRTC or WebSocket connections, so you'll have access to the transcript regardless of which connection method you choose.
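As a sketch of how those events could be consumed, here is a minimal transcript accumulator. The event names come from the documentation above; the event shape is simplified to only the fields used, and the accumulator itself is illustrative, not part of any SDK:

```typescript
// Simplified shape of the Realtime API server events we care about.
type RealtimeServerEvent = { type: string; delta?: string };

// Accumulates transcript deltas into the in-progress transcript and
// clears it once the response is done.
function createTranscriptAccumulator() {
  let transcript = '';
  return {
    // Feed every server event in; returns the transcript so far.
    handle(event: RealtimeServerEvent): string {
      if (event.type === 'response.audio_transcript.delta' && event.delta) {
        transcript += event.delta;
      } else if (event.type === 'response.done') {
        // Response complete; a real UI might keep the final text on
        // screen briefly before clearing it.
        transcript = '';
      }
      return transcript;
    },
  };
}
```

Whatever is returned from `handle` could be pushed straight into component state to drive a captions display.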

We can probably create a component like this:

import React, { useState, useEffect } from 'react';
import { useRoomContext } from '~/hooks/useRoomContext';
import type { User } from '~/types/Messages';

const AiSubtitles = () => {
  const [subtitles, setSubtitles] = useState('');
  const [isVisible, setIsVisible] = useState(false);
  const { room } = useRoomContext();

  useEffect(() => {
    // Find the AI participant (id assumed to be 'ai') and show the
    // overlay while it is speaking. Assuming the room's user list is
    // React state, re-running this effect on each update is enough.
    const aiUser = room.otherUsers.find((user: User) => user.id === 'ai');
    if (aiUser?.speaking) {
      setIsVisible(true);
      // Here we'd need the actual transcript from the AI service
      // (response.audio_transcript.delta events).
      // For now, we'll just show a speaking indicator.
      setSubtitles('AI is speaking...');
    } else {
      setIsVisible(false);
      setSubtitles('');
    }
  }, [room]);

  if (!isVisible) return null;

  return (
    <div className="fixed bottom-24 left-1/2 -translate-x-1/2 w-full max-w-2xl mx-auto px-4">
      <div className="bg-black/75 text-white p-4 rounded-lg text-center text-lg animate-fadeIn">
        {subtitles}
      </div>
    </div>
  );
};

export default AiSubtitles;

to support subtitles, then render it in /app/routes/_room.$roomName.room.tsx and feed it text by processing the realtime API response events.

@nils-ohlmeier
Collaborator

Great idea!

One blocking issue right now, though, is that OpenAI only supports receiving a single audio stream. That is why our demo app currently requires a "push-to-talk" button to talk to the AI: it ensures that only the person pressing the button is forwarded to (and heard by) the AI.

So right now we could only get subtitles for the one person currently talking to the AI, which sounds a little limited to me and not quite what you are suggesting, no?

But yes, as soon as OpenAI adds support for receiving multiple audio streams from all the participants in the meeting, this would become a cool and useful feature.

@janwilmake
Author

janwilmake commented Dec 20, 2024

Imo it's already great to see subtitles for what the AI says back.

Especially useful if it speaks a language you're not super familiar with.

However, I understand your point. Having subtitles for everybody would be a killer feature! The only solution I can imagine would be to pair a separate AI with each speaker, and have each one silently listen to its paired speaker without responding.
