Thanks for opening this up, Jan. If you listen to the video segment of Lex Fridman’s intro, he explains that the recording was done mostly in Russian, a language both he and Zelenskyy speak fluently, with parts mixed in English and Ukrainian. The time lag of live translation, to him, got in the way.
If I understand this correctly, they took the recording, used AI/LLMs to transcribe the audio, and then did voice generation on the transcripts. I am not clear whether the transcription source fed to ElevenLabs was compiled into a single language.
Jan was responding to my team message about an upcoming podcast recording with one of our colleagues in Japan. I had suggested to our guest that I could send the questions ahead and ask them in English, while allowing them to respond in Japanese, the language in which they would ideally be best able to communicate their ideas.
This was something we did previously in OEGlobal Voices Episode 68, where María Soledad Ramírez Montoya listened to my questions in English and responded in Spanish.
Our published audio is in mixed language, but I had the Descript software we use produce separate transcripts in Spanish and English, which were then combined back into full English and Spanish transcripts. I would like to think the mixed-language version is worth listening to if one is fluent in both English and Spanish; for someone like me, I’d be happy to listen to Marisol’s voice while following along with a translated transcript.
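Out of curiosity about what that combining step might look like under the hood, here is a minimal sketch. Everything in it is hypothetical (the segment format, speaker names, and sample lines are made up for illustration, not Descript’s actual export format): it just interleaves two per-language transcript tracks by start time into one chronological mixed-language transcript.

```python
# Hypothetical sketch: merge two single-language transcript tracks
# (e.g. English questions, Spanish answers) into one chronological
# mixed-language transcript, ordered by segment start time.
# The (start_seconds, speaker, text) tuple format is an assumption,
# not any real tool's export schema.

def merge_transcripts(*tracks):
    """Each track is a list of (start_seconds, speaker, text) tuples."""
    segments = [seg for track in tracks for seg in track]
    segments.sort(key=lambda seg: seg[0])  # chronological order
    return segments

# Made-up sample data standing in for two per-language transcripts.
english = [
    (0.0, "Host", "What drew you to open education?"),
    (95.0, "Host", "How did your institution respond?"),
]
spanish = [
    (12.0, "Guest", "Me atrajo la posibilidad de compartir recursos."),
    (110.0, "Guest", "Al principio con cautela, pero con mucho interés."),
]

merged = merge_transcripts(english, spanish)
for start, speaker, text in merged:
    print(f"[{start:6.1f}s] {speaker}: {text}")
```

Going the other direction (pulling a single-language transcript out of the merged one) would just be filtering the same segment list by speaker or language tag.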
I am curious to explore Resemble AI, thanks @megolosk! It still looks like extra steps are needed to extract a single-language transcript from a mixed-language recording, right? Or are there transcription tools that can generate single-language transcripts from mixed sources?
I will have to look for another tool, as I note that Descript does not transcribe Japanese.
Indeed there is much interest and value in providing content in multiple languages, especially regionally specific ones, as @danmcguire has been doing for his projects in Africa.
Are we nearing the Universal Translator? Beam one to me