Technologies
Published
30 October 2024
Authors
Zalán Borsos, Matt Sharifi and Marco Tagliasacchi
Our pioneering speech generation technologies are helping people around the world interact with more natural, conversational and intuitive digital assistants and AI tools.
Speech is central to human connection. It helps people around the world exchange information and ideas, express emotions and create mutual understanding. As our technology for generating natural, dynamic voices continues to improve, we're unlocking richer, more engaging digital experiences.
Over the past few years, we've been pushing the frontiers of audio generation, developing models that can create high-quality, natural speech from a range of inputs, such as text, tempo controls and particular voices. This technology powers single-speaker audio in many Google products and experiments, including Gemini Live, Project Astra, Journey Voices and YouTube's auto dubbing, and helps people around the world interact with more natural, conversational and intuitive digital assistants and AI tools.
Working with partners across Google, we recently helped develop two new features that can generate long-form, multi-speaker dialogue to make complex content more accessible:
NotebookLM Audio Overviews turns uploaded documents into engaging, lively dialogue. With one click, two AI hosts summarize user material, make connections between topics and banter back and forth.
Illuminate creates formal AI-generated discussions about research papers to help make knowledge more accessible and digestible.
Here, we provide an overview of our latest speech generation research underpinning all of these products and experimental tools.
Pioneering techniques for audio generation
For years, we have been investing in audio generation research and exploring new ways of generating more natural dialogue in our products and experimental tools. In our previous research on SoundStorm, we first demonstrated the ability to generate 30-second segments of natural dialogue between multiple speakers.
This extended our earlier work, SoundStream and AudioLM, which allowed us to apply many text-based language modeling techniques to the problem of audio generation.
SoundStream is a neural audio codec that efficiently compresses and decompresses an audio input without compromising its quality. As part of the training process, SoundStream learns how to map audio to a range of acoustic tokens. These tokens capture all of the information needed to reconstruct the audio with high fidelity, including properties such as prosody and timbre.
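To make the codec's role concrete, here is a minimal sketch of the encode/decode interface such a codec exposes. The class, method names and rates below are assumptions for illustration, not SoundStream's actual API.

```python
import numpy as np

class NeuralAudioCodec:
    """Sketch of a SoundStream-style neural codec interface.

    encode() maps a waveform to a grid of discrete acoustic tokens and
    decode() reconstructs a waveform from them. A real codec uses trained
    encoder/decoder networks with residual vector quantization; here we
    only illustrate the shapes involved.
    """

    def __init__(self, sample_rate=24_000, frame_rate=50, tokens_per_frame=4):
        self.sample_rate = sample_rate        # audio samples per second
        self.frame_rate = frame_rate          # token groups ("frames") per second
        self.tokens_per_frame = tokens_per_frame

    def encode(self, waveform: np.ndarray) -> np.ndarray:
        """Return acoustic tokens of shape [num_frames, tokens_per_frame]."""
        num_frames = len(waveform) * self.frame_rate // self.sample_rate
        # Placeholder: a trained encoder and quantizer would produce these.
        return np.zeros((num_frames, self.tokens_per_frame), dtype=np.int32)

    def decode(self, tokens: np.ndarray) -> np.ndarray:
        """Reconstruct a waveform from a token grid."""
        num_samples = tokens.shape[0] * self.sample_rate // self.frame_rate
        return np.zeros(num_samples, dtype=np.float32)
```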
AudioLM treats audio generation as a language modeling task, producing the acoustic tokens of codecs like SoundStream. As a result, the AudioLM framework makes no assumptions about the type or makeup of the audio being generated, and can flexibly handle a variety of sounds without needing architectural adjustments, making it a great candidate for modeling multi-speaker dialogues.
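Framing audio generation this way means the familiar next-token sampling loop from text language models carries over directly to acoustic tokens. A minimal sketch, where `model.predict_logits` is a hypothetical stand-in for a trained Transformer's forward pass:

```python
import numpy as np

def sample_tokens(model, prompt_tokens, num_steps, temperature=0.8, rng=None):
    """Autoregressively sample acoustic tokens, exactly as a text LM
    samples words: predict a distribution over the next token, draw a
    sample, append it, and repeat."""
    rng = rng or np.random.default_rng()
    tokens = list(prompt_tokens)
    for _ in range(num_steps):
        logits = model.predict_logits(tokens)            # shape: [vocab_size]
        # Temperature-scaled softmax, shifted by the max for stability.
        probs = np.exp((logits - logits.max()) / temperature)
        probs /= probs.sum()
        tokens.append(int(rng.choice(len(probs), p=probs)))
    return tokens
```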
Building on this research, our latest speech generation technology can produce 2 minutes of dialogue with improved naturalness, speaker consistency and acoustic quality, when given a script of dialogue and speaker turn markers. The model also performs this task in under 3 seconds on a single Tensor Processing Unit (TPU) v5e chip, in one inference pass. This means it generates audio more than 40 times faster than real time.
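The speed claim follows directly from the figures above; as a quick check:

```python
audio_seconds = 2 * 60        # length of the generated dialogue
inference_seconds = 3         # upper bound on generation time (one TPU v5e)

real_time_factor = audio_seconds / inference_seconds
print(real_time_factor)       # 40.0 at the 3-second bound; finishing in
                              # under 3 seconds means more than 40x real time
```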
Scaling our audio generation models
Scaling our single-speaker generation models to multi-speaker models then became a matter of data and model capacity. To help our latest speech generation model produce longer speech segments, we created an even more efficient speech codec that compresses audio into a sequence of tokens at rates as low as 600 bits per second, without compromising the quality of its output.
The tokens produced by our codec have a hierarchical structure and are grouped by time frames. The first tokens within a group capture phonetic and prosodic information, while the last tokens encode fine acoustic details.
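One way to picture this hierarchy is as a 2D grid of tokens indexed by time frame and level, with early levels coarse and later levels fine. A toy illustration; the frame rate and group size are assumed for the example, not the codec's real parameters:

```python
import numpy as np

frame_rate = 12          # token groups ("frames") per second, assumed
group_size = 4           # tokens per group, assumed
duration_s = 120         # a 2-minute dialogue

# tokens[t, l] is the l-th token of the group for time frame t.
tokens = np.zeros((frame_rate * duration_s, group_size), dtype=np.int32)

coarse = tokens[:, 0]    # first token per group: phonetic/prosodic content
fine = tokens[:, 1:]     # later tokens per group: fine acoustic detail
print(tokens.size)       # 5760, in line with the "over 5,000 tokens" figure below
```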
Even with our new speech codec, producing a 2-minute dialogue requires generating over 5,000 tokens. To model these long sequences, we developed a specialized Transformer architecture that can efficiently handle hierarchies of information, matching the structure of our acoustic tokens.
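Putting the stated figures together shows just how compact this representation is; a back-of-the-envelope check using only numbers from the text:

```python
bitrate_bps = 600              # codec bitrate, from the text
duration_s = 120               # 2-minute dialogue
num_tokens = 5_000             # lower bound on token count, from the text

total_bits = bitrate_bps * duration_s         # 72,000 bits, roughly 9 KB
bits_per_token = total_bits / num_tokens      # ~14.4 bits per token
tokens_per_second = num_tokens / duration_s   # ~42 tokens per second
print(total_bits, bits_per_token, tokens_per_second)
```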
With this approach, we can efficiently generate all of the acoustic tokens corresponding to the dialogue within a single autoregressive inference pass. Once generated, these tokens can be decoded back into an audio waveform using our speech codec.
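End to end, the pipeline then reduces to two stages: one autoregressive pass to produce the tokens, then a codec decode. A schematic sketch, reusing the hypothetical `NeuralAudioCodec` and `sample_tokens` helpers from above:

```python
import numpy as np

def generate_dialogue(model, codec, script_tokens, num_acoustic_tokens):
    """Schematic pipeline: (1) one autoregressive pass conditioned on the
    script and speaker-turn markers yields acoustic tokens; (2) the codec
    decodes them into a waveform."""
    out = sample_tokens(model, script_tokens, num_acoustic_tokens)
    acoustic = np.array(out[len(script_tokens):], dtype=np.int32)
    # Reshape the flat token stream into the codec's [frame, group] grid
    # (assumes the count is a multiple of the group size).
    grid = acoustic.reshape(-1, codec.tokens_per_frame)
    return codec.decode(grid)
```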
To teach our model how to generate realistic exchanges between multiple speakers, we pretrained it on hundreds of thousands of hours of speech data. Then we fine-tuned it on a much smaller dataset of dialogue with high acoustic quality and precise speaker annotations, consisting of unscripted conversations from a number of voice actors, complete with realistic disfluencies: the "umm"s and "aah"s of real conversation. This step taught the model how to reliably switch between speakers during a generated dialogue, and to output only studio-quality audio with realistic pauses, tone and timing.
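In outline, this is the standard pretrain-then-finetune recipe; a highly simplified sketch, with the dataset objects and the `training_step` update entirely assumed:

```python
def train_dialogue_model(model, speech_corpus, dialogue_corpus,
                         pretrain_steps, finetune_steps):
    """Schematic two-stage recipe: broad pretraining on large-scale
    speech data, then finetuning on a small, high-quality dialogue set
    with precise speaker annotations, so the model learns turn-taking
    and studio-quality delivery."""
    for _ in range(pretrain_steps):
        training_step(model, speech_corpus.sample_batch())    # next-token loss
    for _ in range(finetune_steps):
        training_step(model, dialogue_corpus.sample_batch())  # same loss, curated data
    return model
```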
In line with our AI Principles and our commitment to developing and deploying AI technologies responsibly, we're incorporating our SynthID technology to watermark non-transient AI-generated audio content from these models, helping safeguard against the potential misuse of this technology.
New speech experiences ahead
We're now focused on improving our model's fluency and acoustic quality, and on adding more fine-grained controls for features like prosody, while exploring how best to combine these advances with other modalities, such as video.
The potential applications for advanced speech generation are vast, especially when combined with our Gemini family of models. From enhancing learning experiences to making content more universally accessible, we're excited to keep pushing the boundaries of what's possible with voice-based technologies.
Acknowledgements
Authors of this work: Zalán Borsos, Matt Sharifi, Brian McWilliams, Yunpeng Li, Damien Vincent, Félix de Chaumont Quitry, Martin Sundermeyer, Eugene Kharitonov, Alex Tudor, Victor Ungureanu, Sertan Girgin, Jonas Rothfuss, Jake Walker and Marco Tagliasacchi.
We thank Leland Rechis, Ralph Leith, Paul Middleton, Poly Pata, Minh Truong and RJ Skerry-Ryan for their essential efforts on dialogue data.
We're very grateful to our collaborators across Labs, Illuminate, Cloud, Speech and YouTube for their outstanding work bringing these models into products.
We also thank Françoise Beaufays, Krishna Bharat, Tom Hume, Simon Tokumine and James Zhao for their guidance on the project.