Last year the iBUG group at Imperial College London and the Samsung AI Centre published a paper on speech reconstruction from video. The model presented is novel in its ability to generate intelligible speech from video alone, even for previously unseen speakers. The main ML engine in the workflow is a Wasserstein GAN, one of a collection of networks working together to generate speech. The model is composed of three parts: the generator network, the critic, which forces the generation of ‘natural’ sounding waveforms, and a speech encoder.
The generator network is responsible for transforming the sequence of video frames into a waveform. During the training phase the critic network drives the generator to produce waveforms that sound similar to natural speech. Finally, a pretrained speech encoder is used to preserve the speech content of the waveform.
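To make the division of labor concrete, here is a minimal NumPy sketch of how the three losses fit together in a Wasserstein GAN setup like the one described. The linear "networks", dimensions, and the unweighted sum of losses are placeholders for illustration only, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "networks": simple linear maps (the real model uses deep nets).
W_gen = rng.normal(size=(64, 16000)) * 0.01     # video features -> waveform
W_critic = rng.normal(size=16000) * 0.01        # waveform -> realism score
W_enc = rng.normal(size=(16000, 32)) * 0.01     # waveform -> speech embedding

def generator(video_feats):
    # Collapse the frame sequence to one feature vector, map to audio.
    return np.tanh(video_feats.mean(axis=0) @ W_gen)

def critic(wave):
    # Wasserstein critic outputs an unbounded realism score.
    return wave @ W_critic

def encoder(wave):
    # Pretrained speech encoder: embeds the spoken content of a waveform.
    return wave @ W_enc

video_feats = rng.normal(size=(25, 64))   # one second of video at 25 fps
real_wave = rng.normal(size=16000)        # ground-truth audio (placeholder)

fake_wave = generator(video_feats)

# Critic objective: score real audio higher than generated audio.
critic_loss = critic(fake_wave) - critic(real_wave)

# Generator objective: fool the critic (adversarial term) while keeping
# the speech-encoder embedding close to the real audio's (content term).
adv_loss = -critic(fake_wave)
content_loss = np.mean((encoder(fake_wave) - encoder(real_wave)) ** 2)
gen_loss = adv_loss + content_loss
```

The key point the sketch captures is that the critic only shapes how natural the waveform sounds, while the frozen speech encoder is what anchors the generated audio to the words actually being spoken.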
The model is trained on the GRID dataset, a freely available audiovisual corpus of participants reading sentences. The model was evaluated on “sound quality” and on “accuracy of the spoken words”. The authors also posted videos of the model’s performance and comparisons with another recent framework, Lip2AudSpec, with quite impressive results.
If you’d like to learn more about the authors, check out their pages on iBUG. If you’d like to check out their work you can find the first and second authors on GitHub. Aaaand…if you’re still interested in more lip reading fun, take a look at this video of Rasputin killing it at some Beyoncé karaoke.
Written by Rebecca Minich, Product Analyst, Data Science at Google. Opinions expressed are solely my own and do not express the views or opinions of my employer.