| Lyra | |
|---|---|
| Filename extension | `.lyra` |
| Developed by | Google |
| Initial release | 2021 |
| Latest release | 1.3.2 / December 20, 2022 |
| Type of format | Speech codec |
| Free format? | Yes (Apache-2.0) |
Lyra is a lossy audio codec developed by Google that is designed for compressing speech at very low bitrates. Unlike most other audio formats, it compresses data using a machine learning-based algorithm.
The Lyra codec is designed to transmit speech in real time when bandwidth is severely restricted, such as over slow or unreliable network connections.[1] It runs at fixed bitrates of 3.2, 6, and 9 kbit/s and is intended to provide better quality than codecs that use traditional waveform-based algorithms at similar bitrates.[2][3] Rather than encoding the waveform directly, compression is achieved via a machine learning algorithm that encodes the input with feature extraction, after which a generative model reconstructs an approximation of the original.[1] This model was trained on thousands of hours of speech recorded in over 70 languages so that it works across a variety of speakers.[2] Because generative models are more computationally demanding than traditional codecs, a simple model that processes different frequency ranges in parallel is used to obtain acceptable performance.[4] Lyra imposes 20 ms of latency due to its frame size.[3] Google's reference implementation is available for Android and Linux.[4]
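The fixed bitrates and the 20 ms frame size together fix the bit budget each frame can carry. A short illustrative calculation (not taken from the Lyra source) makes the arithmetic concrete:

```python
# Illustrative: bits available per frame at Lyra's fixed bitrates,
# assuming the 20 ms frame size described above.
FRAME_MS = 20

def bits_per_frame(bitrate_bps: int, frame_ms: int = FRAME_MS) -> int:
    """Number of bits the codec can spend on a single frame."""
    return bitrate_bps * frame_ms // 1000

for rate in (3200, 6000, 9000):
    print(f"{rate / 1000} kbit/s -> {bits_per_frame(rate)} bits per frame")
# 3.2 kbit/s -> 64 bits, 6 kbit/s -> 120 bits, 9 kbit/s -> 180 bits
```

At 3.2 kbit/s the entire 20 ms frame must fit in 64 bits, which is why a generative decoder that reconstructs audio from compact features is needed rather than a waveform coder.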
Lyra's initial version performed significantly better than traditional codecs at similar bitrates.[1][4][5] Ian Buckley at MakeUseOf said, "It succeeds in creating almost eerie levels of audio reproduction with bitrates as low as 3 kbps." Google claims that it reproduces natural-sounding speech, and that Lyra at 3 kbit/s beats Opus at 8 kbit/s.[2] Tsahi Levent-Levi writes that Satin, Microsoft's AI-based codec, outperforms it at higher bitrates.[5]
In December 2017, Google researchers published a preprint paper on replacing the Codec 2 decoder with a WaveNet neural network. They found that a neural network is able to extrapolate features of the voice not described in the Codec 2 bitstream and give better audio quality, and that the use of conventional features makes the neural network calculation simpler compared to a purely waveform-based network. Lyra version 1 would reuse this overall framework of feature extraction, quantization, and neural synthesis.[6]
Lyra was first announced in February 2021,[2] and in April, Google released the source code of their reference implementation.[1] The initial version had a fixed bitrate of 3 kbit/s and around 90 ms latency.[1][2] The encoder calculates a log mel spectrogram and performs vector quantization to store the spectrogram in a data stream. The decoder is a WaveNet neural network that takes the spectrogram and reconstructs the input audio.[2]
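The v1 pipeline (log mel spectrogram, then vector quantization of the spectrogram frames) can be sketched in miniature. Everything below is illustrative, not Lyra's implementation: the mel filterbank is simplified, the parameters are arbitrary, and the codebook is random rather than trained, so this shows only the shape of the computation.

```python
import numpy as np

def log_mel_spectrogram(audio, sr=16000, n_fft=512, hop=320, n_mels=40):
    """Toy log-mel spectrogram: windowed magnitude spectra through a mel filterbank."""
    frames = [audio[i:i + n_fft] for i in range(0, len(audio) - n_fft + 1, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames) * np.hanning(n_fft), axis=1))
    # Triangular filters with corners equally spaced on the mel scale.
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    inv_mel = lambda m: 700 * (10 ** (m / 2595) - 1)
    corners = inv_mel(np.linspace(0, mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * corners / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fb[i, l:c] = np.linspace(0, 1, c - l, endpoint=False)
        if r > c:
            fb[i, c:r] = np.linspace(1, 0, r - c, endpoint=False)
    return np.log(mags @ fb.T + 1e-6)  # shape: (num_frames, n_mels)

def vector_quantize(features, codebook):
    """Map each feature frame to the index of its nearest codebook vector."""
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)                    # 1 s stand-in for speech
feats = log_mel_spectrogram(audio)
codebook = rng.standard_normal((64, feats.shape[1]))  # 64 entries = 6 bits/frame
codes = vector_quantize(feats, codebook)              # the transmitted stream
```

In the real codec the transmitted indices would be decoded by the WaveNet network, which reconstructs audio from the quantized spectrogram; that generative step is omitted here.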
A second version (v2/1.2.0), released in September 2022, improved sound quality, latency, and performance, and permitted multiple bitrates. V2 uses a "SoundStream" structure in which both the encoder and decoder are neural networks, a kind of autoencoder. A residual vector quantizer is used to convert the feature values into transferable data.[3]
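A residual vector quantizer refines a coarse quantization in stages: each stage quantizes the residual left by the previous one, and the decoder sums the selected vectors. The sketch below is a minimal illustration with random codebooks, not SoundStream's trained quantizer; each codebook here includes the zero vector so a stage can pass its residual through unchanged, which guarantees later stages never increase the error.

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual VQ: quantize, subtract the chosen vector, quantize the residual, repeat."""
    residual = x.copy()
    indices = []
    for cb in codebooks:
        dists = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = dists.argmin(axis=1)
        indices.append(idx)
        residual = residual - cb[idx]   # next stage sees what this stage missed
    return indices

def rvq_decode(indices, codebooks):
    """Reconstruct by summing the selected vector from each stage's codebook."""
    return sum(cb[idx] for cb, idx in zip(codebooks, indices))

rng = np.random.default_rng(1)
feats = rng.standard_normal((100, 8))       # stand-in feature frames
codebooks = []
for stage in range(4):
    cb = rng.standard_normal((16, 8)) * 0.5 ** stage  # finer scale per stage
    cb[0] = 0.0                                       # allow a "no-op" choice
    codebooks.append(cb)

codes = rvq_encode(feats, codebooks)        # 4 indices per frame to transmit
recon = rvq_decode(codes, codebooks)
```

Each extra stage adds a fixed number of bits per frame (here 4 bits for a 16-entry codebook), which is one way a single quantizer design can serve multiple bitrates.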
Google's implementation is available on GitHub under the Apache License.[1][7] Written in C++, it is optimized for 64-bit ARM but also runs on x86, on either Android or Linux.[4]
Google Meet uses Lyra to transmit sound for video chats when bandwidth is limited.[1][5]