Wav2Vec

pip install vectorhub[encoders-audio-pytorch]

Details

Release date: 2020-06-20

Vector length: 512 (default)

Repo: https://github.com/pytorch/fairseq

Paper: https://arxiv.org/abs/2006.11477

Example

#pip install vectorhub[encoders-audio-pytorch]
from vectorhub.encoders.audio.pytorch import Wav2Vec
model = Wav2Vec()
sample = model.read('https://vecsearch-bucket.s3.us-east-2.amazonaws.com/voices/common_voice_en_2.wav')
model.encode(sample)
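
The call above should return a flat embedding whose length matches the default vector length listed in the details (512). A minimal check, assuming encode() returns a plain list of floats:

vector = model.encode(sample)
print(len(vector))  # expect 512 with the default settings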

Index and search vectors

Index and search your vectors easily on the cloud using 1 line of code!

username = '<your username>'
email = '<your email>'
# Request an api_key by typing in your username and email.
api_key = model.request_api_key(username, email)

# Index in 1 line of code
items = [
    'https://vecsearch-bucket.s3.us-east-2.amazonaws.com/voices/common_voice_en_69.wav',
    'https://vecsearch-bucket.s3.us-east-2.amazonaws.com/voices/common_voice_en_99.wav',
    'https://vecsearch-bucket.s3.us-east-2.amazonaws.com/voices/common_voice_en_10.wav',
    'https://vecsearch-bucket.s3.us-east-2.amazonaws.com/voices/common_voice_en_5.wav'
]
model.add_documents(username, api_key, items)

# Search in 1 line of code and get the most similar results.
model.search('https://vecsearch-bucket.s3.us-east-2.amazonaws.com/voices/common_voice_en_69.wav')

# Add metadata to your documents (left as None here; a possible shape is sketched below)
metadata = None
model.add_documents(username, api_key, items, metadata=metadata)
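
A minimal sketch of how metadata could be supplied, assuming it is a list of dicts aligned one-to-one with items (the exact schema is an assumption, not confirmed by the library docs; the field name below is hypothetical):

# Hypothetical metadata: one dict per item, in the same order as items
metadata = [
    {'filename': 'common_voice_en_69.wav'},
    {'filename': 'common_voice_en_99.wav'},
    {'filename': 'common_voice_en_10.wav'},
    {'filename': 'common_voice_en_5.wav'}
]
model.add_documents(username, api_key, items, metadata=metadata)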

Description

We show for the first time that learning powerful representations from speech audio alone, followed by fine-tuning on transcribed speech, can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations, which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/noisy test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 5.2/8.6 WER on the clean/noisy test sets of Librispeech. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
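
As a rough illustration of the contrastive task described above (a sketch, not the fairseq implementation): for each masked time step, the context network output must identify the true quantized latent among distractors sampled from other masked steps, using cosine similarity scaled by a temperature. The dimensions and temperature below are assumptions chosen for illustration.

# Sketch of the per-step contrastive objective
import torch
import torch.nn.functional as F

def contrastive_loss(c_t, q_t, distractors, temperature=0.1):
    # c_t: (dim,) context network output at a masked time step
    # q_t: (dim,) true quantized latent for that step
    # distractors: (K, dim) quantized latents drawn from other masked steps
    candidates = torch.cat([q_t.unsqueeze(0), distractors], dim=0)    # (K+1, dim)
    sims = F.cosine_similarity(c_t.unsqueeze(0), candidates, dim=-1)  # (K+1,)
    logits = (sims / temperature).unsqueeze(0)                        # (1, K+1)
    # the true latent sits at index 0, so this is a (K+1)-way classification
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))

# Toy call with random vectors (dim=256, 100 distractors)
loss = contrastive_loss(torch.randn(256), torch.randn(256), torch.randn(100, 256))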