Tuesday, August 23, 2022

Acoustic Feature Extraction with Transformers

The example in Transformers' documentation here shows how to use the wav2vec 2.0 model for automatic speech recognition. However, there are two crucial issues with that example. First, we usually want to use our own data (set) instead of their (readily available) dataset. Second, we need to extract acoustic features, i.e., the last hidden states instead of the logits. The following is my example of adapting Transformers to extract acoustic embeddings from any audio file (WAV) using several models. It includes the pooling average that converts frame-based processing into utterance-based processing for a given audio file. You do not need the pooling average if you want to process your audio file frame by frame (remove the `.mean(axis=0)` in the variable `last_hidden_states`).

Basic syntax: wav2vec2 base model

This is the example from the documentation. I replaced the use of the dataset with a defined path to an audio file ('00001.wav').

from transformers import Wav2Vec2Processor, Wav2Vec2Model
import torchaudio
import torch
# load model
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

# audio file is decoded on the fly
array, fs = torchaudio.load("/data/A-VB/audio/wav/00001.wav")
input = processor(array.squeeze(), sampling_rate=fs, return_tensors="pt")

# apply the model to the input array from wav
with torch.no_grad():
    outputs = model(**input)

# extract last hidden state, compute average, convert to numpy
last_hidden_states = outputs.last_hidden_state.squeeze().mean(axis=0).numpy()

# print shape
print(f"Hidden state shape: {last_hidden_states.shape}")
# Hidden state shape: (768,)
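
If you prefer the frame-based features mentioned at the beginning, a minimal variation is to drop the pooling average; you then keep one 768-dimensional vector per frame, and the number of frames depends on the audio duration:

# frame-based features: keep all frames instead of averaging them
frame_hidden_states = outputs.last_hidden_state.squeeze().numpy()
# the shape is (num_frames, 768), where num_frames depends on the audio length
print(f"Frame-based hidden state shape: {frame_hidden_states.shape}")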


The syntax for the wav2vec2 large and robust model

In this second example, I replace the base model with the large robust model (used as is, without any further fine-tuning on my side). This example is adapted from here. Note that I replaced `Wav2Vec2ForCTC` with `Wav2Vec2Model`. The former is used when we want to obtain the logits (for speech-to-text transcription) rather than the hidden states.
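
For comparison, here is a minimal sketch of the `Wav2Vec2ForCTC` route, i.e., obtaining the logits and decoding them into text; the model name and file path below mirror the hidden-state example and are assumptions on my part:

from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import torch
import torchaudio

processor = Wav2Vec2Processor.from_pretrained(
    "facebook/wav2vec2-large-robust-ft-swbd-300h")
ctc_model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-robust-ft-swbd-300h")

array, fs = torchaudio.load("/data/A-VB/audio/wav/00001.wav")
input = processor(array.squeeze(), sampling_rate=fs, return_tensors="pt")

with torch.no_grad():
    logits = ctc_model(**input).logits  # (1, num_frames, vocab_size)

# greedy decoding of the logits into a transcription
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))

The hidden-state extraction itself looks like this: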

from transformers import Wav2Vec2Processor, Wav2Vec2Model
import torch
import torchaudio

# load model
processor = Wav2Vec2Processor.from_pretrained(
    "facebook/wav2vec2-large-robust-ft-swbd-300h")
model = Wav2Vec2Model.from_pretrained(
    "facebook/wav2vec2-large-robust-ft-swbd-300h")

# audio file is decoded on the fly
array, fs = torchaudio.load("/data/A-VB/audio/wav/00001.wav")
input = processor(array.squeeze(), sampling_rate=fs, return_tensors="pt")

with torch.no_grad():
    outputs = model(**input)

last_hidden_states = outputs.last_hidden_state.squeeze().mean(axis=0).numpy()
# print shape
print(f"Hidden state shape: {last_hidden_states.shape}")

You can replace "facebook/wav2vec2-large-robust-ft-swbd-300h" with "facebook/wav2vec2-large-robust-ft-libri-960h" for the version fine-tuned on Librispeech (960 h) instead of Switchboard (300 h).

For other models, you may need to replace `Wav2Vec2Processor` with `Wav2Vec2FeatureExtractor` for the processor variable (see the sketch after this list). In my case, this is needed for the following models:
  • facebook/wav2vec2-large-robust
  • facebook/wav2vec2-large-xlsr-53
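
A minimal sketch of that swap, assuming one of the models above (the rest of the pipeline is unchanged):

from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
import torch
import torchaudio

# these checkpoints ship only a feature extractor (no tokenizer),
# hence Wav2Vec2FeatureExtractor instead of Wav2Vec2Processor
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(
    "facebook/wav2vec2-large-robust")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-robust")

array, fs = torchaudio.load("/data/A-VB/audio/wav/00001.wav")
input = feature_extractor(array.squeeze(), sampling_rate=fs, return_tensors="pt")

with torch.no_grad():
    outputs = model(**input)

last_hidden_states = outputs.last_hidden_state.squeeze().mean(axis=0).numpy()
print(f"Hidden state shape: {last_hidden_states.shape}")
# Hidden state shape: (1024,) for the large model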

The syntax for the custom model (wav2vec-R-emo-vad)

The last one is an example with a custom model: wav2vec 2.0 fine-tuned on the MSP-Podcast dataset for speech emotion recognition. This example differs from the previous ones since the configuration is given by the authors of the model (read the code thoroughly to inspect the details). I replaced their dummy audio input with a real audio file, and the audio is processed in a batch (batch_size=2) by replicating the same file.

import torch
import torch.nn as nn
from transformers import Wav2Vec2Processor
from transformers.models.wav2vec2.modeling_wav2vec2 import (
    Wav2Vec2Model,
    Wav2Vec2PreTrainedModel,
)
import torchaudio


class RegressionHead(nn.Module):
    r"""Classification head."""

    def __init__(self, config):

        super().__init__()

        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.dropout = nn.Dropout(config.final_dropout)
        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)

    def forward(self, features, **kwargs):

        x = features
        x = self.dropout(x)
        x = self.dense(x)
        x = torch.tanh(x)
        x = self.dropout(x)
        x = self.out_proj(x)

        return x


class EmotionModel(Wav2Vec2PreTrainedModel):
    r"""Speech emotion classifier."""

    def __init__(self, config):

        super().__init__(config)

        self.config = config
        self.wav2vec2 = Wav2Vec2Model(config)
        self.classifier = RegressionHead(config)
        self.init_weights()

    def forward(
            self,
            input_values,
    ):

        outputs = self.wav2vec2(input_values)
        hidden_states = outputs[0]
        hidden_states = torch.mean(hidden_states, dim=1)
        logits = self.classifier(hidden_states)

        return hidden_states, logits


def process_func(
    wavs,
    sampling_rate: int
    # embeddings: bool = False,
):
    r"""Predict emotions or extract embeddings from raw audio signal."""

    # run the raw signals through the processor to normalize them,
    # keep the 'input_values' from the returned batch,
    # and move them to the device
    # wavs = pad_sequence(wavs, batch_first=True)
    # load model from hub
    device = 'cpu'
    model_name = 'audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim'
    processor = Wav2Vec2Processor.from_pretrained(model_name)
    model = EmotionModel.from_pretrained(model_name)

    y = processor(
        [wav.cpu().numpy() for wav in wavs],
        sampling_rate=sampling_rate,
        return_tensors="pt",
        padding="longest",
    )
    y = y['input_values']
    y = y.to(device)

    y = model(y)

    return {
        'hidden_states': y[0],
        'logits': y[1],
    }


# test on an audio file
sampling_rate = 16000
signal = [torchaudio.load('train_001.wav')[0].squeeze().to('cpu') for _ in range(2)]

# extract hidden states
with torch.no_grad():
    hs = process_func(signal, sampling_rate)['hidden_states']
print(f"Hidden states shape={hs.shape}")

Please note that for all models, the audio file must be sampled at 16000 Hz; otherwise, you must resample it before extracting the acoustic embedding with the methods above. The code may not throw an error even if the sampling rate is not 16000 Hz, but the results will not be valid, since all models were trained on speech datasets sampled at 16 kHz.
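
A minimal sketch of such resampling with torchaudio, applied before the processor (the target rate of 16 kHz is fixed; `processor` is whichever processor or feature extractor you loaded above):

import torchaudio

target_sr = 16000
array, fs = torchaudio.load("/data/A-VB/audio/wav/00001.wav")

# resample only if the file is not already at 16 kHz
if fs != target_sr:
    resampler = torchaudio.transforms.Resample(orig_freq=fs, new_freq=target_sr)
    array = resampler(array)
    fs = target_sr

input = processor(array.squeeze(), sampling_rate=fs, return_tensors="pt")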

You may also want to extract acoustic features using the openSMILE toolkit. A tutorial for Windows users using WSL is available here: http://bagustris.blogspot.com/2021/08/extracting-emobase-feature-using-python.html.
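
For completeness, a minimal sketch with the opensmile Python package (assuming `pip install opensmile`; the emobase functionals give one utterance-level feature vector per file):

import opensmile

# utterance-level (functionals) emobase features
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.emobase,
    feature_level=opensmile.FeatureLevel.Functionals,
)
features = smile.process_file("/data/A-VB/audio/wav/00001.wav")
print(features.shape)  # one row of emobase functionals for the file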

Happy reading. Don't wait any longer to apply these methods to your own audio files.

Wednesday, August 17, 2022

Who should clean up B's trash?

Imagine the following thought experiment.

A organizes an event (the event committee). B attends the event (a participant). If B litters carelessly while attending the event, who is obliged to clean it up?

If you still answer A, let us add another case like this.

B is at his own house. B litters carelessly in his own house. Who should clean up B's trash?

It is B who should dispose of his own trash, no matter where he is. As long as it is his trash, he himself is obliged to dispose of it, not someone else.

Monday, August 01, 2022

The maximum number of self-citation references

The best-practice number of self-citations in an academic paper is 10% of the total number of references. Other sources allow 7-20% [1]. For me personally, the maximum number follows the table and formula below.

Number of references    Max. self-citations
1-10                    1
11-20                   2
21-30                   3
...                     ...
91-100                  10

For example, with 24 references, the maximum number of self-citations is 3.


Formula

$$ n_{\text{cite}} = 10\% \times \lceil n_{\text{ref}} / 10 \rceil \times 10 $$

where n_cite is the maximum number of self-citations and n_ref is the number of references.
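
As a quick check, the same rule in a few lines of Python (note that it simplifies to ceil(n_ref / 10)):

import math

def max_self_citation(n_ref: int) -> int:
    """Maximum number of self-citations for a given number of references."""
    return round(0.10 * math.ceil(n_ref / 10) * 10)

print(max_self_citation(24))  # 3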


Why self-citation?

Because we usually do not research and write academic papers from scratch, but build on our own previous studies. This is where self-citation comes in.

A second reason is to boost the h-index (Scopus, G-scholar, WOS) of the researcher concerned.


References:

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3049640/#:~:text=Self%2Dcitation%20ranges%20from%207,with%20many%20authors%20%5B13%5D.
