Monday, July 27, 2020

LaTeX: add a directory for figures

In LaTeX writing, it is common to use the same figure or figure directory across different projects (papers). Instead of copy-pasting the figure directory, which is usually large, it is better to add a single line of LaTeX code specifying where the figure directory is located. This way we avoid wasting disk space on copies of the same figures used by different tex files.

Here is how.
% for a single directory
\graphicspath{{figures/}} %Setting the graphicspath

% for multiple directories
\graphicspath{{figures1/}{figures2/}{figures3/}} %Setting the graphicspath


Don't forget the slash '/' after the name of the directory.
Once \graphicspath is set, \includegraphics can refer to a figure by its filename alone (e.g., \includegraphics{myfig}) and LaTeX will search the listed directories for it.
That's all. It works!

Monday, July 13, 2020

Installing a static library on the (JAIST) cluster

One drawback (and at the same time an advantage) of using a supercomputer or compute cluster is the lack of root access. Root access is removed so that users cannot tamper with the system. What would happen if users could get into /usr, /lib, /etc, and so on? The system could very easily be crippled, even without an attack (through accident, ignorance, etc.).

Not having root access means you cannot install programs or libraries into the system; only the admin can do that. A common problem is dealing with a stubborn, or rather strict, admin. Example: I ask for library A to be installed, "Dear admin, please install library A on the server/supercomputer." Instead of doing it, the admin usually dodges with something like, "Sorry, I am not familiar with library A; library B is already available on the cluster." That is a dead end, so you have to find another way.

If you use Linux or Unix often, the good news is that you can use a library or program without installing it into the system (/usr/bin). This is called a static library. We can install the library in the home directory and tell the system to look for it there. In short: hey system, look for this library in this location. Below are two examples of static library installation: ffmpeg and libsndfile. Both are used for audio processing.

ffmpeg

On an ordinary computer, this library can easily be installed with `sudo apt install ffmpeg`. On a supercomputer or cluster this cannot be done. Fortunately, there is a pre-built static ffmpeg provided by https://johnvansickle.com/. Download the file, uncompress it, and update $PATH as follows.


wget https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-amd64-static.tar.xz
tar -xvf ffmpeg-git-amd64-static.tar.xz

As a result, we will have a directory containing the ffmpeg program, for example one named after its version: "ffmpeg-git-20200130-amd64-static". The ffmpeg binary sits inside this directory, so we point $PATH at that location. Type the following command in the terminal.
export PATH="/home/$USER/ffmpeg-git-20200130-amd64-static:$PATH"

To make it permanent, we can put that command in .bashrc or .bash_profile and `source` it to update the bash config file. Very simple and quick (compared with asking the cluster admin for help).

libsndfile

In the second example I use linuxbrew. Brew is a package manager like apt/apt-get, designed for Unix (Mac and Linux). First we have to install linuxbrew in the cluster home directory.
wget https://raw.githubusercontent.com/Homebrew/install/master/install.sh
bash ./install.sh
Sometimes we need to change the permissions of install.sh. If needed, try `chmod +x install.sh`.
Brew will ask where to install. By default it installs into the system (root). Again, since we do not have root access, we place the brew installation in the home directory (choose this option with Ctrl-D). Once installed, just like with ffmpeg, we will have the brew program in .linuxbrew/bin. Here is how it looks in my case.

Singularity tensorflow_1.14.0-gpu-py3.sif:~> ls .linuxbrew/bin/brew 
.linuxbrew/bin/brew
Now we can install any program available in brew, including ffmpeg and libsndfile. We install libsndfile with the following command:

Singularity tensorflow_1.14.0-gpu-py3.sif:~> .linuxbrew/bin/brew install libsndfile 
/home/s1820002/.linuxbrew/Homebrew/Library/Homebrew/brew.sh: line 4: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
/home/s1820002/.linuxbrew/Homebrew/Library/Homebrew/brew.sh: line 4: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory
Updating Homebrew...
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
/home/s1820002/.linuxbrew/Homebrew/Library/Homebrew/brew.sh: line 4: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory
==> Auto-updated Homebrew!
Updated 1 tap (homebrew/core).
No changes to formulae.

/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
/home/s1820002/.linuxbrew/Homebrew/Library/Homebrew/brew.sh: line 4: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory
Warning: libsndfile 1.0.28 is already installed and up-to-date
Since I had already installed it, brew reports that libsndfile is already installed and up to date. The last step is to update $LD_LIBRARY_PATH. Every library installed with brew goes into .linuxbrew/lib, so we tell LD_LIBRARY_PATH to look for libraries there.

export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$HOME/.linuxbrew/lib"

That command can also be added to .bashrc. I also needed to add this to .bashrc:
eval $(~/.linuxbrew/bin/brew shellenv)

Done. Now we can use the tensorflow container with the LibROSA library, which needs libsndfile to read audio files.
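As a quick sanity check, the soundfile package (the LibROSA dependency that wraps libsndfile) can be tested directly. This is my own minimal sketch, not part of the original setup, and "test.wav" is just a placeholder path:

# check_sndfile.py -- minimal sketch; "test.wav" is a placeholder path
import soundfile as sf   # the import itself fails if libsndfile cannot be found

data, samplerate = sf.read("test.wav")   # read a wav file via libsndfile
print(data.shape, samplerate)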

Case study on the JAIST cluster
On the JAIST cluster, when we log into vpcc (the cluster that has GPUs), tensorflow-gpu is already available via a container (singularity). By default, the Python that gets loaded is Anaconda's Python, which causes the library not to be found (probably because of a path difference). The solution is to not use Anaconda's Python and instead use the Python bundled in the container/singularity, with the following command.
module remove py35
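To confirm which interpreter is actually picked up after removing the module, a tiny check of my own (not from the original note) is enough:

# which_python.py -- print the active interpreter; it should point to the
# container's python, not Anaconda's
import sys
print(sys.executable)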

This approach is far more effective than asking the admin to install the library (sndfile) on the cluster.

Thursday, July 09, 2020

How-to: Install Numba and LibROSA in Jetson AGX Xavier (Ubuntu 18.04)

Installing Numba and Librosa on the Jetson AGX Xavier is very painful. To document my experience, I wrote this note. Here is how to install Numba and Librosa on the AGX Xavier.

Device: AGX Xavier Dev Kit (32GB RAM)
Software
OS: Ubuntu 18.04
nvidia-jetpack: Version: 4.4-b186
python: Version 3.6.9
llvm: Version 9.0.1 (and 7.1.0)

First, we need to install llvmlite, which requires LLVM. We can install LLVM either from source or via apt. I installed it from source as shown below.

cd /tmp
wget https://github.com/llvm/llvm-project/releases/download/llvmorg-9.0.1/llvm-9.0.1.src.tar.xz
tar -xvf llvm-9.0.1.src.tar.xz 
cd llvm-9.0.1.src/
mkdir llvm_build_dir
cd llvm_build_dir
cmake ../ -DCMAKE_BUILD_TYPE=Release -DLLVM_TARGETS_TO_BUILD="ARM;X86;AArch64"
make -j4
sudo make install
cd bin/
echo "export LLVM_CONFIG=\""`pwd`"/llvm-config\"" >> ~/.bashrc
echo "alias llvm='"`pwd`"/llvm-lit'" >> ~/.bashrc
source ~/.bashrc
python3.6 -m pip install --user -U llvmlite==0.31

If you want to install llvm via apt, do it as follows. I use llvm-7 from the Ubuntu repository (llvm-9 fails when installing llvmlite 0.31 but succeeds for 0.33).

sudo apt install llvm-7

If you install llvm via apt, you need to specify in .bashrc where llvm-config is located. For example, here is mine. First, locate llvm-config.
s1820002@s1820002-desktop:~$ locate llvm-config
/usr/bin/llvm-config-7
/usr/include/llvm-7/llvm/Config/llvm-config.h
/usr/lib/llvm-7/bin/llvm-config

Then add the location of llvm-config to .bashrc.
export LLVM_CONFIG=/usr/lib/llvm-7/bin/llvm-config

Remember that you need to run `source ~/.bashrc` after you edit your .bashrc.

As an alternative to using .bashrc, you can make a softlink mapping the versioned llvm-config (here llvm-config-9) to llvm-config.
$ cd /usr/bin
$ sudo ln -s llvm-config-9 llvm-config

The installation of llvm must succeed before installing llvmlite. If the llvmlite installation (the last line of the script above) succeeds, the output looks like this:

Collecting llvmlite
Installing collected packages: llvmlite
Successfully installed llvmlite-0.33.0

If you face an error regarding `setuptools`, you may upgrade your setuptools as follows.

python3.6 -m pip install --user -U setuptools

Now we can install numba. We need numba version 0.48 to install librosa 0.7.2. To install Numba, we must first move tbb.h out of the way (see the sources at the end of this post).

sudo mv /usr/include/tbb/tbb.h /usr/include/tbb/tbb.h.bak

Then, install numba as follows:
bagus@s1820002:~$ python3.6 -m pip install --user -U numba==0.48
Collecting numba==0.48
Collecting setuptools (from numba==0.48) Using cached
 https://files.pythonhosted.org/packages/41/fa/60888a1d591db07bc9c17dce2bcfb9f00ac507c0a23ecb827e76feb8f816/setuptools-49.1.0-py3-none-any.whl
Collecting numpy>=1.15 (from numba==0.48)
Collecting llvmlite<0.32.0,>=0.31.0dev0 (from numba==0.48)
Installing collected packages: setuptools, numpy, llvmlite, numba
Successfully installed llvmlite-0.33.0 numba-0.50.1 numpy-1.19.0 setuptools-49.1.0

Again, numba must install without errors before Librosa can be installed.
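A quick way to confirm that the numba/llvmlite installation really works (my own sketch, not part of the pip log above): importing numba only succeeds if llvmlite can bind to the installed LLVM.

# check_numba.py -- sanity check before moving on to librosa
import llvmlite
import numba

print("llvmlite", llvmlite.__version__)
print("numba", numba.__version__)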
Before installing librosa, the following packages must be installed via apt.
sudo apt-get install libblas-dev liblapack-dev libatlas-base-dev gfortran

Finally, we can install librosa via pip as below.
bagus@s1820002:~$ python3.6 -m pip install --user -U librosa
Collecting librosa
Collecting resampy>=0.2.2 (from librosa)
Collecting numpy>=1.15.0 (from librosa)
Collecting numba>=0.43.0 (from librosa)
Collecting scikit-learn!=0.19.0,>=0.14.0 (from librosa)
Collecting joblib>=0.12 (from librosa) Using cached
 https://files.pythonhosted.org/packages/51/dd/0e015051b4a27ec5a58b02ab774059f3289a94b0906f880a3f9507e74f38/joblib-0.16.0-py3-none-any.whl
Collecting scipy>=1.0.0 (from librosa)
Collecting soundfile>=0.9.0 (from librosa) Using cached
 https://files.pythonhosted.org/packages/eb/f2/3cbbbf3b96fb9fa91582c438b574cff3f45b29c772f94c400e2c99ef5db9/SoundFile-0.10.3.post1-py2.py3-none-any.whl
Collecting decorator>=3.0.0 (from librosa) Using cached
 https://files.pythonhosted.org/packages/ed/1b/72a1821152d07cf1d8b6fce298aeb06a7eb90f4d6d41acec9861e7cc6df0/decorator-4.4.2-py2.py3-none-any.whl
Collecting six>=1.3 (from librosa) Using cached
 https://files.pythonhosted.org/packages/ee/ff/48bde5c0f013094d729fe4b0316ba2a24774b3ff1c52d924a8a4cb04078a/six-1.15.0-py2.py3-none-any.whl
Collecting audioread>=2.0.0 (from librosa)
Collecting llvmlite<0.34,>=0.33.0.dev0 (from numba>=0.43.0->librosa)
Collecting setuptools (from numba>=0.43.0->librosa) Using cached
 https://files.pythonhosted.org/packages/41/fa/60888a1d591db07bc9c17dce2bcfb9f00ac507c0a23ecb827e76feb8f816/setuptools-49.1.0-py3-none-any.whl
Collecting threadpoolctl>=2.0.0 (from scikit-learn!=0.19.0,>=0.14.0->librosa) Using cached
 https://files.pythonhosted.org/packages/f7/12/ec3f2e203afa394a149911729357aa48affc59c20e2c1c8297a60f33f133/threadpoolctl-2.1.0-py3-none-any.whl
Collecting cffi>=1.0 (from soundfile>=0.9.0->librosa)
Collecting pycparser (from cffi>=1.0->soundfile>=0.9.0->librosa) Using cached
 https://files.pythonhosted.org/packages/ae/e7/d9c3a176ca4b02024debf82342dab36efadfc5776f9c8db077e8f6e71821/pycparser-2.20-py2.py3-none-any.whl
Installing collected packages: six, numpy, scipy, llvmlite, setuptools, numba, resampy, threadpoolctl, joblib, scikit-learn, pycparser, cffi, soundfile, decorator, audioread, librosa
Successfully installed audioread-2.1.8 cffi-1.14.0 decorator-4.4.2 joblib-0.16.0 librosa-0.7.2 llvmlite-0.33.0 numba-0.50.1 numpy-1.19.0 pycparser-2.20 resampy-0.2.2 scikit-learn-0.23.1 scipy-1.5.1 setuptools-49.1.0 six-1.15.0 soundfile-0.10.3.post1 threadpoolctl-2.1.0

Now everything seems good. We can do audio processing on the Jetson AGX. Librosa is the best audio library in Python so far, so having it installed on the Jetson is a basic requirement.
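As an illustration, here is a minimal sketch of the kind of processing this enables (my own example, not from the installation above; "speech.wav" is a placeholder filename):

# mel_example.py -- small sketch with librosa 0.7.2; "speech.wav" is a placeholder
import librosa

y, sr = librosa.load("speech.wav", sr=16000)                 # load and resample to 16 kHz
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40)  # 40-band mel spectrogram
log_mel = librosa.power_to_db(mel)                           # log compression
print(log_mel.shape)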

BONUS: Installing TensorFlow
Step-by-step:
1. Install the following packages.
sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev
2. Install/upgrade testresources
python3.6 -m pip install --user -U testresources
3. Install other Python dependencies.
python3.6 -m pip install --user future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11
4. Install tensorflow.
python3.6 -m pip install --user --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v43 'tensorflow<2' 
output:
Successfully installed absl-py-0.9.0 astor-0.8.1 google-pasta-0.2.0 grpcio-1.30.0 importlib-metadata-1.7.0 markdown-3.2.2 opt-einsum-3.2.1 tensorboard-1.15.0 tensorflow-1.15.2+nv20.3.tf1 tensorflow-estimator-1.15.1 termcolor-1.1.0 werkzeug-1.0.1 wrapt-1.12.1 zipp-3.1.0
5. Test the GPU device.
import tensorflow as tf
tf.test.gpu_device_name()
output:
>>> tf.test.gpu_device_name()
2020-07-10 16:01:36.242822: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-07-10 16:01:36.244367: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x37b6c3f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-07-10 16:01:36.244536: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-07-10 16:01:36.259171: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-07-10 16:01:36.432077: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-07-10 16:01:36.432674: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x37a81480 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-07-10 16:01:36.432768: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Xavier, Compute Capability 7.2
2020-07-10 16:01:36.433580: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-07-10 16:01:36.433892: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.377
pciBusID: 0000:00:00.0
2020-07-10 16:01:36.433987: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-07-10 16:01:36.467839: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-07-10 16:01:36.496813: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-07-10 16:01:36.534413: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-07-10 16:01:36.576197: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-07-10 16:01:36.600464: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-07-10 16:01:36.688185: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-10 16:01:36.688573: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-07-10 16:01:36.690082: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-07-10 16:01:36.690253: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2020-07-10 16:01:36.690456: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-07-10 16:01:38.484495: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-10 16:01:38.484668: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186]      0 
2020-07-10 16:01:38.484715: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0:   N 
2020-07-10 16:01:38.485215: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-07-10 16:01:38.485695: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-07-10 16:01:38.485953: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/device:GPU:0 with 23699 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
'/device:GPU:0'

Sources:
[1] https://github.com/jefflgaol/Install-Packages-Jetson-ARM-Family
[2] https://forums.developer.nvidia.com/t/install-python-packages-librosa-jetson-tx2-developer-kit-problem/126337/5
[3] https://learninone209186366.wordpress.com/2019/07/24/how-to-install-the-librosa-library-in-jetson-nano-or-aarch64-module/

Friday, July 03, 2020

Outline: Human auditory perception and its models

Below is an outline of my lecture notes for "Human Perceptual Systems and its Models", taught by Prof. Unoki at JAIST in 2018. So that they do not simply evaporate, I list those notes here. The course is very useful for anyone who wants to learn how the human auditory system works and how to model it computationally.
  1. Introduction to human perception and its models
  2. Auditory signal processing
  3. Sound representation and processing
  4. Sound signal analysis and synthesis techniques
  5. Physiology of the auditory periphery and its models
  6. Nonlinearity of the auditory system
  7. Attention and its relation to the cocktail party problem and deep learning
  8. Asynchrony between eyes and ears: the McGurk effect
The most interesting part of the course is how to build models that imitate the way the human auditory system works. Although not everything can be modeled exactly, and not every model that imitates human processing outperforms other models (for example, deep learning), the model with the best results is the one that gets used. Usually that best model is a compromise between imitating human processing and, say, deep learning.