Device: AGX Xavier Dev Kit (32 GB RAM)
Software:
OS: Ubuntu 18.04
nvidia-jetpack: 4.4-b186
python: 3.6.9
llvm: 9.0.1 (and 7.1.0)
First, we need to install llvmlite, which in turn requires LLVM. LLVM can be installed either from source or via apt; I installed it from source as shown below.
cd /tmp
wget https://github.com/llvm/llvm-project/releases/download/llvmorg-9.0.1/llvm-9.0.1.src.tar.xz
tar -xvf llvm-9.0.1.src.tar.xz
cd llvm-9.0.1.src/
mkdir llvm_build_dir
cd llvm_build_dir
cmake ../ -DCMAKE_BUILD_TYPE=Release -DLLVM_TARGETS_TO_BUILD="ARM;X86;AArch64"
make -j4
sudo make install
cd bin/
echo "export LLVM_CONFIG=\""`pwd`"/llvm-config\"" >> ~/.bashrc
echo "alias llvm='"`pwd`"/llvm-lit'" >> ~/.bashrc
source ~/.bashrc
python3.6 -m pip install --user -U llvmlite==0.31
If you want to install via apt, do it as follows. I used llvm-7 from the Ubuntu repository (with llvm-9, installing llvmlite 0.31 fails, but 0.33 succeeds).
sudo apt install llvm-7
If you install llvm via apt, you need to specify where llvm-config is located in your .bashrc. For example, here is mine. First, locate llvm-config.
s1820002@s1820002-desktop:~$ locate llvm-config
/usr/bin/llvm-config-7
/usr/include/llvm-7/llvm/Config/llvm-config.h
/usr/lib/llvm-7/bin/llvm-config
Then add the location of llvm-config to .bashrc.
export LLVM_CONFIG=/usr/lib/llvm-7/bin/llvm-config
Remember that you need to run `source ~/.bashrc` after you edit your .bashrc.
As an alternative to using .bashrc, you can create a symlink that maps llvm-config-9 to llvm-config.
$ cd /usr/bin
$ sudo ln -s llvm-config-9 llvm-config
The LLVM installation must succeed before installing llvmlite. If the llvmlite installation (the last line of the script above) succeeds, the output looks like this:
Collecting llvmlite
Installing collected packages: llvmlite
Successfully installed llvmlite-0.33.0
If you face an error regarding `setuptools`, you may upgrade your setuptools as follows.
python3.6 -m pip install --user -U setuptools
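Before moving on, it is worth sanity-checking the llvmlite install. Here is a minimal sketch (run it with python3.6); the exact version printed depends on whether you installed 0.31 or 0.33:

import llvmlite
import llvmlite.binding as llvm  # importing the binding fails if llvmlite cannot find a working LLVM

print(llvmlite.__version__)  # e.g. 0.31.0 or 0.33.0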
Now, we can install numba. We need numba version 0.48 to install librosa 0.7.2. To install numba, we must first disable tbb.h (a known workaround when building numba on Jetson), as follows.
sudo mv /usr/include/tbb/tbb.h /usr/include/tbb/tbb.h.bak
Then, install numba as follows:
bagus@s1820002:~$ python3.6 -m pip install --user -U numba==0.48
Collecting numba==0.48
Collecting setuptools (from numba==0.48)
  Using cached https://files.pythonhosted.org/packages/41/fa/60888a1d591db07bc9c17dce2bcfb9f00ac507c0a23ecb827e76feb8f816/setuptools-49.1.0-py3-none-any.whl
Collecting numpy>=1.15 (from numba==0.48)
Collecting llvmlite<0.32.0,>=0.31.0dev0 (from numba==0.48)
Installing collected packages: setuptools, numpy, llvmlite, numba
Successfully installed llvmlite-0.33.0 numba-0.50.1 numpy-1.19.0 setuptools-49.1.0
Again, numba must install without errors before librosa can be installed.
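To confirm that numba can actually JIT-compile against the installed llvmlite, here is a minimal sketch (the add function is just an example; run it with python3.6):

from numba import njit

@njit
def add(a, b):
    # trivial JIT-compiled function; if this prints 3, numba and llvmlite work together
    return a + b

print(add(1, 2))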
Before installing librosa, the following packages must be installed via apt.
sudo apt-get install libblas-dev liblapack-dev libatlas-base-dev gfortran
Finally, we can install librosa via pip as below.
bagus@s1820002:~$ python3.6 -m pip install --user -U librosa
Collecting librosa
Collecting resampy>=0.2.2 (from librosa)
Collecting numpy>=1.15.0 (from librosa)
Collecting numba>=0.43.0 (from librosa)
Collecting scikit-learn!=0.19.0,>=0.14.0 (from librosa)
Collecting joblib>=0.12 (from librosa)
  Using cached https://files.pythonhosted.org/packages/51/dd/0e015051b4a27ec5a58b02ab774059f3289a94b0906f880a3f9507e74f38/joblib-0.16.0-py3-none-any.whl
Collecting scipy>=1.0.0 (from librosa)
Collecting soundfile>=0.9.0 (from librosa)
  Using cached https://files.pythonhosted.org/packages/eb/f2/3cbbbf3b96fb9fa91582c438b574cff3f45b29c772f94c400e2c99ef5db9/SoundFile-0.10.3.post1-py2.py3-none-any.whl
Collecting decorator>=3.0.0 (from librosa)
  Using cached https://files.pythonhosted.org/packages/ed/1b/72a1821152d07cf1d8b6fce298aeb06a7eb90f4d6d41acec9861e7cc6df0/decorator-4.4.2-py2.py3-none-any.whl
Collecting six>=1.3 (from librosa)
  Using cached https://files.pythonhosted.org/packages/ee/ff/48bde5c0f013094d729fe4b0316ba2a24774b3ff1c52d924a8a4cb04078a/six-1.15.0-py2.py3-none-any.whl
Collecting audioread>=2.0.0 (from librosa)
Collecting llvmlite<0.34,>=0.33.0.dev0 (from numba>=0.43.0->librosa)
Collecting setuptools (from numba>=0.43.0->librosa)
  Using cached https://files.pythonhosted.org/packages/41/fa/60888a1d591db07bc9c17dce2bcfb9f00ac507c0a23ecb827e76feb8f816/setuptools-49.1.0-py3-none-any.whl
Collecting threadpoolctl>=2.0.0 (from scikit-learn!=0.19.0,>=0.14.0->librosa)
  Using cached https://files.pythonhosted.org/packages/f7/12/ec3f2e203afa394a149911729357aa48affc59c20e2c1c8297a60f33f133/threadpoolctl-2.1.0-py3-none-any.whl
Collecting cffi>=1.0 (from soundfile>=0.9.0->librosa)
Collecting pycparser (from cffi>=1.0->soundfile>=0.9.0->librosa)
  Using cached https://files.pythonhosted.org/packages/ae/e7/d9c3a176ca4b02024debf82342dab36efadfc5776f9c8db077e8f6e71821/pycparser-2.20-py2.py3-none-any.whl
Installing collected packages: six, numpy, scipy, llvmlite, setuptools, numba, resampy, threadpoolctl, joblib, scikit-learn, pycparser, cffi, soundfile, decorator, audioread, librosa
Successfully installed audioread-2.1.8 cffi-1.14.0 decorator-4.4.2 joblib-0.16.0 librosa-0.7.2 llvmlite-0.33.0 numba-0.50.1 numpy-1.19.0 pycparser-2.20 resampy-0.2.2 scikit-learn-0.23.1 scipy-1.5.1 setuptools-49.1.0 six-1.15.0 soundfile-0.10.3.post1 threadpoolctl-2.1.0
Now everything looks good, and we can do audio processing on the Jetson AGX. Librosa is, in my opinion, the best audio library in Python so far, so having it installed on the Jetson is a basic requirement for audio work.
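As a quick smoke test of the new install, the sketch below loads a WAV file and computes MFCC features; the path audio.wav is only a placeholder, so point it at any audio file you have:

import librosa

# load the audio (placeholder path), resampled to 16 kHz mono
y, sr = librosa.load("audio.wav", sr=16000)

# extract 13 MFCC coefficients per frame
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfcc.shape)  # (13, number_of_frames)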
BONUS: Installing TensorFlow
Step-by-step:
1. Install the following packages.
sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev
2. Install/upgrade testresources.
python3.6 -m pip install --user -U testresources
3. Install other Python dependencies.
python3.6 -m pip install --user -U future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11
4. Install tensorflow.
python3.6 -m pip install --user --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v43 'tensorflow<2'

Output:
Successfully installed absl-py-0.9.0 astor-0.8.1 google-pasta-0.2.0 grpcio-1.30.0 importlib-metadata-1.7.0 markdown-3.2.2 opt-einsum-3.2.1 tensorboard-1.15.0 tensorflow-1.15.2+nv20.3.tf1 tensorflow-estimator-1.15.1 termcolor-1.1.0 werkzeug-1.0.1 wrapt-1.12.1 zipp-3.1.0
5. Test the GPU device.
import tensorflow as tf
tf.test.gpu_device_name()

Output:
>>> tf.test.gpu_device_name()
2020-07-10 16:01:36.242822: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-07-10 16:01:36.244367: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x37b6c3f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-07-10 16:01:36.244536: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-07-10 16:01:36.259171: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-07-10 16:01:36.432077: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-07-10 16:01:36.432674: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x37a81480 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-07-10 16:01:36.432768: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Xavier, Compute Capability 7.2
2020-07-10 16:01:36.433580: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-07-10 16:01:36.433892: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.377 pciBusID: 0000:00:00.0
2020-07-10 16:01:36.433987: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-07-10 16:01:36.467839: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-07-10 16:01:36.496813: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-07-10 16:01:36.534413: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-07-10 16:01:36.576197: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-07-10 16:01:36.600464: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-07-10 16:01:36.688185: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-10 16:01:36.688573: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-07-10 16:01:36.690082: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-07-10 16:01:36.690253: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2020-07-10 16:01:36.690456: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-07-10 16:01:38.484495: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-10 16:01:38.484668: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186]      0
2020-07-10 16:01:38.484715: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0:   N
2020-07-10 16:01:38.485215: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-07-10 16:01:38.485695: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-07-10 16:01:38.485953: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/device:GPU:0 with 23699 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
'/device:GPU:0'
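Listing the device name only proves that TensorFlow sees the GPU. If you also want to see an op actually run on it, here is a minimal sketch using the TF 1.x graph API; log_device_placement prints where each op is placed:

import tensorflow as tf

# build a small graph: multiply two random 1000x1000 matrices
a = tf.random.normal([1000, 1000])
b = tf.random.normal([1000, 1000])
c = tf.matmul(a, b)

# with a visible GPU, TF 1.x places matmul on /device:GPU:0 by default
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    result = sess.run(c)
    print(result.shape)  # (1000, 1000)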
Sources:
[1] https://github.com/jefflgaol/Install-Packages-Jetson-ARM-Family
[2] https://forums.developer.nvidia.com/t/install-python-packages-librosa-jetson-tx2-developer-kit-problem/126337/5
[3] https://learninone209186366.wordpress.com/2019/07/24/how-to-install-the-librosa-library-in-jetson-nano-or-aarch64-module/