GPU basecalling with MinION

A while ago I struggled with enabling GPU live basecalling in MinKNOW on non-GridION systems. Naturally, ONT doesn’t provide an easy way to use a GPU in a custom machine; otherwise there wouldn’t be much motivation to buy a GridION, right? Still, it turns out you can enable live GPU basecalling in MinKNOW, provided you have a CUDA-capable GPU in your computer. Below I’ll briefly describe what needs to be done. I’m assuming you already have MinKNOW and a GPU with CUDA support installed.

First of all, make sure you have CUDA version 6+ correctly installed in your system (instructions for installing CUDA are here).

nvidia-smi

If you see something like the image below, you are ready to go 🙂

Now you’ll need to get guppy binaries with CUDA support, as those provided with MinKNOW have no GPU support. You can get them from the ONT website. Note that the guppy major and minor versions have to match the version currently used by MinKNOW. You can check this version using:

/opt/ont/minknow/guppy/bin/guppy_basecall_server -v
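If you want to script the version check, a small sketch along these lines pulls out the major.minor series you need to download (the `major_minor` helper is my own, not part of guppy; the server path is the one used above):

```shell
# Hypothetical helper: extract "major.minor" from a guppy version string.
major_minor() {
  echo "$1" | grep -oE '[0-9]+\.[0-9]+' | head -n1
}

# Ask the bundled basecall server for its version (path as above),
# then report which guppy series to download from ONT.
ver=$(/opt/ont/minknow/guppy/bin/guppy_basecall_server -v 2>/dev/null \
      | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)
echo "MinKNOW uses guppy $ver -> download the $(major_minor "$ver").x CUDA build"
```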

So I can install guppy v4.0.x (I chose v4.0.15) with CUDA support using the commands below (note, you may need to adjust the version depending on the output of the previous command):

mkdir -p ~/src; cd ~/src
# you may need to change the guppy version
wget https://mirror.oxfordnanoportal.com/software/analysis/ont-guppy_4.0.15_linux64.tar.gz
tar xpfz ont-guppy_4.0.15_linux64.tar.gz
mv ont-guppy ont-guppy_4.0.15

Now just link your guppy binaries inside /opt/ont/minknow (again, you may need to adjust the guppy version here):

cd /opt/ont/minknow
sudo mv guppy guppy0
# you may need to change the guppy version
sudo ln -s ~/src/ont-guppy_4.0.15 guppy

Then edit /opt/ont/minknow/conf/app_conf (use sudo!) and change the line with gpu_calling to true, and set num_threads and ipc_threads to 3 and 2, respectively (you can also define which GPUs you want to enable; by default all available CUDA devices will be used):

    "gpu_calling": true,  
    "gpu_devices": "cuda:all",
    ...
    "num_threads": 3,
    "ipc_threads": 2, 
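If you prefer to script this edit, a minimal sed sketch like the one below should work. This assumes the keys appear in app_conf exactly as shown above (with gpu_calling initially false); back up the file first and review the result:

```shell
# Hypothetical helper: rewrite the three settings in app_conf text on stdin.
# Assumes the keys appear exactly as `"gpu_calling": false` etc. in the file.
enable_gpu_conf() {
  sed -e 's/"gpu_calling": *false/"gpu_calling": true/' \
      -e 's/"num_threads": *[0-9]*/"num_threads": 3/' \
      -e 's/"ipc_threads": *[0-9]*/"ipc_threads": 2/'
}

# Usage (commented out; needs sudo and a real MinKNOW install):
# sudo cp /opt/ont/minknow/conf/app_conf /opt/ont/minknow/conf/app_conf.bak
# enable_gpu_conf < /opt/ont/minknow/conf/app_conf.bak \
#   | sudo tee /opt/ont/minknow/conf/app_conf > /dev/null
```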

Finally, close the MinKNOW client (if one is running) and restart the MinKNOW system service:

sudo service minknow stop && sudo killall guppy_basecall_server && sudo service minknow start

Now you should see guppy using the GPU (-x cuda:all), and your GPU will be used if you run sequencing with live basecalling. Note, you can monitor your GPU usage using gpustat or glances.

ps ax | grep guppy_basecall_server
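That check can be wrapped in a small script; `gpu_server_running` here is my own helper, not an ONT tool, and it simply greps the process listing for the -x cuda flag:

```shell
# Hypothetical helper: does a process listing contain a GPU-enabled
# guppy_basecall_server (i.e. one started with `-x cuda...`)?
gpu_server_running() {
  echo "$1" | grep guppy_basecall_server | grep -q -- '-x cuda'
}

if gpu_server_running "$(ps ax)"; then
  echo "guppy_basecall_server is running with GPU support (-x cuda)"
else
  echo "no GPU-enabled guppy_basecall_server found"
fi
```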

Voila!

Monitoring GPU usage

If you (like me) happen to be a performance freak, most likely you are well aware of process viewers like htop. Since I started working with GPU computing, I have missed an htop-like tool tailored to monitoring GPU usage. This becomes even more of an issue if you’re working with multi-GPU setups.

You can use `nvidia-smi`, which is shipped with the NVIDIA drivers, but it’s not very interactive.

gpustat provides a nice and interactive view of the running processes and the resources used across your GPUs, but you’ll need to switch between windows if you also want to monitor CPU usage.

pip install -U gpustat
gpustat -i

Some time ago I discovered glances, a really powerful htop replacement. What’s best about glances (at least for me) is that besides I/O and sensor information, it can show GPU usage. This is possible thanks to py3nvml.

pip install -U glances py3nvml
glances

At first the glances window may look a bit overwhelming, but after a few uses you’ll likely fall in love with it!

And what’s your favorite GPU process viewer?

How to setup NVIDIA / CUDA for accelerated computation in Ubuntu 18.04 / Gnome / X11 workstation?

I experienced some difficulties when I tried to enable CUDA on my workstation. They were mostly system lags while I was performing CUDA computations, because Gnome/Xserver were using the NVIDIA card. I realised you’d be much better off using your integrated graphics card for the system and leaving the NVIDIA GPU only for serious tasks 🙂 Note, this will disable the NVIDIA GPU for GNOME / X11 and also for gaming, so be aware…

Below I’ll briefly describe how I installed the NVIDIA drivers and configured Ubuntu 18.04 with Gnome3 and Xserver for comfortable CUDA computations.

It’s best to install the CUDA toolkit and drivers before you plug in the card, as just plugging the card in may otherwise cause issues with running Ubuntu (it did in my case). To install the NVIDIA drivers, just follow the official NVIDIA guide.

Then, after a reboot, plug the card into your computer and select the integrated card as your main card in the BIOS. In my BIOS it was under Advanced > Built-in Device Options > Select Boot card > CPU integrated or Nvidia GPU.

If you experience any problems, uncomment WaylandEnable=false in /etc/gdm3/custom.conf to use X11 for GDM and Gnome. Don’t do that if you plan to use Wayland!
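For reference, on Ubuntu 18.04 the relevant part of /etc/gdm3/custom.conf looks roughly like this (the line ships commented out; remove the leading # to force X11):

```
[daemon]
# Uncomment the line below to turn off Wayland and force X11 for GDM/Gnome
WaylandEnable=false
```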

Now make sure the NVIDIA card is plugged in and working.

# show available graphics cards
lspci -k | grep -A 2 -i "VGA"

If you installed the drivers from the NVIDIA website, you may need to restore the java alternative:

sudo rm /etc/alternatives/java
jpath=/opt/java/jre1.8.0_211/bin
sudo ln -s $jpath/java /etc/alternatives/java

Make sure to switch to the integrated graphics card using either:

  • nvidia-settings > PRIME Profiles and select Intel (Power Saving Mode) (this should work for both X11 and Wayland)
  • or by editing /etc/X11/xorg.conf to something like this (if you use Wayland, this won’t work!):
Section "Device"
    Identifier "Intel"
    Driver "intel"
    Option "AccelMethod" "uxa"
EndSection

Reboot your system and make sure Gnome isn’t using the NVIDIA GPU (there should be no processes running on your GPU after the reboot).

# check processes running on the GPU
nvidia-smi
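If you want a one-number answer instead of the full table, nvidia-smi can list just the PIDs of compute processes via its --query-compute-apps option. The `count_gpu_procs` helper below is my own sketch, not an NVIDIA tool:

```shell
# Hypothetical helper: count PIDs in `nvidia-smi --query-compute-apps=pid
# --format=csv,noheader` output (one PID per line, empty when the GPU is idle).
count_gpu_procs() {
  echo "$1" | grep -c '[0-9]'
}

procs=$(nvidia-smi --query-compute-apps=pid --format=csv,noheader 2>/dev/null)
echo "processes on GPU: $(count_gpu_procs "$procs")"
```

After the reboot this should report 0 processes if Gnome is correctly running on the integrated card.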

Now, when you run any CUDA computation, your system shouldn’t be affected by high NVIDIA GPU usage.