GPU basecalling with MinION

A while ago I was struggling with enabling GPU live basecalling in MinKNOW on non-GridION systems. Naturally, ONT doesn’t provide an easy way to use a GPU in your custom machine; otherwise there wouldn’t be much motivation to buy a GridION, right? Still, it turns out you can enable live GPU basecalling in MinKNOW, provided your computer has a CUDA-capable GPU. Below I’ll briefly describe what needs to be done. I’m assuming you already have MinKNOW and a GPU with CUDA support installed.

First of all, make sure you have CUDA version 6+ correctly installed in your system (instructions for installing CUDA are here). You can check it with:

nvidia-smi

If you see something like the image below, you are ready to go 🙂

Now you’ll need to get guppy binaries with CUDA support, as those provided with MinKNOW have no GPU support. You can get them from the ONT website. Note that the guppy major and minor versions have to match the version currently used by MinKNOW. You can check this version using:

/opt/ont/minknow/guppy/bin/guppy_basecall_server -v

So I can install guppy v4.0.x (I chose v4.0.15) with CUDA support as follows (note that you may need to adjust the version in the commands below depending on what the previous command reported):

mkdir -p ~/src; cd ~/src
# you may need to change the guppy version
wget https://mirror.oxfordnanoportal.com/software/analysis/ont-guppy_4.0.15_linux64.tar.gz
tar xpfz ont-guppy_4.0.15_linux64.tar.gz
mv ont-guppy ont-guppy_4.0.15

Now just link your guppy binaries inside /opt/ont/minknow (again, you may need to adjust the guppy version here):

cd /opt/ont/minknow
sudo mv guppy guppy0
# you may need to change the guppy version
sudo ln -s ~/src/ont-guppy_4.0.15 guppy

Then edit /opt/ont/minknow/conf/app_conf (use sudo!) and change the gpu_calling line to true, and set num_threads and ipc_threads to 3 and 2, respectively (you can also define which GPUs to enable; by default all available CUDA devices will be used):

    "gpu_calling": true,  
    "gpu_devices": "cuda:all",
    ...
    "num_threads": 3,
    "ipc_threads": 2, 

Finally, close the MinKNOW client (if any is running) and restart the MinKNOW system service:

sudo service minknow stop && sudo killall guppy_basecall_server && sudo service minknow start

Now you should see guppy running with GPU support (-x cuda:all), and your GPU will be used when you run sequencing with live basecalling. Note that you can monitor GPU usage with gpustat or glances (see below).

ps ax | grep guppy_basecall_server
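
For quick monitoring, something along these lines should work (gpustat is a small Python tool from PyPI; adjust the install command to your setup):

pip install gpustat
# refresh GPU utilisation and memory usage every second
watch -n 1 gpustat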

Voila!

Ubuntu with Gnome extensions for productivity

Some time ago I wrote about KDE configuration. I’ve been using KDE for a while on my personal laptop, but never really got into using it on my workstation. I simply find Gnome a much more productive environment in the long term. (Disclaimer: I’m likely very biased here, since I’ve been using Gnome-like desktops for over 10 years now and they feel natural to me. Still, I think KDE is fantastic.)

Gnome 3 comes with extensions. They are really cool, but be aware that some extensions may break from release to release, and some may have certain incompatibilities. Below I briefly describe which extensions I’m currently using and why (these should work fine with both Ubuntu 18.04 and 20.04):

  • Workspace Matrix arranges your virtual desktops into a 2D grid, letting you switch easily between rows/columns with Ctrl+Alt+arrow keys.
  • Unite (No Title Bar – Forked and PixelSaver have problems with Ubuntu 22.04) gets rid of the window title bar. That’s very useful on small laptop screens, but I also use it with dual monitors at work, as the title bar is just a waste of space…
  • system-monitor shows details about system usage (CPU, RAM, I/O) right in your system tray.
  • Bing Wallpaper Changer fetches really good wallpapers daily. You can read a bit more about each picture here – it’s a great resource if you’re looking for places worth visiting in your vicinity!
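
If you prefer the command line, newer Gnome versions (3.34+, so Ubuntu 20.04 onwards) also ship a gnome-extensions tool for managing extensions; a minimal example (UUID is a placeholder taken from the list output):

# list installed extensions and their UUIDs
gnome-extensions list
# enable or disable an extension by its UUID
gnome-extensions enable UUID
gnome-extensions disable UUID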

On top of that, definitely try:

  • guake is a drop-down terminal. It’s super useful if you need quick access to a terminal across multiple desktops. Funnily enough, I fell in love with yakuake in KDE first, only to learn that the Gnome version is closer to my ideal 😛
  • glances (discussed earlier for GPU monitoring) or htop for process viewing
  • screen (or, even better, tmux) for terminal multiplexing. This comes in handy especially if you work remotely a lot.
  • workrave will remind you to take a break once in a while. Try it, it’s really healthy!

If you find package installation/updates slow, definitely check out apt-fast:

sudo add-apt-repository ppa:apt-fast/stable 
sudo apt-get update 
sudo apt-get -y install apt-fast

Finally, I’d recommend isolating windows from individual workspaces, both for the dock and the app switcher (this will show only windows from the current desktop in the dock and when Alt+Tab / ` is pressed):

gsettings set org.gnome.shell.extensions.dash-to-dock isolate-workspaces true
gsettings set org.gnome.shell.app-switcher current-workspace-only true

The above will result in a desktop similar to this

Do you have any recommendations or Gnome-related tricks?

Edits:

To get rid of the terminal title bar, follow this.

To get an extension to work with newer versions of Gnome, edit its config file as explained here.

Python code profiling and accelerating your calculations with numba

You wrote up your excellent idea as a Python program/module, but you are unsatisfied with its performance. Chances are high that most of us have been there at least once. I was there last week.

I found an excellent method for outlier detection, the Extended Isolation Forest (eIF). eIF was initially written in Python and later optimised in Cython (using C++). The C++ version is ~40x faster than the vanilla Python version, but it lacks the possibility to save the model (which is crucial for my project). Since adding model saving to the C++ version is rather complicated business, I decided to optimise the Python code. Initially I hoped for a ~5-10x speed improvement. The final result surprised me, as the rewritten Python code was ~40x faster than the initial version, matching the C++ version’s performance!

How is that possible? Speeding up your code isn’t trivial. First you need to find out which parts of your code are slow (so-called code profiling). Once you know that, you can start tinkering with the code itself (code optimisation).

Code profiling

Traditionally I’ve been relying on %timeit, which reports precise execution times for expressions in Python.

%timeit F3.fit(X)
# 1.25 s ± 792 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

As awesome as %timeit is, it won’t really tell you which parts of your code are time consuming. At least not directly. For that you’ll need something more advanced.

Code profiling became easier thanks to line_profiler. You can install, load, and use it in a Jupyter notebook as follows:

# install line_profiler in your system
!pip install line_profiler 
# load the module into current Jupyter notebook
%load_ext line_profiler

# evaluate populate_nodes function of F3.fit program
%lprun -f F3.populate_nodes F3.fit(X)

The example above tells you that although line 134 takes just 11.7 µs per execution, it accounts for 42.5% of the total execution time, as it is executed over 32k times. So starting code optimisation from this single line could have a dramatic effect on the overall execution time.

Code optimisation

The first thing I noticed in the original Python code was that, in order to calculate the outlier score, individual samples were streamed one by one through the individual trees of the iForest:

        for i in  range(len(X_in)):
            h_temp = 0
            for j in range(self.ntrees):
                h_temp += PathFactor(X_in[i],self.Trees[j]).path*1.0            # Compute path length for each point
            Eh = h_temp/self.ntrees                                             # Average of path length travelled by the point in all trees.
            S[i] = 2.0**(-Eh/self.c)                                            # Anomaly Score
        return S

Since these are operations on arrays, a lot of time can be saved if either all samples are processed by each tree at once, or each sample is processed by all trees at once. Implementing this wasn’t difficult and, combined with cleaning unnecessary variables & classes out of the code, resulted in a ~6-7x speed-up.
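
For illustration, here is a minimal sketch of the reorganised loop (names are simplified; path_length_for_tree is a hypothetical helper standing in for whatever routine computes the path lengths of all samples in one tree at once):

import numpy as np

def anomaly_scores(X, trees, c):
    """Sketch: accumulate path lengths of all samples per tree, then score."""
    Eh = np.zeros(len(X))
    for tree in trees:
        # path lengths of all samples in this tree, computed as one array operation
        Eh += path_length_for_tree(tree, X)  # hypothetical vectorised helper
    Eh /= len(trees)          # average path length per sample over all trees
    return 2.0 ** (-Eh / c)   # anomaly score, as in the original code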

Speeding up array operations with numba

Further improvements were much milder and required detailed code profiling. As mentioned above, a single line took 42% of the overall execution time. Upon closer inspection, I realised that calling X.min(axis=0) and X.max(axis=0) was really time-consuming.

import numpy as np

x = np.random.random(size=(256, 12))
%timeit x.min(axis=0), x.max(axis=0)
# 15.6 µs ± 43.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

Python code can be optimised with numba. For example, calculating the min and max simultaneously using the numba just-in-time compiler results in over 7x faster execution!

from numba import jit

@jit
def minmax(x):
    """np.min(x, axis=0), np.max(x, axis=0) for 2D array but faster"""
    m, n = len(x), len(x[0])
    mi, ma = np.empty(n), np.empty(n)
    mi[:] = ma[:] = x[0]
    for i in range(1, m):
        for j in range(n):
            if x[i, j]>ma[j]: ma[j] = x[i, j]
            elif x[i, j]<mi[j]: mi[j] = x[i, j]
    return mi, ma

%timeit minmax(x) 
# 2.19 µs ± 4.61 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

# make sure the results are the same
np.allclose(minmax(x), (x.min(axis=0), x.max(axis=0)))

Apart from that, there were several other parts that could be optimised with numba. You can have a look at eif_new.py and compare it with the older and C++ versions using this notebook. If you want to know the details, just comment below – I’ll be more than happy to discuss them 🙂

If you’re looking for ways to speed up array operations, definitely check out numexpr besides numba. The eIF case didn’t really need numexpr optimisations, but it’s a really impressive project and I can imagine many people could benefit from it. So spread the word!
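
Just to give a flavour (a toy example, not taken from the eIF code): numexpr compiles the whole expression and evaluates it in a single multi-threaded pass, avoiding temporary arrays.

import numpy as np
import numexpr as ne

a = np.random.random(10**7)
b = np.random.random(10**7)
# evaluate the whole expression at once, without allocating intermediates
c = ne.evaluate("2*a + 3*b**2")
# same result as plain numpy
assert np.allclose(c, 2*a + 3*b**2)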

Investigate & reduce the size of Drupal sqlite3 database

Today, while performing a regular Drupal update and backup, I realised the Drupal sqlite3 database sites/default/files/.ht.sqlite is over 440 MB! I found this peculiar, as our website isn’t storing that much information, and the size had grown significantly since the last time I looked it up a couple of months ago. I decided to investigate what’s eating up so much DB space.

Investigate what’s eating up space within your sqlite3 db

There is a super useful program called sqlite3_analyzer. It analyses your database file and reports what’s actually taking up your disk space. You can download it from here (download the precompiled sqlite3-tools). Note that under Linux you’ll likely need to install 32-bit libraries, i.e. under Ubuntu/Debian execute:

sudo apt install libc6-i386 lib32stdc++6 lib32gcc1 lib32ncurses5 lib32z1  

Once you have the program, simply execute sqlite3_analyzer DB_NAME | less and it will produce a detailed report about your DB space consumption. For me it looked like this:

Can you spot how much space the actual data is taking? Yes, only 4.7% (20k pages). And what’s taking most of the space? Freelist.

Quick googling taught me that the freelist is simply empty space left behind after deletes or data moves. You may ask, why isn’t it cleaned up later? You see, having the entire database with all tables in one file is very handy, but troublesome. Every time a given table is edited, the space that is freed isn’t reused, but rather marked as freelist, and those regions get cleaned up only when the vacuum command is issued. This should happen automatically from time to time if auto vacuum is enabled. I couldn’t figure out why it isn’t working by default with Drupal…
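
If you want to check this yourself, the relevant setting can be queried and changed with a PRAGMA from the command line (a quick sketch; note that enabling auto_vacuum on an existing DB only takes effect after a full vacuum):

# 0 = none, 1 = full, 2 = incremental
sqlite3 .ht.sqlite "PRAGMA auto_vacuum;"
# enable it and rebuild the file so the setting takes effect
sqlite3 .ht.sqlite "PRAGMA auto_vacuum = FULL; VACUUM;"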

Reduce the size of sqlite3 DB file

Nevertheless, I decided to perform vacuum manually. Of course I backed up the DB first, just in case (you should always do that!). But sqlite3 .ht.sqlite vacuum returned Error: no such collation sequence: NOCASE_UTF8. At this point I thought maybe a simple DB dump and recovery would solve my problem – after all, that’s more or less what happens under the hood when you perform vacuum.

sqlite3 .ht.sqlite.bck .dump > db.sql
sqlite3 .ht.sqlite < db.sql

The DB recovered from the dump was indeed smaller (16 MB), but it was missing some tables (sqlite3 .ht.sqlite .tables). Interestingly, when I investigated the schema of the missing tables (sqlite3 .ht.sqlite.bck .schema block_content), I realised that all of them contain NOCASE_UTF8 in the table schema. I found that really peculiar! After further googling and rather lengthy reading, I realised NOCASE_UTF8 is invalid in sqlite3, but it can be replaced simply with NOCASE.

Replace DB schema directly on sqlite3 db

In a brave (and, I thought at first, stupid) attempt, I decided to just replace the wrong statements directly in the DB file using sed (sed 's/NOCASE_UTF8/NOCASE/g' .ht.sqlite.bck > .ht.sqlite). As expected, the database file got corrupted: all table locations are stored internally in the same file, so truncating some text from the DB file isn’t the wisest idea. Then I decided to replace NOCASE_UTF8 while keeping the statement the same size after replacement, padding with whitespace. To my surprise it worked & allowed me to reduce the DB size from 440 to 30 MB 🙂

sed 's/NOCASE_UTF8/NOCASE     /g' .ht.sqlite.bck > .ht.sqlite
sqlite3 .ht.sqlite vacuum
-rw-rw-r--  1 lpryszcz www-data  32638976 Feb 28 13:57 .ht.sqlite
-rw-rw-r-- 1 lpryszcz www-data 451850240 Feb 28 13:45 .ht.sqlite.bck

Finally, to make sure there is no data missing between the old and the new, reduced DB, you can use sqldiff .ht.sqlite .ht.sqlite.bck. It simply reports all SQL commands that would transform one DB into the other, and nothing if the DBs contain identical information.

Hopefully replacing NOCASE_UTF8 with NOCASE will allow auto vacuum to proceed as expected on the Drupal DB in the future!

EDIT: The DB failed after the update to Drupal v8.7.6

Lately I updated Drupal and this morning discovered the Drupal DB file to be corrupted again (Error: no such collation sequence: NOCASE_UTF8). This is because in the latest update Drupal rebuilt the table definitions and NOCASE_UTF8 came back, which makes sqlite vacuum crash again. The solution is very simple: just recover your DB from backup and replace NOCASE_UTF8 with NOCASE.

sed -i.bck 's/NOCASE_UTF8/NOCASE     /g' .ht.sqlite

Create book of abstracts from spreadsheet / google forms

Lately a friend of mine complained about the interoperability of abstract submissions from numerous applicants. Having a Book of Abstracts is crucial, and we faced a similar problem organising #NGSchool events.

Note that you’ll need to be somewhat familiar with LaTeX in order to edit the main.tex file to your liking. If you are not afraid of that, proceed as follows:

  1. Create a Google Form to collect the necessary info, such as this one
  2. Create a new spreadsheet to accumulate the responses: Responses > Create new spreadsheet
  3. Download the responses spreadsheet as Abstracts.xlsx
  4. Clone the abstracts repository and install the dependencies

    git clone https://github.com/lpryszcz/abstracts.git
    cd abstracts
    # install dependencies
    sudo apt install texlive-base texlive-latex-recommended texlive-fonts-recommended texlive-latex-extra make

  5. Edit main.tex to your liking
  6. Copy Abstracts.xlsx to the repository
  7. Create the PDF

    # prepare abstracts.tex
    ./xls2tex.py

    # create main.pdf
    make all

    # in case of problems, just rerun this step, but first remove the clutter
    rm main.{aux,blg,log,out,toc,pdf}

    You’ll find the abstract book in main.pdf.

Raspberry Pi2 with Ubuntu Server and Drupal?

I decided to celebrate the 25th birthday of Linux by putting the latest Ubuntu 16.04 on my Raspberry Pi 2 and setting up a webserver.
This is how I did it:

  1. First, get the Ubuntu armhf image and prepare the memory card

    # get image
    wget http://cdimage.ubuntu.com/ubuntu/releases/16.04/release/ubuntu-16.04-preinstalled-server-armhf+raspi2.img.xz

    # make sure your SD card is on sdb, e.g. check with df -h
    xzcat ubuntu-16.04-preinstalled-server-armhf+raspi2.img.xz | sudo dd of=/dev/sdb

  2. Configure a new user & set up the Drupal 8 webserver (see the sketch after this list for fetching Drupal itself)

    # create new user & change hostname
    sudo adduser USERNAME && sudo usermod -a -G sudo USERNAME
    # edit /etc/hostname and add `127.0.1.1 newHostname` to /etc/hosts
    sudo reboot

    # generate locales
    sudo locale-gen en_US.UTF-8
    sudo dpkg-reconfigure locales

    # install software
    sudo apt install htop apache2 mysql-server libapache2-mod-php php-mysql php-sqlite3 php-curl php-xml php-gd git sqlite3 emacs-nox

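For the Drupal part itself, the rough idea is to drop a Drupal 8 release into Apache’s web root and finish the setup in the browser. Below is only a sketch: the version, URL and paths are examples, so check drupal.org for the current release.

# fetch and unpack a Drupal 8 release into the web root (version is an example)
wget https://ftp.drupal.org/files/projects/drupal-8.1.8.tar.gz
sudo tar xpfz drupal-8.1.8.tar.gz -C /var/www/html/
sudo chown -R www-data:www-data /var/www/html/drupal-8.1.8
# enable clean URLs and restart Apache, then complete the installation via the web installer
sudo a2enmod rewrite && sudo systemctl restart apache2
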
My first impressions?
sudo apt is veeery slow. At first I thought it was due to the old SD card I’d been using, but it turned out to be true for a newer SD card as well.
Some packages are missing (e.g. git-lfs), but you can get them with some workarounds.

But everything just works!
You can check the mirror of https://ngschool.eu/ running on RPi2 here.
Maybe it’s not a speed demon, but it’s stable and uses almost no energy 🙂

Cheers!

Inspired by Ubuntu’s Insights.

Reducing the size of a large git repository

The GitHub repository of the #NGSchool website has grown to over 5 GB. I wanted to reduce the size & simplify this repository, but the task turned out to be quite complicated. Instead, I decided to leave the current repo as is (and probably remove it soon) and start a new repo from the existing version. I could do that, as I don’t care about versions earlier than the one I’m currently using. This is a short how-to:

  1. Push all changes and remove the .git folder

    git push origin master
    rm -rI .git

  2. Rename the existing repo on GitHub: Settings > Repository name > RENAME
  3. Start a new repository using the old repo name (no need to create any files, as everything already exists locally)
  4. Init your local repo and add the new remote

    git init
    git remote add origin git@github.com:USER/REPO

  5. Commit the changes and push

    git add --all . && git commit -m "fresh" && git push origin master

This way, my new repo size is below 1 GB, which is much better than the previous 5 GB.

Convert xls table into abstract book PDF

I had to generate the Abstract book for #NGSchool2016. I had a spreadsheet generated by Google Forms with all the necessary information. I could have copy-pasted all entries and formatted them later on, but I found LaTeX more robust for the task.
As I already had a LaTeX template, the only missing part was the conversion of .xls to .tex. Thus I wrote a simple script, xls2tex.py, that generates a .tex file based on the table in an .xls file.
This script, among many other things, converts UTF characters into LaTeX escape sequences.

xls2tex.py depends on xlrd and utf8tolatex (from pylatexenc/latexencode, but this is provided as a single file).

# install dependencies
sudo apt-get install python-xlrd
# generate tex
xls2tex.py

# generate pdf
make

Output pdf.

Github push fails due to large files

Lately, I have had lots of problems with pushing large files to GitHub. I maintain a compilation of materials and software deposited by other people, so I cannot control the size of the files… and this makes the push fail often.

git push
remote: error: GH001: Large files detected. You may want to try Git Large File Storage - https://git-lfs.github.com.
remote: error: Trace: 6f0f7f66995a394598595375954732db
remote: error: See http://git.io/iEPt8g for more information.
remote: error: File chip_seq/reads/sox2_chip.fastq.gz is 109.69 MB; this exceeds GitHub's file size limit of 100.00 MB

To remove the large files from the commit, execute:

git filter-branch -f --index-filter 'git rm --cached --ignore-unmatch chip_seq/reads/sox2_chip.fastq.gz'
git push

To add large files using git-lfs, execute

# track with git-lfs any files larger than 50MB, skipping those in the .git folder
find . -type f -size +50M ! -iwholename "*.git*" | rev | cut -f1 -d'/' | rev | xargs git lfs track
# add, commit & push everything (including the updated .gitattributes)
git add --all . && git commit -m "final" && git push origin

Make sure that your files are smaller than 2 GB, otherwise your push will fail again 😉
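
A quick way to spot such files before pushing (just a find one-liner; adjust the size threshold as needed):

# list files over 2GB, which will fail even with git-lfs
find . -type f -size +2G ! -iwholename "*.git*"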

Then, before pulling on another machine, make sure to install git-lfs:

git lfs install
git pull