EDIT (newer MinKNOW releases): Firstly, the location of the guppy binaries has changed, so to check the version you'll now need to execute
/opt/ont/guppy/bin/guppy_basecall_server -v # in my case it’s ver. 5.0.11
Secondly, it seems ONT started to distribute guppy as a systemd service starting from MinKNOW 21.06.0. Because of that, changing the MinKNOW configuration has no effect on the basecall server itself.
So in order to enable GPU basecalling, you'll need to override guppyd.service by editing /etc/systemd/system/guppyd.service.d/override.conf to something like this.
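A minimal sketch of such an override (the config file, port and log path below are placeholders – copy them from your original guppyd.service and only add the GPU device flag, e.g. --device cuda:all or cuda:0):
[Service]
# the empty ExecStart= clears the original command before redefining it
ExecStart=
ExecStart=/opt/ont/guppy/bin/guppy_basecall_server --log_path /var/log/guppy --config dna_r9.4.1_450bps_fast.cfg --port 5555 --device cuda:all
Afterwards reload systemd and restart the service: sudo systemctl daemon-reload && sudo systemctl restart guppyd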
A while ago I was struggling with enabling GPU live basecalling in MinKNOW on non-GridION systems. Naturally, ONT doesn't provide an easy way to use a GPU in your custom machine; otherwise there wouldn't be much motivation to buy a GridION, right? Still, it turns out you can enable live GPU basecalling in MinKNOW, given you have a CUDA-capable GPU in your computer. Below I'll describe briefly what needs to be done. I'm assuming you already have MinKNOW and a GPU with CUDA support installed.
First of all, make sure you have CUDA version 6+ correctly installed on your system (instructions for installing CUDA are here). You can check it with:
nvidia-smi
If nvidia-smi reports your GPU(s) and driver version correctly, you are ready to go 🙂
Now you'll need to get guppy binaries with CUDA support, as those provided with MinKNOW have no GPU support. You can get them from the ONT website. Note that the guppy major and minor version has to match the version currently used by MinKNOW. You can check this version using:
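In older MinKNOW releases the bundled guppy lives under /opt/ont/minknow/guppy, so the check is presumably along these lines (see the EDIT at the top for the newer location):
/opt/ont/minknow/guppy/bin/guppy_basecall_server --version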
So, in my case I can install guppy v4.0.x (I chose v4.0.15) with CUDA support as follows (note, you may need to adjust the version in the commands below depending on what you got from the previous command):
mkdir -p ~/src; cd ~/src
# you may need to change the guppy version
wget https://mirror.oxfordnanoportal.com/software/analysis/ont-guppy_4.0.15_linux64.tar.gz
tar xpfz ont-guppy_4.0.15_linux64.tar.gz
mv ont-guppy ont-guppy_4.0.15
Now just link your guppy binaries inside /opt/ont/minknow (again, you may need to adjust the guppy version here):
cd /opt/ont/minknow
sudo mv guppy guppy0
# you may need to change the guppy version
sudo ln -s ~/src/ont-guppy_4.0.15 guppy
Then edit /opt/ont/minknow/conf/app_conf (use sudo!) and change the line with gpu_calling to true, and also num_threads and ipc_threads to 3 and 2, respectively (you can also define which GPUs you want to enable – by default all available CUDA devices will be used):
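After the edit, the relevant lines of app_conf should end up looking roughly like this (only the keys mentioned above are shown; the surrounding lines are omitted):
"gpu_calling": true,
"num_threads": 3,
"ipc_threads": 2,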
For a couple of weeks, I've been looking for an easy way of migrating a virtual machine from one Google Cloud Platform (GCP) account to another. At first, I wanted to follow an old Medium post, but I found it rather complicated. Therefore, I decided to tinker myself. It turns out you can easily transfer VM images between projects/accounts in three simple steps thanks to the Create image feature, as follows:
Add read access (Viewer) for the new account/project using the IAM admin console
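On the command line, the same idea can be sketched with gcloud (project, zone, disk and image names below are hypothetical):
# in the source project: create an image from the VM's boot disk
gcloud compute images create my-vm-image --project=source-project --source-disk=my-vm --source-disk-zone=europe-west1-b
# in the destination project/account: create a new VM from that image (this is where the read access matters)
gcloud compute instances create my-vm-copy --project=dest-project --zone=europe-west1-b --image=my-vm-image --image-project=source-project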
Previously, I've written on how to create an Abstract book easily. Today, another friend asked for help with generating badges for attendees of a conference he is co-organising. We have automated the process for #NGSchool. You can find templates and code in my GitHub repo: https://github.com/lpryszcz/badges
First of all, for our courses we need to create user accounts for all participants on remote machines. Therefore we decided to print the user data (username and auto-generated password) on the back of every badge (easy to fold). And since the user data are auto-generated, we can easily create the user accounts on remote machines using newusers. For a plain conference, this step can be skipped.
All you need to start is:
a tab-delimited file with first name, surname and affiliation (participants.txt)
a badge template badges.svg, for example generated with Inkscape (I have provided two templates we've been using lately)
For the badges.svg template, the easiest way to go is to create a new .svg with the prototype of a single badge. Make sure to put the placeholders "Name surname", "username" and "AFFILIATION" in your badge – these will be replaced with every attendee's data. Once you have created the single badge (make sure it's the right size!), add some border (light grey dots?) to make cutting easier. Finally, copy your badge to fill an entire A4 page – you can easily fit 9 badges per A4 page.
Once you have the above, simply execute the commands below.
# if you need to generate random passwords
./get_usernames.py participants.txt
# generate pdf with badges
./tsv2badges.py badges.svg participants.txt.badges.tsv
# you can create user accounts easily if needed
while read line; do if [ ! -d `echo $line | cut -f6 -d":"` ]; then echo $line; echo $line | sudo newusers; fi; done < participants.txt.newusers
tsv2badges.py takes the user data and fills the svg template with them. Once a given page is full, it stores it as .pdf and proceeds to another page. At the end, all pages are merged into a single multi-page pdf, badges.svg.pdf, which you should print & cut.
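Conceptually, the filling step is just a substitution of the three placeholders for each attendee – roughly the equivalent of something like this (a toy example with made-up data, not the actual implementation):
sed -e 's/Name surname/Jane Doe/' -e 's/AFFILIATION/Some Institute/' -e 's/username/jdoe/' badges.svg > badge_jdoe.svg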
I'm an Ubuntu enthusiast. However, since the introduction of Gnome as the default desktop in Ubuntu, I've been experiencing stability issues. I don't mind rebooting my laptop from time to time, but my workstation is a different story – it often runs for many weeks without a reboot.
After many discussions with my friend, I decided to give KDE a try. I had experimented with KDE years ago and found it not straightforward to use. But apparently since version 5 it's possible to customise KDE to look & feel nearly however you like. And I have to admit, I got sucked in after just a few hours. First of all, it's very stable, quite lightweight and very practical. It's also pretty – that doesn't matter much for productivity, but it's a nice add-on. I fell in love with the drop-down terminal. Setting everything up so that the migration from Gnome was smooth took me a few hours the first time, but it paid off rather quickly, because I'm way more productive than before. That's more or less how my screen looks.
If you want to try it, I'd recommend KDE Neon instead of Kubuntu, as Neon is developed by the KDE Community and is therefore the purest KDE experience you can get. Below you can find a list of widgets, applications and customisations which made my life easier (again, big thanks to Maciek for helping with the migration!).
Widgets:
(Add widgets)
system load viewer [set compact view]
Global menu
(Add widgets > Get new widgets > Download new plasma widgets)
event calendar (replace standard clock & calendar)
Today, while performing a regular Drupal update and backup, I realised the Drupal sqlite3 database sites/default/files/.ht.sqlite is over 440 MB! I found it peculiar, as our website isn't storing that much information, and the size had grown significantly since I last looked it up a couple of months ago. I decided to investigate what's eating up so much DB space.
Investigate what’s eating up space within your sqlite3 db
There is a super useful program called sqlite3_analyzer. It analyses your database file and reports what's actually taking up your disk space. You can download it from here (download the precompiled sqlite3-tools). Note, under Linux you'll likely need to install 32-bit libraries, i.e. under Ubuntu/Debian execute something like the command below.
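(The exact package set may differ between releases; on a 64-bit system something along these lines is usually enough for running 32-bit binaries.)
sudo apt install libc6-i386 lib32z1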
Once you have the program, simply execute sqlite3_analyzer DB_NAME | less and it will produce a detailed report about your DB space consumption. For me it looked like this:
Can you spot how much space the actual data is taking? Yes, only 4.7% (20k pages). And what’s taking most of the space? Freelist.
Quick googling taught me that the freelist is simply empty space left after deletes or data moving. You may ask, why isn't it cleaned up later? You see, having the entire database with all tables in one file is very handy, but troublesome. Every time a given table is edited, the space that is freed isn't reused right away, but rather marked as freelist. Those regions only get cleaned up when the vacuum command is issued. This should happen automatically from time to time if auto vacuum is enabled. I couldn't figure out why it isn't working by default with Drupal…
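If you want to check (or enable) auto vacuum yourself, sqlite3 exposes it as a pragma; note that switching it on for an existing database only takes effect after a successful vacuum (which, as described below, is exactly what failed here):
sqlite3 .ht.sqlite "PRAGMA auto_vacuum;" # 0 = none, 1 = full, 2 = incremental
sqlite3 .ht.sqlite "PRAGMA auto_vacuum = FULL; VACUUM;"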
Reduce the size of sqlite3 DB file
Nevertheless, I decided to perform vacuum manually. Of course I backed up the db first, just in case (you should always do that!). But sqlite3 .ht.sqlite vacuum returned Error: no such collation sequence: NOCASE_UTF8. At this point, I thought maybe a simple DB dump and recovery would solve my problem – after all, that's more or less what happens under the hood when you perform a vacuum.
The DB recovered after a dump was indeed smaller (16 MB), but it was missing some tables (sqlite3 .ht.sqlite .tables). Interestingly, when I investigated the schema of the missing tables (sqlite3 .ht.sqlite.bck .schema block_content), I realised that all of them contain NOCASE_UTF8 in the table schema. I found that really peculiar! After further googling and rather lengthy reading, I realised NOCASE_UTF8 is invalid in sqlite3, but it can simply be replaced with NOCASE.
Replace DB schema directly on sqlite3 db
In a brave (and, as I first thought, stupid) attempt, I decided to just replace the wrong statements directly in the DB file using sed (sed 's/NOCASE_UTF8/NOCASE/g' .ht.sqlite.bck > .ht.sqlite). As expected, the database file got corrupted. This is because all table locations are stored internally in the same file, so truncating some text from the DB file isn't the wisest idea, just as I expected. Then, I decided to replace NOCASE_UTF8 while keeping the statement the same size after replacement, padding it with white space. To my surprise it worked & allowed me to reduce the size of the DB from 440 to 30 MB 🙂
sed 's/NOCASE_UTF8/NOCASE     /g' .ht.sqlite.bck > .ht.sqlite
sqlite3 .ht.sqlite vacuum
-rw-rw-r-- 1 lpryszcz www-data  32638976 Feb 28 13:57 .ht.sqlite
-rw-rw-r-- 1 lpryszcz www-data 451850240 Feb 28 13:45 .ht.sqlite.bck
Finally, to make sure that there is no data missing between the old and the new, reduced DB, you can use sqldiff .ht.sqlite .ht.sqlite.bck. It will simply report all SQL commands that would transform one DB into the other, and nothing if the DBs contain identical information.
Hopefully replacing NOCASE_UTF8 with NOCASE will allow auto vacuum to proceed as expected on the Drupal DB in the future!
EDIT: The DB failed again after an update to Drupal v8.7.6
Lately, I updated Drupal and discovered this morning that the Drupal DB file was corrupted: Error: no such collation sequence: NOCASE_UTF8. This is because in the latest update Drupal rebuilt the table definitions and NOCASE_UTF8 came back, which makes sqlite vacuum crash again. The solution is very simple: just recover your DB from a backup and replace NOCASE_UTF8 with NOCASE as above.
What I like a lot about Raspbian Stretch Lite is that, besides natively supporting all Raspberry Pi features, it's also cross-platform compatible – it works super well on both RPi2 and RPi3.
And yes, this blog, among a few other things, is served from an RPi2 🙂
It's been a long time since the last post… But the time came when I faced a serious problem while trying to change the MAC address of my USB LAN adapter.
As recommended by numerous pages found by googling change MAC address Linux, I tried ifconfig eth0 hw ether NEWMAC and macchanger. Both changed the MAC of my device (as seen in the ifconfig output), yet after plugging in the LAN cable, the MAC was automatically restored to the permanent one.
At first, I thought it was the fault of NetworkManager, so I stopped it. But the problem still persisted. After some tinkering, I realised the MAC can also be specified in NetworkManager itself, by adding two lines to /etc/NetworkManager/NetworkManager.conf:
[connection]
ethernet.cloned-mac-address=NEWMAC
and restarting NetworkManager
sudo service network-manager restart
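You can then verify that the adapter actually reports the new address, even with the cable plugged in:
ip link show eth0 | grep ether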
Note, when I changed the MAC in NetworkManager using the GUI, the permanent MAC was also restored upon LAN cable connection.
Hope this helps someone having a similar problem with a USB LAN adapter.
For some weeks already, I've been annoyed by the VLSub extension of VLC not working. It simply hangs while downloading subtitles. Apparently, this is associated with changes in OpenSubtitles.org remote access. Today, I found a simple solution for this issue: