Copy data from Android phone with broken screen

The screen of your phone broke and you want to retrieve your contacts / files… Quite a typical story. Getting your photos / files is quite trivial: plugging the phone into a computer and copying the necessary files is enough.
Getting your contacts out (if you happened not to sync them with Google) is slightly more complicated. Here is what I did in the case of a Samsung S4 mini with a broken screen. Note that the digitizer (touch screen) worked, but USB debugging was OFF. Also, the S4 mini has no video output. If your phone happens to have HDMI or MHL, just get the cable and plug it into your TV / monitor 😉

  1. Enable USB debugging
    This is the hard part and it can only be done manually, which is quite complicated with a broken screen. You need to repeat 3 steps until you reach what you want: take a screenshot (HOME + POWER button on the S4 mini), see what’s on the screen (navigate to Phone storage > Pictures > Screenshots from your computer), perform some action and repeat… This is extremely tedious, but it worked for me.
    In Android 4.4, which my phone had, you need to enter Settings > About, scroll down and tap Build Number several times (~7 should do). This will enable `Developer options` in Settings. Enter it, tick `USB Debugging` and press OK (here I needed to rotate the screen, as the right side of my digitizer didn’t work…).
    I recommend clicking `Revoke access` and OK, as my computer couldn’t connect till I pressed it.
    Then unplug the phone and plug it in again. A new dialog asking for permission for your computer to access the device will appear on the screen. Tick `Always allow access` and press OK. From now on, access through ADB is possible. You can check it with:

    # install ADB
    sudo apt-get install android-tools-adb
    
    # connect
    adb devices
    adb shell
    

    Note: if your digitizer works only partially (my case), it’s useful to enable auto-rotation first.

  2. Screencast Android to the computer monitor

    # install sevensquare
    sudo apt-get install qt4-qmake libqt4-dev libqtcore4 libqtgui4
    git clone https://github.com/yangh/sevensquare
    cd sevensquare
    
    # for Ubuntu 16.04 replace 5th line of Makefile with 
    	(cd build && qmake-qt4 -o Makefile ../seven-square.pro)
    
    # compile
    make
    
    # and run
    build/seven-square &
    

    Now you should see the mobile screen and be able to interact with it using your mouse & keyboard. Exporting contacts should be trivial from here, right?
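
    One way to do it (a sketch; the exact menu and .vcf location are device-dependent): in the screencasted Contacts app choose Import/Export > Export to storage, which writes a vCard file to the internal storage, then pull it over ADB:

    # list the internal storage to locate the exported vCard
    adb shell ls /sdcard/
    # pull it to the current directory (Contacts.vcf is an assumed file name)
    adb pull /sdcard/Contacts.vcf .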

I have tried to dump the userdata partition, but on the original system version there is no root access, and getting it would erase the data…

Let me know if there is any simpler solution!

Create Windows USB stick under Ubuntu

Today, I needed to create a Windows 10 USB key in order to install it on a laptop. It turned out not to be so straightforward under Ubuntu… But I quickly found a simple solution, WinUSB.

# install WinUSB
sudo add-apt-repository ppa:nilarimogard/webupd8 && sudo apt update && sudo apt install winusb

# without USB formatting
sudo winusb --install Win10.iso /dev/sdd

# with USB formatting - this didn't work for me, due to boot loader installation failure
sudo winusb --format Win10.iso /dev/sdd
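
Before writing, it’s worth double-checking which device is the USB stick (here assumed to be /dev/sdd), as picking the wrong one will wipe its data:

# identify the USB stick before writing (assuming it shows up as /dev/sdd)
lsblk -o NAME,SIZE,MODEL,MOUNTPOINT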

Source: webupd8.

Raspberry Pi2 with Ubuntu Server and Drupal?

I decided to celebrate 25th B-day of Linux by putting the latest Ubuntu 16.04 on my Raspberry Pi 2 and setting up a webserver.
This is how I did it:

  1. First, get the Ubuntu armhf image and prepare the memory card

    # get image
    wget http://cdimage.ubuntu.com/ubuntu/releases/16.04/release/ubuntu-16.04-preinstalled-server-armhf+raspi2.img.xz
    
    # make sure your SD card is /dev/sdb, e.g. check with df -h
    xzcat ubuntu-16.04-preinstalled-server-armhf+raspi2.img.xz | sudo dd of=/dev/sdb
    
  2. Configure a new user & set up the Drupal 8 webserver

    # create new user & change hostname
    sudo adduser USERNAME && sudo usermod -a -G sudo USERNAME
    # edit /etc/hostname and add `127.0.1.1 newHostname` to /etc/hosts
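    # for example, assuming the new hostname is `rpi2` (any name of your choice):
    echo rpi2 | sudo tee /etc/hostname
    echo "127.0.1.1 rpi2" | sudo tee -a /etc/hosts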
    sudo reboot
    
    # generate locales
    sudo locale-gen en_US.UTF-8
    sudo dpkg-reconfigure locales
    
    # install software
    sudo apt install htop apache2 mysql-server libapache2-mod-php php-mysql php-sqlite3 php-curl php-xml php-gd git sqlite3 emacs-nox
    

My first impressions?
sudo apt is veeery slow. At first, I thought it was due to the old SD card I had been using, but it’s also true for a newer SD card.
Some packages are missing (e.g. git-lfs), but you can get them using some workarounds.

But everything just works!
You can check the mirror of https://ngschool.eu/ running on RPi2 here.
Maybe it’s not a speed demon, but it’s stable and uses almost no energy 🙂

Cheers!

Inspired by Ubuntu’s Insights.

Stream audio & video from webcam using VLC

Yesterday, I posted about streaming a webcam image to the web using motion. This solution, although very simple, has many limitations: lack of sound, high bandwidth usage and low image quality, just to mention a few. In a way, a motion stream is just a set of JPEG files.
In order to solve all of these, I have spent quite some time playing with VLC, an open-source cross-platform multimedia player that is able to transcode and stream audio & video.
Streaming can be started from graphical interface, just go to:

Media >> Stream… >> Capture Device, select your devices, add an HTTP destination (e.g. :8081/webcam.ogg), select the Video-Theora + Vorbis (OGG) profile & press Stream.

Your stream will be available at: http://localhost:8081/webcam.ogg

But normally, using the command line is preferred under Linux:

vlc v4l2:// :input-slave=alsa:// :v4l2-standard=1 :v4l2-dev=/dev/video0 :v4l2-width=1280 :v4l2-height=720 :sout="#transcode{vcodec=theo,vb=2000,acodec=vorb,ab=128,channels=2,samplerate=44100}:http{dst=:8081/webcam.ogg}" -I dummy

Initially, I had a problem with streaming sound along with video. Adding `:input-slave=alsa:// :v4l2-standard=1` solved this. You can try other values for `:v4l2-standard`, i.e. 0, 1 or 2, depending on which microphone you want to use.

The above command will stream HD video (1280×720) in .ogg format (natively supported by most browsers) @ ~2Mbps (2000kbps). If you have a slower connection, you can change `vb=2000` to `vb=1000` (~1Mbps) and play with lower resolutions. You can check the available resolutions of your camera with:

lsusb -v | egrep -B10 'Width|Height'
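
Alternatively, if you have v4l-utils installed, v4l2-ctl can list the supported formats and resolutions directly:

# install v4l-utils & list supported formats/resolutions of the webcam
sudo apt install v4l-utils
v4l2-ctl -d /dev/video0 --list-formats-ext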

This stream, however, is available to everyone. To limit it only to localhost, you can use iptables:

sudo iptables -A INPUT -p tcp -s localhost --dport 8081 -j ACCEPT && sudo iptables -A INPUT -p tcp --dport 8081 -j DROP && vlc v4l2:// :input-slave=alsa:// :v4l2-standard=1 :v4l2-dev=/dev/video0 :v4l2-width=1280 :v4l2-height=720 :sout="#transcode{vcodec=theo,vb=2000,acodec=vorb,ab=128,channels=2,samplerate=44100}:http{dst=:8081/webcam.ogg}" -I dummy

Now, you can create an apache2 proxy, similarly to the previous post:

# install apache2-utils
sudo apt install apache2-utils
 
# setup new user & passwd
sudo htpasswd -c /etc/apache2/.htpasswd webcam

# configure apache2 - add to your VirtualHost config
    # webcam
    <Location "/webcam.ogg">
        ProxyPass http://localhost:8081/webcam.ogg
        ProxyPassReverse http://localhost:8081/webcam.ogg
        # htpasswd
        AuthType Basic
        AuthName "Restricted Content"
        AuthUserFile /etc/apache2/.htpasswd
        Require valid-user
    </Location>
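
Note that the ProxyPass directives require the Apache proxy modules to be enabled (if they aren’t already):

# enable proxy modules & reload apache2
sudo a2enmod proxy proxy_http
sudo service apache2 reload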

Enable HTTPS for your domains in 5 minutes & for free!

For a while, I’ve been thinking about enabling encryption for my domains, like this one. But the cost & complications associated with enabling SSL encryption kept me from doing so…
Today, I’ve realised that Let’s Encrypt, a new certificate authority that is completely free, automated and open, makes SSL encryption super easy!
Try it yourself (this is for Ubuntu 14.04 & Apache; for other system configurations check https://certbot.eff.org/):

sudo apt-get install git

sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
cd /opt/letsencrypt
sudo ./letsencrypt-auto --apache -d DOMAIN1 -d DOMAIN2

# setup weekly cron autorenewal on Monday at 2:30
sudo crontab -e
# and paste `30 2 * * 1 /opt/letsencrypt/letsencrypt-auto renew >> /var/log/le-renew.log`
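
You can also test that renewal works without touching the real certificates (a quick check; the --dry-run flag is supported by recent certbot / letsencrypt-auto versions):

# simulate renewal against the staging servers
sudo /opt/letsencrypt/letsencrypt-auto renew --dry-run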

If you wish to redirect all domain traffic through HTTPS, do the following:

# enable mod_rewrite engine in apache2
sudo a2enmod rewrite

# add to your apache conf file
    # redirect to HTTPS
    RewriteEngine on
    RewriteCond %{HTTPS} off [OR]
    RewriteCond %{HTTP_HOST} ^YOUR_DOMAIN\.COM*
    RewriteRule ^(.*)$ https://YOUR_DOMAIN.COM/$1 [L,R=301]

# reload apache2 configuration
sudo service apache2 reload
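
If the reload complains, apache2ctl can pinpoint syntax errors in the new rewrite rules:

# check the apache2 configuration for syntax errors
sudo apache2ctl configtest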

Voilà!

Inspired by digitalocean.
Thanks to @sheebang for underlining the importance of renewing the certificates!

Working with large binary files in git

Git is great, there is no doubt about that. Being able to revert any changes and recover lost data is simply priceless. But recently, I have started to be concerned about the size of some of my repositories. Some, especially those containing changing binary files, were really large!!!
You can check the size of your repository with a simple command:

git count-objects -vH

Here, Git Large File Storage (LFS) comes into action. Below, I’ll describe how to install it and how to mark large binary files, so that the repository itself stores only lightweight pointer files, while the binary content is kept in LFS storage.

  1. Installation of git-lfs

    # add packagecloud repo
    curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
    
    # install git-lfs
    sudo apt-get install git-lfs 
    
    # and enable it
    git lfs install
    
  2. Marking and committing a binary file

    # mark large binary file
    git lfs track some.file
    
    # add, commit & push changes (including .gitattributes, updated by `git lfs track`)
    git add .gitattributes some.file
    git commit -m "some.file as LFS"
    git push origin master
    

On handy docker images

Motivated by successfully stripping problematic dependencies from Redundans, I have decided to generate a smaller Docker image, starting from the Alpine Linux image (2Mb / 5Mb after downloading) instead of Ubuntu (49Mb / 122Mb). Previously, I couldn’t really rely on Alpine Linux, because it was impossible to get these problematic dependencies running… But now it’s a whole new world of possibilities 😉

There are very few dependencies left, so I have started… (You can find all the commands below).

  1. First, I checked what can be installed from the package manager.
    Only Python and Perl.

  2. Then I checked whether any of the binaries work.
    For example, GapCloser is provided as a binary; it took me some time to find the source code…
    Anyway, none of the binaries worked out of the box. This was expected, as Alpine Linux is super stripped-down…

  3. I have installed build-base in order to be able to build things.
    Additionally, BWA needs zlib-dev.

  4. Alpine Linux doesn’t use the standard glibc library, but musl libc (you can read more about the differences between the two), so some programmes (e.g. BWA) may be quite reluctant to compile.
    After some hours of trying & thanks to the help of mp15, I found a solution that is not so complicated 🙂

  5. I realised that the Dockerfile doesn’t like standard BASH brace expansion, which otherwise works in the Docker Alpine console…
    so ls *.{c,h} should be ls *.c *.h

  6. After that, LAST and GapCloser compiled relatively easily 😉

Below, you can find the code from the Dockerfile (without the RUN commands).

apk add --update --no-cache python perl bash wget build-base zlib-dev
mkdir -p /root/src && cd /root/src && wget http://downloads.sourceforge.net/project/bio-bwa/bwa-0.7.15.tar.bz2 && tar xpfj bwa-0.7.15.tar.bz2 && ln -s bwa-0.7.15 bwa && cd bwa && \
cp kthread.c kthread.c.org && echo "#include <stdint.h>" > kthread.c && cat kthread.c.org >> kthread.c && \
sed -ibak 's/u_int32_t/uint32_t/g' `grep -l u_int32_t *.c *.h` && make && cp bwa /bin/ && \
cd /root/src && wget http://liquidtelecom.dl.sourceforge.net/project/soapdenovo2/GapCloser/src/r6/GapCloser-src-v1.12-r6.tgz && tar xpfz GapCloser-src-v1.12-r6.tgz && ln -s v1.12-r6/ GapCloser && cd GapCloser && make && cp bin/GapCloser /bin/ && \
cd /root/src && wget http://last.cbrc.jp/last-744.zip && unzip last-744.zip && ln -s last-744 last && cd last && make && make install && \
cd /root/src && rm -r last* bwa* GapCloser* v* 

# SSPACE && redundans in /root/src
cd /root/src && wget -q http://www.baseclear.com/base/download/41SSPACE-STANDARD-3.0_linux-x86_64.tar.gz && tar xpfz 41SSPACE-STANDARD-3.0_linux-x86_64.tar.gz && ln -s SSPACE-STANDARD-3.0_linux-x86_64 SSPACE && wget -O- -q http://cpansearch.perl.org/src/GBARR/perl5.005_03/lib/getopts.pl > SSPACE/dotlib/getopts.pl && \
wget --no-check-certificate -q -O redundans.tgz https://github.com/lpryszcz/redundans/archive/master.tar.gz && tar xpfz redundans.tgz && mv redundans-master redundans && ln -s /root/src/redundans /redundans && rm *gz

apk del wget build-base zlib-dev 
apk add libstdc++

After building & pushing, I noticed that the Alpine-based image is slightly smaller (99Mb) than the one based on Ubuntu (127Mb). Surprisingly, the Alpine-based image is larger (273Mb) than the Ubuntu-based one (244Mb) after downloading. So, I’m afraid all of these hours didn’t really bring any substantial reduction in image size.
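
Local image sizes can be compared with docker images, e.g.:

# compare local image sizes
docker images lpryszcz/redundans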

Conclusion?
I was very motivated to build my application on Alpine Linux and expected a substantial size reduction. But I’d say that relying on the Alpine Linux image doesn’t always pay off in terms of smaller image size, not to mention production time… And this I know from my own experience.
But maybe I did something wrong? I’d be really glad for some advice/comments!

Nevertheless, stripping a few dependencies from my application (namely Biopython, numpy & scipy) resulted in a much more compact image even with the Ubuntu base (127Mb vs 191Mb; and 244Mb vs 440Mb after downloading). So I think this is the way to go 🙂

On simplifying dependencies

Lately, to make Redundans more user-friendly, I have simplified its dependencies by replacing Biopython, numpy, scipy and SQLite with some (relatively) simple functions or modules.

Here, I will just focus on replacing Biopython, particularly SeqIO.index_db, with FastaIndex. You may ask yourself why I have invested time in reinventing the wheel. I’m a big fan of Biopython, yet it’s a huge project and some solutions are not optimal or require problematic dependencies. This is the case with SeqIO.index_db, which relies on SQLite3. Again, I’m a big fan of SQLite, yet building Biopython with SQLite enabled proved not to be very straightforward on non-standard systems or for less experienced users. Besides, on some NFS setups, the SQLite3 db cannot be created at all.

Ok, let’s start from the basics. SeqIO.index_db allows random access to sequence files, so for example you can rapidly retrieve any entry from a very large file. This is achieved by storing the ID and position of each entry of a given file in a database, an SQLite3 db. Then, if you want to retrieve a particular record, SeqIO.index_db checks whether this record is present in the SQLite3 db, retrieves the record’s position in the file and reads only a small chunk of the file, instead of parsing the entire file every time you want to get some record(s).
A similar feature is offered by samtools faidx, but in this case the coordinates of each entry are stored in a tab-delimited .fai file (more info about .fai). This format can be easily read & written by any programme, so I decided to use it. In addition, I realised that samtools faidx is flexible enough that you can add extra columns to the .fai without breaking its functionality, but more about that later…
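
For reference, a standard .fai line has five tab-separated columns: sequence name, sequence length, byte offset of the first base, bases per line and bytes per line, e.g. (values are illustrative):

NODE_2_length_7674_cov_46.7841_ID_3	7674	95	60	61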

In Redundans, I’ve been using SeqIO.index_db during assembly reduction (fasta2homozygous.py). Additionally, beside storing the index, I’ve also been generating statistics for every FastA file, like the number of contigs, cumulative size, N50, N90, GC and so on. I realised that these two can be easily combined by extending the .fai with four additional columns storing the number of occurrences of A, C, G & T in every sequence. Such a .fai is compatible with samtools faidx and provides a very easy way of calculating a bunch of statistics about the file.
All of this I’ve implemented in FastaIndex. Beside being a dependency-free & very handy indexer, it can also be used as an alternative to samtools faidx to retrieve sequences from large FastA files.
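
As a quick illustration of why the extra columns are handy, GC content per sequence can be computed straight from such an extended .fai with awk (assuming, hypothetically, that columns 6-9 hold the A, C, G & T counts):

# GC [%] per sequence from the extended .fai (columns 6-9 assumed to be A, C, G & T counts)
awk 'BEGIN{OFS="\t"} {print $1, 100*($7+$8)/($6+$7+$8+$9)}' test/run1/contigs.fa.fai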

# retrieve bases from 20 to 60 from NODE_2
./FastaIndex.py -i test/run1/contigs.fa -r NODE_2_length_7674_cov_46.7841_ID_3:20-60
>NODE_2_length_7674_cov_46.7841_ID_3
CATAGAACGACTGGTATAAGCCAAACATGACCCATTGTTGC
#Time elapsed: 0:00:00.014243

samtools faidx test/run1/contigs.fa NODE_2_length_7674_cov_46.7841_ID_3:20-60
>NODE_2_length_7674_cov_46.7841_ID_3:20-60
CATAGAACGACTGGTATAAGCCAAACATGACCCATTGTTGC

Using docker for application development

I find Docker super useful, but going through the manual is quite time-consuming. Here is a very stripped-down guide to creating your first image and pushing it online 🙂

# install docker
wget -qO- https://get.docker.com/ | sh
 
# add your user to docker group
sudo usermod -aG docker $USER
 
# check if it's working
docker run docker/whalesay cowsay "hello world!"
 
# create an account on https://hub.docker.com
# and login
docker login -u $USER --email=EMAIL
 
# run image
docker run -it ubuntu
 
# make some changes ie. create user, install needed software etc
 
# finally open new terminal & commit changes (SESSIONID=HOSTNAME)
docker commit SESSIONID $USER/image:version
 
# mount local directory `pwd`/test as /test in read/write mode
docker run -it -v `pwd`/test:/test:rw $USER/image:version some command with arguments
 
# push image
docker push $USER/image:version

From now on, you can get your image on any other machine connected to the Internet by executing:

docker run -it $USER/image:version
# ie. redundans image
docker run -it -w /root/src/redundans lpryszcz/redundans:v0.11b ./redundans.py -v -i test/{600,5000}_{1,2}.fq.gz -f test/contigs.fa -o test/run1
 
# you can create alias latest, then version can be skipped on running
docker tag lpryszcz/redundans:v0.11b lpryszcz/redundans:latest
docker push lpryszcz/redundans:latest
 
docker run -it lpryszcz/redundans

You can add info about your repository at https://hub.docker.com/r/$USER/image/