Investigate & reduce the size of Drupal sqlite3 database

Today, while performing a regular Drupal update and backup, I realised the Drupal sqlite3 database sites/default/files/.ht.sqlite is over 440 Mb! I found it peculiar, as our website isn’t storing that much information, and the size had grown significantly since I last looked a couple of months ago. I decided to investigate what’s eating up so much DB space.

Investigate what’s eating up space within your sqlite3 db

There is a super useful program called sqlite3_analyzer. It analyses your database file and reports what’s actually taking up your disk space. You can download it from here (download the precompiled sqlite3-tools). Note, under Linux you’ll likely need to install 32-bit libraries, e.g. under Ubuntu/Debian execute

sudo apt install libc6-i386 lib32stdc++6 lib32gcc1 lib32ncurses5 lib32z1  

Once you have the program, simply execute sqlite3_analyzer DB_NAME | less and it will produce a detailed report about your DB space consumption. For me it looked like this:

Can you spot how much space the actual data is taking? Yes, only 4.7% (20k pages). And what’s taking most of the space? The freelist.

Quick googling taught me that the freelist is simply empty space left behind after deletes or data moves. You may ask, why isn’t it cleaned up later? You see, having the entire database with all tables in one file is very handy, but troublesome. Every time a given table is edited, the freed space isn’t reused, but rather marked as freelist, and those regions get cleaned up only when the vacuum command is issued. This should happen automatically from time to time if auto vacuum is enabled. I couldn’t work out why it isn’t enabled by default with Drupal…
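
You can check both values yourself: sqlite3 exposes the freelist size and the auto-vacuum mode as pragmas (a minimal sketch; both pragmas are standard sqlite3):

# pages currently sitting on the freelist
sqlite3 .ht.sqlite 'PRAGMA freelist_count;'
# auto-vacuum mode: 0 = none, 1 = full, 2 = incremental
sqlite3 .ht.sqlite 'PRAGMA auto_vacuum;'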

Reduce the size of sqlite3 DB file

Nevertheless, I decided to perform vacuum manually. Of course, I backed up the DB first, just in case (you should always do that!). But sqlite3 .ht.sqlite vacuum returned Error: no such collation sequence: NOCASE_UTF8. At this point, I thought maybe a simple DB dump and recovery would solve my problem – after all, that’s more or less what happens under the hood when you perform vacuum.

sqlite3 .ht.sqlite.bck .dump > db.sql
sqlite3 .ht.sqlite < db.sql

The DB recovered from the dump was indeed smaller (16 Mb), but it was missing some tables (sqlite3 .ht.sqlite .tables). Interestingly, when I investigated the schema of the missing tables (sqlite3 .ht.sqlite.bck .schema block_content), I realised that all of them contain NOCASE_UTF8 in the table schema. I found that really peculiar! After further googling and some rather lengthy reading, I realised NOCASE_UTF8 is invalid in sqlite3, but it can simply be replaced with NOCASE.
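
To list all affected tables at once, you can query the standard sqlite_master schema table directly (a quick sketch):

sqlite3 .ht.sqlite.bck "SELECT name FROM sqlite_master WHERE sql LIKE '%NOCASE_UTF8%';"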

Replace DB schema directly on sqlite3 db

In a brave (and, I thought at first, stupid) attempt, I decided to just replace the wrong statements directly in the DB file using sed (sed 's/NOCASE_UTF8/NOCASE/g' .ht.sqlite.bck > .ht.sqlite). As expected, the database file got corrupted. This is because the locations of all tables are stored internally in the same file, so removing text from the DB file (and thus shifting all offsets) isn’t the wisest idea. Then, I decided to replace NOCASE_UTF8 again, but keeping the statement the same length by padding the replacement with whitespace. To my surprise it worked & allowed me to reduce the size of the DB from 440 to 30 Mb 🙂

sed 's/NOCASE_UTF8/NOCASE     /g' .ht.sqlite.bck > .ht.sqlite
sqlite3 .ht.sqlite vacuum
-rw-rw-r--  1 lpryszcz www-data  32638976 Feb 28 13:57 .ht.sqlite
-rw-rw-r-- 1 lpryszcz www-data 451850240 Feb 28 13:45 .ht.sqlite.bck

Finally, to make sure that no data is missing between the old and the new, reduced DB, you can use sqldiff .ht.sqlite .ht.sqlite.bck. It simply reports all SQL commands that would transform one DB into the other, and nothing if the DBs contain identical information.

Hopefully replacing NOCASE_UTF8 with NOCASE will allow auto vacuum to proceed as expected on the Drupal DB in the future!

Create book of abstracts from spreadsheet / google forms

Lately, a friend of mine complained about collating abstract submissions from numerous applicants. Having a Book of Abstracts is crucial, and we faced a similar problem organising #NGSchool events.

Note, you’ll need to be somewhat familiar with LaTeX in order to edit the main.tex file to your liking. If you are not afraid of that, the way to proceed is as follows:

  1. Create a google form to collect the necessary info, such as this one
  2. Create a new spreadsheet to accumulate responses: Responses > Create new spreadsheet
  3. Download the responses spreadsheet as Abstracts.xlsx
  4. Clone the abstracts repository and install the dependencies

    git clone https://github.com/lpryszcz/abstracts.git
    cd abstracts
    # install dependencies
    sudo apt install texlive-base texlive-latex-recommended texlive-fonts-recommended texlive-latex-extra make

  5. Edit main.tex to your liking
  6. Copy Abstracts.xlsx to the repository
  7. Create the pdf

    # prepare abstracts.tex
    ./xls2tex.py

    # create main.pdf
    make all

    # in case of problems, just rerun this step, but first remove the clutter
    rm main.{aux,blg,log,out,toc,pdf}

You’ll find the abstract book in main.pdf.

Raspberry Pi2 with Ubuntu Server and Drupal?

I decided to celebrate the 25th B-day of Linux by putting the latest Ubuntu 16.04 on my Raspberry Pi 2 and setting up a webserver.
This is how I did it:

  1. First, get the Ubuntu armhf image and prepare the memory card

    # get image
    wget http://cdimage.ubuntu.com/ubuntu/releases/16.04/release/ubuntu-16.04-preinstalled-server-armhf+raspi2.img.xz

    # make sure your SD card is on sdb, e.g. by df -h
    xzcat ubuntu-16.04-preinstalled-server-armhf+raspi2.img.xz | sudo dd of=/dev/sdb

  2. Configure a new user & set up the Drupal 8 webserver

    # create new user & change hostname
    sudo adduser USERNAME && sudo usermod -a -G sudo USERNAME
    # edit /etc/hostname and add `127.0.1.1 newHostname` to /etc/hosts
    sudo reboot

    # generate locales
    sudo locale-gen en_US.UTF-8
    sudo dpkg-reconfigure locales

    # install software
    sudo apt install htop apache2 mysql-server libapache2-mod-php php-mysql php-sqlite3 php-curl php-xml php-gd git sqlite3 emacs-nox

My first impressions?
sudo apt is veeery slow. At first, I thought it was due to the old SD card I’d been using, but it’s also true for a newer SD card.
Some packages are missing (e.g. git-lfs), but you can get them using some workarounds.

But everything just works!
You can check the mirror of https://ngschool.eu/ running on RPi2 here.
Maybe it’s not a speed demon, but it’s stable and uses almost no energy 🙂

Cheers!

Inspired by Ubuntu’s Insights.

Reducing the size of a large git repository

The github repository of the #NGSchool website has grown to over 5GB. I wanted to reduce the size & simplify this repository, but this task turned out to be quite complicated. Instead, I decided to leave the current repo as is (and probably remove it soon) and start a new repo from the existing version. I could do that, as I don’t care about versions earlier than the one I’m currently using. This is a short how-to:

  1. Push all changes and remove the .git folder

    git push origin master
    rm -rI .git

  2. Rename the existing repo: Settings > Repository name > RENAME
  3. Start a new repository using the old repo name. You don’t need to create any files, as everything already exists locally.
  4. Init your local repo and add the new remote

    git init
    git remote add origin git@github.com:USER/REPO

  5. Commit changes and push

    git add --all . && git commit -m "fresh" && git push origin master

This way, the new repo is below 1GB, which is much better than the previous 5GB.
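
By the way, if you’d rather keep the same GitHub repo, a similar fresh-history result can be had with git’s orphan branches; a sketch of an alternative I haven’t used here, so treat it with care:

# start a new, parentless branch from the current working tree
git checkout --orphan fresh
git add --all . && git commit -m "fresh"
# replace master with the single-commit history
git branch -D master
git branch -m master
# force-push; the old history becomes unreachable on the remote
git push -f origin master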

Convert xls table into abstract book PDF

I had to generate the Abstract book for #NGSchool2016. I had a spreadsheet generated by Google Forms with all the necessary information. I could have copy-pasted all entries and formatted them later on, but I found LaTeX more robust for the task.
As I already had a LaTeX template, the only missing part was the conversion of .xls to .tex. Thus I have written a simple script, xls2tex.py, that generates a .tex file based on the table from an .xls file.
This script, among many other things, converts UTF characters into LaTeX escape sequences.

xls2tex.py depends on xlrd and utf8tolatex (from pylatexenc/latexencode, but it’s provided as a single file).

# install dependencies
sudo apt-get install python-xlrd
# generate tex
./xls2tex.py

# generate pdf
make

You’ll find the output pdf in main.pdf.

Github push fails due to large files

Lately, I have had lots of problems with pushing large files to github. I maintain a compilation of materials and software deposited by other people, so I cannot control the size of the files… and this makes pushes fail often.

git push
remote: error: GH001: Large files detected. You may want to try Git Large File Storage - https://git-lfs.github.com.
remote: error: Trace: 6f0f7f66995a394598595375954732db
remote: error: See http://git.io/iEPt8g for more information.
remote: error: File chip_seq/reads/sox2_chip.fastq.gz is 109.69 MB; this exceeds GitHub's file size limit of 100.00 MB

To remove the large files from the commit history, execute

git filter-branch -f --index-filter 'git rm --cached --ignore-unmatch chip_seq/reads/sox2_chip.fastq.gz' HEAD
git push

To add large files using git-lfs, execute

# track with git lfs all files larger than 50MB, skipping those in the .git folder
find . -type f -size +50M ! -iwholename "*.git*" | rev | cut -f1 -d'/' | rev | xargs git lfs track
# commit and push everything
git add --all . && git commit -m "final" && git push origin

Make sure that your files are smaller than 2GB, otherwise your push will fail again 😉
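
You can spot such offenders in advance with the same find pattern as above:

# list files exceeding the 2GB hard limit, skipping the .git folder
find . -type f -size +2G ! -iwholename "*.git*"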

Then, before pulling on another machine, make sure to install git-lfs

git lfs install
git pull
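
To double-check what LFS is actually tracking and storing, git-lfs provides listing commands:

# list tracked patterns (from .gitattributes)
git lfs track
# list files currently managed by LFS
git lfs ls-files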

Malformed column reporting and joining in BASH by paste or awk

I’ve spent some hours trying to figure out why the heck my scripts using awk and paste were returning malformed output. Simply, lines were wrongly pasted together, some columns were missing, while others were malformed… and in the case of awk, printing columns out of order (e.g. column #3 before column #2: awk '{print $3,$2}') produced malformed output.
After some time, I realised it was due to Windows-style line endings \r\n, instead of the standard Linux-style \n (of course, I got this file from a third party using Windows…).

Below, you can find more details.

# first, let's create a dummy file containing 3 lines and 5 columns, each line ending with \r\n
python -c "with open('wrong.tsv','w') as out: out.write(''.join('line%s\t%s\r\n'%(i, '\t'.join('column%s'%j for j in range(1,5))) for i in range(1,4)))"
# and another ending just with \n
python -c "with open('correct.tsv','w') as out: out.write(''.join('line%s\t%s\n'%(i, '\t'.join('column%s'%j for j in range(1,5))) for i in range(1,4)))"

# now let's paste wrong and correct files
paste wrong.tsv wrong.tsv
line1	line1n1	column1	column2	column3	column4
line2	line2n1	column1	column2	column3	column4
line3	line3n1	column1	column2	column3	column4

paste correct.tsv correct.tsv
line1	column1	column2	column3	column4	line1	column1	column2	column3	column4
line2	column1	column2	column3	column4	line2	column1	column2	column3	column4
line3	column1	column2	column3	column4	line3	column1	column2	column3	column4

# can you see the difference?

Simply, in Unix \r is interpreted as a return to the beginning of the line, thus pasting lines containing such characters will produce garbled output.
In order to convert files containing \r\n into Unix style \n, simply execute:

# replaces file and creates backup: inputfile.bak
sed -i.bak 's/\r$//' inputfile

# creates outputfile with correct formatting
tr -d '\r' < inputfile > outputfile
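
Before converting, it’s worth confirming the culprit; both of these are standard tools:

# 'file' reports "with CRLF line terminators" for such files
file wrong.tsv
# cat -A shows \r as ^M and each line end as $
cat -A wrong.tsv | head -1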

You can read more about newline characters at Wikipedia.

Running Jupyter as public service

Some time ago, I wrote about setting up IPython as a public service. Today, I’ll write about setting up Jupyter, IPython’s descendant, which besides Python supports tons of other languages and frameworks.

The Jupyter notebook will run under a separate user, so your personal files are safe, but it won’t run as a system service. Therefore, you will need to restart it after every system reboot. I recommend running it in a SCREEN session, so you can easily log into the server and check Jupyter’s state, as sketched below.
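
For example, once everything is configured (step 1 below creates the ~/Public/jupyter directory this sketch assumes), you can keep the server alive in a detached screen session:

# start the notebook server in a detached screen session named 'jupyter'
screen -dmS jupyter bash -c 'cd ~/Public/jupyter && jupyter notebook'
# later, re-attach to check its state (Ctrl+a d to detach again)
screen -r jupyter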

  1. Install & set up Jupyter

    # install dependencies
    sudo apt-get install build-essential python-dev
    sudo pip install jupyter

    # create new user
    sudo adduser jupyter

    # login as new user
    su jupyter

    # make sure to add `unset XDG_RUNTIME_DIR` to ~/.bashrc
    # otherwise you'll encounter: OSError: [Errno 13] Permission denied: '/run/user/1003/jupyter'
    echo 'unset XDG_RUNTIME_DIR' >> ~/.bashrc
    source ~/.bashrc

    # generate ssl certificates
    mkdir ~/.ssl
    openssl req -x509 -nodes -days 999 -newkey rsa:1024 -keyout ~/.ssl/mykey.key -out ~/.ssl/mycert.pem

    # generate config
    jupyter notebook --generate-config

    # generate password hash
    ipython -c "from IPython.lib import passwd; passwd()"
    # enter your password twice, save it and copy the password hash
    ## Out[1]: 'sha1:[your hashed password here]'

    # add to ~/.jupyter/jupyter_notebook_config.py
    c.NotebookApp.ip = '*'
    c.NotebookApp.open_browser = False
    c.NotebookApp.port = 8881
    c.NotebookApp.password = u'sha1:[your hashed password here]'
    c.NotebookApp.certfile = u'/home/jupyter/.ssl/mycert.pem'
    c.NotebookApp.keyfile = u'/home/jupyter/.ssl/mykey.key'

    # create some directory for notebook files, e.g. ~/Public/jupyter
    mkdir -p ~/Public/jupyter && cd ~/Public/jupyter

    # start notebook server
    jupyter notebook

  2. Add kernels

    You can add multiple kernels to Jupyter. Here I’ll cover the installation of some:

    • Python

      sudo pip install ipykernel

      # if you wish to use matplotlib, make sure to add to
      # ~/.ipython/profile_default/ipython_kernel_config.py
      c.InteractiveShellApp.matplotlib = 'inline'

    • BASH kernel

      sudo pip install bash_kernel
      sudo python -m bash_kernel.install

    • Perl (this didn’t work for me :/)

      sudo cpan Devel::IPerl

    • IRkernel: follow this tutorial.

    • Haskell

      sudo apt-get install cabal-install
      git clone http://www.github.com/gibiansky/IHaskell
      cd IHaskell
      ./ubuntu-install.sh

Then, just navigate to https://YOURDOMAIN.COM:8881/, accept the self-signed certificate and enjoy!
Alternatively, you can obtain a proper certificate from Let’s Encrypt.

Using existing domain encryption aka Apache proxy
If your domain is already served over HTTPS, you may consider setting up Jupyter on localhost and redirecting all incoming (already encrypted) traffic to a particular port on localhost (as suggested by @shebang).

# enable Apache mods
sudo a2enmod proxy proxy_http proxy_wstunnel && sudo service apache2 restart

# add to your Apache config
    <Location "/jupyter" >
        ProxyPass http://localhost:8881/jupyter
        ProxyPassReverse http://localhost:8881/jupyter
    </Location>
    <Location "/jupyter/api/kernels/" >
        ProxyPass        ws://localhost:8881/jupyter/api/kernels/
        ProxyPassReverse ws://localhost:8881/jupyter/api/kernels/
    </Location>
    <Location "/jupyter/api/kernels/">
        ProxyPass        ws://localhost:8881/jupyter/api/kernels/
        ProxyPassReverse ws://localhost:8881/jupyter/api/kernels/
    </Location>

# update your Jupyter config (~/.jupyter/jupyter_notebook_config.py)
c.NotebookApp.ip = 'localhost'
c.NotebookApp.open_browser = False
c.NotebookApp.port = 8881
c.NotebookApp.base_url = '/jupyter'
c.NotebookApp.password = u'sha1:[your hashed password here]'
c.NotebookApp.allow_origin = '*'

Note, it’s crucial to add the Apache proxy for kernels (/jupyter/api/kernels/), otherwise you won’t be able to use kernels or terminals, due to the failed: Error during WebSocket handshake: Unexpected response code: 400 error.
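
As a quick sanity check that the proxy answers (substitute your actual domain; this is just an illustrative probe):

# expect an HTTP status line from the notebook server behind the proxy
curl -skI https://YOURDOMAIN.COM/jupyter/ | head -1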