Debian Packaging and Distribution at Ninja Blocks

This is the first in our series of Ninja Engineering posts. It is quite a long one, but hopefully it provides some insight into our internal processes.

Ninja Blocks has been pushing out updates across thousands of little devices, in the form of BeagleBones (and now Raspberry Pi), for the last 12 months or so. The current process is an internally developed update solution which pushes out bundles of source code to the devices, and then unpacks all the files on the device. This method has a few challenges, the biggest being:

  • It stops all the services while extracting the update.
  • The update mechanism is too closely tied to our core services, so any failure in those services compromises our ability to push out new updates.
  • The install is packaged as one large bundle which can take some time to download and extract.
  • The whole process can take quite a while, and some users restart the device thinking it is not responding, which compounds the first point.
  • While we do checksum the update bundle and deliver it over TLS, we have not been signing it.

Overall, a lot of things can go wrong in this process that are out of our control, such as network outages. In the best case the update process simply reruns; in the worst case we leave the customer with a bricked device that needs to be re-imaged, which is far from ideal.

To alleviate some of these issues and align ourselves better with the operating system we host Ninja Blocks on, we are moving to using Debian packages to deliver our product.

So why Debian packages?

The main things which they bring to the party are:

  • Simplified deployment on our target platforms, being Ubuntu and Debian
  • Ensuring everything needed for an upgrade is staged on the device before the upgrade proceeds
  • Removing the compilation stage therefore speeding up deployment and avoiding partial builds
  • Providing better security via signed packages
  • Enabling decentralised updates via S3 and CloudFront
  • Only restarting the service once everything is ready to push into place
  • The ability to re-install packages to put the device back into a working state.

So how do we manage to get what is essentially a bunch of Node.js software, along with a wide array of scripts, onto Debian and Ubuntu? With some great tools, as detailed below.

Firstly we have an Ubuntu system running on AWS which has all of the cross compile tools installed, along with Ruby 1.9.3, Python and OpenJDK.

To set this system up we used Chef, with mostly standard Opscode recipes, along with Vagrant to test the system build. In summary this went as follows:

  • Install Jenkins
  • Install Ruby along with the fpm and repomate gems.
  • Install s3cmd
  • Install nginx, configure the site to proxy to Jenkins.
  • Install build-essential and NodeJS versions.
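
After provisioning, it is worth confirming the build host actually ended up with everything Chef was meant to install. This is a hypothetical helper, not part of our actual recipes; the tool list is an example and should be adjusted to your build host:

```shell
# Hypothetical post-provision sanity check: report any tools
# from the provisioning list that are missing from the PATH.
require() {
  missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool" >&2
      missing=1
    fi
  done
  return $missing
}

# Example invocation; on the real build box this would list
# ruby, gem, s3cmd, nginx, gcc and friends.
require sh tar gzip && echo "toolchain ok"
```

Running this at the start of a Jenkins job fails fast with a clear message, rather than half way through a build.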

The nginx configuration required to proxy Jenkins is quite involved, so I will include it here.

server {
  listen 80;
  rewrite ^ https://$server_name$request_uri? permanent;
}

server {
  listen 443 default ssl;
  ssl_certificate           /etc/ssl/certs/;
  ssl_certificate_key       /etc/ssl/private/;
  ssl_session_timeout  5m;
  ssl_protocols  SSLv3 TLSv1;
  ssl_ciphers HIGH:!ADH:!MD5;
  ssl_prefer_server_ciphers on;

  location / {
    proxy_pass              http://localhost:8080;
    proxy_redirect          http:// https://;
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout   150;
    proxy_send_timeout      100;
    proxy_read_timeout      100;
    proxy_buffers           4 32k;
    client_max_body_size    8m;
    client_body_buffer_size 128k;
  }
}

Once we had the system built we configured a few extra bits under the Jenkins account. This was pretty much just AWS credentials and GPG keys for signing the Debian packages.
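
We will not reproduce our actual key setup here, but for reference, GnuPG can generate a signing key unattended from a batch parameter file along these lines (the name, address and key size below are placeholders, not our real parameters):

```
# Run with: gpg --batch --gen-key signing-key.params
%echo Generating package signing key
Key-Type: RSA
Key-Length: 2048
Name-Real: Example CI
Name-Email: ci@example.com
Expire-Date: 0
%commit
%echo done
```

Generating the key under the Jenkins account means the build jobs can sign packages without any interactive prompts.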

To build a package with fpm, in this case cross compiling Node.js for armv6 (Raspberry Pi), we use the following script.


set -e
set -o errtrace

# store the location of the base working area.
pushd .

curl ${source_bundle} | tar xvz

temp_dir=$(mktemp -d)

# cross compile vars

# Toolchain pulled down from
# and installed in jenkins home directory
export PATH=$HOME/tools/arm-bcm2708/arm-bcm2708hardfp-linux-gnueabi/bin:$PATH

export TOOL_PREFIX="arm-bcm2708hardfp-linux-gnueabi"
export CC="${TOOL_PREFIX}-gcc"
export CXX="${TOOL_PREFIX}-g++"
export AR="${TOOL_PREFIX}-ar"
export RANLIB="${TOOL_PREFIX}-ranlib"
export LINK="${CXX}"
export CCFLAGS="-march=armv6j -mfpu=vfp -mfloat-abi=hard -DUSE_EABI_HARDFLOAT"
export CXXFLAGS="-march=armv6j -mfpu=vfp -mfloat-abi=hard -DUSE_EABI_HARDFLOAT"
export OPENSSL_armcap=6
export GYPFLAGS="-Darmeabi=hard -Dv8_use_arm_eabi_hardfloat=true -Dv8_can_use_vfp3_instructions=false -Dv8_can_use_vfp2_instructions=true -Darm7=0 -Darm_vfp=vfp"
export VFP3=off
export VFP2=on

mkdir -p ${temp_dir}/usr

# navigate into the sources directory
cd node-v${node_version}

# configure and install
./configure --prefix=/usr --without-dtrace --dest-os=linux --without-snapshot
make install DESTDIR=${temp_dir} DESTCPU=arm

# copy over docs
mkdir -p ${temp_dir}/usr/share/doc/nodejs

for i in doc/* AUTHORS ChangeLog LICENSE; do
  cp -R $i ${temp_dir}/usr/share/doc/nodejs
done

# back to the base working area.
popd

fpm -s dir -t deb -n nodejs -v ${package_version} -C ${temp_dir} \
  --deb-user root --deb-group root \
  --deb-compression xz \
  --description "Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications." \
  --category "web" \
  --url \
  -m "Ninjablocks CI<>" \
  --architecture armhf \
  -p nodejs-${package_version}_armhf.deb \
  -d "libstdc++6 (>= 4.7.2)" \
  usr/bin usr/lib usr/share

repomate add -s squeeze nodejs-${package_version}_armhf.deb

This script is run from within Jenkins. While the cross compile bits are interesting, the part that does the package building is the fpm command. This amazing utility wraps up all the arcane Debian packaging in one simple command. Next, we use repomate to add the package to the staging pool.
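
To show roughly what fpm is saving us from, here is a purely illustrative sketch (made-up package name and contents) of hand-rolling a trivial binary .deb with dpkg-deb, the stock Debian tool:

```shell
# Illustrative only: the minimum that goes into a binary .deb.
workdir=$(mktemp -d)
mkdir -p ${workdir}/pkg/DEBIAN ${workdir}/pkg/usr/bin

# Every binary package needs at least a DEBIAN/control file.
cat > ${workdir}/pkg/DEBIAN/control <<EOF
Package: hello-ninja
Version: 0.0.1
Architecture: all
Maintainer: Example <packager@example.com>
Description: Minimal example package
EOF

# Some payload to install under /usr/bin.
printf '#!/bin/sh\necho hello\n' > ${workdir}/pkg/usr/bin/hello-ninja
chmod 755 ${workdir}/pkg/usr/bin/hello-ninja

# Build the package and inspect the result.
dpkg-deb --build ${workdir}/pkg ${workdir}/hello-ninja_0.0.1_all.deb
dpkg-deb --info ${workdir}/hello-ninja_0.0.1_all.deb
```

And that is before maintainer scripts, dependency fields, checksums and compression choices; fpm handles all of it from one command line.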

Once we have completed testing, we run repomate and publish the packages we want to deploy to production. At the moment this is a manual process, as I haven't found a way to automate it in a build environment; hopefully that changes in the future.

repomate publish

To synchronise our repomate repository to S3 we currently use s3cmd. Again, this is run manually, mainly due to limitations of repomate.

s3cmd --verbose --follow-symlinks \
  --acl-public --exclude=\*.db --delete-removed  \
  sync /var/lib/repomate/repository/ s3://repo-bucket-name/

The ultimate goal of this system is to be able to automatically build and distribute new packages whenever a developer commits. So far we have got most of the way there; the biggest remaining limitation is repomate's final publish step. We have overcome quite a few issues along the way. Our build process is still a bit complicated, mainly due to issues with cross compiling for the Raspberry Pi (I may do a post just on this madness), and also because we are trying not to change everything in one hit.

Overall this has been a very fruitful exercise. We can now install ninjablocks on a Pi by adding a Debian repo, importing its GPG key and running a couple of apt commands, which in our view is a big step forward.

echo "deb wheezy beta" > /etc/apt/sources.list.d/ninjablocks.list

apt-key adv --keyserver --recv-keys 0E86E52B682BF664

apt-get update
apt-get install -y ninjablocks

Don't forget to check out how to install Ninja Blocks on your Raspberry Pi.

October 09, 2013


Introducing the new Ninja Engineering blog

G'Day fellow Ninjas!

Quite a few people have told us via our forums that they want to know what we're working on, and that we could be doing a better job of communicating this. We agree. Part of the reason we've been a little quiet of late is that we've been working on a lot of fairly technical stuff that, while not fit for general consumption, might still be interesting to quite a few people.

Therefore, we've started a blog specifically for engineering related content. The aim is to post regularly about what we're working on internally on a more technical level.

We are currently writing a thorough post on exactly what we've been up to. Expect it by the end of the week.

Thanks for reading!

Stay Ninja,