2 Ways to Back Up Your Stuff

People often ask me for advice on how to back up their stuff. Here are 2 options that I find cover pretty much any use case.

Option 1: Operate Out of a Cloud Drive

Suitable for: people who have less than about 100 GB of files they care about. With more than this it can get expensive and, unless you have G Suite, hard to manage.

This option is really simple. Install a consumer cloud drive product like Google Drive, move literally all your files into it (everything in “Documents”, for example), and from then on, use it as your root folder. Everything you do happens in Drive. Works great if you use more than 1 machine too!

Bonus: if you pay for G Suite, you can use the “File Stream” version of Google Drive, which means not all your files have to be copied locally; good if you have a smaller HDD.

Option 2: Back Up to the Cloud

Suitable for: people who have a lot of data like photos or videos that would be expensive or cumbersome to store in a cloud drive.

With this approach, you buy a copy of Arq and back up to a cloud provider. The trick is to use an enterprise cloud provider like Google ($0.007/GB/month) or AWS ($0.004/GB/month) rather than a consumer product. The reason is that most consumer products charge in buckets (for example, 100MB to 1TB), and once you go over, say to 101MB, you’re paying for the next tier. Enterprise clouds, on the other hand, charge less than a penny per GB, and it’s calculated per MB.

On AWS Glacier, a 3TiB backup is $12.28/month, and importantly the cost scales linearly. The main differences between AWS and Google are price and speed of recovery. AWS is significantly cheaper, but it takes around 4 hours to start retrieving your backups, whereas Google is instantaneous. So if you need your backups to be at hand, use Google.
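As a sanity check on that figure, here’s the arithmetic as a quick Ruby sketch (rates as quoted above; I’m assuming Glacier bills the GiB as GB):

# rough monthly cost of a 3TiB backup at Glacier's quoted rate
size_gb = 3 * 1024                          # 3TiB expressed in GiB
rate    = 0.004                             # $/GB/month
puts format("$%.2f/month", size_gb * rate)  # => $12.29/month, give or take rounding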

To use it, simply select the folders to back up and let it do its thing. You may wish to tune the upload parameters so it doesn’t soak your connection (you’ll notice if it does, as everything will grind to a halt; this is also a good time to get an internet connection with decent upload speeds). If you’re familiar with Time Machine, this is more or less that, but in the cloud. It will store incremental versions, which helps protect against accidental damage or ransomware. Obviously a cloud drive can also be used for daily documents that need to be shared.

What About Local Backups?

The problem with local backups is that they exist physically alongside your primary machine. If your primary machine is lost to fire, theft, flood and the like, your backup is subject to the same event. Sure, offsite backups can fix this, but they’re quite a hassle to maintain. Cloud is the ultimate offsite backup, with the added advantage that you don’t have to worry about data durability (e.g. RAID) as you do with personal backups.

Add to my Wunderlist with Google Home

Here’s my recipe to light up “ok Google, add eggs to my list” and have my Google Home add the item to my family’s shared grocery list on Wunderlist.

Bear with me, as there are a few steps. We’ll be using the Mail to Wunderlist feature (where you can send emails to [email protected] and they get added to your list) together with IFTTT, since there is no direct Wunderlist action for Google Assistant.

First you’ll need a Wunderlist account, duh.

Next, add an email address to Wunderlist at the email settings page. I suggest using a desktop computer for this step, as Wunderlist’s site on mobile may be different. If you must do it on mobile, try it in Chrome and use “request desktop version”. Final piece of advice: I suggest using an email account here that you trust to give IFTTT complete access to (which is the next step); I personally chose an old address that I don’t use for much. It’s not that I don’t trust IFTTT specifically – I basically just don’t trust anyone except my email provider with my email account. You’ll need to verify this email address by clicking the link in the email Wunderlist sends you.

Now go to the Mail to Wunderlist settings. Beside your email address, select the list you wish incoming emails to go to (e.g. “Family groceries”).

The rest you can do on your phone, which I recommend. Open the IFTTT app and create a new recipe.

For the “IF”, select Google Assistant. For the phrases, I used “add $ to my list”, “add $ to my shopping list”, and “add $ to my wunderlist”, but feel free to improvise. “Ok, added $ to your list” is the response I chose. For the “THEN” part, choose “send an email” and link Gmail, selecting the account you configured with Wunderlist above. The to address is “[email protected]”; for the subject, choose the “TextField” ingredient. CC yourself for debugging purposes if you need.
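For the curious: all the applet really does is send a plain email. If you wanted to script the same thing yourself, it would look something like this (a sketch using the mail gem; “[email protected]” stands in for whatever account you linked above):

# what the IFTTT recipe effectively sends (sketch; needs `gem install mail`)
require "mail"

Mail.deliver do
  from    "[email protected]"  # the address you verified with Wunderlist
  to      "[email protected]"
  subject "eggs"                   # the subject line becomes the list item
end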

That’s it, you’re done. Try it out on your Home with “OK Google, add eggs to my list”.

Reachability considered harmful

Like many, I have relied on Apple’s Reachability samples over the years to determine internet connectivity.

Generally it works really well, and returns immediately. However, in certain flaky internet situations (for example, on the slopes of Japan with ‘3G’ that isn’t actually working), the method can hang until it times out (which can take over a minute). Toggling airplane mode is a workaround to make Reachability return immediately, and can be used to diagnose a Reachability-related hang.

Recommendation: don’t perform reachability tests on the main thread. Rather, do them periodically in the background, and use only cached values on the main thread.
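The pattern looks something like this (sketched in Ruby for brevity; on iOS you’d wrap the same idea around the Reachability calls):

# a background thread refreshes a cached flag; the main thread only reads the cache
require "socket"

STATUS = { reachable: false }

Thread.new do
  loop do
    STATUS[:reachable] = begin
      Socket.tcp("example.com", 443, connect_timeout: 5) { true }  # any cheap probe will do
    rescue StandardError
      false
    end
    sleep 30  # re-check every 30s; tune to taste
  end
end

def reachable?  # call this from the "main thread": instant, never hits the network
  STATUS[:reachable]
end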

Recording Video from iPhone

Finally, a decent solution for recording video from an iPhone, one that is both perfect and free. Apple has an awesome (yet hidden) feature with Yosemite + iOS 8: QuickTime recording via your Lightning cable.

Follow the steps (copied below after the jump). You can also use it to present an iOS demo (follow the steps, but don’t tap ‘record’, and then present/cast your screen in the usual way).  They even set the clock to 9:41 ;-)

For App Previews, to edit and crop the video, use iMovie (File -> “New App Preview”, then File -> Share -> “App Preview…”). While QuickTime has edit controls, it can’t export at the resolution needed.

To resize for iPhone 6 Plus (and avoid making another video), use ffmpeg:

ffmpeg -i iphone6.mp4 -s 1080x1920 -aspect 1080:1920 -c:a copy iphone6_plus.mp4

Read more…

The Rails Grim Reaper

After upgrading to Rails4, I started to see this error in my logs: ActiveRecord::StatementInvalid: Mysql2::Error: Lost connection to MySQL server during query. It seemed random in some places (normal, but generally larger-than-average queries), and less random in others (a high failure rate in long-running queries, e.g. in workers).

It turns out I had inadvertently set the reaping_frequency to 10s during my upgrade to Rails4. It wasn’t Rails4’s fault, but part of the config changes I made around the upgrade.

At some point, 10 seconds was made the default reaping frequency in Rails (replacing the previous “no reaping” default). This caused problems and was reverted, though sadly not before the practice spread across the internet, including into Heroku’s official recommendation for threaded servers, which I implemented, and which at the time of writing is still Heroku’s recommendation.

ActiveRecord’s connection reaper is designed to remove “dead” connections, and it seems this was promoted as a good thing to do at the time, especially in multi-threaded environments. However, it appears that it does not function correctly and can kill live connections too. Rails issue #9907 indicated that it can kill longer-running queries, and the Rails commit message when it was disabled by default (again) indicated it could cause segfaults in multi-threaded environments. Sounds bad all round!

Searching my codebase for reaping_frequency and removing all traces of this config fixed the issue. The key culprit was the || 10 on the config line recommended by Heroku, which was enabling the reaper (with a frequency of 10s) even when the ENV var was unset.
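For reference, the offending initializer line looked roughly like this (paraphrased from the Heroku snippet; treat it as a sketch):

# config/initializers/database_connection.rb
config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10  # bad: reaper runs every 10s even with no ENV set
# the fix: delete the line, or only set it when explicitly requested
config['reaping_frequency'] = ENV['DB_REAP_FREQ'] if ENV['DB_REAP_FREQ']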

If you needed the reaper for its “good” side effects (as in, you have connections that are for some reason actually dying and need clearing up), then I suggest it might be better to solve that problem another way, one with fewer side effects; for starters, you could try to stop them dying in the first place.

An open suggestion to 99designs: Support HTML5/CSS3 design contests.

I recently launched a landing-page contest on 99designs that was unsuccessful (following 3 very successful experiences with 99designs, mostly for logos).

One thing I realised when looking at the responses to the contest, and re-evaluating my own requirements, is that I don’t actually want someone’s attempt at a landing page in Photoshop; rather, I want a landing page beautifully crafted in HTML5/CSS, ready for me to use.

This is quite different to the “PS2HTML” service offered alongside the contest, where an image-based design is translated into HTML. The goal here is not to skip that step and save money on the image-to-HTML translation; instead, it is about designing with HTML/CSS in the first place. Frameworks such as Bootstrap have shown it is very possible to do HTML-first designs, and I believe that designing this way will lead to a very different final result compared to starting from a 2D image.

I suggested this while on the phone to Steven in support. He told me that in the distant past they did have an option for HTML for a bit extra, but what happened was that people tended to expect their entire website to be created, even with a shopping cart system! I do understand how that situation would occur; certainly I’m not suggesting a complete website service here. He also mentioned that many graphic designers are not very good at HTML. That, I think, is fine: this new category of contest would surely attract a new category of designer.

There would be some hurdles to overcome, such as ensuring people have the right expectations for such a contest (i.e. it won’t include a shopping cart, or PHP programming!). But I believe that the ability to launch a contest seeking a beautiful HTML5/CSS design would be a great addition to the existing image design contest option.

Failing that, here’s a startup idea for someone: “99designs for HTML” ;-)

MCJ Consulting

Congrats to my cousin who has launched her business MCJ Consulting, an Australian accounting contractor and training business!

Working around ‘error: failed to push some refs’ on Heroku

I had a bad week this week trying to spin up some new Heroku dynos, getting loads of “pack-objects died of signal 13” errors, which is basically the connection being interrupted, even on our relatively small 130MB repo.

The Problem: the initial push to Heroku takes a while and can fail, especially on large repositories. If you’re deploying an experimental branch for a new Heroku setup with a high probability of the deploy being refused, it can be very time-consuming and frustrating, even when the push succeeds.

The solution? Use EC2 as a conduit.

That is, spin up an EC2 instance, then check out and deploy your code right from within the same datacenter that Heroku uses.

Read more…

Switching to Rubinius on OS X & Heroku

Rubinius is a Ruby VM written in C++ with true parallelism support, and a cool design that implements as much of the interpreter as possible in Ruby itself.

It’s been in development for a few years now, with financial support from Engine Yard, and is starting to mature. Heroku have included it as a pre-installed Ruby option, so it’s now quite easy to use.

Here’s how I set it up in dev & on heroku:

Read more…

Puma on Heroku with MRI

Or, How to use Puma like Unicorn

I’ve been reading some very good things about Puma (an all-Rack stack, proper thread support, & more). I gave it a try with the thread-friendly Rails4, on the thread-friendly Rubinius. To my surprise, the heroku+rails4+puma+rubinius performance was woeful. I’m not sure what I got wrong, but I was seeing requests 2 orders of magnitude slower than before, with most timing out. I’m not trying to bag Rubinius here, I think it’s a fantastic idea, but for whatever reason it isn’t working for me. Even though Heroku now supports it, perhaps it’s still early days; I’m going to leave it for a while and let it stabilise, at least until the 2.0 release.

So Rubinius didn’t work so great for me – but what about Puma itself? Puma suggests1 you use JRuby or Rubinius, as they support true multi-threading, unlike the total lack of thread concurrency in MRI (the standard/reference Ruby interpreter). However, it turns out you can use Puma quite effectively as a Unicorn replacement simply by treating it like Unicorn.

In Unicorn you only have workers (forked processes) for parallelism. Puma introduces threads, and the default is 16. But it also has workers (which are not enabled by default). In a true-concurrency environment you would normally have 1 worker per CPU core and use threads for the rest; on MRI though, where the GIL prevents threads from running in parallel, these workers are the only way to get OS-level concurrency.

So the key to making Puma behave like Unicorn is to specify multiple workers, but just 1 thread per worker. Your config will look like so:

# config/puma.rb
workers 5     # forked worker processes, as in Unicorn
threads 1, 1  # min, max threads per worker; 1,1 keeps each worker single-threaded

That is, 5 workers (for my app, 5 is the golden number that fits within Heroku’s 512MB memory limit) and 1 thread per worker. As long as you leave the threads at 1,1, your app is using Puma as a drop-in replacement for Unicorn, ready to be scaled with threads when your underlying Ruby implementation can support them.

Actually, what I do on heroku is to use ENV variables (as they suggest for unicorn):

# config/puma.rb
workers Integer(ENV["PUMA_WORKERS"] || 5)
threads Integer(ENV["PUMA_THREADS_MIN"] || 1), Integer(ENV["PUMA_THREADS"] || 1)

That allows you to tune the performance with a simple heroku config:set command rather than needing to push & redeploy.
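For example, to try 4 workers on a running app (using the variable names from the config above):

heroku config:set PUMA_WORKERS=4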

To complete the setup, modify your Procfile and Gemfile:

# Procfile
web: bundle exec puma -p $PORT -e $RACK_ENV -C config/puma.rb
# Gemfile
gem "puma", "~> 2.0.1"

If you want to try it with Rubinius, read my next post.

What about allowing just 2 threads per worker with Puma on MRI, you ask? I ran a bunch of simple ‘ab’ profiles. It seems that with any more than 1 thread, the threads just fall over each other and block horribly (not particularly surprising, due to the GIL). It’s better to leave those requests in the Heroku request queue than to jam them through a single worker. No doubt this will be a different story with a true-concurrency Ruby engine, but until then my conclusion is that extra threads have a distinctly negative effect.

I also tried doubling the workers on a 2x-sized Heroku dyno. I didn’t get any performance benefit from this (it actually went down), so my conclusion is that two 1x dynos with 5 workers each are better than one 2x dyno with 10. As always, your mileage will certainly vary!

Why bother with Puma if you’re going to treat it just like Unicorn? Aside from a small speed increase, and being thread-ready, Puma also handles incoming requests better. In Unicorn, the worker is tied up from the moment the client hits the server until the request is finished, so if the client request is slow, or the HTTP body is large, that worker is wasted. You can read more about that problem on heroku.

In conclusion: you can treat Puma like Unicorn by setting the threads to 1 and using workers as you do in Unicorn. The speed increase may not be massive, since you’re still process-bound, but it’s a more modern server, ready to go multi-threaded when Ruby is. Play around with your worker count; the perfect number is 1 below whatever causes memory warnings on Heroku.

  1. “Puma is designed to be used on a Ruby implementation which provides true parallelism, such as Rubinius and JRuby.” – puma