Finally, a decent solution for recording video from an iPhone, one that is both perfect and free. Apple has an awesome (yet hidden) feature with Yosemite + iOS 8: QuickTime recording via your Lightning cable.
Read all about it from Stack Exchange (copied below after the jump). You can also use it to present an iOS demo (follow the steps, but don’t tap ‘record’, and then present/cast your screen in the usual way). They even set the clock to 9:41 ;-)
Over the years I bought four Xbox 360s. The first was left behind in another country (it still works), the second red-ringed on the *day* the Xbox One launched, and its replacement was sold when I moved country again, the funds going to purchase the fourth (still connected) in the new region. So you could say I rather liked that console.
After holding out for a year, I picked up a One for $349+tax with Assassin’s Creed Unity & Black Flag thrown in as digital downloads, and promptly bought a copy of Destiny (going digital so I could download it on the 360 as well, if I needed to).
Destiny was very disappointing – it’s like Halo merged with WoW, but taking only the worst elements of both games. When you shoot at people, numbers fly off; there’s an utterly pointless WoW-like lobby which is basically a glorified menu; and the missions are repetitive and grindy. Worse, you have to level up (grind) before you can “progress” in the storyline. I think some of the designers were a bit too WoW-obsessed (I’ve seen this problem before first hand). I would have preferred a good ol’ Halo campaign with awesome multiplayer any day.
But Black Flag is awesome (what’s not to like about commanding a pirate ship?), and the story is a hoot (you’re not actually the Assassin, you pretend to be him – but get all his cool abilities anyway). I’ve yet to try my free copy of Unity; maybe I’ll wait till the bugs are fixed ;) As for GTA V… I didn’t think I would buy the game twice, but I will – the graphics are superb and first-person mode is cool, plus I never actually got through the story on the 360, so I may as well do it with nicer graphics.
CoD Advanced Warfare seems solid, I love how they tell their single-player story without interruption. Cutscenes use the game engine, and are often interactive or at least seamlessly meld, as is the CoD tradition – even load screens are mood-setting fades-to-black (can’t say that about Destiny). Multiplayer is fun, with a unique load-out model that uses a point system rather than specific slots – so you can customise it a lot more than say Halo 4, and it supports on-couch multiplayer!! (aside: it saddens me that so few games do this anymore, I’m committed to voting with my wallet on the ones that do).
I’m now consuming all my media content on the One as well. It’s my 5th “smart” device (including the TV itself), but the interface is so nice, and the processor is much more powerful, making everything load super fast (I like my Roku but it takes 30s just to boot the main menu). It’s also nice to have a Blu-ray player, which I’ve been using a lot (thanks to the confusingly named Netflix DVD service, which actually offers Blu-ray for most movies – this is the product they tried to brand ‘Qwikster’). There’s YouTube, Netflix, Hulu, Amazon Movies, and HBO Go apps; all that’s missing for me is Spotify, but I can live without it.
Conclusion: with the price as it is, and decent games on the market, it seems like a good time to jump in.
Hunger Games – Blu-ray + Digital Download (iTunes or UltraViolet): $8.98
Hunger Games – iTunes: $14.99
So you can get an actual physical disc, shipped to your door for free (in under 2 days with Prime), that you can lend, re-sell, and watch at high quality without downloading, all for less than the digital copy – AND you get a free digital copy. This pattern is repeated on nearly all movies; sometimes they even throw in a third format (DVD) of the same movie!
I must say, printers have really come a long way in the last few years! I picked up a Canon PIXMA MG5520 this week and I’m truly impressed.
It’s an all-in-one scanner and printer, with 5 ink cartridges (3x color, black for photo paper and a separate larger black for regular paper).
Here’s a selection of my favourite features:
Basically, it operates as a stand-alone device; your Mac/PC is a companion (receiving scans / sending documents) but isn’t a crucial part of the chain.
All this for just $69 on Amazon!
After upgrading to Rails 4, I started to see this error in my logs: ActiveRecord::StatementInvalid: Mysql2::Error: Lost connection to MySQL server during query. It seemed random in some places (normal, but generally larger-than-average queries), and less random in others (a high failure rate in long-running queries, e.g. in workers).
It turns out I had inadvertently set the reaping_frequency to 10s during my upgrade to Rails 4. It wasn’t Rails 4’s fault, but part of the config changes I made around this upgrade.
At some point, 10 seconds was made the Rails default (replacing the previous “no reaping” default). This caused problems and was reverted – sadly not before the practice spread over the internet, including into Heroku’s official recommendation for threaded servers, which I implemented, and which at the time of writing is still Heroku’s recommendation.
ActiveRecord’s connection reaper is designed to remove “dead” connections, and it seems this was promoted as a good thing to do at the time, especially in multi-threaded environments. However, it appears that it does not function correctly and can kill live connections too. Rails issue #9907 indicates that it can kill longer-running queries, and the Rails commit message from when it was disabled by default (again) indicates it could cause segfaults in multi-threaded environments. Sounds bad all round!
Searching my codebase for reaping_frequency and removing all traces of this config (especially the || 10 on the config line recommended by Heroku, which was enabling the reaper with a frequency of 10s by default) fixed the issue.
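For reference, the Heroku-recommended initializer looked roughly like the sketch below – this is a reconstruction from memory, so the file name and ENV variable names are assumptions, not a verbatim copy. The fix is simply to drop the reaping_frequency line:

```ruby
# config/initializers/database_connection.rb (hypothetical reconstruction)
#
# Heroku's threaded-server recommendation re-established the connection
# pool with an explicit size, and crucially defaulted the reaper to 10s:
#
#   config['reaping_frequency'] = ENV['DB_REAP_FREQ'] || 10   # <- the culprit
#
# The fixed version below keeps the pool-size tuning but leaves the
# reaper disabled by never setting reaping_frequency at all:
Rails.application.config.after_initialize do
  ActiveRecord::Base.connection_pool.disconnect!

  ActiveSupport.on_load(:active_record) do
    config = ActiveRecord::Base.configurations[Rails.env]
    config['pool'] = ENV['DB_POOL'] || 5
    # no reaping_frequency here: the reaper stays off
    ActiveRecord::Base.establish_connection(config)
  end
end
```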
If you needed the reaper for its “good” side effects (i.e. you have connections that are for some reason actually dying and need clearing up), then I suggest solving that problem another way, with fewer side effects – for starters, you could try to stop them dying in the first place.
I recently launched a landing-page contest on 99designs that was unsuccessful (following 3 very successful experiences with 99designs, mostly for logos).
One thing I realised when looking at the responses to the contest, and re-evaluating my own requirements, is that I don’t actually want someone’s attempt at a landing page in Photoshop, but rather I want a landing page beautifully crafted in HTML5/CSS, and ready for me to use.
This is quite different to the “PS2HTML” service offered alongside the contest, where an image-based design is translated into HTML. The goal here is not to skip that step and save money on the image-to-HTML translation; instead it is about designing with HTML/CSS in the first place. Frameworks such as Bootstrap have shown it is very possible to do HTML-first designs, and I believe that designing in this way will have a very different outcome on the final result compared to starting from a 2D image.
I suggested this while on the phone to Steven in support. He told me that in the distant past they did have an option for HTML for a bit extra, but what happened is that people tended to expect their entire website to be created, shopping cart system and all! I do understand how that situation would occur; certainly I’m not suggesting a complete website service here. He also mentioned that many graphic designers are not very good at HTML – that, I think, is fine: this new category of contest would surely attract a new category of designer.
There would be some hurdles to overcome, such as ensuring people have the right expectations for such a contest (i.e. it won’t include a shopping cart, or PHP programming!). But I believe the ability to launch a contest seeking a beautiful HTML5/CSS design would be a great addition to the existing image design contest option.
Failing that, here’s a startup idea for someone: “99designs for HTML” ;-)
I had a bad week this week trying to spin up some new Heroku dynos, getting loads of “pack-objects died of signal 13” errors, which basically means the connection was interrupted, even on our relatively small 130MB repo.
The problem: the initial push to Heroku takes a while and can fail, especially on large repositories. If you’re deploying an experimental branch for a new Heroku setup with a high probability of the deploy being refused even if the push succeeds, it can be very time-consuming and frustrating.
The solution? Use EC2 as a conduit.
That is, spin up an EC2 instance, then check out and push your code from within the same datacenter that Heroku uses, so the slow, failure-prone leg of the push never crosses the open internet.
Rubinius is a Ruby VM written in C++ with true parallelism support, and a cool design that attempts to write as much of the interpreter as possible in Ruby itself.
It’s been in development for a few years now, with financial support from Engine Yard, and is starting to mature. Heroku has included it as a pre-installed Ruby option, so it’s now quite easy to use.
Here’s how I set it up in dev & on heroku:
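The setup itself is essentially a Gemfile change – a sketch is below. The exact version numbers depend on which Rubinius release Heroku has installed at the time, so treat them as placeholders:

```ruby
# Gemfile
# Select Rubinius via Bundler's `ruby` DSL: the engine is "rbx",
# and the version numbers here are placeholders for whatever
# Ruby-compatibility level / Rubinius release you are targeting.
ruby "2.0.0", engine: "rbx", engine_version: "2.0.0"

gem "rails", "~> 4.0"
gem "puma",  "~> 2.0"
```

Then `bundle install` locally under Rubinius (e.g. installed via rbenv/RVM) to regenerate Gemfile.lock; Heroku reads the same `ruby` line from the lockfile on push and provisions the matching runtime.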
I’ve been reading some very good things about Puma (an all-Rack stack, proper thread support, and more). I gave it a try with the thread-friendly Rails 4, on the thread-friendly Rubinius. To my surprise, the Heroku+Rails4+Puma+Rubinius performance was woeful. I’m not sure what I got wrong, but I was seeing requests two orders of magnitude slower than before, with most timing out. I’m not trying to bag Rubinius here – I think it’s a fantastic idea – but for whatever reason it isn’t working for me. Even though Heroku now supports it, perhaps it’s still early days; I’m going to leave it for a while and let it stabilise, at least until the 2.0 release.
So Rubinius didn’t work so great for me – but what about Puma itself? Puma suggests you use JRuby or Rubinius, as they support true multi-threading, unlike MRI (the standard/reference Ruby interpreter), which has none. However, it turns out you can use Puma quite effectively as a Unicorn replacement simply by treating it like Unicorn.
In Unicorn you only have workers (forked processes) for parallelism. Puma introduces threads (the default is 16), but it also has workers (not enabled by default). In a truly concurrent environment you would normally have 1 worker per CPU core and use threads for the rest; on MRI, though, where the GIL stops threads from running in parallel, these workers are the only way to get real OS-level concurrency.
So the key to make Puma behave like Unicorn is to specify multiple workers, but just 1 thread per worker. Your config will look like so:
# config/puma.rb
workers 5
threads 1,1
That is, 5 workers (for my app, 5 is the golden number to work within Heroku’s 512MB memory limit) and 1 thread per worker. As long as you leave the threads at 1,1, your app is using Puma as a drop-in replacement for Unicorn, ready to be scaled with threads once your underlying Ruby implementation supports them.
Actually, what I do on Heroku is use ENV variables (as they suggest for Unicorn):
# config/puma.rb
workers Integer(ENV["PUMA_WORKERS"] || 5)
threads Integer(ENV["PUMA_THREADS_MIN"] || 1), Integer(ENV["PUMA_THREADS"] || 1)
That allows you to tune performance with a simple heroku config:set command rather than needing to push and redeploy.
To complete the setup, modify your Procfile and Gemfile:
# Procfile
web: bundle exec puma -p $PORT -e $RACK_ENV -C config/puma.rb
# Gemfile
gem "puma", "~> 2.0.1"
If you want to try it with Rubinius, read my next post.
What about allowing just 2 threads per worker with Puma on MRI, you ask? I ran a bunch of simple ‘ab’ profiles. It seems that with any more than 1 thread, the threads just fall over each other and block horribly (not particularly surprising, given the GIL). It’s better to leave those requests in the Heroku request queue than to jam them through a single worker. No doubt this will be a different story with a truly concurrent Ruby engine, but until then my conclusion is that extra threads have a distinctly negative effect.
I also tried doubling the workers on a 2x-sized Heroku dyno. I didn’t get any performance benefit from this (it actually went down), so my conclusion is that two 1x dynos with 5 workers each are better than one 2x dyno with 10. As always, your mileage will certainly vary!
Why bother with Puma if you’re going to treat it just like Unicorn? Aside from a small speed increase, and being thread-ready, Puma also handles incoming requests better. In Unicorn, the worker is tied up from the moment the client hits the server until the request is finished, so if the client’s request is slow, or the HTTP body is large, that worker is wasted. You can read more about that problem on Heroku.
In conclusion: you can treat Puma like Unicorn by setting the threads to 1 and using workers as you do in Unicorn. The speed increase may not be massive, since you’re still process-bound, but it’s a more modern server, ready to go multi-threaded when Ruby is. Play around with your worker count – the perfect number is 1 below whatever causes memory warnings on Heroku.