Ratestate is a rate limiter in the form of a Node.js module that can transmit states of different entities while avoiding transmitting the same state twice and adhering to a global speed limit.
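The core idea can be sketched like this (a conceptual sketch with made-up names, not ratestate's actual API):

```javascript
// Conceptual sketch of the idea (made-up names, not ratestate's actual API):
// remember only the latest state per entity, flush at a fixed global rate,
// and skip entities whose state did not change since the last transmit.
class RateLimitedStates {
  constructor (transmit) {
    this.transmit = transmit // function (id, state) that does the real work
    this.desired = {}        // latest requested state per entity id
    this.sent = {}           // serialized form of what was last transmitted
  }

  setState (id, state) {
    this.desired[id] = state // overwrites: intermediate states are dropped
  }

  // Called once per interval tick: transmit at most one changed entity,
  // so the global rate never exceeds one transmit per tick.
  tick () {
    for (const id of Object.keys(this.desired)) {
      const next = JSON.stringify(this.desired[id])
      if (this.sent[id] !== next) {
        this.sent[id] = next
        this.transmit(id, this.desired[id])
        return true
      }
    }
    return false // every entity is already at its transmitted state
  }
}

// Usage: rapid state changes come in, only the newest goes out
const log = []
const limiter = new RateLimitedStates((id, state) => log.push(`${id}:${state.progress}`))
limiter.setState('upload-1', { progress: 10 })
limiter.setState('upload-1', { progress: 99 }) // 10 is never transmitted
limiter.tick()
limiter.tick() // nothing changed: transmits nothing
console.log(log.join(',')) // → upload-1:99
```

Driving `tick()` from a `setInterval` gives you the global speed limit; the dedup check gives you "never the same state twice".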
Recently I was asked to estimate how many hours I worked on a project. Since I hadn't really tracked them I decided to use the Git history to get an indication.
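One way to turn a commit log into an hour estimate: pull timestamps out of `git log --pretty=%at` and cluster them into work sessions. The 2-hour session gap and 30-minute warm-up below are assumptions of mine, not an exact method:

```javascript
// Heuristic sketch for estimating hours from commit timestamps (the 2-hour
// session gap and 30-minute warm-up are assumptions, not an exact method).
// Feed it the output of `git log --pretty=%at`, i.e. unix timestamps.
function estimateHours (unixTimes, gapHours = 2, warmupHours = 0.5) {
  const t = [...unixTimes].sort((a, b) => a - b)
  if (t.length === 0) return 0
  let hours = warmupHours // warming up before the very first commit
  for (let i = 1; i < t.length; i++) {
    const gap = (t[i] - t[i - 1]) / 3600
    // commits close together: count the whole gap as work;
    // a longer silence: assume a new session with its own warm-up
    hours += gap < gapHours ? gap : warmupHours
  }
  return hours
}

// Example: three commits in a morning session, one more after a long break
const stamps = [9, 9.5, 11, 15].map(h => h * 3600)
console.log(estimateHours(stamps)) // → 3
```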
Four days ago the news about the Heartbleed bug got every sysadmin's attention. Renowned security expert Bruce Schneier writes:
This means that anything in memory -- SSL private keys, user keys, anything -- is vulnerable. And you have to assume that it is all compromised. All of it.
"Catastrophic" is the right word. On the scale of 1 to 10, this is an 11.
Almost spring here! Birds are chirping and we start cleaning out our kitchens and backyards and closets and GitHub accounts. Let's trash some legacy!
- We're ashamed of old code
- We want to save money by having a lower (private) repo count
- We want to improve the signal-to-noise ratio on our profiles before a job interview
But wait, what if your co-worker wants to access some of those commits again? You probably don't feel like peeling archives from crashed backup drives in the basement of your previous building.
Renan and I faced this at true.nl and we started looking for simple solutions.
Despite test cases, syntax errors still find their way into our commits.
- Maybe it was a change in that bash script that wasn't covered by tests. Too bad our deploys relied on it.
- Maybe it was just a textual change and we didn't think it was necessary to run the associated code before pushing this upstream. Too bad we missed that quote.
Whatever the reason, it's almost 2014 and we are still committing broken code. This needs to change, because:
- Best case: Travis or Jenkins prevents those errors from hitting production, and it's frustrating to go back and revert/redo that stuff. A waste of your time and state of mind, as you've already moved on to other things.
- Worst case: your error goes unnoticed and hits production.
Git offers commit hooks to prevent bad code from entering the repository, but you have to install them on a local per-project basis.
Chances are you have been too busy/lazy and never took the time/effort to whip up a commit hook that could deal with all your projects and programming languages.
That holds true for me. However, I recently had some free time and decided to invest it in cooking up
ochtra: One Commit Hook To Rule All.
When you're upgrading to MySQL 5.6 you may notice strict mode is turned on by default. You can disable it, but now might be a good time to get your schemas strict, to ensure smooth upgrade paths in the future.
I recently tweeted a few best practices that I picked up over the years and got some good feedback. I decided to write them all down in a blog post. Here goes:
This article was featured on Hacker News. More comments there.
However, with services like Disqus and (my own startup) Transloadit, it becomes more and more feasible to just run a flat site and let external services make up for the absence of server-side code and a database.
In this post I'm going to show you how easy it is to make file uploading possible even if your site is just a single page of HTML.
This article was featured on Hacker News. Some insightful comments there.
Yesterday I wrote my first Firefox OS App.
Sometimes Vagrant hangs while booting your virtual image. Right after typing:
$ vagrant up
It hangs for a long time and then finally throws:
[default] Failed to connect to VM! Failed to connect to VM via SSH. Please verify the VM successfully booted by looking at the VirtualBox GUI.
If you open VirtualBox you'll see that the virtual machine preview shows a black screen with kernels to choose from. This is GRUB requiring user input to boot further.
Here's how to fix that.
At our company we use Capistrano for deploys. It reads Ruby instructions
from the ./Capfile in the project's root directory, then deploys
accordingly via SSH. It has support for releases, shared log dirs, rollbacks,
rsync vs remote cached git deploys, etc. It can be run from any machine
that has access to your production servers. Be it your workstation, or a
Continuous Integration server.
So all in all pretty convenient, but typically it assumes you know what servers you
want to deploy to at the time of writing your Capfile.
What if the composition of your platform changes often? Will you keep changing
Capfile right before every deploy? Seems like effort ; )
If you are writing code in Go and are executing a lot of (remote) commands,
you may want to indent all of their
output, prefix the loglines with hostnames, or mark anything that was written to
stderr red, so you can spot errors more easily.
For this purpose I wrote Logstreamer.
Using `==` to compare values is bad practice because it doesn't
account for type: `false == 0`, `0 == ''`, and `null == undefined` all hold,
and you may accidentally match more than you bargained for.
To limit the unintended effects & bugs this may lead to,
it's often wise to use the triple equality operators `===` and `!==`.
In the process of converting legacy codebases to use these triple equality operators, I find that as a rule of thumb you can almost always force triple equality in case of comparing variables against non-numerical strings.
There's just never a case where you want a non-numerical string
to pass for the boolean `true`, or for a number.
And if that can still happen in your legacy codebase,
you'll want to limit those risks sooner rather than later. Even if that
breaks things that now accidentally work.
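A few concrete coercion examples make the danger visible (safe to paste into a Node REPL):

```javascript
// == coerces its operands before comparing; === compares type and value.
console.log(0 == '')           // true:  '' coerces to the number 0
console.log(false == '0')      // true:  both sides coerce to 0
console.log(null == undefined) // true:  special-cased by the language
console.log('' == '0')         // false: == is not even transitive
console.log(0 === '')          // false: different types, no coercion
console.log('5' == 5)          // true:  '5' coerces to 5
console.log('5' === 5)         // false: same value, different type
```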
Here are some commands to download the most important pages of your
site as plain text (crawl depth determined by
MAX_DEPTH), and save everything into one file.
This could come in handy when you want to have everything checked for grammar & spelling errors.
After the spellcheck you'd still have to search through your codebase / database to find & fix the culprits, but this should already save you some time in discovery.
Recently we moved the Transloadit status page from an unmanaged EC2 instance to the Nodejitsu platform. We kept status uptime history in Redis, and obviously I wanted to preserve that data.
For the new setup I did not have access to the filesystem; I only had a Redis
port to talk to. So instead of rsyncing the
.rdb file, I used Redis replication
to migrate the data between instances.