Introducing ratestate
Ratestate is a rate limiter in the form of a Node.js module: it transmits the states of different entities while avoiding transmitting the same state twice, and while adhering to a global speed limit.
Recently I was asked to estimate how many hours I worked on a project. Since I hadn't really tracked them I decided to use the Git history to get an indication.
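One quick way to get such an indication is to count the distinct days on which commits were made and multiply by an assumed number of focused hours per active day. This is a sketch of the idea, not necessarily the exact method used for the estimate; the 4-hours-per-day figure and the throwaway demo repo (with faked commit dates) are illustrative assumptions:

```shell
# Build a throwaway repo with commits on two different (faked) days,
# then estimate hours from its history.
REPO=$(mktemp -d)
cd "$REPO"
git init -q .
git config user.email demo@example.com
git config user.name Demo
echo one > file.txt && git add file.txt
GIT_AUTHOR_DATE="2013-01-01T10:00:00" GIT_COMMITTER_DATE="2013-01-01T10:00:00" \
  git commit -qm "first"
echo two >> file.txt
GIT_AUTHOR_DATE="2013-01-02T10:00:00" GIT_COMMITTER_DATE="2013-01-02T10:00:00" \
  git commit -aqm "second"

# The actual estimate: distinct commit dates, times hours per active day.
DAYS=$(git log --pretty=%ad --date=short | sort -u | wc -l)
HOURS=$((DAYS * 4))
echo "Active days: $DAYS, estimated hours (at ~4h/day): $HOURS"
```

Run the last three lines inside the real project repository (optionally adding `--author=` to `git log`) to get your own numbers.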
Four days ago the news about the Heartbleed bug got every sysadmin's attention. Renowned security expert Bruce Schneier writes:
This means that anything in memory -- SSL private keys, user keys, anything -- is vulnerable. And you have to assume that it is all compromised. All of it.
"Catastrophic" is the right word. On the scale of 1 to 10, this is an 11.
Almost spring here! Birds are chirping and we start cleaning out our kitchens and backyards and closets and GitHub accounts. Let's trash some legacy!
Why? Because
But wait, what if your co-worker wants to access some of those commits again? You probably don't feel like peeling archives from crashed backup drives in the basement of your previous building.
Renan and I faced this at true.nl and we started looking for simple solutions.
Despite test cases, syntax errors still find their way into our commits.
Whatever the reason, it's almost 2014 and we are still committing broken code. This needs to change.
Git offers commit hooks to prevent bad code from entering the repository, but you have to install them on a local per-project basis.
Chances are you have been too busy/lazy and never took the time/effort to whip up a commit hook that could deal with all your projects and programming languages.
That holds true for me. However, I recently had some free time and decided to invest it in cooking up ochtra: One Commit Hook To Rule All.
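The core idea can be sketched in a few lines. This is not the actual ochtra source, just a minimal pre-commit hook in the same spirit: syntax-check each staged file based on its extension, and abort the commit if anything fails. The demo installs the hook into a throwaway repo and shows it rejecting a broken shell script:

```shell
# Create a throwaway repo and install a minimal catch-all pre-commit hook.
REPO=$(mktemp -d); cd "$REPO"
git init -q
git config user.email demo@example.com
git config user.name Demo

cat > .git/hooks/pre-commit <<'HOOK'
#!/usr/bin/env bash
FAIL=0
for FILE in $(git diff --cached --name-only --diff-filter=ACM); do
  case "$FILE" in
    *.sh) bash -n "$FILE" || FAIL=1 ;;
    # add more extensions/checkers here (php -l, node --check, ...)
  esac
done
[ "$FAIL" -eq 0 ] || { echo "Commit aborted: syntax errors." >&2; exit 1; }
HOOK
chmod +x .git/hooks/pre-commit

# Stage a script with a syntax error (missing "fi") and try to commit it.
printf 'if true; then echo broken\n' > bad.sh
git add bad.sh
if git commit -qm test 2>/dev/null; then RESULT=committed; else RESULT=rejected; fi
echo "$RESULT"
```

Because hooks live per-repository under `.git/hooks`, a tool like ochtra mainly solves the distribution problem: getting one hook like this into all of your projects.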
When you're upgrading to MySQL 5.6 you may notice strict mode is turned on by default. You can disable it, but now might be a good time to get your schemas strict, to ensure smooth upgrade paths in the future.
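If you decide to keep strict mode on, it is worth pinning it explicitly in your server config instead of relying on the version default. A minimal my.cnf fragment along these lines (the exact mode list is the 5.6 default; check `SELECT @@GLOBAL.sql_mode;` to see what your server currently runs):

```ini
# my.cnf -- pin sql_mode explicitly instead of relying on the version default
[mysqld]
sql_mode = "STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"
```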
I recently tweeted a few best practices that I picked up over the years and got some good feedback, so I decided to write them all down in a blog post. Here goes:
This article was featured on Hacker News. More comments there.
More and more sites are written in flat HTML. Hosted on GitHub Pages, S3, etc. The advantages are clear: ridiculously low to no hosting costs, it can hardly ever break, and with things like Jekyll and Octopress it can still be fun to maintain. And with JavaScript frameworks such as Angular you could build entire apps client-side. The downsides are clear too: no central point of knowledge makes interaction between users hard.
However, with services like Disqus and (my own startup) Transloadit, it gets more and more feasible to just run a flat site and have external services cover for not running server-side code and a database yourself.
In this post I'm going to show you how easy it is to make file uploading possible even if your site is just a single page of HTML.
This article was featured on Hacker News. Some insightful comments there.
Yesterday I wrote my first Firefox OS App.
Sometimes Vagrant hangs while booting your virtual machine. Right after typing:
$ vagrant up
It hangs for a long time and then finally throws:
[default] Failed to connect to VM!
Failed to connect to VM via SSH. Please verify the VM successfully booted
by looking at the VirtualBox GUI.
If you open VirtualBox you'll see that the virtual machine preview shows a menu with kernels to choose from. This is GRUB, waiting for user input before it will boot any further.
Here's how to fix that.
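The gist of the fix is to make GRUB boot its default entry without waiting. Assuming an Ubuntu-style guest (variable names and paths differ per distro), edit the GRUB defaults from the VirtualBox GUI console and run `sudo update-grub` afterwards:

```sh
# /etc/default/grub (inside the guest) -- boot the default entry immediately
GRUB_TIMEOUT=0
# Ubuntu additionally waits indefinitely after a failed boot unless this is set:
GRUB_RECORDFAIL_TIMEOUT=0
```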
At our company we use Capistrano for deploys. It reads Ruby instructions
from a ./Capfile in the project's root directory, then deploys
accordingly via SSH. It has support for releases, shared log dirs, rollbacks,
rsync vs remote cached git deploys, etc. It can be run from any machine
that has access to your production servers. Be it your workstation, or a
Continuous Integration server.
So, all in all, pretty convenient. But typically it assumes you know what servers you
want to deploy to at the time of writing your Capfile.
What if the composition of your platform changes often? Will you keep changing
the Capfile right before every deploy? Seems like effort ; )
If you are writing code in Go and are executing a lot of (remote) commands,
you may want to indent all of their
output, prefix the loglines with hostnames, or mark anything that was thrown to stderr
red, so you can spot errors more easily.
For this purpose I wrote Logstreamer.
In loosely typed languages such as JavaScript or PHP, using ==
to compare values is bad practice because it doesn't
account for type, hence false == 0 == '' == null == undefined, etc.
And you may accidentally match more than you bargained for.
If you want to limit the unintended effects & bugs this may lead to,
it's often wise to use ===.
In the process of converting legacy codebases to use these triple-equality operators, I find that as a rule of thumb you can almost always force triple equality when comparing variables against non-numerical strings.
There's just never a case where you want the text 'Kevin'
to pass for the boolean true, or the number 3.
And if that can still happen in your legacy codebase,
you'll want to limit those risks sooner rather than later. Even if that
breaks things that now accidentally work.
Here are some commands to download the most important pages of your
site as plain text (determined by MAX_DEPTH), and save them into one
big <DOMAIN>.txt file.
This could come in handy when you want to have everything checked for grammar & spelling errors.
After the spellcheck you'd still have to search through your codebase / database to find & fix the culprits, but this should already save you some time in discovery.
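The gist of those commands is: mirror the site with wget, convert each page to plain text, and append everything to one <DOMAIN>.txt. This is a sketch; the mirroring flags are illustrative, the fetch is left commented out so nothing hits the network here, and tools like `lynx -dump` or `html2text` produce cleaner text than the crude sed used to demonstrate the conversion step on a locally created sample page:

```shell
# Fetching would look roughly like this (MAX_DEPTH limits recursion):
# MAX_DEPTH=2
# wget --recursive --level="$MAX_DEPTH" --no-parent "https://example.com/"

# Demonstrate the conversion step on a local sample page instead.
WORKDIR=$(mktemp -d); cd "$WORKDIR"
cat > page.html <<'HTML'
<html><body><h1>Pricing</h1><p>Plans start at ten dollars.</p></body></html>
HTML

DOMAIN=example.com
# Strip tags (crude; lynx -dump gives better results) and append each
# page's text to one big <DOMAIN>.txt file:
for PAGE in *.html; do
  sed -e 's/<[^>]*>/ /g' "$PAGE" >> "$DOMAIN.txt"
done
cat "$DOMAIN.txt"
```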
Recently we moved the Transloadit status page from an unmanaged EC2 instance to the Nodejitsu platform. We kept status uptime history in Redis, and obviously I wanted to preserve that data.
For the new setup I did not have access to the filesystem; I only had a Redis
port to talk to. So instead of rsyncing the .rdb file, I used Redis replication
to migrate the data between instances.
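The migration boils down to three redis-cli calls. In this sketch the hostnames are hypothetical and the commands are echoed rather than executed, so the plan can be reviewed first; swap the `echo` for `"$@"` in `run()` to actually run them:

```shell
# Hypothetical hosts; replace with your own.
OLD=old-redis.example.com
NEW=new-redis.example.com
run() { echo "+ $*"; }  # dry-run helper: prints instead of executing

# 1. Tell the new instance to replicate from the old one:
run redis-cli -h "$NEW" SLAVEOF "$OLD" 6379
# 2. Poll until the initial sync is done (look for master_link_status:up):
run redis-cli -h "$NEW" INFO replication
# 3. Detach the new instance so it becomes a standalone master again:
run redis-cli -h "$NEW" SLAVEOF NO ONE
```

Once step 3 completes, the new instance holds a full copy of the dataset and no longer follows the old master, so the old box can be retired.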