At our company we use Capistrano for deploys. It reads Ruby instructions from a ./Capfile in the project's root directory, then deploys accordingly via SSH. It has support for releases, shared log directories, rollbacks, rsync vs. remote cached git deploys, and more. It can be run from any machine that has access to your production servers, be it your workstation or a Continuous Integration server.
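For reference, and assuming Capistrano 2 conventions, a deploy recipe typically looks something like this (a sketch with made-up values, not our exact config):

# Example Capistrano 2 recipe (sketch, values are made up)
set :application, "myapp"
set :repository, "git@example.com:myapp.git"
set :scm, :git
set :deploy_via, :remote_cache
set :deploy_to, "/var/www/myapp"
set :keep_releases, 5

Running cap deploy then pushes out a new release, and cap deploy:rollback switches back to the previous one.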
So all in all it's pretty convenient, but it typically assumes you know which servers you want to deploy to at the time of writing your Capfile.
What if the composition of your platform changes often? Will you keep changing the Capfile right before every deploy? Seems like effort ;)
Dynamic Configuration of Deploy Targets
Here's what a snippet of a handwritten Capfile might look like:
# Static Capistrano targets
role :app,
  "server1.example.com",
  "server2.example.com"
But if you have a highly volatile cloud platform where servers come and go, you probably don't want to edit your Capfile before every deploy just to reflect what's currently in production.
There are probably better ways to write this since my Ruby-fu is limited, but assuming you keep a variable_server_list.sh script that prints each of your platform's current hostnames on a new line (e.g. by querying your hosting provider's API; a sketch of such a script is at the end of this post), you could define your :app role dynamically like so:
# Dynamic Capistrano targets
hostnames = run_locally("./variable_server_list.sh").split("\n")

hostnames.each do |hostname|
  server hostname, :app
end
With the :app role built this way, Capistrano will deploy to all servers that are currently active in production.
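For completeness, here is a hypothetical sketch of what the script behind ./variable_server_list.sh could do. It can be written in whatever language you like, as long as it prints one hostname per line; here it's a Ruby equivalent, and the API endpoint and JSON shape are made up for illustration:

#!/usr/bin/env ruby
# Hypothetical server list script: ask the hosting provider's API
# for the active servers and print one hostname per line.
require "net/http"
require "json"
require "uri"

response = Net::HTTP.get(URI("https://api.example-hosting.com/v1/servers"))
JSON.parse(response).each do |server|
  puts server["hostname"]
end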
Hope this helps!