How we reduced EC2 costs 98% with spot instances

Last year we changed our EC2 system from long-running instances to spot instances launched on demand. This reduced our EC2 bill by 98%. It also ensured that every instance was built with the latest image and security patches and ran only as long as needed.

[Chart: EC2 cost reduction]

Upgrade PostgreSQL on Scientific Linux

We upgraded two database servers this weekend, one from PostgreSQL 9.1 and the other from 9.2, bringing both to 9.3. What follows are my combined process notes, in the hope that they help you.

Preparation

To do this, you must have enough free disk space on your data drive to make a duplicate of the existing cluster (that is, all databases hosted on the server). For example, the data drive on one server was at 55% usage and I had to clear it to 50% (the drive is dedicated to database storage). On the other server it was 66% consumed. In both cases I removed files that were unrelated to the cluster (backups and WAL archives) and moved them off-server. On the second server this wasn't enough. If you can easily install or mount a new drive, that's much easier than these steps, but we didn't have that luxury.
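
A quick way to check how much headroom you actually have before starting (the paths here are the package defaults; ours were on a separate mount):

    df -h /var/lib/pgsql
    sudo du -sh /var/lib/pgsql/9.2/data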

You can free up disk space by re-indexing the databases, running vacuum full, or doing a dump and restore. Re-indexing can be done without taking the database offline. The other two require taking down the database and may take hours or days for a multi-gigabyte cluster. Restoring from a backup file took 18 hours for a 250GB database (13GB gzipped pg_dump backup file) and 39 hours for our 450GB cluster (25GB backup file). From everything I've read, for databases in the hundreds of gigabytes and larger, vacuum full will basically take forever. It's faster to dump and rebuild the database.
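
For what it's worth, the dump-and-restore route looks roughly like the following. This is a minimal sketch using our example database name; the dump path is a placeholder and the application has to be offline, so treat it as an outline rather than our exact procedure:

    sudo su - postgres -c 'pg_dump -Fc app-database-production > /backups/app-database-production.dump'
    sudo su - postgres -c 'dropdb app-database-production'
    sudo su - postgres -c 'createdb app-database-production'
    sudo su - postgres -c 'pg_restore -d app-database-production /backups/app-database-production.dump'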

However, you can recover a significant amount of space by re-indexing. We recovered 100GB of our 600GB cluster by running re-index on each database. Note that this took 3 hours for one 260GB database and 4.5 hours for a different 250GB database. The major difference between the two was that the latter had older data, so its indexes were more fragmented.

    sudo su - postgres -c 'psql app-database-production'
    REINDEX DATABASE "app-database-production";

Instructions

We’re using Scientific Linux. The PostgreSQL Global Development Group has made a repository of builds available for binary distributions. You install the repository by installing the RPM for the repository (this was weirdly meta for me but it works). This creates a pgdg93 repository. See the repository packages page for more links.

    sudo rpm -ivh http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/pgdg-redhat93-9.3-1.noarch.rpm

You can then install the new PostgreSQL 9.3 releases:

    sudo yum install postgresql93 postgresql93-devel postgresql93-libs postgresql93-server postgresql93-contrib

postgresql93-contrib is only needed for the pg_upgrade tool we’re going to use. You can remove it after the upgrade if you want.
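
For example, once you're satisfied the upgrade is complete:

    sudo yum remove postgresql93-contrib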

Make a database folder

Create the new database data folder as the postgres user. These packages install PostgreSQL into /usr/pgsql-VERSION. The default data file location is /var/lib/pgsql/VERSION/data, although ours is mounted on a separate drive.

    sudo su postgres -c '/usr/pgsql-9.3/bin/initdb -D /var/lib/pgsql/9.3/data'

Disable database access

Stop all connections to the database and disable your web application. This next phase can take several hours so you’ll want to make sure you have time. Our 845GB cluster took a little over 2 hours of server downtime.
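
Before stopping the old server, it doesn't hurt to confirm nothing is still connected (the count will include the psql session doing the check):

    sudo su - postgres -c "psql -c 'SELECT count(*) FROM pg_stat_activity;'"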

In our case, closing connections meant stopping the resque workers that we have managed by monit, and disabling the web applications with capistrano maintenance mode. We also stop monitoring the database postmaster process so that monit doesn't restart it while we're doing the upgrade. Obviously these are meant to jog your thoughts; your own infrastructure will look different.

    worker-server$ sudo monit -g resque-workers stop

    database-server$ sudo monit unmonitor postmaster
    database-server$ sudo /etc/init.d/postgresql-9.1 stop

    dev$ cap production deploy:web:disable REASON="a scheduled system upgrade" UNTIL="at 11pm Pacific Time"

Run the upgrade

Run the new pg_upgrade to migrate from the old version (-b,-d) to the new version (-B,-D). This is the part that takes a couple hours per server.

    sudo su postgres -c '/usr/pgsql-9.3/bin/pg_upgrade -B /usr/pgsql-9.3/bin -b /usr/pgsql-9.2/bin -D /var/lib/pgsql/9.3/data -d /var/lib/pgsql/9.2/data'
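
One extra step that isn't in my original notes but is worth knowing about: pg_upgrade has a --check mode that only runs the compatibility checks and doesn't change anything, so you can do a dry run before committing to the downtime (if memory serves, the old server can even stay up for the check):

    sudo su postgres -c '/usr/pgsql-9.3/bin/pg_upgrade --check -B /usr/pgsql-9.3/bin -b /usr/pgsql-9.2/bin -D /var/lib/pgsql/9.3/data -d /var/lib/pgsql/9.2/data'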

Verify the new cluster

Manually inspect the differences between the startup scripts:

    diff /etc/init.d/postgresql-9.?

Transfer any important things to the 9.3 script and remove the 9.2 one. In our case we have a custom PGDATA setting.

Similarly, compare the pg_hba.conf and postgresql.conf files in the old data directory with the new ones. The postgresql.conf comparison can be tedious if you've done a lot of tuning. (p.s. Anyone know of a good diff tool for configuration files that can compare uncommented lines in either version with their commented pairs in the other? A rough workaround is sketched after the diff commands below.)

    diff /var/lib/pgsql/9.2/data/postgresql.conf /var/lib/pgsql/9.3/data/postgresql.conf
    diff /var/lib/pgsql/9.2/data/pg_hba.conf /var/lib/pgsql/9.3/data/pg_hba.conf
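
As a partial answer to my own question: stripping comments and blank lines before diffing at least narrows things down to the settings that are actually in effect (it still won't pair an active setting with its commented-out counterpart):

    diff <(grep -Ev '^[[:space:]]*(#|$)' /var/lib/pgsql/9.2/data/postgresql.conf) \
         <(grep -Ev '^[[:space:]]*(#|$)' /var/lib/pgsql/9.3/data/postgresql.conf)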

Start the new postgresql and analyze the new cluster to optimize your database for the new version. (Note that the analyze_new_cluster.sh script is written into the working directory when pg_upgrade is run.) The analyze script has three phases. The minimal one will get the database up and running in a couple of minutes, so you can bring things back online at that point or wait until it's fully complete.

    sudo /etc/init.d/postgresql-9.3 start
    sudo su postgres -c './analyze_new_cluster.sh'
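
If you'd rather not use the generated script, a plain analyze of every database also works; it just doesn't give you the quick minimal statistics first:

    sudo su postgres -c '/usr/pgsql-9.3/bin/vacuumdb --all --analyze-only'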

Bring things back online

If you’re running monit (or god or something) to manage your postgresql server, you’ll need to modify the script with new references.
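
For example, if your monit check file still points at the 9.2 paths, something like this updates the references (the file path is hypothetical; use wherever your postmaster check actually lives, and review the result before reloading):

    sudo sed -i 's/9\.2/9.3/g' /etc/monit.d/postmaster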

Now, bring everything that you disabled earlier back online.

    database-server$ sudo monit reload

    worker-server$ sudo monit start all

    dev$ cap production deploy:web:enable

Test that things are working with everything up and running.

Clean up

If you’re satisfied with the new system, you can delete the old cluster. This script is installed in the working directory that you ran pg_upgrade from.

    sudo su postgres -c ./delete_old_cluster.sh

If you'd rather be a little more careful (after all, pg_upgrade only copied the database files over), you can delete just the old data/base folder, which is the bulk of the storage, and keep the other configuration files around in case you need to recover them.
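
For instance, assuming the old cluster is in the default location (adjust the path if, like ours, your data directory lives on a separate mount):

    sudo su postgres -c 'rm -rf /var/lib/pgsql/9.2/data/base'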


References:

1. How to install PostgreSQL 9.2 on RHEL/CentOS/Scientific Linux 5 and 6
2. pg_upgrade
3. REINDEX
4. How to optimize PostgreSQL database size

Nginx maintenance page configuration with load-balancer health check

We wanted to add a maintenance page to our rails app running on nginx. Capistrano provides a simple task, cap deploy:web:disable, to show your maintenance page. It includes basic instructions for how to configure nginx, although I found these didn't quite work for us as documented.

One thing the configuration suggests is to return a 503 status code so that crawlers know the site is temporarily down and will retry the crawl later. This made sense, but when the app server behind the load balancer returned 503, the load balancer sent its own 503 to the browser because it detected that the app server was down (which is what it should do, or roll over to another server). What we needed was a way for the load balancer to run its health check against the app server without getting the 503.

While I have read that "If is evil" in nginx configurations, I found that I could check whether the request was the load balancer's health check and return a 200 code within the maintenance handler:

        if ($request_filename ~ '/health_check') {
          return 200;
        }

I also learned a lot from this older post (like how to allow inclusion of graphic assets such as our logo). But the key trick was to put a bare '=' on the error_page line so that nginx would return the handler's response code, rather than 503 for all pages.

      error_page 503 = @maintenance;

Here is the entire configuration that we used (put this at the server level).

      recursive_error_pages on;

      # If the maintenance page exists, answer every request with 503...
      if (-f $document_root/system/maintenance.html) {
        return 503;
      }

      # ...and hand all 503s to the maintenance handler. The bare '=' lets the
      # handler decide the final response code instead of forcing 503.
      error_page 503 = @maintenance;
      location @maintenance {
        # POSTs hitting the static page would return 405; show the maintenance page for those too.
        error_page 405 /system/maintenance.html;

        # Let the load balancer's health check succeed during maintenance.
        if ($request_filename ~ '/health_check') {
          return 200;
        }

        # Serve files that exist (like the logo used on the maintenance page) directly.
        if (-f $request_filename) {
          break;
        }

        # Everything else gets the maintenance page.
        rewrite  ^(.*)$  /system/maintenance.html break;
      }
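
With this config, maintenance mode is driven entirely by the presence of public/system/maintenance.html (which, if I recall the Capistrano task correctly, is the file it writes). A couple of hand checks we found useful; the app path below is a placeholder for your own deploy layout:

    # Verify the health check still answers 200 while in maintenance mode
    curl -i http://localhost/health_check

    # Toggle maintenance mode manually: put a prepared page in place, then remove it
    sudo cp ~/maintenance.html /var/www/app/current/public/system/maintenance.html
    sudo rm /var/www/app/current/public/system/maintenance.html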

Opscode Chef makes operations fun

Years ago, I did some time helping companies plan and build networks and computer infrastructure. Since then, in several companies, I’ve done my share of operations and IT work, but more recently I’ve found that I prefer to delegate that work to someone who finds it more interesting. I’ll be honest; I find the ever-changing complexity of systems configuration to be a frustrating, never-ending learning problem. (This particularly applies to mail server management.)

Over the past month I’ve been playing with Chef and the Opscode Platform and found that I absolutely love learning a new framework that gives me new development tools for solving an existing problem. This means that my argument, above, is entirely irrational. Let me put that another way: Chef is damn cool.

Windows 7 Netbooks: Second Look

Following on my previous post about new netbooks running Windows 7, I wanted to give some of my thoughts to the platform. I spent a week working with each of the two netbooks I ordered: the Toshiba Mini NB205 and the Asus Eee PC T91MT.

Asus Eee Multi-touch Tablet

The Asus Eee PC T91MT is a tablet-format netbook with multi-touch support on the screen (as well as the trackpad). It shipped with a 1.33GHz processor, 1GB RAM, and Windows 7 Home Premium. As mentioned before, the start up was really slow. It didn't get any better. I don't know if it's Windows Home Premium or the 1.33GHz processor, but it really wasn't fast enough to use. Everything needed to wait for a response. I get frustrated with my G1 phone doing this, but I put up with it. No way would I put up with this on my netbook. I suspect that the tablet and multi-touch drivers also slow things down a bit. Bottom line: I know I bought a reference machine for cutting-edge ideas, but until they can get this faster and under $400 it's probably not going to see much traffic. Still, I would love to see more multi-touch tablets in the market.

Toshiba Mini

The Toshiba Mini performed much better. It shipped with a 1.6GHz processor and Windows Starter. The Windows Starter version pisses me off. I mentioned it before, but it seems ridiculous to think I would pay a $90 premium on a $300 netbook to upgrade the OS that shipped with it. Here’s what really grinds my gears: you can’t change the default background of the desktop in Starter edition. You have to pay to upgrade for that privilege. The gall! No wonder small device manufacturers are looking to Android and other low-cost OSes.

Other than that, the Mini was usable and extremely portable. As all the marketing says, it's really easy to pick up the netbook and go. If one of them really did have a 10.5 hour battery life, well, it would be awesome. The keyboard was small, though, and I made lots of typing errors. Even with the errors, typing was still faster than on an iPhone or G1.

Windows 7 Netbooks: First Look

So we're doing a bunch of work targeted at netbooks, and I ordered a couple for testing and to get a feel for how they are (or can be) used. First impressions are important, and between the manufacturers and Microsoft there is a long way to go before these are as sexy as a MacBook.

Toshiba Mini and Asus Eee PC

Unpacking

I ordered an Asus Eee PC T91MT (the link is to the T91, before multi-touch was added) and a Toshiba Mini NB205-N230. Straight from the box, the Asus has much sexier packaging, but it is less eco-friendly (the netbook sits in a plastic shell). It was charged, however, which meant I could use it right away (well, see the notes about start up below). The Toshiba was not. Come on folks, Apple did this ten years ago (or thereabouts) when they shipped the first iPod. People want to use your product out of the box. Ship it charged.

The Eee PC also came with a nice soft case. Granted it was $200 more, so they can do that, but it was a nice touch. Better than the silly white disposable fabric packaging covered in warnings from Toshiba.

The Eee PC has a 9″ screen in a tablet format (the screen swivels around and lies flat so that it works like a tablet). That was compelling enough on its own, but they had just released a multi-touch version (T91MT), which I figured would make it feel like using my MacBook. Almost. Scroll and rotate are nice when they work, and it was nice just to have those gestures there.

Start up

Both of these ship with Windows 7, which is really too much of a resource hog for these little machines. The Eee PC has Windows 7 Home Premium and it is slow, slow, slow. The Toshiba Mini has Windows 7 Starter, which performs a little better (it also has a faster processor: a 1.66GHz N280 vs. the 1.33GHz Z520 in the Eee PC). Starter, from what I can tell, is the stripped-bare version of Windows that Microsoft made for netbooks to compete with Linux. Then they try very hard to get you to upgrade to Home Premium for a mere $90. Really? I just paid $300 for a computer and you want me to pay a 30% premium? Take a lesson from the extended warranty dealers: people will pay about 10% over the purchase price at checkout for added value, but not more.

[Screenshot: the first-run setup animation, "Initializing... Please wait to use your new computer..."]

Apparently Microsoft has decided to ship OEMs a disk image that can be dropped onto any computer and does device discovery and installation on "first run." Sure, it makes it easier to install Windows on these machines (because they all get the same image), but it means that to use a brand new machine with Windows 7 installed, the machine has to go through a lengthy (and not very sexy) installation period. On the Eee PC this took nearly 45 minutes between first power-on and when I could use any software. Most of that time was spent watching the really silly animation shown at right (I don't know if this is Microsoft software or Asus software, probably the latter).

The Toshiba Mini fared better. About 15 minutes from plugging in and turning on to using it. During this time I saw the screen shift between Windows 2000-style dialog layouts and Aero styling several times, with a few rendering errors in the process. This didn't really give me a lot of faith in the product.

A later post gives some of my impressions of using the two netbooks (once I've spent more time on the Toshiba Mini). I will repeat that so far I am not impressed with Windows 7 on a 1.33GHz machine. It's just not fast enough.

Restoring a lost INBOX (or Why I hate email servers)

Email servers are so complex to manage that I can't really justify running my own. Still, I do. Unfortunately, there's a known bug in the old version of SquirrelMail that's available for Ubuntu Hardy that sometimes empties your inbox when you try to empty your trash. Yeah, it's quite a bug.

So this happened (again) and I needed to recover the inbox. My hosting company takes a daily image of my machine, so the best I could do was recover the email as of midnight of the day before it was lost. Still, that's 741 messages, so it needed to be recovered.

I restored the backup image to a new server as step number one. This allowed me access to the messages without affecting anything else on my server.

Then I installed fetchmail and read up on how to use it. After some mistakes (which caused 100+ bounce messages) I got the mail transferred, with a couple of issues: all messages came through marked as unread, and when I ran it twice (because I made mistakes) the messages were duplicated. I'm sure there are options to address this, but I got the job done.

Here’s the fetchmail command to do this.

    fetchmail -v -a -k -p IMAP \
      -u user@example.com \
      --smtpname user@example.com \
      111.222.333.444

The IP address is that of the restored image hosting the backup from the day before the loss.

The parameter to -u is the login username on that image. In this case, I have virtual mail configured for IMAP so the login is the full email address.

The parameter to --smtpname is the email address to deliver the mail to on the new server. This is critical: if you don't add it, fetchmail will send everything to your own account at localhost, which may cause 100+ bounce messages if that's not a valid address. Just saying.