Archive for the ‘Amazon’ Category

Installing Amazon RDS Command Line Toolkit on Ubuntu 10.04

Amazon’s Relational Database Service (RDS) is their answer to a hosted, maintained database service. They currently offer several versions of MySQL on their instances, the ability to incrementally back up your databases, and full fail-over to another availability zone. Pretty awesome stuff compared to having to manage your own MySQL servers: keeping them updated and patched, and managing your own backups.

I am writing these instructions because I didn’t find an Ubuntu package for the RDS Command Line Toolkit on Ubuntu 10.04, and I couldn’t find any clear, concise instructions on how to install or use the tools. I believe that this is one area in which Amazon fails miserably. RDS is a great service, but you have to be a rocket scientist to figure out how to install and use the tools you need just to get started with it.

UPDATE: Scott Moser is working on an Ubuntu package for the RDS tools to go along with the EC2 tools. You can find his work here.

The main hurdle in using RDS is that you need to change a database parameter before you can import a MySQL dump from a local server into the Amazon RDS instance that you create. You have to change database instance parameters with the command line tools, because Amazon’s web-based management console lets you view, but not change, them. Honestly, this was way harder than it should have been.

First, I’ll outline the instructions for installing the RDS command line toolkit, then I’ll supply the parameters that I had to change. Judging from the discussions I found over at the Amazon RDS forum, the need to change some parameters to get RDS to work seems to be a pretty common occurrence.

The RDS Command Line Toolkit is available from Amazon’s website here.

I downloaded the zip file locally and unzipped it to a directory called rds.

On the Ubuntu server:

$ sudo mkdir -p /usr/local/aws
$ mkdir ~/.ec2

Transfer the unzipped rds directory to /usr/local/aws/rds on the server using sftp.

Transfer your cert-.pem and pk-.pem files to the /home/ubuntu/.ec2/ directory.

Set the permissions for the files that you just uploaded
On the Ubuntu server:

$ cd /usr/local/aws/rds/bin
$ sudo chmod 744 *
$ sudo chmod 0700 ~/.ec2
$ sudo chmod 0400 ~/.ec2/*

Set up the credential file

$ cd /usr/local/aws/rds
$ sudo cp credential-file-path.template credential-file
$ sudo nano credential-file

Add your own credentials to this file. This should be pretty self-explanatory.

# Enter the AWS Keys without the < or >
# These can be found at 
# under Account->Security Credentials

Set the permissions on the credential file

$ sudo chmod 600 /usr/local/aws/rds/credential-file

Add these lines to the end of your ~/.bashrc file

$ sudo nano ~/.bashrc

Here are the lines to add:

# Set Java home directory for EC2 tools
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk

# Set location of AWS key
export EC2_REGION=eu-west-1
export EC2_URL=
export EC2_PRIVATE_KEY=~/.ec2/pk-.pem
export EC2_CERT=~/.ec2/cert-.pem

# Set location of the ec2 and rds command line tools
export EC2_HOME=/usr
export AWS_RDS_HOME=/usr/local/aws/rds

# Set AWS path
export PATH=$PATH:$EC2_HOME/bin:$AWS_RDS_HOME/bin

You will notice that a few of the lines we added relate to EC2 instead of RDS. This is because I already had the ec2-api-tools package installed. If you need those tools as well, you can install them with the following command, but you probably already have them.

sudo apt-get install ec2-api-tools

Now that you have the RDS tools installed and your credentials set up, you should “source” your .bashrc file.

$ source .bashrc

At this point, you should create a database instance and a database parameter group through Amazon’s Management console. You need to create a second parameter group because you can’t modify the default group.

You will also need to give the instance security group access to the RDS database security group through the web management console. The Ubuntu server at EC2 that will be accessing your database on RDS will need to be a member of the instance security group.

Once you have created those two items, you should be able to see them through the command line tools that we just installed.

$ rds-describe-db-instances
$ rds-describe-db-parameter-groups

Now that you have the RDS command line tools installed and working, you need to change the following parameters in your RDS parameter group. RDS uses the latin1_swedish_ci collation by default, so I needed to change the default for new databases to UTF-8. Additionally, I had some stored procedures in my MySQL databases, and there is no super-user privilege on RDS, so I needed to change the log_bin_trust_function_creators parameter to be able to upload my databases. Last, but not least, my 10 megabyte database wouldn’t upload, so I figured out that I needed to increase the max_allowed_packet value. The commands, parameters, and values are below. In this example, my additional database parameter group is called mygroupname.

This is where I see Amazon’s failure to make this easy for someone who hasn’t used RDS before. I was only uploading a 10 MB database, which isn’t that big, and at the very least I needed to change max_allowed_packet to get my data into RDS. I would assume this is a common issue for just about anyone.

$ rds-modify-db-parameter-group mygroupname --parameters="name=character_set_server, value=utf8, method=immediate" 
$ rds-modify-db-parameter-group mygroupname --parameters="name=collation_server, value=utf8_general_ci, method=immediate"
$ rds-modify-db-parameter-group mygroupname --parameters="name=max_allowed_packet, value=67108864, method=immediate"
$ rds-modify-db-parameter-group mygroupname --parameters="name=log_bin_trust_function_creators, value=1, method=immediate"
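
The 67108864 above is just 64 MB expressed in bytes (max_allowed_packet is set in bytes); you can sanity-check the value in the shell before setting it:

```shell
# max_allowed_packet is specified in bytes; 64 MB = 64 * 1024 * 1024.
echo $((64 * 1024 * 1024))
# prints 67108864
```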

Then you need to assign your database instance to use the new group:

$ rds-modify-db-instance mydbinstancename --db-parameter-group-name mygroupname

Last, you’ll need to reboot your instance for the parameter group settings to take effect. Note that it can take several minutes before you see your RDS instance back online, though it happened pretty quickly for me.

$ rds-reboot-db-instance mydbinstancename

To import your local database dump to your RDS instance, you use this command on your local server:

$ mysql -h -uroot -pmypassword mydatabasename < mylocaldatabasedumpname.sql
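
To make the round trip explicit, here is a sketch of the full dump-and-load sequence. The endpoint, user, and database names are hypothetical placeholders (your real endpoint comes from rds-describe-db-instances), and the commands are echoed rather than executed, so nothing here touches a live server.

```shell
# All values below are hypothetical placeholders -- substitute your own.
RDS_HOST="mydbinstancename.abc123.eu-west-1.rds.amazonaws.com"
DB_USER="root"
DB_NAME="mydatabasename"
DUMP_FILE="mylocaldatabasedumpname.sql"

# Dump the local database, then load the dump into the RDS instance.
echo "mysqldump -u${DB_USER} -p ${DB_NAME} > ${DUMP_FILE}"
echo "mysql -h ${RDS_HOST} -u${DB_USER} -p ${DB_NAME} < ${DUMP_FILE}"
```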

Here are some instructions that I found somewhat useful.
Amazon RDS User Guide
Hosting a simple LAMP application in the cloud

ec2-consistent-snapshot fails during cron

I was getting the following errors when running ec2-consistent-snapshot to create automatic nightly backups of an EBS volume. You can find Eric Hammond’s ec2-consistent-snapshot script here.

Backing up volume vol-06ca566f
Can't exec "xfs_freeze": No such file or directory at /usr/bin/ec2-consistent-snapshot
line 470.
ec2-consistent-snapshot: ERROR: xfs_freeze -f /vol: failed(-1)
Can't exec "xfs_freeze": No such file or directory at /usr/bin/ec2-consistent-snapshot
line 470.
ec2-consistent-snapshot: ERROR: xfs_freeze -u /vol: failed(-1)
PHP Notice:  Undefined index: v in /opt/scripts/backup/remove_old_snapshots.php on
line 28
PHP Notice:  Undefined index: v in /opt/scripts/backup/remove_old_snapshots.php on
line 31
PHP Notice:  Undefined index: n in /opt/scripts/backup/remove_old_snapshots.php on
line 34
ERROR: Please provide number of snapshots to keep (-n option)

This batch job ran fine when run manually, but failed when executed via cron. The problem is that cron runs with a different PATH than an interactive shell environment.

You can fix the issue by adding the following line to the beginning of the crontab. This sets the PATH so that cron can find the executables it needs in the various directories.

$ sudo crontab -e

add this line
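
The original line didn’t survive here, but a typical PATH entry at the top of a crontab would look something like the sketch below; the directories are assumptions based on the standard system paths plus the tool locations used in the post above, so adjust them to wherever your tools actually live.

```
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/aws/rds/bin
```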


Newer version of Elasticfox now available

For those who use Elasticfox Firefox Extension for Amazon EC2 to manage their AWS entities, you may want to know that an updated version of Elasticfox is available from the AWS developer website.

It works with Firefox 4 (as did the previous version), and allows you to tag your entities a bit better. The interface has been slightly optimized and improved and there are a few more columns that are available for you in the instances view, such as Name, which is a big help.

It doesn’t allow you to manage your RDS instances, but it’s still got a lot of small added items that add up. If you are using Elasticfox, you really want to update.

Elasticfox updated menu

It also has some added features in the right click menu, such as instance lifecycle, the ability to edit the tags, and it also has menu items for termination protection, which allows you to access the features that help prevent accidentally terminating an instance.

You can find all you need here:
Original ElasticFox
EC2 tag (updated) version
The updated EC2 tag version has a repository with the latest versions hosted at

Tagging AWS entities

I found this great source of information from Mark Russell on the ec2ubuntu Google group email list. I thought I would document/share it here.

If you are using EC2, you can assign tags to almost any kind of AWS resource, including instances and images. Then when you run ec2-describe-{volumes, instances, images, etc.} you can get some human-readable output! Of course, the tags are also viewable in the AWS web console.
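
As a quick sketch of what that looks like with the EC2 API tools: the instance ID and tag values below are hypothetical, and the commands are echoed rather than run so the sketch needs no AWS credentials.

```shell
# Hypothetical instance ID and tag values -- substitute your own.
INSTANCE_ID="i-1234abcd"
TAG_CMD="ec2-create-tags ${INSTANCE_ID} --tag Name=webserver --tag Environment=production"
DESCRIBE_CMD="ec2-describe-instances --filter tag:Environment=production"

# Echoed rather than executed, so this is safe to run anywhere.
echo "${TAG_CMD}"
echo "${DESCRIBE_CMD}"
```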

Introduction to tagging from the Amazon Web Services blog:

Official documentation:

Review of publicly available Drupal stack Amazon AMIs

Until Pantheon Mercury 1.1 or 1.2 comes out publicly, I wanted to set up a plain vanilla Drupal server for development on Amazon’s EC2 environment.

I wanted the server to have:

  • Ubuntu 10.04
  • Apache
  • MySQL
  • PHP
  • Drupal
  • Drush
  • Webmin
  • PhpMyAdmin

There are a ton of AMIs out there, so I wanted to find one that was already available and easy to spin up and configure with my server name and database name/users.

NOTE: The preferred method for me is still going to be to use Pantheon. I was only looking for a short-term, temporary image for a development server while we wait for Pantheon to go public/live.

The final result for me was that I spun up an Ubuntu 10.04 LTS server. The latest official Canonical Ubuntu AMIs are all listed at for each region, and the list is automatically updated with the latest images.
From there, I installed LAMP all in one go with a single command, then added mail servers, PhpMyAdmin, Webmin, Drupal, and finally Drush.

It actually would have taken me less time to just do that from the start, instead of going out looking for other AMIs and trying out the ones I found.

Here’s the command to get you started:

$ sudo tasksel install lamp-server
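
The rest of the stack goes on the same way. A hedged sketch, assuming Ubuntu 10.04 (lucid) package names; note that Webmin is not in the official repositories and needs Webmin's own .deb or repository, and the command is echoed rather than executed so the sketch doesn't modify your system.

```shell
# Package names are assumptions based on Ubuntu 10.04's universe repository.
PKGS="phpmyadmin drush"

# Echoed, not run -- drop the echo to actually install.
echo "sudo apt-get install ${PKGS}"
```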

I found 3 main public AMIs for Drupal. Here’s what I thought about each one of them.

Bitnami

I tried spinning up one of the free Bitnami AMIs on a micro instance. I decided that it was not a viable solution due to poor documentation and the non-standard, non-Ubuntu way in which software such as Apache and Drupal is installed.

If you run on the Bitnami cloud, it’s easy to spin up an instance and back it up. There is a flat fee for using their system; for a single server, it’s not worth it ($49/month for up to 3 servers, on top of Amazon charges).

Pros:

  • Up-to-date versions of Drupal (6.20 and 7) are available.
  • Can run on a micro instance.
  • AMIs are EBS boot.
  • A free AMI is available in addition to the cloud-hosted version, which has a backup feature similar to Turnkey Linux’s Hub setup.

Cons:

  • I found the Bitnami stack unusable because of the customized installation of Apache.
  • Apache is installed in /opt/bitnami/apache2.
  • sudo service apache2 restart didn’t work; there is a Bitnami-specific command to restart it.
  • Drupal was installed in a non-traditional location, /opt/bitnami/apps/drupal.
  • By default, Apache was set up so that the site root showed a Bitnami splash page; to get to Drupal, I had to go to a sub-path.
  • Changing the default Apache document root was not intuitive (I couldn’t figure it out).
  • No username/password was supplied for PhpMyAdmin.
  • Poor documentation.

Turnkey Linux

I run another, non-Drupal Turnkey Linux appliance and it works pretty well, so I was familiar with how their AMIs and general setup work. Their software images are open source, and you can download and install them on your own hardware. They do not offer a free public Amazon AMI; if you use their Amazon AMI, you kick it off from their Hub. After logging in, you can spin up an instance and configure automatic backups, similar to Bitnami’s cloud backup feature. They charge 10% on top of the Amazon charges, so it’s pretty affordable for a single server.

I decided against the Drupal AMI, without installing it, because of the following cons.


  • No free AMI version to try like Bitnami has.
  • The AMI is an instance boot.
  • The AMIs do not support micro instance, so you have to run a small instance at a minimum.
  • The AMI was built on a much older version of Drupal, 6.16 (not really a big issue).
  • The following statement is on their page:

Note: Drupal 6 includes a built-in update status module.
* When enabled could produce confusing/annoying version notifications.
* This is due to drupal6 being installed and updated via the APT package manager, instead of manually from the upstream source code.

I didn’t want to deal with having Ubuntu’s apt-get manage my Drupal updates; I would rather use Drush.

Jumpbox

They didn’t have a free version to try, so I didn’t try their service. It looks similar in setup and price to Bitnami.

