This site is entirely hosted on Amazon’s S3. Nothing more. And it costs me 3¢ a month. If you look closely you might think it’s a WordPress site, and you’d be partly right. The difference is I stripped out the database and webserver and flattened the files to static HTML.
I recently completed a large project moving a Chicago dot-com off a physical hosting provider to being 100% Amazon AWS hosted. To do so required Elastic Load Balancers, many EC2 nodes, S3, CloudFront CDN, Glacier, Relational Database Service, AMIs, CloudWatch monitoring/alerting, and more. It was way cool. Suffice it to say I know my way around AWS, and hosting a WordPress site as flat files off S3 sounded like a fun challenge. And I’m cheap.
How It’s Done
What’s nice about WordPress is it’s easy to set up and easy to use. So the first thing I did was set up a “development environment” on my laptop. I have no problems hacking Apache and MySQL – but MAMP made this trivial and I didn’t have to change my existing apache/mysql configurations. Once it was up and running I used the existing “hello world” posts/pages to get the rest of it working.
Next was the tricky part. With the skeleton WordPress site hosted at localhost, the challenge was dealing with the PHP. Think of it this way: when a request is sent to a PHP page, the webserver does a bunch of magic and spits back HTML. Since I didn’t want to run a webserver, I needed to “flatten” the PHP into HTML that could be served directly off S3. After a ton of tedious work, this is what did the trick:
wget --content-disposition --mirror --cut-dirs=1 -P davedahl.com-static -nH -np -p -k -E http://localhost:8888/davedahl.com/
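For reference, here’s the same command with each flag annotated (the flag descriptions come straight from the wget manual; the localhost URL and directory name are specific to my setup, so this won’t run without a local server like mine):

```shell
# --content-disposition : honor server-supplied filenames
# --mirror              : recurse and download the whole site
# --cut-dirs=1          : drop the leading path component when saving locally
# -P davedahl.com-static: save everything under this local directory
# -nH                   : don't create a "localhost:8888" host directory
# -np                   : never ascend above the starting URL
# -p                    : also fetch page requisites (CSS, images, JS)
# -k                    : convert links so they work when served statically
# -E                    : save pages with an .html extension
wget --content-disposition --mirror --cut-dirs=1 -P davedahl.com-static \
     -nH -np -p -k -E http://localhost:8888/davedahl.com/
```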
Next I had to set up S3 so it was ready to accept the flattened WordPress. The first part was easy – sign up. (Incidentally I chose to use the existing Amazon account that I use to buy books and junk, to keep from over-complicating things.) Once signed up I created an S3 “bucket” with the same name as my domain (davedahl.com). Separately from this website I’d used S3 for cloud backup of my personal things, but this new bucket is special: it needs to serve pages directly to web browsers, which takes some configuration. I made the following configuration changes:
- Under “Permissions” grant “Everyone” permissions to “List” what’s in the bucket.
- Click “Static Website Hosting” and “Enable website hosting”.
- In the associated box under “Enable website hosting” I typed the start page for the website – which for me is index.html.
- Note the “Endpoint” that’s assigned to the public bucket in this same section – this is where your files will be hosted from. It looks like an ugly URL – I’ll tell you how to assign a domain to it with AWS Route 53 a bit later.
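If you’d rather grant access with a bucket policy than by clicking through permissions, a sketch like this (using my bucket name – swap in your own) makes every object in the bucket publicly readable:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::davedahl.com/*"
    }
  ]
}
```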
So, to recap, I set up a base WordPress, squashed the PHP into HTML, and configured an S3 bucket to receive the HTML files. The next step is pushing the files to S3. This can be done using their website (or what Amazon calls the web console) but it’s much easier with the s3cmd command-line tools. I’ll skip the gory and tedious detail and get right to the money. Below is what I used to push the files with s3cmd:
# Remove the old stuff on AWS
s3cmd --config=$S3CMD_CONF del --recursive --force s3://davedahl.com/
# Upload new stuff to AWS
s3cmd --config=$S3CMD_CONF put --reduced-redundancy --recursive -M --no-check-md5 davedahl.com-static/ s3://davedahl.com/
# s3cmd doesn't do proper mime-type assignment (despite the -M flag above)
find "$WP_BASE/davedahl.com-static" -name '*.css' | while read -r absfile
do
    s3cmd --config="$S3CMD_CONF" put --reduced-redundancy -m text/css "$absfile" "s3://davedahl.com${absfile#*static}"
done
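If you want to sanity-check that path-stripping trick before actually pushing anything, here’s a dry-run sketch that prints each s3cmd command instead of executing it (directory and bucket names mirror my setup; the fixture file is only there so the loop has something to find):

```shell
#!/bin/sh
# Dry-run version of the CSS fix-up loop: echoes the s3cmd command for each
# .css file so you can eyeball the destination paths before pushing.
STATIC_DIR="davedahl.com-static"

# Demo fixture for illustration; skip this when running against a real tree.
mkdir -p "$STATIC_DIR/wp-content/themes/demo"
touch "$STATIC_DIR/wp-content/themes/demo/style.css"

find "$STATIC_DIR" -name '*.css' | while read -r absfile
do
    # ${absfile#*static} strips everything through "static", leaving the
    # path relative to the bucket root (e.g. /wp-content/.../style.css).
    echo "s3cmd put -m text/css $absfile s3://davedahl.com${absfile#*static}"
done
```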
Once that was done I was able to hit the “endpoint” mentioned above and see the website in all its glory. For me, as of this writing, that endpoint is davedahl.com.s3-website-us-east-1.amazonaws.com. If that endpoint is still active you can hit it with a browser and see the same content as what’s at davedahl.com. But these endpoints can change over time, so the next step was the DNS mojo required to handle that.
The last thing was assigning my davedahl.com domain to the S3 endpoint using AWS’ Route 53, which is simply a DNS manager. There are other DNS services available – in particular I like FreeDNS.com – but because the AWS S3 endpoint can change, the most elegant way to manage that change was Route 53: Amazon keeps the domain constantly pointed at the endpoint regardless of any changes on their end.
To configure Route 53 I first set up the Start of Authority (SOA), which was trivial. Next was changing the name servers at GoDaddy to point to the name servers Route 53 provided. I then set up an A record aliased to the S3 endpoint (Route 53’s “Alias” feature, since a plain A record can’t point at a hostname), plus a CNAME for the www variant, and I was in business.
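Conceptually the resulting records look something like this (“ALIAS” isn’t standard zone-file syntax – it’s Route 53’s own record type – and the endpoint shown is the one assigned to my bucket, so yours will differ):

```
davedahl.com.      A      ALIAS davedahl.com.s3-website-us-east-1.amazonaws.com.
www.davedahl.com.  CNAME  davedahl.com.s3-website-us-east-1.amazonaws.com.
```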
Setting everything up took me a handful of hours – lots of trial and error. But for me, writing the content, believe it or not, took quite a bit longer. If you have any questions about how I did it, or are interested in hearing about how I saved my last client 70% in month-over-month hosting costs with Amazon AWS, drop me a line here!