<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>BarneyBlog &#187; amazon</title>
	<atom:link href="http://www.barneyb.com/barneyblog/category/amazon/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.barneyb.com/barneyblog</link>
	<description>Thoughts, rants, and even some code from the mind of Barney Boisvert.</description>
	<lastBuildDate>Mon, 02 Mar 2020 13:20:35 +0000</lastBuildDate>
	<generator>http://wordpress.org/?v=2.9.2</generator>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
			<item>
		<title>Migration Complete!</title>
		<link>https://www.barneyb.com/barneyblog/2011/09/28/migration-complete/</link>
		<comments>https://www.barneyb.com/barneyblog/2011/09/28/migration-complete/#comments</comments>
		<pubDate>Wed, 28 Sep 2011 22:56:19 +0000</pubDate>
		<dc:creator>barneyb</dc:creator>
				<category><![CDATA[amazon]]></category>
		<category><![CDATA[meta]]></category>
		<category><![CDATA[personal]]></category>
		<category><![CDATA[potd]]></category>

		<guid isPermaLink="false">http://www.barneyb.com/barneyblog/?p=1738</guid>
		<description><![CDATA[This morning I cut barneyb.com and all its associated properties over from my old CentOS 5 box at cari.net to a new Amazon Linux "box" in Amazon Web Services' us-east-1 region.  Migration was pretty painless.  I followed the "replace hardware with cloud resources" approach that I advocate and have spoken on at various places.  The [...]]]></description>
			<content:encoded><![CDATA[<p>This morning I cut <code>barneyb.com</code> and all its associated properties over from my old CentOS 5 box at cari.net to a new Amazon Linux "box" in Amazon Web Services' <code>us-east-1</code> region.  Migration was pretty painless.  I followed the "replace hardware with cloud resources" approach that I advocate and have spoken on at various places.  The process looks like this:</p>
<ol>
<li>launch a virgin EC2 instance (I used the console and based it on <code>ami-7f418316</code>; steps 1-4 are also sketched with the EC2 command line tools just after this list).</li>
<li>create a data volume and attach it to the instance.</li>
<li>allocate an Elastic IP and associate it with the instance.</li>
<li>set up an A record for the Elastic IP.</li>
<li>build a setup script which will configure the instance as needed.  I feel it's important to use a script for this so that if your instance dies for some reason you can create a new one without too much fuss.  It's not strictly necessary, but part of the cloud mantra is "don't repair, replace" because new resources are so inexpensive.  Don't forget to store it on your volume, not the root drive or an ephemeral store.  Here's one useful snippet for modifying /etc/sudoers that took me a little digging to figure out:
<pre>bash -c "chmod 660 /etc/sudoers;sed -i -e 's/^\# \(%wheel.*NOPASSWD.*\)/\1/' /etc/sudoers;chmod 440 /etc/sudoers"</pre>
</li>
<li>rsync all the various data files from the current server to the new one (everything goes on the volume; symlink &#8211; via your setup script &#8211; where necessary).  Again, use a script.</li>
<li>once you're happy that your scripts work, kill your instance,</li>
<li>launch a new virgin EC2 instance,</li>
<li>attach your data volume,</li>
<li>associate your Elastic IP,</li>
<li>run your setup script,</li>
<li>if anything didn't turn out the way you wanted, fix it, and go back to step 8.</li>
<li>shut down all the state-mutating daemons on the old box.</li>
<li>shut down all the daemons on the new instance.</li>
<li>set up a downtime message in Apache on the old box.  I used these directives:
<pre>RewriteEngine  On
RewriteRule    ^/.+/.*    /index.html    [R]
DocumentRoot   /var/www/downtime</pre>
</li>
<li>run the rsync script.</li>
<li>turn on all the daemons on your new instance.</li>
<li>add <code>/etc/hosts</code> records to the old box and update DNS with the Elastic IP.</li>
<li>change Apache on the old box to proxy to the new instance (so people will get the new site without having to wait for DNS to flush).
<pre>ProxyPreserveHost   On
ProxyPass           /   http://www.barneyb.com/
ProxyPassReverse    /   http://www.barneyb.com/</pre>
<p>These directives are why you need the rules in <code>/etc/hosts</code>; otherwise you'll be in an endless proxy loop.  You'll need to tweak them slightly for your SSL vhost.  The ProxyPreserveHost directive is important so that the new instance still gets the original Host header, allowing it to serve from the proper virtual host.  This lets you proxy all your traffic with a single directive and still have it split by host on the new box.</p></li>
</ol>
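<p>For steps 1-4, the console works fine, but the same provisioning can be scripted.  Here's roughly what it looks like with the EC2 command line tools (all the IDs and the address below are placeholders, and flags may vary slightly by tools version):</p>
<pre># launch an instance from the AMI, create and attach a data volume,
# then allocate an Elastic IP and bind it to the instance
ec2-run-instances ami-7f418316 -t m1.small -k my-keypair -z us-east-1a
ec2-create-volume -s 50 -z us-east-1a
ec2-attach-volume vol-12345678 -i i-12345678 -d /dev/sdf
ec2-allocate-address
ec2-associate-address 203.0.113.10 -i i-12345678</pre>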
<p>The net result was a nearly painless transition.  There was a bit of downtime during the rsync copy (I had to sync about 4GB of data), but only a few minutes.  Once the new box was populated and ready to go, the proxy rules allowed everyone to keep using the sites, even before DNS was fully propagated.  Now, a few hours later, the only traffic still going to my old box is from <code>Baiduspider/2.0; +http://www.baidu.com/search/spider.html</code>, whatever that is.  Hopefully it'll update its DNS cache like a well-behaved spider should, though clearly not according to my TTLs.  Hmph.</p>
<p>Steps 1-12 (the setup) took me about 4 hours to do for my box.  Just for reference, I host a couple Magnolia-backed sites, about 10 WordPress sites (including this one), a WordPressMU site, and a whole pile of CFML apps (all running within a single Railo).  I also host MySQL on the same box which everything uses for storage.  Steps 13-19 took about an hour, most of that being waiting for the rsync and then running through all the DNS changes (about 20 domains with between 1 and 10 records each).</p>
<p>And now I have extra RAM.  Which is a good thing.  I'm sure a few little bits and pieces will turn up broken over the next few days, but I'm quite happy with both the process and the result.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.barneyb.com/barneyblog/2011/09/28/migration-complete/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Route 53 API Regression</title>
		<link>https://www.barneyb.com/barneyblog/2011/05/25/route-53-api-regression/</link>
		<comments>https://www.barneyb.com/barneyblog/2011/05/25/route-53-api-regression/#comments</comments>
		<pubDate>Wed, 25 May 2011 23:03:14 +0000</pubDate>
		<dc:creator>barneyb</dc:creator>
				<category><![CDATA[amazon]]></category>

		<guid isPermaLink="false">http://www.barneyb.com/barneyblog/?p=1689</guid>
		<description><![CDATA[If you're using Route 53, the latest revision of the API (2011-05-05) changed the casing of the 'name' and 'type' parameters for paged listings.  Previously (in version 2010-10-01) they were 'Name' and 'Type'.  Unfortunately, it appears that the casing change has leaked backwards to the old API version.  So if you're doing paged listings anywhere, [...]]]></description>
			<content:encoded><![CDATA[<p>If you're using Route 53, the latest revision of the API (2011-05-05) changed the casing of the 'name' and 'type' parameters for paged listings.  Previously (in version 2010-10-01) they were 'Name' and 'Type'.  Unfortunately, it appears that the casing change has leaked backwards to the old API version.  So if you're doing paged listings anywhere, you'll want to go update your code to use the lowercase names, even if you're still using the 2010-10-01 endpoint.</p>
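<p>This only affects the pagination parameters on record set listings.  A paged request looks roughly like this (the zone ID is a placeholder), with 'name' and 'type' now lowercase even against the old endpoint:</p>
<pre>GET /2010-10-01/hostedzone/Z1EXAMPLE/rrset?name=www.barneyb.com.&amp;type=CNAME&amp;maxitems=100</pre>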
<p>UPDATE: AWS contacted me and apparently <a href="http://www.barneyb.com/barneyblog/2011/05/26/update-route-53-api-regression/">the problem was documentation, not a regression</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.barneyb.com/barneyblog/2011/05/25/route-53-api-regression/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Minor AmazonS3.cfc Bug Fix</title>
		<link>https://www.barneyb.com/barneyblog/2010/09/02/minor-amazons3-cfc-bug-fix/</link>
		<comments>https://www.barneyb.com/barneyblog/2010/09/02/minor-amazons3-cfc-bug-fix/#comments</comments>
		<pubDate>Thu, 02 Sep 2010 22:10:49 +0000</pubDate>
		<dc:creator>barneyb</dc:creator>
				<category><![CDATA[amazon]]></category>
		<category><![CDATA[cfml]]></category>

		<guid isPermaLink="false">http://www.barneyb.com/barneyblog/?p=1578</guid>
		<description><![CDATA[Today I identified a subtle bug with the listObjects method of AmazonS3.cfc dealing with delimiters.  If you supply a prefix that ends with a trailing delimiter, certain paths would be returned partially truncated.  Removing the trailing delimiter solves the issue, so there's an easy workaround, but I've added a snippet to take care of that [...]]]></description>
			<content:encoded><![CDATA[<p>Today I identified a subtle bug with the listObjects method of AmazonS3.cfc dealing with delimiters.  If you supply a prefix that ends with a trailing delimiter, certain paths would be returned partially truncated.  Removing the trailing delimiter solves the issue, so there's an easy workaround, but I've added a snippet to take care of that if you inadvertently pass one in.  The patched CFC <a href="https://ssl.barneyb.com/svn/barneyb/!svn/bc/6774/amazon/trunk/amazons3.cfc">is available here</a>.  You can always get the latest version on <a href="http://www.barneyb.com/barneyblog/projects/amazon-s3-cfc/">the project page</a>.</p>
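<p>If you're on an older copy, the workaround is simply to trim the trailing delimiter before calling listObjects.  A minimal sketch (the bucket and prefix are placeholders):</p>
<pre>&lt;!--- "photos/" would trigger the bug; trim it to "photos" first ---&gt;
&lt;cfset prefix = "photos/" /&gt;
&lt;cfif right(prefix, 1) EQ "/"&gt;
  &lt;cfset prefix = left(prefix, len(prefix) - 1) /&gt;
&lt;/cfif&gt;
&lt;cfset q = s3.listObjects("my-bucket", prefix) /&gt;</pre>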
]]></content:encoded>
			<wfw:commentRss>https://www.barneyb.com/barneyblog/2010/09/02/minor-amazons3-cfc-bug-fix/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Amazon S3 CFC Update &#8211; Now With Listings!</title>
		<link>https://www.barneyb.com/barneyblog/2010/06/08/listings-for-amazon-s3-cfc/</link>
		<comments>https://www.barneyb.com/barneyblog/2010/06/08/listings-for-amazon-s3-cfc/#comments</comments>
		<pubDate>Tue, 08 Jun 2010 17:48:00 +0000</pubDate>
		<dc:creator>barneyb</dc:creator>
				<category><![CDATA[amazon]]></category>
		<category><![CDATA[cfml]]></category>

		<guid isPermaLink="false">http://www.barneyb.com/barneyblog/?p=1543</guid>
		<description><![CDATA[I've added two new methods to my Amazon S3 CFC: listBuckets and listObjects.Â  Both of them do about what you'd expect, returning a CFDIRECTORY-esque recordset object containing the rows you are interested in.Â  I've attempted to make S3 appear like a "normal" filesystem where "/" is S3 itself, the top-level directories are your buckets, and [...]]]></description>
			<content:encoded><![CDATA[<p>I've added two new methods to my <a href="http://www.barneyb.com/barneyblog/projects/amazon-s3-cfc/">Amazon S3 CFC</a>: listBuckets and listObjects.Â  Both of them do about what you'd expect, returning a CFDIRECTORY-esque recordset object containing the rows you are interested in.Â  I've attempted to make S3 appear like a "normal" filesystem where "/" is S3 itself, the top-level directories are your buckets, and your objects are below that.Â  At the moment no consideration is made for paging or truncation.Â  Leveraging the new functionality, here's complete source for a simple S3 browser (minus your key/secret):</p>
<pre>&lt;cfparam name="url.path" default="" /&gt;

&lt;cfset s3 = createObject("component", "amazons3").init(
  "YOUR_AWS_KEY",
  "YOUR_AWS_SECRET"
) /&gt;

&lt;cfoutput&gt;
&lt;cfset bp = "" /&gt;
&lt;h1&gt;
&lt;a href="?path=#bp#"&gt;ROOT&lt;/a&gt;
&lt;cfloop list="#url.path#" index="segment" delimiters="/"&gt;
  &lt;cfset bp = listAppend(bp, segment, "/") /&gt;
  / &lt;a href="?path=#bp#"&gt;#segment#&lt;/a&gt;
&lt;/cfloop&gt;
&lt;/h1&gt;

&lt;cfif url.path EQ ""&gt;
  &lt;cfset b = s3.listBuckets() /&gt;
  &lt;ul&gt;
    &lt;cfloop query="b"&gt;
      &lt;li&gt;&lt;a href="?path=/#name#"&gt;#name#/&lt;/a&gt; #dateLastModified#&lt;/li&gt;
    &lt;/cfloop&gt;
  &lt;/ul&gt;
&lt;cfelse&gt;
  &lt;cfset q = s3.listObjects(listFirst(url.path, '/'), listRest(url.path, '/')) /&gt;
  &lt;ul&gt;
    &lt;li&gt;&lt;a href="?path=#reverse(listRest(reverse(url.path), '/'))#"&gt;..&lt;/a&gt;&lt;/li&gt;
    &lt;cfloop query="q"&gt;
      &lt;li&gt;
      &lt;cfif type EQ "dir"&gt;
        &lt;a href="?path=#listAppend(directory, name, '/')#"&gt;#name#/&lt;/a&gt;
      &lt;cfelse&gt;
        &lt;a href="#s3.s3Url(bucket, objectKey)#"&gt;#name#&lt;/a&gt;
      &lt;/cfif&gt;
      &lt;/li&gt;
    &lt;/cfloop&gt;
  &lt;/ul&gt;
&lt;/cfif&gt;
&lt;/cfoutput&gt;</pre>
<p>The default mode of operation assumes a delimiter of '/' (just like a filesystem).  If you want to do non-delimited operations (like generic prefix matching), you'll want to supply an empty delimiter, or you'll get weird results.  For example:</p>
<pre>&lt;cfset k_objects = s3.listObjects('my-bucket', 'k', '') /&gt;
</pre>
<p>If you omit the third parameter, the default '/' will be used, and you'll get back objects within the 'k' pseudo-directory, rather than objects that begin with a 'k'.  This is the reverse of the default position of the raw S3 API, which assumes you want simple prefixing and makes you explicitly add the delimiter if you want pseudo-directory contents.</p>
<p>This dichotomy can also lead to weird results in the resulting recordset.  Every recordset comes with both 'bucket' and 'objectKey' columns that match the raw S3 nomenclature and 'directory' and 'name' columns that match the filesystem "view" of S3.  If you're doing raw prefixes you'll want to use bucket/objectKey (as the directory/name semantic doesn't work with prefixes).  If you're doing filesystem type stuff you'll probably want directory/name (though bucket/objectKey will still be correct).</p>
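<p>To make the two views concrete, here's a hypothetical loop that prints the same rows both ways (the bucket and prefix are placeholders):</p>
<pre>&lt;cfset q = s3.listObjects("my-bucket", "photos") /&gt;
&lt;cfoutput query="q"&gt;
  &lt;!--- filesystem view ---&gt;
  #directory#/#name#
  &lt;!--- raw S3 view ---&gt;
  (#bucket#: #objectKey#)&lt;br /&gt;
&lt;/cfoutput&gt;</pre>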
]]></content:encoded>
			<wfw:commentRss>https://www.barneyb.com/barneyblog/2010/06/08/listings-for-amazon-s3-cfc/feed/</wfw:commentRss>
		<slash:comments>17</slash:comments>
		</item>
		<item>
		<title>Amazon CloudFront CFC</title>
		<link>https://www.barneyb.com/barneyblog/2010/01/19/amazon-cloudfront-cfc/</link>
		<comments>https://www.barneyb.com/barneyblog/2010/01/19/amazon-cloudfront-cfc/#comments</comments>
		<pubDate>Tue, 19 Jan 2010 19:03:43 +0000</pubDate>
		<dc:creator>barneyb</dc:creator>
				<category><![CDATA[amazon]]></category>
		<category><![CDATA[cfml]]></category>

		<guid isPermaLink="false">http://www.barneyb.com/barneyblog/?p=1184</guid>
		<description><![CDATA[Amazon CloudFront is a CDN that sits atop their S3 file hosting service to provide caching and geographically dispersed delivery.  It's all very simple, except security.  Much like my Amazon S3 CFC's original goal, my new Amazon CloudFront CFC's primary purpose is to ease the creation of signed URLs for CloudFront.  You can grab a [...]]]></description>
			<content:encoded><![CDATA[<p>Amazon CloudFront is a CDN that sits atop their S3 file hosting service to provide caching and geographically dispersed delivery.  It's all very simple, except security.  Much like my <a href="http://www.barneyb.com/barneyblog/projects/amazon-s3-cfc/">Amazon S3 CFC</a>'s original goal, my new <a href="http://www.barneyb.com/barneyblog/projects/amazon-cloudfront-cfc/">Amazon CloudFront CFC</a>'s primary purpose is to ease the creation of signed URLs for CloudFront.  You can grab a copy from <a href="http://www.barneyb.com/barneyblog/wp-content/uploads/2010/01/amazoncloudfrontcfc.txt">amazoncloudfrontcfc.txt</a>.  The API for the CFC is about what you'd expect:</p>
<pre>&lt;cfset cloudfront = createObject("component", "amazoncloudfront").init(keyPairId, privateKeyFile) /&gt;
&lt;cfset signedUrl = cloudfront.signUrlWithTimeout(<span style="color: #0000ff;">resourceUrl</span>, <span style="color: #ff0000;">600</span>) /&gt;</pre>
<p>This will generate a signed URL for the CloudFront <span style="color: #0000ff;">resourceUrl</span> (direct domain or CNAMEd) which expires in <span style="color: #ff0000;">600</span> seconds (10 minutes).  Very much like the S3 CFC.  Here we're dealing with resourceURLs directly (which correspond to a bucket and object key) rather than separate buckets and object keys.  CloudFront doesn't distinguish between the two parts, so the CFC doesn't either.</p>
<p>The biggest gotcha, however, is with the signing mechanism.  S3 uses a simple pre-shared key, but CloudFront uses an RSA private key which is significantly more complicated to deal with.  Unfortunately, Amazon provides its keys in PEM format, but core Java can only read DER format, so you must either convert your private key to DER format with OpenSSL, or use a third party library.  Fortunately, both are pretty simple.</p>
<p>Here's the command to convert your key with OpenSSL:</p>
<pre>openssl pkcs8 -topk8 -in pk-KEYPAIRID.pem -nocrypt -outform DER -out pk-KEYPAIRID.der</pre>
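<p>Once converted, reading the DER key with core Java boils down to something like this (a sketch of the idea, not necessarily the CFC's exact internals):</p>
<pre>&lt;cfscript&gt;
  // read the raw PKCS#8/DER bytes and rebuild the RSA private key
  keyBytes = fileReadBinary(expandPath("pk-KEYPAIRID.der"));
  keySpec = createObject("java", "java.security.spec.PKCS8EncodedKeySpec").init(keyBytes);
  keyFactory = createObject("java", "java.security.KeyFactory").getInstance("RSA");
  privateKey = keyFactory.generatePrivate(keySpec);
&lt;/cfscript&gt;</pre>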
<p>Alternatively, you can use the <a href="http://juliusdavies.ca/commons-ssl/">not-yet-commons-ssl</a> package, which provides support for reading keys in PEM format (among a pile of other things).  If you can add the JAR to your classpath, this is definitely a superior solution to manual conversion, since you can transparently use pretty much any RSA private key.  And there's nothing to enable in the CFC; if not-yet-commons-ssl is available on the classpath, it'll automatically use it for reading in the private key.  As an aside, the name 'not-yet-commons-ssl' reflects the fact that the package has applied for <a href="http://commons.apache.org/">Apache Commons</a> incubation, but it hasn't been accepted yet.  The code is organized in the 'org.apache.commons.ssl' package assuming its acceptance, but it is still unofficial.</p>
<p>As always, updates and such are available on the <a href="http://www.barneyb.com/barneyblog/projects/amazon-cloudfront-cfc/">project page</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.barneyb.com/barneyblog/2010/01/19/amazon-cloudfront-cfc/feed/</wfw:commentRss>
		<slash:comments>14</slash:comments>
		</item>
		<item>
		<title>AmazonS3.cfc Update</title>
		<link>https://www.barneyb.com/barneyblog/2008/05/11/amazons3cfc-update/</link>
		<comments>https://www.barneyb.com/barneyblog/2008/05/11/amazons3cfc-update/#comments</comments>
		<pubDate>Mon, 12 May 2008 05:28:41 +0000</pubDate>
		<dc:creator>barneyb</dc:creator>
				<category><![CDATA[amazon]]></category>
		<category><![CDATA[cfml]]></category>

		<guid isPermaLink="false">http://www.barneyb.com/barneyblog/?p=392</guid>
		<description><![CDATA[I've updated my AmazonS3 CFC to include local caching of files.  The new source is available here: amazons3.cfc.txt, or visit the project page.Â  The only public API change from the first version is the addition of an optional third parameter to the init method for specifying the local directory to use as a cache. [...]]]></description>
			<content:encoded><![CDATA[<p>I've updated my AmazonS3 CFC to include local caching of files.  The new source is available here: <a href="http://www.barneyb.com/barneyblog/wp-content/uploads/2008/05/amazons3cfc.txt">amazons3.cfc.txt</a>, or visit the <a href="http://www.barneyb.com/barneyblog/projects/amazon-s3-cfc/">project page</a>.Â  The only public API change from the <a href="http://www.barneyb.com/barneyblog/wp-content/uploads/2008/04/amazons3cfc.txt">first version</a> is the addition of an optional third parameter to the init method for specifying the local directory to use as a cache.  If you're doing repetitive read operations on S3-stored assets, using the local cache can speed things up significantly, though it is not without drawbacks.</p>
<p>In particular, the CFC assumes that it is the only interface to the S3-stored assets it manages.  If you use any other mechanism to manipulate those assets (including multiple CF applications), you'll run into issues.  The cache itself is the canonical source for cache state, so emptying the cache folder will always revert the CFC back to S3's state if the cache is out of sync.</p>
<p>If you cluster multiple CF instances together, you can still use the local cache, but you must use a single cache for all CF instances.  I.e. the cache must reside on a disk shared by all instances, rather than each instance having its own separate cache.  This reduces the performance benefit slightly (since you must use a non-local disk), but it will still be faster than S3.</p>
<p>The CFC exposes a deleteCacheFor() method that accepts a bucket and objectKey pair that can be used for managing the cache outside of actual S3 operations.  If you have multiple CF instances that cannot share a single local cache, or for which the network overhead for a shared cache is still undesirable, you can use this method to synchronize the instances' caches via JMS or something.  Obviously that's far outside the scope of the CFC itself, but the hook is there to support it.  Note that you must delete the cache when overwriting an asset on S3, as the local cache will not pick up the change in S3; it will continue to return the old version if it's not cleared.</p>
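<p>For example, if some other process overwrites an object, you'd knock out the stale entry like so (the bucket and key are placeholders):</p>
<pre>&lt;cfset s3.deleteCacheFor("my-bucket", "images/photo.jpg") /&gt;</pre>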
]]></content:encoded>
			<wfw:commentRss>https://www.barneyb.com/barneyblog/2008/05/11/amazons3cfc-update/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>S3 is Sweet (One App Down)</title>
		<link>https://www.barneyb.com/barneyblog/2008/04/07/s3-is-sweet-one-app-down/</link>
		<comments>https://www.barneyb.com/barneyblog/2008/04/07/s3-is-sweet-one-app-down/#comments</comments>
		<pubDate>Mon, 07 Apr 2008 20:10:50 +0000</pubDate>
		<dc:creator>barneyb</dc:creator>
				<category><![CDATA[amazon]]></category>
		<category><![CDATA[coldfusion]]></category>
		<category><![CDATA[development]]></category>

		<guid isPermaLink="false">http://www.barneyb.com/barneyblog/2008/04/07/s3-is-sweet-one-app-down/</guid>
		<description><![CDATA[This weekend I ported my big filesystem-based app to S3, and it went like a dream.  It's an image-management application, with all the actual images stored on disk.  In addition to the standard import/edit/delete, the app provides automatic on-the-fly thumbnail generation, along with primitive editing capabilities (crop, resize, rotate, etc.).  With images [...]]]></description>
			<content:encoded><![CDATA[<p>This weekend I ported my big filesystem-based app to S3, and it went like a dream.  It's an image-management application, with all the actual images stored on disk.  In addition to the standard import/edit/delete, the app provides automatic on-the-fly thumbnail generation, along with primitive editing capabilities (crop, resize, rotate, etc.).  With images on local disk, that's all really easy: read them in, do whatever, write them back out.  I figured using S3 would make things both more cumbersome and less performant.  Both suspicions turned out to be unwarranted.</p>
<p>Building on the 's3Url' UDF that I published last week, I whipped up a little CFC to manage file storage on S3 with a very simple API.  It has s3Url, putFileOnS3, getFileFromS3, s3FileExists, and deleteS3File methods, which all do about what you'd expect.  You can grab the code here: <a title="AmazonS3 CFC" href="http://www.barneyb.com/barneyblog/wp-content/uploads/2008/04/amazons3cfc.txt">amazons3.cfc.txt</a> (make sure you remove the ".txt" extension) or visit the <a href="http://www.barneyb.com/barneyblog/projects/amazon-s3-cfc/">project page</a>.  It uses the simple HTTP-based interface, so after the authentication is handled, it's all very simple and fast.  I haven't looked at the SOAP interface &#8211; why bother complicating a simple task?</p>
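<p>Usage ends up about as terse as local disk.  It looks something like this (the parameter order shown is illustrative; check the CFC for the exact signatures):</p>
<pre>&lt;cfset s3 = createObject("component", "amazons3").init(aws_key, aws_secret) /&gt;
&lt;cfif NOT s3.s3FileExists("my-bucket", "images/photo.jpg")&gt;
  &lt;cfset s3.putFileOnS3("my-bucket", "images/photo.jpg", expandPath("./photo.jpg")) /&gt;
&lt;/cfif&gt;
&lt;cfset imageUrl = s3.s3Url("my-bucket", "images/photo.jpg") /&gt;</pre>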
<p>With that CFC (and an application-specific wrapper to take care of some path-related transforms), porting the whole app took about two hours.  I also realized after I was mostly done that the CF image tools accept URLs as well as files, so I switched my image reads to just use URLs instead of pulling the file local and reading it from disk.</p>
<p>As for moving all the actual content, S3Sync was a champ, moving about 4.5GB of data from my Cari server to S3 in a few hours, including gracefully handling a couple errors raised by S3 (which a retry &#8211; performed automatically &#8211; solved), and a stop/restart in the middle.   Total cost: about 65 cents.</p>
<p>Next is porting the blogs, including all the Picasa-based galleries.  Unfortunately, that means writing PHP, but with how easy the CF stuff was, I don't think it'll be too much effort.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.barneyb.com/barneyblog/2008/04/07/s3-is-sweet-one-app-down/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>My Amazon Toolkit (Thus Far)</title>
		<link>https://www.barneyb.com/barneyblog/2008/04/04/my-amazon-toolkit-thus-far/</link>
		<comments>https://www.barneyb.com/barneyblog/2008/04/04/my-amazon-toolkit-thus-far/#comments</comments>
		<pubDate>Sat, 05 Apr 2008 05:15:26 +0000</pubDate>
		<dc:creator>barneyb</dc:creator>
				<category><![CDATA[amazon]]></category>

		<guid isPermaLink="false">http://www.barneyb.com/barneyblog/2008/04/04/my-amazon-toolkit-thus-far/</guid>
		<description><![CDATA[I'm early in the move to Amazon, of course, but already some specific tools are indispensable.Â  I'm sure the list will grow, but here's where I'm at right now:

S3Sync &#8211; A simple rsync-like command line tool (called 's3sync') for syncing stuff from a computer to S3 or the reverse.  Also includes the 's3cmd' tool that [...]]]></description>
			<content:encoded><![CDATA[<p>I'm early in the move to Amazon, of course, but already some specific tools are indispensable.Â  I'm sure the list will grow, but here's where I'm at right now:</p>
<ul>
<li><a href="http://s3sync.net/wiki">S3Sync</a> &#8211; A simple rsync-like command line tool (called 's3sync') for syncing stuff from a computer to S3 or the reverse.Â  Also includes the 's3cmd' tool that roughly implements the web service API (list your buckets, put a file, etc.).Â  This is the cornerstone of the plan for moving all my data files from my current server and backups to S3.Â  Once the migration is complete, s3cmd will probably be the tool of choice for manipulating S3 programatically.Â  Written in Ruby, and requires 1.8.4+; my CentOS 4 box couldn't find a new enough RPM, so I had to compile from source (which was totally painless).</li>
<li><a href="https://addons.mozilla.org/en-US/firefox/addon/3247">S3 Firefox Organizer (S3Fox)</a>- a client for S3 following the standard FTP client paradigms.Â  It has it's own proprietary definition of folders, but they're unobstrusive.Â  Since I'm getting stuff into S3 mostly with s3sync, I'm mostly using this for read-only oversight.</li>
<li><a href="http://developer.amazonwebservices.com/connect/entry.jspa?externalID=609">EC2 UI</a> &#8211; a client for managing your EC2 "stuff" from Firefox.Â  While not FTP-like at all, it shares a lot of the same UI as S3Fox for setting up accounts and the like.</li>
</ul>
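<p>For reference, a typical s3sync run to push a local tree up to a bucket looks something like this (the paths and bucket name are placeholders, and flags may vary a bit by version):</p>
<pre># s3sync reads your credentials from the environment
export AWS_ACCESS_KEY_ID=YOUR_AWS_KEY
export AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET
ruby s3sync.rb -r --progress /var/www/assets/ my-bucket:assets</pre>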
]]></content:encoded>
			<wfw:commentRss>https://www.barneyb.com/barneyblog/2008/04/04/my-amazon-toolkit-thus-far/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>Amazon S3 URL Builder for ColdFusion</title>
		<link>https://www.barneyb.com/barneyblog/2008/04/04/amazon-s3-url-builder-for-coldfusion/</link>
		<comments>https://www.barneyb.com/barneyblog/2008/04/04/amazon-s3-url-builder-for-coldfusion/#comments</comments>
		<pubDate>Fri, 04 Apr 2008 17:45:50 +0000</pubDate>
		<dc:creator>barneyb</dc:creator>
				<category><![CDATA[amazon]]></category>
		<category><![CDATA[coldfusion]]></category>
		<category><![CDATA[development]]></category>

		<guid isPermaLink="false">http://www.barneyb.com/barneyblog/2008/04/04/amazon-s3-url-builder-for-coldfusion/</guid>
		<description><![CDATA[First task for my Amazon move is getting data assets (non-code-managed files) over to S3.  I have a variety of types of data assets that need to move and have references updated, most of which require authentication.  To make that easier, I wrote a little UDF to take care of building URLs with [...]]]></description>
			<content:encoded><![CDATA[<p>First task for my Amazon move is getting data assets (non-code-managed files) over to S3.  I have a variety of types of data assets that need to move and have references updated, most of which require authentication.  To make that easier, I wrote a little UDF to take care of building URLs with authentication credentials in there.</p>
<pre>&lt;cffunction name="s3Url" output="false" returntype="string"&gt;
  &lt;cfargument name="awsKey" type="string" required="true" /&gt;
  &lt;cfargument name="awsSecret" type="string" required="true" /&gt;
  &lt;cfargument name="bucket" type="string" required="true" /&gt;
  &lt;cfargument name="objectKey" type="string" required="true" /&gt;
  &lt;cfargument name="requestType" type="string" default="vhost"
    hint="Must be one of 'regular', 'ssl', 'vhost', or 'cname'.  'Vhost' and 'cname' are only valid if your bucket name conforms to the S3 virtual host conventions, and cname requires a CNAME record configured in your DNS." /&gt;
  &lt;cfargument name="timeout" type="numeric" default="900"
    hint="The number of seconds the URL is good for.  Defaults to 900 (15 minutes)." /&gt;
  &lt;cfscript&gt;
    var expires = "";
    var stringToSign = "";
    var algo = "HmacSHA1";
    var signingKey = "";
    var mac = "";
    var signature = "";
    var destUrl = "";

    expires = int(getTickCount() / 1000) + timeout;
    stringToSign = "GET" &amp; chr(10)
      &amp; chr(10)
      &amp; chr(10)
      &amp; expires &amp; chr(10)
      &amp; "/#bucket#/#objectKey#";
    signingKey = createObject("java", "javax.crypto.spec.SecretKeySpec").init(awsSecret.getBytes(), algo);
    mac = createObject("java", "javax.crypto.Mac").getInstance(algo);
    mac.init(signingKey);
    signature = toBase64(mac.doFinal(stringToSign.getBytes()));
    if (requestType EQ "ssl" OR requestType EQ "regular") {
      destUrl = "http" &amp; iif(requestType EQ "ssl", de("s"), de("")) &amp; "://s3.amazonaws.com/#bucket#/#objectKey#?AWSAccessKeyId=#awsKey#&amp;Signature=#urlEncodedFormat(signature)#&amp;Expires=#expires#";
    } else if (requestType EQ "cname") {
      destUrl = "http://#bucket#/#objectKey#?AWSAccessKeyId=#awsKey#&amp;Signature=#urlEncodedFormat(signature)#&amp;Expires=#expires#";
    } else { // vhost
      destUrl = "http://#bucket#.s3.amazonaws.com/#objectKey#?AWSAccessKeyId=#awsKey#&amp;Signature=#urlEncodedFormat(signature)#&amp;Expires=#expires#";
    }

    return destUrl;
  &lt;/cfscript&gt;
&lt;/cffunction&gt;</pre>
<p>To use it, do something like this:</p>
<pre>s3Url(aws_key, aws_secret, "s3.barneyb.com", "test.txt", 'cname');</pre>
<p>That will generate a request to the file "test.txt" in the "s3.barneyb.com" bucket, using a CNAME-style URL.  Obviously you'll have to know my AWS key and secret for it to work, and I'm not telling, but substitute your own values.  You can use regular (bucket name in the request), vhost (bucket name in an S3 subdomain), cname (a vanity CNAME pointing at S3), or ssl (regular over HTTPS) for the fifth (requestType) parameter to control the style of URL generated.</p>
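<p>For concreteness, here's the shape of URL each requestType produces (signature query strings elided after the first):</p>
<pre>regular: http://s3.amazonaws.com/s3.barneyb.com/test.txt?AWSAccessKeyId=...&amp;Signature=...&amp;Expires=...
ssl:     https://s3.amazonaws.com/s3.barneyb.com/test.txt?...
vhost:   http://s3.barneyb.com.s3.amazonaws.com/test.txt?...
cname:   http://s3.barneyb.com/test.txt?...</pre>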
<p><strong>Edit:</strong> here's a link to the <a href="http://www.barneyb.com/barneyblog/projects/amazon-s3-cfc/">project page</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.barneyb.com/barneyblog/2008/04/04/amazon-s3-url-builder-for-coldfusion/feed/</wfw:commentRss>
		<slash:comments>30</slash:comments>
		</item>
		<item>
		<title>Moving to the Amazon</title>
		<link>https://www.barneyb.com/barneyblog/2008/04/04/moving-to-the-amazon/</link>
		<comments>https://www.barneyb.com/barneyblog/2008/04/04/moving-to-the-amazon/#comments</comments>
		<pubDate>Fri, 04 Apr 2008 17:35:55 +0000</pubDate>
		<dc:creator>barneyb</dc:creator>
				<category><![CDATA[amazon]]></category>

		<guid isPermaLink="false">http://www.barneyb.com/barneyblog/2008/04/04/moving-to-the-amazon/</guid>
		<description><![CDATA[I'm in the process of switching my hosting from a dedicated box at cari.net over to Amazon EC2 and S3.  Based on my estimates, the costs will be slightly higher per month ($60/mo right now, $75-80/mo post move), but the benefits are significant:

Using S3 for all my backups and data storage will definitely give [...]]]></description>
			<content:encoded><![CDATA[<p>I'm in the process of switching my hosting from a dedicated box at <a href="http://cari.net/">cari.net</a> over to Amazon <a href="http://aws.amazon.com/ec2">EC2</a> and <a href="http://aws.amazon.com/s3">S3</a>.  Based on my estimates, the costs will be slightly higher per month ($60/mo right now, $75-80/mo post move), but the benefits are significant:</p>
<ul>
<li>Using S3 for all my backups and data storage will definitely give me some peace of mind that I've been lacking.</li>
<li>The virtualized nature of the servers means doing upgrades is totally safe: launch a new copy of the box, do the upgrade, and if everything's golden, switch the IP to the new box.  Cost is $0.10/hr which is close enough to zero to not matter.</li>
<li>I get a processor "upgrade" from my Celeron at Cari to a similarly clocked Xeon equivalent.  The latter is paravirtualized, of course, but it should still help since most of my apps are CPU-bound.  I also get some more RAM, but that's less important.</li>
<li>Last, but not least, Cari has had a lot of network issues in the year I've hosted there while Amazon hasn't.</li>
</ul>
<p>First task is to move storage over to S3, and update the applications that currently access stuff off the filesystem (like autogeneration of thumbnails).</p>
]]></content:encoded>
			<wfw:commentRss>https://www.barneyb.com/barneyblog/2008/04/04/moving-to-the-amazon/feed/</wfw:commentRss>
		<slash:comments>11</slash:comments>
		</item>
	</channel>
</rss>
