Amazon S3 CFC Update – Now With Listings!

I've added two new methods to my Amazon S3 CFC: listBuckets and listObjects.  Both of them do about what you'd expect, returning a CFDIRECTORY-esque recordset object containing the rows you are interested in.  I've attempted to make S3 appear like a "normal" filesystem where "/" is S3 itself, the top-level directories are your buckets, and your objects are below that.  At the moment no consideration is made for paging or truncation.  Leveraging the new functionality, here's complete source for a simple S3 browser (minus your key/secret):

<cfparam name="url.path" default="" />

<cfset s3 = createObject("component", "amazons3").init(
  <!--- your key and secret --->
) />

<cfset bp = "" />
<a href="?path=#bp#">ROOT</a>
<cfloop list="#url.path#" index="segment" delimiters="/">
  <cfset bp = listAppend(bp, segment, "/") />
  / <a href="?path=#bp#">#segment#</a>
</cfloop>
<ul>
  <cfif url.path EQ "">
    <cfset b = s3.listBuckets() />
    <cfloop query="b">
      <li><a href="?path=/#name#">#name#/</a> #dateLastModified#</li>
    </cfloop>
  <cfelse>
    <cfset q = s3.listObjects(listFirst(url.path, '/'), listRest(url.path, '/')) />
    <li><a href="?path=#reverse(listRest(reverse(url.path), '/'))#">..</a></li>
    <cfloop query="q">
      <li>
        <cfif type EQ "dir">
          <a href="?path=#listAppend(directory, name, '/')#">#name#/</a>
        <cfelse>
          <a href="#s3.s3Url(bucket, objectKey)#">#name#</a>
        </cfif>
      </li>
    </cfloop>
  </cfif>
</ul>

The default mode of operation assumes a delimiter of '/' (just like a filesystem).  If you want to do non-delimited operations (like generic prefix matching), you'll want to supply an empty delimiter, or you'll get weird results.  For example:

<cfset k_objects = s3.listObjects('my-bucket', 'k', '') />

If you omit the third parameter, the default '/' delimiter will be used, and you'll get back the objects within the 'k' pseudo-directory rather than all objects whose keys begin with 'k'.  This is the reverse of the default position of the raw S3 API, which assumes you want simple prefixing and makes you explicitly add the delimiter if you want pseudo-directory contents.
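To make the contrast concrete, here are the two call styles side by side (a minimal sketch, assuming a bucket named 'my-bucket'):

```cfml
<!--- Default '/' delimiter: only the contents of the 'k' pseudo-directory --->
<cfset k_dir = s3.listObjects('my-bucket', 'k') />

<!--- Empty delimiter: every object whose key begins with 'k',
      regardless of how many '/'-separated segments follow --->
<cfset k_prefix = s3.listObjects('my-bucket', 'k', '') />
```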

This dichotomy can also lead to weird results in the resulting recordset.  Every recordset comes with both 'bucket' and 'objectKey' columns that match the raw S3 nomenclature and 'directory' and 'name' columns that match the filesystem "view" of S3.  If you're doing raw prefixes you'll want to use bucket/objectKey (as the directory/name semantic doesn't work with prefixes).  If you're doing filesystem type stuff you'll probably want directory/name (though bucket/objectKey will still be correct).
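As an illustration of the two views, here's how a single row might come back for a hypothetical object 'docs/readme.txt' in 'my-bucket' (the exact column values are an assumption, not output from the CFC):

```cfml
<cfset q = s3.listObjects('my-bucket', 'docs') />
<cfoutput query="q">
  <!--- Raw S3 view of the row: --->
  #bucket#    <!--- e.g. my-bucket --->
  #objectKey# <!--- e.g. docs/readme.txt --->
  <!--- Filesystem view of the same row: --->
  #directory# <!--- e.g. my-bucket/docs --->
  #name#      <!--- e.g. readme.txt --->
</cfoutput>
```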

17 responses to “Amazon S3 CFC Update – Now With Listings!”

  1. Thomas Burleson

    Feature requests… need for real-world use.

    More than bucket listings, when I upload content to S3/Cloudfronts I need to set custom upload headers (expires, no-cache) and then want to change the ACL so the content is public. Also if I upload other content to replace existing content, I often want to rename the parent; this is a form of versioning to force browsers to reload non-expiring cached content.

    Your CFC would be fantastic if it would set headers and ACL and rename if needed.
    Also I would want to upload an entire directory, or pass a zip with an "uncompress" option so the contents would be uploaded to S3 in a hierarchical fashion.

    This would be fantastic.

  2. Sanjeev Shukla

    I am putting an object on S3, but the object permission is not being set to "public-read" as defined in the function. I am using the following function:


    I do not see anywhere in the above function to define permissions other than the one set within s3.cfc.

    Just wondering why object would not get "public-read" permission.

  3. Steve-O

    Hey Barney,

    Would you happen to have a sample piece of code implementing your S3 CFC to build a URL for "authenticated links to secured assets for embedding in pages"? Thanks!

  4. David

    Thank you for the CFC. I am having an issue with the CFLOOP tag: it throws an attribute error saying that ARRAY is not valid, but I cannot see that you are using that attribute in the tag. Have you seen this error, or do you have any suggestions?



  5. David

    Awesome, really! You responded immediately and are very supportive with your suggestions. Thanks! I will send you a revised file.

  6. Jules Gravinese

    How do you delete a 'folder'? Deleting it does not seem to do anything if there are objects within that folder.

  7. Jules Gravinese

    Yes I realize that. That's why I put quotes around "folder" and called it "psuedoDir". What is the workaround then?

    My first thought is to edit your deleteS3File function. First get the list of objects in the bucket. Loop through them looking for a match of listGetAt(name,1,'/') EQ pseudoDir. With each match call deleteS3FileInternal(bucket, name, 0).

    Seems resource intensive though. I was just hoping there was a smarter way to do it that I'd overlooked.

  8. Jules Gravinese

    That stinks. S3 should have a wildcard function instead. DELETE #psuedoDir#/* would be nice. Anyway… here's my contribution, in case anyone needs to do the same.

    <cffunction name="deleteS3File" access="public" output="false" returntype="void">
      <cfargument name="bucket" type="string" required="true" />
      <cfargument name="objectKey" type="string" required="true" />

      <cfset q = application.as3.listObjects(arguments.bucket, '', '') />
      <cfloop query="q">
        <cfif listGetAt(name, 1, '/') EQ arguments.objectKey>
          <cfset deleteS3FileInternal(bucket, name, 0) />
        </cfif>
      </cfloop>
    </cffunction>

  9. Gaurav Malik

    Hi Barney,

    Thanks for the CFC, it is really excellent and I am still playing with it. One thing I noticed is that listObjects only returns 1000 items. Is that normal? I am using CFMX Ver 7.

    Many thanks,


  10. Gaurav Malik

    Hi Barney,

    Thanks for that. So how do you suggest scrolling via the listObjects method? If I know that listObjects returns the first 1000 items, can I then ask for the next 1000, or do I have to explicitly refer to the 1000th object and start from there? Sorry for the details.

    Is there any way the 1000-record limit can be made into an argument, so when using the method you can choose how many records to bring back?
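For reference, the 1000-item cap comes from S3 itself: the raw ListBucket call returns at most 1000 keys per request, along with an IsTruncated flag, and you fetch the next page by passing the last key back as the marker parameter (max-keys controls the page size). The CFC doesn't expose this yet; if it grew hypothetical marker and maxKeys arguments mirroring the raw API, a complete listing might be driven like this sketch:

```cfml
<!--- Hypothetical sketch: 'marker' and 'maxKeys' arguments do NOT exist in
      the current CFC; they mirror the raw S3 ListBucket parameters. --->
<cfset marker = "" />
<cfset done = false />
<cfloop condition="NOT done">
  <cfset page = s3.listObjects('my-bucket', '', '', marker, 1000) />
  <!--- process this page of up to 1000 rows here --->
  <cfif page.recordCount LT 1000>
    <cfset done = true />
  <cfelse>
    <!--- the last objectKey of this page becomes the marker for the next --->
    <cfset marker = page.objectKey[page.recordCount] />
  </cfif>
</cfloop>
```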