Makahiki Testing Part 2: Heroku Installation


Continuing the Makahiki series started by my earlier post, I set up a Makahiki installation on Heroku that uses an Amazon AWS S3 bucket to host static files (images, CSS stylesheets, JavaScript, and so on), following the official guide. The Heroku documentation recommends AWS for hosting static files as a complement to Heroku's "ephemeral file system," which only stores the latest running version of an application's code. Files in a bucket can be managed much like files in a traditional file system, unlike files stored in a Heroku slug.

Unlike the previous localhost installation, this one presented several problems. Coordinating the Amazon and Heroku deployments did add some complexity, but the biggest obstacle came from mistakes in my local configuration that forced me to erase my Heroku apps and start over several times.
I have no experience working with cloud-hosted applications apart from my other Heroku-based assignments, but the installation instructions we were given were simple enough that they did not pose problems on their own.

AWS S3 Buckets: The Importance Of US Standard

The Heroku and Makahiki developers both suggest locating your bucket in the US Standard region. As the Heroku documentation points out, Heroku runs in the US East region, and transfers between US East and US Standard are free. By default, however, Amazon auto-detects a region based on where your traffic originates. As of this writing, I am in Hawaii, which is auto-detected as US West (Oregon). It is important to select US Standard when the bucket is created, because a bucket's region cannot be changed afterward. In addition, buckets outside US Standard use a base URL that differs from the one Makahiki expects.
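A quick sketch of both points, assuming the s3cmd tool is available ("makahiki-static" is a hypothetical bucket name; "US" is s3cmd's name for the US Standard region, and the endpoint names below are the standard S3 conventions):

```shell
# Creating the bucket in US Standard up front:
#   s3cmd mb s3://makahiki-static --bucket-location=US
#
# The region determines the bucket's base URL. US Standard buckets are
# served from the plain endpoint that Makahiki expects, while other
# regions use region-specific endpoints:
BUCKET=makahiki-static
echo "US Standard:      https://s3.amazonaws.com/$BUCKET"
echo "US West (Oregon): https://s3-us-west-2.amazonaws.com/$BUCKET"
```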

Virtualenv Setup for a Heroku Upload

I configured a new virtualenv, makahiki-heroku, to avoid conflicts with the makahiki virtualenv created in the last article. Following the guide, I set the environment variables manually:

$ export MAKAHIKI_ADMIN_INFO=admin:Dog4Days56
$ export MAKAHIKI_AWS_ACCESS_KEY_ID=<AWS access key id>
$ export MAKAHIKI_AWS_SECRET_ACCESS_KEY=<AWS secret access key>

The user can also set these variables permanently by editing the virtualenv's $WORKON_HOME//bin/postactivate script. However, storing credentials in a static file risks exposing your S3 credentials to anyone who gains unauthorized access to your computer. Note also that the heroku config:add command will not work yet, because the Heroku app has not been created; the script creates it later.
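For the permanent approach, a sketch of the postactivate script, assuming the virtualenv is named makahiki-heroku as above and virtualenvwrapper's standard layout; the placeholder values are the same ones from the guide:

```shell
# Hypothetical contents appended to
# $WORKON_HOME/makahiki-heroku/bin/postactivate
# (virtualenvwrapper runs this file each time the env is activated):
export MAKAHIKI_ADMIN_INFO=admin:Dog4Days56
export MAKAHIKI_AWS_ACCESS_KEY_ID=<AWS access key id>
export MAKAHIKI_AWS_SECRET_ACCESS_KEY=<AWS secret access key>
```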

The Script

Though running the initialization script is not necessarily the hardest step, I think it is the one with the most room for error. It is important not to run this script with sudo. Switching to a root user context discards your user's environment variables, so the script runs without its Amazon S3 variables set and fails to upload files to S3.

Running the script with sudo overrides the environment variables.

Above: Do not use the script with sudo. If you do, you will get a public key error and the S3 upload operations will fail.
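The effect is easy to reproduce. By default sudo resets the environment, so variables exported in your shell vanish; env -i produces a similar clean environment for demonstration (the key below is a dummy value):

```shell
export MAKAHIKI_AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY       # dummy value
sh -c 'echo "normal shell: [$MAKAHIKI_AWS_ACCESS_KEY_ID]"'   # prints [AKIAEXAMPLEKEY]
env -i sh -c 'echo "clean env:    [$MAKAHIKI_AWS_ACCESS_KEY_ID]"'  # prints []
```

This is why the script must be run as your own user, with the variables exported in the same shell session.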

When I resolved the environment variables problem, I went to my application page and saw that the stylesheets and images had failed to load:
The page without stylesheets or Javascript.

Above: The script failed to upload the static assets, including scripts and images.

Inspecting the Amazon S3 bucket revealed that the site_media/static directory, which contained images, scripts, and page templates, was missing. The script was detecting the site_media/static directory, but not using s3put to push the files to S3:
The site_media/static directory is not uploaded.

The script failed to push the static files to Amazon S3.

There was one other upload error, involving a missing module. As T.A. Yongwen Xu and my classmate Zach Tomaszewski pointed out, this happened because I had not run pip install -r requirements.txt, which installs Makahiki's dependencies for a local project. After this step, the uploads succeeded. Though it is probably something a user could be expected to remember, the instructions should still state that running pip install -r requirements.txt is needed for the Heroku upload to work.
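A sanity check I would suggest before running the initialization script, assuming (as in my case) that the missing module came from skipping the dependency step; s3put is the boto upload tool the script relies on:

```shell
# From the Makahiki source directory, with the makahiki-heroku
# virtualenv active, install the dependencies first:
#   pip install -r requirements.txt
# Afterwards, s3put should be on your PATH:
command -v s3put && echo "s3put found" || echo "s3put missing: install dependencies first"
```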

The Makahiki home page on Heroku.

Above: The Makahiki home page on Heroku.

Suggested Improvements

The instructions are easy enough to understand. However, they do not specify US Standard as the desired bucket region, and they do not tell the user to run pip install -r requirements.txt before running the script. The Heroku S3 tutorial linked from the official guide does recommend US Standard as the bucket region, but the manual itself does not, and users who already know how to set up S3 may skip the tutorial and end up with a bucket in the wrong region. Likewise, pip install -r requirements.txt may seem like an obvious requirement for a project that uses pip to manage dependencies, but the instructions should explicitly require it.


Installing Makahiki on Heroku took much longer than installing it locally. Most of the extra time went to setting up an S3 account and to erasing and re-uploading the Makahiki application on Heroku several times because of missing S3 credentials or uninstalled dependencies. Most of my problems with S3 uploads came from not using pip to install dependencies, and I think this step should be included in the documentation. Despite all this, getting a cloud-hosted application running at little or no monthly cost was still easier than I expected, which shows the lengths the Makahiki developers and companies such as Amazon and Heroku have gone to in order to make their setup processes understandable.