Heroku Queuing Time, Part 2: Solution

In Development, Performance

by Yaroslav Lazor on February 28, 2013


In previous articles we discussed applications with slow resources and Heroku queuing time problems.

Today we will find out how to set up HAProxy as a “slow-fast” request balancer for a Heroku application using Amazon Elastic Compute Cloud.

You need:

  • a non-optimized application
  • a Heroku account
  • an Amazon EC2 account

Prepare AWS tools

Although you can manage AWS via the web interface, we prefer the command-line tools.

Now switch to the console and set up your credentials for the tools.

Check access by invoking, for example:
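A minimal check, assuming the classic Amazon EC2 API tools are installed and `EC2_PRIVATE_KEY`/`EC2_CERT` are exported:

```shell
# Lists the available EC2 regions; it only succeeds if your credentials work
ec2-describe-regions
```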

Tools are ready.

Create balancer ssh access key

In order to access the balancer we need an SSH private key.

Choose a name for your key (e.g. haproxy-balancer) and create it with:
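With the EC2 API tools this might look like:

```shell
# Registers a keypair named haproxy-balancer and prints the private key to stdout
ec2-add-keypair haproxy-balancer
```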

Save the output to a file, e.g. haproxy-balancer.pem. Don’t forget to set the right permissions:
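For example (the heredoc stands in for the key material printed by the previous command):

```shell
umask 077                          # new files readable by the owner only
cat > haproxy-balancer.pem <<'EOF'
(paste the private key here)
EOF
chmod 600 haproxy-balancer.pem     # ssh refuses world-readable private keys
```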

Configure Firewall

Open two ports so that we can access our instances via SSH and HTTP:
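A sketch using the EC2 API tools against the `default` security group (swap in your own group name if you created one):

```shell
# Allow inbound SSH (22) and HTTP (80) from anywhere
ec2-authorize default -P tcp -p 22 -s 0.0.0.0/0
ec2-authorize default -P tcp -p 80 -s 0.0.0.0/0
```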

Note: from a security point of view, we recommend creating a dedicated security group for the balancer instead of using the default group.

Set up the balancer on AWS


We are going to use the latest standard Ubuntu AMI. On the Alestic page, choose the desired Ubuntu version.

Be sure to choose an AMI from the us-east-1 region, because Heroku instances are located in exactly this region;
otherwise you will get extra network latency.

At the time of writing this article, the latest AMI is:

Run an instance with the haproxy-balancer key in the us-east-1 region on the t1.micro instance type:
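Roughly like this with the EC2 API tools (`ami-xxxxxxxx` is a placeholder; substitute the current Ubuntu AMI id from the Alestic page):

```shell
ec2-run-instances ami-xxxxxxxx -k haproxy-balancer -t t1.micro --region us-east-1
```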

To find out the status and IP of the instance, wait a while and run:
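For instance:

```shell
# The instance goes from "pending" to "running"; the output also shows
# its public DNS name / IP address
ec2-describe-instances
```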


Connect to balancer instance:
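Assuming the Ubuntu AMI's default `ubuntu` user (replace the hostname with your instance's public DNS name from `ec2-describe-instances`):

```shell
ssh -i haproxy-balancer.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
```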

Install the HAProxy package:
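On Ubuntu:

```shell
sudo apt-get update
sudo apt-get install -y haproxy
```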

(Optional) Install the git package:
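```shell
sudo apt-get install -y git-core
```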

Clone our HAProxy request balancer configuration.
You are free to use your own HAProxy configuration; just follow the ideas in ours.
In any case, we strongly recommend keeping your configuration under a version control system (git, Subversion, etc.)
and delivering changes to the server from the SCM repository instead of editing it directly on the server.

For simplicity, we clone the sample configuration and change it on the server:

Tune the slow URLs in:
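The idea behind the “slow-fast” split looks roughly like the fragment below (the file path, backend names and URL prefixes are illustrative, not our exact configuration): requests matching known-slow URL patterns go to a separate backend, and `maxconn` makes HAProxy queue extra requests itself instead of piling them onto the dynos.

```
# /etc/haproxy/haproxy.cfg (fragment, illustrative)
frontend www
    bind *:80
    acl slow_url path_beg /reports /admin     # your known-slow URL prefixes
    use_backend slow if slow_url
    default_backend fast

backend fast
    server heroku yourapp.herokuapp.com:80 maxconn 10

backend slow
    # a tight limit: extra slow requests wait in HAProxy, not on the dynos
    server heroku yourapp.herokuapp.com:80 maxconn 2
```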

Enable HAProxy by setting ENABLED=1 in /etc/default/haproxy:
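Either edit the file, or in one command:

```shell
sudo sed -i 's/^ENABLED=0$/ENABLED=1/' /etc/default/haproxy
```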

Start HAProxy:
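```shell
sudo service haproxy start
```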

Check HAProxy:
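For example:

```shell
service haproxy status      # the process should be reported as running
curl -I http://localhost/   # a request through the proxy should get a response
```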

Switch to HAProxy balancer

Now we can try visiting the slow and fast URLs of our application via the balancer IP address, or perform some load/stress testing.
When everything is OK, we are ready to point the application’s DNS settings at the balancer IP.

We recommend allocating and assigning an Elastic IP for this purpose. It lets you easily switch to another balancer if something goes wrong.

Allocate a new IP address:
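```shell
# Prints the newly allocated Elastic IP address
ec2-allocate-address
```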

Associate the address with your balancer instance:
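Roughly (`1.2.3.4` stands for the Elastic IP you just allocated, `i-xxxxxxxx` for your balancer's instance id):

```shell
ec2-associate-address 1.2.3.4 -i i-xxxxxxxx
```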

Check again by visiting your application via the assigned Elastic IP.

Finally, change your domain’s DNS “A” record to the balancer IP according to your DNS provider’s manual.

The Heroku Custom Domains article can also be useful.


If something goes wrong, switch the DNS settings back to using the Heroku application directly.


As we already mentioned, this solution is “temporary”. It gives your application a chance to survive under high traffic.
You may use the “slow-fast” balancer until you fix your slow resources by moving them to the background or changing your architecture.
Then you may switch back to using the application directly.

We hope you find this article helpful.


* Railsware is a premium software development consulting company, focused on delivering great web and mobile applications. Learn more about us.
  • Daniel

    Could you elaborate on how that helps dynos that are stuck (serving a long-running request) not get new requests? To me this is a description of how to put HAProxy in front of the dynos, but not more. Am I missing something (as I don’t have much experience with HAProxy)?

    • Yaroslav Lazor

      Hi Daniel,

      It doesn’t give a 100% guarantee that dynos with long requests won’t get new requests.

      But it does help with two things:

      – reducing the number of requests served behind slow requests identified up front (by URL pattern);
      – holding all requests in HAProxy and proxying only X requests to the dynos at a time, instead of handing everything straight to the round-robin load balancer, which reduces the likelihood of placing requests on top of a slow one.

      The first case is useful when you know you have a temporarily slow request (like admin reports, or just some controllers with degraded performance).

      The second case is good for the following.

      When you have 10 web workers, you will only place 10 requests on the dynos, and only when one of those requests gets freed up will the next request get placed. The next request could still land on a web worker that is busy processing a slow request, but the likelihood is reduced. With the usual native balancer, lots of fast requests can be placed after a slow one, making all of them slow.

      You can also deploy two identical copies of the Heroku app and split slow and fast requests between the two virtual apps, reducing the likelihood even more.
      That takes more time to maintain, but the app works much better.
      It gets complex when you have fast/medium/slow requests…

  • Daniel

    OK, I checked your HAProxy configuration. It’s clear now, but it really looks hard to maintain. Plus, I thought you could send specific queues to specific dynos, which is not the case. Nice if it helped you, though.
