{"id":3887,"date":"2013-02-25T10:00:29","date_gmt":"2013-02-25T07:00:29","guid":{"rendered":"http:\/\/railsware.com\/blog\/?p=3887"},"modified":"2021-08-12T13:49:37","modified_gmt":"2021-08-12T10:49:37","slug":"heroku-queuing-time-part1-problem","status":"publish","type":"post","link":"https:\/\/railsware.com\/blog\/heroku-queuing-time-part1-problem\/","title":{"rendered":"Heroku Queuing Time: Problem and Solution"},"content":{"rendered":"\n<p>One of the most exciting things about Heroku is that you can scale in a single click &#8211; just add more dynos, and you can immediately handle higher load. This post is about our experience of hosting high load application on Heroku and situation when adding more dynos does not help.<\/p>\n\n\n\n<p>While throughput of our application was increasing each and every day we started to notice that we have more and more tolerated and frustrated requests. The problem was that it was fast (nothing to optimize) requests, but sometimes processing time for these requests was more 15 sec.<\/p>\n\n\n\n<p>After investigation we found that the reason is <em>Request Queuing Time<\/em>. 
If you take a look at the queuing time chart, the average is pretty good, but the maximums can be enormous.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"264\" src=\"\/\/railsware.com\/blog\/wp-content\/uploads\/2012\/12\/queuing_time_average_and_max-1024x264.png\" alt=\"queuing_time_average_and_max\" class=\"wp-image-3925\" srcset=\"https:\/\/railsware.com\/blog\/wp-content\/uploads\/2012\/12\/queuing_time_average_and_max-1024x264.png 1024w, https:\/\/railsware.com\/blog\/wp-content\/uploads\/2012\/12\/queuing_time_average_and_max-300x77.png 300w, https:\/\/railsware.com\/blog\/wp-content\/uploads\/2012\/12\/queuing_time_average_and_max.png 1112w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>To see the maximums, you have to create a custom dashboard and add a chart for the <code>WebFrontend\/QueueTime<\/code> metric.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><a href=\"https:\/\/railsware.com\/blog\/wp-content\/uploads\/2012\/12\/custom_dashboard-e1356443403835.png\"><img loading=\"lazy\" decoding=\"async\" width=\"400\" height=\"411\" src=\"https:\/\/railsware.com\/blog\/wp-content\/uploads\/2012\/12\/custom_dashboard-e1356443403835.png\" alt=\"custom_dashboard\" class=\"wp-image-3928\"\/><\/a><\/figure>\n\n\n\n<p>Why is Queuing Time so huge? Maybe we need more dynos? But adding more dynos doesn&#8217;t help.<\/p>\n\n\n\n<p>So, let\u2019s see what Queuing Time on Heroku is, how it works, and what we can do about it.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The cause: the Heroku routing mesh<\/h2>\n\n\n\n<p>Let&#8217;s take a look at the Heroku <a href=\"https:\/\/devcenter.heroku.com\/articles\/http-routing#routing-mesh\" target=\"_blank\" rel=\"noopener noreferrer\">docs<\/a> to understand queuing time better. When a request reaches Heroku, it&#8217;s passed from a load balancer to the &#8220;routing mesh&#8221;. 
The routing mesh is a custom Erlang solution based on MochiWeb that routes requests to a specific dyno. Each dyno has its own request queue, and the routing mesh pushes requests to a dyno queue. When a dyno is available, it picks up a request from its queue and your application processes it. The time between when the routing mesh receives the request and when the dyno picks it up is the queuing time.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><a href=\"https:\/\/railsware.com\/blog\/wp-content\/uploads\/2012\/12\/routing_mesh.jpg\"><img loading=\"lazy\" decoding=\"async\" width=\"500\" height=\"312\" src=\"https:\/\/railsware.com\/blog\/wp-content\/uploads\/2012\/12\/routing_mesh.jpg\" alt=\"routing_mesh\" class=\"wp-image-3941\" srcset=\"https:\/\/railsware.com\/blog\/wp-content\/uploads\/2012\/12\/routing_mesh.jpg 500w, https:\/\/railsware.com\/blog\/wp-content\/uploads\/2012\/12\/routing_mesh-300x187.jpg 300w\" sizes=\"auto, (max-width: 500px) 100vw, 500px\" \/><\/a><\/figure>\n\n\n\n<p>Each request has an <code>X-Heroku-Queue-Wait-Time<\/code> HTTP header with the queuing time value for this particular request. That&#8217;s how New Relic knows what to show on its charts. The key point for us in the routing scheme is the request distribution. The Heroku docs say:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>The routing mesh uses a random selection algorithm for HTTP request load balancing across web processes.<\/p><\/blockquote>\n\n\n\n<p>That means that even if a dyno is stuck processing a long-running request, it will still get more requests in its queue. If a dyno serves a 15-second request and is routed another request that ends up taking 100ms, that second request will take 15.1 seconds to complete. 
That&#8217;s why Heroku recommends putting all long-running actions in background jobs.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Possible solutions<\/h2>\n\n\n\n<p>These queuing time peaks were a major pain in the neck for us. Under high load (~10-15k requests per minute), a stuck dyno could result in hundreds of timeouts. Here are some ways to minimize request queuing:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>Drop long-running requests<\/strong><br>Heroku recommended a workaround: drop all long-running requests within the application using <a href=\"https:\/\/github.com\/kch\/rack-timeout\" target=\"_blank\" rel=\"noopener noreferrer\">Rack::Timeout<\/a> or, after we switched from Thin to Unicorn, by setting a timeout in Unicorn.<\/li><li><strong>Move long-running actions to background jobs<\/strong><br>Long-running requests, like uploading or downloading CSVs of contacts, can be moved to background jobs.<\/li><li><strong>Split into multiple applications<\/strong><br>Each application is an identical Heroku application that serves a different type of requests on its own subdomain. So, for example, you can move database-heavy reports to a second app. Problems:\n<ul>\n<li>&#8220;Single push&#8221; deployment became a pain<\/li>\n<li>Many add-ons we had in our original app (logging, SSL, etc.) needed to be added to the new applications<\/li>\n<li>We had to monitor several New Relic accounts<\/li>\n<li>Managing multiple applications on Heroku worked in the short run, but was not ideal<\/li>\n<\/ul>\n<\/li><\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Our solution: Slow-Fast Request Balancer<\/h2>\n\n\n\n<p>With this solution we got the same results as with &#8220;splitting into multiple applications&#8221;, but without the additional pain. We set up an HAProxy balancer which has two queues, one for &#8220;slow&#8221; and another for &#8220;fast&#8221; requests. 
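<\/p>\n\n\n\n<p>The idea can be sketched with a minimal HAProxy configuration. Note that the URL patterns, connection limits, and backend address below are illustrative assumptions, not the exact contents of our configuration repository:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">frontend www\n    bind :80\n    # requests matching known slow URL patterns go to the limited queue\n    acl is_slow path_beg -i \/reports \/export\n    use_backend slow if is_slow\n    default_backend fast\n\nbackend fast\n    server heroku yourapp.herokuapp.com:80 maxconn 20\n\nbackend slow\n    # at most 2 slow requests run in parallel; the rest wait in HAProxy, not on a dyno\n    server heroku yourapp.herokuapp.com:80 maxconn 2\n<\/pre>\n\n\n\n<p>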
By limiting parallel processing in the &#8220;slow&#8221; queue, we always have free workers for the &#8220;fast&#8221; queue.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Do you have this problem?<\/h2>\n\n\n\n<p>If you&#8217;re running an application under high load on Heroku, you&#8217;re almost certainly facing this problem and may not even know about it. New Relic showed that our application performance was very fast (&lt; 100ms) and average queuing time very low even when these problems were at their worst. But check your maximums for Queuing Time with a custom dashboard graph!<\/p>\n\n\n\n<p>For most applications, it probably doesn&#8217;t matter if a fraction of a percent of your requests take a long time. But sometimes we can&#8217;t afford to have any requests that take more than a couple of seconds, even if our average response time is well under 100ms.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>Each of the possible solutions resulted in fewer timeouts, but the one that gave us the fastest results was the HAProxy balancer. Next, we will describe how we implemented it for a Heroku application.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Preface<\/h2>\n\n\n\n<p>Previously, we discussed <a href=\"https:\/\/railsware.com\/blog\/slow-fast-request-balancer\/\">applications with slow resources<\/a> and <a href=\"https:\/\/railsware.com\/blog\/heroku-queuing-time-part1-problem\/\">Heroku queuing time<\/a> problems. 
Today we will find out how to set up HAProxy as a &#8220;slow-fast&#8221; request balancer for a Heroku application using Amazon Elastic Compute Cloud.<\/p>\n\n\n\n<p>You will need:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>a non-optimized application<\/li><li>a Heroku account<\/li><li>an Amazon EC2 account<\/li><\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Prepare AWS tools<\/h2>\n\n\n\n<p>Although you can manage AWS via the web interface, we prefer command-line tools.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Download the <a href=\"http:\/\/aws.amazon.com\/developertools\/351\">Amazon EC2 API Tools<\/a><\/li><li>Log in to the <a href=\"https:\/\/console.aws.amazon.com\/console\/home\">EC2 console<\/a><\/li><li>Obtain your AWS_ACCESS_KEY and AWS_SECRET_KEY on the <a href=\"https:\/\/portal.aws.amazon.com\/gp\/aws\/securityCredentials\">Access Credentials<\/a> page<\/li><\/ul>\n\n\n\n<p>Now switch to the console and set up your credentials for the tools.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">$ export AWS_ACCESS_KEY=your_AWS_ACCESS_KEY_ID\n$ export AWS_SECRET_KEY=your_AWS_SECRET_KEY\n<\/pre>\n\n\n\n<p>Check access by invoking (for example):<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">$ ec2-describe-instances\n<\/pre>\n\n\n\n<p>The tools are ready.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Create a balancer SSH access key<\/h2>\n\n\n\n<p>In order to access the balancer, we need an SSH private key.<\/p>\n\n\n\n<p>Choose a name for your key (e.g. 
haproxy-balancer) and create it by:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">$ ec2-add-keypair haproxy-balancer\nKEYPAIR haproxy-balancer 0a:ea:f9:f6:4c:29:80:33:0c:2e:af:7b:8c:dc:5c:5b:24:65:53:6f\n-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----\n<\/pre>\n\n\n\n<p>Save the output to a file, e.g. <em>haproxy-balancer.pem<\/em>. Don&#8217;t forget to set the right permissions:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">$ chmod 600 haproxy-balancer.pem\n<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Configure the firewall<\/h2>\n\n\n\n<p>Open ports 22 and 80 in order to access our instances via SSH and HTTP:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">$ ec2-authorize default -P tcp -p 22\n$ ec2-authorize default -P tcp -p 80\n<\/pre>\n\n\n\n<p>Note: from a security point of view, we recommend creating a dedicated security group for the balancer instead of using the default group.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Set up the balancer on AWS<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">AMI<\/h3>\n\n\n\n<p>We are going to use the latest standard Ubuntu AMI. 
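<\/p>\n\n\n\n<p>Since we prefer command-line tools, you can also look the AMI up with <em>ec2-describe-images<\/em>. The owner ID below is Canonical&#8217;s public AWS account, and the name pattern is an assumption to double-check against the alestic page:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">$ ec2-describe-images -o 099720109477 --region us-east-1 -F \"name=ubuntu\/images\/ebs\/ubuntu-quantal-12.10*\"\n<\/pre>\n\n\n\n<p>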
On <a href=\"http:\/\/alestic.com\/index.html\">alestic<\/a> page choose desired ubuntu version.<\/p>\n\n\n\n<p>Be sure to choose AMI from <strong>us-east-1<\/strong> region because heroku instances are located exactly in this region,<br>otherwise you will have network latency.<\/p>\n\n\n\n<p>On moment of writing this article latest AMI is:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">Ubuntu 12.10 Quantal EBS boot ami-7539b41c\n<\/pre>\n\n\n\n<p>Run instance using <em>haproxy-balancer<\/em> credentials into <em>us-east-1<\/em> region on <em>t1.micro<\/em> instance:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">$ ec2-run-instances ami-7539b41c -k haproxy-balancer -t t1.micro --region us-east-1\n<\/pre>\n\n\n\n<p>In order to find out status and IP of instance wait a while and run:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">$ ec2-describe-instances i-99de1ce8\n<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">HAProxy<\/h3>\n\n\n\n<p>Connect to balancer instance:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">$ ssh -i haproxy-balancer.pem ubuntu@ec2-174-129-156-71.compute-1.amazonaws.com\n<\/pre>\n\n\n\n<p>Install HAProxy package:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" 
data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">ubuntu@ip-10-112-71-4:~$ sudo apt-get install haproxy\n<\/pre>\n\n\n\n<p>(Optionally) Install git package:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">ubuntu@ip-10-112-71-4:~$ sudo apt-get install git\n<\/pre>\n\n\n\n<p>Clone <a href=\"https:\/\/github.com\/railsware\/haproxy-slow-fast-request-balancer\">our HAProxy request balancer<\/a> configuration.<br>You are free to use own HAProxy configuration just follow ideas in our configuration.<br>Anyway we strongly recommend you to keep your configuration under control version system (git. subversion, etc).<br>And then deliver changes to server from SCM repository instead of editing it directly on the server.<\/p>\n\n\n\n<p>For simplity we clone sample configuration and change it on the server:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">ubuntu@ip-10-112-71-4:~$ sudo mv \/etc\/haproxy\/ haproxy.origin\nubuntu@ip-10-112-71-4:~$ sudo git clone git:\/\/github.com\/railsware\/haproxy-slow-fast-request-balancer.git \/etc\/haproxy\n<\/pre>\n\n\n\n<p>Tune slow urls in:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">\/etc\/haproxy\/patterns\/*\n<\/pre>\n\n\n\n<p>Enable HAProxy by setting <em>ENABLED=1<\/em> into 
<em>\/etc\/default\/haproxy<\/em>:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">ubuntu@ip-10-112-71-4:~$ sudo vim \/etc\/default\/haproxy\n<\/pre>\n\n\n\n<p>Start HAProxy:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">ubuntu@ip-10-112-71-4:~$ sudo \/etc\/init.d\/haproxy start\n<\/pre>\n\n\n\n<p>Check HAProxy via its stats page:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">http:\/\/stats:qwerty@ec2-174-129-156-71.compute-1.amazonaws.com\/haproxystats\n<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Switch to the HAProxy balancer<\/h3>\n\n\n\n<p>Now we can try visiting slow\/fast URLs of our application using the balancer IP address, or perform some load\/stress testing.<br>When everything is OK, we are ready to change the application&#8217;s DNS settings to the balancer IP.<\/p>\n\n\n\n<p>We recommend allocating and assigning an <a href=\"http:\/\/en.wikipedia.org\/wiki\/Amazon_Elastic_Compute_Cloud#Elastic_IP_Addresses\">elastic IP<\/a> for this purpose. 
It allows you to easily switch to another balancer if something goes wrong.<\/p>\n\n\n\n<p>Allocate a new IP address:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">$ ec2-allocate-address\nADDRESS 54.235.201.228 standard\n<\/pre>\n\n\n\n<p>Associate the address with your balancer instance:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">$ ec2-associate-address 54.235.201.228 -i i-99de1ce8\nADDRESS 54.235.201.228 i-99de1ce8\n<\/pre>\n\n\n\n<p>Check again by visiting your application via the assigned elastic IP.<\/p>\n\n\n\n<p>Finally, change your DNS &#8220;A&#8221; record to the balancer IP according to your DNS provider&#8217;s manual.<\/p>\n\n\n\n<p>The <a href=\"https:\/\/devcenter.heroku.com\/articles\/custom-domains\">Heroku Custom Domains<\/a> article can also be useful.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Rollback<\/h2>\n\n\n\n<p>If something goes wrong, switch the DNS settings back to use the Heroku application directly.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>As we already mentioned, this solution is &#8220;temporary&#8221;. 
It gives your application the opportunity to survive under high traffic.<br>You may use the &#8220;slow-fast&#8221; balancer until you fix your slow resources by moving them to the background or changing your architecture.<br>Then you may switch back to using the application directly.<\/p>\n\n\n\n<p>We hope you find this article helpful.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">References<\/h2>\n\n\n\n<ul class=\"wp-block-list\"><li><a href=\"https:\/\/railsware.com\/blog\/slow-fast-request-balancer\/\">Slow-Fast Request Balancer<\/a><\/li><li><a href=\"http:\/\/haproxy.1wt.eu\/\">HAProxy<\/a><\/li><li><a href=\"https:\/\/devcenter.heroku.com\/articles\/custom-domains\">Heroku Custom Domains<\/a><\/li><\/ul>\n","protected":false},"excerpt":{"rendered":"<p>One of the most exciting things about Heroku is that you can scale in a single click &#8211; just add more dynos, and you can immediately handle higher load. This post is about our experience of hosting high load application on Heroku and situation when adding more dynos does not help. 
While throughput of our&#8230;<\/p>\n","protected":false},"author":38,"featured_media":9456,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[3],"tags":[],"coauthors":["Maxim Bondaruk"],"class_list":["post-3887","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-development"],"acf":[],"aioseo_notices":[],"categories_data":[{"name":"Engineering","link":"https:\/\/railsware.com\/blog?category=development"}],"post_thumbnails":"\/\/railsware.com\/blog\/wp-content\/uploads\/2012\/12\/queuing_time_average_and_max-1024x264.png","amp_enabled":true,"_links":{"self":[{"href":"https:\/\/railsware.com\/blog\/wp-json\/wp\/v2\/posts\/3887","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/railsware.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/railsware.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/railsware.com\/blog\/wp-json\/wp\/v2\/users\/38"}],"replies":[{"embeddable":true,"href":"https:\/\/railsware.com\/blog\/wp-json\/wp\/v2\/comments?post=3887"}],"version-history":[{"count":65,"href":"https:\/\/railsware.com\/blog\/wp-json\/wp\/v2\/posts\/3887\/revisions"}],"predecessor-version":[{"id":13975,"href":"https:\/\/railsware.com\/blog\/wp-json\/wp\/v2\/posts\/3887\/revisions\/13975"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/railsware.com\/blog\/wp-json\/wp\/v2\/media\/9456"}],"wp:attachment":[{"href":"https:\/\/railsware.com\/blog\/wp-json\/wp\/v2\/media?parent=3887"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/railsware.com\/blog\/wp-json\/wp\/v2\/categories?post=3887"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/railsware.com\/blog\/wp-json\/wp\/v2\/tags?post=3887"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/railsware.com\/blog\/wp-json\/wp\/v2\/coauthors?post=3887"}],"curies":[{"na
me":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}