Building a Performant API using Sinatra, MongoDB, and memcache

Background


Several months ago I created zipasaur.us to serve up postal code data in an easy-to-digest format. I threw together a naive implementation in a few hours and moved on with life, hoping to return for some cleanup later. I finally had a chance to go back over the application and make some simple changes that dramatically improved performance. This post is an overview of the architecture of zipasaur.us.


Architecture


When I started the project, my design goals were cheap, easy, and educational. That is, I wanted to build a service that was cheap to host, easy to work with, and built on tools I had never used before. I wanted zipasaur.us to support a handful of queries: by zip code, by city/state, and by city/state/county.


Sinatra is what I know and love. At work, and for personal projects, I rarely need a full Rails stack, and I have yet to run into performance issues hosting this setup on Heroku. In every instance I’ve been able to add caching, add indexes, or ramp up dynos to achieve the desired performance. NOTE: Sinatra DOES eventually become a hindrance on larger projects. I would always advocate a fuller framework (Rails, Padrino) if your project has more than 10 models.



I knew from the start I wanted to use an alternative datastore, because part of this exercise was to learn! The dataset I was working with was quite small and I considered storing it all in redis, but a redis instance large enough to hold it would have cost $5/mo on Heroku - cheapskate, I know. I settled on the NoSQL datastore everyone defaults to, MongoDB, because it was easy to work with, available as an addon on Heroku, and its ORMs - MongoMapper and Mongoid - were (and are) well supported.



Modeling was trivial: a single Zip model with city, code, county, lat, lng, and state fields.
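For reference, that model in MongoMapper (which the ensure_index calls later suggest) might look something like this. This is a sketch: the field list comes from the description above, but the types are my assumptions.

```ruby
require 'mongo_mapper'

# Sketch of the Zip model; the types are assumptions based on the data described.
class Zip
  include MongoMapper::Document

  key :code,   String
  key :city,   String
  key :state,  String
  key :county, String
  key :lat,    Float
  key :lng,    Float
end
```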



While Sinatra is lightweight and MongoDB is blazing fast, my hope was that queries would hit the application and database relatively infrequently. The two major components that made this application performant were Rack::Cache and memcache: Rack::Cache backed by memcache handled HTTP caching, and I also stashed database responses in memcache to avoid hitting Mongo multiple times for the same record.
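That second layer - stashing database responses in memcache - is the classic cache-aside (read-through) pattern. A minimal sketch in plain Ruby, with a Hash standing in for the memcache client (the class and names here are illustrative, not from the app; Dalli's real client exposes a similar fetch method):

```ruby
# Simple cache-aside helper: look in the cache first, compute and
# store on a miss. A plain Hash stands in for the memcache client here.
class ReadThroughCache
  def initialize
    @store  = {}
    @hits   = 0
    @misses = 0
  end

  attr_reader :hits, :misses

  def fetch(key)
    if @store.key?(key)
      @hits += 1
      @store[key]
    else
      @misses += 1
      @store[key] = yield
    end
  end
end

cache = ReadThroughCache.new

# First lookup misses and runs the (expensive) block...
cache.fetch('97209') { { city: 'Portland', state: 'OR' } }
# ...subsequent lookups are served straight from the cache.
record = cache.fetch('97209') { raise 'should not hit the database' }

puts record[:city]  # => Portland
puts cache.misses   # => 1
puts cache.hits     # => 1
```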




I added indexes on the queried fields of the Zip model, and I wired Rack::Cache up to memcache in my environment file like so:

# Index the fields used for lookups
Zip.ensure_index :code
Zip.ensure_index :city
Zip.ensure_index :state

# Shared Dalli client for application-level caching
$memcache = Dalli::Client.new

# HTTP caching, with both the meta and entity stores in memcache
use Rack::Cache,
  :verbose     => true,
  :metastore   => "memcached://#{ENV['MEMCACHE_SERVERS']}",
  :entitystore => "memcached://#{ENV['MEMCACHE_SERVERS']}"




The ‘byzip’ route in the controller looked like the following:



get '/byzip/:code' do
  code = params[:code]

  # Try the cache first; fall back to Mongo and stash the result
  record = $memcache.get(code)

  if record.nil?
    record = Zip.where(:code => code).first
    $memcache.set(code, record)
  end

  record.to_json
end


First we attempt to fetch the record from memcache. If that misses, we do the lookup in Mongo and stash the response. Either way, we return the record as JSON.
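One detail the Rack::Cache setup relies on: Rack::Cache only stores responses that carry caching headers, so each route needs to declare how long its response may be cached. Sinatra's built-in cache_control helper does this. A sketch of the same route with a header added - the one-day max-age is my own choice for illustration, not a value from the original app:

```ruby
get '/byzip/:code' do
  # Without a Cache-Control header, Rack::Cache will not store the response.
  # One day is an assumption; pick a TTL that matches how often the data changes.
  cache_control :public, :max_age => 86_400

  code = params[:code]
  record = $memcache.get(code)

  if record.nil?
    record = Zip.where(:code => code).first
    $memcache.set(code, record)
  end

  record.to_json
end
```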

After all was said and done, with warmed caches, this application running on a single dyno was handling roughly 2,700 req/sec, with the vast majority of requests served in under 300ms!



$ ab -n 2000 -c 500 http://zipasaur.us/api/zipinfo/byzip/97209
Server Software:        nginx
Server Hostname:        zipasaur.us
Server Port:            80

Document Path:          /api/zipinfo/byzip/97209
Document Length:        139 bytes

Concurrency Level:      400
Time taken for tests:   0.736 seconds
Complete requests:      2000
Failed requests:        0
Write errors:           0
Total transferred:      920830 bytes
HTML transferred:       278417 bytes
Requests per second:    2717.99 [#/sec] (mean)
Time per request:       147.167 [ms] (mean)
Time per request:       0.368 [ms] (mean, across all concurrent requests)
Transfer rate:          1222.08 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        2    6    4.1      5     30
Processing:    16  111   71.1     99    321
Waiting:       16  110   70.7     98    320
Total:         24  117   69.7    103    325

Percentage of the requests served within a certain time (ms)
  50%    103
  66%    138
  75%    158
  80%    166
  90%    225
  95%    261
  98%    303
  99%    316
 100%    325 (longest request)



Conclusion and Caveats


Optimizing zipasaur.us was a lot of fun. Admittedly, the main reason this API is so fast is that it is read-only and the data changes infrequently, which lets us cache responses heavily. But I definitely accomplished all my goals. To this day, zipasaur.us hasn’t cost me anything to host, it’s plenty fast for anything I would need, and I learned a bit about MongoDB, indexing, and caching.




Source code here.
