The technology behind Adafruit IO will most likely be changing soon as we scale the service, but we thought it might be useful to share how things are currently working behind the scenes.
Website & REST API
The Adafruit IO website and REST API are powered by a Ruby on Rails application served by nginx and Phusion Passenger. The site's user interface is currently a combination of jQuery, Backbone, and d3. We plan on migrating to React in the near future, slowly replacing jQuery and Backbone as we go. IO users are authenticated against their adafruit.com accounts via OAuth, but the online storefront and IO are completely separate applications hosted on separate servers.
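For reference, pointing nginx at a Rails app through Passenger takes only a couple of directives. This is a minimal sketch, not our actual config; the server name, certificate paths, and deploy directory are placeholders.

```nginx
server {
    listen 443 ssl;
    server_name io.example.com;           # placeholder hostname

    ssl_certificate     /etc/nginx/ssl/io.crt;   # hypothetical cert paths
    ssl_certificate_key /etc/nginx/ssl/io.key;

    # Passenger serves the Rails app directly from its public/ directory;
    # no separate app-server upstream is needed.
    root /var/www/io/current/public;
    passenger_enabled on;
}
```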
Node.js

Node.js currently handles all of our real-time traffic from clients connected via MQTT or WebSockets. We use a modified version of Mosca as our MQTT broker, and will be switching to Aedes as soon as it is production ready. The Node.js workers push data to the database by sending information to a Sidekiq queue, which is then picked up by a Ruby worker and inserted into the database using the same Active Record models that the website and REST API use. The Ruby worker then notifies Node.js of the successful database transaction via Redis, and the data is emitted to the appropriate MQTT and WebSocket subscribers. The Node.js workers are managed by pm2, and we run nginx in front of Node.js to terminate TLS/SSL connections for MQTT on port 8883. Node.js will also eventually handle connections from other IoT protocols like CoAP and MQTT-SN.
Databases & Caching
We use PostgreSQL to store all data that needs to be persisted to disk. This includes things like user account data, feed metadata, and the logged data that user devices send to IO. We have the PostGIS extension installed for geospatial queries, and we use WAL-E for continuous backups of the database. Redis is the glue between the separate services: it connects the separate Node.js worker processes and handles all message passing between the Ruby and Node.js processes. We also use memcached for the Rails application cache and for storing user sessions.
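The "Redis as glue" pattern boils down to fan-out routing: a confirmation message arrives on a Redis channel and gets dispatched to the clients subscribed to that topic. The sketch below shows only the routing logic with an in-memory map; the channel name `io:persisted` and the message fields are hypothetical, and a real implementation would wire `onPersisted` to a Redis client's `subscribe()` callback.

```javascript
const subscribers = new Map(); // topic -> array of callback functions

// Register a connected client's callback for a feed topic.
function subscribe(topic, callback) {
  if (!subscribers.has(topic)) subscribers.set(topic, []);
  subscribers.get(topic).push(callback);
}

// Handler for messages arriving on the (hypothetical) "io:persisted"
// Redis channel, published by the Ruby worker after a successful insert.
function onPersisted(message) {
  const { topic, value } = JSON.parse(message);
  for (const cb of subscribers.get(topic) || []) cb(value);
}

// Usage: an MQTT/WebSocket client subscribed to a feed topic.
const received = [];
subscribe('user/feeds/temperature', v => received.push(v));
onPersisted(JSON.stringify({ topic: 'user/feeds/temperature', value: 72.5 }));
console.log(received); // -> [ 72.5 ]
```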
Development

There are currently two developers working part-time on IO, and we use git to track our changes. We make heavy use of git branches while working on new features, and open pull requests on GitHub to summarize our changes before merging. We write unit tests for Rails using Test::Unit, and we use Mocha for unit testing our Node.js code. Travis CI makes sure that both the Rails and Node.js test suites pass before any branch is merged into master. We use Swagger to document our REST API, and generate client libraries from the API docs with swagger-codegen. We mainly use HipChat for communication, and meet once a week on Google Hangouts.
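To give a flavor of the Swagger workflow: each endpoint is described in a spec document, and swagger-codegen turns the whole spec into client libraries. This fragment is illustrative only; the path, parameter, and response shown are hypothetical, not our actual API surface.

```json
{
  "swagger": "2.0",
  "info": { "title": "Example IO REST API", "version": "1.0.0" },
  "paths": {
    "/api/feeds/{key}": {
      "get": {
        "summary": "Get a feed by key",
        "parameters": [
          { "name": "key", "in": "path", "required": true, "type": "string" }
        ],
        "responses": { "200": { "description": "The requested feed" } }
      }
    }
  }
}
```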
Server Hardware & Administration
We currently host IO on a single CentOS server with two quad-core Intel Xeon E3-1270 v3 CPUs @ 3.50 GHz, 16GB of RAM, and a 500GB SSD. We will most likely migrate to EC2 or DigitalOcean in the near future. We use Capistrano to deploy code to the server, and the server's configuration is managed with chef-solo. We monitor the server with New Relic, and use Mandrill to send email. Some other administration tools we rely on heavily are htop, tmux, mosh, free, and monit.
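As one example of how monit fits in, it can watch a long-running process by pidfile and restart it when it misbehaves. This is a hedged sketch, not our actual config; the process name, pidfile path, program commands, and memory threshold are all placeholders.

```
check process node-workers with pidfile /home/deploy/.pm2/pm2.pid
  start program = "/usr/bin/pm2 resurrect"
  stop program  = "/usr/bin/pm2 kill"
  if totalmem > 1024 MB for 5 cycles then restart
```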
That’s it for now. As I mentioned, these things are likely to change soon, but if you have any suggestions for modifying our current stack, please let us know.