Deploying RethinkDB applications with Docker using Dokku

Dokku is a simple application deployment system built on Docker. It gives you a Heroku-like PaaS environment on your own Linux system, enabling you to deploy your applications with git. Dokku automatically configures the proper application runtime environment, installs all of the necessary dependencies, and runs each application in its own isolated container. You can easily run Dokku on your own server or an inexpensive Linux VPS.

The RethinkDB Dokku plugin, created by Stuart Bentley, lets developers create containerized RethinkDB instances for their Dokku-deployed apps. I've found that Dokku is a really convenient way to share my RethinkDB demos while I'm prototyping without having to manually deploy and configure each one. In this short tutorial, I'll show you how you can set up Dokku and install the plugin on a Digital Ocean droplet.

Set up a Digital Ocean droplet

If you want to set up Dokku somewhere other than Digital Ocean, you can use the Dokku project's official install script to get it running on any conventional Ubuntu 14.04 system.

Digital Ocean provides a selection of base images that make it easy to create new droplets that come with specific applications or development stacks. Dokku is among the applications that Digital Ocean supports out of the box. When you create a new droplet, simply select the Dokku image from the Applications tab.

You can configure the droplet with the size, region, and hostname of your choice. Be sure to add an SSH key---it will be used later to identify you when you deploy to the system.

After Digital Ocean finishes creating the new droplet, navigate to the droplet's IP address in your browser. The server will display a Dokku configuration panel. The page will prompt you for a public key and a hostname. The key that you selected during droplet creation will automatically appear in the public key field. In the hostname box, you can either put in a domain or the IP address of the droplet.

If you use an IP address, Dokku will simply assign a unique port to each of your deployed applications. If you configure Dokku with a domain, it will automatically create a virtual host configuration with a subdomain for each application that you deploy. For example, if you set example.com as the hostname, an app called demo1 will be available at demo1.example.com. After you fill in the form, click the Finish Setup button to complete the Dokku configuration.

If you chose to use a domain, you also have to set up corresponding DNS records. In your DNS configuration system, add two A records---one for the domain itself and a wildcard record for the subdomains. Both records should use the IP address of your droplet.

A    @    <droplet IP address>
A    *    <droplet IP address>

Install the RethinkDB Dokku plugin

The next step is installing the plugin. Use ssh to log into the droplet as root. After logging into the system, navigate to the Dokku plugin folder:

$ cd /var/lib/dokku/plugins

Inside of the Dokku plugin folder, use the git clone command to obtain the plugin repository and put it in a subdirectory called rethinkdb. When the repository finishes downloading, use the dokku plugins-install command to install the plugin.

$ git clone <plugin repository URL> rethinkdb
$ dokku plugins-install

Configure your application for deployment

Before you deploy an application, you will need to use Dokku to set up a linked RethinkDB container. While you are logged into the droplet as root, use the following command to set up a new RethinkDB instance:

$ dokku rethinkdb:create myapp

You can replace myapp with the name that you want to use for your application. When you deploy an application, Dokku will automatically link it with the RethinkDB container that has the same name. Now that you have created a RethinkDB container, it is time to deploy your first application.

Dokku supports a number of different programming languages and development stacks. It uses certain files in the project root directory to determine which dependencies to install and how to run the application. For a Ruby demo that I built with Sinatra, all I needed was a Gemfile and a config.ru file. For a Node.js application built with Express, I used a package.json that included the dependencies and a start script.

You can also optionally use a Heroku-style Procfile to specify how to start the app. Dokku is largely compatible with Heroku, so you can refer to the Heroku docs to see what you need to do for other programming language stacks.
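For instance, a minimal Procfile for a Node.js app whose entry point is app.js (an assumed filename, shown only as an illustration) declares a single web process:

```
web: node app.js
```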

In the source code for your application, you will need to specify the host and port of the RethinkDB instance in the linked container. The RethinkDB Dokku plugin exposes those through environment variables called RDB_HOST and RDB_PORT. In my Ruby application, for example, I used the following code to connect to the database:

DBHOST = ENV["RDB_HOST"] || "localhost"
DBPORT = ENV["RDB_PORT"] || 28015

conn = r.connect :host => DBHOST, :port => DBPORT
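For a Node.js backend, the equivalent configuration might look like the sketch below. The RDB_HOST and RDB_PORT variables are the ones exposed by the plugin; the fallback values mirror the Ruby example and are intended for local development:

```javascript
// Read the connection details injected by the Dokku RethinkDB plugin,
// falling back to a local RethinkDB instance on the default client port.
var DBHOST = process.env.RDB_HOST || "localhost";
var DBPORT = parseInt(process.env.RDB_PORT, 10) || 28015;

// These values would then be handed to the driver, e.g.:
// r.connect({host: DBHOST, port: DBPORT}, function(err, conn) { /* ... */ });
```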

After you finish configuring your application so that it will run in Dokku, be sure to commit your changes to your local git repository. To deploy the application, you will need to create a new remote:

$ git remote add dokku dokku@<droplet domain or IP>:myapp

In the example above, use the domain or IP address of the droplet. Replace the word myapp with the name of your application. The name should match the one that you used when you created the RethinkDB container earlier.

Deploy your application

When you are ready to deploy the application, simply push to dokku:

$ git push dokku master

When you push the application, Dokku will automatically create a new container for it on the droplet, install the necessary dependencies, and start running the application. After the deployment process is complete, you will see the application's address in the output. If you used an IP address, it will just be the IP and port. If you used a domain, it will be a subdomain such as myapp.example.com, based on the hostname you configured. Visit the site in a web browser to see if it worked correctly.

If your application didn't start correctly, you can log into the droplet to troubleshoot. Use the following command to see the logs emitted by the deploy process:

$ dokku logs myapp

Replace myapp with the name that you used for your application. That command will show you the log output, which should help you determine if there were any errors. If you want to delete the deployed application, perform the following command:

$ dokku delete myapp

You can type dokku help to see the full list of available commands. I also recommend looking at the advanced usage examples for the RethinkDB Dokku plugin to learn about other capabilities that it provides. You can, for example, expose the web console for a specific containerized RethinkDB instance through a public port on the host.

Although the initial setup process is a little bit involved, Dokku makes it extremely easy to deploy and run your RethinkDB applications. Be sure to check out our example projects if you are looking for a sample RethinkDB application to try deploying with Dokku.

For additional information about using Dokku with RethinkDB, check out the RethinkDB Dokku plugin repository and the official Dokku documentation.

Upcoming RethinkDB events for October and November

Join the RethinkDB team at the following upcoming events:

RethinkDB at HTML5DevConf

October 21-22, Moscone Center

RethinkDB will be in San Francisco next week for HTML5DevConf, a popular event for frontend web developers. Conference attendees will be able to find us at table 29 in the conference expo hall. You can see our latest demo apps and meet RethinkDB co-founder, Slava Akhmechet. We will also have some fun RethinkDB goodies on hand to give away, including shirts and stickers.

Webinar with Compose

Wednesday, October 22 at 1:30PM PST

Our friends at Compose recently introduced a new service that provides managed RethinkDB hosting in the cloud. They have published several guides to help new users get started with the service. If you would like to learn more, be sure to catch our joint webinar with Compose next week. The live video event will feature Slava Akhmechet and Compose co-founder Kurt Mackey.

RSVP Here »

RethinkDB at DevCon5

November 18-19, San Jose Convention Center

RethinkDB will be at the HTML5 Communications Summit next month in San Jose. Slava will present a talk about real-time web application development with RethinkDB. We will also have a booth where you can see RethinkDB demos, meet members of the team, and get some nice RethinkDB goodies to bring home.

Move Fast and Break Things meetup

Wednesday, November 19 at 6:30 PM, Heavybit Industries, 325 Ninth Street (map)

RethinkDB will give a presentation for the Move Fast and Break Things meetup group in San Francisco. Learn how the RethinkDB team works, including details about the tools and collaborative processes that we use to deliver new RethinkDB releases. More details about the event will be available as it approaches.

RSVP Here »

Hosted RethinkDB deployments in the cloud now available from Compose

We are pleased to announce that our friends at Compose now offer RethinkDB hosting in the cloud. Their new service lets you get a managed RethinkDB deployment in a matter of seconds, providing a fast and easy way to start working on your RethinkDB project without the overhead of managing your own infrastructure or provisioning your own cluster.

Compose, formerly known as MongoHQ, is a dedicated Database as a Service (DBaaS) company. RethinkDB is the third database in their product lineup, launching alongside their existing support for MongoDB and Elasticsearch. Available today as a public beta, their hosted RethinkDB deployments come with automatic scaling and backups.

Each deployment provided by Compose is configured as a high-availability cluster with full redundancy. Their elastic provisioning service manages the entire environment, scaling deployments as needed to accommodate user workloads. Pricing starts at $45 per month for a three-node cluster with 2GB of storage capacity.

Migrate data from a MongoDB deployment

In addition to elastic scaling, Compose also offers a data migration system called a Transporter. If you have data in an existing MongoDB deployment managed by Compose, you can seamlessly import it into a RethinkDB deployment.

The import can be a one-time event or maintained on an ongoing basis with continuous updates—regularly pulling the latest changes into RethinkDB from your MongoDB deployment. If you have an existing MongoDB application that you would like to consider migrating to RethinkDB, Compose makes it really easy to get started.

Get started with Compose

To create a hosted RethinkDB instance, click the Add Deployment button in the Compose admin panel and select RethinkDB. Simply enter a name for the deployment—Compose handles the rest. You will need to input billing information for your Compose account if you have not done so previously.

Each RethinkDB deployment hosted by Compose has its own private network. Compose uses SSH tunneling to provide secure access to a hosted cluster. When you create a RethinkDB deployment in the Compose admin console, it will give you the host and port information that you need to connect.

Once you set up the SSH tunnel on your client system, you can work with the hosted RethinkDB instance in much the same way you would work with a local installation of the database. Even the RethinkDB admin console and Data Explorer operate as expected.

Building your next application with RethinkDB couldn't be easier. Register an account at compose.io and get started right away. For more details, check out Compose's guides for getting started with their hosted RethinkDB service.

BeerThink: infinite scrolling in a mobile app with Ionic, Node.js, and RethinkDB

Developers often use pagination to display large collections of data. An application can fetch content in batches as needed, presenting a fixed number of records at a time. On the frontend, paginated user interfaces typically provide something like "next" and "previous" navigation buttons so that users can move through the data set. In modern mobile apps, it is increasingly common to implement an infinite scrolling user interface on top of paginated data. As the user scrolls through a list, the application fetches and appends new records.

To demonstrate the use of pagination in RethinkDB applications, I made a simple mobile app called BeerThink. It displays a list of beers and breweries, providing a detailed summary when the user taps an item. The app uses a data dump from the Open Beer Database, which contains information about roughly 4,400 beers and 1,200 breweries. I converted the data to JSON so that it is easy to import into RethinkDB. There are two tables, one for beers and one for breweries. The application uses RethinkDB's support for table joins to correlate the beers with their respective breweries.

BeerThink's backend is built with Node.js and Express. It exposes beer and brewery data retrieved from a RethinkDB database, providing a paginated API that returns 50 records at a time.

The BeerThink frontend is built with Ionic, a popular AngularJS-based JavaScript framework designed for mobile web apps. BeerThink uses an infinite scrolling list to present the beers in alphabetical order.

BeerThink's architecture aligns with the API-first approach used by many modern mobile web applications. The backend is solely an API layer, completely decoupled from the frontend. The frontend is a single-page web application designed to consume the backend API. This particular approach makes it easy to build multiple frontend experiences on top of the same backend. You could, for example, easily make native desktop and mobile applications that consume the same backend API.

This tutorial demonstrates how BeerThink's pagination works at each layer of the stack: the RethinkDB database, the Node backend, and the Ionic client application.

Efficient pagination in RethinkDB

If you'd like to follow along and try the pagination queries yourself, create a table and then use the r.http command to add the beer list to a database:

r.table("beers").insert(r.http("", {result_format: "json"}))

To efficiently alphabetize and paginate the beer list, you first need to create an index on the name property:

r.table("beers").indexCreate("name")
After creating the index, you can use it in the orderBy command to fetch an alphabetized list of names:

r.table("beers").orderBy({index: "name"})

When paginating records from a database, you want to be able to obtain a subset of ordered table records. In a conventional SQL environment, you might accomplish that by using OFFSET and LIMIT. RethinkDB's skip and limit commands are serviceable equivalents, but the skip command doesn't offer optimal performance.

The between command, which is commonly used to fetch all documents that are between two keys in a table, is a much more efficient way to get the start position of a table subset. You can optionally specify a secondary index when using the between command, which means that it can operate on the indexed name property of the beers table.

The following example shows how to use the between command on the name index to get all of the beers between "Petrus Speciale" and "Plank Road Pale Ale" in alphabetical order:

r.table("beers")
  .between("Petrus Speciale", "Plank Road Pale Ale", {index: "name"})
  .orderBy({index: "name"})

When the BeerThink application starts, it uses orderBy and limit to fetch the first page of data. To get subsequent pages, it uses the between and limit commands. The value that the program supplies for the between command's start position is simply the index of the very last item that was fetched on the previous page.

r.table("beers")
  .between("Petrus Speciale", null, {leftBound: "open", index: "name"})
  .orderBy({index: "name"}).limit(50)

The example above shows how to fetch 50 records, starting from a particular beer. Because the program doesn't actually know what beer will be at the end of the new page of data, the between command is given null as its closing index value. That will cause the between command to return everything from the start index to the end of the table. The query uses the limit command to get only the desired number of records.

Setting the value of the leftBound option to open tells the between command to omit the first record, the one that we use to define the start index. That's useful because the item is one that you already have at the end of your list---you don't want to add it again.
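The effect of between with an open leftBound can be illustrated with a plain JavaScript sketch over an already-sorted array. This is a stand-in for the indexed table, not the actual ReQL implementation; the function name is illustrative:

```javascript
// Simulate between(last, null, {leftBound: "open"}) followed by limit(n)
// over a sorted array of names.
function nextPage(sortedNames, last, n) {
  // With no cursor, return the first page.
  if (last == null) return sortedNames.slice(0, n);
  // Skip everything up to and including `last` (the open left bound).
  var start = sortedNames.findIndex(function(name) { return name > last; });
  return start === -1 ? [] : sortedNames.slice(start, start + n);
}

var beers = ["Amber", "Bock", "Lager", "Pilsner", "Stout"];
var page1 = nextPage(beers, null, 2);     // ["Amber", "Bock"]
var page2 = nextPage(beers, page1[1], 2); // ["Lager", "Pilsner"]
```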

The slice command

The between command is a good way to implement pagination in many cases, but it isn't universally applicable. There are cases where you won't have the last item of the previous page to use as a starting point.

Consider a situation where you want the user to be able to visit an arbitrary page without first iterating through the entire set. You might, for example, want to build a web application that accepts an arbitrary page number as a URL path segment and returns the relevant results. In such cases, the best approach is to use the slice command.

The slice command takes a start index and an end index. To get 50 records that are 3000 records down from the top of the table, simply pass 3000 and 3050 as the parameters:

r.table("beers").orderBy({index: "name"}).slice(3000, 3050)

When the user requests an arbitrary page, you simply multiply by the number of items per page to determine the slice command's start and end positions:

query.slice((pageNumber - 1) * perPage, pageNumber * perPage)

In the example above, use the desired values for pageNumber and perPage. Although the slice command isn't as fast as using between and limit, it is still much more efficient than using the skip command.
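The page-to-range arithmetic can be captured in a small helper (a hypothetical function, shown here only for clarity):

```javascript
// Convert a 1-based page number into the [start, end) range
// expected by the slice command.
function sliceBounds(pageNumber, perPage) {
  var start = (pageNumber - 1) * perPage;
  return [start, start + perPage];
}

var bounds = sliceBounds(61, 50); // [3000, 3050]: page 61 of 50-item pages
// query.slice(bounds[0], bounds[1]) would fetch records 3000 through 3049
```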

Pagination in BeerThink's API backend

The BeerThink backend is built with Node and Express. It provides simple API endpoints that are consumed by the frontend client application. The /beers endpoint provides the list of beers, 50 records at a time. The application also has a /breweries endpoint that similarly provides a list of breweries.

For pagination, the user can optionally pass a last URL query parameter with the name of the most recently-fetched item. Both API endpoints support the same pagination mechanism. Taking advantage of the ReQL query language's composability, I generalized the operation that I use for pagination into a function that I can apply to any table index:

function paginate(table, index, limit, last) {
  return (!last ? table : table
    .between(last, null, {leftBound: "open", index: index}))
    .orderBy({index: index}).limit(limit);
}

The table parameter takes a RethinkDB expression that references a table. The index parameter is the name of the table index on which to operate. The limit parameter is the total number of desired items. The last parameter is the item to use to find the start of the page. If the last parameter is null or undefined, the application will fetch the first page of data instead of applying the between command.

In the /breweries endpoint, apply the paginate function to the breweries table. Use the req.param method provided by Express to get the URL query parameter that has the value of the last list item. If the user didn't provide the URL query parameter, the value will be undefined. All you have to do is run the query and give the user the JSON results:

app.get("/breweries", function(req, res) {
  var last = req.param("last");

  paginate(r.table("breweries"), "name", 50, last).run(req.db)
  .then(function(cursor) { return cursor.toArray(); })
  .then(function(output) { res.json(output); })
  .error(function(err) {
    res.status(500).json({error: err});
  });
});
The /beers endpoint is implemented the exact same way as the /breweries endpoint, using the same paginate function that I defined above. The query is a little more complex, however, because it has to use an eqJoin operation to get the brewery for each beer:

app.get("/beers", function(req, res) {
  var last = req.param("last");

  paginate(r.table("beers"), "name", 50, last)
    .eqJoin("brewery_id", r.table("breweries"))
    .map(function(item) {
      return item("left").merge({"brewery": item("right")});
    }).run(req.db)
  .then(function(cursor) { return cursor.toArray(); })
  .then(function(output) { res.json(output); })
  .error(function(err) {
    res.status(500).json({error: err});
  });
});
Even though the two endpoints used different queries, the same pagination function worked well on both. Abstracting common ReQL patterns into reusable functions can greatly simplify your code. If you wanted to make it possible for the client to specify how many records are returned for each page, you could easily achieve that by taking another request variable and passing it to the paginate function as the value of the limit parameter.
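If you do accept a client-supplied page size, it is prudent to coerce and clamp it before handing it to the paginate function. A sketch of one way to do that (the function name and the bounds are assumptions, not values from the original app):

```javascript
// Coerce a client-supplied page size to an integer and clamp it,
// falling back to a default when the input is missing or invalid.
function pageSize(requested, fallback, max) {
  var n = parseInt(requested, 10);
  if (isNaN(n) || n < 1) return fallback;
  return Math.min(n, max);
}

// e.g. paginate(r.table("beers"), "name",
//               pageSize(req.param("count"), 50, 200), last)
```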

Slice-style pagination on the backend

Although the between command is the best approach to use for pagination in the BeerThink application, the slice command is also easy to implement on the backend. I've included a brief explanation here for those who would like to see an example.

When you define a URL handler in Express, you can use a colon to signify that a particular URL segment is a variable. If you define the breweries endpoint as /breweries/:page, the page number passed by the user in the URL segment will be assigned to the request's page parameter.

In the handler for the endpoint, use parseInt or a plus sign to coerce the page number into an integer that can be passed into the ReQL query. Next, use the orderBy command to alphabetize the breweries. Finally, use the slice command with the page number and item count to fetch the desired subset of items.

app.get("/breweries/:page", function(req, res) {
  var pageNum = parseInt(req.params.page) || 1;

  r.table("breweries").orderBy({index: "name"})
    .slice((pageNum - 1) * 50, pageNum * 50).run(req.db)
  .then(function(cursor) { return cursor.toArray(); })
  .then(function(output) { res.json(output); })
  .error(function(err) {
    res.status(500).json({error: err});
  });
});
If the user browses to /breweries/3, the application will give them the third page of brewery data formatted in JSON. In the example above, you might notice that the code assigns a default value of 1 to the pageNum variable if a page number wasn't provided with the request. That makes it so visiting /breweries by itself, without a page URL segment, will return the first page of data.

Consuming the paginated API in Ionic

Now that the endpoint is defined, the client can simply iterate through the pages as the user scrolls, adding each page of data to a continuous list. It's especially easy to accomplish with Ionic, because the framework includes an AngularJS directive called ion-infinite-scroll that you can use alongside any list view to easily implement infinite scrolling:

  <ion-item collection-repeat="beer in items" ...>
  </ion-item>

  <ion-infinite-scroll on-infinite="fetchMore()" distance="25%">
  </ion-infinite-scroll>

In the markup above, the framework will execute the code in the on-infinite attribute whenever the user scrolls to the position described in the distance attribute. In this case, the application will call the fetchMore method on the active scope whenever the user scrolls within 25% of the list's bottom.

In the associated AngularJS controller, the fetchMore method uses the $http service to retrieve the next page of data. It passes the name property of the most recently-fetched list item as the last URL query parameter, telling the backend which page to return.

app.controller("ListController", function($scope, $http) {
  $scope.items = [];
  var end = false;

  $scope.fetchMore = function() {
    if (end) return;

    var count = $scope.items.length;
    var params = count ? {"last": $scope.items[count-1].name} : {};

    $http.get("/beers", {params: params}).success(function(items) {
      if (items.length)
        Array.prototype.push.apply($scope.items, items);
      else end = true;
    }).error(function(err) {
      console.log("Failed to download list items:", err);
      end = true;
    }).finally(function() {
      // Signal Ionic that this batch has finished loading
      $scope.$broadcast("scroll.infiniteScrollComplete");
    });
  };
});
Each time that the fetchMore function retrieves data, it appends the new records to the items scope variable. If the backend returns no data, the application assumes that it has reached the end of the list and will stop fetching additional pages. Similarly, it will stop fetching if it encounters an error. In a real-world application, you might want to handle errors more gracefully and make it so that the user can force a retry.

The ion-item element in the HTML markup is bound to the items array, which means that new records will automatically display in the list. When I first built the application, I originally implemented the repeating list item with Angular's ng-repeat directive. I soon discovered that ng-repeat doesn't scale very well to lists with thousands of items---scrolling performance wasn't very good and switching back from the beer detail view was positively glacial.

I eventually switched to Ionic's relatively new collection-repeat directive, which is modeled after the cell reuse techniques found in native mobile frameworks. Adopting collection-repeat substantially improved scrolling performance and eliminated detail view lag. If you are building mobile web apps with infinite scrolling lists that will house thousands of items, I highly recommend collection-repeat.

Going further

The application has a number of other features that are beyond the scope of this article, but you can get the source code from GitHub and have a look if you would like to learn more.

Install RethinkDB and check out the 10-minute intro guide to start building your first project.

Building an earthquake map with RethinkDB and GeoJSON

RethinkDB 1.15 introduced new geospatial features that can help you plot a course for smarter location-based applications. The database has new geographical types, including points, lines, and polygons. Geospatial queries make it easy to compute the distance between points, detect intersecting regions, and more. RethinkDB stores geographical types in a format that conforms with the GeoJSON standard.

Developers can take advantage of the new geospatial support to simplify the development of a wide range of potential applications, from location-aware mobile experiences to specialized GIS research platforms. This tutorial demonstrates how to build an earthquake map using RethinkDB's new geospatial support and an open data feed hosted by the USGS.

Fetch and process the earthquake data

The USGS publishes a global feed that includes data about every earthquake detected over the past 30 days. The feed is updated with the latest earthquakes every 15 minutes. This tutorial uses a version of the feed that only includes earthquakes that have a magnitude of 2.5 or higher.

In the RethinkDB administrative console, use the r.http command to fetch the data:

r.http("http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_month.geojson")
The feed includes an array of geographical points that represent earthquake epicenters. Each point comes with additional metadata, such as the magnitude and time of the associated seismic event. You can see a sample earthquake record below:

{
  id: "ak11383733",
  type: "Feature",
  properties: {
    mag: 3.3,
    place: "152km NNE of Cape Yakataga, Alaska",
    time: 1410213468000,
    updated: 1410215418958,
    ...
  },
  geometry: {
    type: "Point",
    coordinates: [-141.1103, 61.2728, 6.7]
  },
  ...
}
The next step is transforming the data and inserting it into a table. In cases where you have raw GeoJSON data, you can typically just wrap it with the r.geojson command to convert it into native geographical types. The USGS earthquake data, however, uses a non-standard triple value for coordinates, which isn't supported by RethinkDB. In such cases, or in situations where you have coordinates that are not in standard GeoJSON notation, you will typically use commands like r.point and r.polygon to create geographical types.

Using the merge command, you can iterate over earthquake records from the USGS feed and replace the value of the geometry property with an actual point object. The output of the merge command can be passed directly to the insert command on the table where you want to store the data:

r.table("quakes").insert(
  r.http("http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_month.geojson")("features")
    .merge(function(quake) {
      return {
        geometry: r.point(
          quake("geometry")("coordinates")(0),
          quake("geometry")("coordinates")(1))
      };
    }))
The r.point command takes longitude as the first parameter and latitude as the second parameter, just like GeoJSON coordinate arrays. In the example above, the r.point command is passed the coordinate values from the earthquake object's geometry property.
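Since the USGS feed uses [longitude, latitude, depth] triples, the conversion amounts to dropping the third element before constructing the point. The same transformation in plain JavaScript (a sketch for illustration; the ReQL version does this server-side):

```javascript
// Reduce a USGS [longitude, latitude, depth] triple to the
// [longitude, latitude] pair that r.point expects.
function toLonLat(coordinates) {
  return [coordinates[0], coordinates[1]];
}

var epicenter = toLonLat([-141.1103, 61.2728, 6.7]);
// epicenter is [-141.1103, 61.2728]; the depth (6.7 km) is discarded
```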

As you can see, it's easy to load content from remote data sources into RethinkDB. You can even use the query language to perform relatively sophisticated data transformations on the fetched data before inserting it into a table.

Perform geospatial queries

The next step is to create an index on the geometry property. Use the indexCreate command with the geo option to create an index that supports geospatial queries:

r.table("quakes").indexCreate("geometry", {geo: true})

Now that there is an index, try querying the data. For the first query, try fetching a list of all the earthquakes that took place within 200 miles of Tokyo:

r.table("quakes").getIntersecting(
  r.circle([139.69, 35.68], 200, {unit: "mi"}), {index: "geometry"})

In the example above, the getIntersecting command will find all of the records in the quakes table that have a geographic object stored in the geometry property that intersects with the specified circle. The command creates a polygon that approximates a circle with the desired radius and center point. The unit option tells the command to use a particular unit of measurement (miles, in this case) to compute the radius. The coordinates used in the above example correspond with the latitude and longitude of Tokyo.
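As a rough client-side sanity check for results like these (not part of the application, purely an illustration of the distance computation), a haversine helper can estimate the great-circle distance between two [longitude, latitude] pairs:

```javascript
// Great-circle distance in miles between two [longitude, latitude] pairs.
function haversineMiles(a, b) {
  var R = 3959; // mean Earth radius in miles
  var toRad = function(deg) { return deg * Math.PI / 180; };
  var dLat = toRad(b[1] - a[1]);
  var dLon = toRad(b[0] - a[0]);
  var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(a[1])) * Math.cos(toRad(b[1])) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(h));
}

var tokyo = [139.69, 35.68];
// Any epicenter returned by the Tokyo query should be within
// roughly 200 miles: haversineMiles(tokyo, epicenter) <= 200
```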

Let's say that you wanted to get the largest earthquake for each individual day. To organize the earthquakes by day, use the group command on the date. To get the largest from each day, you can chain the max command and have it operate on the magnitude property.

r.table("quakes")
  .group(r.epochTime(r.row("properties")("time").div(1000)).date())
  .max(r.row("properties")("mag"))
The USGS data uses timestamps that are counted in milliseconds since the UNIX epoch. In the query above, div(1000) is used to normalize the value so that it can be interpreted by the r.epochTime command. It's also worth noting that commands chained after a group operation will automatically be performed on the contents of each individual group.
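The millisecond-to-seconds normalization can be reproduced in plain JavaScript. The sketch below mirrors what div(1000) and r.epochTime accomplish inside the query; the function name is illustrative:

```javascript
// Convert a USGS millisecond timestamp to a UTC calendar-day key,
// mirroring the div(1000) + r.epochTime normalization in the query.
function dayKey(epochMillis) {
  var seconds = Math.floor(epochMillis / 1000); // what div(1000) produces
  return new Date(seconds * 1000).toISOString().slice(0, 10);
}

var key = dayKey(1410213468000); // "2014-09-08" (UTC)
```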

Build a simple API backend

The earthquake map application has a simple backend built with node.js and Express. It implements several API endpoints that client applications can access to fetch data. Create a /quakes endpoint, which returns a list of earthquakes ordered by magnitude:

var r = require("rethinkdb");
var express = require("express");

var app = express();
app.use(express.static(__dirname + "/public"));

var configDatabase = {
  db: "quake",
  host: "localhost",
  port: 28015
};

app.get("/quakes", function(req, res) {
  r.connect(configDatabase).then(function(conn) {
    this.conn = conn;

    return r.table("quakes").orderBy(
      r.desc(r.row("properties")("mag"))).run(conn);
  })
  .then(function(cursor) { return cursor.toArray(); })
  .then(function(result) { res.json(result); })
  .finally(function() {
    if (this.conn)
      this.conn.close();
  });
});

Add an endpoint called /nearest, which will take latitude and longitude values passed as URL query parameters and return the earthquake that is closest to the provided coordinates:

app.get("/nearest", function(req, res) {
  var latitude = req.param("latitude");
  var longitude = req.param("longitude");

  if (!latitude || !longitude)
    return res.json({err: "Invalid Point"});

  r.connect(configDatabase).then(function(conn) {
    this.conn = conn;

    return r.table("quakes").getNearest(
      r.point(parseFloat(longitude), parseFloat(latitude)),
      { index: "geometry", unit: "mi" }).run(conn);
  })
  .then(function(result) { res.json(result); })
  .finally(function() {
    if (this.conn)
      this.conn.close();
  });
});
The r.point command in the code above is given the latitude and longitude values that the user included in the URL query. Because URL query parameters are strings, you need to use the parseFloat function (or a plus sign prefix) to coerce them into numbers. The query is performed against the geometry index.

In addition to returning the closest item, the getNearest command also returns the distance. When using the unit option in the getNearest command, the distance is converted into the desired unit of measurement.

Build a frontend with AngularJS and leaflet

The earthquake application's frontend is built with AngularJS, a popular JavaScript MVC framework. The map is implemented with the Leaflet library and uses tiles provided by the OpenStreetMap project.

Using the AngularJS $http service, retrieve the JSON quake list from the node.js backend, create a map marker for each earthquake, and assign the array of earthquake objects to a variable in the current scope:

$scope.fetchQuakes = function() {
  $http.get("/quakes").success(function(quakes) {
    for (var i in quakes)
      quakes[i].marker = L.circleMarker(L.latLng(
        quakes[i].place.coordinates[1],
        quakes[i].place.coordinates[0]), {
        radius: quakes[i].properties.mag * 2,
        fillColor: "#616161", color: "#616161"
      });

    $scope.quakes = quakes;
  });
};
To display the points on the map, use Angular's $watchCollection to apply or remove markers as needed when a change is observed in the contents of the quakes array.

$scope.map = L.map("map").setView([0, 0], 2);
// tileUrl and mapAttrib hold the tile server URL and attribution text
$scope.map.addLayer(L.tileLayer(tileUrl, {attribution: mapAttrib}));

$scope.$watchCollection("quakes",
  function(addItems, removeItems) {
    if (removeItems && removeItems.length)
      for (var i in removeItems)
        $scope.map.removeLayer(removeItems[i].marker);

    if (addItems && addItems.length)
      for (var i in addItems)
        $scope.map.addLayer(addItems[i].marker);
  });
You could just call $scope.map.addLayer in the fetchQuakes method to add markers directly as they are created, but using $watchCollection is more idiomatically appropriate for AngularJS---if the application adds or removes items from the array later, it will dynamically add or remove the corresponding place markers on the map.

The application also displays a sidebar with a list of earthquakes. Clicking on an item in the list will focus the associated point on the map. That part of the application was relatively straightforward, built with a simple ng-repeat that binds to the quakes array.

To complete the application, the last feature to add is support for plotting the user's own location on the map and indicating which earthquake in the list is the closest to their position.

The HTML5 Geolocation standard introduced a browser method called navigator.geolocation.getCurrentPosition that provides the coordinates of the user's current location. In the callback for that method, assign the received coordinates to the userLocation variable in the current scope. Next, use the $http service to send the coordinates to the /nearest endpoint.

$scope.updateUserLocation = function() {
  navigator.geolocation.getCurrentPosition(function(position) {
    $scope.userLocation = position.coords;

    $http.get("/nearest", {params: position.coords})
      .success(function(output) {
        if (output.length)
          $scope.nearest = output[0].doc;
      });
  });
};
To display the user's position on the map, use $watch to observe for changes to the value of userLocation. When it changes, create a new place marker at the user's coordinates.

$scope.$watch("userLocation", function(newVal, oldVal) {
  if (!newVal) return;

  if ($scope.userMarker)
    $scope.map.removeLayer($scope.userMarker);

  var point = L.latLng(newVal.latitude, newVal.longitude);
  $scope.userMarker = L.marker(point, {
    icon: L.icon({iconUrl: "mark.png"})
  });
  $scope.map.addLayer($scope.userMarker);
});

Put a pin in it

To view the complete source code, you can check out the repository on GitHub. To try the example, run npm install in the root directory and then execute the application by running node app.js.

To learn more about using geospatial queries in RethinkDB, check out the documentation. Geospatial support is only one of the great new features introduced in RethinkDB 1.15. Be sure to read the release announcement to get the whole story.