Advancing the realtime web

Over the past few months the team at RethinkDB has been working on a project to make building modern, realtime apps dramatically easier. The upcoming features are the start of an exciting new database access model -- instead of polling the database for changes, the developer can tell RethinkDB to continuously push updated query results to applications in realtime.

This work started as an innocuous feature to help developers integrate RethinkDB with other realtime systems. A few releases ago we shipped changefeeds -- a way to subscribe to change notifications in the database. Whenever a document changes in a table, the server pushes a notification describing the change to subscribed clients. You can subscribe to changes on a table like this:

r.table('accounts').changes().run(conn)

Originally we intended this feature to help developers push data from RethinkDB to specialized data stores like ElasticSearch and message systems like RabbitMQ, but the release generated enormous excitement we didn't expect. Digging deeper, we saw that many web developers used changefeeds as a solution to a much broader problem -- how do you adapt the database to push realtime data to applications?

This turned out to be an important problem for so many developers that we expanded RethinkDB's architecture to explicitly support realtime apps. The first batch of the new features will ship in a few days in the upcoming 1.16 release of RethinkDB, and I'm very excited to share what we've been working on in this post.

Why is building realtime apps so hard?

The query-response database access model works well on the web because it maps directly to HTTP's request-response. However, modern marketplaces, streaming analytics apps, multiplayer games, and collaborative web and mobile apps require sending data directly to the client in realtime. For example, when a user changes the position of a button in a collaborative design app, the server has to notify other users that are simultaneously working on the same project. Web browsers support these use cases via WebSockets and long-lived HTTP connections, but adapting database systems to realtime needs still presents a huge engineering challenge.

A naive way to support live updates is to periodically poll the database for changes, but this solution is unworkable because it forces a tradeoff between the number of concurrent users and the polling interval. Even a modest number of users polling the database frequently will place a tremendous load on the database servers, forcing the administrator to increase the polling interval. Longer polling intervals, in turn, very quickly result in an untenable user experience.
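
For illustration, the naive polling approach looks roughly like this in application code. This is only a sketch using the Python driver; broadcast() and the one-second interval are hypothetical stand-ins for your app's notification layer and tuning.

import time
import rethinkdb as r

conn = r.connect()
last_seen = {}

while True:
    for doc in r.table('ui_elements').run(conn):
        if last_seen.get(doc['id']) != doc:
            last_seen[doc['id']] = doc   # remember the latest version of the document
            broadcast(doc)               # hypothetical helper: notify connected clients
    time.sleep(1)                        # the polling interval: server load vs. staleness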

A scalable solution to this problem involves many cumbersome steps:

  • Hooking into replication logs of the database servers, or writing custom data invalidation logic for realtime UI components.
  • Adding messaging infrastructure (e.g. RabbitMQ) to your project.
  • Writing sophisticated routing logic to avoid broadcasting every message to every web server.
  • Reimplementing database functionality in the backend if your app requires realtime computation (e.g. realtime leaderboards).

All of this requires an enormous commitment of time and engineering resources. A tech presentation from Quora gives a good overview of how challenging it can be. The upcoming 1.16 release of RethinkDB is our take on helping developers build realtime apps with minimal effort, and includes the first batch of realtime push features to tackle this problem.

The database for the realtime web

A major design goal was to make the implementation non-invasive and simple to use. RethinkDB users can get started with the database by using a familiar request-response query paradigm. For example, if you're generating a web page for a visual web design app, you can load the UI elements of a particular project like this:

> r.table('ui_elements').get_all(PROJECT_ID, index='projects').run(conn)
{ 'id': UI_ELEMENT_ID,
  'project_id': PROJECT_ID,
  'type': 'button',
  'position': [100, 100],
  'size': [200, 100] }

But what if your design app is collaborative, and you want to show updates to all designers of a project in realtime? The 1.16 release of RethinkDB significantly expands the changes command to work on a much larger set of queries. The changes command lets you get the result of the query, but also asks the database to continue pushing updates to the web server as they happen in realtime, without the developer doing any additional work:

> r.table('ui_elements').get_all(PROJECT_ID, index='projects').changes().run(conn)
{ 'new_val':
  { 'id': UI_ELEMENT_ID,
    'project_id': PROJECT_ID,
    'type': 'button',
    'position': [100, 100],
    'size': [200, 100] }
}

The first result of the query is just the value of the document. However, when the developer tacks on the changes command, RethinkDB will keep the cursor open, and push updates onto the cursor any time a relevant change occurs in the database. For example, if a different user moves the button in a project, the database will push a diff to every connected web server interested in the particular project, informing them of the change:

{ 'old_val':
  { 'id': UI_ELEMENT_ID,
    'project_id': PROJECT_ID,
    'type': 'button',
    'position': [100, 100],
    'size': [200, 100] },
  'new_val':
  { 'id': UI_ELEMENT_ID,
    'project_id': PROJECT_ID,
    'type': 'button',
    'position': [200, 200],  # the position has changed
    'size': [200, 100] }
}

Any time a web or a mobile client connects to your Python, Ruby, or Node.js application, you can create a realtime feed using the official RethinkDB drivers. The database will continuously push query result updates to your web server, which can forward the changes back to the client in realtime using WebSockets or one of the many wrapper libraries like SockJS, socket.io, or SignalR. Additionally, you'll be able to access the functionality from most languages using one of the many community supported drivers.
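
Consuming a feed from the Python driver looks roughly like this. It's a minimal sketch: push_to_clients() is a hypothetical stand-in for whatever WebSocket or wrapper-library layer your application uses.

import rethinkdb as r

conn = r.connect(host='localhost', port=28015)
feed = r.table('ui_elements').get_all(PROJECT_ID, index='projects').changes().run(conn)

for change in feed:                      # blocks until the server pushes the next update
    push_to_clients(PROJECT_ID, change)  # hypothetical helper: relay the diff to subscribed browsers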

The push access model eliminates the need for invalidation logic in the UI components, additional messaging infrastructure, complex routing logic on your servers, and custom code to reimplement aggregation and sorting in the application. The changes command works on a large subset of queries and is tightly integrated into RethinkDB's architecture. For example, if you wanted to create an animated line graph of operation statistics for all tables in your production database, you could set up a feed on the internal statistics table to monitor the RethinkDB cluster itself:

> r.db('rethinkdb').table('stats').filter({ 'db': 'prod' }).changes().run(conn)

The architecture is designed to be scalable. We're still running benchmarks, but you should be able to create thousands of concurrent changefeeds to scale your realtime apps, and the results will be pushed within milliseconds.

We've also built in many bells and whistles, such as latency awareness, that make building realtime apps much more convenient. For example, if the query results change too quickly and you don't want to update the DOM more often than every fifty milliseconds, you can tell changes to squash updates over a fifty-millisecond window, and the database will take care of aggregating diffs and removing duplicates:

> r.table('ui_elements').get_all(PROJECT_ID, index='projects').changes(squash=0.05).run(conn)

Comparison with realtime sync services

There are many existing realtime sync services that significantly ease the pain of building realtime applications. Firebase, PubNub, and Pusher are notable examples, and there are many others. These services are excellent for getting up and running quickly. They let you sync documents across multiple browsers, offer sophisticated security models, and integrate with many existing web frameworks.

The upcoming features in RethinkDB are fundamentally different from realtime sync services in four critical ways.

Firstly, most existing realtime sync services offer very limited querying capabilities. You can query for a specific document and perhaps a range of documents, but you can't express even simple queries that involve any computation. For example, sorting, advanced filtering, aggregation, joins, or subqueries are either limited or not available at all. This limitation turns out to be critical for real world applications, so most users end up using realtime sync services side by side with traditional database systems, and build up complex code to duplicate data between the two.

In contrast, RethinkDB is a general purpose database that allows you to easily express queries of arbitrary complexity. This eliminates the need for multiple pieces of infrastructure and additional code to duplicate data and keep it in sync across multiple services.

Secondly, the push functionality of realtime sync services is limited to single documents. You can sync documents across clients, but you can't get a realtime incremental feed for more complex operations. In contrast, RethinkDB allows you to get a feed on queries, not just documents. For example, suppose you wanted to build a realtime leaderboard of top five gameplays in your game world. This requires sorting the gameplays by score in descending order, limiting the resultset to five top gameplays, and getting a continuous incremental feed that pushes updates to your clients any time the resultset changes. This functionality isn't available in realtime sync services, but is trivial in RethinkDB:

r.table('gameplays').order_by(index=r.desc('score')).limit(5).changes().run(conn)

Any time the database gets updated with a new gameplay, this query will inform the developer which items dropped off the leaderboard, and which new gameplays should be included. Internally, the database doesn't merely rerun the query any time there is a change to the gameplays table -- the changefeeds are recomputed incrementally and efficiently.
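
To make the mechanics concrete, here is a minimal sketch of consuming that feed with the Python driver. The bookkeeping dict and render_leaderboard() are illustrative additions rather than part of RethinkDB's API, and the exact handling of the feed's initial results may differ.

feed = r.table('gameplays').order_by(index=r.desc('score')).limit(5).changes().run(conn)

top_five = {}                                # local copy of the leaderboard, keyed by gameplay id
for change in feed:
    old, new = change.get('old_val'), change.get('new_val')
    if old is not None:
        top_five.pop(old['id'], None)        # this gameplay dropped out of the top five
    if new is not None:
        top_five[new['id']] = new            # this gameplay entered (or moved within) the top five
    render_leaderboard(top_five.values())    # hypothetical helper: redraw the UI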

Thirdly, realtime sync services are closed ecosystems that run in the cloud. While a hosted version of RethinkDB is available through our partners at Compose.io, both the protocol and the implementation are, and always will be, open-source.

Finally, most existing realtime sync services are built to allow access to their API directly from the web browser. This eliminates the need for building a backend in simple applications, and lets new users quickly deploy their apps with less hassle. As a general purpose database, RethinkDB expects to be accessed from a backend server and does not yet provide a sufficiently robust security model for direct access from the web browser. We're playing with the idea of building a secure proxy server to let web clients access RethinkDB directly from the browser, so eventually you might not need to write backend code if your application is simple enough. For now, however, unlike with realtime sync services, you have to access RethinkDB feeds through the backend code running on your web server.

Comparison with hooking into the replication log

Most traditional database systems offer access to their replication log, which allows clients to learn about the updates happening in the database in realtime. Many infrastructures for realtime apps are built on top of this functionality. There are three fundamental differences between RethinkDB's changefeeds and hooking into the replication log of a database.

Firstly, as with realtime sync services, hooking into the replication log only gives you access to updates on individual documents. In contrast, RethinkDB's changefeeds allow you to get feeds on query resultsets. Consider the example above, where we're building a leaderboard of the top five gameplays in a game world:

r.table('gameplays').order_by(index=r.desc('score')).limit(5).changes().run(conn)

To rebuild this functionality on top of a replication log, your application would need to keep track of the top five gameplays, and you'd have to write custom code that compares each new record in the gameplays table with the current leaderboard to decide whether it displaces an existing entry. More importantly, consider what happens if the game admin decides a player cheated and their gameplay score has to be reduced. Your code would have to go back to the database and recompute the query from scratch, because the replication log alone can't tell you which gameplay should take the vacated spot on the leaderboard.

Writing this code is doable, but is fairly complex and error-prone. In a large application, the complexity can add up quickly if you have many realtime elements. In contrast, RethinkDB's query engine eliminates this complexity by automatically taking care of the computation and sending you the correct updates as the resultset changes in realtime.

Secondly, as you move to sharded environments, working with a replication log presents additional complexity as there isn't a single replication log to deal with. Your application would need to subscribe to multiple replication logs, and manually aggregate the events from replication logs for each shard. In contrast, RethinkDB automatically takes care of handling shards in the cluster, and changefeeds present unified views to your application.

Finally, most database systems don't offer granular filtering functionality for replication logs, so your clients can't get only the parts of the log they're interested in. This presents non-trivial scalability challenges because your infrastructure has to deal with the firehose of all database events, and you need to write custom code to route only the relevant events to appropriate web servers. In contrast, RethinkDB handles scalability issues in the cluster, and each feed gives you exactly the information you need for a particular client.

RethinkDB's changefeeds operate on a higher level of abstraction than traditional replication logs, which significantly reduces the amount of custom code and operational challenges the application developer has to consider.

Integrating with realtime web frameworks

One of the more notable projects that helps developers build realtime apps is Meteor. Meteor is an open-source platform for building realtime apps in JavaScript that promises a significantly improved developer experience. It handles a lot of the boilerplate necessary to build responsive interfaces with live updates, provides a complete platform with client-side and server-side components, and offers many advanced features like latency compensation and security out of the box. The team is making great strides in scalability and maturity of the platform, and many companies are starting to use Meteor to build the next generation of web applications.

Meteor is part of the Node.js ecosystem, and multiple other projects have popped up to bring its functionality to other languages. Volt is a framework that implements similar functionality in Ruby, and webalchemy is an alternative platform for Python. These projects are less mature, but have picked up a lot of interest in their respective ecosystems, and are likely to gain a lot of momentum once they accumulate enough functionality to let developers build high quality, scalable apps.

Meteor, Volt, and webalchemy all run on top of existing databases, so they're ultimately constrained by the realtime functionality and scalability of those databases. We've been collaborating with the Meteor team to ensure our design will work well with these and other similar projects. A few community members have been working on RethinkDB integrations for Meteor and Volt, and we expect robust integrations to become available in the coming months.

More work ahead

The upcoming 1.16 release contains only a subset of the functionality we'd like to include. In the next few releases we plan to expand realtime push even further:

  • We're discussing the implementation for restartable feeds here and here. Feedback welcome!
  • We'd like to make more complex queries available via realtime push. In particular, efficient realtime push implementations for the eq_join command and map/reduce are fairly complex, and aren't making it into 1.16.
  • Exposing the database to the internet entails serious security concerns, so we're kicking around ideas for a secure proxy to enable direct browser access of realtime feeds.

This work is guided by three high level design principles:

  • We believe it's important for realtime database infrastructure to be open. Both the protocol and the implementation are, and always will be, open-source.
  • The implementation should be non-invasive and very simple to use. Developers shouldn't have to care about realtime features until they're ready to add the functionality to their apps.
  • Realtime functionality should be efficient, scalable, and tightly integrated with the rest of the database. It shouldn't feel like an afterthought.

Advancing the realtime web

The new functionality is the start of an exciting new database access model that eliminates many of the complex steps necessary for building realtime apps today. There is no need to poll the database for changes or introduce additional infrastructure like RabbitMQ. RethinkDB pushes relevant changes to the web server the instant they occur. The amount of additional code the developer has to write to implement realtime functionality in their apps is minimal, and all scalability issues are handled by the RethinkDB cluster.

We'll be releasing the realtime extensions to RethinkDB in the next few days along with tutorials and documentation. In the meantime, you can watch a video with a live demo of the features.

We're hoping RethinkDB 1.16 will make building realtime apps dramatically simpler and more accessible. Stay tuned for more updates, and please share your feedback with the RethinkDB team!

Query RethinkDB tables from PostgreSQL with foreign data wrappers

Rick Otten (@rotten) recently released a foreign data wrapper for PostgreSQL that provides a bridge to RethinkDB. The wrapper makes it possible to expose individual tables from a RethinkDB database in PostgreSQL, enabling users to access their RethinkDB data with SQL queries.

The wrapper could prove especially useful in cases where a developer wants to incorporate RethinkDB into an existing application built on PostgreSQL, taking advantage of RethinkDB features like changefeeds to easily add realtime updates. You could, for example, use RethinkDB to store and propagate realtime events while continuing to use PostgreSQL for things like account management and other data persistence.

To try the foreign data wrapper myself, I used it to access cat pictures from my CatThink demo in PostgreSQL. I built CatThink last year to illustrate how RethinkDB changefeeds can simplify the architecture of realtime applications. CatThink, which is built with Node.js and Socket.io, uses Instagram's realtime APIs and a RethinkDB changefeed to display a stream of the latest cat pictures posted to the popular photo sharing service.

As I will show you in this article, I used the foreign data wrapper to connect a PostgreSQL instance to a RethinkDB database so that I could retrieve cat picture URLs with simple SQL queries.

Configure the foreign data wrapper

Rick's RethinkDB wrapper is built with Multicorn, a PostgreSQL extension that lets developers implement foreign data wrappers in Python. Using Multicorn made it possible to build the wrapper around the official RethinkDB Python driver. The wrapper is currently a read-only implementation—you can perform queries that retrieve data from the RethinkDB tables, but you can't manipulate the wrapped tables with operations like INSERT or UPDATE.

I performed my experiment on a Linux system running Ubuntu 14.10, RethinkDB 1.15, and PostgreSQL 9.4. I installed the following packages with APT:

apt-get install python-setuptools python-dev postgresql-server-dev-9.4 pgxnclient postgresql rethinkdb git python-pip

To install Multicorn and the RethinkDB foreign data wrapper, I followed the instructions from the project's documentation.

Retrieve RethinkDB data with SQL queries

I used the following command to initialize the foreign data wrapper, specifying the name of the desired database and the host and port of the RethinkDB server:

CREATE SERVER rethink FOREIGN DATA WRAPPER multicorn OPTIONS (wrapper 'rethinkdb_fdw.rethinkdb_fdw.RethinkDBFDW', host 'localhost', port '28015', database 'cats');

I used the following SQL expression to expose the instacat table, which contains image posting data from Instagram:

CREATE FOREIGN TABLE instacat (id varchar, "user" json, caption json, images json, time timestamp) server rethink options (table_name 'instacat');

In the command, I defined columns that correspond with top-level properties from the documents in the instacat RethinkDB table. I can use those columns when performing a query against the foreign table. Each column in the table is defined with an associated type. I used the json type for properties that contain objects with other nested values. Note that I quoted the user column so that it won't be mistaken for the SQL keyword. I only created columns for a subset of the properties available in each record; you can create columns for as many properties as you want.

To see the foreign table in action, I performed a simple select query in the SQL console:

SELECT * FROM instacat;

The operation worked as expected, displaying the values from the RethinkDB table. It's also possible to extract individual sub-properties from the JSON objects. PostgreSQL 9.3 introduced a number of specialized SQL operators for working with JSON data. The following query shows how to extract a few individual values out of nested JSON structures in each record from the table:

select "user"->'full_name', caption->'text', images#>'{low_resolution,url}' from instacat;

The -> operator extracts the value of a given field as JSON; its ->> counterpart returns the value as text. The #> operator lets you specify a path of keys so that you can retrieve a value from an arbitrary depth within a nested JSON structure. The expression images#>'{low_resolution,url}' in PostgreSQL is equivalent to something like r.row("images")("low_resolution")("url") in ReQL. Thanks to the magic of foreign data wrappers, I can now access kitties in my PostgreSQL applications.

Although you can't modify the data, many SQL operations will work as expected. You can even use joins, performing queries that operate across foreign tables and conventional PostgreSQL tables.

Final notes

Given that every query against a foreign table entails a query against the RethinkDB instance through the Python driver, there's a fair amount of overhead involved. The wrapper's documentation recommends using a materialized view in performance-sensitive usage scenarios.

The documentation also suggests setting log_min_messages to debug1 in your postgresql.conf file (/etc/postgresql/9.4/main/postgresql.conf on Ubuntu) during troubleshooting. That will expose errors from the foreign data wrapper in your logs, which makes it a bit easier to see what's going on.

Rick's foreign data wrapper makes it easy to incorporate RethinkDB into existing applications built on PostgreSQL. It's also a pretty compelling example of how Multicorn simplifies interoperability between PostgreSQL and external data sources.

Want to try it yourself? Install RethinkDB and check out the thirty-second quick start guide.

Using RethinkDB with io.js: exploring ES6 generators and the future of JavaScript

A group of prominent Node.js contributors recently launched a community-driven fork called io.js. One of the most promising advantages of the new fork is that it incorporates a much more recent version of the V8 JavaScript runtime. It happens to support a range of useful ECMAScript 6 (ES6) features right out of the box.

Although io.js is still too new for production deployment, I couldn't resist taking it for a test drive. I used io.js and the experimental rethinkdbdash driver to get an early glimpse at the future of ES6-enabled RethinkDB application development.

ES6 in Node.js and io.js

ES6, codenamed Harmony, is a new version of the specification on which the JavaScript language is based. It defines new syntax and other improvements that greatly modernize the language. The infusion of new hotness makes the development experience a lot more pleasant.

Node.js has a special command-line flag that allows users to enable experimental support for ES6 features, but the latest stable version of Node doesn't give you very much due to its reliance on a highly outdated version of V8. The unstable Node 0.11.x pre-release builds provide more and better ES6 support, but it is still hidden behind the command-line flag.

In io.js, the ES6 features that are stable and mature in V8 are turned on by default. Additional ES6 features that are the subject of ongoing development are still available through command-line flags. The io.js approach is relatively granular, and it strongly encourages adoption of the features that are considered safe to use.

Among the most exciting ES6 features available in both Node 0.11.x and io.js is support for generators. A generator function, which is declared by appending an asterisk to the function keyword, returns an iterator instead of a conventional return value. Inside of a generator function, the developer uses the yield keyword to express the values that the iterator emits.

It's a relatively straightforward feature, but some novel uses open up a few very interesting doors. Most notably, developers can use generators to simplify asynchronous programming. When an asynchronous task is expressed with a generator, you can make it so that the yield keyword will suspend execution of the current method and resume when the desired operation is complete. Much like the C# programming language's await keyword, it flattens out asynchronous code and allows it to be written in a more conventional, synchronous style.

Introducing rethinkdbdash

Developed by RethinkDB contributor Michel Tu, rethinkdbdash is an experimental RethinkDB driver for Node.js that provides a connection pool and several other advanced features. When used in an environment that supports generators, rethinkdbdash optionally lets you handle asynchronous query responses with the yield keyword as an alternative to callbacks or promises.

The following example uses rethinkdbdash with generators to perform a sequence of asynchronous operations. It will create a database, table, and index, which it will then populate with remote data:

var bluebird = require("bluebird");
var r = require("rethinkdbdash")();

var feedUrl = "http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/4.5_month.geojson";

bluebird.coroutine(function *() {
  try {
    yield r.dbCreate("quake").run();
    yield r.db("quake").tableCreate("quakes").run();
    yield r.db("quake").table("quakes")
                       .indexCreate("geometry", {geo: true}).run();

    yield r.db("quake").table("quakes")
                       .insert(r.http(feedUrl)("features")).run();
  }
  catch (err) {
    if (err.message.indexOf("already exists") == -1)
      console.log(err.message);
  }
})();

Each time that the path of execution hits the yield keyword, it jumps out and waits for the operation to finish before continuing. The behavior is similar to what you would get if you used a promise chain, separating each operation into a then method call. The following is the equivalent code, as you would write it today using promises and the official RethinkDB JavaScript driver:

var r = require("rethinkdb");  // the official driver; feedUrl is the same URL defined above

var conn;
r.connect().then(function(c) {
  conn = c;
  return r.dbCreate("quake").run(conn);
})
.then(function() {
  return r.db("quake").tableCreate("quakes").run(conn);
})
.then(function() {
  return r.db("quake").table("quakes").indexCreate(
    "geometry", {geo: true}).run(conn);
})
.then(function() { 
  return r.db("quake").table("quakes")
                      .insert(r.http(feedUrl)("features")).run(conn); 
})
.error(function(err) {
  if (err.msg.indexOf("already exists") == -1)
    console.log(err);
})
.finally(function() {
  if (conn)
    conn.close();
});

The built-in connection pooling in rethinkdbdash carves off a few lines by itself, but even after you factor that in, the version that uses the yield keyword is obviously a lot more intuitive and concise. The generator approach also happens to be a lot more conducive to using traditional exception-based error handling.

Use rethinkdbdash in a web application

To build a working application, I used rethinkdbdash with Koa, a next-generation Node.js web framework designed by the team behind Express. Koa is very similar to Express, but it makes extensive use of generators to provide a cleaner way of integrating middleware components.

The following example defines a route served by the application that returns the value of a simple RethinkDB query. It fetches and orders records, emitting the raw JSON value:

var app = require("koa")();
var route = require("koa-route");
var r = require("rethinkdbdash")();

app.use(route.get("/quakes", function *() {
  try {
    this.body = yield r.db("quake").table("quakes").orderBy(
      r.desc(r.row("properties")("mag"))).run();
  }
  catch (err) {
    this.status = 500;
    this.body = {success: false, err: err};
  }
}));

When the asynchronous RethinkDB query operation completes, its output becomes the return value of the yield expression. The route handler function takes the returned JSON value and assigns it to a property that represents the response body for the HTTP GET request. It doesn't get much more elegant than that.

A promising future awaits

Generators offer a compelling approach to asynchronous programming. In order to make the pattern easier to express and use with promises, developers have already proposed adding an official await keyword to future versions of the language.

By bringing the latest stable V8 features to the masses, the io.js project holds the potential to bring us the future just a little bit faster. Although the developers characterize it as "beta" quality in its current form, it's worth checking out today if you want to get a tantalizing glimpse of what's coming .next().

Give RethinkDB a try with io.js or Node. You can follow our thirty-second RethinkDB quickstart guide.

Hands-on with Remodel: a new Python ODM for RethinkDB

This week, Andrei Horak released Remodel, a new Python-based object document mapping (ODM) library for RethinkDB. Remodel simplifies RethinkDB application development by automating much of the underlying logic that comes into play when working with relations.

Remodel users create high-level model objects and rely on a set of simple class attributes to define relationships. The framework then uses the model objects to generate tables and indexes. It abstracts away the need to do manual work like performing join queries or populating relation attributes when inserting new items. Remodel also has built-in support for connection pooling, which obviates the need to create and manage connections. In this brief tutorial, I'll give you a hands-on look at Remodel and show you how to use it in a web application.

Define your models

To start using Remodel, first install the library. You can use the setup.py included in the source code, or you can install it from PyPI by typing pip install remodel at the command line.

For the purposes of this tutorial, let's assume that we want to build a Starfleet crew roster that correlates crew members with their starships. The first step is to define the models and create the tables:

import remodel.utils
import remodel.connection
from remodel.models import Model

remodel.connection.pool.configure(db="fleet")

class Starship(Model):
    has_many = ("Crewmember",)

class Crewmember(Model):
    belongs_to = ("Starship",)

remodel.utils.create_tables()
remodel.utils.create_indexes()

In an application built with Remodel, all of the model classes must inherit remodel.models.Model. In this application, there are two models: Starship and Crewmember. The has_many and belongs_to class attributes are used to define the relationships between objects. In this case, each Starship can have many Crewmember instances and each Crewmember instance belongs to only one Starship.

The create_tables and create_indexes methods will, as the names suggest, automatically generate tables and indexes based on your defined models. Remodel pluralizes your table names, which means that the Starship model will get a starships table.

The framework instantiates a connection pool, accessible at remodel.connection.pool. You can use the pool's configure method to adjust its behavior and specify connection options, such as the desired database name, host, and port.
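
For example, a configuration that points Remodel at a specific server might look like the line below; the host and port keyword names mirror the standard RethinkDB connection options and are shown here as an illustrative assumption.

remodel.connection.pool.configure(db="fleet", host="db.example.com", port=28015)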

Populate the database

Now that the models are defined, you can populate the database with content. To create a new database record, call the create method on one of the model classes:

voyager = Starship.create(name="Voyager", category="Intrepid", registry="NCC-74656")

Remodel doesn't enforce any schemas, so you can use whatever properties you want when you create a record. The create method used above will automatically add the Voyager record to the starships table. Because the Starship model defines a has_many relationship with the Crewmember model, the voyager record comes with a crewmembers property that you can use to access the collection of crew members that are associated with the ship. Use the following code to add new crew members:

voyager["crewmembers"].add(
    Crewmember(name="Janeway", rank="Captain", species="Human"),
    Crewmember(name="Neelix", rank="Morale Officer", species="Talaxian"),
    Crewmember(name="Tuvok", rank="Lt Commander", species="Vulcan"))

The records provided to the add method are instantiated directly from the Crewmember class. You don't want to use the create method in this case because the add method called on the Voyager instance handles the actual database insertion. It will also automatically populate the relation data, adding a starship_id property to each Crewmember record.
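
To see the relation data that add populates, you can read the new property back. This is a quick sketch; it assumes the generated id is exposed on the voyager object returned by create.

tuvok = Crewmember.get(name="Tuvok")
print tuvok["starship_id"]                   # the id of the Voyager record
print tuvok["starship_id"] == voyager["id"]  # True, assuming create() exposes the generated id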

To make the example more interesting, add a few more Starship records to the database:

enterprise = Starship.create(name="Enterprise", category="Galaxy", registry="NCC-1701-D")
enterprise["crewmembers"].add(
    Crewmember(name="Picard", rank="Captain", species="Human"),
    Crewmember(name="Data", rank="Lt Commander", species="Android"),
    Crewmember(name="Troi", rank="Counselor", species="Betazed"))

defiant = Starship.create(name="Defiant", category="Defiant", registry="NX-74205")
defiant["crewmembers"].add(
    Crewmember(name="Sisko", rank="Captain", species="Human"),
    Crewmember(name="Dax", rank="Lt Commander", species="Trill"),
    Crewmember(name="Kira", rank="Major", species="Bajoran"))

Query the database

When you want to retrieve a record, you can invoke the get method on a model class. When you call the get method, you can either provide the ID of the specific record that you want or you can provide keyword arguments that perform a query against record attributes. If you want to get a specific starship by name, for example, you can do the following:

voyager = Starship.get(name="Voyager")

You can take advantage of the relations that you defined in your models. If you want to find all of the human members of Voyager's crew, you can simply use the filter method on the crewmembers property:

voyager = Starship.get(name="Voyager")
for human in voyager["crewmembers"].filter(species="Human"):
  print human["name"]

Perform filtering on an entire table by calling the filter method on a model class. The following code shows how to display the captain of each ship:

for person in Crewmember.filter(rank="Captain"):
  print person["name"], "captain of", person["starship"]["name"]

As you might have noticed, the starship property of the Crewmember instance points to the actual starship record. Remodel populates the property automatically to handle the Crewmember model's belongs_to relationship.

When you want to perform more sophisticated queries, you can use ReQL in conjunction with Remodel. Let's say that you want to evaluate Starfleet's diversity by determining how many crew members are of each species. You can use ReQL's group command:

Crewmember.table.group("species").ungroup() \
          .map(lambda item: [item["group"], item["reduction"].count()]) \
          .coerce_to("object").run()

The table property of a model class provides the equivalent of a ReQL r.table expression. You can chain additional ReQL commands to the table property just as you would when creating any ReQL query.

Put it all together

Just for fun, I'm going to show you how to build a web application for browsing the Starfleet crew roster. The app is built with Flask, a lightweight framework for web application development. The example also uses Jinja, a popular server-side templating system that is commonly used with Flask.

In a Flask application, the developer defines URL routes that are responsible for displaying specific kinds of information. The application uses templates to render the data in HTML format. Create a route at the application root:

import flask

app = flask.Flask(__name__)

@app.route("/")
def ships():
    return flask.render_template("ships.html", ships=Starship.all())

if __name__ == "__main__":
    app.run(host="localhost", port=8090, debug=True)

When the user visits the site root, the application will fetch all of the starships from the database and display them by rendering the ships.html template. The following is from the template file:

<ul>
  {% for ship in ships %}
  <li><a href="/ship/{{ ship['id'] }}">{{ ship['name'] }}</a></li>
  {% endfor %}
</ul>

In the example above, the template iterates over every ship and displays a list item for each one. The list item includes an anchor tag that points to a URL with the ship's ID.

To make the application display the crew members of the ship when the user clicks one of the links, create a new /ship/x route that takes an arbitrary ship ID as a parameter:

@app.route("/ship/<ship_id>")
def ship(ship_id):
    ship = Starship.get(ship_id)
    crew = ship["crewmembers"].all()
    return flask.render_template("ship.html", ship=ship, crew=crew)

Fetch the desired ship from the database using the provided ID. In a real-world application, you might want to check that the record exists and return an error if it doesn't (a sketch of that check appears after the template below). Once you have the ship, fetch the crew via the crewmembers property. Pass both the ship and the crew to the template:

<h1>{{ ship['name'] }}</h1>
<ul>
  {% for member in crew %}
  <li><a href="/member/{{ member['id'] }}">{{ member['name'] }}</a></li>
  {% endfor %}
</ul>
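
As noted above, a production route should verify that the requested ship exists. A minimal sketch of that check, assuming Starship.get returns None for an unknown id, might look like this:

@app.route("/ship/<ship_id>")
def ship(ship_id):
    ship = Starship.get(ship_id)
    if ship is None:          # assumption: get() returns None when no record matches
        flask.abort(404)      # respond with Not Found instead of failing in the template
    crew = ship["crewmembers"].all()
    return flask.render_template("ship.html", ship=ship, crew=crew)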

Now create a /member/x route so that the user can see additional information about a crewman when they click one in the list:

@app.route("/member/<member_id>")
def member(member_id):
    member = Crewmember.get(member_id)
    return flask.render_template("crew.html", member=member)

Finally, define the template for that route:

<h1>{{ member['name'] }}</h1>
<ul>
  <li><strong>Rank:</strong> {{ member['rank'] }}</li>
  <li><strong>Species:</strong> {{ member['species'] }}</li>
</ul>

The template HTML files should go in a templates folder alongside your Python script. When you run the Python script, it will start a Flask server at the desired port. You should be able to visit the URL and see the application in action.

Check out Remodel and install RethinkDB to try it for yourself.

Make beautiful charts with RethinkDB queries and Charted.co

While building applications with RethinkDB, I often find cases where I want to be able to produce simple visualizations to help me better understand my data. Ideally, I'd like to take the output of a simple query and see what it looks like in a graph with as little work as possible. A new project recently introduced by the developers at Medium offers a compelling solution.

Medium's product science team built a lightweight web application called Charted that makes it easy for users to generate and share graphs. As input, the user provides a URL that points to CSV data. Charted processes the data and produces simple graphs with a clean and elegant design. No configuration is needed, though it allows the user to choose between bar and line formats and customize certain aspects of the output.

Charted is built on D3, a popular frontend JavaScript library that is widely used for data visualization. Simplicity is the chief advantage that Charted offers over rolling your own D3-based data visualizations by hand. Medium runs a hosted instance at Charted.co that anyone can use to publish and share graphs. You can also download the Charted source code from GitHub and run your own installation.

In order to use Charted with RethinkDB, you will need to convert the output of the desired query into CSV format and publish it at a URL. Fortunately, there are a number of libraries that make it very easy to perform the necessary conversion. In this tutorial, I will show you how I used the Python-based CSVKit framework with Flask to expose the output of a RethinkDB query in a form that I could pass to Charted.

Prepare your data with CSVKit

CSVKit is an open source toolkit for manipulating CSV content. It's primarily intended for use at the command line, but you can also consume it as a library in a Python script. It has a wide range of features, but we are primarily interested in using its built-in support for converting JSON to CSV.

You can import the json2csv function from the csvkit.convert.js module. The function expects to receive a file-like object, which means that you will need to wrap the content in StringIO if you would like to use a string instead of a file:

import StringIO

from csvkit.convert.js import json2csv

data = """[
  {"name": "Scott Summers", "codename": "Cyclops"},
  {"name": "Hank McCoy", "codename": "Best"},
  {"name": "Warren Worthington", "codename": "Angel"}
]"""

print json2csv(StringIO.StringIO(data))

If you run the code above, it will correlate the matching keys and display a comma-separated table of the values:

name,codename
Scott Summers,Cyclops
Hank McCoy,Best
Warren Worthington,Angel

Not bad so far, right? The conversion process is relatively straightforward. If you have nested objects, it will simply ignore them—it only operates on the top-level keys.

Transform data from RethinkDB

Now that you know how to convert JSON to CSV, the next step is applying the function to the output of your desired query. For the purposes of this tutorial, I'm going to use a feed of earthquake data from the USGS. As you might recall, I used that same data a few months ago in a tutorial that introduced geospatial queries.

In this case, I want to get the total number of earthquakes for each given day so that I will be able to plot it on a graph. Start by creating the table and loading the earthquake feed into the database:

import rethinkdb as r

c = r.connect()
r.db_create("quake").run(c)
r.db("quake").table_create("quakes").run(c)

url = "http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/4.5_month.geojson"
r.db("quake").table("quakes").insert(r.http(url)["features"]).run(c)

To retrieve the relevant data, start by using the group command to organize the earthquakes by date. Next, append the ungroup command to chain additional operations to the grouped output. Finally, use the merge command to add a property that contains a total count of the records for each individual group:

output = r.db("quake").table("quakes") \
    .group(r.epoch_time(r.row["properties"]["time"] / 1000).date()) \
    .ungroup().merge({"count": r.row["reduction"].count()}).run(c)

The group command will create a property called reduction that contains all of the values for each group. To get the total number of items for the group, you can simply call the count method on the array stored in reduction. The USGS feed uses millisecond timestamps, so you have to divide the value of the time property by 1000 to get seconds before applying the epoch_time command.

There are a few minor wrinkles that you have to sort out before you convert the output to CSV. The group keys are date objects, which you can't really use for graphing. You must convert those timestamps to simple date strings that are suitable for use in the graph. The order of the keys is also important, because Charted will automatically use the first column as the x-axis in its graphs.

In order to specify the key order and format the timestamps, you will need to iterate over each item in the result set and create an OrderedDict that contains all of the values:

import json
from collections import OrderedDict

data = json.dumps([OrderedDict([
    ["date", item["group"].strftime("%D")],
    ["count", item["count"]]]) for item in output])

print json2csv(StringIO.StringIO(data))

Serve the output

In order to get the data into Charted, you will need to serve the generated CSV content through a public URL. For the purposes of this tutorial, I chose to accomplish that with Flask, a simple Python library for building server-side web applications.

In a Flask application, you use a Python decorator to associate a function with a URL route. I chose to create two routes, one that exposes the content in JSON format and one that exposes it in CSV format. The latter simply wraps the output of the former:

import flask

app = flask.Flask(__name__)

@app.route("/quakes")
def quakesJSON():
    conn = r.connect()
    output = r.db("quake").table("quakes") \
        .group(r.epoch_time(r.row["properties"]["time"] / 1000).date()) \
        .ungroup().merge({"count": r.row["reduction"].count()}).run(conn)

    conn.close()
    return json.dumps([OrderedDict([
        ["date", item["group"].strftime("%D")],
        ["count", item["count"]]]) for item in output])

@app.route("/quakes/csv")
def quakesCSV():
    return json2csv(StringIO.StringIO(quakesJSON()))

Now that you have a running server that outputs your data set in CSV format, you can take the URL and provide it to Charted. If you intend to use the public instance of Charted that is hosted at Charted.co, you will need to make sure that your Flask application server is publicly accessible. You might want to consider using a tool like ngrok to make a Flask application running on your local system accessible to the rest of the Internet. If you don't want to publicly expose your data, you could also optionally run your own local instance of Charted.

You can find a complete 50-line example by visiting this gist on GitHub. Install RethinkDB to try it for yourself.
