Database service for the #realtimeweb

10x Faster Analytics - Callinize case study

After spending four years working with CRMs, Blake and Patrick started Callinize to solve the problems they had faced firsthand in the sales and CRM worlds. Callinize connects phone calls to the data stored in CRMs. Sales and marketing teams use Callinize to keep track of callers' identities and conversation history, including their social data. By building rich context around conversations and providing analytics on call logs, Callinize helps businesses significantly improve their outbound sales.

Callinize is the bridge between the phone and the CRM: it builds context around each call and provides data insights and analytics via automatic call logging.

The call analytics dashboard is very important to Callinize's customers, as it helps them decode:

  • Who in the team is most effective on calls
  • Parameters (time of day, day of the week) that generate the best results from a call
  • New leads, and their sources
  • Smart followups

The dashboard itself is divided into 8 pie charts, 5 graphs, 1 leaderboard, 1 table and 1 counter.

Callinize Analytics Dashboard

Analytics is hard

Let's take a look at Callinize's stack and their requirements.

Stack and scale

  • NodeJS
  • Hosted MongoDB with an autoscaling option
  • 7M JSON objects
    • each object ~2.8KB in size
    • having nested arrays and objects
    • ~200K new objects added daily


The dashboard they built is quite heavy and complex on the backend. For example, the "Since Created" chart under Sales Call Stats categorizes calls into 7 time ranges: less than a day, less than a week, less than a month, and so on.

Sales Call Stats
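Conceptually, this chart maps each call's age to one of seven buckets and counts the calls per bucket. As a minimal in-memory sketch in plain JavaScript (the bucket boundaries beyond "day/week/month" are illustrative assumptions, not Callinize's exact ones):

```javascript
// Bucket calls into seven age ranges by creation time.
// Boundaries past "month" are assumed for illustration.
const DAY = 24 * 60 * 60 * 1000;
const BUCKETS = [
  { label: '< 1 day',    max: 1 * DAY },
  { label: '< 1 week',   max: 7 * DAY },
  { label: '< 1 month',  max: 30 * DAY },
  { label: '< 3 months', max: 90 * DAY },
  { label: '< 6 months', max: 180 * DAY },
  { label: '< 1 year',   max: 365 * DAY },
  { label: '1 year+',    max: Infinity },
];

function bucketCalls(calls, now = Date.now()) {
  // Start every bucket at zero so empty ranges still render in the chart.
  const counts = Object.fromEntries(BUCKETS.map(b => [b.label, 0]));
  for (const call of calls) {
    const age = now - new Date(call.createdAt).getTime();
    // First bucket whose upper bound the call's age falls under.
    const bucket = BUCKETS.find(b => age < b.max);
    counts[bucket.label] += 1;
  }
  return counts;
}
```

This is exactly the kind of work a database-side bucketing aggregation does for you, without shipping 7M documents to the application.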

This query is called a group aggregation or bucketing aggregation, where entities are grouped into different buckets (categories). It requires the database to pull the data into memory, analyze it, and create groupings. MongoDB puts a hard limit of 100 MB on RAM usage for this aggregation operation, or alternatively allows a much slower disk-based aggregation. The hosted autoscaling is vertical, and leaves sharding responsibilities to developers. During peak hours, the indexing process slows down and affects throughput, and write operations start to fail.
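The disk-based escape hatch has to be requested per query. A mongo shell sketch of what that looks like (the `calls` collection and `$bucket` field are hypothetical names for illustration):

```js
// Opt this pipeline into slower, disk-backed aggregation; without the
// option, MongoDB errors out once group stages exceed 100 MB of RAM.
db.calls.aggregate(
  [
    { $group: { _id: '$bucket', count: { $sum: 1 } } }
  ],
  { allowDiskUse: true }
);
```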

"We migrated to ObjectRocket and then to MMS. It is ridiculous how slow our backend is with just 7 million records and a few aggregations."

Patrick, CTO.

For instance, while fetching call stats for a month, a loading banner would show up for nearly 5 seconds.

They had two choices: hire a MongoDB expert, or spend their own time on it. Neither was appealing, particularly when they were going through the Techstars Cloud 2015 accelerator and needed to focus on scaling their business, not their infrastructure.

Appbase is fast, like Usain Bolt

Patrick sent us this video of Usain Bolt when he first tested Appbase with his dataset.

Appbase is a truly managed database service. It provides a blazing fast document store with an analytics engine that can run ad-hoc queries ranging from search to complex aggregations - all using a modern REST API. Developers don't have to think about shards and scaling performance.

After initial testing, they decided to push their production data to Appbase. The aggregation query for the "Since Created" chart looked something like this:
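The original query body is not preserved here; as a hedged reconstruction, a date-range aggregation over the call creation field against a REST API like Appbase's would look roughly like this (field name, bucket boundaries, and aggregation name are assumptions):

```json
{
  "size": 0,
  "aggs": {
    "since_created": {
      "date_range": {
        "field": "createdAt",
        "ranges": [
          { "from": "now-1d" },
          { "from": "now-7d", "to": "now-1d" },
          { "from": "now-1M", "to": "now-7d" }
        ]
      }
    }
  }
}
```

A single request like this returns the per-bucket counts directly, with no pipeline memory limits to manage on the client side.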

With Appbase, Callinize didn't have to spend any engineering resources monitoring and scaling their analytics queries. Appbase has been running alongside their existing infrastructure since March '15.

Their dashboard build time has dropped from 5 seconds to under 500 milliseconds, literally 10x faster.

Finally, this is what Patrick has to say about his queries!

"Appbase queries are insanely fast."

Patrick, CTO.

San Antonio
Entrepreneur, programmer. Interested in history, comedy, and music; also, psychology and philosophy when drunk.