Oct 29, 2018


On one side of the aisle are the databases that have been around for decades. The data falls neatly into tables, and the database will execute elaborate queries to join those tables and find the right rows. On the other side are the NoSQL upstarts, which make grand promises about speed and parallelism, with the small caveat that every once in a while things might go south and the database will send back wrong or inconsistent answers.
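To make the relational side concrete, here is a minimal sketch of the kind of cross-table matching described above, using Python's built-in sqlite3 module. The `users` and `orders` tables, and all the row data, are invented purely for illustration.

```python
import sqlite3

# Hypothetical tables for illustration: data that "falls neatly into tables".
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, item TEXT)")
cur.execute("INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob')")
cur.execute("INSERT INTO orders VALUES (1, 1, 'widget'), (2, 2, 'gadget')")

# The database matches rows across tables with a JOIN and finds the right rows.
rows = cur.execute(
    "SELECT users.name, orders.item FROM users "
    "JOIN orders ON orders.user_id = users.id "
    "ORDER BY users.name"
).fetchall()
print(rows)  # [('Alice', 'widget'), ('Bob', 'gadget')]
conn.close()
```

A NoSQL key-value store, by contrast, would typically denormalize this data and skip the join entirely, which is part of where its speed comes from.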

Are the belt-and-suspenders approaches of traditional databases, with their traditional transaction protection, the right thing for your data? Or do you want a faster, cheaper, more modern tool that will spread the load effectively over a cluster of machines? Sure, consistency and accuracy are important to banks, but what about a table full of random blather from the Internet? Does everything need the best protection that data scientists can deliver?

Those who need absolute consistency, like banks and airlines, go with traditional SQL databases and real transactions. Everyone else chooses speedier, simpler, more scalable NoSQL.
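What "real transactions" buy you can be sketched in a few lines, again with sqlite3. The `accounts` table and the bank-transfer scenario are hypothetical; the point is that the two updates either both commit or both roll back, so no half-finished transfer ever becomes visible.

```python
import sqlite3

# A hypothetical bank transfer: move 150 from account 1, which holds only 100.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
conn.commit()

try:
    with conn:  # a transaction: commits on success, rolls back on any exception
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 150 WHERE id = 2")
        # Check the invariant by hand; a CHECK constraint would also work.
        (bal,) = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
        if bal < 0:
            raise ValueError("insufficient funds")
except ValueError:
    pass  # the whole transfer was rolled back

# Neither update survived: no money was created or destroyed.
balances = conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall()
print(balances)  # [(100,), (0,)]
conn.close()
```

Many NoSQL systems offer no equivalent multi-row guarantee, which is exactly the trade-off the paragraph above describes.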