Salesforce’s Tableau Acquisition: What It's About & Why It Matters
by Tyler Putterman, on June 15, 2019
What it’s about, why it’s important, and what it means for your data strategy
It’s been a nice few weeks to be a data visualization company. Salesforce’s acquisition of Tableau and Google’s acquisition of Looker, as well as Sisense merging with Periscope Data and Alteryx snapping up ClearStory Data, are a clear signal of the growing need to access and use data assets quickly and easily.
What it's about
Businesses have been using data to make decisions for a long time, but the sheer amount of data available today—and the ways in which it can be used and applied—has overwhelmed many businesses’ capacity to harness it for better decision-making.
To manage all this data, data stacks have emerged to collect, store, organize, render, and act on data. Starting from the bottom, there are 3 main components to these data stacks:
- The data layer. This is the data itself, the fundamental layer, and can be made up of both first and third party assets. This could be customer records in Salesforce, or impression logs stored in Google.
- The infrastructure layer. How the data is stored, organized, and ultimately retrieved. This is what already existed in Google’s and Salesforce’s data platforms.
- The visualization layer. How the data is rendered and manipulated, based on the infrastructure. This is what Salesforce and Google are acquiring: a way to visualize and understand their data in order to make decisions based on it.
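The three layers above can be sketched very loosely in code. Everything here—the records, the `Warehouse` class, the chart function—is purely illustrative, not any vendor’s actual API:

```python
# Illustrative sketch of the three-layer data stack.
# All names here are hypothetical, not any vendor's actual API.

# Data layer: the raw first- and third-party records themselves,
# e.g. customer records from a CRM.
crm_records = [
    {"customer_id": 1, "region": "NA", "revenue": 1200},
    {"customer_id": 2, "region": "EU", "revenue": 800},
]

# Infrastructure layer: how data is stored, organized, and retrieved.
class Warehouse:
    def __init__(self, records):
        self._records = records

    def query(self, **filters):
        # Return only records matching every given field filter.
        return [r for r in self._records
                if all(r.get(k) == v for k, v in filters.items())]

# Visualization layer: how retrieved data is rendered for decision-making.
def render_bar_chart(rows, label_key, value_key):
    for row in rows:
        print(f"{row[label_key]:>4} | {'#' * (row[value_key] // 100)}")

warehouse = Warehouse(crm_records)
render_bar_chart(warehouse.query(), "region", "revenue")
```

The point of the sketch is the separation of concerns: the visualization layer never touches raw records directly, only what the infrastructure layer serves up—which is why acquiring the visualization layer plugs so naturally into a platform that already owns the lower two.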
Why it's important
Why are we seeing so much activity from so many significant players right now? There are a number of trends in the data space that help explain why a robust data visualization capability is suddenly so important:
- Growing emphasis on utility across departments. We have been beholden to a world of siloed data; even practitioners within the same company are not necessarily using the same data assets when they should be. It’s not uncommon right now for a data science team to be using raw logs to derive insights and test hypotheses, and for a media activation or ad sales team to be using segment data to activate against those exact insights the data science team surfaced. It doesn’t make sense to use one data set to make a decision, and a separate data set to activate that same decision.
- Increase in sophistication of data use cases. Coupled with the increased depth and breadth of data available, more sophisticated use cases mean more data will be put to work, requiring a stack to manage the various layers and components.
- Greater importance of seamless integration of first and third party data. Of course first party data is extremely valuable, and will continue to be so, but increasingly, organizations will need third party data to move their business forward in an intelligent manner. For real insights and actions to come from these assets, though, first and third party data need to be interoperable. If you think third party data “doesn’t work,” it’s likely because it hasn’t been well integrated with what you already know about your customers in your first party data.
Even five years ago, very few companies were equipped to leverage external data. Most companies still did not analyze their own data! — Auren Hoffman (@auren) June 10, 2019
But as companies get better and better at finding insights in their internal data, they will look externally for data more and more.
What these trends indicate is both a growing use of data by functions across the enterprise, and a growing complexity of the data being used. However, if all the functions that need the data can’t easily access, render, and activate it, it doesn’t really matter how much data you have or how good it is. These acquisitions address that by removing some of the friction between the different layers and helping democratize data within an organization.
What does this mean for me?
While making data easy to access, render, and activate is a must for any data-driven organization, it doesn’t matter much if the 3 Vs of data—volume, variety, and velocity—are not mastered first. And the fact of the matter is that most of us are still stuck in the era of data being heavy, disparate, slow, and cumbersome to acquire, analyze, and activate.
To power data visualization in a way that enables proper decision-making, organizations must have access to a high volume of highly varied data points, and must be able to access them at a high velocity.
Narrative can help simplify and bridge the data layer with the infrastructure layer in a way that lets you render the data with your preferred data visualization product or in-house capabilities. Suppliers in our Narrative marketplace can help with the breadth and depth—or the variety and volume—of data, and our core platform capabilities of aggregating, normalizing, de-duplicating, and formatting across multiple supply partners increase the velocity at which you can acquire and activate data, making it easy for you to make informed, data-driven business decisions.
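To make the aggregating, normalizing, and de-duplicating steps concrete, here is a rough sketch of what merging records from multiple suppliers involves. The supplier data, field names, and matching key are invented for illustration; this is not Narrative’s actual schema or pipeline:

```python
# Hypothetical sketch: combining records from two data suppliers.
# Supplier data, field names, and the email join key are invented
# for illustration only.

supplier_a = [{"EMAIL": "Ana@Example.com", "age": 34}]
supplier_b = [{"email": "ana@example.com ", "age": 34},
              {"email": "bo@example.com", "age": 41}]

def normalize(record):
    # Map differing field names and formats onto one common schema.
    email = (record.get("email") or record.get("EMAIL")).strip().lower()
    return {"email": email, "age": record["age"]}

def aggregate_and_dedupe(*sources):
    # Aggregate across suppliers, de-duplicating on the normalized key.
    seen = {}
    for source in sources:
        for record in source:
            row = normalize(record)
            seen[row["email"]] = row
    return list(seen.values())

rows = aggregate_and_dedupe(supplier_a, supplier_b)
# rows now holds one record per unique email across both suppliers
```

The same person appears in both feeds with inconsistent casing and whitespace; normalizing first is what makes the de-duplication step actually catch the overlap—which is the “third party data doesn’t work until it’s integrated” point in miniature.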