Patrick Schwerdtfeger is a leading authority on global business trends, including “big data” and business intelligence, and on the challenges and opportunities of massive data management and analytics. More and more devices are monitoring more and more activities, resulting in unprecedented quantities of data along with the insights and opportunities hidden therein. This is the Internet of Things (IoT), and it is already revolutionizing the business world. The trick is to identify the key performance indicators (KPIs) that reveal actionable insights, but that can only begin when all the participants understand what “big data” is and how to approach it.

Patrick is the author of the award-winning book Marketing Shortcuts for the Self-Employed (2011, Wiley) and a regular speaker for Bloomberg TV. He has spoken at conventions and business events around the world. His approach to the data analytics topic is strategic and empowering, and since big data presentations tend to be dry and technical, Patrick’s dynamic and engaging speaking style makes him a perfect selection to open your conference and energize your audience. His “Monetizing Big Data” keynote program introduces the topic, defines the terms and shares inspiring case histories in which companies have increased revenue, decreased costs or even transformed their industries as a result of big data technologies.
Expert on Big Data and Analytics: Keynote Speech
Patrick begins his Big Data keynote program with an explanation of the terms and the primary players involved. Aerospace, utilities, eGovernment initiatives, healthcare and financial institutions are all accumulating incredible amounts of data. The problem is that they don’t know how to organize it, process it or analyze it. In some cases, due to the magnitude of data involved, it takes networks of thousands of servers to run simple filters and queries (using technologies like Hadoop and MapReduce). The resulting challenge has led to entirely new educational specialties and professional occupations including data scientists and data engineers. These professionals specialize in the structuring and analysis of massive data sets.
Once these evolving trends have been introduced, Patrick’s keynote shifts to the opportunities. Algorithms and predictive analytics are the future. The accumulation of data is step #1. Identifying patterns and opportunities (analytics) is step #2. The final step is to create algorithms that deliver actionable insights, as services, to customers and prospects. There are countless examples of this, including automated stock market timing, arbitrage trading, medical diagnostics and pharmaceutical procurement, weather forecasting models, internet search engines, kidney transplant networks and even sports journalism. Algorithms are the final link between big data and revenue.
Big data and algorithms also form the foundation of virtual reality and artificial intelligence. As the computing power continues to increase and the technology continues to evolve, the trends in big data will directly impact the developments of these other related fields. Machine learning will inevitably improve performance in autonomous driving vehicles and augmented reality interfaces. Patrick explains these connections and gives people an idea of how quickly these developments will impact our daily lives.
Patrick builds his keynote programs by accumulating and studying case histories. In new fields like Big Data, the best way to do that is to follow the companies at the forefront of innovation. Some of the leading Big Data companies and innovators include:
- SAP: Explore the world of Big Data
- Oracle: The Foundation for Data Innovation
- IBM: Big Data at the Speed of Business
- Teradata: Analytics and data unleash the potential of great companies.
- Cloudera: A modern platform for data management and analytics.
- Hortonworks: Open and Connected Data Platforms
- Hewlett Packard: Architecture, Infrastructure and Analytics
- MapR: Converged Data Platform
- Microsoft: Data Warehouse Solutions, from Terabytes to Petabytes
- Amazon Web Services: Cloud Computing Services
- SAS: Big Data Insights
- Accenture: Big Success from Big Data
- Palantir: Products Built for a Purpose
- Dell: Data is the new currency. Invest wisely.
Structured Data versus Unstructured Data
When people think about data, they generally think of rows and columns of numbers, all matching in format. That is structured data, but unfortunately only about 5% of digital data is in that format; the other 95% is unstructured. It’s buried inside machine logs, books and documents, websites, legal briefs, blog posts and countless other formats, each in its own unique digital environment. The trick is to churn through all that data and pull the relevant pieces out. It’s difficult, but it also offers tremendous business insights.
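Pulling relevant pieces out of unstructured data can be sketched in a few lines. This is a minimal, hypothetical illustration (the log format and field names are made up for the example): free-form web-server log lines go in, and structured rows come out wherever a recognizable pattern is found.

```python
import re

# Hypothetical log format, for illustration only: extract structured
# fields (ip, method, path, status) from free-form log lines.
LOG_PATTERN = re.compile(
    r'(?P<ip>\d+\.\d+\.\d+\.\d+) .* "(?P<method>GET|POST) (?P<path>\S+)'
    r'.*" (?P<status>\d{3})'
)

def extract(line):
    """Return a structured record (dict) from one unstructured log line."""
    match = LOG_PATTERN.search(line)
    return match.groupdict() if match else None

lines = [
    '203.0.113.9 - - [01/Jan/2024] "GET /pricing HTTP/1.1" 200',
    'free-form text with no recognizable structure at all',
]
# Only lines matching the pattern yield structured rows; the rest are skipped.
records = [r for r in (extract(line) for line in lines) if r]
```

Real unstructured sources rarely share a single pattern, which is exactly why churning through them at scale is hard; but the principle is the same: impose structure where it can be found, and discard or defer the rest.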
Probably the best example of an algorithm that processes massive quantities of unstructured data is the Google search algorithm. It indexes virtually the entire internet, including websites and blogs in countless unique formats, and pulls out relevant data based on a given search query. The internet is a classic example of unstructured data, and emerging big data capabilities are allowing companies to run similar filters and queries on countless other data sets. As it turns out, it’s the unstructured data (where N=all) that provides the greatest opportunity.
Years ago, computer algorithms were developed to allow computers to play chess. Every possible move was entered into the program, allowing the computer to calculate every possible contingency in the game. However, the world’s chess masters still routinely beat the computer. Much later, as data storage and processing became more accessible, thousands of previous games played by chess masters were added to the algorithm, allowing the computer to supplement its contingency planning with probabilities of how the opponent would respond to any particular move. The new data set of past games was vastly larger (and less structured) than the original set of move possibilities, but it was precisely that data set that allowed computers to become unbeatable.
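The core idea, estimating how an opponent is likely to respond from a corpus of past games, can be sketched very simply. This is not a chess engine; it is an illustrative toy in which positions and replies are plain strings and the "corpus" is a handful of made-up opening moves.

```python
from collections import Counter, defaultdict

# Toy corpus of (move, observed reply) pairs from "past games".
# All data here is invented for illustration.
past_games = [
    ("e4", "e5"), ("e4", "c5"), ("e4", "e5"),
    ("d4", "d5"), ("e4", "e5"),
]

# Count how often each reply followed each move.
replies = defaultdict(Counter)
for move, reply in past_games:
    replies[move][reply] += 1

def reply_probabilities(move):
    """Estimate the probability of each reply to a given move."""
    counts = replies[move]
    total = sum(counts.values())
    return {reply: n / total for reply, n in counts.items()}

# After 1.e4, this corpus suggests ...e5 is the most probable reply.
print(reply_probabilities("e4"))  # {'e5': 0.75, 'c5': 0.25}
```

A real engine conditions on the full board position rather than a single move, but the shift is the same one described above: from enumerating legal contingencies to weighting them by observed human behavior.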
Automatic language translation is another example. Early versions of computer-driven translation consisted of thesaurus-like databases mapping one language to another, but the resulting translations were clumsy and awkward. Google later opened the algorithm up to the entire internet, including formally translated documents (such as Canadian court papers and parliamentary proceedings translated between English and French) and countless webpages translated by people and companies all around the world. It was a massive quantity of messy, unstructured data, but it was precisely this messy data that allowed the computer to incorporate probabilities based on contextual factors and improve the end result.
Messy data is the key. In the past, sampling was used to glean insights from large data sets, but this had the net effect of excluding outliers from consideration. The N=all possibility of big data allows all occurrences to be considered, including the outliers, allowing algorithms to include true probabilities in their calculations. The opportunity of big data lies with unstructured data. It’s with the messy data. Only when you churn through the entire body of available data will the most valuable insights emerge. Patrick’s “Monetizing Big Data” keynote speech concludes with this message and encourages attendees to embrace unstructured data as the key to future profits.
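The sampling-versus-N=all point can be demonstrated in a few lines. In this invented example, one transaction sits far above the rest; a traditional sample has only a small chance of containing it, while a full scan is guaranteed to find it.

```python
import random

# Invented data: 100,000 ordinary transactions plus one extreme outlier.
random.seed(42)
transactions = [random.uniform(10, 100) for _ in range(100_000)]
transactions[12_345] = 5_000_000.0  # the outlier that matters

# Traditional approach: a 100-row sample has only a ~0.1% chance
# of containing the outlier, so sample statistics will usually miss it.
sample = random.sample(transactions, 100)

# N=all approach: scanning every record is guaranteed to surface it.
full_scan_max = max(transactions)
print(full_scan_max)  # 5000000.0
```

Summary statistics from the sample would describe a business with no transaction above a few hundred dollars, which is exactly the kind of signal that sampling minimizes and an N=all scan preserves.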
To what extent is big data a reality in any particular field? Where are the opportunities? What are the technical requirements? And how can you identify and test theories efficiently? Depending on the audience, this might include innovations in data center technology, strategies for database management, identification of key performance indicators or the recruitment of data engineers and scientists. Regardless of the implications, Patrick strives to present an empowering angle for event attendees.
The Future of Big Data
Data storage and processing power continue to accelerate. Much of the current innovation revolves around parallel processing facilitated by Hadoop and MapReduce. As such, the term “big data” will refer to increasingly massive data sets as time goes on. Databases that seem unmanageable today will be commonplace tomorrow, and data storage companies like EMC continue to expand capacity along the way. But at any one point, the organizations that are able to effectively analyze their data will be the first to exploit new opportunities via analytics and algorithms. As consumers, we can expect increasingly intuitive product or service offerings as businesses better understand what factors influence purchase decisions.
The trends leading to big data and the Internet of Things are worth mentioning. Historically, data was accumulated and recorded by employees. As the internet evolved, data could then be entered directly by users; Facebook is a great example, accumulating over 25 terabytes of fresh data every single day. Later still, data started being generated by machines. Buildings are full of monitors, residential houses have smart meters, and satellites record data about our planet 24 hours a day. Each evolution increased data collection by orders of magnitude, making traditional data processing (via relational databases) impossible.
To handle these increasingly large data sets, technologies like Hadoop and MapReduce introduced “parallel processing” as a new option. Historically, we took the data to the processor. Today, we bring the processors to the data: each server in the cluster processes the data it stores locally, making data processing as scalable as the data itself. Google is leading the charge. In time, companies and governments will be able to mine their proprietary data for consumer insights never before revealed, and we can all expect an increasingly intuitive world as a result.
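The MapReduce pattern itself is simple enough to sketch on a single machine. This toy version shows the three phases Hadoop distributes across servers: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step aggregates each group. The documents and counts here are invented for illustration.

```python
from itertools import groupby
from operator import itemgetter

# Toy corpus; in a real cluster each server would map its local shard.
documents = ["big data big insights", "data drives insights"]

def map_phase(doc):
    """Map: emit a (word, 1) pair for every word in a document."""
    for word in doc.split():
        yield (word, 1)

# Shuffle: collect all pairs and sort so equal keys sit together
# (in Hadoop, this grouping happens across the network between nodes).
pairs = sorted(p for doc in documents for p in map_phase(doc))

# Reduce: sum the values within each key group.
counts = {key: sum(v for _, v in group)
          for key, group in groupby(pairs, key=itemgetter(0))}
print(counts)  # {'big': 2, 'data': 2, 'drives': 1, 'insights': 2}
```

Because the map and reduce steps operate independently per document and per key, thousands of servers can run them in parallel, which is what makes the scheme scale with the data.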