Several options are being touted as Hadoop alternatives for data analytics. Here are a few, with their pros and cons.
I’ve been presenting a bunch and writing some blog posts, all on changes in the data management and analytics industry …
I’ve been doing a ton of writing and speaking, just not here. My first post on Medium is on COVID-19 myths and has a ton of links to reliable data sources to help dispel them. I’ve been writing some on the Vertica blog, doing a few projects for O’Reilly, and I’ve been writing my usual web content and technical architecture papers. But the main thing I’ve spent my time on in the last couple of years is public speaking. I was set to travel five weeks out of the last six to speak at conferences. That didn’t exactly happen.
The recent DBTA Data Summit provided a lot to think about. I did a short talk in the “Analytics in Action” track about how data analysts, architects and engineers can turn the endless waves of disruption we keep getting hit with into opportunities to boost the bottom line. There were some very cool talks by other folks as well. For me, the highlights of the conference were Michael Stonebraker’s keynote, and the Data Kitchen folks diving into the principles of DataOps.
At the recent Data Day Texas event, I sat down with Davin Potts, who I have known for many years, and had a long conversation about a wide variety of subjects. Over on the Vertica blog, I broke the conversation into chunks, but I wanted to put it all together in one place so you can see what we chatted about end to end. So, here’s all of it, from machine learning to open source, from Python to Knime, and why the heck DO we move data out of a database to analyze it?
I just started working on the Vertica team. As the “new guy,” I’ve been cramming as much Vertica information into my brain as possible in the shortest time possible. Some things really surprised me, and I bet they’ll surprise you, too.
The theme of Data Day TX 2019 was the highly cooperative landscape between proprietary and open source software, and that a good architect doesn’t choose sides.
Owen O’Malley is one of the folks I chatted with at the last Hadoop Summit in San Jose. I already discovered the first time I met him that he was the big Tolkien geek behind the naming of ORC files, as well as making sure that Not All Hadoop Users Drop ACID. In this conversation, I learned that Hadoop and Spark are both partially his fault, about the amazing performance strides Hive has made with ORC, Tez and LLAP, and that he’s a Trek geek, too.
Hadoop Summit in San Jose this year celebrated Hadoop’s 10th birthday. All of the folks on stage are people who contributed to Hadoop during those 10 years. One of them is Yolanda Davis.
Yolanda and I worked together on a Hortonworks project last year. She was in charge of the user interface design and development team. I caught up with her early in the morning of the last day of Hadoop Summit, and quizzed her on this new project she’s working on that you may have heard of, Apache NiFi. As promised, here is my interview with her on the subject of NiFi and the new HDF (Hortonworks Data Flow) streaming data processing platform, which includes NiFi, Apache Kafka and Apache Storm.