The Apache Kudu team is happy to announce the release of Kudu 1.8.0!
The new release adds several new features and improvements, including the following:
This summer I got the opportunity to intern with the Apache Kudu team at Cloudera. My project was to optimize the Kudu scan path by implementing a technique called index skip scan (a.k.a. scan-to-seek; see section 4.1 in [1]). I wanted to share my experience and the progress we’ve made so far on the approach.
The following article by Brock Noland was reposted from the phData blog with their permission.
Five years ago, enabling Data Science and Advanced Analytics on the Hadoop platform was hard. Organizations required strong Software Engineering capabilities to successfully implement complex Lambda architectures or even simply to implement continuous ingest. Updating or deleting data was simply a nightmare. The General Data Protection Regulation (GDPR) would have been an extreme challenge at that time.
I’ve been working with Hadoop for over seven years now and, fortunately or unfortunately, have run across a lot of structured data use cases. What we at phData have found is that end users are typically comfortable with tabular data and prefer to access their data in a structured manner using tables.
Last week, the OpenTracing community invited me to their monthly Google Hangout meetup to give an informal talk on tracing and instrumentation in Apache Kudu.
While Kudu doesn’t currently support distributed tracing using OpenTracing, it does have quite a lot of other types of instrumentation, metrics, and diagnostics logging. The OpenTracing team was interested to hear about some of the approaches that Kudu has used, and so I gave a brief introduction to topics including: