Virtualizing Hadoop with NAS
A recent question in the Hortonworks Community mentioned someone using Hadoop in a virtualized environment with EMC’s Isilon NAS (Network Attached Storage). While this may be a valid use case for some, anyone looking at Hadoop as more than a small number-crunching cluster will have to reflect on this approach. Here are some ... read more →

Star Schema in Hive and Impala
Someone on the Hortonworks Community asked how to design a star schema with Hive. This is a question I hear in one way or another from various stakeholders in the large enterprises we work with at Big Data Partnership. I usually answer it by taking a step back, and I did that answering the community ... read more →
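A minimal sketch of what such a star schema can look like in HiveQL — the table names, columns, and partitioning scheme below are hypothetical illustrations, not taken from the post:

```sql
-- Hypothetical fact table, partitioned by date so queries can prune partitions
CREATE TABLE fact_sales (
  product_id BIGINT,
  store_id   BIGINT,
  quantity   INT,
  amount     DECIMAL(12,2)
)
PARTITIONED BY (sale_date STRING)
STORED AS ORC;

-- A small dimension table, a candidate for a map-side (broadcast) join
CREATE TABLE dim_product (
  product_id BIGINT,
  name       STRING,
  category   STRING
)
STORED AS ORC;

-- Typical star-schema query: join the fact table to a dimension and aggregate
SELECT p.category, SUM(f.amount) AS revenue
FROM fact_sales f
JOIN dim_product p ON f.product_id = p.product_id
WHERE f.sale_date = '2014-01-01'
GROUP BY p.category;
```

Keeping dimension tables small enough for map-side joins is one common way Hive mitigates the join cost that star schemas imply.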

The four types of Big Data as a Service (BDaaS)
The popularity of Big Data lies in its broad definition: employing high-volume, high-velocity, and high-variety data sets that are difficult to manage and extract value from. Unsurprisingly, most businesses can identify themselves as facing Big Data challenges and opportunities now or in the future. This is therefore not a new issue, yet it has a ... read more →

Full Metal Hadoop as a Service with Altiscale
Hadoop, known to be powerful but challenging to manage, is increasingly becoming available as a service in numerous varieties. Initially, do-it-yourself distributions like Cloudera, MapR, and Hortonworks made up a great part of the market. In recent years, following the success of Amazon Web Services’ Elastic MapReduce (EMR), Hadoop/data services like Qubole have become popular. Last year, quietly, another entrant in the field ... read more →

Lambda Architecture: Achieving Velocity and Volume with Big Data
Big Data architecture paradigms are commonly separated into two (supposedly) diametrical models: the more traditional batch processing and (near) real-time processing. The most popular technologies representing the two are Hadoop with MapReduce and Storm. However, a hybrid solution, the Lambda Architecture, challenges the idea that these approaches have to exclude each other. The Lambda Architecture combines ... read more →

GraphChi: How a Mac Mini outperformed a 1,636 node Hadoop cluster
Last year GraphChi, a spin-off of GraphLab, a distributed graph-based high-performance computation framework, did something remarkable. GraphChi outperformed a 1,636-node Hadoop cluster processing a Twitter graph (a 2010 dataset) with 1.5 billion edges, using a single Mac Mini. The task was triangle counting, and the Hadoop cluster required over 7 hours while ... read more →

ORC: An Intelligent Big Data file format for Hadoop and Hive
RCFile (Record Columnar File), the previous Big Data storage format for Hive on Hadoop, is being challenged by the smart ORC (Optimized Row Columnar) format. My first post on the topic, Getting Started with Big Data with Text and Apache Hive, presented a common scenario to illustrate why Hive file formats are significant to its performance, and ... read more →
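As a brief illustration, switching a Hive table to ORC is essentially a one-line change in the DDL. The table and column names below are hypothetical, and ZLIB is just one common compression choice:

```sql
-- Hypothetical ORC-backed table; ZLIB is ORC's built-in block compression
CREATE TABLE page_views_orc (
  user_id BIGINT,
  url     STRING,
  ts      STRING
)
STORED AS ORC
TBLPROPERTIES ("orc.compress" = "ZLIB");

-- Populate it from an existing (e.g. text-backed) table
INSERT OVERWRITE TABLE page_views_orc
SELECT user_id, url, ts FROM page_views_text;
```

ORC's columnar layout and per-stripe indexes are what let Hive skip data it does not need, which is where the performance advantage over RCFile comes from.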

Faster Big Data on Hadoop with Hive and RCFile
SQL on Hadoop with Hive makes Big Data accessible. Yet performance can lag. RCFile (Record Columnar File) is a great optimisation for Big Data with Hive. The previous two posts in this four-part series explained the reasons for using text on the periphery of an ETL process and optimisations for text. The inside of a Hive ... read more →
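Converting a text-format staging table to RCFile for the inner stages of an ETL pipeline can be sketched with a CREATE TABLE AS SELECT — the table names here are illustrative, not from the post:

```sql
-- CTAS: make an RCFile copy of a text-format staging table
CREATE TABLE logs_rc
STORED AS RCFILE
AS SELECT * FROM logs_text;
```

The columnar layout means subsequent queries that touch only a few columns read far less data than they would from delimited text.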

Optimising Hadoop and Big Data with Text and Hive
Hadoop’s Hive SQL interface reduces costs and gets results fast with Big Data from text. Simple optimisations improve performance significantly. The previous post, Getting Started with Big Data with Text and Apache Hive, described the case for using text format to import and export data for a Hive ETL and reporting process. These ... read more →
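Two such text-era optimisations — compressing Hive's output and partitioning the table so queries touch fewer files — might look like this in a Hive session (the settings are standard Hive/Hadoop options; the table name and schema are illustrative):

```sql
-- Compress the files Hive writes out
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;

-- Partitioned text table: a WHERE clause on dt prunes whole partitions
CREATE TABLE events_text (line STRING)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
```

One trade-off worth noting: gzip-compressed text files are not splittable, so a single large file is processed by a single mapper.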

Getting Started with Big Data with Text and Apache Hive
Big Data is stored and exchanged as text more often than expected. Apache Hadoop’s Hive SQL interface helps to reduce costs and to get results fast. Often, things have to get done fast rather than perfectly. However, with Big Data even a small decision like a file format can have a great impact. What are ... read more →
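One plausible sketch of such a text-based starting point — pointing Hive at delimited files already sitting in HDFS with an external table (the path and schema below are hypothetical):

```sql
-- External table: Hive reads the files in place and does not own the data
CREATE EXTERNAL TABLE raw_events (
  event_time STRING,
  user_id    STRING,
  action     STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/data/raw/events';
```

Because the table is EXTERNAL, dropping it leaves the underlying files untouched, which suits the import/export periphery of an ETL process.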