Introduction to Oracle Big Data Cloud Service – Compute Edition (Part VI) – Hive

I thought I would stop writing about “Oracle Big Data Cloud Service – Compute Edition” after my fifth blog post, but then I noticed that I hadn’t mentioned Apache Hive, another important component of the big data ecosystem. Hive is a data warehouse infrastructure built on top of Hadoop, designed to work with large datasets. Why is it so important? Because it includes support for SQL (SQL:2003 and SQL:2011), and helps users utilize existing SQL skillsets to quickly derive value from big data.

Although recent improvements to the Hive project enable sub-second query retrieval (Hive LLAP), it’s not designed for online transaction processing (OLTP) workloads. Hive is best used for traditional data warehousing tasks.

In this blog post, I’ll demonstrate how we can import data from CSV files into Hive tables, and run SQL queries to analyze the data stored in these tables.
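One way to do this from Python is sketched below. This is only a minimal sketch: it assumes HiveServer2 is reachable from a client node, that the PyHive library is installed, and that the CSV file has already been copied to HDFS; the hostname, table name, columns and HDFS path are made up for illustration.

  from pyhive import hive

  # Connect to HiveServer2 (host, port and user are assumptions for this sketch).
  conn = hive.Connection(host="bigdata-node1", port=10000, username="zeppelin")
  cur = conn.cursor()

  # Create a table whose layout matches the CSV file.
  cur.execute(
      "CREATE TABLE IF NOT EXISTS flights ("
      " flight_date STRING, carrier STRING, origin STRING, dest STRING, dep_delay INT) "
      "ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE"
  )

  # Load the CSV file from HDFS into the table.
  cur.execute("LOAD DATA INPATH '/user/zeppelin/flights.csv' INTO TABLE flights")

  # Analyze the data with ordinary SQL.
  cur.execute(
      "SELECT carrier, AVG(dep_delay) AS avg_delay "
      "FROM flights GROUP BY carrier ORDER BY avg_delay DESC"
  )
  for row in cur.fetchall():
      print(row)

The same statements can of course be typed directly into the Hive shell or a notebook paragraph; the Python wrapper just keeps the example self-contained.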

Introduction to Oracle Big Data Cloud Service – Compute Edition (Part V) – Pig

This is the fifth blog post of my introduction series for Oracle Big Data Cloud Service – Compute Edition. In this blog post, I’ll mention “Apache Pig”. It’s a tool/platform created by “Yahoo!” to analyze large data sets without the complexities of writing a traditional MapReduce program. It’s designed to process any kind of data (structured or unstructured), so it’s a great tool for ETL jobs. Pig comes installed and ready to use with “Oracle Big Data Cloud Service – Compute Edition”. In this blog post, I’ll show how we can use Pig to read, parse and analyze data.

Pig has a high-level SQL-like programming language called Pig Latin. We need to learn the basics of this language to be able to use Pig. Each statement in a Pig script is processed by the Pig interpreter to build a logical plan, which is then used to produce MapReduce jobs. The steps in the logical plan are not “executed” until a DUMP or STORE statement is used.

Pig scripts generally have the following structure (a minimal sketch follows the list):

  1. Data is read by using LOAD statements.
  2. Data is transformed/processed.
  3. The result is dumped (to screen) or stored to a file (or a Hive table).
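The sketch below illustrates this structure. It is only an assumption-laden example: the HDFS path and the column schema are made up, and it drives the standard pig command-line client from Python (assumed to be on the PATH) so the example stays self-contained. Note that nothing runs until the DUMP statement at the end.

  import subprocess
  import tempfile

  # A three-step Pig Latin script: LOAD, transform, DUMP.
  pig_script = """
  movies = LOAD '/user/zeppelin/movies.csv' USING PigStorage(',')
           AS (id:int, name:chararray, year:int, rating:double);
  top_rated = FILTER movies BY rating > 4.0;
  DUMP top_rated;
  """

  # Write the script to a temporary file and hand it to the pig client.
  with tempfile.NamedTemporaryFile("w", suffix=".pig", delete=False) as f:
      f.write(pig_script)
      script_path = f.name

  subprocess.run(["pig", script_path], check=True)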

Introduction to Oracle Big Data Cloud Service – Compute Edition (Part IV) – Zeppelin

This is my fourth blog post about Oracle Big Data Cloud Service – Compute Edition. In my previous blog posts, I showed how we can create a big data cloud service compute edition on Oracle Cloud, which services are installed by default, and how to use the Ambari management service. Now it’s time to write about how we can work with data using Apache Zeppelin. Apache Zeppelin is a web-based notebook that enables interactive data analytics. Zeppelin is not the only way to work with data, but it’s surely very friendly for end users and (as I said before) it’s already installed on our big data cloud service compute edition.

We can create a rule to allow access to TCP port 9995 for accessing Zeppelin directly, or we can use the “big data console” provided by Oracle. I prefer the second one, because the Nginx proxy will let only authenticated users access Zeppelin.

After you reach the console, go to the notebooks page. Click “new note”, enter a name and then click “OK” – this will create a new empty notebook and open it for editing. My new notebook’s name is “MyFirstNote”. As you can see, there are some sample notebooks; you can examine them to learn how to use Java and Spark with Zeppelin.
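If you want something to type into the empty note right away, a first paragraph could look like the sketch below. It assumes the pyspark interpreter is enabled, in which case Zeppelin exposes the SparkContext as sc.

  %pyspark
  # Build a small RDD and run a trivial computation on the cluster.
  numbers = sc.parallelize(list(range(1, 101)))
  print(numbers.sum())   # prints 5050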

Introduction to Oracle Big Data Cloud Service – Compute Edition (Part III) – Ambari

This is my third blog post about Oracle Big Data Cloud Service – Compute Edition. I continue to guide you through the “Big Data Cloud Service – Compute Edition” and its components. In this blog post, I will introduce Ambari – the management service of our Hadoop cluster.

Apache Ambari simplifies provisioning, managing, and monitoring Apache Hadoop clusters. It’s the default management tool of the Hortonworks Data Platform, but it can be used independently of Hortonworks. After you create your big data service, SSH and port 8080 (the port used by Ambari) are blocked. You need to enable the rules to allow access through these ports. In my first blog post about Oracle Big Data Cloud Service – Compute Edition, I showed how to enable these ports.
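Once port 8080 is open, you can also reach Ambari’s REST API. The minimal sketch below uses Python’s requests library; the hostname and the default admin/admin credentials are assumptions.

  import requests

  # List the clusters that this Ambari server manages.
  resp = requests.get(
      "http://bigdata-node1:8080/api/v1/clusters",
      auth=("admin", "admin"),
  )
  resp.raise_for_status()
  for item in resp.json()["items"]:
      print(item["Clusters"]["cluster_name"])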

Introduction to Oracle Big Data Cloud Service – Compute Edition (Part II) – Services

In my previous post, I gave a list of the services installed on an “Oracle Big Data Cloud Service – Compute Edition” instance when you select “full” as the deployment profile. In this post, I’ll explain these services and software.

HDFS: HDFS is a distributed, scalable, and portable file system written in Java for Hadoop. It stores the data, so it is the main component of our cluster. A Hadoop (big data) cluster nominally has a single namenode plus a cluster of datanodes, but there are redundancy options available for the namenode due to its criticality. Both namenode and datanode services can run on the same server (although it’s not recommended for a production environment). In our small cluster, we have 1 active namenode, 1 standby namenode and 3 datanodes – distributed across 3 servers.
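Since client requests go through the active namenode, a quick way to see it at work is the WebHDFS REST API it exposes. A minimal sketch follows; the hostname, the port (50070 is the usual HDP default) and the path are assumptions.

  import requests

  # Ask the namenode for the contents of an HDFS directory.
  resp = requests.get(
      "http://bigdata-node1:50070/webhdfs/v1/user/zeppelin",
      params={"op": "LISTSTATUS"},
  )
  resp.raise_for_status()
  for entry in resp.json()["FileStatuses"]["FileStatus"]:
      print(entry["type"], entry["pathSuffix"])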

YARN + MapReduce (v2): MapReduce is a programming model popularized by Google to process large datasets in a parallel and scalable way. YARN is a framework for cluster resource management and job scheduling. YARN contains a Resource Manager and Node Managers (for redundancy, we can create a standby Resource Manager). The Resource Manager tracks how many live nodes and resources are available on the cluster and coordinates which applications submitted by users should get these resources. Each datanode should have a NodeManager to run MapReduce jobs.
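To see what the Resource Manager knows about the cluster, you can query its REST API. A minimal sketch is below; the hostname and the default port 8088 are assumptions.

  import requests

  # List the NodeManagers the Resource Manager is currently tracking.
  resp = requests.get("http://bigdata-node1:8088/ws/v1/cluster/nodes")
  resp.raise_for_status()
  for node in resp.json()["nodes"]["node"]:
      print(node["nodeHostName"], node["state"])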