hadoopi.wordpress.com

Hadoopi | A french hadooper in London

Dear reader, Welcome to this blog around hadoop and Big-data. The aim of this blog is not to provide you with official documentation (you will find plenty of official websites for that purpose), but rather to share with other developers my knowledge around Hadoop. Happy hadooping! Antoine

http://hadoopi.wordpress.com/

WEBSITE DETAILS
SEO
PAGES
SIMILAR SITES

TRAFFIC RANK FOR HADOOPI.WORDPRESS.COM

TODAY'S RATING

>1,000,000

TRAFFIC RANK - AVERAGE PER MONTH

BEST MONTH

December

AVERAGE PER DAY OF THE WEEK

HIGHEST TRAFFIC ON

Sunday

TRAFFIC BY CITY

CUSTOMER REVIEWS

Average Rating: 3.9 out of 5 with 16 reviews

5 star: 9
4 star: 1
3 star: 4
2 star: 0
1 star: 2



WEBSITE PREVIEW


LOAD TIME

0.4 seconds



CONTENT

SCORE

6.2

PAGE TITLE
Hadoopi | A french hadooper in London | hadoopi.wordpress.com Reviews
<META>
DESCRIPTION
Dear reader, Welcome to this blog around hadoop and Big-data. The aim of this blog is not to provide you with official documentation (you will find plenty of official websites for that purpose), but rather to share with other developers my knowledge around Hadoop. Happy hadooping ! Antoine
<META>
KEYWORDS
1 hadoopi
2 menu
3 skip to content
4 about me
5 search for
6 dear reader
7 happy hadooping
8 antoine
9 like this
10 like
CONTENT
Page content here
KEYWORDS ON
PAGE
hadoopi,menu,skip to content,about me,search for,dear reader,happy hadooping,antoine,like this,like,loading,tags,development,flume,hacking,hadoop,hive,inputformat,mahout,mapreduce,performance,scala,spark,sparksql,tableau,test,recent posts,categories
SERVER
nginx
CONTENT-TYPE
utf-8
GOOGLE PREVIEW

Hadoopi | A french hadooper in London | hadoopi.wordpress.com Reviews

https://hadoopi.wordpress.com

Dear reader, Welcome to this blog around hadoop and Big-data. The aim of this blog is not to provide you with official documentation (you will find plenty of official websites for that purpose), but rather to share with other developers my knowledge around Hadoop. Happy hadooping ! Antoine

INTERNAL PAGES

hadoopi.wordpress.com
1

Spark: Connect Tableau Desktop to SparkSQL | HadooPI

https://hadoopi.wordpress.com/2014/12/31/spark-connect-tableau-desktop-to-sparksql

A french hadooper in London. Spark: Connect Tableau Desktop to SparkSQL. Last (but not least) post of 2014, and a new hacking challenge. Based on the work I’ve done on SQL Developer (https://hadoopi.wordpress.com/2014/10/25/use-spark-sql-on-sql-developer/), I was wondering how to connect Tableau Desktop to my SparkSQL cluster. Create a Hive table that contains around 60,000 documented reports of unidentified flying objects. My Hive table is as follows: col name, data type, comment ...

2

Spark / Hadoop: Processing GDELT data using Hadoop InputFormat and SparkSQL | HadooPI

https://hadoopi.wordpress.com/2014/09/24/spark-hadoop-processing-gdelt-data-using-hadoop-inputformat-and-sparksql

A french hadooper in London. Spark / Hadoop: Processing GDELT data using Hadoop InputFormat and SparkSQL. A quick overview of GDELT. GDELT Project monitors the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages and identifies the people, locations, organisations, counts, themes, sources, and events driving our global society every second of every day, creating a free open platform for computing on the entire world. Read data from Spark shell. GDELT Data...
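
The pipeline the excerpt describes (a Hadoop InputFormat splitting raw GDELT records into fields, then SparkSQL aggregating them) can be sketched in miniature without either framework. The three-field layout below (date, country, event code) is a hypothetical simplification of the real GDELT schema, and the function names are illustrative:

```python
# Minimal sketch (not the post's code): parse tab-delimited GDELT-style event
# rows and aggregate event counts by country. The (date, country, event_code)
# layout is a hypothetical, simplified subset of the real GDELT schema.
from collections import Counter

def parse_event(line):
    """Split one tab-delimited record into a (date, country, event_code) tuple,
    roughly what a custom InputFormat hands to each record reader call."""
    fields = line.rstrip("\n").split("\t")
    return fields[0], fields[1], fields[2]

def count_by_country(lines):
    """Aggregate event counts per country, like a SparkSQL GROUP BY country."""
    counts = Counter()
    for line in lines:
        _, country, _ = parse_event(line)
        counts[country] += 1
    return dict(counts)

sample = [
    "20140924\tFRA\t010",
    "20140924\tGBR\t020",
    "20140925\tFRA\t010",
]
print(count_by_country(sample))  # {'FRA': 2, 'GBR': 1}
```

At cluster scale the InputFormat does the splitting/parsing per HDFS block, and SparkSQL expresses the aggregation declaratively over the resulting records.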

3

2014 was great, 2015 will be SPARKling! | HadooPI

https://hadoopi.wordpress.com/2014/12/31/2014-in-review

A french hadooper in London. 2014 was great, 2015 will be SPARKling! Only 15 posts written so far and more than 70K viewers from 119 countries! A great year is almost ending, let’s celebrate 2015 with giant SPARKling fireworks. The WordPress.com stats helper monkeys prepared a 2014 annual report for my Hadoopi blog. Thanks to all of you, and see you guys next year for more fun around Hadoop and big data! Here’s an excerpt. Click here to see the complete report. Spark: Use Spark-SQL on SQL Developer.

4

Spark: Use Spark-SQL on SQL Developer | HadooPI

https://hadoopi.wordpress.com/2014/10/25/use-spark-sql-on-sql-developer

A french hadooper in London. Spark: Use Spark-SQL on SQL Developer. I’m describing here how I set up SQL Developer to connect to and query my Spark cluster. I made it work on my local environment below: Ubuntu Precise 64 bits (1 master, 2 slaves). Hadoop Hortonworks 2.4.0.2.1.5.0-695. Hive version 0.13.0.2.1.5.0-695, metastore hosted on a MySQL database. Spark 1.1.0 prebuilt for Hadoop 2.4. SQL Developer 4.0.3.16. Note that I’ve successfully tested the same setup on a 20-node cluster on AWS (EMR). Website. Note ...


TOTAL PAGES IN THIS WEBSITE

4

LINKS TO THIS WEBSITE

learnhadoopwithme.wordpress.com

Hadoop ≅ HDFS + MapReduce (Part – II) | Abode for Hadoop Beginners

https://learnhadoopwithme.wordpress.com/2013/08/13/hadoop-≅-hdfs-mapreduce-part-ii

Abode for Hadoop Beginners. Hadoop ≅ HDFS + MapReduce (Part II). August 13, 2013. In this post, we will discuss the following with regard to the MapReduce framework: motivation for a parallel processing framework; how MapReduce solves a problem; the shuffle and sort phase. Motivation for a parallel processing framework. MapReduce is one such programming model, designed for processing large volumes of data in parallel by dividing the work into a set of independent tasks. Each of these tasks is then run on an individual node i...
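
The shuffle-and-sort phase mentioned above sits between map and reduce: every intermediate (key, value) pair emitted by the mappers is sorted and grouped by key, so that each reducer receives one key together with all of its values. A minimal, framework-free sketch of that grouping step (function and variable names are illustrative, not Hadoop API):

```python
# Illustrative sketch of MapReduce's shuffle-and-sort step: the flat stream of
# (key, value) pairs from all mappers is sorted by key, then grouped so each
# reducer sees one key with the full list of its values.
from itertools import groupby
from operator import itemgetter

def shuffle_and_sort(mapper_output):
    """Group a flat list of (key, value) pairs into sorted (key, [values]) pairs."""
    ordered = sorted(mapper_output, key=itemgetter(0))
    return [(key, [value for _, value in group])
            for key, group in groupby(ordered, key=itemgetter(0))]

pairs = [("hadoop", 1), ("spark", 1), ("hadoop", 1)]
print(shuffle_and_sort(pairs))  # [('hadoop', [1, 1]), ('spark', [1])]
```

Each `(key, [values])` pair is exactly what one call of a reduce function would receive.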

learnhadoopwithme.wordpress.com

Unit Test MapReduce using MRUnit | Abode for Hadoop Beginners

https://learnhadoopwithme.wordpress.com/2013/09/03/unit-test-mapreduce-using-mrunit

Abode for Hadoop Beginners. Unit Test MapReduce using MRUnit. September 3, 2013. In order to make sure that your code is correct, you need to unit test it first. And just as you unit test your Java code using the JUnit testing framework, you can use MRUnit to test MapReduce jobs. I will now discuss the template that can be used for writing any unit test for a MapReduce job. To unit test MapReduce jobs: create a new test class in the existing project; add the mrunit jar file to the build path.
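
The MRUnit pattern the excerpt describes (drive a mapper in isolation with a known input record and assert on the exact pairs it emits) translates to any language. Below is that test shape sketched in plain Python rather than Java/MRUnit; `word_count_mapper` is an illustrative stand-in for the mapper under test:

```python
# MRUnit-style test pattern sketched in plain Python (not MRUnit itself):
# feed the map function one known input record and assert on the exact
# (key, value) pairs it emits -- withInput / withOutput / runTest in MRUnit terms.
def word_count_mapper(offset, line):
    """Illustrative mapper under test: emit (word, 1) for every word in the line."""
    return [(word, 1) for word in line.split()]

def test_mapper_emits_one_pair_per_word():
    output = word_count_mapper(0, "hadoop spark hadoop")
    assert output == [("hadoop", 1), ("spark", 1), ("hadoop", 1)]

test_mapper_emits_one_pair_per_word()
print("mapper test passed")
```

The point, as with MRUnit, is that the map logic runs as an ordinary function call, with no cluster or HDFS involved.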

learnhadoopwithme.wordpress.com

Hello World of MapReduce – Word Count | Abode for Hadoop Beginners

https://learnhadoopwithme.wordpress.com/2013/08/20/hello-world-of-mapreduce-word-count

Abode for Hadoop Beginners. Hello World of MapReduce – Word Count. August 20, 2013. It’s finally time to attempt our first MapReduce program. As with any programming language, the first program you try is “Hello World”. We write “Hello World” because it is the easiest, and we use it to test whether everything is perfectly installed and configured. The easiest problem in MapReduce is the word count problem, and it is therefore called MapReduce’s “Hello World” by many people. So let us dive into it. Create a new project.

learnhadoopwithme.wordpress.com

Abode for Hadoop Beginners | Page 2

https://learnhadoopwithme.wordpress.com/page/2

Abode for Hadoop Beginners. Install Hadoop in Standalone mode. August 18, 2013. We discussed in the last post the different modes in which Hadoop can be run. Depending upon what kind of user you are and what you want to do with Hadoop, you can decide the mode in which you run Hadoop. You will want to run Hadoop in Standalone mode when you want to test and debug Hadoop programs with small input files that are stored locally (not in HDFS). Steps involved in the installation: download and unpack Hadoop.

learnhadoopwithme.wordpress.com

Basic HDFS commands | Abode for Hadoop Beginners

https://learnhadoopwithme.wordpress.com/2013/08/19/basic-hdfs-commands

Abode for Hadoop Beginners. August 19, 2013. Before we move on to developing our first MapReduce program, it is essential to know a few basic HDFS commands to play with. First open Cloudera’s virtual image from the virtual box. Open the terminal and type the following command:. As you can see, it gives you the list of hadoop commands and a short description. There is a subsystem associated with HDFS called FsShell. To invoke the shell, type the following command:. List the contents of a directory. Cloudera@l...

learnhadoopwithme.wordpress.com

Install Cloudera’s Hadoop Demo VM | Abode for Hadoop Beginners

https://learnhadoopwithme.wordpress.com/2013/08/19/install-clouderas-hadoop-demo-vm

Abode for Hadoop Beginners. Install Cloudera’s Hadoop Demo VM. August 19, 2013. Installing Cloudera’s Hadoop Demo VM would be the best and easiest way to learn and start working with Hadoop. The virtual machine is installed in Pseudo Distributed mode. It is best to test your code first in this mode before you run it on the actual cluster. The steps to install Cloudera’s Hadoop Demo VM using Virtual Box are as follows: choose the version as Virtual Box and click on Download. In this step you need to select...

learnhadoopwithme.wordpress.com

Run MapReduce Job in Standalone Mode | Abode for Hadoop Beginners

https://learnhadoopwithme.wordpress.com/2013/08/30/run-mapreduce-job-in-standalone-mode

Abode for Hadoop Beginners. Run MapReduce Job in Standalone Mode. August 30, 2013. In the last post we saw how to run our first MapReduce job. If you have gone through the previous post, you will remember that I mentioned the steps that you must conform to before running your code on an actual cluster. You must first run your MapReduce code in Standalone Mode. It gives you the chance to put break points in your code and debug it extensively with a small input file stored locally. Select Java Application...

learnhadoopwithme.wordpress.com

Install Hadoop in Pseudo Distributed mode | Abode for Hadoop Beginners

https://learnhadoopwithme.wordpress.com/2013/08/19/install-hadoop-in-pseudo-distributed-mode

Abode for Hadoop Beginners. Install Hadoop in Pseudo Distributed mode. August 19, 2013. Installing Hadoop in pseudo distributed mode lets you mimic a multi-server cluster on a single machine. Unlike standalone mode, this mode has all the daemons running. Also, the data in pseudo distributed mode is stored in HDFS rather than on the local hard disk. If you have followed the last post, the first three steps of this tutorial are the same. Download and unpack Hadoop. Open System Preferences, Users and Groups. By doi...
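
Pseudo-distributed mode is switched on through Hadoop's site configuration files. A minimal sketch of the two key settings (the property names are standard Hadoop; the hostname and port are illustrative): point the default filesystem at a local HDFS daemon in core-site.xml, and drop block replication to 1 in hdfs-site.xml, since a single machine holds only one DataNode.

```xml
<!-- core-site.xml: route filesystem operations to a local HDFS daemon
     (hostname/port illustrative) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml: one machine can only hold one replica of each block -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```

With these in place the daemons run as separate processes and data lands in HDFS, mimicking a real cluster's behavior on one box.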

learnhadoopwithme.wordpress.com

Implementing Custom Writables in Hadoop – BigramCount | Abode for Hadoop Beginners

https://learnhadoopwithme.wordpress.com/2013/11/21/implementing-custom-writables-in-hadoop-bigramcount

Abode for Hadoop Beginners. Implementing Custom Writables in Hadoop – BigramCount. November 21, 2013. Apologies for the delay in coming up with this post; I was caught up with my studies. Anyway, today we are going to see how to implement a custom Writable in Hadoop. But before we get into that, let us understand some basics and get the motivation behind implementing a custom Writable. We will discuss the following in this post: What is a Writable in Hadoop? Why does Hadoop use Writable(s)? In the word...
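
In the Java job the excerpt builds up to, the (first word, second word) pair needs a custom Writable so it can serve as a map output key. The underlying bigram logic can be sketched in Python, with a plain tuple standing in for the custom key type (names here are illustrative, not from the post):

```python
# The core of BigramCount, sketched in Python: in Java/Hadoop the
# (first, second) word pair would be a custom Writable key; here a plain
# tuple stands in for it, since Python tuples are already hashable keys.
from collections import Counter

def bigrams(line):
    """Return consecutive word pairs from one line of text."""
    words = line.split()
    return list(zip(words, words[1:]))

def bigram_count(lines):
    """Count every bigram across all input lines."""
    counts = Counter()
    for line in lines:
        counts.update(bigrams(line))
    return counts

text = ["the quick brown fox", "the quick dog"]
print(bigram_count(text).most_common(1))  # [(('the', 'quick'), 2)]
```

What the custom Writable buys you in Java is exactly this: a compound value that can be compared, hashed, and serialized so the framework can shuffle and group on it.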


TOTAL LINKS TO THIS WEBSITE

15

SOCIAL ENGAGEMENT



OTHER SITES

hadoophappyhour.com

hadoophappyhour.com - Registered at Namecheap.com

This domain is registered at Namecheap. This domain was recently registered at Namecheap. Please check back later! The Sponsored Listings displayed above are served automatically by a third party. Neither Parkingcrew nor the domain owner maintain any relationship with the advertisers.

hadoophbase.com

24x7 Hadoop Support - info@hadoophbase.com

24×7 Hadoop Support – info@hadoophbase.com. December 23, 2014. Download and extract the content of http://extjs.com/deploy/ext-2.2.zip into /var/lib/oozie. If you are using Cloudera Manager, enable the web console by going into the Oozie configuration and searching for webconsole in the search bar. Restart the Oozie server, then browse the Oozie server at port 11000, like this: http://127.0.0.1:11000/oozie. How to DOWNLOAD all Files in a remote http directory using WGET.

hadoophelp.blogspot.com

Hadoop Help

Monday, April 18, 2011. http://www.umiacs.umd.edu/~jimmylin/publications/Lin_etal_MAPREDUCE2011.pdf. Friday, April 8, 2011. I have been using hadoop for a long time. The following are my desired features for the next Hadoop build. 2. Map reduce cron job. 3. Easy interface for multiple output streams from the reducer (it is possible with the current version but a little difficult). 4. Built-in encryption while passing data from mapper to reducer. Hadoop, hive, pig, mapreduce cronjob. Hadoop, hiv...

hadoophive.com

hadoophive.com

hadoophosting.com

Hadoop Hosting - Hadoop Hosting Open Source Tech

Core Java 100% Reference. July 23, 2015. org.springframework.stereotype annotation uses? July 23, 2015. February 14, 2015. How to retrieve data from a Collection variable? Here is an example of inserting data into a List collection variable and retrieving data from it. Continue reading →. February 12, 2015. IS-A Relation, TypeCasting. class B extends A {. System.out.println(“Hello”);. return new B();. B b = new B().test();. November 11, 2014.

hadoopi.wordpress.com

Hadoopi | A french hadooper in London

A french hadooper in London. Welcome to this blog around hadoop and Big-data. The aim of this blog is not to provide you with official documentation (you will find plenty of official websites for that purpose), but rather to share with other developers my knowledge around Hadoop. Spark: Connect Tableau Desktop to SparkSQL. 2014 was great, 2015 will be SPARKling! Spark: Use Spark-SQL on SQL Developer. Spark / Hadoop: Processing GDELT data using Hadoop InputFormat and SparkSQL. Top Posts and Pages.

hadoopifi.com

hadoopifi.com

hadoopified.wordpress.com

Hadoopified | Almost everything Hadoop!

Parquet – columnar storage for Hadoop. 3 Comments. Parquet is a columnar storage format for Hadoop that uses the concept of repetition/definition levels borrowed from. It provides efficient encoding and compression schemes, the efficiency being improved by applying them on a per-column basis (compression is better as column values are all of the same type; encoding is better as values within a column are often the same and repeated). Writing a Parquet file. PigSchemaStri...
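
The excerpt's core claim (columns compress and encode better because each holds values of a single, often-repeated type) can be illustrated with a toy transposition plus run-length encoding. This is only a sketch of the idea with made-up data, not Parquet's actual format:

```python
# Toy sketch of the columnar idea behind Parquet (illustrative, not Parquet's
# real encoding): transpose row-oriented records into columns, then
# run-length-encode a column -- runs of equal values collapse to (value, count).
from itertools import groupby

rows = [("GB", 2013), ("GB", 2014), ("GB", 2014), ("FR", 2014)]
country_col, year_col = zip(*rows)  # column-oriented layout of the same records

def run_length_encode(column):
    """Collapse runs of equal adjacent values, the simplest columnar encoding."""
    return [(value, len(list(group))) for value, group in groupby(column)]

print(run_length_encode(country_col))  # [('GB', 3), ('FR', 1)]
print(run_length_encode(year_col))    # [(2013, 1), (2014, 3)]
```

Interleaved in row order, those same values would produce no runs at all, which is why the per-column layout is what makes such encodings pay off.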

hadoopilluminated.com

Hadoop illuminated -- Open Source Hadoop Book

Open Source Hadoop Book. ‘Hadoop illuminated’ is the open source book about Apache Hadoop™. It aims to make Hadoop knowledge accessible to a wider audience, not just to the highly technical. The book is a ‘living book’: we will keep updating it to cover the fast-evolving Hadoop ecosystem. Check out these chapters: Hadoop use cases. Publicly available Big Data sets. The book is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Same as MIT Open Courseware. I must...

hadoopinc.com

Cloudera | Home

We are updating the site for Strata Conference Hadoop World 2012. For details on the conference, go to:. Hadoop and Big Data. For Your Use Case. Palo Alto, CA 94306. Hadoop and the Hadoop elephant logo are trademarks of the Apache Software Foundation.

hadoopinc.net

Cloudera | Home

We are updating the site for Strata Conference Hadoop World 2012. For details on the conference, go to:. Hadoop and Big Data. For Your Use Case. Palo Alto, CA 94306. Hadoop and the Hadoop elephant logo are trademarks of the Apache Software Foundation.