Hadoop MapReduce startup task

In Progress Posted 6 years ago Paid on delivery

Given a text file, compute the average length of words starting with each letter. That is, for every letter, compute the total length of all words that start with that letter divided by the number of words that start with that letter.

• Ignore the letter case, i.e., consider all words as lower case.

• Ignore terms starting with non-alphabetical characters, i.e., only consider terms starting with “a” to “z”.

• The length of the term is obtained by the length() function of String. E.g., the length of “text234sdf” is 10.

• Use the tokenizer given in Lab 3 to split the documents into terms.

StringTokenizer itr = new StringTokenizer([login to view URL](),
        " *$&#/\t\n\f\"'\\,.:;?![](){}<>~-_");

• You do not need to configure the numbers of mappers and reducers. Default values will be used.
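The requirements above can be sketched locally before wiring them into Hadoop. The following is a minimal plain-Java sketch (no Hadoop dependency) of the core logic: in the real job, the map step would emit (first-letter, word-length) pairs and the reduce step would sum lengths and counts per letter; here both steps are folded into one method so the behavior is easy to verify. The class and method names are illustrative, not from the assignment, and the first argument of the Lab 3 tokenizer (hidden behind the login link above) is replaced by the input text.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class AvgWordLength {
    // Delimiter set taken from the Lab 3 tokenizer quoted above.
    static final String DELIMS = " *$&#/\t\n\f\"'\\,.:;?![](){}<>~-_";

    // Returns a map from starting letter ('a'..'z') to average term length.
    public static Map<Character, Double> averageLengths(String text) {
        Map<Character, long[]> acc = new HashMap<>(); // letter -> {totalLength, count}
        // Lowercase first, per the "ignore letter case" rule.
        StringTokenizer itr = new StringTokenizer(text.toLowerCase(), DELIMS);
        while (itr.hasMoreTokens()) {
            String term = itr.nextToken();
            char first = term.charAt(0);
            if (first < 'a' || first > 'z') continue; // skip non-alphabetical starts
            long[] a = acc.computeIfAbsent(first, k -> new long[2]);
            a[0] += term.length(); // total length, as in String.length()
            a[1] += 1;             // term count
        }
        Map<Character, Double> avg = new HashMap<>();
        for (Map.Entry<Character, long[]> e : acc.entrySet()) {
            avg.put(e.getKey(), (double) e.getValue()[0] / e.getValue()[1]);
        }
        return avg;
    }

    public static void main(String[] args) {
        System.out.println(averageLengths("The quick brown fox, the lazy dog."));
    }
}
```

In the Hadoop version, the accumulation map disappears: the mapper emits each `(letter, length)` pair and the framework's shuffle groups them by letter for the reducer, so the default mapper/reducer counts are sufficient.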

Hadoop Map Reduce

Project ID: #13584402

About the project

3 proposals Remote project Active 6 years ago

3 freelancers are bidding on average $233 for this job

eaglepoint

Hi, I am a big data engineer with 2 years of industry experience. I have practical experience with Hadoop and the various applications in its ecosystem. I can do this job for you. Looking forward to your response.

$200 AUD in 3 days
(8 Reviews)
4.7
farrukhcheema23

Hi, I am a Big Data Consultant with over 4 years of experience. I have read your request and am interested in working for you, as I am an expert in Hadoop, MapReduce, Spark, and Scala and can write a MapReduce program to do this task.

$250 AUD in 3 days
(2 Reviews)
2.6
souvikghosh

Hi there, I have been working in the Hadoop framework for over 3 years now. Hadoop has changed a lot from what it was in the initial years, from a mere low-cost solution for large dataset storage via HDFS and analysis…

$250 AUD in 2 days
(0 Reviews)
0.0