Hadoop MapReduce startup task
$30-250 AUD
Paid on delivery
Given a text file, compute the average length of words starting with each letter. That is, for each letter, divide the total length of all words that start with that letter by the number of words that start with that letter.
• Ignore the letter case, i.e., consider all words as lower case.
• Ignore terms starting with non-alphabetical characters, i.e., only consider terms starting with “a” to “z”.
• The length of the term is obtained by the length() function of String. E.g., the length of “text234sdf” is 10.
• Use the tokenizer given in Lab 3 to split the documents into terms.
StringTokenizer itr = new StringTokenizer([login to view URL](),
" *$&#/\t\n\f\"'\\,.:;?![](){}<>~-_");
• You do not need to configure the numbers of mappers and reducers. Default values will be used.
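As a sketch of the required computation, the per-letter averaging can be expressed in plain Java using the same tokenizer delimiters as the Lab 3 snippet. This is not the Hadoop job itself: the class name `AvgWordLength` and the in-memory arrays are illustrative only. In the actual MapReduce program the mapper would emit (first-letter, word-length) pairs and the reducer would sum the lengths and counts per letter before dividing.

```java
import java.util.Map;
import java.util.StringTokenizer;
import java.util.TreeMap;

public class AvgWordLength {
    // Delimiter set copied from the Lab 3 tokenizer shown above.
    static final String DELIMS = " *$&#/\t\n\f\"'\\,.:;?![](){}<>~-_";

    // Returns letter -> average length of words starting with that letter.
    static Map<Character, Double> averages(String text) {
        long[] totalLen = new long[26];
        long[] count = new long[26];
        // Lower-case the input so letter case is ignored, per the spec.
        StringTokenizer itr = new StringTokenizer(text.toLowerCase(), DELIMS);
        while (itr.hasMoreTokens()) {
            String term = itr.nextToken();
            char first = term.charAt(0);
            if (first >= 'a' && first <= 'z') { // skip non-alphabetical starts
                totalLen[first - 'a'] += term.length();
                count[first - 'a']++;
            }
        }
        Map<Character, Double> result = new TreeMap<>();
        for (int i = 0; i < 26; i++) {
            if (count[i] > 0) {
                result.put((char) ('a' + i), (double) totalLen[i] / count[i]);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // "apple" (5) and "ant" (3) average to 4.0; "bee" gives 3.0;
        // "42nd" starts with a digit and is ignored.
        System.out.println(averages("Apple ant Bee 42nd")); // {a=4.0, b=3.0}
    }
}
```

In the Hadoop version, the reducer must receive both the sum and the count, so a common design is to emit the raw length from the mapper and let the reducer accumulate both quantities for each letter key.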
Project ID: #13584402
About the project
3 freelancers are bidding on average $233 for this job