How to find min, max and mean of wordcount from text file in hadoop mapreduce? #6

@sathishpakalapati

Description

public class MaxMinReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    int max_sum = 0;
    int min_sum = Integer.MAX_VALUE;
    int total_sum = 0;   // sum of all word occurrences, for the mean
    int count = 0;       // number of distinct words
    Text max_occured_key = new Text();
    Text min_occured_key = new Text();
    Text mean_key = new Text("Mean : ");
    Text count_key = new Text("Count : ");

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;

        for (IntWritable value : values) {
            sum += value.get();
        }
        total_sum += sum;
        count++;

        if (sum < min_sum) {
            min_sum = sum;
            min_occured_key.set(key);
        }

        if (sum > max_sum) {
            max_sum = sum;
            max_occured_key.set(key);
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        int mean = (count == 0) ? 0 : total_sum / count;
        context.write(max_occured_key, new IntWritable(max_sum));
        context.write(min_occured_key, new IntWritable(min_sum));
        context.write(mean_key, new IntWritable(mean));
        context.write(count_key, new IntWritable(count));
    }
}

Here I am writing the minimum, maximum, and mean of the word counts.

My input file :

high low medium high low high low large small medium

Actual output is :

high - 3------maximum

low - 3--------maximum

large - 1------minimum

small - 1------minimum

But I am not getting the above output. Can anyone please help me?
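One likely cause of the gap between the expected and actual output: the reducer tracks a single `max_occured_key` and `min_occured_key`, and `sum > max_sum` only replaces the key on a strictly greater count. So when `high` and `low` are tied at 3, only the first one seen is kept. Handling ties means collecting all keys that share the extreme count. A minimal plain-Java sketch of that logic outside Hadoop (class and method names here are my own, not part of any Hadoop API):

```java
import java.util.*;

public class MinMaxMean {
    // Given word -> count, return every word tied for the maximum count.
    static List<String> keysWithMax(Map<String, Integer> counts) {
        int max = Collections.max(counts.values());
        List<String> keys = new ArrayList<>();
        for (Map.Entry<String, Integer> e : counts.entrySet())
            if (e.getValue() == max) keys.add(e.getKey());
        return keys;
    }

    // Same idea for the minimum count.
    static List<String> keysWithMin(Map<String, Integer> counts) {
        int min = Collections.min(counts.values());
        List<String> keys = new ArrayList<>();
        for (Map.Entry<String, Integer> e : counts.entrySet())
            if (e.getValue() == min) keys.add(e.getKey());
        return keys;
    }

    // Mean count per distinct word: total occurrences / number of distinct words.
    static double mean(Map<String, Integer> counts) {
        int total = 0;
        for (int c : counts.values()) total += c;
        return (double) total / counts.size();
    }

    public static void main(String[] args) {
        // Word counts from the sample input line.
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("high", 3);
        counts.put("low", 3);
        counts.put("medium", 2);
        counts.put("large", 1);
        counts.put("small", 1);

        System.out.println("max: " + keysWithMax(counts));   // [high, low]
        System.out.println("min: " + keysWithMin(counts));   // [large, small]
        System.out.println("mean: " + mean(counts));         // 2.0
    }
}
```

To get the same behavior inside the reducer, you would keep a `List<Text>` of tied keys instead of one `Text` (clearing it when a strictly better count appears, appending on an equal count) and write out each entry in `cleanup`. Note this only works with a single reduce task; with multiple reducers each one sees only its own partition of keys.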
