TF-IDF in Hadoop Part 1: Word Frequency in Doc


My interest in parallel computing dates back to my undergraduate years, just one or two years after Google published its paper on efficient large-scale data processing. From that time on, I kept wondering how they managed to index “the web”. I eventually started learning the Hadoop API and HDFS, and exploring the implementation of the TF-IDF algorithm as explained by the Cloudera training. I started this implementation after I had implemented the InvertedIndex example using both the Hadoop 0.18 and the 0.20.1 APIs. My experience is documented over the parts of this tutorial, starting with this one.

This code uses the Hadoop 0.20.1 API.

Seven years passed, and while writing my thesis project I started dealing with the same questions about large datasets: how do you process them at the database level, and how do you do it efficiently with the computational resources you have? Interestingly enough, my first contact with MapReduce processing was through mongoDB’s MapReduce API, used to access data in parallel across the different shards of a database cluster, where the data is distributed among shards depending on different properties of the documents. And of course, one of the tools to process that distributed data is a MapReduce API. I learned how to use it thanks to Cloudera’s Basic Training on MapReduce and HDFS. This first documentation was produced after studying and completing the first exercises of Cloudera’s InvertedIndex example using Hadoop, for which I downloaded the VMware Player image and played with the initial examples, driven by the PDF explaining the exercises. Although the source code works without a problem, it uses the Hadoop 0.18 API, and if you are bothered by the deprecation warnings in Eclipse, I have updated and documented the necessary changes to remove them and use the refactored version of InvertedIndex with the Hadoop 0.20.1 API.

I finally found the Cloudera basic introduction training on MapReduce and Hadoop… and let me tell you, it is the nicest introduction to MapReduce I’ve ever seen :) The slides and documentation are very well structured and easy to follow (especially if you come from the academic world)… They actually worked closely with Google and the University of Washington to get to that level… I was very pleased to read and understand the concepts… My only need at that time was to use that knowledge with mongoDB’s MapReduce engine… I wrote a simple application and it proved to be interesting…

So, I’ve been studying the Cloudera basic training in Hadoop, and that was the only way I could really learn MapReduce! If you have a good background in Java 5/6, Linux, operating systems, the shell, etc., you can definitely move on… If you don’t have experience with Hadoop, I definitely suggest following the basic training from sessions 1–5, including the InvertedIndex exercise. You will find the exercises describing the TF-IDF algorithm in one of the PDFs.

The first implementation I did with Hadoop was the indexing of words over the complete Shakespeare collection. However, I was intrigued, could not resist, and downloaded more e-books from the Gutenberg project (all the Da Vinci books and The Outline of Science, Vol. 1). The input directory already includes the Shakespeare collection, but I had to put the new files into the filesystem myself. You can add the downloaded files to the Hadoop file system by using the “copyFromLocal” command:

training@training-vm:~/git/exercises/shakespeare$ hadoop fs -copyFromLocal the-outline-of-science-vol1.txt input
training@training-vm:~/git/exercises/shakespeare$ hadoop fs -copyFromLocal leornardo-davinci-all.txt input

You can verify that the files were added by listing the contents of the “input” directory.

training@training-vm:~/git/exercises/shakespeare$ hadoop fs -ls input
Found 3 items
-rw-r--r--   1 training supergroup    5342761 2009-12-30 11:57 /user/training/input/all-shakespeare
-rw-r--r--   1 training supergroup    1427769 2010-01-04 17:42 /user/training/input/leornardo-davinci-all.txt
-rw-r--r--   1 training supergroup     674762 2010-01-04 17:42 /user/training/input/the-outline-of-science-vol1.txt

Note that the “hadoop fs” command proxies the usual Unix file commands (“-ls”, “-cat”, among others) to the Hadoop filesystem. Following the suggestion of the documentation, the approach I took to understand the concepts more easily was divide-and-conquer: each of the jobs is executed separately as an exercise, saving the generated reduced values into HDFS.
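
For example, two illustrative commands against the same “input” directory would be peeking at the beginning of one of the books and checking how much space the input takes; both are standard “hadoop fs” sub-commands:

training@training-vm:~/git/exercises/shakespeare$ hadoop fs -cat input/all-shakespeare | head
training@training-vm:~/git/exercises/shakespeare$ hadoop fs -du input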

Job 1: Word Frequency in Doc

As mentioned before, the word frequency phase is designed as a Job whose task is to count the number of occurrences of each word in each of the documents in the input directory. In this case, the specification of the Map and Reduce phases is as follows:

  • Map:
    • Input: (document, each line of the document)
    • Output: (word@document, 1)
  • Reduce:
    • n = sum of the values for each key “word@document”
    • Output: (word@document, n)
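
To make this contract concrete, here is a small illustration (using a made-up input line) of the pairs flowing through Job 1; the total of 652 for “therefore” in all-shakespeare comes from the actual job output shown at the end of this post:

  map(offset, "Therefore the wise man ...")      line from all-shakespeare
    → (therefore@all-shakespeare, 1)             "the" is dropped as a stopword
    → (wise@all-shakespeare, 1)
    → (man@all-shakespeare, 1)
  reduce("therefore@all-shakespeare", [1, 1, 1, ...])
    → (therefore@all-shakespeare, 652)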

In order to decrease the payload received by the reducers, I filter out very-high-frequency words such as “the” using Google’s search stopwords list. Also, the result of each job is the intermediate data for the next one: the reduced values are saved to regular files in HDFS and consumed by the next MapReduce pass. In general, the strategy is:

  1. Shrink the map output by lower-casing the words, so that equal keys are aggregated before the reduce phase;
  2. Skip unnecessary words by checking them against the stopwords dictionary (Google search stopwords);
  3. Use a regular expression to select only words, removing punctuation and other data anomalies.

Job1, Mapper

// (c) Copyright 2009 Cloudera, Inc.
// Hadoop 0.20.1 API Updated by Marcello de Sales (marcello.desales@gmail.com)
package index;

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

/**
 * WordFrequenceInDocMapper implements the Job 1 specification for the TF-IDF algorithm
 */
public class WordFrequenceInDocMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    public WordFrequenceInDocMapper() {
    }

    /**
     * Google's search Stopwords
     */
    private static Set<String> googleStopwords;

    static {
        googleStopwords = new HashSet<String>();
        googleStopwords.add("I"); googleStopwords.add("a");
        googleStopwords.add("about"); googleStopwords.add("an");
        googleStopwords.add("are"); googleStopwords.add("as");
        googleStopwords.add("at"); googleStopwords.add("be");
        googleStopwords.add("by"); googleStopwords.add("com");
        googleStopwords.add("de"); googleStopwords.add("en");
        googleStopwords.add("for"); googleStopwords.add("from");
        googleStopwords.add("how"); googleStopwords.add("in");
        googleStopwords.add("is"); googleStopwords.add("it");
        googleStopwords.add("la"); googleStopwords.add("of");
        googleStopwords.add("on"); googleStopwords.add("or");
        googleStopwords.add("that"); googleStopwords.add("the");
        googleStopwords.add("this"); googleStopwords.add("to");
        googleStopwords.add("was"); googleStopwords.add("what");
        googleStopwords.add("when"); googleStopwords.add("where");
        googleStopwords.add("who"); googleStopwords.add("will");
        googleStopwords.add("with"); googleStopwords.add("and");
        googleStopwords.add("the"); googleStopwords.add("www");
    }

    /**
     * @param key is the byte offset of the current line in the file;
     * @param value is the line from the file
     * @param context gives access to the output collector and to job information (like the current filename)
     *
     *     POST-CONDITION: Output <"word@filename", 1> pairs
     */
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Compile all the words using regex
        Pattern p = Pattern.compile("\\w+");
        Matcher m = p.matcher(value.toString());

        // Get the name of the file from the inputsplit in the context
        String fileName = ((FileSplit) context.getInputSplit()).getPath().getName();

        // build the values and write <k,v> pairs through the context
        StringBuilder valueBuilder = new StringBuilder();
        while (m.find()) {
            String matchedKey = m.group().toLowerCase();
            // skip tokens that start with a non-letter or a digit, are in the stopwords list, or contain an underscore
            if (!Character.isLetter(matchedKey.charAt(0)) || Character.isDigit(matchedKey.charAt(0))
                    || googleStopwords.contains(matchedKey) || matchedKey.contains("_")) {
                continue;
            }
            valueBuilder.append(matchedKey);
            valueBuilder.append("@");
            valueBuilder.append(fileName);
            // emit the partial <k,v>
            context.write(new Text(valueBuilder.toString()), new IntWritable(1));
            // reset the builder so the next key does not accumulate the previous ones
            valueBuilder.setLength(0);
        }
    }
}

Job1, Mapper Unit Test

Note that the unit tests use the JUnit 4 API. MRUnit is also used through its Hadoop 0.20.1 (mapreduce) packages for the Mapper and the corresponding MapDriver. Generics are used to match the types of the actual implementation as well.

// (c) Copyright 2009 Cloudera, Inc.
// Hadoop 0.20.1 API Updated by Marcello de Sales (marcello.desales@gmail.com)
package index;

import static org.apache.hadoop.mrunit.testutil.ExtendedAssert.assertListEquals;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import junit.framework.TestCase;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.apache.hadoop.mrunit.mock.MockInputSplit;
import org.apache.hadoop.mrunit.types.Pair;
import org.junit.Before;
import org.junit.Test;

/**
 * Test cases for the word frequency mapper.
 */
public class WordFreqMapperTest extends TestCase {

    private Mapper<LongWritable, Text, Text, IntWritable> mapper;
    private MapDriver<LongWritable, Text, Text, IntWritable> driver;

    /** We expect pathname@offset for the key from each of these */
    private final Text KEY_SUFIX = new Text("@" + MockInputSplit.getMockPath().toString());

    @Before
    public void setUp() {
        mapper = new WordFrequenceInDocMapper();
        driver = new MapDriver<LongWritable, Text, Text, IntWritable>(mapper);
    }

    @Test
    public void testEmpty() {
        List<Pair<Text, IntWritable>> out = null;

        try {
            out = driver.withInput(new LongWritable(0), new Text("")).run();
        } catch (IOException ioe) {
            fail();
        }

        List<Pair<Text, IntWritable>> expected = new ArrayList<Pair<Text, IntWritable>>();

        assertListEquals(expected, out);
    }

    @Test
    public void testOneWord() {
        List<Pair<Text, IntWritable>> out = null;

        try {
            out = driver.withInput(new LongWritable(0), new Text("foo")).run();
        } catch (IOException ioe) {
            fail();
        }

        List<Pair<Text, IntWritable>> expected = new ArrayList<Pair<Text, IntWritable>>();
        expected.add(new Pair<Text, IntWritable>(new Text("foo" + KEY_SUFIX), new IntWritable(1)));

        assertListEquals(expected, out);
    }

    @Test
    public void testMultiWords() {
        List<Pair<Text, IntWritable>> out = null;

        try {
            out = driver.withInput(new LongWritable(0), new Text("foo bar baz!!!! ????")).run();
        } catch (IOException ioe) {
            fail();
        }

        List<Pair<Text, IntWritable>> expected = new ArrayList<Pair<Text, IntWritable>>();
        expected.add(new Pair<Text, IntWritable>(new Text("foo" + KEY_SUFIX), new IntWritable(1)));
        expected.add(new Pair<Text, IntWritable>(new Text("bar" + KEY_SUFIX), new IntWritable(1)));
        expected.add(new Pair<Text, IntWritable>(new Text("baz" + KEY_SUFIX), new IntWritable(1)));

        assertListEquals(expected, out);
    }
}

Job1, Reducer

// (c) Copyright 2009 Cloudera, Inc.
// Hadoop 0.20.1 API Updated by Marcello de Sales (marcello.desales@gmail.com)

package index;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

/**
 * WordFrequenceInDocReducer takes the list of partial counts emitted for a single "word@document" key and sums them into the total frequency of that word in that document.
 */
public class WordFrequenceInDocReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    public WordFrequenceInDocReducer() {
    }

    /**
     * @param key is the key of the mapper
     * @param values are all the values aggregated during the mapping phase
     * @param context contains the context of the job run
     *
     *      PRE-CONDITION: receive a list of <"word@filename",[1, 1, 1, ...]> pairs
     *        <"marcello@a.txt", [1, 1]>
     *
     *      POST-CONDITION: emit a single <"word@filename", n> pair, where n is the sum of the occurrences.
     *        <"marcello@a.txt", 2>
     */
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {

        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        // write the key and the computed sum of its values
        context.write(key, new IntWritable(sum));
    }
}

Job1, Reducer Unit Test

// (c) Copyright 2009 Cloudera, Inc.
// Hadoop 0.20.1 API Updated by Marcello de Sales (marcello.desales@gmail.com)

package index;

import static org.apache.hadoop.mrunit.testutil.ExtendedAssert.assertListEquals;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import junit.framework.TestCase;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mrunit.mapreduce.ReduceDriver;
import org.apache.hadoop.mrunit.types.Pair;
import org.junit.Before;
import org.junit.Test;

/**
 * Test cases for the word frequency reducer.
 */
public class WordFreqReducerTest extends TestCase {

    private Reducer<Text, IntWritable, Text, IntWritable> reducer;
    private ReduceDriver<Text, IntWritable, Text, IntWritable> driver;

    @Before
    public void setUp() {
        reducer = new WordFrequenceInDocReducer();
        driver = new ReduceDriver<Text, IntWritable, Text, IntWritable>(reducer);
    }

    @Test
    public void testOneItem() {
        List<Pair<Text, IntWritable>> out = null;

        try {
            out = driver.withInputKey(new Text("word")).withInputValue(new IntWritable(1)).run();
        } catch (IOException ioe) {
            fail();
        }

        List<Pair<Text, IntWritable>> expected = new ArrayList<Pair<Text, IntWritable>>();
        expected.add(new Pair<Text, IntWritable>(new Text("word"), new IntWritable(1)));

        assertListEquals(expected, out);
    }

    @Test
    public void testMultiWords() {
        List<Pair<Text, IntWritable>> out = null;

        try {
            List<IntWritable> values = new ArrayList<IntWritable>();
            values.add(new IntWritable(2));
            values.add(new IntWritable(5));
            values.add(new IntWritable(8));
            out = driver.withInput(new Text("word1"), values).run();

        } catch (IOException ioe) {
            fail();
        }

        List<Pair<Text, IntWritable>> expected = new ArrayList<Pair<Text, IntWritable>>();
        expected.add(new Pair<Text, IntWritable>(new Text("word1"), new IntWritable(15)));

        assertListEquals(expected, out);
    }
}

Before executing the Hadoop application, make sure the Mapper and Reducer classes are passing their unit tests. Test-Driven Development helps during the development of Mappers and Reducers by catching problems related to incorrectly overridden methods (Generics in particular), where a wrong “map” or “reduce” method signature silently skips the phase you designed. Therefore, it is safer to run the test cases before actually executing the driver classes.

training@training-vm:~/git/exercises/shakespeare$ ant test
Buildfile: build.xml

compile:
[javac] Compiling 5 source files to /home/training/git/exercises/shakespeare/bin
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.

test:
[junit] Running index.AllTests
[junit] Testsuite: index.AllTests
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.279 sec
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.279 sec
[junit]

BUILD SUCCESSFUL
Total time: 2 seconds

Then, the execution of the Driver can proceed. The Driver defines the mapper and reducer classes, and sets the combiner class to be the same as the reducer class. Also, note that the outputKeyClass and outputValueClass must be the same types emitted by the Reducer class!!! If not, Hadoop will complain! :)

Job1, Driver

// (c) Copyright 2009 Cloudera, Inc.
// Hadoop 0.20.1 API Updated by Marcello de Sales (marcello.desales@gmail.com)
package index;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/**
 * WordFrequenceInDocument Creates the index of the words in documents,
 * mapping each of them to their frequency.
 */
public class WordFrequenceInDocument extends Configured implements Tool {

    // where to put the data in hdfs when we're done
    private static final String OUTPUT_PATH = "1-word-freq";

    // where to read the data from.
    private static final String INPUT_PATH = "input";

    public int run(String[] args) throws Exception {

        Configuration conf = getConf();
        Job job = new Job(conf, "Word Frequence In Document");

        job.setJarByClass(WordFrequenceInDocument.class);
        job.setMapperClass(WordFrequenceInDocMapper.class);
        job.setReducerClass(WordFrequenceInDocReducer.class);
        job.setCombinerClass(WordFrequenceInDocReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(INPUT_PATH));
        FileOutputFormat.setOutputPath(job, new Path(OUTPUT_PATH));

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new WordFrequenceInDocument(), args);
        System.exit(res);
    }
}

As specified by the Driver class, the data is read from the books in the HDFS “input” directory, and the output of this first step goes to the “1-word-freq” directory. The training virtual machine contains the build scripts needed to compile the code and generate the jar for the MapReduce application, as well as to run the unit tests for the Mapper and Reducer classes.

training@training-vm:~/git/exercises/shakespeare$ ant
Buildfile: build.xml

compile:
[javac] Compiling 5 source files to /home/training/git/exercises/shakespeare/bin
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.

jar:
[jar] Building jar: /home/training/git/exercises/shakespeare/indexer.jar

BUILD SUCCESSFUL
Total time: 1 second

After making sure that everything is working according to the tests, it is time to execute the main driver.

training@training-vm:~/git/exercises/shakespeare$ hadoop jar indexer.jar index.WordFrequenceInDocument
10/01/05 16:34:54 INFO input.FileInputFormat: Total input paths to process : 3
10/01/05 16:34:54 INFO mapred.JobClient: Running job: job_200912301017_0046
10/01/05 16:34:55 INFO mapred.JobClient:  map 0% reduce 0%
10/01/05 16:35:10 INFO mapred.JobClient:  map 50% reduce 0%
10/01/05 16:35:13 INFO mapred.JobClient:  map 66% reduce 0%
10/01/05 16:35:16 INFO mapred.JobClient:  map 100% reduce 0%
10/01/05 16:35:19 INFO mapred.JobClient:  map 100% reduce 33%
10/01/05 16:35:25 INFO mapred.JobClient:  map 100% reduce 100%
10/01/05 16:35:27 INFO mapred.JobClient: Job complete: job_200912301017_0046
10/01/05 16:35:27 INFO mapred.JobClient: Counters: 17
10/01/05 16:35:27 INFO mapred.JobClient:   Job Counters
10/01/05 16:35:27 INFO mapred.JobClient:     Launched reduce tasks=1
10/01/05 16:35:27 INFO mapred.JobClient:     Launched map tasks=3
10/01/05 16:35:27 INFO mapred.JobClient:     Data-local map tasks=3
10/01/05 16:35:27 INFO mapred.JobClient:   FileSystemCounters
10/01/05 16:35:27 INFO mapred.JobClient:     FILE_BYTES_READ=3129067
10/01/05 16:35:27 INFO mapred.JobClient:     HDFS_BYTES_READ=7445292
10/01/05 16:35:27 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=4901739
10/01/05 16:35:27 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=1588239
10/01/05 16:35:27 INFO mapred.JobClient:   Map-Reduce Framework
10/01/05 16:35:27 INFO mapred.JobClient:     Reduce input groups=0
10/01/05 16:35:27 INFO mapred.JobClient:     Combine output records=94108
10/01/05 16:35:27 INFO mapred.JobClient:     Map input records=220255
10/01/05 16:35:27 INFO mapred.JobClient:     Reduce shuffle bytes=1772576
10/01/05 16:35:27 INFO mapred.JobClient:     Reduce output records=0
10/01/05 16:35:27 INFO mapred.JobClient:     Spilled Records=142887
10/01/05 16:35:27 INFO mapred.JobClient:     Map output bytes=27375962
10/01/05 16:35:27 INFO mapred.JobClient:     Combine input records=1004372
10/01/05 16:35:27 INFO mapred.JobClient:     Map output records=959043
10/01/05 16:35:27 INFO mapred.JobClient:     Reduce input records=48779

The execution generates the output shown in the following listing (note that I piped the cat output into less so you can navigate over the stream). Searching for the word “therefore” shows its frequency in the different documents.

training@training-vm:~/git/exercises/shakespeare$ hadoop fs -cat 1-word-freq/part-r-00000 | less
...
therefore@all-shakespeare       652
therefore@leornardo-davinci-all.txt     124
therefore@the-outline-of-science-vol1.txt       36
...
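
If you are only interested in the entries for a single word, you can of course filter the stream instead of paging through it, for instance:

training@training-vm:~/git/exercises/shakespeare$ hadoop fs -cat 1-word-freq/part-r-00000 | grep "therefore@"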

The results produced are the intermediate data needed as input for the execution of Job 2, described in Part 2 of this tutorial.
