diff --git a/BDPA_Assign2_WJIN.md b/BDPA_Assign2_WJIN.md
index 44d2b3484dc004a89720d9c590e955ee138e4787..c6708ab14952bcb554ff4a99f6aaed9a6e884f88 100644
--- a/BDPA_Assign2_WJIN.md
+++ b/BDPA_Assign2_WJIN.md
@@ -21,10 +21,10 @@ By slightly modifying the wordcount code from the previous assignment, we can ou
       }
       
 ```
-The stop word file can be found [here](https://gitlab.my.ecp.fr/2014jinwy/BDPA_Assign2_WJIN/blob/master/stopwords).
+The stop word file can be found [here](stopwords).
 
 #### 2. Count word frequency of pg100.txt
-By using again the wordcount algorithm, we recount the word frequency for `pg100.txt` to be used later for word sorting. This time capital cases are kept to be taken acount in the similarity comparison. The output file can be found [here](https://gitlab.my.ecp.fr/2014jinwy/BDPA_Assign2_WJIN/blob/master/wordfreq).
+Using the wordcount algorithm again, we recount the word frequency of `pg100.txt`, to be used later for word sorting. This time capitalization is kept so that it is taken into account in the similarity comparison. The output file can be found [here](wordfreq).
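+
+A minimal sketch of such a frequency-count mapper is shown below (illustrative only; the actual code reuses the wordcount classes from the previous assignment and is paired with the usual summing reducer). The point is that no lowercasing is applied, so `King` and `king` stay distinct:
+```java
+import java.io.IOException;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Mapper;
+
+public class WordFreqMapper extends Mapper<Object, Text, Text, IntWritable> {
+   private final static IntWritable ONE = new IntWritable(1);
+   private Text word = new Text();
+
+   @Override
+   public void map(Object key, Text value, Context context)
+           throws IOException, InterruptedException {
+      // tokenize on non-alphanumeric characters; no lowercasing here
+      for (String token : value.toString().split("[^a-zA-Z0-9]+")) {
+         if (!token.isEmpty()) {
+            word.set(token);
+            context.write(word, ONE);
+         }
+      }
+   }
+}
+```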
 
 #### 3. Output lines
 In this step, several tasks should be done:
@@ -74,7 +74,7 @@ For this step, all task are done within the mapper. The tokenizer is `space` or
          context.write(new LongWritable(counter.getValue()), words);
       }
 ```
-The output file can be found [here](https://gitlab.my.ecp.fr/2014jinwy/BDPA_Assign2_WJIN/blob/master/sortedline). The total line number is output to HDFS as instructed, you can also find the file [here](https://gitlab.my.ecp.fr/2014jinwy/BDPA_Assign2_WJIN/blob/master/linenumber).
+The output file can be found [here](sortedline). The total line number is written to HDFS as instructed; that file can be found [here](linenumber).
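+
+The total line number comes from a Hadoop counter; a minimal driver-side sketch of writing it to HDFS after the job finishes (the counter group/name and the exact output path are illustrative, the actual code may differ):
+```java
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.Job;
+
+public class LineCountWriter {
+   // call after job.waitForCompletion(true) has returned successfully
+   public static void writeLineCount(Job job, Configuration conf) throws Exception {
+      long total = job.getCounters().findCounter("documents", "LINES").getValue();
+      FileSystem fs = FileSystem.get(conf);
+      try (FSDataOutputStream out = fs.create(new Path("linenumber"))) {
+         out.writeBytes(total + "\n");
+      }
+   }
+}
+```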
 
 ---  
 ### Set similarity joins
@@ -220,7 +220,7 @@ To compute the similarity of two strings as instructed, we used a `Set` to store
 	    }
 ```
 ##### _File sampling_
-Our input file has more than `100000` documents. Consider pairwise calculation in the naive approah, there will be more than `10^10` pairs emitted. This exceeds way beyond the capacity of our virtual machine. In the following section, we only tested the algorithms on sampled documents. The files can be found [here](https://gitlab.my.ecp.fr/2014jinwy/BDPA_Assign2_WJIN/blob/master/sortedline_sample) and [here](https://gitlab.my.ecp.fr/2014jinwy/BDPA_Assign2_WJIN/blob/master/linenumber_sample).
+Our input file has more than `100000` documents. With pairwise computation in the naive approach, more than `10^10` pairs would be emitted, which far exceeds the capacity of our virtual machine. In the following sections, we therefore only tested the algorithms on sampled documents. The files can be found [here](sortedline_sample) and [here](linenumber_sample).
 
 #### 1. Naive approach
 
@@ -229,8 +229,7 @@ wise  comparisons  bet
 ween documents, using the following technique:
-Each document is handled by a single mapper (remember that lines are 
-used to represent documents in this assignment). 
+Each document is handled by a single mapper (remember that lines are used to represent documents in this assignment). 
 The map method should emit, for each document, the document id along with one other document id as a key (one such
@@ -243,12 +242,14 @@ s for all/some selected pairs
 . Output only similar pairs on HDFS, in TextOutputFormat.
 
+
 To avoid redundant parses of the input file, a little care is needed. In this algorithm, the input file is parsed only once.
 * Store in advance all document ids (in our case we use the total line number obtained from the previous counter; ids of empty documents are effectively ignored, as explained below)
 * For each id and document from the input, emit `(id,i), document` for all `i!=id` in the document id set (including the ids of empty lines). Due to the symmetry of the key pair (see the `LongPair` sketch below), most keys will have two instances.
 * In the reduce phase, process only keys with `two` instances. In this way we ignore empty documents: since they are absent from the input file, their keys appear only once. Since empty documents are rare, computation time is not much affected.
-* The two instances are exactly the two documents that we need to compare for each key. Calculate similarity and emit key pairs that are similar.
+* The two instances are exactly the two documents that we need to compare for each key. Calculate the similarity and emit the key pairs that are similar.  
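+
+The symmetry of the key relies on the pair storing its two ids in a canonical order, so that `(id,i)` and `(i,id)` become the same key and meet in the same reduce call. A minimal sketch of such a `LongPair` (the class in the repository may differ in its details):
+```java
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import org.apache.hadoop.io.WritableComparable;
+
+public class LongPair implements WritableComparable<LongPair> {
+   private long first;
+   private long second;
+
+   public LongPair() {}
+
+   public LongPair(long a, long b) {
+      // store the smaller id first, so (a,b) and (b,a) are identical keys
+      this.first = Math.min(a, b);
+      this.second = Math.max(a, b);
+   }
+
+   public void write(DataOutput out) throws IOException {
+      out.writeLong(first);
+      out.writeLong(second);
+   }
+
+   public void readFields(DataInput in) throws IOException {
+      first = in.readLong();
+      second = in.readLong();
+   }
+
+   public int compareTo(LongPair o) {
+      int c = Long.compare(first, o.first);
+      return c != 0 ? c : Long.compare(second, o.second);
+   }
+
+   @Override
+   public int hashCode() {
+      // both copies of a pair must be sent to the same partition
+      return (int) (31 * first + second);
+   }
+
+   @Override
+   public boolean equals(Object o) {
+      if (!(o instanceof LongPair)) return false;
+      LongPair p = (LongPair) o;
+      return first == p.first && second == p.second;
+   }
+
+   @Override
+   public String toString() {
+      return first + "," + second;
+   }
+}
+```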
 
+**Mapper:**
 ```java
       @Override
       public void map(Text key, Text value, Context context)
@@ -283,6 +284,7 @@ To avoid redundant parses of the input file, some intuition is needed. In this a
     		 counter += 1;
     	 }
 ```
+**Reducer:**
 ```java
 	  @Override
       public void reduce(LongPair key, Iterable<Text> values, Context context)
@@ -302,17 +304,106 @@ To avoid redundant parses of the input file, some intuition is needed. In this a
     	 context.getCounter(CompCounter.NUM).increment(1);
     	 
     	 if (s>=0.8){
-             context.write(key, new DoubleWritable(s)); 
+             context.write(new Text(key.toString()), new Text(String.valueOf(s))); 
     	 }
       }
 ```
 
+The number of mappers can be increased by reducing the maximum input split size (here to 500 bytes), so that the expensive pairwise map work is spread over more map tasks:
+```java
+      conf.set("mapreduce.input.fileinputformat.split.maxsize", String.valueOf(500));
+```
 The Hadoop job overview:
 
 ![](img/naiveapproach.png)
 
+**Execution time:** ``7m 50s``
+
+`365085` similarities are calculated. Knowing that we have `n=855` documents in our sampled file, we find `365085 = n*(n-1)/2`, so the algorithm worked as expected.
+
+#### 2. Pre-filtering approach
+>Create an inverted index, only for the first |d| - ⌈t|d|⌉ + 1 words of each document d (remember that they are stored in ascending order of frequency). In your reducer, compute the similarity of the document pairs. Output only similar pairs on HDFS, in TextOutputFormat. Report the execution time and the number of performed comparisons.
+
+In this part, the implementation is more straightforward:
+* At the map phase, invert ids and words: output each word that falls within the window of the first `|d| - ⌈t|d|⌉ + 1` words
+* Since the map phase outputs only document ids and not their content, a hashmap for document retrieval is needed at the reduce phase. We load it in the `setup` function (see the sketch after the reducer code below).
+* At the reduce phase, for every key, compute the similarity whenever several document ids are present. Since the words are sorted by frequency, far fewer comparisons should be needed.
+
+**Mapper:**
+```java
+@Override
+      public void map(Text key, Text value, Context context)
+              throws IOException, InterruptedException {
+    	  
+    	 if (value.toString().isEmpty()){
+    		 return;
+    	 }
+    	 
+    	 String[] document = value.toString().split(",");
+    	 int window = document.length - (int)Math.floor(document.length*0.8);
+    	 int counter = 0;
+    	 
+    	 while(counter<window){
+    		 word.set(document[counter]);
+    		 context.write(word,key);
+    		 counter += 1;
+    	 }
+      }
+```
+**Reducer:**
+```java
+@Override
+      public void reduce(Text key, Iterable<Text> values, Context context)
+              throws IOException, InterruptedException {
+		 List<Text> val = new ArrayList<Text>();
+		 for (Text v :values){
+			 val.add(new Text(v));
+		 }
+    	 for (Text v1 : val){
+    		 for (Text v2: val){
+    			 if (v1.equals(v2)){
+    				 continue;
+    			 }
+    			 
+    			 String s1 = this.document.get(v1).toString();
+    			 String s2 = this.document.get(v2).toString();
+
+    	    	 context.getCounter(CompCounter.NUM).increment(1);
+    			 Double s = similarity(s1, s2);
+    			 
+    			 if (s>=0.8){
+    				 context.write(new Text(v1.toString()+','+v2.toString()), new Text(String.valueOf(s)));
+    			 }
+    		 }
+    	 }
+      }
+```
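+
+The `document` lookup used in this reducer is filled once per task in `setup`. A minimal sketch, assuming the sampled corpus `sortedline_sample` is read directly from HDFS and that each line has the form `id<TAB>comma-separated words` (the actual implementation may differ):
+```java
+// imports needed at the top of the reducer's source file
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Text;
+
+// field and setup() inside the reducer class
+private Map<Text, Text> document = new HashMap<Text, Text>();
+
+@Override
+protected void setup(Context context) throws IOException, InterruptedException {
+   Configuration conf = context.getConfiguration();
+   FileSystem fs = FileSystem.get(conf);
+   Path path = new Path("sortedline_sample");   // assumed location of the sampled corpus
+   try (BufferedReader reader = new BufferedReader(new InputStreamReader(fs.open(path)))) {
+      String line;
+      while ((line = reader.readLine()) != null) {
+         String[] parts = line.split("\t", 2);
+         if (parts.length == 2 && !parts[1].isEmpty()) {
+            // key: document id, value: comma-separated words sorted by frequency
+            document.put(new Text(parts[0]), new Text(parts[1]));
+         }
+      }
+   }
+}
+```
+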
+The Hadoop job overview:
+
+![](img/prefilteringapproach.png)
 
+**Execution time:** ``15 s``
 
+`976` comparisons are made in this job, far fewer than in the naive approach.
 
+#### 3. Justification of the difference
 
+| Job                  | # of comparisons | Execution Time |
+|:--------------------:|:----------------:|:--------------:|
+| NaiveApproach        | 365085           | 7m 50s         |
+| PrefilteringApproach | 976              | 15s            |
+
+The naive approach performs O(n²) comparisons and shuffles O(n²) intermediate pairs, so it needs much more time, even in the shuffle-and-sort phase.  
+The pre-filtering approach is very efficient when similar documents are rare and documents are not very long, which is exactly our case. This explains the drastic performance difference.