  • Jan 20, 2016

Goodhart's Law states that when a measure becomes a target, it ceases to be a good measure. Keep in mind that this can be a concern in electronic discovery. Maura Grossman and Gordon Cormack (authors of the famous TAR glossary, see the tip of the night for June 3, 2015) published a paper in The Federal Courts Law Review entitled "Comments on 'The Implications of Rule 26(g) on the Use of Technology-Assisted Review'". This paper criticizes a study on TAR by Karl Schieneman and Thomas Gricks which specifies that enough random sampling must be conducted in order to estimate precision and recall within a margin of error of plus or minus 5 per cent. Grossman and Cormack submit that the goal of TAR is always to identify as much responsive ESI as possible for a proportionate cost, and that statistics such as precision and recall are just measures of success in reaching that goal. They assert that TAR directed by sampling specifications reflects the problem stated in Goodhart's Law. 2014 Fed. Cts. L. Rev. 285, 287.
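For context, a plus or minus 5 per cent margin of error implies a minimum random sample size. Here's a rough sketch using the standard normal-approximation formula for estimating a proportion at 95% confidence (a generic statistics formula, not one taken from either paper):

```python
import math

def sample_size(margin=0.05, z=1.96, p=0.5):
    """Worst-case sample size needed to estimate a proportion
    (e.g., recall) within +/- margin at the given z-score.
    p=0.5 maximizes the required sample, so it's the safe default."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size())             # 385 documents for +/- 5%
print(sample_size(margin=0.10))  # 97 documents for +/- 10%
```

So tightening the margin of error from 10% to 5% roughly quadruples the review burden, which is part of why Grossman and Cormack question making the sampling specification itself the target.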


Andre Ross gives a very good description of the shingling process in this blog post: http://digfor.blogspot.com/2013/03/fruity-shingles.html . As discussed in the tip of the night for January 16, 2016, document shingling involves comparing n-grams of overlapping word sequences in two different text files. Ross notes that shingling involves the calculation of Jaccard similarity, "the number of items in the intersection of A and B divided by the number of items in the union of A and B" or

Sim(A,B) = |A ∩ B| / |A ∪ B|

. . . so we get a figure based on the number of n-grams the two have in common divided by the total number of unique n-grams used in both.
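With shingles stored as sets, this calculation is one line of Python (the shingle strings below are made-up examples, not taken from the figures):

```python
def jaccard(a, b):
    """Jaccard similarity: |A intersect B| / |A union B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

july = {"now is the", "is the time", "the time for"}
august = {"now is the", "is the time", "time for all"}
print(jaccard(july, august))  # 2 shared / 4 unique = 0.5
```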

Here's an example.

1. In Fig. 1 we see 3 text files, which are edited over a period of several weeks. The August version is almost the same as the July version, but one phrase has been moved around. In the September version, while the original first sentence is still present in part, an entirely new phrase has been added and more changes have been made.

2. In Fig. 2, we run the n-gram generator discussed in the tip of the night for January 16, 2016, and copy the three-word overlapping n-grams for each of the three text files into an Excel spreadsheet.

3. In Excel, the n-grams from each text file are pasted into columns A, C, and E. We then run VLOOKUP formulas in column B to check which of the n-grams from the July version in column A match those from the August version in column C [18 matches], and which of the n-grams from the August version in column C match those in column E for the September version [8 matches].

4. On a second worksheet, we combine the n-grams from the July and August versions into one de-duped set, and those from the August and September versions into another, getting totals of 36 and 49 respectively.

5. So while the July and August versions have a Jaccard similarity of 0.5 (18/36), the August and September versions have a Jaccard similarity of only about 0.16 (8/49).
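The whole workflow above can be sketched in a few lines of Python, standing in for the n-gram generator and the VLOOKUP and de-dupe worksheets (the two sentences below are hypothetical stand-ins for the July and August files, not the text from the figures):

```python
def shingles(text, n=3):
    """Break a text into a set of overlapping word n-grams (shingles)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

# Hypothetical file contents for two versions of a document:
july = shingles("the quick brown fox jumps over the lazy dog")
august = shingles("the quick brown fox leaps over the lazy dog")

matches = july & august    # the VLOOKUP step: shared n-grams
combined = july | august   # the de-duped combined set
print(len(matches), len(combined), len(matches) / len(combined))
# 4 shared out of 10 unique shingles: similarity 0.4
```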

  • Jan 16, 2016

Shingling is a method of determining the degree of similarity between two electronic files by measuring how many n-grams the two have in common. N-grams are sequences of a set number of words from a text file, generated so that each n-gram overlaps the next: the second word of the current n-gram is always the first word of the succeeding one. So the n-grams for this phrase, where n=3 (that is, where we want to generate 'trigrams'):

Now is the time for all good men to come to the aid of their party.

. . . would be:

Now is the

is the time

the time for

time for all

for all good

all good men

good men to

men to come

to come to

come to the

to the aid

the aid of

aid of their

of their party

The idea is to create word groupings that overlap with one another. If you want to generate n-grams, download the Win32 version of the N-gram extraction tool on this site: http://homepages.inf.ed.ac.uk/lzhang10/ngram.html

Just download the zip file and extract the files to a folder. Save the text file that you want to analyze in the same folder, press CTRL + SHIFT and right-click in the folder, and select 'Open command window here'. In the command prompt type:

text2ngram -n3 now.txt

. . . 'now.txt' being the name of the file you want to generate n-grams for. You'll get the results shown in this screen grab:
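If you'd rather not use the command-line tool, a few lines of Python produce the same trigram list (this is my own sketch, not the text2ngram source; it strips trailing punctuation so the last trigram comes out as 'of their party'):

```python
import string

def ngrams(text, n=3):
    """Return overlapping word n-grams, stripping punctuation from each word."""
    words = [w.strip(string.punctuation) for w in text.split()]
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

sentence = "Now is the time for all good men to come to the aid of their party."
for gram in ngrams(sentence):
    print(gram)  # prints the 14 trigrams, "Now is the" through "of their party"
```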

Sean O'Shea has more than 20 years of experience in the litigation support field with major law firms in New York and San Francisco.   He is an ACEDS Certified eDiscovery Specialist and a Relativity Certified Administrator.

The views expressed in this blog are those of the owner and do not reflect the views or opinions of the owner’s employer.



© 2015 by Sean O'Shea . Proudly created with Wix.com
