
Active Learning


The Tip of the Night for March 12, 2019, discussed elusion tests, which are run as part of active learning projects in Relativity. Here's some more detail on active learning projects, to help you understand how a Relativity admin can use them to expedite document review.

Released in December 2017, this feature of Relativity is a simplified form of assisted review, also called technology assisted review. An active learning project is found as an option under the Assisted Review tab. Active Learning is based on the Support Vector Machine (SVM) algorithm, which allows Relativity to learn from coding decisions. SVM performs binary classification: documents are either responsive or non-responsive. The text of documents is compared so that documents with the same responsive or non-responsive decision are classified together. Coding decisions are used to determine which new documents need to be reviewed. SVM minimizes memory usage and works quickly. It is also resistant to contradictory coding decisions: documents coded both responsive and non-responsive have little influence on how the model predicts document responsiveness, and older coding decisions are purged from the model in favor of more recent ones.
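To make the SVM approach concrete, here is a minimal sketch of binary text classification with a linear SVM, written in Python with scikit-learn. It illustrates the general technique only; the sample documents and labels are hypothetical, and none of this reflects Relativity's actual implementation.

```python
# Illustrative only: a generic linear SVM text classifier,
# not Relativity's internals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Hypothetical extracted text plus reviewer coding decisions.
texts = [
    "quarterly revenue forecast attached",
    "lunch menu for the cafeteria",
    "contract amendment for the merger",
    "office holiday party invitation",
]
labels = [1, 0, 1, 0]  # 1 = responsive, 0 = non-responsive

# Vectorize the document text, then fit a binary SVM classifier.
vectorizer = TfidfVectorizer(stop_words="english")
model = LinearSVC()
model.fit(vectorizer.fit_transform(texts), labels)

# Score an unreviewed document: the signed distance from the SVM's
# separating hyperplane can serve as a responsiveness rank.
new_doc = vectorizer.transform(["draft merger agreement for review"])
print(model.decision_function(new_doc))  # positive -> predicted responsive
```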

The model re-ranks documents every 20 minutes in response to coding decisions, and it will also perform an update if no coding decisions are made for five minutes. The admin can also update document ranks manually. Relevance rate metrics are updated after 200 documents have been reviewed.
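As a rough picture of that cadence, the sketch below polls a clock and triggers a re-rank when either the 20-minute interval has elapsed or pending coding decisions have sat idle for five minutes. The function names and the polling approach are assumptions made for illustration; Relativity's actual scheduling logic is not public.

```python
# Hypothetical scheduler illustrating the update cadence described above.
import time

FULL_INTERVAL = 20 * 60  # re-rank every 20 minutes
IDLE_FLUSH = 5 * 60      # also update after 5 minutes without new codings

def rerank_loop(pending_codings, rerank, poll=30):
    """pending_codings() -> (count, seconds since the last coding decision)."""
    last_rerank = time.monotonic()
    while True:
        count, idle = pending_codings()
        due = time.monotonic() - last_rerank >= FULL_INTERVAL
        if due or (count > 0 and idle >= IDLE_FLUSH):
            rerank()  # recompute document ranks from the new decisions
            last_rerank = time.monotonic()
        time.sleep(poll)
```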

An active learning project uses a classification index, which is based on a saved search. A designation field must be created that allows for two choices: responsive or non-responsive. Documents are ranked based on responsiveness decisions, and the admin can monitor coding decisions made in the review queue. In addition to documents served up in the active learning queue, manually selected documents can also be coded; coding these can help 'jump start' the active learning model.
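A minimal sketch of the ranking step, continuing the earlier SVM example: unreviewed documents are sorted by model score so that the highest-ranked ones are served first. The function and its arguments are hypothetical, not Relativity's API.

```python
# Hypothetical: build a prioritized review queue from model scores.
def build_queue(doc_ids, scores, already_coded):
    """Return unreviewed document IDs, highest-ranked first."""
    unreviewed = [(d, s) for d, s in zip(doc_ids, scores)
                  if d not in already_coded]
    unreviewed.sort(key=lambda pair: pair[1], reverse=True)
    return [d for d, _ in unreviewed]

# Example: DOC-3 outranks DOC-1; DOC-2 was already coded.
print(build_queue(["DOC-1", "DOC-2", "DOC-3"], [0.4, 0.9, 0.7], {"DOC-2"}))
```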

Decisions are later validated with elusion testing. The Project Home tab shows a bar graph tracking which documents have been coded responsive, which non-responsive, and which are 'Not Set'. A line graph tracks prioritized review progress: how many of the documents the model has predicted to be responsive have actually been coded as responsive. A falling relevance rate is an indication that the review may be reaching diminishing returns, and an elusion test can then be performed to check whether any documents designated as non-responsive are in fact responsive.
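The arithmetic behind an elusion test is simple: draw a random sample from the documents the model predicts to be non-responsive, have a human review the sample, and compute the fraction that turn out to be responsive. The sketch below assumes a hypothetical review callback, and the sample size is illustrative.

```python
# Hypothetical elusion-test arithmetic; the sample size is illustrative.
import random

def elusion_rate(predicted_non_responsive, review, sample_size=500, seed=42):
    """review(doc) -> True if a human codes the document responsive."""
    rng = random.Random(seed)
    sample = rng.sample(predicted_non_responsive, sample_size)
    missed = sum(1 for doc in sample if review(doc))
    return missed / sample_size  # e.g. 3 responsive of 500 -> 0.6% elusion
```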

Rolling productions loaded into a workspace can be pulled into active learning projects, so the classification index can be built incrementally as new documents become available.

When creating the analytics index, the admin should begin with a saved search and exclude documents without extracted text. English stop words, email headers, and numbers will be removed from the index. The index type should be set to classification rather than conceptual, and Relativity recommends not suppressing duplicate documents for an active learning project. The model will mostly present documents to reviewers that it believes to be responsive, but it will also include a smaller percentage of apparently non-responsive documents for the purposes of index health.
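As a rough picture of that preprocessing, the sketch below strips email header lines, numbers, and English stop words from a document's extracted text before indexing. The stop-word list and regular expressions are illustrative assumptions; Relativity's actual pipeline is not public.

```python
# Illustrative preprocessing only, not Relativity's pipeline.
import re

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is"}  # sample
HEADER = re.compile(r"^(From|To|Cc|Bcc|Subject|Date|Sent):.*$",
                    re.IGNORECASE | re.MULTILINE)

def clean_for_index(text):
    text = HEADER.sub("", text)              # strip email header lines
    tokens = re.findall(r"[a-zA-Z]+", text)  # letters only, so numbers drop out
    return [t.lower() for t in tokens if t.lower() not in STOP_WORDS]

print(clean_for_index("From: jdoe@example.com\nSubject: Q3 numbers\n"
                      "The revenue grew 14 percent in Q3."))
# -> ['revenue', 'grew', 'percent', 'q']
```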

During an active learning project, if a reviewer is not sure whether a document is responsive or non-responsive, they can request a new document or skip the current one. A skipped document is removed from the review queue.
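The skip behavior can be pictured as a simple queue in which skipping removes the current document without recording a coding decision. This class is purely illustrative.

```python
# Hypothetical queue illustrating the skip behavior described above.
from collections import deque

class ReviewQueue:
    def __init__(self, ranked_doc_ids):
        self.queue = deque(ranked_doc_ids)

    def next_document(self):
        return self.queue[0] if self.queue else None

    def skip(self):
        # Drop the current document without recording a coding decision.
        if self.queue:
            self.queue.popleft()
```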

It may be possible to use active learning for privilege review, but a Relativity representative I asked about this indicated that it has not been proven to be effective for this purpose.

