This blog was originally published on the Relativity Blog on June 21, 2016 by Stan Pierson.
You’ll find urban legends in every culture—including stories of Bigfoot in the northern hemisphere, and the chupacabra down south. Look closely, and you might find some interesting legends in the professional world, too. (Everyone’s been told that all doctors have terrible handwriting, right?)
When we presented our Analytics for Attorneys workshop at the Relativity Spring Roadshow, we discussed—and dispelled—some myths unique to the legal world that have evolved around using analytics.
Here’s a look at those myths, including how they’ve been busted.
1. MYTH: It only makes sense to use analytics on the largest cases.
FACT: Some analytics tools are just as powerful and helpful on all cases, regardless of document count.
Analytics tools have evolved to become quite useful for document sets of any size. While some features—like technology-assisted review—do make the most sense for data populations of a certain size, others—such as email threading and near-duplicate identification—can increase efficiency on every document set regardless of size.
Analytics is moving into the mainstream of document review. The Sedona Conference encourages the use of analytics to make review more efficient. Sedona Principle Number 11 discusses the use of "electronic tools and processes, such as data sampling, searching, or the use of selection criteria, to identify data reasonably likely to contain relevant information."
We often talk to clients who include analytics in their standard processes and make running these tools part of their go-to e-discovery approach. For instance, Cristin Traylor and the team at McGuireWoods consider the use of email threading to be a no-brainer on their e-discovery projects.
2. MYTH: I should use either keyword searching OR analytics.
FACT: Analytics works extremely well in conjunction with other tools, sometimes even better than it does on its own.
You can use analytics in any combination, even if you’re already searching with keywords or dtSearch. Analytics tools should not be thought of as one feature for one case, but as tools in a toolbox. You’ll build a much better house if you use more than just a hammer.
Likewise, you’ll build a better case when you can combine multiple review functions, including analytics. The combination of email threading, language identification, near-duplicate identification, and clustering, for instance, can help you prioritize a review and get to responsive documents faster.
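To make one of these review functions concrete, here is a minimal, illustrative sketch of near-duplicate identification. This is a toy, not how Relativity or any other platform actually implements the feature: it simply compares documents by the overlap (Jaccard similarity) of their word "shingles," which is one classic way to flag near-duplicates.

```python
# Toy near-duplicate identification via word shingles and Jaccard similarity.
# Illustrative only; production tools use far more sophisticated techniques.

def shingles(text, k=3):
    """Return the set of k-word shingles in a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets (0.0 to 1.0)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def near_duplicates(docs, threshold=0.5):
    """Yield pairs of document ids whose shingle overlap meets the threshold."""
    sets = {doc_id: shingles(text) for doc_id, text in docs.items()}
    ids = sorted(sets)
    for i, x in enumerate(ids):
        for y in ids[i + 1:]:
            if jaccard(sets[x], sets[y]) >= threshold:
                yield (x, y)
```

Grouping near-duplicates like this lets a review team batch nearly identical documents together, so one reviewer sees all the variants of a document at once instead of re-reading them scattered across the review.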
3. MYTH: Concept searching requires a great deal of training and time to yield results.
FACT: Concept searching setup and execution are fast and easy.
The most technical piece of the analytics puzzle, from an end user's perspective, is creating an index. In Relativity, this can be easily accomplished with a template, and it gets easier for your team with each repeated project. The process is simpler than ever in the latest version of the platform, and documentation and training are available, both in person and online, to help shorten the learning curve.
Once a conceptual index is built, searching for concepts is just as easy as using other search methods—maybe easier, since it doesn’t require unique syntax. You can use any terms to run a concept search, as easily as you’d use free text in a Google search. Your results will be based only on the documents in your workspace—not outside dictionaries or word lists—so each search will be tailored to serve the most relevant results based on the content of your data.
For example, maybe you received a memo from your client or outside counsel describing the issues in the case. You can plug the language from that memo directly into a concept search and quickly find any documents that are related.
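The underlying idea can be sketched in a few lines. The toy below ranks documents by how similar their vocabulary is to a block of free text, using plain bag-of-words cosine similarity; real conceptual indexes (such as those built on latent semantic indexing) are far more sophisticated, but the workflow, paste in free text and get ranked results with no special syntax, is the same.

```python
# Toy "concept search": rank documents against free text (e.g., a client memo)
# using bag-of-words cosine similarity. Illustrative only; real conceptual
# indexes capture relationships between terms, not just shared vocabulary.
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector for a piece of text."""
    return Counter(text.lower().split())

def cosine(v1, v2):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(v1[t] * v2[t] for t in v1)
    norm = (math.sqrt(sum(c * c for c in v1.values()))
            * math.sqrt(sum(c * c for c in v2.values())))
    return dot / norm if norm else 0.0

def concept_search(memo, docs):
    """Rank document ids by similarity to the memo text, best match first."""
    query = vectorize(memo)
    scored = [(cosine(query, vectorize(text)), doc_id)
              for doc_id, text in docs.items()]
    return [doc_id for score, doc_id in sorted(scored, reverse=True) if score > 0]
```

Note that, as in the product, the results depend only on the documents you score, not on any outside dictionary or word list.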
4. MYTH: The only way to conceptually group documents is to first review thousands of them.
FACT: Clustering will group your documents together by concept without any user input required.
To use clustering, neither the reviewer nor the system needs any training. This differs from other workflows like TAR, where the user submits specific documents to train the system. You can cluster any document set to see similar document groupings and get a bird's-eye view of your data before any review takes place. Cluster visualizations give you the advantage of quickly seeing which clusters are the most substantive. From there, drill into the clusters by filtering on any keyword or field you like, batch documents out for review, or even start strategizing for depositions.
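The "no training required" point is the key one, and a toy example shows why. The sketch below uses a simple "leader" clustering: each document joins the first cluster whose seed it resembles, or starts a new cluster, with no input from a reviewer. Production clustering engines use much richer conceptual features, but they are unsupervised in the same sense.

```python
# Toy unsupervised clustering: group documents by vocabulary overlap with no
# reviewer training. Illustrative only; real engines cluster on concepts.
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector for a piece of text."""
    return Counter(text.lower().split())

def cosine(v1, v2):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(v1[t] * v2[t] for t in v1)
    norm = (math.sqrt(sum(c * c for c in v1.values()))
            * math.sqrt(sum(c * c for c in v2.values())))
    return dot / norm if norm else 0.0

def cluster(docs, threshold=0.3):
    """Group docs (id -> text) into clusters of similar documents."""
    clusters = []  # each entry: (seed_vector, [member doc ids])
    for doc_id, text in docs.items():
        vec = vectorize(text)
        for seed, members in clusters:
            if cosine(vec, seed) >= threshold:
                members.append(doc_id)
                break
        else:
            clusters.append((vec, [doc_id]))
    return [members for _, members in clusters]
```

Run against a document set, this produces groupings you can skim before any review begins, which is exactly the bird's-eye view described above.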
Analytics can be remarkably useful once you realize the broad functionality at your disposal. When you get familiar with today's tools, you will see how easily they can fit together to enhance any workflow, and dispel any urban legends that are giving your team the heebie-jeebies.
Stan Pierson is a member of the customer success team at kCura, offering case teams a practical perspective on document review with a mind for cost-effectiveness and efficiency.