During the webinar, How to Use Managed and Prioritized Workflows to Reduce the Cost of Review, the audience asked a lot of great questions. Panelists Charles Roberts (D4), Dave Lewis (Brainspace), Kalyn Johnson (Special Counsel) and moderator Chuck Kellner (D4) wove quite a few pre-registration questions into the content of the webinar, and they answered many of the live questions before the hour was up. Here's a recap of the interactive discussion from the webinar.
1. What is CMML?
Dave Lewis: CMML stands for Continuous Multimodal Learning. For anyone who's not sure what supervised machine learning is, it simply means teaching a computer by example, which is what often gets called predictive coding in eDiscovery.
The Brainspace CMML interface gives you a very easy way to go through the top-ranked documents, and with one click you can tag each one as responsive or non-responsive to train the system. The analytics tools let you leverage any type of domain knowledge you have to find responsive documents, in addition to leveraging the traditional predictive model.
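To make "teaching a computer by example" concrete, here is a minimal sketch of a generic supervised text classifier, assuming scikit-learn and a handful of invented documents and tags; it illustrates the general technique, not Brainspace's CMML implementation.

```python
# Illustrative sketch only: a generic "teach by example" classifier,
# not Brainspace's CMML. The documents and tags below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Documents a reviewer has already tagged (1 = responsive, 0 = non-responsive)
tagged_docs = [
    "Q3 invoice dispute with the vendor over change orders",
    "Lunch plans for Friday, see you at noon",
    "Schedule delay caused by the subcontractor's missed milestone",
    "Fantasy football league standings update",
]
tags = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(tagged_docs), tags)

# Score untagged documents and surface the most likely responsive first
untagged = ["Change order pricing for the delayed milestone", "Happy birthday!"]
scores = model.predict_proba(vectorizer.transform(untagged))[:, 1]
for doc, score in sorted(zip(untagged, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```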
2. How are these projects set up?
Charles Roberts: First we identify the documents that we are obligated to review, or documents that are likely to be included in production. From there, we can start to leverage the tools that Brainspace has to offer. For example, we can look at the communication analysis to get a better understanding of the data that we have. We may want to start with particular topics, or the times certain events occurred. Then we can see the specific documents and terms that are important and start making judgments about how we should approach our review.
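As a rough illustration of that kind of early look at the data, the sketch below summarizes communication volume by custodian and by month from a document metadata table; the column names and sample rows are assumptions for the demo, not output from Brainspace's communication analysis.

```python
# Hypothetical metadata table standing in for a collected document population.
import pandas as pd

docs = pd.DataFrame({
    "custodian": ["smith", "jones", "smith", "lee", "jones", "smith"],
    "sent_date": pd.to_datetime([
        "2019-01-14", "2019-01-20", "2019-02-02",
        "2019-02-11", "2019-03-05", "2019-03-22",
    ]),
    "subject": ["kickoff", "budget", "change order",
                "schedule", "delay claim", "closeout"],
})

# Who is communicating, and when? A rough stand-in for communication analysis.
print(docs.groupby("custodian").size().sort_values(ascending=False))
print(docs.groupby(docs["sent_date"].dt.to_period("M")).size())
```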
3. What’s the difference between CMML & TAR?
Charles Roberts: CMML leverages predictive coding technology to put the most likely relevant documents in front of reviewers first. It is an iterative process: each day's coding decisions, or "rounds," are submitted to the machine for ranking, and the next day's review set is drawn from the most recent machine learning.
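The round-by-round loop can be sketched generically as follows, assuming scikit-learn and a stand-in function for the human coding decision; this illustrates the iterative ranking idea in broad strokes, not the CMML algorithm itself.

```python
# Hedged sketch of review "rounds": each round the model is refit on all coding
# decisions to date and the next batch is drawn from the new ranking.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def simulated_reviewer(text):
    # Stand-in for a human coding decision (hypothetical rule for the demo)
    return 1 if "contract" in text.lower() else 0

corpus = [
    "Contract amendment for the tower project",
    "Company picnic signup sheet",
    "Signed contract with the steel supplier",
    "Weekly parking reminder",
    "Draft contract terms for review",
    "Holiday party photos",
]

X = TfidfVectorizer().fit_transform(corpus)
labeled = {0: simulated_reviewer(corpus[0]), 1: simulated_reviewer(corpus[1])}  # seed decisions

batch_size = 2
while len(labeled) < len(corpus):
    model = LogisticRegression().fit(X[list(labeled)], list(labeled.values()))
    unreviewed = [i for i in range(len(corpus)) if i not in labeled]
    scores = model.predict_proba(X[unreviewed])[:, 1]
    # Next round: the highest-ranked unreviewed documents go to the reviewers
    for idx in np.argsort(-scores)[:batch_size]:
        doc_id = unreviewed[idx]
        labeled[doc_id] = simulated_reviewer(corpus[doc_id])
    print("coded so far:", len(labeled), "responsive:", sum(labeled.values()))
```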
4. What insight does the cluster wheel provide for review teams?
Charles Roberts: A Concept Cluster organizes documents according to the substance of each document. The bottom line when it comes to review is to organize documents based on whether they are responsive or not responsive. With a Concept Cluster, we can visually see what's happening in the corpus as a whole, which makes it easier to see all the data at once and start to make coding decisions. You're also able to dive into each cluster at a granular level based on the substance of the documents.
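As a rough analogue of organizing documents by substance, here is a sketch that groups a few invented documents with generic k-means over TF-IDF vectors; Brainspace's concept clustering is its own technology, so treat this only as an illustration of the grouping idea.

```python
# Illustrative clustering of documents by content: invented documents,
# generic k-means rather than Brainspace's concept clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "Invoice for concrete delivery to the site",
    "Payment schedule for concrete and rebar",
    "Holiday office closure announcement",
    "Office will be closed for the holiday",
    "Soil inspection report for the foundation",
    "Foundation inspection follow-up findings",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Show which documents fall into each cluster so a reviewer can drill in
for cluster_id in range(3):
    print(f"Cluster {cluster_id}:")
    for doc, label in zip(docs, km.labels_):
        if label == cluster_id:
            print("  -", doc)
```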
5. How do you leverage a team for this workflow?
Kalyn Johnson: A review team can handle the bigger projects, so you're not asking a partner or high-level associates to train rounds of seed sets as you would with the original TAR 1.0. You can have a team do this dynamically and feed the information back into Brainspace every night. The machine then uses that information to present more documents that could be responsive. The review team continues doing the broader work and managing the integration with Relativity or other tools. This way the attorneys or senior-level associates can use their valuable time to focus on the case instead.
6. Can you do rolling productions and bring in new data with this workflow?
Kalyn Johnson: Yes, you can absolutely do rolling productions. When you bring in new data, you don't have to retrain everything. You can use the information that's already been gathered from the coding that's already been done; you don't need to start all over. We could split the team up, with half the group continuing the review and the other half doing a second-pass review for confidentiality, privilege, and so on.
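Here is a minimal sketch of the "don't start over" point, assuming the earlier coding decisions are available as labeled examples: the existing decisions train the model, and only the newly collected documents are scored. The documents and labels are invented for the demo, and this is a generic illustration rather than the product workflow.

```python
# Prior coding decisions are reused as training data; new documents are ranked,
# but nothing already coded has to be reviewed again.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

prior_docs = [
    "Change order approval for phase two",
    "Cafeteria menu for next week",
    "Delay claim from the electrical subcontractor",
    "Gym membership discount announcement",
]
prior_labels = [1, 0, 1, 0]  # 1 = responsive, 0 = non-responsive

# Newly collected documents from the rolling collection
new_docs = [
    "Updated delay claim with revised damages",
    "Reminder: badge photos on Tuesday",
]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(prior_docs), prior_labels)

scores = model.predict_proba(vectorizer.transform(new_docs))[:, 1]
for doc, score in sorted(zip(new_docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```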
7. What is the ROI of this workflow?
8. What are some examples of past projects?
Construction Litigation:
- 83,475 records remaining after application of search terms
- Reviewed 29,124 using CMML
- Identified 6,599 Responsive Records
Case Refresh:
- Previously reviewed ~12,000 and produced 2,500 records
- Collected additional 11,000
- Used first set as seeds
- Reviewed additional 2,468
- Identified 1,356 responsive records within the 11k
Large Corpus, Small Review:
- 4 Million Records
- Single Reviewer
- Reviewed 11,872
- Identified 5,413 Responsive
- Reduced final review set to 850,000 records
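For a rough sense of the savings behind these numbers, the sketch below simply restates the figures above as percentages, assuming the responsive counts were identified within the reviewed sets.

```python
# Restating the example figures as percentages of documents actually reviewed.
examples = {
    "Construction Litigation":    {"population": 83_475,    "reviewed": 29_124, "responsive": 6_599},
    "Case Refresh (new 11k)":     {"population": 11_000,    "reviewed": 2_468,  "responsive": 1_356},
    "Large Corpus, Small Review": {"population": 4_000_000, "reviewed": 11_872, "responsive": 5_413},
}

for name, e in examples.items():
    pct_reviewed = 100 * e["reviewed"] / e["population"]
    richness = 100 * e["responsive"] / e["reviewed"]
    print(f"{name}: reviewed {pct_reviewed:.1f}% of the set; "
          f"{richness:.1f}% of the reviewed documents were responsive")
```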