In my first post, I explained the background behind the industry's move toward an IN/OUT pricing model and broke down exactly what the approach entails. In this post I'll cover the inherent risks of the IN/OUT workflow and propose an ALL-IN alternative.
There will always be scenarios in today's discovery landscape where this approach proves effective and simple, and for many it remains the preferred alternative to other methods. It is also familiar to attorneys who grew up using Boolean search queries to perform research in tools like Lexis or Westlaw. However, as business users continue to find new ways to communicate, as information is stored in ever more disparate forms, and as the volume of information we all create daily continues to climb, the risks inherent in this model are too great to overlook. More importantly, much of this approach was a byproduct of the limitations of the technology available at the time.
Finally, the cost to process and host discovery data accounts for only a minor portion of a much larger, and much more complex, supply chain in which case teams outsource more work than ever before to service providers who now shoulder ever-increasing responsibility across the discovery life cycle. Leaning on service providers simply to process data and return it, in today's landscape, is akin to using a chainsaw to slice bread.
Inherent Risks to the IN/OUT Workflow
Flexibility, transparency, and attorney review costs. In the workflow associated with the old-world model, a case team typically works directly with an end-client IT contact to broadly collect source data, then transfers it to a provider for processing and an iterative culling process. More often than not, the service provider ingests the source data into its processing environment and culls it using whatever search functionality exists within the processing tool, or exports metadata and text only into a separate review platform to leverage more powerful culling methods. The case team then devises a list of search terms and transfers it to the provider, which runs the terms against the indexed data set and returns a search hit report. The case team reviews the report, which contains only the number of hits for each term or phrase, and modifies the terms wherever the count of records returned is either overly inclusive or overly narrow. This rinse-and-repeat effort continues until the case team is comfortable with the results on the search term report.
Many studies conducted over the years have shown that relying strictly on Boolean searching, especially when those devising the terms are largely stabbing in the dark, returns only a fraction of the actual relevant material. Setting that aside for the moment and looking strictly at the logistics, working within one platform that indexes data its own way and then exporting to another that uses a slightly different method immediately inserts one of many points of failure. A single individual is often tasked with performing the searches, with limited opportunity for the requestor to spot-check or quality control the efficacy of the results. Putting eyes on the actual documents within the processing environment is often limited, cumbersome, or impossible, because these tools were designed to extract and convert data, not to search, analyze, or review the results.

Second, due to the black-box nature of this workflow, once the review team settles in to putting eyes on the documents it usually becomes apparent that additional searches must be run against the source data set. This adds considerable disruption to the cadence of a review and introduces complexity: the practitioner performing the searches must exclude from any new export the records that already hit on previous searches. It has become the rule, not the exception, that new information discovered during the data analysis and review phases warrants going back to the well to identify additional records. In the example provided above, the original 100 GBs ported over to the review environment quickly becomes 200 to 300.
In many cases, after going back to the well a couple of times, case teams simply ask that all data originally processed be published to the review environment to reduce turnaround time should the need arise again.
The list goes on, but the biggest pain point almost always comes down to the cost of reviewing these documents. Eyes-on attorney review typically makes up $0.75 to $0.80 of every dollar spent on initial first-pass discovery efforts. This outdated workflow has proven to return an overly broad set of non-relevant material to reviewers, often organized in a linear or chronological manner. In turn, this reduces the pace at which reviewers can get through document sets.
Assume 10,000 records are reviewed for relevancy and light issue tagging at a pace of 40 documents per hour and a rate of $55 per hour. It would take 250 total hours, or roughly 30 eight-hour reviewer days, to get through the initial first pass, totaling $13,750. Even the slightest increase in the pace of review can have a significant impact. Increasing the pace to 45 documents per hour reduces the overall cost to roughly $12,200, an 11% reduction. Combine this with a modest 10% decrease in the number of non-relevant documents presented to the reviewer, down to 9,000 records, and the cost comes down to $11,000.
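The arithmetic above can be sketched in a few lines of Python. The figures come straight from the example; the function name is just illustrative:

```python
def review_cost(docs, docs_per_hour, hourly_rate):
    """First-pass review cost: hours required times the attorney's hourly rate."""
    hours = docs / docs_per_hour
    return hours * hourly_rate

# Baseline: 10,000 docs at 40 docs/hour and $55/hour -> 250 hours, $13,750
baseline = review_cost(10_000, 40, 55)

# Faster pace alone: 45 docs/hour -> ~222 hours, roughly $12,200
faster = review_cost(10_000, 45, 55)

# Faster pace plus 10% fewer non-relevant docs -> 200 hours, $11,000
culled = review_cost(9_000, 45, 55)
```

Small improvements to either input compound quickly, which is the point of the example: pace and precision together cut the baseline by 20%.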
Proposed ALL-IN Alternative
The advanced workflows readily available today, those that augment Boolean searching with more advanced analytical tools, have yet to see large-scale adoption. This is largely because providers, law firms, and corporate legal departments alike work within the toolset available to them. The hard costs associated with propping up an enterprise-class processing and hosting environment are significant, requiring a fair amount of investment in hardware, personnel, and third-party software licensing. That doesn't even begin to account for the expertise needed to maintain all of the back-end systems that must talk to each other just to keep the lights on, or the expertise needed to step through each phase of the discovery process within these tools. Many made investments toward this end years ago and are hesitant to scrap those efforts and make the leap to something new that requires a concerted change-management effort.
To that end, the move to newer workflows will take some time. However, the introduction of solutions such as RelativityOne has leveled the playing field in a way that allows one to make the adjustment gradually, without the high barrier to entry traditionally associated with a mass migration. Relativity has flipped the script by integrating processing functionality directly into the same software stack as the review environment. The new workflow simply ingests all data and publishes it into the same review workspace that many have grown comfortable navigating. Once published, the data can be indexed not just for traditional text and fielded search, but also for the entire suite of advanced analytical tools: concept searching, near-duplicate detection, active learning, foreign language detection, and much more.

This presents an unprecedented level of transparency to the case team, which was previously locked out of putting eyes on the records before they were batched to review, and it provides mechanisms that increase the precision of culling methodologies. The turnaround time from raw data in hand to reviewers reaching the most pertinent information has been reduced from weeks to mere days. Again, the list goes on, and it includes ancillary benefits that often go overlooked, such as a uniform way of calculating gigabytes and no longer requiring multiple copies of the same data in disparate systems. At the end of the day, the major benefit is being able to make more informed decisions about what data gets presented to reviewers. Done correctly, this reduces the overall number of documents to be reviewed, increases the pace of review, and limits inaccuracies by grouping records together in a way that works with a reviewer's stream of consciousness rather than against it.
The rub, as many see it, is the excess storage footprint in the hosted environment. In the previous example, assuming all 500 GBs ingested resulted in 500 GBs published to the review environment, it is easy to see how this workflow might be dismissed out of hand. However, from a service provider perspective, the days of maintaining a data center, keeping up with information security requirements, managing multiple technology stacks, and employing the requisite personnel to keep all the trains running on time are a thing of the past. More importantly, the hard costs of processing data, licensing expensive third-party software such as SQL Server, maintaining disaster recovery systems, and a host of other requirements are largely eliminated or drastically reduced by migrating to RelativityOne. In turn, this eliminates costs that would otherwise be passed along to the consumer.
The proposed alternative to the IN/OUT pricing model, often referred to as the ALL-IN model, eliminates the up-front costs typically associated with processing data entirely. In addition, hard costs associated with TIFF conversion, OCR processing, creating productions, and advanced analytics are fully baked into the purchase price of RelativityOne. The justification for the nickel-and-dime charges typically associated with data discovery disappears once the leap is made to a fully integrated platform that includes all of the technology and functionality under one umbrella. Whether or not to leverage the best technology available is no longer a decision based on price, but one based on efficacy and flexibility. The new focus becomes reducing review costs in a more meaningful and defensible way.
Take the scenario outlined above through the entire lifecycle inserting some basic assumptions:
- 500 GBs in = $25 x 500 = $12,500
- 100 GBs out = $100 x 100 = $10,000
- 100 GBs hosted = $15 x 100 = $1,500/month
- 100 GBs batched to review
- 2000 docs per GB = 200,000 docs to be reviewed
- 40 docs reviewed per hour = 200,000/40 = 5,000 hours
- Attorney hourly rate = $55 x 5,000 = $275,000
- Grand Total assuming 9 months of hosting = $311,000
Under the new ALL-IN model:
- 500 GBs in = $0.00
- 500 GBs out = $0.00
- 500 GBs hosted = $15 x 500 = $7,500/month
- 25 GBs batched to review
- 2000 docs per GB = 50,000 docs to be reviewed
- 45 docs per hour = 50,000/45 = 1,111 hours
- Attorney hourly rate = $55 x 1,111 = $61,105
- Grand Total assuming 9 months of hosting = $128,605
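The two scenario totals can be reproduced with a small helper. This is a sketch for comparison only: it assumes a $15/GB monthly hosting rate in both models and rounds review hours to the nearest whole hour, as the worked example does.

```python
def scenario_total(gb_in, in_rate, gb_out, out_rate,
                   gb_hosted, host_rate, months,
                   docs, docs_per_hour, hourly_rate):
    """Lifecycle cost: processing in/out + hosting over the term + attorney review."""
    processing = gb_in * in_rate + gb_out * out_rate
    hosting = gb_hosted * host_rate * months
    review_hours = round(docs / docs_per_hour)  # e.g. 50,000/45 -> 1,111 hours
    return processing + hosting + review_hours * hourly_rate

# IN/OUT model: 500 GB in at $25, 100 GB out at $100, 100 GB hosted for 9 months,
# 200,000 docs reviewed at 40 docs/hour and $55/hour.
in_out = scenario_total(500, 25, 100, 100, 100, 15, 9, 200_000, 40, 55)

# ALL-IN model: no per-GB in/out charges, 500 GB hosted for 9 months,
# 25 GB batched to review -> 50,000 docs at 45 docs/hour.
all_in = scenario_total(500, 0, 500, 0, 500, 15, 9, 50_000, 45, 55)
```

Even with five times the hosted footprint, the ALL-IN scenario comes out well under half the IN/OUT total, because attorney review dominates the overall spend.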
Per unit and hourly charges provided in the examples above are meant to serve as a baseline for comparison purposes only, not an advertisement of actual rates offered by a particular service provider.
[callout-box ctatitle="Looking to lower your eDiscovery costs?" ctacopy="There is little debate that eDiscovery can be a daunting and expensive endeavor. This white paper breaks down steps you can take throughout the EDRM to lower overall costs." ctalink="http://marketing.d4discovery.com/acton/media/8501/lower-ediscovery-costs-ebook" ctalinktext="Get your copy of the white paper to get started → " /]