Data deduplication is the process of removing duplicate records produced during the collection phase of a document review. Given the myriad of data sources – email, IM, text messages, and chat, in addition to traditional documents – it is increasingly difficult to accurately identify and remove duplicated information. This webinar discusses that challenge, as well as an innovative new technique to conquer it called ‘Metadata Matching.’
In today’s typical enterprise environment, data collection points extend far beyond an Exchange server and a share drive. From the cloud and mobile data to social media and chat, collection points are incredibly diverse, leading to data-related issues such as over-collection and the proliferation of duplicate records.
Redundant data has major consequences in a document review. From the added cost of reviewing the same documents and the risk of coding them inconsistently, to the time and effort wasted in QC and in extra checks during privilege productions, duplicate records create more work and cost more money.
To clean up those “dupes,” deduplication is required. Deduplication is the process by which the processing tool gathers strings of data, converts those strings into hash values, compares those values, identifies matching records, and flags one record as unique and the others as duplicates.
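A minimal sketch of that hash-compare-flag cycle is shown below. The function name and the use of MD5 are illustrative assumptions (processing tools vary in which hash algorithm and which input fields they use), but the flow matches the description: hash each record, compare the values, and flag the first occurrence as unique and later matches as duplicates.

```python
import hashlib

def flag_duplicates(records):
    """records: list of (name, content) pairs.
    Returns (name, hash, status) triples, where the first record
    bearing each hash is flagged 'unique' and later matches 'duplicate'."""
    seen = set()
    results = []
    for name, content in records:
        # Convert the record's data into a hash value for comparison.
        digest = hashlib.md5(content.encode("utf-8")).hexdigest()
        status = "duplicate" if digest in seen else "unique"
        seen.add(digest)
        results.append((name, digest, status))
    return results
```

For example, two files with byte-identical content produce the same hash, so the second is flagged as a duplicate while a file with different content remains unique.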
As data is managed at the source, subtle changes can occur that render standard deduplication ineffective. Because the hashing algorithm no longer recognizes the files as duplicates, matching records continue to enter review populations, and reviewers see the same file over and over. In this webinar, we will discuss this problem, some of its causes, and how Special Counsel is addressing it with its metadata matching workflows.
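Special Counsel’s actual workflow is not detailed here, but the general idea of matching on metadata rather than raw bytes can be sketched generically: build a match key from normalized metadata fields, so that records whose content hashes diverged due to source-side changes can still be grouped. The field names below are hypothetical choices for illustration only.

```python
import hashlib

# Hypothetical fields for illustration; a real workflow would tune these.
MATCH_FIELDS = ("from", "to", "subject", "sent")

def metadata_key(record):
    """Build a match key from normalized metadata fields instead of raw bytes,
    so trivial source-side changes do not break the match."""
    parts = [str(record.get(field, "")).strip().lower() for field in MATCH_FIELDS]
    return hashlib.sha1("|".join(parts).encode("utf-8")).hexdigest()

def find_matches(records):
    """Group records by metadata key and return the groups with more than one member."""
    groups = {}
    for rec in records:
        groups.setdefault(metadata_key(rec), []).append(rec["id"])
    return [ids for ids in groups.values() if len(ids) > 1]
```

With this approach, two copies of an email whose bodies were altered in transit (and so hash differently) would still match on sender, recipient, subject, and sent date.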