
Never Assume Searches Are Accurate Without Verifying

Earlier this week, I shared with the paid subscribers of my M365 Newsletter a detailed description of something I found in the new M365 Purview eDiscovery interface. Using the search builder to check off the exact options I had used in the old Purview Premium eDiscovery interface gave me unexpected results.

Digging in, I found that the query differed from the previous version even though I had made the selections the same way in the tool. The new query wasn't correct: it added extra conditions, pulling in many hits that did not match what I thought I was asking for in my search.

I won't get into the details here, but I do want to discuss eDiscovery technology more broadly. Specifically, I want to use this as an example of why we shouldn't trust technology to behave the way it used to if we haven't vetted it.

In my situation, I was using the new preview version of Microsoft's eDiscovery tool. I know it's in preview status and, thus, could be buggy. On the other hand, the search query builder looks much like it did before, and most importantly, I wasn't asking it to build a search that I couldn't make using the old version. It would have been easy to assume the query in use was the same one I had used previously. I had a couple of advantages that helped me recognize that something wasn't right, though.

  1. This test environment is one where I know everything that exists. Getting over 500 hits on a search that previously had 12 was a red flag.
  2. I had already done a similar search using the previous version and seen the correct results.
  3. It was a test where I looked for something particular instead of a real-world collection event where I might not think twice about what was being collected.

I can't help but wonder: if someone had simply asked me to collect data about Copilot prompts, would I have run the query, collected the results, and sent them along to the next phase of the process? I want to think I would have done some quality control, sampled the results, and noticed the items that didn't match the query I thought I had created, but let's be honest: with many requests and tight deadlines, those QC steps might get skipped some days. We'd hope not. I hope this example helps you understand why we shouldn't skip them.
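That kind of spot check can be partially automated. Here's a minimal sketch of sampling-based QC, assuming the collected results have been exported as records with a text field; the field names, the `"Copilot"` keyword check, and the helper itself are hypothetical, not part of any Purview API:

```python
import random

def sample_for_qc(results, matches_intended_query, sample_size=25, seed=42):
    """Randomly sample collected items and flag any that don't match
    the query we *thought* we ran. Hypothetical QC helper."""
    rng = random.Random(seed)
    sample = rng.sample(results, min(sample_size, len(results)))
    mismatches = [item for item in sample if not matches_intended_query(item)]
    return sample, mismatches

# Toy data: we intended to collect only items mentioning "Copilot".
results = [{"id": i, "text": t} for i, t in enumerate(
    ["Copilot prompt log", "Unrelated Teams chat", "Copilot response"] * 5)]

sample, mismatches = sample_for_qc(
    results, lambda item: "Copilot" in item["text"], sample_size=10)
if mismatches:
    print(f"{len(mismatches)} of {len(sample)} sampled items do not match "
          "the intended query -- investigate before producing.")
```

Even a small random sample like this would have surfaced the 500-vs-12 discrepancy immediately, long before the results moved downstream.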

Technology, especially M365 technology, changes all the time. It’s vast and complicated, and things get broken when new versions are rolled out. When dealing with eDiscovery, security, privacy, etc., we have to stay on top of those changes to understand new features and ensure the old ones still work the same way.

Don't assume the old ones will always work the same way. I can tell you from this and plenty of other experiences that they often don't. Assuming that they still work the same because they look the same is a good way to have the technology bite you in the ass. Collecting 45 times more data than was necessary in a real-world scenario would have been embarrassing and costly. A quick sampling would have saved me.

Even better, having time to test the tools would have helped me find the bug and change my process to account for the incorrect query before we were on a deadline.

If you work with technology, test.

If you work with others in charge of your technology, give them time to test. These tests can save you a lot of pain in the future.

 
