What is the cost of a false alarm when it comes to data issues?

Data Army
By Michael Ogilvie, Director, Data Army
Friday, 06 September, 2024


As Australian businesses become more data-driven, the number of internal data producers and data consumers is increasing. Proficiency levels will naturally vary as people learn to work with data and become comfortable incorporating it into their processes and decision-making over time.

With the increased number of people working with data, some questions concerning its veracity and accuracy are to be expected. How these questions are handled will be a key test of a business’s data strategy, processes and operational maturity.

Where there are signs of smoke, is there necessarily fire?

What often happens in business is that someone finds something they perceive to be an issue, conducts some brief analysis on it and forms an assumption about what the problem is. Often this assumption about the cause or scale of the issue is incomplete, and in some cases it can be completely incorrect, but this is not picked up before the issue is escalated to a broader group of people and senior leaders. Slack messages start firing, meetings are called, salespeople are informed, and eventually it is brought to the attention of a domain expert, who validates whether it is a true issue and can accurately gauge the impact.

The cost and time wasted in this process can be significant.

According to one study, employees already spend about 18 hours a week in meetings (managers about 20% more than that), and about one-third of those meetings are unnecessary. Premature meetings called to chase shadows in the data only add to that waste.

In other domains, organisations don’t sit idle when it comes to the productivity cost of false alarms.

For example, almost half of fire brigade callouts in NSW are the result of automatic alarms triggered by “non-emergencies such as burnt toast and steam”. Fire services (experts) have to attend each one to rule out an actual fire. In NSW, a false alarm callout can cost $1600; in Victoria, it’s $638 per truck per 15 minutes. This is not counting the productivity cost of having to evacuate the building each time the alarm goes off. An office can keep letting that happen, pay the fee and incur the productivity hit every time — or it can put in measures to prevent the false alarm in the first place (such as a ban on the use of toasters in common kitchens).

How this relates back to data false alarms

As more businesses go down the path of democratised data use, self-service access to data, and low/no-code analytics and AI, the number of false alarms in the data domain will increase.

Business leaders need to bring the same action mindset they have applied to false fire alarms to the data domain if they want to prevent or douse potential ‘fires’ caused by perceived data issues at the first signs of smoke. The aim is not to discourage or deter people in the business from speaking up and raising potential issues with data. It’s to create a documented triage process to be followed whenever a potential data issue arises, and to weed out any misunderstandings and mistakes (false alarms).

The data issue triage process

A data issue triage process should ideally be included as part of the business’s data strategy and clearly communicated to everyone who works with data. The first aspect of an effective triage process is that it should place an onus on whoever raises the issue to help gather information and evidence that the problem exists, rather than simply declare it as an issue and abandon it.

In legal proceedings, the party that brings the case carries a “burden of proof” to show they’re correct in their assertions. A data issue at its most severe could have legal ramifications, but most issues will be more benign. Still, the “burden of proof” is a useful starting point for raising an issue internally. It ensures only problems that are repeatable and understood (to an extent) are escalated.

The reporter who found the potential issue may perform their own validation, such as confirming if the issue is replicable and persistent, or seek assistance from a more technical colleague.
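
By way of illustration, a first-pass validation might look something like the sketch below: a minimal Python example, assuming pandas is available and using a hypothetical orders table with order_id and total_amount columns that the reporter believes contains negative totals.

    import pandas as pd

    def validate_issue(run_check, runs: int = 3) -> dict:
        """Re-run a suspected data quality check and collect evidence.

        run_check is any callable that re-queries the source and returns the
        offending rows as a DataFrame; all names here are hypothetical.
        """
        evidence = []
        for attempt in range(1, runs + 1):
            bad_rows = run_check()
            evidence.append({
                "attempt": attempt,
                "rows_affected": len(bad_rows),
                "sample_ids": bad_rows["order_id"].head(5).tolist(),
            })
        return {
            "replicable": all(e["rows_affected"] > 0 for e in evidence),
            "evidence": evidence,
        }

    # Hypothetical check: the reporter believes some orders have negative totals.
    orders = pd.DataFrame({"order_id": [1, 2, 3], "total_amount": [120.0, -45.0, 80.0]})
    result = validate_issue(lambda: orders[orders["total_amount"] < 0])
    print(result["replicable"])  # True: the same rows show up on every run

If the check comes back empty on a fresh run, the ‘issue’ may have been a transient load or a misread dashboard rather than a genuine defect.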

In addition to validating the technical nature of the issue, this exercise can also help gauge severity in terms of the potential business impact if the issue remains unaddressed. Even if a data issue does exist, not all issues are equal. An issue that is proven to cost the business significant revenue or cause reputational damage would be given a higher priority than one affecting a single data point that may never be seen by or impact a customer.

For some businesses, an accuracy issue may fall within acceptable, defined and documented margins of error. For example, on Google Street View, people infrequently raise issues with their properties being incorrectly blurred out. At Google’s scale, investigation and correction of these issues is likely to be prioritised below data issues impacting entire geographical locations. The business impact of having some incorrectly blurred images on the platform is relatively small, and these can be corrected when time allows.
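
One way to make these prioritisation calls repeatable is a simple severity rubric agreed with the data owner. The Python sketch below is illustrative only; the tiers, inputs and revenue threshold are assumptions, not a standard.

    from enum import Enum

    class Severity(Enum):
        CRITICAL = 1  # significant revenue or reputational exposure
        HIGH = 2      # customer-facing, but the impact is contained
        LOW = 3       # internal only, or within documented error margins

    def classify(revenue_at_risk: float, customer_facing: bool,
                 within_error_margin: bool) -> Severity:
        """Map validated evidence to a priority tier; thresholds are placeholders."""
        if revenue_at_risk >= 10_000:
            return Severity.CRITICAL
        if customer_facing and not within_error_margin:
            return Severity.HIGH
        return Severity.LOW

    print(classify(revenue_at_risk=0, customer_facing=True, within_error_margin=True))
    # Severity.LOW: customer-facing, but inside the documented margin of error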

If an issue passes validation, and minimum evidentiary criteria are met, then the issue should follow a documented process to hand it over to the data owner or data team.

The issue triage process may be documented through a support ticketing system; that is, to raise a ticket about a data issue, the person must include certain information they’ve collected about the issue. To ensure the robustness of the triage process, any attempt to raise a ticket without the required information can be automatically rejected at this stage.
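
In practice this gate is usually a handful of mandatory fields on the ticket form; tools such as Jira or ServiceNow can enforce it through form configuration rather than code. The Python sketch below shows the same idea with a hypothetical field list.

    REQUIRED_FIELDS = {
        "description",         # what looks wrong, in plain language
        "dataset",             # the table, report or dashboard affected
        "steps_to_reproduce",  # how the reporter replicated the issue
        "evidence",            # query results, screenshots or row counts
        "suspected_impact",    # who or what is affected if the issue is real
    }

    def accept_ticket(ticket: dict) -> tuple[bool, list[str]]:
        """Reject any ticket raised without the agreed minimum evidence."""
        missing = sorted(field for field in REQUIRED_FIELDS
                         if not str(ticket.get(field, "")).strip())
        return len(missing) == 0, missing

    accepted, missing = accept_ticket({"description": "Order totals look wrong"})
    print(accepted, missing)
    # False ['dataset', 'evidence', 'steps_to_reproduce', 'suspected_impact']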

External teams and management should only be engaged on a data issue after an effective triage process has taken place, signalled by the raising of a support ticket, not at the first signs of smoke.

Leadership teams have a right and responsibility to demand statistics and facts from a triage process before addressing data issues raised. They also have a responsibility to back the validation and escalation process that’s in place, and send anyone who tries to subvert it back to complete the process first.

When this happens, a general consensus about the right way to raise data issues should help to avoid time and productivity being wasted on false alarms or issues that don’t warrant attention. Importantly, it also allows the data team to focus on their core deliverables and reduces distractions.
