Programme Resources

Nutshell: Slave to the algorithm: can data and AI improve our decision-making?

Written by Future Talent Learning | Aug 5, 2022 11:11:53 AM

The potential of data and AI to help us make well-informed decisions is undeniable. So, what role now for human intelligence? And should we always trust what the data is telling us?

A common trope of sci-fi films is for the pilot of a spaceship, faced with an asteroid storm or a hostile alien fleet, to bypass the superior processing power of the ship’s auto-pilot and instead assume manual control. He or she would then navigate the hazard on wits and instinct. Win the battle. Save the world. Yada, yada, Yoda.

 

In the real world, relying on human intuition and common sense won’t always meet with such a happy outcome – although we ignore them entirely at our peril. Just ask the Belgian truck driver who blindly followed his sat nav into an unsuitable cul-de-sac in the Cornish town of Wadebridge. Putting his foot down in a panic, he ended up ploughing over a mini roundabout, getting a car trapped under his lorry, and destroying five more vehicles.

 

Clearly, blindly trusting what the data tells us is just as risky as simply ignoring it. Instead, we should aim for a balance – using relevant, verified, analysed data, while retaining a healthy (human) scepticism and an awareness of our own inherent biases. Here’s how.

 

Deciding what to measure

 With data comes the power to generate insights, create new connections, and identify emerging trends and new business opportunities.

 

According to a PwC survey of more than 1,000 senior executives, highly data-driven organisations are three times more likely to report significant improvements in their decision-making than those that rely less on data. So, the incentive is there. But the problem, as the writer Nassim Taleb explains, is that the “needle” we are looking for comes “in an increasingly larger haystack”.

 

It helps to remember that the data we’re looking for must be both relevant and accurate. This begins with our strategy and goals, and a clear sense of the problem we want to solve. Understanding ‘the questions in need of an answer’ allows us to collect ‘this data to answer that question’ – not everything ‘just in case’.

 

Making good decisions about what data to gather is fundamental – and to get it wrong can end in disaster.

 

Take the example of Robert McNamara, US secretary of defence during the Vietnam War. McNamara had made his name at the Ford Motor Company, where he’d met with huge success by choosing and measuring data points and then ruthlessly optimising related processes to improve efficiency, cost and quality. However, when he brought this same approach to the Vietnam War effort, he made a critical mistake by choosing body count as the main metric.

 

With hindsight, it’s easy to see that body count is a very poor measure of how a deeply human event such as a war is actually progressing. For example, inaccuracies in the estimate of enemy casualties are almost inevitable, especially in a situation of guerrilla warfare and a context of widespread resistance.

 

In this case, things went so wrong that the sociologist Daniel Yankelovich coined the term “the McNamara Fallacy” to describe a (usually disastrous) decision based solely on metrics, with all (pertinent and pressing) qualitative factors ignored.

 

Yankelovich’s fallacy has four main facets:

 

  1. Measure whatever can be easily measured – this is ok as far as it goes…

  2. Disregard that which cannot be measured easily – this is artificial and misleading…

  3. Presume that which cannot be measured easily is not important – this is blind…

  4. Presume that which cannot be measured easily does not exist – this is suicide, at least in the context of war…

 

As leaders, we should be particularly mindful of point one on this list: all too often, it’s what’s easy to measure that gets measured, causing us to optimise a white elephant while the real opportunities (and issues) go ignored. Or, as the saying often attributed to Einstein goes: “Not everything that can be counted counts, and not everything that counts can be counted”.

 

Collecting and analysing the right data

Once we have decided what data we need, we can begin to gather it. And according to Marissa Mayer, former president and CEO of Yahoo!, when it comes to data collection, “the sooner the better is always the best answer”.

 

Methods include survey responses, user testing and observation, and sophisticated business intelligence software and AI technologies. However, while such technologies make it easier to collect, extract, format and analyse data, we may still need a (human) data analyst to ‘clean’ and organise it for us – for example, to erase data that is outdated, duplicated or irrelevant to our specific goals and outcomes.
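As a purely illustrative sketch – the file name, column names and cut-off date below are invented, not taken from any real dataset – this is the kind of tidying an analyst might do with a tool such as pandas:

```python
# Hypothetical example: 'cleaning' raw survey data before analysis.
# Drops duplicate respondents, outdated responses and a column that is
# irrelevant to the question we are trying to answer.
import pandas as pd

responses = pd.read_csv("survey_responses.csv", parse_dates=["submitted_at"])

cleaned = (
    responses
    .drop_duplicates(subset="respondent_id")   # remove duplicated entries
    .query("submitted_at >= '2022-01-01'")     # discard outdated responses
    .drop(columns=["favourite_colour"])        # drop data irrelevant to our goal
)
```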

 

Sifting out the most useful data then involves two primary forms of analysis:

 

Quantitative (based on measurement, i.e. numbers and statistics).

 

Qualitative (based on observation, i.e. interviews, videos, and anecdotes).

 

Either way, we should try to categorise and present our data in a way that adds value to the decision-making process. This is often more straightforward with quantitative data, as numbers tend to ‘speak for themselves’. Yet even quantitative data can be presented in a way that helps to frame our thinking.

For example, if a survey reveals that a vast majority of our staff members (quantitative data) are in favour of home and hybrid working, we might have the numbers that make investigating this option a worthwhile endeavour. And if we can see from the comments on that survey (qualitative data) that a repeated complaint is the lack of nearby lunch options, we might have the evidence to support our theory that food trucks would do more for morale than refurbishing the staffroom.

 

Visualising our data in the form of charts and graphs can be useful too. Data analysis is, at its heart, an attempt to find a pattern within, or a correlation between, different data points, from which we can draw insights and conclusions. Tools such as innovative dashboard software help us to reveal these patterns while also making our data more engaging. And information graphics can also be a powerful way to make our story connect.
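To make this concrete, here is a minimal, hypothetical sketch (the numbers are invented) of turning survey results into a simple chart with pandas and matplotlib – the kind of visual that can reveal a pattern faster than a table of figures:

```python
# Hypothetical example: charting invented survey data to look for a pattern.
import pandas as pd
import matplotlib.pyplot as plt

survey = pd.DataFrame({
    "office_days_per_week": [0, 1, 2, 3, 4, 5],
    "staff_in_favour_pct":  [62, 71, 78, 55, 34, 21],
})

ax = survey.plot(
    x="office_days_per_week",
    y="staff_in_favour_pct",
    kind="bar",
    legend=False,
)
ax.set_xlabel("Office days per week")
ax.set_ylabel("% of staff in favour")
ax.set_title("Support for each working pattern (illustrative data)")
plt.tight_layout()
plt.show()
```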

 

Take Periscopic’s 2013 visualisation of the ‘stolen years’ caused by gun deaths in the US. This is a masterclass in conveying data in a way that connects with its audience. Suddenly, 9,204 deaths become 390,852 lost years – an average of more than 40 years per person – which hits home even harder.

 

However, as we’ll see below, we need to be mindful that as soon as we start to tell a story with our data, that data is no longer entirely objective.

 

The devil is in the data

Much of the mental work we do is unconscious, which makes it difficult to verify the logic we use when we make a decision. We can even be guilty of seeing the data we wish were there instead of what’s actually in front of us. Or of seeing only what we want to see instead of drilling down into the detail.

 

Here’s what we should be mindful of to avoid these and other pitfalls.

Seeing (only) what we think is there

As economist and author Ronald Coase famously said: “Torture the data, and it will confess to anything.” 

 

A rather cheeky example of such a ‘confession’ is a paper by cardiologist Robert Yeh of Harvard Medical School, published in the British Medical Journal in 2018. It concerned a randomised controlled trial in which 23 people jumped from an aircraft – some wearing a parachute and the rest an empty backpack.

 

The main finding of the trial was that a parachute made no difference at all to a participant’s chances of survival; the outcome for those with empty backpacks was identical.

 

Surprising. Except... hidden in the middle of the paper was a barely noticeable caveat: the participants might have been at lower risk of death or major trauma because they jumped from an average altitude of 0.6m (standard deviation 0.1m) from an aircraft moving at an average of 0 km/h (standard deviation 0). That is to say, the plane they jumped from was stationary and on the ground. Without reading that part, we would assume the participants had jumped from a plane that was airborne and moving at hundreds of miles an hour.

 

“Of course, that is a ludicrous result,” says Yeh. But he wanted to show how – when we anticipate the outcome of our research – we can also cherry-pick the participants and the circumstances to achieve the results we expect to see. In other words, data can only answer the question it has been asked.

 

“It’s a little bit of a parable to say we have to look at the fine print,” says Yeh. “We have to understand the context in which research is designed and conducted to really properly interpret the results.”

 

Trusting our hunches

Another barrier to sound data-based decision-making is a disinclination to override our intuition or gut instinct, especially when the idea of the intuitive genius still holds such sway.

 

From the statement often attributed to Albert Einstein that ‘the intuitive mind is a sacred gift’, to Steve Jobs’ encouragement to follow your heart and intuition, we all see ourselves to some degree as that successful spaceship pilot, forging a path through the asteroid storm on the strength of our superior hunches.

 

A publication from BI-Survey (a major annual study into the selection and use of analytics and business intelligence tools) shows that 58% of the companies it surveyed base at least half of their regular business decisions on gut feel or experience, rather than on data and information.

 

However, while our gut may suggest a particular way forward, it is often data that allows us to verify, understand and quantify that hunch. And where possible, it’s decisions backed by metrics rather than hunches that build a stable backbone for our business operations.

 

Relying on past solutions

Another issue is our tendency to make decisions based on limited information (such as insufficient or irrelevant data), or on the basis of past experiences that are no longer relevant.

 

One of the most cited examples of this is Dick Fuld, who ‘saved’ Lehman Brothers after the Long-Term Capital Management crisis in 1998, only to see the same tactics fail 10 years later – because the cheap credit and lax lending standards that fuelled the housing bubble presented a far more complex problem.

 

While analysing past decisions can be a useful guide for the future, it can also be a sign of cognitive bias that leads us to think that what worked before will work again now.

 

If we are always looking back, there’s a real risk we’ll miss what’s right there in front of us, which is why we must always complement past experience with current and robust data.

 

Being ignorant of bias

In her book Invisible Women, Caroline Criado Perez introduces us to ‘Reference Man’ – aka a Caucasian man aged 25 to 30 weighing 70kg – who somehow has the superpower to represent humanity as a whole.

 

Of course, car manufacturers basing crash test dummies on Reference Man aren’t setting out to kill more women in car crashes. But as Criado Perez argues, it’s this type of "unthinking" – one that conceives of humanity as default male – that has put so many women at elevated risk.

 

There are two lessons here. First, we should always be mindful of the data we are not seeing: what is missing because no one has thought to include it.

 

And just as importantly, there is no such thing as objectivity. Everyone is biased – and there are plenty of biases to choose from.

 

When it comes to data, for example, we may fall foul of confirmation bias, by which we tend to favour information that confirms the beliefs we already have, right or wrong, while ignoring information to the contrary.

 

In an article in TechNewsWorld, former IBM employee Rob Enderle names Microsoft as a culprit here, noting a past penchant for commissioning reports that lent credibility by confirming decisions that had already been made.

 

He also notes the partial sale of IBM’s ROLM division to Siemens and the ‘forgotten’ internal report which predicted that the sale would be a catastrophic failure. The matter had already been decided – and it ended up costing the company more than $1 billion.

 

Even in the face of persuasive data, we may succumb to cognitive inertia (an inability to adapt to new conditions, preferring instead to stick to old beliefs), groupthink (the desire to be ‘part of the group’ by siding with the majority), or optimism bias (by which we make decisions based on the belief that the future will be much better than the past).

 

Strategies for overcoming biased behaviour include:

  • collaboration: bouncing ideas off other people and encouraging our colleagues to flag when they see us exhibiting bias.

  • actively seeking out conflicting information, by asking the right questions to raise and address potential biases and removing preconceived notions from the decision-making process.

  • making data accessible to everyone – for example, via a centralised dashboard – to welcome a variety of perspectives.

  • being unafraid to step back and to rethink our decisions.

Writing opinions into algorithms

In contrast to biased humans, algorithms are often presented and marketed as objective fact.

 

But, as the mathematician, data scientist and author Cathy O’Neil points out, a much more accurate description of an algorithm is ‘an opinion embedded in math’. In other words, algorithms are not inherently fair, because they pick up whatever bias we give them.

 

‘To build an algorithm we need only two things, essentially: a historical dataset and a definition of success,’ says O’Neil, who uses the example of an algorithm to cook dinner for her family to illustrate just how subjective that idea of success can be.

 

“I’m the one who is building the meals, so I get to decide a meal is successful if my kids eat vegetables,” says O’Neil. “My kids, if they were in charge, would have defined it differently.”

 

In other words, every time we build an algorithm, we curate our data, we define success, and we embed our values. As a result, algorithms typically make things work for their builders: we optimise for success on our own terms – often profit – and rarely for fairness to those who might be affected.
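To make the point concrete, here is a toy sketch – not O’Neil’s own code, with meals and scores invented purely for illustration – showing how the same data yields a different ‘best’ answer depending on whose definition of success is written into the algorithm:

```python
# Toy illustration: the 'definition of success' is a value judgement we encode.
meals = [
    {"name": "stir-fry",  "vegetables": 3, "kid_appeal": 2, "cost": 4.0},
    {"name": "pizza",     "vegetables": 1, "kid_appeal": 5, "cost": 6.0},
    {"name": "ice cream", "vegetables": 0, "kid_appeal": 5, "cost": 3.0},
]

def parent_success(meal):
    # The builder's values: vegetables matter most, cost counts against a meal.
    return meal["vegetables"] * 10 - meal["cost"]

def kid_success(meal):
    # A different stakeholder, a different definition of 'success'.
    return meal["kid_appeal"] * 10 - meal["vegetables"]

print("Parent's algorithm picks:", max(meals, key=parent_success)["name"])  # stir-fry
print("Kids' algorithm picks:", max(meals, key=kid_success)["name"])        # ice cream
```

Same data, two ‘algorithms’, two different winners – the only thing that changed is whose values were embedded in the code.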

 

That’s why, argues O’Neil, we must inject ethics into the process. And that requires intelligence that’s real and human – not artificial.

 

The importance of humans in the loop

Removing bias from humans is vital as we can’t remove humans from the process of creating algorithms, even in this era of machine learning.

 

While many AI-enabled devices can now perform tasks independently, developing such machines is not possible in the first place without human intervention. That’s why ‘Human-in-the-Loop’ (HITL) machine learning remains essential.

 

For example, in the training and testing stages of building an algorithm, a machine or computer system is unable to solve a problem without a human in the loop, because it cannot understand raw data such as texts, audio, video, images, and other content.

 

Humans must annotate or label this data to make it understandable and to create the continuous ‘feedback loop’ which allows the algorithm to ‘learn’ and give better (faster and more accurate) results each time.
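As an illustrative sketch only – the placeholder model, confidence threshold and ask_human helper below are hypothetical, not any particular library’s API – a human-in-the-loop cycle often looks something like this: the system labels what it is confident about, routes uncertain cases to a person, and the human’s answers are added to the training data for the next round.

```python
# Hypothetical human-in-the-loop labelling cycle.
labelled = [("great service", "positive"), ("never again", "negative")]
unlabelled = ["really helpful staff", "meh", "terrible delay"]

CONFIDENCE_THRESHOLD = 0.7

def predict_with_confidence(text):
    """Placeholder model: returns a (label, confidence) guess."""
    positive_words = {"great", "helpful", "good"}
    score = sum(word in positive_words for word in text.lower().split())
    return ("positive", 0.9) if score else ("negative", 0.4)

def ask_human(text):
    """Stand-in for routing an item to a human annotator."""
    return input(f"Label for '{text}' (positive/negative): ").strip()

for text in unlabelled:
    label, confidence = predict_with_confidence(text)
    if confidence < CONFIDENCE_THRESHOLD:
        label = ask_human(text)       # a human labels the cases the model is unsure about...
    labelled.append((text, label))    # ...and those labels feed the next training run
```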

 

So human input and perspective are not just desirable; they are also necessary.

 

Being fairly transparent and transparently fair

In an era in which we create as much information every couple of days as humanity produced between the dawn of man and 2003, we cannot ignore the power of big data – and the ethical questions it raises.

 

Author and MIT research fellow Michael Schrage reminds us that “greater knowledge of customers creates new potential and power to discriminate”. For example, we now have far greater ethnographic insight into customer behaviour and influence, which raises some pertinent questions.

 

For instance, he asks:

  • Where does added-value personalisation and segmentation end and harmful discrimination begin?

  • Does promotionally privileging one set of more profitable customers inherently and unfairly discriminate against another?

  • Is it good business – let alone fair – to withhold special offers from specific groups who are demonstrably less profitable?

 

Schrage argues that to answer these questions, companies must have data analytics that are both ‘fairly transparent and transparently fair.’ And this goes way beyond our obligation under data protection legislation to protect people’s right to privacy.

 

In Europe, if data tells us something about an individual, it is covered by the General Data Protection Regulation (GDPR), overseen in the UK by the Information Commissioner’s Office. Any organisation handling personal information must follow six important principles – from processing the data lawfully, fairly and in a transparent manner to ensuring appropriate security is in place to protect it against unlawful processing, accidental loss, destruction or damage.

 

Failure to comply with the GDPR can result in fines of up to 20 million euros – or 4% of global annual turnover, whichever is higher. But there are other, more positive reasons why we should meet our obligations. Effective handling of information can enhance business reputation and increase customer and employee confidence, while also reducing the risk of complaints.

 

The rewards (and pitfalls) of data-driven decision-making

However good our data is, it can only take us so far. Human judgement calls are still required.

 

Here are a few examples of companies that have used their data wisely and reaped the benefits, plus a couple that got things horribly wrong.

 

…Improved management at Google

A good example of data-driven decision-making in action is Google’s Project Oxygen, which sought to determine whether having (good) managers actually mattered.

 

The company mined qualitative data from more than 10,000 performance reviews and employee surveys, as well as its ‘Great Manager Award’, to identify the top eight behaviours that make a great manager – as well as three that don’t. Armed with this knowledge, Google revised its management training and supplemented the Great Manager Award with a twice-yearly feedback survey. Together, these efforts boosted managers’ median favourability scores from 83% to 88%.

 

…Bigger sales at Walmart

Walmart used a similar process when it came to stocking up on emergency merchandise in preparation for Hurricane Frances in 2004.

 

Its analysts mined records of past purchases from other Walmart stores under similar conditions, sorting through a terabyte of quantitative data, to discover that strawberry Pop-Tarts and beer were what their customers really wanted in times of crisis. [Cut to trucks filled with toaster pastries and six-packs speeding down Interstate 95 towards every Walmart in the path of Frances.] The products sold quickly, proving that anticipating demand can give profits a healthy boost, even in times of adversity.

 

…Better locations for Starbucks

After hundreds of Starbucks locations were closed in 2008, then-CEO Howard Schultz promised that the company would take a more analytical approach to identifying future store locations.

 

Starbucks now partners with a location-analytics company to pinpoint ideal store locations using data such as demographics and traffic patterns. The organisation also considers input from its regional teams before making decisions. Armed with this data, Starbucks is better able to determine the likelihood of success for a particular location before taking on a costly new investment.

 

Contrast these examples with those that follow, which show that, while the potential for data to inform sound decision-making is huge, so too are the pitfalls if we don’t think through our actions.

 

…Baby blunder by Target 

A few years ago, retail giant Target hit the headlines, setting off a maelstrom of outrage and privacy concerns, after sending discount coupons for cribs and baby clothes to a teenager who had not yet revealed her pregnancy to her parents.

 

Allegedly, by analysing customer purchase data, the store could assign a pregnancy prediction score to each shopper, estimate her due date, and send relevant coupons for various stages of her pregnancy. Following this incident, Target started mixing up its customised offers, so that someone – such as a worried father – couldn’t tell if the recipient of the deals was pregnant.

 

…Marriage mistake by Pinterest

Pinterest has long been popular with brides-to-be looking for wedding ideas and inspiration. Many use the site to curate image boards, which they then share with friends and family. While it’s common for social media companies to collect user data in order to serve up relevant content and ads, Pinterest took this to an awkward new level by emailing some users congratulating them on their upcoming weddings – with an offer on wedding invitations.

 

The trouble was, many of these women weren’t getting married – some were, in fact, single – and a number of them took to Twitter to share the erroneous email. A company spokesman issued an apology citing bad data. But it’s a reminder that no data is entirely foolproof and glitch-free.

 

‘Show me the data’ (but not to the exclusion of everything else…)

According to author and speaker Geoffrey Moore: “Without big data analytics, companies are blind and deaf, wandering out onto the web like deer on a freeway.”

 

When data is relevant, accurate, unbiased, complete and presented logically, it can help us remove subjective elements from our business decisions and empower us to make those decisions with more confidence. And when a company and its leaders view digital insights as a genuine asset, they tend to foster a commercial ecosystem where everyone is keen to use the power of information to keep learning and improving.

 

However, we need to match our approach to the complexity – or otherwise – of the decisions we are making. Wrestling with huge data sets may not be the best use of our time. And there will always be situations where no data – or only very limited data – is available. And no amount of data (good or bad) is going to help us if we can’t bring good old-fashioned human judgement to the table. Simply taking data at face value without interrogating it can lead to costly mistakes.

 

In short, it’s data and sound judgement together that illuminate. And that beats shooting in the dark – however great a spaceship captain we happen to be.

 

 

Test your understanding

  • Outline what is meant by the McNamara Fallacy.

  • Give the key practical reason why we still need Humans in the Loop.