Our latest round of TBA’s Global Book Review explores the increasingly automated world we live in through Gerd Gigerenzer's most recent book, How to Stay Smart in a Smart World, published in 2022.
Psychologist and author of several books on heuristics and decision making, including Risk Savvy and Reckoning with Risk, Gigerenzer was previously the director of the Harding Centre for Risk Literacy at the University of Potsdam and partner of Simply Rational: The Institute for Decisions.
Through his work and research, Gigerenzer has long argued that humans are smart: our behavioural biases are there to help us make good decisions (unlike many other behavioural scientists, who hold that relying on cognitive shortcuts leads humans to make inherently bad decisions). In How to Stay Smart in a Smart World, Gigerenzer debunks the belief that the algorithms behind smart technologies make better decisions than we do, while raising awareness of some of the limitations and risks these technologies pose to us, both as individuals and as a society.
Anaïs Le Joncour, Senior Strategic Consultant at TBA UK, and Xueyan Shao, Consultant at TBA China, read and discussed the book, and have shared their thoughts in the review below:
- What are Gigerenzer’s main arguments for humans making better decisions than AI?
In How to Stay Smart in a Smart World, Gigerenzer warns of how easily AI predictions can be over-claimed, appearing to outperform human accuracy. He argues that although AI is superb in “stable situations where large amounts of data are available”, such as chess, where it can operate within set rules, it lacks the human ability to perform effectively in real-world settings full of uncertainty. Algorithms do not have the understanding or intuition needed to quickly make sense of new information and adapt accordingly.
With many examples to illustrate this, the book seeks to discredit the notion that AI is superior at prediction, arguing that such claims often rest on the trick of the sharpshooter fallacy: drawing the target around the bullet holes after the shots have been fired, or in other words, fitting the pattern to the data after the fact. He highlights how our causal thinking and our intuitive psychology, physics and sociality have evolved so that we can succeed in an uncertain world.
- What new learnings, research, or case studies stood out to you?
It was the importance of context that was most impressive and relevant to TBA’s work. The Google Translate example shows that the technology has never succeeded in understanding context that cannot be defined by stable rules, and the machine’s sometimes hilarious mistakes are a reminder of just how much context matters. Humans, by contrast, even when lacking certain information, can draw on the multiple and flexible routes our brains offer (vicarious functioning), allowing us to respond promptly and almost unconsciously.
- What are the 3 main takeaways of this book that are relevant for TBA’s work?
First up, this book emphasises an important point that we are already familiar with: when it comes to digital technology, much of our behaviour is deeply ingrained, habitual, automatic and unconscious. Although Gigerenzer doesn’t often spell out the psychology behind these behaviours, the book offers countless examples across different scenarios that could help us apply and build on our knowledge of habit theory, and of the System 1 responses at play when clicking through a website or swiping through a dating app.
Another takeaway for our work is to consider how potential changes in our cognitive development and reasoning could influence behaviour. As our world becomes increasingly automated, Gigerenzer suggests, humans risk becoming increasingly lazy. Talking of dating apps and detailing how they encourage users to keep swiping, to keep looking for ‘something better’, he implies that our reasoning could also change, shifting towards a mindset of ‘never good enough’. These potential shifts are worth keeping in mind when observing and analysing behaviour in consumer decision-making.
Finally, we should use Gigerenzer’s stable world principle and the sharpshooter fallacy to become more discerning about what data scientists claim or predict. The stable world principle can provide us with a healthy dose of scepticism towards AI claims. The sharpshooter fallacy, meanwhile, reminds us how crucial it is to fact-check information sources, especially when predictions or correlations are being made. It also reminds us to be mindful of our own confirmation biases more generally and, if we ever stumble across unexpected correlations or apply algorithms to datasets, to make sure results are cross-validated on a different data sample, as in the sketch below.
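To make that last point concrete, here is a minimal, hypothetical Python sketch (our own illustration, not an example from the book). It searches a set of purely random ‘features’ for the one most correlated with a purely random ‘outcome’, then re-checks that same feature on a held-out sample; all names and numbers are assumptions chosen for illustration.

```python
# A toy illustration of the sharpshooter fallacy: given enough random
# features, one will always look correlated with the outcome in-sample,
# but the pattern vanishes on a fresh data sample.
import numpy as np

rng = np.random.default_rng(seed=42)

n_obs, n_features = 200, 500
X = rng.normal(size=(n_obs, n_features))  # pure-noise "predictors"
y = rng.normal(size=n_obs)                # pure-noise "outcome"

# Split the data into a discovery sample and a validation sample.
half = n_obs // 2
X_train, y_train = X[:half], y[:half]
X_test, y_test = X[half:], y[half:]

def corr(a, b):
    """Pearson correlation between two 1-D arrays."""
    return np.corrcoef(a, b)[0, 1]

# "Draw the target around the bullet holes": pick the feature with the
# strongest correlation only after looking at the discovery data.
train_corrs = np.array([corr(X_train[:, j], y_train) for j in range(n_features)])
best = int(np.argmax(np.abs(train_corrs)))

print(f"Best feature #{best}: discovery-sample r = {train_corrs[best]:+.2f}")
print(f"Same feature on held-out sample: r = {corr(X_test[:, best], y_test):+.2f}")
```

The in-sample ‘winner’ typically shows a correlation of around ±0.3, which collapses towards zero on the fresh sample: exactly the after-the-fact pattern-hunting that the sharpshooter fallacy describes, and why cross-validating on a different data sample matters.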