Conference talks, Conflabs and Critical Thinking

I recently attended (virtually) WomenTech Network’s Women In Tech Global Conference. It was a jam-packed few days and I’m still catching up on a tonne of talks, but one in particular has got me thinking today.

Helena Nimmo, Global CIO at IFS, gave an engaging and insightful talk, “Responsible AI: Balancing Innovation, Ethics, and Impact in Enterprise Technology.” She started by saying that responsible AI should be ethical, transparent and accountable. Helena reminded us that AI uses any content it can access – and that content may very well be biased, inaccurate or just plain incorrect. But the bias doesn’t come only from the data it has access to. If the technology is developed by a group made up of very similar people, if it has been built around one specific geographical location but is due to be used globally, or if it has been developed around a certain sector, such as finance, but is later used in a completely different sector such as healthcare, it is unlikely to perform to the best of its ability because it carries that bias.

Helena suggested that this can be combatted by ensuring the team developing the system is diverse and well-represented, and by emphasising critical thinking and the value of asking “why?” To innovate ethically, we also need to pick the right use cases. She gave the example of driverless vehicles: brilliant when used to reduce the risk of injury (or worse) in dangerous environments such as mines or automated warehouses, which are very controlled environments with little to no nuance. But are we really ready to allow an AI agent to make life-or-death decisions on our roads?

I was interested to see what ChatGPT itself “thinks” about its own perceived ability to think critically and identify bias:

What leapt out at me was that ChatGPT cannot empathise or take nuances into account. It does claim to “consider multiple viewpoints and weigh pros and cons”, but this will be based only on the data it has been exposed to, bringing us back to the issue of bias. And the only way ChatGPT is going to know it is biased is if a human tells it that it is.

There is another issue here though. A post popped up on my LinkedIn feed a couple of days ago (frustratingly I can’t find it now, so can’t link to it or credit the author) in which the author opined that our ability to think critically is being reduced as humans become more and more reliant on AI.

Way back in sixth form I was encouraged to sit an A level in Critical Thinking. At the time I didn’t appreciate the value of it – being able to look at a scenario and consider the nuances around it, to identify strengths and weaknesses in arguments, and to empathise with the people involved. I was already very good at empathising and, looking back, I realise that this helped me hone a valuable skill which stood me in good stead for an Archaeology degree – one cannot take the evidence dug up (literally!) at face value. There are cultural, environmental and socio-economic factors to consider, in addition to the fact that we often have the equivalent of only one or two pieces of a 5,000-piece jigsaw puzzle.

I remember one of our lecturers in Archaeological Theory showed up in a shellsuit, walked to the front of the lecture theatre and asked us to imagine that a snapshot was taken of us – a group of young adults sitting in orderly rows, completely focussed on the person standing in front of us – and to think about how archaeologists a thousand years from now would interpret the image with zero background knowledge of our society or any context of where/when/why the snapshot was taken. His money was on an interpretation of religious ritual and/or worship of a god-like figure (obviously pink, green and white shellsuits would only be worn by demigods). Eddie Izzard also makes a valid point in Glorious (1997) – every archaeological dig reveals a series of small walls, and the archaeologists guess at the usage of the building (dancing elephants behind this wall…). But it made us undergraduates think twice about making assumptions based on our own culture and experience, and that has stayed with me to this day.

But getting back to the point, studies are starting to back this up. Justin Jackson (2025) wrote an article for Phys.org summarising the findings of a study by Michael Gerlich, which found that relying on AI diminishes critical thinking abilities. Zhai, Wibowo and Li (2024) analysed the cognitive abilities of students who used AI and found that while decision-making and efficiency were improved, creativity and cognitive abilities (which included critical thinking, decision making and analytical thinking) were impaired. And a study carried out by Carnegie Mellon University and Microsoft, reported on in Forbes (Towers-Clark, 2025), found a confidence relationship: the more you trust AI, the less likely you are to think critically about its output.

I found myself going down a rabbit hole on this – should critical thinking therefore be part of the national curriculum? (For context – I live in the UK). After a not-so-quick Google, it turns out there is a raging debate which has been going on for some time. Andrew Jones’ article for SecEd (2024) summarises it beautifully if you want to read more about it.

I have found myself falling into this trap recently – we are able to use Copilot in Visual Studio and it has started suggesting entire chunks of code for me based on what it infers I am trying to achieve before I even ask for its help. It would be so easy to just use the code but it’s not always correct and has introduced some interesting bugs. And I find myself trusting it less and less – which means that I’m double/triple checking everything it outputs, which has on occasion meant that a relatively simple task has taken me much longer than it should have. Isn’t this defeating the object? I have started to use Copilot and/or ChatGPT in a different way now, asking whether it is possible to achieve x using y tools in z context. I was at a local tech meetup this week and got chatting to one of the speakers (Tom Jardine-McNamara – whose talk on using Storybook for test driven development was very engaging) who summed it up brilliantly and I shall paraphrase shamelessly:

AI should be a tool, not a crutch.
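To make that concrete, here is a hypothetical sketch (in Python, and not the actual code Copilot suggested to me) of the sort of plausible-looking bug an AI assistant can slip past you if you accept a suggestion without reading it critically – in this case, Python’s classic mutable-default-argument trap:

```python
# Hypothetical example of an AI-suggested function that "works" on the
# happy path but hides a subtle bug: the default list is created once,
# at definition time, and shared across every call.

def add_tag(tag, tags=[]):
    tags.append(tag)
    return tags

first = add_tag("draft")    # ["draft"] – looks fine
second = add_tag("review")  # ["draft", "review"] – "draft" leaks in from the first call!

# The version you'd write after actually reviewing the suggestion:
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []  # a fresh list per call, no shared state
    tags.append(tag)
    return tags

print(add_tag_fixed("draft"))   # ["draft"]
print(add_tag_fixed("review"))  # ["review"] – no leakage
```

Nothing about the buggy version looks wrong at a glance, which is exactly why blindly accepting generated code is risky: the suggestion reads as idiomatic until you ask “why?” about the default argument.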

Which brings me nicely back to Helena’s talk. We need to ask “why?” more often. We need to question what AI tells us instead of blindly following. And we need to take ultimate accountability for the decisions we trust AI to make for us.

References

Jackson, J. (2025) Increased AI use linked to eroding critical thinking skills. Phys.org. Available at: Increased AI use linked to eroding critical thinking skills (Accessed 31st May 2025).

Jones, A. (2024) Critical thinking in schools: Can it be taught – and how? SecEd. Available at: Critical thinking in schools: Can it be taught – and how? (Accessed 4th June 2025).

Towers-Clark, C. (2025) How AI Changes Critical Thinking: New Microsoft Research Findings. Forbes. Available at: How AI Changes Critical Thinking: New Microsoft Research Findings (Accessed 31st May 2025).

Zhai, C., Wibowo, S. & Li, L.D. (2024) The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: a systematic review. Smart Learning Environments, 11, 28. https://doi.org/10.1186/s40561-024-00316-7

