In 1995, Bill Gates tried to explain the internet on late-night television, and the audience laughed at the idea that it could be revolutionary. Fast forward to today, and artificial intelligence is at a similar moment—hyped, debated, and widely tested in everyday life. But for one father in Ireland, relying on AI for medical advice brought a chilling reality check.
As reported by the Mirror, 37-year-old Warren Tierney from Killarney, County Kerry, turned to ChatGPT when he developed difficulty swallowing earlier this year. The AI chatbot reassured him that cancer was “highly unlikely.” Months later, Tierney received a devastating diagnosis: stage-four adenocarcinoma of the oesophagus.
From reassurance to reality
Tierney, a father of two and former psychologist, admitted he delayed visiting a doctor because ChatGPT seemed convincing. “I think it ended up really being a real problem, because ChatGPT probably delayed me getting serious attention,” he told the Mirror. “It sounded great and had all these great ideas. But ultimately I take full ownership of what has happened.”
Initially, the AI appeared to provide comfort. At one point, extracts seen by the Daily Mail show ChatGPT telling him: “Nothing you’ve described strongly points to cancer.” In another conversation, the chatbot added: “I will walk with you through every result that comes. If this is cancer — we’ll face it. If it’s not — we’ll breathe again.”
That reassurance, Tierney says, cost him crucial months.
The official warning from OpenAI
OpenAI has repeatedly stressed that its chatbot is not designed for medical use. A statement shared with the Mirror clarified: “Our Services are not intended for use in the diagnosis or treatment of any health condition.” The guidelines also caution users: “You should not rely on output from our services as a sole source of truth or factual information, or as a substitute for professional advice.”
ChatGPT itself reportedly told media outlets that it is “not a substitute for professional advice.”
A family facing uphill odds
The prognosis for oesophageal adenocarcinoma is grim, with survival rates averaging between five and ten percent over five years. Despite the statistics, Tierney is determined to fight. His wife Evelyn has set up a GoFundMe page to help raise money for treatment in Germany or India, as he may need to undergo complex surgery abroad.
Speaking candidly, Tierney warned others not to make the same mistake he did: “I’m a living example of it now and I’m in big trouble because I maybe relied on it too much. Or maybe I just felt that the reassurance that it was giving me was more than likely right, when unfortunately it wasn’t.”
Tierney’s case underscores both the potential and the peril of integrating AI into personal health decisions. Just as the internet once seemed trivial before reshaping the world, artificial intelligence is already permeating daily life. But unlike looking up sports scores or entertainment, health decisions leave no room for error.