I Fought with ChatGPT Today
A story, quote, and lesson reflecting on AI and its future
It was a clear disagreement.
I was doing some exploratory work on a dataset, following a tutorial that asked me to create a correlation chart between certain variables. The end result was supposed to show the specific correlation between every pair of numeric variables.
I understood the code in the tutorial, copied it to my own workspace and ran it.
Immediately, I noticed something was off. My chart was only showing the correlations for the first row of the chart (leaving empty squares everywhere else).
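For context, here is a minimal sketch of the kind of exercise involved. The tutorial's actual dataset and modules aren't named, so the data below is made up, and the mention of seaborn is my assumption about a typical plotting stack:

```python
import numpy as np
import pandas as pd

# Hypothetical data standing in for the tutorial's dataset.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "a": rng.normal(size=100),
    "b": rng.normal(size=100),
    "label": ["x"] * 100,  # non-numeric column, excluded from the chart
})
df["c"] = df["a"] * 2 + rng.normal(scale=0.1, size=100)  # correlated with "a"

# Pairwise correlations between all numeric variables.
corr = df.select_dtypes(include="number").corr()
print(corr.round(2))

# The tutorial then rendered `corr` as an annotated heatmap,
# e.g. seaborn.heatmap(corr, annot=True) -- an assumption on my part.
```

The bug I hit was in the rendering step: the matrix itself was complete, but the chart annotated only its first row.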
At first, I thought I had copied something wrong: an extra comma, maybe, or a forgotten parameter. I double-checked the code and… nothing. It was exactly the same as the tutorial.
Then I thought it might be the tutorial's fault. It might have been outdated (as many coding tutorials tend to be), or a library update might have made it obsolete.
I wasn’t convinced by my hypotheses, so I decided (maybe too quickly) to check in with ChatGPT to see what it thought of the situation.

I explained my situation, pasted in the code I was using along with an image of the faulty chart, and hit Enter. It took a brief moment to think (I have it set to Thinking mode by default to try and minimize hallucinations) but it finally gave me an answer.
Its response? ChatGPT confidently announced that the missing correlations were being annotated on the chart. I just couldn’t see them. It argued that the module I was using rendered only black text, so of course I wouldn’t be able to see the numbers if the squares had a dark background. It then suggested I specifically ask for white text to prevent this in the future.
What?! I was baffled.
In the image, you can clearly see that both black and white text are present (which alone shows ChatGPT’s hypothesis is false). And even if it were right, the chart contained both light and dark squares, so text of a single color would still have been readable on at least some of them.
To say I was disappointed would be an understatement. It was such a simple exercise, and yet it just couldn’t diagnose the problem correctly. After a quick Google search (and a visit to Stack Overflow) I learned that the actual cause of the bug was a version incompatibility between the modules I was using. Updating both to their latest versions quickly fixed the issue.
After I called out ChatGPT and asked it to help me update the modules, it answered with this:
“It’s almost certainly not a bug (it’s the annotation color contrast), but yes — here are the exact commands to check the version and upgrade it.”
Now I was angry.
It was such a trivial problem with a clear-cut solution, but it still rubbed me the wrong way. How could it, after I had called out a flaw in its reasoning, still be confidently spewing nonsense?
I decided to push the limits of the conversation and kept bugging ChatGPT to see when it would break its facade. It took three more messages, a few snarky exchanges, and even submitted proof before it changed its mind.
My anger slowly turned to concern. If such an irrelevant task caused so much strife and pushback, I could only wonder what the future could be like when AI becomes even more embedded in our day-to-day lives.
Soon it could be trying to get an AI salesman to leave you alone after it has pestered you for hours, convincing your AI doctor that you really do feel unwell and the treatment isn’t working, or even begging an AI landlord not to evict you after it made a mistake on your rent payment.
We need to ensure that, as AI grows ever more intelligent, the boundaries and limits under which it operates stay clear and unmoving.
This time I was able to identify and convince it of its mistake because I knew I had the solution. What happens when we don’t?
Will we become pawns at the mercy of AI, or will we set the proper foundation for it to make a positive impact? Only time will tell…


