For one month beginning on October 5, I ran an experiment: Every day, I asked ChatGPT 5 (more precisely, its “Extended Thinking” version) to find an error in “Today’s featured article”. In 28 of these 31 featured articles (90%), ChatGPT identified what I considered a valid error, often several. I have so far corrected 35 such errors.
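For anyone curious how such a daily loop could be automated, here is a minimal sketch. The Wikimedia Feed API really does expose the day's featured article under a tfa key, and the OpenAI Python SDK call shown is real; but the "gpt-5" model identifier and the prompt wording are my assumptions, since the post appears to describe using the ChatGPT interface rather than the API.

```python
"""Sketch: fetch Wikipedia's Today's Featured Article and ask an LLM for an error.

Assumptions (not from the original post): the prompt wording, the "gpt-5"
model name, and the use of the OpenAI API rather than the ChatGPT UI.
"""
import datetime

import requests
from openai import OpenAI  # pip install openai


def fetch_tfa(day: datetime.date) -> dict:
    # Wikimedia Feed API: featured content for a date, including "tfa"
    # (Today's Featured Article) as a page-summary object.
    url = f"https://en.wikipedia.org/api/rest_v1/feed/featured/{day:%Y/%m/%d}"
    resp = requests.get(url, headers={"User-Agent": "tfa-error-check/0.1"}, timeout=30)
    resp.raise_for_status()
    return resp.json()["tfa"]


def ask_for_error(article_title: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (  # hypothetical prompt; the post does not quote the exact wording
        f'Find an error in the English Wikipedia article "{article_title}". '
        "Quote the problematic passage and explain why it is wrong."
    )
    response = client.chat.completions.create(
        model="gpt-5",  # assumed model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    tfa = fetch_tfa(datetime.date.today())
    title = tfa["titles"]["normalized"]
    print(title)
    print(ask_for_error(title))
```

One run per day over the experiment window, plus manual verification of each claimed error, would reproduce the setup described above.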

Congrats. You just burned down 4 trees in the rainforest for every article you had an LLM analyze.
LLMs can be incredibly useful, but everybody forgets how much of an environmental nightmare this shit is.
This is my number 1 reason to oppose them. It is not worth the damage.
Not much when you use an already trained model, actually: the heavy energy cost is in training, and serving a single query afterwards is comparatively cheap.
Unfortunately, unless you are hosting your own model, or using something like DeepSeek, which has a fixed training-data cutoff, it is a perpetually training model.
When you ask ChatGPT things, it is horrible for the world. It digs us a little deeper into an unsalvageable situation that will probably make us go extinct.