

I never said it's going to replace teachers or that it "stores context", but the preprints you sloppily googled to support your "fundamentally can't reason" claim were demonstrably garbage. You never once said "show me it's close", though you seem to think you said it several times. Either your reading comprehension is worse than an LLM's and you wildly confabulate, in which case an LLM could replace you, or you're a bot. Anyway, so far you've proven nothing, and you already conceded they can write code. That's a non-trivial cognitive task you can't perform without several higher-order abilities, so cope and seethe, I guess.
So what about Palantir's AI? Is that also "not close"? Why do you keep dodging surveillance AI? Both are neural networks, and some of them are LLMs.
Your claim, "supported" by some unpublished corporate preprint (which is really funny considering you have the nerve to demand citations from me), was that LLMs fundamentally can't reason.
You don't need a citation for LLMs being able to "reason for code"; at this point, doubting AI coding ability is delusional online yapping, given how well documented it is and how widely it's deployed. So how about you prove that writing code, with control flow, conditionals, and the rest, can be done without reasoning? Try that instead of spamming incoherent replies.
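For the audience: here's a trivial, made-up snippet (mine, not any model's output) of the kind of code in question. Even something this small demands iteration, branching on accumulated state, and an edge case:

    def first_duplicate(items):
        """Return the first value that appears twice, or None if all items are unique."""
        seen = set()
        for item in items:       # iteration: walk the input in order
            if item in seen:     # conditional: branch on accumulated state
                return item      # early exit once the condition holds
            seen.add(item)       # state carried across loop iterations
        return None              # edge case: no duplicate exists

    print(first_duplicate([3, 1, 4, 1, 5]))  # -> 1
    print(first_duplicate([2, 7, 8]))        # -> None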
Nobody cares that you're suddenly a professional vibe coder; if you can't code without Copilot, maybe you shouldn't be forming opinions off Apple's "research".
But until then: are Palantir's AIs fundamentally incapable of reasoning, yes or no? None of you anti-AI warriors are ever clear on this. Should we stop worrying about corporate AI surveillance because apparently the AI isn't really "I", or not? It's a simple question, but maybe ask Copilot for help. You just seem to glitch whenever corporate propaganda contradicts itself; it's really interesting.