• 1 Post
  • 47 Comments
Joined 3 years ago
Cake day: July 24th, 2023

  • US is liked

    Brave assumption. There might be some people outside the US who like it, but I guess that's true for every country. For my part, I've started to dislike the US; the takeover by capitalism/oligarchy is such a bad development for humanity. Even China is starting to look good in comparison…

    Also:

    world stability for a huge portion of the globe

    Just LOL. It's probably the force causing the biggest global instability. Do I even have to mention Iran as the most recent example?

    There may have been a period when that was true, but it certainly isn't anymore.


  • You will always be better at decisions than an n-dimensional matrix of numbers on an overpriced GPU.

    I'd be careful with these claims. Maybe that holds for the current iteration of "attention-based" LLMs, yes. But keep in mind that our capacity for processing information is strongly limited compared to the amount of data fed to these LLMs during training, so in theory they have a much broader foundation for reasoning about new problems.

    We're vastly more capable at the moment: interpreting our limited view of unfamiliar code, being actually creative, finding new ways to reason. Capable developers (open source…) have often seen quite a bit more code than the average developer and are highly skilled, yet still only a tiny subset of the code an LLM has seen.

    But say these models improve in creativity and "higher-level thought" through whatever means (e.g. more reinforcement learning). Well, let's just say I'm careful with these claims. These LLMs are already quite a help with dumb boilerplate code (less so with novel stuff or writing idiomatic, non-redundant code, but compared to 2–3 years ago it's quite a step already, to the point that they're actually useful, disregarding all the hype and obvious marketing strategies of these AI companies).