• 1 Post
  • 54 Comments
Joined 3 years ago
Cake day: July 24th, 2023


  • I mean, just stating the endless facts would be enough for a somewhat sane person to grasp how fucking awful this current administration is. There’s endless fuel every day. Just collect a list of the worst stuff; you could fill the whole day with a fairly neutral list of facts. That should be enough motivation to vote them away…

  • In the end, dosage is very relevant, and of course even more important: set and setting.

    I do think acid is more homogeneous and easier to dose appropriately, though (YMMV). But the length of a trip can make it challenging.

    At heroic doses of either: shrooms, for me, result in more body load and are more introspective and chaotic/mystic; acid is more energetic/clearer/“neon”, with a wider headspace. But in the end, setting is what I think matters most for how the trip plays out (I had both my best and my worst trip on shrooms).

  • US is liked

    Brave assumption. There might be some people outside the US who like them, but I guess that’s true for all countries. I, for my part, have started hating the US; it’s such a bad development for humanity now that capitalism/oligarchy has taken over. Even China is starting to look good in comparison…

    Also:

    world stability for a huge portion of the globe

    Just LOL. It’s probably the force causing the biggest global instability. Do I even have to mention Iran as the most recent example?

    There might’ve been a period where that was true, but it certainly isn’t anymore.


  • You will always be better at decisions than an n-dimensional matrix of numbers on an overpriced GPU.

    I’d be careful about these claims. Maybe with our current iteration of “attention-based” LLMs, yes. But keep in mind that our way of processing information is strongly limited compared to how much data is fed to these LLMs during training, so in theory they have a much broader foundation for reasoning about new problems.

    At the moment, we’re vastly more capable at interpreting our limited view of unfamiliar code, at being actually creative, and at finding new ways to reason, yes. Capable developers (open source…) have often seen quite a bit more code than the average developer and are highly skilled, yet still with just a tiny subset of the code an LLM has seen.

    But say these models improve in creativity and “higher-level thought” through whatever means (e.g. more reinforcement learning). Well, let’s just say I’m careful with these claims. These LLMs are already quite a help with dumb boilerplate code (less so with novel stuff or with writing idiomatic, non-redundant code), but compared to 2–3 years ago it’s quite a step already, to the point that they’re actually helpful, disregarding all the hype and the obvious marketing strategies of these AI companies.