• 2 Posts
  • 556 Comments
Joined 1 year ago
Cake day: July 6th, 2023

  • I feel like the real answer, for a long time now, has been some sort of distributed moderation system. Any individual user can take moderation actions. Those actions take effect for that user, and for anyone who subscribes to their actions. Create bot users that auto-detect certain types of content (horrible stuff like CSAM or gore) and take action against it. Auto-subscribe users to the moderation actions of the global bots and community leaders (mods/admins), and allow them to unsubscribe. (Rough sketch below.)

    We’d probably still need some moderation actions to be absolute and global, though, like banning illegal content.
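
    To make that concrete, here’s a rough sketch of how the subscription model could work. Everything in it (the ModAction/User names, the hide-only action type, the default subscriptions) is just an illustrative assumption for this comment, not an existing Lemmy or ActivityPub feature:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ModAction:
    """One user's moderation decision, e.g. hiding a post."""
    actor: str          # the user, mod, or bot account that took the action
    target_post: str    # the post the action applies to
    kind: str = "hide"  # only "hide" actions are modeled in this sketch


@dataclass
class User:
    name: str
    # Everyone would start subscribed to the global detection bots and the
    # community mods/admins by default, and could unsubscribe at will.
    subscriptions: set = field(default_factory=set)
    own_actions: list = field(default_factory=list)

    def hide(self, post_id: str) -> None:
        """Take a personal moderation action; it affects this user and their subscribers."""
        self.own_actions.append(ModAction(actor=self.name, target_post=post_id))


def visible_posts(user, all_posts, all_actions):
    """A post is hidden for `user` if they hid it themselves, or if anyone
    they subscribe to (bot, mod, or ordinary user) hid it."""
    trusted = user.subscriptions | {user.name}
    hidden = {a.target_post for a in all_actions
              if a.actor in trusted and a.kind == "hide"}
    return [p for p in all_posts if p not in hidden]


# Example: alice trusts the spam-detection bot but has unsubscribed from mod_bob,
# so mod_bob's removals no longer affect her feed.
alice = User("alice", subscriptions={"spam_bot"})
feed = ["post1", "post2", "post3"]
actions = [
    ModAction(actor="spam_bot", target_post="post2"),  # auto-detected bad content
    ModAction(actor="mod_bob", target_post="post3"),   # alice doesn't subscribe to this
]
print(visible_posts(alice, feed, actions))  # -> ['post1', 'post3']
```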




  • Ad hominem is when you attack the entity making a claim using something that’s not relevant to the claim itself. Pointing out that someone (a general someone, not you) making a claim likely lacks the credentials to know much about the subject, or doesn’t live in the area they’re talking about, or is an LLM, isn’t ad hominem, because those observations are relevant to the strength of their argument.

    I think the fallacy you’re looking for could best be described as an appeal to authority fallacy? But honestly I’m not entirely sure either. Anyways I think we covered everything… thanks for the debate :)







  • Ok, but if you aren’t assuming it’s valid, there doesn’t need to be evidence of invalidity. If you’re demanding evidence of invalidity, you’re claiming it’s valid in the first place, which you said you aren’t doing. In short: there is no need to disprove something that was never proved in the first place. It was claimed without any evidence besides the LLM’s output, so it can be dismissed without any evidence. (For the record, I do think Google engages in monopolistic practices; I just disagree that the LLM claiming it’s true is a valid argument.)

    You said, “To me, all the mental gymnastics about AI outputs being just meaningless nonsense or mere copying of others is a cop-out answer.”

    How much do you know about how LLMs work? Their outputs aren’t nonsense or direct copying of others; what they do is emulate the patterns of how we speak. This also results in them emulating the arguments we make, the opinions we hold, etc., because those are part of what we say. But they aren’t reasoning. They don’t know they’re making an argument, and they frequently “make mistakes” in doing so. They will easily say something like… I don’t know, A=B, B=C, and D=E, so A=E, without realizing they’ve missed the critical step of C=D. It’s not a cop-out to say they’re unreliable; it’s reality.


  • You’re saying ad hominem isn’t valid as a counterargument, which means you think there’s an argument to counter in the first place. But my point isn’t a counterargument at all, because the LLM’s claim is not an argument.

    ETA: And it wouldn’t be ad hominem anyway, since a claim about the reliability of the entity making an argument isn’t unrelated to what’s being discussed. Ad hominem only applies when the attack isn’t relevant to the argument.