• 2 Posts
  • 55 Comments
Joined 8 months ago
Cake day: November 24th, 2024

  • That is a possibility. Data from interacting with actual humans reduces the rate of model degradation. Maybe somebody does feel like they would get better results here. But they’d have to go to the trouble of sending requests to join instances and federate communities. It’s not a whole lot of work, but it’s slightly more overhead for a website that gets way fewer hits than reddit as of now.

    You’re not naive dude, you’re living in unprecedented times. It’s sad to see people get jumpy at the idea that all of our interactions are becoming simulations of real ones, but in some places it has literally happened. I don’t even fuck with instagram, facebook, or tiktok because I’ve seen the brainrot that got created there because the platform incentivised it. Stay curious and don’t let the bastards grind you down👍


  • I came here specifically to get away from the chatbot daycare hellhole that reddit became. Share some of your insights about these accounts and I’ll tell you a little about why reddit got so bad. The fediverse doesn’t really offer the same kind of incentive to somebody who’s trying to train an LLM on comments, but who knows.

    On reddit, the biggest incentive for people to want to train LLMs is just the sheer amount of data there. Reddit is insanely big, and the karma system is basically a ready-made “weight” value, similar to how neural networks already score and categorize their training data. Even if somebody notices an obvious bot account, enough people there will still interact with it sincerely that it gets the interaction it’s trying to provoke every time.
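    To make the “weight” idea concrete, here’s a rough sketch in Python (the comments and scores are made up, and this isn’t anything Reddit actually ships): given a dump of comments with karma scores, you could sample training examples in proportion to karma.

    ```python
    import random

    # Hypothetical comment dump: text plus karma score.
    comments = [
        {"text": "genuinely helpful answer", "karma": 250},
        {"text": "recycled joke", "karma": 40},
        {"text": "downvoted spam", "karma": -12},
    ]

    # Clamp negatives to zero so heavily downvoted text drops out,
    # then use karma directly as a sampling weight.
    weights = [max(c["karma"], 0) for c in comments]

    # Draw a karma-weighted training sample.
    for c in random.choices(comments, weights=weights, k=2):
        print(c["text"])
    ```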

    Also it’s easy as hell to set one up to run on reddit. Simply verify an email address, subscribe to r/newtoreddit and a bunch of other subs that don’t require karma to comment, and then only cast votes for the first month before finally starting to leave comments. Reddit claims to screen for bot accounts, but deviating from this specific pattern of conduct is what gets a new user’s comments flagged for review. Reddit is actually only screening real people.
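    For the curious, that whole warm-up pattern is a few lines of PRAW (Reddit’s Python API wrapper). The credentials and the second subreddit are placeholders of mine, and the two phases would really be separated by weeks of scheduling; this is a sketch of the lurk-then-comment flow, not a working bot.

    ```python
    import praw

    # Placeholder credentials for a freshly verified account.
    reddit = praw.Reddit(
        client_id="CLIENT_ID",
        client_secret="CLIENT_SECRET",
        username="fresh_account",
        password="PASSWORD",
        user_agent="warmup-sketch/0.1",
    )

    # Phase 1 (first month): only vote, matching the expected new-user pattern.
    for submission in reddit.subreddit("NewToReddit").hot(limit=25):
        submission.upvote()

    # Phase 2 (after the warm-up window): start commenting in subs
    # with no karma requirement. Subreddit name is a placeholder.
    for submission in reddit.subreddit("CasualConversation").new(limit=5):
        submission.reply("model-generated text would go here")
    ```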

    If you want to talk real tinfoil hat shit, this is probably by design. Chatbots drive up traffic and interaction not just with each other but specifically with the humans who will also severely inflate usage statistics to look good to advertisers: the ones who leave comments following common “redditisms” and patterns of discussion over and over and over and never get sick of saying the same things.

    Basically, I’m hoping none of these conditions exist here. So far it doesn’t seem like it, since the fediverse isn’t hiding ads as posts, blocking VPN users, or taking such a heavy-handed approach to moderation.


  • autistics as well. we don’t start with the “common rules and customs” official strategy guidebook like everyone else did, so it seems to acclimate us to doing the legwork of overanalyzing everything.

    sometimes I have to stop and ask myself “is this actually something I need to know or am I just scared of being punished socially for not knowing it”