Title says it all
I’ve gotten a couple of ads for an AI chat app on Android. I can’t remember the name, but it has a disclaimer onscreen that reads something like “All characters shown are in their grown-up form”, implying that there are teen or child forms you can communicate with.
I saw something similar! Reported it to Google ads and of course they “couldn’t find any ToS violations”
And the bot has 882.9k chats.
I’m not surprised, and I don’t think you or anyone else is either. But that doesn’t make this less disturbing.
I’m sure the app devs are not interested in cutting off a huge chunk of their loyal users by doing the right thing and getting rid of those types of bots.
Yes, it’s messed up. In my experience, it is difficult to report chat bots and see any real action taken as a result.
Ehhh nah. As someone who used character.ai before, there are many horrible bots that do get cleared out, and the bots have been impossible to have sex with unless you get really creative. The most horrendous ones get removed quite a bit and were consistently reposted. I’m not here to shield a big company or anything, but the “no sex” thing was a huge issue in the community, and they always fought with the devs about it.
They’re probably trying to hide behind the veil of more normal bots now, but I struggle to imagine how they’d get it to do sexual acts, when some lightly violent RPs I tried to do got censored. It’s pretty difficult, and got worse over time. Idk though, I stopped using it a while ago.
Unfortunately in a lot of places there’s really nothing illegal if it’s just fantasy and text.
why is that unfortunate though? who would you be protecting by making that chatbot illegal? would you “protect” the chatbot? would you “protect” the good-think of the users? do you think it’s about preventing “normalization” of these thoughts?
in the case of the latter: we had the very same discussion about shooter video games, and the evidence shows that shooter games do not make people more violent or more likely to kill with guns and other weapons.
I don’t think it’s the same discussion; video games and AI chatbots are two very different things that you engage with in very different ways.
Yeah, video games are against other people, so a lot worse than some llm
…huh?
Yep. I dick around on a similar platform because a friend built it.
The amount of shit I’ve reported is insane. Pedos just keep coming back with new accounts. Even with warnings and banned words, they find a way.
Yep. I dick around
Very poor choice of words.
there’s plausible denia… nah i got nothin. That’s messed up. Even for the most mundane, non-gross use case imaginable, why the fuck would anybody need a creepy digital facsimile of a child?
I mean, maaaybe if you wanted children and couldn’t have them. But why would it need to be “beautiful and up for anything”?
Conservatives
they would say the same thing about liberals.
As gross as it is, let the weirdos get it out with AI instead of being weird to real people.
Hundred percent. It feels pretty fucking thought-crimey to vilify the people who use these services.
I agree in principle, but look at the number of interactions. I think there’s a fine line between creating safe spaces for urges and downright promoting and normalizing criminal activity. I don’t think this should be a) this accessible and b) happening without psychiatric supervision. But maybe I’m being too judgemental.
If you suspect any wrongdoing, it’s generally best to report such things. They have several different social media channels at the bottom of the website.
They have a contact form here: https://support.character.ai/hc/en-us/requests/new
And it looks like it’s a US company, so they better comply with US law.
Do not complain to scummy companies; they will ignore you. Send messages to the media and police.