Huh, I didn’t know that. Neat!
yellow [she/her]
- 0 Posts
- 24 Comments
yellow [she/her]@lemmy.blahaj.zone to
Fuck AI@lemmy.world • AIs can generate near-verbatim copies of novels from training data (English)
5 · 14 days ago
RNG is not an inherent property of a transformer model. You can make it deterministic if you really want to.
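A minimal sketch of what that means in practice: the randomness lives entirely in the sampling step, not in the transformer's forward pass. With greedy (argmax) decoding there is no RNG at all, and even sampling is reproducible with a fixed seed. The function below is illustrative, not any particular library's API.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Pick the next token id from raw logits.

    temperature=0 means greedy decoding (pure argmax, fully
    deterministic); otherwise we sample from the softmax, and
    determinism then depends on seeding the RNG.
    """
    if temperature == 0:
        # Greedy decoding: no randomness involved at all.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random()
    # Temperature-scaled softmax, then a (possibly seeded) draw.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [0.1, 2.5, -1.0, 0.3]
# Greedy decoding always returns the same token...
assert sample_token(logits, temperature=0) == 1
# ...and even sampling repeats exactly under a fixed seed.
assert (sample_token(logits, rng=random.Random(42))
        == sample_token(logits, rng=random.Random(42)))
```

Real inference stacks expose the same knobs as `temperature` and `seed` parameters.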
You can’t convert it back into anything remotely resembling human-readable text without inference and a whole lot of matrix multiplication.
Could you not make a similar argument about a zip file or any other compression format?
`:x` does the same thing as `:wq` but in one less keystroke :3
yellow [she/her]@lemmy.blahaj.zone to
No Stupid Questions@lemmy.world • What books have a lot of useful information should I get? (I mean like a Wikipedia thing with vast knowledge, but non-electronic.) (English)
1 · 17 days ago
While LLMs are technically an option for data storage, I don’t think they’re worth the effort. Sure, they might have a very wide breadth of information that would be hard to gather manually, but how can you be sure that the information you’re getting is a faithful replica of the source, or that the source it was trained on was good in the first place? A piece of information could come from either 4chan or Wikipedia, and unless you had the sources yourself to confirm (in which case, why use the LLM at all?), you’d have no way of telling which it came from.
Aside from that, just getting the information out of it would be a challenge, at least on the hardware of today and the near future. Running a model large enough to hold a useful amount of world knowledge requires some pretty substantial hardware if you want any useful amount of speed, and with rising hardware costs, that might not be possible for most people even years from now. Even on the software side, if something goes wrong, it might be difficult to get inference engines working on newer, unsupported hardware and drivers.
So sure, maybe as an afterthought if you happen to have some extra space on your drives and oodles of spare RAM, but I doubt it’d be worth thinking that much about.
yellow [she/her]@lemmy.blahaj.zone to
Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.com • Cannot find my torrent site bookmark (English)
3 · 22 days ago
I doubt they’re blaming the site; they just lost it and are trying to find it again.
yellow [she/her]@lemmy.blahaj.zone to
Enough Musk Spam@lemmy.world • Get the fuck out of America then Musk (English)
8 · 2 months ago
The point the tweet OP is making isn’t that they want Musk to be deported to South Africa; they’re just calling out his hypocrisy.
yellow [she/her]@lemmy.blahaj.zone to
People Twitter@sh.itjust.works • Be sure to plan ahead (English)
12 · 2 months ago
Your websites have updates??
yellow [she/her]@lemmy.blahaj.zone to
memes@lemmy.world • I should’ve redacted the documents (English)
13 · 2 months ago
OOTL, what’s up with NZXT?
Oh, my bad! The wording didn’t parse as humor to me.
It was a joke, friend
Bot account? Comments seem like your average “short and humorous response” bot.
It’s not the LLM that does the web searching, but the software stack around it. On its own, an LLM is just a text completer. What you’d need is a frontend like OpenWebUI or Perplexica that would ask the LLM for, say, five internet search queries that could return useful information for the prompt, throw those queries into SearxNG, and then pipe the results into the LLM’s context for it to use.
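The flow described above can be sketched in a few lines. Everything here is a placeholder: `llm()` and `searxng_search()` stand in for real API calls (e.g. an OpenAI-compatible completion endpoint and a SearxNG instance's JSON API), so this only shows the orchestration, not a working client.

```python
# Hypothetical stand-ins for the real services in the stack.

def llm(prompt: str) -> str:
    """Placeholder for a call to the model's completion endpoint."""
    return "query one\nquery two"

def searxng_search(query: str) -> list[str]:
    """Placeholder for a SearxNG JSON API call returning snippets."""
    return [f"result snippet for: {query}"]

def answer_with_search(question: str) -> str:
    # 1. Ask the LLM to propose web search queries for the question.
    queries = llm(
        f"Write web search queries that would help answer:\n{question}"
    ).splitlines()
    # 2. Run each query through the search engine.
    snippets = [s for q in queries for s in searxng_search(q)]
    # 3. Pipe the results back into the LLM's context as grounding.
    context = "\n".join(snippets)
    return llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
```

The LLM is called twice: once to generate queries, once to answer with the retrieved snippets in context. That second call is the only place the "knowledge" from the web actually enters.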
As for the models themselves, any decently-sized one that was released fairly recently would work. If you’re looking specifically for open-source rather than open-weight models (meaning that the training data and methodologies were also released rather than just the model weights), the OLMo models are a recent standout there. If not, GPT-OSS 20B/120B and the Qwen3 series are pretty good. (There are other good models out there; this is just what I remember off the top of my head.)
Qwen3-0.6B is about 400 MB at Q4 and is surprisingly coherent for what it is.
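A rough back-of-envelope check on that figure, assuming an effective ~4.5 bits per weight for a Q4-style quant (4-bit weights plus per-block scale metadata — an assumption, not a spec):

```python
params = 0.6e9          # Qwen3-0.6B parameter count
bits_per_weight = 4.5   # Q4-ish quant: 4-bit weights + scale overhead
size_mb = params * bits_per_weight / 8 / 1e6
print(round(size_mb))   # ~338 MB from the quantized weights alone
```

The remaining gap up to ~400 MB is plausibly the embedding and output layers, which quantized builds often keep at higher precision.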
yellow [she/her]@lemmy.blahaj.zone to
Microblog Memes@lemmy.world • words to live by 🙏 (English)
5 · 3 months ago
Games should have a point, and winning is not a point on its own.
Why not? Is wanting to win not a valid motivator to play a game?
yellow [she/her]@lemmy.blahaj.zone to
Programming@programming.dev • Software Quality Collapse (English)
6 · 5 months ago
- “This isn’t x, it’s y” literally everywhere in the article. No one uses that phrase that often.
- Em dashes are littered throughout.
- Random bolding of phrases for no reason.
- More headers than is really necessary.
- Far too many markdown lists.
D:
yellow [she/her]@lemmy.blahaj.zone to
Greentext@sh.itjust.works • Anon doesn't understand streamer fans (English)
21 · 6 months ago
The appeal isn’t in the games themselves; it’s in the personality playing them.
deleted by creator