

Luckily Australia is their largest source and they upped the amount. They can also get more shipments from the US.
Just because one idiot state in the US changes a law doesn’t mean the entire world needs to follow. Fuck em.


Meh, I’ve owned an ATI 4870X2, GTX 580, GTX 970, 5700 XT, 3080 and now 5080.
Also helped out friends with their GPUs: a 2070 Super and a 6800 XT (which has a really shitty fan curve at stock).
The 5700 XT had the worst drivers of the bunch. Crashes, stuttering, … AMD managed to fix most issues with time, but not all.
Nvidia drivers early this year were shit too, but at this point they are great again. I don’t care about the brand, I only care about my PC running well.


???
Of course the game developers choose what to put into their games. Some games have FXAA, TAA, DLSS, FSR and even XeSS.
With an Nvidia card you can use them all. With AMD you can’t use DLSS.
Not sure what your last sentence means, of course your GPU runs AA?


Every time I got an AMD card I got burned, so that’s not really an option. Last try was a 5700 XT and oh my god was that a pain. So much so that instead of my usual 4-5 year upgrade cycle I grabbed a 3080 one year later.
Nowadays DLSS is a must for me, it just looks so much better than TAA. FSR is alright, but not great.


Dude, Arch is a rolling release, it has no dist-upgrade equivalent. You’re not even in the right conversation.
Debian, Ubuntu, … and plenty of other distros do. Just upgrading my server from Ubuntu 22 to 24 (both LTS) took an hour or two of fixing things.
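For reference, the Ubuntu LTS-to-LTS flow looks roughly like this (a sketch, not a full guide; back up first, because things can and do break):

```shell
# Get fully up to date on the current release first
sudo apt update && sudo apt full-upgrade
# Then jump to the next release; this prompts on config file conflicts
sudo do-release-upgrade
```

Debian does it differently (edit your apt sources to the new codename, then `apt full-upgrade`), which is exactly the kind of step where the "hour or two of fixing things" happens.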


Until you do a dist-upgrade and random shit breaks (:


You do realize that counts as “engagement” which AI companies then use to turn around and get more investor money?
Either use it for actual work or don’t use it at all.


Give me back my Google search from 10 years ago and alright, no need for AI.
Nowadays Google is so unusable that I actually go to Claude first if I need to research something.


The training is sophisticated, but inference really is just a text prediction machine. Technically token prediction, but you get the idea.
And it works like that for every single token/word: you input your system prompt, context and user input, then the output starts.
The
Feed the entire context back in and add the reply “The” at the end.
The capital
Feed everything in again with “The capital”
The capital of
Feed everything in again…
The capital of Austria
…
It literally works like that, which sounds crazy :)
The only control you have as a user is the sampling, like temperature, top-k and so on. But that only changes how the next token gets picked from the model’s probabilities, i.e. how deterministic the output is.
Edit: I should add that tool and subagent use makes this approach a bit more powerful nowadays. But it all boils down to text prediction again; even the tools are described to the model as plain text.
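The loop above can be sketched in a few lines of Python. The "model" here is just a toy lookup table (real models are neural nets producing probabilities), but the feed-everything-back-in loop is the actual shape of autoregressive decoding:

```python
# Toy "model": maps the context seen so far to the single next token.
# A real LLM returns a probability distribution here instead.
NEXT_TOKEN = {
    "": "The",
    "The": "capital",
    "The capital": "of",
    "The capital of": "Austria",
    "The capital of Austria": "is",
    "The capital of Austria is": "Vienna",
    "The capital of Austria is Vienna": "<eos>",
}

def generate(prompt: str = "") -> str:
    context = prompt
    while True:
        token = NEXT_TOKEN[context]   # one full forward pass per token
        if token == "<eos>":          # the model decides when to stop
            return context
        # Feed the ENTIRE context back in with the new token appended
        context = (context + " " + token).strip()

print(generate())  # -> The capital of Austria is Vienna
```

Sampling (temperature, top-k) would slot in at the `NEXT_TOKEN[context]` step: instead of taking the single stored token, you'd draw from the model's probability distribution.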


Decent sized for what?
Creative writing and roleplay? Plenty, but I try to fit it into my 16 GB VRAM as otherwise it’s too slow for my liking.
Coding/complex tasks? No, that would need 128 GB and upwards and it would still be awfully slow. Unless you use a Mac with unified memory.
For image and video generation you’d want to fit it into GPU VRAM again, system RAM would be way too slow.
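The back-of-the-envelope math is simple: weights take roughly params × bytes-per-param, plus overhead for KV cache and activations (the 1.2 factor below is my own loose assumption):

```python
def vram_estimate_gb(params_billion: float, bytes_per_param: float,
                     overhead: float = 1.2) -> float:
    """Rough VRAM needed to hold the weights plus ~20% for
    KV cache / activations (the overhead factor is an assumption)."""
    return params_billion * bytes_per_param * overhead

# fp16 = 2 bytes/param, 4-bit quantization ≈ 0.5 bytes/param
print(round(vram_estimate_gb(13, 0.5), 1))  # a ~13B model at 4-bit fits in 16 GB
print(round(vram_estimate_gb(70, 2.0), 1))  # a 70B model at fp16 does not fit anywhere consumer
```

Which is why a quantized ~13B model is comfortable on a 16 GB card, while the big coding-capable models land in unified-memory territory.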


It’s not really that simple. Yes, it’s a great tool when it works, but in the end it boils down to being a text prediction machine.
So a nice helper to throw shit at, but I trust the output as much as a random Stackoverflow reply with no votes :)


You might genuinely be using it wrong.
At work we have a big push to use Claude, but as a tool and not a developer replacement. And it’s working pretty damn well when properly set up.
Mostly using Claude Sonnet 4.6 with Claude Code. It’s important to run /init and check the output, that will produce a CLAUDE.md file that describes your project (which always gets added to your context).
Important: Review everything the AI writes, this is not a hands-off process. For bigger changes use the planning mode and split tasks up, the smaller the task the better the output.
Claude Code automatically uses subagents to fetch information, e.g. API documentation. Nowadays it’s extremely rare that it hallucinates something that doesn’t exist. It might use outdated info and need a nudge, like after the recent upgrade to .NET 10 (But just adding that info to the project context file is enough).
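For context, CLAUDE.md is just freeform markdown. A minimal hand-written sketch (the project details here are made up for illustration) might look like:

```
# MyApp (hypothetical example project)

ASP.NET Core backend, React frontend. Targets .NET 10.

## Conventions
- Run `dotnet test` before committing; all tests must pass.
- EF Core migrations live in `src/MyApp.Data/Migrations`.

## Notes
- Recently upgraded to .NET 10 -- prefer current APIs over older patterns.
```

That last line is exactly the kind of nudge that stops it from reaching for outdated info.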


Hey, if your project is important enough you might get your own Jia Tan (:


I’m from Austria, so plenty of options for high quality chocolate here. But they have gotten quite pricey lately.
Either way, I should lose weight (:


Maybe it has changed again, but in the past I gave it a try. When 16 GB was a lot. Then when 32 GB was a lot. I always thought “Not filling up the RAM anyway, might as well disable it!”
Yeah, no, Windows is not a fan. Like you get random “running out of memory” errors, even though with 16 GB I still had 3-4 GB free RAM available.
Some apps require the page file, and so do crash dumps. So I just set it to a fixed value (like 32 GB min + max) on my 64 GB machine.


Baking is relatively easy, but how do you make your own chocolate? I don’t think you can properly do that at home.


Don’t fully disable swap on Windows, it can break things :-/


With 32 and 64 GB systems I’ve never run out of RAM, so the RAM isn’t the issue at all.
Optimization just sucks.
Indie games are better anyway and don’t need much hardware power :)