
  • Calling something “vibe coded” feels similar to calling something “woke”. It’s used for everything, so it has lost all meaning.

    If I start writing a function name and the coding assistant suggests, character by character, exactly the function signature I would have typed, is that vibe coding? If I ask an agent to look at my classes and create a JSON schema from them, what about that? What if I ask it to write me some database connection boilerplate that is basically copy-pasted straight from the docs?

    The presence of a cursorrules file signifies… nothing. Most projects now have a prompt file to store general context, like styling conventions. Most programmers use coding assistants, and the use of coding assistants does not, in and of itself, mean bad quality. Obviously you have to hide it, because people get irrationally angry about it.

    The blind AI hate does not help. LLMs can be useful. Is it stupid to have an LLM write an email so the other side can use an LLM to summarize it? Yes. Is LLM slop annoying? Yes. That does not mean that everything is bad. If you check out the developer sphere (people who have been trying coding assistants for a couple of years now), the consensus is “overhyped but definitely useful”.

    As an example, read this comment section: https://news.ycombinator.com/item?id=44836879


  • The idea behind keys has always been that keys can be rotated. The vast majority of websites do that: you send the password once, then you get a rotating token for auth.

    Most people don’t do this, but you can sign SSH keys with a PKI and use the resulting certificates for auth.
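    A minimal sketch of how that works with OpenSSH’s built-in certificate support (file names and principals here are made up):

        # one-time: create the CA keypair
        ssh-keygen -t ed25519 -f ssh_ca -C "internal SSH CA"

        # sign a user's public key, principal "alice", valid 52 weeks
        # (produces id_ed25519-cert.pub next to the public key)
        ssh-keygen -s ssh_ca -I alice@laptop -n alice -V +52w id_ed25519.pub

        # on each server, trust the CA once in sshd_config:
        #   TrustedUserCAKeys /etc/ssh/ssh_ca.pub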

    Cryptographically speaking, getting your password onto a system means you have to copy the hash over, and hashing is not encryption. With keys, you are copying over the public key, which is not secret. Especially when managing many SSH keys, you can just store them in a repo no problem; you really shouldn’t do that with password hashes.
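    For comparison, distributing a public key is just appending one non-secret line to the server's authorized_keys, e.g.:

        # copies ~/.ssh/id_ed25519.pub into the server's authorized_keys
        ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server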


  • This is mostly nonsense.

    • Why block outgoing? It’s just going to cause issues for most people. If you’re going to do that, do it centrally (hardware firewall).
    • Why allow HTTP and NTP incoming when there is no HTTP/NTP server running?
    • If there is an HTTP server running, there is no mention of https://ssl-config.mozilla.org/ or ModSecurity.
    • If you’re using ufw anyway, why not go with application profiles instead of raw ports? (See the sketch after this list.)
    • In a modern distro the defaults are usually sane (maybe except TCP), and most of the stuff in the SSH config is already the default.
    • Why change the SSH port of a home server, which most likely is not reachable from the outside anyway?
    • Actually impactful stuff, like disabling services you don’t need (such as CUPS), is not mentioned
    • unattended-upgrades not mentioned
    • SELinux / AppArmor not mentioned
    • LKRG not mentioned https://lkrg.org/
    • Fail2ban not mentioned

    Don’t just copy random config from the internet; as annoying as it is, read the docs.
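    To make a few of these points concrete, here is roughly what the application-profile, unattended-upgrades, and Fail2ban suggestions look like on a Debian/Ubuntu-style system (package and profile names assumed):

        # ufw: allow by application profile instead of a raw port number
        sudo ufw app list          # lists profiles such as "OpenSSH"
        sudo ufw allow OpenSSH

        # check sshd's effective (usually already sane) config
        sudo sshd -T | grep -Ei 'permitrootlogin|passwordauthentication'

        # automatic security updates
        sudo apt install unattended-upgrades
        sudo dpkg-reconfigure -plow unattended-upgrades

        # fail2ban with its stock sshd jail
        sudo apt install fail2ban
        sudo systemctl enable --now fail2ban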


  • I want to write this in a separate post because I see many questionable suggestions:

    Your scenario does not allow for a simple rsync / ZFS copy, because those only work 1:many, meaning one “true” copy that gets replicated a couple of times.
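    For illustration, 1:many is a one-way push from a single source of truth (host names here are placeholders):

        # the one "true" copy gets pushed out; replicas never push back
        rsync -a --delete /data/ backup-site1:/data/
        rsync -a --delete /data/ backup-site2:/data/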

    As I understand it, you have a many:many scenario, where any location can access and upload new data. So if two locations change the same file on the same day, what do you do? many:many data storage is a hard problem, which is why a simple solution unfortunately won’t work. A lot of research has gone into this for hyperscalers such as AWS, GCP, Azure, etc. They all basically arrived at the same solution: distributed, quorum-based storage systems with a unified interface. Everyone accesses the “same” interface, and under the hood the data gets replicated three times. That basically turns it back into 1:many, with the advantages of many:many.


  • So I think this can be achieved at different levels of complexity.

    First of all, you may want to look into ZFS, because there you can have multiple “partitions” (datasets) that all share the entire free space of the device or devices, meaning you won’t need two separate drives. More likely you’ll want multiple smaller, cheaper drives combined together, because that is cheaper and more fault tolerant.
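    A rough sketch of that idea (pool and dataset names invented):

        # three cheap disks combined into one fault-tolerant pool
        sudo zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

        # "partitions" (datasets) that all draw from the pool's free space
        sudo zfs create tank/media
        sudo zfs create tank/backups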

    You also need some way to actually access the data. You have not shared how that is supposed to work (SMB/NFS, etc.); in either case you need software that can do that, and there are various options.
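    If you go the ZFS route, NFS exports, for example, can be handled by ZFS itself (the subnet here is an assumption):

        # export a dataset read/write to the local subnet over NFS
        sudo zfs set sharenfs="rw=@192.168.1.0/24" tank/media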

    Then, you probably want to create some form of overlay network. This will make it so that the individual devices can talk to each other like they are in the same LAN. You could use Tailscale/Headscale for this. If you have static public IPs you can probably get around this and build your own mesh using WireGuard (spoiler: that’s what Tailscale does anyway).
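    A sketch with Tailscale (the Headscale URL is a placeholder):

        # join each device to the same tailnet
        sudo tailscale up

        # or point at a self-hosted Headscale control server instead
        sudo tailscale up --login-server https://headscale.example.org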

    Then, the syncing. You can try to use Syncthing for this, but I am not sure it will work well in this scenario.

    The better solution is to use a distributed storage system like Garage, but that requires some technical expertise: https://garagehq.deuxfleurs.fr/

    Garage would, for example, allow you to store only two copies, so with three locations you would actually gain some storage space. Or you stay with the 3x replication factor. Anyway, Garage is an object store, which backup software will absolutely support, but there is no easy NFS/SMB, so your smart TV, vanilla Windows machine, or whatever will not be able to access it. Plus side: it’s the only software you need, no ZFS required.
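    As a rough sketch of the Garage side (node IDs, zone names, and capacities are placeholders; the replication factor itself is set in Garage’s config file), each location becomes a zone in one cluster layout:

        # register each site's node in the shared layout
        garage layout assign -z site1 -c 1T <node_id_1>
        garage layout assign -z site2 -c 1T <node_id_2>
        garage layout assign -z site3 -c 1T <node_id_3>

        # review and activate the layout
        garage layout show
        garage layout apply --version 1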

    Overall it’s a pretty tricky thing that will require some managing; there is no super easy way to set this up.