• 0 Posts
  • 4 Comments
Joined 2 years ago
Cake day: July 30th, 2023

  • Ah kay, definitely not a RAM size problem then.

    iostat -x 5 will print per-drive stats every 5 seconds; the first output is an average since boot. Check that all of the drives show similar values while performing a write (example commands at the end of this comment). It might be that one drive is having problems and slowing everything down, hopefully unlikely if they are brand new drives.

    zpool iostat -w will print a latency histogram. Check whether any drive has a lot of requests above 1s, and whether they sit in the disk or sync queues. Here’s mine with 4 HDDs in z1 working fairly happily for comparison:

    [screenshot: zpool iostat -w latency histogram from my 4-HDD raidz1 pool]

    The init_on_alloc=0 kernel flag I mentioned below might still be worth trying.
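
    A rough sketch of how I'd run both checks at once; the pool name "tank" and the test file path are placeholders for whatever your setup uses:

        # Terminal 1: extended per-drive stats every 5 seconds
        iostat -x 5

        # Terminal 2: ZFS latency histogram for the pool, refreshed every 5 seconds
        zpool iostat -w tank 5

        # Terminal 3: generate a sustained local write to the pool
        dd if=/dev/zero of=/tank/write-test.bin bs=1M count=4096 conv=fsync status=progress

        # In the iostat output, compare w_await and %util across the four drives;
        # one drive sitting much higher than the others points at a slow disk,
        # cable, or port.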



  • After some googling:

    Some Linux distributions (at least Debian, Ubuntu) enable init_on_alloc option as security precaution by default. This option can help to prevent possible information leaks and make control-flow bugs that depend on uninitialized values more deterministic.

    Unfortunately, it can lower ARC throughput considerably (see bug).

    If you’re ready to cope with these security risks, you may disable it by setting init_on_alloc=0 in the GRUB kernel boot parameters.

    I think it’s set to 1 on Raspberry Pi OS; there you set it in /boot/cmdline.txt.

    Exhaustive ZFS performance tuning guide
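
    For reference, a sketch of how the flag would be set (paths assume a stock install, double-check before editing):

        # Debian/Ubuntu (GRUB): edit /etc/default/grub and add the flag, e.g.
        #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash init_on_alloc=0"
        sudo update-grub

        # Raspberry Pi OS: append init_on_alloc=0 to the single line in
        # /boot/cmdline.txt (or /boot/firmware/cmdline.txt on newer releases).

        # Reboot, then verify it took effect:
        grep -o 'init_on_alloc=[01]' /proc/cmdline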


    sync=disabled makes ZFS treat sync writes as async, so data hits the disks in batches roughly every 5 seconds instead of when the software demands it, which may explain your LED behavior (commands to check and toggle it are at the end of this comment).

    Jeff Geerling found that writes to a Z1 pool were 74 MB/sec using the Radxa Penta SATA HAT with SSDs. Any HDD should manage that, so the SATA HAT itself is likely the bottleneck.

    Are you performing writes locally, or over smb?

    You can try iostat or zpool iostat to monitor drive writes and latencies; that might give a clue.

    How much RAM does the Pi 5 have?
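
    Checking and toggling the sync property is quick; "tank" here is a placeholder for your pool name:

        # Show the current setting (standard is the default)
        zfs get sync tank

        # Relax it for a test run; sync writes then ride along with the
        # regular ~5 second transaction group flush
        sudo zfs set sync=disabled tank

        # Put it back afterwards
        sudo zfs set sync=standard tank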


    OpenAI noticed that Generative Pre-trained Transformers get better when you make them bigger. GPT-1 had 117 million parameters. GPT-2 bumped it up to 1.5 billion. GPT-3 grew to 175 billion. Now we have models with over 300 billion.

    To run one, every generated word requires doing math with every parameter, which at these sizes is a massive amount of work, running on the most power-hungry, top-of-the-line chips (rough numbers below).

    There are efforts to make smaller models that are still effective, but we are still in the range of 7-30 billion parameters before you get anything useful out of them.
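
    Back-of-the-envelope, assuming a dense model and roughly 2 floating-point operations per parameter per generated token:

        175 billion parameters x 2 FLOPs ≈ 350 GFLOPs per generated token
        175 billion parameters x 2 bytes (16-bit weights) ≈ 350 GB just to hold the model,
        far more than any single consumer GPU has, so the work gets spread across
        racks of datacenter accelerators.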