The caveat was that you needed a read-only root, which meant freezing the OS; anything that needed changing was either stored in a RAM disk (which you had to set up) or a per-host NFS area (kind of like overlayfs, but not).
If you needed to update the root dir, you chrooted into it and did the (yum) update.
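A minimal sketch of that update flow, assuming the exported root lives at /srv/netroot on the server (the path is illustrative):

```shell
# Illustrative path; the exported read-only root is assumed at /srv/netroot.
# Bind-mount the pseudo-filesystems the package manager expects, then update.
mount --bind /proc /srv/netroot/proc
mount --bind /sys  /srv/netroot/sys
mount --bind /dev  /srv/netroot/dev

chroot /srv/netroot yum -y update

# Clean up the bind mounts when done.
umount /srv/netroot/proc /srv/netroot/sys /srv/netroot/dev
```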
Wouldn't that need a local disk?
Then Anaconda or whatever OS installer picks up and installs the OS in a PXE install sequence when there is a local disk.
Does anyone have an opinion on iSCSI vs NBD?
https://forums.gentoo.org/viewtopic.php?p=4895771&sid=f9b7ac...
https://github.com/NetworkBlockDevice/nbd/issues/93
Whether that’s the case with the latest version, I don’t know, but it’s something you might test if you choose to try it.
Unfortunately in this specific case (diskless booting) dracut support still sucks, which I really need to get around to fixing at some point.
SFP28 might be cheap enough now too, I'm not sure...
To make this actually work well, consider modifying your switches' QoS settings to carve out a priority VLAN for iSCSI traffic.
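On the host side, one complement to the switch QoS policy is marking the iSCSI traffic itself. A sketch, assuming iSCSI on its standard TCP port 3260 and a switch that honors DSCP markings (the DSCP class choice is arbitrary):

```shell
# Mark iSCSI traffic (TCP 3260) with DSCP CS4 so the switch's QoS policy
# can prioritize it. Run as root; CS4 is an illustrative class choice.
iptables -t mangle -A OUTPUT -p tcp --dport 3260 -j DSCP --set-dscp-class cs4
iptables -t mangle -A OUTPUT -p tcp --sport 3260 -j DSCP --set-dscp-class cs4
```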
I have recently upgraded my house to 10Gbps Ethernet, with only one room still stuck at gigabit, and unfortunately, it's my main office. I'm working on getting the drop there now (literally, just taking a break here).
Even once I'm done, accessing an iSCSI drive over 10GbE will be 4-8 times slower than a local NVMe drive, but it will sure be a lot better than it was!
Ideally, I could run VMs on the NAS and have great performance, but that's another hardware upgrade...
NVMe-oF is the protocol with the least overhead for network drives; with a proper setup you lose only 10-20% latency compared to a local disk, even with Intel Optane. Throughput should be nearly the same.
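For reference, attaching an NVMe-oF target over TCP looks roughly like this with nvme-cli (the address, port and NQN below are placeholders):

```shell
# Load the TCP transport and discover subsystems on the remote portal.
modprobe nvme-tcp
nvme discover -t tcp -a 192.168.1.10 -s 4420

# Connect to a discovered subsystem; the NQN here is a placeholder.
nvme connect -t tcp -a 192.168.1.10 -s 4420 \
    -n nqn.2024-01.example:nvme:target1

# The namespace then shows up as a local block device, e.g. /dev/nvme1n1.
nvme list
```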
Do these benefit the iSCSI target end of the equation too, or just the initiator? And do they work like an HBA, where you configure the card in a firmware setup menu, or does it just transparently accelerate the software initiator on Windows/Linux?
… you don't have to update the UEFI entries every time the kernel updates. (I guess you might if you use, say, a kernel with CONFIG_EFI_STUB and place the new kernel under a different filename than what the UEFI boot entry points to … but I was under the impression that that'd be kind of an unusual setup, and I thought most of us booting with EFI were doing so with GRUB.)
... And it's very, very fun.
iSCSI is a block device: you gain a 'disk drive' sitting on your network. Put it on a dedicated network for disk traffic and use it to host on-prem virtualization, and you have what's called a SAN array.
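A minimal open-iscsi session setup looks like this (the portal address and IQN are placeholders):

```shell
# Discover the targets exported by the portal.
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Log in to one of the discovered targets; the 'disk drive'
# (e.g. /dev/sdb) appears once the session is established.
iscsiadm -m node -T iqn.2024-01.example:storage.lun1 \
    -p 192.168.1.10 --login
```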
nbdinfo nbd://server
nbdcopy nbd://server:2001/ nbd+unix:///?socket=/tmp/localsock
https://github.com/NetworkBlockDevice/nbd/blob/master/doc/ur...

How well does it work in environments with noticeable network latency?
You can actually see what happens quite easily if you've got an OS image handy. With a Fedora VM image:
$ virt-builder fedora-42 --root-password=password:123456
$ nbdkit file fedora-42.img --filter=delay rdelay=50ms \
--run 'qemu-system-x86_64 -machine accel=kvm:tcg \
-cpu max -m 2048 \
-drive file="$uri",format=raw,if=virtio'
("$uri" expands to the NBD URI of the nbdkit server, which qemu can parse natively.)

Even that 1 second delay is painful, since it turns out that booting is quite serialized. Edit: I turned down the latency to 50ms in the example, which is a bit more realistic. Still painful.
Hmmh? I haven't done so in years, but configuring multi-boot used to be considerably easier than disk-less operation.
There are some exceptions (some hardware from Microsoft doesn't trust the third party certificate used, for instance, and Red Hat Enterprise has their own root of trust if you opt into that), but they're very rarely ever an issue.
The Linux NTFS resizing code also has a tendency to trigger data corruption. Not really Linux' fault, but it's a good reason to do partitioning from inside of Windows, which can be a pain already.
Another issue I've run into is Windows creating a very small (~300MiB) EFI partition that barely fits the Windows bootloader, let alone a Linux bootloader and kernel. You can resize and recreate the partition of course, but reconfiguring Windows to use a different boot partition is a special kind of hell I try to avoid.
If Linux corrupts someone's files, it is 100% Linux's fault and absolutely unacceptable.
You can install a prettier looking boot selection menu like rEFInd, but the default works just as well, and I think the mainstream distros all set up secure boot too. On my PC it was very easy; on my (8-year-old) laptop I had to add some secure boot keys, and the BIOS was very confusing, using terms that didn't seem to match what they should have been.
My setup has worked almost entirely flawlessly and survived updates from both OSes. The only issue has been "larger" Windows feature updates putting Windows back as the first OS in the list, but that happens maybe once or twice a year? And it's a quick BIOS change to fix the order.
Luckily gaming now works well enough that the only reason to use Windows was gone. Well, apart from some online games played during lan parties.
I have been waiting for such a feature for like 15 years now. Without it, zfs is just a fad and useless filesystem (all that complexity for NOTHING).
ext2 for the win! still
--
0: https://klarasystems.com/articles/troubleshooting-zfs-common-issues-how-to-fix-them/

Looks like ZFS is only used to store the image on the server, though. I was expecting this to be more interesting because of that.
rEFInd is _so_ much simpler: one EFI entry, one text config file in the EFI partition, nothing that needs to change when the kernel updates, and no massive pile of templating and moving parts to mysteriously break, dumping you at an impenetrable GRUB "rescue" shell.
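For illustration, a manual stanza in that text config file might look like this (the paths and root device are examples; rEFInd can also autodetect kernels with no stanza at all):

```
menuentry "Linux" {
    icon     /EFI/refind/icons/os_linux.png
    volume   "ESP"
    loader   /vmlinuz-linux
    initrd   /initramfs-linux.img
    options  "root=/dev/nvme0n1p2 rw quiet"
}
```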
That said, I'm probably going to try a straight EFI boot on my next laptop.
Which ones? If you want to, all you need is a single UKI in /EFI/Linux and everything is synthesized from that.
https://uapi-group.org/specifications/specs/boot_loader_spec...
[1]: https://uapi-group.org/specifications/specs/boot_loader_spec...
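Building such a UKI can be sketched with systemd's ukify tool (the kernel version, paths and cmdline below are illustrative):

```shell
# Bundle kernel, initrd and cmdline into a single UKI that the firmware
# or boot manager can launch directly; paths are examples.
ukify build \
    --linux=/boot/vmlinuz-6.9.0 \
    --initrd=/boot/initramfs-6.9.0.img \
    --cmdline="root=/dev/nvme0n1p2 rw" \
    --output=/boot/efi/EFI/Linux/linux-6.9.0.efi
```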
TBH I've only used it for Hackintoshes and BSDs. I don't know why you'd want it.
But I am finding this hard to post on account of this big hairy grey thing with a long prehensile nose that's in the way...
They only work on UEFI. I don't like UEFI. I have kit in near-daily use that doesn't have UEFI at all.
GRUB works on BIOS, UEFI, x86, Arm, whatever. It boots Linux, all the BSDs, Haiku, whatever. So GRUB wins.
I hate GRUB for its hostility and unfriendliness and impenetrability... But it does the job on more kit than, well, any of the alternatives.
I run a bunch of decade-old HP Microservers. All BIOS-only.
My personal laptops are old Thinkpads, from before the keyboards went crappy and Lenovo took away the expansion options. So they're all about 15, maybe 20, generations old at the newest, but they are all maxed out and go like stink. BIOS boot mode optional.
My default hypervisor is VirtualBox, because it runs the same on Linux, Windows and Mac. Defaults to BIOS boot.
This is not like some ancient history. All run current OSes and distros.
Thus the /s :-)
rEFInd seemed best for me too
I think that Fedora doesn't know to update its configuration when I install or remove a kernel, so I use rEFInd only to run systemd-boot which is pretty well supported by Fedora. I could probably try letting rEFInd scan the boot partition for kernels or modify/tune kernel-install [1], but why fix something that's not broken?
As a side note, I don't like how by default rEFInd does some things automatically and how it makes the boot menu kind of bloated. I had to configure it a bit, but at least it lets you include separate configuration files that override the defaults or add menu entries. This is why I don't consider it quite simple; I prefer the more minimalist approach of systemd-boot.
[1]: https://www.freedesktop.org/software/systemd/man/latest/kern...
Now I use GRUB, because I don't know much better. I'll check out rEFInd.
I had to do it a bunch and after many many times I still couldn't remember without looking at my notes
But this idea or ones much like it have been presented before...
"Hassle-free diskless Virtual Machines with Xen and Alpine Linux"
https://jonnytyers.wordpress.com/2016/08/16/hassle-free-disk...
"How to make a Diskless Virtual Machine (KVM)"
https://github.com/lispydev/diskless-kvm
And if you just want a dead easy PXE server for bare metal...
Ansible? Amazing amount of complexity for something so simple. Sigh ... Linux. It's just a commercial toaster to me these days.
I net boot 9front and it takes 5 minutes to set up. Plus it's all built in, so you don't need to install anything. Just enable tftp and dhcp on the 9 machine (no issue for me since 9 runs my network), then set up a few lines in your ndb file for the machine you want to boot, which include MAC, IP and boot image file. Then you need a simple plan9.ini and boot scripts for the machine in /cfg/. Last, a quick change to the root file server to listen on IP. Then power on the machine and it boots from the same root fs, so you're starting at your desktop on a new machine. Done. Why make something so awesome so difficult?
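The ndb and plan9.ini pieces described above look roughly like this (MAC, IPs and names are made up, and the attribute names are from memory; check the 9front FAQ for the exact layout):

```
# /lib/ndb/local entry for the diskless client (values are examples)
sys=worker1 ether=0000deadbeef ip=192.168.1.21
	bootf=/386/9pc
	fs=192.168.1.2 auth=192.168.1.2

# /cfg/pxe/0000deadbeef (plan9.ini keyed by the client's MAC)
bootfile=/386/9pc
nobootprompt=tcp
```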
That's more interesting than the netboot thing. Is 9front running your router? What do you do for WiFi? Have you written this up, and/or could you point me to any references for doing it?
However, at the moment my network is in shambles so I'm only running 9front for dhcp/dns/tftp. Lots of life things so I haven't had time to properly setup everything.
Wi-Fi is handled through a freebie unifi AP friend gave me. I run the controller in a docker manually when needed.
This is by no means the only way to implement this, and I'm sure there are better ways.
Thank you for the feedback, I’ll edit it and add more points on alternative stacks and the iSCSI network pickiness.