Not sure how comparable they are since I've never used Monodraw (I don't run Macs), but there are https://asciiflow.com/ and https://monosketch.io/, which I usually use. The latter uses some advanced UTF-8 characters; when I tried to incorporate its output into my personal blog, I had to use the specific monospaced font from their repo, as otherwise the lines wouldn't line up correctly.
That should have been possible for a while. Get the block storage to the node (FC or configure iSCSI), configure multipathing in most situations, and then configure LVM (thick) on top and mark it as shared. One nice thing this release brings is the option to finally also have snapshots for such a shared storage.
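The setup above boils down to a few commands. A minimal sketch, assuming the LUN already shows up as a multipath device on every node; the device path, VG name, and storage ID are examples, not anything mandated by PVE:

```shell
# On one node: put a PV and a (thick) VG on the shared LUN
# (/dev/mapper/mpatha is an example multipath device name)
pvcreate /dev/mapper/mpatha
vgcreate vg_shared /dev/mapper/mpatha

# Register it cluster-wide as shared LVM storage, so PVE knows
# every node sees the same VG and won't copy disks on migration
pvesm add lvm shared-lvm --vgname vg_shared --shared 1 --content images
```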
When migrating a VM from one host to another, it would require cloning the LVM volume rather than just importing the volume group on the other node and starting the VM up.
I have existing VMware guests that I'd like to migrate over in bulk. That would be easy enough to do by converting the VMDK files, but using LVM means creating an LVM group for each VM and importing the contents of the VMDK into the LV.
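For the conversion step, `qemu-img` can turn a VMDK into a raw image that can then be written into an LV. A sketch with hypothetical file names:

```shell
# Convert a VMware disk to raw (-p shows progress); the raw image
# can then be dd'd into an LV or imported via PVE tooling
qemu-img convert -p -f vmdk -O raw guest01.vmdk guest01.raw
```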
Hmm, staying with iSCSI: you should create one large LUN that is available on each node. Then it is important to mark the LVM storage as "shared". That way, PVE knows that all nodes access the same LVM, so copying the disk images is not necessary on a live migration.
With such a setup, PVE will create LVs on the same VG for each disk image. So no handling of multiple VGs or LUNs is necessary.
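With the shared flag set, a live migration only moves the VM's RAM and state, not the disks. A sketch, with the VMID and node name as placeholders:

```shell
# Live-migrate VM 100 to node2; with shared LVM storage the disk
# images stay where they are, only memory/state is transferred
qm migrate 100 node2 --online
```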
In case the author reads this: instead of `losetup` you could have imported the image with `qm disk import VMID path/to/source TARGET_STORAGE`
It would then show up as unused disk in the VM config to be further configured :-)
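For example (VMID, source path, and storage ID are hypothetical):

```shell
# Import the VMDK directly into the target storage; conversion to
# the storage's native format happens on the fly
qm disk import 120 /mnt/export/guest01.vmdk shared-lvm

# The imported volume appears as an unused disk; attach it, e.g.
# (the resulting volume name here is an example)
qm set 120 --scsi0 shared-lvm:vm-120-disk-0
```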
This helped me when I was using them some years back during my university time. Managed to place some papers on a heater and the ink disappeared. Putting it in the freezer for a bit made it all come back.
That highly depends on the underlying storage. If it is something that supports snapshots (ZFS, Ceph, LVM thin), then it should work fine; backups will also be possible without any downtime, as they are read from a temporary snapshot.
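Such a snapshot-mode backup is what `vzdump` does by default on capable storage. A sketch, with the CT ID and backup storage name as placeholders:

```shell
# Back up container 101 without downtime, assuming its rootfs lives
# on snapshot-capable storage (ZFS, Ceph, LVM thin)
vzdump 101 --mode snapshot --storage backup-store
```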
Even with ZFS you still have to wait for the RAM to be dumped, don't you? And it will freeze at least for the time it takes to write the dump. Do they have CoW for container memory?
But even if they had, the RAM snapshot still needs to be written, without freezing the container. I would appreciate an option where I could ignore everything that was not fsynced, e.g. for the Postgres use case. In that case a normal ZFS snapshot should be enough.
RAM and other state can be part of a snapshot for VMs, in which case the VM will continue right where it was.
The state of a container is not part of the snapshot (just checked), as it is really hard to capture the state of a container (CPU, network, all kinds of file and socket handles) and restore it, because all an LXC container is, is local processes in their separate cgroups. This is also the reason why a live migration is not really possible right now, as all of that would need to be cut out of the current host machine and restored on the target machine.
This is much easier for VMs as Qemu offers a nice abstraction layer.
The next time you install Proxmox VE and are not happy with the default layout, check out the parameters for LVM [0]. I personally prefer to install on ZFS, as all ZFS datasets share the same free space, unlike with LVM.
If you want full control of the disk layout and maybe want to do some more custom stuff, consider installing a base Debian and then install Proxmox VE on top of it [1].
The PVE installer makes it easy to get a system up and running quickly, but it is opinionated about what the disk layout should be. That, of course, might not work for everyone.
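The Debian-then-PVE route from [1] boils down to roughly the following. A sketch, assuming Debian Bookworm and that the Proxmox repository key has already been installed; see the linked article for the complete steps:

```shell
# Add the Proxmox VE package repository (bookworm is an example release)
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve.list

# Install Proxmox VE on top of the existing Debian layout
apt update
apt install proxmox-ve
```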