Installing Proxmox with legacy boot

If you still have some old hardware you want to recycle, you might run into issues, since the Proxmox installer assumes that your system can boot from GPT-partitioned disks. This tutorial guides you through installing Proxmox with an MBR partition table. I wrote it because I am currently using an old laptop that still has enough power for some VMs but supports neither UEFI nor GPT boot.

That being said, even if you can run Proxmox with MBR, you will still need a 64‑bit system to operate it. Furthermore, disks larger than 2TB will not be fully usable under MBR.
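The 2TB limit follows directly from the MBR format: partition start and size are stored as 32-bit sector counts, so with the common 512-byte sector size the addressable space caps out at 2 TiB. A quick sanity check in the shell:

```shell
# MBR stores partition start/size as 32-bit sector counts.
# With 512-byte sectors, the addressable limit in TiB is:
echo $(( 2**32 * 512 / 1024**4 ))  # -> 2
```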

Do not follow this if your system already supports UEFI/GPT properly.

Prerequisites

Prepare a bootable USB device, e.g. with Ventoy, and make sure it uses MBR. Ventoy creates a bootable USB stick with a data partition for ISO files and lets you choose at startup which ISO to boot. It can be installed in GPT or MBR mode, and you can switch between the modes without losing your ISOs, which makes it a good tool for this purpose. You will need the Proxmox installer ISO on the stick and, if you want to use ZFS, an Ubuntu live ISO as well.

For this tutorial I used Ubuntu Server. After the installer started, I switched to a terminal with Ctrl+Alt+F3/F4. This allowed me to run everything without graphical overhead.

Installation

⚠️ WARNING

This tutorial manipulates your partition tables.

In case of an error, you could lose your data, possibly also on disks other than the intended one.

Use this ONLY if you have backups of ALL connected disks.

Install Proxmox as usual, but uncheck the box for automated restart after the installation finishes. If you are using ZFS, make sure the USB device and all other disks you don’t want to use for ZFS are marked as "do not use".

After the installation has finished, do not reboot; switch to a console by pressing Ctrl+Alt+F3.

We now need to identify the correct disk. To do so, we can for example match on the disk's size; a default Proxmox installation creates three partitions on it.

```
root@pve01:~# lsblk
NAME       MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0        7:0    0 108.7M  1 loop
loop1        7:1    0   611M  1 loop
sda          8:0    0 465.8G  0 disk
|-sda1       8:1    0  1007K  0 part
|-sda2       8:2    0     1G  0 part
`-sda3       8:3    0 464.8G  0 part
sdb          8:16   1    15G  0 disk
|-sdb1       8:17   1  14.9G  0 part
| |-ventoy 252:0    0   1.7G  1 dm   /cdrom
| `-sdb1   252:1    0  14.9G  0 dm
`-sdb2       8:18   1    32M  0 part
sdc          8:32   1     0B  0 disk
sr0         11:0    1  1024M  0 rom
```

In my case, sda is the disk I am looking for.
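If many disks are attached, a simple filter on the expected size can narrow the list down. This is just a sketch; the size string (465.8G here) is an example and must match your disk exactly as lsblk prints it:

```shell
# Print whole disks only (-d: no partitions, -n: no header),
# then keep the one whose size matches the disk we expect.
lsblk -dno NAME,SIZE,TYPE | awk '$2 == "465.8G" && $3 == "disk" {print $1}'
```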

First we’re going to convert the GPT into a classic MBR partition table. Large disks (>2TB) will not be fully usable afterwards. Then we delete the first partition, the small (1007K) BIOS boot partition that GRUB uses on GPT disks. Since it sits right at the beginning of the disk, removing it leaves a gap behind the MBR where GRUB can embed itself.

```
gdisk /dev/sdX
  r    # recovery and transformation options (experts only)
  g    # convert GPT into MBR and exit
  w    # write table to disk and exit
  y    # confirm

fdisk /dev/sdX
  d    # delete a partition
  1    # partition number
  w    # write table to disk and exit
```

Now we prepare a chroot environment to reinstall GRUB and update its configuration.

⚠️ EXT4 ONLY

```
mount /dev/sdX3 /mnt
```

⚠️ ZFS ONLY First we import the pool without mounting it, then we temporarily override the dataset mountpoint stored in the ZFS metadata. This change persists in the metadata and must be reverted after the installation; otherwise the system will mount incorrectly on the next boot.

```
# make the pool available without mounting it (-N)
zpool import -N rpool   # replace the name of the pool in case you changed it
zfs set mountpoint=/mnt rpool/ROOT/pve-1
```

For both file system types, we need to make sure /dev, /proc and /sys are available inside the chroot:

```
mount --bind /dev/ /mnt/dev/
mount --bind /proc/ /mnt/proc/
mount --bind /sys/ /mnt/sys/

# and finally chroot to /mnt
chroot /mnt
```

chroot makes the specified directory become / for the subshell it starts. It is also frequently used for server processes, to isolate them from the rest of the system in case an exploit provides filesystem access.

Next, we mount the boot partition so the kernel images and the GRUB configuration become available. Normally these steps are handled by proxmox-boot-tool, but for this setup we need to bypass it. On a Proxmox installation, the grub-install command is replaced by a shell script whose main purpose is to tell you to use proxmox-boot-tool. To bypass it, we run grub-install.real directly.

```
mount /dev/sdX2 /boot
grub-install.real /dev/sdX
update-grub
```

For the EXT4 installation, we are done. For ZFS, we still need to revert the mountpoint stored in the pool and export the pool again, to avoid trouble during the next boot.

⚠️ZFS ONLY

```
umount /boot
exit  # leave the chroot
# first unmount everything inside the mount, otherwise the device will be busy
umount /mnt/dev/ /mnt/proc/ /mnt/sys/
zfs umount /mnt
zfs set mountpoint=/ rpool/ROOT/pve-1
# now we are exporting the pool again #dobbyisfree
zpool export rpool
```

In both cases, we can now reboot the system.

Troubleshooting

GRUB shell

If you end up in the GRUB shell, make sure your update-grub command ran properly with /dev/sdX2 mounted to /boot.

EXT4

For ext4 installations, you can boot the Proxmox installer again, choose Advanced Options, and then the Terminal Installation (Debug) option.
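From that debug shell, you can redo the GRUB steps from earlier. The following is only a sketch: sdX stands in for your disk, and the partition numbers assume the default layout used in this tutorial.

```shell
# Re-enter the installed system and reinstall GRUB (ext4 layout).
mount /dev/sdX3 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
mount /dev/sdX2 /boot
grub-install.real /dev/sdX
update-grub
```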

ZFS

GRUB shell

To verify that everything works in general, you can run:

```
ls                     # check which drives exist
ls (hdX,msdosY)/       # browse through the drives
set root=(hdX,msdosY)  # set to the drive containing a kernel (vmlinuz-....)
# use tab completion on the following commands
linux /vmlinuz... root=ZFS=rpool/ROOT/pve-1 boot=zfs
initrd /init...
boot
```

Initramfs shell

If you didn’t properly export the pool at the end, you will end up in the initramfs shell. The shell usually suggests how to fix the issue, e.g.

```
zpool export -f rpool
```

Another possibility is that you forgot to set the mountpoint back to /. In this case you can try to run

```
zfs set mountpoint=/ rpool/ROOT/pve-1
```

If this doesn't work in the initramfs shell, please use Ubuntu (see below).

Ubuntu

In case you need further debugging on a ZFS installation or want to redo a step, you can boot the Ubuntu installer as mentioned above (Ctrl+Alt+F3/F4 after the installer started).

To install the ZFS tools run:

```
apt-get install zfsutils-linux
```

You can now follow the instructions as before.
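As a summary, re-entering the installed system from Ubuntu looks roughly like this. It is a sketch, not a fixed recipe: the pool and dataset names (rpool/ROOT/pve-1) assume the default Proxmox layout.

```shell
# Re-import the pool and re-enter the installed system from the Ubuntu live environment.
zpool import -N rpool
zfs set mountpoint=/mnt rpool/ROOT/pve-1
zfs mount rpool/ROOT/pve-1
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
# when done: leave the chroot, unmount everything under /mnt,
# set the mountpoint back to / and export the pool (see above)
```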