Linux Server Installation UEFI/4K blocks partition error
I'm trying to install Ubuntu 18.04 Server on a native 4K-sector NVMe SSD, and the installation fails during the partition-creation phase.

From the installer I can see that the detected disk size is wrong: it shows eight times the real size and tries to use that size in the partition command for "/". I suspect it is computing the size from 512-byte blocks instead of 4096-byte blocks, but I couldn't find where this happens. Has anybody had a similar issue and knows how to fix it?



From Install.log:



    previous partition number for 'part-3' found to be '1'
    previous partition: /sys/class/block/nvme0n1/nvme0n1p1
    previous partition.size_sectors: 131072
    previous partition.start_sectors: 256
    adding partition 'part-3' to disk 'disk-0' (ptable: 'gpt')
    partnum: 2 offset_sectors: 131328 length_sectors: 625989119
    Running command ['sgdisk', '--new', '2:131328:626120447', '--typecode=2:8300', '/dev/nvme0n1'] with allowed return codes [0] (capture=True)
    An error occured handling 'part-3': ProcessExecutionError - Unexpected error while running command.
    Command: ['sgdisk', '--new', '2:131328:626120447', '--typecode=2:8300', '/dev/nvme0n1']
    Exit code: 4
    Reason: -
    Stdout: ''
    Stderr: Could not create partition 2 from 131328 to 626120447
    Could not change partition 2's type code to 8300!
    Error encountered; not saving changes.


Version:
Ubuntu 18.04.2 LTS



Start of Install.log (there is already a partition from a previous installation attempt):



    curtin: Installation started. (18.2)
    start: cmd-install/stage-partitioning/builtin/cmd-block-meta: curtin command block-meta
    get_path_to_storage_volume for volume disk-0
    Processing serial Logan_DU01-03-02-A01-061 via udev to Logan_DU01-03-02-A01-061
    devsync for /dev/nvme0n1
    Running command ['partprobe', '/dev/nvme0n1'] with allowed return codes [0, 1] (capture=False)
    Running command ['udevadm', 'settle'] with allowed return codes [0] (capture=False)
    TIMED udevadm_settle(): 0.092
    devsync happy - path /dev/nvme0n1 now exists
    return volume path /dev/nvme0n1
    Declared block devices: ['/dev/nvme0n1']
    start: cmd-install/stage-partitioning/builtin/cmd-block-meta/clear-holders: removing previous storage devices
    Running command ['mdadm', '--assemble', '--scan', '-v'] with allowed return codes [0, 1, 2] (capture=True)
    mdadm assemble scan results:

    mdadm: looking for devices for further assembly
    mdadm: Cannot assemble mbr metadata on /dev/sda1
    mdadm: Cannot assemble mbr metadata on /dev/sda
    mdadm: no recogniseable superblock on /dev/nvme0n1p1
    mdadm: no recogniseable superblock on /dev/nvme0n1
    mdadm: no recogniseable superblock on /dev/loop6
    mdadm: no recogniseable superblock on /dev/loop5
    mdadm: no recogniseable superblock on /dev/loop4
    mdadm: no recogniseable superblock on /dev/loop3
    mdadm: no recogniseable superblock on /dev/loop2
    mdadm: no recogniseable superblock on /dev/loop1
    mdadm: no recogniseable superblock on /dev/loop0
    mdadm: No arrays found in config file or automatically

    Running command ['mdadm', '--detail', '--scan', '-v'] with allowed return codes [0, 1] (capture=True)
    mdadm detail scan after assemble:

    Running command ['udevadm', 'settle'] with allowed return codes [0] (capture=False)
    TIMED udevadm_settle(): 0.009
    Running command ['pvscan', '--cache'] with allowed return codes [0] (capture=True)
    Running command ['vgscan', '--mknodes', '--cache'] with allowed return codes [0] (capture=True)
    Running command ['vgchange', '--activate=y'] with allowed return codes [0] (capture=True)
    Loading kernel module bcache via modprobe
    Running command ['modprobe', '--use-blacklist', 'bcache'] with allowed return codes [0] (capture=False)
    Loading kernel module zfs via modprobe
    Running command ['modprobe', '--use-blacklist', 'zfs'] with allowed return codes [0] (capture=False)
    zfs filesystem is not supported in this environment
    devname '/sys/class/block/nvme0n1' had holders:
    devname '/sys/class/block/nvme0n1/nvme0n1p1' had holders:
    Current device storage tree:
    nvme0n1
    `-- nvme0n1p1
    Shutdown Plan:
    {'level': 1, 'device': '/sys/class/block/nvme0n1/nvme0n1p1', 'dev_type': 'partition'}
    {'level': 0, 'device': '/sys/class/block/nvme0n1', 'dev_type': 'disk'}
    shutdown running on holder type: 'partition' syspath: '/sys/class/block/nvme0n1/nvme0n1p1'
    Running command ['lsblk', '--noheadings', '--bytes', '--pairs', '--output=ALIGNMENT,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,FSTYPE,GROUP,KNAME,LABEL,LOG-SEC,MAJ:MIN,MIN-IO,MODE,MODEL,MOUNTPOINT,NAME,OPT-IO,OWNER,PHY-SEC,RM,RO,ROTA,RQ-SIZE,SIZE,STATE,TYPE,UUID', '/dev/nvme0n1'] with allowed return codes [0] (capture=True)
    get_blockdev_sector_size: info:
    {
        "nvme0n1": {
            "ALIGNMENT": "0",
            "DISC-ALN": "0",
            "DISC-GRAN": "0",
            "DISC-MAX": "0",
            "DISC-ZERO": "0",
            "FSTYPE": "",
            "GROUP": "disk",
            "KNAME": "nvme0n1",
            "LABEL": "",
            "LOG-SEC": "4096",
            "MAJ:MIN": "259:0",
            "MIN-IO": "4096",
            "MODE": "brw-rw----",
            "MODEL": "Logan",
            "MOUNTPOINT": "",
            "NAME": "nvme0n1",
            "OPT-IO": "0",
            "OWNER": "root",
            "PHY-SEC": "4096",
            "RM": "0",
            "RO": "0",
            "ROTA": "0",
            "RQ-SIZE": "1023",
            "SIZE": "320573878272",
            "STATE": "live",
            "TYPE": "disk",
            "UUID": "",
            "device_path": "/dev/nvme0n1"
        },
        "nvme0n1p1": {
            "ALIGNMENT": "0",
            "DISC-ALN": "0",
            "DISC-GRAN": "0",
            "DISC-MAX": "0",
            "DISC-ZERO": "0",
            "FSTYPE": "",
            "GROUP": "disk",
            "KNAME": "nvme0n1p1",
            "LABEL": "",
            "LOG-SEC": "4096",
            "MAJ:MIN": "259:2",
            "MIN-IO": "4096",
            "MODE": "brw-rw----",
            "MODEL": "",
            "MOUNTPOINT": "",
            "NAME": "nvme0n1p1",
            "OPT-IO": "0",
            "OWNER": "root",
            "PHY-SEC": "4096",
            "RM": "0",
            "RO": "0",
            "ROTA": "0",
            "RQ-SIZE": "1023",
            "SIZE": "536870912",
            "STATE": "",
            "TYPE": "part",
            "UUID": "",
            "device_path": "/dev/nvme0n1p1"
        }
    }
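The numbers in the log are consistent with the 512B-vs-4096B suspicion. lsblk reports LOG-SEC 4096 and SIZE 320573878272 bytes, i.e. 78,265,107 native 4K sectors, yet the installer asks sgdisk for an end sector of 626,120,447 — a value that only fits if the disk size is counted in 512-byte sectors. A quick sanity check, using nothing but the values copied from the log above:

```python
# Values taken from the lsblk output and the sgdisk command in the log.
disk_bytes = 320573878272      # SIZE of /dev/nvme0n1
logical_sector = 4096          # LOG-SEC reported by lsblk
requested_end = 626120447      # end sector passed to sgdisk

sectors_4k = disk_bytes // logical_sector    # 78265107 native 4K sectors
sectors_512 = disk_bytes // 512              # 626120856 sectors if counted in 512B units

# The requested end sector lies far past the 4K sector count,
# so sgdisk has no choice but to refuse to create the partition...
print(requested_end > sectors_4k)            # True
# ...but it fits within a 512-byte-sector count, which has exactly
# 8x as many sectors -- matching the "8 times the real size" symptom.
print(sectors_512 // sectors_4k)             # 8
print(requested_end <= sectors_512)          # True
```

If this reading is right, sgdisk itself is behaving correctly: it rejects an end sector beyond the end of the disk, and the bug is in however the installer converts the disk's byte size into sectors.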
Tags: partitioning, 18.04, system-installation, uefi, nvme
      asked Feb 20 at 18:34
Magnus Lucchese
⁇‘⁔⁡⁏⁌⁡‿‶‏⁨ ⁣⁕⁖⁨⁩⁥‽⁀  ‴‬⁜‟ ⁃‣‧⁕‮ …‍⁨‴ ⁩,⁚⁖‫ ,‵ ⁀,‮⁝‣‣ ⁑  ⁂– ․, ‾‽ ‏⁁“⁗‸ ‾… ‹‡⁌⁎‸‘ ‡⁏⁌‪ ‵⁛ ‎⁨ ―⁦⁤⁄⁕