Can't find RAID 1 md0 drive using mdadm on Ubuntu 18.04.1
I used mdadm to create a RAID 1 array from two 3 TB drives. After the process, which took all night, I found that the two drives, /dev/sdb and /dev/sdc, had disappeared from the file explorer. I rebooted the system and they reappeared, then disappeared again after another reboot.
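
The create command was along these lines (reconstructed, so treat the exact flags as approximate):

    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc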



The drives seem to be corrupted; when they do show up, GParted reports this error:

Corrupt extent header while reading journal super block

Unable to read the contents of this file system!
Because of this, some operations may be unavailable.
The cause might be a missing software package.
The following list of software packages is required for ext4 file system support: e2fsprogs v1.41


I named the new RAID array md0 and created a mount point for it at /mnt/md0, which is empty.



There is a conf file in /etc/mdadm which reads:



# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This configuration was auto-generated on Mon, 24 Dec 2018 02:28:48 -0500 by mkconf
ARRAY /dev/md0 metadata=1.2 name=dna-computer:0 UUID=df25e6e6:cccb8138:aa9f4538:31608c33
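
As the header of that file instructs, any change to it should be followed by refreshing the initramfs so the boot image has an up-to-date copy:

    sudo update-initramfs -u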


Not sure if this helps, but the output of cat /proc/mdstat reads:



Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
unused devices: <none>
Tags: ubuntu hard-drive raid

asked Dec 30 '18 at 5:11 by DanielJomaa

1 Answer

This is what mdadm does: it replaces your disks with a RAID identity, which is represented as /dev/md<#> ("md" stands for "multiple device"). From that point on, you don't want direct access to your individual hard disks, because that would compromise the disk array.

As stated in the manual page of mdadm:

    RAID devices are virtual devices created from two or more real block
    devices. This allows multiple devices (typically disk drives or
    partitions thereof) to be combined into a single device to hold (for
    example) a single filesystem.

Also see, for example, this tutorial.
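
Since your /proc/mdstat shows no active arrays, a rough sketch of getting the array back would look like the following; this assumes the MD superblocks on the member disks are still intact and reuses the device names from your question:

    # Scan all disks for MD superblocks and assemble any arrays found
    sudo mdadm --assemble --scan

    # Or assemble explicitly from the two members
    sudo mdadm --assemble /dev/md0 /dev/sdb /dev/sdc

    # Verify the array is running, then mount it on the existing mount point
    cat /proc/mdstat
    sudo mount /dev/md0 /mnt/md0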

answered Dec 30 '18 at 21:50, edited Dec 30 '18 at 22:01, by agtoever

• My md folder is located at /mnt/md0; does this serve the same purpose? Can I assume that this folder will hold 3 TB of storage? Also, since this is RAID 1, how would I be able to access the hard drives separately in the case of data loss?
  – DanielJomaa, Dec 31 '18 at 23:01

• Your md device is at /dev/md<#>, which can be mounted in the /mnt/ directory. Type mount to see which devices are mounted where. The underlying disks are not meant for direct access; if you access them directly, you risk damaging your md array. Look at mdadm as a layer between your disks and the RAID'ed device. If one disk gets damaged, mdadm must handle this (by marking the disk as failed and/or removing it from the array). So the answer to the question of how to access a "degraded" RAID array is: still through mdadm.
  – agtoever, Dec 31 '18 at 23:43
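
A sketch of the failure handling described in that comment, using /dev/sdb as the hypothetical damaged member and /dev/sdd as a hypothetical replacement:

    # Mark the damaged member as failed, then remove it from the array
    sudo mdadm /dev/md0 --fail /dev/sdb
    sudo mdadm /dev/md0 --remove /dev/sdb

    # Add a replacement disk; mdadm rebuilds the mirror onto it
    sudo mdadm /dev/md0 --add /dev/sdd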

• I do not have a directory at /dev/md<#>. Just to make sure I understand: if I place files in the /mnt/md0 directory, this will essentially be like placing files on the 3 TB RAID 1 array and will not take up space on the main drive?
  – DanielJomaa, Jan 4 at 20:39

• Check the output of mount | grep md; this should show you which device (under /dev) is mounted on /mnt/md0. If it shows nothing, that directory is just a directory on your root mount (/). If it shows a device, that's where the data is read from and written to.
  – agtoever, Jan 4 at 22:34
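
For illustration, when the array is assembled and mounted, that grep would print something along these lines (the device, mount point, and ext4 come from the question; the mount options are assumed):

    $ mount | grep md
    /dev/md0 on /mnt/md0 type ext4 (rw,relatime)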