If they came out of a raid array:
https://www.systutorials.com/how-to-clean-raid-signatures-on-linux/
If you want to fully wipe the disks of any data to start with, you can use a tool like dd to zero them. First you need to figure out what your drive is enumerated as, then you wipe it like so:
sudo dd if=/dev/zero of=/dev/sdX
From there, you need to decide if you're going to use them individually or as a pool.
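A rough sketch of that, assuming the enclosure presents the disk as /dev/sdX (double-check with lsblk first, since the dd command is destructive):

# Identify the target disk by size and model before writing anything
lsblk -o NAME,SIZE,MODEL,MOUNTPOINT

# Zero the whole disk; bs=1M speeds it up and status=progress shows how far along it is
sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress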
I have 4 old hard drives that I pulled from an old Drobo that needs to be trashed. I bought a Mediasonic 4-bay RAID enclosure that I thought would be a good upgrade, but knew going in that the drives might not work, because the manual for the new enclosure specifically says to use new drives to avoid problems. The exact product is this Mediasonic one.
While this would work, isn't it a bit time-consuming compared to:
wipefs --all /dev/sdX
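For what it's worth, a quick sketch of how to use it, assuming the disk is /dev/sdX — dry run first, then erase:

# Show which signatures wipefs would erase, without touching the disk
sudo wipefs --no-act /dev/sdX

# Erase all detected signatures (partition table, filesystem, RAID metadata)
sudo wipefs --all /dev/sdX

Signatures inside existing partitions may need a separate pass, e.g. sudo wipefs --all /dev/sdX1.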
I had a somewhat similar issue: the kernel kept seeing the old RAID flags on a formatted drive, so it would not mount, and Clonezilla wouldn't touch it either. I had to run a special command that removes that specific bit of metadata. I can't recall what command it was, but once I ran it everything was fine.
It could have been wipefs followed by this, maybe: https://www.slac.stanford.edu/grp/cd/soft/unix/RAIDmetadata.html
It could also have been combined with the lvremove command. I really should have saved my notes.
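If it was RAID metadata, the usual suspects are along these lines (a sketch, assuming the drives were Linux software RAID or fakeraid members and show up as /dev/sdX):

# Clear an mdadm (Linux software RAID) superblock from a former member device
sudo mdadm --zero-superblock /dev/sdX

# Erase BIOS/fakeraid metadata that dmraid recognizes
sudo dmraid -r -E /dev/sdX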
Thanks, this looked really promising but didn't work for me. lvremove said it couldn't find the volume group, and dmraid said I have an unsupported sector size and didn't see any RAID disks at my drive's location.
I'm currently using dd to write zeros to the drives. I'm not sure how long that will take me on this old USB 2.0 dock.
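If the dd is already running without status=progress, GNU dd will print a progress report when it receives SIGUSR1, so something like this shows how far along it is:

# Ask any running dd process to report bytes copied and throughput
sudo pkill -USR1 -x dd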
Hopefully that works. It took hours of trying different formatting, zeroing, etc.; the error it gave me led me to search and finally find the one-liner that fixed it all. Why I didn't add it to my notes is a mystery LOL
This is why I keep my old-as-hell Shuttle PC in the closet... I boot it off a live CD so I don't accidentally dd my actual desktop's OS into oblivion, again.
I'm gonna join in with everyone and recommend completely zeroing all the drives (make sure you unmount them before doing it). It will take a while but at least you will have drives in a known state and can eliminate that as a possible issue.
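Something like this (assuming the disk is /dev/sdX) confirms nothing is mounted before you start zeroing:

# List the disk and its partitions with their mount points
lsblk -o NAME,MOUNTPOINT /dev/sdX

# Unmount anything that is still mounted, e.g. the first partition
sudo umount /dev/sdX1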
I assume you've configured it for "single" mode?
I want to use RAID 1 but I've tried single disk as well.
Total shot in the dark but what does testdisk say?
I've never used this before so I'm not sure what to make of it. I'm currently letting it analyze one of the disks, and it's seeing a lot of HFS+ blocks (I assume that's what it's reporting) and a handful of ext4. That makes sense, I guess, since I'm not wiping the drive, just trying to delete any partition info and/or formatting.
The only thing that seems like it might affect how the disk looks when inserted is cylinder geometry, but I don't know enough about that to even guess at what to do with it. Is there something I should be looking for in testdisk?
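If the goal is just to clear partition info rather than zero everything, one quicker option (a sketch, assuming the disk is /dev/sdX and the gdisk package is installed) is:

# Destroy both the GPT and MBR partition-table structures, leaving the rest of the disk alone
sudo sgdisk --zap-all /dev/sdX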
I was hoping that testdisk would show you something funky going on with the partition table based on the parted error. No luck I guess.
My next two ideas are,