this post was submitted on 20 Feb 2025
21 points (100.0% liked)


I have 4 old hard drives that I pulled from an old Drobo that needs to be trashed. I bought a Mediasonic 4-bay RAID enclosure that I thought would be a good upgrade, but knew going in that the drives might not work, because the manual for the new enclosure specifically says to use new drives to avoid problems. The exact product is this Mediasonic one.

I don't care about the existing data on these drives, and that was originally what I thought was meant by "avoiding problems". So I tried just putting the drives in to no avail. They don't show up as drives in the file explorer. They don't show up in "Disks".

I also have an external hard drive dock - the 15-year-old version of this one - which does let me mount the drives and see them in Disks.

I have tried running "wipefs -a" on them, and I've tried formatting them in Disks with no filesystem specified. I've also run parted on them, but anything I try in parted gives me the error "unrecognised disk label".
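For reference, what I ran looked roughly like this, with sdX standing in for the actual device node:

sudo wipefs -a /dev/sdX
sudo parted /dev/sdX print    # fails with: Error: /dev/sdX: unrecognised disk label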

If I can't reuse these old drives then the enclosure is of less use to me than I had hoped as I don't intend to buy new drives any time soon.

Is there anything else I can try to reset these drives to a state where they'll act like new drives that have never been used before?

Update: The "tricks" haven't worked, so I'm doing the full disk write using dd. It'll be a while before I have 2 disks prepped to try.
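In case it helps anyone else, the invocation is along these lines, sdX being each disk in turn (status=progress makes dd report how far along it is, and conv=fsync flushes everything to the disk before dd exits):

sudo dd if=/dev/zero of=/dev/sdX bs=4M status=progress conv=fsync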

top 13 comments
[–] ikidd 5 points 6 days ago
[–] BluescreenOfDeath 6 points 6 days ago (1 children)

If you want to fully wipe the disks of any data to start with, you can use a tool like dd to zero them. First you need to figure out what your drive is enumerated as, then you wipe it like so:

sudo dd if=/dev/zero of=/dev/sdX bs=4M status=progress
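To find the enumeration, something like lsblk works; the TRAN column shows which drives are sitting on USB (the column choice is mine, and the device names will differ on your system):

lsblk -o NAME,SIZE,MODEL,TRAN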

From there, you need to decide if you're going to use them individually or as a pool.


[–] [email protected] 2 points 6 days ago


While this would work, isn't it a bit time-consuming compared to:

wipefs --all /dev/sdX
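As a safety net, wipefs can also back up the signatures it erases before wiping (recent util-linux; it drops .bak files in your home directory, if I remember right):

sudo wipefs --all --backup /dev/sdX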
[–] [email protected] 4 points 6 days ago* (last edited 6 days ago) (1 children)

I had a somewhat similar issue: the kernel kept seeing the old RAID flags on a formatted drive, so it would not mount, and Clonezilla wouldn't touch it either. I had to run some special command that removes a specific bit of metadata. I can't recall what command it was, but once I ran it everything was fine.

Could have been wipefs followed by this, maybe: https://www.slac.stanford.edu/grp/cd/soft/unix/RAIDmetadata.html

Could have been combined with the lvremove command too. I really should have saved the notes.
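If it was the dmraid route from that page, it would have been something like the following, going from memory (-rE erases vendor RAID metadata; the mdadm line is the equivalent if the drive carried Linux software RAID superblocks):

sudo dmraid -r                          # list any RAID metadata dmraid can see
sudo dmraid -rE /dev/sdX                # erase that metadata
sudo mdadm --zero-superblock /dev/sdX   # same idea for md software RAID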

[–] [email protected] 3 points 6 days ago (1 children)

Thanks, this looked really promising but didn't work for me. lvremove said it couldn't find the volume group, and dmraid said I have an unsupported sector size and didn't see any RAID disks at my drive location.

I'm currently using dd to write zeros to the drives. I'm not sure how long that will take on this old USB 2.0 dock.
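Back-of-envelope: USB 2.0 manages maybe 30 MB/s in practice, so a 1 TB drive is about 1,000,000 MB / 30 MB/s, or roughly 9 hours per disk. And if a dd is already running without status=progress, you can still poke it for a progress line with SIGUSR1:

sudo kill -USR1 $(pgrep -x dd)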

[–] [email protected] 1 points 6 days ago

Hopefully that works. It was hours of trying different formatting, zeroing, etc. The error it gave me led me to search and finally find a one-liner that fixed it all. But why I didn't add it to my notes is a mystery LOL

[–] [email protected] 2 points 6 days ago

This is why I keep my old-as-hell Shuttle PC in the closet... I boot it off a live CD so I don't accidentally dd my actual desktop's OS into oblivion, again.

[–] [email protected] 1 points 6 days ago

I'm gonna join in with everyone and recommend completely zeroing all the drives (make sure you unmount them before doing it). It will take a while but at least you will have drives in a known state and can eliminate that as a possible issue.
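A quick way to check what's mounted and clear it first (partition name hypothetical):

lsblk -o NAME,MOUNTPOINT /dev/sdX
sudo umount /dev/sdX1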

[–] [email protected] 1 points 6 days ago (1 children)

I assume you've configured it for "single" mode?

[–] [email protected] 1 points 6 days ago

I want to use RAID 1 but I've tried single disk as well.

[–] Reliant1087 1 points 6 days ago (1 children)

Total shot in the dark but what does testdisk say?
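For anyone following along, testdisk is interactive; you just point it at the raw device (name will vary by system):

sudo testdisk /dev/sdX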

[–] [email protected] 1 points 6 days ago (1 children)

I've never used this before so I'm not sure what to make of it. I am currently letting it analyze one of the disks and it's seeing a lot of HFS+ blocks (I assume that's what it's reporting) and a handful of ext4. That makes sense I guess, since I'm not wiping the drive, just trying to delete any partition info and/or formatting.

The only thing that seems like it might affect how the disk looks when inserted is the cylinder geometry, but I don't know enough about that to even guess at what to do with it. Is there something I should be looking for in testdisk?

[–] Reliant1087 1 points 5 days ago

I was hoping that testdisk would show you something funky going on with the partition table based on the parted error. No luck I guess.

My next two ideas are:

  1. As far as I know, wipefs just wipes what blkid sees, and it might have screwed something up. What if we just dd the whole drive with zeros and let your enclosure take it from there?
  2. What does smartctl say? Might be worth running the short and long self-tests to make sure the drives themselves are okay (sketch below).
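Roughly what that would look like, assuming smartmontools is installed and the USB dock passes SMART through (some USB bridges need -d sat added, and some don't pass it at all):

sudo smartctl -i /dev/sdX         # identify the drive and confirm SMART is reachable
sudo smartctl -t short /dev/sdX   # short self-test, a couple of minutes
sudo smartctl -t long /dev/sdX    # extended self-test, can take hours
sudo smartctl -a /dev/sdX         # read the results once the tests finish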