After a client leaves, I’ve always got drives that we want to erase to make sure their data is gone for good and not recoverable by the next person renting the server.

This is how I do it. I first remove all partitions, then run a destructive badblocks check. This does two things: it overwrites all the data and, at the same time, tests the drive to make sure there aren’t any errors that may creep up for the next customer.

I put all my spare SAS drives into a Dell R610 server using a Dell SAS 6 Host Bus Adapter, a non-RAID card. If you’re using a RAID card you won’t be able to address each drive individually. After inserting the drives, you can issue lsscsi to see them all:

[0:0:0:0]    disk    Kingston DataTraveler 2.0 1.00  /dev/sda 
[1:0:2:0]    disk    SEAGATE  ST973401LSUN72G  0556  -        
[1:0:4:0]    disk    ATA      Patriot Blast    12.2  /dev/sdd 
[1:0:5:0]    disk    FUJITSU  MAY2073RC        D108  /dev/sdb 
[1:0:6:0]    disk    FUJITSU  MBB2073RC        D406  /dev/sde 
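
Before touching anything I like to double-check that the device name really is the drive I mean to wipe. A quick cross-check with lsblk (part of util-linux on most distros), comparing model and serial against the drive’s label:

# show just the disk itself (-d), with its model, serial number and size
lsblk -d -o NAME,MODEL,SERIAL,SIZE /dev/sde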

Now I’m going to use sde; you should choose the appropriate disk for your hardware. I first use wipefs to remove all partition signatures:

wipefs -af /dev/sde
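
If you’d like to see what it’s going to erase before committing, wipefs with no options just lists the signatures it finds without touching them:

# read-only preview of the partition-table and filesystem signatures
wipefs /dev/sde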

Then I run badblocks with a destructive write to clear the drive’s contents and test that there aren’t any bad sectors:

badblocks -v -b 4096 -c 1024 -p 1 -w /dev/sde
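
The flags: -b 4096 sets the block size, -c 1024 tests 1024 blocks at a time, -p 1 stops after one clean pass, and -w is the destructive write test, which writes the patterns 0xaa, 0x55, 0xff and 0x00 and reads each one back. On bigger drives this takes hours, so I’d suggest running it under nohup (or screen/tmux) so a dropped SSH session doesn’t kill it. A sketch, with the log path just an example:

# keep the wipe running after logout; the -v progress output lands in the log
nohup badblocks -v -b 4096 -c 1024 -p 1 -w /dev/sde > /root/wipe-sde.log 2>&1 &
tail -f /root/wipe-sde.log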

Usually I either get no bad blocks, or the drive won’t read at all and simply reports no medium found.
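
If you want a quick sanity check that the wipe ran to completion: the last pattern the -w test writes is 0x00, so a finished drive should read back as all zeros. Spot-checking the first MiB (od collapses repeated identical lines into a single *):

# a clean wipe shows one row of zeros, a '*', and the end offset
dd if=/dev/sde bs=1M count=1 2>/dev/null | od -Ax -tx1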

If you did want to use badblocks to map out the actual bad sectors instead, you can do something like this example, using a non-destructive read-write test on the sda2 partition and then feeding the list to e2fsck (this assumes an ext filesystem):

badblocks -v -n -s /dev/sda2 > badsectors.txt

e2fsck -l badsectors.txt /dev/sda2
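
One caveat with that pair of commands: e2fsck interprets the block numbers in the list using the filesystem’s block size, while badblocks defaults to 1024-byte blocks, so the two need to match. A sketch (4096 is just the typical ext4 value; check yours first):

# the partition must be unmounted for the non-destructive read-write test
umount /dev/sda2
# find the filesystem's block size (usually 4096 on ext4)
tune2fs -l /dev/sda2 | grep 'Block size'
# rescan with a matching block size so e2fsck reads the list correctly
badblocks -v -n -s -b 4096 /dev/sda2 > badsectors.txt
e2fsck -l badsectors.txt /dev/sda2
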
Now your drive is wiped successfully.
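
Since the R610 usually has several spares loaded at once, a small loop lets them all wipe in parallel; the device names here are just examples, so adjust them to your own listing:

# kick off a wipe for each drive in the background, logging each separately
for d in sdb sde; do
  ( wipefs -af /dev/$d && badblocks -v -b 4096 -c 1024 -p 1 -w /dev/$d ) > /root/wipe-$d.log 2>&1 &
done
wait   # returns once every wipe has finished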