
March 4, 2012

Recovery partitions

Laptop manufacturers may or may not provide a recovery disc, depending on their policy. If the policy is not to provide one, they put the factory installation of Windows on a separate partition, along with a utility that can deploy this particular OS installation.

Such a partition containing the OS is stored at the end of the drive and hidden from the operating system by means of a Host Protected Area (HPA). The typical course of action to recover a laptop using a Host Protected Area is:

  • you press certain keys when the computer starts up
  • the BIOS discards the HPA limitation
  • the system boots from the recovery partition that has become visible
  • a pre-installed program runs from this partition; it reformats the entire hard drive and puts the factory Windows installation back on it. When the recovery process is finished, the HPA is set again.

After this process is done, the laptop is as good as new, software-wise. It should be noted that HPA is one of the reasons a hard drive shows less capacity than expected – read more here.
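To get a feel for the sizes involved, here is a minimal sketch of the arithmetic. The sector counts are made-up example values, not readings from a real drive; in practice they come from the ATA IDENTIFY DEVICE and READ NATIVE MAX ADDRESS commands.

```python
SECTOR_SIZE = 512  # bytes; typical for drives of that era

def hpa_hidden_bytes(native_max_sectors: int, visible_sectors: int) -> int:
    """Size of the area hidden by the Host Protected Area, in bytes."""
    return (native_max_sectors - visible_sectors) * SECTOR_SIZE

# Example: a nominal 500 GB drive with roughly 15 GB reserved for recovery.
native = 976_773_168    # full drive, in 512-byte sectors
visible = 946_773_168   # what the OS sees while the HPA is enabled
print(f"Hidden by HPA: {hpa_hidden_bytes(native, visible) / 10**9:.1f} GB")  # ~15.4 GB
```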

And one more thing – one shouldn’t confuse the two notions of partition recovery and recovery partition: the former is a repair process during data recovery, while the latter is the hidden partition described above.

January 20, 2012

Free benchmark software BenchMe

Recently, while searching the web, I stumbled across an interesting utility with the funny name BenchMe, which measures the performance parameters of data storage devices.

This tool reports linear read speed, random access time, and IOPS for a storage device. The software also lists the features the device supports, e.g. TRIM, PUIS, SMART, DCO, and so on.
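For the curious, here is a rough sketch of how random access time and IOPS at queue depth 1 could be estimated with plain Python. The test file name and read count are assumptions, and without direct I/O the operating system cache will flatter the numbers, so treat the output as an illustration rather than a proper benchmark.

```python
import os, random, time

PATH = "testfile.bin"          # assumption: a large file on the drive under test
BLOCK = 4096                   # read size in bytes
COUNT = 200                    # number of random reads

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY | getattr(os, "O_BINARY", 0))

start = time.perf_counter()
for _ in range(COUNT):
    offset = random.randrange(0, size - BLOCK, BLOCK)   # 4 KB aligned
    os.lseek(fd, offset, os.SEEK_SET)
    os.read(fd, BLOCK)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"average access time: {elapsed / COUNT * 1000:.2f} ms")
print(f"IOPS at queue depth 1: {COUNT / elapsed:.0f}")
```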

When I launched the tool, I immediately tested all my storage devices and found a lot of interesting things – it turned out that my RAID 5 is not as fast in terms of access time as I expected. Thus I realized that it can be very useful to evaluate the performance characteristics of your storage devices from time to time.

December 5, 2011

How do hard drive makers come up with URE values?

It turns out that hard drive vendors seem to provide their URE figures out of the blue. This URE data is commonly used to substantiate bogus claims like “RAID5 is dead” and to estimate the probability of a double read error in RAID5.

People who build their own RAIDs get nervous when they see these values. But the vendor URE data seems to lack proper reliability.
Have a look at the official Hitachi website: they specify a rather exciting URE value for a 3 TB hard disk – 10^-14 errors per bit read.
Let’s suppose that the value is real. Then, if you take this disk and read the data off it from beginning to end, the probability of not encountering a read error is:

(1 – 10^-14)^(8*3*10^12) ≈ 0.79

Therefore, the probability that the hard drive fails to read at least one sector is about 20%.
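A quick way to check the arithmetic, sticking to the vendor figure of one unrecoverable error per 10^14 bits read:

```python
import math

bits = 8 * 3 * 10**12                             # 3 TB expressed in bits
p_all_ok = math.exp(bits * math.log1p(-1e-14))    # (1 - 10^-14) ** bits, computed stably

print(f"full read succeeds:  {p_all_ok:.2f}")     # ~0.79
print(f"at least one error:  {1 - p_all_ok:.2f}") # ~0.21
```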

In other words, when you have a disk filled to capacity, there is a significant chance (about 20%) that you will not be able to get all the data off it.
This is clearly disproven by everyday practice.

May 9, 2011

An array is not able to improve access time

A storage system has the following performance characteristics:
•    access time
•    throughput, defined as the sustained average speed of data transfer.

We know that RAID 0 increases throughput. When planning a RAID, people often take only the throughput numbers into account, without thinking about access time.

Access time is made up of seek time and rotational latency, where seek time is the time needed to move the read head to the right track, and rotational latency is the time needed for the sector to arrive under the head. No matter how many disks are in a RAID 0, it may happen that a requested sector is not in the cache and at the same time is the farthest from the head.
When such a sector is needed, the access time is the same (not better) as for a single hard disk. The only option to significantly decrease access time is to switch to a Solid State Drive.
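A back-of-the-envelope illustration of where that access time comes from; the seek time below is an assumed typical figure for a 7200 RPM desktop drive, not a measurement.

```python
RPM = 7200
avg_seek_ms = 8.5                                   # assumed average seek time

full_rotation_ms = 60_000 / RPM                     # one revolution, in milliseconds
avg_rotational_latency_ms = full_rotation_ms / 2    # on average, half a turn

access_time_ms = avg_seek_ms + avg_rotational_latency_ms
print(f"rotational latency:  {avg_rotational_latency_ms:.2f} ms")  # ~4.17 ms
print(f"average access time: {access_time_ms:.2f} ms")             # ~12.7 ms
# Striping the data over more disks (RAID 0) changes neither term,
# which is why an array does not improve access time.
```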

April 6, 2011

Planning a RAID

If you are planning a new RAID, you need to get several factors right.

Protection against drive failures.

It is important to understand that RAID is not a substitute for a backup. RAID cannot save you from operator errors, fire, or a tornado. Still, certain RAID levels will keep you in business when one of the member disks stops working. Here we are talking about RAID 1, RAID 10, RAID 4, RAID 5, RAID 6, and exotics like RAID 5E, RAID 5EE, and RAID-DP. There is much speculation about RAID5 reliability, supported by calculations concluding that high-capacity RAID5 is flawed. These calculations are based on vendor-specified Unrecoverable Error Rate values, which can be shown to be way off-base.

Capacity.

The size of the RAID is limited by the maximum disk size you want to use, the number of ports available on the RAID controller, and the capacity consumed by redundancy. Should you want a simple calculation of the array capacity, have a look at the free RAID Calculator.
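The usual capacity formulas are simple enough to jot down yourself; here is a minimal sketch assuming equal-sized member disks (with mixed sizes, the smallest disk governs).

```python
def raid_capacity(level: str, disks: int, disk_size_tb: float) -> float:
    """Usable capacity in TB for an array of `disks` equal-sized members."""
    if level == "RAID0":
        return disks * disk_size_tb
    if level == "RAID1":
        return disk_size_tb                    # all disks mirror one
    if level == "RAID5":
        return (disks - 1) * disk_size_tb      # one disk's worth of parity
    if level == "RAID6":
        return (disks - 2) * disk_size_tb      # two disks' worth of parity
    if level == "RAID10":
        return disks // 2 * disk_size_tb       # mirrored pairs, striped
    raise ValueError(f"unknown level {level}")

for level in ("RAID0", "RAID5", "RAID6", "RAID10"):
    print(level, raid_capacity(level, disks=6, disk_size_tb=2.0), "TB usable")
```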

Speed requirements.

Of all the truly redundant RAID levels, those using mirroring, namely RAID 1 and RAID 10, are preferred for random write access. If the array is mostly used for reads (like a media library), or with a write-once-read-never pattern (e.g. backup storage), then RAID 5, RAID 6, and combinations of them are fine. Should you want fast random writes, choose RAID 10. To quickly compare speed, cost, and redundancy for different RAID types, look at the “RAID triangle”. Take into account that no RAID level actually decreases random access time. For small random access times, try an SSD.
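The preference for mirroring on random writes follows from the usual write-penalty rule of thumb: a mirrored write costs 2 disk I/Os, a RAID 5 write costs 4 (read data, read parity, write data, write parity), and a RAID 6 write costs 6. A quick sizing sketch, where the per-disk IOPS figure is an assumed value for a 7200 RPM drive:

```python
WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def random_write_iops(level: str, disks: int, iops_per_disk: float = 75) -> float:
    """Approximate sustained random write IOPS for the array."""
    return disks * iops_per_disk / WRITE_PENALTY[level]

for level in ("RAID10", "RAID5", "RAID6"):
    print(level, round(random_write_iops(level, disks=6)), "IOPS")
```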

March 11, 2011

Secure Erase @ home

The most reliable way to erase the contents of a disk is to destroy the hard drive physically. Melt it in a fire or drill through it in several places. The side effect is that nobody is going to buy the drive after that.

In software, there are several programs, both free and paid, which are capable of overwriting the information with either zeros or random noise. Once overwritten, the data is irreversibly gone.

To get the same result yourself without special software, format the drive and then fill it completely with any data you don’t care about (for example, multiple copies of House MD videos). Once the big files no longer fit, continue adding smaller files. A USB enclosure is fine for this purpose; no connection to the motherboard is required.
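The same fill-it-up approach can be scripted; a crude sketch follows. The target drive letter is an assumption – triple-check it points at the disk you actually mean to wipe, and keep in mind this only overwrites space the filesystem can reach.

```python
import errno, os

TARGET = "E:/"                # assumption: the freshly formatted drive to fill
CHUNK = 64 * 1024 * 1024      # write zeros in 64 MB pieces

i = 0
try:
    while True:
        with open(os.path.join(TARGET, f"filler_{i:05d}.bin"), "wb") as f:
            f.write(b"\0" * CHUNK)
        i += 1
except OSError as e:
    if e.errno != errno.ENOSPC:
        raise
    # Any leftover space smaller than CHUNK would need smaller files,
    # as noted above.
    print(f"Drive full after {i} files; the free space has been overwritten.")
```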

Alternatively, use Windows Vista or Windows 7 to do a complete format of the drive. It will overwrite whatever data there may be with zeros. This does not work if you use Windows XP.

If you have one of those modern Self-Encrypting Drives, just changing the password gets you pretty much the same result as a secure erase.

February 15, 2011

Why are only the thumbnails recovered?

Every once in a while, when doing digital image recovery, typically off a memory card, the small image previews (called thumbnails) are recovered OK, while the high-resolution images come out damaged.
The cause is a phenomenon called “file fragmentation”. Fragmentation is said to occur when a file is placed on the disk in multiple non-contiguous fragments. The embedded thumbnail sits in the first few kilobytes of the image file, so it usually fits entirely into the first fragment and survives, while the rest of the image data is scattered and cannot be pieced together without filesystem metadata. The graphics showing the fragmentation can be found at the Photo Recovery Limitations page of the Digital Photo Recovery Guide.
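To make the point concrete, here is a toy signature carver of the kind photo recovery tools build on: it grabs everything between a JPEG start marker and the next end marker, which only yields a valid picture when the file was stored contiguously – yet the thumbnail in the first few kilobytes usually comes out fine. The card-dump file name is an assumption.

```python
SOI = b"\xff\xd8\xff"   # JPEG start-of-image marker (plus next marker byte)
EOI = b"\xff\xd9"       # JPEG end-of-image marker

def carve_jpegs(raw: bytes):
    """Yield byte spans that look like contiguous JPEG files."""
    pos = 0
    while True:
        start = raw.find(SOI, pos)
        if start < 0:
            return
        end = raw.find(EOI, start)
        if end < 0:
            return
        yield raw[start:end + 2]
        pos = end + 2

with open("card_dump.bin", "rb") as f:          # assumed raw image of the card
    for n, jpeg in enumerate(carve_jpegs(f.read())):
        with open(f"carved_{n:04d}.jpg", "wb") as out:
            out.write(jpeg)
```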

January 16, 2011

The difference between unformat and undelete

There is a nice list of data recovery types, but the difference between undelete and unformat should be explained a little further.

“Undelete” is typically used for recovery of a single file from an intact partition.

“Unformat” is typically used for recovery of multiple files and folders off a damaged volume, when none of the data is accessible any more.

Undelete software relies heavily on the fact that the filesystem is intact, except for the sought-for file. During an unformat, the data recovery software must be prepared to handle significant damage to the volume, because even a quick format overwrites parts of the filesystem with blank data. There are certain variations of the formatter which destroy data irreversibly by overwriting the entire media.

An “undelete” is typically much faster than an unformat and produces significantly cleaner recovered files.