The Targeted Way to Get More Data: Techniques of Selective Imaging


A regular imaging process involves acquiring the entire drive, from sector zero to the maximum logical block address (LBA). In many data recovery cases, however, this approach won't work because time is limited or the drive is highly degraded. Let's look into various techniques of selective imaging that you can use to make imaging more efficient, faster, and less intensive for the drive.


Imaging by head

The first selective technique is to diagnose each read-write head of the drive, identify the problematic ones, and image the good heads first; weaker heads and disk platters with media degradation are addressed afterward.

For some drives, imaging by heads may be the only way to get access to data. Switching from head to head is one of the most intensive processes for any drive: when the drive switches to a different disk platter, it has to reacquire servo data and reposition the head over the correct track. If the head has any read instabilities, switching to that head may end with an exception, and the drive may become unresponsive and even start to click.

To overcome these issues, you need to image the drive head by head: first image the entire disk platter/head 0, then head 1, then head 2, and so forth, until you acquire a complete image of the drive. Heads with different levels of degradation can be imaged using different algorithms and configuration parameters to get the maximum data out of each disk platter.
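To illustrate, here is a minimal sketch of a head-by-head imaging loop in Python. The head_map structure, read_block(), and write_image() are hypothetical stand-ins for whatever your imaging tool provides, and the per-head parameters are purely illustrative:

```python
# A minimal sketch of head-by-head imaging, assuming a precomputed
# heads map (see the Heads Map discussion below). read_block() and
# write_image() are hypothetical stand-ins for your imaging tool's
# read and storage routines.

PER_HEAD_CONFIG = {
    # head: (block_size_in_sectors, read_timeout_ms), tuned per diagnosis
    0: (256, 1000),   # healthy head: large blocks, normal timeout
    1: (64, 250),     # degraded head: small blocks, short timeout
}
DEFAULT_CONFIG = (128, 500)

def image_by_head(head_map, read_block, write_image):
    """head_map: iterable of (start_lba, sector_count, head) ranges.
    Image every range of head 0 first, then head 1, and so forth,
    so the drive is not forced to switch heads on every read."""
    heads = sorted({head for _, _, head in head_map})
    for head in heads:
        block_size, timeout_ms = PER_HEAD_CONFIG.get(head, DEFAULT_CONFIG)
        for start, count, h in head_map:
            if h != head:
                continue
            for lba in range(start, start + count, block_size):
                n = min(block_size, start + count - lba)
                data = read_block(lba, n, timeout_ms)  # None on error/timeout
                if data is not None:
                    write_image(lba, data)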

In general, imaging head by head is much less intensive for the drive than linear imaging from LBA 0 to the maximum LBA, where the drive continuously jumps from one head/zone to another. You can also use this technique on drives with read-instability issues of any nature, not just on drives with one or a few problematic heads.

Imaging by heads may even allow you to retrieve specific files on drives that have one or a few failed heads. You can do this when all metadata of the file system and fragments of necessary files are located on good heads. For example, if the drive has 10 heads and only one bad head, you still have a good chance of recovering at least some files without needing to replace the failing heads assembly.

Furthermore, even in cases when you need to replace a heads assembly, it is still a good idea to image all good heads of the drive first and then swap the heads and get data from bad ones. This technique is useful because the donor heads may not be 100% compatible, and so some of them may not read properly.

Usually, before imaging starts, the imaging tool creates the Heads Map, which you use during the imaging process to identify which sector belongs to which head. Some tools generate the head mapping in real time while imaging, which is usually not a good idea: it slows down the imaging process and increases wear on the drive by continuously mixing read commands with head-mapping commands.
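For illustration, here is a heavily simplified sketch of precomputing such a map. Real tools derive the Heads Map from vendor-specific translator and zone data; the round-robin track model below is only a rough approximation and ignores zoned recording and serpentine layouts:

```python
# A simplified sketch of building a Heads Map once, before imaging
# starts. The round-robin track model is only an approximation; real
# tools obtain the mapping from vendor-specific translator/zone data.

def build_head_map(max_lba, num_heads, sectors_per_track):
    """Return (start_lba, sector_count, head) ranges covering the drive,
    assuming heads take turns track by track."""
    head_map, lba, head = [], 0, 0
    while lba < max_lba:
        count = min(sectors_per_track, max_lba - lba)
        head_map.append((lba, count, head))
        lba += count
        head = (head + 1) % num_heads
    return head_map
```

Computing the map once up front keeps the imaging phase itself free of head-mapping commands, which is exactly why the real-time approach tends to be slower and harder on the drive.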


Imaging by type of media issue

For this technique, identify problematic areas of the drive during an initial imaging pass and retrieve data from good areas only. This approach minimizes the risk of drive failure, speeds up the imaging process, and in cases when all fragments of necessary files are located in good areas, allows you to recover data without processing problematic areas at all.

Usually you will identify problematic areas by minimizing the Read Sector Timeout: if reading a block takes longer, the read operation is aborted by sending a Software/Hardware/PHY Reset to the drive, and imaging continues. You can also fine-tune this process by changing the size of the read block used during imaging. In general, the block size should be small (100 sectors or less), since larger sizes mean more data is missed in skipped blocks.
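As a rough sketch, the initial pass might look like the following, assuming hypothetical read_block(), reset_drive(), and sector_map helpers that stand in for your tool's read command, reset mechanism, and status log:

```python
# A sketch of the initial pass with a minimized Read Sector Timeout
# and a small read block. reset_drive() represents the software/
# hardware/PHY reset; sector_map records each block's status so that
# skipped areas can be revisited in later passes.

READ_TIMEOUT_MS = 200   # abort any read that takes longer than this
BLOCK_SIZE = 64         # sectors per read; keep it at 100 or less

def initial_pass(max_lba, read_block, reset_drive, write_image, sector_map):
    lba = 0
    while lba < max_lba:
        n = min(BLOCK_SIZE, max_lba - lba)
        data = read_block(lba, n, READ_TIMEOUT_MS)
        if data is None:                    # timed out or returned an error
            reset_drive()                   # software/hardware/PHY reset
            sector_map.mark(lba, n, "SKIPPED")
        else:
            write_image(lba, data)
            sector_map.mark(lba, n, "OK")
        lba += n
```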

After you identify and skip problematic areas during the initial imaging pass, you can perform further media diagnostics. At this stage, you can optimize the imaging algorithm for particular media issues. In other words, you can now apply selective imaging to the areas affected by each type of media issue, with the imaging configuration fine-tuned to address specific media problems, such as long-processed sectors or bad sectors with UNC, AMNF, IDNF, or other errors. You can implement this technique by storing the status of each read sector in the Sector Map, letting the user select which type of media issue should be addressed at each stage, and configuring an imaging algorithm to respond to each particular issue.
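A minimal sketch of such a Sector Map and a per-issue retry pass might look like this; the status names mirror common ATA error conditions, and the configuration values are illustrative rather than prescriptive:

```python
# A sketch of a Sector Map grouped by read status, with one tuned
# retry configuration per media-issue type.

from collections import defaultdict

class SectorMap:
    def __init__(self):
        self.by_status = defaultdict(list)      # status -> [(lba, count), ...]

    def mark(self, lba, count, status):
        self.by_status[status].append((lba, count))

# issue type: (block_size_in_sectors, read_timeout_ms, retries)
ISSUE_CONFIG = {
    "SLOW": (16, 2000, 1),   # long-processed sectors: allow extra time
    "UNC":  (1,  5000, 3),   # uncorrectable ECC errors: single sectors, retries
    "IDNF": (1,  1000, 1),   # sector ID not found: rarely worth hammering
}

def retry_pass(sector_map, status, read_block, write_image):
    """Re-read only the areas recorded with the selected status."""
    block_size, timeout_ms, retries = ISSUE_CONFIG[status]
    for start, count in sector_map.by_status[status]:
        for lba in range(start, start + count, block_size):
            n = min(block_size, start + count - lba)
            for _ in range(retries):
                data = read_block(lba, n, timeout_ms)
                if data is not None:
                    write_image(lba, data)
                    break
```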


Imaging by file

For this technique, image only the required files and file information. This type of selective imaging consists of two phases.

In the first phase, file system elements such as boot sectors, file allocation tables, file attributes, and other metadata are processed with the highest priority.

During this first phase, target only the most critical elements of the file system. Access only those sectors that contain information about the files and folders of the partition, and use this data to build the partition's file tree. To minimize wear on the drive, avoid accessing the optional metadata that an operating system usually loads while mounting the partition. For example, on NTFS partitions the most critical metadata is located in the boot sector and the MFT, although you may still need index records, $LogFile, and other file system elements at some point during file system recovery.
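As an illustration of phase one, the following sketch parses an NTFS boot sector to locate the MFT, using the standard NTFS boot sector field offsets; partition_start_lba is assumed to be the LBA at which the partition begins:

```python
# A sketch of locating the MFT from an NTFS boot sector so that phase
# one can read only metadata sectors.

import struct

def locate_mft(boot_sector: bytes, partition_start_lba: int):
    """Return (mft_start_lba, cluster_size_in_bytes)."""
    bytes_per_sector = struct.unpack_from("<H", boot_sector, 0x0B)[0]
    sectors_per_cluster = boot_sector[0x0D]
    mft_cluster = struct.unpack_from("<Q", boot_sector, 0x30)[0]  # $MFT LCN
    mft_start_lba = partition_start_lba + mft_cluster * sectors_per_cluster
    return mft_start_lba, bytes_per_sector * sectors_per_cluster
```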

In the second phase, after you have imaged the metadata of the file system and built the file tree, you can locate specific files that need to be recovered and image sectors that belong to those files. You can skip all other sectors, since they don’t contain any targeted user data.
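For example, a sketch of mapping selected files to sector ranges might look like this, where file_extents() is a hypothetical helper that returns a file's allocation as (start_cluster, cluster_count) pairs read from the already-imaged metadata:

```python
# A sketch of translating selected files into the sector ranges that
# need to be imaged. file_extents() is a hypothetical helper backed by
# the file system metadata acquired in phase one (e.g. NTFS data runs).

def sectors_for_files(files, file_extents, partition_start_lba,
                      sectors_per_cluster):
    ranges = []
    for f in files:
        for start_cluster, cluster_count in file_extents(f):
            start_lba = partition_start_lba + start_cluster * sectors_per_cluster
            ranges.append((start_lba, cluster_count * sectors_per_cluster))
    return ranges
```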

Most data recovery imaging tools image files fragment by fragment, following the file allocation information stored in the file system's metadata. This approach has one drawback: sectors are processed in the order in which the file system allocated them, so the drive continuously jumps to different areas of the disk and switches from one head to another each time the next file fragment is accessed. This significantly slows down imaging and, most importantly, carries a very high risk of drive failure because of the stress caused by the intensive read processes.

A much better approach to imaging by file is to use a drive-linear imaging sequence, taking into account all other imaging factors configured by the user, such as head-by-head imaging or sequences defined by types of media issues. In this case, instead of processing blocks of sectors in the order defined by the file system's allocation information, all sectors that belong to the selected files are imaged linearly, as if you were imaging the entire drive but skipping all non-selected sectors.
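A minimal sketch of this approach: sort the sector ranges gathered for the selected files, merge overlapping or adjacent ranges, and read them in one forward sweep (read_block() and write_image() are again hypothetical stand-ins):

```python
# A sketch of drive-linear imaging of selected files: the sector
# ranges gathered above are sorted by LBA and merged, then read in a
# single forward sweep, so the heads never jump backward to follow
# the file system's allocation order.

def linearize(ranges):
    """Sort (start_lba, sector_count) ranges and merge overlaps."""
    merged = []
    for start, count in sorted(ranges):
        if merged and start <= merged[-1][0] + merged[-1][1]:
            last_start, last_count = merged[-1]
            merged[-1] = (last_start, max(last_count, start + count - last_start))
        else:
            merged.append((start, count))
    return merged

def image_selected(ranges, read_block, write_image, block_size=128):
    for start, count in linearize(ranges):
        for lba in range(start, start + count, block_size):
            n = min(block_size, start + count - lba)
            data = read_block(lba, n)
            if data is not None:
                write_image(lba, data)
```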


P.S.

As drive capacities increase year over year, it's becoming obvious that imaging processes should be based on selective imaging, whether the imaging is applied to particular heads, files, or types of media issues. Selective imaging is becoming mandatory not just because of the time it takes to image terabyte-sized drives, but especially to prevent drives from failing completely under the stress of intensive read processes.

So, the next time you connect a drive to your imaging system, think about selective imaging to get more data in a faster and safer way.