Table 7: Support experiment. This table shows the average variance of constraints when all dependent constraints were held constant, but the remainder of the constraints varied. The jump from 5 to 10 is the biggest decrease in variance, prompting us to use the value of 10. The results shown are from the use of all of the heuristics except the bad heuristic on the 92 News trace.
If the support is too high, the heuristic will miss chances to optimize blocks that do not occur frequently enough to pass the threshold.
Table 7 shows experiments with different values of support on the 92 News trace. This is the same conversion variance experiment found in Table 6, where the dependent constraints are held constant but the remainder of the constraints are varied. However, this experiment is run on a single trace with different support thresholds. The experiment shows that a threshold of 10 decreased the variance substantially, leading us to use a threshold of 10 accesses as the minimum to create a constraint. However, more detailed exploration of this area could be useful.
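The threshold choice above amounts to a simple frequency filter over the trace. A minimal sketch in Python, assuming a hypothetical representation in which each access is tagged with the block run it belongs to (the real system operates on constraint candidates, not strings):

```python
from collections import Counter

def filter_by_support(candidate_runs, min_support=10):
    """Keep only runs seen at least min_support times in the trace.

    candidate_runs: iterable of hashable run identifiers observed in the
    trace (hypothetical representation). Runs below the threshold never
    become constraints.
    """
    counts = Counter(candidate_runs)
    return {run: n for run, n in counts.items() if n >= min_support}

# "B" is seen only 9 times, so no constraint is created for it.
observed = ["A"] * 12 + ["B"] * 9 + ["C"] * 10
frequent = filter_by_support(observed, min_support=10)
```

With a higher threshold, more runs fall below the cut, which is exactly the missed-optimization risk noted above.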
8.2 Performance Analyzer
There are a variety of tradeoffs in the performance analyzer. The current system uses a very high fidelity
performance analyzer, namely trace-based simulation. However, it could use a variety of cheaper, simpler
performance analysis methods as well.
It could be interesting to experiment with the effects of using the performance analyzer with or without
caching, and with or without overlapping requests. While removing these factors decreases the accuracy
of the model, it also decreases the complexity, making constraint weights more constant. If this does not
significantly decrease the fidelity of the model, this may be advantageous.
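As a rough illustration of this fidelity trade-off, the toy cost model below (not the paper's trace-based simulator; all names and costs are hypothetical) charges a fixed seek cost per miss and lets a simple LRU cache model be disabled entirely, trading accuracy for a simpler, more stable estimate:

```python
def simulate_trace(trace, seek_cost, cache_size=0):
    """Toy trace-driven cost model, illustrative only.

    trace: sequence of block ids; seek_cost: cost charged per miss.
    With cache_size == 0 the cache model is switched off and every
    access is charged as a disk access.
    """
    cache, total = [], 0.0
    for block in trace:
        if block in cache:
            cache.remove(block)       # LRU hit: refresh recency, no seek
        else:
            total += seek_cost        # miss: charge a disk access
            if cache_size and len(cache) >= cache_size:
                cache.pop(0)          # evict least recently used
        if cache_size:
            cache.append(block)
    return total
```

Comparing the two modes on the same trace shows how much the cache model alone shifts the estimated cost.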
8.2.1 Disk Areas
As described earlier, place-in-area constraints do not place blocks in specific locations, but in areas of the
disk. The size of these disk areas has an effect on how the constraints are converted into an actual disk
layout. A larger disk area means that it is possible to place large sequential constraints, and gives the learner
more flexibility in applying changes.
However, large disk areas also increase the amount of variation possible while still satisfying constraints.
If the areas are too large, the performance may depend heavily on the location in the area where the
block is placed, which is not specified by the constraints. This would make the optimization method ineffective.
Experimentation into “good” disk area sizes would be an interesting line of research. We chose to split the disk into 11 areas, along zone boundaries, as this seems small enough to limit within-area variation, but still large enough to allow choices in placement.
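Mapping a block to its zone-aligned area is a cheap lookup. A small sketch, assuming hypothetical boundary offsets rather than real zone geometry:

```python
import bisect

def area_of(block, area_starts):
    """Map a block number to a disk-area index.

    area_starts: sorted start offsets of each area; an illustrative
    stand-in for the paper's 11 zone-aligned areas.
    """
    return bisect.bisect_right(area_starts, block) - 1

# Three areas for the sketch; offsets are made up.
starts = [0, 1000, 2500]
```

A place-in-area constraint then only has to name an index from this mapping, leaving the exact offset within the area to the conversion step.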
8.3 Learners
There are many ways to compute constraint potential, and we have experimented with several of them.
Constraint potential attempts to quantify the usefulness of a constraint, and is calculated using constraint
weights. The ideal value for the potential of a constraint c is R_excl(c) − R_incl(c), where R_excl(c) is the sum of the response times for the trace using the best layout that does not include c, and R_incl(c) is the sum of the response times for the trace using the best layout that does include c. Effectively, it is the advantage of including the constraint. Of course, this is not feasible to calculate directly, so each type of learner uses a different approximation.
8.3.1 Compare Learner
The compare method computes the constraint potential by comparing the constraint’s weight against the
other options available for a set of blocks. It first computes a per-block weight for the constraint by dividing
the constraint weight by the number of blocks in the constraint. For each block in the constraint, it computes
the per-block weight for all other constraints including the block. The constraint potential is then Σ_{b∈B} (w_b − β_b), where B is the set of blocks in the constraint, w_b is the per-block weight, and β_b is the best per-block weight for block b by another constraint.
While this method provides a relative value between options, it does not account for the possibility of
several constraints being applied at the same time on the same set of blocks. In this case, the value of all
of the constraints in the best set will converge to zero. While this provides the appropriate contrast between constraints on overlapping blocks, it is not useful for comparing constraints on different sets of blocks, or different combinations of constraints. This prompted us to explore other combination options.
8.3.2 Application Learner
The application method takes the overlap of constraints into account. It keeps two weights for each constraint: an applied weight and a non-applied weight. The applied weight is set to the constraint weight when the constraint is applied. The non-applied weight is the sum of the weights of the blocks the constraint affects, when the constraint is not applied. The constraint potential is w_n − w_a, where w_n is the non-applied weight and w_a is the applied weight.
In effect, the potential is the difference between applying the constraint and not applying it. Constraints are applied if this value is positive, meaning that the application of the constraint decreases overall response time.
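The two-weight bookkeeping described above might look like the following sketch; the class and field names are illustrative, not taken from the system:

```python
class AppliedWeights:
    """Track the two weights the application learner keeps per constraint."""

    def __init__(self):
        self.applied = 0.0      # constraint weight seen when applied
        self.nonapplied = 0.0   # sum of affected-block weights when not applied

    def record(self, weight, was_applied):
        if was_applied:
            self.applied = weight
        else:
            self.nonapplied = weight

    def potential(self):
        # positive => applying the constraint lowers overall response time
        return self.nonapplied - self.applied

w = AppliedWeights()
w.record(5.0, True)    # observed with the constraint applied
w.record(8.0, False)   # observed without it
```

Here the positive potential (8.0 − 5.0) would cause the constraint to be applied.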
However, this approach does not adequately address the performance dependencies among the constraints. The weight for application and non-application is only a single weight, and does not account for the application state of dependent constraints. To address this aspect of the problem, we explored yet another learner.
8.3.3 Dependency Learner
The dependency method takes the general constraint performance dependencies into account. It keeps a list
of all the dependent constraints for each constraint, as discussed in Section 7.1.2. Instead of keeping a single
applied and non-applied weight, the learner keeps track of which dependent constraints were applied and
whether the actual constraint was applied at each iteration.
The learner then sets the constraint potential to the best value for the constraint when it is applied subtracted from its best value when it is not applied. It then sorts the list by potential, walks through it, and attempts to apply only those constraints that it predicts will perform well given the constraints that have already been applied.
A more thorough approach would reorder the remaining constraints after each constraint is applied, and then apply only those constraints predicted to do well in the resulting situation. This would make the sorting step of the learner much more computationally intensive, but exploration of this option would be interesting.
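A reduced sketch of the dependency method's potential computation, ignoring the per-dependent bookkeeping (the full method also records which dependent constraints were applied at each iteration):

```python
def dependency_potential(history):
    """Potential from observed iterations.

    history: list of (applied: bool, value: float) observations, where
    value is the response-time weight seen at that iteration. Lower
    values are better, so 'best' means minimum; the potential is
    best-when-not-applied minus best-when-applied.
    """
    applied = [v for a, v in history if a]
    not_applied = [v for a, v in history if not a]
    if not applied or not not_applied:
        return 0.0   # not enough evidence either way
    return min(not_applied) - min(applied)

# Hypothetical observations across four iterations.
history = [(True, 4.0), (True, 6.0), (False, 7.0), (False, 5.0)]
```

A positive result (here 5.0 − 4.0) means the best outcomes observed with the constraint applied beat the best outcomes without it.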
8.3.4 Evaluation
We evaluated each of the learning methods in comparison to individual heuristics. The experimental setup is the same as that used in Section 6. However, we now look at all three of the learners.
We compare the results of each learner with all but the bad heuristic (Compare, Apply, Dependency),
against each learner with all of the heuristics, including the bad heuristic (Compare w/ bad, Apply w/ bad,
Dependency w/ bad) and the base heuristics (Shuffle, Front load, Threading and Run Clustering). All of the
results are normalized to the base case, where no reorganization is performed (Base).
Figures 9 and 10 show the results of these evaluations. The values shown in the graphs are average
response times normalized to the base response time, so lower bars mean better performance. Note that
these results do not include the cost of doing the actual reorganization.
As expected, the application and dependency learners almost always perform better than the compare learner. However, sometimes the application learner performs better, and sometimes the dependency learner does.

