Asynchronous Scheduling of Redundant Disk Arrays - Sanders (2000) (3 citations) (Correct)
....6 Beyond Replication: Coding. Storage redundancy can be reduced by storing r subblocks of each logical block and their parity. Refer to [27] for a batched scheduling algorithm. Using more sophisticated coding schemes, general values for w are possible and can be used to increase fault tolerance [21, 16, 12, 8]. The simple scheduling algorithms from Section 2 are straightforward to adapt; refer to the full paper for details using a hypergraph model. Lemma 1 and Theorem 2 transfer. We now generalize the matching algorithm. A schedule can be represented as an R-perfect matching in a bipartite graph ....

M. Blaum, J. Brady, J. Bruck, and J. Menon. EVENODD: an optimal scheme for tolerating double disk failures in RAID architectures. In Proceedings of the 21st Annual International Symposium on Computer Architecture, pages 245--254, 1994.

--------------------------------------------------------------------------------
RAIDframe: A Rapid Prototyping Tool for RAID Systems - II, Gibson, Holland.. (1997) (1 citation) (Correct)
.... To see this, assume that disk 2 in the RAID Level 3 diagram within Figure 4 has failed .... Multiple-failure tolerance can be achieved in RAID Level 3 by using more than one check disk and a more complex error-detecting/correcting code such as a Reed-Solomon [Peterson72] or MDS code [Burkhard93, Blaum94]. RAID Level 3 has very low storage overhead and provides very high data transfer rates. Since user data is striped on a fine grain, each user access uses all the disks in the array, and hence only one access can be serviced at any one time. Thus this organization is best suited for applications ....

....matrix to distribute a block of data (a file in their terminology) into n fragments such that any m ≤ n of them suffice to reconstruct the entire file. An array constructed using such a code can tolerate (n − m) concurrent failures without losing data. The second, described fully by Blaum et al. [Blaum94], clusters together sets of N − 1 parity stripes, where N is the number of disks in the array, and stores two parity units per parity stripe. The first parity unit holds the same information as in RAID Level 5, and the second holds parity computed using one data unit from each of the parity stripes in ....

Blaum, M., Brady, J., Bruck, J., and Menon, J., "EVENODD: An Optimal Scheme for Tolerating Double Disk Failures in RAID Architectures," Proceedings of the International Symposium on Computer Architecture (ISCA), 1994, pp. 245-254.

--------------------------------------------------------------------------------
Efficient, Distributed Data Placement Strategies for Storage.. - Brinkmann, al. (2000) (2 citations) (Correct)
....fail. Such failures require the use of fault-tolerant placement strategies, which are beyond the scope of this paper. Many strategies may be used to allow the reconstruction of lost data. Among them are, e.g., parity layouts [14, 5], declustered layouts [8, 18], or multi-fault-tolerant schemes [6]. It is not difficult to extend our strategies so that they work not only for the planned removal of a disk but also in case of its failure. 1.2 Previous Results. The exploration of disk arrays as an efficient and flexible storage system imposes a number of challenging tasks. First of all, one ....

....(scheduling of requests), space requirements (buffers), and application properties (pattern of requests) have a large impact on the usefulness of such distribution strategies. The simplest data layout used is disk striping [7], which is applied with different granularity in a number of approaches [14, 20, 6, 8, 4]. Here, the data is cut into equal-sized blocks and assigned to disks in a round-robin fashion, so that logically consecutive blocks are put on consecutive disks, cycling repeatedly over all of them. This simple and effective strategy has the disadvantage that the layout is fixed for any number of ....
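The round-robin striping described in this excerpt can be sketched in a few lines of Python. This is a minimal illustration, not code from any of the cited papers; the function name and parameters are invented for the example.

```python
# Illustrative sketch of round-robin disk striping: data is cut into
# equal-sized blocks, and logical block i is placed on disk (i mod num_disks),
# cycling repeatedly over all disks.
def stripe_round_robin(data: bytes, block_size: int, num_disks: int) -> list[list[bytes]]:
    disks: list[list[bytes]] = [[] for _ in range(num_disks)]
    for i in range(0, len(data), block_size):
        disks[(i // block_size) % num_disks].append(data[i:i + block_size])
    return disks

layout = stripe_round_robin(b"abcdefgh", block_size=2, num_disks=3)
# logical blocks 0..3 land on disks 0, 1, 2, 0 respectively
```

As the excerpt notes, the drawback is that this mapping is fixed by `num_disks`: adding or removing a disk changes the placement of nearly every block.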

M. Blaum, J. Brady, J. Bruck, and J. Menon. EVENODD: an optimal scheme for tolerating double disk failures in RAID architectures. In Proceedings of the 21st Annual International Symposium on Computer Architecture, pages 245-254. IEEE Computer Society TCCA and ACM SIGARCH, April 18-21, 1994.

--------------------------------------------------------------------------------
Integrated Storage and Communication System Error.. - Halvorsen, Plagemann, .. (2000) (Correct)
....the other disks can be used to determine the bit values of the lost data. In today's RAID solutions, XOR correction schemes are usually applied for reliability, but there are also other schemes proposed for disk arrays in general or as improvements of the RAID mechanisms. For example, the EVENODD [5] double-disk-failure scheme uses a two-dimensional XOR parity calculation, which shows good results regarding both redundant-information overhead and performance. In [18], the problem of designing erasure-correcting binary linear codes protecting against disk failures in large arrays is addressed. ....

....of a stripe in a RAID system is not suitable, because it lacks the ability to correct several random losses in the network. The idea of multi-dimensional parity calculations used in some disk array configurations to correct more than one disk failure, like RAID Level 6 and as proposed in [5, 18], is not usable, since the parity information corrects disk blocks spanning several different files, i.e., contrary to requirement number 4. Moreover, the IETF scheme [26], using a low-quality copy of the data for error recovery, is not suitable for our purposes, because it cannot handle disk ....

Blaum, M., Brady, J., Bruck, J., and Menon, J.: "EVENODD: An Optimal Scheme for Tolerating Double Disk Failures in RAID Architectures", Proceedings of the 21st Annual International Symposium on Computer Architecture (ISCA'94), Chicago, IL, USA, April 1994, pp. 245-254

--------------------------------------------------------------------------------
Scalable Concurrency Control and Recovery for Shared Storage.. - Khalil Amiri Garth (1999) (6 citations) (Correct)
.... are either single-phase or two-phase (a read phase followed by a write phase), as shown in Figure 3(a). This distinguishing characteristic of storage clusters in fact extends to all the RAID architectures (including mirroring, single-failure-tolerating parity, and double-failure-tolerating parity [Gibson92, Blaum94], as well as parity declustering [Holl94], which is particularly appropriate for large storage systems). All fault-free and degraded-mode high-level operations are composed of either single-phase or two-phase collections of low-level requests. A single-phase hostread (hostwrite) breaks down into ....

M. Blaum, J. Brady, J. Bruck, J. Menon, "EVENODD: An optimal scheme for tolerating double disk failures in RAID architectures," Proceedings of the 21st ISCA, Chicago, IL, April 18-21, 1994, pp. 245-254.

--------------------------------------------------------------------------------
LH* RS : A High-Availability Scalable Distributed Data.. - Litwin, Schwarz (1997) (Correct)
....other codes are potentially attractive as well [ABC97], [Ha94], [BFT98]. 7 Related Work. There were countless high-availability schemes for a single site, usually 1-available and using some RAID-like striping. A few schemes appeared for (static) k-availability in this context: [BM93], [BBM93], [Ha94], and recently [ABC97]. There were also studies for the distributed environment, e.g. [SG90], showing the inefficiency of any trivial striping. A deeper discussion of all these schemes, including the SDDS schemes with mirroring or replication mentioned in the Introduction, is in [LMR98] ....

Blaum, M., Bruck, J., Menon, J. EVENODD: An Optimal Scheme for Tolerating Double Disk Failures in RAID Architectures. IBM Comp. Sci. Res. Rep. (Sep. 1993), 11.

--------------------------------------------------------------------------------
On-Line Data Reconstruction In Redundant Disk Arrays - Holland (1994) (10 citations) (Correct)
....see this, assume that disk 2 in the RAID Level 3 diagram within Figure 2.6 has failed .... Multiple-failure tolerance can be achieved in RAID Level 3 by using more than one check disk, and a more complex error-detecting/correcting code such as a Reed-Solomon [Peterson72] or MDS code [Burkhard93, Blaum94]. RAID Level 3 has very low storage overhead and provides very high data transfer rates. Since user data is striped on a fine grain, each user access uses all the disks in the array, and hence only one access can be serviced at any one time. Thus this organization is best suited for applications ....

....matrix to distribute a block of data (a file in their terminology) into n fragments such that any m ≤ n of them suffice to reconstruct the entire file. An array constructed using such a code can tolerate (n − m) concurrent failures without losing data. The second, described fully by Blaum et al. [Blaum94], clusters together sets of N − 1 parity stripes, where N is the number of disks in the array, and stores two parity units per parity stripe. The first parity unit holds the same information as in RAID Level 5, and the second holds parity computed using one data unit from each of the parity stripes ....

M. Blaum, J. Brady, J. Bruck, and J. Menon, Evenodd: An Optimal Scheme for Tolerating Double Disk Failures in RAID Architectures, Proceedings of the International Symposium on Computer Architecture, 1994, pp. 245-254.

--------------------------------------------------------------------------------
Near-Optimal Parallel Prefetching and Caching - Tracy Kimbrel (1996) (28 citations) (Correct)
.... are more complicated [16, 48, 39, 13, 21]. Much research in the past on parallel I/O has concentrated on techniques for striping and distributing error-correction codes among redundant disk arrays or other devices, to achieve high bandwidth by exploiting parallelism and to tolerate failures [27, 45, 2, 12, 10, 40, 20, 31, 36, 19, 3, 5, 26, 43, 25, 14, 4, 18, 15, 22, 44]. Our work complements these previous efforts. File-access prediction (with or without application hints) can be used to provide the inputs to the algorithm described in this paper. Once future accesses are known, our algorithm determines a near-optimal prefetching schedule. Our algorithm achieves ....

M. Blaum, J. Brady, J. Bruck, and J. Menon. EVENODD: An Optimal Scheme for Tolerating Double Disk Failures in RAID Architectures. In Proceedings of the 21st Annual Symposium on Computer Architecture, pages 245--254, April 1994.

--------------------------------------------------------------------------------
Improving the Performance of Coordinated Checkpointers on Networks .. - Plank (1996) (6 citations) (Correct)
....checkpointing techniques will become more useful. This paper uses Reed-Solomon coding to tolerate multiple processor failures. Although this is the best general-purpose method for tolerating any number of failures, there are better methods for specific numbers. For example, evenodd parity [2] is a method for tolerating two processor failures using only parity operations. As such, it is faster than using Reed-Solomon coding for two processor failures. Evenodd parity was not used for this experiment, but should be used in preference to Reed-Solomon coding for two-processor ....

M. Blaum, J. Brady, J. Bruck, and J. Menon. EVENODD: An optimal scheme for tolerating double disk failures in RAID architectures. In 21st Annual Int. Symp. on Comp. Arch., pages 245---254, Chicago, IL, April 1994.

--------------------------------------------------------------------------------
Integrated Parallel Prefetching and Caching - Hi Ng (1995) (Correct)
....the interaction between caching and parallel prefetching as discussed here. Much research in the past on parallel I/O has concentrated on techniques for striping and distributing error-correction codes among redundant disk arrays or other devices to achieve high bandwidth and to tolerate failures [28, 43, 3, 14, 12, 38, 22, 30, 35, 21, 5]. These techniques were used in designing hardware disk array systems [7, 27] and parallel or distributed file systems. The Intel CFS [41] and PFS [26] allow users to stripe a file to multiple disks over their multicomputer network to achieve high bandwidth. The research prototype ....

M. Blaum, J. Brady, J. Bruck, and J. Menon. EVENODD: An Optimal Scheme for Tolerating Double Disk Failures in RAID Architectures. In Proceedings of the 21st Annual Symposium on Computer Architecture, pages 245--254, April 1994.

--------------------------------------------------------------------------------
AFRAID - A Frequently Redundant Array of Independent Disks - Savage, Wilkes (1996) (20 citations) (Correct)
....parity updates on other nearby activity done in the foreground; batching together updates that are physically close together; or simply doing a single, linear sweep through the disks. Similarly, existing schemes for balancing disk traffic under failure conditions can be applied to AFRAID (e.g., [Gray90c, Muntz90, Blaum94, Reddy91]). For ease of exposition, however, we concentrate here on a straightforward left-symmetric RAID 5 data layout. 3. Availability model of AFRAID. In this section we develop analytic models of data-loss mechanisms for AFRAID, basing them on similar models for traditional RAIDs. In the next section ....

Mario Blaum, Jim Brady, Jehoshua Bruck, and Jai Menon. EVENODD: an optimal scheme for tolerating double disk failures in RAID Architectures. Proceedings of the 21st International Symposium on Computer Architecture (Chicago, IL). Published as Computer Architecture News, 22(2):245-254, 18-21 April 1994.

--------------------------------------------------------------------------------
Algorithm-Based Diskless Checkpointing for Fault Tolerant.. - Plank, Kim, Dongarra (1995) (9 citations) (Correct)
.... [Table 4: Results for PCG on a 17-processor system.] To reliably tolerate any combination of multiple processor failures, extra parity processors must be combined with more sophisticated error-correction techniques [5, 8]. This means that every processor's checkpoint must be sent to multiple parity processors. In the absence of broadcast hardware, this kind of fault tolerance will likely impose too great an overhead. 8 Related Work. Checkpointing on supercomputers and distributed systems has been studied and ....

M. Blaum et al. EVENODD: An optimal scheme for tolerating double disk failures in RAID architectures. The 21st Int. Symp. on Comp. Arch., pp. 245---254, Apr 1994.

--------------------------------------------------------------------------------
Fault Tolerant Matrix Operations for Networks of Workstations.. - Plank (1997) (6 citations) (Correct)
....which it sends checkpoints. As m grows, the overhead of checkpointing and recovery will decrease, because there is less contention for the parity processors. To tolerate any combination of m processor failures, m parity processors must be combined with more sophisticated error-correction techniques [6, 8]. This means that every processor's checkpoint must be sent to multiple parity processors. In the absence of broadcast hardware, this kind of fault tolerance will likely impose too great an overhead. [Figure: Cholesky Factorization, running time in seconds vs. n.] ....

M. Blaum, J. Brady, J. Bruck, and J. Menon. EVENODD: An optimal scheme for tolerating double disk failures in RAID architectures. In 21st Annual International Symposium on Computer Architecture, pages 245---254, Chicago, IL, April 1994.

--------------------------------------------------------------------------------
Fault Tolerant Matrix Operations for Networks of Workstations.. - Kim, Plank (1997) (6 citations) (Correct)
....intervals. There are several more complicated schemes for configuring multiple checkpointing processors to tolerate more general sets of multiple failures. These schemes include two-dimensional and multi-dimensional parity [13], the Reed-Solomon coding scheme [21, 22], and Evenodd parity [3]. ....
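The two-dimensional parity arrangement this excerpt refers to can be sketched as follows. This is an illustrative sketch, not code from the cited papers: checkpoints (here, integers standing in for checkpoint data) are laid out in a grid, and one XOR parity is kept per row and per column.

```python
# Sketch of two-dimensional parity: XOR each grid cell into its row parity
# and its column parity. A single lost cell is recoverable from either its
# row or its column; many (though not all) double losses are recoverable
# by combining row and column information.
def two_dim_parity(grid: list[list[int]]) -> tuple[list[int], list[int]]:
    row_par = [0] * len(grid)
    col_par = [0] * len(grid[0])
    for r, row in enumerate(grid):
        for c, v in enumerate(row):
            row_par[r] ^= v
            col_par[c] ^= v
    return row_par, col_par
```

The storage cost is one parity value per row plus one per column, i.e. roughly 2·sqrt(N) extra values for N cells, which is the trade-off the cited schemes try to improve on.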

M. Blaum, J. Brady, J. Bruck, and J. Menon. EVENODD: An optimal scheme for tolerating double disk failures in RAID architectures. In 21st Annual International Symposium on Computer Architecture, pages 245-254, April 1994.

--------------------------------------------------------------------------------
Algorithm-Based Diskless Checkpointing for Fault Tolerant Matrix.. - Plank (1995) (9 citations) (Correct)
....decrease, since the recovery group size is smaller, which requires less information to be combined at each parity processor. To reliably tolerate any combination of multiple processor failures, extra parity processors must be combined with more sophisticated error-correction techniques [BM93, BBBM94, PFL94]. This means that every processor's checkpoint must be sent to multiple parity processors. In the absence of broadcast hardware, this kind of fault tolerance will likely impose too great an overhead. 8 Related Work. Checkpointing on supercomputers and distributed systems has been studied and ....

M. Blaum, J. Brady, J. Bruck, and J. Menon. EVENODD: An optimal scheme for tolerating double disk failures in RAID architectures. In 21st Annual International Symposium on Computer Architecture, pages 245---254, Chicago, IL, April 1994.

--------------------------------------------------------------------------------
Data Logging: A Method for Efficient Data Updates in.. - Gabber, Korth (1998) (2 citations) (Correct)
....array as an orthogonal RAID [2]. Two-dimensional parity [17] organizes the disks in a two-dimensional array. Each data block is associated with a row parity block and a column parity block. Whenever a data block is written, the corresponding row and column parity blocks are also updated. Evenodd [1] is a scheme for protecting against up to two disk failures by adding two redundant blocks per parity group. The first block contains the parity of the data blocks. The second block contains the diagonal parity of the data blocks. Both parity blocks are computed by exclusive-or of the data blocks. ....

M. Blaum, J. Brady, J. Bruck, and J. Menon. EVENODD: An optimal scheme for tolerating double disk failures in RAID architectures. In Proceedings of the Twenty-first International Symposium on Computer Architecture, pages 245-254, Apr. 1994.

--------------------------------------------------------------------------------
Flexible Usage of Redundancy in Disk Arrays - Schwabe, Sutherland (1999) (1 citation) (Correct)
....permits the design of data layouts that can reconstruct the contents of more than one failed disk simultaneously. Gibson et al. [4] showed how to tolerate an arbitrary set of t simultaneous disk failures using t·v^(1−1/t) disks' worth of parity-check information. More recently, Blaum et al. [1, 2] reduced the parity overhead requirements for t-fault-tolerant data layouts to t disks' worth. However, since their results were stated in a coding-theoretic framework, they did not consider the problem of how to distribute the t disks' worth of parity-check information throughout the ....

....for single-fault-tolerant layouts to a multiple-fault-tolerant setting, albeit with some inefficiency. As an immediate consequence, the techniques used also solve the problem of evenly distributing the parity-check information in the multiple-fault-tolerant data layouts of Blaum et al. [1, 2]. General Lower Bounds on Redundancy Overhead: In Section 3, we prove general lower bounds on the number of additional redundancy bits (a generalization of parity-check information) needed to protect a collection of data bits in a disk array. One of our results generalizes a lower bound on ....

[Article contains additional citation context not shown here]

M. Blaum, J. Brady, J. Bruck and J. Menon. "EVENODD: An Optimal Scheme for Tolerating Double Disk Failures in RAID Architectures." In Proceedings of the 1994 International Symposium on Computer Architecture, pp. 245-254, 1994.

--------------------------------------------------------------------------------
A Tutorial on Reed-Solomon Coding for Fault-Tolerance in RAID-like .. - Plank (1997) (28 citations) (Correct)
....and one extra write operation per write to any single device. Its main disadvantage is that it cannot recover from more than one simultaneous failure. As n grows, the ability to tolerate multiple failures becomes important [BM93]. Several techniques have been developed for this [GHK+89, BM93, BBBM94, Par95], the concentration being on small values of m. The most general technique for tolerating m simultaneous failures with exactly m checksum devices is a technique based on Reed-Solomon coding. This fact is cited in almost all papers on RAID-like systems. However, the technique itself is harder ....

....the failure of any m devices. This has application in disk arrays, network file systems, and distributed checkpointing systems. This paper does not claim that RS-Raid coding is the best method of coding for all applications in this domain. For example, in the case where m = 2, evenodd coding [BBBM94] solves the problem with better performance, and one-dimensional parity [GHK+89] solves a similar problem with even better performance. However, RS-Raid coding is the only general solution for all values of n and m. The table-driven approach for multiplication and division over a Galois ....
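The table-driven Galois-field arithmetic mentioned at the end of this excerpt can be sketched as follows. This is an illustrative sketch, not Plank's code: it builds log/antilog tables for GF(2^8) from the commonly used primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d), so that field multiplication reduces to two table lookups and an integer add.

```python
# Build exp/log tables for GF(2^8). GF_EXP is doubled in length so that
# GF_LOG[a] + GF_LOG[b] (at most 508) indexes it without a mod-255 step.
GF_EXP = [0] * 512
GF_LOG = [0] * 256
x = 1
for i in range(255):
    GF_EXP[i] = x
    GF_LOG[x] = i
    x <<= 1
    if x & 0x100:          # reduce modulo the primitive polynomial 0x11d
        x ^= 0x11d
for i in range(255, 512):
    GF_EXP[i] = GF_EXP[i - 255]

def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) via log/antilog lookup; division would subtract logs."""
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]
```

The same pair of tables serves division (subtract logs, add 255 if negative), which is why the table-driven approach makes Reed-Solomon-style coding practical without hardware support.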

M. Blaum, J. Brady, J. Bruck, and J. Menon. EVENODD: An optimal scheme for tolerating double disk failures in RAID architectures. In 21st Annual International Symposium on Computer Architecture, pages 245---254, Chicago, IL, April 1994.

--------------------------------------------------------------------------------
A Structured Approach to Redundant Disk Array Implementation - II., al. (1996) (7 citations) (Correct)
....cost, a significant number of redundant disk architectures have been proposed. These include designs emphasizing improved write performance [Menon92, Mogi94, Polyzois93, Solworth91, Stodolsky94], array controller design and organization [Cao94, Drapeau94, Menon93], multiple-fault tolerance [ATC90, Blaum94, STC94], performance in the presence of failure [Holland92, Muntz90], and network-based RAID [Hartman93, Long94]. Finally, the importance of redundant disk arrays is evidenced by their pronounced growth in revenue, projected to exceed $9 billion this year and to surpass $13 billion in 1998. Architects ....

M. Blaum, J. Brady, J. Bruck, and J. Menon. "EVENODD: an optimal scheme for tolerating double disk failures in RAID architectures." Proceedings of the 21st Annual International Symposium on Computer Architecture. Chicago (April 18-21, 1994) 245-254.

--------------------------------------------------------------------------------
Fault Tolerant Matrix Operations for Parallel and Distributed.. - Kim (1996) (Correct)
....configure extra checkpointing processors to tolerate multiple processor failures. For example, the paper [GHKP89] presents two-dimensional and multi-dimensional parity, in which the coding information is distributed in two-dimensional or multi-dimensional fashion, respectively. Another paper [BBBM94] introduces EVENODD parity, with which two extra processors may be used to tolerate any two failures in the system. More complicated coding schemes have been suggested to tolerate m failures with m checkpointing processors for arbitrary m [MS77, BM93, PFL94, Par95]. 2.4.5 Algorithm-Based ....

....are several more complicated schemes for configuring multiple checkpointing processors to tolerate more general sets of multiple failures. These schemes include two-dimensional and multi-dimensional parity [GHKP89], the Reed-Solomon coding scheme [PW72, Rom92, PFL94], and Evenodd parity [BBBM94]. The paper [Par95] also proposes a scheme to tolerate any two failures in RAID. One possible direction of future research is to investigate how such schemes can be employed to tolerate different groups of multiple failures or a random set of multiple failures. We expect it to be challenging to ....

M. Blaum, J. Brady, J. Bruck, and J. Menon. EVENODD: An optimal scheme for tolerating double disk failures in RAID architectures. In 21st Annual International Symposium on Computer Architecture, pages 245-254, April 1994.

--------------------------------------------------------------------------------
Backward Error Recovery in Redundant Disk Arrays - Courtright, II, Gibson (1994) (6 citations) (Correct)
....to an existing code base, thereby restricting a designer's ability to explore the design space, confining experimentation to limited departures from the current code structure. Finally, researchers are investigating more aggressive redundant disk array architectures to boost performance [Bhide92, Blaum94, Cao93, Menon93, Stodolsky93, Holland94]. The acceptance of these proposals is put at risk by their further increases in the complexity of error handling and the difficulty of modifying existing code structure. Forward error recovery has been used with arguable success in the design of single-disk systems and filesystems. Single ....

Mario Blaum, Jim Brady, Jehoshua Bruck, Jai Menon, "EVENODD: An optimal scheme for tolerating double disk failures in RAID architectures." In Proceedings of the 21st Annual International Symposium on Computer Architecture (ISCA), Chicago, IL, April 18-21, 1994, pp. 245-254.

--------------------------------------------------------------------------------
LH* Schemes with Scalable Availability - Litwin, Menon, Risch (1998) Self-citation (Menon) (Correct)
....number of storage nodes. Combined with the efficient use of large distributed RAM, which may now reach, e.g., 8 GB on a workstation [M97c], SDDSs should lead to performance impossible for more traditional storage systems. Known high-availability schemes, e.g., variants of the RAID schemes [PGK88], [BBM93], [BM92], [SS90], [RM96], [W96], and known high-availability variants of LH* [LN96a], [La97], and [LR97], typically guarantee that all data remain available as long as no more than n − 1 sites (buckets) of the file fail simultaneously. The value of n is a parameter chosen at file creation time. ....

....could be made n-dimensional (orthogonal) for this purpose. As Fig. 2 shows, LH*sa is somehow rooted in this approach, every k^2 sites forming a rectangle with horizontal and vertical parity. In [BM92] one proposes an efficient n-availability scheme using MDS codes. The EvenOdd scheme in [BBM93] provides 2-availability particularly efficiently. Some linear coding techniques for 2-availability, or possibly for 3-availability, are also addressed in [Ha94]. In [NW94] there are complementary proposals for cases of correlated disk failures that may generalize to LH*sa bucket failures. ....

Blaum, M., Bruck, J., Menon, J. EVENODD: An Optimal Scheme for Tolerating Double Disk Failures in RAID Architectures. IBM Comp. Sci. Res. Rep. (Sep. 1993), 11.

--------------------------------------------------------------------------------
Design Issues For Scalable Availability LH* Schemes.. - Litwin, Menon, Risch, .. (1999) Self-citation (Menon) (Correct)
....parity records are at the sites of data records only, seems immediately applicable to the current generation of parallel database systems using static hash or range partitioning. Finally, Reed-Solomon codes are one attempt to achieve scalable availability. Other choices seem possible: [ABC97], [BBM93], or [Ha94]. Acknowledgments. This research was partly sponsored by a Research Grant of IBM Almaden Res. Cntr. to the Centre d'Etudes et de Recherches en Informatique Appliquée (CERIA), Université Paris 9 Dauphine. ....

Blaum, M., Bruck, J., Menon, J. EVENODD: An Optimal Scheme for Tolerating Double Disk Failures in RAID Architectures. IBM Comp. Sci. Res. Rep. (Sep. 1993), 11.

--------------------------------------------------------------------------------
Algorithms for Scalable Storage Servers - Peter Sanders Max (2004) (Correct)
No context found.

M. Blaum, J. Brady, J. Bruck, and J. Menon. EVENODD: an optimal scheme for tolerating double disk failures in RAID architectures. In Proceedings of the 21st Annual International Symposium on Computer Architecture, pages 245-254, 1994.

--------------------------------------------------------------------------------
A Transactional Approach to Redundant Disk Array Implementation - Courtright, II (1997) (5 citations) (Correct)
No context found.

Blaum, M., Brady, J., Bruck, J., and Menon, J. "EVENODD: an optimal scheme for tolerating double disk failures in RAID architectures." Proceedings of the 21st Annual Symposium on Computer Architecture (ISCA). Los Alamitos, CA: IEEE Computer Society Press. Chicago. (April 18-21, 1994) 245-254.

CiteSeer.IST - Copyright Penn State and NEC
