What system writes data on 2 or more disks simultaneously?

Disk mirroring (RAID 1) writes identical data to two or more disks simultaneously, so that if one disk fails, the data remains available on the other.

When mirroring is used on Windows Server 2003, the first disk in the set must contain the files to be mirrored. The second disk must be at least as large as the first, because everything on the first disk is written to it; if the second disk were smaller, not all of the data on the first disk could be copied to it, and if it is larger, the extra space is wasted.

The two disks in a mirror are seen as a single volume by the operating system and share a single drive letter. If one of the mirrors in a mirrored volume fails, the mirrored volume must be broken using the Disk Management tool. Breaking the mirror makes the still-functioning disk a separate volume with its own drive letter and enables you to create a new mirrored volume using free disk space on another disk, restoring fault tolerance.

Additional fault tolerance can be achieved with RAID 1 by using separate disk controllers for each disk; this is called duplexing. If only mirroring is used, both disks can become unavailable if the controller on which they are installed fails. With duplexing, each disk is installed on a different controller, so that if one of the disks fails or a controller fails, the data is still available through the other disk in the pair.

Mirroring disks not only provides fault tolerance but can also affect performance. Multiple reads and writes are performed each time data is accessed or saved. Because data can be read from both mirrored volumes, read performance can improve: two different disks serve reads at the same time. When data is written, however, performance decreases, because the same data must be written to both disks.


Nokia Network Voyager

Andrew Hay, ... Warren Verbanec, in Nokia Firewall, VPN, and IPSO Configuration Guide, 2009

Configuring Disk Mirroring

The Nokia disk mirroring feature (RAID Level 1) protects against downtime in the event of a hard-disk failure on your Nokia appliance (on platforms that support the feature), but you must have a second hard disk drive installed on your appliance.

Using Nokia Network Voyager, you can configure a mirror set composed of a source hard disk drive and a mirror hard disk drive. The source drive is where IPSO is installed. When you configure a mirror set, the drives are synchronized (the source hard disk drive is fully copied to the mirror) and all new data written to the source drive is also written to the mirror. If the source hard disk drive fails, the mirror automatically replaces it without an interruption of service on the Nokia appliance.

The source and mirror hard disk drives can be warm-swapped on appliances other than the IP500 series, meaning you can replace a failed hard disk drive without shutting down your Nokia appliance. You can also monitor the status of a mirror set, the synchronization time, and the system log entries. Figure 4.9 shows the Disk Mirroring Configuration page.

Figure 4.9. The Disk Mirroring Configuration Page

To create a mirror set or enable disk mirroring, perform the following steps:

1. In the system tree, click Configuration | System Configuration | Disk Mirroring.

2. Under Mirror Set, select Create. (The source and mirror hard disk drives should have identical geometries.)

3. Click Apply.

A message appears indicating that a mirror set was created. Numbers indicate which hard disk drive is the source and which is the mirror, and that mirror synchronization is in progress.

The synchronization percent value in the Mirror Set table indicates the percentage of synchronization zones that have been copied from the source to the mirror disk. A zone is a range of contiguous disk sectors. When all zones have been copied to the mirror disk, the synchronization percent value is 100. For a 20GB disk, synchronization takes approximately 20 to 30 minutes.
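The quoted figure implies an effective copy rate in the low tens of MB/s. Here is a back-of-the-envelope sketch in Python; the 14 MB/s rate is an assumption chosen so that a 20GB disk lands inside the 20-to-30-minute window above, not a documented IPSO figure:

```python
# Rough estimate of full source-to-mirror synchronization time.
def sync_minutes(disk_gb: float, copy_rate_mb_s: float = 14.0) -> float:
    """Minutes to copy disk_gb at a sustained copy_rate_mb_s."""
    return disk_gb * 1024 / copy_rate_mb_s / 60

print(f"{sync_minutes(20):.0f} minutes")  # ~24 minutes for a 20GB disk
```

Under this model the time scales linearly with disk size, so a 40GB disk would take roughly twice as long.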

To delete or disable disk mirroring, on the Disk Mirroring page, next to the item for removal or deactivation, select Delete. Click Apply and then click Save to make the changes permanent.

You can only delete a mirror set that is 100-percent synchronized.


MCSA/MCSE 70-294: Ensuring Active Directory Availability

Michael Cross, ... Dr. Thomas W. Shinder (Technical Editor), in MCSE (Exam 70-294) Study Guide, 2003

RAID-1

RAID-1 is more commonly referred to as disk mirroring or disk duplexing, and involves the use of two disk drives. It can be implemented through hardware, or through software using the disk management features built into Windows Server 2003. A RAID-1 volume created using the Windows software is referred to as a mirrored volume. With RAID-1, each drive is a “mirror image” of the other. Disk duplexing refers to mirroring when each drive is attached to its own separate controller or communication channel.

Note

Fault-tolerant software RAID (RAID-1 and RAID-5) can only be implemented in Windows Server 2003 on disks that have been converted from basic to dynamic status. Dynamic disks offer more flexibility in management options, but cannot be accessed locally by operating systems prior to Windows 2000. Because it is recommended that production servers not be installed in a dual-boot configuration, this should not be an issue with your DCs.

Duplexing can improve both read and write performance. In a duplexed environment, both disks can be read from or written to simultaneously; in a mirrored environment, only one of the drives can be read from or written to at a time. Because data must be written to both drives, write performance can be slightly degraded in a mirrored environment; because a duplexed configuration can perform the two writes simultaneously, the penalty there is smaller. Data can be read from either disk in a mirrored environment and from both disks simultaneously in a duplexed environment, thereby increasing read performance. Fault tolerance is also enhanced: if one of the drives malfunctions in a mirrored environment, or one of the drives, controllers, or communications channels malfunctions in a duplexed environment, the server remains functional using the other hardware in the configuration.

A duplexed volume appears to the operating system as a mirrored volume. The difference is in the hardware configuration; the software does not recognize any difference between mirrored and duplexed volumes. If a disk in a RAID-1 volume fails, you should break the mirror, add a new disk, and recreate the mirror to restore fault tolerance.

Test Day Tip

When using software RAID, the Windows Server 2003 boot and system files can only exist on a RAID-1 (mirrored or duplexed) volume. You cannot use a Windows RAID-5 volume for disks that hold the boot or system volumes. This limitation does not exist when using hardware RAID; in addition, hardware RAID offers higher performance and is preferred if the extra cost is acceptable to the organization.


Basic Concepts of I/O Systems

Pierre Bijaoui, Juergen Hasslauer, in Designing Storage for Exchange 2007 SP1, 2008

RAID1

RAID1 is another fundamental RAID level that keeps and maintains the exact same content of a disk drive on one or more additional disks. RAID1 is often referred to as disk mirroring, and carries the great advantage of duplicating the contents of a given disk drive onto another drive. Should the initial drive (or any member of the RAID1 set) fail, operations can still continue by reading the data from the other valid members of the RAID1 set (Figure 2-6).

Figure 2-6. RAID1 disk structure

The advantage of RAID1 is the high level of data redundancy and availability. In fact, it carries other advantages, especially in the area of point-in-time backup operations, which is discussed later. The disadvantage of RAID1 is that you lose capacity: if you combine two 72 GB disk drives in a RAID1 array, the resulting usable capacity is 72 GB. You've just lost the capacity of the mirroring drive. The same thing applies if you arrange three drives together: the resulting capacity is still the size of one disk.

From a request rate perspective, the deal is somewhat different depending on whether you mostly read from the volume or write to it. In read operations, the RAID controller can read from either of the two volumes (in a two-volume RAID1 set) and yield a performance level of 200 I/O per second, given that your individual disk drives can sustain 100 I/O per second.

When it comes to write operations, the data have to be written to both volumes at the same time, and in some cases, the resulting response time is dependent on the slowest drive to complete the operation. Therefore, a two-disk RAID1 set has a write performance of 100 I/O per second. This is the first instance in which we see that the read/write ratio gains importance in the actual performance of the volume. When we compare the RAID levels, you will see that the R/W ratio can weigh significantly in choosing the most appropriate level for your environment.
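To make the arithmetic concrete, here is a minimal Python model of a RAID1 set under the assumptions stated above (reads can be served by any member, writes must hit every member, 100 I/O per second per drive). It is a sketch of the reasoning, not a description of any particular controller:

```python
# Toy model of an n-member RAID1 set built from identical drives.
def raid1(members: int, disk_gb: float, iops_per_disk: int = 100) -> dict:
    return {
        "usable_gb": disk_gb,                  # extra members add no capacity
        "read_iops": members * iops_per_disk,  # any member can serve a read
        "write_iops": iops_per_disk,           # every write hits every member
    }

print(raid1(2, 72))
# {'usable_gb': 72, 'read_iops': 200, 'write_iops': 100}
```

The fact that write throughput is pinned to the speed of a single (the slowest) member is exactly why the read/write ratio weighs so heavily when choosing a RAID level.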

Figure 2-6 describes the block layout of a RAID1 volume. Note that for two disks, the contents are the same. As mentioned earlier, you could combine more than two drives in a RAID1 set. A three-member RAID1 volume can handle the loss of two disk drives (really, the probability of losing two drives in a three-disk RAID1 set is quite small). The other advantage is that you can “break” the mirror (without necessarily affecting the current operation of the drive, although it's best to do this with a “quiesced” volume), and in a matter of seconds you have a duplicate copy of your original volume. This is a fast way to back up data. Given the decreasing price of storage these days (in terms of $/GB), this can be an interesting approach, and we'll see that in high-end environments, this becomes a sound data duplication strategy for backups of the replica database of Exchange 2007 CCR passive nodes.


Architecture of a Data Warehouse

Lilian Hobbs, ... Pete Smith, in Oracle 10g Data Warehousing, 2005

Striping and RAID

Striping, or RAID, spreads I/O requests across multiple disks in parallel. RAID is an acronym for Redundant Arrays of Inexpensive Disks; the basic concept is to combine small, inexpensive disks to yield improved performance and reliability while appearing to the server as a single, larger disk. We touched on this problem earlier in the chapter when we looked at adding more disks to our warehouse architecture; RAID provides a solution for better throughput and reliability by using multiple disks.

RAID is described in levels, which progress from 0 to 10, each offering a different trade-off between performance and reliability. For example, rather than storing a file on a single disk, the data is striped across multiple disks, which yields better read performance because all of the disks can potentially be read in parallel rather than just one.

Striping is the process whereby the disks are split into separate sections and the sections are utilized round-robin style on each disk. So, for example, if your stripe size is 1M, the first 1M of your file is written to disk 1, the second 1M stripe is written to disk 2, and so on. This splitting of the data into stripes written to different disks is done at either the byte level or the block level and is performed automatically and completely transparently to the application, whether it is a file for a word processor or a sophisticated database. Striping reduces contention for disk areas and improves I/O throughput to the server. Figure 3.8 shows a four-disk RAID, where the stripe width is four and the stripe size is the size of the individual amounts of data written to each disk.

Figure 3.8. Four-way RAID Showing Stripe Size and Width
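The round-robin placement just described is easy to express as arithmetic on the file offset. The following sketch assumes the 1M stripe size and four-disk stripe width from the example; the function name is illustrative:

```python
STRIPE_SIZE = 1024 * 1024  # 1M stripes, as in the example above
STRIPE_WIDTH = 4           # four disks in the stripe set

def locate(offset: int) -> tuple[int, int, int]:
    """Map a file offset to (disk index, stripe number, offset in stripe)."""
    stripe = offset // STRIPE_SIZE   # which 1M stripe of the file this is
    disk = stripe % STRIPE_WIDTH     # stripes are dealt out round-robin
    return disk, stripe, offset % STRIPE_SIZE

assert locate(0)[0] == 0                # first 1M goes to disk 1 (index 0)
assert locate(1024 * 1024)[0] == 1      # second 1M goes to disk 2
assert locate(4 * 1024 * 1024)[0] == 0  # fifth 1M wraps back to disk 1
```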

The process of mirroring the data across multiple disks involves duplicating the information that we write to one disk by writing a complete copy to a second disk so we always have a copy of our data.

The RAID levels are:

RAID 0: striping (i.e., striping our files so that each successive stripe occurs on a different disk in a group of disks).

RAID 1: disk mirroring, where an exact copy of each disk is automatically maintained. If one disk fails, the data is available on the other disk until a new disk can be inserted and the copy made while the system is on-line. The cost is that it doubles the number of disks required.

RAID 3: striping with parity at the byte level. Uses an additional, dedicated parity disk, where the parity data is simply a numerical value calculated from the data on the actual data disks, so that if any one of the data disks fails, the remaining data in conjunction with the parity data can rebuild the lost data; for example, five data disks and one parity disk.

RAID 4: block data striping with parity. The same as RAID 3, but working at the block rather than the byte level.

RAID 5: block striping with rotated parity. Both the parity and the data are striped across the disks. This removes the need for a dedicated parity disk, because the parity information is rotated across all disks. Read performance is better, but write performance can be poor. At least three disks are needed for a RAID 5 array.

RAID 10 (also known as 0+1): mirrored stripe sets, providing better security from the mirroring and better performance from the striping. (A short code sketch after this list works through the usable-capacity arithmetic for these levels.)
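The capacity arithmetic for these levels can be captured in a few lines. This sketch encodes only the simple definitions in the list above; real arrays add metadata overhead that it ignores:

```python
# Usable capacity for n identical disks of disk_gb each, per RAID level.
def usable_gb(level: str, n: int, disk_gb: float) -> float:
    if level == "raid0":                      # striping only: all space usable
        return n * disk_gb
    if level == "raid1":                      # mirroring: one disk's worth,
        return disk_gb                        # no matter how many copies
    if level in ("raid3", "raid4", "raid5"):  # one disk's worth of parity
        if n < 3:
            raise ValueError("parity RAID needs at least three disks")
        return (n - 1) * disk_gb
    if level == "raid10":                     # mirrored stripes: half the raw space
        if n % 2:
            raise ValueError("RAID 10 needs an even number of disks")
        return n // 2 * disk_gb
    raise ValueError(f"unknown RAID level: {level}")

assert usable_gb("raid1", 2, 72) == 72  # two 72 GB drives mirror to 72 GB
assert usable_gb("raid5", 5, 8) == 32   # five 8 GB drives: (5 - 1) * 8 = 32 GB
assert usable_gb("raid0", 5, 8) == 40   # five 8 GB drives stripe to 40 GB
```

The asserted values match the worked examples that appear elsewhere on this page.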

RAID can be implemented either in hardware, via a RAID controller for the disks, or in software, via a tool called a logical volume manager (LVM). An LVM is a piece of software provided either as part of the operating system, by a third-party vendor (e.g., Veritas), or by the disk storage vendor (e.g., EMC). The LVM combines areas on separate disks and makes them appear as a single unit to the operating system, providing software striping and mirroring in the process. Performing RAID in hardware may be easier, but it can be more difficult to change at a later stage, for example, if more disks are to be added; performing RAID in software can take advantage of a number of powerful volume managers. Regardless of the mechanism, the I/O request from the server is transparently split by the controller or LVM into separate I/Os for the individual disks.

From a performance perspective, the DBA and systems administrator must carefully consider not just the type of RAID they wish to employ and on which disks, but also items such as the RAID stripe size (i.e., the byte size of each stripe on the disks) and the number of disks in the stripe set.

A high-level perspective of the operations needed to use a volume manager to prepare the disks for Oracle follows. Different hardware vendors have their own volume managers, third-party ones also exist, and different operating systems may take a slightly different approach, so the steps listed here are intentionally generic (a scripted sketch follows the list). The DBA and the system administrator must:

Create or initialize the physical volumes. Define the disk as a physical volume and write certain configuration information to it.

Create the physical volume groups. This is the process where the physical volumes are collected together to create the physical volume group. A physical volume can only belong to one volume group, but a volume group may contain many physical volumes. During this step the physical volume is partitioned into units of space called physical extents: These are the smallest units of storage used on the disk, and an extent is a contiguous section of the disk.

Create logical volumes in the volume groups. This is the step when striping and mirroring are implemented.

Create the file systems on the logical volumes. The file system is the structure on the disk necessary for storing and accessing the data. Without the file system it is not possible to create directories and files. For example, the file systems can be FAT32 or NTFS, which will be familiar to Windows users, or Linux Ext2, Ext3, and swap, which will be familiar to Linux users.

Mount the file systems. Make the file systems known and usable to the operating system.

Then, finally, use the file systems for the database files (i.e., create the database tablespaces using files on these file systems).
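As a concrete illustration of the generic steps above, here is a hedged sketch that drives the Linux LVM command-line tools from Python. The device names, volume names, mount point, and sizes are all hypothetical, and the commands assume root privileges on disks whose contents you intend to erase; treat it as an outline of the sequence, not a production script:

```python
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))          # echo each command before running it
    subprocess.run(cmd, check=True)    # stop immediately if a step fails

disks = ["/dev/sdb", "/dev/sdc"]       # hypothetical raw disks

run(["pvcreate", *disks])                        # 1. initialize physical volumes
run(["vgcreate", "oradata_vg", *disks])          # 2. collect them into a volume group
run(["lvcreate", "-L", "100G",                   # 3. create a logical volume striped
     "-i", "2", "-I", "1024",                    #    across 2 disks, 1M stripe size
     "-n", "oradata_lv", "oradata_vg"])
run(["mkfs.ext4", "/dev/oradata_vg/oradata_lv"]) # 4. create the file system
run(["mount", "/dev/oradata_vg/oradata_lv",      # 5. mount it so the database
     "/u01/oradata"])                            #    files can be created on it
```

Mirroring instead of striping would use lvcreate's -m option at step 3; either way, the final step is to create the database tablespaces on files under the mount point.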


Physical Data Warehouse Design

Daniel Linstedt, Michael Olschimke, in Building a Scalable Data Warehouse with Data Vault 2.0, 2016

8.6.2.1 Hardware Considerations for Data Vault Layer

Because the Data Vault is the permanent storage for the enterprise data, the data needs to be secured against hardware faults. The first set of options is the RAID setup used for the physical drive. The best economical options should be either RAID-1 for disk mirroring or RAID-5 for adding parity bits to the data written to disk. The second option is often favored because it presents a compromise between available disk space and fault tolerance. Keep in mind that the Data Vault layer stores the history of all enterprise data that is in the data warehouse. Therefore, the size of the database could become very large.

While RAID-1 and RAID-5 are not the fastest available RAID options, they are worth favoring for the Data Vault layer due to the added fault tolerance of the physical disks. If one disk fails, it might affect the availability of the data warehouse (because the RAID array might need to be rebuilt), but the data is not necessarily at risk. However, if more than one disk fails at the same time, this is no longer the case; in such a scenario, for example a large-scale disaster that affects the whole server infrastructure, the collected enterprise data is still at risk.

To cope with such a fatal scenario, it is possible to replicate the data warehouse between multiple data centers. In this case, the data centers are usually geographically distributed and connected via a network (either a Virtual Private Network (VPN) over the Internet or a private network using satellite connections, etc.). In order to make this scenario work, the network connection has to be fast enough to replicate all incoming data. This scenario might be required if multiple geographically distributed enterprise data warehouses (EDWs) must be made available around the world and a central EDW is not sufficient. Note, however, that the requirements for replicated data warehouses are high.
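A quick feasibility check for that replication requirement: the link between data centers must sustain at least the average ingest rate of the warehouse. The sketch below and its 500 GB/day figure are illustrative assumptions, not recommendations:

```python
# Minimum sustained link speed to replicate daily_ingest_gb of new data
# within replication_window_h hours.
def required_mbps(daily_ingest_gb: float, replication_window_h: float = 24.0) -> float:
    bits = daily_ingest_gb * 1024**3 * 8
    return bits / (replication_window_h * 3600) / 1e6

print(f"{required_mbps(500):.0f} Mbit/s")  # 500 GB/day needs ~50 Mbit/s sustained
```

Peaks, retransmissions, and protocol overhead push the real requirement well above this average, which is why the chapter stresses that the requirements for replicated data warehouses are high.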


Risk Mitigation Strategy Development

Susan Snedaker, Chris Rima, in Business Continuity and Disaster Recovery Planning for IT Professionals (Second Edition), 2014

Developing your risk mitigation strategy

The steps in developing your risk mitigation strategy are as follows:

1. Gather your recovery data.

2. Compare cost, capability, and service levels of options in each category.

3. Determine if the options remaining are risk acceptance, avoidance, limitation, or transference and which, if any, are more desirable.

4. Select the option or options that best meet your company’s needs.

Now that you’ve gathered these data on various recovery options, you can review them in relation to cost, capabilities, and service levels.

These data can now be compiled into a document in whatever format is suitable for your needs. Some people like to use a grid or matrix, and others prefer an outline format. The key is to create a highly usable document that delineates the choices you’ve made. Let’s look at two examples. In our first sample, we look at a small segment of data that might be included with regard to backups. This uses a grid or matrix style, and it should give you an idea about what data to include and how you might approach it. In our second sample, we’ll use text without a grid, so you can compare which method might work better for you.

Sample 1: Section from Mitigation Strategy for Critical Data

Category | Option | Cost, Capability, SLAs | Risk Mitigation
Data backup—frequency | Continuous | Expensive, zero downtime, exceeds MTD | Potential solution, depending on cost to implement
 | Daily | Moderate, up to 8 hours of potential lost data, 3 hours to restore, meets MTD | Implement daily backup process to reduce likelihood of significant data loss and to reduce recovery time to meet MTD
 | Weekly | Moderate, up to 5 days of potential lost data, 12 hours to restore, may meet MTD |
 | Monthly | Low, does not meet MTD |
Data backup—type | Full | Longest backup time, shortest recovery time, meets MTD |
 | Incremental | Medium backup time, longest recovery time, exceeds MTD |
 | Differential | Medium backup time, medium recovery time, meets MTD | Differential backup meets MTDs at the lowest cost
Data backup—method | Tape backups | Longest recovery time, least expensive, may not meet MTD |
 | Electronic vaulting | Long recovery time, somewhat expensive, may not meet MTD |
 | Data replication | Medium recovery time, medium expense, may meet MTD |
 | Disk shadowing | Fast recovery time, medium expense, may meet MTD | Based on cost constraints, this option may meet MTD. This and disk mirroring will be explored in terms of cost, time, and feasibility
 | Disk mirroring | Fast recovery time, medium expense, may meet MTD | Based on cost constraints, this option may meet MTD. This and disk shadowing will be explored in terms of cost, time, and feasibility
 | Storage virtualization | Fast recovery time, high expense, removes localized failure risk, meets MTD |
 | Storage area network | Fast recovery time, higher expense, removes single point of failure, may remove localized failure risk, meets MTD |
 | Wide area high availability clustering | Fast recovery time, higher expense, removes single point of failure, may remove localized failure risk, meets MTD |
 | Remote mirroring | Continuous availability, zero recovery time, highest expense, removes single point of failure and localized failure risk, exceeds MTD |

Sample 2: Section from Mitigation Strategy for Critical Data

Critical Data Recovery Options (selected choice is underlined)

1. Data backup frequency

A. Continuous—expensive, zero downtime, exceeds MTD. Not suitable due to cost.

B. Daily—moderate, up to 8 hours potential lost data. 3-hour recovery time. Meets MTD. Best choice based on cost and time factors.

C. Weekly—moderate, up to 5 days lost data, 12 hours to restore, may meet MTD. Although cost is acceptable, the recovery time for this option just barely meets MTD and does not provide any leeway. Therefore, this option is not as suitable as daily.

D. Monthly—low cost, does not meet MTD. Not suitable due to time.

2. Data backup type

A. Full—uses the fewest tapes, takes the most time to back up, least time to recover, exceeds MTD. Not suitable due to time to back up.

B. Incremental—uses moderate number of tapes, takes less time to back up than full, moderate time to recover. Just barely meets MTD. Not suitable due to time to recover.

C. Differential—uses moderate number of tapes, takes less time to back up than full, takes less time to recover than incremental. Meets MTD. Suitable due to time and cost.

3. Data backup method

A. Tape backup—longest recovery time, least expensive, does not meet MTD.

B. Electronic vaulting—longer recovery time, somewhat expensive, may not meet MTD.

C. Data replication—medium recovery time, medium expense, may meet MTD.

D. Disk shadowing—fast recovery time, medium expense, may meet MTD.

E. Disk mirroring—fast recovery time, medium expense, may meet MTD.

F. Storage virtualization—fast recovery time, high expense, removes localized failure risk, meets MTD.

G. Storage area network—fast recovery time, higher expense, removes single point of failure, may remove localized failure risk, meets MTD.

H. Wide area high availability clustering—fast recovery time, higher expense, removes single point of failure, may remove localized failure risk, meets MTD.

I. Remote mirroring—continuous availability, zero recovery time, highest expense, removes single point of failure and localized failure risk, exceeds MTD.

As you can see from both examples, you may need to do additional research before deciding on the right backup method for critical data. It’s clear that a weekly backup scheme might work, but the problems inherent in a local backup process might not be acceptable. You can also see from the data that while a weekly differential backup strategy might be acceptable, disk mirroring is also an option. In some cases, these two backup objectives might be at odds, might be redundant, or might not make sense for your organization. Once you’ve looked at these data, you can determine the best risk mitigation strategy for that business function and, ultimately, for your entire business.

Your final strategy might be to set up disk mirroring and perform weekly backups of data that are sent to a remote data storage vault. This reduces your recovery time if something happens to a disk (mirroring) and also protects you if a fire in the building destroys all disks. You should include a section in your Critical Data Recovery Options called “Selected Strategy” and delineate the exact strategy you select. When you move into writing your business continuity and disaster recovery plan, you’ll have the information you need to begin implementing these strategies. Avoid having to review this material at length a second time by including enough information that the rationale behind the selected strategy is clear.

Remember, too, that when you’re selecting your strategy, you should consider risk controls already in place and attempt to build on, rather than replace or circumvent, those solutions. There may be some cases where you want to completely revamp your approach and this is the place to make those decisions. In other cases, you may simply confirm that you’re covered in these areas. For example, you may already have disk mirroring and remote data backups in place. If so, you should have looked at your MTD, cost, and capability requirements and determined that these solutions are acceptable. Make a note of that finding. Later, when you’re looking at your BC/DR plan, you don’t want to have to go back through all these steps to determine if you used due diligence in making this decision. If something goes wrong down the line, you will also have documentation to show that you used a logical and accepted methodology for making these decisions. These data may also be used as the start of an after action review if things go wrong, so it’s helpful to have it fully documented so your root cause analysis begins with documented facts.

For each critical business process, you need to identify an associated risk mitigation strategy. Some strategies will cover more than one critical business process, so you should not end up with as many strategies as you have critical business functions and processes. For example, your data management strategies will cover many of your critical business processes. By assessing these data with the big picture in mind, you can find areas where risk mitigation choices can cover more than one critical area. If you were to look at these strategies area by area only, you might miss opportunities to generate some economies of scale, which come from being able to apply one solution to many problems. These solutions become less expensive when they have more than one use. As mentioned earlier, any time you can use a solution across multiple business functions, you have a stronger business case for the expenditure. If you implement a remote data storage solution that meets data availability requirements for normal day-to-day business and it also meets your business continuity and disaster recovery needs, you’re going to find more support for the cost and implementation of such a solution. At the very least, you’ll be able to make a stronger business case for the investment.


Antiforensic Techniques

Nihad Ahmad Hassan, Rami Hijazi, in Data Hiding Techniques in Windows OS, 2017

Warning When Using Data Destruction Tools

From the previous section, we can note that software antirecovery tools can offer a high level of security for personal users and companies; however, if you are dealing with top secret data, physical destruction is still the safest solution. To achieve the maximum security possible, it is important to be aware of the following issues when using wiping software tools:

If you are deleting data that is scattered across many disk drives (RAID or mirrored disks), make sure to wipe all other locations to avoid leaving copies in other places.

Some files (like MS Office® documents) can have a previous version stored somewhere other than the main file location that has been erased. To counter this issue, delete all previous restore points and file history. More details follow.

Make sure to delete all instances of a file; for example, a user may destroy a specific file and forget other copies in other locations.

Hibernation and virtual memory (page file) files can contain IM chats, browser history, encryption passwords, decrypted documents, and other important data.

The contents of RAM can be captured from a running PC, allowing recovery of important files and even passwords.

Windows® by default stores thumbnails of graphics files and certain document and movie files in the thumbnail cache file. Although thumbnail images are small, they still give sufficient information about the prior existence of the deleted file(s).

Volume shadow copies and file history may contain copies of your deleted files.

Windows event logs and the registry can contain information about your PC usage; such logs can give important evidence of your previous use of Windows®. Registry antiforensics will be discussed thoroughly later.

As noted, data can be scattered across different locations on a disk drive; deleting a file and using a shredding tool on it may not be enough on its own to securely destroy important data.


Performance-Tuning Tools and Techniques

In Designing SQL Server 2000 Databases, 2001

Disks

No one will discount the importance of RAM, but it is very important to note that the most common hardware bottleneck for database systems is related to disk input and output (I/O). That might seem somewhat intuitive, because after all, a database is about its data. However, even simple data transactions involve many more files than those that contain the source data, and all of them are stored on and retrieved from the disk system. Among the most important files you should be aware of when trying to optimize your database server are these:

Table files

Nonclustered index files

Transaction log files

Full-text indexes

TempDB database files

Windows operating system files

Windows paging files

One of the most fundamental things you need to choose when you first start optimizing your database server is the type of disk system to use. Most often, the only appropriate choice is a redundant array of independent disks (RAID). Defined in 1987, RAID not only provides better performance but also offers fault tolerance. When choosing a RAID solution, you need to balance your needs for data redundancy, speed, and cost.

RAID0

RAID0 uses striping without parity, providing the fastest possible read and write times. Striping data spreads it out across multiple disks so that the system can write and read from the multiple disks simultaneously, providing greater throughput. Since the data is striped across multiple disks, the failure of any single disk will result in the loss of data. A RAID0 system comprising five 8GB disks would provide 40GB of storage.

Tip

A hardware-based RAID system is much faster than a software-based RAID system. A software-based system must use system resources to manage the RAID operations. These resources are not available for the operating system or applications, including SQL Server, so server performance is not optimal. Many hardware-based RAID controllers also provide large disk cache resources, increasing the performance of disk read and write operations.

RAID1

RAID1 provides redundancy by implementing disk mirroring. A RAID1 system is a system of two disks, one containing a copy of the other; if one fails, the other takes over and no data is lost. A RAID1 system consisting of two 8GB disks would provide 8GB of storage. RAID1 does not use data striping, and as a consequence it is not faster than a single-disk system.

RAID5

RAID5 stripes the data as RAID0 does, but RAID5 provides redundancy through striping extra parity information. These extra data are used to recover from the failure of any one disk. Because of this additional information, RAID5 does not write information as fast as RAID0 or RAID0+1, but it does read information nearly as fast. A RAID5 system requires an extra disk, and with it, parity information is striped across all disks. A RAID5 system with five 8GB disks can store only 32GB of data: (5 - 1) * 8GB = 32GB. Each disk would have 1.6GB of parity information and 6.4GB of data. A system with six 8GB disks could store only 40GB of data: (6 - 1) * 8GB = 40GB.
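The "extra parity information" is, at its core, an XOR across the data blocks in a stripe: losing any one block leaves enough information to rebuild it. A toy sketch, using equal-sized in-memory blocks to stand in for disks:

```python
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks together; this is RAID5-style parity."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]  # four data blocks in one stripe
parity = xor_blocks(data)                    # what the parity disk would hold

lost = 2                                     # pretend the third data disk failed
survivors = [b for i, b in enumerate(data) if i != lost] + [parity]
assert xor_blocks(survivors) == data[lost]   # XOR of the rest rebuilds the block
```

This also shows why RAID5 writes are slower: updating one data block means recomputing and rewriting the parity block as well.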

RAID0+1

RAID0+1 systems use striping without parity, providing the fastest possible read and write times, and add redundancy by implementing disk mirroring. RAID0+1 is a RAID0 disk-striping system that has had RAID1 disk mirroring applied for redundancy. Whereas RAID5 increases cost a bit, RAID0+1 increases cost a great deal: In order to provide 40GB of storage with 8GB hard drives, you would need 10 hard drives—five for the striping and five more to mirror the disks that have been striped!

Placing Files

As you can see in Table 14.2, you can use RAID0 disk arrays for files that do not require fault tolerance—in other words, for information that can be rebuilt—but that require optimal performance, such as the Windows paging file, TempDB database, and full-text indexes. It might be nice to have these files on a system that provides fault tolerance, but it is not essential. The Windows paging file and TempDB will be rebuilt each time the computer is rebooted. All the data that a full-text index represents are stored in the database tables, and the index can be rebuilt at any time. If you want to preserve the speed but absolutely require the fault tolerance, you can place those files on RAID0+1 disk arrays, but your cost will go up significantly due to the disk drive requirements to support mirroring.

Table 14.2. Common RAID Levels

Level | Description | Fault Tolerance | Read Performance | Write Performance | Cost
No RAID | Single disk | No | Normal | Normal | Inexpensive
RAID0 | Striping without parity | No | Fast | Fast | Expensive
RAID1 | Mirroring | Yes | Normal | Normal | Moderate
RAID5 | Striping with parity | Yes | Fast | Slower | Expensive
RAID0+1 | Striping with mirroring | Yes | Fast | Fast | Most expensive

If you do require fault tolerance but can sacrifice some performance, or if you cannot afford the cost, you can use RAID5 disks. Typically, the files that contain the data in tables and transaction logs are stored on RAID5 disk arrays. They are common, affordable, fast at writing information (though not as fast as RAID0 or RAID0+1), very fast at reading information, and, most important, fault tolerant. Of course, if you can spare the expense, you can also achieve fault tolerance with greater speed using the more expensive RAID0+1 systems.

In addition to the disk arrays themselves, the number and use of I/O controllers and data channels also contribute to the success of data access. As with hardware versus software RAID systems, a good I/O controller should have its own processing ability so that it does not burden the general-purpose CPUs that are needed to handle SQL Server operations. Generally speaking, SCSI controllers are good enough, but a Fibre Channel controller will give you the best performance at a higher price. Fibre Channel controllers can sustain a throughput of 100MBps (there are plans to increase the throughput to 200MBps and 400MBps), whereas SCSI can sustain a throughput of only approximately 40MBps. Each controller has a number of channels that it can use to “talk” to its disks. Obviously, the best scenario is to use one channel per disk array. Likewise, the lower the ratio of disk arrays to I/O controllers, the better your overall performance.

Each of these files is important to the success of your system, and ideally you would place each on its own RAID disk array, but often that is not possible. Commonly, we must choose how to distribute these files among a fixed set of disks. To do that effectively, we must know something about our system. If you can, you should place tables that carry heavy traffic on different controllers than their indexes. This allows a degree of parallelism for both reads and writes. If you insert a row into or delete a row from a table, each non-full-text index must be updated as part of the transaction. If the indexes are on the same disk system as the table, these updates are performed serially; if they are on different disk systems, the indexes can be updated in parallel with the table. There are other factors to consider here, too. Most updates and deletes are logged operations, and that information must be written to the transaction log. If the transaction log is on a different RAID controller, this operation can also be done in parallel.

Likewise, queries that make heavy use of TempDB or the paging file will benefit if those files are placed on different disk arrays and controllers than the ones that contain their source data. If possible, you should place the operating system on its own fault-tolerant disks; they do not have to be particularly fast.

What is a storage system that links any number of disk drives?

A disk array is a storage system that links any number of disk drives so that they act as a single disk.

