#
# Block device driver configuration
#

menuconfig MD
	bool "Multiple devices driver support (RAID and LVM)"
	depends on BLOCK
	help
	  Support multiple physical spindles through a single logical device.
	  Required for RAID and logical volume management.

if MD

config BLK_DEV_MD
	tristate "RAID support"
	help
	  This driver lets you combine several hard disk partitions into one
	  logical block device. This can be used to simply append one
	  partition to another one or to combine several redundant hard disks
	  into a RAID1/4/5 device so as to provide protection against hard
	  disk failures. This is called "Software RAID" since the combining of
	  the partitions is done by the kernel. "Hardware RAID" means that the
	  combining is done by a dedicated controller; if you have such a
	  controller, you do not need to say Y here.

	  More information about Software RAID on Linux is contained in the
	  Software RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also learn
	  where to get the supporting user space utilities raidtools.
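
# Usage sketch (not part of the original help text; /dev/md0 is illustrative):
# once an array is running, its state can be inspected from userspace with
#   cat /proc/mdstat
#   mdadm --detail /dev/md0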

config MD_AUTODETECT
	bool "Autodetect RAID arrays during kernel boot"
	depends on BLK_DEV_MD=y
	default y
	help
	  If you say Y here, then the kernel will try to autodetect RAID
	  arrays as part of its boot process.

	  If you don't use RAID and say Y, this autodetection can cause
	  a several-second delay in the boot time due to the various
	  synchronisation steps that autodetection performs.
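
# Note (a sketch, not from the original help text): in-kernel autodetection
# only scans partitions whose type is 0xfd ("Linux raid autodetect"), e.g.
#   fdisk /dev/sdb    # command 't', pick the partition, type 'fd', then 'w'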

config MD_LINEAR
	tristate "Linear (append) mode"
	depends on BLK_DEV_MD
	help
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called linear mode, i.e. it will combine the hard disk
	  partitions by simply appending one to the other.

	  To compile this as a module, choose M here: the module
	  will be called linear.
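
# Example sketch (device names are illustrative):
#   mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb1 /dev/sdc1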

config MD_RAID0
	tristate "RAID-0 (striping) mode"
	depends on BLK_DEV_MD
	help
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called raid0 mode, i.e. it will combine the hard disk
	  partitions into one logical device in such a fashion as to fill them
	  up evenly, one chunk here and one chunk there. This will increase
	  the throughput rate if the partitions reside on distinct disks.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  To compile this as a module, choose M here: the module
	  will be called raid0.
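
# Example sketch (64 KiB chunk size; device names are illustrative):
#   mdadm --create /dev/md0 --level=0 --chunk=64 --raid-devices=2 /dev/sdb1 /dev/sdc1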

config MD_RAID1
	tristate "RAID-1 (mirroring) mode"
	depends on BLK_DEV_MD
	help
	  A RAID-1 set consists of several disk drives which are exact copies
	  of each other. In the event of a mirror failure, the RAID driver
	  will continue to use the operational mirrors in the set, providing
	  an error-free MD (multiple device) to the higher levels of the
	  kernel. In a set with N drives, the available space is the capacity
	  of a single drive, and the set protects against a failure of (N - 1)
	  drives.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-1 set, say Y. To compile this code
	  as a module, choose M here: the module will be called raid1.
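
# Example sketch (two-way mirror; device names are illustrative):
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1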

config MD_RAID10
	tristate "RAID-10 (mirrored striping) mode (EXPERIMENTAL)"
	depends on BLK_DEV_MD && EXPERIMENTAL
	help
	  RAID-10 provides a combination of striping (RAID-0) and
	  mirroring (RAID-1) with easier configuration and more flexible
	  layout.

	  Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
	  be the same size (or at least, only as much as the smallest device
	  will be used).

	  RAID-10 provides a variety of layouts that provide different levels
	  of redundancy and performance.

	  RAID-10 requires mdadm-1.7.0 or later, available at:

	  ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/
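
# Example sketch (four disks, "near 2" layout, i.e. two copies of each block;
# device names are illustrative):
#   mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 /dev/sd[bcde]1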

config MD_RAID456
	tristate "RAID-4/RAID-5/RAID-6 mode"
	depends on BLK_DEV_MD
	select ASYNC_RAID6_RECOV
	help
	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-4/RAID-5/RAID-6 set, say Y. To
	  compile this code as a module, choose M here: the module
	  will be called raid456.
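
# Worked example (a sketch of the capacity formulas above): four drives of
# 500 GB each give 500 * (4 - 1) = 1500 GB usable in RAID-5 and
# 500 * (4 - 2) = 1000 GB usable in RAID-6; device names are illustrative:
#   mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1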

config MULTICORE_RAID456
	bool "RAID-4/RAID-5/RAID-6 Multicore processing (EXPERIMENTAL)"
	depends on MD_RAID456
	depends on EXPERIMENTAL
	help
	  Enable the raid456 module to dispatch per-stripe raid operations to a
	  thread pool.

config ASYNC_RAID6_TEST
	tristate "Self test for hardware accelerated raid6 recovery"
	depends on MD_RAID6_PQ
	select ASYNC_RAID6_RECOV
	help
	  This is a one-shot self test that permutes through the
	  recovery of all the possible two-disk failure scenarios for an
	  N-disk array. Recovery is performed with the asynchronous
	  raid6 recovery routines, and will optionally use an offload
	  engine if one is available.
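
# Usage sketch (the module name raid6test is an assumption based on the
# async_tx test source; the test runs once at load time):
#   modprobe raid6test && dmesg | tail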

config MD_MULTIPATH
	tristate "Multipath I/O support"
	depends on BLK_DEV_MD
	help
	  MD_MULTIPATH provides a simple multi-path personality for use
	  with the MD framework. It is not under active development. New
	  projects should consider using DM_MULTIPATH, which has more
	  features and more testing.

config MD_FAULTY
	tristate "Faulty test module for MD"
	depends on BLK_DEV_MD
	help
	  The "faulty" module allows for a block device that occasionally returns
	  read or write errors. It is useful for testing.
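
# Example sketch (mdadm's "faulty" level drives this personality; the exact
# fault modes are described in md(4); device name is illustrative):
#   mdadm --create /dev/md0 --level=faulty --raid-devices=1 /dev/sdb1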

config BLK_DEV_DM
	tristate "Device mapper support"
	help
	  Device-mapper is a low-level volume manager. It works by allowing
	  people to specify mappings for ranges of logical sectors. Various
	  mapping types are available; in addition, people may write their own
	  modules containing custom mappings if they wish.

	  Higher level volume managers such as LVM2 use this driver.

	  To compile this as a module, choose M here: the module will be
	  called dm-mod.
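
# Example sketch (dm table format: <start> <length> <target> <args>, sizes in
# 512-byte sectors; a linear mapping onto an illustrative /dev/sdb1):
#   echo "0 204800 linear /dev/sdb1 0" | dmsetup create mydev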

config DM_DEBUG
	boolean "Device mapper debugging support"
	depends on BLK_DEV_DM
	help
	  Enable this for messages that may help debug device-mapper problems.

config DM_CRYPT
	tristate "Crypt target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_CBC
	help
	  This device-mapper target allows you to create a device that
	  transparently encrypts the data on it. You'll need to activate
	  the ciphers you're going to use in the cryptoapi configuration.

	  Information on how to use dm-crypt can be found at

	  <http://www.saout.de/misc/dm-crypt/>

	  To compile this code as a module, choose M here: the module will
	  be called dm-crypt.
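
# Usage sketch (plain-mode mapping with the cryptsetup userspace tool;
# device and mapping names are illustrative):
#   cryptsetup create secrets /dev/sdb1    # creates /dev/mapper/secrets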

config DM_SNAPSHOT
	tristate "Snapshot target"
	depends on BLK_DEV_DM
	help
	  Allow volume managers to take writable snapshots of a device.
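
# Example sketch (an LVM2 snapshot; volume names are illustrative):
#   lvcreate --snapshot --size 1G --name lv0snap /dev/vg0/lv0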

config DM_MIRROR
	tristate "Mirror target"
	depends on BLK_DEV_DM
	help
	  Allow volume managers to mirror logical volumes, also
	  needed for live data migration tools such as 'pvmove'.
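
# Example sketch ('pvmove' migrates live data through a temporary mirror;
# device names are illustrative):
#   pvmove /dev/sdb1 /dev/sdc1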

config DM_LOG_USERSPACE
	tristate "Mirror userspace logging (EXPERIMENTAL)"
	depends on DM_MIRROR && EXPERIMENTAL && NET
	select CONNECTOR
	help
	  The userspace logging module provides a mechanism for
	  relaying the dm-dirty-log API to userspace. Log designs
	  which are more suited to userspace implementation (e.g.
	  shared storage logs) or experimental logs can be implemented
	  by leveraging this framework.

config DM_ZERO
	tristate "Zero target"
	depends on BLK_DEV_DM
	help
	  A target that discards writes and returns all zeroes for
	  reads. Useful in some recovery situations.
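
# Example sketch (dm-zero table: <start> <length> zero, in 512-byte sectors):
#   echo "0 2097152 zero" | dmsetup create zerodev    # a 1 GiB zero device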

config DM_MULTIPATH
	tristate "Multipath target"
	depends on BLK_DEV_DM
	# nasty syntax but means make DM_MULTIPATH independent
	# of SCSI_DH if the latter isn't defined but if
	# it is, DM_MULTIPATH must depend on it. We get a build
	# error if SCSI_DH=m and DM_MULTIPATH=y
	depends on SCSI_DH || !SCSI_DH
	help
	  Allow volume managers to support multipath hardware.
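
# Usage sketch (maps are managed by the userspace multipath-tools package):
#   multipath -ll    # list the current multipath topology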

config DM_MULTIPATH_QL
	tristate "I/O Path Selector based on the number of in-flight I/Os"
	depends on DM_MULTIPATH
	help
	  This path selector is a dynamic load balancer which selects
	  the path with the least number of in-flight I/Os.

config DM_MULTIPATH_ST
	tristate "I/O Path Selector based on the service time"
	depends on DM_MULTIPATH
	help
	  This path selector is a dynamic load balancer which selects
	  the path expected to complete the incoming I/O in the shortest
	  time.
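
# Configuration sketch (selector names as registered by these modules, used
# in multipath.conf from multipath-tools):
#   path_selector "queue-length 0"    # DM_MULTIPATH_QL
#   path_selector "service-time 0"    # DM_MULTIPATH_ST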

config DM_DELAY
	tristate "I/O delaying target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	help
	  A target that delays reads and/or writes and can send
	  them to different devices. Useful for testing.
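
# Example sketch (dm-delay table: <start> <length> delay <dev> <offset> <ms>;
# this delays all I/O to an illustrative /dev/sdb1 by 500 ms):
#   echo "0 204800 delay /dev/sdb1 0 500" | dmsetup create slowdev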

config DM_RAID45
	tristate "RAID 4/5 target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	help
	  A target that supports RAID4 and RAID5 mappings.

config DM_UEVENT
	bool "DM uevents (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	help
	  Generate udev events for DM events.

endif # MD