Sri Sathya Sai College for Women, Bhopal
(Under Autonomous Scheme of U.G.C)
NAAC Re-accredited Autonomous College under the UGC Scheme with 'A' Grade
Ms. Girjesh Chouhan

Analysis of Ext3, Ext4 and XFS over networks in RAID Environment using Linux Operating System.

Minor Research Project
Funded by U.G.C.
F.No.: MS-5/102054/XII/14-15/CRO
2015 - 2017

 

Executive Summary

Redundant Array of Independent Disks (RAID)

The basic idea behind RAID is to combine multiple small, inexpensive disk drives into an array to accomplish performance or redundancy goals not attainable with one large and expensive drive. This array of drives appears to the computer as a single logical storage unit or drive.
RAID allows information to be spread across several disks. RAID uses techniques such as disk striping (RAID Level 0), disk mirroring (RAID Level 1), and disk striping with parity (RAID Level 5) to achieve redundancy, lower latency, increased bandwidth, and maximized ability to recover from hard disk crashes.
RAID distributes data evenly across the drives in the array. To do this, the data is first broken down into consistently sized chunks (commonly 32 KB or 64 KB, although other values are acceptable). Each chunk is then written to a hard drive in the RAID array according to the RAID level employed. When the data is read, the process is reversed, giving the illusion that the multiple drives in the array are actually one large drive.
System Administrators and others who manage large amounts of data would benefit from using RAID technology. Primary reasons to deploy RAID include:

  • Enhances speed
  • Increases storage capacity using a single virtual disk
  • Minimizes data loss from disk failure

Hardware RAID

The hardware-based array manages the RAID subsystem independently from the host. It presents a single disk per RAID array to the host.
A Hardware RAID device connects to the SCSI controller and presents the RAID arrays as a single SCSI drive. An external RAID system moves all RAID handling "intelligence" into a controller located in the external disk subsystem. The whole subsystem is connected to the host via a normal SCSI controller and appears to the host as a single disk.
RAID controller cards function like a SCSI controller to the operating system, and handle all the actual drive communications. The user plugs the drives into the RAID controller (just like a normal SCSI controller) and then adds them to the RAID controller's configuration; the operating system won't know the difference.


Software RAID

Software RAID implements the various RAID levels in the kernel disk (block device) code. It offers the cheapest possible solution, as expensive disk controller cards or hot-swap chassis are not required. Software RAID also works with cheaper IDE disks as well as SCSI disks. With today's faster CPUs, Software RAID can outperform Hardware RAID.
The Linux kernel contains an MD (multiple device) driver that allows the RAID solution to be completely hardware independent. The performance of a software-based array depends on the server's CPU performance and load.
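On a running system, the state of the MD arrays can be inspected at any time. A minimal sketch (the array name /dev/md0 is a hypothetical example):

    # List all active MD arrays, their levels and member disks
    cat /proc/mdstat

    # Show detailed status of one array: level, size, state, members
    mdadm --detail /dev/md0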

RAID Levels and Linear Support

RAID supports various configurations, including levels 0, 1, 4, 5, and linear. These RAID types are defined as follows:

  • Level 0 — RAID level 0, often called "striping," is a performance-oriented striped data mapping technique. This means the data being written to the array is broken down into strips and written across the member disks of the array, allowing high I/O performance at low inherent cost, but it provides no redundancy. The storage capacity of a level 0 array is equal to the total capacity of the member disks in a Hardware RAID or the total capacity of member partitions in a Software RAID.
  • Level 1 — RAID level 1, or "mirroring," has been used longer than any other form of RAID. Level 1 provides redundancy by writing identical data to each member disk of the array, leaving a "mirrored" copy on each disk. Mirroring remains popular due to its simplicity and high level of data availability. Level 1 operates with two or more disks that may use parallel access for high data-transfer rates when reading but more commonly operate independently to provide high I/O transaction rates. Level 1 provides very good data reliability and improves performance for read-intensive applications but at a relatively high cost. The storage capacity of the level 1 array is equal to the capacity of one of the mirrored hard disks in a Hardware RAID or one of the mirrored partitions in a Software RAID.
  • Level 4 — Level 4 uses parity concentrated on a single disk drive to protect data. It is better suited to transaction I/O rather than large file transfers. Because the dedicated parity disk represents an inherent bottleneck, level 4 is seldom used without accompanying technologies such as write-back caching. Although RAID level 4 is an option in some RAID partitioning schemes, it is not an option allowed in Red Hat Enterprise Linux RAID installations. The storage capacity of Hardware RAID level 4 is equal to the capacity of member disks, minus the capacity of one member disk. The storage capacity of Software RAID level 4 is equal to the capacity of the member partitions, minus the size of one of the partitions if they are of equal size.
  • Level 5 — This is the most common type of RAID. By distributing parity across some or all of an array's member disk drives, RAID level 5 eliminates the write bottleneck inherent in level 4. The only performance bottleneck is the parity calculation process. With modern CPUs and Software RAID, that usually is not a very big problem. As with level 4, the result is asymmetrical performance, with reads substantially outperforming writes. Level 5 is often used with write-back caching to reduce the asymmetry. The storage capacity of Hardware RAID level 5 is equal to the capacity of member disks, minus the capacity of one member disk. The storage capacity of Software RAID level 5 is equal to the capacity of the member partitions, minus the size of one of the partitions if they are of equal size.
  • Linear RAID — Linear RAID is a simple grouping of drives to create a larger virtual drive. In linear RAID, the chunks are allocated sequentially from one member drive, going to the next drive only when the first is completely filled. This grouping provides no performance benefit, as it is unlikely that any I/O operations will be split between member drives. Linear RAID also offers no redundancy and, in fact, decreases reliability — if any one member drive fails, the entire array cannot be used. The capacity is the total of all member disks.
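A software array at any of these levels can be created with mdadm. A minimal sketch, assuming spare partitions are available (all device names are hypothetical):

    # RAID level 5 across three partitions:
    # usable capacity = (3 - 1) x size of one member
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

    # RAID level 1 mirror across two partitions:
    # usable capacity = size of one member
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1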

Following is a screenshot of the two RAID arrays created at installation time, at RAID Level 5 and RAID Level 1, using three 7200 RPM hard disks:
[Screenshot: Screenshot from 2017-03-02 12-20-46.png]


Below is a screenshot of the three additional RAID arrays created to hold the ext3, ext4 and XFS file systems, alongside the two RAID arrays created at installation time:
[Screenshot: Screenshot from 2017-03-04 13-40-39.png]
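Each of the three additional arrays was then formatted with one of the file systems under test. A sketch of the corresponding commands (the array device names are assumptions):

    mkfs.ext3 /dev/md2    # array for the ext3 tests
    mkfs.ext4 /dev/md3    # array for the ext4 tests
    mkfs.xfs  /dev/md4    # array for the xfs tests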


Read/Write performance analysis for ext3, ext4 and xfs file systems in a system configured with RAID

The write speed of the file systems, measured using oflag=dsync in the dd command, can be analyzed from the following chart:
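The oflag=dsync option forces dd to use synchronized I/O for each output block, so the measured speed reflects the disks rather than the page cache. A sketch of the form of command used (path, block size and count are assumptions):

    # Write 1 GiB to the file system under test, synchronizing every block
    dd if=/dev/zero of=/mnt/ext4/testfile bs=1M count=1024 oflag=dsync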

Here is the analysis of the read speed of the file systems:
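Read speed can be measured with dd in the reverse direction, after dropping the page cache so the data is really read from disk (a sketch; the path is an assumption):

    # Flush dirty pages and drop the page cache
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # Read the test file back, discarding the data
    dd if=/mnt/ext4/testfile of=/dev/null bs=1M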

Read/Write performance analysis for ext3, ext4 and xfs file systems over the network

Below is a screenshot of the server machine's partitions mounted on the client:
[Screenshot: Screenshot from 2017-03-03 17-09-29.png]
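Assuming the server's partitions were shared over NFS (the report does not name the protocol, so this is an assumption), the export and mount steps would look like this sketch (paths and addresses are hypothetical):

    # On the server, in /etc/exports:
    #   /mnt/ext3  192.168.1.0/24(rw,sync)
    #   /mnt/ext4  192.168.1.0/24(rw,sync)
    #   /mnt/xfs   192.168.1.0/24(rw,sync)
    exportfs -ra    # re-export all file systems listed in /etc/exports

    # On the client, mount one of the exported file systems
    mount -t nfs 192.168.1.10:/mnt/ext4 /mnt/nfs-ext4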


Here is the analysis of ext3, ext4 and xfs for read speed:

Here is the analysis of ext3, ext4 and xfs for write speed:


Here is the summary of the speed in MB/s to write on the ext3, ext4 and xfs file systems (using a network switch):


Here is the summary of the speed in MB/s to write on the ext3, ext4 and xfs file systems using oflag=dsync:


Scalability check using LVM for ext3, ext4 and xfs

This is the overall scheme of logical volumes that I maintained in my analysis:

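The logical volumes in such a scheme are built with the standard LVM tools. A minimal sketch for one of the three file systems (the physical volume and volume group names are assumptions; lv-xfs is the only volume name confirmed by the screenshots):

    pvcreate /dev/md4                   # initialize a physical volume
    vgcreate vg_mrp /dev/md4            # create a volume group on it
    lvcreate -L 5G -n lv-xfs vg_mrp     # carve out a 5 GiB logical volume
    mkfs.xfs /dev/vg_mrp/lv-xfs         # format it with xfs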

The screen below shows all logical volumes mounted:
[Screenshot: Screenshot from 2017-03-03 12-41-19.png]
Following is a screenshot of the lv-xfs logical volume, formatted with the xfs file system, for which the resize failed:
[Screenshot: Screenshot from 2017-03-16 12-45-09.png]
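XFS file systems can be grown online but cannot be shrunk; a shrink attempt is the most common reason a resize fails on an XFS volume, whereas ext3 and ext4 can be shrunk offline with resize2fs. A sketch of this asymmetry (volume and mount point names are assumptions):

    # Growing works: extend the LV, then grow XFS via its mount point
    lvextend -L +2G /dev/vg_mrp/lv-xfs
    xfs_growfs /mnt/lv-xfs

    # Shrinking does not: XFS has no shrink tool, so reducing the LV
    # underneath the file system would destroy it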


Conclusion (contribution to society)

Choosing the Red Hat Enterprise Linux filesystem that is appropriate for your application is often a non-trivial decision, due to the large number of options available and the trade-offs involved. The new ext4 filesystem brings many new features and enhancements over ext3, making it a good choice for a variety of workloads. A tremendous amount of work has gone into bringing ext4 to Linux, with a busy roadmap ahead to finalize ext4 for production use. What was once essentially a simple filesystem has become an enterprise-ready solution, with a good balance of scalability, reliability, performance and stability. That is why the ext3 user community has the option to upgrade their filesystem and take advantage of the newest generation of the ext family.

In this project, I have summarized the read, write, network and scalability results of three file systems: ext3, ext4 and xfs. The obtained results confirmed the majority of our expectations. The new, modern ext4 file system showed superior characteristics when compared to its predecessor, ext3; its performance was superior in all the test procedures. ext4 introduces a number of innovative techniques for file allocation (extents, persistent pre-allocation, delayed allocation and a multiblock allocator), enhanced HTree indexing for larger directories (which is always on), improved journaling techniques and a better buffer cache mechanism. In this way the ext4 file system showed superior performance in comparison to its predecessor under demanding conditions, such as the read/write operations I mostly used in my experiments, i.e. with RAID, networks and LVM. It can be concluded that ext4 performs better than any of its predecessors, which means that the journaling techniques combined with the cache mechanism not only did not slow down the system, but improved its performance.
The obtained results encourage all Linux users to adopt the ext4 file system. If both your server and your storage device are large, XFS is likely to be the best choice. Even with smaller storage arrays, XFS performs very well when the average file sizes are large, for example hundreds of megabytes.
If your existing workload has performed well with Ext4, staying with Ext4 on Red Hat Enterprise Linux 6 or migrating to XFS on Red Hat Enterprise Linux 7 should provide you and your applications with a very familiar environment. Two key advantages of Ext4 over Ext3 on the same storage include faster filesystem check and repair times and higher streaming read and write performance on high-speed devices.
Another way to characterize this is that the Ext3 and Ext4 filesystem variants tend to perform better on systems that have limited I/O capability: limited bandwidth (< 200 MB/s) and up to roughly 1,000 IOPS.
For anything with higher capability, XFS tends to be faster. XFS is a file system that was designed from day one for computer systems with large numbers of CPUs and large disk arrays. It focuses on supporting large files and good streaming I/O performance. It also has some interesting administrative features not supported by other Linux file systems.
XFS also consumes about twice the CPU per metadata operation compared to Ext3 and Ext4, so if you have a CPU-bound workload with little concurrency, then the Ext3 or Ext4 variants will be faster. In general, Ext3 or Ext4 is better if an application uses a single read/write thread and small files, while XFS shines when an application uses multiple read/write threads and bigger files. We recommend that you measure the performance of your specific application on your target server and storage system to make sure you choose the appropriate type of filesystem. If your data is large, then nothing is better than XFS, as can easily be seen from the maximum limits it supports:

Limit                           ext3      ext4      XFS
max file system size            16 TiB    16 TiB    16 EiB
max file size                   2 TiB     16 TiB    8 EiB
max extent size                 4 KiB     128 MiB   8 GiB
max extended attribute size     4 KiB     4 KiB     64 KiB
max inode number                2^32      2^32      2^64