Measuring Disk I/O: IOPS, Latency, and Throughput
I will be measuring these on my Btrfs pool, mounted under /myraid.
IOPS with FIO
cd /myraid
yum install -y make gcc libaio-devel || ( apt-get update && apt-get install -y make gcc libaio-dev </dev/null )
wget https://github.com/Crowd9/Benchmark/raw/master/fio-2.0.9.tar.gz ; tar xf fio*
cd fio*
make
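Building from source pins the exact fio version; if you just want a quick number, fio is also packaged on most distributions (EPEL on CentOS/RHEL, the main repos on Debian/Ubuntu), so this shortcut should work too:
yum install -y fio || apt-get install -y fio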
Random read/write performance
./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m] [100.0% done] [2716K/972K /s] [679 /243 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=19383: Wed Aug 10 14:20:54 2016
read : io=3072.4MB, bw=4549.6KB/s, iops=1137 , runt=691524msec
write: io=1023.7MB, bw=1515.9KB/s, iops=378 , runt=691524msec
cpu : usr=1.45%, sys=11.65%, ctx=822425, majf=0, minf=3
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=786524/w=262052/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
READ: io=3072.4MB, aggrb=4549KB/s, minb=4549KB/s, maxb=4549KB/s, mint=691524msec, maxt=691524msec
WRITE: io=1023.7MB, aggrb=1515KB/s, minb=1515KB/s, maxb=1515KB/s, mint=691524msec, maxt=691524msec
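If you want to script these runs, fio's --minimal flag prints one semicolon-separated line per job instead of the report above. Field positions depend on the terse-output version, so treat the 8/49 below (where read and write IOPS land on my build) as an assumption to verify against the fio HOWTO:
./fio --minimal --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75 | awk -F';' '{print "read iops: "$8", write iops: "$49}'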
Random read performance
./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randread
Random write performance
./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite
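The three invocations differ only in --readwrite, so a small loop keeps them consistent (the 4 GB test file is reused between runs, and --rwmixread is simply ignored by the pure read and write workloads). Remove the file once you are finished:
for mode in randrw randread randwrite; do
./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=$mode --rwmixread=75
done
rm -f test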
Latency with ioping
cd /myraid
yum install -y make gcc || ( apt-get update && apt-get install -y make gcc </dev/null )
wget https://ioping.googlecode.com/files/ioping-0.6.tar.gz ; tar xf ioping*
cd ioping*
make
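As with fio, you can skip the build: ioping is packaged on most distributions these days (its development has also moved from Google Code to GitHub), so the following may be all you need:
yum install -y ioping || apt-get install -y ioping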
./ioping -c 10 /myraid
4096 bytes from /myraid ( ): request=1 time=0.2 ms
4096 bytes from /myraid ( ): request=2 time=0.3 ms
4096 bytes from /myraid ( ): request=3 time=0.2 ms
4096 bytes from /myraid ( ): request=4 time=0.3 ms
4096 bytes from /myraid ( ): request=5 time=0.3 ms
4096 bytes from /myraid ( ): request=6 time=0.3 ms
4096 bytes from /myraid ( ): request=7 time=0.3 ms
4096 bytes from /myraid ( ): request=8 time=0.3 ms
4096 bytes from /myraid ( ): request=9 time=0.2 ms
4096 bytes from /myraid ( ): request=10 time=0.3 ms
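ioping can do more than per-request latency: current versions also offer a seek-rate test (-R) and a sequential variant (-L); check ./ioping -h before relying on them in 0.6:
./ioping -R /myraid
./ioping -RL /myraid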
Using dd
dd makes for a crude but handy disk benchmark because its oflag option controls how writes reach the disk:
direct (use direct I/O for data)
dsync (use synchronized I/O for data)
sync (likewise, but also for metadata)
See dd --help for more info.
Using dd for throughput
dd if=/dev/zero of=/myraid/testfile bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 2.03493 s, 528 MB/s
dd if=/dev/zero of=/myraid/testfile bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 2.30498 s, 466 MB/s
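A caveat before trusting these figures: /dev/zero is endlessly compressible, so on a Btrfs pool mounted with compression the throughput can come out flattered. A rough workaround is to stage incompressible data first (the file name and sizes here are just examples, and generating from /dev/urandom is slow, but it is one-off prep):
dd if=/dev/urandom of=/myraid/random.bin bs=1M count=1024
dd if=/myraid/random.bin of=/myraid/testfile bs=1G count=1 oflag=direct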
Using dd for latency
dd if=/dev/zero of=/myraid/testfile bs=512 count=1000 oflag=direct
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 0.260032 s, 2.0 MB/s
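That is 0.260032 s for 1000 requests, or roughly 0.26 ms per 512-byte direct write.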
dd if=/dev/zero of=/myraid/testfile bs=512 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 35.154 s, 14.6 kB/s
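That works out to about 35 ms per write: with dsync each of the 1000 writes must be committed to disk before the next one is issued, which is why it is so much slower than direct. Clean up the test file when you are done:
rm -f /myraid/testfile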