FreeBSD 7.0: ZFS and iSCSI
A friend of mine got me the coolest birthday present I think I’ve ever received: a bunch of new and super kick ass hardware. This will soon replace my current server, which is in such bad shape it can’t even compile Java code, or Perl from source. Before I replace it, I wanted to play around with the ZFS that comes with FreeBSD 7.0.
Here is a quick rundown of it all:
Intel Core 2 Duo E6750 (2.66GHz, 4MB cache)
Intel S975XBX2 workstation motherboard
AMCC 3Ware 9650SE 4 port SATA RAID controller (4x PCI-e)
- Battery backup unit for the 3Ware so I can enable cached writes
2GB ECC Crucial Memory Kit
750 Watt PC Power & Cooling power supply
ASUS EN6200 LE 16x PCI-e nVidia GFX card
Plextor DVD+RW PX-810SA SATA
4 Western Digital 1TB Drives
All of this was of the highest quality, and Chris said that since he got me into FreeBSD, he felt I should have a stable and rock solid system, since my current “server” has died 6 times. So after getting it all put together and powering it up… it wouldn’t POST. I swapped the CPU with an older Pentium 4D that I’ve had lying around until my HTPC comes back up, and that worked. It turns out the motherboard doesn’t support the 1333MHz bus speed of the E6750 Core 2 Duo. So I’ve done some testing with the P4 as the CPU, and most of the tests were I/O-bound, not CPU-bound.
First off was to test out ZFS. Since it is new to FreeBSD (new in general really) I followed a ZFS tuning guide for 7.0 and followed some pretty stock directions:
$ zpool create tank raidz da0 da1 da2 da3
Which automatically created and mounted a 2.7TB filesystem. This was a lot nicer than fooling around with partitioning and filesystem tools. I also like ZFS’s feature set compared to standard RAID, like self-healing and data checksums. Performance was a little slower than I expected, though. A simple ‘dd’ with a 1MB block size showed about 101MB/sec:
dd if=/dev/zero of=/tank/1gb.dat bs=1m count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 10.420047 secs (100630640 bytes/sec)
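As a side note, the gap between dd’s bytes/sec figure and the “about 101MB/sec” reading is just decimal vs binary megabytes; the arithmetic:

```shell
# dd reported 100630640 bytes/sec; divide by 2^20 for binary MB/s.
# (Dividing by 1,000,000 instead gives the ~101MB/sec decimal figure.)
echo $(( 100630640 / 1048576 ))   # → 95
```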
Not bad, and since my GigE network can’t saturate that kind of I/O, I’m pretty satisfied with those results. I use this system as a network file server, along with other network services like NAT, www, and mail, so my biggest concern was getting more than 80MB/sec (close to gigabit ethernet’s practical limit).
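One note on those self-healing checksums before moving on: ZFS only repairs blocks when they’re actually read, so it’s worth scheduling a periodic scrub. A crontab sketch (the pool name is from the zpool create above; the schedule is just an example):

```
# root crontab fragment: scrub the pool Sundays at 3am
0 3 * * 0  zpool scrub tank
```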
BUUUT I am underutilizing my fancy 3Ware RAID controller, so I can’t just leave that alone. I blew away the ZFS volume and created a full RAID5 array with the controller. Everything was fine with the exception of fdisk, which can’t handle volumes over 2TB (an MBR limitation). I emailed 3Ware’s support wondering why my 2.78TB volume was only being partitioned at 722GB. They quickly responded with ‘use gpt’, which I did:
$ gpt create /dev/da0
$ gpt add -t ufs /dev/da0
/dev/da0p1 added
$ gpt show /dev/da0
      start        size  index  contents
          0           1         PMBR
          1           1         Pri GPT header
          2          32         Pri GPT table
         34  5859311549      1  GPT part - FreeBSD UFS/UFS2
 5859311583          32         Sec GPT table
 5859311615           1         Sec GPT header
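To have the new partition mount at boot, it also wants an /etc/fstab entry. A sketch using the device from above and the /SafeKeg mountpoint (the rw/2/2 fields are the usual choices for a non-root UFS filesystem):

```
/dev/da0p1   /SafeKeg   ufs   rw   2   2
```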
I did a newfs -U -O2 and mounted the new /SafeKeg. Another dd test showed higher numbers, about 150MB/sec, and with the tw_cli tool installed I can manage the 3Ware card from FreeBSD itself. Very cool. I did play with iSCSI, exporting a 50GB file to my Windows desktop, but the performance was incredibly slow, like 1-2MB/sec. Terrible! I’m not sure if Free/NetBSD’s iscsi-target is at fault, or Windows’ iSCSI initiator, or if iSCSI just isn’t up to snuff yet. I thought it would be nice to utilize, but I’d like to experiment with it more to see exactly how much performance I can squeeze out of it.
After getting a few comments on my poor iSCSI performance, I patched iscsi-target to the latest version and have been MUCH happier with the results. On my little home network I’ve gotten around 40MB/sec for reads and writes.
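For anyone wanting to try the same setup, the NetBSD-derived iscsi-target reads its exports from a small targets file (its location depends on how the port installs it, so check the port’s docs). A minimal sketch, where the backing file path, the 50GB size, and the LAN netmask are my assumptions:

```
# <extent name>  <backing file>       <start>  <length>
extent0          /SafeKeg/iscsi0.dat  0        50GB
# <target name>  <flags>  <extent>    <allowed initiators>
target0          rw       extent0     192.168.1.0/24
```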