
Solaris SATA chipsets I can run ZFS with

After my last post, I set about finding out more info on SATA chipsets that I can use under Solaris. The main thing for me is that for some reason, I have drives fail fairly regularly. Maybe I am missing part of the necessary rituals to appease the gods of hardware, but the rotating rust that I have in my house really makes me paranoid about my data. Yes, I currently do offsite nightly backups over the internet, but that can’t keep EVERYTHING safe (I’m looking at you, Birdman).

Hence, ZFS. Now, I wanted to make a cheap, reliable, cheap data tank for my, err, data. I need a mobo that is supported by OpenSolaris, even down to the onboard SATA controllers (not so much for sound, however). Initially, all the info pointed to an nForce chipset, and that would have been fine. Unfortunately, at the moment I can’t seem to find a single cheap AM2 CPU in Brisbane, which I need because the older CPUs don’t work with the new funky chipsets. The cheapest mobos were all VIA chipset jobbies and unsupported by Solaris for SATA. Or so I thought. But being the good noodle that I am, I checked that before writing it off again – after all, I had recently found out that Sil 3114 chipsets were OK, so wrong once, wrong more than once perhaps?

Anyway, here are my findings – Solaris SATA support for onboard chipsets. Turns out all those cheap VIA boards like the ASUS K8V-X SE are just fine, and teamed with a 4-port Sil 3114 PCI card, I’ll be laughing. (I would prefer one of the Micro-ATX variants of these boards, but they use the K8M800 chipset, for which I can find absolutely no information one way or the other… logic dictates that it shouldn’t matter, but the “K8T800” is fine and the “K8T800 Pro” is not, so it ain’t necessarily so.) If I am wrong in any of this, please correct me, but as with everything on the interwebs, check it yourself first, and DON’T BLAME ME if it doesn’t work. No warranty, YMMV, etc.
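To give an idea of where this is heading, the pool I’m planning would be a single raidz across the four ports on the Sil 3114. A minimal sketch only – the device names (c1t0d0 and friends) are placeholders for whatever Solaris actually enumerates on your box:

# create a raidz pool across the four disks on the Sil 3114 card
# (device names are placeholders - check 'format' or 'cfgadm' for yours)
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

# verify all four disks came up healthy
zpool status tank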

Looks like the K8V-X SE motherboard has a Realtek 8201CL ethernet chipset, and I can’t find any drivers for it, even on this compendium of Solaris network drivers. Never mind, though – I’ve got, oooh, 3 PCI network cards lying about unused, mostly Realtek 8129/8139 chipset, so I’m laughing. Turns out I also have 4 sound cards (including 2 SB Lives). What the hell am I going to do with 4 sound cards?

14 replies on “Solaris SATA chipsets I can run ZFS with”

You’re not alone ;-)

I’ve been running a file server at home for a few years now. Currently, it’s Linux (Fedora) with the OS on RAID1 and data on RAID5 under LVM. I’m using 3 120GB SATA disks with a Syba 4-port SATA card. The disks are outside the PC with their own power supply and fan. I have 42″ SATA cables. And a cardboard contraption to duct air.

Anyways, it works well, but I want ZFS. So I got another Syba with some 500GB SATA disks, same cables, etc. Right now I’ve got Solaris 10 u3 on a dual PII 400 w/ gigabit ethernet. Next step is recognizing the Syba…

The Silicon Image 3132 chipset (PCIe) is supported (at least in 11/06) by the si3124 driver. I did have to add the following line to /etc/driver_aliases: si3124 "pci1095,3132"
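For anyone following along, the same binding can also be added without hand-editing the file, using update_drv. A sketch only (run as root; the PCI ID is the one from the comment above):

# bind the si3124 driver to the Sil 3132's PCI ID in one step
update_drv -a -i '"pci1095,3132"' si3124

# equivalent hand-edit route: append the alias, then do a reconfiguration boot
echo 'si3124 "pci1095,3132"' >> /etc/driver_aliases
touch /reconfigure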

I needed support for at least 6 SATA disks in ZFS
(plus root-fs DiskSuite-mirrored on two PATA disks).

I have tested Solaris 10 u3 (11/06) on two SATA-based boards.
One I can recommend and one I won’t:

1) MSI K9N Platinum :) Recommended
http://www.msi.com.tw/program/products/mainboard/mbd/pro_mbd_detail.php?UID=730
It has “everything you need” and everything works “out of the box”:
nForce 570 Ultra, DDR2, Socket AM2, 2x GbLAN, USB, FireWire, PCI-E x16,
1x PATA-133, 6x SATA2, ALC883 7.1 :)
The only thing I added was OSS http://www.opensound.com/download.cgi to get Linux-familiar sound (not needed on a file server).

2) MSI K8N Diamond Plus :/ It works, *but*
nForce4 SLI X16, Socket 939, SB Audigy, GbLAN, PCI-E x16
The second Ethernet NIC, a Marvell Yukon, is only supported via skgesol_x64v8.19.1.3.tar.Z (or newer) from SysKonnect http://www.skd.de/e_en/support/driver.html
The onboard Sil3132 does *not* work (because it has a RAID BIOS and can’t be flashed to non-RAID).
Sil3132 cards were the only PCIe-based ones I could find at a reasonable price, and blogs indicated they might work, so I bought a Sil3132 add-in card. A big *hack* was needed to get it working. First I had to flash the BIOS to non-RAID (download from Silicon Image). Solaris 10 11/06 panicked repeatedly at boot when I added 'si3124 "pci1095,3132"' to /etc/driver_aliases! So I figured out the blogs indicating success were about Solaris 11 (aka Nevada, Express) – *not* Solaris 10. So I downloaded Solaris Nevada Express and replaced the Solaris 10 11/06 SUNWsi3124 (pkgrm) with the SUNWsi3124 (pkgadd) from Nevada. It worked and has been stable for 3 months now.

*But* beware: regular kernel patches will replace the alien SUNWsi3124! It is a hack, and it works OK for ZFS, but don’t do this with root-fs! (ZFS is robust, so if the alien SUNWsi3124 drops out, it just makes the ZFS pool unavailable until the problem is fixed, and the pool goes online again once you have hacked SUNWsi3124 back in – even if /dev/dsk has been renumbered :)
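For anyone tempted to repeat this, the driver swap described above boils down to something like the following. A sketch only – the Nevada media path is a placeholder for wherever your install media actually lives:

# remove the Solaris 10 si3124 driver package...
pkgrm SUNWsi3124

# ...and add the one from the Nevada media instead
# (path is a placeholder for your media layout)
pkgadd -d /cdrom/nevada/Solaris_11/Product SUNWsi3124

# reconfiguration boot so the controller is re-probed
touch /reconfigure
reboot

# and remember: the next kernel patch will quietly put the old driver back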

Nice link :)

I’ll agree with the bastard comment, but I’ll give FreeBSD kudos for the Handbook, which makes it a bit easier for newbies to get up and going and stay up and going.

Still waiting to see what we can do with ZFS under the hood in Leopard …

I have to say I am hanging out for that.

Yeah, FreeBSD is a better choice for me on a home box – the Solaris installer was a little ‘antique’. The only problem I have on FreeBSD is dependency management in ports – I think I just don’t know how to administer it well yet, so that makes it my fault, not its. I do hear good things about Solaris in production unix boxen though…

Actively using FreeBSD 7-current and not very happy:
1. It says it’s recommended to have 512MB+ RAM – that statement doesn’t have much to do with reality.
It doesn’t matter how much RAM you have: you can be rock-stable with 256MB and have it crash with 1GB, even after tuning the kernel memory size (see the loader.conf sketch after this list). The outcome depends on how actively you are using ZFS – under heavy load you hit either some sort of race condition or simply “not enough” kernel memory, but the kernel panics, and you may be lucky if the whole thing reboots. No data loss was observed. Running zpool scrub may eat lots of kernel memory as well.
2. As I said before – you may be lucky if the whole thing reboots after a kernel panic rather than just freezing.
3. ztest fails almost at once, but that’s more about threading library problems, I guess.
4. I really doubt that ZFS will be usable until something changes in the kernel architecture (at the very least, kernel memory exhaustion should not cause a panic), or until ZFS can be tuned to use a fixed amount of kernel memory – otherwise, no matter how much RAM you have, you will still be able to crash it under active usage patterns, e.g. rapidly creating, writing, and deleting lots of small files. (This requires relatively fast CPU/drives, or you’ll be unable to crash it :).)
5. ZFS’s concept of giving each user a file system instead of a disk quota, and the general idea of having tons of filesystems, doesn’t look so good when you have to share them over NFS – instead of sharing just one folder, you’ll have to share 1000?! Nonsense…
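For reference, the kernel memory tuning mentioned in point 1 happens in /boot/loader.conf on FreeBSD. A sketch with example values only – not recommendations, and your limits will depend on architecture and workload:

# /boot/loader.conf - ZFS-related kernel memory tuning on FreeBSD 7
vm.kmem_size="512M"            # raise the kernel memory ceiling
vm.kmem_size_max="512M"
vfs.zfs.arc_max="128M"         # cap the ARC so it can't exhaust kmem
vfs.zfs.prefetch_disable="1"   # optional: reduces memory pressure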

Solaris is not good for lots of people either:
1. Its package management sucks.
2. Compiling lots of things requires much more effort than it does using the FreeBSD ports collection, more effort than doing it on some Linux, and generally requires knowledge of what you are doing and how it should be done on Solaris – e.g. there is no /usr/local (sure, you can create it, edit the paths and so on…). None of the common tools one is used to having on *BSD/Linux, either.

You do not have to share 1000 file systems… assuming you have /home/$user shares, you just have to share /home, and all child filesystems will be shared as well, just as if they were directories on a single FS.
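For what it’s worth, the property inheritance side of this looks like the following on Solaris. A sketch, assuming a pool named tank; note that each child still ends up as its own NFS export:

# sharenfs, like most ZFS properties, is inherited by child filesystems
zfs set sharenfs=on tank/home

# a new per-user filesystem picks up the share automatically...
zfs create tank/home/alice

# ...which you can confirm recursively
zfs get -r sharenfs tank/home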

I have a similar problem. I think I will just buy an external SATA2 card; that way I don’t have to worry about motherboards.
This card, the Supermicro AOC-SAT2-MV8, is reported to work (two sites) with Solaris:
http://napobo3.blogspot.com/2006/04/sata2-under-b36.html

It is PCI-X; however, several people say PCI-X cards work in a normal PCI slot. Best would be a server mobo.

No, this doesn’t work, at least on FreeBSD. If each user has his own FS, it has to be shared and mounted separately. Sun is aware of the problem – it’s present on any Unix system: you can’t share another FS even if it’s mounted into a subdirectory of a parent FS. Sun proposes using the automounter to facilitate the whole thing – but it’s sort of weird to have 1000 shares mounted, even if it’s fully automated.
http://www.sun.com/bigadmin/xperts/sessions/21_zfs/#5
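The automounter approach from that link looks roughly like this on the client side. A sketch only – “fileserver” and the export path are placeholders:

# /etc/auto_master - hand /home over to the automounter
/home  auto_home

# /etc/auto_home - mount each user's filesystem on demand
# ('&' expands to the key the user looked up, i.e. the username)
*  fileserver:/export/home/&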