Active boot disk serveraid drivers (9/22/2023)

So basically you're saying that even in RAID mode, the disks not configured as being part of an array (and thus lacking metadata) would be served directly to the OS without interference from the RAID firmware? If that's true, there would be no issue using these for ZFS. Give me one solid speck of information that shows these cards are bunk and I'll run to eBay and post these guys up ASAP :-D

You've used the term "flakey" with regard to these drivers in another thread if I remember correctly, but I don't recall you ever posting anything concrete or consistent. acesea in this thread said the same exact thing about OpenIndiana, which I have no experience with, and there are two other web links in that thread on page three that detail a kernel panic (in OpenSolaris), a ZIL failure (OpenSolaris and Nexenta), and two cards that dropped dead with no explanation of how they failed. Mind you, all of these accounts of failure were also from around May 2010 with an older set of drivers; the latest drivers were released at the turn of the year, nearly seven months after. I'm pretty sure that you can find any computer product on the market via Newegg, read the reviews, and discover that every product has its outliers when it comes to reliability. Not that my experience is the authority on the matter, but I've passed nearly 10TB of data through my systems so far without a single hiccup. Again, I've also done minimal testing with failing hard drives to see if I could get the system to do a drive dropout. I'd appreciate it if somebody else started testing these cards as well to begin ruling out the "flakey" stuff that was happening with previous driver revisions.

No, I had to download the imr_sas driver directly from LSI's website to get them both up and running. After unzipping and doing a simple "pkgadd -d ." command, I was up and running after a restart.

I believe you can make the change through MegaCLI, but I did it through the add-on utility called MegaRAID Storage Manager. Go to the following link, click on Support and Downloads, and you'll be presented with all of the drivers for the card, the latest firmware, and the MegaRAID Storage Manager (near the bottom). Installing it is a matter of running an install.sh script and rebooting. The cool thing about the Storage Manager is that once you install the software on your server, you can take the client portion of the software and install it on your end-user machine to gain access to the HBA's settings from there.

Is the M1015 for everyone? No; only if you want to save a few bucks over a 9211-8i and ARE FEELING ADVENTURESOME, because while results in my own testing have been positive, it needs more testing by more people before anyone stamps a guarantee on it for ZFS use. But you're right about keeping good backups. Is target-mode firmware technically cleaner for JBOD-based striped arrays like ZFS, given that it removes some complexity and perceived overhead? Perhaps, probably, but it's still arguable, as performance is pretty much identical between the 9211 with IT firmware and the M1015 in my testing with Solaris Express 11 and raidz2.

When disks aren't part of a raidset, the card simply doesn't interfere or intervene; there's nothing *for* it to error correct on a pass-through disk, because there's no other duplicate or parity data to correct it *with*.
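A quick way to sanity-check that claim is to look at how the controller reports unconfigured drives. This is only a sketch, assuming the stock MegaCLI command-line utility for these LSI-based cards (the binary may be named MegaCli or MegaCli64 depending on platform), and the exact wording of the state field varies by firmware:

```
# Sketch only: list every physical drive on every adapter and show just the
# reported state. On a 9240/M1015 with recent firmware, disks that are not
# members of any array typically show up as "JBOD" or "Unconfigured(good)"
# rather than "Online" (a RAID member state).
MegaCli -PDList -aALL | grep -i "firmware state"
```

Drives reported that way carry no controller metadata, which matches the pass-through behavior described below.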
You're making the assumption that a 9240/M1015 treats any disk as a RAID disk; it doesn't. Gone are the days of "simulated" JBOD on a RAID controller, where you had to configure individual disks as separate RAID-0s. The ERC timeout issue is therefore irrelevant for JBOD disks on a 9240/M1015: the default behavior of the iMR light RAID stack, in the last several firmware revs, is to treat unconfigured disks as JBOD/dumb mode and not apply its RAID-oriented ruleset (including the ERC timeout) to them. They are true passthrough, and the controller does not mark them with any metadata.
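The earlier mention of being able to "make the change through MegaCLI" doesn't say which setting was changed, but the adapter property usually associated with this JBOD behavior on iMR-based cards is EnableJBOD. The following is a hedged sketch, not a command taken from the thread; the property name, syntax, and availability depend on the firmware and MegaCLI build:

```
# Sketch only: query and set the adapter-level JBOD property on iMR-based
# cards (9240/M1015). With it enabled, unconfigured drives are passed through
# as JBOD instead of needing per-disk RAID-0 volumes.
MegaCli -AdpGetProp EnableJBOD -aALL    # show the current setting
MegaCli -AdpSetProp EnableJBOD 1 -aALL  # treat unconfigured disks as JBOD
```

MegaRAID Storage Manager exposes controller properties through its GUI as well, which would line up with the poster's description of making the change there instead of via MegaCLI.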