SageTV Community  

  #61  
Old 06-24-2008, 12:48 PM
MrD
Sage Aficionado
 
Join Date: Feb 2005
Location: Washington DC
Posts: 387
Quote:
Originally Posted by jerryt
The Areca 12XX and Highpoint 3XXX series cards support MAID. I can confirm that it works well on the Areca cards. I like that the drives are at 28°C whenever I check the SMART monitoring; I have another RAID 10 setup with a SIL 3132 card and those drives run continuously at 48°C.
Be careful using spin-up / spin-down on consumer drives (i.e. IDE). I have burned up a few drives with this feature enabled.

Of course I was using RAID so YMMV; this was on an Areca 1120 card.

I put my DVDs on the RAID filesystem, recordings are on a separate physical disk.
__________________
-MrD
=============
Linux Server 7.1.9 (1) HD300 (1) HD200 (1) HD100 (2) PC Clients
Intel Xeon L? 32GB
CetonTV cable card /FIOS
  #62  
Old 06-24-2008, 01:35 PM
jerryt
Sage Fanatic
 
Join Date: Oct 2007
Posts: 832
Quote:
Originally Posted by MrD
Be careful using spin-up / spin-down on consumer drives (i.e. IDE). I have burned up a few drives with this feature enabled.

Of course I was using RAID so YMMV; this was on an Areca 1120 card.

I put my DVDs on the RAID filesystem, recordings are on a separate physical disk.
My ARC-1230 supports up to 12 SATA drives.

Last edited by jerryt; 06-24-2008 at 02:45 PM.
  #63  
Old 06-24-2008, 02:21 PM
MrD
Sage Aficionado
 
Join Date: Feb 2005
Location: Washington DC
Posts: 387
Quote:
Originally Posted by jerryt
My ARC-1230 supports up to 12 SATA drives.
SATA and IDE are equivalent from a reliability perspective. It's SCSI or SAS that is designed for high(er) reliability.
__________________
-MrD
=============
Linux Server 7.1.9 (1) HD300 (1) HD200 (1) HD100 (2) PC Clients
Intel Xeon L? 32GB
CetonTV cable card /FIOS
  #64  
Old 06-24-2008, 02:48 PM
jerryt
Sage Fanatic
 
Join Date: Oct 2007
Posts: 832
Quote:
Originally Posted by MrD
SATA and IDE are equivalent from a reliability perspective. It's SCSI or SAS that is designed for high(er) reliability.
I will see whether this is an issue; I can always turn off "Drive spin down" if it becomes a problem. The Seagate drives are all new and have a five-year warranty, so I have time to try the spin-down feature.
  #65  
Old 07-03-2008, 06:54 AM
jerryt
Sage Fanatic
 
Join Date: Oct 2007
Posts: 832
Quote:
Originally Posted by jerryt
Resolved this issue. Rebuilt the array (non-LBA64, < 2TB) and Sage Server, Client, and Media Extender are all playing nice. (One theory is that there was a file on the array which was using port 42024. A virus?)
Rebuilt the array with LBA64 and GPT partitions at 2.1TB, then expanded it to 2.8TB, and Sage is still working fine.

Super having everything working together!!
  #66  
Old 07-03-2008, 09:25 AM
briands
Sage Icon
 
Join Date: Aug 2004
Location: Bloomington, IN
Posts: 1,093
For those that don't frequent AVS...

For those that don't frequent AVS, I thought this might put some of our obsessions in perspective
  #67  
Old 08-12-2008, 02:30 PM
mikehaney
Sage User
 
Join Date: Jul 2008
Posts: 39
This has been an endless struggle for me as well. I have about 2.5TB of data currently on unRAID. I have tried several times to switch to something else, but I always end up coming back for the flexibility. Still, I would really love to get everything down to one server. Currently I have my unRAID server and a Windows Home Server running Sage. I tried using WHS for everything, but the performance was lacking and it just wasn't flexible enough.

What I am thinking about is switching to the Linux version of Sage and running everything on a single Linux box. For a couple of months I played around with running Linux (Ubuntu) on my server, and even though it was a lot more involved to set up, the performance of LVM over RAID5 was blazing. I never thought software RAID could be that quick. I'm just a little worried about going with Linux for everything, because I don't want to limit my options. For example: is there a comskip equivalent I can run on Linux (that is supported by Sage)?

I'm just wondering if anyone has tried this option (Sage for Linux with software RAID5) and what the limitations might be?
  #68  
Old 08-12-2008, 02:38 PM
svemuri
Sage Advanced User
 
Join Date: Dec 2005
Posts: 79
I have this setup running with 2TB in RAID5 on CentOS 5: 1 AVerMedia A180 HD OTA, 1 Hauppauge HVR-1800 OTA, and 1 Hauppauge PVR-500 MCE (two ports) connected to Dish receivers, with 3 HD extenders. No major issues.
I've given up on UnRAID.
Quote:
Originally Posted by mikehaney

I'm just wondering if anyone has tried this option (Sage for Linux with software RAID5) and what the limitations might be?
  #69  
Old 08-12-2008, 06:28 PM
unkyjoe
Sage Advanced User
 
Join Date: Dec 2006
Location: Seguin TX
Posts: 221
I tried using Linux as my media server, but it was just too complicated.

So I built a W2K3 server with a decent Intel MB and lots of RAM.

I don't care about recordings being lost, so I don't worry about RAID; you actually take a performance hit with RAID, which is why you really need a good-quality RAID card.

I simply have my drives in a 3-5 Addonics SATA adapter unit, running off the standard on-board SATA controllers plus one add-on card. It runs as the Sage server and as the storage server for all audio/video/DVDs on the home network.

The only thing I use RAID for is the system drive, where I use RAID 1 to simply mirror the drives.

My biggest issue with Linux was the SMB performance: it was taking 8-10 seconds to open files across the network with Linux serving them, versus 1-2 seconds with W2K3.
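For what it's worth, the usual first-pass Samba tuning of that era comes down to a few smb.conf settings. This is only a sketch with illustrative values, and I can't promise it closes the gap:

    [global]
        # illustrative tuning values, not benchmarked
        socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
        use sendfile = yes
        read raw = yes
        write raw = yes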

I still use Linux for my telephone system.
__________________
SageTV HD100 Extender
SageTV MVP Extender
Media Server-WHS - 2GB Ram - 3TB storage - Windows 7 MC - My Movies Plugin for Win7MC
"Life is a banquet and most people are starving to death!"
  #70  
Old 08-12-2008, 06:46 PM
pez
Sage Advanced User
 
Join Date: Aug 2004
Location: Arizona
Posts: 165
Has anyone tried this software RAID solution from Ciprico? Tom's Hardware reviewed it in an infomercial.

It looks interesting because of OCE and OLRM (online capacity expansion and online RAID level migration), which might be worth the $50 they want for it.

-pez
  #71  
Old 08-13-2008, 12:31 AM
erik
Sage Aficionado
 
Join Date: May 2005
Posts: 467
Quote:
Originally Posted by mikehaney
For example - is there a comskip equivalent I can run on Linux (that is supported by Sage)?
Comskip runs with equal performance under Wine on Linux.
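The invocation is the same as on Windows; something along these lines (file name illustrative):

    wine comskip.exe recording.mpg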
__________________
Support Comskip, visit the forum and donate at http://www.comskip.org/
  #72  
Old 08-14-2008, 06:45 AM
Fuzzy
SageTVaholic
 
Join Date: Sep 2005
Location: Jurupa Valley, CA
Posts: 9,957
A lot of this talk of NASes and such just seems like a waste to me. I have a Windows XP machine running my Sage server and all background tasks (comskip, transcode, etc.), and it has all my storage directly attached. I have done away with RAIDing my Sage recordings and simply broke the storage into individual drives. Sage does a great job of spreading the love among the storage locations. This also lets the individual drives spin down instead of getting locked up with the group as in a RAID array. Adding more Sage storage is extremely easy: you just add a drive and add it to the list of Sage storage locations. No RAID migration required, no rebuilding problems.

I do still have 2 drives mirrored for my critical data backups (kids' pictures, record keeping, etc.), and those are also backed up on my desktop (which has plenty of unused room).

This is all on one machine, which saves considerable juice (over two separate ones). I may in the near future purchase some USB and HDMI extension cables and make this my HD client as well, since it usually has plenty of horsepower left over.
__________________
Buy Fuzzy a beer! (Fuzzy likes beer)

unRAID Server: i7-6700, 32GB RAM, Dual 128GB SSD cache and 13TB pool, with SageTVv9, openDCT, Logitech Media Server and Plex Media Server each in Dockers.
Sources: HDHR Prime with Charter CableCard. HDHR-US for OTA.
Primary Client: HD-300 through XBoxOne in Living Room, Samsung HLT-6189S
Other Clients: Mi Box in Master Bedroom, HD-200 in kids room
  #73  
Old 08-18-2008, 12:08 AM
Ikarius
Sage Advanced User
 
Join Date: Aug 2008
Posts: 84
I figure I'll reply here since I *just* finished building a box that does exactly what you're asking about. I gave up on external enclosures, as they all seem to have terrible cooling/airflow; as a result, drives in them usually die on me somewhere between 6 months and 1 year.


I had a few additional requirements:
1. I wanted a small form-factor case
2. I wanted to be able to run VMware for virtual machines on the server

Hardware was the following:

Case: Apevia X-Qpack (shoebox form factor, micro-ATX mobo) http://www.newegg.com/Product/Produc...82E16811144110

Motherboard: ASUS P5BV-M (server-class motherboard, micro-ATX form factor) http://www.newegg.com/Product/Produc...82E16813131257

Drive enclosures: Icy Dock SATA enclosure (gives me 3 hot-plug SATA trays in 2 external 5.25" drive bays)
http://www.newegg.com/Product/Produc...82E16817994026

SATA drives: 3 WD 1TB Green drives (low-power, big drives)
http://www.newegg.com/Product/Produc...82E16822136151

Boot drive: 4GB PATA flash drive (boot/OS drive)
http://www.logicsupply.com/products/fdm40xdi4g


8GB RAM, 2.4GHz Core 2 Duo chip

Software:
I loaded the system with Ubuntu 8.04 x64 using a USB CD-ROM drive, plugged in the 3 SATA drives, and built a software RAID-5 set with them. Total usable space on the RAID set is 2TB. I benchmarked the software RAID volume at 80MB/sec writes and 130MB/sec reads. That's far from insane, but it beats the living snot out of ANY consumer-grade NAS you can purchase. I chose software RAID because Linux's implementation is reasonably speedy, and I'm not up a creek without a paddle if the RAID controller fails 2-3 years down the line and I find out I can't replace that exact model, and can't necessarily read my RAID set with whatever new model is being sold (yes, I've seen that happen).
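For the curious, the build boils down to a handful of commands. This is a sketch from memory with illustrative names: the three drives are assumed to enumerate as /dev/sdb through /dev/sdd, and the filesystem and mount point are just examples.

    # create the 3-drive RAID-5 set (device names illustrative)
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    # put a filesystem on it and mount it (ext3 is just an example)
    mkfs.ext3 /dev/md0
    mkdir -p /srv/media && mount /dev/md0 /srv/media
    # rough write benchmark
    dd if=/dev/zero of=/srv/media/test.bin bs=1M count=4096 conv=fdatasync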

I've got the system running now, SageTV is running, VMware is running, and I'm seriously loving it. Small form factor, and the chipsets used on the server-class motherboard are terrific (reliable stuff, Intel & Broadcom, no Marvell or JMicron crap). Booting off a flash disk was extra points on the cool factor. If one of the RAID drives fails, I can simply yank it out and replace the drive. I won't even need to reboot the server.
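The replacement dance itself is simple; a sketch with illustrative names, assuming the dead drive is /dev/sdc:

    mdadm --manage /dev/md0 --fail /dev/sdc
    mdadm --manage /dev/md0 --remove /dev/sdc
    # swap the tray, wait for the new disk to enumerate, then:
    mdadm --manage /dev/md0 --add /dev/sdc
    # watch the rebuild
    cat /proc/mdstat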

If you're interested and want more details, feel free to ask.

Caveat: I'm a professional systems admin, so I do this stuff for a living. There were some pretty hairy bits I went through, mostly due to having to start out with 2 of the 1TB drives and then grow onto the 3rd so I could transfer my old media collection onto the RAID set.

Cheers
Ikarius

Last edited by Ikarius; 08-18-2008 at 12:31 AM.
  #74  
Old 08-13-2009, 01:56 AM
aarcane
New Member
 
Join Date: Aug 2009
Posts: 3
I came across this thread looking for general massive-storage solutions. One highly scalable solution I've come up with requires no special hardware and will work with any Linux distro and most BSDs. It goes as follows.

Storage starts in cheap cases ($50-$80 on Newegg); I can reasonably get 12 drives of any size into an arbitrary case. A decent motherboard with a few PCIe ports isn't too much more expensive. Add an Intel gigabit card ($30) and a SATA controller or two to supplement what the motherboard provides, and you have 12 drives with no external attachments and no port multipliers (which can get expensive). Throw in a decently fast CPU (AMDs are cheap) and you can run any of several configurations. Assuming 1TB drives, which are widely available at under $100 from various manufacturers:

1) 12 drives in RAID 10 for 1 massive 6TB volume.
2) 6 drives in RAID 6 for 2 medium volumes of 4TB each, or 8TB total.
3) 4 drives in RAID 5 for 3 small volumes of 3TB each, or 9TB total.

You can of course make a number of other configurations, and if you want to invest more $$ in the underlying hardware, you can get more storage per system.

If absolute cost/TB is your goal but you require some sort of redundancy, go with option 3. If you've got some $$ to blow, go with option 1. It's all up to you.

You configure your groups using software RAID and export the resultant disks using AoE or iSCSI.

That's the backend node setup.
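To make that concrete, a backend node boils down to something like the following. A sketch only, with illustrative device names, using vblade for the AoE case; an iSCSI target daemon would replace the last line.

    # build one group out of six drives (RAID 6 here, names illustrative)
    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
    # export the group over AoE as shelf 0, slot 0 on eth0
    vblade 0 0 eth0 /dev/md0 &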

You then have a head node. It's connected to a switch, and through it to each backend node.

The head node mounts every exported group and then runs Linux LVM on top; each group is a pv. Of course, with LVM you can have one large vg or a number of smaller ones; it's up to you. I'm in favor of one large vg. You can create a massive (think: easily 20TB+) lv and format it with XFS (XFS performs well on every disk operation except delete). You can make another lv (2-5TB?) and format it with any filesystem you prefer. The beauty of LVM is that you can map an lv to a specific pv within a vg: if you export a 6TB group of RAID 10 from one system and map a 6TB filesystem onto exactly that pv, you suddenly have 6TB of drive space that can tolerate up to 6 simultaneous drive failures (one per mirror pair).
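On the head node the AoE exports show up as /dev/etherd/eX.Y once the aoe module is loaded, and the LVM layering is the standard sequence. A sketch with illustrative names and sizes:

    # each imported group becomes a pv in one big vg
    pvcreate /dev/etherd/e0.0 /dev/etherd/e1.0
    vgcreate media /dev/etherd/e0.0 /dev/etherd/e1.0
    # a big XFS lv allocated across the vg
    lvcreate -L 8T -n video media
    mkfs.xfs /dev/media/video
    # pin an lv to one pv so its fault domain is exactly that group
    lvcreate -L 5T -n important media /dev/etherd/e1.0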

Once you have your drives partitioned and mounted:

/media/video (20TB+)
/media/important (6TB)
/media/backups (6TB)

you can export them to Linux clients on your main LAN using NFS, or to Windows clients using Samba.
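The NFS side is a couple of lines in /etc/exports on the head node (network and options illustrative), followed by exportfs -ra to publish them:

    /media/video      192.168.1.0/24(ro,async,no_subtree_check)
    /media/important  192.168.1.0/24(rw,sync,no_subtree_check)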

In my scheme, users back up their important files (DVD rips? school work?) to /media/important. Amanda comes along at night and backs up everything in important to /media/backups. /media/backups lives on a higher-end server with a nicer motherboard and more CPU, locked away in a secure place in my house at the far end of a long run of Cat6 cable.

This system has a simple growth plan.

Each year you can buy a new cheap system over about 3 months at $100/mo; then each month you plop down $100 per drive. When you have enough to bring up your new group, you export it, then start buying the next drives.
When you need more space, you simply run lvm pvcreate; lvm vgextend; lvm lvresize; xfs_growfs and you're done.

You can grow anywhere from 3TB per 4 months up to 6TB per year at a price of $100/mo.
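Spelled out, the grow sequence looks like this (device and volume names illustrative, matching the sketch above; XFS grows while mounted):

    pvcreate /dev/etherd/e2.0          # the freshly exported group
    vgextend media /dev/etherd/e2.0    # add it to the volume group
    lvresize -L +6T /dev/media/video   # grow the logical volume
    xfs_growfs /media/video            # grow the filesystem in place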

Important Notes:

Buy various brand names and product lines -- The primary problem with RAID arrays is drive failure. Statistics show that drives from a single manufacturer are likely to fail at approximately the same time; meaning, if you have 4 drives of the same model, bought at the same time from the same company, they're more likely to fail together, and all your data goes poof. Ergo: mix and match. Keep the capacities similar and you won't waste more than a few MB per drive.

Boot Volume? -- Linux needs a hard drive to boot from. There are a few options here:
1) Buy a 1.5TB drive along with your 1TB drives -- poof, instant 500GB boot volume.
2) Buy a USB drive. 2-8GB for only $20, and you're good to go. Hope it doesn't fall out or get kicked.
3) Use that annoying IDE port on your mobo -- with 12 drives per case being easy to do, you can easily find a slot for one more.
4) PXE boot from your head node -- root over NFS is easy to manage if you have a small OS and lots of RAM (a minimal sketch follows this list). Note: this doesn't work well with consumer distros like *hat*, *core*, fedora*, *man*, *buntu*, etc.; they have large base installs.
5) LiveCD -- you'll probably have to custom-press one; the stock default configs won't do.
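For option 4, the sketch mentioned above is roughly this: the head node runs dhcpd and a TFTP server, and the pxelinux config hands each backend a kernel with an NFS root. All names, paths, and addresses here are illustrative, and the kernel needs NFS-root support built in:

    # pxelinux.cfg/default on the head node
    DEFAULT backend
    LABEL backend
      KERNEL vmlinuz
      APPEND initrd=initrd.img root=/dev/nfs nfsroot=192.168.1.1:/srv/nfsroot ip=dhcp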

Gigabit -- Spring for gigabit. It'll make everything faster.

Power -- Most of the power consumption in a computer is in the monitor, even for LCDs. You don't need 1 monitor per server; if you're competent you'll need 1 monitor total. You hook it up to a server just long enough to get sshd running, then do the rest from your laptop. Done.
Additionally, Linux, when properly tuned with hdparm, will spin down inactive volumes. Yes, this works with RAID arrays, LVM, and even AoE and iSCSI.
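The hdparm bit is one flag; -S takes units of 5 seconds, so the sketch below spins a drive down after 20 idle minutes (device name illustrative):

    hdparm -S 240 /dev/sdb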

Drives on the head node? -- Maybe. Do you want a small, locally backed-up partition? It's a tradeoff. If you have RAID 5/6 running, it takes a lot of CPU power, and you'll have fewer CPU cycles left for doing other stuff. If your backend node count runs high enough, and your throughput is sustained enough, you probably don't want disks in your head node. If you only have 2 backends and no plans to grow, then you can probably save some $$ by putting one or two groups in your head node.

Saturation? -- Most usage patterns will lead to one group filling, then another, then a third, with very little sustained transfer to multiple groups. When reads occur, they're more likely to hit the same groups you've recently written to (you're more likely to watch last night's House than last year's). Ergo, bonded connections aren't worth it most of the time; a 1-gigabit link will suffice for most patterns. If you do need to bond for some extra aggregate bandwidth, do it only on the head node and the newest backend node.

Using the head? -- If there aren't a lot of backend nodes, or you don't use a lot of disk access, the head node is certainly capable of running a commercial-detection scanner, a web server, or other services. Due to its mission-critical nature, I wouldn't advise using it as your internet gateway/router, nor as any sort of media player or desktop system. My original concept calls for having it manage backups, and maybe network monitoring.

The whole stack is basically:
SMB/NFS | gigabit
XFS
LVM
AoE/iSCSI | gigabit
RAID 10/5/6
SATA

Commercial NAS/SAN solutions do basically the same thing, only substituting FibreChannel for the gigabit parts and hardware RAID for software RAID, with lower drive capacity at higher cost.

I've been thinking about this for a long time, and I know I've covered a lot of the bases, but I'd love any feedback you guys might have, or any issues you see.
  #75  
Old 08-13-2009, 02:34 AM
Fuzzy
SageTVaholic
 
Join Date: Sep 2005
Location: Jurupa Valley, CA
Posts: 9,957
Some of this depends on your use. If this is purely for SageTV storage, you'd be better off just installing the drives in the Sage server, as opposed to a separate NAS setup. Performance and simplicity will be greatly improved.

You could also build a separate SAS-attached case for your drives, with its own power supply, but still run them directly from the server. You're going to need a large controller, or numerous drive controllers, for your proposed 12+ drive NAS anyway; you might as well spring for a decent SAS card that does the RAID in hardware, and put that in the server itself.
__________________
Buy Fuzzy a beer! (Fuzzy likes beer)

unRAID Server: i7-6700, 32GB RAM, Dual 128GB SSD cache and 13TB pool, with SageTVv9, openDCT, Logitech Media Server and Plex Media Server each in Dockers.
Sources: HDHR Prime with Charter CableCard. HDHR-US for OTA.
Primary Client: HD-300 through XBoxOne in Living Room, Samsung HLT-6189S
Other Clients: Mi Box in Master Bedroom, HD-200 in kids room
  #76  
Old 08-13-2009, 03:35 AM
Ikarius
Sage Advanced User
 
Join Date: Aug 2008
Posts: 84
I've been looking at this stuff for a LONG time. I posted the server I built a while back in this same thread, but 3 drives is a bit small. One thing which I seem to have stuck in my head is the absolute desire to keep the server package small. I'm still looking for what I want... eventually it should show up.

One pretty darn interesting box which has shown up is http://www.tranquilpc-shop.co.uk/aca...E_SERVERS.html

It's awfully close, but it has two issues for me: there are no bays other than the 5 drives that will form the RAID, and I'd like a separate boot device, preferably a 2.5" SSD. It's also really underpowered.

I'm contemplating going to a really small server box and coupling it via SAS to an external enclosure with 5 drive bays. That may get me where I'd like to be with server power, size, and storage capacity.

On the OS side of things, I've played a while with OpenSolaris and found it to be superlative. It's vastly cleaner than Linux, the product of a single company driving the vision. ZFS is also absolutely *amazing* technology, and is now my preferred route for managing large storage. With drives as large as we have today, it is becoming more and more likely that we'll hit a non-recoverable error on one drive just when another fails, leading to a failure of the rebuild... leading to loss of *all* the data on the RAID. ZFS supports background scrubbing, which can detect these errors ahead of time, so they're not first encountered at the moment you can least afford them: RAID rebuild time.
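For flavor, a raidz pool plus a scrub is about this much typing on OpenSolaris; a sketch, with the usual Solaris device names as placeholders:

    # five-disk raidz pool (single-parity)
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
    # walk every block and verify checksums in the background
    zpool scrub tank
    zpool status -v tank    # shows scrub progress and any repaired errors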

The singular problem with OpenSolaris is... SageTV and the requisite drivers for capture devices don't run on it. Perhaps the answer is splitting the storage server away from the Sage server, but I truly want an all-in-one package, and as others have pointed out, having Sage run on the machine with the directly attached drives performs better. Linux's NFS code is, frankly, awful. And while the Samba project has produced a great piece of software, the CIFS protocol itself is poor.

Decisions decisions.

Cheers
Ikarius
__________________

SageTV 6.6.2, SageMC+CenterSage Theme
Server: Intel Core2 Q6600, 8GB memory, 3x 1TB WD EACS drives, software RAID5 with 2TB capacity, 4GB flash boot drive, Ubuntu 8.04 Server edition
Capture: 1x HD-PVR -> Motorola DTC6200
Clients: 1x STX-HD100 1x STX-HD200, Windows & OSX Clients

Last edited by Ikarius; 08-13-2009 at 03:43 AM.
  #77  
Old 08-13-2009, 10:12 AM
gjvrieze
Sage Advanced User
 
Join Date: Dec 2007
Posts: 116
Here is my configuration:

File/Sage TV Server:

14.08TB

Norco 4020 Case
ASUS DSBF-DE
Dual quad-core Intel Xeon E5410s
4GB FB-DIMM
80GB boot drive
(14) 1TB Seagate drives in RAID 6
Areca 1280ML RAID controller

Backup Server:

11TB

ASUS M2N-SLI Deluxe
AMD 4200x2
2GB RAM
120GB Boot
(2) 1TBs
(6) 1.5TBs
(No RAID)


__________________
Rackmount 20-drive hotswap case, 2x Intel E5410 Xeons, 4GB FBDIMM, 1x 80GB (OS), 14x 1TB in RAID 5 (DATA!!), Areca 1280ML, Server 2k3 Enterprise x64 R2. Sage 6.4.8
  #78  
Old 08-13-2009, 11:00 AM
Ikarius
Sage Advanced User
 
Join Date: Aug 2008
Posts: 84
I think I just found the case I've been looking for for so damn long.

http://www.pcdesignlab.com/Product-Qv2E

Shoebox form factor, takes standard micro-ATX motherboards, and has 3 external 5.25" drive bays. I have just emailed them asking them to confirm that this case will work with something like http://www.pc-pitstop.com/sas_cables.../jage35r40.asp, allowing 5 hot-plug SATA drives. As most server motherboards come with 6 SATA connectors, that leaves one free connector to drive an internally mounted SSD.

I'll let y'all know what they have to say!

Oh, and briands: that link you posted from AVS forums -- I've got two of those same 24-drive Supermicro chassis in use at work, filled with 1TB hard drives and running OpenSolaris. They're awesome!


Cheers
Ikarius
__________________

SageTV 6.6.2, SageMC+CenterSage Theme
Server: Intel Core2 Q6600, 8GB memory, 3x 1TB WD EACS drives, software RAID5 with 2TB capacity, 4GB flash boot drive, Ubuntu 8.04 Server edition
Capture: 1x HD-PVR -> Motorola DTC6200
Clients: 1x STX-HD100 1x STX-HD200, Windows & OSX Clients

Last edited by Ikarius; 08-13-2009 at 11:21 AM.
  #79  
Old 08-13-2009, 12:55 PM
aarcane
New Member
 
Join Date: Aug 2009
Posts: 3
You're both right: OpenSolaris would most likely be a superior choice for the head node in my setup. There are a few hidden requirements (dhcpd for managing the backend nodes, etc.) that I'm sure will be no problem. Anyway, the point of moving the drives into inexpensive chassis with inexpensive motherboards is that it's cheaper to get 12 drives into a single chassis, RAIDed in software and exported over iSCSI, than to buy a commercially manufactured product. I can't even find an empty rack-mount 12-drive chassis for under $300, but I can build one myself, with iSCSI and RAID, for under $200.

http://www.tranquilpc-shop.co.uk/aca...E_SERVERS.html
In the 4+1 configuration, this chassis has 4 drives in a RAID array and 1 drive outside it. The drive not in the array is not hot-swappable and is suitable as an OS drive. The other 4 can of course be exported as a single volume over NFS, Samba, AoE, or iSCSI, depending on your setup.

The problem I have with hardware RAID in general is that once you initialize your array and copy over the first bits of data, you're stuck with your proprietary choice of controller. Not so bad, until things go wrong: when that controller dies, you have to replace it with an IDENTICAL controller and hope there's a recovery path (a good manufacturer will have one, but some cheaper cards might not let you recover an array). With Linux's software RAID, you just need to remember your array specifications, and you can usually look that information up from the raw drives themselves. A raidtab is optional and makes recovery easier: you just copy your raidtab, move the existing array's drives from the dead system to a new one, load the array, and then optionally fsck it.
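With mdadm specifically, recovery on a new system is about this involved; a sketch, assuming the drives from the dead box are simply plugged into the new one:

    # read the array metadata off the raw drives and record it
    # (config file path varies by distro)
    mdadm --examine --scan >> /etc/mdadm.conf
    # assemble every array the metadata describes
    mdadm --assemble --scan
    # optional read-only check before mounting
    fsck -n /dev/md0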

I'm off to research how OpenSolaris' ZFS would handle large numbers of drives across multiple chassis.
  #80  
Old 08-13-2009, 02:12 PM
Ikarius
Sage Advanced User
 
Join Date: Aug 2008
Posts: 84
Sigh. Got the email back from them: there are things in the way of fitting a 5-in-3 SATA hot-swap bay in the box. The guy did say they've been considering making a case where such a setup WOULD work, however. Anyone else interested, feel free to shoot them a message saying you'd like such a product!


Cheers
Ikarius
__________________

SageTV 6.6.2, SageMC+CenterSage Theme
Server: Intel Core2 Q6600, 8GB memory, 3x 1TB WD EACS drives, software RAID5 with 2TB capacity, 4GB flash boot drive, Ubuntu 8.04 Server edition
Capture: 1x HD-PVR -> Motorola DTC6200
Clients: 1x STX-HD100 1x STX-HD200, Windows & OSX Clients