SageTV Community > Hardware Support

#21, 11-08-2010, 03:03 PM
farscapesg1 (Sage Advanced User; joined Mar 2005; 202 posts)
Well, I finally got all the parts in and I've been testing it on a limited basis without any issues so far.

Hardware:
CPU - Intel Xeon E5640
MB - Supermicro X8ST3-F
Memory - 12 GB Kingston ECC
Hypervisor - ESXi 4.1

I loaded ESXi onto a 4 GB USB flash drive and boot from there. I created a WHS VM (2 vCPUs and 1 GB RAM) and installed SageTV 7. I moved one of the DualTV cards from my current system and configured it for PCI passthrough, along with the onboard SAS controller. I then added a 400GB drive using an RDM as a recording drive. SageTV is configured to use the DualTV card as well as my HDHomeRun as recording sources.
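
For anyone wanting to replicate the RDM piece, a minimal sketch from the ESXi 4.x console; the naa.* device ID, datastore, and file names are placeholders for your own system:

    # List local disks to find the device identifier of the 400GB drive
    ls /vmfs/devices/disks/
    # Create a physical-mode RDM pointer file for that disk
    vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX \
        /vmfs/volumes/datastore1/WHS/whs_recording_rdm.vmdk
    # Then attach whs_recording_rdm.vmdk to the VM as an existing disk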

As a test, I set it up to record two SD and two HD shows at the same time, then connected one of my HD200s and checked the recordings. While I didn't watch the shows all the way through, I did watch about 10 minutes of continuous video in each and skipped around looking for any corruption, skipping, or audio sync errors. Couldn't find any.

This was with the commercial skip plugin installed, and I can see the EDL files, so it appears to have run. I need to load up ShowAnalyzer and run another test, since that is what I've been using on the old WHS.

Still trying to decide if I want to keep SageTV on the WHS VM or create a separate VM running XP (or even Win7) to separate SageTV from my WHS. My main reason for a separate VM would be to install the WHS Connector software and be able to back up that system so I can easily restore it at a later point if I need to... including all my SageTV settings.

This was while running a Server '03 VM (2 vCPU, 2 GB RAM - vCenter), a Server '08 VM (1 vCPU, 2 GB RAM - AD), an XP VM (1 vCPU, 1 GB RAM), and a Win7 VM (1 vCPU, 1 GB RAM). All the OS VMDK files are currently stored on, and running off of, a single WD Blue 250GB drive in a Dell Optiplex 760 running Openfiler for iSCSI.

My current plan is to pick up a PERC 5/i and configure a 4-drive RAID10 array for the VMs (still booting ESXi off a USB drive). All the storage drives for the WHS VM will be connected to the onboard SAS controller.

The other nice thing was that I tested encoding a DVD using MediaShrink, and the VM with only 2 vCPUs took 5 minutes longer than my current AMD 9750 system. I can live with that minor loss in speed. This was not while recording, of course.

#22, 11-08-2010, 07:58 PM
DigitalMan (Sage Advanced User; joined Aug 2007; 82 posts)
Glad to hear it. That is what I was expecting you would see. I don't think you need to be concerned about encoding DVDs while recording. I do that frequently, even while another VM is encoding to H.264 (although I have 8 cores, plus hyperthreading).

I would also suggest increasing the CPU shares in ESXi for the Sage VM; you could then make the encoding VM 4 vCPUs without any issue. With the right share values, Sage gets all the CPU it wants when it needs it, and the others can use it all when it doesn't.
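
A sketch of where that knob lives; the datastore path and VM names here are placeholders, and the same setting is exposed in the vSphere Client under VM Properties > Resources > CPU:

    # Give the Sage VM high CPU shares so it wins when cores are contended
    vi /vmfs/volumes/datastore1/SageTV/SageTV.vmx
    #   sched.cpu.shares = "high"
    # Leave the encoding VM at "normal" so it only soaks up idle cycles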

#23, 11-08-2010, 11:03 PM
crusing (Sage Advanced User; Kennewick, WA; joined Sep 2010; 160 posts)
Please keep posting on your experiences with this. I have been contemplating this same thing for a while, as the PCs have sprouted like mushrooms in my house. Thanks!

#24, 12-16-2010, 12:20 PM
farscapesg1 (Sage Advanced User; joined Mar 2005; 202 posts)
Sorry for not updating recently.

WARNING... this post contains a lot of rambling due to massive indecisiveness.

All my testing with SageTV in a VM was going smoothly, and I had planned on migrating to the new setup at Thanksgiving since I took the week off. Bought a nice new SageTV 7 license to do the testing and planned on upgrading at the same time. Then my wife put a hold on that because she didn't want to deal with the change... and had a lot of stuff planned for the week, so I wouldn't have time for the migration.

About this time, I also found out that Microsoft has dropped Drive Extender (DE) from WHS v2, which left me in a "what now" state. I don't see the point in doing a new WHS install if MS is basically killing off its next generation. What's the point in continuing to use a dying product... right?

That got me looking at unRAID, which then led me down the path of NAS/SAN options and made me start thinking about setting up a second system just for storage, for two reasons...

1) At some point I may add another ESXi host, and having iSCSI/NFS storage would give me HA for my VMs. Especially with the HDHomeRun Prime coming out and my hope that it will be supported in SageTV "somehow".

2) Cables from the PERC 5/i to the Norco 4220 (SFF-8087) are expensive! $50/cable just to go from SFF-8484 to SFF-8087??? For that price I can buy a Norco RPC-470 case, throw in my old Intel E6400 desktop components with the data drives I already have, load up Openfiler/FreeNAS/NexentaStor, and have my SAN storage.

Now that leaves me with a dilemma. With the loss of DE in WHS v2, I don't see any reason to continue down that path. The SageTV WHS plugin was nice, but after years of using SageTV, installing it manually in XP/Win7 isn't that hard. v7 makes it even easier with the plugin system.

So now I have to decide if I want to build a separate SAN system to house my data drives or run them in the same box.

Using the external SAN approach, I have six SATA ports on the motherboard, plus the PERC 5/i (with breakout cables and battery). Running Openfiler/FreeNAS off a USB drive, I could set up a software-RAID 4-disk RAID10 for VMs and two disks (no RAID) for TV recording off the motherboard ports, then run a 5-disk RAID5 of my WD20EADS drives off the PERC 5/i for media storage passed to a VM, and a separate 2-disk RAID1 array for ISOs/etc. OR... I could pick up an LSI 1068E-based card, ditch the PERC 5/i, and run everything as software RAID, which would give me more expansion options by just adding another SATA/SAS HBA in the future.

Combining everything in the same box would mean having to pass the onboard SAS controller/ports to a VM, either a SAN/NAS appliance or just a fileserver. It also means buying those expensive cables if I want to make use of the PERC card for a RAID10 VM datastore. I know some people run FreeNAS/Openfiler/etc. as a VM to house other VMs, but that just seems "kludgy" to me.

I also have to figure out a new solution for my media server VM. If I keep the drives in the same box as the ESXi host, unRAID might be an option, but its USB-boot requirement doesn't work if I go with an external SAN solution, since I would want to make use of HA at some point. I'd also need to replace the other WHS features I made use of (file access via the Internet, web host for forum software, PC backup, etc.).

I guess at this point I'm back to looking for suggestions/advice on the above ramblings and testing various scenarios. I'm also kind of waiting to see what happens with the HDHR Prime. My SD tuners would put a stop to any HA options, which cuts any reason for an external SAN setup, but I don't really want to start down the "single-box" path only to waste the money on cables (which would be almost half the cost of a Prime) when I decide to move to the external SAN setup.

#25, 12-22-2010, 10:38 AM
elefante (Sage User; New York; joined Feb 2010; 18 posts)
Yes, virtualization works

I have the following set up and working perfectly:

Two ESXi hosts (E+), each booting from an 8 GB USB drive (you should have at least 5 GB of space for ESX swap if you are going to vMotion); each is a quad-core AMD 940 (AM2) with 16 GB RAM.

I have about 20 VMs running in this config, but my SageTV VM is 2 vCPUs and 2 GB, and I run Comskip and 2 HDHRs (over IP), all Intel NICs. I have 2 HD200s and 2 HD300s, and set the Java heap to 860 MB. No skips, no problems. For each vCPU that you add and don't need, you are wasting RAM and scheduling cycles. I have this conversation with my customers all day long. Also, 64-bit VMs use more RAM (reserved) than 32-bit VMs, so only use 64-bit VMs if you really need to; this can quickly add up to hundreds of megs.

Contrary to urban legend, vSphere is fully capable of hosting almost any workload (even massive OLTP databases) if configured properly, and that includes streaming video and the like. BTW, streaming video is very easy unless you are rendering it...

My file server setup is a virtualized ESXi (4.1) host which runs the vCenter VM and my file server VM (no vMotion, obviously).

I use a RAID card (3ware 9650) for which there are native ESXi drivers, so it is exposed directly to the host; I serve the storage out via iSCSI using openSUSE and iSCSI Enterprise Target. I've been running this since ESX 3.0, so it is quite stable.

I chose to run in a VM because I wanted to run vCenter isolated from the other cluster (my prod VMs), and since the FS VM was in VMFS (thin), it was very easy to snap the image to an external SCSI (SATA) drive for backup purposes. No backup software needed. If you have access to vCenter licenses, you can use vDR, which is our version of dedup; however, if you thin-provision your Windows VMs (and keep data drives separate), a typical VM takes no more than 12 GB. In total I have a little over 200 GB of used space for 20 VMs, which is nothing on a 300 GB iSCSI LUN.
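
A sketch of the kind of copy described, done with vmkfstools on the host (the VM should be powered off first; all paths here are placeholders):

    # Clone the file server VM's disk, thin-provisioned, onto the backup drive
    vmkfstools -i /vmfs/volumes/main_lun/fs01/fs01.vmdk \
        /vmfs/volumes/backup_disk/fs01/fs01.vmdk -d thin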

I hear unRAID works fine, but keep in mind it is a quasi-RAID 3 setup designed for streaming apps, not day-to-day VMware use. It will probably work for a small home setup, but it's really meant for data drives.

If you are into low maintenance and low power, the following NAS units support VMware (iSCSI or NFS). I have the Synology as my DR solution, and I found its software to be the best. Note: iSCSI is faster than NFS for properly configured home setups. When you hear about the speed of NFS on VMware, they are referring to NetApp, which costs lots of money; your home system will not have that technology.

1. Synology
2. Qnap
3. Iomega (EMC, which owns VMware)

There are probably others, but I have tested these personally and found them best in the order above. The Synology DS1010+ rocks; however, due to my specific needs (vCenter), I will continue to run ESXi for my storage server.

I too am on the fence about SageTV, because I'm getting FiOS and I don't want to drop $200 for each HD-PVR when I can simply get an HDHomeRun Prime (3 tuners) and do the same. VZW gives me the MR-PVR for a year, so I will continue running SageTV for 2011, but if they don't get it fixed by the end of 2011 I'll start selling off my Sage hardware and move on, like I did from SnapStream.

Since Sage has been quiet on a number of fronts, I picked up a Roku for Netflix (at $59+ it was worth avoiding the aggravation of waiting for Sage) on my non-critical TVs, and a PS3 for Blu-ray/Netflix in my HT.

Let me know if you have any more questions...

#26, 02-18-2011, 09:11 PM
ChePazzo (Sage Aficionado; joined Oct 2004; 287 posts)
I started reading this thread because I was curious how Sage would operate as a VM guest.

It seems from the posts that there are 2 things that don't work well:
1. recording from PCI tuners
2. having more than one physical drive mapped to the Sage VM.

and there is one thing that works well:
1. networking (specifically mentioned network tuners)

Any idea how the following would work:
1. Sage server running on VM Guest (Linux because I'm a fanboy)
2. all tuners are HDHR network tuners
3. all storage is on dedicated NFS server (mount /var/media/ on guest sage server)

That way, you have farmed out the recording and storage to the network, and the guest VM just does what it needs to do and uses very little CPU and memory.
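
The storage piece of that plan would just be a standard NFS mount on the guest; a sketch, with the server name, export path, and mount options all assumptions:

    # /etc/fstab entry on the Linux Sage guest (hypothetical server/export)
    nas01:/export/media  /var/media  nfs  rw,hard,intr,rsize=32768,wsize=32768  0  0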

#27, 11-29-2011, 08:51 PM
farscapesg1 (Sage Advanced User; joined Mar 2005; 202 posts)
Resurrecting the thread with updates...

So, over one year later, I finally made the full migration to a VM for my SageTV needs. It has been a long road, with Microsoft's decision to neuter WHS in its second iteration, waiting for the HDHomeRun Prime to become available, and testing... testing... and more testing. The Prime was actually the last piece I needed before making the transition, as I wanted to make the VM capable of High Availability across my ESXi (4.1 Update 1) hosts should one of them go down.

In the end, I scrapped the idea of using a PERC 5/i or any other RAID controller and instead decided to build my primary ESXi box as an all-in-one host with a dedicated SAN VM on it. For the SAN OS I went with OpenIndiana v151 and napp-it to run ZFS. This lets me avoid the cost of a hardware RAID card (using cheaper IT-mode HBA cards like the IBM M1015 and BR10i). It also lets me easily migrate the arrays to a new box if I decide to split out the SAN duties to a separate physical system.

Primary System specs
Intel Xeon E5640 CPU (4 cores + hyperthreading)
Supermicro X8ST3-F motherboard
24 GB RAM
M1015 HBA
BR10i HBA
4-port Intel GB LAN card
(2) 160 GB laptop drives - ESXi install + SAN VM on one, local datastore on the other
(4) Seagate ES 750GB drives
(2) Samsung HD103SJ 1 TB drives
(2) WD Black 1TB drives
(5) WD WD20EADS Green 2 TB drives
(3) Hitachi 2TB 7200RPM drives
Norco 4220 4U rackmount case

Secondary Host
Intel Xeon 3430 CPU
Supermicro X8SIL-F Motherboard
8 GB RAM
4-port Intel GB LAN card
160GB SATA drive
Aerocool Masstige case

Tertiary Host
Dell Optiplex 745
Intel Core2Duo E6400
8 GB RAM
2-port Broadcom GB LAN card
160GB SATA drive

Tuners
HDHomeRun (original white version)
HDHomeRun Prime


Currently the OpenIndiana VM is configured with 6 GB RAM and 2 vCPUs, and I've passed the motherboard's onboard SAS controller, as well as the two HBA cards, to it via DirectPath I/O. So far the drives are arranged as follows:
(1) Striped + mirrored array (RAID10) of the four 750GB drives for VMs
(1) Striped + mirrored array (RAID10) of the four 1 TB drives for TV recording

The eight 2 TB drives will be configured as a RAIDZ2 array for media/data storage once I receive a new power supply (750 W) to handle all the drives. My trusty Corsair 520HX just barely has enough oomph to spin all those drives up initially, and I don't care to push it and run the risk of blowing something.
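
For anyone curious, the layouts above map to ZFS roughly like this; a sketch with placeholder device names (napp-it builds the same pools through its web UI):

    # Striped mirrors (RAID10) of the four 750 GB drives for VMs
    zpool create vmpool mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0
    # Striped mirrors of the four 1 TB drives for TV recordings
    zpool create tvpool mirror c2t4d0 c2t5d0 mirror c3t0d0 c3t1d0
    # Double-parity RAIDZ2 across the eight 2 TB drives for media/data
    zpool create mediapool raidz2 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
        c4t0d0 c4t1d0 c4t2d0 c4t3d0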

For the SageTV VM I went with a Windows 7 Professional install configured with 2 vCPUs and 2 GB of RAM. I may increase that to 4 GB, but for the last couple of days it hasn't run into any problems. I decided to keep the SageTV server separate from my virtualized WHS 2011 server so I can back up the OS drive easily.

The biggest snag I ran into was getting the recording drives set up for the SageTV server. Since I want it to be able to move between hosts, I didn't want to pass an HBA card and physical disks to the system. My original plan was to just create a virtual disk on my NFS datastore on the SAN. Testing this, however, caused a significant amount of stuttering as soon as Comskip engaged, and minor stuttering even without Comskip. Apparently the packet and I/O overhead of NFS didn't want to play nice.

The solution was instead to share the array via iSCSI from the SAN and connect to it with the Microsoft iSCSI Initiator in Windows 7. Performance improved significantly: with iSCSI, my CrystalDiskMark sequential write is approximately 100 MB/s and read approximately 80 MB/s, versus approximately 70 MB/s write and 50 MB/s read when using the virtual disk on the NFS datastore.
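
A sketch of how such an iSCSI share is typically published with COMSTAR on OpenIndiana; the pool/volume names and 400G size are placeholders (napp-it exposes the same steps in its UI):

    # Create a ZFS volume to back the LUN
    zfs create -V 400G tvpool/sagetv_rec
    # Enable the COMSTAR framework and the iSCSI target service
    svcadm enable stmf
    svcadm enable -r svc:/network/iscsi/target:default
    # Register the zvol as a logical unit and expose it to initiators
    sbdadm create-lu /dev/zvol/rdsk/tvpool/sagetv_rec
    stmfadm add-view 600144f0XXXXXXXX   # GUID printed by sbdadm
    itadm create-target
    # Then connect from the Win7 VM with the Microsoft iSCSI Initiator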

So far, so good. I've stress-tested the new setup by recording on all the tuners while watching a recorded show on one of my extenders, without a single stutter.

#28, 12-01-2011, 10:11 AM
m1abrams (Sage Aficionado; joined Sep 2004; 445 posts)
I notice that few people do RAID1 for the ESXi install and datastore. Is there a reason for not doing that? I would think having your datastore drive go belly up would take a bit of time to recover from; sure, you would not lose any data since your data drives are elsewhere, but you would lose your VM installs. I suppose you could recover by recreating the SAN VM and then restoring the other VMs from the backups stored on it, but wouldn't just swapping in a new drive for the failed one be much easier, not cost all that much more, and leave you with no downtime?

#29, 12-01-2011, 12:35 PM
michelkenny (Sage Advanced User; Canada; joined May 2005; 233 posts)
Quote:
Originally Posted by m1abrams
I notice that few people do RAID1 for the ESXi install and datastore. Is there a reason for not doing that?
ESXi needs a real hardware RAID controller in order to do RAID ($500+). The onboard RAID controller on your motherboard is almost certainly not supported in RAID mode, nor are the cheap add-on cards.

#30, 12-01-2011, 12:43 PM
farscapesg1 (Sage Advanced User; joined Mar 2005; 202 posts)
The only reason I didn't RAID1 the ESXi disk is that I didn't want to spend the cash on a supported controller. Unfortunately, VMware doesn't support most onboard controllers for RAID functions.

In reality, the only things on the ESXi "install" disk are ESXi and the OI VM. The extra datastore disk is just for temp storage if I need to move things around. All my other VMs will be stored on the datastore provided by the OI VM.

I've gotten pretty familiar with setting up ESXi and OI given the number of times I've changed things during testing, so I could probably rebuild from scratch in about an hour. With OI + napp-it, it is really easy to re-import arrays during a reinstall.

However, I am still considering finding a cheap supported 2-port controller to install and rebuild the ESXi install + SAN VM just to be on the safe side. Probably when I decide to move to vSphere 5.0.

#31, 12-01-2011, 08:18 PM
m1abrams (Sage Aficionado; joined Sep 2004; 445 posts)
OK, so I am still new to ESXi. If I understand you correctly, you have ESXi installed, and one VM (the one with the SAN) on the single-drive datastore. Your other VMs reside on a datastore that is on a volume managed by the SAN?
That is rather meta!

So when the ESXi server boots, it starts up the SAN VM. Do you need to manually mount the other datastore, or can ESXi wait for the SAN VM to start before it tries to mount the datastore and boot the other VMs that reside on it?

#32, 12-02-2011, 07:50 AM
farscapesg1 (Sage Advanced User; joined Mar 2005; 202 posts)
Quote:
Originally Posted by m1abrams
OK, so I am still new to ESXi. If I understand you correctly, you have ESXi installed, and one VM (the one with the SAN) on the single-drive datastore. Your other VMs reside on a datastore that is on a volume managed by the SAN?
That is rather meta!
Yes, all VMs except the SAN VM itself reside on the drives being managed by the SAN OS via passthrough of the HBA cards.

Quote:
Originally Posted by m1abrams
So when the ESXi server boots, it starts up the SAN VM. Do you need to manually mount the other datastore, or can ESXi wait for the SAN VM to start before it tries to mount the datastore and boot the other VMs that reside on it?
This can be a problem if using iSCSI, but using NFS datastores instead allows ESXi to connect to the datastore as soon as the SAN VM finishes booting. Basically, I set the startup options in ESXi to automatically start the SAN VM on bootup. So, as soon as ESXi finishes booting, it kicks the SAN VM into gear, and as soon as that finishes, ESXi can see the datastores and I can start the other VMs.
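
A sketch of the two pieces on the ESXi 4.x side; the SAN IP, export path, and datastore label are placeholders:

    # Mount the SAN VM's NFS export as a datastore
    esxcfg-nas -a -o 192.168.1.50 -s /vmpool/nfs san_datastore
    # Turn on VM autostart; the SAN VM then gets the first startup slot
    # (vSphere Client: Configuration > Virtual Machine Startup/Shutdown)
    vim-cmd hostsvc/autostartmanager/enable_autostart true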

Of course, this only happens during a complete power failure that lasts longer than my UPS can handle, or if I have to shut the primary ESXi box down for some reason. If a complete power outage happens, things get a little more difficult for me, because I also run an Untangle VM as a virtual router/DHCP server, and my vCenter server is virtualized. So, when the power finally comes back up, I have to use a hardwired system (with a static IP address) to connect directly to the ESXi hosts and start up the router and vCenter VMs as well (after the SAN VM has finished booting). I'm still working on getting them configured for auto-start, but since they bounce between hosts it's a little more difficult. It's not too hard though; I've managed to walk the wife through the process over the phone on the rare occasion it happened while I wasn't around.
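
If a wired PC with the vSphere Client isn't handy, the same manual start can be done over SSH on the host; a sketch (the VM IDs below are placeholders for whatever getallvms reports):

    # List registered VMs and their IDs
    vim-cmd vmsvc/getallvms
    # Power on the router and vCenter VMs by ID
    vim-cmd vmsvc/power.on 12
    vim-cmd vmsvc/power.on 17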

#33, 12-02-2011, 08:07 AM
m1abrams (Sage Aficionado; joined Sep 2004; 445 posts)
So, discounting the expense of a proper RAID controller that ESXi supports, wouldn't it be a little more robust and simpler to have all the VMs reside on a datastore that ESXi can see from startup, with that datastore on a RAID1 for high availability? I.e., if you had a RAID controller, would you continue to store the VMs on a datastore managed by the SAN? Is there some benefit to doing it this way that I am not seeing?

#34, 12-02-2011, 12:04 PM
farscapesg1 (Sage Advanced User; joined Mar 2005; 202 posts)
It would... until you want to add another ESXi host. One host can't see the local storage of another host; that is where a SAN comes into play. Normally, in most production environments, the SAN is a separate storage system that all the ESXi hosts can communicate with and access VMs from. This is one of the pieces that allows HA, Fault Tolerance, vMotion, etc. to work.

By using a virtualized SAN configuration, as long as the primary is up, the other hosts can access the virtualized SAN and perform HA and vMotion functions. That way I can test/learn/play with these features at home by taking down either the secondary or tertiary host and making sure HA/FT/vMotion is configured properly.

If you're just running a single host, then yeah, putting all the VMs on a local datastore controlled by a RAID controller would work fine and be a lot simpler. But I wanted to play with some of the storage features ZFS offers that beat any hardware-based RAID: things like block-level storage, fixing the RAID "write hole" issue, not being dependent on specific hardware, etc.
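
As a concrete example of that last point, ZFS can verify (and, with redundancy, repair) every block in a pool on demand, which consumer RAID cards don't do; the pool name below is the hypothetical one from the earlier sketch:

    # Walk every block in the pool and verify/repair checksums
    zpool scrub mediapool
    # Report scrub progress and any errors found or repaired
    zpool status mediapool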