General Discussion: General discussion about SageTV and related companies, products, and technologies.
#1
Deinterlacing question
Just a quickie - if you have say a 1080i signal, so 60 fields a second, and you want to display it on a 1080p60 screen, why do you need to do deinterlacing? Why can't the processor fill in the missing lines in each field with black, and then let the eye be fooled by the resulting "interlaced" image in the same way that it is in a genuine CRT interlaced screen that is only drawing half the lines in each scan?
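To put numbers on it: a 1080i60 signal delivers 60 fields a second of 540 lines each, while a 1080p60 panel wants 60 full 1080-line frames a second, so something has to supply the other 540 lines. Here's a rough numpy sketch of the black-fill idea being asked about (the function name, and numpy itself, are just for illustration, not anything SageTV or the drivers actually do):
Code:
import numpy as np

def field_to_black_filled_frame(field, top_field=True):
    """Expand one 540-line field of a 1080i signal into a full
    1080-line progressive frame, leaving the missing lines black.
    `field` is a (540, width) array of luma values."""
    lines, width = field.shape
    frame = np.zeros((lines * 2, width), dtype=field.dtype)   # all-black frame
    frame[(0 if top_field else 1)::2, :] = field              # real lines in every other row
    return frame

# A flat grey top field becomes a 1080-line frame where only the even
# rows carry picture and the odd rows stay black.
top = np.full((540, 1920), 128, dtype=np.uint8)
print(field_to_black_filled_frame(top).shape)  # (1080, 1920)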
#2
The Windows desktop is progressive, even if the monitor displaying that image is interlaced.
Weave deinterlace is probably what you need, but you need perfect scanline matching between the source video lines and the display lines -- any scaling of the video will mess this up.
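Roughly, weave just slots the two fields back into alternating lines of one frame -- which is exactly why any vertical scaling afterwards breaks the 1:1 line mapping. A minimal sketch (numpy and the function name are mine, not anything from SageTV or the decoders):
Code:
import numpy as np

def weave(top_field, bottom_field):
    """Weave deinterlace: interleave two 540-line fields into one
    1080-line frame.  Only looks right when both fields come from the
    same instant (or the scene is static) and the frame's lines land
    1:1 on the display's lines."""
    lines, width = top_field.shape
    frame = np.empty((lines * 2, width), dtype=top_field.dtype)
    frame[0::2, :] = top_field      # even display lines from the top field
    frame[1::2, :] = bottom_field   # odd display lines from the bottom field
    return frame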
__________________
Check out my enhancements for Sage in the Sage Customisations and Sageplugins Wiki
#3
You would find it very hard to match the scan lines up with the display, as nielm is saying.
If you have a 1080P display, then you are a lucky person. I have a TV that will take a 1080P input, but scales it and displays it as 1080i. Most likely, you want to deinterlace with the Nvidia decoders and display at 1080P with your card. Displaying the desktop in 1080i causes stuttering in my experience. Windows can do interlaced output if the video card supports it (like my Radeon 9800), but it doesn't like to.
__________________
Mike Janer SageTV HD300 Extender X2 Sage Server: AMD X4 620,2048MB RAM,SageTV 7.x ,2X HDHR Primes, 2x HDHomerun(original). 80GB OS Drive, Video Drives: Local 2TB Drive GB RAID5
#4
It's a theoretical question, not a practical one!
My understanding is that your standard CRT interlaced TV fools the eye by drawing all the even lines as field 1, then all the odd lines as field 2, then all the even lines again, and so on; and if you are using NTSC it's doing that at a rate of 60 fields a second. The eye/brain then gets clever and sticks it all together into a smooth image. However, an LCD draws all the lines (a full frame) 60 times a second. And as nielm says, the Windows desktop has to be progressive. So *if*, instead of deinterlacing, you interspersed each of the lines in field 1 with a black line, and each of the lines of field 2 with a black line, and then kicked it out as progressive, wouldn't you get the same visual effect as a CRT interlaced monitor? And wouldn't the eye/brain therefore stick it all back together again, just as it does when you watch an interlaced TV?
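In code terms the proposal would look something like this (a hypothetical numpy sketch, not an option any real driver exposes): put out one progressive frame per incoming field, with the field's lines in their correct rows and black everywhere else, alternating parity every field just like a CRT scan -- only without the phosphor afterglow.
Code:
import numpy as np

def black_fill_sequence(fields, first_field_top=True):
    """Yield one progressive frame per incoming field: the field's lines
    sit in their correct rows (even rows for a top field, odd rows for a
    bottom field) and every other row stays black.  60 fields/s in gives
    60 progressive frames/s out."""
    for i, field in enumerate(fields):
        lines, width = field.shape
        frame = np.zeros((lines * 2, width), dtype=field.dtype)
        top = first_field_top if i % 2 == 0 else not first_field_top
        frame[(0 if top else 1)::2, :] = field
        yield frame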
#5
Yes, there is an option in the Nvidia decoder to display the fields separately, but I can't tell how it does that. It may do exactly what you are talking about, but I am not sure.
__________________
Mike Janer SageTV HD300 Extender X2 Sage Server: AMD X4 620,2048MB RAM,SageTV 7.x ,2X HDHR Primes, 2x HDHomerun(original). 80GB OS Drive, Video Drives: Local 2TB Drive GB RAID5
#6
I'd guess that's just bob deinterlacing - so it will display the fields separately, but upsize each by 100% vertically in order to fill all 576 lines. Which is fine, really; it just seems a little unnecessary if the human eye can do it all for you. It takes a lot less computational power to stick a line of black pixels between each line of data than to calculate what the colour of each pixel should be on the fly.
There's a hidden subtext here - like you I have a Radeon and am using a VGA cable, so the Radeon is outputting an interlaced image to the TV. And the resolution and refresh rate of the recording are identical to those of the image the Radeon is putting out. So really I don't want to deinterlace at all, but of course I have to because that's how Windows works. I'm a little concerned that when I do bob deinterlacing, it's the "real" lines that are being effectively deleted by the graphics card and the calculated filler lines that are being kept! Which would be a shame. Of course, if I could persuade it to intersperse the real lines with black ones, I would know for sure, as I would either get a black screen (if the graphics card merged the two frames into an interlaced frame using the black lines each time) or the image as it was recorded.
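For comparison, bob in its crudest form just stretches each field to a full frame, so no source line ever gets thrown away -- the worry above about the "real" lines being deleted shouldn't apply. A rough sketch (simple line repetition here; real deinterlacers interpolate and offset the bottom field by half a line, and the function name is just illustrative):
Code:
import numpy as np

def bob(field):
    """Bob deinterlace (crude version): turn one 540-line field into a
    full 1080-line frame by repeating every line.  Every source line
    appears in the output; nothing is dropped, only duplicated."""
    return np.repeat(field, 2, axis=0)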
#7
Quote:
#8
You'd lose half the brightness of your display if you filled in with black lines.
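Quick sanity check of that, using the same hypothetical numpy sketch as above: averaged over the whole frame, making half the rows black cuts the mean light output in half.
Code:
import numpy as np

field = np.full((540, 1920), 128.0)       # one field of flat mid-grey
woven = np.repeat(field, 2, axis=0)       # stand-in for a frame with both fields present
black_filled = np.zeros((1080, 1920))
black_filled[0::2, :] = field             # real lines only, the rest black

print(woven.mean())         # 128.0
print(black_filled.mean())  # 64.0 -- half the average light output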
#9
Quote:
I would think that when every other line is being drawn on an interlaced screen, the skipped lines are still glowing somewhat, because the screen's phosphor coating keeps emitting light for a short while. Having black interleaved through the whole picture would cut the brightness (as Stanger said): whites would seem gray, and darker colors would be closer to black. Doesn't sound like a good combo to me.
__________________
Mike Janer SageTV HD300 Extender X2 Sage Server: AMD X4 620,2048MB RAM,SageTV 7.x ,2X HDHR Primes, 2x HDHomerun(original). 80GB OS Drive, Video Drives: Local 2TB Drive GB RAID5
#10
Fair enough, and I agree that if it was a good idea someone brighter than me would already have done it! I was just trying to understand what the problem with it was. Losing brightness sounds fair enough as a reason.