Mark Kent wrote:
>
>>
>> For Blu-ray, where the actual data per frame is greater, plus whatever
>> the decoding algorithm requires, a USB device with a reasonably large
>> buffer might do the job. I don't know the actual details of Blu-ray,
>> just the odd bits and pieces that are available on the Internet, so I
>> don't really know how CPU-intensive it is.
>>
>> Streaming isn't really that different from this sort of buffering.
>
> Oh yes it is, Sir! It's completely different.
>
No, it isn't. It can be different from a programming perspective. For
example, for streaming that requires any amount of buffering you would
always use a ring buffer, or a class with a dynamic buffer that can be
extended at the rate your particular stream requires. For other kinds of
buffering you may not need to do that. Generally, in buffering you know
the limit of your buffer; with streams you are less likely to have a fixed
buffer size, though you will probably have a maximum value.
A maximum value for a very high-speed stream is going to be extremely
high, so if you went for a fixed buffer of that size you would be wasting
a great deal of memory that is rarely used.
If your stream requires little processing, then whatever its speed it is
very unlikely that you need a large buffer. But you will need some,
because in a shared environment you cannot assume that CPU time is always
available.
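A minimal sketch of the fixed-size ring buffer mentioned above, assuming a single producer and a single consumer. The names (`rb_put`, `rb_get`) and the capacity are illustrative, not from any particular library:

```c
#include <stddef.h>

#define RB_SIZE 8  /* capacity known up front, as with plain buffering */

struct ring {
    unsigned char data[RB_SIZE];
    size_t head;   /* next write position */
    size_t tail;   /* next read position */
    size_t count;  /* bytes currently buffered */
};

/* returns 0 on success, -1 if the buffer is full */
static int rb_put(struct ring *rb, unsigned char b)
{
    if (rb->count == RB_SIZE)
        return -1;                        /* producer must wait or drop */
    rb->data[rb->head] = b;
    rb->head = (rb->head + 1) % RB_SIZE;  /* wrap around */
    rb->count++;
    return 0;
}

/* returns 0 on success, -1 if the buffer is empty */
static int rb_get(struct ring *rb, unsigned char *out)
{
    if (rb->count == 0)
        return -1;                        /* consumer must wait */
    *out = rb->data[rb->tail];
    rb->tail = (rb->tail + 1) % RB_SIZE;  /* wrap around */
    rb->count--;
    return 0;
}
```

The point of the wraparound is that the same small, fixed allocation serves an arbitrarily long stream, which is why a ring is preferred over one huge fixed buffer sized for the worst case.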
>
>> The
>> primary difference is that processing cannot look behind, so what is
>> passing through filtering or processing is not dependent on what is yet
>> to come, and has no more than a knock-on effect from what has already
>> been processed. Of course the streaming process can make use of control
>> codes for the streaming system itself, which can change the processing
>> of the data that follows. But it is still a stream, like a fast river:
>> it flows in one direction, it wears trenches into the river bed that
>> affect the water following, and that water might be carrying rocks that
>> change the flow of the water that follows.
>
> Sorry - entirely the wrong analogy, pretty, but not appropriate.
>
> The critical piece, which you have not recognised (most people don't),
> is that the human brain cannot cope with jitter, wander, lost packets,
> and so on, but for an audio stream, the stream terminates in the head of
> the listener. Gaps are not allowed.
>
That is wrong, and has been wrong since the start of digital sound (and
analogue too, though for analogue the recovery was natural rather than a
process). Digital sound and video have to be able to recover from losses;
the processing must be very fault-tolerant. Better to have a bit of noise
or silence than to stop the work on every problem, because none of us
would ever get to hear or see anything on our computers or TVs if it
stopped. It is this tolerance that makes the format ideal for streaming:
the packets that follow do not depend on the success of the packets ahead
of them.
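The loss tolerance described above can be sketched as a receiver that fills gaps in a sequence-numbered packet stream with silence and carries on, rather than halting. The packet layout, sample count, and function name here are illustrative assumptions, not any real codec's format:

```c
#include <string.h>

#define SAMPLES_PER_PACKET 160  /* e.g. 20 ms of 8 kHz mono audio */

struct packet {
    unsigned seq;                         /* sequence number */
    short samples[SAMPLES_PER_PACKET];    /* decoded audio */
};

/* Append the packet's audio to `out`, preceded by one packet's worth of
 * silence for every packet lost before it. Returns the number of samples
 * written; `out` must hold room for the gap plus the packet itself. */
static size_t receive(const struct packet *p, unsigned *expected_seq,
                      short *out)
{
    size_t n = 0;
    /* Conceal each missing packet with silence; later packets do not
     * depend on the lost ones, so playback simply continues. */
    while (*expected_seq < p->seq) {
        memset(out + n, 0, SAMPLES_PER_PACKET * sizeof(short));
        n += SAMPLES_PER_PACKET;
        (*expected_seq)++;
    }
    memcpy(out + n, p->samples, sizeof p->samples);
    n += SAMPLES_PER_PACKET;
    *expected_seq = p->seq + 1;
    return n;
}
```

Real players do something smarter than plain silence (interpolation, repeating the last frame), but the principle is the same: a lost packet degrades one slice of output and nothing after it.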