Interesting line of thinking, but the difference is that a digital audio signal is a series of samples that, put together, represent the waveform. For video, each pixel is told to emit or let pass (depending on the type of display) light of particular frequencies, so the video signal has nothing to do with generating the waveform of the light itself. If you want to think in audio terms, it's more like those old player pianos with paper rolls, where a hole in the paper causes a key to be played.
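To put that distinction in rough code terms (purely illustrative, nothing here is from a real signal chain): successive audio samples trace a single waveform over time, while video values simply set individual pixels, with no waveform being reconstructed across them.

```python
import numpy as np

# Audio: successive samples trace one waveform over time.
sample_rate = 48_000
t = np.arange(sample_rate) / sample_rate
audio = np.sin(2 * np.pi * 440 * t)   # 1 second of a 440 Hz tone

# Video: each value just sets one pixel's brightness for one frame;
# nothing is "reconstructed" across neighbouring pixels.
frame = np.zeros((1080, 1920), dtype=np.uint8)
frame[100:200, 100:200] = 255         # a white square, simply switched on
```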
That too!
The piano roll is a good analogy to a discrete element display.
So vision (and video in general) is much more complex than hearing (and audio in general). I find it fascinating, then, that we are more used to analysing what we see, and to explaining odd effects in our vision, than we are with our hearing.
EDIT: I probably should have distinguished between "frequencies we can hear" and "the threshold below which we no longer discern discrete sounds as separate", the latter being heavily dependent on the nature of the sounds. That threshold is probably the better point of comparison, but it really only maps onto "at what point does a stream of static images appear to move smoothly".
Anyway, I once tested myself with sequential, dry (no reverb) gunshot sounds, and I could discern pairs down to 6 milliseconds apart (though I knew in advance that they were separate). That corresponds to about 167 Hz. Had I not known they were separate, the figure might have been as low as 90 Hz (11 ms).
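For reference, those frequencies are just the reciprocal of the gap; a minimal sketch of that conversion (the function name is mine, not anything from the test):

```python
def gap_to_frequency(gap_ms: float) -> float:
    """Repetition rate in Hz for a given inter-sound gap in milliseconds."""
    return 1000.0 / gap_ms

print(gap_to_frequency(6.0))   # ~166.7 Hz: still heard as two events
print(gap_to_frequency(11.0))  # ~90.9 Hz: the rougher lower bound above
```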
Had the sound been different (longer, for example), or of a different timbre or frequency content (including how that content evolved over its length), the results would surely have been very different.
How does that help us with the frame-rate issue? It doesn't, save for the idea that comparing frame-rates of different materials / media is pointless, as has already been stated.