Post by Runner
Just updated my system today from 16.04 to 18.04 and my MKVs no
longer play correctly; playback is blocky and tearing. I have tried
uninstalling and reinstalling VLC, and also tried a VLC AppImage for
18.04, still blocky. Any help resolving this is welcome. Thank you.
Just to follow up, I tried mpv media player and it plays them perfectly,
but I would still like to use VLC.
While this is a Windows page, there is a suggestion that
blocky playback in VLC is due to the selection of
"Hardware Accelerated Decoding". Check that setting in
your Preferences, toggle it, and retest.
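One quick way to test, assuming your VLC build exposes the avcodec
module's switch on the command line (the 3.x builds I've seen do):
play the file once with hardware decoding forced off and see whether
the blockiness goes away. "yourfile.mkv" below is just a placeholder.

  # Play with hardware-accelerated decoding disabled for this run only
  vlc --avcodec-hw=none yourfile.mkv

The equivalent GUI setting lives under Tools > Preferences >
Input / Codecs > Hardware-accelerated decoding.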
You could be using Nouveau for your NVidia video card, or
the Restricted Driver selection provided by NVidia (the
binary blob). The vintage of your card affects acceleration
options (i.e. whether the video SIP block can handle MPEG4
or H.264 or H.265 or MPEG2 or whatever).
With Nouveau, there might be some emulated decoding and
a few things accelerated (hardware scaler).
There are a *lot* of variables in there.
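If you're not sure which driver is actually bound to the card, a
couple of standard commands will say (ubuntu-drivers ships with
Ubuntu; lspci is everywhere):

  # Look for the "Kernel driver in use" line under the GPU entry
  lspci -k | grep -EA3 'VGA|3D'

  # On Ubuntu, list the restricted drivers available for your hardware
  ubuntu-drivers devices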
The MKV is the "container". There are CODECs inside for
the video stream and the audio stream. The accelerated
decoding is for the CODEC used. Hardware tends to support
Hollywood formats. Ogg/Theora would not be something
NVidia or AMD would be in a rush to put into shader decoders.
To debug your situation, we'd need to know:
- MKV video: what codecs are inside, specifically the video stream?
  An application like "ffplay" from "ffmpeg" would tell you (see the
  commands after this list).
- Video card?
- Video driver used to make the video card work? Use inxi to get details.
- VLC preferences used?
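For the first two items, these commands will pull the details out
(ffprobe comes in the same ffmpeg package as ffplay; again,
"yourfile.mkv" stands in for one of your files):

  # Report the codec and dimensions of the video stream inside the MKV
  ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,width,height -of default=noprint_wrappers=1 yourfile.mkv

  # Report the video card and the driver currently loaded for it
  inxi -G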
You could compare how the MPV video player is playing your
video (maybe unaccelerated, nothing but the CPU for the decoder).
Does MPV have preferences?
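It does, though they live on the command line and in
~/.config/mpv/mpv.conf rather than in a dialog. A minimal A/B test:
play the same file once with and once without hardware decoding, and
watch whether the blockiness follows the decoder.

  # Force pure software (CPU) decoding
  mpv --hwdec=no yourfile.mkv

  # Let mpv pick a hardware decoder (VA-API, VDPAU, etc.) if available
  mpv --hwdec=auto yourfile.mkv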
The Nouveau people tell you a bit about the work they've
done to harness the SIP in the GPU.
NVidia has some details for their hardware; AMD
should have a similar page. NVidia has NVDEC, NVENC, and CUDA
(shader) as examples. The FreeDesktop page gives the Linux names
of some of the software subsystems used for video playback
(VA-API, VDPAU, and so on).
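You can also ask those subsystems directly what the installed driver
will accelerate, assuming the vainfo and vdpauinfo utilities are
installed from your distro's repos:

  # List the VA-API decode profiles the driver reports
  vainfo

  # List the VDPAU decoder capabilities (NVidia's acceleration API)
  vdpauinfo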
The very first hardware acceleration feature was IDCT (Inverse Discrete
Cosine Transform), which is part of working in the frequency domain
with macroblocks. A lot of video formats work in the frequency
domain, throwing away sharpness to decrease file size. In modern times,
snooty developers have stopped using IDCT offload - presumably the
processor is now faster at it than consulting some video card, so the
shoe is on the other foot now. But at one time, that was our
"acceleration" - and it's just a tiny portion of the decoding process.

In the NVidia chart above, the *entire* chain is in hardware
now, but not for every codec out there. This means that, if all were
working as well as it should, for a few movies the only CPU needed
would be to fetch a block of data from the hard drive
every once in a while (close to 0% CPU). When the CPU does the
decoding, it might take 30% to do the job. In the old days,
it wasn't uncommon for CPU usage to range from 10% (very good quality
software) to 100% (CPU pegged because the software was so bad at it).
There's more CPU power available today, so it's less likely to
rail. All that the hardware acceleration does, is annoy the