
Why was the 8×8 DCT size chosen?

A. Experiments showed that little additional compaction could be gained with larger transform sizes, especially in light of the increased implementation complexity: a fast DCT algorithm requires roughly double the number of arithmetic operations per sample each time the transform point size is doubled. The best compaction efficiency has been demonstrated with locally adaptive block sizes (e.g. 16×16, 16×8, 8×8, 8×4, and 4×4) [see G. J. Sullivan and R. L. Baker, "Efficient Quadtree Coding of Images and Video," Proc. ICASSP 1991, pp. 2661-2664], but adaptive block sizes inevitably introduce additional side-information overhead while forcing the decoder to implement programmable or hardwired recursive DCT algorithms. If the DCT size becomes too large, more edges (local discontinuities) are absorbed into each transform block, and Gibbs (ringing) and other unpleasant artifacts propagate more widely. Finally, with larger transform sizes, the DC term becomes even more critically sensitive to quantization noise.
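To make the transform concrete, here is a minimal sketch (not part of the original FAQ) of an orthonormal 8×8 DCT-II built directly from the textbook matrix form in NumPy; the helper names dct_matrix, dct2, and idct2 are illustrative assumptions, and the plain matrix products below are the slow reference formulation, not a fast algorithm. The sketch also illustrates the last point above: the DC coefficient of an orthonormal N×N 2-D DCT equals N times the block mean, so a quantization error of Δ in the DC term shifts every reconstructed pixel by Δ/N, spreading the error across the whole block.

    import numpy as np

    def dct_matrix(n=8):
        # Orthonormal DCT-II basis: row k, column i is
        # a(k) * cos((2*i + 1) * k * pi / (2*n)), with a(0) = sqrt(1/n)
        # and a(k) = sqrt(2/n) for k > 0.
        k = np.arange(n).reshape(-1, 1)
        i = np.arange(n).reshape(1, -1)
        c = np.cos((2 * i + 1) * k * np.pi / (2 * n)) * np.sqrt(2.0 / n)
        c[0, :] /= np.sqrt(2.0)
        return c

    def dct2(block):
        # Separable 2-D forward DCT: C @ B @ C^T (rows, then columns).
        c = dct_matrix(block.shape[0])
        return c @ block @ c.T

    def idct2(coeffs):
        # The basis is orthonormal, so the inverse is the transpose:
        # C^T @ X @ C.
        c = dct_matrix(coeffs.shape[0])
        return c.T @ coeffs @ c

    # A flat 8x8 block puts all of its energy in the DC coefficient.
    block = np.full((8, 8), 128.0)
    coeffs = dct2(block)
    print(coeffs[0, 0])        # ~1024.0 = 8 * 128 (N * block mean)

    # DC sensitivity: perturb the DC coefficient by 8 and every one of
    # the 64 reconstructed pixels shifts by 8 / 8 = 1.
    coeffs[0, 0] += 8.0
    print(np.round(idct2(coeffs) - block, 6))   # uniformly 1.0

A production codec would replace the direct matrix products shown here (n multiplies per output sample) with a fast factorization; the direct form is used only for clarity.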
