Welcome to the Multimedia Course: Multimedia System Standards
OER Initiative for Building Collaborative Learning for Multimedia
Patience Factor
Patience Factor: Studies have shown that computer users have a patience factor of 2-4 seconds, i.e. when a key is pressed they expect feedback within 2-4 seconds. A response delayed beyond 2 seconds disturbs the natural rhythm of interaction, and when the delay approaches 4 seconds the user becomes concerned and presses the same key again.
Why Compression
Text data (ASCII files) can be transferred at about 100 kbps (roughly 50 pages of text), and most LANs handle this easily. Images, video and audio, however, are large data objects that require large amounts of storage. To manage them, large multimedia data objects need to be compressed to reduce file size and then reconstructed (decompressed) at the receiving end. These multimedia elements must be stored, retrieved, transmitted and displayed, and hence compression techniques are required.
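As a rough illustration of why multimedia objects dwarf text, the back-of-the-envelope calculation below compares the uncompressed size of one page of text with one raw image and one second of raw video. The page length, image resolution, color depth and frame rate are assumed values chosen only for illustration, not figures taken from any standard.

# Back-of-the-envelope sizes for uncompressed data (all parameters are
# illustrative assumptions, not values from any standard).

TEXT_PAGE_CHARS = 2000            # assumed characters per page, 1 byte each (ASCII)
IMAGE_W, IMAGE_H = 640, 480       # assumed image resolution
BYTES_PER_PIXEL = 3               # 24-bit RGB color
FRAME_RATE = 25                   # assumed frames per second for video

text_page = TEXT_PAGE_CHARS                      # bytes for one page of text
image = IMAGE_W * IMAGE_H * BYTES_PER_PIXEL      # bytes for one raw image
video_second = image * FRAME_RATE                # bytes for one second of raw video

print(f"One page of text : {text_page / 1024:8.1f} KB")
print(f"One raw image    : {image / 1024:8.1f} KB")
print(f"One second video : {video_second / (1024 * 1024):8.1f} MB")

Even with these modest assumed parameters, a single second of uncompressed video is thousands of times larger than a page of text, which is why images, audio and video cannot practically be stored or transmitted without compression.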
Compression and decompression techniques are used in a variety of applications such as facsimile, printing, document storage and retrieval, teleconferencing and multimedia messaging systems. Compression principles try to eliminate redundancies (for example, in a black-and-white picture a coding scheme can store runs of repeated white pixels, treating their absence as black). Compression techniques in the CCITT standards attempt to reduce both horizontal and vertical redundancies, as sketched below.
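The following minimal run-length encoding sketch in Python illustrates the idea of coding runs of repeated pixels. It is a generic illustration of the principle only; the actual CCITT facsimile standards encode run lengths with modified Huffman codes rather than storing them directly.

def rle_encode(row):
    """Run-length encode a row of pixels: 'WWWWBBWW' -> [('W', 4), ('B', 2), ('W', 2)]."""
    runs = []
    for pixel in row:
        if runs and runs[-1][0] == pixel:
            runs[-1] = (pixel, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((pixel, 1))               # start a new run
    return runs

def rle_decode(runs):
    """Rebuild the original row from its run-length representation."""
    return "".join(pixel * length for pixel, length in runs)

row = "W" * 60 + "B" * 4 + "W" * 36               # a mostly white scan line
encoded = rle_encode(row)
assert rle_decode(encoded) == row
print(encoded)                                    # [('W', 60), ('B', 4), ('W', 36)]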
Huge Data Demands:
The way humans communicate is creating a huge demand for bandwidth. As technology develops, people seek larger screens with higher resolution, and there is a need for high-definition TV. On a LAN, WAN, MAN or the Internet, appropriate techniques are required for reliable communication depending on the bandwidth available. Hence there is a need for compression and decompression techniques.
Hence the CCITT (International Telegraph and Telephone Consultative Committee, now ITU-T) has standardized compression and decompression techniques.
Compression is useful because it helps reduce the consumption of expensive resources such as hard disk space or transmission bandwidth. On the downside, compressed data must be decompressed to be used, and this extra processing may be detrimental to some applications. For instance, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed while it is being decompressed (decompressing the video in full before watching it may be inconvenient and requires storage space for the decompressed video). The design of data compression schemes therefore involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced (if a lossy scheme is used) and the computational resources required to compress and decompress the data. Compression has been one of the main enablers of the growth of digital information over the past two decades.
Lossless versus lossy compression
Lossless compression algorithms usually exploit statistical redundancy in such a way as to represent the sender's data more concisely without error. Lossless compression is possible because most real-world data has statistical redundancy. For example, in English text the letter 'e' is much more common than the letter 'z', and the probability that the letter 'q' will be followed by the letter 'z' is very small. Another kind of compression, called lossy data compression or perceptual coding, is possible if some loss of fidelity is acceptable. Generally, lossy data compression is guided by research on how people perceive the data in question. For example, the human eye is more sensitive to subtle variations in luminance than it is to variations in color. JPEG image compression works in part by "rounding off" some of this less-important information. Lossy data compression provides a way to obtain the best fidelity for a given amount of compression.
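To make the idea of statistical redundancy concrete, the sketch below counts symbol frequencies in an arbitrary sample string and computes its zeroth-order entropy, the average number of bits per symbol that an ideal lossless coder could reach using single-letter frequencies alone. The sample text is an assumption chosen only for illustration.

import math
from collections import Counter

def entropy_bits_per_symbol(text):
    """Zeroth-order entropy: average bits per symbol for an ideal coder
    that knows only the individual symbol frequencies."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog " * 20   # arbitrary sample text
alphabet = len(set(sample))
print(f"Distinct symbols    : {alphabet}")
print(f"Fixed-length coding : {math.ceil(math.log2(alphabet))} bits/symbol")
print(f"Entropy lower bound : {entropy_bits_per_symbol(sample):.2f} bits/symbol")

Because frequent letters can be given shorter codes than rare ones, the entropy figure comes out below the fixed-length cost, and that gap is exactly what lossless coders exploit.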
Lossy
Lossy image compression is used in digital cameras, to increase storage capacities with minimal degradation of picture quality. Similarly, DVDs use the lossy MPEG-2 Video codec for video compression.
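As a toy illustration of the lossy principle of "rounding off" less important detail, the sketch below coarsely quantizes 8-bit pixel values so that they fit in fewer bits, then reconstructs approximate values. Real codecs such as JPEG and MPEG-2 quantize frequency-domain coefficients rather than raw pixels, so this is only a conceptual sketch of where the information is discarded.

def quantize(pixels, step):
    """Map 8-bit pixel values onto a coarser grid; this is where information is lost."""
    return [p // step for p in pixels]

def dequantize(levels, step):
    """Reconstruct approximate pixel values (the center of each quantization bin)."""
    return [level * step + step // 2 for level in levels]

pixels = [12, 13, 14, 200, 201, 203, 90, 91]   # original 8-bit samples
step = 16                                      # larger step -> smaller files, more distortion
levels = quantize(pixels, step)                # values now fit in 4 bits instead of 8
restored = dequantize(levels, step)

print("original :", pixels)
print("levels   :", levels)                    # [0, 0, 0, 12, 12, 12, 5, 5]
print("restored :", restored)                  # [8, 8, 8, 200, 200, 200, 88, 88]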
In lossy audio compression, methods of psychoacoustics are used to remove non-audible (or less audible) components of the signal. Compression of human speech is often performed with even more specialized techniques, so that "speech compression" or "voice coding" is sometimes distinguished as a separate discipline from "audio compression". Different audio and speech compression standards are listed under audio codecs. Voice compression is used in Internet telephony for example, while audio compression is used for CD ripping and is decoded by audio players.
Lossless
The Lempel–Ziv (LZ) compression methods are among the most popular algorithms for lossless storage. DEFLATE is a variation on LZ which is optimized for decompression speed and compression ratio, but compression can be slow. DEFLATE is used in PKZIP, gzip and PNG. LZW (Lempel–Ziv–Welch) is used in GIF images. Also noteworthy are the LZR (LZ–Renau) methods, which serve as the basis of the Zip method. LZ methods utilize a table-based compression model where table entries are substituted for repeated strings of data. For most LZ methods, this table is generated dynamically from earlier data in the input. The table itself is often Huffman encoded (e.g. SHRI, LZX). A current LZ-based coding scheme that performs well is LZX, used in Microsoft's CAB format.
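A compact LZW sketch in Python shows the table-based idea: the dictionary of strings is built dynamically from earlier input, and repeated strings are replaced by their table indices. For simplicity it emits plain integer codes rather than the packed variable-width codes a real GIF encoder would produce.

def lzw_encode(data):
    """LZW encoding: start with all single bytes and grow the table from the input."""
    table = {bytes([i]): i for i in range(256)}   # initial table: every single byte
    current, out = b"", []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in table:
            current = candidate                   # keep extending the longest known string
        else:
            out.append(table[current])            # emit the index of the longest match
            table[candidate] = len(table)         # add the new string to the table
            current = bytes([byte])
    if current:
        out.append(table[current])
    return out

codes = lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT")
print(codes)   # repeated substrings such as 'TOBEOR' collapse into single table indices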
The very best modern lossless compressors use probabilistic models, such as prediction by partial matching. The Burrows–Wheeler transform can also be viewed as an indirect form of statistical modelling.
In a further refinement of these techniques, statistical predictions can be coupled to an algorithm called arithmetic coding. Arithmetic coding, invented by Jorma Rissanen, and turned into a practical method by Witten, Neal, and Cleary, achieves superior compression to the better-known Huffman algorithm, and lends itself especially well to adaptive data compression tasks where the predictions are strongly context-dependent. Arithmetic coding is used in the bilevel image-compression standard JBIG, and the document-compression standard DjVu. The text entry system, Dasher, is an inverse-arithmetic-coder.
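The floating-point sketch below shows the core of arithmetic coding: the whole message is mapped to a single number inside an interval that is narrowed symbol by symbol, each symbol taking a share of the interval proportional to its probability. The probability table here is an assumed, fixed model, and the use of Python floats restricts the sketch to short messages; practical coders such as those in JBIG and DjVu use integer arithmetic with renormalization and adaptive models.

def _intervals(probs):
    """Assign each symbol a sub-interval of [0, 1) proportional to its probability."""
    intervals, low = {}, 0.0
    for sym, p in probs.items():
        intervals[sym] = (low, low + p)
        low += p
    return intervals

def arithmetic_encode(message, probs):
    low, high = 0.0, 1.0
    iv = _intervals(probs)
    for sym in message:                           # narrow the interval symbol by symbol
        span = high - low
        s_low, s_high = iv[sym]
        low, high = low + span * s_low, low + span * s_high
    return (low + high) / 2                       # any number in [low, high) identifies the message

def arithmetic_decode(code, length, probs):
    iv = _intervals(probs)
    out = []
    for _ in range(length):
        for sym, (s_low, s_high) in iv.items():
            if s_low <= code < s_high:
                out.append(sym)
                code = (code - s_low) / (s_high - s_low)   # rescale and continue
                break
    return "".join(out)

probs = {"a": 0.6, "b": 0.3, "c": 0.1}            # assumed symbol probabilities
msg = "aababca"
code = arithmetic_encode(msg, probs)
assert arithmetic_decode(code, len(msg), probs) == msg
print(code)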