Page "MPEG-1 Audio Layer II" ¶ 5
from Wikipedia

Some Related Sentences

audio and coding
Compression of human speech is often performed with even more specialized techniques, so that "speech compression" or "voice coding" is sometimes distinguished as a separate discipline from "audio compression".
LPC may also be thought of as a basic perceptual coding technique; reconstruction of an audio signal using a linear predictor shapes the coder's quantization noise into the spectrum of the target signal, partially masking it.
Lossy formats are often used for the distribution of streaming audio or interactive applications (such as the coding of speech for digital transmission in cell phone networks).
A literature compendium for a large variety of audio coding systems was published in the IEEE Journal on Selected Areas in Communications (JSAC), February 1988.
While there were some papers from before that time, this collection documented an entire variety of finished, working audio coders, nearly all of them using perceptual (i.e. masking) techniques and some kind of frequency analysis and back-end noiseless coding.
It supports hierarchical transmission of up to three layers and uses MPEG-2 video and advanced audio coding.
While data reduction (compression, be it lossy or lossless) is a main goal of transform coding, it also allows other goals: one may represent data more accurately for the original amount of space. For example, in principle, if one starts with an analog or high-resolution digital master, an MP3 file of a given size should provide a better representation than raw uncompressed audio in a WAV or AIFF file of the same size.
Further, transform coding may provide a better domain for manipulating or otherwise editing the data; for example, equalization of audio is most naturally expressed in the frequency domain (boost the bass, for instance) rather than in the raw time domain.
* MPEG-2 (1995): Generic coding of moving pictures and associated audio information (ISO/IEC 13818).
MPEG-2 is a standard for "the generic coding of moving pictures and associated audio information".
The MPEG-2 Audio section, defined in Part 3 (ISO/IEC 13818-3) of the standard, enhances MPEG-1's audio by allowing the coding of audio programs with more than two channels, up to 5.1 multichannel.
Part 3: Audio compression codec for perceptual coding of audio signals.
MPEG-3 is the designation for a group of audio and video coding standards agreed upon by the Moving Picture Experts Group (MPEG), designed to handle HDTV signals at 1080p in the range of 20 to 40 megabits per second.
It was introduced in late 1998 and designated a standard for a group of audio and video coding formats and related technology agreed upon by the ISO/IEC Moving Picture Experts Group (MPEG) (ISO/IEC JTC1/SC29/WG11) under the formal standard ISO/IEC 14496, Coding of audio-visual objects.
Speech coding is the application of data compression to digital audio signals containing speech.
Speech coding uses speech-specific parameter estimation using audio signal processing techniques to model the speech signal, combined with generic data compression algorithms to represent the resulting modeled parameters in a compact bitstream.
The techniques employed in speech coding are similar to those used in audio data compression and audio coding, where knowledge of psychoacoustics is used to transmit only data that is relevant to the human auditory system.
Speech coding differs from other forms of audio coding in that speech is a much simpler signal than most other audio signals, and a lot more statistical information is available about the properties of speech.
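The linear-prediction idea behind LPC, mentioned above, can be sketched in a few lines: a predictor guesses each sample from its predecessor, and only the (typically much smaller) residual needs to be coded. The sketch below is a minimal first-order illustration with made-up sample values, not an implementation of any particular codec.

```python
# Minimal sketch of linear prediction: a first-order predictor guesses
# each sample from the previous one; only the residual is coded.
# The signal values below are hypothetical, chosen for illustration.

def predict_residual(samples, coeff=1.0):
    """Return residuals of a first-order linear predictor."""
    residual = [samples[0]]          # first sample has no predecessor
    for prev, cur in zip(samples, samples[1:]):
        residual.append(cur - coeff * prev)
    return residual

def reconstruct(residual, coeff=1.0):
    """Invert predict_residual exactly (the transform itself is lossless)."""
    samples = [residual[0]]
    for r in residual[1:]:
        samples.append(r + coeff * samples[-1])
    return samples

signal = [10, 12, 13, 13, 11, 8, 6, 5]   # slowly varying, like audio
res = predict_residual(signal)
assert reconstruct(res) == signal        # perfect round trip
# The residuals cluster near zero, so they quantize or entropy-code
# more cheaply than the raw samples.
```

In a perceptual coder, it is the quantization of these residuals that introduces loss; because reconstruction filters the quantization noise through the predictor, the noise takes on the spectral shape of the signal itself, which is what partially masks it.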

audio and algorithm
Since the invention of the MIDI system in the early 1980s, for example, some people have worked on programs which map MIDI notes to an algorithm and then can either output sounds or music through the computer's sound card or write an audio file for other programs to play.
In contrast to the speed of compression, which is proportional to the number of operations required by the algorithm, here latency refers to the number of samples which must be analysed before a block of audio is processed.
Perhaps the earliest algorithms used in speech encoding (and audio data compression in general) were the A-law algorithm and the µ-law algorithm.
In conjunction with an appropriate codec using MPEG or various manufacturers' proprietary algorithms, an ISDN BRI can be used to send stereo bi-directional audio coded at 128 kbit/s with 20 Hz–20 kHz audio bandwidth, although commonly the G.722 algorithm is used with a single 64 kbit/s B channel to send much lower-latency mono audio at the expense of audio quality.
In software, an "audio codec" is a computer program implementing an algorithm that compresses and decompresses digital audio data according to a given audio file format or streaming media audio format.
The object of the algorithm is to represent the high-fidelity audio signal with a minimum number of bits while retaining the quality.
Digital audio compressed by FLAC's algorithm can typically be reduced to 50–60% of its original size, and decompressed into an identical copy of the original audio data.
However, improvements were made, and the actual MUSICAM algorithm was not used in the final MPEG-1 Layer II audio standard.
The Layer III (MP3) component uses a lossy compression algorithm that was designed to greatly reduce the amount of data required to represent an audio recording while still sounding, to most listeners, like a faithful reproduction of the original uncompressed audio.
** Chi-Min Liu and Wen-Chieh Lee, "A unified fast algorithm for cosine modulated filterbanks in current audio standards", J.
* Perceptual Audio Coding, an audio compression algorithm
AEC is an algorithm that detects when sounds or utterances coming from the audio output of a videoconferencing codec re-enter the audio input of the same system after some time delay.
Lossless Transform Audio Compression (LTAC) is a compression algorithm developed by Tilman Liebchen, Marcus Purat and Peter Noll at the Institute for Telecommunications, Technical University Berlin (TU Berlin), to compress PCM audio in a lossless manner, unlike conventional lossy audio compression algorithms (such as MP3).
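The A-law and µ-law algorithms mentioned above are simple enough to state directly: a logarithmic curve boosts quiet samples and compresses loud ones before quantization, matching the ear's roughly logarithmic loudness perception. Below is a minimal sketch of the continuous µ-law curve with µ = 255 (the value used in North American and Japanese telephony); note this is only the idealized formula, as real G.711 codecs use a piecewise-linear 8-bit approximation rather than floating-point math.

```python
import math

MU = 255  # standard value for North American / Japanese telephony

def mulaw_encode(x):
    """Compress a sample in [-1, 1] with the idealized mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_decode(y):
    """Invert mulaw_encode, expanding back to the original range."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# Quiet samples are expanded toward larger code values, so a uniform
# quantizer applied after encoding gives them finer effective resolution.
assert mulaw_encode(0.01) > 0.2
assert abs(mulaw_decode(mulaw_encode(0.5)) - 0.5) < 1e-9
```

Quantizing the encoded value to 8 bits (as G.711 effectively does) is what makes the scheme lossy; the companding itself, as shown, is invertible.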

audio and used
Commonly used electronic devices which are found in practically every hospital are closed-circuit TV and audio systems for internal paging and instruction, along with radiation counters, timers, and similar devices.
Beginning with Reginald Fessenden's audio demonstrations in 1906, it was also the original method used for audio radio transmissions, and remains in use today by many forms of communication; "AM" is often used to refer to the mediumwave broadcast band (see AM radio).
The PowerPC CPU on PowerUP boards is usually used as a coprocessor for heavy computations (a powerful CPU is needed to run MAME, for example, but even decoding JPEG pictures and MP3 audio was considered heavy computation at the time).
This is the primary recording format used in many professional audio workstations in the television and film industry.
Stand-alone, file-based, multi-track recorders are made by Sound Devices.
This of course results in a reduction in audio quality, but a variety of techniques are used, mainly exploiting psychoacoustics, to remove the data that has the least effect on perceived quality.
* AIFF – standard audio file format used by Apple.
* amr – AMR-NB audio, used primarily for speech.
* Au – the standard audio file format used by Sun, Unix and Java.
* awb – AMR-WB audio, used primarily for speech; same as the ITU-T's G.722.2 specification.
* mmf – a Samsung audio format that is used in ringtones.
* raw – a raw file can contain audio in any format, but is usually used with PCM audio data.
* wav – standard audio file container format used mainly on Windows PCs.
* AD))) – logo used by ABC to mark programs that are augmented with an audio description track.
* Bedini Audio Spectral Enhancer – an audio signal processor used in the early 1990s.
The pair later applied the technique to printed media and audio recordings in an effort to decode the material's implicit content, hypothesizing that such a technique could be used to discover the true meaning of a given text.
The Mini CD comes in various diameters; they are sometimes used for CD singles, storing up to 24 minutes of audio or delivering device drivers.
Songs often used the audio from the videos as samples incorporated into the music, such as with the songs "Timber" (released as a single in 1998) and "Pan Opticon".
In the 1980s, Japanese personal computers such as the NEC PC-88 came installed with FM synthesis sound chips and featured audio programming languages such as Music Macro Language (MML) and MIDI interfaces, which were most often used to produce video game music, or chiptunes.
In lossy audio compression, methods of psychoacoustics are used to remove non-audible (or less audible) components of the signal.
Voice compression is used in Internet telephony, for example, while audio compression is used for CD ripping and is decoded by audio players.
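As an aside on the WAV container listed above: Python's standard-library `wave` module is enough to demonstrate writing and reading uncompressed PCM in that container. The tone parameters below (8 kHz sample rate, 440 Hz, 0.1 s) are arbitrary values chosen for the example.

```python
import io
import math
import struct
import wave

# Write a short 16-bit mono PCM sine tone into an in-memory WAV
# container, then read its parameters back.
RATE, FREQ, N = 8000, 440, 800          # 0.1 s of a 440 Hz tone
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)                   # mono
    w.setsampwidth(2)                   # 16-bit samples
    w.setframerate(RATE)
    for i in range(N):
        sample = int(20000 * math.sin(2 * math.pi * FREQ * i / RATE))
        w.writeframes(struct.pack("<h", sample))

buf.seek(0)
with wave.open(buf, "rb") as r:
    print(r.getnchannels(), r.getframerate(), r.getnframes())  # prints: 1 8000 800
```

Because WAV is only a container for raw PCM here, the "compression" is nonexistent: the data chunk is exactly N × 2 bytes, which is the baseline that the lossless and lossy coders discussed above improve on.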
