Transmission of Video Signals - E

Unit v
Sangeetha.v
Video: Analog Video Camera
A camcorder (videocamera recorder) is an electronic device that combines a video camera and
a video recorder into one unit.[1][2][3] Equipment manufacturers do not apply the term consistently: marketing materials may present a video recording device as a camcorder, while the packaging identifies it as a video camera recorder.
In order to differentiate a camcorder from other devices that are capable of recording video, like
mobile phones and digital compact cameras, a camcorder is generally identified as a portable,
self-contained device having video capture and recording as its primary function.[4][5]
The earliest camcorders employed analog recording onto videotape. Tape-based camcorders use
removable media in the form of video cassettes. Nowadays, digital recording has become the
norm, with tape being gradually replaced with other storage media such as internal flash
memory, hard drive, and SD card. As of January 2011, none of the new consumer-class
camcorders announced at the 2011 International Consumer Electronics Show record on tape.[6]
Camcorders that do not use magnetic tape are often called tapeless camcorders, while
camcorders that permit using more than one type of medium, like built-in hard disk drive and
memory card, are sometimes called hybrid camcorders.
Overview
Camcorders contain three major components: lens, imager, and recorder. The lens gathers and focuses light on the imager. The imager (usually a CCD or CMOS sensor on modern camcorders; earlier examples often used vidicon tubes) converts incident light into an electrical signal. Finally, the recorder converts the electrical signal into video and encodes it into a storable form. Together, the optics and imager are referred to as the camera section.
Lens
The lens is the first component in the light path. The camcorder's optics generally have one or
more of the following adjustments:
- aperture or iris to regulate the exposure and to control depth of field;
- zoom to control the focal length and angle of view;
- shutter speed to regulate the exposure and to maintain desired motion portrayal;
- gain to amplify signal strength in low-light conditions;
- neutral density filter to regulate the exposure.
In consumer units, the above adjustments are often automatically controlled by the camcorder's
electronics, but can be adjusted manually if desired. Professional units offer direct user control of
all major optical functions.
Imager
The imager converts light into an electrical signal. The camera lens projects an image onto the
imager surface, exposing the photosensitive array to light. The light exposure is converted into
electrical charge. At the end of the timed exposure, the imager converts the accumulated charge
into a continuous analog voltage at the imager's output terminals. After scan-out is complete, the
photosites are reset to start the exposure-process for the next video frame.
Recorder
The recorder writes the video signal onto a recording medium (such as magnetic videotape). The record function involves many signal-processing steps, and historically, the recording process introduced some distortion and noise into the stored video, such that playback of the stored signal may not retain the same characteristics and detail as the live video feed.
Camcorders also need a recorder-control section, which allows the user to operate the camcorder and switch the recorder into playback mode for reviewing the recorded footage, and an image-control section, which controls exposure, focus and white balance.
The image recorded need not be limited to what appeared in the viewfinder. For documentation of events, such as police use, the field of view can be overlaid with information such as the time and date of the recording along the top and bottom of the image. The police car or constable to which the recorder has been allotted may also appear, as may the speed of the car at the time of recording; compass direction and geographical coordinates may also be possible. Date formats are not kept to a world standard: "month/day/year" may be seen, as well as "day/month/year", besides the ISO standard "year-month-day".
Consumer camcorders
Analog vs. digital
Camcorders are often classified by their storage device: VHS, VHS-C, Betamax, Video8 are
examples of 20th century videotape-based camcorders which record video in analog form.
Newer digital camcorder formats include Digital8, MiniDV, DVD, hard disk drive direct-to-disk recording, and solid-state flash memory. While all these formats record video in digital form, formats like Digital8, MiniDV and DVD have been losing favor and are no longer used in the most recent consumer camcorders.
In older analog camcorders, the imaging device was based on vacuum tube technology where the
charge on a light sensitive target was in direct proportion to the amount of light striking it. A
popular example of such an imaging tube was the Vidicon. Newer analog and all digital
camcorders use a solid state Charge Coupled Device (CCD) imaging device, or more recently a
CMOS imager. Both of these devices use photodiodes that pass a current proportional to the light striking them (i.e. they are analog detectors); that output is then electronically 'scanned' before being fed to the imager's output. The principal difference between the two devices is in the manner in which that scanning is accomplished. In the CCD, the photosites are all sampled simultaneously, and scanning is then achieved by passing the accumulated charge from one register to the next (the charge-coupled element). In the CMOS device the photodiodes are sampled directly by the scanning logic.
The adoption of digital video storage improved quality. MiniDV storage allows full-resolution video (720x576 for PAL, 720x480 for NTSC), unlike previous analogue consumer video standards. Digital video does not suffer from colour bleeding, jitter, or fade, although some users still prefer the analog nature of Hi8 and Super VHS-C, since neither of these produces the "background blur" or "mosquito noise" of digital video compression. In many cases, a high-quality analog recording shows more detail (such as rough textures on a wall) than a compressed digital recording (which would show the same wall as flat and featureless).
Unlike analog video formats, the digital video formats do not suffer generation loss during
dubbing, but can be more prone to complete loss. Theoretically digital information can be stored
indefinitely with zero deterioration on a digital storage device (such as a hard drive), however
since some digital formats (like MiniDV) often squeeze tracks only ~10 micrometers apart
(versus 19 to 58 μm for VHS), a digital recording is more vulnerable to wrinkles or stretches in
the tape that could permanently erase several scenes' worth of digital data, although the additional tracking and error-correction data on the tape will generally compensate for most defects. On
analog media similar damage barely registers as "noise" in the video, still leaving a deteriorated
but watchable video. The only limitation is that this video has to be played on a completely
analogue viewing system, otherwise the tape will not display any video due to the damage and
sync problems. Even digital recordings on DVD are known to suffer from DVD rot, which permanently erases large chunks of data. Thus the one advantage analog seems to have in this respect is that an analog recording may be "usable" even after the medium it is stored on has suffered severe deterioration, whereas it has been noted[9] that even slight media degradation in digital recordings may cause an "all or nothing" failure, i.e. the digital recording becomes totally unplayable without very expensive restoration work.
Modern recording media
For more information, see tapeless camcorder.
While some older digital camcorders record video on Microdrives and size-reduced DVD-RAM
or DVD-Rs, as of 2011 most recent camcorders record video on flash memory devices and small
hard disks, using MPEG-1, MPEG-2 or MPEG-4 formats. However, because these codecs use
inter-frame compression, frame-specific-editing requires frame regeneration, which incurs
additional processing and can cause loss of picture information. (In professional usage, it is
common to use a codec that will store every frame individually. This provides easier and faster
frame-specific editing of scenes.)
Other digital consumer camcorders record in DV or HDV format on tape and transfer content
over FireWire (some also use USB 2.0) to a computer, where the huge files (for DV, 1GB for 4
to 4.6 minutes in PAL/NTSC resolutions) can be edited, converted, and (with many camcorders)
also recorded back to tape. The transfer is done in real time, so the complete transfer of a 60
minute tape needs one hour to transfer and about 13GB disk space for the raw footage only—
excluding any space needed for render files, and other media. Time spent in post-production
(editing) to select and cut the best shots varies from instantaneous "magic" movies to hours of
tedious selection, arrangement and rendering.
Consumer market
As the mass consumer market favors ease of use, portability, and price, most of the consumer-grade camcorders sold today emphasize handling and automation features over raw audio/video
performance. Thus, the majority of devices capable of functioning as camcorders are camera
phones or compact digital cameras, for which video is only a feature or a secondary capability.
Even for separate devices intended primarily for motion video, this segment has followed an
evolutionary path driven by relentless miniaturization and cost reduction, made possible by
progress in design and manufacturing. Miniaturization conflicts with the imager's ability to
gather light, and designers have delicately balanced improvements in sensor sensitivity with
sensor size reduction, shrinking the overall camera imager & optics, while maintaining
reasonably noise-free video in broad daylight. Indoor or dim light shooting is generally
unacceptably noisy, and in such conditions, artificial lighting is highly recommended.
Mechanical controls cannot scale below a certain size, and manual camera operation has given
way to camera-controlled automation for every shooting parameter (focus, aperture, shutter
speed, white balance, etc.) The few models that do retain manual override frequently require the
user to navigate a cumbersome menu interface. Outputs include USB 2.0, Composite and S-Video, and IEEE 1394/FireWire (for MiniDV models). On the plus side, today's camcorders are affordable to a wider segment of the consumer market, and available in a wider variety of form factors and functionality, from the classic camcorder shape, to small flip-cameras, to video-capable camera-phones and "digicams."
At the high-end of the consumer market, there is a greater emphasis on user control and
advanced shooting modes. Feature-wise, there is some overlap between the high-end consumer
and "prosumer" markets. More expensive consumer camcorders generally offer manual exposure
control, HDMI output and external audio input, progressive-scan framerates (24fps, 25fps,
30fps), and better lenses than basic models. In order to maximize low-light capability, color
reproduction, and frame resolution, a few manufacturers offer multi-CCD/CMOS camcorders,
which mimic the 3-element imager design used in professional equipment. Field tests have demonstrated that most consumer camcorders, regardless of price, produce noisy video in low light.
Before the 21st century, video editing was a difficult task requiring a minimum of two recorders
and possibly a desktop video workstation to control them. Now, the typical home personal
computer can hold several hours of standard-definition video, and is fast enough to edit footage
without additional upgrades. Most consumer camcorders are sold with basic video editing
software, so users can easily create their own DVDs, or share their edited footage online.
JVC GZ-MG555 hybrid camcorder (MPEG-2 SD Video)
In the first world market, nearly all camcorders sold today are digital. Tape-based
(MiniDV/HDV) camcorders are no longer popular, since tapeless models (SD card & internal
drive) cost almost the same, but offer much greater convenience. For example, video captured on
SD card can be transferred to a computer much faster than from digital tape. Hard disk
camcorders feature the longest continuous recording time, though the durability of the hard drive
is a concern for harsh and high-altitude environments. As of January 2011, none of the new
consumer-class camcorders announced at the 2011 International Consumer Electronics Show
record on tape.[6] However, in some parts of the world, newly-manufactured tape camcorders
might still be available due to the lower purchasing power or greater price sensitivity of the
consumers in these areas.
Other devices with video-capture capability
Video-capture capability is not confined to camcorders. Cellphones, digital single-lens reflex and compact digicams, laptops, and personal media players frequently offer some form of video-capture capability. In general, these multipurpose devices offer less video-capture functionality than a traditional camcorder. The absence of manual adjustments, external audio input, and even basic usability functions (such as autofocus and lens zoom) are common limitations. Few can capture to standard TV-video formats (480p60, 720p60, 1080i30), instead recording in either non-TV resolutions (320x240, 640x480, etc.) or slower frame rates (15 fps, 30 fps).
When used in the role of a camcorder, a multipurpose-device tends to offer inferior handling and
audio/video performance, which limits its usability for extended and/or adverse shooting
situations. However, much as camera-equipped cellphones are now ubiquitous, video-equipped
electronic devices will likely become commonplace, replacing the market for low-end
camcorders.
The past few years have seen the introduction of DSLR cameras with high-definition video.
Although they still suffer from the typical handling and usability deficiencies of other
multipurpose-devices, HDSLR video offers two videographic features unavailable on consumer
camcorders: shallow depth-of-field and interchangeable lenses. Professional video cameras
possessing these capabilities are currently more expensive than even the most expensive video-capable DSLR. In video applications where the DSLR's operational deficiencies can be mitigated by meticulous planning of each shooting location, a growing number of video productions
are employing DSLRs, such as the Canon 5D Mark II, to fulfill the desire for depth-of-field and
optical-perspective control. Whether in a studio or on-location setup, the scene's environmental
factors and camera placement are known beforehand, allowing the director of photography to
determine the proper camera/lens setup and apply any necessary environmental adjustments,
such as lighting.
A recent development to combine the feature-sets of full-feature still-camera and camcorder in a
single unit, is the combo-camera. The Sanyo Xacti HD1 was the first such combo unit,
combining the features of a 5.1-megapixel still camera with a 720p video recorder. Overall, the product was a step forward in the level of handling and usability a single device could combine. The combo-camera concept has caught on with competing manufacturers; Canon and Sony
have introduced camcorders with still-photo performance approaching a traditional digicam,
while Panasonic has introduced a DSLR-body with video features approaching a traditional
camcorder. Hitachi have introduced the DZHV 584E/EW, which has 1080p resolution. This model comes with a 3" pop-up touch screen, is housed in a slimline case, and is about the size of a mobile phone.
Interchangeable lens camcorder
In a reversal of DSLR cameras gaining high-definition video, as of 2011 there are at least two interchangeable-lens camcorders that can capture Full HD video with full camcorder control, the Panasonic AG-AF100 and the Sony NEX-VG10. Both have large sensors, unlike typical non-professional camcorders, and DSLR lenses can be used with an adapter for added versatility.[10]
Camcorder with built-in projector
In 2011 Sony released the HDR-PJ10/30/50 HD camcorders. These are the first camcorders in
the world to incorporate a small projector located on the side of the unit. This feature allows the
user to show their video to a group of viewers without the need to connect up to a television or a
full-size projector or even to upload onto a computer. Such a feature would have been
unimaginable only a generation ago. The specification varies between models: the HDR-PJ10 is the base model with 16GB of internal memory; the HDR-PJ30 has double the capacity (32GB), an additional light to aid shooting in darkness, and the ability to shoot 25p, making the video appear as if it was shot on film; the HDR-PJ50 is the top-of-the-range model with a 220GB hard disk drive as well as the light. While the projector could be seen as a useful feature, it remains to be seen whether other manufacturers will include such a feature on future camcorders.[11]
Sony HDR-PJ10 camcorder with built-in projector
3D Camcorder
In 2011 Panasonic released the world's first camcorder to be capable of shooting in 3D, the
HDC-SDT750. It is a regular 2D camcorder that can shoot in full HD while 3D is achieved by
the detachable conversion lens. Sony subsequently released its own 3D camcorder, the HDR-TD10. Unlike the Panasonic, the Sony HDR-TD10 has the 3D lens built in but can still shoot normal 2D video. The downside is a rather ugly design and a high price tag (£1,005.70 for the Sony vs. £686 for the Panasonic on Amazon). Panasonic have also
released normal 2D camcorders with optional 3D recording with the conversion lens being an
optional extra. The HDC-SD90, HDC-SD900, HDC-TM900 and HDC-HS900 are marketed as
'3D ready' being affordable regular 2D camcorders with the option to add the 3D capability at a
later date. Sony and some other manufacturers have even marketed 3D pocket camcorders, an
example being the Sony MHS-FS3. Sony are releasing the DEV-5 3D camcorder, however Sony
markets it as digital recording binoculars due to the shape of the unit. The downside of this unusual camcorder is the hefty price tag, £2,605 on Sony's website, almost treble that of the Sony HDR-TD10. Currently only Panasonic and Sony manufacture 3D camcorders, and it remains to be seen whether they will catch on.
Uses
Media
Operating a camcorder
Camcorders have found use in nearly all corners of electronic media, from electronic news
organizations to TV/current-affairs productions. In locations away from a distribution
infrastructure, camcorders are invaluable for initial video acquisition. Subsequently, the video is
transmitted electronically to a studio/production center for broadcast. Scheduled events such as
official press conferences, where a video infrastructure is readily available or can be feasibly
deployed in advance, are still covered by studio-type video cameras (tethered to "production
trucks.")
Home video
For casual use, camcorders often cover weddings, birthdays, graduation ceremonies, children
growing up, and other personal events. The rise of the consumer camcorder in the mid to late
'80s led to the creation of shows such as the long-running America's Funniest Home Videos,
where people could showcase homemade video footage.
Politics
Political protestors who have capitalized on the value of media coverage use camcorders to film
things they believe to be unjust. Animal rights protesters who break into factory farms and
animal testing labs use camcorders to film the conditions the animals are living in. Anti-hunting
protesters film fox hunts. People expecting to witness political crimes use cameras for
surveillance to collect evidence. Activist videos often appear on Indymedia.
The police use camcorders to film riots, protests and the crowds at sporting events. The film can
be used to spot and pick out troublemakers, who can then be prosecuted in court. In countries
such as the United States, the use of compact dashboard camcorders in police cars allows the
police to retain a record of any activity that takes place in front of the car, such as interaction
with a motorist stopped on the highway.
Entertainment and movies
Camcorders are often used in the production of low-budget TV shows where the production crew
does not have access to more expensive equipment. There are even examples of movies shot
entirely on consumer camcorder equipment (such as The Blair Witch Project and 28 Days Later).
In addition, many academic filmmaking programs have switched from 16mm film to digital
video, due to the vastly reduced expense and ease of editing of the digital medium as well as the
increasing scarcity of film stock and equipment. Some camcorder manufacturers cater to this
market, particularly Canon and Panasonic, who both support "24p" (24 frame/s, progressive
scan; same frame rate as standard cinema film) video in some of their high-end models for easy
film conversion.
Even high-budget cinema is done using camcorders in some cases; George Lucas used Sony
CineAlta camcorders in two of his three Star Wars prequel movies. This process is referred to as
digital cinematography.
Education, Teacher Evaluation and Teacher Preparation
Secondary and higher education in the developed world are increasingly integrating digital media and computing into the fabric of students' learning experiences. Students often use
camcorders to record video diaries, make short films, and develop a variety of multi-media
projects across subject disciplines.
Meanwhile, teacher evaluation increasingly involves teachers' classroom lessons being digitally recorded for review by school administrators and school district officials. This is especially common during the process of tenure-granting (or withholding), and in cases where a teacher's continued tenure may be in question. Some feel the use of digital recording allows both school districts and teachers' unions an opportunity to review aspects of teacher performance in the classroom more objectively and comprehensively, whilst others, such as Alfie Kohn, are far more skeptical.
Recently, in many top-ranked schools of education, the integration of student camcorder-created material and other digital technology has become ingrained in new teacher-preparation courses. The University of Oxford Department of Education PGCE programme and NYU's Steinhardt School's Department of Teaching and Learning MAT programme provide two examples of this trend.
The USC Rossier School of Education takes this one step further by insisting that all students purchase their own camcorder (or similar digital video recording device) as a prerequisite to beginning their MAT education programmes, many of which are delivered entirely online. These programmes employ a modified version of Adobe Connect to deliver the entire taught component of the MAT@USC. MAT students' in-class teaching is captured by camcorder, posted to USC's web portal, and then evaluated by faculty in much the same manner as if they were physically present in class.
In this way the use of the camcorder has allowed USC to decentralize its teacher preparation away from Southern California to most American states and several countries around the world, and this has greatly increased the number of teachers it is able to train at once. With significant teacher shortages looming in the USA, UK, Canada and Australia over the next few years, this is likely to be a model which other institutions seek to emulate.
Transmission of Video Signals
Introduction
This is not meant to be a text book on transmission but is intended to remove some of the
mystery associated with various methods of transmission. Many approximations and
simplifications have been used in writing this guide. This is to make the subject more
understandable to those people not familiar with the theories. For general application in the
design of CCTV systems it should be more than adequate and at least point the way to the main
questions that must be addressed. The manufacturers of transmission equipment will usually be
only too keen to help in final design.
This first part deals with the transmission of video signals by cables. Part 2 deals with the
transmission of video signals by other methods such as microwave, telephone systems, etc.
Diagram 1 illustrates the many methods of getting a picture from a camera to a monitor. The choice will often be dictated by circumstances such as the location of cameras and controls. Often there will be more than one option for the type of transmission. In these cases there will possibly be trade-offs between quality and security of signal against cost.
General Principles
The video signal
A field of video is created by the CCD being scanned across and down exactly 312 1/2 times, and this is reproduced on the monitor. A second scan of 312 1/2 lines starts exactly half a line down and is interlaced with the first scan to form a picture with 625 lines. This is known as a 2:1 interlaced picture. The combined 625 lines are known as a frame of video, made up from two interlaced fields. The total voltage produced is one volt from the bottom of the sync pulse to the top of the white level, hence one volt peak to peak (p/p). The luminance (brightness) element of the signal extends from 0.3 volts to one volt, and is therefore 0.7 volts maximum. This is known as a composite video signal because the synchronising and video information are combined into a single signal.
In the case of a colour signal, further information has to be provided. The colour information is
superimposed onto the video signal by means of a colour sub-carrier. A short reference signal, known as the chroma burst, is added to the back porch after the horizontal sync pulse so that the receiver can detect the phase of the colour sub-carrier and hence decode the colour information.
The transmission system must be capable of reproducing this signal accurately at the receiving
end with no loss of information.
Note that the imaging device is scanned 625 times but the actual resolution is defined by the
number of pixels making up the device.
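As a rough check of the figures above, the timing of the 625-line interlaced signal can be worked out in a few lines of Python. The 25 frames per second (50 fields per second) rate is the standard rate for the 625-line system and is assumed here, since the text does not state it; the variable names are illustrative only.

```python
# Basic timing implied by a 625-line, 2:1 interlaced signal,
# assuming the standard 25 frames (50 fields) per second for that system.
lines_per_frame = 625
frames_per_sec = 25

lines_per_sec = lines_per_frame * frames_per_sec
line_period_us = 1e6 / lines_per_sec          # duration of one scanned line
field_rate_hz = frames_per_sec * 2            # two interlaced fields per frame

print(lines_per_sec)    # 15625 lines per second
print(line_period_us)   # 64.0 microseconds per line
print(field_rate_hz)    # 50 fields per second
```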
Synchronising
The video signal from a TV camera has to provide a variety of information at the monitor for a
correct TV picture to be displayed. This information can be divided into: Synchronising pulses
that tell the monitor when to start a line and a frame; video information that tells the monitor how
bright a particular point in the picture should be; chrominance that tells the monitor what colours
a particular part of the picture should be (colour cameras only).
Bandwidth
The composite video output from the average CCTV camera covers a bandwidth ranging from
5Hz to many MHz. The upper frequency is primarily determined by the resolution of the camera
and whether it is monochrome or colour. For every 100 lines of resolution, approximately 1 MHz of bandwidth is required. Therefore, a camera with 600 lines of resolution gives out a video signal
with a bandwidth of approximately 6MHz. This principle applies to both colour and
monochrome cameras. However colour cameras also have to produce a colour signal
(chrominance), as well as a monochrome output (luminance). The chrominance signal is
modulated on a 4.43MHz carrier wave in the PAL system therefore a colour signal, regardless of
definition, has a bandwidth of at least 5MHz.
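The 100-lines-per-MHz rule of thumb and the 4.43 MHz chrominance sub-carrier described above can be combined into a small sketch. The function name and the treatment of the colour case as a simple 5 MHz floor are assumptions for illustration, not a formal definition.

```python
def required_bandwidth_mhz(lines_of_resolution, colour=False):
    """Rule of thumb from the text: roughly 1 MHz of bandwidth per 100 lines of resolution.

    A colour (PAL) signal also carries chrominance on a 4.43 MHz sub-carrier,
    so it needs at least about 5 MHz regardless of definition.
    """
    luminance_bw = lines_of_resolution / 100.0      # MHz
    if colour:
        return max(luminance_bw, 5.0)               # chroma sub-carrier sets a floor
    return luminance_bw

print(required_bandwidth_mhz(600))                  # 6.0 MHz, as in the text
print(required_bandwidth_mhz(400, colour=True))     # 5.0 MHz: the colour floor applies
```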
Requirements To Produce A Good Quality Picture
From the above it will be obvious that to produce a good quality picture on a monitor, the video
signal must be applied to the monitor with little or no distortion of any of its elements, i.e. the
time relationship of the various signals and amplitude of these signals. However in CCTV
systems the camera has to be connected to a monitor by a cable or another means, such as Fibre
Optic or Micro Wave link. This interconnection requires special equipment to interface the video
signal to the transmission medium. In cable transmission, special amplifiers may be required to
compensate for the cable losses, which are frequency dependent.
Cable Transmission
All cables, no matter what their length or quality, produce problems when used for the
transmission of video signals, the main problem being related to the wide bandwidth
requirements of a video signal. All cables produce a loss of signal that is dependent primarily on
the frequency, the higher the frequency, the higher the loss. This means that as a video signal
travels along a cable it loses its high frequency components faster than its low frequency
components. The result of this is a loss of the fine detail (definition) in the picture.
The human eye is very tolerant of errors of this type; a significant loss of detail is not usually
objectionable unless the loss is very large. This is fortunate, as the losses of the high frequency
components are very high on the types of cables usually used in CCTV systems. For instance,
using the common coaxial cables URM70 or RG59, 50% of the signal at 5MHz is lost in 200
metres of cable. To compensate for these losses, special amplifiers may be used. These provide
the ability to amplify selectively the high frequency components of the video signal to overcome
the cable losses.
Cable Types
There are two main types of cable used for transmitting video signals, which are: Unbalanced
(coaxial) and balanced (twisted pair). The construction of each is shown in diagrams 2 and 3. An
unbalanced signal is one in which the signal level is a voltage referenced to ground. For instance
a video signal from the camera is between 0.3 and 1.0 volts above zero (ground level). The shield
is the ground level.
A balanced signal is a video signal that has been converted for transmission along a medium
other than coaxial cable. Here the signal voltage is the difference between the voltage in each
conductor.
External interference is picked up by all types of cable. Rejection of this interference is effected
in different ways. Coaxial cable relies on the centre conductor being well screened by the outer
copper braid. There are many types of coaxial cable and care should be taken to select one with a
95% braid. In the case of a twisted pair cable, interference is picked up by both conductors in the
same direction equally. The video signal, by contrast, is carried as equal and opposite voltages on the two conductors. The interference can then be balanced out by using the correct type of amplifier, which responds only to the difference between the signals on the two conductors and is known as a differential
amplifier.
Unbalanced (Coaxial) Cables
This type of cable is made in many different impedances. In this case impedance is measured
between the inner conductor and the outer sheath. 75 Ohm impedance cable is the standard used
in CCTV systems. Most video equipment is designed to operate at this impedance. Coaxial
cables with an impedance of 75 Ohms are available in many different mechanical formats,
including single wire armoured and irradiated PVC sheathed cable for direct burial. The cables
available range in performance from relatively poor to excellent. Performance is normally
measured in high frequency loss per 100 metres. The lower this loss figure, the less the distortion
to the video signal. Therefore, higher quality cables should be used when transmitting the signal
over long distances.
Another factor that should be considered carefully when selecting coaxial cables is the quality of
the cable screen. This, as its name suggests, provides protection from interference for the centre
core, as once interference enters the cable it is almost impossible to remove.
Balanced (Twisted Pair) Cables
In a twisted pair each pair of cables is twisted with a slow twist of about one to two twists per
metre. These cables are made in many different impedances, 100 to 150 Ohms being the most
common. Balanced cables have been used for many years in the largest cable networks in the
world. Where the circumstances demand, these have advantages over coaxial cables of similar
size. Twisted pair cables are frequently used where there would be an unacceptable loss due to a
long run of coaxial cable.
The main advantages are:
1) The ability to reject unwanted interference.
2) Lower losses at high frequencies per unit length.
3) Smaller size.
4) Availability of multi-pair cables.
5) Lower cost.
The advantages must be considered in relation to the cost of the equipment required for this type
of transmission. A launch amplifier to convert the video signal is needed at the camera end and
an equalising amplifier to reconstruct the signal at the control end.
Impedance
It is extremely important that the impedances of the signal source, cable, and load are all equal.
Any mismatch in these will produce unpleasant and unacceptable effects in the displayed picture.
These effects can include the production of ghost images and ringing on sharp edges, also the
loss or increase in a discrete section of the frequency band within the video signal.
The impedance of a cable is primarily determined by its physical construction, the thickness of
the conductors and the spacing between them being the most important factors. The materials
used as insulators within the cable also affect this characteristic. Although the signal currents are
very low, the sizes of the conductors within the cable are very important. The higher frequency
components of the video signal travel only in the surface layer of the conductors.
For maximum power transfer, the load, cable and source impedance must be equal. If there is
any mismatch some of the signal will not be absorbed by the load. Instead it will be reflected
back along the cable to produce what is commonly known as a ghost image.
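The size of the reflection can be estimated with the standard reflection-coefficient formula, gamma = (Zload - Zcable) / (Zload + Zcable). The formula is not given in the text but is the usual way of quantifying a mismatch; the 50 Ohm figure below is purely a hypothetical example.

```python
def reflection_coefficient(z_load, z_cable):
    """Fraction of the incident voltage reflected at the cable/load junction."""
    return (z_load - z_cable) / (z_load + z_cable)

# A correctly terminated 75 Ohm cable reflects nothing:
print(reflection_coefficient(75.0, 75.0))              # 0.0

# Terminating a 75 Ohm cable in a hypothetical 50 Ohm load reflects 20% of the
# signal, which travels back along the cable and appears as a ghost image.
print(round(reflection_coefficient(50.0, 75.0), 2))    # -0.2
```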
Mixing Cable And Equipment Types
It is essential that coaxial cables and balanced cables should only be used with the correct type of
equipment. Unpredictable results will occur if the incorrect cable type is used. For instance, if the
intention is to use a balanced cable, this cannot be connected directly to a coaxial cable or an
amplifier designed to drive a coaxial cable. Some form of device is required to be connected
between the two cable types so that both cables are correctly matched. This piece of equipment
may be an amplifier or video isolation transformer.
Cable Joints
Every joint in a cable produces a small change in the impedance at that point. The mechanical
layouts of the conductors change where it is joined. This cannot be avoided. However, the
changes in impedance should be minimised by using the correct connectors. When in-line joints are being made, ensure the mechanical layout of the joint follows the cable layout as closely as
possible. The number of joints in a cable should be minimised, as each joint is a potential source
of problems and will produce some reflections in the cable.
The Decibel (dB)
Cable and amplifier performance are usually defined as a certain loss or gain of signal expressed
in Decibels (dB). The dB is not a unit of measure but is a way of defining a ratio between two
signals. The dB was originally developed to simplify the calculation of the performance of
telephone networks, where there were many amplifiers and lengths of cable on a network.
The calculations become extremely difficult, and often produce very large figures using ordinary
ratios, when many of them have to be multiplied and divided to work out the signal levels of the
network. However these calculations become relatively simple if the ratios are converted to the
logarithm of the ratio, which can then be just added and subtracted. This therefore, is the reason
for using the decibel, which in simple terms is:
10 x log (ratio)
This dB (power dB) is often used to measure power relative to a fixed level. It is not a measure
in its own right. If the impedance at which the measurements are made is constant, the dB
becomes 20 x log (ratio). This is the dB (voltage dB) which is normally used to define cable loss
or amplifier gain in the CCTV industry.
The advantage of using this method becomes obvious when working out the performance of a
network containing more than one or two items. Many people who do not use dBs all the time
have problems relating them to real ratios. The key figures to remember are:
If the ratio is 2:1, then 20 x log 2 = 20 x 0.301 = 6.02, i.e. 6 dB.
If the ratio is 10:1, then 20 x log 10 = 20 x 1 = 20, i.e. 20 dB.
If the ratio is 20:1, then 20 x log 20 = 20 x 1.301 = 26, i.e. 26 dB.
Similarly a ratio of 100:1 is equal to 40 dB.
Therefore, put in reverse, some common ratios are:
6 dB is a loss or gain of 2:1
20 dB is a loss or gain of 10:1
26 dB is a loss or gain of 20:1
40 dB is a loss or gain of 100:1
Diagram 5 illustrates the relationship between the measure of signal to noise in dB and as a ratio.
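The conversions above are easy to reproduce; a minimal sketch follows, using the voltage form (20 x log ratio) that the CCTV industry uses for cable loss and amplifier gain. The function names are illustrative only.

```python
import math

def ratio_to_db(ratio):
    """Voltage ratio -> dB (20 x log10), as used for cable loss and amplifier gain."""
    return 20.0 * math.log10(ratio)

def db_to_ratio(db):
    """dB -> voltage ratio."""
    return 10.0 ** (db / 20.0)

print(round(ratio_to_db(2), 2))     # 6.02 dB
print(round(ratio_to_db(10), 2))    # 20.0 dB
print(round(ratio_to_db(20), 2))    # 26.02 dB
print(round(db_to_ratio(40)))       # 100, i.e. 40 dB is a ratio of 100:1
```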
Example Of Network Transmission
The following example illustrates a typical network and how to calculate the losses and gains.
To work out the net loss or gain of signal on a network, add the amplifier gains and subtract the
cable losses.
1st cable -- loss 12dB, 1st amplifier -- gain 6dB
2nd cable -- loss 20dB, 2nd amplifier -- gain 26dB
3rd cable -- loss 6dB.
The result would be: -12dB + 6dB - 20dB + 26dB - 6dB = -6dB
i.e. half the input signal is present at the end of the 3rd cable. This calculation is much easier than if the original ratios were used (0.25 x 2 x 0.1 x 20 x 0.5 = 0.5).
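A sketch of the same network calculation: cable losses are entered as negative dB values, amplifier gains as positive, the values are simply added, and converting the result back to a ratio confirms that half the input signal remains.

```python
# The example network: 1st cable, 1st amplifier, 2nd cable, 2nd amplifier, 3rd cable.
stages_db = [-12, +6, -20, +26, -6]

net_db = sum(stages_db)
net_ratio = 10.0 ** (net_db / 20.0)     # convert the voltage dB figure back to a ratio

print(net_db)                # -6 dB
print(round(net_ratio, 2))   # 0.5, i.e. half the input signal reaches the end of the 3rd cable
```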
Reduction Of Signal To Noise Ratio.
When a video signal is amplified the noise, as well as the signal, is increased. If the amplifier
were perfect then the resulting signal to noise ratio would remain unchanged. Amplifiers are not
perfect and can introduce extra noise into the signal. The amount of noise introduced increases as
the amplifier approaches its maximum gain setting. A typical amplifier or repeater operating at
maximum gain may reduce the signal to noise ratio by about 3dB. Consequently, it is not
advisable to run such equipment at the maximum levels. This is similar to the results of turning
the volume up too high on a domestic hi-fi: a lot of interference is evident, and most units are
only operated at up to about half their maximum rating.
In the same way as the net gain or loss in a network can be simply calculated by adding the dB
values arithmetically, so can the reduction in signal to noise ratio. In the previous example, if the original s/n ratio is 50 dB at the camera, then after two amplifiers the s/n ratio could be reduced to 44 dB. After a further four amplifiers this could be reduced to 44 - 12 = 32 dB. At this signal to noise ratio
the picture would show a lot of 'snow' and be close to the limit of a usable picture. This then is
the limit of the distance that a video signal may be transmitted using this type of transmission.
Therefore, besides calculating the losses and gains of the network the reduction in s/n ratio must
also be calculated. This example assumes that the worst case is considered. Manufacturers' data
or assistance should be sought if equipment is to be used at maximum settings.
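Taking the worst case of roughly 3 dB of signal-to-noise degradation per amplifier run at maximum gain, as the text does, the remaining s/n ratio after a chain of amplifiers can be estimated as below. This is only a sketch of the worst-case arithmetic; real equipment run below maximum gain will degrade the signal less, and the manufacturer's data should be used where available.

```python
def snr_after_amplifiers(camera_snr_db, amplifier_count, loss_per_amp_db=3.0):
    """Worst-case s/n after a chain of amplifiers, each degrading the signal by ~3 dB."""
    return camera_snr_db - amplifier_count * loss_per_amp_db

print(snr_after_amplifiers(50, 2))   # 44 dB, as in the example above
print(snr_after_amplifiers(50, 6))   # 32 dB: a lot of 'snow', close to the usable limit
```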
Misuse Of The dB
The term dB is very often misused as if it were a unit of measurement, which it is not. The correct way of stating a measurement is +/- Y dB relative to a base level. It is a common, though technically incorrect, practice not to mention the base level, which can lead to the assumption that the dB is a unit of measure.
Examples Of Typical Configuration
Diagram 7 shows some typical configurations for cabled systems.
Cable Performance
Overall cable performance is usually defined for its ability to pass high frequency signals. After
selecting the correct type of cable with the desired impedance, the next most important factor is
the cable transmission loss at frequencies within the video band. Most cable manufacturers
provide figures at 5MHz and 10MHz. The 5MHz figure is the most important for CCTV use.
The cable losses will be defined as a loss in dB at 5MHz per 100 metres. Care should be taken
when dealing with cables of American origin as these are often defined as loss per 100 feet.
Generally, the larger the size and the more expensive the cable, the better will be its
performance. This holds true for most cables as larger conductors produce the least loss.
If the loss is given for a frequency other than the one required, a conversion can be made using the approximation that coaxial cable loss is proportional to the square root of frequency. Assuming the cable is rated at 3.5 dB loss per 100 metres at 10 MHz, the loss at a frequency of 5 MHz would be approximately 3.5 x √(5/10) ≈ 2.5 dB per 100 metres.
Note that before using this conversion the cable specification should be checked to ensure that it
will transmit satisfactorily at 5 MHz. Some cables are designed specifically for high frequency
transmission only, and will not be suitable for the lower frequencies used in CCTV.
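A sketch of the same conversion, assuming the common approximation that coaxial cable loss scales with the square root of frequency (a skin-effect rule of thumb; the original text does not state the formula explicitly, so treat the result as an estimate and check the manufacturer's figures where possible).

```python
import math

def loss_at_frequency(known_loss_db, known_freq_mhz, target_freq_mhz):
    """Scale a cable loss figure to another frequency, assuming loss ~ sqrt(frequency)."""
    return known_loss_db * math.sqrt(target_freq_mhz / known_freq_mhz)

# Cable rated at 3.5 dB per 100 metres at 10 MHz; estimate the loss at 5 MHz:
print(round(loss_at_frequency(3.5, 10.0, 5.0), 2))   # about 2.47 dB per 100 metres
```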
Cable Selection
The important factors when selecting a cable for a particular installation are:
1) Establish the type of cable to use, coaxial or twisted pair.
2) Select a range of cables of the correct impedance.
3) Select the correct mechanical format, i.e. normal cable to be laid in ducts or single wire
armoured for direct burial etc.
4) Consider the distance the cable is required to run and calculate the length of cable required.
Do not forget to make allowances in this calculation for unseen problems in installing the cable.
A minimum of a 10% allowance should always be made. This provides a safety margin to cover
inaccurate site drawings, sections of the cable running vertically and other problems likely to be
met during installation.
5) When the length of cable has been established, assess the high frequency loss from the cable
data.
6) Once the cable loss has been estimated, then the equipment requirement can be established.
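Steps 4 to 6 amount to a simple calculation. The sketch below assumes a loss figure quoted in dB per 100 metres at 5 MHz and applies the 10% installation allowance recommended above; the function name and the URM70 figure (taken from the cable table later in this guide) are used for illustration only.

```python
def estimated_cable_loss_db(route_length_m, loss_db_per_100m, allowance=0.10):
    """Estimate the high-frequency loss for a cable run, adding a length safety margin."""
    planned_length_m = route_length_m * (1.0 + allowance)
    return planned_length_m / 100.0 * loss_db_per_100m

# A 1000 m route of URM70 (3.3 dB per 100 m at 5 MHz) with a 10% allowance:
print(round(estimated_cable_loss_db(1000, 3.3), 1))   # about 36.3 dB at 5 MHz
```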
Cable Specifications
The data for twisted pair cables is not always easy to obtain. However, most telephone type
cables are highly suitable for video transmission. Even the internal telephone subscriber cable
can be used over quite long distances for video, with the correct equipment. (Typical losses at
5MHz are 4dB per 100 metres.) If in doubt about the suitability of a twisted pair cable, the
general rules are that suitable cables will be unscreened and will have a very slow twist to the
conductors, 1 to 3 twists per metre.
Many twisted pair cables are advertised as "Wide Band Data Cables." These are usually of
American origin and are heavily screened. They are designed for use with computers and are
generally unsuitable for video use. If a cable is to be used about which there is some doubt, it is
worth testing the cable with the equipment to be used before installation. Although this may be
considered as a waste of time, it can avoid a costly mistake in the installation.
Tests can be run with the cable on drums as the performance will improve when the cable is
taken off the drums and installed. When faced with using existing cables on a site, the only safe
way to establish if they are suitable is to run an actual test with the equipment it is intended to
use.
The problems that can be encountered when attempting to use existing cables include:
Cables that have absorbed water or moisture.
The cable route is much longer than it appears.
Other cables have been connected in parallel.
Bad joints.
If in any doubt, run a transmission test.
Transmission Equipment and Methods
General
When considering the preceding details regarding cable performance, it is obvious that special
equipment is required to transmit video signals over long cables. The type of equipment required
is dependent on the length of cable involved and the required performance.
This equipment falls under two headings:
1) Launch Equipment
Launch equipment is designed to precondition the video signal for transmission over the cables.
2) Cable Equalising Equipment
Cable equalising amplifiers are designed to provide variable compensation to make up for the
losses after the video signal has been transmitted over the cables.
Selection Of Cable And Equipment
When selecting the cable and equipment for a particular installation the following rules apply:
1) Select the cable to be used, noting the high frequency loss associated with the length of the
cable selected.
2) Select the line transmission equipment required to compensate for the cable loss.
3) Sometimes it is possible to save on the installation cost by using a cheaper cable with more
powerful equipment.
4) Determine the level of performance required.
5) For colour transmission, it is wise to allow a margin of 6dB extra equalisation in the
equipment over the projected cable losses.
6) For high quality monochrome transmission no margin is required other than the 10% for
variations in cable length mentioned previously.
7) An acceptable monochrome picture can be obtained with a net loss of 6dB over the
transmission link.
Example: Cable = 1000 metres of URM70 = loss of 33dB at 5MHz.
Equipment required for full equalisation = Launch Amplifier with +12dB at 5MHz + Cable
equalising amplifier with +32dB of equalising at 5MHz.
This combination of equipment provides a total of +44dB at 5MHz against a cable loss of -33dB
giving +11dB at 5MHz in hand.
This configuration will provide a first class colour picture. In fact it would work well up to a
cable length of 1200 metres.
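The worked example can be checked in a few lines: the launch and equalising amplifier contributions are added, the cable loss subtracted, and the remaining headroom compared against the 6 dB colour margin from rule 5. The figures are those given above; only the variable names are my own.

```python
cable_loss_db     = 33.0   # 1000 metres of URM70 at 5 MHz
launch_amp_db     = 12.0   # launch amplifier lift at 5 MHz
equalising_amp_db = 32.0   # cable equalising amplifier at 5 MHz
colour_margin_db  = 6.0    # recommended extra equalisation for colour

headroom_db = launch_amp_db + equalising_amp_db - cable_loss_db
print(headroom_db)                       # 11.0 dB in hand at 5 MHz
print(headroom_db >= colour_margin_db)   # True: enough margin for a colour picture
```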
Transmission Levels
The normal transmission levels for video signals in the CCTV industry are:
Coaxial Cable:- 1 Volt of composite video, terminated in 75 Ohms, positive going, i.e. Sync tips
at 0V and peak white at 1 Volt.
Twisted Pair Cables:- 2.0 Volts balanced, terminated in the characteristic impedance of the cable,
normally between 110 and 140 Ohms.
Typical cable losses.
A selection of commonly used cable specifications is given below.
Cable ref.   Type          Impedance   Loss per 100 metres (at 5 MHz)
CT125        Coaxial       75 Ohms     1.1 dB
CT305        Coaxial       75 Ohms     0.5 dB
CT600        Coaxial       75 Ohms     0.3 dB
URM70        Coaxial       75 Ohms     3.3 dB
RG59         Coaxial       75 Ohms     2.25 dB
TR42/036     Twisted Pair  110 Ohms    2.1 dB
9207         Twisted Pair  100 Ohms    2.3 dB
9182         Twisted Pair  150 Ohms    2.7 dB
Principles Of Transmission
The object of using special transmission amplifiers is to be able to produce a video frequency
response that is a mirror image of the cable loss. The net result is that the video output will be a
faithful reproduction of the input and effectively the cable loss disappears completely. The above
is a much simplified version of what happens in a correctly installed transmission link.
The example in Diagram 8 shows that the equaliser response is produced by being able to adjust
the gain of the amplifier at different frequencies. In this case the amplifier has five sections
operating at 1, 2, 3, 4, and 5MHz.
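The mirror-image idea can be sketched as follows: if the cable loss at each of the five section frequencies is known, the equaliser gain at that frequency is simply set equal to the loss, so that cable plus equaliser comes out flat. The loss figures below are hypothetical and for illustration only; real values come from the cable data and the setting-up procedure described later.

```python
# Hypothetical cable loss (dB) at the five equaliser section frequencies (MHz).
cable_loss_db = {1: 4.0, 2: 7.0, 3: 10.0, 4: 13.0, 5: 15.0}

# Mirror image: the equaliser lifts each frequency by the amount the cable lost.
equaliser_gain_db = {freq: loss for freq, loss in cable_loss_db.items()}

for freq in sorted(cable_loss_db):
    net_db = equaliser_gain_db[freq] - cable_loss_db[freq]
    print(f"{freq} MHz: cable -{cable_loss_db[freq]} dB, "
          f"equaliser +{equaliser_gain_db[freq]} dB, net {net_db} dB")
```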
Pre-Emphasis
If the higher frequencies of the video signal are sent at an increased level, this will reduce the
high frequency noise by reducing the amount of amplification required at the end of the cable.
This method of changing the video signal is known as pre-emphasis.
Cable Equalisation
A cable equalising amplifier acts rather like the audio "Graphic Equaliser" with which most
people are familiar. It enables the gain of the amplifier to be adjusted independently at different
frequencies within the video band. The object of this is to be able to produce a mirror image of
the cable response.
Each amplifier requires setting up to match the cable with which it is to be used. Once set, it
should never require readjustment unless a drastic change in the installation is made.
Test Equipment Required
Correct cable equalisation cannot be achieved without the use of special test equipment. This
enables the various adjustments to be set to optimum. Some people claim to be able to set up this
type of equipment "by eye". No matter how experienced a person is, the results obtained by
attempting to use this method will always be inferior to those produced with the proper test
equipment.
Pulse And Bar Generator
This produces a special wave form that is designed to show problems in a video transmission
link. The timing and period of the chroma burst are especially important in the transmission of
colour signals, particularly if multiplexing equipment is incorporated in the system.
Oscilloscope
This is required to observe the wave form from the pulse and bar generator and should have a
bandwidth of at least 10MHz.
Object Of Adjusting The Equipment
The object of setting up the video line transmission equipment is to obtain a true replica of the
Pulse and Bar wave form after it has been transmitted through the amplifiers and cable. If this is
achieved, a satisfactory picture will be produced by the monitor.
Method Of Adjustment
The pulse and bar generator should be connected in place of the camera. The resultant wave form
is viewed on the oscilloscope at the output of the amplifier before the monitor. If a launch
amplifier is being used, the output level of this should be set first to 1 Volt with no pre-emphasis.
The gain of the cable equalising amplifier should then be set to give 1 Volt output.
The equalising controls should then be adjusted in ascending order, i.e. low frequency (LF) lift
first, to obtain the best equalisation. Each control affects a different portion of the video signal. To obtain the best results, the controls may need adjusting more than once, as there is a certain amount of interaction between them.
Once the controls are set to optimum in the equalising amplifier, the high frequency (HF) lift
control in the launch amplifier should then be adjusted to give the required pre-emphasis. The
HF lift controls in the equalising amplifier should then be able to be set to a lower level. Care
must be taken to ensure that the launch amplifier output is not overloaded as this may produce
peculiar results.
Repeater Amplifiers
When a video signal has to be transmitted over extremely long or poor quality cables, it is
necessary to use a repeater amplifier within the system. The distance along the cable at which it
should be installed can be calculated from the cable loss figures. When using repeater amplifiers,
an extra allowance of 3dB should be made for the cable loss. It is better to insert a repeater
amplifier in a cable run before the video signal deteriorates too much, than to attempt to equalise
a very poor quality signal. There is no actual limit to the length of cable and number of repeater
amplifiers that can be used. The problem that occurs is that the signal to noise ratio deteriorates
with each amplifier.
The practical limit is approximately 4 repeater amplifiers in cascade with a launch and equalising
amplifier at the ends of the cable. This configuration can easily operate over cable lengths of 50
Km or more if the correct type of cable is used. This applies equally to coaxial or balanced
cables.
Method Of Adjustment
The method of setting up a system with repeater amplifiers is identical to adjusting a single
equalising amplifier. The pulse and bar signals are inserted in the cable at the position of the last
repeater amplifier. This enables the final equalising amplifier to be adjusted. When this is
completed, the pulse and bar unit is moved up the next section of cable to enable the last repeater
to be set up. The procedure is then repeated working along the cable towards the camera position
until the launch amplifier is reached. Great care should be taken when setting up a transmission
link using repeater amplifiers. This is because once an error has been introduced into the video
signal by an incorrectly adjusted amplifier, it cannot be corrected by mis-setting another
amplifier. Errors are normally additive and a slight mis-setting of several amplifiers will produce
unacceptable results.
Earth Currents
When installing TV cameras or other equipment on large sites, the potential of the earth
connection provided for the equipment can vary by quite large voltages (up to 50 Volts). This
can produce high currents in cables connected between different points on the site and will
produce interference on the video signal.
Most video equalising amplifiers have differential inputs that can reject a certain amount of
interference due to earth potential variations (up to 10 Volts). However, it is good practice, and a
safe precaution, to break the earth connection using a video transformer or opto-coupled
equalising amplifier on long cables. It is not safe or legal to remove earth connections from
equipment and rely on the earth provided by the video cable.
This latter procedure, which is still common practice in the CCTV industry, is in breach of electrical safety regulations, is extremely dangerous, and should on no account be used.
Video Signal Formats
The purpose of this article is to explain the main differences between the various Video Signal Formats. RGB, Component, S-Video and Composite are terms that are commonly heard, but what do they mean and which one should you use?
First, the basics. A video signal originates in one of two ways:
- Optically, from a camera or scanner
- Electronically, from a graphics card
Irrespective of how it originates, initially it consists of electrical signals that represent the intensities of the three primary colours of light: RED, GREEN and BLUE. Additionally there are two other timing signals to indicate the start of each frame of the picture (VERTICAL SYNC) and each line of the picture (HORIZONTAL SYNC).
At its final destination it must recreate the original image by emitting Red, Green and Blue
light. A TV displays this on a Cathode Ray Tube (CRT) which emits light when a beam of
electrons hits a phosphor coating on the face of the tube. If you look closely at your TV with a magnifying glass you will see the individual red, green and blue phosphor dots or stripes.
It is how these signals are processed, stored and transmitted to the display that ultimately
decides how good or bad the picture will be.
Starting with the best and working down the list, here are the definitions of the different
formats and how many cables are required to convey the signal to the distant end. Typically
used connectors are also shown although other connector types may also be found on
equipment.
RGBHV
5 cables (5 x BNC connectors or 15 pin High Density D-Type
connector)
A PC outputs RGBHV from its VGA connector (I wont mention any digital
formats as that is not relevant to this topic). That is the PUREST form of
ANALOGUE video you will find as each of the 5 signals is transfered
discretely.
RGBS
4 cables (4 x BNC connectors or SCART)
aka RGB
Discrete colour signals but a COMPOSITE SYNC (S) signal containing H & V
pulses. Many items of domestic video equipment that claim to output RGB
actually output RGB+CompositeVideo rather than RGB+CompositeSync. For
a TV this is no problem but some monitors can get upset if they expect true
composite sync pulses.
RGsB
3 cables (3 x BNC connectors)
aka SoG
Sync On Green
As RGBS except that instead of a separate Sync, the Sync signal is sent on
the GREEN colour signal just like Composite Video. This format is used by
some Graphics Workstations.
Component 3 cables (3 x BNC connectors or 3 x Phono connectors aka RCA
Jacks)
Video
aka Y-Cr-Cb
S-Video
aka Y-C
Composite
Video
A black and white composite video signal containing Luminance LUMA (Y)
Brightness information and composite sync. Cr and Cb are two signals
containing matrixed colour information to extract the Red/Blue from the
picture information in the Y signal. Once the red and blue is removed the
only information left is green. This format is used by many DVD players
although UK display equipment rarely has inputs for Component Video
2 cables (4 pin MiniDin connector, SCART or 2 x BNC Connectors)
A black and white composite video signal containing Luminance LUMA (Y)
Brightness information and composite sync. CHROMA (C) contains ALL the
colour information. S-Video outputs are becoming commonplace on domestic
AV equipment and almost all AV amplifiers support the switching of S-Video
in addition to Composite Video
1 cable (SCART, Phono [usually yellow] or BNC connector)
Almost the lowest of the low. A Composite Video signal contains all of the
brightness, colour and timing information for the picture. Because of this
there can be noticable artefacts introduced into the picture.
In order for the colour information to be combined with the brightness and
timing information it must be encoded. There are three main colour encoding
systems in use throughout the world with some of them also having variants.
NTSC
Developed in the USA in the 1950s, this was the first commercial
colour TV system to be launched. Early technical difficulties earned
it the nickname "Never The Same Colour".
PAL
The main rival to NTSC, PAL was a European development launched
in the 1960s. Using lessons learnt from the earlier NTSC system,
it employed techniques to overcome some of the colour problems
suffered by its rival.
SECAM
Developed around the same time as PAL, SECAM is the French entry
in the TV Standards arena.
All three colour standards are incompatible, although many modern TV sets
are multi-standard and can display almost any signal.
RF
Can have many signals on 1 cable. (Coaxial Plug or F Connector)
The lowest of the low. Composite Video, and usually Audio as well, modulated
onto a much higher-frequency carrier. This enables multiple signals to be
distributed over the same cable by choosing different carrier frequencies. This
is the method used for mass distribution of TV signals, either via Terrestrial
Aerial, Cable TV feed or Satellite distribution. If the carrier frequencies are
not carefully chosen, different signals on the same cable can cause harmonic
interference to each other, causing strange patterning on the screen.
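The Component Video entry above describes Cr and Cb as matrixed colour-difference signals from
which the red and blue can be recovered, leaving only green. The short sketch below works that
through numerically. It is only an illustration: the BT.601 luma weights are an assumption (the
text does not name a standard), and real component interfaces also scale the difference signals,
which is ignored here.

# Sketch: recovering R, G, B from Y and the Cr/Cb colour-difference signals.
# Assumes BT.601-style luma weights; scaling of the difference signals is
# omitted for clarity.

KR, KG, KB = 0.299, 0.587, 0.114  # luma weights for R, G, B

def rgb_to_ycrcb(r, g, b):
    y = KR * r + KG * g + KB * b      # luminance: weighted sum of R, G, B
    cr = r - y                        # "red minus luma" difference
    cb = b - y                        # "blue minus luma" difference
    return y, cr, cb

def ycrcb_to_rgb(y, cr, cb):
    r = y + cr                        # add the red difference back
    b = y + cb                        # add the blue difference back
    g = (y - KR * r - KB * b) / KG    # whatever luma remains must be green
    return r, g, b

if __name__ == "__main__":
    y, cr, cb = rgb_to_ycrcb(0.2, 0.6, 0.4)
    print(ycrcb_to_rgb(y, cr, cb))    # approximately (0.2, 0.6, 0.4)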
Which signal type is best?
Without doubt the answer is RGBHV, but that doesn't necessarily mean that is the best one for
you to use. For starters... it may not be an option available to you.
Which signal should I use?
This depends on a number of factors:
1. What OUTPUTS are available from the Video Source
2. What INPUTS are available on the viewing device
3. What CABLE do I have available
4. Do I have to SWITCH the signal en-route to its destination
For domestic use, S-Video tends to be the most commonly supported quality format.
Almost all AV amplifiers support S-Video switching whereas very few support any higher
formats such as Component or RGB.
TV sets normally only have one input that will support RGB but often have several that
support S-Video. DVD Players generally have the widest range of output formats.
Can I change the signal format?
YES. Converters are readily available to convert an RGB signal into S-Video. A well regarded
converter is the RGB 2 S-Video from JS Technology. Several KAT5 customers are using these
units to convert RGB signals so that they can distribute them around the home as S-Video
sent over Low Cost CAT5 cabling.
Can I change the signal to a HIGHER format?
You CAN... but there is nothing to be gained
ANY form of signal conversion will cause some degradation of the signal. If you attempt to
Upconvert a signal it may well look worse after you have finished. Some of the detail has
already been lost in the down conversion and the best you can hope for is a signal that looks
the same.
As stated at the beginning of this article, the display device ultimately has to display an RGBHV
signal, so the TV or Projector already has circuitry that will do that conversion for you at no
cost. It is in a TV manufacturer's interest to ensure that this conversion is as flawless as
possible, and an awful lot of effort goes into the circuit design. Due to the high production
volumes for TVs, top quality components become more affordable to the manufacturer. By
comparison, an external converter will have much lower sales and will almost certainly be very
expensive or of inferior quality.
KAT5 AV Distribution
S-Video is the ideal format to be distributed around the home over CAT5 cable using KAT5
AVS modules. The four pairs of the CAT5 cable carry the Luminance (Y) and Chrominance (C)
signals and the Left and Right Audio channels.
This gives a vastly superior picture than that obtained from RF distribution with the added
attraction of Stereo Audio. If the source is Dolby Pro-Logic encoded material then surround
sound will be available to any TV sets capable of reproducing it.
The Final Decision
The final decision is down to the user. Wherever possible you should use the highest possible
standard but take into consideration the source material as well. There is no point using your
highest quality input for a Digital TV receiver if the channels you watch have such low bitrates
that the picture suffers badly from pixellation. It will just make it more obvious. Save that
input for your best source such as DVD.
Television Broadcasting Standards
Broadcast television systems are encoding or formatting standards for the transmission and
reception of terrestrial television signals. There are three main analog television systems in
current use around the world: NTSC, PAL, and SECAM. These systems have several
components, including a set of technical parameters for the broadcasting signal, a system for
encoding color, and possibly a system for encoding multichannel television sound (MTS).
In digital television (DTV), all of these elements are combined in a single digital transmission
system.
Frames
Main article: Film frame
Ignoring color, all television systems work in essentially the same manner. The monochrome
image seen by a camera (now, the luminance component of a color image) is divided into
horizontal scan lines, some number of which make up a single image or frame. A monochrome
image is theoretically continuous, and thus unlimited in horizontal resolution, but to make
television practical, a limit had to be placed on the bandwidth of the television signal, which puts
an ultimate limit on the horizontal resolution possible. When color was introduced, this limit of
necessity became fixed. All current analog television systems are interlaced; alternate rows of
the frame are transmitted in sequence, followed by the remaining rows in their sequence. Each
half of the frame is called a video field, and the rate at which fields are transmitted is one of the
fundamental parameters of a video system. It is related to the utility frequency at which the
electricity distribution system operates, to avoid flicker resulting from the beat between the
television screen deflection system and nearby mains generated magnetic fields. All digital, or
"fixed pixel", displays have progressive scanning and must deinterlace an interlaced source. Use
of inexpensive deinterlacing hardware is a typical difference between lower- vs. higher-priced
flat panel displays (Plasma display, LCD, etc.).
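Because fixed-pixel displays must deinterlace an interlaced source, a minimal sketch of one of
the simplest approaches, "bob" deinterlacing, is shown below: each field is expanded to a full
frame by interpolating the missing lines. This is only an illustration of the idea, not the method
any particular display uses.

# Sketch: "bob" deinterlacing of one field (a list of scan lines, each a list
# of pixel values). The field holds only every other line of the frame; the
# missing lines are filled in by averaging their neighbours.

def bob_deinterlace(field, top_field=True):
    height = len(field) * 2
    frame = [None] * height
    # Place the field's lines on even rows (top field) or odd rows (bottom field).
    offset = 0 if top_field else 1
    for i, line in enumerate(field):
        frame[2 * i + offset] = line
    # Interpolate the missing rows from the lines above and below.
    for y in range(height):
        if frame[y] is None:
            above = frame[y - 1] if y > 0 else frame[y + 1]
            below = frame[y + 1] if y < height - 1 else frame[y - 1]
            frame[y] = [(a + b) / 2 for a, b in zip(above, below)]
    return frame

field = [[10, 10], [30, 30]]          # two scan lines of a tiny top field
for row in bob_deinterlace(field):
    print(row)                        # [10, 10], [20, 20], [30, 30], [30, 30]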
All films and other filmed material shot at 24 frames per second must be transferred to video
frame rates using a telecine in order to prevent severe motion jitter effects. Typically, for 25
frame/s formats (Europe, among other countries with a 50 Hz mains supply), the content is simply
played slightly fast ("PAL speedup"), while a technique known as "3:2 pulldown" is used for 30
frame/s formats (North America, among other countries with a 60 Hz mains supply) to match the
film frame rate to the video frame rate without speeding up the playback.
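As a rough sketch of what 3:2 pulldown does, the snippet below holds successive film frames for
alternately two and three video fields, so four film frames fill ten fields (five interlaced video
frames). Whether the cadence starts with a two- or three-field hold varies in practice; this is
purely illustrative.

# Sketch: 3:2 pulldown. Four film frames (A, B, C, D) are spread over ten
# video fields by alternating 2- and 3-field holds, so 24 film frames/s fit
# into roughly 30 interlaced video frames/s (60 fields/s).

def three_two_pulldown(film_frames):
    fields = []
    hold = 2
    for frame in film_frames:
        fields.extend([frame] * hold)   # repeat the frame for 2 or 3 fields
        hold = 5 - hold                 # alternate 2, 3, 2, 3, ...
    return fields

print(three_two_pulldown(list("ABCD")))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']  -> 10 fields from 4 frames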
[edit] Viewing technology
Analog television signal standards are designed to be displayed on a cathode ray tube (CRT), and
so the physics of these devices necessarily controls the format of the video signal. The image on
a CRT is painted by a moving beam of electrons which hits a phosphor coating on the front of
the tube. This electron beam is steered by a magnetic field generated by powerful electromagnets
close to the source of the electron beam.
In order to reorient this magnetic steering mechanism, a certain amount of time is required due to
the inductance of the magnets; the greater the change, the greater the time it takes for the electron
beam to settle in the new spot.
For this reason, it is necessary to shut off the electron beam (corresponding to a video signal of
zero luminance) during the time it takes to reorient the beam from the end of one line to the
beginning of the next (horizontal retrace) and from the bottom of the screen to the top (vertical
retrace or vertical blanking interval). The horizontal retrace is accounted for in the time allotted
to each scan line, but the vertical retrace is accounted for as phantom lines which are never
displayed but which are included in the number of lines per frame defined for each video system.
Since the electron beam must be turned off in any case, the result is gaps in the television signal,
which can be used to transmit other information, such as test signals or color identification
signals.
The temporal gaps translate into a comb-like frequency spectrum for the signal, where the teeth
are spaced at line frequency and concentrate most of the energy; the space between the teeth can
be used to insert a color subcarrier.
[edit] Hidden signalling
Broadcasters later developed mechanisms to transmit digital information on the phantom lines,
used mostly for teletext and closed captioning:
 PAL-Plus uses a hidden signalling scheme to indicate if it exists, and if so what operational
mode it is in.
 NTSC has been modified by the Advanced Television Systems Committee to support an
anti-ghosting signal that is inserted on a non-visible scan line.
 Teletext uses hidden signalling to transmit information pages.
 NTSC Closed Captioning uses signalling that is nearly identical to teletext
signalling.
 Widescreen: all 625 line systems incorporate pulses on line 23 that flag to the display that a
16:9 widescreen image is being broadcast, though this option is not currently used on analog
transmissions.
[edit] Overscan
Main article: Overscan
Television images are unique in that they must incorporate regions of the picture with
reasonable-quality content that will never be seen by some viewers.
[edit] Interlacing
Main article: Interlaced video
In a purely analog system, field order is merely a matter of convention. For digitally recorded
material it becomes necessary to rearrange the field order when conversion takes place from one
standard to another.
[edit] Image polarity
Another parameter of analog television systems, minor by comparison, is the choice of whether
vision modulation is positive or negative. Some of the earliest electronic television systems such
as the British 405-line (system A) used positive modulation. It was also used in the two Belgian
systems (system C, 625 lines, and System F, 819 lines) and the two French systems (system E,
819 lines, and system L, 625 lines). In positive modulation systems, the maximum luminance
value is represented by the maximum carrier power; in negative modulation, the maximum
luminance value is represented by zero carrier power. All newer analog video systems use
negative modulation with the exception of the French System L.
Impulsive noise, especially from older automotive ignition systems, caused white spots to appear
on the screens of television receivers using positive modulation but they could use simple
synchronization circuits. Impulsive noise in negative modulation systems appears as dark spots
that are less visible, but picture synchronization was seriously degraded when using simple
synchronization. The synchronization problem was overcome with the invention of phase-locked
synchronization circuits. When these first appeared in Britain in the early 1950s one name used
to describe them was "flywheel synchronisation".
Older televisions for positive modulation systems were sometimes equipped with a peak video
signal inverter that would turn the white interference spots dark. This was usually user-adjustable
with a control on the rear of the television labelled "White Spot Limiter" in Britain or
"Antiparasite" in France. If adjusted incorrectly it would turn bright white picture content dark.
Most of the positive modulation television systems ceased operation by the mid 1980s. The
French System L continued on up to the transition to digital broadcasting. Positive modulation
was one of several unique technical features that originally protected the French electronics and
broadcasting industry from foreign competition and rendered French TV sets incapable of
receiving broadcasts from neighboring countries.
Another advantage of negative modulation is that, since the synchronizing pulses represent
maximum carrier power, it is relatively easy to arrange the receiver Automatic Gain Control to
only operate during sync pulses and thus get a constant amplitude video signal to drive the rest of
the TV set. This was not possible for many years with positive modulation as the peak carrier
power varied depending on picture content. Modern digital processing circuits have achieved a
similar effect but using the front porch of the video signal.
[edit] Modulation
Given all of these parameters, the result is a mostly-continuous analog signal which can be
modulated onto a radio-frequency carrier and transmitted through an antenna. All analog
television systems use vestigial sideband modulation, a form of amplitude modulation in which
one sideband is partially removed. This reduces the bandwidth of the transmitted signal, enabling
narrower channels to be used.
[edit] Audio
In analog television, the analog audio portion of a broadcast is invariably modulated separately
from the video. Most commonly, the audio and video are combined at the transmitter before
being presented to the antenna, but in some cases separate aural and visual antennas can be used.
In all cases where negative video is used, FM is used for the standard monaural audio; systems
with positive video use AM sound, and intercarrier receiver technology cannot be used.
Stereo, or more generally multi-channel, audio is encoded using a number of schemes which
(except in the French systems) are independent of the video system. The principal systems are
NICAM, which uses a digital audio encoding; double-FM (known under a variety of names,
notably Zweikanalton, A2 Stereo, West German Stereo, German Stereo or IGR Stereo), in which
case each audio channel is separately modulated in FM and added to the broadcast signal; and
BTSC (also known as MTS), which multiplexes additional audio channels into the FM audio
carrier. All three systems are compatible with monaural FM audio, but only NICAM may be
used with the French AM audio systems.
[edit] Evolution
For historical reasons, some countries use a different video system on UHF than they do on the
VHF bands. In a few countries, most notably the United Kingdom, television broadcasting on
VHF has been entirely shut down. Note that the British 405-line system A, unlike all the other
systems, suppressed the upper sideband rather than the lower—befitting its status as the oldest
operating television system to survive into the color era (although it was never officially
broadcast with color encoding). System A was tested with all three color systems, and production
equipment was designed and ready to be built; System A might have survived, as NTSC-A, had
the British government not decided to harmonize with the rest of Europe on a 625-line video
standard, implemented in Britain as PAL-I on UHF only.
The French 819 line system E was a post-war effort to advance France's standing in television
technology. Its 819 lines were almost high definition even by today's standards. Like the British
system A, it was VHF only and remained black & white until its shutdown in 1984 in France and
1985 in Monaco. It was tested with SECAM in the early stages, but later the decision was made
to adopt color in 625-lines. Thus France adopted system L on UHF only and abandoned system
E.
In many parts of the world, analog television broadcasting has been shut down completely, or
restricted only to low-power relay transmitters; see Digital television transition for a timeline of
the analog shutdown.
PC Video
Video File Formats and CODECs
A video codec is a device or software that enables video compression and/or decompression for
digital video. The compression usually employs lossy data compression. Historically, video was
stored as an analog signal on magnetic tape. Around the time when the compact disc entered the
market as a digital-format replacement for analog audio, it became feasible to also begin storing
and using video in digital form, and a variety of such technologies began to emerge.
Audio and video call for customized methods of compression. Engineers and mathematicians
have tried a number of solutions for tackling this problem.
There is a complex balance between the video quality, the quantity of the data needed to
represent it (also known as the bit rate), the complexity of the encoding and decoding algorithms,
robustness to data losses and errors, ease of editing, random access, the state of the art of
compression algorithm design, end-to-end delay, and a number of other factors.
Applications
Digital video codecs are found in DVD systems (players, recorders), Video CD systems, in
emerging satellite and digital terrestrial broadcast systems, various digital devices and software
products with video recording and/or playing capability. Online video material is encoded by a
variety of codecs, and this has led to the availability of codec packs - a pre-assembled set of
commonly used codecs combined with an installer available as a software package for PCs.
Encoding media by the public has seen an upsurge with the availability of CD and DVD-writers.
[edit] Video codec design
Video codecs seek to represent a fundamentally analog data set in a digital format. Because of
the design of analog video signals, which represent luma and color information separately, a
common first step in image compression in codec design is to represent and store the image in a
YCbCr color space. The conversion to YCbCr provides two benefits: first, it improves
compressibility by providing decorrelation of the color signals; and second, it separates the luma
signal, which is perceptually much more important, from the chroma signal, which is less
perceptually important and which can be represented at lower resolution to achieve more
efficient data compression. It is common to represent the ratios of information stored in these
different channels in the following way: Y:Cb:Cr. See the article on chroma subsampling for more
information.
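As a concrete illustration of that first step, here is a minimal sketch converting one RGB pixel
to YCbCr. The BT.601-style weights and the full-range offsets are assumptions made for the
example; real codecs specify their own matrices and value ranges.

# Sketch: RGB -> YCbCr conversion for one pixel, full-range BT.601-style weights.
# Y carries the perceptually important luma; Cb and Cr are colour differences
# that decorrelate well and can later be subsampled.

def clamp(x):
    return max(0.0, min(255.0, x))

def rgb_to_ycbcr(r, g, b):
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = clamp((b - y) * 0.564 + 128)   # scale B-Y and centre on 128
    cr = clamp((r - y) * 0.713 + 128)   # scale R-Y and centre on 128
    return y, cb, cr

print(rgb_to_ycbcr(255, 0, 0))   # a pure red pixel: low-ish Y, Cb below 128, Cr near 255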
Different codecs will use different chroma subsampling ratios as appropriate to their
compression needs. Video compression schemes for Web and DVD make use of a 4:2:0 color
sampling pattern, and the DV standard uses 4:1:1 sampling ratios. Professional video codecs
designed to function at much higher bitrates and to record a greater amount of color information
for post-production manipulation sample in 3:1:1 (uncommon), 4:2:2 and 4:4:4 ratios. Examples
of these codecs include Panasonic's DVCPRO50 and DVCPROHD codecs (4:2:2), Sony's
HDCAM-SR (4:4:4) and Panasonic's HDD5 (4:2:2). Apple's ProRes 422 HQ codec also samples
in 4:2:2 color space. More codecs that sample in 4:4:4 patterns exist as well, but are less
common, and tend to be used internally in post-production houses. It is also worth noting that
video codecs can operate in RGB space as well. These codecs tend not to sample the red, green,
and blue channels in different ratios, since there is less perceptual motivation for doing so; at
most, the blue channel could be undersampled.
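A small sketch of what 4:2:0 subsampling means in practice is given below: each chroma plane is
reduced to half its width and half its height while the luma plane keeps full resolution. The 2x2
box average is used purely for illustration; actual codecs define their own filters and sample
positions.

# Sketch: 4:2:0 chroma subsampling. Luma (Y) stays full resolution; each chroma
# plane (Cb, Cr) is reduced to half width and half height by averaging 2x2 blocks.

def subsample_420(plane):
    h, w = len(plane), len(plane[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block = (plane[y][x] + plane[y][x + 1] +
                     plane[y + 1][x] + plane[y + 1][x + 1])
            row.append(block / 4.0)          # one chroma sample per 2x2 luma block
        out.append(row)
    return out

cb = [[100, 102, 110, 112],
      [101, 103, 111, 113],
      [140, 142, 150, 152],
      [141, 143, 151, 153]]
print(subsample_420(cb))   # [[101.5, 111.5], [141.5, 151.5]]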
Some amount of spatial and temporal downsampling may also be used to reduce the raw data
rate before the basic encoding process. The sampled image data is then typically processed with
a block transform to decorrelate it; the most popular such transform is the 8x8 discrete
cosine transform (DCT). Codecs which make use of a wavelet transform are also entering the
market, especially in camera workflows which involve dealing with RAW image formatting in
motion sequences. The output of the transform is first quantized, then entropy encoding is
applied to the quantized values. When a DCT has been used, the coefficients are typically
scanned using a zig-zag scan order, and the entropy coding typically combines a number of
consecutive zero-valued quantized coefficients with the value of the next non-zero quantized
coefficient into a single symbol, and also has special ways of indicating when all of the
remaining quantized coefficient values are equal to zero. The entropy coding method typically
uses variable-length coding tables. Some encoders can compress the video in a multiple step
process called n-pass encoding (e.g. 2-pass), which performs a slower but potentially better
quality compression.
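To make the zig-zag scan and zero-run coding concrete, the sketch below scans an 8x8 block of
quantized coefficients in zig-zag order and emits (run-of-zeros, value) pairs followed by an
end-of-block marker. The symbol alphabet and code tables differ between codecs; only the
principle is shown here.

# Sketch: zig-zag scan of an 8x8 quantized coefficient block followed by simple
# run-length coding of zeros, ending with an "EOB" (end of block) marker.

N = 8

def zigzag_order(n=N):
    """Return (row, col) pairs in zig-zag order for an n x n block."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def run_length_encode(block):
    symbols, run = [], 0
    for r, c in zigzag_order():
        v = block[r][c]
        if v == 0:
            run += 1
        else:
            symbols.append((run, v))   # "this many zeros, then this value"
            run = 0
    symbols.append("EOB")              # all remaining coefficients are zero
    return symbols

block = [[0] * N for _ in range(N)]
block[0][0], block[0][1], block[1][0], block[0][2] = 34, -3, 2, 1
print(run_length_encode(block))
# [(0, 34), (0, -3), (0, 2), (2, 1), 'EOB']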
The decoding process consists of performing, to the extent possible, an inversion of each stage of
the encoding process. The one stage that cannot be exactly inverted is the quantization stage.
There, a best-effort approximation of inversion is performed. This part of the process is often
called "inverse quantization" or "dequantization", although quantization is an inherently noninvertible process.
This process involves representing the video image as a set of macroblocks. For more
information about this critical facet of video codec design, see B-frames.
Video codec designs are often standardized or will be in the future, i.e., specified precisely in a
published document. However, only the decoding process needs to be standardized to enable
interoperability. The encoding process is typically not specified at all in a standard, and
implementers are free to design their encoder however they want, as long as the video can be
decoded in the specified manner. For this reason, the quality of the video produced by decoding
the results of different encoders that use the same video codec standard can vary dramatically
from one encoder implementation to another.
[edit] Commonly used video codecs
Main article: List of codecs
A variety of video compression formats can be implemented on PCs and in consumer electronics
equipment. It is therefore possible for multiple codecs to be available in the same product,
avoiding the need to choose a single dominant video compression format for compatibility
reasons.
Video in most of the publicly documented or standardized video compression formats can be
created with multiple encoders made by different people. Many video codecs use common,
standard video compression formats, which makes them compatible. For example, video created
with a standard MPEG-4 Part 2 codec such as Xvid can be decoded (played back) using any
other standard MPEG-4 Part 2 codec such as FFmpeg MPEG-4 or DivX Pro Codec, because they
all use the same video format.
Some widely-used software codecs are listed below.
[edit] Lossless codecs
 FFv1: FFv1's compression factor is comparable to Motion JPEG 2000, but based on quicker
algorithms (allows real-time capture). Written by Michael Niedermayer and published as part
of FFmpeg under the GNU GPL.
 Huffyuv: Huffyuv (or HuffYUV) is a very fast, lossless Win32 video codec written by Ben
Rudiak-Gould and published under the terms of the GNU GPL as free software, meant to
replace uncompressed YCbCr as a video capture format.
 Lagarith: A more up-to-date fork of Huffyuv is available as Lagarith.
 YULS
 x264 has a lossless mode.
[edit] MPEG-4 Part 2 codecs
 DivX Pro Codec: A proprietary MPEG-4 ASP codec made by DivX, Inc.
 Xvid: Free/open-source implementation of MPEG-4 ASP, originally based on the OpenDivX
project.
 FFmpeg MPEG-4: Included in the open-source libavcodec codec library, which is used by
default for decoding and/or encoding in many open-source video players, frameworks,
editors and encoding tools such as MPlayer, VLC, ffdshow or GStreamer. Compatible with
other standard MPEG-4 codecs like Xvid or DivX Pro Codec.
 3ivx: A commercial MPEG-4 codec created by 3ivx Technologies.
[edit] H.264/MPEG-4 AVC codecs
 x264: A GPL-licensed implementation of the H.264 video standard. x264 is only an encoder.
 Nero Digital: Commercial MPEG-4 ASP and AVC codecs developed by Nero AG.
 QuickTime H.264: H.264 implementation released by Apple.
 DivX Pro Codec: An H.264 decoder and encoder was added in version 7.
[edit] Microsoft codecs
 WMV (Windows Media Video): Microsoft's family of proprietary video codec designs
including WMV 7, WMV 8, and WMV 9. The latest generation of WMV is standardized by
SMPTE as the VC-1 standard.
 MS MPEG-4v3: A proprietary and not MPEG-4 compliant video codec created by
Microsoft. Released as a part of Windows Media Tools 4. A hacked version of Microsoft's
MPEG-4v3 codec became known as DivX ;-).
[edit] On2 codecs
 VP6, VP6-E, VP6-S, VP7, VP8: Proprietary high definition video compression formats and
codecs developed by On2 Technologies used in platforms such as Adobe Flash Player 8 and
above, Adobe Flash Lite, Java FX and other mobile and desktop video platforms. Supports
resolution up to 720p and 1080p. VP8 has been made open source by Google under the name
libvpx or VP8 codec library.
 libtheora: A reference implementation of the Theora video compression format developed
by the Xiph.org Foundation, based upon On2 Technologies' VP3 codec, and christened by
On2 as the successor in VP3's lineage. Theora is targeted at competing with MPEG-4 video
and similar lower-bitrate video compression schemes.
[edit] Other codecs
 Schrödinger and dirac-research: implementations of the Dirac compression format
developed by BBC Research at the BBC. Dirac provides video compression from web video
up to ultra HD and beyond.
 DNxHD codec: a lossy high-definition video production codec developed by Avid
Technology. It is an implementation of VC-3.
 Sorenson 3: A video compression format and codec that is popularly used by Apple's
QuickTime, sharing many features with H.264. Many movie trailers found on the web use
this compression format.
 Sorenson Spark: A codec and compression format that was licensed to Macromedia for use
in its Flash Video starting with Flash Player 6. It is considered as an incomplete
implementation of the H.263 standard.
 RealVideo: Developed by RealNetworks. A popular compression format and codec
technology a few years ago, now fading in importance for a variety of reasons.
 Cinepak: A very early codec used by Apple's QuickTime.
 Indeo, an older video compression format and codec initially developed by Intel.
All of the codecs above have their qualities and drawbacks. Comparisons are frequently
published. The trade-off between compression power, speed, and fidelity (including artifacts) is
usually considered the most important figure of technical merit.
[edit] Missing codecs and video-file issues
A common problem, when an end user wants to watch a video stream encoded with a specific
codec, is that if the exact codec is not present and properly installed on the user's machine, the
video won't play (or won't play optimally).
MPlayer and VLC media player contain many popular codecs in a portable standalone library,
available for many operating systems, including Windows, Linux, and Mac OS X. This also
avoids many problems on Windows caused by conflicting or poorly installed codecs.
Video Editing Software
Open source software
[edit] Non-linear video editing software
See also: List of free and open source software packages#Video editing
These software applications allow non-linear editing of videos:
 Avidemux (cross-platform)
 AviSynth (Windows)
 Blender VSE (cross-platform)
 CineFX, formerly known as Jahshaka (introduced as "Jahshaka Reinvented") (cross-platform)
 Cinelerra (GNU/Linux)
 Vizrt (Viz Easy Cut)
 Ingex (GNU/Linux)
 Kdenlive (GNU/Linux, Mac OS X, FreeBSD)
 Kino (GNU/Linux)
 LiVES (GNU/Linux, BSD, IRIX, Mac OS X, Darwin)
 Lightworks (Windows; Mac OS X and Linux versions will be released in late 2011)
 Lumiera (GNU/Linux)
 Open Movie Editor (GNU/Linux)
 OpenShot Video Editor (GNU/Linux)
 PiTiVi (GNU/Linux)
 VLMC VideoLan Movie Creator (GNU/Linux, Mac OS X, Windows)
[edit] Video encoding and conversion tools
 FFmpeg
 Format Factory
 HandBrake
 Ingex (GNU/Linux)
 MEncoder
 MPEG Streamclip
 Nandub
 ppmtompeg MPEG-1 encoder, part of netpbm package.
 RAD Game Tools Bink and Smacker
 Thoggen (GNU/Linux)
 VirtualDub (Windows)
 VirtualDubMod (Windows) (based on VirtualDub, but with additional input/output formats)
 VLC Media Player (Microsoft Windows, Mac OS X, GNU/Linux)
 WinFF GUI Video Converter (Linux, Windows)
[edit] Proprietary software
[edit] Non-linear video editing software
 Adobe Systems
o Premiere Elements (Mac OS X, Windows)
o Premiere Pro (Mac OS X, Windows)
o Encore (Mac OS X, Windows)
o After Effects (Mac OS X, Windows)
o Adobe Premiere Express (Adobe Flash Player)
 Apple Inc.
o Final Cut Express (Mac OS X)
o Final Cut Pro (Mac OS X)
o iMovie (Mac OS X)
 ArcSoft ShowBiz (discontinued)
 AVS Video Editor (Windows)
 Autodesk
o Autodesk Smoke (Mac OS X)
 Avid Technology
o Avid DS (Windows)
o Media Composer (Windows, Mac OS X)
o Avid NewsCutter
o Avid Symphony (Windows, Mac OS X)
o Avid Studio (Windows)
o Xpress Pro (discontinued)
o Avid Liquid (discontinued)
 Corel (formerly Ulead Systems)
o VideoStudio (Windows)
o MediaStudio Pro (discontinued)
 CyberLink PowerDirector (Windows)
 Edius from Thomson Grass Valley, formerly Canopus Corporation (Windows)
 Elecard AVC HD Editor
 EVS Broadcast Equipment
o Xedio CleanEdit (Windows)
 FORscene (Java on Mac OS X, Windows, Linux)
 FXhome Limited (HitFilm) (Windows)
 Lightworks (Windows, planned Mac OS X and Linux versions for late 2011)
 Magix
o Video easy
o Movie Edit Pro
o Video Pro X
 Media 100
o HD Suite (Mac OS X)
o HDe (Mac OS X)
o SDe (Mac OS X)
o Producer (Mac OS X)
o Producer Suite (Mac OS X)
 Montage Extreme (Windows)
 muvee Technologies
o muvee Reveal 8.0 (Windows)
o muvee autoProducer 6.0 (Windows)
 NCH Videopad (Windows)
 Nero Vision (Windows)
 NewTek
o Video Toaster (Windows, hardware suite)
 Pinnacle Studio (Windows)
 Quantel
o iQ (Windows)
o eQ (Windows)
o sQ (Windows)
o Newsbox (Windows)
 Roxio
o Creator and MyDVD (Windows)
o Toast (Mac)
 Serif MoviePlus (Windows)
 SGO Mistika (Linux)
 Sony Creative Software
o Sony Vegas Movie Studio (Windows)
o Sony Vegas Pro (Windows)
 Windows Movie Maker (Windows)
 Windows Live Movie Maker (Windows)
 Womble Multimedia
o MPEG Video Wizard DVD (Windows)
o MPEG Video Wizard (Windows)
o MPEG-VCR (Windows)
 Clesh (Java on Mac OS X, Windows, Linux)
[edit] Video encoding and conversion tools
 MPEG Video Wizard DVD (Windows)
 Cinema Craft Encoder (MS Windows)
 Apple Compressor (Mac OS X)
 iCR from Snell & Wilcox (Windows)
 On2 Flix (Mac OS X, Windows)
 ProCoder from Thomson Grass Valley, formerly Canopus Corporation (MS Windows)
 Apple QuickTime Pro (Mac OS X, Windows)
 Roxio Easy Media Creator
 Sorenson Squeeze
 Telestream Episode (Mac OS X, Windows)
 TMPGEnc (Windows)
 Elecard Converter Studio line
[edit] Freeware (free proprietary software)
[edit] Non-linear video editing software
 Pinnacle VideoSpin (Windows)
[edit] Video encoding and conversion tools
 FormatFactory (Windows)
 Ingest Machine DV (Windows)
 MediaCoder
 MPEG Streamclip (Windows, Mac OS X)
 SUPER (Windows) Frontend for ffmpeg, Mencoder and a few other encoders. Contains
DirectShow optimizations as well.
 ZConvert (Windows)
 TMPGEnc Commercial Version (Windows)
 Windows Media Encoder (Windows)
 XMedia Recode
[edit] Online software
[edit] Video encoding and conversion tools
 Zamzar
 Zencoder
[edit] Media management and online video editing
 Kaltura
 Plumi
Animation: Types of Animation
Animation is the rapid display of a sequence of images of 2-D or 3-D artwork or model
positions in order to create an illusion of movement. The effect is an optical illusion of motion
due to the phenomenon of persistence of vision, and can be created and demonstrated in several
ways. The most common method of presenting animation is as a motion picture or video
program, although there are other methods.
Etymology
From Latin animātiō, "the act of bringing to life"; from animō ("to animate" or "give life to") + ātiō ("the act of").
[edit] Early examples
Main article: History of animation
Five images sequence from a vase found in Iran
An Egyptian burial chamber mural, approximately 4000 years old, showing wrestlers in action. Even
though this may appear similar to a series of animation drawings, there was no way of viewing the
images in motion. It does, however, indicate the artist's intention of depicting motion.
Early examples of attempts to capture the phenomenon of motion drawing can be found in
paleolithic cave paintings, where animals are depicted with multiple legs in superimposed
positions, clearly attempting to convey the perception of motion.
A 5,000 year old earthen bowl found in Iran in Shahr-i Sokhta has five images of a goat painted
along the sides. This has been claimed to be an example of early animation.[1] However, since no
equipment existed to show the images in motion, such a series of images cannot be called
animation in a true sense of the word.[2]
A Chinese zoetrope-type device had been invented in 180 AD.[3] The phenakistoscope,
praxinoscope, and the common flip book were early popular animation devices invented during
the 19th century.
These devices produced the appearance of movement from sequential drawings using
technological means, but animation did not really develop much further until the advent of
cinematography.
There is no single person who can be considered the "creator" of film animation, as there were
several people working on projects which could be considered animation at about the same time.
Georges Méliès was a creator of special-effect films; he was generally one of the first people to
use animation with his technique. He discovered a technique by accident which was to stop the
camera rolling to change something in the scene, and then continue rolling the film. This idea
was later known as stop-motion animation. Méliès discovered this technique accidentally when
his camera broke down while shooting a bus driving by. When he had fixed the camera, a hearse
happened to be passing by just as Méliès restarted rolling the film; the end result was that he had
managed to make a bus transform into a hearse. He was just one of the great contributors to
animation in the early years.
The earliest surviving stop-motion advertising film was an English short by Arthur
Melbourne-Cooper called Matches: An Appeal (1899). Developed for the Bryant and May Matchsticks
company, it involved stop-motion animation of wired-together matches writing a patriotic call to
action on a blackboard.
J. Stuart Blackton was possibly the first American film-maker to use the techniques of
stop-motion and hand-drawn animation. Introduced to film-making by Edison, he pioneered these
concepts at the turn of the 20th century, with his first copyrighted work dated 1900. Several of
his films, among them The Enchanted Drawing (1900) and Humorous Phases of Funny Faces
(1906) were film versions of Blackton's "lightning artist" routine, and utilized modified versions
of Méliès' early stop-motion techniques to make a series of blackboard drawings appear to move
and reshape themselves. 'Humorous Phases of Funny Faces' is regularly cited as the first true
animated film, and Blackton is considered the first true animator.
Fantasmagorie by Emile Cohl, 1908
Another French artist, Émile Cohl, began drawing cartoon strips and created a film in 1908
called Fantasmagorie. The film largely consisted of a stick figure moving about and
encountering all manner of morphing objects, such as a wine bottle that transforms into a flower.
There were also sections of live action where the animator’s hands would enter the scene. The
film was created by drawing each frame on paper and then shooting each frame onto negative
film, which gave the picture a blackboard look. This makes Fantasmagorie the first animated
film created using what came to be known as traditional (hand-drawn) animation.
Following the successes of Blackton and Cohl, many other artists began experimenting with
animation. One such artist was Winsor McCay, a successful newspaper cartoonist, who created
detailed animations that required a team of artists and painstaking attention for detail. Each
frame was drawn on paper; which invariably required backgrounds and characters to be redrawn
and animated. Among McCay's most noted films are Little Nemo (1911), Gertie the Dinosaur
(1914) and The Sinking of the Lusitania (1918).
The production of animated short films, typically referred to as "cartoons", became an industry
of its own during the 1910s, and cartoon shorts were produced to be shown in movie theaters.
The most successful early animation producer was John Randolph Bray, who, along with
animator Earl Hurd, patented the cel animation process which dominated the animation industry
for the rest of the decade.
El Apóstol (Spanish: "The Apostle") was a 1917 Argentine animated film utilizing cutout
animation, and the world's first animated feature film.
[edit] Techniques
[edit] Traditional animation
Main article: Traditional animation
An example of traditional animation, a horse animated by rotoscoping from Eadweard Muybridge's 19th
century photos
Traditional animation (also called cel animation or hand-drawn animation) was the process used
for most animated films of the 20th century. The individual frames of a traditionally animated
film are photographs of drawings, which are first drawn on paper. To create the illusion of
movement, each drawing differs slightly from the one before it. The animators' drawings are
traced or photocopied onto transparent acetate sheets called cels, which are filled in with paints
in assigned colors or tones on the side opposite the line drawings. The completed character cels
are photographed one-by-one onto motion picture film against a painted background by a
rostrum camera.
The traditional cel animation process became obsolete by the beginning of the 21st century.
Today, animators' drawings and the backgrounds are either scanned into or drawn directly into a
computer system. Various software programs are used to color the drawings and simulate camera
movement and effects. The final animated piece is output to one of several delivery media,
including traditional 35 mm film and newer media such as digital video. The "look" of traditional
cel animation is still preserved, and the character animators' work has remained essentially the
same over the past 70 years. Some animation producers have used the term "tradigital" to
describe cel animation which makes extensive use of computer technology.
Examples of traditionally animated feature films include Pinocchio (United States, 1940),
Animal Farm (United Kingdom, 1954), and Akira (Japan, 1988). Traditional animated films
which were produced with the aid of computer technology include The Lion King (US, 1994), Sen
to Chihiro no Kamikakushi (Spirited Away) (Japan, 2001), and Les Triplettes de Belleville
(2003).
 Full animation refers to the process of producing high-quality traditionally animated films, which
regularly use detailed drawings and plausible movement. Fully animated films can be done in a
variety of styles, from more realistically animated works such as those produced by the Walt Disney
studio (Beauty and the Beast, Aladdin, Lion King) to the more 'cartoony' styles of those produced by
the Warner Bros. animation studio. Many of the Disney animated features are examples of full
animation, as are non-Disney works such as The Secret of NIMH (US, 1982), The Iron Giant (US,
1999), and Nocturna (Spain, 2007).
 Limited animation involves the use of less detailed and/or more stylized drawings and methods of
movement. Pioneered by the artists at the American studio United Productions of America, limited
animation can be used as a method of stylized artistic expression, as in Gerald McBoing Boing (US,
1951), Yellow Submarine (UK, 1968), and much of the anime produced in Japan. Its primary use,
however, has been in producing cost-effective animated content for media such as television (the
work of Hanna-Barbera, Filmation, and other TV animation studios) and later the Internet (web
cartoons).
 Rotoscoping is a technique, patented by Max Fleischer in 1917, where animators trace live-action
movement, frame by frame. The source film can be directly copied from actors' outlines into
animated drawings, as in The Lord of the Rings (US, 1978), or used in a stylized and expressive
manner, as in Waking Life (US, 2001) and A Scanner Darkly (US, 2006). Some other examples are
Fire and Ice (USA, 1983) and Heavy Metal (1981).
 Live-action/animation is a technique combining hand-drawn characters with live-action shots.
One of the earlier uses was for Koko the Clown, who was drawn over live-action footage. Other
examples include Who Framed Roger Rabbit? (USA, 1988), Space Jam (USA, 1996) and Osmosis
Jones (USA, 2002).
[edit] Stop motion
A stop-motion animation of a moving coin
Main article: Stop motion
Stop-motion animation is used to describe animation created by physically manipulating
real-world objects and photographing them one frame of film at a time to create the illusion of
movement. There are many different types of stop-motion animation, usually named after the
type of media used to create the animation. Computer software is widely available to create this
type of animation.
 Puppet animation typically involves stop-motion puppet figures interacting with each other in a
constructed environment, in contrast to the real-world interaction in model animation. The puppets
generally have an armature inside of them to keep them still and steady as well as constraining
them to move at particular joints. Examples include The Tale of the Fox (France, 1937), The
Nightmare Before Christmas (US, 1993), Corpse Bride (US, 2005), Coraline (US, 2009), the films of Jiří
Trnka and the TV series Robot Chicken (US, 2005–present).
o Puppetoon, created using techniques developed by George Pal, are puppet-animated films
which typically use a different version of a puppet for different frames, rather than simply
manipulating one existing puppet.
 Clay animation, or Plasticine animation, often abbreviated as claymation, uses figures made of clay
or a similar malleable material to create stop-motion animation. The figures may have an armature
or wire frame inside of them, similar to the related puppet animation (above), that can be
manipulated in order to pose the figures. Alternatively, the figures may be made entirely of clay,
such as in the films of Bruce Bickford, where clay creatures morph into a variety of different shapes.
Examples of clay-animated works include The Gumby Show (US, 1957–1967), Morph shorts (UK,
1977–2000), Wallace and Gromit shorts (UK, as of 1989), Jan Švankmajer's Dimensions of Dialogue
(Czechoslovakia, 1982), and The Trap Door (UK, 1984). Films include Wallace & Gromit: The Curse of
the Were-Rabbit, Chicken Run and The Adventures of Mark Twain.
 Cutout animation is a type of stop-motion animation produced by moving 2-dimensional pieces of
material such as paper or cloth. Examples include Terry Gilliam's animated sequences from Monty
Python's Flying Circus (UK, 1969–1974); Fantastic Planet (France/Czechoslovakia, 1973); Tale of
Tales (Russia, 1979); and the pilot episode (and occasional later episodes) of the TV series South Park
(US, 1997).
A clay animation scene from a Finnish television commercial
o Silhouette animation is a variant of cutout animation in which the characters are backlit and
only visible as silhouettes. Examples include The Adventures of Prince Achmed (Weimar
Republic, 1926) and Princes et princesses (France, 2000).
 Model animation refers to stop-motion animation created to interact with and exist as a part of a
live-action world. Intercutting, matte effects, and split screens are often employed to blend
stop-motion characters or objects with live actors and settings. Examples include the work of Ray
Harryhausen, as seen in films such as Jason and the Argonauts (1963), and the work of Willis O'Brien
on films such as King Kong (1933 film).
o Go motion is a variant of model animation which uses various techniques to create motion blur
between frames of film, which is not present in traditional stop-motion. The technique was
invented by Industrial Light & Magic and Phil Tippett to create special effects scenes for the film
The Empire Strikes Back (1980). Another example is the dragon named Vermithrax from
Dragonslayer (1981 film).
 Object animation refers to the use of regular inanimate objects in stop-motion animation, as
opposed to specially created items.
o Graphic animation uses non-drawn flat visual graphic material (photographs, newspaper
clippings, magazines, etc.) which are sometimes manipulated frame-by-frame to create
movement. At other times, the graphics remain stationary, while the stop-motion camera is
moved to create on-screen action.
 Pixilation involves the use of live humans as stop motion characters. This allows for a number of
surreal effects, including disappearances and reappearances, allowing people to appear to slide
across the ground, and other such effects. Examples of pixilation include The Secret Adventures of
Tom Thumb and Angry Kid shorts.
[edit] Computer animation
Main article: Computer animation
A short gif animation of Earth
Computer animation encompasses a variety of techniques, the unifying factor being that the
animation is created digitally on a computer.
[edit] 2D animation
2D animation figures are created and/or edited on the computer using 2D bitmap graphics or
created and edited using 2D vector graphics. This includes automated computerized versions of
traditional animation techniques such as interpolated morphing, onion skinning and
interpolated rotoscoping.
2D animation has many applications, including analog computer animation, Flash animation and
PowerPoint animation. Cinemagraphs are still photographs in the form of an animated GIF file
of which part is animated.
[edit] 3D animation
3D animation is digitally modeled and manipulated by an animator. In order to manipulate a
mesh, it is given a digital skeletal structure that can be used to control the mesh. This process is
called rigging. Various other techniques can be applied, such as mathematical functions (e.g.
gravity, particle simulations), simulated fur or hair, effects such as fire and water, and the use of
motion capture, to name but a few; these techniques fall under the category of 3D dynamics.
Well-made 3D animations can be difficult to distinguish from live action and are commonly used
as visual effects for recent movies. Toy Story (1995, USA) is the first feature-length film to be
created and rendered entirely using 3D graphics.
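As a very small sketch of what giving a mesh a digital skeletal structure involves, the toy
example below poses a two-joint arm by accumulating rotations down a parent-child chain
(forward kinematics). Real rigs also bind mesh vertices to the bones with weights, which is
omitted here, and all names are illustrative only.

# Sketch: forward kinematics for a 2-joint chain (shoulder -> elbow -> wrist).
# Each joint stores a rotation (radians) and the length of the bone that
# follows it; world positions are found by accumulating rotations down the chain.
import math

def pose_chain(joints):
    """joints: list of (angle, bone_length). Returns world positions of bone ends."""
    x = y = 0.0          # chain root at the origin
    total_angle = 0.0
    positions = []
    for angle, length in joints:
        total_angle += angle                 # child inherits the parent's rotation
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
        positions.append((round(x, 3), round(y, 3)))
    return positions

# Shoulder rotated 90 degrees up, elbow bent back 90 degrees.
print(pose_chain([(math.pi / 2, 2.0), (-math.pi / 2, 1.5)]))
# [(0.0, 2.0), (1.5, 2.0)]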
[edit] Terms
 Photo-realistic animation is used primarily for animation that attempts to resemble real life, using
advanced rendering that makes detailed skin, plants, water, fire, clouds, etc. mimic real life.
Examples include Up (2009, USA), Kung-Fu Panda (2008, USA) and Ice Age (2002, USA).
 Cel-shaded animation is used to mimic traditional animation using CG software. The shading looks
stark, with less blending of colours. Examples include Skyland (2007, France), Appleseed (2007,
Japan) and The Legend of Zelda: Wind Waker (2002, Japan).
 Motion capture is used when live-action actors wear special suits that allow computers to copy
their movements into CG characters. Examples include Polar Express (2004, USA), Beowulf (2007,
USA) and Disney's A Christmas Carol (2009, USA).
2D animation techniques tend to focus on image manipulation while 3D techniques usually build
virtual worlds in which characters and objects move and interact. 3D animation can create
images that seem real to the viewer.
[edit] Other animation techniques
 Drawn on film animation: a technique where footage is produced by creating the images directly on
film stock, for example by Norman McLaren, Len Lye and Stan Brakhage.
 Paint-on-glass animation: a technique for making animated films by manipulating slow-drying oil
paints on sheets of glass, for example by Aleksandr Petrov.
 Erasure animation: a technique using traditional 2D media, photographed over time as the artist
manipulates the image. For example, William Kentridge is famous for his charcoal erasure films, and
Piotr Dumała for his auteur technique of animating scratches on plaster.
 Pinscreen animation: makes use of a screen filled with movable pins, which can be moved in or out
by pressing an object onto the screen. The screen is lit from the side so that the pins cast shadows.
The technique has been used to create animated films with a range of textural effects difficult to
achieve with traditional cel animation.
 Sand animation: sand is moved around on a back- or front-lighted piece of glass to create each
frame for an animated film. This creates an interesting effect when animated because of the light
contrast.
 Flip book: a flip book (sometimes, especially in British English, called a flick book) is a book with a
series of pictures that vary gradually from one page to the next, so that when the pages are turned
rapidly, the pictures appear to animate by simulating motion or some other change. Flip books are
often illustrated books for children, but may also be geared towards adults and employ a series of
photographs rather than drawings. Flip books are not always separate books, but may appear as an
added feature in ordinary books or magazines, often in the page corners. Software packages and
websites are also available that convert digital video files into custom-made flip books.
[edit] Other techniques and approaches
 Character animation
 Chuckimation
 Multi-sketching
 Special effects animation
 Animatronics
 Stop motion
Computer Assisted Animation
Computer animation is the process used for generating animated images by using computer
graphics. The more general term computer generated imagery encompasses both static scenes
and dynamic images, while computer animation only refers to moving images.
Modern computer animation usually uses 3D computer graphics, although 2D computer graphics
are still used for stylistic, low bandwidth, and faster real-time renderings. Sometimes the target
of the animation is the computer itself, but sometimes the target is another medium, such as film.
Computer animation is essentially a digital successor to the stop motion techniques used in
traditional animation with 3D models and frame-by-frame animation of 2D illustrations.
Computer generated animations are more controllable than other, more physically based
processes, such as constructing miniatures for effects shots or hiring extras for crowd scenes,
and they allow the creation of images that would not be feasible using any other technology.
They can also allow a single graphic artist to produce such content without the use of actors,
expensive set pieces, or props.
To create the illusion of movement, an image is displayed on the computer screen and repeatedly
replaced by a new image that is similar to it, but advanced slightly in the time domain (usually at
a rate of 24 or 30 frames/second). This technique is identical to how the illusion of movement is
achieved with television and motion pictures.
For 3D animations, objects (models) are built on the computer monitor (modeled) and 3D figures
are rigged with a virtual skeleton. For 2D figure animations, separate objects (illustrations) and
separate transparent layers are used, with or without a virtual skeleton. Then the limbs, eyes,
mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in
appearance between key frames are automatically calculated by the computer in a process known
as tweening or morphing. Finally, the animation is rendered.
For 3D animations, all frames must be rendered after modeling is complete. For 2D vector
animations, the rendering process is the key frame illustration process, while tweened frames are
rendered as needed. For pre-recorded presentations, the rendered frames are transferred to a
different format or medium such as film or digital video. The frames may also be rendered in real
time as they are presented to the end-user audience. Low bandwidth animations transmitted via
the internet (e.g. 2D Flash, X3D) often use software on the end-user's computer to render in real
time, as an alternative to streaming or pre-loaded high bandwidth animations.
In most 3D computer animation systems, an animator creates a simplified representation of a
character's anatomy, analogous to a skeleton or stick figure. The position of each segment of the
skeletal model is defined by animation variables, or Avars. In human and animal characters,
many parts of the skeletal model correspond to actual bones, but skeletal animation is also used
to animate other things, such as facial features (though other methods for facial animation exist).
The character "Woody" in Toy Story, for example, uses 700 Avars, including 100 Avars in the
face. The computer does not usually render the skeletal model directly (it is invisible), but uses
the skeletal model to compute the exact position and orientation of the character, which is
eventually rendered into an image. Thus by changing the values of Avars over time, the animator
creates motion by making the character move from frame to frame.
There are several methods for generating the Avar values to obtain realistic motion.
Traditionally, animators manipulate the Avars directly. Rather than set Avars for every frame,
they usually set Avars at strategic points (frames) in time and let the computer interpolate or
'tween' between them, a process called keyframing. Keyframing puts control in the hands of the
animator, and has roots in hand-drawn traditional animation.
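A minimal sketch of that keyframing idea: an Avar is given values at a few strategic frames and
the computer "tweens" every frame in between, here with simple linear interpolation. Production
systems use spline and easing curves; the variable names below are purely illustrative.

# Sketch: linear "tweening" of one animation variable (Avar) between keyframes.
# keys maps frame number -> value; every in-between frame is interpolated.

def tween(keys, frame):
    frames = sorted(keys)
    if frame <= frames[0]:
        return keys[frames[0]]
    if frame >= frames[-1]:
        return keys[frames[-1]]
    # Find the keyframes that bracket the requested frame.
    for f0, f1 in zip(frames, frames[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)          # 0..1 position between keys
            return keys[f0] + t * (keys[f1] - keys[f0])

# A hypothetical elbow-angle Avar keyed at frames 0, 12 and 24.
elbow_angle = {0: 0.0, 12: 90.0, 24: 30.0}
print([round(tween(elbow_angle, f), 1) for f in range(0, 25, 6)])
# [0.0, 45.0, 90.0, 60.0, 30.0]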
In contrast, a newer method called motion capture makes use of live action. When computer
animation is driven by motion capture, a real performer acts out the scene as if they were the
character to be animated. His or her motion is recorded to a computer using video cameras and
markers, and that performance is then applied to the animated character.
Each method has its advantages, and as of 2007, games and films are using either or both of
these methods in productions. Keyframe animation can produce motions that would be difficult
or impossible to act out, while motion capture can reproduce the subtleties of a particular actor.
For example, in the 2006 film Pirates of the Caribbean: Dead Man's Chest, actor Bill Nighy
provided the performance for the character Davy Jones. Even though Nighy himself doesn't
appear in the film, the movie benefited from his performance by recording the nuances of his
body language, posture, facial expressions, etc. Thus motion capture is appropriate in situations
where believable, realistic behavior and action is required, but the types of characters required
exceed what can be done through conventional costuming.
Computer animation development equipment
Computer animation can be created with a computer and animation software. Some impressive
animation can be achieved even with basic programs; however, the rendering can take a lot of
time on an ordinary home computer. Because of this, video game animators tend to use low
resolution, low polygon count renders, such that the graphics can be rendered in real time on a
home computer. Photorealistic animation would be impractical in this context.
Professional animators of movies, television, and video sequences on computer games make
photorealistic animation with high detail. This level of quality for movie animation would take
tens to hundreds of years to create on a home computer. Many powerful workstation computers
are used instead. Graphics workstation computers use two to four processors, and thus are a lot
more powerful than a home computer, and are specialized for rendering. A large number of
workstations (known as a render farm) are networked together to effectively act as a giant
computer. The result is a computer-animated movie that can be completed in about one to five
years (this process does not consist solely of rendering, however). A workstation typically costs
$2,000 to $16,000, with the more expensive stations being able to render much faster, due to the
more technologically advanced hardware that they contain. Professionals also use digital movie
cameras, motion capture or performance capture, bluescreens, film editing software, props, and
other tools for movie animation.
Creating Movement