Analog Video

Analog video input (either PAL or NTSC) is captured by dedicated hardware, and an FPGA is used for buffering the scanlines between video input and DSP.

From: Multi-Camera Networks, 2009

Electronics Elements (Detailed Discussion)

Thomas Norman CPP, PSP, CSC, in Integrated Security Systems Design (Second Edition), 2014

Capturing and Displaying Analog Video

Analog video is created in a video camera by scanning an electron beam across a phosphor. The beam intensity is determined by the amount of light on each small area of the phosphor, which itself responds to the light focused on it by a lens. The resulting signal is then transmitted to a recording, switching, or display device. Analog switchers simply make a connection between devices by closing a relay dry contact. Recorders simply record the voltage changes of the signal onto tape, and display devices convert the voltage back into an electron beam and aim it at another phosphor surface, which is the display monitor that is viewed.


Video Coding: Fundamentals

Mohammed Ebrahim Al-Mualla, ... David R. Bull, in Video Coding for Mobile Communications, 2002

2.3.3 Analog Video Systems

There are three main analog video systems. In most of Western Europe, a 625/50 PAL system is used. In Russia, France, the Middle East, and Eastern Europe, a 625/50 SECAM system is used. In North America and Japan, a 525/60 NTSC system is used. All three systems are interlaced with a 4:3 aspect ratio.
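
As a quick arithmetic check, the following Python sketch derives the line (horizontal) frequencies implied by these lines/field-rate designations; treating the NTSC field rate as exactly 60/1.001 Hz is an assumption, since the text quotes only the nominal 525/60 figure.

def line_frequency(total_lines, field_rate_hz, fields_per_frame=2):
    # Interlaced scanning: two fields make one frame, so the frame rate is
    # half the field rate; the line rate is lines-per-frame times frame rate.
    frame_rate = field_rate_hz / fields_per_frame
    return total_lines * frame_rate

print(line_frequency(625, 50))          # 625/50 PAL or SECAM -> 15625.0 Hz
print(line_frequency(525, 60 / 1.001))  # 525/60 NTSC         -> ~15734.3 Hz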

The three systems are composite. This means that the chroma components are first bandlimited and then combined (for example, by frequency interleaving) with the luma component. The resulting composite video signal has the same bandwidth as the original luma signal. For example, in the 625/50 PAL system, the luma signal has a bandwidth of 5.5 MHz. The chroma signals are bandlimited to about 1.5 MHz and then QAM (quadrature amplitude modulation) modulated with a color subcarrier at 4.43 MHz above the picture carrier. For a more detailed discussion of these systems, the reader is referred to [13]. There are also other analog video systems that use separate components (component video) or a separate luma component and a composite chroma component (S-video) [10].
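
To make the frequency-interleaving idea concrete, the short Python sketch below quadrature-modulates two stand-in chroma components onto the 4.43 MHz subcarrier and adds them to a stand-in luma waveform. The sample rate, test waveforms, and filter order are illustrative assumptions, and PAL's line-by-line V-axis switch is omitted.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 20e6                          # sample rate for the sketch (Hz)
fsc = 4.43361875e6                 # PAL colour subcarrier frequency (Hz)
t = np.arange(0, 64e-6, 1 / fs)    # one 64-microsecond line, sync/blanking ignored

# Stand-in luma and colour-difference waveforms (real ones come from the camera).
y = 0.5 + 0.2 * np.sin(2 * np.pi * 1.0e6 * t)
u = 0.1 * np.sin(2 * np.pi * 0.5e6 * t)
v = 0.1 * np.cos(2 * np.pi * 0.3e6 * t)

# Band-limit the chroma components to roughly 1.5 MHz, as described above.
b, a = butter(4, 1.5e6 / (fs / 2))
u, v = filtfilt(b, a, u), filtfilt(b, a, v)

# QAM onto the subcarrier, then frequency-interleave with luma simply by adding.
chroma = u * np.sin(2 * np.pi * fsc * t) + v * np.cos(2 * np.pi * fsc * t)
composite = y + chroma             # occupies the same bandwidth as the luma signal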


Lightwave Analog Video Transmission

Mary R. Phillips, Thomas E. Darcie, in Optical Fiber Telecommunications (Third Edition), Volume A, 1997

2 Analog Lightwave Systems

The analog lightwave system must take multiple frequency-division multiplexed video channels at the input, convert them to an optical signal, and transport them over several tens of kilometers of single-mode fiber and/or through passive fiber splitters to an optical receiver where they are converted back to RF. The end-to-end link must satisfy strict noise and distortion requirements. Traditional analog lightwave links are intensity modulated/direct detection (IM/DD) systems. In this technique, the analog video channel spectrum simply modulates the optical power such that the intensity spectrum of the light is the same as the original RF signal (plus a DC component). At the receiver a photodiode converts the modulated intensity back into an RF signal. The simplicity of the IM/DD system makes it attractive, although it comes at the cost of very stringent performance requirements of the lightwave components. The focus of this chapter is on components for a simple IM/DD lightwave system that transports a standard band of amplitude modulated–vestigial sideband (AM-VSB) analog video channels. The lightwave system consists of the transmitter, optional optical amplifiers, a fiber transport network, and an optical receiver.

We discuss the technical requirements of each system component for two exemplary systems. The first is an 80-channel system that is typical of a high-performance cable trunk system. The system performance is defined by several parameters that are discussed later. We desire a carrier-to-noise ratio (CNR) of 52 dB or more, composite second-order (CSO) distortion of –60 dBc or less, and composite triple beat (CTB) of –65 dBc or less; we assume that this is achieved with an optical modulation depth (OMD) of 3.5% per channel. Trunk systems in use today support between 40 and 110 channels, with numbers similar to those in our example. These specifications allow additional degradation from a coaxial distribution system with typically three coaxial-cable amplifiers in cascade, while still achieving the following specifications at the television set: a CNR of 46 dB or more, CSO distortion of –53 dBc or less, and CTB of –53 dBc or less, as required by the National Association of Broadcasters [1]. The requirements for the European PAL systems are somewhat less stringent.

Our second example is for an FTTC or a PON system where degradation from a coax distribution system need not be budgeted. Also, given that we envision the analog video to be delivered with a broad variety of other digital (including video) services, we assume that 50 channels of analog video are sufficient. In this case, the output of the optical receiver needs only a CNR of 47 dB or more, CSO distortion less than or equal to –56 dBc, and CTB less than or equal to –56 dBc. Given the reduced linearity requirements and reduced channel load, we assume that this is achievable with an OMD of 5% per channel.
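
As a rough illustration of how such targets relate to the OMD, the Python sketch below evaluates the usual per-channel CNR expression for an IM/DD link with shot, laser RIN, and receiver thermal noise. The responsivity, RIN, thermal-noise density, received power, and 4-MHz noise bandwidth are assumed values for illustration, not figures taken from this chapter.

import numpy as np

Q_E = 1.602e-19   # electron charge (C)

def imdd_cnr_db(opt_power_mw, omd, responsivity=0.9, rin_db_hz=-155.0,
                thermal_pa_rthz=7.0, noise_bw_hz=4e6):
    # Per-channel CNR of an IM/DD link with shot, RIN, and thermal noise.
    # All default parameters are illustrative assumptions.
    i_p = responsivity * opt_power_mw * 1e-3        # mean photocurrent (A)
    carrier = 0.5 * (omd * i_p) ** 2                # RF carrier power (A^2)
    shot = 2 * Q_E * i_p                            # shot-noise density (A^2/Hz)
    rin = 10 ** (rin_db_hz / 10) * i_p ** 2         # laser RIN density (A^2/Hz)
    thermal = (thermal_pa_rthz * 1e-12) ** 2        # receiver noise density (A^2/Hz)
    return 10 * np.log10(carrier / ((shot + rin + thermal) * noise_bw_hz))

print(imdd_cnr_db(1.0, omd=0.035))   # 80-channel trunk case, ~1 mW received (assumed)
print(imdd_cnr_db(1.0, omd=0.05))    # 50-channel FTTC/PON case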

2.1 VIDEO FORMATS

The challenge in delivering analog video over fiber systems is in meeting strict noise and linearity requirements. These arise from the complexity and fragility of the AM-VSB video format. Brilliantly designed many decades ago [2] for high spectral efficiency, the single-sideband (vestigial sideband [VSB]) amplitude-modulated (AM) format requires 6-MHz channel spacing (8 MHz in Europe) between video carriers. Baseband video, including intensity and color information, is used to amplitude-modulate a video carrier. This is VSB filtered and frequency multiplexed with a frequency-modulated (FM) audio signal, which results in a spectrum as shown in Fig. 14.1. The dominant feature in the spectrum is the remaining video carrier. Video information appears between the video carrier and the color subcarrier, at power levels many tens of decibels below the video carrier. Noise and distortion products must therefore be small in order not to interfere with picture quality.

Fig. 14.1. Single-channel amplitude-modulated vestigial-sideband (AM-VSB) video spectrum showing a video carrier, color and audio subcarriers, and time-averaged modulation sidebands at low levels relative to the video carrier.
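
The Python sketch below illustrates that construction in miniature: a stand-in baseband video waveform amplitude-modulates a carrier, and a bandpass filter then keeps the full upper sideband plus a small vestige of the lower one. The carrier frequency, modulation depth, and filter length are arbitrary choices made only for the sketch.

import numpy as np
from scipy.signal import firwin, lfilter

fs = 60e6                        # sample rate for the sketch (Hz)
fc = 10e6                        # illustrative video carrier frequency (Hz)
t = np.arange(0, 1e-3, 1 / fs)
video = 0.5 + 0.3 * np.sin(2 * np.pi * 1.0e6 * t)   # stand-in baseband video

# Ordinary double-sideband AM of the video carrier ...
am = (1 + 0.8 * (video - 0.5)) * np.cos(2 * np.pi * fc * t)

# ... followed by a vestigial-sideband filter: the full upper sideband (about
# 4.2 MHz for NTSC) is kept, but only ~0.75 MHz of the lower sideband survives.
vsb_taps = firwin(801, [fc - 0.75e6, fc + 4.2e6], pass_zero=False, fs=fs)
am_vsb = lfilter(vsb_taps, 1.0, am)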

Multiple video channels are frequency multiplexed according to a particular plan. The most popular in the United States is the standard National Cable Television Association (NCTA) plan, as shown in Fig. 14.2. Video carriers are nominally spaced by 6 MHz, but with irregularities to fit around the FM radio band. As is seen later, the distribution of distortion products that result from the nonlinear mixing between these multiple carriers provides information about the type of nonlinear impairment involved.

Fig. 14.2. Multichannel AM-VSB spectrum as transmitted on a typical cable television system. The variation in the video carrier level results from instantaneous differences in amplitude modulation (AM) depth. The features resolved in the spectrum are the video carriers and the frequency-modulated (FM) audio subcarriers.
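
A simple way to see how the beat products distribute is to count them. The Python sketch below does this for an idealized, perfectly regular 80-carrier comb; the real NCTA plan's irregularities and products involving a repeated carrier (2fi ± fj) are ignored, so the counts are only indicative.

from itertools import combinations

# Idealized comb of 80 carriers on a regular 6-MHz grid starting at 55.25 MHz.
carriers = [55.25 + 6.0 * n for n in range(80)]

def beat_counts(carriers, victim):
    # Second-order beats (fi +/- fj) cluster about 1.25 MHz away from the
    # carriers of this plan, so they are counted anywhere inside the victim's
    # 6-MHz channel (CSO). Third-order beats of the form fi + fj - fk land on
    # the carriers themselves, so they are counted at the victim carrier (CTB).
    cso = sum(1 for fi, fj in combinations(carriers, 2)
              for beat in (fi + fj, abs(fi - fj))
              if abs(beat - victim) <= 3.0)
    ctb = sum(1 for fi, fj, fk in combinations(carriers, 3)
              for beat in (fi + fj - fk, fi - fj + fk, fj + fk - fi)
              if abs(beat - victim) < 0.01)
    return cso, ctb

print(beat_counts(carriers, carriers[40]))   # mid-band channel: many beats
print(beat_counts(carriers, carriers[0]))    # band edge: far fewer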

The time-varying nature of the live video spectrum makes system diagnostics difficult. Depending on the luminance of the instantaneous point along the image sweep, and the time relative to synchronization and sweep pulses, the magnitude of the instantaneous video carrier varies by up to 6 dB. This makes accurate carrier measurement with a spectrum analyzer difficult. In addition, modulation sidebands obscure distortion products that form the CSO and CTB distortion. In order to perform stable and accurate measurements, the industry uses a set of continuous unmodulated video carriers as a test signal. In what follows, we consider the performance parameters in the context of these unmodulated test signals.

Alternative video formats provide much greater immunity to impairment than AM-VSB provides. FM video has been used for decades for satellite and trunk transmission [3], where a high CNR cannot be achieved. The required 15-dB signal-to-noise ratio (SNR) can be achieved easily over a variety of lightwave systems [4,5]. A typical FM channel requires between 30 and 40 MHz of bandwidth, which makes it unpopular (in the United States) for terrestrial broadcast or cable delivery. Furthermore, the cost of converting between FM and AM-VSB is a disadvantage for systems that deliver FM video to the home. The techniques described in this chapter are applicable to FM video transmission, but many of the impairments that are discussed will not be problematic.

Emerging compressed digital video (CDV) technology will eventually displace AM-VSB and FM. CDV eliminates inter- and intraframe redundancy to compress National Television System Committee (NTSC)-like video into less than 5 Mb/s [6]. When CDV is combined with advanced modem technology, the result is high-quality video with a higher spectral efficiency than that of AM-VSB, and with a much lower required CNR. But, as with FM video, the conversion cost and the embedded base of analog equipment will prevent this new technology from displacing AM-VSB rapidly.

When used with advanced modem technology, CDV allows a trade-off between spectral efficiency and required bandwidth [7,8]. The required CNR increases from less than 20 dB for simple modems like quadrature phase-shift keying (QPSK) (2 bits/s/Hz), to close to 30 dB for 64-QAM (quadrature amplitude modulation) (6 bits/s/Hz). As even higher spectral efficiencies are employed, the digital video channel becomes more like an analog video channel, in terms of transmission requirements. Much of this chapter is therefore applicable to these digital–RF channels. Transmission of both analog and digital–RF channels simultaneously from the same laser has received considerable attention, primarily because of the onset of clipping-induced impulse noise [9,10]. This topic is not discussed in this chapter.
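
A back-of-the-envelope comparison, using only the figures quoted above and ignoring forward-error-correction and framing overhead, shows how many CDV streams one 6-MHz slot could carry compared with a single AM-VSB channel.

# Rough capacity of one 6-MHz slot at the spectral efficiencies quoted above.
channel_bw_mhz = 6.0
cdv_rate_mbps = 5.0     # "less than 5 Mb/s" per NTSC-like compressed stream

for name, bps_per_hz in (("QPSK", 2), ("64-QAM", 6)):
    raw_mbps = channel_bw_mhz * bps_per_hz          # ignores FEC and framing
    print(f"{name}: ~{raw_mbps:.0f} Mb/s per 6-MHz slot, "
          f"about {int(raw_mbps // cdv_rate_mbps)} CDV streams")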

2.2 HFC SYSTEMS

As mentioned in the introduction, the availability of high-performance analog lightwave technology has had a great impact on the cable industry. The key was to be able to replace cascades of dozens of coaxial amplifiers, as shown in Fig. 14.3, with fiber, as shown in Fig. 14.4. Rather than suffering the accumulated noise and distortion of the amplifier chain, high-fidelity analog video could be injected at distributed points throughout the serving area [11]. These fiber nodes (FNs) contain the optical receivers and electronic amplifiers needed to drive relatively short coaxial distribution systems. In addition to improved picture quality, many factors came together to result in the rapid acceptance of this system approach within the cable industry. Because the FNs serve typically between 200 and 2000 subscribers, the lightwave cost per subscriber is small. By eliminating the long amplifier cascades, transmission failure due to amplifier failure is less frequent and affects far fewer subscribers. Finally, because the maximum length of coax serving any subscriber is relatively short, the total bandwidth that can be supported on the coax is increased significantly. Practical amplifier and equalization technology (to compensate for the frequency-dependent loss of coaxial cable) would now enable system bandwidths close to 1 GHz, whereas long amplifier cascades were typically limited to less than 450 MHz.

Fig. 14.3. Prelightwave community-antenna television (CATV) system with long cascades of electronic amplifiers and coax, showing a typical carrier-to-noise (C/N) ratio degradation along spans.

Fig. 14.4. Fiber overlay to the CATV system, in which fiber nodes are used to subdivide serving areas. High-quality signals are delivered far into the serving area using linear lightwave.
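
The benefit of shortening the cascade can be estimated with the usual cable-engineering rules of thumb for N identical amplifiers (noise adds as 10 log N, CTB as roughly 20 log N, CSO as roughly 15 log N). These exponents and the per-amplifier figures in the Python sketch below are conventional assumptions chosen to illustrate the trend, not values given in the chapter.

import numpy as np

def cascade(cnr_db, cso_dbc, ctb_dbc, n_amps):
    # Rule-of-thumb accumulation for N identical amplifiers in cascade: noise
    # powers add (10 log N), triple beats add nearly coherently (20 log N),
    # and second-order products are usually taken to grow as 15 log N.
    return (cnr_db - 10 * np.log10(n_amps),
            cso_dbc + 15 * np.log10(n_amps),
            ctb_dbc + 20 * np.log10(n_amps))

print(cascade(60.0, -75.0, -80.0, 30))   # long pre-fiber trunk of 30 amplifiers
print(cascade(60.0, -75.0, -80.0, 3))    # ~3 amplifiers left after a fiber node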

Various specific HFC systems have been implemented, but the most popular system with both LECs and cable operators has approximately 500 subscribers served from each FN. This is achieved with typically three or fewer coaxial amplifiers in cascade, a reasonable trade-off between present-day cost and performance. An example is shown in Fig. 14.5. More aggressive system designs seek to eliminate all amplifiers outside the FNs by serving fewer than 200 subscribers per FN. These passive coax systems cost more but have better reliability and lower noise in the upstream band (typically from 5 to 40 MHz).

Fig. 14.5. Hybrid fiber coax (HFC) broadband access system. Broadband analog video, broadband digital video, and switched voice and data services are delivered by analog lightwave to fiber nodes. Fiber nodes convert analog optical signals to electrical and drive coaxial distribution systems that serve between 200 and 2000 subscribers. Various network interface units (NIUs) provide the required digital-to-RF conversion.

Analog lightwave is used in three primary ways in HFC systems. First, the cost of converting from the digital or FM video formats that can be delivered over long distances and satellite networks to the AM-VSB formats for cable transmission is high. It is therefore desirable to minimize the number of head ends or central offices in which this is done. This head-end consolidation requires that the multichannel AM-VSB spectrum be distributed over long distances between head ends. This requires extremely high-fidelity performance over distances in the range of 50 km, which can be achieved using linearized high-power lasers and/or optical amplifiers.

The second class of analog application in HFC is for the trunk systems between head ends and FNs, as shown in Fig. 14.5. This requires transmission over usually less than 30 km, with performance as discussed in our 80-channel example system. Directly modulated DFB lasers, usually at 1.3 μm, are generally used for these links. One of these lasers can support multiple (typically four) FNs using passive optical splitters.

The third application is the delivery of narrow band digital information between the head end and the FNs. This includes telephony or switched digital services. These are generally converted to RF channels using modems at the head end or subscriber and transported as RF through the fiber and coax. Separate lasers are often used for these services, as shown in Fig. 14.5, so that spare lasers can be provisioned in the event of failures. Robust modem techniques like QPSK are used to ensure immunity to noise, particularly in the upstream band, so that a high CNR (compared with AM-VSB) generally is not required. These narrowcast lasers can be relatively inexpensive, because the requirements can be met by low-performance DFB lasers or in some cases even Fabry–Perot lasers. Low cost is critical because many more of these lasers are required than are required for AM-VSB delivery.

2.3 LOOP FIBER SYSTEMS

HFC provides a low-cost medium for combined analog video and narrow band digital services, but problems with ingress noise and cable deterioration lead many to prefer more fiber-intensive alternatives. Various systems that deploy FTTC or fiber to the home (FTTH) are attractive, including PONs and several FTTC systems in which a curbside switch serves multiple subscribers. In all cases, one challenge is to deliver the broadcast spectrum of multiple analog video channels to the optical network unit (ONU) at the curb or home.

Figure 14.6 shows an FTTC system that uses a point-to-point fiber feeder, and a point-to-multipoint PON, in which a fiber feeder is shared by multiple ONUs. Either system can be FTTC or FTTH, if feasible economically. The difficulty for analog lightwave in these applications is that each optical receiver serves a small number of subscribers (1–24), so that the cost of analog lightwave components is not shared by as many subscribers as with HFC (typically 500). This constraint is offset somewhat by the reduced performance requirements, as described in our 50-channel example system.

Fig. 14.6. Typical (a) point-to-point and (b) point-to-multipoint subscriber loop access systems. P represents points in the network at which power must be supplied (in addition to the central office [CO]). Each optical network unit (ONU) connected to the host digital terminal (HDT) can serve multiple subscribers in a fiber-to-the-curb configuration, or just one subscriber, for fiber to the home. The point-to-multipoint system uses a fiber splitter or wavelength-division multiplexer (WDM).


Digital Set-Top Terminals and Consumer Interfaces

Walter Ciciora , ... Michael Adams , in Modern Cable Television Technology (Second Edition), 2004

BB Video

This is the common baseband analog video interface described in Chapter 2 and Appendix B. Sound is not included. The significant frequencies extend from nearly 0 to 4.2 MHz for NTSC video and from nearly 0 to 5 MHz for PAL video. Composite video (including sync, as described in Appendix B) is transmitted at a level of 1 volt p-p, with a source and load impedance of 75 ohms. Normally, the common "RCA" connectors are used, though occasionally other connectors, such as BNC, are employed. Impedance matching is important if the connecting cable is long, to prevent ringing or ghosting. The quality of the interface is better than that of the Ch. 3/4 interface described next but not as good as the other analog interfaces described, because the chrominance information is combined with the luminance, and the two must be separated before they can be processed. The filters used to separate the two are imperfect; even if perfect filters were available, there would be some inevitable interference.
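
As an illustration of that separation step, the Python sketch below splits a sampled NTSC composite signal with a notch filter at the color subcarrier (for luma) and a bandpass around it (for chroma). The sample rate, filter orders, and band edges are assumptions; real products use comb filters that do better but, as noted above, still leave some crosstalk.

import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

fs = 13.5e6        # sample rate for the sketch (Hz)
fsc = 3.579545e6   # NTSC colour subcarrier (Hz); PAL would use 4.43 MHz

def split_luma_chroma(composite):
    # Crude Y/C separation: a notch at the subcarrier recovers luma, a
    # bandpass around it recovers chroma.
    b, a = iirnotch(fsc, Q=2.0, fs=fs)
    luma = filtfilt(b, a, composite)
    b, a = butter(2, [fsc - 1.3e6, fsc + 0.6e6], btype="bandpass", fs=fs)
    chroma = filtfilt(b, a, composite)
    return luma, chroma

t = np.arange(4096) / fs
test = 0.3 * np.sin(2 * np.pi * 1.0e6 * t) + 0.1 * np.sin(2 * np.pi * fsc * t)
luma, chroma = split_luma_chroma(test)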


Live HDR Video Broadcast Production

I.G. Olaizola , ... J. Gorostegui , in High Dynamic Range Video, 2017

1.1 SDTV, HDTV, and UHDTV

Digital TV and video technologies were originally based on analog video formats. Therefore, standard definition TV (SDTV) takes its visual characteristics from NTSC (480/60i) and PAL/SECAM (575/50i), as defined by ITU Recommendation BT.601 [1] and by SMPTE 259M [2]. High definition TV (HDTV) introduces a large increase in spatial resolution (720p, 1080i, 1080p), but the color gamut is similar to SDTV and other aspects, such as the frame rate, are not improved. HDTV is defined by ITU-R BT.709 [3] and shares the same color primaries as sRGB. While the resolution of HDTV can be considered acceptable for usual distances between the TV screen and the viewer, both the color gamut (Fig. 1) and the dynamic range of the human visual system (HVS) are far beyond the capabilities of HDTV. Therefore, ultra-high definition TV (UHDTV) improves all the dimensions involved in video: resolution, frame rate, color gamut, and dynamic range. UHDTV has been defined by ITU in ITU-R BT.2020 [4]. This recommendation can be summarized as:

Fig. 1. BT.2020 color space (creative commons BY-SA 3.0).

Aspect ratio: 16:9 (square pixels)

Resolution: 4K (3840 × 2160) or 8K (7680 × 4320)

Frame rate (Hz): 120, 120/1.001, 100, 60, 60/1.001, 50, 30, 30/1.001, 25, 24, 24/1.001

Only progressive scan mode

Coding format: 10 or 12 bits per component

Wide color gamut

Opto-electronic transfer

High dynamic range (not defined yet)

Regarding the HDR capabilities of BT.2020, even though there are no specific details about how to process and represent HDR data, there is a real constraint introduced by the 12 bits per component defined as the maximum bit-depth.
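
For a sense of scale, the raw (uncompressed, full 4:4:4 sampling assumed) data rates implied by the parameters listed above can be computed directly.

# Raw data rates implied by the BT.2020 parameters listed above.
def raw_rate_gbps(width, height, fps, bits_per_component, components=3):
    return width * height * fps * bits_per_component * components / 1e9

print(raw_rate_gbps(3840, 2160, 60, 10))    # 4K, 60 Hz, 10 bit  -> ~14.9 Gb/s
print(raw_rate_gbps(7680, 4320, 120, 12))   # 8K, 120 Hz, 12 bit -> ~143 Gb/s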


Video Interfaces

Keith Jack , in Digital Video and DSP, 2008

Publisher Summary

This chapter focuses on video interfaces. Video interfaces make it possible to exchange video information between devices. Interface standards are classified into analog video interfaces and digital video interfaces. Analog video interface types include S-Video, SCART, SDTV RGB interface, HDTV RGB interface, SDTV YPbPr interface, HDTV YPbPr interface, D-Connector interface, and VGA interface. Most consumer video components in Europe support one or two 21-pin SCART connectors. These connectors allow analog R'G'B' video or S-video, composite video, and analog stereo audio to be transmitted between equipment using a single cable. Some HDTV consumer video equipment supports an analog R'G'B' video interface in which three separate RCA phono connectors (consumer market) or BNC connectors (pro-video and PC market) are used. Most HDTV consumer video equipment supports an analog YPbPr video interface in which three separate RCA phono connectors (consumer market) or BNC connectors (pro-video market) are used. This chapter explains the features of digital video interfaces such as pro-video component interfaces, pro-video transport interfaces, IC component interfaces, consumer component interfaces, and consumer transport interfaces.


Video Surveillance Systems

James Sinopoli , in Smart Building Systems for Architects, Owners and Builders, 2010

Video Transmission

Transmission of the video signal captured from a surveillance camera to the security control center has typically occurred through coaxial cable, the traditional cable for analog video. With changes in the technology, more installations are using unshielded twisted-pair copper cable, fiber optic cable, and wireless solutions. Unshielded twisted pair is even being used with analog cameras, with baluns (an interface between balanced and unbalanced signals) or a manufacturer's proprietary technology, which may allow signaling over long distances.

With IP cameras, transmission is accomplished through unshielded twisted-pair cabling as part of a structured telecommunications cabling system. Fiber optic cable is utilized for exceptionally long cable runs or for exterior cameras where lightning protection is a concern. The distance between the camera and the headend equipment, as well as cost, security of signal, and resolution, may be considered in selecting the physical transmission media.

Wireless transmission can be used for cameras where cable is impractical or costly. Wireless can be deployed rapidly, but it may require power and line of sight between locations; it may also be susceptible to interference. Wireless technologies include Wi-Fi, infrared, microwave, and free-space optics (FSO) systems.


Disc and Tape Recording

Ian Sinclair , in Electronics Simplified (Third Edition), 2011

Video and Digital Recording

The recording of sound on tape presented difficulties enough, and at one time the recording of video signals with a bandwidth of up to 5.5 MHz, and of digital sound, would have appeared to be totally impossible. The main problem is the speed of the tape. For high-quality sound recording, a tape speed of 15 i.p.s. was once regarded as the absolute minimum that could be used for a bandwidth of 30 Hz to about 15 kHz. Improvements in tape and recording head technology made it possible to achieve this bandwidth with speeds of around 1 i.p.s., but there is still a large gap between this performance and what is required for video or for digital sound recording. This amounts to requiring a speed increase of some 300 times the speed required for audio recording. Early video recorders in the 1950s used tape speeds as high as 360 i.p.s. along with very large reels of tape.

Analog video recording, even now, does not cope with the full bandwidth of a video signal, and various methods of coding the signal are used to reduce the bandwidth that is required. In addition, the luminance (black and white) video signals are frequency-modulated on to a carrier, and the color signals that are already in this form have their carrier frequency shifted (see Chapter 8 for more details of luminance and color signals).

For domestic video recorders, the maximum bandwidth requirement can be decreased to about 3 MHz without making the picture quality unacceptable, but the main problem that had to be solved was how to achieve a tape speed that would accommodate even this reduced bandwidth. In fact, the frequency of the carrier ranges between 3.8 and 4.8 MHz as it is frequency-modulated to avoid the problems of uneven amplitude when such high frequencies are recorded on tape.
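
The following Python fragment sketches that luminance FM in miniature: the instantaneous carrier frequency is swept between 3.8 MHz and 4.8 MHz by a stand-in luminance waveform. The sample rate, the test waveform, and the mapping of black to 3.8 MHz and peak white to 4.8 MHz are illustrative assumptions.

import numpy as np

fs = 20e6                          # sample rate for the sketch (Hz)
t = np.arange(0, 64e-6, 1 / fs)    # roughly one television line
luma = 0.5 + 0.5 * np.sin(2 * np.pi * 15.625e3 * t)   # stand-in luminance, 0..1

# The luminance level sweeps the carrier between about 3.8 and 4.8 MHz;
# the resulting FM waveform is what is actually written to the tape.
inst_freq = 3.8e6 + 1.0e6 * luma                # instantaneous frequency (Hz)
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
fm_signal = np.cos(phase)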

The brilliant solution evolved by Alexander Poniatoff (founder of the Ampex corporation) was to move the recording head across the tape rather than move the tape over a head. Two (now often four) heads are used, located on the surface of a revolving drum, and the tape is wound round this drum so that the heads follow a slanting path (a helical scan) from one edge of the tape to the other (Figure 7.13). The signal is switched from head to head so that it is always applied to the head that is in contact with the tape. At a drum rotation speed of around 1500 rotations per minute, this is equivalent to moving the tape past a head at about 5 meters per second.

Figure 7.13. Principle of rotary-head video recording. The two (or more) heads are mounted on a drum, and the tape is wrapped at a slight angle. This makes the head trace out a sloping track across the tape as the drum revolves and the tape is pulled around it at a slow rate
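
A quick calculation shows why the rotating drum solves the speed problem; the 62-mm drum diameter used here is the common VHS figure and is an assumption, not a value given in the text.

import math

drum_diameter_m = 0.062   # common VHS head-drum diameter (assumed figure)
drum_rpm = 1500           # about 25 revolutions per second

# Head-to-tape writing speed: drum circumference times revolutions per second.
writing_speed = math.pi * drum_diameter_m * drum_rpm / 60
print(f"{writing_speed:.1f} m/s")   # ~4.9 m/s, i.e. "about 5 metres per second"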

Though the way that the head and the tape are moved is very different from that used for the older tape recorders, the principles of analog video recording remain unchanged. The block diagram for a video cassette recorder is very different from that of a sound recorder of the older type, but the differences are due to the signal processing that is needed on the video signals rather than to differences in recording principles.

Note

Videotape is, at the time of writing, almost obsolete (as attested by the huge stacks of videotapes in charity shops) and has been superseded by digital versatile disc (DVD), the digital system that uses the same principles as CD. This is particularly suited to digital television signals, and in the UK has been used mainly for players of unnecessarily expensive discs (it costs less to press out a DVD than to record a tape). DVD recorders with a recordable and reusable disc are readily available and of good performance, and another answer to the need for domestic recording and replay has been to use a computer hard drive (magnetic disc) along with conversion circuits so that ordinary analog television signals (as well as digital signals) can be recorded digitally.

These hard drive units typically have a capacity of up to 45 hours, so that a single unit can cope with most domestic recording needs. With the switch to digital television in the UK complete in 2011, the use of hard-drive video recorders will be almost universal. A particular advantage of these hard-drive recorders is that (using buffer stages) they can record and replay simultaneously, so that when the unit is switched on it is possible to view a live program, place it on hold while answering a telephone call or having a meal, and resume viewing later. The facility is available also for digital radios in the areas where reception is possible. Ideally, we might have a hard-drive recorder along with a DVD recorder so that really useful recordings could be preserved.

Later sound recorders (before the extensive use of CD recorders) for very high-quality applications used digital audio tape (DAT). This operated by converting the sound into digital codes (see Chapter 9) and recording these (wide-band) signals on to tape using a helical scan such as is used on video recorders. The main problem connected with DAT is that the recordings are too perfect. On earlier equipment, successive recording (making a copy or a copy of a copy) results in noticeable degradation of the sound quality, but such copying with DAT equipment causes no detectable degradation even after hundreds of successive copies. This would make it easy to copy and distribute music taken from CDs, and the record manufacturers succeeded in preventing this misuse of DAT (though not in the Far East). DAT recorders that were sold in the UK were therefore fitted with circuits that limited the number of copies that could be made, and the DAT system disappeared when recordable CDs and, later, DVDs were developed.

Summary

Tape as a recording medium hardly seems adequate for sound recording, and its use for video and for digital sound has been a triumph of technical development. As so often happens, however, the relentless progress of technology has made tape-based systems obsolescent just as they seemed to have reached their pinnacle of perfection.


Overview

Ivan P. Kaminow , in Optical Fiber Telecommunications (Third Edition), Volume A, 1997

A Survey of Fiber Optics in Local Access Architectures (Chapter 13)

The Telecommunications Act of 1996 has opened the local access market to competition and turmoil. New applications based on switched broadband digital networks, as well as conventional telephone and broadcast analog video networks, are adding to the mix of options. Furthermore, business factors, such as the projected customer take rate, far outweigh technology issues.

In Chapter 13, Nicholas J. Frigo discusses the economics, new architectures, and novel components that enter the access debate. The architectural proposals include fiber to the home (FTTH), TDM PON, WDM PON, hybrid fiber coax (HFC), and switched digital video (SDV) networks. The critical optical components, described in Volume IIIB, include WDM lasers and receivers, waveguide grating routers, and low-cost modulators.


Introduction to Cable Television

Walter Ciciora , ... Michael Adams , in Modern Cable Television Technology (Second Edition), 2004

1.1 Introduction

Cable television is an industry and a technology that has outgrown its historical name. Modern "cable television" networks are used to provide a wide range of services, including analog and digital video, digital audio, high-speed data, and telephony.

The essential distinguishing characteristics of cable television networks are that they include broadband (typically 0.5–1 GHz of total bandwidth), highly linear distribution systems designed to carry many modulated radio frequency (RF) signals, with a minimal amount of mutual interference, between a central point and many customers, with signals delivered to and from terminal equipment via coaxial cables. Because of these characteristics, the networks are service-agnostic to the extent that they will carry any information that can be modulated on a compatible RF carrier. Modern cable television networks are almost always two-way, use optical fiber extensively, and are segmentable so as to allow simultaneous frequency reuse in various network sections.
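
For a rough sense of that capacity, dividing the quoted total bandwidth into standard 6-MHz channel slots gives the figures below; treating the entire band as usable is a simplification, since the downstream band actually starts around 50 MHz.

# Number of standard 6-MHz channel slots in the quoted total bandwidths.
for total_bw_mhz in (500, 750, 1000):
    print(f"{total_bw_mhz} MHz -> {total_bw_mhz // 6} slots of 6 MHz")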

Historically, the cable television business was based exclusively on delivery of television programming, and it has been very successful in that regard. As of 1999, nearly 97% of U.S. television households had cable television service available, and approximately 66 million households subscribed to at least the lowest tier of video service, representing almost 67% of U.S. television households [1]. Those levels have changed only slowly over the past few years.

Because cable television has been so successful and has enjoyed such vigorous growth and acceptance, it has spawned video competitors, including prerecorded media, direct broadcast satellite (DBS), and video streaming over the Internet, as well as the interest of the telephone industry. Increasing revenue streams from connected households, made possible by multiple service offerings, have also changed the economics of the industry sufficiently that customers in some markets now have a choice between two cable television operators who have constructed parallel distribution networks serving the same homes. In the common terminology of the cable industry, the company building the second network is known as an overbuilder. Overbuilders may offer service as a second, franchised cable television operator or as an open video system (OVS) operator. Taking advantage of new technologies and, in particular, the falling prices of electro-optical components, these networks typically use optical fibers to carry signals closer to subscribers than legacy cable operators, in some cases all the way to subscriber homes.

In the world of high-speed data communications, and Internet access in particular, services offered by cable operators have codeveloped with other wired and wireless options. For residential users, Internet access was historically provided almost exclusively through dial-up modems directly to Internet service providers (ISPs). Many applications, however, run discouragingly slowly at the data rates that are possible through standard telephone connections, and this has led cable companies to develop connection services that are 10–100 times faster. In the competition among broadband service providers, cable has outsold its competitors by about 2:1, with a telephone technology known as digital subscriber line (DSL) providing the strongest competition to date. Competing satellite and wireless terrestrial data transport technologies are still developing market share.

In offering voice telephone service, it is the cable operator who is the overbuilder, since residential telephone service was available to virtually every household in the United States before cable television operators entered that market segment. When offered by cable television operators, telephone service is regulated by the same agencies that regulate the incumbent telephone companies, leading to completely different regulatory authorities for services that share the same physical network.

Cable-offered telephony comes in at least two technical versions and two product classifications. Initially, telephone offerings utilized dedicated signal-processing equipment that was carefully engineered to meet the high reliability expectations of this service. As of early 2003, that type of equipment was still used to service the large majority of installed telephone customers. The newest version, known as voice-over-IP (VoIP), shares terminal equipment with the high-speed data service, offering the potential for reduced equipment costs.

Regardless of how the signals are handled technically, cable operators may offer a primary-line service or only a secondary-line service. Primary-line service competes directly with the incumbent telephone operator for all residential telephone business, but it requires that the cable network and equipment meet the reliability standards to be the "life-line" communications link for customers in the case of an emergency (while not mandated by regulatory agencies, this voluntary requirement is generally defined as a service that is available at least 99.99% of the time). Secondary-line service cedes the first line in each home to the incumbent but competes for additional lines. The assumption is that the availability needn't be as high, saving the cable operator capital upgrade cost and allowing it to offer a lower price.

In the following chapters, you will gain a solid understanding of the technologies required to deliver broadband services to and from homes. You will see how the pieces fit together to make up a complete system for the transmission of information and entertainment choices to consumers. If yours is a related business, you will better understand how it fits with the cable industry. If you are already knowledgeable in some aspects of the broadband networks used by cable, this book will fill in the gaps.
