Chapter 2. Fundamentals of Telecommunications
Wireless Media
Since the invention of the wireless telegraph in 1896, communication system designers have sought to use wireless because of its reduced infrastructure cost and complexity compared to wireline communication systems. There is no need to construct miles of telephone poles or cable trenches; a few strategically positioned radio towers can transmit around the world. Today, wireless systems are significantly more complex because we want to allow millions of users to make telephone calls or receive feature-length movies via wireless systems. There are four general types of wireless (radio) communication systems:
- Cellular Telephone
- Basic 2-Way Radio
- Point-to-point
- Wi-Fi (Wireless Fidelity), and recently, Wi-Max
Traffic signal and freeway management systems use three of these variants to support operations, and agencies are considering the use of Wi-Fi. Wi-Fi/Wi-Max systems are becoming increasingly ubiquitous and are now part of most telecommunication deployment strategies. Chapter 7 describes a proposed use of Wi-Max for the Irving, Texas traffic signal system. Wi-Fi/Wi-Max systems are Ethernet based and allow a seamless transition from wireless to wireline.
Cellular Systems
Cellular wireless telephone networks provide users with a mobile extension of their wireline voice networks. During the past 10 years, cellular telephone service has moved from a luxury that only wealthy individuals or corporations could afford to a commodity that is affordable and used by the majority of adults (as well as large numbers of children and teenagers) in North America, Europe, and the Pacific Rim countries. Many developing countries are expanding their telephone networks by building extensive wireless systems, which eliminates the cost of constructing "wired" systems.
Cellular telephone systems come in two basic varieties: analog and digital. The original analog standard in North America was AMPS (advanced mobile phone system); most of the rest of the world adopted GSM (global system for mobile communication), a digital standard. The two systems are not compatible: different telephone handsets are required, or at least a handset that incorporates both systems.
Many digital variants are in the process of being deployed. Because the cost of deployment is so high, cellular carriers have been building the required infrastructure in several stages. The first stage is called 2nd generation. Carriers took the existing analog service, with one user per radio channel, and added multiple-access schemes that allow several users per radio channel: TDMA (time division multiple access) and CDMA (code division multiple access), both digital. The basic plan was to move from 2nd to 3rd generation within a short period of time. The upgrade would provide users with voice and data services, with data throughput significantly higher than 56 Kbps for internet and e-mail access. However, the cost of the upgrade was so high that carriers decided to take an intermediate step – 2.5G – to provide improved voice and low-bandwidth data services. Carriers were hoping that the internet boom of the late 1990s would create consumer demand for wireless internet services. That demand did not materialize, and the carriers slowed their deployment of 3G systems. With the advent of inexpensive Wi-Fi/Wi-Max systems, general consumer and commercial demand, and laptop computers that can run for 5 hours on battery, wireless carriers are deploying "overlay systems" that can provide broadband internet access and general networking services. Wi-Fi/Wi-Max is discussed later in this chapter, and an example of a Wi-Fi/Wi-Max system used for traffic signal control is discussed in Chapter 7.
CDPD
An "overlay system" is one that is built upon and existing infrastructure. Because there is a basic cellular transmitter site infrastructure in place, wireless carriers can deploy Wi-Fi systems for substantially less money than upgrading to 3G. |
CDPD (cellular digital packet data) is a data overlay to the analog cellular telephone system that has been in operation since 1993. The service provides data throughput of 9.6 Kbps (in theory up to 19.2 Kbps). CDPD is being used by a number of communities as a wireless communication link to control traffic signal systems. As the analog cell systems are converted to digital, CDPD is being phased out, and the wireless carriers are not providing a substitute. If you have an existing system that relies on CDPD service, you will need to change to a new service.
Point-to-Point Radio Systems
These are radio systems that communicate between fixed locations. Generally, they are used as a replacement for wireline systems. Point-to-Point systems can be established using almost any radio frequency. However, most systems are developed using frequencies in the microwave spectrum of 800 MHz to 30 GHz.
The Federal Communications Commission has designated groups of frequencies throughout the usable radio spectrum for "fixed service" use.
Microwave
Microwave is a fixed point-to-point service that provides connectivity between major communication nodes. Telephone and long distance companies use the service to provide backup for their cabled (wireline) infrastructure and to reach remote locations. Public Safety agencies use microwave to connect 2-way radio transmitter sites to a central location. Businesses also use these systems for the same purposes.
The frequencies allocated for this service are in the 6 and 11 gigahertz ranges. All users are required to obtain a license from the FCC (Federal Communications Commission). Frequency licenses are granted on a non-interfering (with other users) basis. Systems can be designed to operate over distances of about 20 miles between any two points. Other frequencies available in the 900 megahertz and the 2 and 23 gigahertz ranges do not require a license. Because these frequencies are unlicensed, it is up to users to resolve any interference problems without support from the FCC. As with all microwave, the FCC permits only point-to-point uses. Many DOTs are using spread spectrum systems in the 900 MHz and 2 GHz bands.
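Because microwave paths are line-of-sight, antenna height largely determines how far a link can reach. The short Python sketch below is an illustration, not material from the handbook; it uses the common 4/3-earth rule of thumb that the radio horizon in miles is roughly 1.41 times the square root of the antenna height in feet, and the tower heights shown are assumptions chosen to show why a 20-mile design distance is reasonable.

```python
import math

def radio_horizon_miles(antenna_height_ft: float) -> float:
    """Approximate radio horizon (miles) for an antenna at the given
    height in feet, using the common 4/3-earth rule of thumb."""
    return 1.41 * math.sqrt(antenna_height_ft)

def max_path_miles(h1_ft: float, h2_ft: float) -> float:
    """Approximate maximum line-of-sight path between two antennas."""
    return radio_horizon_miles(h1_ft) + radio_horizon_miles(h2_ft)

# Two 100 ft towers yield roughly a 28-mile line-of-sight path,
# consistent with the ~20-mile design distances cited above.
print(f"{max_path_miles(100, 100):.1f} miles")
```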
Spread Spectrum Radio
Spread spectrum radio is a technology that "spreads" the transmission over a group of radio frequencies. Two techniques are used. The most common, called "frequency hopping," uses one frequency at a time but jumps to another frequency at pre-determined intervals to help provide a "secure" transmission. The second technique spreads the transmission over several frequencies at the same time. Both methods help to prevent interference from other users. These systems are generally used for distances of less than 2 air miles.
Spread spectrum technology for telecommunication systems was originally developed during World War II (6). Most notably, the technology has inherent features that provide a very secure means of communicating, even in "unfriendly" RF environments. Despite widespread military use, the technology was not made available for commercial use until 1995, when the U.S. Federal Communications Commission (FCC) issued rule 15.247 permitting the use of spread spectrum technology for commercial applications in the 900, 2400 and 5800 MHz frequency bands.
The basic concept of spread spectrum refers to an RF modulation technique that spreads the transmitted signal over a wide spectrum (or bandwidth). In contrast to conventional narrow band modulation techniques, which are evaluated by their ability to concentrate a signal in a narrow bandwidth, spread spectrum modulation techniques use a much wider bandwidth.
A typical spread spectrum transmitter integrates the actual signal with a coding sequence of bits (referred to as a pseudo-random code) and spreads the signal over a bandwidth of usually 20 to 30 MHz.
The spreading is actually accomplished using one of two different methods – direct sequence or frequency hopping. Direct sequence spread spectrum uses the pseudo-random code, integrated with the signal, to generate a binary signal that can be duplicated and synchronized at both the transmitter and receiver. The resulting signal evenly distributes the power over a wider frequency spectrum. Direct sequence spread spectrum is usually used to transmit higher speed digital data for T1, or high-speed wireless data networks.
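To make the direct sequence idea concrete, the toy Python sketch below (an illustration, not part of any standard or product) XORs each data bit with a shared pseudo-random chip sequence; the 8-chip code length and fixed seed are assumptions chosen for readability.

```python
import random

def spread(data_bits, chips_per_bit=8, seed=42):
    """Direct-sequence spreading: XOR each data bit with a
    pseudo-random chip sequence shared by transmitter and receiver."""
    rng = random.Random(seed)                      # same seed = same PN code at both ends
    pn_code = [rng.randint(0, 1) for _ in range(chips_per_bit)]
    return [bit ^ chip for bit in data_bits for chip in pn_code], pn_code

def despread(chip_stream, pn_code):
    """Recover data bits by XOR-ing with the same PN code and majority-voting."""
    n = len(pn_code)
    bits = []
    for i in range(0, len(chip_stream), n):
        votes = [c ^ p for c, p in zip(chip_stream[i:i+n], pn_code)]
        bits.append(1 if sum(votes) > n // 2 else 0)
    return bits

chips, code = spread([1, 0, 1, 1])
assert despread(chips, code) == [1, 0, 1, 1]
```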
The second popular type of spread spectrum modulation is frequency hopping. This technique is similar to a conventional narrow band carrier with a narrow transmit bandwidth. The difference from the former is a random hopping sequence within the total channel bandwidth. Spread spectrum modulation techniques have a principal advantage over other radio techniques: the transmitted signal is diluted over a wide bandwidth, which minimizes the amount of power present at any given frequency. The net result is a signal that is below the noise floor of conventional narrow band receivers, but is still within the minimum receiver threshold for a spread spectrum receiver. While the receiver is able to detect very low signal powers, the receivers are also designed to reject unwanted carriers, including signals which are considerably higher in power than the desired spread spectrum signal. Each transmitter and receiver is programmed with unique spreading sequences which are used to de-spread the desired signal and spread the undesired signal, effectively canceling the noise.
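Frequency hopping depends on the transmitter and receiver deriving the same pseudo-random channel pattern. The sketch below is a simplified illustration; the 900 MHz channel list, number of hops, and shared seed are assumptions, not values from any FCC rule or vendor design.

```python
import random

# Hypothetical channel plan in the 902-928 MHz ISM band (MHz); real plans are vendor-specific.
CHANNELS = [902 + 0.5 * n for n in range(50)]

def hop_sequence(shared_seed, hops):
    """Both radios derive the same pseudo-random hop pattern from a shared seed,
    so they land on the same channel at each dwell interval."""
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS) for _ in range(hops)]

tx_pattern = hop_sequence("link-key-17", hops=10)
rx_pattern = hop_sequence("link-key-17", hops=10)
assert tx_pattern == rx_pattern   # transmitter and receiver stay in step
```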
Two-Way Radio
Coverage for all radio systems is expressed in terms of "air miles", because radio waves tend to travel in straight lines.
Two-way radio systems have been in common use since the 1930s. Originally used by the military, various federal agencies, police, fire and ambulance services, and local governments, their use has expanded to include almost every aspect of our social infrastructure, including individual citizens using "Ham Radio" systems. The most commonly used frequencies are in the 30, 150, 450–512 and 800 megahertz ranges. Coverage is usually expressed in terms of "air mile radius"; systems in the 150 MHz band can typically cover a radius of 15 to 30 air miles from a single transmitter location.
The FCC has been encouraging the use of regional systems that incorporate all state, county and municipal agencies into a single group of radio channels. The available radio spectrum is being re-allocated to accommodate these systems. At the same time, the FCC is restricting transmitter power outputs and antenna height. Many of the early (1970s & 1980s) system designs sought to use maximum transmitter power outputs of more than 100 watts and very high antenna sites. This usually created interference problems with other users on the same radio frequency.
Today, many Departments of Transportation are joining forces with public safety agencies to create a common radio communication system. This allows for easier coordination of resources to resolve traffic incidents.
Wi-Fi
Wi-Fi is a term applied to a generic point-to-multipoint data communication service. The Federal Communications Commission (FCC) has set aside radio spectrum in the 900 MHz, 2 GHz and 5 GHz frequency ranges. The frequencies are available for use by the general population and commercial enterprises. No licenses are required, and the only restrictions are that systems not exceed power or antenna height limits. Complete rules governing the use of Wi-Fi systems are listed under FCC rules: Title 47 CFR Part 15.
In its 1989 revision of the Part 15 rules, the Commission established new general emission limits in order to create more flexible opportunities for the development of new unlicensed transmitting devices. These more general rules allow the operation of unlicensed devices for any application provided that the device complies with specified emission limits. This revision also established new "restricted bands" to protect certain sensitive radio operations, such as satellite downlink bands, and federal government operations, and prohibited transmissions by unlicensed devices in those bands. The rules have been further modified to add spectrum and encourage the growth of Wi-Fi systems. The FCC Spectrum Policy Task Force issued a report from its Unlicensed Devices and Experimental Licenses Working Group in November 2002.
The full report is available at the following web site: http://www.fcc.gov/sptf/files/E&UWGFinalReport.doc. The report takes note of the significant growth of Wireless ISPs and the increasing use of this service to provide broadband internet access in rural areas. The report makes recommendations for consideration of increasing authorized power output in rural areas as well as making additional spectrum available.
Many traffic signal and freeway management systems are currently using spread spectrum radio systems in the ISM bands. The following table provides some estimates of the number of devices used in the unlicensed spectrum – this information was provided by the Consumer Electronics Association (CEA) to the FCC in September, 2002:
The IEEE is working on developing and improving a number of wireless transmission standards for Wi-Fi. The most widely used are the 802.11 series. The reader should check the IEEE web site for the latest standards being issued, and consult with equipment vendors and systems engineers to determine which are applicable for their specific requirements. A continuing theme throughout this handbook is that no single system or standard is a solution for all problems.
WLAN
A wireless LAN lets users roam around a building with a computer (equipped with a wireless LAN card) and stay connected to their network without being connected to a wire. The standard for WLANs issued by the Institute of Electrical and Electronics Engineers (IEEE), called 802.11B or "Wi-Fi," is making WLAN use faster and easier. A WLAN can reach a radius of about 150 m indoors and 300 m outdoors. WLANs require a wired access point that connects all the wireless devices into the wired network.
802.11A is designed to transfer data at even higher speeds of up to 54 Mbps in the 5 GHz band.
802.11B transfers data at speeds of up to 11 Mbps in the 2.4 GHz radio band (a license is not required for this band).
802.11G offers data rates of up to 54 Mbps, functions in the 2.4 GHz range, and is backward compatible with 802.11B; equipment using the 802.11B standard will work in an 802.11G system.
WLANs are used on college campuses, in office buildings, and homes, allowing multiple users access to one Internet connection. WLAN hubs are also deployed in many airports, and popular commercial establishments such as coffee shops and restaurants. These hubs allow laptop users to connect to the Internet.
A major drawback of WLANs is the lack of security. Research has found flaws in 802.11 systems: transmissions can be intercepted, making it easy for hackers to interfere with communications. Another problem is overcrowding of the bandwidth. Too many people or businesses using WLANs in the same area can overcrowd the frequency band, causing signal interference, and there are fears that the airwaves may become overloaded. Despite these drawbacks, WLANs are a successful and popular technology that is widespread and being incorporated into most new laptop and personal digital assistant (PDA) computers.
Note: two good sources of information on these types of systems are the Wi-Max organization at http://www.wimaxforum.org, and the Wi-Fi organization at http://www.wi-fi.org.
Wi-MAX
Based on the IEEE 802.16 series of standards, Wi-MAX is a wide area wireless system with a coverage area stated in terms of miles rather than feet. The standard was developed to provide for fixed point-to-multipoint coverage with broadband capabilities. For the latest information on the evolving 802.16 standards, check the IEEE web site: http://grouper.ieee.org/groups/802/16/index.html
"802.16 is a group of broadband wireless communications standards for metropolitan area networks (MANs) developed by a working group of the Institute of Electrical and Electronics Engineers (IEEE). The original 802.16 standard, published in December 2001, specified fixed point-to-multipoint broadband wireless systems operating in the 10-66 GHz licensed spectrum. An amendment, 802.16a, approved in January 2003, specified non-line-of-sight extensions in the 2-11 GHz spectrum, delivering up to 70 Mbps at distances up to 31 miles. Officially called the WirelessMAN™ specification, 802.16 standards are expected to enable multimedia applications with wireless connection and, with a range of up to 30 miles, provide a viable last mile technology." (7)
Chapter 7 presents how the City of Irving, Texas is using an 802.16-based system to reduce the overall cost of deploying a new centrally controlled traffic signal system.
Free Space Optics (FSO)
Free Space Optics (FSO) is another wireless system in use today. Instead of using radio frequencies, this system uses a laser transmitted through the air between two points. The laser can be used for transmission of broadcast quality video. These systems are limited to an effective range of 3 air miles.
Transmission Signaling Interfaces
All Freeway Management and Traffic Signal systems rely on a communications process to support their operations. Some use a very simple process with "low speed" data transmitted between a single device and a central computer. The basic transmission of the data, accomplished by transmitting bit by bit over a single path (wire or some other transmission medium) between two communication points, is called "serial". Other systems use a more complex process with multiple bits transmitted simultaneously over multiple paths between two points, or "parallel". This section looks at the transmission of voice, data, and video and the various types of physical and logical interfaces used for this purpose.
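As a simple illustration of serial transmission, the Python sketch below frames one byte the way an asynchronous serial link does, one bit at a time over a single path. The start/stop framing shown is a generic UART-style convention used for illustration, not a specific traffic controller protocol.

```python
def serialize_byte(value: int):
    """Emit one byte the way a simple asynchronous serial (UART-style) link would:
    a start bit (0), eight data bits least-significant first, then a stop bit (1)."""
    bits = [0]                                    # start bit
    bits += [(value >> i) & 1 for i in range(8)]  # data bits, one at a time
    bits.append(1)                                # stop bit
    return bits

def deserialize(bits):
    """Reassemble the data bits back into a byte, ignoring start/stop framing."""
    data = bits[1:9]
    return sum(b << i for i, b in enumerate(data))

frame = serialize_byte(0x55)
assert deserialize(frame) == 0x55
print(frame)   # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1] -> one bit at a time over one wire
```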
Data & Voice Signaling – Basics
Carrier (the telephone company) operated communication networks are primarily designed to facilitate the transmission of voice in either analog or digital formats. During the 1990s, many new telecommunication companies were formed dedicated to providing efficient transmission of data. Some of these companies attempted to build their own communications networks, but ran out of financial resources because of the enormous expense for construction, operation and maintenance. Many of their customers wanted to use one network for both voice and data, and only wanted to deal with one communications provider.
Voice signaling interfaces are very simple. They are either 2 or 4 wire, and pass frequencies in the 0 to 4,000 Hz range. This same frequency range is used by modems to interface with the telephone networks. The modem converts the output of a computer to voice frequencies. Traffic signal controllers have used modems to communicate over voice based telephone networks for many years. They use a combination of dial-up and private line services. The digital output of the traffic signal field controller is converted to an analog format by the modem.
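The sketch below illustrates, in Python with numpy, how a simple FSK modem maps bits onto audio tones that fit inside the 0 to 4,000 Hz voice band. The Bell-202-style tone pair and bit rate are illustrative assumptions rather than a description of any particular controller modem.

```python
import numpy as np

SAMPLE_RATE = 8000               # samples per second, within the 0-4 kHz voice band
BIT_RATE = 1200                  # bits per second (illustrative)
MARK_HZ, SPACE_HZ = 1200, 2200   # Bell-202-style tone pair (illustrative choice)

def fsk_modulate(bits):
    """Map each bit to an audio tone inside the voice band, the way a simple
    FSK modem converts a controller's digital output to 'voice' frequencies."""
    samples_per_bit = SAMPLE_RATE // BIT_RATE
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    out = [np.sin(2 * np.pi * (MARK_HZ if b else SPACE_HZ) * t) for b in bits]
    return np.concatenate(out)

audio = fsk_modulate([1, 0, 1, 1, 0])
print(audio.shape)   # a short burst of in-band audio ready for a 2- or 4-wire circuit
```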
Data is information content. Analog and digital are formats for transmitting information.
In the early 1960s, the Telephone Companies (Carriers) recognized that customers would want to transmit data via the existing networks. The Carriers began to add equipment and processes to their networks that would support the need. They started with the use of private fixed point-to-point services and then added switched data communication services via the PSTN (Public Switched Telephone Network).
Data can be transmitted in either an analog or a digital format, via a 2- or 4-wire circuit. In an analog system, most data is transmitted using a dial-up modem via the PSTN. Private line systems (leased from a carrier) normally use a 4-wire communication circuit, although some carriers do offer 2-wire private line services. Private line service is provided as either analog or digital. Private-line circuits are always point-to-point and never run through a switch.
Analog private-line circuits are normally referred to as 3002 or 3004. The 3000 designation refers to the available bandwidth; the 2 and 4 refer to the number of wires in the circuit. Digital private-line services include: DDS (Digital Data Service – 56 Kbps or less); T-1/T-3; DS-1/DS-3; Fractional T-1; SONET; and Ethernet (a recent addition to the types of available services).
Very often, the terms T-1/T-3 and DS-1/DS-3 are used interchangeably. However, there is a fundamental difference between the service offerings. T-1/T-3 circuits are formatted by the Carrier into voice channel equivalents. All multiplexing equipment with maintenance and operation is provided by the Carrier. Pricing for these services is usually regulated by state public utility commissions (called a tariff). DS-1/DS-3 circuits are provided without format. The end-user is responsible for supplying the multiplexing equipment to format the circuit. Maintenance and operation is the responsibility of the end-user. The Carrier is only responsible for making certain that the circuit is always available for use. Fees for DS-1/DS-3 circuits are typically not regulated, and are based on market competition.
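The channel arithmetic behind these services can be checked quickly. The short calculation below assumes the standard figures of 64 Kbps per DS-0 voice channel, 24 channels plus 8 Kbps of framing per DS-1, and 28 DS-1 tributaries per DS-3.

```python
# Back-of-the-envelope check of the T-1/DS-1 numbers discussed above.
DS0_KBPS = 64                 # one digitized voice channel
CHANNELS_PER_DS1 = 24
FRAMING_KBPS = 8              # framing overhead added by the carrier

ds1_payload = DS0_KBPS * CHANNELS_PER_DS1          # 1,536 kbps of voice/data
ds1_line_rate = ds1_payload + FRAMING_KBPS         # 1,544 kbps = 1.544 Mbps
print(ds1_payload, ds1_line_rate)

DS1_PER_DS3 = 28
ds3_tributaries = DS1_PER_DS3 * ds1_line_rate      # ~43.2 Mbps of tributaries
print(ds3_tributaries)                             # the DS-3 line rate is 44.736 Mbps;
                                                   # the extra bits are multiplex overhead
```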
Electro-Mechanical Signal Interfaces
Electro-mechanical interfaces for data transmission and signaling normally fall under the following standards: RS-232, RS-422, RS-423, RS-449, and RS-485. Each of these standards provides the connector wiring diagrams and electrical signaling values for communications purposes. These standards were developed by the EIA (Electronic Industries Alliance) and the TIA (Telecommunications Industry Association). More information can be found at the EIA web site – http://www.eia.org – and the TIA web site – http://www.tiaonline.org. Notice in the following diagram that all 25 pins of a 25-pin connector have an assigned function. This is because the connector standard was developed before software took over control of most of these functions. Most personal computers (PCs) use a 9-pin variation; there are also variations that use 3 or 5 pins. Please check the equipment manufacturer's recommendations.
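Software on a PC usually reaches these electro-mechanical interfaces through a standard serial port. The hedged sketch below uses the widely available third-party pyserial package; the port name, speed, and poll frame are illustrative assumptions, not values from this handbook or from any controller standard.

```python
# A minimal sketch using the third-party pyserial package; the port name,
# baud rate, and poll message are assumptions for illustration only.
import serial

ser = serial.Serial(
    port="/dev/ttyUSB0",            # "COM1" on Windows
    baudrate=9600,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    timeout=1.0,                    # seconds to wait before giving up on a read
)

ser.write(b"\x7e\x01POLL\x7e")      # hypothetical poll frame to a field device
reply = ser.read(64)                # whatever the device sends back within 1 s
print(reply)
ser.close()
```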
If you are encountering a communication problem, check the connectors and their wiring pattern. Remember the basic elements of all communications systems: transmitter, receiver, and medium. If the transmitter wire is connected to the same numbered pin at both ends, the receivers can't hear.
Many communication system problems occur because the connectors are not properly wired. Following is a listing of some of the connector standards that use a D-subminiature type connector:
- RS-449 (EIA-449)
- RS-530 (EIA-530)
- RS-232D
- RS-232
- RS-366
- RS-422 (37-pin)
- RS-422 (9-pin)
- Serial (PC25)
- Serial (PC9)
All of the above are based on a similar standard, and manufacturers have latitude to use some of the leads in these connectors for special functions.
If your system is connected via a carrier network, the transmitter-receiver crossover is done in the network. In a private network, the crossover must be accounted for in the basic network design. Double-check to make certain that the design accounts for transmitter-to-receiver connections.
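The crossover can be summarized as a simple pin mapping. The sketch below shows a typical null-modem style crossover for the common 9-pin PC connector; treat the pin assignments as the usual convention, to be verified against the manufacturer's documentation, rather than a universal rule.

```python
# Hedged sketch: a null-modem style cross-over for the common 9-pin PC
# serial connector. Pin assignments shown here are the usual DE-9 DTE
# convention; always verify against the manufacturer's documentation.
DE9_DTE = {2: "RxD", 3: "TxD", 5: "GND", 7: "RTS", 8: "CTS"}

# When two DTE devices are wired directly (no carrier network in between),
# transmit on one end must land on receive at the other end.
NULL_MODEM = {2: 3, 3: 2, 5: 5, 7: 8, 8: 7}

for a_pin, b_pin in sorted(NULL_MODEM.items()):
    print(f"end A pin {a_pin} ({DE9_DTE[a_pin]}) -> end B pin {b_pin} ({DE9_DTE[b_pin]})")
```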
Video Transmission
During the past 15 years, traffic and freeway management agencies have been integrating the use of CCTV cameras into their operational programs. The heavy use of this technology has created a need to deploy very high bandwidth communication networks. The transmission of video is not very different from voice or data. Video is transmitted in either an analog or digital format. Video transmitted in an analog format must travel over coaxial cable or fiber optic cable. The bandwidth requirements cannot be easily handled by twisted pair configurations.
Video can be transmitted in a digital format via twisted pair. It can be transmitted in a broadband arrangement as full quality and full motion, or as a compressed signal offering lower image or motion qualities. Via twisted pair, video is either transmitted in a compressed format, or sent frame-by-frame. The frame-by-frame process is usually called "slow-scan video".
Full color broadcast analog video requires a substantial amount of bandwidth that far exceeds the capacity of the typical twisted pair analog voice communication circuit of 4 KHz. Early commercial television networks were connected via Coaxial cable systems provided by AT&T Long Distance. These networks were very costly to operate and maintain, and had a limited capability.
Transmission of analog video requires large amounts of bandwidth, and power. The most common use of analog video (outside of commercial broadcast TV) is for closed circuit surveillance systems. The cameras used in these systems use less bandwidth than traditional broadcast quality cameras, and are only required to send a signal for several hundred feet. For transmission distances (of analog video) of more than 500 feet, the system designer must resort to the use of triaxial cable, or fiber optics. Depending upon other requirements, the system designer can convert the video to another signal format. The video can be converted to a radio (or light) frequency, digitized, or compressed.
Cable companies have traditionally converted television broadcast signals to a radio frequency. With this technique, they can provide from 8 to 40 analog channels in a cable system using coaxial cable (more about multiplexing later in this chapter). Cable company operators wanting to provide hundreds of program channels will convert the video to a radio frequency, and then digitize. The cable company is able to take advantage of using both fiber and coaxial cable. These are called HFC (hybrid fiber coax) systems. Fiber is used to get the signal from the cable company main broadcast center to a group of houses. The existing coaxial cable is used to supply the signal to individual houses.
Early freeway management systems used analog video converted to RF and transmitted over coaxial cable. Later systems used fiber optic cable with either RF signal conversion, or frequency division multiplexing (see Multiplexing in this chapter).
With the introduction of high-bandwidth microprocessors and efficient video compression algorithms, there has been a shift from analog video transmission systems to digital systems. New processes such as Video over IP (Internet Protocol) and streaming video allow the broadcast of video incident images to many user agencies via relatively low-cost communication networks. Before looking at the systems, let's take a look at the various types of video compression schemes.
Video Compression
Compressed Video – Since the mid-1990s, FMS system designers have turned to digital compression of video to maximize resources, and reduce overall communication systems costs. The digital compression of video allows system operators to move video between operation centers using standard communication networks technologies.
Video compression systems can be divided into two categories – hardware compression and software compression. All video compression systems use a Codec. The term Codec is an abbreviation for coder/decoder. A codec can be either a software application or a piece of hardware that processes video through complex algorithms, which compress the file and then decompress it for playback. Unlike other kinds of file-compression packages that require you to compress/decompress a file before viewing, video codecs decompress the video on the fly, allowing immediate viewing. This discussion will focus on hardware compression technologies.
Video CODECS
Codecs work in two ways – using temporal and spatial compression. Both schemes generally work with "lossy" compression, which means information that is redundant or unnoticeable to the viewer gets discarded (and hence is not retrievable).
Temporal compression is a method that looks for information that is not necessary for continuity to the human eye. It examines the video on a frame-by-frame basis for changes between frames. For example, if you're working with video of a section of freeway, there's a lot of redundant information in the image: the background rarely changes, and most of the motion comes from vehicles passing through the scene. The compression algorithm compares the first frame (known as a key frame) with the next (called a delta frame) to find anything that changes. After the key frame, it keeps only the information that does change, thus discarding a large portion of the image, and it does this for each frame. If there is a scene change, it tags the first frame of the new scene as the next key frame and continues comparing the following frames with this new key frame. As the number of key frames increases, so does the amount of motion delay; this will happen if an operator is panning a camera from left to right.
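The key-frame/delta-frame idea can be illustrated with a toy numpy example: only pixels that changed since the key frame need to be sent. Real codecs work on motion-compensated blocks rather than raw pixel differences, so this is a simplified sketch, and the frame size and threshold are arbitrary assumptions.

```python
import numpy as np

def delta_frame(key, current, threshold=8):
    """Keep only the pixels that changed noticeably since the key frame;
    everything else is marked 'unchanged' and need not be re-transmitted."""
    changed = np.abs(current.astype(int) - key.astype(int)) > threshold
    return changed, current[changed]

# Toy 8-bit grayscale "freeway" frames: static background, one moving vehicle.
key = np.full((240, 320), 120, dtype=np.uint8)
frame = key.copy()
frame[100:110, 50:70] = 255                       # the vehicle

mask, payload = delta_frame(key, frame)
print(payload.size, "of", frame.size, "pixels need to be sent")   # 200 of 76800
```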
Spatial compression uses a different method to delete information that is common to the entire file or an entire sequence within the file. It also looks for redundant information, but instead of specifying each pixel in an area, it defines that area using coordinates.
Both of these compression methods reduce the overall transmission bandwidth requirements. If this is not sufficient, one can make a larger reduction by reducing the frame rate (that is, how many frames of video go by in a given second). Depending on the degree of changes one makes in each of these areas, the final output can vary greatly in quality.
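A quick calculation shows why frame rate matters so much. The figures below (an uncompressed 640 x 480 color image at 24 bits per pixel) are illustrative assumptions, not requirements from this handbook.

```python
# Rough illustration of why frame rate and resolution matter so much.
width, height = 640, 480
bits_per_pixel = 24                     # uncompressed color

def raw_bitrate_mbps(fps):
    return width * height * bits_per_pixel * fps / 1e6

print(f"30 fps: {raw_bitrate_mbps(30):.0f} Mbps")   # ~221 Mbps uncompressed
print(f"15 fps: {raw_bitrate_mbps(15):.0f} Mbps")   # halving the frame rate halves it
print(f" 5 fps: {raw_bitrate_mbps(5):.0f} Mbps")    # 'slow-scan' style rates cut it further
```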
Hardware codecs are an efficient way to compress and decompress video files. Hardware codecs are expensive, but deliver high-quality results. Using a hardware-compression device will deliver high-quality source video, but requires viewers to have the same decompression device in order to watch it. Hardware codecs are used often in video conferencing, where the equipment of the audience and the broadcaster are configured in the same way. A number of standards have been developed for video compression – MPEG, JPEG, and video conferencing.
Video Compression Standards
MPEG stands for the Moving Picture Experts Group. MPEG is an ISO/IEC working group, established in 1988 to develop standards for digital audio and video formats. There are five MPEG standards being used or in development. Each compression standard was designed with a specific application and bit rate in mind, although MPEG compression scales well with increased bit rates.
Following is a list of video compression standards:
- MPEG-1 – designed for transmission rates of up to 1.5 Mbit/sec – is a standard for the compression of moving pictures and audio. It was based on CD-ROM video applications, and is a popular standard for video on the Internet, transmitted as .mpg files. In addition, Layer 3 of MPEG-1 is the most popular standard for digital compression of audio – known as MP3. This standard is available in most of the video codec units supplied for FMS and traffic management systems.
- MPEG-2 – designed for transmission rates between 1.5 and 15 Mbit/sec – is a standard on which Digital Television set top boxes and DVD compression is based. It is based on MPEG-1, but designed for the compression and transmission of digital broadcast television. The most significant enhancement from MPEG-1 is its ability to efficiently compress interlaced video. MPEG-2 scales well to HDTV resolution and bit rates, obviating the need for an MPEG-3. This standard is also provided in many of the video codecs supplied for FMS.
- MPEG-4 – a standard for multimedia and Web compression – MPEG-4 is an object-based compression scheme, similar in nature to the Virtual Reality Modeling Language (VRML). Individual objects within a scene are tracked separately and compressed together to create an MPEG-4 file. The files are sent as data packages and assembled at the viewer end, and the result is a high quality motion picture. However, the more image data that is sent, the greater the lag time (or latency) before the video begins to play. Currently, this compression standard is not well suited for real-time traffic observation systems that require pan-tilt-zoom capability, because the store-and-forward scheme inhibits eye-hand coordination. However, this is an evolving standard, and the latency between image capture and image viewing is being reduced. Latency can be reduced to a minimum if the image and motion quality do not have to meet commercial video production standards; most surveillance systems can function without this quality and can then use pan-tilt-zoom functions.
- MPEG-7 – this standard, currently under development, is also called the Multimedia Content Description Interface. When released, it is hoped that this standard will provide a framework for multimedia content that will include information on content manipulation, filtering and personalization, as well as the integrity and security of the content. Contrary to the previous MPEG standards, which described actual content, MPEG-7 will represent information about the content.
- MPEG-21 – work on this standard, also called the Multimedia Framework, has just begun. MPEG-21 will attempt to describe the elements needed to build an infrastructure for the delivery and consumption of multimedia content, and how they will relate to each other.
- JPEG – stands for Joint Photographic Experts Group, also an ISO/IEC working group, which develops standards for continuous-tone image coding. JPEG is a lossy compression technique used for full-color or gray-scale images; it exploits the fact that the human eye will not notice small color changes. Motion JPEG is a standard used for compression of images transmitted from CCTV cameras. It provides compressed motion in the same manner as MPEG, but each frame is compressed independently using the JPEG standard (see the sketch following this list).
- H.261 – is an ITU standard designed for two-way communication over ISDN lines (video conferencing) and supports data rates that are multiples of 64 Kbit/s.
- H.263 – is based on H.261 with enhancements that improve video quality over modems.
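As a hedged illustration of per-frame (Motion JPEG style) compression, the sketch below uses the third-party OpenCV and numpy packages to JPEG-encode a single stand-in frame; the quality setting and frame contents are assumptions for demonstration only.

```python
# Hedged sketch using the third-party OpenCV (cv2) and numpy packages to show
# Motion-JPEG-style per-frame compression; the quality setting is illustrative.
import cv2
import numpy as np

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in camera frame

ok, jpeg_bytes = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 60])
assert ok
print(f"raw frame: {frame.nbytes} bytes, JPEG frame: {len(jpeg_bytes)} bytes")

# Each frame is compressed independently, so a receiver can decode any frame
# on its own; the trade-off is a higher bit rate than MPEG's delta frames.
decoded = cv2.imdecode(jpeg_bytes, cv2.IMREAD_COLOR)
```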
Streaming Video
Streaming video relies on the video compression standards listed above, primarily the MPEG-4 video codec. Streaming video is not a transmission technique; it is a protocol for the efficient movement of entertainment (or news broadcasts) to individual users via the internet.
A streaming video system requires the use of a video server to store content that is downloaded to the end user via a communications network. The first few seconds of the program are forwarded to the viewer, with the additional information downloaded as the first images are being viewed. This provides the end user with a continuous program, or "stream" of information – hence the term streaming video.
Streaming video can be used to provide travelers with delayed (by five to ten seconds) images from traffic intersections, or live reports from transportation management centers. This technique can also be used to connect public safety agencies to direct video feeds from traffic incident locations via the internet, or an intranet. The video codecs used to support streaming are software based, not hardware based. Several common video codec applications are in use throughout the world. Your desktop PC probably has two (or more) of these applications. Microsoft's "Windows Media Player" and Apple's "Quicktime" are two of the most popular. Real Networks has a very popular media player. These media players are designed to play multiple types of media files. Almost all PC manufacturers include media player software as part of their package. A discussion of Video over IP and Ethernet will be presented in chapters 7 and 9.
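On the receiving side, a software media player or a short script can pull such a stream over the network. The sketch below uses the third-party OpenCV package; the stream URL is a placeholder, not a real feed from any agency mentioned in this handbook.

```python
# Hedged sketch: reading a network video stream with OpenCV; the URL is a
# placeholder used for illustration only.
import cv2

STREAM_URL = "http://example.org/traffic-cam/stream.mjpg"   # hypothetical feed

cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()           # frames arrive already decoded by the codec
    if not ok:
        break                        # stream ended or the network dropped
    cv2.imshow("incident feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```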
Basic Telephone Service
This section will look at the various aspects of basic voice and analog telephone services, including dial-up and special service voice and data circuits. The primary reason for the existence of traditional telephone (including cellular) carriers is to provide person-to-person voice communications. This will remain true as long as our present system of telecommunications endures. Certainly, the methods and processes used to provide that communication have gone through tremendous change in the past 25 years; however, we can count on the basic process continuing into the foreseeable future. Voice over IP (VoIP) is beginning to emerge as a replacement for traditional switched analog voice services. During the writing of this handbook, most of the major communication carriers announced that construction investments would be shifted to "Internet Telephony" systems (the carrier term for VoIP).
P.O.T.S. (plain old telephone service), or dial-up, is the term for the primary telecommunication service that we all use today. The service is always analog (to the end user), is always switched, and is always 2-wire. The call process involves a protocol that keeps telephone sets in an "idle" status until a user wants to make a call. It is important to understand this protocol, because the dial-up modems used on a 170, 2070, or NEMA traffic signal controller, or an ITS device such as a Variable Message Sign, must follow the same process. The process involves using a telephone number to identify the other end of the communication circuit. When making a normal voice telephone call, this may not seem like a very lengthy process; however, in a system that requires polling with fast turn-around times, the process won't work. The reader should note that P.O.T.S. is a shared system and that connections between any two points are not guaranteed. Also, a dial-up connection presents the possibility of a security breach that can be used to corrupt the system.
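The dial-up protocol can be seen in the command sequence a field modem must step through before any data moves, which is why fast polling over P.O.T.S. is impractical. The hedged sketch below uses standard Hayes AT commands via the third-party pyserial package; the port, phone number, poll message, and timings are illustrative assumptions.

```python
# Hedged sketch of the dial-up call process a field modem must follow; the port,
# phone number, and timings are assumptions for illustration only.
import time
import serial

modem = serial.Serial("/dev/ttyS0", 9600, timeout=2)

modem.write(b"ATZ\r")                 # reset the modem to a known (idle) state
time.sleep(1)
modem.write(b"ATDT5551234\r")         # go off-hook and dial the far-end number
time.sleep(30)                        # wait for ringing, answer, and carrier negotiation

response = modem.read(64)             # expect something like b"CONNECT 9600"
if b"CONNECT" in response:
    modem.write(b"POLL 017\r")        # hypothetical request to controller #17
    print(modem.read(128))

modem.write(b"+++")                   # escape back to command mode...
time.sleep(1)
modem.write(b"ATH\r")                 # ...and hang up, returning the line to idle
modem.close()
```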
Special service or fixed point-to-point telephone circuits are used to directly connect field devices to a control point, or one traffic control center to another, or in any process where immediate and guaranteed communication is required. These circuits are specifically designed and constructed for the use of a single customer. They are never routed through a switch. Customers pay an initial fee for installation of the service (this is called a non-recurring charge), and a monthly (recurring charge) use, distance, and maintenance fee.
Several types of special service circuits are available. The most common are 2 and 4 wire – generally referred to as 3002 and 3004. Others provide for special signaling such as E&M (Ear & Mouth), FXO/FXS (Foreign Exchange Office/Station), ARD (Automatic Ring-down).
Traffic Signal, Freeway Management and ITS systems requiring the use of analog special service generally use 3002 and/or 3004 telephone circuits. In circumstances where a direct voice link is required between a TOC and a field office, or Public Safety Dispatch Point, an ARD circuit can be used.
6. Hedy Lamarr, once considered the most beautiful woman in Hollywood, was a co-inventor of the frequency hopping technique for spread spectrum radio. Hedy and her co-inventor George Antheil, a musician, created the technique as part of a guidance system for torpedoes during WWII.