Image Compression Is Vital for a Video Surveillance System

Fredrik Nilsson
SecurityInfoWatch.com
Every digital video surveillance system uses compression in order to manage file size when transporting video over the network for storage and viewing. Bandwidth and storage requirements render uncompressed video impractical and expensive, so compression technologies have emerged as an efficient way to reduce the amount of data sent over the network. In short, compression saves money.
Today there are many kinds of compression available. Compression technology can be proprietary – invented and supported by only one vendor – or based on a standard and supported by many vendors. Selecting the right compression is vital to ensuring the success of a video surveillance installation: the right choice provides the appropriate quality at the budgeted cost and ensures the system is future-proof. Selecting the right compression can even determine whether video is admissible in court cases, an important consideration for security and surveillance installations.
Compression Terminology
The effectiveness of an image compression technique is determined by the compression ratio, calculated as the original (uncompressed) image file size divided by the resulting (compressed) image file size. At a higher compression ratio, less bandwidth is consumed at a given frame rate; if bandwidth is kept constant, the frame rate can be increased. A higher compression ratio also results in lower image quality for each individual image.
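To make the arithmetic concrete, here is a minimal sketch that computes the compression ratio and the bandwidth it implies at a given frame rate. The image dimensions, compressed size, and frame rate are illustrative assumptions, not figures from the article.

```python
# Compression ratio = uncompressed size / compressed size.
# All sizes and the frame rate below are made-up example values.

uncompressed_bytes = 640 * 480 * 3          # a 640x480 frame, 24-bit color
compressed_bytes = 46_080                   # hypothetical JPEG output size

compression_ratio = uncompressed_bytes / compressed_bytes
print(f"Compression ratio: {compression_ratio:.0f}:1")

# Bandwidth needed to push this stream over the network at a given frame rate.
frames_per_second = 30
bandwidth_bps = compressed_bytes * 8 * frames_per_second
print(f"Bandwidth at {frames_per_second} fps: {bandwidth_bps / 1e6:.1f} Mbit/s")
```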
[See Images A, B, C and D at right to compare how different compression formats can affect your final image quality.]
There are essentially two approaches to compression: lossless or lossy. In lossless compression, each pixel is unchanged, resulting in an identical image after the image is decompressed for viewing. Files remain relatively large in a lossless system, which makes them impractical for use in network video solutions. A well-known lossless compression format is the Graphics Interchange Format, better known as a .GIF image.
To overcome these problems, several lossy compression standards have been developed, such as JPEG and MPEG. The fundamental idea in lossy compression is to discard image information that is invisible, or nearly invisible, to the human eye, thereby decreasing the amount of data transmitted and stored.
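As a small illustration of the lossless idea, the following sketch uses Python's built-in zlib module to compress and restore a block of stand-in pixel data. The data is synthetic, and the achievable ratio depends entirely on how much redundancy the input contains; real image content compresses far less than this repetitive pattern.

```python
# Lossless round trip: the restored data is bit-for-bit identical to the
# original, which is exactly what lossy formats give up in exchange for
# much smaller files.
import zlib

raw_pixels = bytes(range(256)) * 4_000      # stand-in for uncompressed image data

packed = zlib.compress(raw_pixels, level=9)
restored = zlib.decompress(packed)

assert restored == raw_pixels               # lossless: nothing is lost
print(f"{len(raw_pixels)} bytes -> {len(packed)} bytes "
      f"(ratio {len(raw_pixels) / len(packed):.1f}:1)")
```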
A Note on Still Images
Video is essentially a stream of individual images. The most widely accepted standard for still image compression is the Joint Photographic Experts Group (JPEG) standard. It was developed in the 1980s and has been integrated into standard Web browsers. JPEG decreases file sizes by making use of similarities between neighboring pixels in the image and the limitations of the human eye. Other lossy image compression techniques include JPEG 2000 and wavelet-based compression. JPEG is by far the most common and most widely supported compression standard for still images.
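For readers who want to see the quality/size trade-off first-hand, here is a minimal sketch using the Pillow imaging library (my choice of tool, not one named in the article); the file names are placeholders for any captured frame you have on disk.

```python
# Save the same frame at two JPEG quality settings and compare file sizes.
# Requires Pillow (pip install pillow); "frame.png" is a hypothetical input.
import os
from PIL import Image

frame = Image.open("frame.png").convert("RGB")   # hypothetical captured frame

frame.save("frame_q90.jpg", "JPEG", quality=90)  # low compression, large file
frame.save("frame_q30.jpg", "JPEG", quality=30)  # high compression, small file

for name in ("frame_q90.jpg", "frame_q30.jpg"):
    print(name, os.path.getsize(name), "bytes")
```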
Motion JPEG is the most commonly used standard in network video systems; however, it is technically a still-image compression technique. When employing Motion JPEG compression, network cameras capture individual images and compress them into JPEG format – similar to a still picture – and there is no compression between the individual frames. A network camera that captures and compresses 30 individual still images per second makes them available as a continuous flow of images, resulting in full-motion video. Because each individual image is a complete JPEG-compressed image, they all have the same guaranteed quality, determined by the compression ratio set for the network camera or video server.
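Since every Motion JPEG frame is a complete JPEG, bandwidth and storage scale linearly with the frame rate. The back-of-the-envelope sketch below shows that scaling; the per-frame size is an assumed example value.

```python
# Motion JPEG: every frame is a complete JPEG, so load grows linearly with
# frame rate. The per-frame size below is a hypothetical example.

jpeg_frame_bytes = 30_000                   # hypothetical compressed frame size

for fps in (5, 15, 30):
    mbit_per_s = jpeg_frame_bytes * 8 * fps / 1e6
    gb_per_day = jpeg_frame_bytes * fps * 86_400 / 1e9
    print(f"{fps:2d} fps -> {mbit_per_s:4.1f} Mbit/s, ~{gb_per_day:5.1f} GB/day")
```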
Video Compression
Video compression uses a similar method as that of still image compression. However, it adds compression between the frames to further reduce the average file size. MPEG is one of the best-known audio and video compression standards and was created by the Moving Picture Experts Group in the late 1980s. MPEG compression uses one frame as a reference; each additional frame saves and transports only the image information that differs from that reference. If there is little change between the images, there will be few differences, resulting in a high compression ratio. With significant movement in the images, the compression ratio will be much lower. The video is then reconstructed at the viewing station based on the reference image and the "difference data." MPEG video compression leads to lower data volumes being transmitted across the network than with JPEG.
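The following sketch is conceptual only: it illustrates the reference-frame-plus-differences idea with a naive pixel comparison, not the motion compensation, transforms, and quantization that real MPEG encoders use. The frames are synthetic.

```python
# Conceptual illustration: a reference frame plus per-frame differences.
import numpy as np

reference = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # fake frame
next_frame = reference.copy()
next_frame[100:120, 200:240] ^= 0xFF        # simulate motion in a small region

diff = next_frame.astype(np.int16) - reference.astype(np.int16)
changed = np.count_nonzero(diff)
print(f"Changed pixels: {changed} of {diff.size} "
      f"({100 * changed / diff.size:.2f}%) -> only these need to be encoded")

# The viewing station rebuilds the frame from the reference and the differences.
reconstructed = (reference.astype(np.int16) + diff).astype(np.uint8)
assert np.array_equal(reconstructed, next_frame)
```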
[Images E and F (above, at right) illustrate the difference between how a Motion JPEG storage format and an MPEG format handle the same video.]
The MPEG standard has evolved since its inception. MPEG-1 was released in 1993 and was intended for storing digital video onto CDs. For MPEG-1, the focus was on keeping the bit-rate (the amount of data transmitted via the network per second) relatively constant. However, this created inconsistent image quality, typically comparable to that of videotapes.
MPEG-2 was approved in 1994 and was designed for video on DVDs, digital high-definition TV, interactive storage media, digital broadcast video, and cable TV. The MPEG-2 project focused on extending the MPEG-1 compression technique to cover larger, higher quality pictures with a lower compression ratio and higher bit-rate.
For network video systems, MPEG-4 is a major improvement over MPEG-2. It was approved as a standard in 2000, and it offers many more tools to lower the required bit-rate and achieve higher image quality. MPEG-4 comes in many different versions: Simple Profile is the lowest quality, while Advanced Simple Profile (Part 2) provides much higher quality video. A newer version of MPEG-4 called Part 10 (or AVC – Advanced Video Coding, or H.264) is also available.
With limited bandwidth available, users can opt for a constant bit-rate (CBR), which holds the bit-rate at a pre-set level. However, the image quality will vary depending on the amount of motion in the scene. As an alternative, users can use a variable bit-rate (VBR), where parameters can be set to maintain high image quality regardless of the motion in the scene. This option is generally preferred in surveillance applications. Because the actual bit-rate will vary with VBR, the network infrastructure must have enough capacity to transport the video.
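To contrast the two modes, here is a toy model, not an encoder: with CBR the bits per frame are fixed and perceived quality drops as scene complexity rises, while with VBR quality is held constant and the bit-rate varies with the scene. All numbers are illustrative.

```python
# Toy model of CBR vs. VBR behaviour as scene complexity changes.

scene_complexity = [1.0, 1.0, 3.0, 5.0, 2.0, 1.0]   # motion enters at frame 3

cbr_kbit_per_frame = 500        # fixed budget per frame (CBR)
vbr_target_quality = 100        # arbitrary quality units (VBR)

for i, complexity in enumerate(scene_complexity, start=1):
    cbr_quality = cbr_kbit_per_frame / complexity    # quality drops with motion
    vbr_kbit = vbr_target_quality * complexity       # bit-rate rises instead
    print(f"frame {i}: CBR quality {cbr_quality:6.1f} | VBR {vbr_kbit:6.1f} kbit")
```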
The MPEG-4 vs. Motion JPEG Debate
As described above, MPEG-4 and Motion JPEG each employ a different technique for reducing the amount of data transferred and stored in a network video system. There are advantages and disadvantages to each, so it is best to consider the goals of the overall surveillance system when deciding which of the two standards is most appropriate.
Due to its simplicity, Motion JPEG is often a good choice. There is limited delay between image capturing, encoding, transfer, decoding, and finally display. In other words, Motion JPEG has very little latency, making it most suitable for real-time viewing, image processing, motion detection or object tracking.
Motion JPEG also guarantees image quality regardless of movement or image complexity. It offers the flexibility to select either high image quality (low compression) or lower image quality (high compression), with the benefit of smaller file sizes and decreased bandwidth usage. The frame rate can easily be adjusted to limit bandwidth usage, without loss of image quality.
However, Motion JPEG files are still typically larger than those compressed with the MPEG-4 standard. MPEG-4 requires less bandwidth and storage to transfer data, resulting in cost savings. At lower frame rates (below 5 fps), the bandwidth savings offered by MPEG-4 are limited. Employing Motion JPEG network cameras with built-in video motion detection is an interesting alternative if a higher frame rate is required only part of the time, when there is motion in the image. If the bandwidth is limited, or if video is to be recorded continuously at a high frame rate, MPEG-4 may be the preferred option. Because of the more complex compression in an MPEG-4 system, there is more latency before video is available at the viewing station, and the viewing station needs to be more powerful (and hence more expensive) to decode MPEG-4 streams than to decode Motion JPEG streams.
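As a rough illustration of the storage argument, the sketch below compares one month of continuous recording from a single camera under the two approaches. The per-frame size and the average bit-rate are assumed example figures, not measured values.

```python
# Rough sizing comparison for one camera recording continuously for 30 days.

fps = 30
days = 30
seconds = days * 86_400

mjpeg_frame_bytes = 30_000                  # hypothetical JPEG frame size
mjpeg_gb = mjpeg_frame_bytes * fps * seconds / 1e9

mpeg4_mbit_per_s = 2.0                      # hypothetical average bit-rate
mpeg4_gb = mpeg4_mbit_per_s * 1e6 / 8 * seconds / 1e9

print(f"Motion JPEG: ~{mjpeg_gb:,.0f} GB over {days} days")
print(f"MPEG-4:      ~{mpeg4_gb:,.0f} GB over {days} days")
```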
One of the best ways to maximize the benefits of both standards is to look for network video products that can deliver simultaneous MPEG-4 and Motion JPEG streams. This gives users the flexibility to both maximize image quality for recording and reduce bandwidth needs for live viewing.
One other item to keep in mind is that both MPEG-2 and MPEG-4 are subject to licensing fees, which can add to the cost of maintaining a network video system. It is important to ask your vendor whether the license fees have been paid; if not, you will incur additional costs later on.
Other Considerations
Another important consideration is the use of proprietary compression. Some vendors do not adhere fully to a standard, or use their own techniques. If proprietary compression is used, users will no longer be able to access or view their files should that particular vendor stop supporting the technology.
Proprietary compression also comes into consideration if the surveillance video will potentially be used in court. If so, using industry-standard compression helps ensure that video evidence will be admissible. Some courts hold that evidentiary video should consist of individual frames that are neither related to each other nor manipulated, which would rule out MPEG because of the way the information is processed. The British court system, which has led the way on digital video admissibility, requires an audit trail that describes how the images were obtained, where they were stored, and so on, to make sure the information has not been tampered with in any way. As digital video becomes more widely adopted, the issue of admissibility in court will be one to watch.
Compression is one of the most important factors in building a successful network video system. It influences image and video quality, latency, and the cost of the network and storage, and it can even determine whether video is admissible in court. Because of these considerations, it is important to choose your compression standard carefully … otherwise, the video may be rendered useless for your purposes.
Does one compression standard fit all?
When considering this question and when designing a network video application, the following issues should be addressed (a rough sizing sketch follows the list):
What frame rate is required?
Is the same frame rate needed at all times?
Is recording/monitoring needed at all times, or only upon motion/event?
For how long must the video be stored?
What resolution is required?
What image quality is required?
What level of latency (total time for encoding and decoding) is acceptable?
How robust/secure must the system be?
What is the available network bandwidth?
What is the budget for the system?
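The minimal sketch below turns a few of those answers into rough bandwidth and storage estimates. Every parameter is an assumption to be replaced with the figures for your own design, and the per-frame sizing is just one simple way to model the load.

```python
# Rough sizing from design-question answers: camera count, frame rate,
# retention period, and how much of the time recording actually happens.

cameras = 8
fps = 15
avg_frame_bytes = 25_000        # assumed compressed frame size
retention_days = 14
recording_fraction = 0.25       # e.g. motion-triggered recording only

live_bandwidth_mbit = cameras * avg_frame_bytes * 8 * fps / 1e6
storage_gb = (cameras * avg_frame_bytes * fps * recording_fraction
              * retention_days * 86_400 / 1e9)

print(f"Live viewing bandwidth: ~{live_bandwidth_mbit:.1f} Mbit/s")
print(f"Storage for {retention_days} days: ~{storage_gb:,.0f} GB")
```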
About the author: As the general manager for Axis Communications, Fredrik Nilsson oversees the company’s operations in North America. In this role, he manages all aspects of the business, including sales, marketing, business expansion and finance. He can be reached via email at [email protected].
