Year: 2011 | Volume: 28 | Issue: 1 | Page: 29-39
Motion Vector Recovery Based Error Concealment for H.264 Video Communication: A Review
Kavish Seth1, V Kamakoti2, S Srinivasan1
1 Department of Electrical Engineering, Indian Institute of Technology - Madras, Chennai - 600036, India
2 Department of Computer Science & Engineering, Indian Institute of Technology - Madras, Chennai - 600036, India
Date of Web Publication: 3-Jan-2011
Abstract
Error concealment in video communication is becoming increasingly important because of the growing interest in video delivery over unreliable channels such as wireless networks and the Internet. A subclass of error concealment techniques, known as motion vector recovery (MVR), attempts to retrieve the lost motion information in compressed video streams from the information available in the spatial and temporal neighborhood of the lost data. This paper reviews the activity in MVR-based error concealment over the last two decades and presents a performance comparison of the prominent MVR techniques.
Keywords: Digital video, Error concealment, H.264, Motion vectors, Video communication.
1. Introduction
With the explosive growth of the Internet and wireless networks, video services over these networks are becoming increasingly popular. As a result, several compression-oriented coding standards, such as MPEG-2/4, H.263, and H.264, have been developed to transmit video sequences over bandwidth-limited channels. The common techniques employed by these video-coding standards include the discrete cosine transform (DCT), motion estimation/motion compensation (ME/MC), and variable length coding (VLC).
All of these techniques achieve compression by exploiting the spatial, temporal, and statistical redundancy in video streams. However, highly compressed video bit streams are susceptible to transmission errors. These errors can propagate both spatially and temporally while decoding the current and subsequent video frames, leading to severe degradation in visual quality at the receiving side. Transmission errors can be roughly classified into two categories: 'random bit errors' and 'erasure errors'. Random bit errors are caused by the imperfections of physical channels, resulting in bit insertion, bit inversion, and/or bit deletion. Depending on the coding method and the affected information content, the impact of random bit errors can range from negligible to objectionable. Erasure errors, on the other hand, can be caused by packet loss in packet networks, burst errors in storage media due to physical defects, or transient system failures. The effect of erasure errors is much more destructive than that of random bit errors, since the former damage a contiguous segment of bits. To illustrate the visual artifacts caused by transmission errors, [Figure 1] shows two reconstructed video frames from a 1354 kbps H.264 coded video stream, one without error [Figure 1]a and another with 15% macroblock (MB) loss [Figure 1]b in P-type frames. Note that the peak signal-to-noise ratio (PSNR) is 39.02 dB in [Figure 1]a, while it drops to 13.24 dB in [Figure 1]b.
|Figure 1: Subjective quality comparison for the "Stefan (QCIF)" sequence at 15% MBs loss in 9th frame at 1354 Kbps (a) Without Error; PSNR=39.02 dB (b) 15% MBs lost; PSNR=13.24 dB.|
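For concreteness, the PSNR figures quoted throughout this paper are computed from the mean squared error between the original and the reconstructed luminance frames. The following Python sketch (an illustration, not the authors' evaluation code; the synthetic frames below are stand-ins for real decoded frames) shows the computation:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """PSNR in dB between two 8-bit luminance frames of equal size."""
    original = original.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((peak ** 2) / mse)

# Example with synthetic frames (QCIF-sized, 176x144 luminance):
clean = np.random.randint(0, 256, (144, 176), dtype=np.uint8)
noisy = np.clip(clean + np.random.normal(0, 5, clean.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(clean, noisy):.2f} dB")
```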
Several techniques have been proposed to combat the visual quality degradation caused by errors during transmission of compressed video. They may be classified as follows: 1) error resilience based techniques that improve the robustness of the video against transmission errors; 2) techniques that initiate an automatic retransmission request (ARQ) on a decoding error; and 3) error concealment (EC) based techniques that hide or recover the errors using the other, non-erroneous video information received. Motion vectors (MVs) are vital components of the P- and B-type frames in video-coding standards. MVs are computed by ME at the encoder and used by MC at the decoder to reconstruct the actual video frame. If this motion information is lost during transmission of the compressed video stream over a communication channel, it must be retrieved by means of an EC technique. The EC techniques used to retrieve lost MVs are known as MVR techniques.
This paper is broadly divided into two parts. The first part (Section 2) provides an overview of the error control and EC techniques used in video communication. The second part (Section 3) focuses specifically on the MVR-based EC techniques reported in the literature. A performance comparison of the prominent MVR techniques is presented in Section 4, and Section 5 draws some concluding remarks. For quick reference, a list of abbreviations is included at the end of the paper.
2. Overview of Error Control and Error Concealment Techniques
Several error-resilient coding techniques have been proposed, such as multiple description coding (MDC), layered video coding, error-resilient entropy coding (EREC), and reversible variable length coding (RVLC). A prominent drawback of these techniques is the reduced degree of compression they achieve. Other error-resilience methods rely on feedback from the decoder at the receiving end; in these techniques, error propagation cannot be suppressed immediately after an erroneous frame is detected, because of the round-trip time needed to transmit the feedback signal. ARQ-based techniques likewise introduce a retransmission delay on decoding errors. Thus, these techniques are not suitable for real-time video streaming.
Among the techniques mentioned above, the post-processing EC methods are the most prominent. These EC methods are further classified into three families, namely frequency, spatial, and temporal schemes; hybrid schemes that combine more than one of these also exist. Importantly, these methods can be made adaptive to changing error-tolerance requirements based on the channel noise. All three EC schemes use the data that is spatially or temporally adjacent to the lost data in order to recover it. In the H.264 standard, the video is transmitted as a set of frames; each frame is made of several MBs, which in turn are divided into sub-MBs, and each sub-MB carries a motion vector (MV). Frequency concealment techniques estimate the discrete cosine transform (DCT) coefficients of a missing block from the corresponding DCT coefficients of its neighboring blocks, using either all the DCT coefficients or only their DC values. Spatial approaches, on the other hand, exploit the correlation between data (pixels or MVs) belonging to blocks on the same frame that neighbor the lost MB. Temporal concealment techniques use blocks from other frames to recover the lost data, either by attempting to reconstruct the MV of the lost MB or by searching for a block that matches the neighborhood of the missing MB well. A variety of hybrid algorithms have been proposed that combine more than one of the frequency, spatial, and temporal approaches. For example, in temporal concealment, the correctness of the recovered MB can be improved by spatially smoothing its edges so that it conforms to its neighbors, at the expense of additional complexity. Another approach recovers a lost MB by satisfying a weighted combination of spatial and temporal smoothness constraints. Often, EC uses a single fixed method to reconstruct all lost MBs; however, a few adaptive EC methods have been proposed in which spatial and temporal concealment are used interchangeably based on certain attributes of the frame to be concealed (e.g. scene change, irregular motion, etc.).
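As a toy illustration of the frequency-domain idea (a minimal sketch under simplifying assumptions, not any specific published scheme), a lost 8×8 block can be approximated by averaging the DC coefficients of its available neighboring blocks and discarding the AC terms:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2-D type-II DCT with orthonormal scaling
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    # inverse 2-D DCT
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

def conceal_dc_only(neighbor_blocks):
    """Estimate a lost 8x8 block by keeping only a DC coefficient equal to the
    average of the neighbors' DC coefficients (all AC terms set to zero)."""
    dc_values = [dct2(b.astype(np.float64))[0, 0] for b in neighbor_blocks]
    coeffs = np.zeros((8, 8))
    coeffs[0, 0] = np.mean(dc_values)
    return idct2(coeffs)  # a flat block at the neighbors' mean brightness
```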
This paper focuses on the EC methods that attempt to optimally estimate erroneously received motion fields from the available surrounding information. Generally, this type of EC solution is termed MVR. The most common temporal MVR method is the naïve temporal replacement (TR), which replaces the lost MVs with (0, 0). The boundary matching algorithm (BMA) presented by Wang et al. for the H.26L test model calculates several candidate MVs for each lost 8×8 block as follows: each candidate MV is computed as the average of the MVs of some 4×4 or other shaped (e.g. 4×8) blocks spatially adjacent to the lost one. Among these candidates, the MV that minimizes the sum of absolute boundary differences between the reconstructed MB and its neighboring MBs is used as the MV of the lost MB.
More sophisticated approaches have also been proposed to enhance the estimation quality of the lost MVs. For example, Zheng et al. proposed to recover the lost MVs using the Lagrange interpolation (LAGI) formula; specifically, LAGI was used to constitute a polynomial that describes the motion tendency of the MVs. Zheng et al. later proposed a polynomial interpolation model (PIM) that uses a second-order polynomial to predict the lost MVs. Lie and Gao proposed a technique that finds the lost MVs by optimizing the boundary distortion of an entire slice (a group of MBs) using dynamic programming; further, in order to reduce the time complexity of the recovery algorithm, they employed an adaptive Kalman-filtering technique, which however degraded the quality of the recovered video. More recently, hybrid algorithms have been studied that exploit both spatial and temporal correlations to obtain better recovery of the lost MVs. Chen et al. proposed a priority-driven region-matching algorithm that exploits spatial and temporal information: the lost area (a collection of lost MBs) is recovered region by region, and a priority term defined for every region determines the restoration order. A two-stage MVR algorithm has also been presented: in the first stage, a spatio-temporal BMA recovers the lost MVs; in the second stage, instead of directly copying the reference MB as the final recovered pixel values, a partial differential equation based algorithm refines the reconstruction. Gaussian mixture modeling has likewise been applied to recover the lost MVs for block-based packet video. Finally, a hybridized EC technique has been proposed that uses not only the spatially and temporally correlated information to recover the lost MVs but also the tandem utilization of two coding tools of H.264/AVC, namely directional spatial prediction for intra-coding and variable block size MC.
3. Motion Vector Recovery
The previous section reviewed error control and EC techniques in video communication. The rest of this paper focuses on the MVR techniques reported in the literature. These techniques are popular because they effectively address two important problems related to EC in video communication: the computational time requirement and the quality of the recovered video. MVR techniques execute quickly without compromising the quality of the recovered video, which makes them highly suitable for real-time video streaming applications. The remainder of this section describes these MVR techniques in detail.
3.1 Temporal Replacement based MVR Scheme
As mentioned in the previous section, the most common temporal MVR method is temporal replacement (TR), which replaces the lost MVs with (0, 0). This signifies that no movement has occurred in the lost area of a given frame relative to the previous frame. Since all lost MVs are simply replaced by (0, 0), TR is the fastest of all the MVR techniques reported in the literature. Its main drawback is the poor quality of the recovered video compared to the other MVR techniques discussed below.
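A minimal sketch of TR concealment (assuming a single luminance plane, 16×16 MBs, and a list of lost MB coordinates; the function and variable names are illustrative):

```python
import numpy as np

def conceal_tr(ref_frame, damaged_frame, lost_mbs, mb_size=16):
    """Temporal replacement: treat every lost MB as having MV = (0, 0) and copy
    the co-located MB from the previous (reference) frame."""
    recovered = damaged_frame.copy()
    for (mb_row, mb_col) in lost_mbs:
        y, x = mb_row * mb_size, mb_col * mb_size
        recovered[y:y + mb_size, x:x + mb_size] = ref_frame[y:y + mb_size, x:x + mb_size]
    return recovered
```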
3.2 Boundary Matching based MVR Algorithm
As mentioned in the previous section, the BMA presented by Wang et al. for the H.26L test model calculates the MVs for each lost MB by using a matching algorithm. The H.264 standard offers flexibility in configuring the partitioning of an MB: an MB can be coded as a single 16×16 pixel block, as four 8×8 sub-MBs, as eight 8×4 or 4×8 sub-MBs, or as sixteen 4×4 sub-MBs. In each case, every sub-MB is associated with an MV.
The BMA works only on MBs that are organized as 8×8 sub-MBs. If an MB is coded as a single 16×16 block, it is divided into four 8×8 sub-MBs, each of which inherits the MV of the larger MB. If an MB is divided into smaller partitions, for example 4×8 or 8×4, the smaller sub-MBs are merged to form an 8×8 sub-MB whose MV is the average of the MVs of the merged sub-MBs. The lost MV of a sub-MB is then predicted by choosing one of the MVs of the correctly decoded or already recovered adjacent sub-MBs.
The MV of a neighboring sub-MB to be used as the prediction for a lost sub-MB is chosen as follows. Each MV of a correctly decoded/recovered sub-MB adjacent to the lost sub-MB is, in turn, assigned as the MV of the lost sub-MB; the sub-MB is then reconstructed in its place in the frame, and the luminance change across its boundaries is computed. This step is repeated for each adjacent non-erroneous sub-MB, and the MV that yields the smallest luminance change across the boundaries of the lost sub-MB is chosen as its predicted MV. The luminance change across the boundary of two sub-MBs is the average of the absolute differences of the pixels along that boundary. Although this technique ensures reasonable picture quality in the recovered video, it requires considerably more computation time than the other techniques discussed in this paper.
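The selection step can be sketched as follows (a simplified illustration assuming integer-pixel MVs, a single luminance plane, and candidate MVs already gathered from the adjacent sub-MBs; boundary clipping and sub-pixel interpolation are omitted):

```python
import numpy as np

def boundary_error(frame, ref_frame, y, x, mv, size=8):
    """Sum of absolute luminance differences along the four boundaries between a
    candidate reconstruction (block copied from ref_frame displaced by mv) and
    the correctly decoded pixels surrounding the lost block."""
    dy, dx = mv
    cand = ref_frame[y + dy:y + dy + size, x + dx:x + dx + size].astype(np.int32)
    frame = frame.astype(np.int32)
    err = 0
    if y > 0:
        err += np.abs(cand[0, :] - frame[y - 1, x:x + size]).sum()       # top
    if y + size < frame.shape[0]:
        err += np.abs(cand[-1, :] - frame[y + size, x:x + size]).sum()   # bottom
    if x > 0:
        err += np.abs(cand[:, 0] - frame[y:y + size, x - 1]).sum()       # left
    if x + size < frame.shape[1]:
        err += np.abs(cand[:, -1] - frame[y:y + size, x + size]).sum()   # right
    return err

def conceal_bma(frame, ref_frame, y, x, candidate_mvs, size=8):
    """Choose the candidate MV with the smallest boundary error and copy the
    displaced block from the reference frame into the lost position."""
    best = min(candidate_mvs,
               key=lambda mv: boundary_error(frame, ref_frame, y, x, mv, size))
    dy, dx = best
    frame[y:y + size, x:x + size] = ref_frame[y + dy:y + dy + size, x + dx:x + dx + size]
    return best
```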
Unlike in other video-coding standards, an MV in H.264 covers a smaller area of the video frame being encoded. This leads to a strong correlation between neighboring MVs, making the H.264 standard amenable to statistical analysis for recovering lost MVs. The techniques discussed in the remainder of this paper are based on such statistical analysis.
3.3 MVR based on Lagrange Interpolation
This subsection presents an MVR method based on the Lagrange interpolation (LAGI) formula. The Lagrange interpolation formula is one of the most widely used interpolation functions, and its computational cost is lower than that of most other interpolation functions reported in the literature.
The remainder of this subsection describes how a third-order (n = 3) polynomial interpolation can be used for MVR. As mentioned earlier, the H.264 standard divides every frame into several MBs, and each MB is associated with 1 to 16 MVs, ensuring backward compatibility with previous standards. [Figure 2] shows an H.264 frame segment with 9 MBs denoted by F_m,n, where m and n denote the spatial location of the MB within the frame; each MB is associated with 16 MVs. Let F_m,n denote the lost MB. As in many EC algorithms for MVR, it is assumed that either the two vertically adjacent or the two horizontally adjacent MBs of the lost MB are correctly decoded. In [Figure 2], it is assumed without loss of generality that both horizontally adjacent MBs of F_m,n are error-free. In this case, the lost MVs of F_m,n are recovered row by row. Let MV_ij (0 ≤ i, j ≤ 3) denote the correct MVs belonging to the rows of the horizontally adjacent MBs of F_m,n as shown in [Figure 2], and let V_ij (0 ≤ i, j ≤ 3) represent the MVs of the rows of F_m,n that need to be recovered.
The procedure to recover one row of MVs of F_m,n is as follows. Based on the LAGI formula, as shown in [Table 1], the correct neighboring MVs MV_i0, ..., MV_i3 and the corresponding x coordinates (p_0, ..., p_3) are used to constitute a Lagrange polynomial. The lost MV V_ij, located at x coordinate q_j, is then computed as

V_ij = Σ_{k=0}^{3} L_kj · MV_ik,   where   L_kj = Π_{m=0, m≠k}^{3} (q_j − p_m) / (p_k − p_m).    (1)

Since the relative positions p_k and q_j are fixed by the MB grid, the values of the Lagrange parameters (L_0j, ..., L_3j) are constant across all the lost MBs and can be computed once in advance. The recovery of the other rows follows the same procedure, and a similar procedure is followed if the vertically adjacent MBs of F_m,n are error-free instead. The main advantage of the LAGI technique is that it ensures high quality of the recovered video while consuming very little computational time.
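A compact sketch of this row-wise recovery follows; the neighbor positions p and the lost positions q below are illustrative placeholders for the grid positions listed in [Table 1] (not reproduced here), and the neighbor MVs are made-up values:

```python
import numpy as np

def lagrange_weights(p, q):
    """Lagrange basis weights L[k, j]: weight of the known sample at p[k]
    when interpolating at position q[j] (Equation (1))."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    L = np.ones((len(p), len(q)))
    for k in range(len(p)):
        for m in range(len(p)):
            if m != k:
                L[k] *= (q - p[m]) / (p[k] - p[m])
    return L

def recover_row_lagi(known_mvs, p, q):
    """Recover one row of lost MVs. known_mvs: 4x2 array (MV_i0..MV_i3, x/y parts);
    p: x coordinates of the known MVs; q: x coordinates of the lost MVs."""
    L = lagrange_weights(p, q)                  # constant for the fixed MB grid
    return L.T @ np.asarray(known_mvs, float)   # each lost MV = sum_k L_kj * MV_ik

# Illustrative grid: two known MVs on each side of the lost MB, lost MVs in between.
p = [-2, -1, 4, 5]
q = [0, 1, 2, 3]
known = [[3, 0], [2, 1], [1, 2], [0, 3]]        # hypothetical neighbor MVs (x, y)
print(recover_row_lagi(known, p, q))
```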
3.4 MVR based on Polynomial Model
This subsection presents the PIM, which forms a polynomial that describes the motion tendency of the MVs adjacent to the lost MVs. This polynomial model yields an approximate function that describes the change tendency of the MVs within a small area; based on this model, an approximation of the lost MVs can be obtained from the neighboring MVs and the lost MB can be reconstructed ([Figure 2] and [Table 1]). As shown in [Table 1], the correct neighboring MVs y_i (i.e. MV_i0, ..., MV_i3) and the corresponding x coordinates x_i are used to constitute a polynomial model that describes the correlation of the MVs in the neighboring MBs:

W_l(x) = a_0 + a_1·x + a_2·x² + ... + a_l·x^l    (2)
where a_0, a_1, ..., a_l are unknown coefficients determined from the given points and l is the order of the polynomial. The objective is to compute the coefficients such that the sum of squared differences between W_l(x_i) and y_i is minimized. This sum can be written as a function of the unknowns a_0, a_1, ..., a_l:

F(a_0, a_1, ..., a_l) = Σ_{i=0}^{3} [W_l(x_i) − y_i]²    (3)

To obtain the minimum of F(a_0, a_1, ..., a_l), the coefficients a_0, a_1, ..., a_l should satisfy Equation (4):

∂F/∂a_k = 0,   k = 0, 1, ..., l    (4)

From Equation (4), a set of linear equations from which the coefficients can be calculated is obtained, as presented in Equation (5):

Σ_{i=0}^{3} [W_l(x_i) − y_i] · x_i^k = 0,   k = 0, 1, ..., l    (5)
Since four MVs are available in the neighboring MBs, polynomials of up to third order (i.e. l = 3) can be used for this interpolation. However, a first-order polynomial cannot accurately describe nonlinear movement, and a third-order polynomial often results in an oscillatory curve, making it suitable only for interpolation data that change quickly. A second-order polynomial represents a smooth curve and is therefore the most suitable choice for this kind of application. This technique produces recovered video of quality comparable to that of the LAGI technique, but is slightly more expensive than LAGI in terms of the number of computations. Different polynomials are needed to handle the varying amounts of motion in the received video frames and thereby ensure higher PSNR values. In this context, the main advantage of this technique over LAGI is the flexibility of choosing different types of polynomials based on the characteristics of the frame to be concealed, which makes the technique adaptive in nature.
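A corresponding sketch of the PIM recovery using a least-squares polynomial fit (NumPy's polyfit performs the least-squares fit of Equations (3)-(5); the grid positions below are the same illustrative placeholders used in the LAGI sketch):

```python
import numpy as np

def recover_row_pim(known_mvs, p, q, order=2):
    """Fit a least-squares polynomial of the given order to each component of the
    known neighbor MVs (at x coordinates p) and evaluate it at the x coordinates q
    of the lost MVs."""
    known = np.asarray(known_mvs, float)        # shape (4, 2): x and y components
    recovered = np.empty((len(q), known.shape[1]))
    for c in range(known.shape[1]):             # fit x and y components separately
        coeffs = np.polyfit(p, known[:, c], order)
        recovered[:, c] = np.polyval(coeffs, q)
    return recovered

# Same illustrative grid and hypothetical neighbor MVs as in the LAGI sketch:
p = [-2, -1, 4, 5]
q = [0, 1, 2, 3]
known = [[3, 0], [2, 1], [1, 2], [0, 3]]
print(recover_row_pim(known, p, q, order=2))
```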
4. MVR Techniques: A Performance Comparison
The experimental results presented in this section use six standard benchmark video sequences, namely Foreman, Coastguard, Mobile, Bus, Football, and Stefan. Each video sequence has a total of 300 frames, and all sequences are encoded and decoded with JM12.4, the H.264/AVC reference software. In the simulations reported in this section, the group of pictures (GOP) length (the IntraPeriod parameter) is fixed at 11 to achieve the best tradeoff between compression and quality; the frame sequence in our simulation is therefore IPPPPPPPPPP... By changing the standard H.264 quantization parameter (QP), the videos are encoded at different bitrates. The locations of lost MBs are selected randomly in every P-frame of a given video sequence, and the MB loss rates used in the simulations are 5%, 10%, and 15%. The experiments consider two scenarios. The first assumes that the encoder does not adopt an interleaving technique: the MBs of a slice are packed into the same packet and each slice contains a fixed number of MBs, so when a packet is lost only the horizontally or vertically adjacent MVs are available to recover the lost MVs of a given MB. This scenario simulates a frame without flexible macroblock ordering (FMO). The second scenario assumes that the encoder adopts interleaving, so that neighboring MBs are packed into different packets and the MVs of all four neighboring MBs are available to recover the lost MVs of a given MB. This scenario simulates FMO within a frame.
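For illustration, a random MB loss pattern of the kind described above can be generated as follows (a sketch only; the actual loss-injection mechanism is part of the JM-based setup and is not shown in the paper):

```python
import random

def pick_lost_macroblocks(width_mb, height_mb, loss_rate, seed=None):
    """Randomly select macroblock indices to drop from one P-frame.
    E.g. a QCIF frame has 11 x 9 = 99 MBs; a 15% loss rate drops about 15 of them."""
    rng = random.Random(seed)
    total = width_mb * height_mb
    n_lost = round(total * loss_rate)
    lost = rng.sample(range(total), n_lost)
    return [(idx // width_mb, idx % width_mb) for idx in lost]  # (row, col) pairs

print(pick_lost_macroblocks(width_mb=11, height_mb=9, loss_rate=0.15, seed=1))
```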
The PSNR results for the different benchmark sequences across the different MVR algorithms under both test scenarios are presented in [Table 2]a-d. [Figure 3] and [Figure 4] compare the performance of the TR, BMA, LAGI, and PIM based MVR algorithms across different bitrates for the benchmark sequences without and with interleaved MB ordering, respectively. In each case, the MB loss rate is 15% and QP is set to 20. [Figure 5] presents a different test scenario in which deterministic errors of 15% MB loss are introduced in a given P-frame of the Coastguard sequence. To capture the worst case, the simulator introduces these errors in the frame that has the maximum motion; interestingly, irrespective of the bitrate at which the sequence is coded, the 69th frame has the maximum motion. It is evident from this performance analysis that LAGI and PIM are comparable and perform best, BMA stands second, and TR performs worst in terms of the quality of the recovered video.
|Figure 3: Without MB interleaving and 15% MB loss (a) Foreman (b) Coastguard (c) Mobile (d) Stefan.|
|Figure 4: With MB interleaving and 15% MB loss (a) Foreman (b) Coastguard (c) Mobile (d) Stefan.|
|Figure 5: Subjective quality comparison for the "Coastguard (QCIF)" sequence at 15% macroblocks loss in 69th frame at QP=24 (a) Original (b) 15% macroblocks lost (c) Concealed using TR (d) Concealed using BMA (e) Concealed using LAGI (f) Concealed using PIM.|
|Table 2a: Simulation results for the Foreman sequence|
|Table 2b: Simulation results for the Coastguard sequence|
|Table 2c: Simulation results for the Mobile sequence|
|Table 2d: Simulation results for the Stefan sequence|
The LAGI and PIM based MVR algorithms were also tested on an FFmpeg-based real-time H.264 video streaming setup. FFmpeg is a comprehensive multimedia encoding and decoding library that supports numerous audio, video, and container formats, as well as user datagram protocol (UDP) and real-time transport protocol (RTP) based H.264 video streaming. In this experiment, errors are introduced in the compressed video stream at the encoder end, and the stream is transmitted over a wired medium using RTP over UDP to the decoder end. The LAGI and PIM based MVR algorithms are implemented in the FFmpeg decoder. [Table 3] presents the PSNR results on the FFmpeg-based real-time video streaming setup for the benchmark video sequences Akiyo, Coastguard, and Foreman. A random 15% MB loss is introduced in all P-frames of each sequence, QP is set to 24, and the total number of frames in each sequence is 70.
5. Conclusion and Open Issues
In this paper, various techniques for performing EC in video communication have been described. A subclass of EC methods focuses on the optimal estimation of erroneously received motion fields based on the available surrounding information; this subclass is termed motion vector recovery (MVR). This paper has elaborated some of the most popular, simple, efficient, and widely used MVR techniques reported in the literature and has presented performance comparison results for these techniques in simulation as well as in a real-time video streaming environment.
Many practical issues in error-resilient video communication still need to be addressed. The first and foremost is a system-level framework for video communication in which the encoding algorithm, transport protocol, and post-processing method are designed jointly to minimize the combined distortion due to both compression and transmission. For MVR-based EC techniques, more emphasis is needed on covering all kinds of motion in a video frame (e.g. nonlinear, fast, and sudden movements) with a single MVR algorithm. There is also considerable scope for using more sophisticated nonlinear interpolation techniques to capture different kinds of movement in the lost MB at a lower computational complexity.
6. Abbreviations
ARQ: Automatic Retransmission Request
BMA: Boundary Matching Algorithm
DCT: Discrete Cosine Transform
EC: Error Concealment
EREC: Error-Resilient Entropy Coding
GOP: Group of Pictures
LAGI: Lagrange Interpolation
MDC: Multiple Description Coding
MC: Motion Compensation
ME: Motion Estimation
MV: Motion Vector
MVR: Motion Vector Recovery
PIM: Polynomial Interpolation Model
PSNR: Peak Signal-to-Noise Ratio
QP: Quantization Parameter
RTP: Real-time Transport Protocol
RVLC: Reversible Variable Length Coding
VLC: Variable Length Coding
TR: Temporal Replacement
UDP: User Datagram Protocol
References
|1.||Y. Wang, S. Wenger, J. Wen, and A. K. Katsaggelos, "Error resilient video coding techniques," IEEE Signal Process. Mag., vol. 17, no. 7, pp. 974-97, Jul. 2000. |
|2.||Y. Wang, and Q. F. Zhu, "Error control and concealment for video communication: a review," Proc. IEEE, vol. 86, no. 5, pp. 974-97, May 1998. |
|3.||Y. Wang, and S. Lin, "Error resilient video coding using multiple description motion compensation," IEEE Trans. on Circuit and Syst. for Video Technology, vol. 12, no. 2, pp. 438-52, Jun. 2002. |
|4.||R. Mathew, and J. F. Arnold, "Efficient layered video coding using data partitioning," Signal Processing Image Commun., vol. 14, pp. 761-82, 1999. |
|5.||D. W. Redmill, and N. Kingsbury, "The EREC: An error resilient technique for coding variable length blocks of data," IEEE Trans. on Image Process., vol. 5, no. 4, pp. 565-74, Apr. 1996. |
|6.||MPEG-4 Overview ver. N2725. Seoul, South Korea, Mar. 1999, ISO/IEC JTC1/SC29/WG11. |
|7.||R. Zhang, S. L. Regunathan, and K. Bose, "Video coding with optimal inter/intra-mode switching for packet loss resilience," IEEE Journal in Selected Areas of Communication, vol. 18, no. 6, pp. 966-76, Jun. 2000. |
|8.||E. Steinbach, N. Farber, and B. Girod "Standard compatible extension of H.263 for robust video transmission in mobile environment," IEEE Trans. on Circuit and Syst. for Video Technology, vol. 7, no. 6, pp. 872-81, Dec. 1997. |
|9.||S. Fukunaga, T. Nakai, and H. Inoue, "Error resilient video coding by dynamic replacing of reference picture," Proc. IEEE Int. Conf. on Global Telecommunication, vol. 3, pp. 1503-8, Nov. 1996. |
|10.||S. Hemami, and T. Meng, "Transform coded image reconstruction exploiting interblock correlation," IEEE Trans. on Image Processing, vol. 4, pp. 1023-7, Jul. 1995. |
|11.||J. W. Park, J. W. Kim, and S. U. Lee, "DCT coefficient recovery based error concealment technique and its application to the MPEG-2 bit stream error," IEEE Trans on Circuits Syst. Video Technol., vol. 7, pp. 845-54, Dec. 1997. |
|12.||M. C. Hong, H. Schwab, L. P. Kondi, and A. K. Katsaggelos, "Error concealment algorithms for compressed video," Proc. Signal Process: Image Commun., vol. 14, no. 6-8, pp. 473-92, 1999. |
|13.||P. Salama, N. B. Shroff, E. J. Coyle, and E. J. Delp, "Error concealment techniques for encoded video streams," Proc. IEEE ICIP, pp. 405-16, 1995. |
|14.||J. W. Suh, and Y.S. Ho, "Error concealment based on directional interpolation," IEEE Trans. on Consumer Electron., vol. 43, pp. 295-302, Aug. 1997. |
|15.||M. J. Chen, L. G. Chen, and R. M. Weng, "Error concealment of lost motion vectors with overlapped motion compensation," IEEE Trans. on Circuits Syst. Video Technology., vol. 7, pp. 560-4, Jun. 1997. |
|16.||Z. Wang, Y. Yu, and D. Zhang, "Best neighboring matching: An information loss restoration technique for block-based image coding systems," IEEE Trans. on Image Processing, vol. 7, no. 7, pp. 1056-61, Jul. 1998. |
|17.||J. W. Suh, and Y. S. Ho, "Error concealment technique for digital TV," IEEE Trans on Broadcasting, vol. 48, no. 4, pp. 299-306, Dec. 2002. |
|18.||Y. H. Han, and J. J. Leou, "Detection and correction of transmission errors in JPEG images," IEEE Trans. on Circuits Syst. Video Technology., vol. 8, pp. 221-31, Apr. 1998. |
|19.||M. Bystrom, V. Parthasarathy, and J. W. Modestino, "Hybrid error concealment schemes for broadcast video transmission over ATM networks," IEEE Trans. on Circuits Syst. Video Technology., vol. 9, pp. 868-81, Sep. 1999. |
|20.||J. W. Park, and S. U. Lee, "Recovery of corrupted image data based on the NURBS interpolation," IEEE Trans. on Circuits Syst. Video Technology., vol. 9, pp. 1003-9, Oct. 1999. |
|21.||S. Tsekeridou, and I. Pitas, "MPEG-2 error concealment based on block matching principles," IEEE Trans. on Circuits Syst. Video Technology., vol. 10, no. 4, pp. 646-58, Jun. 2000. |
|22.||S. Shirani, B. Erol, and F. Kossentini, "A concealment method for shape information in MPEG-4 coded video sequence," IEEE Trans. on Multimedia, vol. 2, no. 3, pp. 185-90, Sept. 2000. |
|23.||S. Shirani, F. Kossentini, and R. Ward, "A concealment method for video communications in an error-prone environment," IEEE J. Select. Arears commun., vol. 18, no. 6, pp. 1122-8, Jun. 2000. |
|24.||Y. Shao, L. Zhang, G. Wu, and X. Lin, "Reconstruction of missing blocks in image transmission by using self-embedding," Proc. 2001 Int. Symp. Multimedia Video and Speech Processing, pp. 535-8, May. 2001. |
|25.||D. S. Turaga, and T. Chen, "Model-based error concealment for wireless video," IEEE Trans. on Circuits Syst. Video Technology., vol. 12, no. 6, pp. 483-495, Jun. 2002. |
|26.||X. Li, and M. T. Orchard, "Novel sequential error-concealment techniques using orientation adaptive interpolation," IEEE Trans. on Circuits Syst. Video Technology, vol. 12, no. 10, pp. 857-64, Oct. 2002. |
|27.||Y. K. Wang, M. M. Hannuksela, V. Varsa, A. Hourunranta, and M. Gabbouj, "The error concealment feature in the H.26L test model," Proc. IEEE Int. Conf. Image Processing, pp. 729-32, 2002. |
|28.||Y. Zhang, and K. K. Ma, "Error-concealment for video transmission with dual multiscale Markov random field modeling," IEEE Trans. on Image Processing, vol. 12, no. 2, pp. 236-42, Feb. 2003. |
|29.||S. Cen, and P. C. Cosman, "Decision trees for error concealment in video decoding," IEEE Trans. on Multimedia, vol. 5, no. 1, pp. 1-7, Mar. 2003. |
|30.||S. Aign, "A temporal error concealment technique for I-pictures in an MPEG-2 video-decoder," Proc. VCIP'98, pp. 405-416, Jan. 1998. |
|31.||C. Alexandre, and H. V. Thien, "The influence of residual errors on a digital satellite TV encoder," Proc. Signal Processing: Image Commun., vol. 11, pp. 105-18, 1997. |
|32.||M. Ghanbari, and V. Seferidis, "Cell-loss concealment in ATM video coders," IEEE Trans. on Circuits Syst. Video Technology, vol. 3, pp. 238-47, Jun. 1993. |
|33.||Q. F. Zhu, Y. Wang, and L. Shaw, "Coding and cell-loss recovery in DCT-based packet video," IEEE Trans. on Circuits Syst. Video Technology, vol. 3, pp. 248-58, Jun. 1993. |
|34.||W. Luo, and M. El Zarki, "Analysis of error concealment schemes for MPEG-2 video transmission over ATM based networks," Proc. VCIP'95, vol. 2501, Taipei, Taiwan, pp. 1358-68, May. 1995. |
|35.||P. Cuenca, A. Garrido, F. Quiles, L. Orozco-Barbosa, T. Olivares, and M. Lozano, "Dynamic error concealment technique for the transmission of hierarchical encoded MPEG-2 video over ATM networks," Proc. 1997 IEEE Pacific Rim Conf. on Communications, Computers and Signal Processing, vol. 2, pp. 912-915, Aug. 1997. |
|36.||J. H. Zheng, and L. P. Chau, "A motion vector recovery algorithm for digital video using Lagrange Interpolation," IEEE Trans. on Broadcasting, vol. 49, No. 4, pp. 383-389, Dec. 2003. |
|37.||J. H. Zheng, and L. P. Chau, "Efficient motion vector recovery algorithm for H.264 based on a polynomial model," IEEE Trans. on Multimedia, vol. 7, No. 3, pp. 507-13, Jun. 2005. |
|38.||Z. W. Gao, and W. N. Lie, "Video error concealment by using Kalman filtering technique," in Proc. IEEE Int. Symp. Circuits Syst., pp. 69-72, 2004. |
|39.||W. N. Lie, and Z. W. Gao, "Video error concealment by integrating greedy suboptimization and Kalman filtering techniques," IEEE Trans. on Circuits Syst. Video Technology, vol. 16, pp. 982-92, 2006. |
|40.||Y. Chen, X. Sun, F. Wu, Z. Liu, and S. Li, "Spatio-temporal video error concealment using priority-ranked region-matching," Proc. IEEE Int. Conf. Image Processing, pp. 1050-3, 2005. |
|41.||Y. Chen, Y. Hu, O. C. Au, H. Li, and C. W. Chen, "Video error concealment using spatio-temporal boundary matching and partial differential equation," IEEE Trans. on Multimedia, vol. 10, No. 1, pp. 2-15, Jan. 2008. |
|42.||D. Persson, T. Eriksson, and P. Hedelin, "Packet video error concealment with Gaussian mixture models," IEEE Trans. on Image Processing, vol. 17, No. 2, pp. 145-54, Feb. 2008. |
|43.||S. C. Huang, and S. Y. Kuo, "Optimization of hybridized error concealment for H.264," IEEE Trans. on Broadcasting, vol. 54, No. 3, pp. 499-516, Sept. 2008. |
|44.||JM12.4, H.264/AVC reference software, Available from: http://iphome.hhi.de/suehring/. |
|45.||D. Alfonso, B. Biffi, and L. Pezzoni, "Adaptive GOP size control in H.264/AVC encoding based on scene change detection," Proc. 7th Nordic Signal Processing Symposium, pp. 86-9, Jun. 2006. |
|46.||FFmpeg, Available from: http://ffmpeg.mplayerhq.hu/. |
Authors
Kavish Seth received his master's degree in electrical engineering from the Indian Institute of Technology Madras, Chennai, India, in 2002. He is currently working as a manager of system engineering at Atheros Communications Inc., Chennai, India, and is also working towards a doctoral degree in the Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, India. His research interests are image and video coding, video transmission, error concealment techniques, and wireless communication.
Kamakoti V. received his master's and doctoral degrees in computer science and engineering from the Indian Institute of Technology Madras, Chennai, India, in 1991 and 1995, respectively. He was a research associate at the Supercomputer Education and Research Centre, Indian Institute of Science, Bangalore, India, from 1995 to 1997, and a postdoctoral fellow in computer science at the Institute of Mathematical Sciences, Chennai, India, from 1997 to 1999. He worked as a senior design engineer at ATI Research Silicon Valley Inc., Chennai, India, from 1999 to 2001. He joined the Department of Computer Science and Engineering, Indian Institute of Technology Madras, Chennai, India, as an assistant professor in 2001 and is currently working as a professor. His research interests are video coding, video transmission, reconfigurable system design, computer architecture, and CAD for VLSI.
Srinivasan S. is a professor in the Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, India. His teaching and research interests are in the areas of digital design, DSP architectures and applications, and VLSI design. He is currently coordinating the VLSI activity at the Indian Institute of Technology Madras. He is a senior member of IEEE, a fellow of IETE (India), and a member of the VLSI Society of India.