Final Project Presentation
ERROR CONCEALMENT TECHNIQUES
IN H.264
Under the guidance of
Dr. K.R. Rao
By
Moiz Mustafa Zaveri
([email protected])
(1001115920)
Contents
• Problem statement
• Project objective
• Sequence information
• Error characteristics
• Spatial domain error concealment
• Temporal domain error concealment
• Quality Metrics
• Generation of errors
• Simulation
• Test Sequences
• Sequence Formats
• Results of Frame copy algorithm
• Conclusions
• Future Work
• References
Acronyms
AVC: Advanced Video Coding
AVS: Audio Video Standard
DSL: Digital Subscriber Line
HVS: Human Visual System
IEC: International Electrotechnical Commission
ISO: International Organization for Standardization
ITU: International Telecommunication Union
JM: Joint Model
LAN: Local Area Network
Acronyms
MMS: Multimedia Messaging Service
MSU: Moscow State University
PSNR: Peak signal to noise ratio
SAD: Sum of absolute differences
SSIM: Structural similarity index metric
Problem Statement
• Video transmission errors are errors in the video sequence that the decoder
cannot decode properly.
• In real-time applications retransmission is not an option, so the missing parts of the video have to be concealed.
• To conceal these errors, the spatial and temporal correlations of the video
sequence can be exploited.
• Because H.264 [3] employs predictive coding, this corruption spreads
spatio-temporally to the current and consecutive frames, as seen in fig.1.
Problem Statement
Figure 1: Error propagation in H.264 due to predictive coding [1]
Objective
• To implement both the spatial domain and temporal domain categories of
error concealment techniques in H.264 [3] with the application of the Joint
Model (JM 19.0) reference software [3] and use metrics like the peak signal
to noise ratio (PSNR) and mean squared error (MSE), blockiness [1],
blurriness [1] and structural similarity index metric (SSIM) [4] in order to
compare and evaluate the quality of reconstruction.
H.264/AVC Block Diagram
• The H.264/AVC encoder/decoder block diagram is shown in fig.2.
Figure 2: H.264 encoder/decoder block diagram [7]
Sequence Information
• Temporal information
Movement
- It is easier to conceal linear movements in one direction because we can predict
pictures from previous frames (the scene is almost the same).
- If there are movements in many directions, finding a part of a previous frame that fits
is going to be more difficult. An example of this is a scene cut.
- In fig.3, five frames of three video sequences (football match, village panorama and
music video clip) with different amounts of movement are seen.
Speed
- The slower the movement of the camera, the easier it will be to conceal an error.
- An example of two different video speeds can be seen by comparing the village
sequence with the football sequence in fig.3.
Sequence Information
Figure 3: Movement and speed [2]
Sequence Information
• Spatial information
Smoothness of the neighborhood
- The smoothness of the neighborhood of the erroneous macroblock will
determine the difficulty of the spatial concealment.
- In fig.4, there are three different cases in which the lost macroblock has to
be concealed using the neighboring blocks.
- In the first one, it is going to be easy to reconstruct the lost macroblock
because the neighborhood is very uniform (smooth), with almost no
difference between the neighboring macroblocks.
- In the second situation, it is going to be a little more difficult; the edges
have to be found first and then the wire line recovered.
- The third case is an example where the neighbors cannot help us to recover
the macroblock because they do not give any information about the lost part
(in this case, the eye).
Sequence Information
Figure 4: Smoothness of the neighborhood [2]
Error Characteristics
• Lost information
• Size and form of the lost region
• I/P frame
- If the error is situated in an I frame, it affects the sequence more critically
because it propagates to all the frames until the next I frame, and I frames
have no reference other than themselves.
- If the error is situated in a P frame, it affects the rest of the frames until the
next I frame, but the previous I frame is still available as a reference.
Spatial domain error concealment
Weighted Averaging
• The pixel values of the damaged macroblock are obtained by interpolating from the
four neighboring boundary pixels.
• Missing pixel values can be recovered by averaging the four pixels in the four
1-pixel wide boundaries of the damaged macroblock (top, bottom, left and right),
weighted by the distance between the missing pixel and each of the four
macroblock boundaries.
Temporal domain error concealment
Frame Copy algorithm
• Replaces the missing image part in the current frame with the spatially
corresponding part inside a previously decoded frame, which has maximum
correlation with the affected frame.
• It is easier to conceal linear movements in one direction because pictures can be
predicted from previous frames (the scene is almost the same).
• When the current frame ‘n’ is missing (i,j) values, they can be concealed using the
(i,j) values from the ‘n-1’ frame. This is shown in fig.5.
Figure 5: Copying (i,j) pixel values from previous ‘n-1’ frame into current ‘n’ frame [1]
Temporal domain error concealment
Frame copy algorithm
• If there are movements in many directions or scene cuts, finding a part of the
previous frame that is similar is going to be more difficult, or even
impossible.
• The slower the movement of the camera, the easier it will be to conceal an
error.
• This method only performs well for low motion sequences but its advantage
lies in its low complexity.
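A minimal sketch of the frame copy idea, assuming the decoded frames are available as NumPy arrays and a boolean mask marks the lost pixels (illustrative only, not the JM implementation):

import numpy as np

def frame_copy_conceal(prev_frame, curr_frame, lost_mask):
    """Temporal frame-copy concealment: for every pixel flagged as lost in the
    current frame, copy the co-located pixel (same i, j) from the previous
    decoded frame. prev_frame / curr_frame: 2-D uint8 arrays of equal shape;
    lost_mask: boolean array, True where data was lost."""
    concealed = curr_frame.copy()
    concealed[lost_mask] = prev_frame[lost_mask]
    return concealed

# For a whole-frame loss, the mask is all True:
# concealed = frame_copy_conceal(prev, curr, np.ones_like(curr, dtype=bool))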
Temporal domain error concealment
Boundary Matching
• Let B be the area corresponding to a one pixel wide boundary of a missing
block in the nth frame Fn.
• Motion vectors of the missing block as well as those of its neighbors are
unknown.
• The coordinates [𝑥, 𝑦] of the best match to B within the search area A in the
previous frame 𝐹𝑛−1 have to be found. This is shown in fig.6.
Figure 6: Boundary matching algorithm to find best match for area B in search area A of the previous frame [1]
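A minimal sketch of boundary matching, assuming the lost block does not touch the frame border and using the sum of absolute differences (SAD) as the matching criterion (function names are illustrative, not the JM code):

import numpy as np

def _outer_boundary(frame, y, x, n):
    """1-pixel-wide boundary just outside an n x n block whose top-left is (y, x)."""
    return np.concatenate([
        frame[y - 1, x:x + n],        # row above
        frame[y + n, x:x + n],        # row below
        frame[y:y + n, x - 1],        # column to the left
        frame[y:y + n, x + n],        # column to the right
    ]).astype(np.int32)

def boundary_matching_conceal(prev_frame, curr_frame, y, x, n=16, search=8):
    """Find, inside a (2*search+1)^2 area A of the previous frame, the block whose
    outer boundary best matches (minimum SAD) the known boundary B of the lost
    block in the current frame, then copy that block into the current frame."""
    target = _outer_boundary(curr_frame, y, x, n)
    best_sad, best = None, (0, 0)
    h, w = prev_frame.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            if cy < 1 or cx < 1 or cy + n >= h or cx + n >= w:
                continue                      # candidate boundary must fit inside the frame
            cand = _outer_boundary(prev_frame, cy, cx, n)
            sad = int(np.abs(target - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    dy, dx = best
    out = curr_frame.copy()
    out[y:y + n, x:x + n] = prev_frame[y + dy:y + dy + n, x + dx:x + dx + n]
    return out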
Temporal domain error concealment
Motion Vector Interpolation
• A motion vector of a 4x4 block can be estimated by interpolating its value
from the motion vectors of the surrounding macroblocks, as shown in fig.7.
• The distance between the 4x4 block to be concealed and the surrounding 4x4
blocks can be used as a weighting factor. This is also shown in fig.7.
Figure 7: (a) Weighted averaging in Motion Vector Interpolation [1] (b) Motion vector interpolation [1]
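A minimal sketch of this interpolation, assuming weights inversely proportional to the distance to each surrounding block (the exact weighting in [1] may differ):

import numpy as np

def interpolate_motion_vector(neighbor_mvs, neighbor_dists):
    """Estimate the motion vector of a lost 4x4 block as a weighted average of
    the motion vectors of surrounding 4x4 blocks, with weights inversely
    proportional to the distance between the lost block and each neighbor.

    neighbor_mvs:   list of (mv_x, mv_y) tuples from the surrounding blocks
    neighbor_dists: list of distances (e.g. in 4x4-block units) to those blocks
    """
    mvs = np.asarray(neighbor_mvs, dtype=np.float64)
    w = 1.0 / np.asarray(neighbor_dists, dtype=np.float64)
    mv = (mvs * w[:, None]).sum(axis=0) / w.sum()
    return tuple(mv)          # fractional MV; round/clip as the codec requires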
Quality Metrics
• Peak Signal to Noise Ratio (PSNR) and Mean Squared Error (MSE)
• Blockiness and Blurriness
• Structural Similarity Index Metric (SSIM)
PSNR and MSE
• PSNR is used to evaluate the quality of reconstruction of a frame.
• The PSNR for an 8-bit PCM signal (0-255 levels) is calculated using: [1]

$PSNR = 10 \log_{10}\left(\frac{255^2}{MSE}\right) \ [\mathrm{dB}]$   (6)

where $MSE$ is the mean squared error, given by: [1]

$MSE = \frac{1}{W X}\sum_{i=1}^{W}\sum_{j=1}^{X}\left[F(i,j) - F_0(i,j)\right]^2$   (7)
Where, 𝑊 × 𝑋 is the size of the frame, 𝐹0 is the original frame and 𝐹 is the
current frame.
• The MSE is computed by averaging the squared intensity differences of the
distorted and reference image/frame pixels. Two distorted images with the
same MSE may have very different types of errors, some of which are much
more visible than others.
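For reference, eqs. (6) and (7) can be computed directly on two frames stored as NumPy arrays (an illustrative sketch, not part of the JM software):

import numpy as np

def mse(ref, dist):
    """Mean squared error between reference frame F0 and distorted frame F (eq. 7)."""
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    return np.mean((ref - dist) ** 2)

def psnr(ref, dist, peak=255.0):
    """PSNR in dB for 8-bit frames (eq. 6). Returns infinity for identical frames."""
    e = mse(ref, dist)
    return float('inf') if e == 0 else 10.0 * np.log10(peak ** 2 / e)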
PSNR and MSE
• The YUV-PSNR for the 4:2:0 format is calculated using: [1]

$PSNR_{YUV} = \frac{6\,PSNR_Y + PSNR_U + PSNR_V}{8}$   (8)

where $PSNR_Y$, $PSNR_U$ and $PSNR_V$ are the PSNR values of the Y, U and V components, respectively.
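Eq. (8) is then a simple weighted combination of the per-component PSNR values (sketch):

def psnr_yuv(psnr_y, psnr_u, psnr_v):
    """Combined 4:2:0 PSNR (eq. 8): the luma PSNR gets a weight of 6 and each
    chroma PSNR a weight of 1, normalized by 8, as defined in [1]."""
    return (6.0 * psnr_y + psnr_u + psnr_v) / 8.0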
Distortion artifacts
• Here, distortion artifacts like blockiness and blurriness are measured.
• Blockiness is defined as the distortion of the image characterized by the
appearance of an underlying block encoding structure. [1]
• Blurriness is defined as a global distortion over the entire image,
characterized by reduced sharpness of edges and spatial detail. [1]
• Blockiness measure [1]: This metric was created to measure the visual effect
of blocking. If the metric value for the first picture is greater than that for
the second picture, the first picture has more blockiness than the second.
An example is seen in fig.8.
• Blurriness measure [1]: This metric compares the amount of blurring in two
images. If the metric value for the first picture is greater than that for the
second picture, the second picture is more blurred than the first.
An example is seen in fig.9.
• The blockiness and blurriness values are computed using the Moscow State
University Video Quality Measurement Tool [6] shown in fig.10.
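The exact MSU metrics are not defined in these slides; the sketch below only illustrates the general idea of a blockiness-style measure, comparing luminance differences across the 8x8 block grid with those inside blocks (purely illustrative, not the MSU metric):

import numpy as np

def blockiness_estimate(frame, block=8):
    """Illustrative blockiness score: ratio of the mean absolute horizontal and
    vertical luminance differences measured across block-grid boundaries to the
    mean differences measured inside blocks. Values near 1 suggest little
    visible block structure; larger values suggest stronger blocking."""
    f = frame.astype(np.float64)
    dh = np.abs(np.diff(f, axis=1))      # differences between horizontally adjacent pixels
    dv = np.abs(np.diff(f, axis=0))      # differences between vertically adjacent pixels
    h_cols = np.arange(block - 1, dh.shape[1], block)   # columns on the block grid
    v_rows = np.arange(block - 1, dv.shape[0], block)   # rows on the block grid
    on_grid = np.concatenate([dh[:, h_cols].ravel(), dv[v_rows, :].ravel()])
    off_h = np.delete(dh, h_cols, axis=1)
    off_v = np.delete(dv, v_rows, axis=0)
    off_grid = np.concatenate([off_h.ravel(), off_v.ravel()])
    return on_grid.mean() / (off_grid.mean() + 1e-12)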
Distortion artifacts
Figure 8: (a) Original image (b) Blockiness in original image [1]
Distortion artifacts
Figure 9: (a) Original image (b) Blurriness in original image [1]
MSU Video Quality Measurement Tool
Figure 10: MSU Video Quality Measurement Tool [1]
Structural Similarity Index (SSIM)
• SSIM compares local patterns of pixel intensities that have been normalized
for luminance and contrast.
• The block diagram for measuring the SSIM is given in fig.11.
Figure 11: SSIM measurement [1]
Structural Similarity (SSIM)
• Let x and y be two image patches extracted from the same spatial location of
two images being compared.
• Let $\mu_x$ and $\mu_y$ be their means, $\sigma_x^2$ and $\sigma_y^2$ be their
variances, and $\sigma_{xy}$ be the covariance of x and y.
• The luminance, contrast and structure comparisons are given by: [1]

$l(x,y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}$   (4)

$c(x,y) = \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}$   (5)

$s(x,y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}$   (6)
Structural Similarity (SSIM)
Where $C_1$, $C_2$ and $C_3$ are constants given by: [1]

$C_1 = (K_1 L)^2, \quad C_2 = (K_2 L)^2, \quad C_3 = C_2 / 2$   (7)

L is the dynamic range of the pixel values (L = 255 for 8 bits/pixel
gray scale images), and $K_1 \ll 1$ and $K_2 \ll 1$ are scalar constants.
• The general SSIM can be calculated as follows: [1]

$SSIM(x,y) = [l(x,y)]^{\alpha} \, [c(x,y)]^{\beta} \, [s(x,y)]^{\gamma}$   (8)

Where $\alpha$, $\beta$ and $\gamma$ are parameters which define the relative importance of the
three components.
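Eqs. (4)-(8) can be computed for a pair of co-located patches as follows (a sketch assuming the common choices K1 = 0.01, K2 = 0.03 and C3 = C2/2):

import numpy as np

def ssim_patch(x, y, L=255, K1=0.01, K2=0.03, alpha=1, beta=1, gamma=1):
    """SSIM between two co-located image patches x and y (eqs. 4-8).
    C1 and C2 are the usual stabilizing constants; with C3 = C2/2 and
    alpha = beta = gamma = 1 the product reduces to the familiar SSIM formula."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    C3 = C2 / 2.0
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    sigma_x, sigma_y = np.sqrt(var_x), np.sqrt(var_y)
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    l = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)      # luminance, eq. (4)
    c = (2 * sigma_x * sigma_y + C2) / (var_x + var_y + C2)        # contrast, eq. (5)
    s = (cov_xy + C3) / (sigma_x * sigma_y + C3)                   # structure, eq. (6)
    return (l ** alpha) * (c ** beta) * (s ** gamma)               # general SSIM, eq. (8)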
Generation of Errors
• To generate errors we use the rtp_loss file in the software solution.
• This program can be used to simulate frame loss on RTP files.
• The C random number generator is used for selecting the frames to be lost.
• The number of frames to be lost depends on the ‘loss percent’ that the user
provides.
Example:
rtp_loss.exe infile outfile losspercent <keep_leading_packets>
where <keep_leading_packets> is an optional parameter that specifies the
number of RTP frames that are kept at the beginning of the file.
• The random number generator is not initialized before usage, so subsequent
runs of the tool create identical loss patterns.
• Figs. 12 and 13 show a single frame of Akiyo and Foreman sequences with
and without frame loss.
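A minimal sketch of the kind of random frame/packet dropping rtp_loss performs, assuming the packets are simply held in a Python list (this is not the actual JM tool; names and parameters are illustrative):

import random

def simulate_loss(packets, loss_percent, keep_leading=0, seed=0):
    """Drop roughly loss_percent % of the packets in a list, keeping the first
    keep_leading packets intact (mirroring the rtp_loss parameters). A fixed
    seed reproduces the same loss pattern on every run, similar to the JM tool,
    whose generator is never (re)initialized."""
    rng = random.Random(seed)
    candidates = list(range(keep_leading, len(packets)))
    n_loss = round(len(candidates) * loss_percent / 100.0)
    lost = set(rng.sample(candidates, n_loss))
    return [p for i, p in enumerate(packets) if i not in lost]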
Generation of Errors
Figure 12: (a) Original Akiyo frame (b) Akiyo frame with error
Generation of Errors
Figure 13: (a) Original Foreman frame (b) Foreman frame with error
Simulation Steps
1. Download the JM 19.0 reference software [3].
2. Modify the encoder and decoder configuration files to set the desired
inputs:
Encoder

InputFile          = "foreman_qcif.yuv"  # Input sequence
FramesToBeEncoded  = 150                 # Number of frames to be coded
FrameRate          = 30.0                # Frame Rate per second (0.1-100.0)
OutputFile         = "enout_150.264"     # Bitstream
QPISlice           = 28                  # Quant. param for I Slices (0-51)
QPPSlice           = 28                  # Quant. param for P Slices (0-51)
OutFileMode        = 1                   # Output file mode, 0:Annex B, 1:RTP
Simulation Steps
Decoder

InputFile    = "lossout_150.264"  # H.264/AVC coded bitstream
OutputFile   = "decout_150.yuv"   # Output file, YUV/RGB
FileFormat   = 1                  # NAL mode (0=Annex B, 1=RTP packets)
ConcealMode  = 1                  # Err Concealment (0:Off, 1:Frame Copy, 2:Motion Copy)
3. Build the software solution in Microsoft Visual Studio.
4. Run the ‘lencod.exe’ file in command prompt to obtain an encoded output
.264 file.
5. Run the ‘rtp_loss.exe’ file in command prompt to introduce a certain
percentage of loss in the encoded output and obtain a lossy .264 file.
6. Run the ‘ldecod.exe’ file to obtain the decoded output .yuv file, in which one
of the two error concealment algorithms has been employed to conceal the
frame losses.
Simulation Output
Figure 14: Encoder simulation output
Simulation Output
Figure 15: Decoder simulation output
Test Sequences
• Sequences used: akiyo_qcif.yuv, foreman_qcif.yuv, stefan_qcif.yuv
• Total number of frames: 300
• Number of frames encoded: 150
• Width: 176; Height: 144
• Quantization Parameter: 28
• Frame rate: 30 frames/second
Sequence Formats
• Common Intermediate Format (CIF) and Quarter Common Intermediate
Format (QCIF) determine the resolution of the frame.
• The resolution of CIF is 352x288 and the resolution of QCIF is 1/4 of CIF,
which is 176x144.
• Consider the YCbCr family of color spaces, where Y represents the
luminance, Cb represents the blue-difference chroma component and Cr
represents the red-difference chroma component.
• For QCIF and CIF, the luminance plane Y has the full frame resolution.
Sequence Formats
• For sampling resolution 4:2:0
Figure 16: Common Intermediate Format (CIF) [2]
Figure 17: Quarter Common Intermediate Format (QCIF) [2]
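For example, one planar 4:2:0 QCIF frame occupies 176 x 144 x 1.5 bytes (a full-resolution Y plane plus quarter-resolution U and V planes). A hypothetical helper for reading such frames from a raw .yuv file (not part of JM) might look like:

import numpy as np

def read_yuv420_frame(f, width=176, height=144):
    """Read one planar 4:2:0 frame: a full-resolution Y plane followed by
    quarter-resolution U and V planes (1.5 bytes per pixel in total)."""
    y = np.frombuffer(f.read(width * height), dtype=np.uint8).reshape(height, width)
    u = np.frombuffer(f.read(width * height // 4), dtype=np.uint8).reshape(height // 2, width // 2)
    v = np.frombuffer(f.read(width * height // 4), dtype=np.uint8).reshape(height // 2, width // 2)
    return y, u, v

# Example: with open("foreman_qcif.yuv", "rb") as f: y, u, v = read_yuv420_frame(f)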
Results of Frame Copy algorithm
• In this project, each of the three video sequences Akiyo.yuv, Foreman.yuv
and Stefan.yuv [5] has been made erroneous with a loss of frames ranging
from 1% to 5% of the total number of encoded frames.
• The errors in the resultant lossy sequences have then been concealed using
the Frame copy algorithm.
• The resultant video sequences have been analyzed using metrics like: PSNR,
MSE, SSIM [4], blockiness and blurriness and the results have been
tabulated and plotted.
Results of Frame Copy algorithm
Results for Akiyo.yuv
• The error concealment seen in the decoded output sequence of the erroneous Akiyo
sequence is not perfect, as seen in fig.18.
Figure 18: (a) Original Akiyo sequence frame (b) Erroneous Akiyo sequence frame (c) Error concealed Akiyo
sequence frame
Results of Frame Copy algorithm
Results for Akiyo.yuv
• The frame copy algorithm has simply copied the pixels of the same location
from the previous frame.
• With an increase in the number of frames lost, it is seen that the PSNR
(fig.19), SSIM [4] (fig.20) and blurriness (fig.23) of the output sequence
reduce while the MSE (fig.21) and blockiness (fig.22) increase.
Results of Frame Copy algorithm
Results for Akiyo.yuv
Error Percentage (%)   Average PSNR of concealed result (dB)
1                      30.243
2                      30.240
3                      30.179
4                      30.09
5                      29.975
Figure 19: (a) Average PSNR (dB) of the error concealed output for different error percentages (b) Average PSNR
(dB) vs error percentage for the Akiyo sequence
Results of Frame Copy algorithm
Results for Akiyo.yuv
Error Percentage (%)   Average SSIM of concealed result
1                      0.92751
2                      0.92747
3                      0.92704
4                      0.92629
5                      0.92527
Figure 20: (a) Average SSIM of the error concealed output for different error percentages (b) Average SSIM vs error
percentage for the Akiyo sequence
Results of Frame Copy algorithm
Results for Akiyo.yuv
Error Percentage (%)   Average MSE of concealed result
1                      61.48661
2                      61.52298
3                      62.39262
4                      63.68896
5                      65.20619
Figure 21: (a) Average MSE of the error concealed output for different error percentages (b) Average MSE vs error
percentage for the Akiyo sequence
Results of Frame Copy algorithm
Results for Akiyo.yuv
Error Percentage (%)   Original sequence Blockiness Measure   Concealed sequence Blockiness Measure
1                      11.81973                               13.79751
2                      11.81973                               13.80202
3                      11.81973                               13.81160
4                      11.81973                               13.81890
5                      11.81973                               13.82310
Figure 22: (a) Average blockiness of the error concealed output for different error percentages (b) Average blockiness
vs error percentage for the Akiyo sequence
Results of Frame Copy algorithm
Results for Akiyo.yuv
Error Percentage (%)   Original sequence Blurriness Measure   Concealed sequence Blurriness Measure
1                      16.32378                               14.92913
2                      16.32378                               14.92850
3                      16.32378                               14.91929
4                      16.32378                               14.91371
5                      16.32378                               14.90696
Figure 23: (a) Average blurriness of the error concealed output for different error percentages (b) Average blurriness
vs error percentage for the Akiyo sequence
Results of Frame Copy algorithm
Results for Foreman.yuv
• The error concealment seen in the decoded output sequence of the erroneous
Foreman sequence is not perfect, as seen in fig.24.
Figure 24: (a) Original Foreman sequence frame (b) Erroneous Foreman sequence frame (c) Error concealed
Foreman sequence frame
Results of Frame Copy algorithm
Results for Foreman.yuv
• The frame copy algorithm has simply copied the pixels of the same location
from the previous frame.
• With an increase in the number of frames lost, it is seen that the PSNR
(fig.25), SSIM [4] (fig.26) and blurriness (fig.29) of the output sequence
reduce while the MSE (fig.27) and blockiness (fig.28) increase.
Results of Frame Copy algorithm
Results for Foreman.yuv
Error Percentage (%)   Average PSNR of concealed result (dB)
1                      17.25731
2                      17.25303
3                      17.22355
4                      17.19582
5                      17.16782
Figure 25: (a) Average PSNR (dB) of the error concealed output for different error percentages (b) Average PSNR
(dB) vs error percentage for the Foreman sequence
Results of Frame Copy algorithm
Results for Foreman.yuv
Error Percentage (%)   Average SSIM of concealed result
1                      0.54698
2                      0.54622
3                      0.54434
4                      0.54222
5                      0.53907
Figure 26: (a) Average SSIM of the error concealed output for different error percentages (b) Average SSIM vs error
percentage for the Foreman sequence
Results of Frame Copy algorithm
Results for Foreman.yuv
Error Percentage (%)   Average MSE of concealed result
1                      1222.65454
2                      1223.86035
3                      1232.19348
4                      1240.08655
5                      1247.15624
Figure 27: (a) Average MSE of the error concealed output for different error percentages (b) Average MSE vs error
percentage for the Foreman sequence
Results of Frame Copy algorithm
Results for Foreman.yuv
Error Percentage (%)   Original sequence Blockiness Measure   Concealed sequence Blockiness Measure
1                      6.66808                                15.81962
2                      6.66808                                15.83485
3                      6.66808                                15.87380
4                      6.66808                                15.95529
5                      6.66808                                16.11628
Figure 28: (a) Average blockiness of the error concealed output for different error percentages (b) Average blockiness
vs error percentage for the Foreman sequence
Results of Frame Copy algorithm
Results for Foreman.yuv
Error Percentage (%)   Original sequence Blurriness Measure   Concealed sequence Blurriness Measure
1                      25.58911                               24.63489
2                      25.58911                               24.63258
3                      25.58911                               24.62816
4                      25.58911                               24.62291
5                      25.58911                               24.59365
Figure 29: (a) Average blurriness of the error concealed output for different error percentages (b) Average blurriness
vs error percentage for the Foreman sequence
Results of Frame Copy algorithm
Results for Stefan.yuv
• For the Stefan sequence, with an increase in the number of frames lost, it is
seen that the PSNR (fig.30), SSIM [4] (fig.31) and blurriness (fig.34) of the
output sequence reduce while the MSE (fig.32) and blockiness (fig.33)
increase.
Results of Frame Copy algorithm
Results for Stefan.yuv
Error Percentage (%)   Average PSNR of concealed result (dB)
1                      15.74143
2                      15.74008
3                      15.72960
4                      15.71948
5                      15.71072
Figure 30: (a) Average PSNR (dB) of the error concealed output for different error percentages (b) Average PSNR
(dB) vs error percentage for the Stefan sequence
Results of Frame Copy algorithm
Results for Stefan.yuv
Error Percentage (%)   Average SSIM of concealed result
1                      0.42509
2                      0.42466
3                      0.42389
4                      0.42273
5                      0.42116
Figure 31: (a) Average SSIM of the error concealed output for different error percentages (b) Average SSIM vs error
percentage for the Stefan sequence
Results of Frame Copy algorithm
Results for Stefan.yuv
Error Percentage (%)   Average MSE of concealed result
1                      1733.18079
2                      1733.72205
3                      1737.91101
4                      1741.96741
5                      1745.87162
Figure 32: (a) Average MSE of the error concealed output for different error percentages (b) Average MSE vs error
percentage for the Stefan sequence
Results of Frame Copy algorithm
Results for Stefan.yuv
Error Percentage (%)   Original sequence Blockiness Measure   Concealed sequence Blockiness Measure
1                      15.21208                               14.98759
2                      15.21208                               15.01642
3                      15.21208                               15.03965
4                      15.21208                               15.06558
5                      15.21208                               15.12065
Figure 33: (a) Average blockiness of the error concealed output for different error percentages (b) Average blockiness
vs error percentage for the Stefan sequence
Results of Frame Copy algorithm
Results for Stefan.yuv
Error Percentage (%)   Original sequence Blurriness Measure   Concealed sequence Blurriness Measure
1                      43.92272                               42.76152
2                      43.92272                               42.75115
3                      43.92272                               42.73378
4                      43.92272                               42.70599
5                      43.92272                               42.65922
Figure 34: (a) Average blurriness of the error concealed output for different error percentages (b) Average blurriness
vs error percentage for the Stefan sequence
Conclusions
The following are observed from the results:
• The PSNR reduces with an increase in the error percentage.
• The SSIM reduces with an increase in the error percentage.
• The MSE increases with an increase in the error percentage.
• As compared to the input, the average blockiness measure of the error
concealed output is higher and increases with the error percentage. This
means that the error concealed output suffers from more blockiness than the
input.
• As compared to the input, the average blurriness measure of the error
concealed output is lower and decreases with the error percentage. Since a
lower value of this metric indicates more blur, the error concealed output
suffers from more blurriness than the input.
Conclusions
Considering the Akiyo sequence to have low movement, the Foreman sequence to
have medium movement and the Stefan sequence to have high movement, we see
that:
• The PSNR of the concealed output decreases with an increase in movement.
(From the range of 30dB in Akiyo.yuv sequence to the range of 17dB in
Foreman.yuv sequence to the range of 15dB in Stefan.yuv sequence)
• The SSIM of the concealed output decreases with an increase in movement.
(From the range of 0.92 in Akiyo.yuv sequence to the range of 0.54 in
Foreman.yuv sequence to the range of 0.42 in Stefan.yuv sequence)
• The MSE of the concealed output increases with an increase in movement.
(From the range of 60 in Akiyo.yuv sequence to the range of 1220 in
Foreman.yuv sequence to the range of 1730 in Stefan.yuv sequence)
Future Work
• The current JM reference software (19.0) [3] implements only two error
concealment algorithms: the Frame Copy algorithm and the Motion Vector
Copy algorithm. These are basic concealment algorithms, and there is a need
to implement better, possibly hybrid, spatio-temporal concealment
algorithms.
• Also, the JM 19.0 software [3] has poor error resiliency: it cannot decode
sequences with more than a 5% frame loss. Transmission losses in wireless
networks seldom exceed this 5% threshold, but there can be exceptional cases.
References
[1] I. C. Todoli, “Performance of Error Concealment Methods for Wireless Video,” Ph.D. thesis, Vienna University of Technology, 2007.
[2] V. S. Kolkeri, “Error Concealment Techniques in H.264/AVC for Video Transmission over Wireless Networks,” M.S. thesis, Department of Electrical Engineering, University of Texas at Arlington, Dec. 2009. Online: http://www.uta.edu/faculty/krrao/dip/Courses/EE5359/index_tem.html
[3] H.264/AVC Reference Software Download. Online: http://iphome.hhi.de/suehring/tml/download/
[4] Z. Wang, “The SSIM index for image quality assessment.” Online: http://www.cns.nyu.edu/zwang/files/research/ssim/
References
[5] Video Trace research group at ASU, “YUV video sequences.” Online: http://trace.eas.asu.edu/yuv/index.html
[6] MSU video quality measurement tool. Online: http://compression.ru/video/quality_measure/video_measurement_tool_en.html
[7] S.-K. Kwon, A. Tamhankar and K. R. Rao, “An overview of H.264 / MPEG-4 Part 10,” J. Visual Communication and Image Representation, vol. 17, pp. 186-216, Apr. 2006.
[8] Y. Xu and Y. Zhou, “H.264 Video Communication Based Refined Error Concealment Schemes,” IEEE Transactions on Consumer Electronics, vol. 50, issue 4, pp. 1135-1141, Nov. 2004.
References
[9] M. Wada, “Selective Recovery of Video Packet Loss using Error Concealment,” IEEE Journal on Selected Areas in Communications, vol. 7, issue 5, pp. 807-814, June 1989.
[10] Y. Chen et al., “An Error Concealment Algorithm for Entire Frame Loss in Video Transmission,” IEEE Picture Coding Symposium, Dec. 2004.
[11] S. K. Bandyopadhyay et al., “An error concealment scheme for entire frame losses for H.264/AVC,” IEEE Sarnoff Symposium, pp. 1-4, Mar. 2006.
[12] H. Ha, C. Yim and Y. Y. Kim, “Packet Loss Resilience using Unequal Forward Error Correction Assignment for Video Transmission over Communication Networks,” Computer Communications, vol. 30, pp. 3676-3689, Dec. 2007.
References
[13] T. Wiegand et al., “Overview of the H.264/AVC video coding standard,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 7, July 2003.
[14] Z. Wang et al., “Image quality assessment: From error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, Apr. 2004.
[15] D. Levine, W. Lynch and T. Le-Ngoc, “Observations on Error Detection in H.264,” 50th IEEE Midwest Symposium on Circuits and Systems, pp. 815-818, Aug. 2007.
[16] B. Hrušovský, J. Mochnáč and S. Marchevský, “Temporal-spatial Error Concealment Algorithm for Intra-Frames in H.264/AVC Coded Video,” 20th International Conference Radioelektronika, pp. 1-4, Apr. 2010.
References
[17] W. Kung, C. Kim and C. Kuo, “Spatial and Temporal Error Concealment Techniques for Video Transmission Over Noisy Channels,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, issue 7, pp. 789-803, July 2006.