Dr. Mohieddin Moradi
mohieddinmoradi@gmail.com
1
Dream
Idea
Plan
Implementation
Outline
2
Outline
3
High Definition System
SMPTE 296M (The 720 standard)
4
High Definition System
SMPTE 274M (The 1080 standard)
5
SMPTE 274M (The 1080 standard)
The High Definition Signal
Horizontal Interval
6
SMPTE 274M (The 1080 standard)
The High Definition Signal
Horizontal Interval
7
The High Definition Signal
Vertical Interval
8
The High Definition Signal
Vertical Interval
9
1080 HD Image
10
SMPTE 292M (1.5G HD-SDI)
Defines the bit-serial digital coaxial and fiber-optic interface for high-definition component signals operating at
1.485 Gb/s and 1.485/1.001 Gb/s. (8,10-bits)
Two important formats for HD:
SMPTE 274M
 1920x1080 Scanning and Analog and Parallel Digital Interfaces for Multiple Picture Rates.
 1920x1080 @ 60p, 59.94p, 50p, 60i, 59.94i, 50i, 30p, 29.97p, 25p, 24p, 23.98p, 30sF, 29.97sF, 25sF, 24sF, 23.98sF
Colorimetry ITU-R BT.709
SMPTE 296M
 1280x720 Progressive Image Sample Structure -Analog and Digital.
 1280x720 @ 60p, 59.94p, 50p, 30p, 29.97p, 25p, 24p 23.98p Colorimetry ITU-R BT.709.
HD-SDI Standards
11
– This standard has been developed to carry HDTV digital video signals and formatted data within the
defined payload areas including ancillary data.
– The standard can carry 1280×720, 1920×1080 or 2048×1080 active pixel formats through the 1.5 Gb/s
Serial Digital Interface and enables the carriage of any ancillary data conforming to SMPTE ST 291.
HDTV SMPTE 292 (1.5G HD-SDI) Formats
12
HDTV SMPTE 292 (1.5G HD-SDI) Formats
13
Review of HD-SDI
14
Review of HD-SDI
Y, Cb, Cr Quantizing and 4:2:2 Sampling
15
Analog Composite Signal
– Analog signal transmission between video equipment can be subject to phenomena known as jitter,
signal attenuation, and noise, resulting in signal degradation.
– The Horizontal Sync Signal is also subject to these phenomena, which can introduce synchronization
inaccuracies.
– In composite signals, these synchronization inaccuracies are observed as:
“Geometric Distortion” and “Shift in the Picture’s Position”.
Analog Component Signal
– In analog component signals, such distortions become even more critical.
– Component signals consist of three signals “Y, R-Y, B-Y” which need to be synchronized as one signal
for correct display.
– If a phase shift occurs between the three signals, the color of the picture will be distorted.
– To solve this, the Tri-level Sync System was developed.
16
HDTV Tri-Level Sync
HD Video Signal
– These considerations are important in establishing a sync system accurate enough for HD video signals.
– Higher horizontal resolutions require much faster scanning speeds of the R, G, and B signals to display
an image.
– The faster the scanning speed, the more difficult it becomes to maintain accurate synchronization
(extremely sensitive).
– HD signals use component signals, making the use of the Tri-level Sync system essential.
– In today’s digital interfaces, including those used for both SD and HD, the timings of the video signals
are digitally locked and automatically synchronized at the receiving device.
– This relieves the system and its operators from concerns about inaccurate synchronization.
– However, the Tri-level Sync signal continues to play an important role since digital video devices still use
analog reference signals.
17
HDTV Tri-Level Sync
Why Tri-Level Sync?
– HD has faster rise/fall times
– Easier extraction of simplified field pulses
– Improves jitter performance and sync
separation
– Note the analog HD timing reference point 0H is
measured at the 50% point of the positive rising
edge of the tri-level sync.
HDTV Tri-Level Sync
18
An example of sync signal attenuation
– With the Bi-level Sync System, the timing
of the sync signal’s lock point can slip.
– The Tri-level Sync System uses a
symmetrical sync signal and locks the
center of the signal.
– This ensures that the same lock point is
always used, even when signal
attenuation occurs.
HDTV Tri-Level Sync
t
t
19
SMPTE 292 (HD-SDI) Horizontal Line
EAV SAV
HD-SDI Line Format
20
HD-SDI Data Stream Interleaving
[Figure: HD-SDI data stream interleaving of the luma (Y) and chroma (Cb/Cr) sample words across the active line and the horizontal blanking interval, including the EAV/SAV words.]
21
[Figure: last and first active-line words (…CbD959, YD1918, CrD959, YD1919 | CbD0, YD0, CrD0, YD1…) for SD (Y: 720, Cb/Cr: 360 samples per line) and HD (Y: 1920, Cb/Cr: 960 samples per line).]
Header: 3FFh (all bits in the word set to 1), 000h (all 0’s), 000h (all 0’s)
– In HD, both the luma and chroma signals have an EAV and SAV sequence that is multiplexed to form
a twenty-bit word.
– The wide variety of HD formats requires additional code words in the EAV sequence:
– Code words LN0 and LN1 indicate the current line number of the HD format.
– Code words CR0 and CR1 carry a cyclic redundancy check (CRC) of each HD line.
– These code words are added to both the luma and chroma components after EAV.
HD-SDI Data Stream Interleaving
22
Error Testing, CRC
− CRC checking in high definition is done separately for luma and chroma on each line.
− A CRC value is used to detect errors in the digital active line. It is computed with the generator polynomial
CRC(X) = X^18 + X^5 + X^4 + 1, starting with an initial value of zero at the first active-line word and ending at the
final word of the line number (LN1). The result is then distributed as shown in the table (a minimal sketch of the
calculation follows this slide).
− One CRC value (YCR0, YCR1) is calculated for the luma data and another (CCR0, CCR1) for the color-difference data.
23
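A minimal sketch of the line-CRC calculation described above, assuming the usual HD-SDI conventions (10-bit words processed LSB first, all-zero initial value); the normative bit ordering and the exact span of words covered are defined in SMPTE ST 292-1:

```python
def smpte292_line_crc(words, crc=0):
    """CRC over a sequence of 10-bit words using x^18 + x^5 + x^4 + 1.

    Assumes LSB-first processing and a zero initial value, per the polynomial
    given on the slide; check SMPTE ST 292-1 for the normative formulation.
    """
    POLY = (1 << 5) | (1 << 4) | 1            # reduced polynomial (x^18 term implicit)
    for word in words:
        for bit in range(10):                 # serial order: LSB first
            in_bit = (word >> bit) & 1
            feedback = in_bit ^ ((crc >> 17) & 1)
            crc = (crc << 1) & 0x3FFFF        # 18-bit register
            if feedback:
                crc ^= POLY
    return crc

# The 18-bit result is then split across two words (CR0 = bits 8:0, CR1 = bits 17:9),
# each completed with b8 parity and b9 = NOT b8 in the real interface.
```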
Differences from Standard-Definition TRS
– Sixteen 10-bit words (as opposed to four words in standard definition)
– Start or End of Active Video (EAV,SAV).
– Line Number (LN).
– Cyclic Redundancy Check (error checking)(CRC).
– Cyclic Redundancy Check Codes (CRCC)
EAV sequence (per chroma and luma word stream): 3FF, 000, 000, XYZ, followed by the line-number words LN0 and LN1 and the CRC words (CCR0, CCR1 for chroma; YCR0, YCR1 for luma), then blanking-level words (000) up to the CbData, YData, CrData, YData of the active line.
High Definition TRS (Timing Reference Signals)
24
Header : 3FFh, 000h, 000h
− The “xyz” word is a 10-bit word with the two least significant bits set to zero to survive an 8-bit signal
path. Contained within the standard definition “xyz” word are functions F, V, and H, which have the
following values:
• Bit 8 – (F-bit): 0 for field one and 1 for field two
• Bit 7 – (V-bit): 1 in vertical blanking interval; 0 during active video lines
• Bit 6 – (H-bit): 1 indicates the EAV sequence; 0 indicates the SAV sequence
Timing Reference Signal (TRS) Codes
25
EAV sequence (per chroma and luma word stream): 3FF, 000, 000, XYZ, LN0, LN1, followed by the CRC words (CCR0, CCR1 for chroma; YCR0, YCR1 for luma), then blanking (000) and the CbData, YData, CrData, YData of the active line.
SAV sequence (per chroma and luma word stream): 3FF, 000, 000, XYZ, followed immediately by the CbData, YData, CrData, YData of the active line.
Timing Reference Signal (TRS) Codes
26
− The “xyz” word is a 10-bit word with the two
least significant bits set to zero to survive an
8-bit signal path.
− Contained within the standard definition
“xyz” word are functions F, V, and H, which
have the following values:
• Bit 8 – (F-bit):
0 for field one and 1 for field two
• Bit 7 – (V-bit):
1 in vertical blanking interval; 0 during active
video lines
• Bit 6 – (H-bit):
1 indicates the EAV sequence; 0 indicates the
SAV sequence
EAV, SAV, LN and CRC
27
EAV sequence (per chroma and luma word stream): 3FF, 000, 000, XYZ, LN0, LN1, followed by the CRC words (CCR0, CCR1 for chroma; YCR0, YCR1 for luma), then blanking (000) and the CbData, YData, CrData, YData of the active line.
SAV sequence (per chroma and luma word stream): 3FF, 000, 000, XYZ, followed immediately by the CbData, YData, CrData, YData of the active line.
XYZ WORDS & Vertical Timing Information in Different Formats
28
Protection Bits for SAV and EAV
29
Error Corrections Using Protection Bits (P3-P0)
Protection bits for SAV and EAV
− The error correction applied
provides a DEDSEC (double
error detection – single error
correction) function.
− The received bits denoted by
“–” in the Table, if detected,
indicate that an error has
occurred but cannot be
corrected.
30
Blanking & Ancillary Data
31
– The relative positions of EAV and SAV in comparison to the analog horizontal line are shown.
– Note the analog HD timing reference point 0H is measured at the 50% point of the positive rising edge
of the tri-level sync.
Horizontal Line Timing in HD Formats
32
The Relative Timing Intervals for a Variety of Formats
33
HD Video Bit Rate
34
1080i at 50Hz (1125 Total Lines)
− Luminance (Y): 2640 samples/line × 1,125 lines/frame × 25 frames/sec × 10 bits/sample = 742.5 Mbit/sec
− R-Y (Cr): 1320 samples/line × 1,125 lines/frame × 25 frames/sec × 10 bits/sample = 371.25 Mbit/sec
− B-Y (Cb): 1320 samples/line × 1,125 lines/frame × 25 frames/sec × 10 bits/sample = 371.25 Mbit/sec
− Total Bit Rate
Y + Cr + Cb = 1.485 Gbit/sec
HD Video Bit Rate
35
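The arithmetic on this slide can be reproduced directly; the short sketch below simply multiplies total samples per line, total lines, frame rate and bit depth for each component:

```python
# 1080i/25 (1125 total lines) bit-rate arithmetic from the slide
samples_per_line = {"Y": 2640, "Cb": 1320, "Cr": 1320}   # total (active + blanking) samples
lines, frames, bits = 1125, 25, 10

rates = {c: n * lines * frames * bits for c, n in samples_per_line.items()}
for c, r in rates.items():
    print(f"{c}: {r / 1e6:.2f} Mbit/s")                  # Y: 742.50, Cb/Cr: 371.25
print(f"Total: {sum(rates.values()) / 1e9:.3f} Gbit/s")  # 1.485 Gbit/s
```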
Outline
36
Hybrid Facility
37
Dual Link SDI Format SMPTE 372M
38
– Using existing HD-SDI infrastructure
– Requires two signal paths (Link A & Link B)
– Increase color range from 10 bits to 12 bits
– SMPTE 352M to identify links
– Mapping various formats into existing HD-
SDI structure
Problems
• Interconnection issues
• Swapped or Missing links
• Cable Path different for each Link
Dual Link-supported formats defined in SMPTE 372M.
Line length structure of Dual Link formats
Dual Link SDI Format SMPTE 372M
39
Dual Link SDI Format SMPTE 372M
Source Signal Formats
40
SDI data structure for a single line
Progressive image format divided between Link A and Link B.
Data structure of Link A and B for fast progressive formats.
Data structure for R'G'B' (A) 4:4:4:4 10-bit Dual Link format.
− For the Dual Link signals, the
various formats are mapped
into the two HD-SDI signals.
− Therefore, the various
mapping structures are
constrained by the existing
HD-SDI format.
− Figure shows how the 10-bit
sampled 4:2:2 Luma Y and
Chroma C words are
multiplexed together in the
HD-SDI signal.
Dual Link SDI Format SMPTE 372M
41
− To achieve a greater dynamic range for the signal, a 12-bit data format can be accommodated
within the Dual Link standard. The problem here is that the data structure of each link conforms to 10-
bit words.
− In the case of R'G'B' 4:4:4 12-bit, the most significant bits (MSBs) 2-11 are carried within the 10-bit
words.
− The additional two bits from each of the R'G'B' channels are combined into the Y' channel of Link B.
− Link A carries the G' channel bits 2-11 and the even sample values of B' and R' bits 2-11.
− In Link B the alpha channel is replaced by the combined bits 0-1 of the R'G'B' samples.
− The odd samples of the B' and R' bits 2-11 are carried within the [C'b/C'r] words.
− The combined R'G'B' 0-1 data is mapped into a 10-bit word where EP represents even parity for bits
7-0, the reserved values are set to zero, and bit 9 is the inverse of bit 8 (a sketch of this split follows this slide).
12-bit data format within the Dual Link standard
(R'G'B' 4:4:4 12-bit)
42
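A rough sketch of the 12-bit split described above: the 10 MSBs travel in the normal sample words and the three 2-bit remainders are packed into one 10-bit word with even parity (EP) in bit 8 and bit 9 as its inverse. The exact bit positions chosen for the R′/G′/B′ LSBs here are assumptions for illustration only; the normative layout is the mapping table in SMPTE ST 372:

```python
def split_12bit(sample):
    """Split a 12-bit sample into the 10 MSBs (bits 11..2) carried in the main
    word and the 2 LSBs (bits 1..0) carried in the packing word."""
    return (sample >> 2) & 0x3FF, sample & 0x3

def pack_rgb_lsbs(r_lsb, g_lsb, b_lsb):
    """Illustrative packing of the combined R'G'B' bits 1..0 into one 10-bit word.

    Bit positions here (R' in b7..6, G' in b5..4, B' in b3..2) are assumed for
    illustration. EP (even parity over b7..0) goes in b8 and b9 = NOT b8, as
    stated on the slide; reserved bits stay zero."""
    word = (r_lsb << 6) | (g_lsb << 4) | (b_lsb << 2)
    ep = bin(word & 0xFF).count("1") & 1                # even parity over bits 7..0
    return word | (ep << 8) | ((ep ^ 1) << 9)           # b8 = EP, b9 = NOT b8
```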
Data structure for Y'C'bC'r (A) 4:4:4:4 Dual Link format.
Channel representation for RGB 12-bit.
Mapping structure for R'G'B' 0-1.
12-bit data format within the Dual Link standard
(R'G'B' 4:4:4 12-bit)
43
Channel representation for Y'C'bC'r (A) 4:2:2:4 12-bit.
Mapping structure for Y'C'bC'r 0-1.
Mapping structure for Y' 0-1.
Channel representation for Y’C’bC’r 12-bit
Mapping structure for Y'C'bC'r 0-1.
12-bit data format within the Dual Link standard
(Y'C'bC'r (A) 4:2:2:4 12-bit , Y'C'bC'r 4:4:4 12-bit)
44
Outline
45
– Work at the highest resolution (Bit Depth and Color space) possible prior to rendering the product.
– In standard HD-SDI limited to 4:2:2 YCbCr only at 10-bit
– With Dual Link & 3Gb/s, users can:
• Increase color range from 10 bits to 12 bits
• Switch from 4:2:2 to 4:4:4 sampling to obtain the full chrominance bandwidth
• Work in the RGB domain for easier integration with Special Effects editors, and Telecine
applications
– Digital cinema cameras now being adopted for feature films, television shows, and even commercials
• Panavision Genesis™
• Attack of the Clones, Revenge of the Sith, Apocalypto, …
• Thomson Viper FilmStream™
Why 3Gb/s SDI and High Speed Data?
46
3G SDI first standardized in 2005
ITU-R BT.1120
Restricted to 1920 x 1080p50 Y’C’BC’R 4:2:2 10-bit
SMPTE 3G SDI standards first published in 2006
SMPTE ST 424:2006
Physical layer – 3G equivalent of ST 292-1 (1.5Gb/s SDI)
SMPTE ST 425:2006
Video, audio and ancillary data mapping for the 3G interface
SMPTE ST 297:2006
Optical interface standard covering all SDI rates from 143Mb/s through to 3Gb/s
3Gb/s SDI Standards
47
– Extending the ST 425 document suite in support of HDTV and 2K D-Cinema production with higher
resolution (bit depth and sampling)
– Extending the ST 425 document suite in support of HFR 2K D-Cinema production
– Extending the ST 425 document suite in support of Stereoscopic 3D HDTV and 2K D-Cinema
production
– Extending the ST 425 document suite in support of Stereoscopic 3D HDTV and 3D HFR 2K D Cinema
production
– Extending the ST 425 document suite in support of 4K D-Cinema and UHDTV-1 production
The 3G SDI Document Suite
48
ST 425-xx 3G SDI / ITU-R BT.1120 Part 2
49
ST 425-xx 3G SDI / ITU-R BT.1120 Part 2
50
ST 425-xx 3G SDI / ITU-R BT.1120 Part 2
51
ST 425-xx 3G SDI / ITU-R BT.1120 Part 2
52
ST 425-xx 3G SDI / ITU-R BT.1120 Part 2
53
ST 424 3Gb/s Signal/Data serial Interface
ST 297 Optical Interface
The 3G SDI Document Suite
ST 424 3Gb/s Signal/Data serial Interface
ST 297 Optical Interface 54
3Gb/s SDI Standards
55
ST 424 3Gbps SDI Signal/Data Serial Interface
– It is a standard published by SMPTE which expands upon SMPTE 259M, SMPTE 344M, and SMPTE
292M allowing for bit-rates of 2.970 Gbit/s and 2.970/1.001 Gbit/s over a single-link coaxial cable.
– This standard defines the 3Gb/s SDI physical interface.
10-bit multiplex, Serialization, Scrambling, Coding, Electrical specifications (eye shape, jitter, return loss…..)
– These bit-rates are sufficient for 1080p video at 50 or 60 frames per second. The initial 424M
standard was published in 2006, with a revision published in 2012 (SMPTE ST 424:2012).
ST 297:2006 Serial Digital Fiber Transmission System
– This standard defines an optical fiber system for transmitting bit-serial digital signals.
– It is intended for transmitting SMPTE ST 259 signals (143 through 360 Mb/s), SMPTE ST 344 signals
(540Mb/s), SMPTE ST 292-1/-2 signals (1.485 Gb/s and 1.485/1.001 Gb/s) and SMPTE ST 424 signals
(2.970 Gb/s and 2.970/1.001 Gb/s).
– In addition to optical specification, ST 297 also mandates laser safety testing and that all optical
interfaces are labelled to indicate safety compliance, application and interoperability.
3Gb/s SDI Standards
3G SDI Physical Interface & Fiber Transmission System
56
– Defines the transport of bit-serial data structure for 3.0Gb/s
– Using a single coaxial cable interface
– Supports either 10 or 12 bits data words
– The SDI signal has an identical HD structure and contains two virtual interfaces into which the data is
mapped.
– The definitions of EAV, SAV, Line Count (LN0,LN1), and Checksum (CR0,CR1) conform to the HD-SDI
signal standards.
– Mapped into two virtual interfaces (10 bit parallel data streams (Data Stream One & Data Stream
Two))
Data stream one of the virtual interface
Interface Frequency 148.5MHz or
148.5/1.001 MHz
Data stream two of the virtual interface
Interface Frequency 148.5MHz or
148.5/1.001 MHz
SMPTE 424 Signal/Data Serial Interface
57
– Example of image mapping structure for 4:2:2 YCbCr 10 bits 60/59.94 (Mapping Structure One).
– Data stream one of the virtual interface for a fast progressive format contains the Y Luma data and
data stream two contains the C chroma information
Image Structure
58
– Data Stream one and two of the virtual interfaces are multiplexed together producing twice
the data rate.
– Channel Coding uses NRZI
Data stream one of the virtual interface
Data stream two of the virtual interface
Multiplexed 10-bit parallel interface
Image Structure Multiplexed
59
10-bit multiplex of data stream 1 and data stream 2
The 10-bit data words of parallel data stream one and data stream two of the virtual interface
Data stream one of
the virtual interface
Data stream two of
the virtual interface
Multiplexed 10-bit
parallel interface
Image Structure Multiplexed
60
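A small sketch of the two steps just described: word-interleaving the two virtual-interface data streams, and the scrambled-NRZI channel coding commonly documented for SDI (self-synchronizing scrambler x^9 + x^4 + 1 followed by NRZI). Which stream leads in each interleaved pair is taken here as an assumption (stream two first, giving the familiar Cb, Y, Cr, Y order for mapping structure one); check ST 424/425 for the normative ordering:

```python
def multiplex_streams(ds1, ds2, stream_two_first=True):
    """Word-interleave the two 10-bit virtual-interface data streams into one
    parallel stream at twice the data rate."""
    pairs = zip(ds2, ds1) if stream_two_first else zip(ds1, ds2)
    return [w for pair in pairs for w in pair]

def scrambled_nrzi(bits, state=0):
    """Illustrative SDI channel-coding chain: self-synchronizing scrambler
    x^9 + x^4 + 1 followed by NRZI (x + 1). A sketch, not a verbatim
    transcription of ST 424."""
    out, level = [], 0
    for b in bits:
        scrambled = b ^ ((state >> 8) & 1) ^ ((state >> 3) & 1)  # taps at x^9 and x^4
        state = ((state << 1) | scrambled) & 0x1FF
        level ^= scrambled            # NRZI: transition on every scrambled '1'
        out.append(level)
    return out
```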
SMPTE ST 425-1
Three different mapping modes are defined as:
• Level A Direct Image Mapping
• Level B-DL Dual Link mapping
• Level B-DS Dual Stream mapping
– Level A is the direct mapping of uncompressed images into a serial digital interface
operating at a nominal rate of 3Gb/s.
– Level B-DL is the mapping of ST 372 dual-link data streams into a serial digital interface
operating at a nominal rate of 3Gb/s.
– Level B-DS is the dual-stream mapping of two independent 1.5Gb/s video streams into a single
serial digital interface operating at a nominal rate of 3Gb/s.
3Gb/s SDI Standards
3G SDI Mapping Standards (ST 425)
61
3 Gb/s Source Image Formats defined in SMPTE 425M
SMPTE 425-1 Level A
Signal/Data Serial Interface Source Image Format
62
SMPTE 425-1 Level A
Signal/Data Serial Interface Source Image Format
63
Sampling Structure of the Video Line for the Various Frame Rates
64
Mapping Structures 1, 2, 3 and 4 in SMPTE 425M- Level A
65
Mapping Structure One
66
– Mapping structure one supports the carriage of 4:2:2 sampled Y’C’bC’r data and has application for
1080 format.
– Data stream one of the virtual interface for a fast progressive format contains the Y Luma data and
data stream two contains the C chroma information. These two virtual interfaces are then
multiplexed together to form the 10-bit parallel interface which is then converted into the serial signal.
Mapping Structure One
67
Level A within the SMPTE 425M standard defines the specific direct
image format mapping as initially discussed for the fast progressive
format (Mapping structure one for the fast progressive signals)
Figure shows how the Y’, C’b and C’r samples are combined into
the two virtual interfaces. There are a total of 1920 (0-1919) samples
for the active picture and the blanking width is changed for the
various formats to maintain a constant data rate.
Within the SMPTE 425M the provision is made to allow for the
carriage of a Dual Link signal mapped into a 3 Gb/s signal
and this is defined as Level B. In this case the data from Link A
is mapped into virtual interface one and Link B information is
mapped into virtual interface two. Figure shows how the Dual
Link data is mapped into the two virtual interfaces of the 3
Gb/s signal.
(Mapping Structure One)
Comparison between Level A and Level B
68
− SMPTE ST 425-1 mapping structure 1:
− Y samples and Cb/Cr samples are multiplexed as defined by the mapping structure.
− The reference sample clock is 148.5 MHz, twice the 74.25 MHz clock, with an arrangement equivalent to that of the
SMPTE 292M standard.
1080p 50/59.94/60 4:2:2 10-bit
SMPTE ST 372 § 5.1:
Each line's Y samples and Cb/Cr samples are assigned to a link according to the line number.
Comparison between Level A and Level B
69
Mapping Structure Two
70
– Mapping structure two supports the carriage of 4:4:4 sampled R’G’B’ or Y’C’bC’r data
and has application for both 1080 and 720 formats.
– Data stream one carries all of the G’ and R’ samples and data stream two carries all of
the Alpha and B’ samples. Each of the channels is sampled at 74.25MHz or
74.25MHz/1.001. In the case of the Y'C'bC'r format the G' samples are replaced by Y' and
the color difference values C'b/C'r replace the B'/R' samples, respectively.
Mapping Structure Two
71
− SMPTE ST 425-1 mapping structure 2
− R sample is always in stream 1, B sample is always in stream 2.
4:4:4 and 4:4:4:4 10-bit
− SMPTE ST 372 § 5.2, 5.4:
− Even B + R samples are in stream 1, odd B + R samples are in stream 2.
Comparison between Level A and Level B
72
Mapping Structure Three
73
− Mapping structure three allows for 12-bit data to be carried within the SDI transport as either R’G’B’, Y’C’bC’r or
X’Y’Z’ formats.
− The 12-bit data represented as [11:0] has to be mapped into a 10-bit structure and each 12-bit sample is separated
into four parts ([11:9],[8:6], [5:3], [2:0]). Each of these values is then combined into a 10-bit word for each of the
components R’G’B’, Y’C’bC’r or X’Y’Z’ as defined in next Table.
− These data words are then distributed across the two virtual interfaces and the bits [11:9] and [5:3] are carried by
virtual interface one. The remaining data words [8:6] and [2:0] are carried by virtual interface two as shown in Figure
51.
− In the case of the Y'C'bC'r format the G' samples are replaced by Y' and the color difference values C'b/C'r
replace the B'/R' samples, respectively.
Mapping Structure Three
74
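A sketch of the mapping-structure-three packing described above: each 12-bit R′G′B′ sample is split into the groups [11:9], [8:6], [5:3], [2:0], one 3-bit group from each channel is combined into a 10-bit word, and the [11:9]/[5:3] words go to virtual interface one while the [8:6]/[2:0] words go to virtual interface two. The channel order inside each word and the use of b9 as the complement of b8 are assumptions for illustration; the normative table is in SMPTE ST 425-1:

```python
def split_groups(sample12):
    """Split a 12-bit sample into the four 3-bit groups [11:9], [8:6], [5:3], [2:0]."""
    return [(sample12 >> shift) & 0x7 for shift in (9, 6, 3, 0)]

def pack_word(g3, b3, r3):
    """Combine one 3-bit group from each channel into a 10-bit word.

    Channel placement (b8..b6 = G', b5..b3 = B', b2..b0 = R') and b9 = NOT b8
    are illustrative assumptions."""
    word = (g3 << 6) | (b3 << 3) | r3
    b8 = (word >> 8) & 1
    return word | ((b8 ^ 1) << 9)

def map_structure_three(r12, g12, b12):
    """One R'G'B' sample -> four 10-bit words: [11:9] and [5:3] groups on
    virtual interface one, [8:6] and [2:0] groups on virtual interface two."""
    rg, gg, bg = split_groups(r12), split_groups(g12), split_groups(b12)
    vi1 = [pack_word(gg[0], bg[0], rg[0]), pack_word(gg[2], bg[2], rg[2])]
    vi2 = [pack_word(gg[1], bg[1], rg[1]), pack_word(gg[3], bg[3], rg[3])]
    return vi1, vi2
```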
− In digital cinema application, a different color space of X’Y’Z’ is used to give a greater dynamic
range to the representation of color to replicate the color depth available from film.
− SMPTE 428 defines the various parameters of this color space.
− In the case of the X’Y’Z’ format the R’ samples are replaced by X’, the G’ samples are replaced by Y’
and the B’ samples are replaced by Z’.
− Each of the channels is sampled at 74.25MHz or 74.25MHz/1.001. To maintain the constant 3 Gb/s
data rate for the various supported formats the blanking width is changed.
Mapping Structure Three
75
12-Bit Mapping Structure of R’G’B into the 10-bit Virtual Interface
76
12-bit Mapping Structure of Y’ C’b C’r into the 10-bit Virtual Interface
77
− SMPTE ST 425-1 Mapping structure 3:
− Each 10-bit word carries 3 bits from each of the three RGB channels (the 12-bit samples are completed across 4 words).
4:4:4 12-bit
− SMPTE ST 372 § 5.3, 5.4:
− The upper 10 bits of each of the three RGB channels are carried as in the § 5.2 4:4:4 10-bit format.
− The remaining two least significant bits of each channel are packed into one word that is carried in place of the
alpha channel data.
Comparison between Level A and Level B
78
Mapping Structure Four
79
– Mapping structure four supports the carriage of 4:2:2 sampled Y’C’bC’r data and has
application for 1080 format as 12 bits.
– In order to map this 12-bit data into the 10-bit infrastructure of the SDI interface, the 12-bit
data represented as [11:0] has to be divided into different words.
– In mapping structure four, the first half of the Y’ data bits [11:6] are carried in virtual
interface one and the subsequent Y’ data bits [5:0] are carried in the next packet of the
virtual interface one as shown in Table.
Mapping Structure Four
80
– Figure shows how the data packets are combined into the two virtual interfaces.
– The luma signal (Y’) is sampled at 74.25MHz or 74.25MHz/1.001 and the chroma channels
(C’b/C’r) are sampled at half this rate of 37.125MHz or 37.125MHz/1.001.
Mapping Structure Four
81
− SMPTE ST 425-1 mapping structure 4:
− Each sample is carried in two 10-bit words, 6 bits of each Y/Cb/Cr channel at a time.
− Stream 1 consists of luminance and stream 2 consists of chrominance.
4:2:2 12-bit
− SMPTE ST 372 § 5.5:
− Stream 1 carries the upper 10 bits of each channel, multiplexed in the order Cb / Y / Cr / Y. Stream 2 carries the
lower 2 bits of each channel and the 10-bit alpha channel.
Comparison between Level A and Level B
82
− Mapping of two parallel 10 bit interfaces with same line and frame structure in conformance with SMPTE292.
SMPTE 425M Level B
Level B-DL (Dual Link) mapping, Level B-DS (Dual Stream) mapping
83
− Mapping of two parallel 10 bit interfaces with same line and frame structure in conformance with SMPTE292.
SMPTE 425M Level B
Level B-DL (Dual Link) mapping, Level B-DS (Dual Stream) mapping
84
3Gb/s Level B Mapping of SMPTE 372M Dual Link
Level B-DL Dual Link mapping
85
SMPTE 425-1 Level B
Signal/Data Serial Interface Source Image Format
86
3Gb/s Serial Digital Interface
– Pk-to-Pk Amplitude 800mV +/-10%
– DC Offset 0.0V +/-0.5V
– Rise/Fall Time between 20% & 80% no greater than
135ps and not differ by more than 50ps
– Overshoot rise/fall not to exceed 10% of amplitude
– Timing Jitter <= 2UI above 10Hz
– Alignment Jitter <= 0.3UI above 100kHz
Eye Specifications per SMPTE Standards
87
– SMPTE recommended practice RP184 has a set of definitions and measurement procedures for the
measurement of jitter.
– SMPTE 424M, 292 and 259M defines a set of frequency limits based on this recommended practice.
– f1 = 10 Hz = Timing jitter lower band edge for SD, HD and 3G-SDI
– f3 = 1 kHz = Alignment jitter lower band edge for SD
– f3 = 100 kHz = Alignment jitter lower band edge for HD & 3G-SDI
– f4 > 1/10 the clock rate = Upper band edge
Jitter Measurements
88
Jitter Measurements Review
89
Jitter Measurements Review
Output Jitter Limits for SDI Clocks
90
Technical Report 002
Advice on the use of 3 Gbit/s HD-SDI interfaces
91
Pathological Signals - Stress Testing
92
EAV, SAV, LN and CRC
93
Luma and Chroma Components
94
− The “XYZ” word is a ten-bit word with the two least significant bits set to zero, allowing translation to
and from an eight-bit system. Bits of the “XYZ” word have the following functions:
• Bit 9 – (Fixed bit) always fixed at 1
• Bit 8 – (F-bit) always 0 in a progressive scan system; 0 for field one and 1 for field two in an interlaced or segmented frame system.
• Bit 7 – (V-bit) 1 in vertical blanking interval; 0 during active video lines
• Bit 6 – (H-bit) 1 indicates the End of Active Video (EAV) sequence; 0 indicates the Start of Active Video (SAV) sequence
• Bits 5, 4, 3, 2 – (Protection bits) provide a limited error correction of the data in the F, V, and H bits
• Bits 1, 0 (Fixed bits) set to 0 to have identical word values in 10-bit or 8-bit systems
Format of XYZ Word for HD and SD Standards
95
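A compact sketch of building and parsing the XYZ word from the bit assignments listed above; the protection-bit formulas used here (P3 = V xor H, P2 = F xor H, P1 = F xor V, P0 = F xor V xor H) are the usual TRS definitions and should be checked against the standard:

```python
def make_xyz(f, v, h):
    """Build the 10-bit XYZ word from the F, V and H flags described on the slide;
    bits 1..0 stay zero so the word survives an 8-bit path."""
    p3, p2, p1, p0 = v ^ h, f ^ h, f ^ v, f ^ v ^ h
    return (1 << 9) | (f << 8) | (v << 7) | (h << 6) | (p3 << 5) | (p2 << 4) | (p1 << 3) | (p0 << 2)

def parse_xyz(word):
    """Extract F (field), V (vertical blanking) and H (EAV/SAV) from an XYZ word."""
    return {"F": (word >> 8) & 1, "V": (word >> 7) & 1, "H": (word >> 6) & 1}

# Example: EAV during an active video line of a progressive frame -> F=0, V=0, H=1
assert parse_xyz(make_xyz(0, 0, 1)) == {"F": 0, "V": 0, "H": 1}
```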
Analog HD Vertical Blanking Interval
96
Vertical Timing for Digital HD Formats
97
Analog HD Timing Parameters with Selected Digital Relationships
98
SMPTE ST 424: 2012
Updates to add provisions for use of other connector types
Typical cable loss recommendation changed from -20dB to <-30dB
SMPTE ST 425: 2011
Revised to include Digital Cinema production formats and add 32 channel audio support
Split into multiple parts to accommodate future revisions for stereo and high resolution images
425-0 – Index
425-1 – Replaces current 425
425-2 – A Stereo Pair of 1.5Gb/s images – tie up with ST292-2
425-3 – Single Images with payload up to 6 Gb/s, carried on 2 links
425-4 – A Stereo pair of 3 Gb/s signals on 2 links
425-5 – Single Image with payload up to 12 Gb/s, carried on 4 links
425-6 – A Stereo Pair of 6Gb/s signals, transported via 4 links
3Gb/s SDI Standards – Continuing Evolution
99
General Issues for 1080p50/60
– Both Level A and Level B-DL mapping modes have similar capabilities BUT they are not compatible
• For 1080p50/60, conversion between Level A and Level B-DL introduces a delay of at least one video line on
each conversion.
• Conversion of signals with embedded audio or other ancillary data may increase the delay and introduce
additional complexity to correct the positioning or timing of some ancillary data packets.
• Some devices process signals internally using a different standard to their own input/output standard. It is
always advisable to confirm these devices compensate for any conversion delay internally before
installation.
– Users should establish capabilities of proposed purchases before designing new installations.
– Facility designers may wish to select one mapping format (Level A or Level B-DL) for each facility's routing / vision
mixer signal “cloud”.
3G SDI – Some Things to Consider
100
Switching Regions
− For Level A and Level B-DS, the serial stream switch point is defined in SMPTE RP 168:2009
− For Level B-DS, there is no requirement for frame alignment of each image. If the two images are not frame
aligned, video switching could be adversely affected.
− Users and facility designers should always ensure that Level B-DS equipment guarantees frame alignment.
ST 352 Payload ID
− The use of the SMPTE ST 352 Payload ID is mandatory due to the large number of different video formats that can
be carried in the 3 Gb/s interface.
− Without the payload ID, it is not possible to correctly identify all of the supported formats or mapping modes
purely from inspection of the payload data.
− Users should ensure that any proposed new purchases support ST 352 payload ID before designing new
installations
3G SDI – Some Things to Consider
101
Embedded audio
– Level A, Level B-DL and Level B-DS can all carry up to 32 audio channels
– The channel assignments and identification are different
• Level A uses 8 separate audio groups (of 4 channels each) - in accordance with ST 299-1 AND ST 299-2 - all 32
channels are uniquely identified
• Level B-DL uses two streams of 4 audio groups (of 4 channels each) – in accordance with ST 299-1 - identical
channel numbers are used but channels 1~16 can only be differentiated from channels 17~32 at the ST 372
Dual-Link (Link A / Link B) level
• Level B-DS is similar to Level B-DL carrying two links of 16 channels but there is no defined channel assignment
for this mapping
– End-users and facility designers should ensure that audio embedders/de-embedders correctly
identify audio channel mapping in mixed Level A / Level B systems
– Extra care should be taken in 3G system upgrades to ensure that these new audio embedding
capabilities are handled transparently throughout the plant
3G SDI – Some Things to Consider
102
Outline
103
• Active picture information
• Vertical blanking interval
• Horizontal blanking intervals
– Blanking intervals carry the vertical and horizontal
synchronizing information.
– The vertical blanking interval contains:
– The vertical synchronizing pulses
– “Unused” lines of video
– The horizontal blanking interval is made up of the
front porch, horizontal synchronizing pulse, the
breezeway, the color subcarrier “burst” and the back
porch.
A Historical Perspective of HB & VB — Analog
[Figure: analog raster showing the active picture (NTSC/PAL, 483/576 of 525/625 lines, 708-720 pixels), the horizontal blanking interval (front porch, horizontal sync pulse, breezeway, color subcarrier burst, back porch), and the vertical blanking interval with its vertical switching line and data lines (line selection).]
– In earlier analog systems, the “unused” lines in the
vertical blanking interval could be used to carry
“extra information”.
– This situation enabled applications such as closed
captioning for the hearing impaired and
news/sports/weather/other “teletext” extra visual
information.
– For production applications, time code in the vertical
blanking interval enhanced video tape edit decisions.
– Other applications such as signaling downstream
equipment to perform certain tasks were also possible.
A Historical Perspective of HB & VB — Analog
105
– As the vertical blanking interval is divided into lines,
the data is added line by line (a process that is
commonly known as “line selection.”)
– Due to the video signal being interlaced with odd
and even lines, as a line is selected, there are the
field 1 and field 2 selections.
A Historical Perspective of HB & VB — Analog
Metadata and data, with their locations and the governing standard, for analog video signals.
A Historical Perspective of HB & VB — Analog
107
– The move to digital video enabled more data to be
added.
– The “blanking intervals” in analog video signals are
analogous to “ancillary data spaces” in digital video
signals.
Vertical ancillary data space (VANC)
Horizontal ancillary data space (HANC)
[Figure: SD-SDI (270 Mb/s, YCbCr 4:2:2 10-bit) frame showing the active picture (483/576 of 525/625 lines, 708-720 pixels), the HANC and VANC ancillary data spaces, SAV/EAV, the vertical switching line, and four groups of embedded audio (16 channels).]
A Historical Perspective of HB & VB — Digital
– Vertical and horizontal synchronizing pulses: SAV, EAV
– 16 channels of digital audio could be carried, along
with the digital video signal, with any other
“additional data.”
– This is known as embedding the audio and data
signals into the video signal.
– A digital video signal is made of:
• Video essence
• Audio essence
• Any additional data essence or metadata
A Historical Perspective of HB & VB — Digital
– It is suggested that the audio information be placed between the EAV and SAV during the horizontal
blanking interval.
– Not all of the VANC data space is available.
• For instance, the luminance samples on one line per field are reserved for DVITC (digital vertical
interval time code) and the chrominance samples on that line may be devoted to video index
(also Closed Caption, Timecode, AFD, and WSS (Wide-Screen Signaling)).
• Also, it would be wise to avoid using the vertical interval switch line and perhaps, the subsequent
line where data might be lost due to relock after the switch occurs.
Component Digital
Ancillary Data Space for
SD-SDI
HANC & VANC
110
111
HANC & VANC
Horizontal and Vertical Interval around a Switching Line
A switched area of a video frame construct on a digital serial interface
− It would be wise to avoid using the vertical interval switch line and perhaps, the subsequent line where
data might be lost due to relock after the switch occurs.
112
RP 291-2 also provides a method of calculating the available ancillary data space on any interface.
– The formatting of the ancillary data packets is the same between SD and HD.
– Ancillary data is formatted into packets prior to multiplexing it into the video data stream.
Ancillary data packet structure of the SDI: ADF (000h, 3FFh, 3FFh), DID, SDID or DBN, DC, User Data Words (max 255 words), CS.
The maximum length of an ancillary packet is 255 words; the maximum number of user data words is 248.
113
• Ancillary Data Flag (ADF): For identifying the start of the ancillary data packet and uses the code word 000h, 3FFh,
3FFh. (This is the reverse of the code words used for EAV and SAV data)
• Data Identification word (DID): For signifying the type of data being carried so that equipment can quickly identify
the type of data present within the signal.
• Data Block Number (DBN): For providing sequential order to ancillary data packets, allowing a receiver to
determine if data is missing (it is an optional counter).
• Secondary Data ID (SDID): For providing a wider range of allowed values and can be used for a series of data to be
grouped, for instance, the Dolby Vertical Ancillary data has a series of SDID to identify the audio channels the data is
associated with.
• Data Count (DC): For indicating the amount of data in the packet.
• Checksum (CS): For detecting errors in the data packet (a sketch of packet assembly follows this slide).
Ancillary data packet structure in HD
[Figure: ancillary data packet structure: ADF (000h, 3FFh, 3FFh), DID, SDID, DBN, DC, User Data Words (max 255 words), CS.]
114
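A sketch of assembling a type-2 ancillary data packet from the fields above, assuming the usual SMPTE 291 conventions for the header words (bit 8 carries even parity of bits 7-0, bit 9 is its inverse) and for the checksum (the 9-bit sum of DID through the last user data word, with bit 9 set to the inverse of bit 8):

```python
def with_parity(value8):
    """Form a 10-bit ANC header word: b7..0 = value, b8 = even parity, b9 = NOT b8."""
    p = bin(value8 & 0xFF).count("1") & 1
    return (value8 & 0xFF) | (p << 8) | ((p ^ 1) << 9)

def anc_packet_type2(did, sdid, udw):
    """Assemble a SMPTE 291-style type-2 ANC packet as a list of 10-bit words.

    ADF (000h, 3FFh, 3FFh), DID, SDID, DC, user data words, then a checksum
    built from the 9 LSBs of DID through the last UDW."""
    words = [0x000, 0x3FF, 0x3FF, with_parity(did), with_parity(sdid), with_parity(len(udw))]
    words += [w & 0x3FF for w in udw]
    cs = sum(w & 0x1FF for w in words[3:]) & 0x1FF      # 9-bit sum, ADF excluded
    words.append(cs | ((((cs >> 8) & 1) ^ 1) << 9))     # b9 = NOT b8
    return words
```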
Ancillary identification codes for type 2.
Ancillary identification codes for type 1.
115
ANC Data Placement & Vertical Interval Switching Line Numbers
ANC data placement.
Vertical Interval Switching line numbers
116
A Historical Perspective of HB & VB — Digital
117
A Historical Perspective of HB & VB — HD
118
A Historical Perspective of HB & VB — HD
119
A Historical Perspective of HB & VB — HD
120
– The UMID label data consists of 32 8-bit bytes for a basic UMID or 64 bytes for an extended UMID.
– The number of words in the UDW is indicated in the DC field of the ANC packet header.
Ex: Packing Unique Material Identifier (UMID) and Program ID Label Data
Data structure of a SMPTE 291M ANC packet (type 2)
Data structure of UDWs for UMIDs
UMID data count (DC) and key values
121
Handling of digital audio is defined in:
ANSI/SMPTE Standard 272M:
Formatting AES/EBU Audio and Auxiliary Data into Digital Video Ancillary Data Space, for 525/60 and 625/50
ANSI/SMPTE 259M formats.
ANSI/SMPTE 299M:
24-Bit Digital Audio Format for HDTV Bit-Serial Interface for ANSI/SMPTE 292M formats.
– From 2 to 16 AES/EBU audio channels are transmitted in pairs and combined where appropriate into groups
of four channels.
– Each group is identified by a unique ancillary data ID.
– Audio is sampled at a video-synchronous clock frequency of 48 kHz, or optionally at synchronous or
asynchronous rates from 32 kHz to 48 kHz.
Digital Audio
122
Digital Audio
Television Clock Relationships
123
Digital Audio Block & Frame Format
In the SD structure the ancillary audio data is distributed
across the Cb, Y, Cr, Y word sequence.
124
- 4 Preamble Bits
- 24 Data Bits
- 1 Validity
- 1 User
- 1 Channel status
- 1 Parity (Even parity)
Validity bit
− The validity bit shall be logic “0” if the audio sample word is suitable for conversion.
− The validity bit shall be logic “1” if the audio sample word is not suitable for conversion. (There is no default value for
validity bit)
User data format
− User data bits may be used in any way desired by the user. (The default value of the user data bit shall be logic "0")
− Channel status Byte 1 bits 4-7 indicate possible formats for the user data channel.
Digital Audio Sub-frame Format
125
- 4 Preamble Bits
- 24 Data Bits
- 1 Validity
- 1 User
- 1 Channel status
- 1 Parity (Even parity)
Channel status format
− The channel status for each audio signal carries information associated with that audio signal (Examples: length of
audio sample words, number of audio channels, sampling frequency, time code, alphanumeric source and
destination codes, and pre-emphasis), and thus it is possible for different channel status data to be carried in the two
sub-frames of the digital audio signal.
• Channel status information is organized in 192-bit blocks, subdivided into 24 bytes.
• The first bit of each block is carried in the Frame with preamble "Z".
• The specific organization follows, wherein the suffix 0 designates the first Byte or bit. Where multiple bit states
represent a counting number, tables are arranged with most significant bit (MSB) first, except where noted as
LSB first.
Digital Audio Sub-frame Format
126
AES/EBU digital audio signal structure
127
Frame repeat
– 1 Frame : 1/48kHz = 20.83us (44.1kHz=22.67us)
Block repeat
– 1 Block : 20.83us ×192frame = 4ms (44.1kHz=4.352ms)
AES/EBU Interface bit rate
– 48kHz × 2CH × 32Bit = 3.072Mbps
After biphase mark (BMC) encoding
– 3.072Mbps × 2 = 6.144Mbps
AES/EBU Frames & Sub-frames & Data rate
128
Biphase Mark Encoding
129
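A minimal sketch of biphase mark encoding as used on the AES/EBU interface: every bit cell begins with a transition and a '1' adds a second transition at mid-cell, which is why the channel rate is double the data rate (3.072 Mbit/s becomes 6.144 Mbaud for a 48 kHz stereo pair):

```python
def biphase_mark_encode(bits, level=0):
    """Biphase-mark (BMC) encode a bit sequence, two half-cells per data bit."""
    out = []
    for b in bits:
        level ^= 1                # transition at the start of every bit cell
        out.append(level)
        if b:
            level ^= 1            # extra mid-cell transition for a '1'
        out.append(level)
    return out

# Even a run of zeros keeps the line transitioning once per bit cell:
print(biphase_mark_encode([1, 0, 1, 1, 0]))
```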
The subframe preambles starting with a
transition from negative to positive.
The subframe preambles starting with a
transition from positive to negative.
– Each preamble must transition to a different level from that of the last state of the bit before it.
X, Y and Z Sync Words (Preambles)
130
Insertion of audio frames in ancillary data packets of the SDI
n=8 (SD/1.5G HD-SDI)
131
Insertion of audio frames in ancillary data packets of the SDI
132
The Audio data packet precedes the Extended data packet
in the SDI data stream.
When the audio data is 24-bit for SD, the Audio Data
Packet only transmits 20-bit data, the Extended Data
Packet is used to transmit another 4-bit data.
133
− When the audio data is 24-bit for SD, the Audio Data Packet only transmits 20-bit data, the Extended
Data Packet is used to transmit another 4-bit data.
Audio Data Packet and Extended Data Packet
134
– The Audio Data Packet contains
one or more audio samples from
up to four audio channels.
– 20 audio bits and C, U, V bits
from each AES sub-frame, i.e. 23
bits are mapped into three 10-bit
video words (X, X+1, X+2).
Basic Embedded Audio
135
Data Identifiers (DID) for up to 16-Channel Operation of SD embedded audio
Embedded Audio Bit Distribution
– Bit-9 is always the inverse of bit-8 to
ensure that none of the excluded word
values (3FFh through 3FCh or 003h
through 000h) are used.
– The Z-bit is set to “1” corresponding to
the first frame of the 192-frame AES
block.
– Bit-8 in word X+2 is even parity for bits 0-8
in all three words.
136
Full-featured embedded audio to include:
• Carrying the 4 AES auxiliary bits (which may be used to extend the audio samples to 24-bit)
• Allowing non-synchronous clock operation
• Allowing sampling other than 48 kHz
• Providing audio-to-video delay information for each channel
• Documenting Data IDs to allow up to 16 channels of audio in component digital systems
• Counting “audio frames” for 525 line systems.
Extended Embedded Audio
137
– The Audio Control Packet is transmitted once per field in the second horizontal ancillary data space
(on the second line) after the vertical interval switch point.
– It contains information on audio frame number, sampling frequency, active channels, and relative
audio-to-video delay of each channel.
– Transmission of audio control packets is optional for 48 kHz synchronous operation and required for all
other modes of operation (since it contains the information as to what mode is being used).
Audio Control Packet Formatting
138
– When the audio data is 24-bit for SD, it is split into 20 bits of audio data and an extended packet
containing the 4 auxiliary bits.
– The Audio Data Packet only transmits 20-bit data, the Extended Data Packet is used to transmit
another 4-bit data.
– The full 24 bits of audio data are sent as a group in HD (no split).
– Since the full 24 bits of audio data are carried within the user data there is no extended data packet
used within HD.
(HD)
(SD)
Subframe Formats in SD and HD
139
There are some similarities and differences in the implementation of AES/EBU within SD and HD environment.
– The formatting of the ancillary data packets is the same between SD and HD.
– The information contained within the user data is different because the full 24 bits of audio data are sent in HD.
– Therefore, the total number of bits used in HD is 28 (24+4 (V, U, C, P)) bits compared with 23 (20+3 (V, U, C)) bits in
SD.
– The 24 bits of audio data are placed in 4 ancillary data words along with C, V, U and Z-bit flag.
– Additionally, the CLK and ECC words are added to the packet.
Error Correction Codes
Structure of HD Audio Data Packet
140
– Conformance to the ancillary data packet structure means that the Ancillary Data Flag (ADF) has a three-
word value of 000h,3FFh, 3FFh, as SMPTE 291M.
– The one-word DID (Data Identification) has the following values to identify the appropriate group of audio
data as shown in Table.
– DBN is a one-word value for data block number
– DC is a one-word data count which is always 218h.
– The User Data Words (UDW) always contains 24 words of data.
– UDW0 and UDW1 are used for audio clock phase data and provide a means to regenerate the audio
sampling clock. The data within these two words provides a count of the number of video clocks between
the first word of EAV and the video sample corresponding to the audio sample.
Data Identifiers (DID) for up to 16-Channel Operation of HD embedded audio.
Structure of HD Audio Data Packet
141
– Each audio data subframe is distributed across 4 UDW samples.
– The full preamble data is not carried within the 4 words, only a reference to the
start of the 192-frame block by use of the Z-bit indicator. Also, unlike SD, the parity bit carried is the P bit
used within the 32-bit subframe.
– The Error Correction Code (ECC) is a set of six words used to detect errors
within the first 24 words from ADF to UDW17. The value is calculated by applying
the 8 bits of data B0-B7 of the 24 words through a BCH code information circuit
that produces the 6 words of the ECC (Error Correction Code.)
– The ancillary data information is multiplexed within the color difference Cb/Cr
data space only (Unlike SD).
– The Y data space is only used for the audio control packet that occurs once per
field and is placed on the second line after the switching point of the Y data.
– No ancillary data is placed within the signal on the line subsequent to the
switching point. (The switching point location is dependent on the format of the
HD signals, for example in the 1125/60 system no ancillary data is put on line 8)
Structure of HD Audio Data Packet
142
– The audio control packet carries additional information used in the process of decoding the audio data and has a similar structure to SD.
– The Ancillary Data Flag has a three-word value of 000h, 3FFh, 3FFh.
– The one-word DID has values to identify the appropriate group of audio data.
– DBN is always 200h
– DC is always 10Bh.
– The UDW contains 11 words of data structured into five different types of data.
– The Audio Frame (AF) number data provides a sequential number of video frames to assist in indicating the position of the audio samples when
using a non-integer number of audio samples per frame.
– The RATE indicates the sampling rate of the audio data and whether the data is synchronous or asynchronous.
– The ACT word indicates the number of active channels within the group.
– DELm-n indicates the amount of accumulated audio processing delay relative to video measured in audio sample intervals for each channel pair
1&2 and 3&4.
Structure of audio control packet
Structure of HD Audio control packet
143
Audio Data Packet
Audio Extended Data Packet
Audio Control Packet
Audio Data Packet, Audio Extended Data Packet and Audio Control Packet
144
Outline
145
UHD1, UHD2, 4K & 8K
Transparency channel or Alpha channel
146
UHD1, UHD2, 4K & 8K
147
UHD1, UHD2, 4K & 8K
148
[Number of physical links]
UHD1, UHD2, 4K & 8K
149
DVB UHD Phases
HDR Delivery
150
SDI and UHD
151
Technology and Standards Timeline
152
ST 2081-xx 6G SDI / ITU-R BT.2077-1 Part 3
153
ST 2081-xx 6G SDI / ITU-R BT.2077-1 Part 3
154
ST 2081-xx 6G SDI / ITU-R BT.2077-1 Part 3
155
ST 2081-xx 6G SDI / ITU-R BT.2077-1 Part 3
156
ST 2082-xx 12G SDI / ITU-R BT.2077-1 Part 3
157
ST 2082-xx 12G SDI / ITU-R BT.2077-1 Part 3
158
ST 2082-xx 12G SDI / ITU-R BT.2077-1 Part 3
159
ST 2082-xx 12G SDI / ITU-R BT.2077-1 Part 3
160
ST 2082-xx 12G SDI / ITU-R BT.2077-1 Part 3
161
The Gearbox Concept
162
SDI Physical Layer Parameters -Optical
163
SDI Physical Layer Parameters -Optical
For given transmit and receive parameters, worst case dispersion limited optical link lengths of 12km, 5km
and 2km @ 6Gb/s, 12Gb/s and 24Gb/s respectively are targeted in single-mode fiber
164
SDI Physical Layer Parameters -Optical
– A single “robust” optical connector solution is being standardized as proposed SMPTE ST 2091
– One form factor (multi-fiber), to support data rates from 270Mb/s to 200Gb/s on a single (multi-fiber optical), cable.
– Rugged, robust and dirt-protected (shuttered aperture)
– Simple integration – LC connection or MTP® (multi-fiber push-on) connector
– Versatile – multichannel 2, 4 and 8 fiber connection system
– Common QSFP optical module form factor
– Up to 4km link distance at all data rates
LC®/PC (Lucent Connector / Physical Contact) connector
165
SMPTE UHDTV-SDI Physical Layer
SMPTE ST2081-1 & ST2082-1
require that jitter is measured
over two different frequency
bands.
166
Video Levels in SDI
Full Range (Newly introduced)
– In file based workflows the full range of levels can be used to improve accuracy in color conversion.
– Some digital image interfaces reserve digital values, e.g. for timing information, such that the
permitted video range of these interfaces is narrower than the video range of the full-range signal.
– The mapping from full-range images to these interfaces is application-specific.
– SDI has excluded code words for EAV and SAV timing reference signal or TRS, so full range gets
changed to 4d to 1019d (16d to 4092d for 12-bit) for SDI.
– The conversion method is up to the device outputting the SDI, i.e. whether the data gets clipped or
converted to fit this range.
167
Video Levels in SDI
Narrow Range
– Traditional SDI has used 0 to 700 mV to represent levels from black to white, typically referred to as
0%-100% or 0 IRE to 100 IRE.
– 64d to 960d for 10-bit (256d to 3840d for 12-bit)
– The narrow range representation is in widespread use and is considered the “default”.
– Narrow range signals may extend below black (sub-blacks) and exceed the nominal peak values
(super-whites), but should not exceed the video data range.
168
Video Levels
Digital 10- and 12-bit integer representation (ITU-R BT.2100-1)
Round( x ) = Sign( x ) * Floor( | x | + 0.5 )
Floor( x ) is the largest integer less than or equal to x
Resulting values that exceed the video data range should be clipped to the video data range.
Narrow Range
D = Round[ (219 × E′ + 16) × 2^(n−8) ]   (R′, G′, B′, Y′, I)
D = Round[ (224 × E′ + 128) × 2^(n−8) ]   (C′B, C′R, CT, CP)
Full Range
D = Round[ (2^n − 1) × E′ ]   (R′, G′, B′, Y′, I)
D = Round[ (2^n − 1) × E′ + 2^(n−1) ]   (C′B, C′R, CT, CP)
Coding (Narrow Range: 10-bit, 12-bit | Full Range: 10-bit, 12-bit)
Black (R′ = G′ = B′ = Y′ = I = 0), DR′, DG′, DB′, DY′, DI: 64, 256 | 0, 0
Nominal Peak (R′ = G′ = B′ = Y′ = I = 1), DR′, DG′, DB′, DY′, DI: 940, 3760 | 1023, 4095
Nominal Peak (C′B = C′R = −0.5), DC′B, DC′R, DCT, DCP: 64, 256 | 0, 0
Achromatic (C′B = C′R = 0), DC′B, DC′R, DCT, DCP: 512, 2048 | 512, 2048
Nominal Peak (C′B = C′R = +0.5), DC′B, DC′R, DCT, DCP: 960, 3840 | 1023, 4095
Video Data Range: 4~1019, 16~4079(?) | 0~1023, 0~4095
169
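The narrow-range and full-range formulas above can be checked with a few lines of code; the sketch below implements them as written (clipping to the video data range would be applied afterwards):

```python
import math

def bt2100_round(x):
    """Round(x) = Sign(x) * Floor(|x| + 0.5), as defined on the slide."""
    return int(math.copysign(math.floor(abs(x) + 0.5), x))

def quantize(e, n=10, narrow=True, chroma=False):
    """Digital code value D for a normalized signal E' per the slide's formulas."""
    if narrow:
        return bt2100_round(((224 * e + 128) if chroma else (219 * e + 16)) * 2 ** (n - 8))
    return bt2100_round((2 ** n - 1) * e + (2 ** (n - 1) if chroma else 0))

# Spot checks against the table (10-bit narrow range):
assert quantize(0.0) == 64 and quantize(1.0) == 940
assert quantize(0.0, chroma=True) == 512 and quantize(0.5, chroma=True) == 960
```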
Code Values for 10-bit and 12-bit Y or RGB.
Video Levels
Digital 10- and 12-bit integer representation (ITU-R BT.2100-1)
170
Code Values for 10-bit and 12-bit Cb and Cr.
Video Levels
Digital 10- and 12-bit integer representation (ITU-R BT.2100-1)
171
– In file based workflows the full range of video levels can be used to improve accuracy in color conversion in a 10-bit
or 12-bit system.
– When a file is converted to SDI, the data may be scaled or clipped, depending on the device, to the allowed range
of SDI levels.
– The full range should not be used for program exchange unless all parties agree.
Mapping from/to Full-Range
[Figure: level mappings]
10-bit file / 10-bit system (full range): 0d to 1023d
12-bit file / 12-bit system (full range): 0d to 4095d
Full-range SDI (10-bit): 4d to 1019d
Narrow-range SDI (10-bit): 64d to 940d/960d
Narrow-range SDI (12-bit): 256d to 3760d/3840d
172
Outline
173
ST 2082-xx 12G SDI / ITU-R BT.2077-1 Part 3
174
ST 2082 ‘Image Mapping Data Flow’ Roadmap
175
SMPTE ST 2082-10
2160-line Source Image and Ancillary Data Mapping for 12G-SDI
MODE 1: 2160-line source image formats
and ancillary data into a 12 Gb/s
[nominal] SDI bit-serial interface
176
Carriage of 2160-line images in a 12G-SDI interface
Generalized process
10/12 bit
– The source images are divided into two or four 1080-line sub images, depending on the format of the source image.
– Each 10-bit data stream includes timing and sync words, line numbers, cyclic redundancy codes, ancillary data, including audio, and
payload identification packets.
– Mux: data stream 8, data stream 4, data stream 6, data stream 2, data stream 7, data stream 3, data stream 5, data stream 1
177
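A small sketch of the eight-stream word interleave in the mux order stated above (data streams 8, 4, 6, 2, 7, 3, 5, 1 for every sample position); the streams are modeled here simply as equal-length lists of 10-bit words:

```python
MUX_ORDER = (8, 4, 6, 2, 7, 3, 5, 1)   # word order given on the slide

def mux_12g(streams):
    """Interleave eight 10-bit data streams (dict keyed 1..8, equal length)
    into the single word sequence of the 80-bit virtual interface."""
    out = []
    for i in range(len(streams[1])):
        out.extend(streams[k][i] for k in MUX_ORDER)
    return out
```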
MODE 1: 2160-line source image formats and ancillary data into a 12 Gb/s
[nominal] SDI bit-serial interface
(UHDTV1 and Digital Cinematography Production)
178
Mapping Process
Carriage of 2160-line mapping source image formats in a 12G-SDI interface
– The 2160-line source image is divided into four 1080-line sub images in accordance with the 2 sample interleave sub-division method
referenced in SMPTE ST 425-5 2160-line Mapping.
– For a 4:2:0 source image, the C′B and C′R samples in sub images 3 and 4 are set to the value 200h for 10-bit systems and 800h for 12-bit
systems.
179
4-way division (square)
4-way interleave (2SI)
180
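A sketch of the two-sample-interleave sub-division, assuming the usual 2SI pattern (pairs of horizontally adjacent samples alternate between sub-images across a line, and lines alternate between the sub-image pairs); the normative definition is in SMPTE ST 425-5:

```python
def two_sample_interleave(image):
    """Divide a 2160-line image into four 1080-line sub-images by 2SI.

    Assumed pattern: sub 1 = even lines / even sample pairs,
    sub 2 = even lines / odd pairs, sub 3 = odd lines / even pairs,
    sub 4 = odd lines / odd pairs."""
    subs = {1: [], 2: [], 3: [], 4: []}
    for y, line in enumerate(image):
        pairs = [line[x:x + 2] for x in range(0, len(line), 2)]
        even = [s for p in pairs[0::2] for s in p]
        odd = [s for p in pairs[1::2] for s in p]
        top = (y % 2 == 0)
        subs[1 if top else 3].append(even)
        subs[2 if top else 4].append(odd)
    return subs
```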
12G-SDI 10-bit Multiplex Type 1
181
12G-SDI 10-bit Multiplex Type 1
182
– Each sub image is mapped into two 10-bit data streams.
– Sub image 1 is mapped into data streams one and two.
– Sub image 2 is mapped into data streams three and four.
– Sub image 3 is mapped into data streams five and six.
– Sub image 4 is mapped into data streams seven and eight.
– Each data stream includes sync and timing (TRS) words, Cyclic redundancy code (CRC) words, line
numbers (LN), HANC and VANC data and time code (TC).
– The eight 10-bit data streams are combined onto an 80-bit virtual interface:
12G-SDI 10-bit Multiplex Type 1
183
2160-line 80-bit Virtual Interface Multiplex Structure
Mapping Structure 1:
Sub image 1 is mapped into data streams one and two:
data stream one: Y′0, Y′1, Y′2, Y′3...
data stream two: C′B0, C′R0, C′B1, C′R1...
Sub image 2 is mapped into data streams three and four:
data stream three: Y′0, Y′1, Y′2, Y′3...
data stream four: C′B0, C′R0, C′B1, C′R1...
Sub image 3 is mapped into data streams five and six:
data stream five: Y′0, Y′1, Y′2, Y′3...
data stream six: C′B0, C′R0, C′B1, C′R1...
Sub image 4 is mapped into data streams seven and eight:
data stream seven: Y′0, Y′1, Y′2, Y′3...
data stream eight: C′B0, C′R0, C′B1, C′R1...
For 4:2:0 source images, the 10-bit C′B and C′R samples in sub images 3 and 4 are set to the value 200h.
Multiplex Structure:
{4} C′B0, {2} C′B0, {3} C′B0, {1} C′B0, {4} Y′0, {2} Y′0, {3} Y′0, {1} Y′0, {4} C′R0, {2} C′R0, {3} C′R0, {1} C′R0, {4} Y′1, {2} Y′1, {3} Y′1, {1} Y′1,
{4} C′B1, {2} C′B1, {3} C′B1, {1} C′B1, {4} Y′2, {2} Y′2, {3} Y′2, {1} Y′2, {4} C′R1, {2} C′R1, {3} C′R1, {1} C′R1, {4} Y′3, {2} Y′3, {3} Y′3, {1} Y′3….
184
Mapping Structure 2:
Sub image 1 is mapped into data streams one and two:
data stream one: G′0, R′0, G′1, R′1...
data stream two: A0, B′0, A1, B′1...
Sub image 2 is mapped into data streams three and four:
data stream three: G′0, R′0, G′1, R′1...
data stream four: A0, B′0, A1, B′1...
Sub image 3 is mapped into data streams five and six:
data stream five: G′0, R′0, G′1, R′1...
data stream six: A0, B′0, A1, B′1...
Sub image 4 is mapped into data streams seven and eight:
data stream seven: G′0, R′0, G′1, R′1...
data stream eight: A0, B′0, A1, B′1...
Multiplex Structure:
{4} A0, {2} A0, {3} A0, {1} A0, {4} G′0, {2} G′0, {3} G′0, {1} G′0, {4} B′0, {2} B′0, {3} B′0, {1} B′0, {4} R′0, {2} R′0, {3} R′0, {1} R′0, {4} A1, {2} A1, {3} A1, {1}
A1, {4} G′1, {2} G′1, {3} G′1, {1} G′1, {4} B′1, {2} B′1, {3} B′1, {1} B′1, {4} R′1, {2} R′1, {3} R′1, {1} R′1….
2160-line 80-bit Virtual Interface Multiplex Structure
185
12G-SDI 10-bit Multiplex Type 2
186
12G-SDI 10-bit Multiplex Type 2
187
Mapping Structure 3:
Bit b9 in every word is the complement of b8. The lists and tables below describe Bits b8 – b0
Sub image 1 is mapped into data streams one and two:
data stream one: R′G′B′ 0 [11:9], R′G′B′ 0 [5:3], R′G′B′ 1 [11:9], R′G′B′ 1 [5:3]...
data stream two: R′G′B′ 0 [8:6], R′G′B′ 0 [2:0], R′G′B′ 1 [8:6], R′G′B′ 1 [2:0]...
Sub image 2 is mapped into data streams three and four:
data stream three: R′G′B′ 0 [11:9], R′G′B′ 0 [5:3], R′G′B′ 1 [11:9], R′G′B′ 1 [5:3]...
data stream four: R′G′B′ 0 [8:6], R′G′B′ 0 [2:0], R′G′B′ 1 [8:6], R′G′B′ 1 [2:0]...
Sub image 3 is mapped into data streams five and six:
data stream five: R′G′B′ 0 [11:9], R′G′B′ 0 [5:3], R′G′B′ 1 [11:9], R′G′B′ 1 [5:3]...
data stream six: R′G′B′ 0 [8:6], R′G′B′ 0 [2:0], R′G′B′ 1 [8:6], R′G′B′ 1 [2:0]...
Sub image 4 is mapped into data streams seven and eight:
data stream seven: R′G′B′ 0 [11:9], R′G′B′ 0 [5:3], R′G′B′ 1 [11:9], R′G′B′ 1 [5:3]...
data stream eight: R′G′B′ 0 [8:6], R′G′B′ 0 [2:0], R′G′B′ 1 [8:6], R′G′B′ 1 [2:0]...
Multiplex Structure:
{4} R′G′B′0 [8:6], {2} R′G′B′0 [8:6], {3} R′G′B′0 [8:6], {1} R′G′B′0 [8:6], {4} R′G′B′0 [11:9], {2} R′G′B′0 [11:9], {3} R′G′B′0 [11:9], {1} R′G′B′0 [11:9], {4} R′G′B′0 [2:0], {2}
R′G′B′0 [2:0],
{3} R′G′B′0 [2:0], {1} R′G′B′0 [2:0], {4} R′G′B′0 [5:3], {2} R′G′B′0 [5:3], {3} R′G′B′0 [5:3], {1} R′G′B′0 [5:3], {4} R′G′B′1 [8:6], {2} R′G′B′1 [8:6], {3} R′G′B′1 [8:6], {1} R′G′B′1
[8:6]…
2160-line 80-bit Virtual Interface Multiplex Structure
188
Mapping Structure 4:
Bit b9 in every word is the complement of b8. The lists and tables below describe Bits b8 – b0
Sub image 1 is mapped into data streams one and two:
data stream one: Bits b8 – b6: A0 [11:9], A0 [5:3], A1 [11:9], A1 [5:3]...
Bits b5 – b0: Y′0 [11:6], Y′0 [5:0], Y′1 [11:6], Y′1 [5:0]...
data stream two: Bits b8 – b6: A0 [8:6], A0 [2:0], A1 [8:6], A1 [2:0]...
Bits b5 – b0: C′B 0 [11:6], C′B 0 [5:0], C′R 0 [11:6], C′R 0 [5:0]...
Sub image 2 is mapped into data streams three and four:
data stream three: Bits b8 – b6: A0 [11:9], A0 [5:3], A1 [11:9], A1 [5:3]...
Bits b5 – b0: Y′0 [11:6], Y′0 [5:0], Y′1 [11:6], Y′1 [5:0]...
data stream four: Bits b8 – b6: A0 [8:6], A0 [2:0], A1 [8:6], A1 [2:0]...
Bits b5 – b0: C′B 0 [11:6], C′B 0 [5:0], C′R 0 [11:6], C′R 0 [5:0]...
Sub image 3 is mapped into data streams five and six:
data stream five: Bits b8 – b6: A0 [11:9], A0 [5:3], A1 [11:9], A1 [5:3]...
Bits b5 – b0: Y′0 [11:6], Y′0 [5:0], Y′1 [11:6], Y′1 [5:0]...
data stream six: Bits b8 – b6: A0 [8:6], A0 [2:0], A1 [8:6], A1 [2:0]...
Bits b5 – b0: C′B 0 [11:6], C′B 0 [5:0], C′R 0 [11:6], C′R 0 [5:0]...
Sub image 4 is mapped into data streams seven and eight:
data stream seven: Bits b8 – b6: A0 [11:9], A0 [5:3], A1 [11:9], A1 [5:3]...
Bits b5 – b0: Y′0 [11:6], Y′0 [5:0], Y′1 [11:6], Y′1 [5:0]...
data stream eight: Bits b8 – b6: A0 [8:6], A0 [2:0], A1 [8:6], A1 [2:0]...
Bits b5 – b0: C′B 0 [11:6], C′B 0 [5:0], C′R 0 [11:6], C′R 0 [5:0]...
For a 4:2:0 source image, the 12-bit C′B and C′R samples in sub images 3 and 4 are set to the value 800h.
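Here each 10-bit word combines a 3-bit slice of the 12-bit alpha sample (bits b8–b6) with a 6-bit slice of the 12-bit Y′ or C′B/C′R sample (bits b5–b0), again with b9 as the complement of b8. A minimal Python sketch of the two consecutive words of data stream one that carry one alpha/luma pair:

def pack_alpha_luma(a12, y12):
    # Word 1: A[11:9] in b8-b6, Y'[11:6] in b5-b0.
    # Word 2: A[5:3]  in b8-b6, Y'[5:0]  in b5-b0.
    # For a 4:2:0 source, the 12-bit C'B/C'R samples of sub images 3 and 4
    # would be forced to 800h before packing, per the note above.
    def word(a3, y6):
        w = (a3 << 6) | y6
        b9 = 0 if (w >> 8) & 1 else 1       # b9 = NOT b8
        return (b9 << 9) | w
    return [word((a12 >> 9) & 0x7, (y12 >> 6) & 0x3F),
            word((a12 >> 3) & 0x7, y12 & 0x3F)]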
2160-line 80-bit Virtual Interface Multiplex Structure
189
Multiplex Structure for Mapping Structure 4:
2160-line 80-bit Virtual Interface Multiplex Structure
190
Sync-Bit Insertion
– Repeating patterns of 3FFh or 000h in the 12G-SDI 10-bit parallel multiplex can result in a long run of zeros feeding the scrambling polynomial.
– To prevent long runs of zeros and ones, the 10-bit parallel multiplex data stream should be modified so that the two least significant bits of repeated 3FFh or 000h code words are replaced by the sync-bit values 10b for 000h words and 01b for 3FFh words.
– To ensure that synchronization and word alignment can be reliably achieved in the receiver, one complete preamble sequence – 3FFh, 000h, 000h – should be retained without modification, as shown in Fig. 3-15.
– This sync-bit insertion process should be reversed in the receiver, restoring the original 3FFh and 000h data patterns.
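A simplified Python sketch of this process, assuming the input is the sequence of 10-bit multiplex words and that the first complete 3FFh, 000h, 000h preamble found in each run of TRS words is the one retained unmodified (the normative retention rule is the one shown in Fig. 3-15):

SYNC_3FF = 0x3FD    # 3FFh with its two LSBs replaced by 01b
SYNC_000 = 0x002    # 000h with its two LSBs replaced by 10b

def insert_sync_bits(words):
    out = list(words)
    i = 0
    while i < len(out):
        if out[i] not in (0x3FF, 0x000):
            i += 1
            continue
        j = i                                   # find the run of 3FFh/000h words
        while j < len(out) and out[j] in (0x3FF, 0x000):
            j += 1
        keep = set()                            # retain one 3FF,000,000 preamble per run
        for k in range(i, j - 2):
            if out[k] == 0x3FF and out[k + 1] == 0x000 and out[k + 2] == 0x000:
                keep = {k, k + 1, k + 2}
                break
        for k in range(i, j):
            if k not in keep:
                out[k] = SYNC_3FF if out[k] == 0x3FF else SYNC_000
        i = j
    return out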
191
Sync-Bit Insertion
192
Audio Data
– Audio data shall be mapped into the HANC space of data streams one through eight and shall be in conformance
with SMPTE ST 299-1 and SMPTE ST 299-2.
– Audio data packets shall be mapped into the even numbered data streams.
– Audio control packets shall be mapped into the odd numbered data streams.
– Audio control and data packets shall be mapped into the data stream pair one/two first and any remaining data
shall then be mapped onto data stream pair three/four, then into data stream pair five/six and finally into data
stream pair seven/eight.
– The audio clock phase data as defined in the section “CLK (audio clock phase data)” of SMPTE ST 299-1 shall be
calculated at the clock frequency of 148.5 (/1.001) MHz for 4:2:2 10-bit and 4:2:0 10-bit formats at 48/1.001, 48, 50,
60/1.001 and 60 Hz, which use Mapping Structure 1.
– The audio clock phase data as defined in the section “CLK (audio clock phase data)” of SMPTE ST 299-1 shall be
calculated at the clock frequency of 74.25 (/1.001) MHz for formats at 24/1.001, 24, 25, 30/1.001 or 30 Hz, which use
Mapping Structure 2, 3 or 4.
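In other words, the CLK reference frequency follows the data-stream clock of the mapping structure in use; a tiny lookup sketch (nominal values, ignoring the /1.001 variants):

def audio_clk_mhz(mapping_structure):
    # Mapping Structure 1 formats use a 148.5 MHz clock; structures 2, 3 and 4 use 74.25 MHz.
    return 148.5 if mapping_structure == 1 else 74.25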
193
Number of Audio Channels
The number of audio channels is as defined in SMPTE ST 425-5 “Number of Audio Channels”
Audio Copy
Audio may be copied within the 12G interface in order to simplify division of a single 12G signal into dual-link 6G or quad-link 3G signals, with the audio copied between links.
– As an alternative to the mapping of the maximum number of unique audio channels, blocks of
audio channels may be copied within the interface.
– This may be as a result of the single-link 12G-SDI signal being created by combining quad-link
3G-SDI or dual-link 6G SDI signals.
– It may alternatively be done in the original single-link 12G-SDI signal in order to permit simple
splitting of the single-link 12G-SDI signal into a quad-link 3G-SDI or a dual-link 6G-SDI signal.
– Note: Audio copy reduces the number of channels that can be transported by the interface.
Audio Data
194
Inherited Audio Copy as a result of combining multi-link 3G-SDI or 6G-SDI signals
– In the case where the audio data has been embedded according to SMPTE ST 425-5, for example
when the audio was embedded in a quad-link 3G interface that has been combined into a single-link
12G interface, the audio in data stream pair three/four, five/six and seven/eight may be a copy of the
audio in data stream pair one/two.
– Similarly where the audio has been embedded according to SMPTE ST 2081-11 in a dual-link 6G
interface that has been combined into a single 12G interface, the audio in data stream pair five/six
and seven/eight may be a copy of the audio in data stream pair one/two and three/four.
Audio Data
195
Originated Audio Copy in 12G-SDI signal
If audio is copied:
– Data stream pair one/two shall always carry original audio.
– Data stream pair three/four may carry additional channels of original audio.
– Data stream pairs five/six and seven/eight may carry additional channels of original audio, as long as
data stream pair three/four is carrying original audio.
– Data stream pairs five/six and seven/eight may carry copied audio from data stream pairs one/two
and three/four.
– Data stream pair three/four may carry copied audio from data stream pair one/two. In this case data
stream pairs five/six and seven/eight shall also carry the same copied audio.
– The audio copy status of each data stream shall be signaled in the PID.
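A small Python validator for these rules, assuming the copy status of each data-stream pair is given as 'original' or 'copy':

def audio_copy_config_valid(pair12, pair34, pair56, pair78):
    # pairXY is 'original' or 'copy' for data stream pairs one/two, three/four,
    # five/six and seven/eight respectively.
    statuses = (pair12, pair34, pair56, pair78)
    if any(s not in ('original', 'copy') for s in statuses):
        return False
    if pair12 != 'original':                 # pair one/two always carries original audio
        return False
    if pair34 == 'copy':                     # copied 3/4 forces 5/6 and 7/8 to be copies too
        return pair56 == 'copy' and pair78 == 'copy'
    return True                              # 3/4 original: 5/6 and 7/8 may be original or copied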
196
A quad-link 3G interface combined into a single 12G-SDI interface, and the
possible status of audio copy on each data stream.
197
A dual-link 6G interface combined into a single 12G-SDI interface, and the
possible status of audio copy on each data stream.
198
12G-SDI signal Eye Diagram
HD-SDI (1.5G) Eye Diagram
Typical 12G-SDI Eye Diagram
199
Pathological signals for UHD (6G and 12G-SDI)
SMPTE RP 198-1998
− Pathological signals are recommended by SMPTE for use with SD, HD & 3G standards only
− Not yet approved by SMPTE for 6G or 12G-SDI
− … may take a few more years for approval!
− New pathological tools for Qx
− With pathological patterns, the pathological signal only occurs statistically every 512 lines!
− A new tool will provide feedback of pathological conditions on the interface with GPI
trigger output
− Checkfield, PLL and EQ testing
200
Pathological signals for UHD – PHABRIX solution
– Pathological Checkfield Overlay developed by PHABRIX with major SDI chip manufacturer
co-operation
– Used to verify how sensitive the SDI link is to pathological conditions on the interface
201
Outline
202
SMPTE ST 2082-11
4320-line and 2160-line Source Image and Ancillary Data Mapping for Dual-link 12G-SDI
MODE 1: 4320-line Y′C′BC′R 4:2:2 and 4:2:0 10-bit image formats and ancillary data on a
Dual-link 12 Gb/s [nominal] SDI bit-serial interface
MODE 2: 2160-line R′G′B′, Y′C′BC′R 4:4:4(:4) 10-bit and 4:4:4 12-bit image formats and
ancillary data on a Dual-link 12 Gb/s [nominal] SDI bit-serial interface
MODE 3: 2160-line Y′C′BC′R 4:2:2 and 4:2:0 10-bit Additional Frame Rate Source image
formats and ancillary data on a Dual-link 12 Gb/s [nominal] SDI bit-serial interface
203
Dual-link 12G-SDI 2 x 10-bit Multiplex – Type 1
204
Dual-link 12G-SDI 2 x 10-bit Multiplex – Type 1
205
MODE 1: 4320-line Y′C′BC′R 4:2:2 and 4:2:0 10-bit image formats and ancillary data on
a Dual-link 12 Gb/s [nominal] SDI bit-serial interface
(UHDTV2 Production)
206
MODE 2: 2160-line R′G′B′, Y′C′BC′R 4:4:4(:4) 10-bit and 4:4:4 12-bit image formats and
ancillary data on a Dual-link 12 Gb/s [nominal] SDI bit-serial interface
(UHDTV1 and Digital Cinematography Production)
Notes:
*1 In this image format, R′G′B′ indicates either R′G′B′ or R′FSG′FSB′FS.
*2 This is the maximum pixel array; the active image may not fill the maximum array.
207
MODE 3: 2160-line Y′C′BC′R 4:2:2 and 4:2:0 10-bit Additional Frame Rate Source image
formats and ancillary data on a Dual-link 12 Gb/s [nominal] SDI bit-serial interface
(UHDTV1 and Digital Cinematography Production AFR)
208
Carriage of 4320-line images in a Dual-link 12G interface
Generalized process as used by Mode 1
209
Carriage of 2160-line images in a Dual-link 12G interface
Generalized process as used by Mode 2 and Mode 3
210
For a 4:2:0 source image, the C′B and C′R samples in
intermediate sub images 3 and 4 shall be set to the value 200h.
Mode 1: Carriage of 4320-line Y′C′BC′R 4:2:2 and 4:2:0 10-bit
Source Image Formats and Ancillary Data
211
212
Mode 2: Carriage of 2160-line image formats in a dual-link 12G-SDI interface
– The 2160-line source images shall be divided into four 1080-line 4:2:2 or 4:2:0 sub images in accordance with the 2-sample interleave sub-division method referenced in SMPTE ST 425-5 “2160-line image division into four sub images”.
– For a 4:2:0 source image, the C′B and C′R samples in sub images 3 and 4 shall be set to the value 200h.
213
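A minimal Python sketch of the two-sample-interleave division, assuming the commonly described 2SI arrangement in which alternate two-sample pairs go to the left/right sub image and alternate lines to the upper/lower pair of sub images; the exact assignment of pairs to sub-image numbers should be checked against SMPTE ST 425-5:

def two_sample_interleave(image):
    # image: list of rows, each row a list of co-sited samples.
    # Assumed pattern (illustrative):
    #   sub 1: even lines, sample pairs 0-1, 4-5, ...   sub 2: even lines, pairs 2-3, 6-7, ...
    #   sub 3: odd lines,  sample pairs 0-1, 4-5, ...   sub 4: odd lines,  pairs 2-3, 6-7, ...
    subs = {1: [], 2: [], 3: [], 4: []}
    for y, row in enumerate(image):
        left, right = [], []
        for x in range(0, len(row), 4):
            left.extend(row[x:x + 2])
            right.extend(row[x + 2:x + 4])
        if y % 2 == 0:
            subs[1].append(left)
            subs[2].append(right)
        else:
            subs[3].append(left)
            subs[4].append(right)
    return subs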
Mode 3: The carriage of 2160-line AFR source image formats in a Dual-link 12G-SDI interface
214
Mapping of 1080-line Sub Images in Mode 3
Structure of each data stream for 120 Hz, 120/1.001 Hz, 100 Hz, 96 Hz or 96/1.001 Hz frame rates
− Each 1080-line 4:2:2 sub image shall be mapped to a 40-bit virtual interface consisting of four data streams. (The
structure of each data stream shall be as illustrated in Figure).
215
Outline
216
SMPTE ST 2082-12 Document Roadmap
4320-line and 2160-line Source Image and Ancillary Data Mapping for Quad-link 12G-SDI
MODE 1: 4320-line Source image formats and ancillary data into a Quad-link 12 Gb/s
[nominal] SDI bit serial interface
MODE 2: 2160-line Y′C′BC′R or R′G′B′ 4:4:4:4 10-bit or 4:4:4 10-bit or 12-bit Additional Frame
Rate (AFR) Source image formats and ancillary data into a Quad-link 12 Gb/s [nominal] SDI
bit-serial interface
217
Quad-link 12G-SDI 4 x 10-bit Type 1 Multiplex
218
Quad-link 12G-SDI 4 x 10-bit Type 1 Multiplex
219
MODE 1: 4320-line Source image formats and ancillary data into a Quad-link
12 Gb/s [nominal] SDI bit serial interface
(UHDTV2 Production)
220
MODE 2: 2160-line Y′C′BC′R or R′G′B′ 4:4:4:4 10-bit or 4:4:4 10-bit or 12-bit Additional Frame Rate (AFR)
Source image formats and ancillary data into a Quad-link 12 Gb/s [nominal] SDI bit-serial interface
(UHDTV1 and Digital Cinematography Production AFR)
Notes:
*1 In this image format, R′G′B′ indicates either R′G′B′ or R′FSG′FSB′FS.
*2 This is the maximum pixel array; the active image may not fill the maximum array.
221
Carriage of 4320-line Images on a Quad-link 12G interface
Generalized process as used by Mode 1
222
Carriage of 2160-line Images on a Quad-link 12G interface
Generalized process as used by Mode 2
223
For 4:2:0 source images, the 10-bit C′B and C′R samples in
intermediate sub images 3 and 4 are set to the value 200h
and the 12-bit C′B and C′R samples in intermediate sub
images 3 and 4 are set to the value 800h.
Mode 1: Carriage of 4320-line Source Image Formats and Ancillary Data
224
225
MODE 2: The process for the carriage of 2160-line AFR source image formats
in a Quad-link 12G-SDI interface
(The division of the source image format into four sub images)
226
MODE 2: The process for the carriage of 2160-line AFR source image
formats in a Quad-link 12G-SDI interface
(The mapping of sub image one onto 12G-SDI Link 1)
227
Mapping of 1080-line Sub Images in Mode 2
− Each 1080-line sub image shall be mapped to an 80-bit virtual interface consisting of eight data streams. (The
structure of each data stream shall be as illustrated in Figure)
Structureofeachdatastreamfor120Hz,120/1.001Hz,100Hz,
96Hzor96/1.001Hzframerates
228
Outline
229
ST 2083-xx 24G SDI / ITU-R BT.2077-1 Part 3
230
ST 2083-xx 24G SDI / ITU-R BT.2077-1 Part 3
231
24G-SDI 10-bit Multiplex – Type 1
232
24G-SDI 10-bit Multiplex – Type 1
233
24G-SDI 10-bit Multiplex – Type 1
234
24G-SDI 10-bit Multiplex – Type 1
235
Outline
236
Video Payload Identifier (VPID)
– Video payload identifier monitoring is more important than ever; with such a wide variety of formats, it is
essential to use the SMPTE ST 352 Video Payload Identifier (VPID).
– The SMPTE ST 352 Video Payload Identifier (VPID) is carried within the Ancillary data space to assist a
device in quickly decoding the video signal.
– The payload identifier consists of 4 bytes where each byte has a separate significance.
– The first byte of the payload identifier has the highest significance and subsequent bytes define lower
order video and ancillary payload information.
– The horizontal placement of the packet should be immediately following the last CRC code word
(CR1) of the line(s) specified in SMPTE ST 352 for 1125-line systems.
Note: The line numbers defined in SMPTE ST 352 for the placement of the payload identifier packet in
1125-line systems avoid those lines used by SMPTE ST 299-1 and SMPTE ST 299-2 for the carriage of digital
audio control packets and extended audio control packets, respectively.
237
Video Payload Identifier (VPID)
– 525- and 625-line digital interfaces, interlace: once per field
• 525I (field 1): Line 13, 525I (field 2): Line 276
• 625I (field 1): Line 9, 625I (field 2): Line 322
– 525- and 625-line digital interfaces, progressive: once per frame
• 525P: Line 13, 625P: Line 9
– 750-line digital interfaces, progressive: once per frame
• 750P: Line 10
– 1125-line digital interfaces, interlace and segmented-frame: once per field (segment).
• 1125I (field 1): Line 10, 1125I (field 2): Line 572
– 1125-line digital interfaces, progressive: once per frame
• 1125P: Line 10
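These placements can be summarized as a small lookup table, keyed by line structure and field:

# VPID packet insertion lines, per the list above (field is None for progressive formats).
VPID_LINES = {
    ('525I', 1): 13,  ('525I', 2): 276,
    ('625I', 1): 9,   ('625I', 2): 322,
    ('525P', None): 13,
    ('625P', None): 9,
    ('750P', None): 10,
    ('1125I', 1): 10, ('1125I', 2): 572,
    ('1125P', None): 10,
}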
238
– The VPID conforms to the SMPTE 291 Ancillary Data Packet and Space Formatting standard and
contains the Ancillary Data Flag (ADF), Data Identifier (DID), Secondary Data Identifier (SDID), Data
Count, User Data Words (UDW1-4) and Checksum.
– It is sent as 4 User Data Words (UDW), UDW1–UDW4, on the specified line in each frame or field.
Video Payload Identifier (VPID)
Generic ANC packet layout: ADF (000h, 3FFh, 3FFh), DID, SDID (or DBN), DC, User Data Words (max 255 words), CS.
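A minimal Python sketch of assembling such a packet as 10-bit ANC words. The parity and checksum formation follow the usual SMPTE ST 291 conventions (b8 = even parity over b7–b0, b9 = NOT b8, checksum = 9-bit sum from DID through the last UDW); the DID/SDID default values are illustrative assumptions rather than quoted from this deck:

def anc_word(byte):
    # 10-bit ANC data word: b8 = even parity of b7-b0, b9 = complement of b8.
    parity = bin(byte & 0xFF).count('1') & 1
    return ((parity ^ 1) << 9) | (parity << 8) | (byte & 0xFF)

def build_vpid_packet(udw_bytes, did=0x41, sdid=0x01):
    # did/sdid defaults are assumed values for ST 352 VPID packets.
    assert len(udw_bytes) == 4               # VPID carries exactly four User Data Words
    body = [anc_word(did), anc_word(sdid), anc_word(len(udw_bytes))]
    body += [anc_word(b) for b in udw_bytes]
    checksum = sum(w & 0x1FF for w in body) & 0x1FF
    checksum |= ((~checksum >> 8) & 1) << 9  # b9 = NOT b8
    return [0x000, 0x3FF, 0x3FF] + body + [checksum]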
239
– The video payload ID tells you a lot about the signal you are receiving.
– It is sent as 4 User Data Words (UDW), UDW1–UDW4.
– You need the ST 352 code tables (the “magic decoder ring”) to decode it correctly.
Video Payload Identifier (VPID)
In quad-3G: 16 UDW
240
This example carries a different VPID on each link: UHD 59.94p Y′CbCr over quad-link 3G-SDI Level A
Video Payload Identifier (VPID)
241
Video Payload Identifier (VPID)
This example carries the same VPID on each link: UHD 29.97p Y′CbCr over quad-link HD-SDI
242
Old Version
243
Old Version
244
Old Version
245
SDI Metadata, HDR, WCG
– Some metadata describing HDR and WCG has recently been added to the SDI feed:
– Is it an ST 2084 PQ curve or HLG?
– What is the diffuse white point?
– What is the grading point: 1K nits, 2K nits or 540 nits?
– Is it full range or narrow range (SMPTE levels)?
– The metadata for HDMI and the monitor is added when the content is encoded, either typed in manually or read from a metadata sidecar file.
246
Payload identifier definitions for 1080-line payloads on a 1.5 Gbit/s
(nominal) serial digital interface
247
Payload identifier definitions for 1080-line video payloads on a quad-link
1.5 Gbit/s (nominal) serial digital interface
248
Payload identifier definitions for 1920 ×1080 video payloads on
dual link high definition digital interfaces
249
Additionally supported picture payload identifier definitions for 1920 ×
1080 video payloads on dual link high definition digital interfaces
250
Payload identifier definitions for 1080-line payloads on a 3 Gbit/s
(nominal) serial digital interface
251
Payload Identifier Definitions for 2160-line Video Payload for
Mapping on a 12 Gb/s (nominal) Serial Interface
252
Payload Identifier Definitions for 2160-line Video Payload for
Mapping on a 12 Gb/s (nominal) Serial Interface
253
Payload Identifier Definitions for 2160-line Video Payload for
Mapping on a 12 Gb/s (nominal) Serial Interface
254
Ancillary Data Capacity of the 12G-SDI Interface
– The ancillary data space available in serial digital interface transports is approximately equivalent to the
horizontal interval space and vertical interval space for the image format being transported.
– In the case of images transported on the interface specified in this standard, it is dependent on the
horizontal interval space and vertical interval space for each of the data streams being carried on the
interface, multiplied by the number of data streams.
– SMPTE RP 291-2 provides information on the size of the ancillary data space in a SMPTE ST 425-1 and
SMPTE ST 292-1 interface.
– For Mode 1 2160-line source image formats, the available HANC and VANC data space on the
interface is 4 times the HANC and VANC data space available on a SMPTE ST 425-1 interface carrying
the corresponding sub-image.
255
Questions??
Discussion!!
Suggestions!!
Criticism!!
256
More Related Content

What's hot

Broadcast Camera Technology, Part 2
Broadcast Camera Technology, Part 2Broadcast Camera Technology, Part 2
Broadcast Camera Technology, Part 2Dr. Mohieddin Moradi
 
An Introduction to Video Principles-Part 2
An Introduction to Video Principles-Part 2An Introduction to Video Principles-Part 2
An Introduction to Video Principles-Part 2Dr. Mohieddin Moradi
 
Video Compression Part 1 Video Principles
Video Compression Part 1 Video Principles Video Compression Part 1 Video Principles
Video Compression Part 1 Video Principles Dr. Mohieddin Moradi
 
Designing an 4K/UHD1 HDR OB Truck as 12G-SDI or IP-based
Designing an 4K/UHD1 HDR OB Truck as 12G-SDI or IP-basedDesigning an 4K/UHD1 HDR OB Truck as 12G-SDI or IP-based
Designing an 4K/UHD1 HDR OB Truck as 12G-SDI or IP-basedDr. Mohieddin Moradi
 
Video Compression, Part 3-Section 2, Some Standard Video Codecs
Video Compression, Part 3-Section 2, Some Standard Video CodecsVideo Compression, Part 3-Section 2, Some Standard Video Codecs
Video Compression, Part 3-Section 2, Some Standard Video CodecsDr. Mohieddin Moradi
 
VIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN, OPPORTUNITIES & CHALLENGES
VIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN,   OPPORTUNITIES & CHALLENGESVIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN,   OPPORTUNITIES & CHALLENGES
VIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN, OPPORTUNITIES & CHALLENGESDr. Mohieddin Moradi
 
Video Compression, Part 3-Section 1, Some Standard Video Codecs
Video Compression, Part 3-Section 1, Some Standard Video CodecsVideo Compression, Part 3-Section 1, Some Standard Video Codecs
Video Compression, Part 3-Section 1, Some Standard Video CodecsDr. Mohieddin Moradi
 
Video Compression, Part 2-Section 1, Video Coding Concepts
Video Compression, Part 2-Section 1, Video Coding Concepts Video Compression, Part 2-Section 1, Video Coding Concepts
Video Compression, Part 2-Section 1, Video Coding Concepts Dr. Mohieddin Moradi
 
Broadcast Camera Technology, Part 3
Broadcast Camera Technology, Part 3Broadcast Camera Technology, Part 3
Broadcast Camera Technology, Part 3Dr. Mohieddin Moradi
 
Mpeg 101 demyst analysis &amp; picture symptoms 20110808 opt
Mpeg 101 demyst analysis &amp; picture symptoms 20110808 optMpeg 101 demyst analysis &amp; picture symptoms 20110808 opt
Mpeg 101 demyst analysis &amp; picture symptoms 20110808 opthexiay
 
HDR and WCG Video Broadcasting Considerations
HDR and WCG Video Broadcasting ConsiderationsHDR and WCG Video Broadcasting Considerations
HDR and WCG Video Broadcasting ConsiderationsDr. Mohieddin Moradi
 

What's hot (20)

Broadcast Camera Technology, Part 2
Broadcast Camera Technology, Part 2Broadcast Camera Technology, Part 2
Broadcast Camera Technology, Part 2
 
An Introduction to Video Principles-Part 2
An Introduction to Video Principles-Part 2An Introduction to Video Principles-Part 2
An Introduction to Video Principles-Part 2
 
Video Compression Part 1 Video Principles
Video Compression Part 1 Video Principles Video Compression Part 1 Video Principles
Video Compression Part 1 Video Principles
 
Designing an 4K/UHD1 HDR OB Truck as 12G-SDI or IP-based
Designing an 4K/UHD1 HDR OB Truck as 12G-SDI or IP-basedDesigning an 4K/UHD1 HDR OB Truck as 12G-SDI or IP-based
Designing an 4K/UHD1 HDR OB Truck as 12G-SDI or IP-based
 
SDI to IP 2110 Transition Part 1
SDI to IP 2110 Transition Part 1SDI to IP 2110 Transition Part 1
SDI to IP 2110 Transition Part 1
 
Video Compression, Part 3-Section 2, Some Standard Video Codecs
Video Compression, Part 3-Section 2, Some Standard Video CodecsVideo Compression, Part 3-Section 2, Some Standard Video Codecs
Video Compression, Part 3-Section 2, Some Standard Video Codecs
 
VIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN, OPPORTUNITIES & CHALLENGES
VIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN,   OPPORTUNITIES & CHALLENGESVIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN,   OPPORTUNITIES & CHALLENGES
VIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN, OPPORTUNITIES & CHALLENGES
 
Video Compression, Part 3-Section 1, Some Standard Video Codecs
Video Compression, Part 3-Section 1, Some Standard Video CodecsVideo Compression, Part 3-Section 1, Some Standard Video Codecs
Video Compression, Part 3-Section 1, Some Standard Video Codecs
 
HDR and WCG Principles-Part 1
HDR and WCG Principles-Part 1HDR and WCG Principles-Part 1
HDR and WCG Principles-Part 1
 
HDR and WCG Principles-Part 6
HDR and WCG Principles-Part 6HDR and WCG Principles-Part 6
HDR and WCG Principles-Part 6
 
Broadcast Lens Technology Part 3
Broadcast Lens Technology Part 3Broadcast Lens Technology Part 3
Broadcast Lens Technology Part 3
 
Video Compression, Part 2-Section 1, Video Coding Concepts
Video Compression, Part 2-Section 1, Video Coding Concepts Video Compression, Part 2-Section 1, Video Coding Concepts
Video Compression, Part 2-Section 1, Video Coding Concepts
 
SDI to IP 2110 Transition Part 2
SDI to IP 2110 Transition Part 2SDI to IP 2110 Transition Part 2
SDI to IP 2110 Transition Part 2
 
Broadcast Camera Technology, Part 3
Broadcast Camera Technology, Part 3Broadcast Camera Technology, Part 3
Broadcast Camera Technology, Part 3
 
Broadcast Lens Technology Part 1
Broadcast Lens Technology Part 1Broadcast Lens Technology Part 1
Broadcast Lens Technology Part 1
 
Mpeg 101 demyst analysis &amp; picture symptoms 20110808 opt
Mpeg 101 demyst analysis &amp; picture symptoms 20110808 optMpeg 101 demyst analysis &amp; picture symptoms 20110808 opt
Mpeg 101 demyst analysis &amp; picture symptoms 20110808 opt
 
Broadcast Lens Technology Part 2
Broadcast Lens Technology Part 2Broadcast Lens Technology Part 2
Broadcast Lens Technology Part 2
 
HDR and WCG Principles-Part 2
HDR and WCG Principles-Part 2HDR and WCG Principles-Part 2
HDR and WCG Principles-Part 2
 
HDR and WCG Video Broadcasting Considerations
HDR and WCG Video Broadcasting ConsiderationsHDR and WCG Video Broadcasting Considerations
HDR and WCG Video Broadcasting Considerations
 
HDR and WCG Principles-Part 5
HDR and WCG Principles-Part 5HDR and WCG Principles-Part 5
HDR and WCG Principles-Part 5
 

Similar to Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 2

DTV Technical Overview
DTV Technical OverviewDTV Technical Overview
DTV Technical OverviewAmos Tsai
 
Ensoft dvb 1
Ensoft dvb 1Ensoft dvb 1
Ensoft dvb 1sarge
 
Next generation image compression standards: JPEG XR and AIC
Next generation image compression standards: JPEG XR and AICNext generation image compression standards: JPEG XR and AIC
Next generation image compression standards: JPEG XR and AICTouradj Ebrahimi
 
Digital Video Course Section 1
Digital Video Course  Section 1Digital Video Course  Section 1
Digital Video Course Section 1ericlsnider
 
Unit ii mm_chap5_fundamentals concepts in video
Unit ii mm_chap5_fundamentals concepts in videoUnit ii mm_chap5_fundamentals concepts in video
Unit ii mm_chap5_fundamentals concepts in videoEellekwameowusu
 
Accelerating MIPI Interface Development and Validation - Introspect Technology
Accelerating MIPI Interface Development and Validation - Introspect TechnologyAccelerating MIPI Interface Development and Validation - Introspect Technology
Accelerating MIPI Interface Development and Validation - Introspect TechnologyJean-Marc Robillard
 
Chapter 3 - Fundamental Concepts in Video and Digital Audio.ppt
Chapter 3 - Fundamental Concepts in Video and Digital Audio.pptChapter 3 - Fundamental Concepts in Video and Digital Audio.ppt
Chapter 3 - Fundamental Concepts in Video and Digital Audio.pptBinyamBekele3
 
Video Compression Technology
Video Compression TechnologyVideo Compression Technology
Video Compression TechnologyTong Teerayuth
 
Data Acquisition Equipment for Automation and Process Control
Data Acquisition Equipment for Automation and Process ControlData Acquisition Equipment for Automation and Process Control
Data Acquisition Equipment for Automation and Process ControlMiller Energy, Inc.
 
Performance evaluation of multicast video distribution using lte a in vehicul...
Performance evaluation of multicast video distribution using lte a in vehicul...Performance evaluation of multicast video distribution using lte a in vehicul...
Performance evaluation of multicast video distribution using lte a in vehicul...Communication Systems & Networks
 
Multimedia fundamental concepts in video
Multimedia fundamental concepts in videoMultimedia fundamental concepts in video
Multimedia fundamental concepts in videoMazin Alwaaly
 
Introduction to Video Compression Techniques - Anurag Jain
Introduction to Video Compression Techniques - Anurag JainIntroduction to Video Compression Techniques - Anurag Jain
Introduction to Video Compression Techniques - Anurag JainVideoguy
 

Similar to Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 2 (20)

HDR and WCG Principles-Part 3
HDR and WCG Principles-Part 3HDR and WCG Principles-Part 3
HDR and WCG Principles-Part 3
 
DTV Technical Overview
DTV Technical OverviewDTV Technical Overview
DTV Technical Overview
 
HDTV
HDTVHDTV
HDTV
 
Ensoft dvb 1
Ensoft dvb 1Ensoft dvb 1
Ensoft dvb 1
 
Next generation image compression standards: JPEG XR and AIC
Next generation image compression standards: JPEG XR and AICNext generation image compression standards: JPEG XR and AIC
Next generation image compression standards: JPEG XR and AIC
 
Digital Video Course Section 1
Digital Video Course  Section 1Digital Video Course  Section 1
Digital Video Course Section 1
 
Unit ii mm_chap5_fundamentals concepts in video
Unit ii mm_chap5_fundamentals concepts in videoUnit ii mm_chap5_fundamentals concepts in video
Unit ii mm_chap5_fundamentals concepts in video
 
SDH and TDM telecom
SDH and TDM telecomSDH and TDM telecom
SDH and TDM telecom
 
Accelerating MIPI Interface Development and Validation - Introspect Technology
Accelerating MIPI Interface Development and Validation - Introspect TechnologyAccelerating MIPI Interface Development and Validation - Introspect Technology
Accelerating MIPI Interface Development and Validation - Introspect Technology
 
Chapter 3 - Fundamental Concepts in Video and Digital Audio.ppt
Chapter 3 - Fundamental Concepts in Video and Digital Audio.pptChapter 3 - Fundamental Concepts in Video and Digital Audio.ppt
Chapter 3 - Fundamental Concepts in Video and Digital Audio.ppt
 
Video Compression Technology
Video Compression TechnologyVideo Compression Technology
Video Compression Technology
 
intro_dgital_TV
intro_dgital_TVintro_dgital_TV
intro_dgital_TV
 
intro_dgital_TV
intro_dgital_TVintro_dgital_TV
intro_dgital_TV
 
intro_dgital_TV
intro_dgital_TVintro_dgital_TV
intro_dgital_TV
 
Data Acquisition Equipment for Automation and Process Control
Data Acquisition Equipment for Automation and Process ControlData Acquisition Equipment for Automation and Process Control
Data Acquisition Equipment for Automation and Process Control
 
Yokogawa DX2000 DAQSTATION
Yokogawa DX2000 DAQSTATIONYokogawa DX2000 DAQSTATION
Yokogawa DX2000 DAQSTATION
 
RGB Broadcast Company Profile
RGB Broadcast Company ProfileRGB Broadcast Company Profile
RGB Broadcast Company Profile
 
Performance evaluation of multicast video distribution using lte a in vehicul...
Performance evaluation of multicast video distribution using lte a in vehicul...Performance evaluation of multicast video distribution using lte a in vehicul...
Performance evaluation of multicast video distribution using lte a in vehicul...
 
Multimedia fundamental concepts in video
Multimedia fundamental concepts in videoMultimedia fundamental concepts in video
Multimedia fundamental concepts in video
 
Introduction to Video Compression Techniques - Anurag Jain
Introduction to Video Compression Techniques - Anurag JainIntroduction to Video Compression Techniques - Anurag Jain
Introduction to Video Compression Techniques - Anurag Jain
 

More from Dr. Mohieddin Moradi

An Introduction to HDTV Principles-Part 3
An Introduction to HDTV Principles-Part 3An Introduction to HDTV Principles-Part 3
An Introduction to HDTV Principles-Part 3Dr. Mohieddin Moradi
 
An Introduction to HDTV Principles-Part 2
An Introduction to HDTV Principles-Part 2An Introduction to HDTV Principles-Part 2
An Introduction to HDTV Principles-Part 2Dr. Mohieddin Moradi
 
Broadcast Camera Technology, Part 1
Broadcast Camera Technology, Part 1Broadcast Camera Technology, Part 1
Broadcast Camera Technology, Part 1Dr. Mohieddin Moradi
 
An Introduction to Audio Principles
An Introduction to Audio Principles An Introduction to Audio Principles
An Introduction to Audio Principles Dr. Mohieddin Moradi
 
Video Compression, Part 4 Section 1, Video Quality Assessment
Video Compression, Part 4 Section 1,  Video Quality Assessment Video Compression, Part 4 Section 1,  Video Quality Assessment
Video Compression, Part 4 Section 1, Video Quality Assessment Dr. Mohieddin Moradi
 
Video Compression, Part 4 Section 2, Video Quality Assessment
Video Compression, Part 4 Section 2,  Video Quality Assessment Video Compression, Part 4 Section 2,  Video Quality Assessment
Video Compression, Part 4 Section 2, Video Quality Assessment Dr. Mohieddin Moradi
 
Video Compression, Part 2-Section 2, Video Coding Concepts
Video Compression, Part 2-Section 2, Video Coding Concepts Video Compression, Part 2-Section 2, Video Coding Concepts
Video Compression, Part 2-Section 2, Video Coding Concepts Dr. Mohieddin Moradi
 

More from Dr. Mohieddin Moradi (8)

HDR and WCG Principles-Part 4
HDR and WCG Principles-Part 4HDR and WCG Principles-Part 4
HDR and WCG Principles-Part 4
 
An Introduction to HDTV Principles-Part 3
An Introduction to HDTV Principles-Part 3An Introduction to HDTV Principles-Part 3
An Introduction to HDTV Principles-Part 3
 
An Introduction to HDTV Principles-Part 2
An Introduction to HDTV Principles-Part 2An Introduction to HDTV Principles-Part 2
An Introduction to HDTV Principles-Part 2
 
Broadcast Camera Technology, Part 1
Broadcast Camera Technology, Part 1Broadcast Camera Technology, Part 1
Broadcast Camera Technology, Part 1
 
An Introduction to Audio Principles
An Introduction to Audio Principles An Introduction to Audio Principles
An Introduction to Audio Principles
 
Video Compression, Part 4 Section 1, Video Quality Assessment
Video Compression, Part 4 Section 1,  Video Quality Assessment Video Compression, Part 4 Section 1,  Video Quality Assessment
Video Compression, Part 4 Section 1, Video Quality Assessment
 
Video Compression, Part 4 Section 2, Video Quality Assessment
Video Compression, Part 4 Section 2,  Video Quality Assessment Video Compression, Part 4 Section 2,  Video Quality Assessment
Video Compression, Part 4 Section 2, Video Quality Assessment
 
Video Compression, Part 2-Section 2, Video Coding Concepts
Video Compression, Part 2-Section 2, Video Coding Concepts Video Compression, Part 2-Section 2, Video Coding Concepts
Video Compression, Part 2-Section 2, Video Coding Concepts
 

Recently uploaded

_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting DataJhengPantaleon
 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17Celine George
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Krashi Coaching
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsanshu789521
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformChameera Dedduwage
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxpboyjonauth
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionSafetyChain Software
 
MENTAL STATUS EXAMINATION format.docx
MENTAL     STATUS EXAMINATION format.docxMENTAL     STATUS EXAMINATION format.docx
MENTAL STATUS EXAMINATION format.docxPoojaSen20
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...Marc Dusseiller Dusjagr
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...M56BOOKSTORE PRODUCT/SERVICE
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactdawncurless
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Celine George
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentInMediaRes1
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...EduSkills OECD
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 

Recently uploaded (20)

_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha elections
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy Reform
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 
MENTAL STATUS EXAMINATION format.docx
MENTAL     STATUS EXAMINATION format.docxMENTAL     STATUS EXAMINATION format.docx
MENTAL STATUS EXAMINATION format.docx
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impact
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media Component
 
Staff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSDStaff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSD
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 

Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 2

  • 4. High Definition System SMPTE 296M (The 720 standard) 4
  • 5. High Definition System SMPTE 274M (The 1080 standard) 5
  • 6. SMPTE 274M (The 1080 standard) The High Definition Signal Horizontal Interval 6
  • 7. SMPTE 274M (The 1080 standard) The High Definition Signal Horizontal Interval 7
  • 8. The High Definition Signal Vertical Interval 8
  • 9. The High Definition Signal Vertical Interval 9
  • 11. SMPTE 292M (1.5G SD-SDI) Defines the bit-serial digital coaxial and fiber-optic interface for high-definition component signals operating at 1.485 Gb/s and 1.485/1.001 Gb/s. (8,10-bits) Two important formats for HD: SMPTE 274M  1920x1080 Scanning and Analog and Parallel Digital Interfaces for Multiple Picture Rates.  1920x1080 @ 60p, 59.94p, 50p, 60i, 59.94i, 50i, 30p, 29.97p, 25p, 24p, 23.98p, 30sF, 29.97sF, 25sF, 24sF, 23.98sF Colorimetry ITU-R BT.709 SMPTE 296M  1280x720 Progressive Image Sample Structure -Analog and Digital.  1280x720 @ 60p, 59.94p, 50p, 30p, 29.97p, 25p, 24p 23.98p Colorimetry ITU-R BT.709. HD-SDI Standards 11
  • 12. – This standard has been developed to carry HDTV digital video signals and formatted data within the defined payload areas including ancillary data. – The standard can carry 1280×720, 1920×1080 or 2048×1080 active pixel formats through the 1.5 Gb/s Serial Digital Interface and enables the carriage of any ancillary data conforming to SMPTE ST 291. HDTV SMPTE 292 (1.5G SD-SDI) Formats 12
  • 13. HDTV SMPTE 292 (1.5G SD-SDI) Formats 13
  • 15. Review of HD-SDI Y, Cb, Cr Quantizing and 4:2:2 Sampling 15
  • 16. Analog Composite Signal – Analog signal transmission between video equipment can be subject to phenomena known as jitter, signal attenuation, and noise, resulting in signal degradation. – The Horizontal Sync Signal is also subject to these phenomena, which can introduce synchronization inaccuracies. – In composite signals, these synchronization inaccuracies are observed as: “Geometric Distortion” and “Shift in the Picture’s Position’. Analog Component Signal – In analog component signals, such distortions become even more critical. – Component signals consist of three signals “Y, R-Y, B-Y” which need to be synchronized as one signal for correct display. – If a phase shift occurs between the three signals, the color of the picture will be distorted. – To solve this, the Tri-level Sync System was developed. 16 HDTV Tri-Level Sync
  • 17. HD Video Signal – This fact is important in establishing a sync system accurate enough for HD video signals. – Higher horizontal resolutions require much faster scanning speeds of the R, G, and B signals to display an image. – The faster the scanning speed, the more difficult it becomes to maintain accurate synchronization (extremely sensitive). – HD signals use component signals, making the use of the Tri-level Sync system essential. – In today’s digital interfaces, including those used for both SD and HD, the timings of the video signals are digitally locked and automatically synchronized at the receiving device. – This relieves the system and its operators from concerns about inaccurate synchronization. – However, the Tri-level Sync signal remains to play an important role since digital video devices still use analog reference signals. 17 HDTV Tri-Level Sync
  • 18. Why Tri Level Sync ? – HD has faster rise/fall times – Easier extraction of simplified field pulses – Improves jitter performance and sync separation – Note the analog HD timing reference point 0H is measured at the 50% point of the positive rising edge of the tri-level sync. HDTV Tri-Level Sync 18
  • 19. An example of sync signal attenuates – With the Bi-level Sync System, the timing of the sync signal’s lock point can slip. – the Tri-level Sync System uses a symmetrical sync signal and locks the center of the signal. – This ensures that the same lock point is always used, even when signal attenuation occurs. HDTV Tri-Level Sync t t 19
  • 20. SMPTE 292 (HD-SDI) Horizontal Line EAV SAV HD-SDI Line Format 20
  • 21. HD-SDI Data Stream Interleaving YD1920 YD1921 YD1922 YD1923 YD2636 YD2637 YD2638 YD2639 YD1920 YD1921 CbD960 CbD961 CbD960 Cb D1318 Cb D1398 CrD60 CrD961 CrD960 CrD1318 CrD1398 YD0 YD1 YA0 YA0 CbD0 CbA0 CrA0 CrD0 YA706 YA707 CbA353 CrA353 YD1918 YD1919 CbD959 CrD959 CV CV 21 CbD959CrD959 CbD0 CbD1 CrD0 CrD1 YD1918 YD1919 YD0 YD1 YD2 YD3 Y: 720 Cr, Cb: 360 Y: 1920 Cr, Cb: 960 CbD959CrD959 YD1918 YD1919
  • 22. Header: 3FFh (all bits in the word set to 1), 000h (all 0’s), 000h (all 0’s) – In HD, both the luma and chroma signals have an EAV and SAV sequence that is multiplexed to form a twenty-bit word. – The wide variety of HD formats have additional code words added to the EAV sequence. – Code words LN0 and LN1 indicate the current line number of the HD format – Code words CR0 and CR1 represent a cyclic redundancy code (CRC) of each HD line – These code words are added to both the luma and chroma components after EAV. HD-SDI Data Stream Interleaving 22
  • 23. Error Testing, CRC − CRC checking, in high definition, is done separately for luma and chroma on each line. − A CRC value is used to detect errors in the digital active line by means of the calculation CRC(X) = X18 + X5 + X4 + 1 with an initial value of zero at the start of the first active line word and ends at the final word of the line number. The value is then distributed as shown in Table. − A value is calculated for luma YCR0 and YCR1 and another value, CCR0 and CCR1, is calculated for color-difference data. 23
  • 24. Different to Standard Definition TRS – Sixteen 10 bit words (as opposed to four 8 bit words in standard definition) – Start or End of Active Video (EAV,SAV). – Line Number (LN). – Cyclic Redundancy Check (error checking)(CRC). – Cyclic Redundancy Check Codes (CRCC) 3FF(C) 3FF(Y) 000(C) 000(Y) XYZ(C) XYZ(Y) LN0(C) LN0(Y) LN1(C) LN1(Y) CCR0 YCR0 CCR1 YCR1 000(C) 000(Y) CbData YData CrData YData EAV LN CRC High Definition TRS (Timing reference signals) EAV 24
  • 25. Header : 3FFh, 000h, 000h − The “xyz” word is a 10-bit word with the two least significant bits set to zero to survive an 8-bit signal path. Contained within the standard definition “xyz” word are functions F, V, and H, which have the following values: • Bit 8 – (F-bit): 0 for field one and 1 for field two • Bit 7 – (V-bit): 1 in vertical blanking interval; 0 during active video lines • Bit 6 – (H-bit): 1 indicates the EAV sequence; 0 indicates the SAV sequence Timing Reference Signal (TRS) Codes 25 3FF(C) 3FF(Y) 000(C) 000(Y) XYZ(C) XYZ(Y) LN0(C) LN0(Y) LN1(C) LN1(Y) CCR0 YCR0 CCR1 YCR1 000(C) 000(Y) CbData YData CrData YData EAV 3FF(C) 3FF(Y) 000(C) 000(Y) XYZ(C) XYZ(Y) 000(C) 000(Y) CbData YData CrData YData SAV
  • 26. Timing Reference Signal (TRS) Codes 26 − The “xyz” word is a 10-bit word with the two least significant bits set to zero to survive an 8-bit signal path. − Contained within the standard definition “xyz” word are functions F, V, and H, which have the following values: • Bit 8 – (F-bit): 0 for field one and 1 for field two • Bit 7 – (V-bit): 1 in vertical blanking interval; 0 during active video lines • Bit 6 – (H-bit): 1 indicates the EAV sequence; 0 indicates the SAV sequence
  • 27. EAV, SAV, LN and CRC 27 3FF(C) 3FF(Y) 000(C) 000(Y) XYZ(C) XYZ(Y) LN0(C) LN0(Y) LN1(C) LN1(Y) CCR0 YCR0 CCR1 YCR1 000(C) 000(Y) CbData YData CrData YData EAV 3FF(C) 3FF(Y) 000(C) 000(Y) XYZ(C) XYZ(Y) 000(C) 000(Y) CbData YData CrData YData SAV
  • 28. XYZ WORDS & Vertical Timing Information in Different Formats 28
  • 29. Protection Bits for SAV and EAV 29
  • 30. Error Corrections Using Protection Bits (P3-P0) Protection bits for SAV and EAV − The error correction applied provides a DEDSEC (double error detection – single error correction) function. − The received bits denoted by “–” in the Table, if detected, indicate that an error has occurred but cannot be corrected. 30
  • 32. – The relative positions of EAV and SAV in comparison to the analog horizontal line are shown. – Note the analog HD timing reference point 0H is measured at the 50% point of the positive rising edge of the tri-level sync. Horizontal Line Timing in HD Formats 32
  • 34. HD Video Bit Rate 34
  • 35. 1080i at 50Hz (1125 Total Lines) − Luminance (Y) : 2640 samples/line × 1,125 lines/frame ×25 frames/sec × 10 bits/sample = 742.5Mbit/sec − R-Y (Cr) : 1320 samples/line × 1,125 lines/frame × 25 frames/sec × 10 bits/sample = 371.25Mbit/sec − B-Y (Cb) : 1320 samples/line × 1,125 lines/frame × 25 frames/sec × 10 bits/sample = 371.25Mbit/sec − Total Bit Rate Y + Cr + Cb = 1.485Gbit/sec HD Video Bit Rate 35
  • 38. Dual Link SDI Format SMPTE 372M 38
  • 39. – Using existing HD-SDI infrastructure – Requires two signal paths (Link A & Link B) – Increase color range from 10 bits to 12 bits – SMPTE 352M to identify links – Mapping various formats into existing HD- SDI structure Problems • Interconnection issues • Swapped or Missing links • Cable Path different for each Link Dual Link-supported formats defined in SMPTE 372M. Line length structure of Dual Link formats Dual Link SDI Format SMPTE 372M 39
  • 40. Dual Link SDI Format SMPTE 372M Source Signal Formats 40
  • 41. SDI data structure for a single line Progressive image format divided between Link A and Link B. Data structure of Link A and B for fast progressive formats. Data structure for R'G'B' (A) 4:4:4:4 10-bit Dual Link format. − For the Dual Link signals, the various formats are mapped into the two HD-SDI signals. − Therefore, the various mapping structures are constrained by the existing HD-SDI format. − Figure shows how the 10-bit sampled 4:2:2 Luma Y and Chroma C words are multiplexed together in the HD-SDI signal. Dual Link SDI Format SMPTE 372M 41
  • 42. − To achieve a greater dynamic range for the signal, a 12-bit data format can be accommodated within the Dual Link standard. The problem here is that the data structure of each link conforms to 10- bit words. − In the case of R'G'B‘ 4:4:4 12-bits, the most significant bits (MSBs) 2-11 are carried within the 10-bit words. − The additional two bits from each of the R'G'B' channels are combined into the Y' channel of Link B. − Link A carries the G' channel bits 2-11 and even sample values of B' and R' bits 2-11. − In Link B the alpha channel is replaced by the combined bits 0-1 of the R'G'B‘ samples. − The odd samples of the B' and R' bits 2-11 are carried within the [C'b/C'r] words. − The combined R'G'B' 0-1 data is mapped into the 10-bit word where EP represents even parity for bits 7-0, the reserved values are set to zero and bit 9 is not bit 8. 12-bit data format within the Dual Link standard (R'G'B' 4:4:4 12-bit) 42
  • 43. Data structure for Y'C'bC'r (A) 4:4:4:4 Dual Link format. Channel representation for RGB 12-bit. Mapping structure for R'G'B' 0-1. 12-bit data format within the Dual Link standard (R'G'B' 4:4:4 12-bit) 43
  • 44. Channel representation for Y'C'bC'r (A) 4:2:2:4 12-bit. Mapping structure for Y'C'bC'r 0-1. Mapping structure for Y' 0-1. Channel representation for Y’C’bC’r 12-bit Mapping structure for Y'C'bC'r 0-1. 12-bit data format within the Dual Link standard (Y'C'bC'r (A) 4:2:2:4 12-bit , Y'C'bC'r 4:4:4 12-bit) 44
  • 46. – Work at the highest resolution (Bit Depth and Color space) possible prior to rendering the product. – In standard HD-SDI limited to 4:2:2 YCbCr only at 10-bit – With Dual Link & 3Gb/s, users can: • Increase color range from 10 bits to 12 bits • Switch from 4:2:2 to 4:4:4 Sampling to the total chrominance Bandwidth • Work in the RGB domain for easier integration with Special Effects editors, and Telecine applications – Digital cinema cameras now being adopted for feature films, television shows, and even commercials • Panavision Genesis™ • Attack of the Clones, Revenge of the Sith, Apocalypto, … • Thomson Viper FilmStream™ Why 3Gb/s SDI and High Speed Data? 46
  • 47. 3G SDI first standardized in 2005…… ITU-R BT.1120 Restricted to 1920 x 1080p50 Y’C’BC’R 4:2:2 10-bit SMPTE 3G SDI standards first published in 2006 SMPTE ST 424:2006 Physical layer – 3G equivalent of ST 292-1 (1.5Gb/s SDI) SMPTE ST 425:2006 Video, audio and ancillary data mapping for the 3G interface SMPTE ST 297:2006 Optical interface standard covering all SDI rates from 143Mb/s through to 3Gb/s 3Gb/s SDI Standards 47
  • 48. – Extending the ST 425 document suite in support of HDTV and 2K D-Cinema production with higher resolution (bit depth and sampling) – Extending the ST 425 document suite in support of HFR 2K D-Cinema production – Extending the ST 425 document suite in support of Stereoscopic 3D HDTV and 2K D-Cinema production – Extending the ST 425 document suite in support of Stereoscopic 3D HDTV and 3D HFR 2K D Cinema production – Extending the ST 425 document suite in support of 4K D-Cinema and UHDTV-1 production The 3G SDI Document Suite 48
  • 49. ST 425-xx 3G SDI / ITU-R BT.1120 Part 2 49
  • 50. ST 425-xx 3G SDI / ITU-R BT.1120 Part 2 50
  • 51. ST 425-xx 3G SDI / ITU-R BT.1120 Part 2 51
  • 52. ST 425-xx 3G SDI / ITU-R BT.1120 Part 2 52
  • 53. ST 425-xx 3G SDI / ITU-R BT.1120 Part 2 53
  • 54. ST 424 3Gb/s Signal/Data serial Interface ST 297 Optical Interface The 3G SDI Document Suite ST 424 3Gb/s Signal/Data serial Interface ST 297 Optical Interface 54
  • 56. ST 424 3Gbps SDI Signal/Data Serial Interface – It is a standard published by SMPTE which expands upon SMPTE 259M, SMPTE 344M, and SMPTE 292M allowing for bit-rates of 2.970 Gbit/s and 2.970/1.001 Gbit/s over a single-link coaxial cable. – This standard defines the 3Gb/s SDI physical interface. 10-bit multiplex, Serialization, Scrambling, Coding, Electrical specifications (eye shape, jitter, return loss…..) – These bit-rates are sufficient for 1080p video at 50 or 60 frames per second. The initial 424M standard was published in 2006, with a revision published in 2012 (SMPTE ST 424:2012). ST 297:2006 Serial Digital Fiber Transmission System – This standard defines an optical fiber system for transmitting bit-serial digital signals. – It is intended for transmitting SMPTE ST 259 signals (143 through 360 Mb/s), SMPTE ST 344 signals (540Mb/s), SMPTE ST 292-1/-2 signals (1.485 Gb/s and 1.485/1.001 Gb/s) and SMPTE ST 424 signals (2.970 Gb/s and 2.970/1.001 Gb/s). – In addition to optical specification, ST 297 also mandates laser safety testing and that all optical interfaces are labelled to indicate safety compliance, application and interoperability. 3Gb/s SDI Standards 3G SDI Physical Interface & Fiber Transmission System 56
  • 57. – Defines the transport of bit-serial data structure for 3.0Gb/s – Using a single coaxial cable interface – Supports either 10 or 12 bits data words – The SDI signal has an identical HD structure and contains two virtual interfaces into which the data is mapped. – The definitions of EAV, SAV, Line Count (LN0,LN1), and Checksum (CR0,CR1) conform to the HD-SDI signal standards. – Mapped into two virtual interfaces (10 bit parallel data streams (Data Stream One & Data Stream Two)) Data stream one of the virtual interface Interface Frequency 148.5MHz or 148.5/1.001 MHz Data stream two of the virtual interface Interface Frequecy 148.5MHz or 148.5/1.001 MHz SMPTE 424 Signal/Data Serial Interface 57
  • 58. – Example of image mapping structure for 4:2:2 YCbCr 10 bits 60/59.94 (Mapping Structure One). – Data stream one of the virtual interface for a fast progressive format contains the Y Luma data and data stream two contains the C chroma information Image Structure 58
  • 59. – Data Stream one and two of the virtual interfaces are multiplexed together producing twice the data rate. – Channel Coding uses NRZI Data stream one of the virtual interface Data stream two of the virtual interface Multiplexed 10-bit parallel interface Image Structure Multiplexed 59
  • 60. 10-bit multiplex of data stream 1 and data stream 2 The 10-bit data words of parallel data stream one and data stream two of the virtual interface Data stream one of the virtual interface Data stream two of the virtual interface Multiplexed 10-bit parallel interface Image Structure Multiplexed 60
  • 61. SMPTE ST 425-1 Three different mapping modes are defined as: • Level A Direct Image Mapping • Level B-DL Dual Link mapping • Level B-DS Dual Stream mapping – Level A is the direct mapping of an uncompressed images into a serial digital interface operating at a nominal rate of 3Gb/s. – Level B-DL is the mapping of ST 372 dual-link data streams into a serial digital interface operating at a nominal rate of 3Gb/s. – Level B-DS is the dual-stream mapping of two independent 1.5Gb/s video streams into a single serial digital interface operating at a nominal rate of 3Gb/s. 3Gb/s SDI Standards 3G SDI Mapping Standards (ST 425) 61
  • 62. 3 Gb/s Source Image Formats defined in SMPTE 425M SMPTE 425-1 Level A Signal/Data Serial Interface Source Image Format 62
  • 63. SMPTE 425-1 Level A Signal/Data Serial Interface Source Image Format 63
  • 64. Sampling Structure of the Video Line for the Various Frame Rates 64
  • 65. Mapping Structures 1, 2, 3 and 4 in SMPTE 425M- Level A 65
  • 67. – Mapping structure one supports the carriage of 4:2:2 sampled Y’C’bC’r data and has application for 1080 format. – Data stream one of the virtual interface for a fast progressive format contains the Y Luma data and data stream two contains the C chroma information. These two virtual interfaces are then multiplexed together to form the 10-bit parallel interface which is then converted into the serial signal. Mapping Structure One 67
  • 68. Level A within the SMPTE 425M standard defines the specific direct image format mapping as initially discussed for the fast progressive format (Mapping structure one for the fast progressive signals) Figure shows how the Y’, C’b and C’r samples are combined into the two virtual interfaces. There are a total of 1920 (0-1919) samples for the active picture and the blanking width is changed for the various formats to maintain a constant data rate. Within the SMPTE 425M the provision is made to allow for the carriage of a Dual Link signal mapped into a 3 Gb/s signal and this is defined as Level B. In this case the data from Link A is mapped into virtual interface one and Link B information is mapped into virtual interface two. Figure shows how the Dual Link data is mapped into the two virtual interfaces of the 3 Gb/s signal. (Mapping Structure One) Comparison between Level A and Level B 68
  • 69. − SMPTE ST 425-1 mapping structure 1: − Y samples and Cb/Cr samples of the mapping structure. − The reference sample clock is 148.5 MHz, twice the 74.25 MHz clock, with an arrangement equivalent to that of SMPTE 292M. 1080p 50/59.94/60 4:2:2 10-bit − SMPTE ST 372 § 5.1: Each Y sample and Cb/Cr sample is arranged per line number. Comparison between Level A and Level B 69
  • 71. – Mapping structure two supports the carriage of 4:4:4 sampled R’G’B’ or Y’C’bC’r data and has application for both 1080 and 720 formats. – Data stream one carries all of the G’ and R’ samples and data stream two carries all of the Alpha and B’ samples. Each of the channels is sampled at 74.25 MHz or 74.25 MHz/1.001. In the case of the Y’C’bC’r format the G’ samples are replaced by Y’ and the color difference values C’b/C’r replace the B’/R’ samples, respectively. Mapping Structure Two 71
  • 72. − SMPTE ST 425-1 mapping structure 2 − The R sample is always in stream 1, the B sample is always in stream 2. 4:4:4 and 4:4:4:4 10-bit − SMPTE ST 372 § 5.2, 5.4: − Even B + R samples are in stream 1, odd B + R samples are in stream 2. Comparison between Level A and Level B 72
  • 74. − Mapping structure three allows for 12-bit data to be carried within the SDI transport as either R’G’B’, Y’C’bC’r or X’Y’Z’ formats. − The 12-bit data represented as [11:0] has to be mapped into a 10-bit structure, and each 12-bit sample is separated into four parts ([11:9], [8:6], [5:3], [2:0]). Each of these values is then combined into a 10-bit word for each of the components R’G’B’, Y’C’bC’r or X’Y’Z’ as defined in the next Table. − These data words are then distributed across the two virtual interfaces: the bits [11:9] and [5:3] are carried by virtual interface one, and the remaining data words [8:6] and [2:0] are carried by virtual interface two as shown in Figure 51. − In the case of the Y’C’bC’r format the G’ samples are replaced by Y’ and the color difference values C’b/C’r replace the B’/R’ samples, respectively. Mapping Structure Three 74
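As a rough illustration of the bit split just described, the sketch below packs one 12-bit R'G'B' sample into the four 10-bit words of the two virtual interfaces; the placement of R', G' and B' within bits b8..b0 of each word is an assumption made here for illustration, since the actual layout is defined by the standard's table.

```python
# Illustrative sketch of the mapping-structure-three bit split: each 12-bit sample is
# cut into four 3-bit groups ([11:9], [8:6], [5:3], [2:0]); groups [11:9] and [5:3]
# go to virtual interface one, groups [8:6] and [2:0] to virtual interface two.
# The channel-to-bit placement below (R' high, G' mid, B' low) is an assumption.

def pack_rgb12_word(r_bits, g_bits, b_bits):
    """Combine one 3-bit group from each channel into a 10-bit word, b9 = NOT b8."""
    word = (r_bits << 6) | (g_bits << 3) | b_bits      # b8..b0
    word |= ((~word >> 8) & 1) << 9                    # b9 is the complement of b8
    return word

def map_structure_three(r12, g12, b12):
    """Return (interface_one_words, interface_two_words) for one R'G'B' sample."""
    groups = lambda v: [(v >> s) & 0x7 for s in (9, 6, 3, 0)]   # [11:9],[8:6],[5:3],[2:0]
    rg, gg, bg = groups(r12), groups(g12), groups(b12)
    vi1 = [pack_rgb12_word(rg[0], gg[0], bg[0]),   # the [11:9] groups
           pack_rgb12_word(rg[2], gg[2], bg[2])]   # the [5:3]  groups
    vi2 = [pack_rgb12_word(rg[1], gg[1], bg[1]),   # the [8:6]  groups
           pack_rgb12_word(rg[3], gg[3], bg[3])]   # the [2:0]  groups
    return vi1, vi2

print([["%03X" % w for w in s] for s in map_structure_three(0xABC, 0x123, 0xFFF)])
```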
  • 75. − In digital cinema application, a different color space of X’Y’Z’ is used to give a greater dynamic range to the representation of color to replicate the color depth available from film. − SMPTE 428 defines the various parameters of this color space. − In the case of the X’Y’Z’ format the R’ samples are replaced by X’, the G’ samples are replaced by Y’ and the B’ samples are replaced by Z’. − Each of the channels is sampled at 74.25MHz or 74.25MHz/1.001. To maintain the constant 3 Gb/s data rate for the various supported formats the blanking width is changed. Mapping Structure Three 75
  • 76. 12-Bit Mapping Structure of R’G’B into the 10-bit Virtual Interface 76
  • 77. 12-bit Mapping Structure of Y’ C’b C’r into the 10-bit Virtual Interface 77
  • 78. − SMPTE ST 425-1 Mapping structure 3: − Each 10-bit word carries 3 bits from each of the three R’G’B’ channels (the three 12-bit samples are completed across 4 such words). 4:4:4 12-bit − SMPTE ST 372 § 5.3, 5.4: − The upper 10 bits of the three R’G’B’ channels are carried as in the § 5.2 4:4:4 10-bit format. − The remaining two least-significant bits of each channel are packed into one word, carried in place of the alpha-channel data. Comparison between Level A and Level B 78
  • 80. – Mapping structure four supports the carriage of 4:2:2 sampled Y’C’bC’r data and has application for the 1080 format at 12 bits. – In order to map this 12-bit data into the 10-bit infrastructure of the SDI interface, the 12-bit data represented as [11:0] has to be divided into different words. – In mapping structure four, the first half of the Y’ data bits [11:6] is carried in one word of virtual interface one and the remaining Y’ data bits [5:0] are carried in the next word of virtual interface one, as shown in the Table. Mapping Structure Four 80
  • 81. – Figure shows how the data packets are combined into the two virtual interfaces. – The luma signal (Y’) is sampled at 74.25MHz or 74.25MHz/1.001 and the chroma channels (C’b/C’r) are sampled at half this rate of 37.125MHz or 37.125MHz/1.001. Mapping Structure Four 81
  • 82. − SMPTE ST 425-1 mapping structure 4: − Each 12-bit sample is carried as 6 bits in each of two 10-bit words. − Stream 1 consists of luminance and stream 2 consists of chrominance. 4:2:2 12-bit − SMPTE ST 372 § 5.5: − Stream 1 carries the upper 10 bits of each channel, multiplexed in the order Cb / Y / Cr / Y. Stream 2 consists of the lower 2 bits of each channel and the 10-bit alpha channel. Comparison between Level A and Level B 82
  • 83. − Mapping of two parallel 10-bit interfaces with the same line and frame structure, in conformance with SMPTE 292. SMPTE 425M Level B Level B-DL (Dual Link) mapping, Level B-DS (Dual Stream) mapping 83
  • 84. − Mapping of two parallel 10-bit interfaces with the same line and frame structure, in conformance with SMPTE 292. SMPTE 425M Level B Level B-DL (Dual Link) mapping, Level B-DS (Dual Stream) mapping 84
  • 85. 3Gb/s Level B Mapping of SMPTE 372M Dual Link Level B-DL Dual Link mapping 85
  • 86. SMPTE 425-1 Level B Signal/Data Serial Interface Source Image Format 86
  • 87. 3Gb/s Serial Digital Interface – Pk-to-Pk Amplitude 800 mV +/-10% – DC Offset 0.0 V +/-0.5 V – Rise/Fall Time between 20% & 80% no greater than 135 ps and not differing by more than 50 ps – Overshoot on rise/fall not to exceed 10% of amplitude – Timing Jitter <= 2 UI above 10 Hz – Alignment Jitter <= 0.3 UI above 100 kHz Eye Specifications per SMPTE Standards 87
  • 88. – SMPTE recommended practice RP 184 provides a set of definitions and measurement procedures for the measurement of jitter. – SMPTE 424M, 292 and 259M define a set of frequency limits based on this recommended practice. – f1 = 10 Hz = Timing jitter lower band edge for SD, HD and 3G-SDI – f3 = 1 kHz = Alignment jitter lower band edge for SD – f3 = 100 kHz = Alignment jitter lower band edge for HD & 3G-SDI – f4 > 1/10 the clock rate = Upper band edge Jitter Measurements 88
  • 91. Technical Report 002 Advice on the use of 3 Gbit/s HD-SDI interfaces 91
  • 92. Pathological Signals - Stress Testing 92
  • 93. EAV, SAV, LN and CRC 93
  • 94. Luma and Chroma Components 94
  • 95. − The “XYZ” word is a ten-bit word with the two least significant bits set to zero, allowing translation to and from an eight-bit system. Bits of the “XYZ” word have the following functions: • Bit 9 – (Fixed bit) always fixed at 1 • Bit 8 – (F-bit) always 0 in a progressive scan system; 0 for field one and 1 for field two in an interlaced or segmented frame system. • Bit 7 – (V-bit) 1 in vertical blanking interval; 0 during active video lines • Bit 6 – (H-bit) 1 indicates the End of Active Video (EAV) sequence; 0 indicates the Start of Active Video (SAV) sequence • Bits 5, 4, 3, 2 – (Protection bits) provide a limited error correction of the data in the F, V, and H bits • Bits 1, 0 (Fixed bits) set to 0 to have identical word values in 10-bit or 8-bit systems Format of XYZ Word for HD and SD Standards 95
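A minimal sketch of building the XYZ word from the F, V and H flags follows; the protection-bit equations used (P3 = V xor H, P2 = F xor H, P1 = F xor V, P0 = F xor V xor H) follow the commonly published SMPTE TRS definition and should be verified against the standard before being relied on.

```python
# Minimal sketch: construct the 10-bit "XYZ" timing reference word from the F, V and
# H flag bits described above.  Bits 1 and 0 stay 0 so the value survives 8-bit use.

def make_xyz(f, v, h):
    """Return the 10-bit XYZ word for flag bits f, v, h (each 0 or 1)."""
    p3 = v ^ h                      # protection bits, assumed per the usual TRS definition
    p2 = f ^ h
    p1 = f ^ v
    p0 = f ^ v ^ h
    return (1 << 9) | (f << 8) | (v << 7) | (h << 6) | \
           (p3 << 5) | (p2 << 4) | (p1 << 3) | (p0 << 2)

# Example: EAV (H=1) on an active line (V=0) of a progressive frame (F=0) -> 0x274
print(hex(make_xyz(f=0, v=0, h=1)))
```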
  • 96. Analog HD Vertical Blanking Interval 96
  • 97. Vertical Timing for Digital HD Formats 97
  • 98. Analog HD Timing Parameters with Selected Digital Relationships 98
  • 99. SMPTE ST 424:2012 Updates to add provisions for use of other connector types Typical cable loss recommendation changed from -20dB to <-30dB SMPTE ST 425:2011 Revised to include Digital Cinema production formats and add 32-channel audio support Split into multiple parts to accommodate future revisions for stereo and high-resolution images 425-0 – Index 425-1 – Replaces current 425 425-2 – A Stereo Pair of 1.5 Gb/s images – tied up with ST 292-2 425-3 – Single Images with payload up to 6 Gb/s, carried on 2 links 425-4 – A Stereo pair of 3 Gb/s signals on 2 links 425-5 – Single Image with payload up to 12 Gb/s, carried on 4 links 425-6 – A Stereo Pair of 6 Gb/s signals, transported via 4 links 3Gb/s SDI Standards – Continuing Evolution 99
  • 100. General Issues for 1080p50/60 – Both Level A and Level B-DL mapping modes have similar capabilities BUT they are not compatible • For 1080p50/60, conversion between Level A and Level B-DL introduces a delay of at least one video line on each conversion. • Conversion of signals with embedded audio or other ancillary data may increase the delay and introduce additional complexity to correct the positioning or timing of some ancillary data packets. • Some devices process signals internally using a different standard to their own input/output standard. It is always advisable to confirm these devices compensate for any conversion delay internally before installation. – Users should establish the capabilities of proposed purchases before designing new installations. – Facility designers may wish to select one mapping format (Level A or Level B-DL) for each facility's routing / vision mixer signal “cloud”. 3G SDI – Some Things to Consider 100
  • 101. Switching Regions − For Level A and Level B-DS, the serial stream switch point is defined in SMPTE RP 168:2009 − For Level B-DS, there is no requirement for frame alignment of each image. If the two images are not frame aligned, video switching could be adversely affected. − Users and facility designers should always ensure that Level B-DS equipment guarantees frame alignment. ST 352 Payload ID − The use of the SMPTE ST 352 Payload ID is mandatory due to the large number of different video formats that can be carried in the 3 Gb/s interface. − Without the payload ID, it is not possible to correctly identify all of the supported formats or mapping modes purely from inspection of the payload data. − Users should ensure that any proposed new purchases support ST 352 payload ID before designing new installations 3G SDI – Some Things to Consider 101
  • 102. Embedded audio – Level A, Level B-DL and Level B-DS can all carry up to 32 audio channels – The channel assignments and identification are different • Level A uses 8 separate audio groups (of 4 channels each) – in accordance with ST 299-1 AND ST 299-2 – all 32 channels are uniquely identified • Level B-DL uses two streams of 4 audio groups (of 4 channels each) – in accordance with ST 299-1 – identical channel numbers are used, but channels 1~16 can only be differentiated from channels 17~32 at the ST 372 Dual-Link (Link A / Link B) level • Level B-DS is similar to Level B-DL, carrying two links of 16 channels, but there is no defined channel assignment for this mapping – End-users and facility designers should ensure that audio embedders/de-embedders correctly identify audio channel mapping in mixed Level A / Level B systems – Extra care should be taken in 3G system upgrades to ensure that these new audio embedding capabilities are handled transparently throughout the plant 3G SDI – Some Things to Consider 102
  • 104. • Active picture information • Vertical blanking interval • Horizontal blanking intervals – Blanking intervals carry the vertical and horizontal synchronizing information. – The vertical blanking interval contains: – The vertical synchronizing pulses – “Unused” lines of video – The horizontal blanking interval is made up of the front porch, horizontal synchronizing pulse, the breezeway, the color subcarrier “burst” and the back porch. A Historical Perspective of HB & VB — Analog [Figure: NTSC/PAL frame layout showing the active picture (483/576 of 525/625 lines, 708-720 pixels), horizontal blanking (front porch, sync pulse, breezeway, color burst, back porch), and vertical blanking with its switching line and data line selection]
  • 105. – In earlier analog systems, the opportunity for utilizing the “unused” lines in the vertical blanking interval existed to carry “extra information”. – This situation enabled applications such as closed captioning for the hearing impaired and news/sports/weather/other “teletext” extra visual information. – For production applications, time code in the vertical blanking interval enhanced video tape edit decisions. – Other applications such as signaling downstream equipment to perform certain tasks were also possible. A Historical Perspective of HB & VB — Analog [Figure: NTSC/PAL frame layout as on the previous slide] 105
  • 106. – As the vertical blanking interval is divided into lines, the data is added line by line (a process that is commonly known as “line selection”). – Due to the video signal being interlaced with odd and even lines, as a line is selected, there are the field 1 and field 2 selections. A Historical Perspective of HB & VB — Analog [Figure: NTSC/PAL frame layout as on the previous slides]
  • 107. Metadata and data locations, with the applicable standard, for analog video signals. A Historical Perspective of HB & VB — Analog 107
  • 108. – The move to digital video enabled more data to be added. – The “blanking intervals” in analog video signals are analogous to “ancillary data spaces” in digital video signals. Vertical ancillary data space (VANC) Horizontal ancillary data space (HANC) A Historical Perspective of HB & VB — Digital [Figure: SD-SDI (270 Mb/s, YCbCr 4:2:2 10-bit) frame layout showing the active picture (483/576 of 525/625 lines, 708-720 pixels), HANC between EAV and SAV carrying 4 groups of embedded audio (16 channels), and VANC with line selection and the vertical switching line]
  • 109. – Vertical and horizontal synchronizing pulses: SAV, EAV – 16 channels of digital audio could be carried, along with the digital video signal, with any other “additional data.” – This is known as embedding the audio and data signals into the video signal. – A digital video signal is made of: • Video essence • Audio essence • Any additional data essence or metadata. A Historical Perspective of HB & VB — Digital [Figure: SD-SDI (YCbCr 4:2:2 10-bit) frame layout as on the previous slide, with EAV/SAV and 4 groups of embedded audio (16 channels)]
  • 110. – It is suggested that the audio information be placed between the EAV and SAV during the horizontal blanking interval. – Not all of the VANC data space is available. • For instance, the luminance samples on one line per field are reserved for DVITC (digital vertical interval time code) and the chrominance samples on that line may be devoted to video index (also Closed Caption, Timecode, AFD, WSS (Wide-Screen Signaling)). • Also, it would be wise to avoid using the vertical interval switch line and perhaps the subsequent line, where data might be lost due to relock after the switch occurs. Component Digital Ancillary Data Space for SD-SDI HANC & VANC 110
  • 111. HANC & VANC Horizontal and Vertical Interval around a Switching Line / A switched area of a video frame construct on a digital serial interface − It would be wise to avoid using the vertical interval switch line and perhaps the subsequent line, where data might be lost due to relock after the switch occurs. 111
  • 112. RP 291-2 also provides a method of calculating the available ancillary data space on any interface. 112
  • 113. – The formatting of the ancillary data packets is the same between SD and HD. – Ancillary data is formatted into packets prior to multiplexing it into the video data stream. ADF (000h, 3FFh, 3FFh) | DID | SDID (or DBN) | DC | User Data Words (max 255 words) | CS Ancillary data packet structure of the SDI The maximum length of an ancillary packet is 255 words; the maximum number of user data words is 248. 113
  • 114. • Ancillary Data Flag (ADF): For identifying the start of the ancillary data packet, using the code words 000h, 3FFh, 3FFh. (This is the reverse of the code words used for EAV and SAV data.) • Data Identification word (DID): For signifying the type of data being carried so that equipment can quickly identify the type of data present within the signal. • Data Block Number (DBN): For providing sequential order to ancillary data packets, allowing a receiver to determine if data is missing (it is an optional counter). • Secondary Data ID (SDID): For providing a wider range of allowed values; it can be used for a series of data to be grouped, for instance, the Dolby vertical ancillary data has a series of SDIDs to identify the audio channels the data is associated with. • Data Count (DC): For indicating the amount of data in the packet. • Checksum (CS): For detecting errors in the data packet. Ancillary data packet structure in HD ADF (000h, 3FFh, 3FFh) | DID | SDID (or DBN) | DC | User Data Words (max 255 words) | CS 114
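The sketch below assembles a simplified Type 2 ancillary data packet from these fields; the checksum rule used (a 9-bit sum of bits b8..b0 from DID through the last user data word, with b9 set to the complement of b8) and the example DID/SDID values are assumptions based on common SMPTE 291 practice, and the parity encoding of the DID/SDID/DC words themselves is omitted for brevity.

```python
# Simplified sketch of a Type-2 ancillary data packet: ADF, DID, SDID, DC, UDW..., CS.

def anc_checksum(words):
    """9-bit sum of b8..b0 over the given words, with b9 set to the complement of b8."""
    s = sum(w & 0x1FF for w in words) & 0x1FF
    return s | (((~s >> 8) & 1) << 9)

def build_anc_packet(did, sdid, udw):
    """Return the full list of 10-bit words for one ANC packet (parity bits omitted)."""
    adf = [0x000, 0x3FF, 0x3FF]                  # ancillary data flag
    body = [did, sdid, len(udw)] + list(udw)     # DID, SDID, DC, user data words
    return adf + body + [anc_checksum(body)]

# Illustrative values only; real DID/SDID/UDW assignments come from the SMPTE registers.
packet = build_anc_packet(0x41, 0x01, [0x8A, 0xC9, 0x00, 0x01])
print(["%03X" % w for w in packet])
```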
  • 115. Ancillary identification codes for type 2. Ancillary identification codes for type 1. 115
  • 116. ANC Data Placement & Vertical Interval Switching Line Numbers ANC data placement. Vertical Interval Switching line numbers 116
  • 117. A Historical Perspective of HB & VB — Digital 117
  • 118. A Historical Perspective of HB & VB — HD 118
  • 119. A Historical Perspective of HB & VB — HD 119
  • 120. A Historical Perspective of HB & VB — HD 120
  • 121. – The UMID label data consists of 32 8-bit bytes for a basic UMID or 64 bytes for an extended UMID. – The number of words in the UDW is indicated in the DC field of the ANC packet header. Ex: Packing Unique Material Identifier (UMID) and Program ID Label Data Data structure of a SMPTE 291M ANC packet (type 2) Data structure of UDWs for UMIDs UMID data count (DC) and key values 121
  • 122. Handling of digital audio is defined in: ANSI/SMPTE Standard 272M: Formatting AES/EBU Audio and Auxiliary Data into Digital Video Ancillary Data Space, for 525/60 and 625/50 ANSI/SMPTE 259M formats. ANSI/SMPTE 299M: 24-Bit Digital Audio Format for HDTV Bit-Serial Interface, for ANSI/SMPTE 292M formats. – From 2 to 16 AES/EBU audio channels are transmitted in pairs and combined, where appropriate, into groups of four channels. – Each group is identified by a unique ancillary data ID. – Audio is sampled at a video-synchronous clock frequency of 48 kHz, or optionally at synchronous or asynchronous rates from 32 kHz to 48 kHz. Digital Audio 122
  • 123. Digital Audio Television Clock Relationships 123
  • 124. Digital Audio Block & Frame Format In SD structure the ancillary audio data is applied across CbYCrY. 124
  • 125. - 4 Preamble Bits - 24 Data Bits - 1 Validity - 1 User - 1 Channel status - 1 Parity (Even parity) Validity bit − The validity bit shall be logic “0” if the audio sample word is suitable for conversion. − The validity bit shall be logic “1” if the audio sample word is not suitable for conversion. (There is no default value for the validity bit.) User data format − User data bits may be used in any way desired by the user. (The default value of the user data bit shall be logic "0".) − Channel status Byte 1, bits 4-7, indicates possible formats for the user data channel. Digital Audio Sub-frame Format 125
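As a small worked example of the sub-frame layout listed above, the sketch below fills time slots 4 to 31 with a 24-bit sample, the V, U and C flags, and an even-parity bit computed so that those slots carry an even number of ones; the preamble is shown only symbolically because it is a biphase-mark violation pattern rather than ordinary data bits.

```python
# Minimal sketch of one AES3 sub-frame payload (time slots 4..31).

def aes3_subframe(sample_24bit, validity=0, user=0, channel_status=0):
    """Return the 28 payload bits of a sub-frame, LSB of the sample first."""
    bits = [(sample_24bit >> i) & 1 for i in range(24)]   # slots 4..27: audio sample
    bits += [validity, user, channel_status]              # slots 28, 29, 30: V, U, C
    parity = sum(bits) & 1                                 # even parity over slots 4..30
    bits.append(parity)                                    # slot 31 makes the total even
    return bits

frame = ("X-preamble", aes3_subframe(0x123456, validity=0, channel_status=1))
print(frame[0], frame[1], "total ones:", sum(frame[1]))    # the count of ones is even
```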
  • 126. - 4 Preamble Bits - 24 Data Bits - 1 Validity - 1 User - 1 Channel status - 1 Parity (Even parity) Channel status format − The channel status for each audio signal carries information associated with that audio signal (examples: length of audio sample words, number of audio channels, sampling frequency, time code, alphanumeric source and destination codes, and pre-emphasis), and thus it is possible for different channel status data to be carried in the two sub-frames of the digital audio signal. • Channel status information is organized in 192-bit blocks, subdivided into 24 bytes. • The first bit of each block is carried in the frame with preamble "Z". • The specific organization follows, wherein the suffix 0 designates the first byte or bit. Where multiple bit states represent a counting number, tables are arranged with most significant bit (MSB) first, except where noted as LSB first. Digital Audio Sub-frame Format 126
  • 127. AES/EBU digital audio signal structure 127
  • 128. Frame repeat – 1 Frame : 1/48kHz = 20.83us (44.1kHz=22.67us) Block repeat – 1 Block : 20.83us ×192frame = 4ms (44.1kHz=4.352ms) AES/EBU Interface bit rate – 48kHz × 2CH × 32Bit = 3.072Mbps After BPM (Biphase Mark) encoding – 3.072Mbps × 2 = 6.144Mbps AES/EBU Frames & Sub-frames & Data rate 128
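The arithmetic on this slide can be checked directly; the snippet below is only a back-of-the-envelope calculation of the quoted figures.

```python
# Quick check of the AES/EBU frame/block timing and bit-rate arithmetic quoted above.
fs = 48_000                                   # sampling frequency in Hz
frame_period_us = 1e6 / fs                    # one AES frame per sample period
block_period_ms = 192 * frame_period_us / 1e3 # 192 frames per channel-status block
interface_rate_mbps = fs * 2 * 32 / 1e6       # 2 channels x 32 time slots per frame
channel_rate_mbaud = interface_rate_mbps * 2  # biphase-mark coding doubles the line rate

print(f"{frame_period_us:.2f} us/frame, {block_period_ms:.0f} ms/block, "
      f"{interface_rate_mbps:.3f} Mbit/s data, {channel_rate_mbaud:.3f} Mbaud on the line")
# -> 20.83 us/frame, 4 ms/block, 3.072 Mbit/s data, 6.144 Mbaud on the line
```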
  • 130. The subframe preambles starting with a transition from negative to positive. The subframe preambles starting with a transition from positive to negative. – Each preamble must transition to a different level from that of the last state of the bit before it. X, Y and Z Sync Words (Preambles) 130
  • 132. Insertion of audio frames in ancillary data packets of the SDI 132
  • 133. The Audio Data Packet precedes the Extended Data Packet in the SDI data stream. When the audio data is 24-bit for SD, the Audio Data Packet transmits only 20 bits of data; the Extended Data Packet is used to transmit the remaining 4 bits. 133
  • 134. − When the audio data is 24-bit for SD, the Audio Data Packet transmits only 20 bits of data; the Extended Data Packet is used to transmit the remaining 4 bits. Audio Data Packet and Extended Data Packet 134
  • 135. – The Audio Data Packet contains one or more audio samples from up to four audio channels. – 20 audio bits and the C, U, V bits from each AES sub-frame, i.e. 23 bits, are mapped into three 10-bit video words (X, X+1, X+2). Basic Embedded Audio 135
  • 136. Data Identifiers (DID) for up to 16-Channel Operation of SD embedded audio Embedded Audio Bit Distribution – Bit-9 is always the inverse of bit-8 to ensure that none of the excluded word values (3FFh through 3FCh or 003h through 000h) are used. – The Z-bit is set to “1” corresponding to the first frame of the 192-frame AES block. – Bit-8 in word X+2 is even parity for bits 0-8 in all three words. 136
  • 137. Full-featured embedded audio to include: • Carrying the 4 AES auxiliary bits (which may be used to extend the audio samples to 24-bit) • Allowing non-synchronous clock operation • Allowing sampling other than 48 kHz • Providing audio-to-video delay information for each channel • Documenting Data IDs to allow up to 16 channels of audio in component digital systems • Counting “audio frames” for 525 line systems. Extended Embedded Audio 137
  • 138. – The Audio Control Packet is transmitted once per field in the second horizontal ancillary data space (on the second line) after the vertical interval switch point. – It contains information on audio frame number, sampling frequency, active channels, and relative audio-to video delay of each channel. – Transmission of audio control packets is optional for 48 kHz synchronous operation and required for all other modes of operation (since it contains the information as to what mode is being used). Audio Control Packet Formatting 138
  • 139. – When the audio data is 24-bit for SD, it is split into 20 bits of audio data and an extended packet containing the 4 auxiliary bits. – The Audio Data Packet only transmits 20-bit data; the Extended Data Packet is used to transmit the other 4 bits. – The full 24 bits of audio data are sent as a group in HD (no split). – Since the full 24 bits of audio data are carried within the user data, there is no extended data packet used within HD. (HD) (SD) Subframe Formats in SD and HD 139
  • 140. There are some similarities and differences in the implementation of AES/EBU within the SD and HD environments. – The formatting of the ancillary data packets is the same between SD and HD. – The information contained within the user data is different because the full 24 bits of audio data are sent in HD. – Therefore, the total number of bits used in HD is 28 (24+4 (V,U,C,P)) bits compared with 23 (20+3 (V,U,C)) bits in SD. – The 24 bits of audio data are placed in 4 ancillary data words along with the C, V, U and Z-bit flags. – Additionally, the CLK and ECC words are added to the packet. Error Correction Codes Structure of HD Audio Data Packet 140
  • 141. – Conformance to the ancillary data packet structure means that the Ancillary Data Flag (ADF) has a three-word value of 000h, 3FFh, 3FFh, as in SMPTE 291M. – The one-word DID (Data Identification) has the following values to identify the appropriate group of audio data, as shown in the Table. – DBN is a one-word value for the data block number. – DC is a one-word data count which is always 218h. – The User Data Words (UDW) always contain 24 words of data. – UDW0 and UDW1 are used for audio clock phase data and provide a means to regenerate the audio sampling clock. The data within these two words provides a count of the number of video clocks between the first word of EAV and the video sample corresponding to the audio sample. Data Identifiers (DID) for up to 16-Channel Operation of HD embedded audio. Structure of HD Audio Data Packet 141
  • 142. – Each audio data subframe is distributed across 4 UDW samples. – The full preamble data is not carried within the 4 words, only a reference to the start of the 192-frame block by use of the Z-bit indicator. Also, unlike SD, the parity bit carried is the P bit of the 32-bit subframe. – The Error Correction Code (ECC) is a set of 6 words used to detect errors within the first 24 words from ADF to UDW17. The value is calculated by applying the 8 bits of data B0-B7 of the 24 words through a BCH code circuit that produces the 6 words of the ECC (Error Correction Code). – The ancillary data information is multiplexed within the color difference Cb/Cr data space only (unlike SD). – The Y data space is only used for the audio control packet, which occurs once per field and is placed on the second line after the switching point of the Y data. – No ancillary data is placed within the signal on the line subsequent to the switching point. (The switching point location is dependent on the format of the HD signals; for example, in the 1125/60 system no ancillary data is put on line 8.) Structure of HD Audio Data Packet 142
  • 143. – The audio control packet carries additional information used in the process of decoding the audio data and has a similar structure to SD. – The Ancillary Data Flag has a three-word value of 000h, 3FFh, 3FFh. – The one-word DID has values to identify the appropriate group of audio data. – DBN is always 200h – DC is always 10Bh. – The UDW contains 11 words of data structured into five different types of data. – The Audio Frame (AF) number data provides a sequential number of video frames to assist in indicating the position of the audio samples when using a non integer number of audio samples per frame. – The RATE indicates the sampling rate of the audio data and whether the data is synchronous or asynchronous. – The ACT word indicates the number of active channels within the group. – DELm-n indicates the amount of accumulated audio processing delay relative to video measured in audio sample intervals for each channel pair 1&2 and 3&4. Structure of audio control packet Structure of HD Audio control packet 143
  • 144. Audio Data Packet Audio Extended Data Packet Audio Control Packet Audio Data Packet, Audio Extended Data Packet and Audio Control Packet 144
  • 146. UHD1, UHD2, 4K & 8K Transparency channel or Alpha channel 146
  • 147. UHD1, UHD2, 4K & 8K 147
  • 148. UHD1, UHD2, 4K & 8K 148
  • 149. [Number of physical links] UHD1, UHD2, 4K & 8K 149
  • 150. DVB UHD Phases HDR Delivery 150
  • 152. Technology and Standards Timeline 152
  • 153. ST 2081-xx 6G SDI / ITU-R BT.2077-1 Part 3 153
  • 154. ST 2081-xx 6G SDI / ITU-R BT.2077-1 Part 3 154
  • 155. ST 2081-xx 6G SDI / ITU-R BT.2077-1 Part 3 155
  • 156. ST 2081-xx 6G SDI / ITU-R BT.2077-1 Part 3 156
  • 158. ST 2082-xx 12G SDI / ITU-R BT.2077-1 Part 3 158
  • 159. ST 2082-xx 12G SDI / ITU-R BT.2077-1 Part 3 159
  • 160. ST 2082-xx 12G SDI / ITU-R BT.2077-1 Part 3 160
  • 161. ST 2082-xx 12G SDI / ITU-R BT.2077-1 Part 3 161
  • 163. SDI Physical Layer Parameters -Optical 163
  • 164. SDI Physical Layer Parameters -Optical For given transmit and receive parameters, worst case dispersion limited optical link lengths of 12km, 5km and 2km @ 6Gb/s, 12Gb/s and 24Gb/s respectively are targeted in single-mode fiber 164
  • 165. SDI Physical Layer Parameters - Optical – A single “robust” optical connector solution is being standardized as proposed SMPTE ST 2091 – One form factor (multi-fiber), to support data rates from 270 Mb/s to 200 Gb/s on a single (multi-fiber optical) cable. – Rugged, robust and dirt-protected (shuttered aperture) – Simple integration – LC connection or MTP® (Multi-fiber Termination Push-on) – Versatile – multichannel 2-, 4- and 8-fiber connection system – Common QSFP optical module form factor – Up to 4 km link distance at all data rates LC®/PC (Lucent Connector / Physical Contact) connector 165
  • 166. SMPTE UHDTV-SDI Physical Layer SMPTE ST2081-1 & ST2082-1 require that jitter is measured over two different frequency bands. 166
  • 167. Video Levels in SDI Full Range (Newly introduced) – In file-based workflows the full range of levels can be used to improve accuracy in color conversion. – Some digital image interfaces reserve digital values, e.g. for timing information, such that the permitted video range of these interfaces is narrower than the video range of the full-range signal. – The mapping from full-range images to these interfaces is application-specific. – SDI has excluded code words for the EAV and SAV timing reference signals (TRS), so the full range is restricted to 4d to 1019d (16d to 4079d for 12-bit) for SDI. – The conversion method is up to the device outputting the SDI: the data may be clipped or scaled to fit this range. 167
  • 168. Video Levels in SDI Narrow Range – Traditional SDI has used 0-700mv to represent levels from black to white which is typically referred to as 0%-100% or 0IRE to 100IRE. – 64d to 960d for 10-bit (256d to 3840d for 12-bit) – The narrow range representation is in widespread use and is considered the “default”. – Narrow range signals may extend below black (sub-blacks) and exceed the nominal peak values (super-whites), but should not exceed the video data range. 168
  • 169. Video Levels Digital 10- and 12-bit integer representation (ITU-R BT.2100-1)
Round(x) = Sign(x) × Floor(|x| + 0.5), where Floor(x) is the largest integer less than or equal to x.
Resulting values that exceed the video data range should be clipped to the video data range.
Narrow Range:
D = Round[(219 E′ + 16) × 2^(n−8)]  (R′, G′, B′, Y′, I)
D = Round[(224 E′ + 128) × 2^(n−8)]  (C′B, C′R, CT, CP)
Full Range:
D = Round[(2^n − 1) E′]  (R′, G′, B′, Y′, I)
D = Round[(2^n − 1) E′ + 2^(n−1)]  (C′B, C′R, CT, CP)
Coding (Narrow 10-bit / Narrow 12-bit / Full 10-bit / Full 12-bit):
Black (R′ = G′ = B′ = Y′ = I = 0), DR′ DG′ DB′ DY′ DI: 64 / 256 / 0 / 0
Nominal Peak (R′ = G′ = B′ = Y′ = I = 1), DR′ DG′ DB′ DY′ DI: 940 / 3760 / 1023 / 4095
Nominal Peak (C′B = C′R = −0.5), DC′B DC′R DCT DCP: 64 / 256 / 0 / 0
Achromatic (C′B = C′R = 0), DC′B DC′R DCT DCP: 512 / 2048 / 512 / 2048
Nominal Peak (C′B = C′R = +0.5), DC′B DC′R DCT DCP: 960 / 3840 / 1023 / 4095
Video Data Range: 4~1019 / 16~4079 / 0~1023 / 0~4095
169
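The quantization formulas above can be transcribed directly; the sketch below reproduces the narrow-range and full-range mappings and clips the result to the video data range, and is intended only as a check of the table values.

```python
# Direct transcription of the BT.2100-1 quantization formulas (n = bit depth, 10 or 12).
import math

def bt2100_round(x):
    return math.copysign(math.floor(abs(x) + 0.5), x)

def quantize(e, n=10, narrow=True, colour_diff=False):
    if narrow:
        d = bt2100_round((224 * e + 128) * 2 ** (n - 8)) if colour_diff \
            else bt2100_round((219 * e + 16) * 2 ** (n - 8))
        lo, hi = (4, 1019) if n == 10 else (16, 4079)      # narrow-range video data range
    else:
        d = bt2100_round((2 ** n - 1) * e + (2 ** (n - 1) if colour_diff else 0))
        lo, hi = 0, 2 ** n - 1                             # full-range video data range
    return int(min(max(d, lo), hi))                        # clip to the video data range

print(quantize(0.0), quantize(1.0), quantize(0.5, colour_diff=True))   # 64 940 960
print(quantize(1.0, n=12, narrow=False))                               # 4095
```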
  • 170. Code Values for 10-bit and 12-bit Y or RGB. Video Levels Digital 10- and 12-bit integer representation (ITU-R BT.2100-1) 170
  • 171. Code Values for 10-bit and 12-bit Cb and Cr. Video Levels Digital 10- and 12-bit integer representation (ITU-R BT.2100-1) 171
  • 172. – In file-based workflows the full range of video levels can be used to improve accuracy in color conversion in a 10-bit or 12-bit system. – When a file is converted to SDI the data may be scaled or clipped, depending on the device, to the allowed range of SDI levels. – The full range should not be used for program exchange unless all parties agree. Mapping from/to Full-Range [Figure: full-range file levels (0d–1023d for 10-bit, 0d–4095d for 12-bit) mapped to full-range SDI (4d–1019d for 10-bit) and to narrow-range SDI (64d–940d/960d for 10-bit, 256d–3760d/3840d for 12-bit)] 172
  • 175. ST 2082 ‘Image Mapping Data Flow’ Roadmap 175
  • 176. SMPTE ST 2082-10 2160-line Source Image and Ancillary Data Mapping for 12G-SDI MODE 1: 2160-line source image formats and ancillary data into a 12 Gb/s [nominal] SDI bit-serial interface 176
  • 177. Carriage of 2160-line images in a 12G-SDI interface Generalized process 10/12 bit – The source images are divided into two or four 1080-line sub images, depending on the format of the source image. – Each 10-bit data stream includes timing and sync words, line numbers, cyclic redundancy codes, ancillary data, including audio, and payload identification packets. – Mux: Data stream 8, data stream 4, data stream 6, data stream 2, data stream 7, data stream 3, data stream 5, data stream 1 177
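A minimal sketch of the word multiplex order quoted on this slide follows; the stream contents are placeholders, and only the 8, 4, 6, 2, 7, 3, 5, 1 ordering is taken from the text.

```python
# Illustrative sketch: build the 80-bit virtual interface by taking one 10-bit word
# from each of the eight data streams in the order given above.

MUX_ORDER = [8, 4, 6, 2, 7, 3, 5, 1]

def multiplex_12g(streams):
    """streams: dict mapping stream number (1..8) to an equal-length list of 10-bit words."""
    length = len(streams[1])
    out = []
    for i in range(length):
        for s in MUX_ORDER:
            out.append(streams[s][i] & 0x3FF)
    return out

# Placeholder contents: stream n holds the words n*0x10, n*0x10 + 1, ...
demo = {n: [n * 0x10 + i for i in range(2)] for n in range(1, 9)}
print(["%03X" % w for w in multiplex_12g(demo)])
```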
  • 178. MODE 1: 2160-line source image formats and ancillary data into a 12 Gb/s [nominal] SDI bit-serial interface (UHDTV1 and Digital Cinematography Production) 178
  • 179. Mapping Process Carriage of 2160-line mapping source image formats in a 12G-SDI interface – The 2160-line source image is divided into four 1080-line sub images in accordance with the 2 sample interleave sub-division method referenced in SMPTE ST 425-5 2160-line Mapping. – For a 4:2:0 source image, the C′B and C′R samples in sub images 3 and 4 are set to the value 200h for 10-bit systems and 800h for 12-bit systems. 179
  • 180. 4-way square division vs. 4-way two-sample interleave (2SI) 180
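The sketch below illustrates one plausible reading of the two-sample interleave (2SI) division: sample pairs alternate between sub images across a line and lines alternate between sub images vertically; the exact pair and line assignment is an assumption to be checked against SMPTE ST 425-5.

```python
# Illustrative 2SI split of an image into four sub images (assumed assignment:
# sub 1 = even line / even pair, sub 2 = even line / odd pair,
# sub 3 = odd line / even pair,  sub 4 = odd line / odd pair).

def two_sample_interleave(image):
    """image: list of rows (lists of samples); returns four quarter-size sub images."""
    subs = [[], [], [], []]
    for y, row in enumerate(image):
        pairs = [row[x:x + 2] for x in range(0, len(row), 2)]
        even_pairs = [s for p in pairs[0::2] for s in p]
        odd_pairs = [s for p in pairs[1::2] for s in p]
        if y % 2 == 0:
            subs[0].append(even_pairs)   # sub image 1
            subs[1].append(odd_pairs)    # sub image 2
        else:
            subs[2].append(even_pairs)   # sub image 3
            subs[3].append(odd_pairs)    # sub image 4
    return subs

tiny = [[10 * y + x for x in range(8)] for y in range(4)]   # a toy 4x8 "image"
for n, s in enumerate(two_sample_interleave(tiny), 1):
    print("sub image", n, s)
```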
  • 183. – Each sub image is mapped into two 10-bit data streams. – Sub image 1 is mapped into data streams one and two. – Sub image 2 is mapped into data streams three and four. – Sub image 3 is mapped into data streams five and six. – Sub image 4 is mapped into data streams seven and eight. – Each data stream includes sync and timing (TRS) words, Cyclic redundancy code (CRC) words, line numbers (LN), HANC and VANC data and time code (TC). – The eight 10-bit data streams are combined onto an 80-bit virtual interface: 12G-SDI 10-bit Multiplex Type 1 183
  • 184. 2160-line 80-bit Virtual Interface Multiplex Structure Mapping Structure 1: Sub image 1 is mapped into data streams one and two: data stream one: Y′0, Y′1, Y′2, Y′3... data stream two: C′B0, C′R0, C′B1, C′R1... Sub image 2 is mapped into data streams three and four: data stream three: Y′0, Y′1, Y′2, Y′3... data stream four: C′B0, C′R0, C′B1, C′R1... Sub image 3 is mapped into data streams five and six: data stream five: Y′0, Y′1, Y′2, Y′3... data stream six: C′B0, C′R0, C′B1, C′R1... Sub image 4 is mapped into data streams seven and eight: data stream seven: Y′0, Y′1, Y′2, Y′3... data stream eight: C′B0, C′R0, C′B1, C′R1... For 4:2:0 source images, the 10-bit C′B and C′R samples in sub images 3 and 4 are set to the value 200h. Multiplex Structure: {4} C′B0, {2} C′B0, {3} C′B0, {1} C′B0, {4} Y′0, {2} Y′0, {3} Y′0, {1} Y′0, {4} C′R0, {2} C′R0, {3} C′R0, {1} C′R0, {4} Y′1, {2} Y′1, {3} Y′1, {1} Y′1, {4} C′B1, {2} C′B1, {3} C′B1, {1} C′B1, {4} Y′2, {2} Y′2, {3} Y′2, {1} Y′2, {4} C′R1, {2} C′R1, {3} C′R1, {1} C′R1, {4} Y′3, {2} Y′3, {3} Y′3, {1} Y′3…. 184
  • 185. Mapping Structure 2: Sub image 1 is mapped into data streams one and two: data stream one: G′0, R′0, G′1, R′1... data stream two: A0, B′0, A1, B′1... Sub image 2 is mapped into data streams three and four: data stream three: G′0, R′0, G′1, R′1... data stream four: A0, B′0, A1, B′1... Sub image 3 is mapped into data streams five and six: data stream five: G′0, R′0, G′1, R′1... data stream six: A0, B′0, A1, B′1... Sub image 4 is mapped into data streams seven and eight: data stream seven: G′0, R′0, G′1, R′1... data stream eight: A0, B′0, A1, B′1... Multiplex Structure: {4} A0, {2} A0, {3} A0, {1} A0, {4} G′0, {2} G′0, {3} G′0, {1} G′0, {4} B′0, {2} B′0, {3} B′0, {1} B′0, {4} R′0, {2} R′0, {3} R′0, {1} R′0, {4} A1, {2} A1, {3} A1, {1} A1, {4} G′1, {2} G′1, {3} G′1, {1} G′1, {4} B′1, {2} B′1, {3} B′1, {1} B′1, {4} R′1, {2} R′1, {3} R′1, {1} R′1…. 2160-line 80-bit Virtual Interface Multiplex Structure 185
  • 188. Mapping Structure 3: Bit b9 in every word is the complement of b8. The lists and tables below describe Bits b8 – b0 Sub image 1 is mapped into data streams one and two: data stream one: R′G′B′ 0 [11:9], R′G′B′ 0 [5:3], R′G′B′ 1 [11:9], R′G′B′ 1 [5:3]... data stream two: R′G′B′ 0 [8:6], R′G′B′ 0 [2:0], R′G′B′ 1 [8:6], R′G′B′ 1 [2:0]... Sub image 2 is mapped into data streams three and four: data stream three: R′G′B′ 0 [11:9], R′G′B′ 0 [5:3], R′G′B′ 1 [11:9], R′G′B′ 1 [5:3]... data stream four: R′G′B′ 0 [8:6], R′G′B′ 0 [2:0], R′G′B′ 1 [8:6], R′G′B′ 1 [2:0]... Sub image 3 is mapped into data streams five and six: data stream five: R′G′B′ 0 [11:9], R′G′B′ 0 [5:3], R′G′B′ 1 [11:9], R′G′B′ 1 [5:3]... data stream six: R′G′B′ 0 [8:6], R′G′B′ 0 [2:0], R′G′B′ 1 [8:6], R′G′B′ 1 [2:0]... Sub image 4 is mapped into data streams seven and eight: data stream seven: R′G′B′ 0 [11:9], R′G′B′ 0 [5:3], R′G′B′ 1 [11:9], R′G′B′ 1 [5:3]... data stream eight: R′G′B′ 0 [8:6], R′G′B′ 0 [2:0], R′G′B′ 1 [8:6], R′G′B′ 1 [2:0]... Multiplex Structure: {4} R′G′B′0 [8:6], {2} R′G′B′0 [8:6], {3} R′G′B′0 [8:6], {1} R′G′B′0 [8:6], {4} R′G′B′0 [11:9], {2} R′G′B′0 [11:9], {3} R′G′B′0 [11:9], {1} R′G′B′0 [11:9], {4} R′G′B′0 [2:0], {2} R′G′B′0 [2:0], {3} R′G′B′0 [2:0], {1} R′G′B′0 [2:0], {4} R′G′B′0 [5:3], {2} R′G′B′0 [5:3], {3} R′G′B′0 [5:3], {1} R′G′B′0 [5:3], {4} R′G′B′1 [8:6], {2} R′G′B′1 [8:6], {3} R′G′B′1 [8:6], {1} R′G′B′1 [8:6]… 2160-line 80-bit Virtual Interface Multiplex Structure 188
  • 189. Mapping Structure 4: Bit b9 in every word is the complement of b8. The lists and tables below describe Bits b8 – b0 Sub image 1 is mapped into data streams one and two: data stream one: Bits b8 – b6: A0 [11:9], A0 [5:3], A1 [11:9], A1 [5:3]... Bits b5 – b0: Y′0 [11:6], Y′0 [5:0], Y′1 [11:6], Y′1 [5:0]... data stream two: Bits b8 – b6: A0 [8:6], A0 [2:0], A1 [8:6], A1 [2:0]... Bits b5 – b0: C′B 0 [11:6], C′B 0 [5:0], C′R 0 [11:6], C′R 0 [5:0]... Sub image 2 is mapped into data streams three and four: data stream three: Bits b8 – b6: A0 [11:9], A0 [5:3], A1 [11:9], A1 [5:3]... Bits b5 – b0: Y′0 [11:6], Y′0 [5:0], Y′1 [11:6], Y′1 [5:0]... data stream four: Bits b8 – b6: A0 [8:6], A0 [2:0], A1 [8:6], A1 [2:0]... Bits b5 – b0: C′B 0 [11:6], C′B 0 [5:0], C′R 0 [11:6], C′R 0 [5:0]... Sub image 3 is mapped into data streams five and six: data stream five: Bits b8 – b6: A0 [11:9], A0 [5:3], A1 [11:9], A1 [5:3]... Bits b5 – b0: Y′0 [11:6], Y′0 [5:0], Y′1 [11:6], Y′1 [5:0]... data stream six: Bits b8 – b6: A0 [8:6], A0 [2:0], A1 [8:6], A1 [2:0]... Bits b5 – b0: C′B 0 [11:6], C′B 0 [5:0], C′R 0 [11:6], C′R 0 [5:0]... Sub image 4 is mapped into data streams seven and eight: data stream seven: Bits b8 – b6: A0 [11:9], A0 [5:3], A1 [11:9], A1 [5:3]... Bits b5 – b0: Y′0 [11:6], Y′0 [5:0], Y′1 [11:6], Y′1 [5:0]... data stream eight: Bits b8 – b6: A0 [8:6], A0 [2:0], A1 [8:6], A1 [2:0]... Bits b5 – b0: C′B 0 [11:6], C′B 0 [5:0], C′R 0 [11:6], C′R 0 [5:0]... For a 4:2:0 source image, the 12-bit C′B and C′R samples in sub images 3 and 4 are set to the value 800h. 2160-line 80-bit Virtual Interface Multiplex Structure 189
  • 190. Multiplex Structure for Mapping Structure 4: 2160-line 80-bit Virtual Interface Multiplex Structure 190
  • 191. Sync-Bit Insertion – Repeating patterns of 3FFh or 000h in the 12G-SDI 10-bit parallel multiplex can result in a long run of zeros feeding the scrambling polynomial. – To prevent long runs of zeros and ones, the 10-bit parallel multiplex data stream should be modified such that the two least significant bits of repeated 3FFh or 000h code words should be replaced by the sync-bit values of 10b for 000h words and 01b for 3FFh words. – To ensure synchronization and word alignment can be reliably achieved in the receiver, one complete sequence of preambles – 3FFh, 000h, 000h – should be retained without modification as shown in Fig. 3-15. – This Sync-bit insertion process should be reversed in the receiver restoring the original 3FFh and 000h data patterns. 191
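A simplified sketch of the sync-bit insertion described above is given below; the handling of what counts as a "repeated" word is reduced here to "any 3FFh or 000h word outside a complete 3FFh, 000h, 000h preamble", which is an illustrative simplification of the standard's rule.

```python
# Illustrative sketch of sync-bit insertion: each complete 3FFh, 000h, 000h preamble
# is retained; other 3FFh/000h words have their two LSBs replaced (01b for 3FFh,
# 10b for 000h) so that long runs of identical bits never reach the scrambler.

def insert_sync_bits(words):
    out = list(words)
    i = 0
    while i < len(out):
        if out[i:i + 3] == [0x3FF, 0x000, 0x000]:   # keep one complete TRS preamble
            i += 3
            continue
        if out[i] == 0x3FF:
            out[i] = (0x3FF & ~0x3) | 0b01          # replace the two LSBs with 01b
        elif out[i] == 0x000:
            out[i] = 0b10                           # replace the two LSBs with 10b
        i += 1
    return out

stream = [0x3FF, 0x000, 0x000, 0x3FF, 0x3FF, 0x000, 0x200]
print(["%03X" % w for w in insert_sync_bits(stream)])
# -> preamble untouched, following 3FF/000 words become 3FD/002
```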
  • 193. Audio Data – Audio data shall be mapped into the HANC space of data streams one through eight and shall be in conformance with SMPTE ST 299-1 and SMPTE ST 299-2. – Audio data packets shall be mapped into the even numbered data streams. – Audio control packets shall be mapped into the odd numbered data streams. – Audio control and data packets shall be mapped into the data stream pair one/two first and any remaining data shall then be mapped onto data stream pair three/four, then into data stream pair five/six and finally into data stream pair seven/eight. – The audio clock phase data as defined in the section “CLK (audio clock phase data)” of SMPTE ST 299-1 shall be calculated at the clock frequency of 148.5 (/1.001) MHz for 4:2:2 10-bit and 4:2:0 10-bit formats at 48/1.001, 48, 50, 60/1.001 and 60 Hz, which use Mapping Structure 1. – The audio clock phase data as defined in the section “CLK (audio clock phase data)” of SMPTE ST 299-1 shall be calculated at the clock frequency of 74.25 (/1.001) MHz for formats at 24/1.001, 24, 25, 30/1.001 or 30Hz, which use Mapping Structure 2, 3 or 4. 193
  • 194. Number of Audio Channels The number of audio channels is as defined in SMPTE ST 425-5 “Number of Audio Channels” Audio Copy Audio may be copied within the 12G interface, in order to simplify division of a single 12G signal into dual-link 6G or quad-link 3G with audio copy between links. – As an alternative to the mapping of the maximum number of unique audio channels, blocks of audio channels may be copied within the interface. – This may be as a result of the single-link 12G-SDI signal being created by combining quad-link 3G-SDI or dual-link 6G-SDI signals. – It may alternatively be done in the original single-link 12G-SDI signal in order to permit simple splitting of the single-link 12G-SDI signal into a quad-link 3G-SDI or a dual-link 6G-SDI signal. – Note: Audio copy reduces the number of channels that can be transported by the interface. Audio Data 194
  • 195. Inherited Audio Copy as a result of combining multi-link 3G-SDI or 6G-SDI signals – In the case where the audio data has been embedded according to SMPTE ST 425-5, for example when the audio was embedded in a quad-link 3G interface that has been combined into a single-link 12G interface, the audio in data stream pair three/four, five/six and seven/eight may be a copy of the audio in data stream pair one/two. – Similarly where the audio has been embedded according to SMPTE ST 2081-11 in a dual-link 6G interface that has been combined into a single 12G interface, the audio in data stream pair five/six and seven/eight may be a copy of the audio in data stream pair one/two and three/four. Audio Data 195
  • 196. Originated Audio Copy in 12G-SDI signal If audio is copied: – Data stream pair one/two shall always carry original audio. – Data stream pair three/four may carry additional channels of original audio. – Data stream pairs five/six and seven/eight may carry additional channels of original audio, as long as data stream pair three/four is carrying original audio. – Data stream pairs five/six and seven/eight may carry copied audio from data stream pairs one/two and three/four. – Data stream pair three/four may carry copied audio from data stream pair one/two. In this case data stream pairs five/six and seven/eight shall also carry the same copied audio. – The audio copy status of each data stream shall be signaled in the PID. 196
  • 197. A quad-link 3G interface combined into a single 12G-SDI interface, and the possible status of audio copy on each data stream. 197
  • 198. A dual-link 6G interface combined into a single 12G-SDI interface, and the possible status of audio copy on each data stream. 198
  • 199. 12G-SDI signal Eye Diagram HD-SDI (1.5G) Eye Diagram Typical 12G-SDI Eye Diagram 199
  • 200. Pathological signals for UHD (6G and 12G-SDI) SMPTE RP 198-1998 − Pathological signals are recommended by SMPTE for use with SD, HD & 3G standards only − Not yet approved by SMPTE for 6G or 12G-SDI − … may take a few more years for approval! − New pathological tools for Qx − With pathological patterns, the pathological signal only occurs statistically every 512 lines! − A new tool will provide feedback of pathological conditions on the interface with GPI trigger output − Checkfield, PLL and EQ testing 200
  • 201. Pathological signals for UHD –PHABRIX solution – Pathological Checkfield Overlay developed by PHABRIX with major SDI chip manufacturer co-operation – Used to verify how sensitive the SDI link is to pathological conditions on the interface 201
  • 203. SMPTE ST 2082-11 4320-line and 2160-line Source Image and Ancillary Data Mapping for Dual-link 12G-SDI MODE 1: 4320-line Y′C′BC′R 4:2:2 and 4:2:0 10-bit image formats and ancillary data on a Dual-link 12 Gb/s [nominal] SDI bit-serial interface MODE 2: 2160-line R′G′B′, Y′C′BC′R 4:4:4(:4) 10-bit and 4:4:4 12-bit image formats and ancillary data on a Dual-link 12 Gb/s [nominal] SDI bit-serial interface MODE 3: 2160-line Y′C′BC′R 4:2:2 and 4:2:0 10-bit Additional Frame Rate Source image formats and ancillary data on a Dual-link 12 Gb/s [nominal] SDI bit-serial interface 203
  • 204. Dual-link 12G-SDI 2 x 10-bit Multiplex – Type 1 204
  • 205. Dual-link 12G-SDI 2 x 10-bit Multiplex – Type 1 205
  • 206. MODE 1: 4320-line Y′C′BC′R 4:2:2 and 4:2:0 10-bit image formats and ancillary data on a Dual-link 12 Gb/s [nominal] SDI bit-serial interface (UHDTV2 Production) 206
  • 207. MODE 2: 2160-line R′G′B′, Y′C′BC′R 4:4:4(:4) 10-bit and 4:4:4 12-bit image formats and ancillary data on a Dual-link 12 Gb/s [nominal] SDI bit-serial interface (UHDTV1 and Digital Cinematography Production) Notes: *1 In this image format R′G′B′ indicates either R′G′B′ or R′FS G′FS B′FS. *2 This is the maximum pixel array; the active image may not fill the maximum array. 207
  • 208. MODE 3: 2160-line Y′C′BC′R 4:2:2 and 4:2:0 10-bit Additional Frame Rate Source image formats and ancillary data on a Dual-link 12 Gb/s [nominal] SDI bit-serial interface (UHDTV1 and Digital Cinematography Production AFR) 208
  • 209. Carriage of 4320-line images in a Dual-link 12G interface Generalized process as used by Mode 1 209
  • 210. Carriage of 2160-line images in a Dual-link 12G interface Generalized process as used by Mode 2 and Mode 3 210
  • 211. For a 4:2:0 source image, the C′B and C′R samples in intermediate sub images 3 and 4 shall be set to the value 200h. Mode 1: Carriage of 4320-line Y′C′BC′R 4:2:2 and 4:2:0 10-bit Source Image Formats and Ancillary Data 211
  • 212. 212
  • 213. Mode2: Carriage of 2160-line image formats in a dual-link 12G-SDI interface – The 2160-line source images shall be divided into four 1080-line 4:2:2 or 4:2:0 sub images in accordance with the 2 sample interleave sub- division method referenced in SMPTE ST 425-5 “2160-line image division into four sub images”. – For a 4:2:0 source image, the C′B and C′R samples in sub images 3 and 4 shall be set to the value 200h 213
  • 214. Mode 3: The carriage of 2160-line AFR source image formats in a Dual link 12G-SDI interface 214
  • 215. Mapping of 1080-line Sub Images in Mode 3 Structure of each data stream for 120 Hz, 120/1.001 Hz, 100 Hz, 96 Hz or 96/1.001 Hz frame rates − Each 1080-line 4:2:2 sub image shall be mapped to a 40-bit virtual interface consisting of four data streams. (The structure of each data stream shall be as illustrated in the Figure.) 215
  • 217. SMPTE ST 2082-12 Document Roadmap 4320-line and 2160-line Source Image and Ancillary Data Mapping for Quad-link 12G-SDI MODE 1: 4320-line Source image formats and ancillary data into a Quad-link 12 Gb/s [nominal] SDI bit serial interface MODE 2: 2160-line Y′C′BC′R or R′G′B′ 4:4:4:4 10-bit or 4:4:4 10-bit or 12-bit Additional Frame Rate (AFR) Source image formats and ancillary data into a Quad-link 12 Gb/s [nominal] SDI bit-serial interface 217
  • 218. Quad-link 12G-SDI 4 x 10-bit Type 1 Multiplex 218
  • 219. Quad-link 12G-SDI 4 x 10-bit Type 1 Multiplex 219
  • 220. MODE 1: 4320-line Source image formats and ancillary data into a Quad- link 12 Gb/s [nominal] SDI bit serial interface (UHDTV2 Production) 220
  • 221. MODE 2: 2160-line Y′C′BC′R or R′G′B′ 4:4:4:4 10-bit or 4:4:4 10-bit or 12-bit Additional Frame Rate (AFR) Source image formats and ancillary data into a Quad-link 12 Gb/s [nominal] SDI bit-serial interface (UHDTV1 and Digital Cinematography Production AFR) Notes: *1 In this image format R′G′B′ indicates either R′G′B′ or R′FS G′FS B′FS. *2 This is the maximum pixel array; the active image may not fill the maximum array. 221
  • 222. Carriage of 4320-line Images on a Quad-link 12G interface Generalized process as used by Mode 1 222
  • 223. Carriage of 2160-line Images on a Quad-link 12G interface Generalized process as used by Mode 2 223
  • 224. For 4:2:0 source images, the 10-bit C′B and C′R samples in intermediate sub images 3 and 4 are set to the value 200h and the 12-bit C′B and C′R samples in intermediate sub images 3 and 4 are set to the value 800h. Mode 1: Carriage of 4320-line Source Image Formats and Ancillary Data 224
  • 225. 225
  • 226. MODE 2: The process for the carriage of 2160-line AFR source image formats in a Quad-link 12G-SDI interface (The division of the source image format into four sub images) The division of the source image format into four sub images 226
  • 227. MODE 2: The process for the carriage of 2160-line AFR source image formats in a Quad-link 12G-SDI interface (The mapping of sub image one onto 12G-SDI Link 1) The mapping of sub image one onto 12G-SDI Link 1 227
  • 228. Mapping of 1080-line Sub Images in Mode 2 − Each 1080-line sub image shall be mapped to an 80-bit virtual interface consisting of eight data streams. (The structure of each data stream shall be as illustrated in the Figure.) Structure of each data stream for 120 Hz, 120/1.001 Hz, 100 Hz, 96 Hz or 96/1.001 Hz frame rates 228
  • 230. ST 2083-xx 24G SDI / ITU-R BT.2077-1 Part 3 230
  • 231. ST 2083-xx 24G SDI / ITU-R BT.2077-1 Part 3 231
  • 232. 24G-SDI 10-bit Multiplex – Type 1 232
  • 233. 24G-SDI 10-bit Multiplex – Type 1 233
  • 234. 24G-SDI 10-bit Multiplex – Type 1 234
  • 235. 24G-SDI 10-bit Multiplex – Type 1 235
  • 237. Video Payload Identifier (VPID) – Video payload identifier monitoring is more important than ever: with such a wide variety of formats, it is essential to use the SMPTE ST 352 Video Payload Identifier (VPID). – The SMPTE ST 352 Video Payload Identifier (VPID) is carried within the ancillary data space to assist a device in quickly decoding the video signal. – The payload identifier consists of 4 bytes where each byte has a separate significance. – The first byte of the payload identifier has the highest significance and subsequent bytes define lower-order video and ancillary payload information. – The horizontal placement of the packet should be immediately following the last CRC code word (CR1) of the line(s) specified in SMPTE ST 352 for 1125-line systems. Note: The line numbers defined in SMPTE ST 352 for the placement of the payload identifier packet in 1125-line systems avoid those lines used by SMPTE ST 299-1 and SMPTE ST 299-2 for the carriage of digital audio control packets and extended audio control packets, respectively. 237
  • 238. Video Payload Identifier (VPID) – 525- and 625-line digital interfaces, interlace: once per field • 525I (field 1): Line 13, 525I (field 2): Line 276 • 625I (field 1): Line 9, 625I (field 2): Line 322 – 525- and 625-line digital interfaces, progressive: once per frame • 525P: Line 13, 625P: Line 9 – 750-line digital interfaces, progressive: once per frame • 750P: Line 10 – 1125-line digital interfaces, interlace and segmented-frame: once per field (segment). • 1125I (field 1): Line 10, 1125I (field 2): Line 572 – 1125-line digital interfaces, progressive: once per frame • 1125P: Line 10 238
  • 239. – The VPID conforms to the SMPTE 291 Ancillary Data Packet and Space Formatting standard and contains the Ancillary Data Flag (ADF), Data Identifier (DID), Secondary Data Identifier (SDID), Data Count (DC), User Data Words (UDW1-4) and Checksum. – It is sent as 4 User Data Words (UDW), UDW1–UDW4, in the specified line in each frame or field. Video Payload Identifier (VPID) ADF (000h, 3FFh, 3FFh) | DID | SDID | DC | User Data Words (max 255 words) | CS 239
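The sketch below shows one way the four VPID user data words might be unpacked for display; the field names attached to each byte are approximate descriptions only, and the example words are invented for illustration, since the real byte meanings come from the SMPTE ST 352 tables.

```python
# Illustrative sketch: pull the four 8-bit VPID payload bytes out of UDW1..UDW4.

def split_vpid(udw):
    """udw: the four 10-bit user data words UDW1..UDW4; return their 8-bit payload bytes."""
    assert len(udw) == 4
    return [w & 0xFF for w in udw]            # bits b7..b0 carry the payload byte

# Approximate byte descriptions; the authoritative meanings are in the ST 352 tables.
labels = ["byte 1 (payload/transport)", "byte 2 (picture rate / structure)",
          "byte 3 (sampling / colorimetry)", "byte 4 (bit depth / assignment)"]

example_udw = [0x289, 0x1C9, 0x200, 0x101]    # made-up words for illustration only
for label, value in zip(labels, split_vpid(example_udw)):
    print(f"{label}: 0x{value:02X}")
```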
  • 240. – The video payload ID tells you a lot about the signal you are receiving – It is sent as 4 User Data Words (UDW), UDW1–UDW4 – You need the magic decoder ring to decode it correctly. Video Payload Identifier (VPID) In quad-3G: 16 UDW 240
  • 241. This one is different VPID for each Link and is UHD 59.94p Y CbCr Quad 3G level A Video Payload Identifier (VPID) 241
  • 242. Video Payload Identifier (VPID) This one is the same VPID for each Link and is UHD 29.97p Y CbCr Quad HD 242
  • 246. SDI Metadata, HDR, WCG – Some metadata about HDR and WCG has recently been added to the SDI feed. – Is it an ST 2084 PQ curve or HLG? – What is the diffuse white point? – What is the grading point: 1K nits, 2K nits or 540 nits? – Is it full levels or narrow levels (SMPTE levels)? – The metadata for HDMI and the monitor will be added when the content is encoded, either manually typed in or read from a metadata sidecar file. 246
  • 247. Payload identifier definitions for 1080-line payloads on a 1.5 Gbit/s (nominal) serial digital interface 247
  • 248. Payload identifier definitions for 1080-line video payloads on a quad-link 1. 5 Gbit/s (nominal) serial digital interface 248
  • 249. Payload identifier definitions for 1920 ×1080 video payloads on dual link high definition digital interfaces 249
  • 250. Additionally supported picture payload identifier definitions for 1920 × 1080 video payloads on dual link high definition digital interfaces 250
  • 251. Payload identifier definitions for 1080-line payloads on a 3Gbit/s (nominal) serial digital interface 251
  • 252. Payload Identifier Definitions for 2160-line Video Payload for Mapping on a 12 Gb/s (nominal) Serial Interface 252
  • 253. Payload Identifier Definitions for 2160-line Video Payload for Mapping on a 12 Gb/s (nominal) Serial Interface 253
  • 254. Payload Identifier Definitions for 2160-line Video Payload for Mapping on a 12 Gb/s (nominal) Serial Interface 254
  • 255. Ancillary Data Capacity of the 12G-SDI Interface – The ancillary data space available in serial digital interface transports is approximately equivalent to horizontal interval space and vertical interval space for the image format being transported. – In the case of images transported on the interface specified in this standard, it is dependent on the horizontal interval space and vertical interval space for each of the data streams being carried on the interface, multiplied by the number of data streams. – SMPTE RP 291-2 provides information on the size of the ancillary data space in a SMPTE ST 425-1 and SMPTE ST 292-1 interface. – For Mode 1 2160-line source image formats, the available HANC and VANC data space on the interface is 4 times the HANC and VANC data space available on a SMPTE ST 425-1 interface carrying the corresponding sub-image. 255