Background: #fff
Foreground: #000
PrimaryPale: #8cf
PrimaryLight: #18f
PrimaryMid: #04b
PrimaryDark: #014
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
<div class='toolbar' macro='toolbar [[ToolbarCommands::EditToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='editor' macro='edit title'></div>
<div macro='annotations'></div>
<div class='editor' macro='edit text'></div>
<div class='editor' macro='edit tags'></div><div class='editorFooter'><span macro='message views.editor.tagPrompt'></span><span macro='tagChooser excludeLists'></span></div>
To get started with this blank [[TiddlyWiki]], you'll need to modify the following tiddlers:
* [[SiteTitle]] & [[SiteSubtitle]]: The title and subtitle of the site, as shown above (after saving, they will also appear in the browser title bar)
* [[MainMenu]]: The menu (usually on the left)
* [[DefaultTiddlers]]: Contains the names of the tiddlers that you want to appear when the TiddlyWiki is opened
You'll also need to enter your username for signing your edits: <<option txtUserName>>
<link rel='alternate' type='application/rss+xml' title='RSS' href='index.xml' />
These [[InterfaceOptions]] for customising [[TiddlyWiki]] are saved in your browser

Your username for signing your edits. Write it as a [[WikiWord]] (eg [[JoeBloggs]])

<<option txtUserName>>
<<option chkSaveBackups>> [[SaveBackups]]
<<option chkAutoSave>> [[AutoSave]]
<<option chkRegExpSearch>> [[RegExpSearch]]
<<option chkCaseSensitiveSearch>> [[CaseSensitiveSearch]]
<<option chkAnimate>> [[EnableAnimations]]

Also see [[AdvancedOptions]]
<div class='header' role='banner' macro='gradient vert [[ColorPalette::PrimaryLight]] [[ColorPalette::PrimaryMid]]'>
<div class='headerShadow'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
<div class='headerForeground'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
</div>
<div id='mainMenu' role='navigation' refresh='content' tiddler='MainMenu'></div>
<div id='sidebar'>
<div id='sidebarOptions' role='navigation' refresh='content' tiddler='SideBarOptions'></div>
<div id='sidebarTabs' role='complementary' refresh='content' force='true' tiddler='SideBarTabs'></div>
</div>
<div id='displayArea' role='main'>
<div id='messageArea'></div>
<div id='tiddlerDisplay'></div>
</div>
body {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}

a {color:[[ColorPalette::PrimaryMid]];}
a:hover {background-color:[[ColorPalette::PrimaryMid]]; color:[[ColorPalette::Background]];}
a img {border:0;}

h1,h2,h3,h4,h5,h6 {color:[[ColorPalette::SecondaryDark]]; background:transparent;}
h1 {border-bottom:2px solid [[ColorPalette::TertiaryLight]];}
h2,h3 {border-bottom:1px solid [[ColorPalette::TertiaryLight]];}

.button {color:[[ColorPalette::PrimaryDark]]; border:1px solid [[ColorPalette::Background]];}
.button:hover {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::SecondaryLight]]; border-color:[[ColorPalette::SecondaryMid]];}
.button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::SecondaryDark]];}

.header {background:[[ColorPalette::PrimaryMid]];}
.headerShadow {color:[[ColorPalette::Foreground]];}
.headerShadow a {font-weight:normal; color:[[ColorPalette::Foreground]];}
.headerForeground {color:[[ColorPalette::Background]];}
.headerForeground a {font-weight:normal; color:[[ColorPalette::PrimaryPale]];}

.tabSelected {color:[[ColorPalette::PrimaryDark]];
	border-left:1px solid [[ColorPalette::TertiaryLight]];
	border-top:1px solid [[ColorPalette::TertiaryLight]];
	border-right:1px solid [[ColorPalette::TertiaryLight]];
}
.tabUnselected {color:[[ColorPalette::Background]]; background:[[ColorPalette::TertiaryMid]];}
.tabContents {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::TertiaryPale]]; border:1px solid [[ColorPalette::TertiaryLight]];}
.tabContents .button {border:0;}

#sidebar {}
#sidebarOptions input {border:1px solid [[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel {background:[[ColorPalette::PrimaryPale]];}
#sidebarOptions .sliderPanel a {border:none;color:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:hover {color:[[ColorPalette::Background]]; background:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:active {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::Background]];}

.wizard {background:[[ColorPalette::PrimaryPale]]; border:1px solid [[ColorPalette::PrimaryMid]];}
.wizard h1 {color:[[ColorPalette::PrimaryDark]]; border:none;}
.wizard h2 {color:[[ColorPalette::Foreground]]; border:none;}
.wizardStep {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];
	border:1px solid [[ColorPalette::PrimaryMid]];}
.wizardStep.wizardStepDone {background:[[ColorPalette::TertiaryLight]];}
.wizardFooter {background:[[ColorPalette::PrimaryPale]];}
.wizardFooter .status {background:[[ColorPalette::PrimaryDark]]; color:[[ColorPalette::Background]];}
.wizard .button {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryLight]]; border: 1px solid;
	border-color:[[ColorPalette::SecondaryPale]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryPale]];}
.wizard .button:hover {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Background]];}
.wizard .button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::Foreground]]; border: 1px solid;
	border-color:[[ColorPalette::PrimaryDark]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryDark]];}

.wizard .notChanged {background:transparent;}
.wizard .changedLocally {background:#80ff80;}
.wizard .changedServer {background:#8080ff;}
.wizard .changedBoth {background:#ff8080;}
.wizard .notFound {background:#ffff80;}
.wizard .putToServer {background:#ff80ff;}
.wizard .gotFromServer {background:#80ffff;}

#messageArea {border:1px solid [[ColorPalette::SecondaryMid]]; background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]];}
#messageArea .button {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::SecondaryPale]]; border:none;}

.popupTiddler {background:[[ColorPalette::TertiaryPale]]; border:2px solid [[ColorPalette::TertiaryMid]];}

.popup {background:[[ColorPalette::TertiaryPale]]; color:[[ColorPalette::TertiaryDark]]; border-left:1px solid [[ColorPalette::TertiaryMid]]; border-top:1px solid [[ColorPalette::TertiaryMid]]; border-right:2px solid [[ColorPalette::TertiaryDark]]; border-bottom:2px solid [[ColorPalette::TertiaryDark]];}
.popup hr {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::PrimaryDark]]; border-bottom:1px;}
.popup li.disabled {color:[[ColorPalette::TertiaryMid]];}
.popup li a, .popup li a:visited {color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:active {background:[[ColorPalette::SecondaryPale]]; color:[[ColorPalette::Foreground]]; border: none;}
.popupHighlight {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
.listBreak div {border-bottom:1px solid [[ColorPalette::TertiaryDark]];}

.tiddler .defaultCommand {font-weight:bold;}

.shadow .title {color:[[ColorPalette::TertiaryDark]];}

.title {color:[[ColorPalette::SecondaryDark]];}
.subtitle {color:[[ColorPalette::TertiaryDark]];}

.toolbar {color:[[ColorPalette::PrimaryMid]];}
.toolbar a {color:[[ColorPalette::TertiaryLight]];}
.selected .toolbar a {color:[[ColorPalette::TertiaryMid]];}
.selected .toolbar a:hover {color:[[ColorPalette::Foreground]];}

.tagging, .tagged {border:1px solid [[ColorPalette::TertiaryPale]]; background-color:[[ColorPalette::TertiaryPale]];}
.selected .tagging, .selected .tagged {background-color:[[ColorPalette::TertiaryLight]]; border:1px solid [[ColorPalette::TertiaryMid]];}
.tagging .listTitle, .tagged .listTitle {color:[[ColorPalette::PrimaryDark]];}
.tagging .button, .tagged .button {border:none;}

.footer {color:[[ColorPalette::TertiaryLight]];}
.selected .footer {color:[[ColorPalette::TertiaryMid]];}

.error, .errorButton {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Error]];}
.warning {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryPale]];}
.lowlight {background:[[ColorPalette::TertiaryLight]];}

.zoomer {background:none; color:[[ColorPalette::TertiaryMid]]; border:3px solid [[ColorPalette::TertiaryMid]];}

.imageLink, #displayArea .imageLink {background:transparent;}

.annotation {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border:2px solid [[ColorPalette::SecondaryMid]];}

.viewer .listTitle {list-style-type:none; margin-left:-2em;}
.viewer .button {border:1px solid [[ColorPalette::SecondaryMid]];}
.viewer blockquote {border-left:3px solid [[ColorPalette::TertiaryDark]];}

.viewer table, table.twtable {border:2px solid [[ColorPalette::TertiaryDark]];}
.viewer th, .viewer thead td, .twtable th, .twtable thead td {background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::Background]];}
.viewer td, .viewer tr, .twtable td, .twtable tr {border:1px solid [[ColorPalette::TertiaryDark]];}

.viewer pre {border:1px solid [[ColorPalette::SecondaryLight]]; background:[[ColorPalette::SecondaryPale]];}
.viewer code {color:[[ColorPalette::SecondaryDark]];}
.viewer hr {border:0; border-top:dashed 1px [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::TertiaryDark]];}

.highlight, .marked {background:[[ColorPalette::SecondaryLight]];}

.editor input {border:1px solid [[ColorPalette::PrimaryMid]];}
.editor textarea {border:1px solid [[ColorPalette::PrimaryMid]]; width:100%;}
.editorFooter {color:[[ColorPalette::TertiaryMid]];}
.readOnly {background:[[ColorPalette::TertiaryPale]];}

#backstageArea {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::TertiaryMid]];}
#backstageArea a {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstageArea a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; }
#backstageArea a.backstageSelTab {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
#backstageButton a {background:none; color:[[ColorPalette::Background]]; border:none;}
#backstageButton a:hover {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstagePanel {background:[[ColorPalette::Background]]; border-color: [[ColorPalette::Background]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]];}
.backstagePanelFooter .button {border:none; color:[[ColorPalette::Background]];}
.backstagePanelFooter .button:hover {color:[[ColorPalette::Foreground]];}
#backstageCloak {background:[[ColorPalette::Foreground]]; opacity:0.6; filter:alpha(opacity=60);}
* html .tiddler {height:1%;}

body {font-size:.75em; font-family:arial,helvetica; margin:0; padding:0;}

h1,h2,h3,h4,h5,h6 {font-weight:bold; text-decoration:none;}
h1,h2,h3 {padding-bottom:1px; margin-top:1.2em;margin-bottom:0.3em;}
h4,h5,h6 {margin-top:1em;}
h1 {font-size:1.35em;}
h2 {font-size:1.25em;}
h3 {font-size:1.1em;}
h4 {font-size:1em;}
h5 {font-size:.9em;}

hr {height:1px;}

a {text-decoration:none;}

dt {font-weight:bold;}

ol {list-style-type:decimal;}
ol ol {list-style-type:lower-alpha;}
ol ol ol {list-style-type:lower-roman;}
ol ol ol ol {list-style-type:decimal;}
ol ol ol ol ol {list-style-type:lower-alpha;}
ol ol ol ol ol ol {list-style-type:lower-roman;}
ol ol ol ol ol ol ol {list-style-type:decimal;}

.txtOptionInput {width:11em;}

#contentWrapper .chkOptionInput {border:0;}

.externalLink {text-decoration:underline;}

.indent {margin-left:3em;}
.outdent {margin-left:3em; text-indent:-3em;}
code.escaped {white-space:nowrap;}

.tiddlyLinkExisting {font-weight:bold;}
.tiddlyLinkNonExisting {font-style:italic;}

/* the 'a' is required for IE, otherwise it renders the whole tiddler in bold */
a.tiddlyLinkNonExisting.shadow {font-weight:bold;}

#mainMenu .tiddlyLinkExisting,
	#mainMenu .tiddlyLinkNonExisting,
	#sidebarTabs .tiddlyLinkNonExisting {font-weight:normal; font-style:normal;}
#sidebarTabs .tiddlyLinkExisting {font-weight:bold; font-style:normal;}

.header {position:relative;}
.header a:hover {background:transparent;}
.headerShadow {position:relative; padding:4.5em 0 1em 1em; left:-1px; top:-1px;}
.headerForeground {position:absolute; padding:4.5em 0 1em 1em; left:0; top:0;}

.siteTitle {font-size:3em;}
.siteSubtitle {font-size:1.2em;}

#mainMenu {position:absolute; left:0; width:10em; text-align:right; line-height:1.6em; padding:1.5em 0.5em 0.5em 0.5em; font-size:1.1em;}

#sidebar {position:absolute; right:3px; width:16em; font-size:.9em;}
#sidebarOptions {padding-top:0.3em;}
#sidebarOptions a {margin:0 0.2em; padding:0.2em 0.3em; display:block;}
#sidebarOptions input {margin:0.4em 0.5em;}
#sidebarOptions .sliderPanel {margin-left:1em; padding:0.5em; font-size:.85em;}
#sidebarOptions .sliderPanel a {font-weight:bold; display:inline; padding:0;}
#sidebarOptions .sliderPanel input {margin:0 0 0.3em 0;}
#sidebarTabs .tabContents {width:15em; overflow:hidden;}

.wizard {padding:0.1em 1em 0 2em;}
.wizard h1 {font-size:2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizard h2 {font-size:1.2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizardStep {padding:1em 1em 1em 1em;}
.wizard .button {margin:0.5em 0 0; font-size:1.2em;}
.wizardFooter {padding:0.8em 0.4em 0.8em 0;}
.wizardFooter .status {padding:0 0.4em; margin-left:1em;}
.wizard .button {padding:0.1em 0.2em;}

#messageArea {position:fixed; top:2em; right:0; margin:0.5em; padding:0.5em; z-index:2000; _position:absolute;}
.messageToolbar {display:block; text-align:right; padding:0.2em;}
#messageArea a {text-decoration:underline;}

.tiddlerPopupButton {padding:0.2em;}
.popupTiddler {position: absolute; z-index:300; padding:1em; margin:0;}

.popup {position:absolute; z-index:300; font-size:.9em; padding:0; list-style:none; margin:0;}
.popup .popupMessage {padding:0.4em;}
.popup hr {display:block; height:1px; width:auto; padding:0; margin:0.2em 0;}
.popup li.disabled {padding:0.4em;}
.popup li a {display:block; padding:0.4em; font-weight:normal; cursor:pointer;}
.listBreak {font-size:1px; line-height:1px;}
.listBreak div {margin:2px 0;}

.tabset {padding:1em 0 0 0.5em;}
.tab {margin:0 0 0 0.25em; padding:2px;}
.tabContents {padding:0.5em;}
.tabContents ul, .tabContents ol {margin:0; padding:0;}
.txtMainTab .tabContents li {list-style:none;}
.tabContents li.listLink { margin-left:.75em;}

#contentWrapper {display:block;}
#splashScreen {display:none;}

#displayArea {margin:1em 17em 0 14em;}

.toolbar {text-align:right; font-size:.9em;}

.tiddler {padding:1em 1em 0;}

.missing .viewer,.missing .title {font-style:italic;}

.title {font-size:1.6em; font-weight:bold;}

.missing .subtitle {display:none;}
.subtitle {font-size:1.1em;}

.tiddler .button {padding:0.2em 0.4em;}

.tagging {margin:0.5em 0.5em 0.5em 0; float:left; display:none;}
.isTag .tagging {display:block;}
.tagged {margin:0.5em; float:right;}
.tagging, .tagged {font-size:0.9em; padding:0.25em;}
.tagging ul, .tagged ul {list-style:none; margin:0.25em; padding:0;}
.tagClear {clear:both;}

.footer {font-size:.9em;}
.footer li {display:inline;}

.annotation {padding:0.5em; margin:0.5em;}

* html .viewer pre {width:99%; padding:0 0 1em 0;}
.viewer {line-height:1.4em; padding-top:0.5em;}
.viewer .button {margin:0 0.25em; padding:0 0.25em;}
.viewer blockquote {line-height:1.5em; padding-left:0.8em;margin-left:2.5em;}
.viewer ul, .viewer ol {margin-left:0.5em; padding-left:1.5em;}

.viewer table, table.twtable {border-collapse:collapse; margin:0.8em 1.0em;}
.viewer th, .viewer td, .viewer tr,.viewer caption,.twtable th, .twtable td, .twtable tr,.twtable caption {padding:3px;}
table.listView {font-size:0.85em; margin:0.8em 1.0em;}
table.listView th, table.listView td, table.listView tr {padding:0 3px 0 3px;}

.viewer pre {padding:0.5em; margin-left:0.5em; font-size:1.2em; line-height:1.4em; overflow:auto;}
.viewer code {font-size:1.2em; line-height:1.4em;}

.editor {font-size:1.1em;}
.editor input, .editor textarea {display:block; width:100%; font:inherit;}
.editorFooter {padding:0.25em 0; font-size:.9em;}
.editorFooter .button {padding-top:0; padding-bottom:0;}

.fieldsetFix {border:0; padding:0; margin:1px 0px;}

.zoomer {font-size:1.1em; position:absolute; overflow:hidden;}
.zoomer div {padding:1em;}

* html #backstage {width:99%;}
* html #backstageArea {width:99%;}
#backstageArea {display:none; position:relative; overflow: hidden; z-index:150; padding:0.3em 0.5em;}
#backstageToolbar {position:relative;}
#backstageArea a {font-weight:bold; margin-left:0.5em; padding:0.3em 0.5em;}
#backstageButton {display:none; position:absolute; z-index:175; top:0; right:0;}
#backstageButton a {padding:0.1em 0.4em; margin:0.1em;}
#backstage {position:relative; width:100%; z-index:50;}
#backstagePanel {display:none; z-index:100; position:absolute; width:90%; margin-left:3em; padding:1em;}
.backstagePanelFooter {padding-top:0.2em; float:right;}
.backstagePanelFooter a {padding:0.2em 0.4em;}
#backstageCloak {display:none; z-index:20; position:absolute; width:100%; height:100px;}

.whenBackstage {display:none;}
.backstageVisible .whenBackstage {display:block;}
StyleSheet for use when a translation requires any css style changes.
This StyleSheet can be used directly by languages such as Chinese, Japanese and Korean which need larger font sizes.
body {font-size:0.8em;}
#sidebarOptions {font-size:1.05em;}
#sidebarOptions a {font-style:normal;}
#sidebarOptions .sliderPanel {font-size:0.95em;}
.subtitle {font-size:0.8em;}
.viewer table.listView {font-size:0.95em;}
@media print {
#mainMenu, #sidebar, #messageArea, .toolbar, #backstageButton, #backstageArea {display: none !important;}
#displayArea {margin: 1em 1em 0em;}
noscript {display:none;} /* Fixes a feature in Firefox where print preview displays the noscript content */
}
<div class='toolbar' role='navigation' macro='toolbar [[ToolbarCommands::ViewToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='subtitle'><span macro='view modifier link'></span>, <span macro='view modified date'></span> (<span macro='message views.wikified.createdPrompt'></span> <span macro='view created date'></span>)</div>
<div class='tagging' macro='tagging'></div>
<div class='tagged' macro='tags'></div>
<div class='viewer' macro='view text wikified'></div>
<div class='tagClear'></div>
Hugh and I go over the fundamentals of sound and recorded audio in the first of a two-parter

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>
Hugh and Phil continue their discussion of audio and make particular mention of cabling for TV facilities.

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>
Phil and Hugh talk about modern picture quality analysis and why good old-fashioned colour bars are of little use to the modern broadcast engineer!

<html><iframe width="480" height="270" src="" frameborder="0" allowfullscreen></iframe></html>

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>

CCIR rec 601 – Standard Definition Digital Video
Originally the 1982 standard defined:
* 4 x 3 aspect ratio
* 720 pixels x 576 lines – enough pixels for 5.5MHz video
* Y Cb Cr luminance/colour encoding at 4:2:2 data rate – half-resolution colour difference
There are several things worth noting:
* A 4 x 3 display with 720 x 576 gives non-square pixels (almost square, but not quite)
* When 16 x 9 came along pixels got very non-square, same 720 x 576 resolution
* Colour space & sampling structure unlike graphics formats

Remember – at this point Photoshop was still pre v.1 and 601 served the needs of TV images.
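The non-square-pixel point can be checked with a little arithmetic (a quick sketch; the function name is ours):

```python
from fractions import Fraction

def pixel_aspect_ratio(dar_w, dar_h, width, height):
    """Pixel aspect ratio needed for a width x height raster
    to fill a dar_w:dar_h display exactly."""
    return Fraction(dar_w, dar_h) / Fraction(width, height)

# 4 x 3 on 720 x 576: 16/15, almost square but not quite
par_4x3 = pixel_aspect_ratio(4, 3, 720, 576)

# 16 x 9 on the same 720 x 576 raster: 64/45, very non-square
par_16x9 = pixel_aspect_ratio(16, 9, 720, 576)
```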
SDi – a standard that has grown with time
Whereas CCIR Rec 601 defines the sampling structure of standard definition digital video, the SDi standard allows multiple data rates:

image: Wikipedia
3G Video over coaxial cable
Since the introduction of HDCamSR in 2004 most engineers have viewed the dual-link interface (2 x 1.5G) as ungainly. In 2006 a new standard for three-gigabit SDi was proposed:
* SMPTE372 is the standard that covers sixty(!) transport formats.
* A 3G payload can carry many variations of Y Cb Cr / RGB / XYZ colour, frame rates etc.
* You can even send two 1.48G HD-SDi streams down one side of a 3G connection - this is being referred to as SMPTE 292B (an extension of the original HD-SDi spec).
* Inter-link timing discrepancy can be a max. of 40nS (not long!).

etc etc....
320 x 200, 320 x 240, 720 x 480, 640 x 480, 720 x 576, 854 x 480, 800 x 600, 1024 x 768, 1280 x 720
1920 x 1080

WUXGA: 1920 x 1200
QXGA: 2048 x 1536
From video to computer display formats
* VGA - an analogue RGB format popular from the eighties onwards. Resolutions up to and beyond 2K possible. DDC allows the monitor to tell the equipment what resolutions are possible.
* DVI – since 2001 this has been replacing VGA. Digital RGB (the interface uses TMDS ‘lanes’) all the way up to 1920x1200 (single-link) and 3840x2400 (dual-link). DDC supported although EDID favoured (multi-res profiles, embedded).

DVI –> VGA breakout
From video to computer display formats cont.
HDMI – A recent standard that is electrically identical to DVI-D but with a different connector and the ability to carry digital audio and copy protection (HDCP – see later).

HDMI -> DVI breakout
From video to computer display formats cont.
DisplayPort – a royalty-free next-gen display standard. Like DVI & HDMI it uses the same TMDS data-lanes idea, but a lane is not dedicated to a particular colour component. At low resolution a single lane can carry R, G, B & control data; at higher resolutions up to four lanes can be used, decided during a handshake process with the monitor: “Link Training”.
DisplayPort version 1.2 was approved on December 22, 2009. The most significant improvement in the new version is the doubling of effective bandwidth to 17.28 Gbit/s, which allows for increased resolutions, higher refresh rates and greater colour depth.
If the graphics card (or Blu-ray player etc.) sees a DVI/HDMI device it runs in “dual mode”, where the four TMDS lanes become R, G, B & control channels to emulate a DVI or HDMI source.
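The 17.28 Gbit/s figure falls out of the lane arithmetic (a sketch; 5.4 Gbit/s is the per-lane HBR2 line rate, and 8b/10b coding costs 20%):

```python
lanes = 4
line_rate_per_lane = 5.4          # Gbit/s per lane (HBR2, DisplayPort 1.2)
raw = lanes * line_rate_per_lane  # 21.6 Gbit/s on the wire
effective = raw * 8 / 10          # remove 8b/10b coding overhead
# effective bandwidth is 17.28 Gbit/s, the figure quoted above
```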

Why can’t I convert my Blu-ray disc to HD-SDi and record it on tape?
HDCP – High-bandwidth Digital Content Protection.
* Industrial-strength public/private key cryptography
* Each player has “device keys” and each disk “volume keys”
* These are combined and used to decrypt the content using a symmetric stream cipher
* Hollywood has the ability to disable a device (a Sony BD player, for example) on new releases by use of revocation lists in new content.

There is no way a manufacturer can remove HDCP encryption and expect their product to work for more than a few weeks – the Hollywood Alliance revokes keys when it discovers this! New content won’t play AND old devices will not handshake if their revocation list gets updated.
HDCP cont.
* If there are multiple “sinks” then the key exchange has to happen several times.
* The “repeater” (HDMI distribution amplifier or router) has to manage/arbitrate this process.
* If a “source” is updated by the disk “revocation list” then a sink can be disabled permanently.
* Only one compromised sink will spoil the process for all.
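The “combine the keys, then run a symmetric stream cipher over the content” idea can be shown with a toy XOR cipher (this is NOT HDCP’s actual cipher, just the shape of the mechanism; all names and numbers here are illustrative):

```python
def keystream(seed: int):
    """Toy keystream generator (a simple LCG) - stands in for
    the real HDCP cipher, which is far stronger."""
    state = seed & 0xFFFFFFFF
    while True:
        state = (1103515245 * state + 12345) & 0xFFFFFFFF
        yield (state >> 16) & 0xFF

def xor_stream(data: bytes, key: int) -> bytes:
    """Encrypt or decrypt: XOR with the keystream is its own inverse."""
    ks = keystream(key)
    return bytes(b ^ next(ks) for b in data)

session_key = 0x1234ABCD          # stands in for device keys + volume keys combined
ciphertext = xor_stream(b"video frame", session_key)
plaintext = xor_stream(ciphertext, session_key)  # same key decrypts
```

Because the cipher is symmetric, source and sink must arrive at the same session key during the handshake - which is exactly why a revoked key breaks playback.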

Extending DVI and HDMI
* Long pre-made DVI or HDMI cables; distance problems depending on monitor
* Extenders that use twisted-pair copper cable; horses for courses?
* Extenders that use fibre optic cable; optimal
* Going via HD-SDi; may be useful in TV facilities with 3G infrastructure
In an operational environment what works for a sys admin who has to take control of a low-res server a couple of times a day to create an email account might not be optimal for an editor who has to stare at a pair of 24” monitors for ten (or more!) hours.
Single-link maximum data rate including 8b/10b* overhead is 4.95 Gbit/s. With the 8b/10b overhead removed, the maximum data rate is 3.96 Gbit/s.
* 8b/10b overhead is a function of the TMDS (Transition Minimized Differential Signaling) lane structure.
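Those two figures are consistent with single-link DVI's 165 MHz pixel clock (a quick check; 24 bits per pixel across the three TMDS channels):

```python
pixel_clock = 165e6             # Hz, single-link TMDS maximum
bits_per_pixel = 24             # 3 TMDS channels x 8 bits
payload = pixel_clock * bits_per_pixel   # 3.96 Gbit/s of pixel data
on_wire = payload * 10 / 8               # 4.95 Gbit/s once 8b/10b coded
```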

Data rates over copper – cat5, cat5e, cat6 & cat7/6a
* The Germans refer to the cable as cat7, the Americans as cat6a (the 'a' is for "augmented") and Tyco (who seem to have the biggest portfolio so far) as XG-10gig cable.
* The new cable is a 600MHz channel, and with QAM and OFDM signal processing techniques they can get ten gigabits per second of Ethernet down one hundred metres of cable. Nyquist still applies.
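A back-of-envelope Nyquist check (illustrative numbers only - real 10GBASE-T coding is more elaborate): B Hz of channel supports at most 2B symbols per second, and multi-level modulation packs several bits per symbol across the four pairs.

```python
bandwidth_hz = 600e6             # quoted channel bandwidth of the cable
symbol_rate = 2 * bandwidth_hz   # Nyquist limit: 2B symbols/s in B Hz
bits_per_symbol = 4              # e.g. 16-QAM (assumed for illustration)
pairs = 4                        # four twisted pairs in the cable
capacity = symbol_rate * bits_per_symbol * pairs   # 19.2 Gbit/s
# comfortably more than the 10 Gbit/s target, before coding overhead
```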

Making your life easy – EDID management
Part of the handshaking process between source and sink is the EDID exchange;
* Graphics card queries the display device
* Monitor responds with an EDID profile
* The profile contains details about resolutions, colour spaces and frame rates
A common problem with DVI and HDMI extenders is that the EDID return is often corrupted or absent causing the graphics card to have problems deciding how to drive the display.

* The monitor may go to sleep because it doesn’t get a signal it recognises
* The graphics card may drive the monitor at some default ‘lego vision’ resolution
In the case of OS-X the EDID data gets cached by XFree86 and you can be looking at four reboots to rectify things!

Example EDID profile from an Envision EN-775e monitor – extracted from:
startx -- -logverbose 6
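A base EDID block is 128 bytes: a fixed 8-byte header and a final checksum byte that makes all 128 bytes sum to zero mod 256. A minimal sanity check for a dumped profile (our own helper, not part of any tool above):

```python
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def edid_block_ok(block: bytes) -> bool:
    """True if this looks like a valid 128-byte base EDID block."""
    return (len(block) == 128
            and block[:8] == EDID_HEADER
            and sum(block) % 256 == 0)

# build a synthetic block: header, zeroed body, then the checksum byte
body = EDID_HEADER + bytes(119)
checksum = (-sum(body)) % 256
block = body + bytes([checksum])
```

A corrupted EDID return (the common extender failure described above) will usually fail exactly this checksum test.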
Problems with extending OS-X 2nd display over fibre
1. Boot the machine with a single monitor connected to the DVI port - increase resolution in increments to 1920x1200 @ 60Hz
2. Reboot
3. Check the resolution sticks.
4. Swap the monitor to the Display Port output
5. Reboot
6. Wind up the resolution as per 1. and if OS-X detects the extra monitor turn on display mirroring
7. Reboot
8. If both monitors come back up at 1920x1200 then turn off mirroring and ensure that both monitors are still at 1920x1200

9. Reboot
Make sure it's all sticking!
The answer!

The Future; DVI / HDMI & DisplayPort-over-IP, a technology whose time has come?
* Access to the digital display data means compression can be optimised
* Expensive gen-1 products are giving way to sub-£1k solutions
* As with networking, not all applications suit display-over-IP
* Mixing D.O.I.P. with corporate network traffic is rarely optimal.
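Why the compression matters: an uncompressed desktop stream dwarfs a gigabit link (simple arithmetic, ignoring blanking and protocol overhead):

```python
width, height, fps, bits_per_pixel = 1920, 1200, 60, 24
raw_bps = width * height * fps * bits_per_pixel   # ~3.32 Gbit/s uncompressed
gigabit = 1e9
# even a single 24" display needs over 3x a GigE link before compression
```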
Custom Hardware 1: Electronics and Arduino

Hugh and Phil talk about the need for custom made boxes and panels. They talk about the metalwork as well as circuit details used and wind up with a review of the Arduino platform.

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>


Custom electronics and home-brewing

Section 1 - custom electronics and Arduino

Reasons for needing custom panels
No off-the-shelf part fits exactly or is simple enough. Things like the "Big Knob" are great but overly complex for most situations.
Making monitors intelligently switch their inputs, for example; 
Differences between wet and dry GPIs - example; TriCaster has 5v low current tallies out whereas you may need to light your own 12v tally lamps
Wireless switching - NBC audio suite example.

A metal workshop will always make a better job of punching and engraving and will charge less than you think.
Based around their existing parts - 1u boxes etc punched for your connectors
Steel or aluminium podium plates

Much used pieces - typical parts I use; all from RS 
Volume pots
Fancy LEDs
Multi-part illuminating switches - it's the holder that dictates latching or momentary, not the switch body.

Typical circuit fragments
Audio volume - passive for unbalanced and balanced
Audio pads.
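For the pads, the classic symmetric T attenuator values can be computed from the loss and the line impedance (a sketch using the standard textbook formulas; 600 ohms is the usual line-level impedance):

```python
def t_pad(loss_db: float, z: float = 600.0):
    """Resistors for a symmetric T attenuator matched to z ohms:
    two equal series arms (r_series each) and one shunt leg."""
    k = 10 ** (loss_db / 20)              # voltage ratio of the pad
    r_series = z * (k - 1) / (k + 1)
    r_shunt = 2 * z * k / (k * k - 1)
    return r_series, r_shunt

# a 6dB 600-ohm pad comes out near the familiar 199R / 803R values
rs, rp = t_pad(6.0)
```

Round to the nearest preferred resistor values in practice.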

Relay drives from low-voltage lines
Mains considerations - class 1?

Arduino - programmable board with analogue, digital (i.e. GPI & GPOs), a serial port and possibly Ethernet (network stack and good library).
Arduino cookbook
Video of the board, connecting it up etc
Some code examples
Some ideas for projects
The "enough already" video project

Section 2 - NetIOM and RaspberryPi

NetIOM - single board network control gadget, complete TCP/IP stack with GPIs, GPOs, analogue pins and a serial port. 
Board is configured over an RS232 connection - remember to short/un-short the pin for programming. I put the short in the header of the serial cable so when the programming cable is plugged in the board is forced into programming mode. 
Windows software. 
Board can be interrogated over its web interface.
Board can be configured to take action based on events.
Board can send email based on events
Two boards can be paired (hole opened in firewalls?) and the GPIs on one will mirror the GPOs on the other, same for the RS232, making it a very easy way of sending serial data over the Internet.
A thermistor can be used as a cheap temperature probe and the board can be configured to send an email when one temperature is reached and then close a relay when another is reached; server room monitoring etc.
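That event chain is just threshold logic with a little state; a Python sketch of one polling step (thresholds and action names are illustrative, not NetIOM's actual configuration):

```python
def monitor_step(temp_c, state, email_at=30.0, relay_at=35.0):
    """One polling cycle: fire each action once as its threshold is crossed."""
    actions = []
    if temp_c >= email_at and not state.get("emailed"):
        state["emailed"] = True
        actions.append("send_email")
    if temp_c >= relay_at and not state.get("relay_closed"):
        state["relay_closed"] = True
        actions.append("close_relay")
    return actions

state = {}
monitor_step(25.0, state)   # nothing yet
monitor_step(31.0, state)   # warning email goes out
monitor_step(36.0, state)   # relay closes (email already sent)
```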

RaspberryPi - a single-board Linux computer that boots off an SD card and runs from a 5v supply.
Network, HDMI and audio allow for a very cheap project platform; it may well be a software solution to what you're trying to achieve.

Phil & Tim Taylor go over some of the features of the DD-WRT router firmware and how they can be used to secure a home network

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html> 
Phil's "keeping your family safe online" course notes 

An alternative; the Tomato router
Hugh and Phil are joined by Deluxe Digital's CTO Laurence Claydon to talk DCI, DI, projectors and Dolby ATMOS.

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>

1. Tell us a bit about your career history; did you start as a film/cinema guy or were you a TV/video guy like Hugh and me?

2. An overview of the last few years of DCinema delivery

3. How do finished films leave the mastering facility and get to my local flea pit?

4. JPEG2000 - why an I-frame codec? Video guys imagine delivery codecs should be long-GOP. Colour space - how much bigger than Rec.709?

5. Content Protection - how does it work?

6. Why do you see so many different DCinema servers in the same machine room? Dolby, Doremi, DVS etc

7. Network delivery - how's that coming on?

8. Projection - same models of servers in the theatres as I see in Deluxe's MCR?

9. Perhaps we could then talk a bit about theatres -  projectors and screen. Typical manufacturers; Sony 4K starting to dominate?

10. Audio - how does the DCP package carry audio? Description - where does that sit?

11. Audio - typical arrangement of speakers, amps? control?

12. ATMOS - an overview (perhaps a podcast in its own right?!)
Our first podcast from January 2012 - Hugh and Phil talk about fibre optic cabling and practices as used in film and television facilities.

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>
Hugh and Phil talk about optical multiplexing as well as new methods for accurately testing fibre cables. A few tips on basic fibre cleaning as well.

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>
Hugh and Phil go over some of the basics surrounding delivery of TV shows as files. We then do a QC pass using Vidchecker. 

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>

Hugh's background:

* Managing Director, Molinare Ltd, Soho, London - one of the UK's largest television facilities houses, regularly voted in the top 3 sector companies.
* Group Technical Director, Molinare Ltd, Soho, London.
* Chief Engineer, Tele-Cine Ltd, one of the UK's leading TV facilities houses.
@hugh_waters on the Twitter.

Simon Quill of Bryant Unlimited and Phil go over their range of network controlled, intelligently monitored power distribution units.

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>
Hugh and Phil talk about KVM-over-IP systems with particular reference to Teradici and Phil's favourite manufacturer Amulet Hotkey. They also go over the basics of encryption with symmetric and public-key crypto.

<html><iframe width="480" height="270" src=";showinfo=0" frameborder="0" allowfullscreen></iframe></html>


[[Top Tips; Video]]
[[Top Tips; Audio]]
[[Beyond Colour Bars]]
[[TV Colour 3 - LUTs]]
[[Video Compression 101]]
[[Fibre 102]]
[[Intelligent Mains Distribution]]
[[System Design with Excel]]
[[DD-WRT router firmware]]
[[TCP/IP for Engineers]] parts 1 & 2
[[RS232 - 50 years!]]
[[Custom Hardware 1]]
[[Contemporary Display Standards]]
[[Audio 1]]
[[Audio 2]]
[[TV Colourimetry 1 & 2]]
[[What's in your rucksack?]]
[[Mains - 1 & 2]]

Part 1 - Safety
<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>

Part 2 - Postscript & update

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>
Having done a degree in maths and programming in the eighties Phil went to the BBC hoping to find the bright lights of show business. After five years in studios and maintenance he realised Soho was where the bright lights were and spent most of the nineties running engineering in a couple of the larger facilities. Despite years of being tech supervisor on Big Brother and Fame Academy he found no bright lights and has been running SI projects at Root6 since 2003. He now realises software is where the bright lights always were! 




<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>

Photo Sensitive Epilepsy in Television

In some cases, specific television programs featuring certain types of visual stimuli have provoked seizures in a small minority of television viewers, including some viewers with no prior history of seizures of any kind. The "Dennō Senshi Porygon" episode of Pokémon is the most frequently cited example. Since the early 2000s OFCOM have included PSE measurement as part of TV deliverable specification.

OFCOM guidance notes Section 2: “Harm and Offence”

But how bright is the punter’s television?

“Screen luminance may be measured using a hand held spot photometer with a CIE characteristic designed for making measurements from a television screen. The display conditions are those of the ‘home viewing environment’ described in Recommendation ITU R BT.500. For accurate results, the display brightness and contrast should first be set up using PLUGE (Rec. ITU R BT. 814) with peak white corresponding to a screen illumination of 200 Cd/m2”

Yours truly, setting up a monitor 

TV (both SD & HD) has a gamma response of 2.2

Of course all our digital video & files don’t have voltages representing brightness!
By comparison we set monitors to 80 Cd/m2

Frame 1

Frame 2 (more than 10% luminance difference)

Frame 3
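For the curious, the frame-difference idea above can be sketched in a few lines of Python. The 2.2 gamma and 200 Cd/m2 peak white come from the BT.500 set-up quoted earlier, and the 10% threshold is the frame 2 example above. Averaging a flat list of pixels is a simplification - the real OFCOM algorithm considers screen area and flash frequency - so treat this as an illustration only.

```python
# Sketch: screen luminance from 8-bit code values, assuming a 2.2
# display gamma and 200 Cd/m2 peak white (the BT.500 set-up above).
PEAK_CDM2 = 200.0
GAMMA = 2.2

def luminance(code_value):
    """Convert an 8-bit code value to screen luminance in Cd/m2."""
    return PEAK_CDM2 * (code_value / 255.0) ** GAMMA

def flashes(frame_a, frame_b, threshold=0.10):
    """True if mean luminance changes by more than `threshold` (10%)
    between two frames (each a flat list of 8-bit pixel values)."""
    la = sum(luminance(v) for v in frame_a) / len(frame_a)
    lb = sum(luminance(v) for v in frame_b) / len(frame_b)
    return abs(lb - la) / max(la, 1e-9) > threshold

# A mid-grey frame against a brighter one:
grey = [128] * 16
brighter = [160] * 16
print(flashes(grey, brighter))  # the jump is well over 10% -> True
```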

Methods of testing

Live Video - the "Flash Gordon", a long-established test set

Software - VidChecker; part of the video test suite
All the PDFs I'm showing will be from this

Software - Harding FPA

OFCOM vs Harding

These parameters are well defined, so anyone who understands them can build a PSE detector that will reliably flag a violation. That is how things should be, as it avoids any one manufacturer of test equipment having a monopoly. Unfortunately that is just the situation that has nearly developed with the Harding FPA detector. Their machine is a PC with an SDI capture card: you digitise the video sequence to be checked into it and it runs an analysis. The other popular unit is the GordonHD, which is more like a traditional piece of equipment in that it sits in the signal chain and gives an alert when it sees a violating sequence go past.
We have a couple of customers who like the realtime performance the Gordon gives and don't like having to capture (the Harding doesn't support standard codecs, so no Quicktime reference export from Avid!), analyse (in slower than realtime) and then get a report - only to repeat it all after correcting the offending clips (because the broadcaster likes to see a full 10:00:00:00 - 10:54:00:00 report!). The Gordon, on the other hand, is cheap (£3k against £13k) and just sits there taking a feed of HD-SDI video and timecode, firing a GPI when a violation is detected (and even entering the TC into a file). That means you can have it hanging off the Avid (or whatever) and the editor can rock'n'roll over a piece of footage, adjusting his edit point over the flash frames (it's mostly paparazzi footage with all those camera flashes that causes it) until he gets a sequence that doesn't cause a problem.

Anyhow - you can tell which machine I think is best. Harding is a great self-publiciser who gives you the idea that he alone knows the secret-sauce of PSE. The guys at Tektronix tell me it's on the way as an upgrade for their WVR-series 'scopes but they are worried that Harding has all the patents stitched up.

Anyway - a quick once around pals from a year or so ago revealed the following;
Ascent Media check all of five's output (including the two daughter channels) on a GordonHD, ITV's QC department at Upper Ground use a Gordon as their first-pass analyser and Channel Four specify it as well. However, talking to everyone in facilities reveals that they (almost) universally believe the Harding to be the only machine capable of doing the job.
I had several earlier-model Gordons at Resolution and they were superbly fitted to the job (and cheap enough to have one in every suite). We never had a tape sent back that had gone through one, and that included many more hours of terrestrial television than most facilities ever turn out (including quick turn-around stuff with lots of potential trouble - think the Friday night eviction show for Big Brother).

Harding checks for OFCOM but also "Red Flashing"? and "Regular Patterns"?! - and this is where nobody else had any visibility into his algorithms.

Some test files & analysis

Sky Midnight News_Sky News_2013_07_22_cut1.mpg

I also have a couple of others - BBC News off air and Al Jazeera from the same day - lots of paparazzi flash stuff of the royal baby.

Sky Midnight News_Sky News_2013_07_22_cut1.mpg_report_2013-07-26_17-00-27.pdf
BBC News_BBC NEWS_2013_07_22_cut.mpg_report_2013-07-26_17-16-01.pdf
News Live_AL JAZEERA_2013_07_23_cut2.mpg_report_2013-07-26_17-01-07.pdf


Thankfully the DPP spec now mentions the OFCOM spec directly, so hopefully we'll see people just asking for OFCOM rather than Harding.

LCDs are much kinder on the eye because they don't "pulse" like CRTs do - BUT, we run LCDs a darn sight brighter (typically 2x or 3x the Cd/m2)
The USA (the most litigious country in the world!) has no standard governing PSE on television.
The Grange Hill title sequence (with the sausage) fails Harding FPA - BUT, can there have been a piece of video that more kids have stared at with their noses 4" away from the screen?!
Hugh and Phil go into the details of RS232C and how it is still used in broadcast engineering for configuration and test.

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>
Looking forward to doing our first with a guest - later this week with the mighty RupertWatson of Root6

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>

Brief history of shared storage
SCSI pathfinders with Quantel and Lightworks in the 1990s
Late 90s fibre channel SANs
Film & TV - Unity, Tivoli Sanergy
Quantum SNFS file system - still the high-end solution
Apple XSAN and where that has gone?
Cheap SAN in a box solutions; Terrablock etc strengths and weaknesses
Ethernet shared storage - improved codecs, GigE, 10GigE
Current examples; DDP, Space, Object Matrix, Isilon, Avid ISIS
NAS vs SANs vs "Ethernet SANs"
iSCSI vs regular network protocols.
The state of shared storage for film & TV in 2013 and the future.
Brief mention of data rates - SD, HD and film; compressed & uncompressed.

How to size your storage
Estimating required bandwidth - file type, format, resolution etc etc
How many streams do you cater for? 
Balance of Live or nearline?
Managing your SAN: what to think about; Users & MAM, 
Engineering & managing disc life, de-fragging? etc
How about SSDs? Are they making an appearance? What are the gotchas?
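As a rough illustration of the sizing sums above, here's a back-of-envelope calculator. The bit rates in the table are ballpark figures for the formats named, not vendor specs, and real sizing needs audio, metadata and RAID overhead on top.

```python
# Rough storage/bandwidth sizing - the rates are illustrative ballparks.
BITRATES_MBPS = {                        # video payload only, Mbit/s
    "SD uncompressed (270M SDI)": 270,
    "HD uncompressed (1.5G SDI)": 1485,
    "DNxHD 120": 120,
    "ProRes HQ 1080i": 220,
}

def terabytes_for(hours, mbps):
    """Storage in TB for `hours` of material at `mbps` Mbit/s."""
    return mbps / 8 * 3600 * hours / 1e6  # Mbit/s -> MByte/s -> TB

def aggregate_mbps(streams, mbps):
    """Bandwidth the storage must sustain for `streams` clients."""
    return streams * mbps

print(round(terabytes_for(100, 220), 1))  # 100 h of ProRes HQ -> 9.9 TB
print(aggregate_mbps(6, 220))             # 6 edit clients -> 1320 Mbit/s
```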
Show notes
The Engineer's Bench Podcast

Phil and Hugh go over a few tips and tricks for using MS Excel in the design of film & TV facilities. 

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>
Part 1 - The Basics

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>

Gone are the days when every cable carried a synchronous video stream. Contemporary engineering staff have to be aware of packetised networks and how they impact the modern facility. This session covers the fundamentals of the protocols and practices that drive all internet-derived networks.

•	History - why a packet-switched network?
•	Layers - why they're important
•	The OSI 7-layer model and why it’s not accurate!
•	Protocols – UDP/IP and TCP/IP
•	Routers, hubs, switches and how they differ
•	Classes of networks
•	Network Address Translation - NAT
•	Tips & Tricks – broadcast packets, Ethernet speed, forwarding with Skype
History – Why Packet-switched networks?
Packet switching is a digital network communications method that groups all transmitted data – irrespective of content, type, or structure – into suitably-sized blocks, called packets.  This all dates back to the prototype internet of the 1960s – ArpaNet.
Packet switching features;
•	Delivery of variable-bit-rate data streams (sequences of packets) over a shared network
•	Switches, routers and other network nodes buffer and queue packets
•	Variable delay and throughput depending on the traffic load in the network.
•	No dedicated circuits. Data units do not have to follow the same route.
•	Each link carries many different transmissions at the same time. 
•	Every data unit sent through a packet-switching network must have enough information in the header that the nodes in the network can determine how to route the data unit. This tends to add overhead to the data unit, but the trade-off is well invested.
Layers – why are they important?
The layered concept of networking was developed to accommodate changes in technology. Each layer of a specific network model may be responsible for a different function of the network. Each layer passes information up and down to the adjacent layers as data is processed.
Why the OSI 7-layer model is not too useful!
Even though the concept is different from OSI, the Cisco/Arpanet layers are nevertheless often compared with the OSI layering scheme in the following way: 
•	The Internet Application Layer includes the OSI Application Layer, Presentation Layer, and most of the Session Layer. 
•	Its end-to-end Transport Layer includes the graceful close function of the OSI Session Layer as well as the OSI Transport Layer. 
•	The internetworking layer (Internet Layer) is a subset of the OSI Network Layer, 
•	Link Layer includes the OSI Data Link and Physical Layers, as well as parts of OSI's Network Layer. 

These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in such things as the internal organization of the Network Layer document.
IT people get very hung-up on the seven layer model, but four layers are enough for anyone!
The four layer ‘Cisco Academy’ or Arpanet model – some detail;

•	The link layer corresponds to the hardware, including the device driver and interface card. The link layer has data packets associated with it depending on the type of network being used, such as ARCnet, Token Ring or Ethernet. In our case, we will be talking about Ethernet.
•	The network layer manages the movement of packets around the network. It is responsible for making sure that packets reach their destinations and, if they don't, reporting errors.
•	The transport layer is the mechanism two computers use to exchange data on behalf of software. The two protocols that act as transport mechanisms are TCP and UDP.
•	The application layer refers to networking protocols that are used to support various services such as FTP, Telnet, WWW, etc. However, a program that you write can define its own data structure. For example, when your program opens a socket to another machine it is using the TCP protocol, but the data you send depends on how you structure it.
IP addresses
The thing that all hosts (computers, servers, embedded devices etc) need to communicate on the Internet (or any IP LAN) is an IP address:
how many IP addresses are there? An IPv4 address is 32 bits, so 2^32 – about 4.3 billion.
Additionally each host on the network will have a subnet mask which dictates the class of the network (we’ll come to that later) – in most of the small networks you’ll deal with, expect to find hosts assigned to a C-Class network with a subnet mask of
IP addresses are different from MAC (Media Access Control) addresses (AKA H/W address):
•	01-23-45-67-89-ab

MAC addresses are assigned by the manufacturer and should be unique to each piece of equipment.
As well as a unique IP address, a machine will send and receive internet traffic on ports – these are a virtual construct and allow segregation of traffic for different services. Typical ports used might be:

•	80	 		The standard port for web traffic
•	21	 		The File Transfer Protocol port FTP
•	135-139 & 445	Windows file sharing protocol (SMB)
•	5900			The VNC remote control protocol

Generally the ‘low-order’ ports (1-1024) are reserved for defined protocols and all ports above 1024 are less well defined and can be used for whatever purpose you want.
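You can watch port allocation happen from Python - a sketch using the standard socket module. Binding to port 0 asks the OS for any free port, and both ends of the connection land above the reserved 1-1024 range.

```python
import socket

# A port is just a 16-bit number the OS uses to segregate traffic.
# Bind a server to port 0 and the OS picks a free ephemeral port;
# connect to it and the client end gets its own high-numbered port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", 0))          # port 0 = "any free port"
server.listen(1)
server_port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("", server_port))
client_port = client.getsockname()[1]

print(server_port, client_port)   # both sit above the reserved 1-1024 range
client.close()
server.close()
```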
Part 2 - The Protocols

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>

Types of Protocols – UDP/IP and TCP/IP
IP has these concepts of connection and connectionless communication. Although there are many kinds of IP packets defined the ones you’ll come across the most are UDP (User Datagram Protocol) and TCP (Transmission Control Protocol). 
TCP operations may be divided into three phases. 
•	Connections must be properly established in a multi-step handshake. 
•	Data transfer phase. 
•	After data transmission is completed, the connection termination closes established virtual circuits and releases all allocated resources.
Due to network congestion, traffic load balancing, or other unpredictable network behaviour, IP packets can be lost, duplicated, or delivered out of order. TCP detects these problems, requests retransmission of lost packets, rearranges out-of-order packets, and even helps minimize network congestion to reduce the occurrence of the other problems. Once the TCP receiver has finally reassembled a perfect copy of the data originally transmitted, it passes that datagram to the application program. Thus, TCP abstracts the application's communication from the underlying networking-details. The user (or even the programmer) need not be aware of the underlying plumbing. One developer I know insists that real men should avoid TCP!
UDP/IP dispenses with all the heavy lifting that TCP provides and offers a more lean’n’mean method of communication.
UDP provides the application multiplexing and checksums that TCP does, but does not handle building streams or retransmission, giving the application developer the ability to code those in a way suitable for the situation and/or to replace them with other methods like forward error correction or interpolation. Typical users of UDP include;

•	Streamed media – Internet radio etc where packets arriving out of order is worse than packet loss.
•	DNS – the domain name system has to be lightweight with little need for handshaking.
•	DHCP & RIP – the protocols used to welcome a machine onto a LAN and provide it with all the settings it needs to communicate properly ahead of getting an IP address.

Typically UDP/IP can be thought of as the utility protocol that undergirds the Internet and all IP-based LANs.
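A minimal sketch of UDP's fire-and-forget style using Python's socket module - no handshake, no stream, just a datagram (over loopback here, so it reliably arrives; on a real network it might not).

```python
import socket

# UDP: no handshake, no stream - just send a datagram and hope.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("", 0))               # OS picks a free port
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", ("", port))   # no connect(), no ACK

data, addr = rx.recvfrom(1500)           # one datagram, one read
print(data)                              # b'hello'
tx.close()
rx.close()
```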
Before we proceed – the structure of a TCP/IP packet;
(TCP/IP packet structure diagram – stolen from Wikipedia)
The protocol allows for variable-sized data payloads as network conditions dictate – a single IP packet may carry up to 64 KBytes, and TCP’s window-scaling extensions allow many megabytes of data to be in flight before an acknowledgement is required.
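The fixed 20-byte header can be built and pulled apart with Python's struct module - the field values below are hand-made for illustration, not a captured packet.

```python
import struct

# The fixed 20-byte TCP header: ports, sequence/ack numbers, data
# offset, flags, window, checksum and urgent pointer.
header = struct.pack("!HHIIBBHHH",
                     54321,       # source port
                     80,          # destination port
                     1000,        # sequence number
                     0,           # acknowledgement number
                     5 << 4,      # data offset: 5 x 32-bit words
                     0x02,        # flags: SYN
                     65535,       # window size
                     0,           # checksum (left blank here)
                     0)           # urgent pointer

src, dst, seq, ack, off, flags, window, csum, urg = struct.unpack(
    "!HHIIBBHHH", header)
print(src, dst, window)   # 54321 80 65535
```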
The three-way TCP handshake;
To establish a connection, TCP uses a three-way handshake. Before a client attempts to connect with a server, the server must first bind to a port to open it up for connections: this is called a passive open. Once the passive open is established, a client may initiate an active open. To establish a connection, the three-way (or 3-step) handshake occurs:
•	The active open is performed by the client sending a SYN to the server. It sets the segment's sequence number to a random value A.
•	In response, the server replies with a SYN-ACK. The acknowledgment number is set to one more than the received sequence number (A + 1), and the sequence number that the server chooses for the packet is another random number, B.
•	Finally, the client sends an ACK back to the server. The sequence number is set to the received acknowledgement value, and the acknowledgement number is set to one more than the received sequence number i.e. B + 1.

At this point, both the client and server have received an acknowledgment of the connection. This also means it is nigh on impossible to spoof an IP address on the internet as the return from the handshake would not arrive back at the originating host.
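The sequence-number bookkeeping in those three steps can be simulated in a few lines (a toy model, not real TCP - the actual SYN/ACK packets are built by the OS kernel):

```python
import random

# Toy simulation of the three-way handshake sequence numbers above.
def three_way_handshake():
    a = random.randrange(2**32)          # client's random ISN, "A"
    syn = {"flags": "SYN", "seq": a}

    b = random.randrange(2**32)          # server's random ISN, "B"
    syn_ack = {"flags": "SYN-ACK", "seq": b, "ack": syn["seq"] + 1}

    ack = {"flags": "ACK", "seq": syn_ack["ack"],
           "ack": syn_ack["seq"] + 1}
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake()
assert syn_ack["ack"] == syn["seq"] + 1      # server acknowledges A + 1
assert ack["ack"] == syn_ack["seq"] + 1      # client acknowledges B + 1
assert ack["seq"] == syn_ack["ack"]          # client continues from A + 1
```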
Real world example – PC connects to a router and starts a web-browser session;
1.	Computer powers up, OS loads (including n/w card driver and IP stack) and acquires network settings;

•	The operating system issues a DHCP request over UDP/IP – remember at this point the PC has no IP address so it uses the ‘broadcast’ address of – DHCP Discovery
•	The router (or in the case of a larger network, the DHCP server) responds with the offer of an IP address - DHCP Offer
•	The operating system issues an acknowledgement over UDP/IP with the IP address (in case multiple DHCP servers answered the call, they can hang onto those IPs) – DHCP Request
•	The router (or DHCP server) responds with other details (DNS etc) and the lease time – how long the IP address is good for- DHCP Acknowledge

The computer now has all the information it needs to communicate with the local area network and (via the router) out to the Internet.
Real world example – PC connects to a router and starts a web-browser session;
2.	Computer runs a web browser and requests

•	WRT our four layer model Firefox (running at the application layer) requests the web page.
•	The transport layer of the OS’s IP stack checks to see if it has a recent record of having visited that site and if not issues a DNS request to the internet layer 
•	The network layer sends the request out (over the appropriate interface – ethernet cable or WiFi) to the router (over UDP/IP, port 53)
•	The router sends the DNS request out (over its internet-facing interface – aDSL or cable) to the ISP’s upstream DNS router (over UDP/IP, port 53) and takes a note of which computer made the request.
•	The ISP’s DNS server checks to see if it has a recent record of that site and if not it passes the request on to the next higher-order DNS server – eventually the name gets turned into an IP address which the PC can do something with!
•	The router gets the result of the DNS lookup and returns it to the PC (UDP, port 53)
Real world example – PC connects to a router and starts a web-browser session;
3.	Computer now knows the IP address of the page it needs and can fetch it
•	The transport layer of the OS’s IP stack issues a request for the web page to the internet layer using the newly acquired IP address. This forms the start of a TCP session
•	The network layer sends the request out (over the appropriate interface – ethernet cable or WiFi) to the router (now over TCP/IP, port 80)
•	The router sends the web page request out (over its internet-facing interface – aDSL or cable) to (TCP/IP, port 80) and takes a note of which computer made the request. ISP & other routers send the packets on their way – beyond the ‘scope of today!
•	The server receives the request on its port 80 (via its network, internet, transport and application layers) and returns the HTML file that makes the page.
•	The router (eventually!) gets the packet(s) over port 80 and (knowing who requested the packets) returns them (over port 80 TCP) to the PC.
•	The IP stack on the PC returns the packet up the network, internet, transport and finally application layers allowing the browser to start building the page!

The previous example is simplified and doesn’t include;

•	The ‘heavy lifting’ protocols like BGP (Border Gateway Protocol) which allow the packets to traverse the public internet.
•	The unavoidable TCP/IP retries as packets are lost.
•	The ARP (Address Resolution Protocol) table that the router has to build and maintain to know what IP addresses match to MAC addresses.
•	Any additional negotiation that goes on if the PC is on a wireless network.

But if you can follow the simplified example you will now be aware of the hundreds of transactions and thousands of packets that have to travel to build a single web page.
One concept that is worth reading further on is TCP sessions.
Classes of networks and IP address space
The Internet was originally designed as a ‘classful’ network where (to aid routing when embedded network devices were slow/expensive) every IP address would imply routing by the class of its network. 

In the case of a B-Class network the basic division is into 16 bits for network ID and 16 bits for host ID. However, the first two bits of all class B addresses must be "10”, so that leaves only 14 bits to uniquely identify the network ID. This gives us a total of 2^14 or 16,384 class B network IDs. For each of these, we have 2^16 host IDs, less two, for a total of 65,534.
In the early years the Internet Assigned Numbers Authority handed out A and B class networks too readily, leading to what was perceived as a great shortage of IP addresses in the mid-90s until we were saved by..... 
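The class B arithmetic above can be checked with Python's ipaddress module (172.16.0.0/16 is just one example class B network):

```python
import ipaddress

# A /16 leaves 16 host bits: 2^16 addresses, minus the network and
# broadcast addresses, gives 65,534 usable hosts.
class_b = ipaddress.ip_network("172.16.0.0/16")
print(class_b.num_addresses - 2)        # 65534

# 14 free bits for the network ID gives the number of class B networks
print(2 ** 14)                          # 16384
```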
Network Address Translation
In the mid-1990s NAT became a popular tool for alleviating the fact that the world was running out of IP addresses. Your office or home computer will likely have a private address (192.168.x.x, for example) which clearly could not be routed across the internet.

•	Most systems using NAT do so in order to enable multiple hosts on a private network to access the Internet using a single public IP address (as a gateway). However, NAT breaks the originally envisioned model of IP end-to-end connectivity across the Internet, introduces complications in communication between hosts, and affects performance.
•	NAT obscures an internal network's structure: all traffic appears to outside parties as if it originated from the gateway machine. NAT routers are by their nature superb firewalls as they turn away unrequested packets and stop unsolicited traffic from entering a network.
•	Network address translation involves re-writing the source and/or destination IP addresses and usually also the TCP/UDP port numbers of IP packets as they pass through the NAT. Checksums (both IP and TCP/UDP) must also be rewritten to take account of the changes.
Routers, hubs, switches and how they all differ;

•	A router is a networking device whose software and hardware are usually tailored to the tasks of routing and forwarding information. Routers connect two or more logical subnets.
•	A network switch is generally optimized for Ethernet LAN interfaces and may not have other physical interface types. The switch maintains a MAC address table which keeps a record of all of the MAC addresses on the network and learns which ports to switch traffic to. Initially, when the switch powers up, it behaves as a hub.
•	Hub (predecessor of the "switch" or "switching hub") does not do any routing, instead every packet it receives on one network line gets forwarded to all the other network lines.

The term "layer 3 switching" is often used interchangeably with routing, but switch is a general term without an exact technical definition. Hubs are increasingly rare.
There is a world of difference between the domestic aDSL/cable/wireless router and data-centre devices, and between small 8-port Ethernet switches and enterprise switches. However, in a very real sense you can regard your home or small office network as a microcosm of the Internet.
Tips and Tricks

The multicast address

If you're ever in a position where you need to identify a device's IP address (even on a different subnet, but the same LAN segment) you can PING the all-hosts multicast address and everything on the segment will respond to the PING (firewall settings permitting).

So, if I set my machine's IP address to one on a 10.100.100.x network and then PING the multicast address;

This comes in very useful with Amulet DXiP cards which you configure over a web interface. Our demo kit came back from a customer who had forgotten what they had hard-set the cards' IP addresses to and this technique was a life-saver.


You can see that all the machines on the network respond.

Ever need to slow down ethernet? 

I've had a few occasions when I've had to force gigabit down to 100BaseT or even 100 down to 10BaseT. My preferred method is to force the NIC down to the appropriate speed but if you aren't using Windows (OS-X, Linux or an embedded device) then a hardware solution is needed.

•	Distance - 100BaseT only goes 100m over cat5e but 10BaseT goes 300m; If you find yourself in that situation then an old 10BaseT hub at the far end does the job.
•	Equipment reports 100BaseT but is only reliable at 10BaseT; my Squeezebox network MP3 player is running a hacked OS and works a lot more reliably at 10BaseT. I achieved this by swapping the green/white and orange cores in the network cable. This degrades the common-mode rejection performance of the cable and means the ethernet switch ramps the circuit down to 10BaseT.
•	Gigabit too fast? Just make off a cable with the blue and brown pairs excluded. Gigabit needs all four pairs and if the switch only sees the Green and Orange pairs it will assume 100BaseT.
Port forwarding to help Skype

One of the things I found makes a difference is forwarding the port your particular Skype installation has randomly chosen. The reason for this is that we pretty much all live behind NAT routers, and for a peer-to-peer protocol to work you need some users to be supernodes.
The Skype system automatically selects certain users with fast CPUs, good broadband connections and no firewall issues to be "supernodes", through which other users may connect. Skype can therefore utilise other users' bandwidth. There are some 20,000 supernodes out of many millions of users logged on.

So if you're communicating with another Skype user and having to traverse two NAT routers (his and yours) the call has to go via a supernode - the two routers have no way of letting each other know what Skype ports are required to be left open. The way to avoid this is to port-forward your Skype port in your router, and then there is only one NAT traversal taking place. Now if you run your network on DHCP then you may have to set the MAC address of your PC to always get dished the same IP address, but once that's done it's trivial to look up the port being used (in Skype's Options > Connection settings) and then set that as a static UDP route in your Netgear/DLink/whatever box. If you have more than one Skype client then you need to repeat the process for each computer - but since Skype allocates a random UDP port on install (in the range 1024-65535) it is unlikely you'll get two the same on a class-C subnet.
The good thing about this is that once you've done it you'll enjoy better throughput with any other Skype user - they don't even need to have heard of port forwarding.
Hugh and Phil go over the practice of using a 3D LUT (look-up table) to get OLED & LCD televisions closer to the Rec.709 gamut.
<html><iframe width="640" height="360" src="" frameborder="0" allowfullscreen></iframe></html>
Hugh and Phil lay the groundwork for the next podcast about monitor calibration. This episode concerns perception of colour in TV.

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>

After the intro to colourimetry Hugh and Phil talk about calibrating monitors for film and TV use.

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>
The Engineer's Bench is a series of podcasts done as screen recordings. They are Skype conversations between PhilCrawley and HughWaters relating to broadcast engineering and are meant as introductions and "101"-type guides to various aspects of the Media Technician's trade. They are not tied to the current state of the industry and hopefully will be useful for years to come! Hugh and Phil are both ex-Chief Engineers of various TV companies.

on Youtube

On iTunes (works on iOS devices and AppleTV)

Vanilla RSS feed

Direct http download of the H.264 files

Or on Phil's blog with some extra notes;

Or, the obligatory Facebook page!

<html><IMG SRC=""></html>

Hugh and Phil talk about some tips and get-out-of-gaol-free cards with respect to broadcast audio.

<html><iframe width="480" height="270" src=";showinfo=0" frameborder="0" allowfullscreen></iframe></html>
Hugh and Phil talk about some tips concerning broadcast video.

<html><iframe width="480" height="270" src=";showinfo=0" frameborder="0" allowfullscreen></iframe></html>
Hugh and Phil go through some of the principles of traditional video QC using the Tektronix WFM and WVR series test sets.

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>

* Recap of operation of the Tek
* How legalisers fit into the television delivery process
* Complete run-through of the network QC features of the WVR
* More advanced problems - PSE, BCAP caption sizes etc.
* Physical transport problems - Timecode & AES timing etc.
* Parameter sets and building/maintaining presets for WVRs

Recap of Video & Audio measurement principles and requirements

To avoid finished programmes being thrown back by (often over-zealous) broadcasters it is necessary to ensure the technical aspects of the recording (be it on video tape or as a file) conform to the delivery specification. Since all broadcasters (as well as sell-through and other commissioning companies) have slightly different requirements, a familiarity with their published documents is important;

* Video - is the material constrained level-wise (both for luminance - the black and white image, and chrominance - the colour content)? 
* Audio - is the sound track constrained to +8dBu and does it have a sensible dynamic range?
* Content 

** Are captions compliant with the BCAP regulations?
** Is the programme Action and Graphics Safe?
** PSE - no excessive flashing frames
** Timecode - does the programme start and finish as specified (typ. 10:00:00:00)?
** Clock and identification
** Consistent aspect ratio
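Timecode checks like the one above are easy to script. A minimal sketch, assuming non-drop-frame 25FPS PAL timecode (the function names are illustrative, not from any particular library):

```python
def tc_to_frames(tc, fps=25):
    """Convert a non-drop HH:MM:SS:FF timecode string to a frame count."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_between(start, end, fps=25):
    """Programme duration in frames between two timecodes."""
    return tc_to_frames(end, fps) - tc_to_frames(start, fps)

# A typical UK delivery spec: programme content starts at 10:00:00:00
start_frame = tc_to_frames("10:00:00:00")  # 900000 frames at 25FPS
```

Drop-frame NTSC timecode (29.97FPS) needs extra handling for the skipped frame numbers, so the sketch above applies to PAL-land only.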

Recap of operation of the Tektronix WVR & WFM-series

The WVR and WFM-series rasterisers differ in that the WVR (pictured) requires an external display. Aside from that, equivalent models are largely the same and represent the best of breed. If you can understand and drive a Tek you'll not have any trouble with other manufacturers' kit - measuring waveforms etc. is standard across other types.

The machine has a quad-split display to which you can assign any of a dozen different tiles.

It can be connected to your network so you can control and see the output on your computer as well as downloading the QC and error logs.

Although the delivery specifications for most broadcasters are broadly similar you will have to read the documents that relate to your specific jobs ahead of doing QC work.

How legalisers and audio compressors fit into the television delivery process

The first display shows the input signal that goes out of bounds at the black end (left; sub-black) and overshoots white (right end). The second shows the signal hard-clipped, with all the detail in the blacks and whites cut off. The right-most waveform shows the result of using a legaliser, which has applied a more graceful attenuation in the whites and gain in the blacks to maintain all of the black and white detail whilst producing pictures that comply with the standard.
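To illustrate the difference, here is a minimal soft-knee legaliser sketch (levels in percent; the knee width and tanh curve are arbitrary choices for illustration - real legalisers are rather more sophisticated):

```python
import math

def hard_clip(level, lo=0.0, hi=100.0):
    """What a simple clipper does: everything outside the range is lost."""
    return min(max(level, lo), hi)

def legalise(level, lo=0.0, hi=100.0, knee=5.0):
    """Soft-limit a video level into [lo, hi]. Inside the knee regions the
    signal is compressed smoothly (tanh curve), so near-black and
    near-white detail is squeezed rather than chopped off."""
    if level > hi - knee:
        return (hi - knee) + knee * math.tanh((level - (hi - knee)) / knee)
    if level < lo + knee:
        return (lo + knee) - knee * math.tanh(((lo + knee) - level) / knee)
    return level
```

A level of 110% hard-clips to exactly 100%, whereas the legaliser maps it to just under 100% while keeping the ordering (and hence the detail) of everything above the knee.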

In the case of audio the top-end levels can be similarly auto-corrected with a compressor. These are devices that limit the dynamic range of audio and, when configured in "limiting mode", can ensure that no audio peaks go above +8dBu (6 on a PPM, +4 on a VU meter). Remember the two uses of the word compression! However - legalisers and compressors will never improve badly produced television.
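The various audio scales mentioned here relate as follows - a sketch assuming UK PPM calibration (mark 4 = 0dBu, 4dB per division) and EBU R68 digital alignment (0dBu at -18dBFS); check your house alignment, as these differ between facilities:

```python
def dbu_to_volts(dbu):
    """dBu is referenced to 0.775V RMS (1mW into 600 ohms)."""
    return 0.775 * 10 ** (dbu / 20)

def ppm_to_dbu(ppm):
    """UK PPM assumed: mark 4 = 0dBu, 4dB per division."""
    return (ppm - 4) * 4

def dbu_to_dbfs(dbu, alignment=-18.0):
    """EBU R68 alignment assumed: 0dBu corresponds to -18dBFS."""
    return dbu + alignment

# PPM 6 = +8dBu = -10dBFS under this alignment
peak_dbfs = dbu_to_dbfs(ppm_to_dbu(6))
```

So the +8dBu peak limit quoted above corresponds to -10dBFS on the digital scale under R68 alignment.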
Video displays of the WVR/WFM-series

There are several displays that allow you to monitor the state of the video signal - since these can be tiled in a group of four you can keep an eye on multiple parameters;

* Waveform display - Component parade
* Waveform display - RGB parade
* Waveform display - simulated composite display

The waveforms are normally displayed as one or two television lines overlaid, so you see the whole frame of video sliced into "1H" or "2H" - you can also overlay one or two video fields - "1V" or "2V"

Since the colour (or "chroma") portion of the signal is often troublesome the Tek has several display modes;

* Vectorscope - the traditional colour component display
* Arrowhead display - combines luminance and chrominance levels into a single gamut display
* Diamond display - shows the two colour difference signals in their own diamonds - useful for colourists and studio engineers.

We'll spend a while looking at these on the machine - they are also shown in the Tek poster.

Different colour spaces 

The arrowhead display is very useful as a single display that shows both overall video levels and colour gamut. The Arrowhead display plots luminance on the vertical axis, with blanking at the lower left corner of the arrow. The magnitude of the chroma subcarrier at every luminance level is plotted on the horizontal axis, with zero subcarrier at the left edge of the arrow. The upper sloping line forms a graticule indicating 100% colour bar total luma + subcarrier amplitudes. The lower sloping graticule indicates a luma + subcarrier extending towards sync tip (maximum transmitter power). 

Status displays of the WVR/WFM-series

The real power of a rasterising test set is that it can detect lots of things about the incoming signal - not only over- and under-levels but loss of signal, bad standards, and even extended periods of audio silence (very useful if the device is across a transmission feed).

In this example the two left tiles are showing the QC log as it has entries written into it (upper) and the video status page - this shows more about the state of the video signal and as errors occur (as defined in the template) they are written into the log.

As errors happen they can cause one or more of several things to happen;

* Entry in the log
* The "red diamond" alert on screen
* GPI (via D-type connector on the back)
* SNMP trap (network alert - email, SMS)

Each can be uniquely useful.

Audio displays of the WVR/WFM-series

The audio displays differ slightly in that only one can be assigned to the 4-way tile display.

The bar-graphs on the left differ from the bar-displays on a VTR in that they can have digital or PPM-type scales. This image shows the digital level in dBFS. In this case the audio is Dolby-D encoded and the machine is deriving the six surround channels and making a pair of stereo bars for reference. The final bar shows various level and average values relating to the Dolby audio.

The phase display is showing the left and right channels set at ninety degrees to give a proper representation of audio phase.

The yellow diamond gives a quick check on gross phase errors.

Complete run-through of the network QC features of the WVR/WFM-series

Automated QC starts with the network connector on the back of the device. Once connected to the network it can be controlled and all logs (both system error and session QC) can be downloaded (via a web browser) to a computer. 

Once you have the instrument physically attached to the network you need to set it up for TCP/IP communication - you may need to get these details from your in-house IT team.

Once your PC and the Tek are on the same sub-net then you can use a web-browser (Firefox, Internet Explorer etc.) to take control.
From the browser you can download either the error log or the QC log, but if you have Java installed (at least v. 1.4) you can download and run an applet that allows you not only to control all aspects of the machine but even to see the display (delayed by a couple of seconds and not particularly real-time). If you can persuade IT to open a hole in the firewall you can even do this from home - I've been saved several 3:00am dashes because of this!

Advanced problems - Photo Sensitive Epilepsy

The issue of Photosensitive Epilepsy comes up often in television - back in the late nineties an episode of Pokémon featured flashing images that provoked seizures in children in Japan. Since then Ofcom have been very keen to avoid this on British television and since 2003 have published guidelines;
This is an extract from the document but it does include the important details, which hinge around the following;

Advanced problems - BCAP caption sizes

The preferred minimum heights of on-screen text in TV advertisements made in different formats are given in the table.

Rule 5.4.2 of the advertising Code requires that text "must be legible" and must comply with this note. The aim is to achieve a standard of legibility that will enable an interested viewer, who makes some positive effort, to read all text messages. Sections 4 to 8 below indicate the minimum standards with which relevant text must comply.
Advanced problems - Cages

The Tek WVR/WFM-series test sets include safe area markers as well as movable cursors that allow you to check the size of captions to ensure compliance with the BCAP.

A good starting point for building broadcaster-specific templates is the built-in EBU103 standard. This is the video-standard that most UK-based television has to conform to. Additionally it's a lot easier to set up a preset template over the Java interface rather than poking the left-down keys on the front of the Tek.

The five preset buttons on the front act to recall a preset when pressed momentarily, but if you hold a button in for three seconds that preset records the entire state of the machine. So, get the instrument exactly as you want it - what displays are in the four tiles, what parameters are being monitored etc. - and then store it away. You can download and upload preset templates over the web interface or via a USB stick on the WFM-version.
Hugh and Phil go into the emerging standards for Ultra High Definition television.

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>

Phil had a day at BAFTA listening to various speakers from the industry talking about 4K (quad-HD, UltraHD, etc etc) and the surrounding standards.

ITU Rec.2020 is the document that covers 4k TV (in fact it defines two resolutions - 3840 x 2160 and 7680 x 4320 - which I'll refer to as 4k and 8k television, but these aren't the same as the 4096- and 8192-pixel-wide resolutions used in digital film).


The colour space is monstrous! The 2020 triangle is even bigger than the P3 colour space (as defined by the DCI) - take that, Mr. Dolby! It'll be a while before ANY display device can faithfully reproduce that gamut. Thankfully we stay with D65 for white (well, 6504K to be strictly correct - Planck's constant was re-calculated in the 70s) and the primaries are;

* red: 0.708, 0.292
* green: 0.170, 0.797
* blue: 0.131, 0.046
* white: 0.3127, 0.3290

The new luma transfer function is: Y' = 0.2627 R + 0.6780 G + 0.0593 B and for the first time ever in television an allowance for constant luminance has been made. There is an almost philosophical argument by Charles Poynton and others that constant luminance is the way to go. Essentially the gamma response should be applied only to the derived luminance rather than the three colour components. I suppose your feeling on that comes down to whether you think gamma is correcting for the camera response (that's what I was always taught at the Beeb in analogue SD days) OR whether gamma is a tool to give better dynamic range in the dark areas of the picture. I expect that constant luminance (proper Y as opposed to Y' / "luma") is the best approach for 12-bit video (where you have so much more dynamic range anyway) but pre-corrected RGB should remain for 10- and 8-bit 4k.
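Those luma coefficients aren't arbitrary - they fall out of the primaries and white point listed above. A sketch of the standard colorimetric derivation (pure Python, Cramer's rule for the 3x3 solve):

```python
def luma_coeffs(r, g, b, w):
    """Derive luma coefficients from CIE xy chromaticities: they are the
    Y-row of the RGB->XYZ matrix, scaled so the white point maps to Y=1."""
    def to_xyz(c):
        x, y = c
        # XYZ of a colour normalised to Y=1
        return [x / y, 1.0, (1.0 - x - y) / y]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    R, G, B = to_xyz(r), to_xyz(g), to_xyz(b)
    W = to_xyz(w)
    # Columns of M are the primaries' XYZ; solve M * s = W (Cramer's rule)
    M = [[R[i], G[i], B[i]] for i in range(3)]
    d = det3(M)
    def replaced(j):
        return [[W[i] if k == j else M[i][k] for k in range(3)]
                for i in range(3)]
    # Since each primary was normalised to Y=1, the scale factors s
    # are exactly the luma coefficients
    return tuple(det3(replaced(j)) / d for j in range(3))

# Rec.2020 primaries and D65 white from the list above
kr, kg, kb = luma_coeffs((0.708, 0.292), (0.170, 0.797),
                         (0.131, 0.046), (0.3127, 0.3290))
```

Running this reproduces the 0.2627 / 0.6780 / 0.0593 coefficients to four decimal places; feed in the Rec.709 primaries instead and you get the familiar 0.2126 / 0.7152 / 0.0722.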

Frame rates are defined up to 120FPS with no interlaced frame rates - unfortunately the non-integer rates (23.98, 29.97, 59.94) are still hanging around like a bad smell! The Beeb's Richard Salmon showed a very convincing argument for >100FPS for sports footage. Essentially, as you have more resolution the difference between detail in static pictures and moving scenes becomes objectionable. The problem is that currently HDMI 1.4 only supports a maximum of 30 FPS at 4k and so we're waiting for HDMI 2.0.

Today you can buy equipment that works at the TV "4k" resolution, which is also referred to as "quad-HD" because it has twice the number of pixels horizontally and twice the number of active lines: 3,840 x 2,160. Blackmagic have already implemented what they call 6G-SDI - i.e. 4 x 1.5Gbit/sec of 1920x1080 @30FPS (max) with 4:2:2 colour sampling.


If you want 50 or 60P at 4:2:2 you'd need 12G and should you want to go to 4:4:4 RGB at 12bit then you're looking at >20G! 
Whilst a coax interface still (just!) works at 6G (and I'd point you towards some research I did in 2009) it seems like single-mode fibre is the only sensible interface that we'll have for synchronous video as 4K starts to be used for live production.


Richard Salmon from the BBC showed that with the huge amount of resolution that 4k brings, the human brain recoils if there isn't enough temporal resolution to make moving images look as good as static images. Imagine a rapid pan across the crowd at a football stadium. At sub-100 frames per sec you don't see enough detail in the picture (each pixel is smeared so as to make it look like a much lower resolution image) but when the camera stops the pan you suddenly notice the huge amount of detail. That difference in static and dynamic resolution can, in extreme cases, cause nausea. With this in mind it seems that the standard for live TV will be 4:2:2 colour encoding at 120 FPS! Anyone for 24Gbit/sec video?! HDMI v1.4 currently only supports sub-8 Gigabits/sec. So it seems we're going to have to wait for cable standards to catch up, and when they do it'll probably be 9/125µm fibre.
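The back-of-the-envelope bit-rates quoted here are easy to sanity-check. A sketch counting active-picture payload only (real SDI/HDMI links carry blanking and ancillary data on top, which is why the quoted link rates run higher than these figures):

```python
def payload_gbps(width, height, fps, bit_depth=10, sampling=(4, 2, 2)):
    """Active-picture payload rate in Gbit/s for Y'CbCr (or RGB) video."""
    y, cb, cr = sampling
    samples_per_pixel = (y + cb + cr) / y   # 4:2:2 -> 2, 4:4:4 -> 3
    return width * height * fps * samples_per_pixel * bit_depth / 1e9

uhd_120 = payload_gbps(3840, 2160, 120)                  # ~19.9 Gbit/s active
uhd8k_120 = payload_gbps(7680, 4320, 120)                # ~79.6 Gbit/s active
rgb444_12 = payload_gbps(3840, 2160, 60, 12, (4, 4, 4))  # ~17.9 Gbit/s active
```

Add the blanking and ancillary overhead and the 4k/120 figure lands around the 24Gbit/sec quoted above, while the 8k/120 case climbs towards the 96Gbit/sec mentioned below.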


Take this to 8k (which is the second half of the proposed UHD TV standard) and we're looking at 96Gbits/sec! Even current standard fibre struggles with that! So the other interesting technology, which may well form the mezzanine format for moving video over cables and networks, is pixel-free video;

This video was shown in one of the sessions; the speaker was Professor Philip Willis of The University of Bath's Computer Science department.


It shows various pieces of video that have been converted to a contour/vector representation: instead of using pixels in a raster to represent video, they use contours (which also have shading associated with them) and vectors (which dictate how the contours are moving). This is not an effort to compress the data load - Prof. Willis was at pains to point out that they have not made any efforts to optimise or do any bit-rate reduction calculations on the data - rather, it is a way of representing high-resolution video in a pixel-free manner. This might provide a useful transport/mezzanine format for moving 4k and 8k television around, rendering the pictures at the resolution of the target display device.

The upshot of this is that rendering at a higher resolution than the material was shot at shows none of the aliasing that you'd expect from pixel-based video. Although you can't get more detail than was there originally, the codec fails gracefully such that the images are not unpleasant to look at (unlike the low-res YouTube clip above!).
Prof. Willis gave a tantalising little extra in the Q&A session - he implied that they are looking to give the contours/vectors a time-based element so that they move not only in X-Y space but along the t-axis, such that the pixel-free video now becomes frame-free! You could render the resulting data as easily at 1920x1080 @60FPS as at 720x576 @50 fields without any aliasing in the spatial dimensions OR temporally; say goodbye to standards conversion!

The original paper is a bit heavyweight but if you are happy with vector maths it is understandable.

Hugh and Phil go over the principles of the Discrete Cosine Transform as applied to video compression and the differences between IFrame and long-GOP codecs.

<html><iframe width="640" height="360" src="" frameborder="0" allowfullscreen></iframe></html>
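For reference, the transform itself is small enough to sketch in a few lines (orthonormal 1-D DCT-II; block-based codecs apply it to the rows and then the columns of each 8x8 tile):

```python
import math

def dct2(block):
    """Orthonormal 1-D DCT-II of a list of samples (one row of a block).
    A flat input concentrates all its energy in the first (DC) coefficient,
    which is why the DCT compacts typical image data so well."""
    n = len(block)
    coeffs = []
    for k in range(n):
        s = sum(block[i] * math.cos(math.pi * (i + 0.5) * k / n)
                for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        coeffs.append(scale * s)
    return coeffs
```

Applying this to rows then columns gives the 2-D transform used in I-frame codecs; long-GOP codecs add motion-compensated prediction between frames and transform only the residual.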
Phil and Hugh talk about the essentials for any broadcast engineer's bag;

* Multimeter with terms and BNC barrels
* Basic tools - screwdrivers, t-strips, tweaker
* Mains death tester
* Homebrew RS422 tester
* Earpiece to copper ends
* Laser pointer

<html><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></html>