So the idea of a linear workflow rests on a few fundamentals. First up is the difference between 8-bit, 16-bit and 32-bit images. An 8-bit image stores 2^8, or 256, levels per channel, so across the three RGB channels there would be a total of 16,777,216 colour combinations. 16-bit would be over 281 trillion combinations and 32-bit would be over 79 octillion combinations, so that's a lot of colour. What's more interesting is that 8-bit and 16-bit images round values to a whole unit, so for example 3.4 would round down to 3 and 4.9 would round up to 5. This is how the image can be optimized.
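The maths above can be sketched out quickly. This is a minimal illustration, not tied to any particular renderer; the function names are my own, and it just shows how bit depth sets the number of colours and how integer formats round fractional values to whole steps:

```python
def colors_per_channel(bits):
    """Number of levels one channel can hold at a given bit depth."""
    return 2 ** bits

def total_colors(bits, channels=3):
    """Total RGB combinations: per-channel levels raised to the channel count."""
    return colors_per_channel(bits) ** channels

print(total_colors(8))   # 16777216 (about 16.7 million)
print(total_colors(16))  # 281474976710656 (over 281 trillion)
print(total_colors(32))  # roughly 7.9e28 (over 79 octillion)

# Integer formats can only store whole steps, so fractional
# values are rounded to the nearest level:
print(round(3.4))  # 3
print(round(4.9))  # 5
```

Floating-point 32-bit images skip that rounding step, which is why they keep far more detail in the lights and darks.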
Now this tutorial is a little out of date, as it is shown in Maya 2013, so there is only so much I can do. I understand, to an extent, the technology and methods Maya uses to render a scene and create an image. Having certain render settings "wash out" or "bleach out" an image because it is trying to recreate the original is understandable, and whilst this technique may not be as present in the current version of Maya, it's nice to understand how the ratios of light and dark clash to make the image.
Things like exposure, contrast and gamma can give a scene realism in places you wouldn't expect, because of detail that is lost in traditional renders rather than optimized ones. When this is combined with things like Portal Lights, Ambient Occlusion layers, HDR Lighting and Final Gather render options, it's easy to see how realism is simulated rather than a brand new version of realism being "created".
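The gamma part of this is worth a quick sketch. Assuming the common 2.2 approximation for standard displays (the exact sRGB curve is slightly different), the idea of a linear workflow is to do the lighting maths on linear values and only apply the gamma curve for display. The function names here are my own:

```python
GAMMA = 2.2  # common approximation for standard (sRGB-like) displays

def linear_to_display(value, gamma=GAMMA):
    """Encode a linear light value (0.0-1.0) for display; brightens mid-tones."""
    return value ** (1.0 / gamma)

def display_to_linear(value, gamma=GAMMA):
    """Decode a display value back to linear light for rendering maths."""
    return value ** gamma

# A 50% linear grey shows up much brighter than 0.5 on screen:
mid = linear_to_display(0.5)
print(round(mid, 2))  # 0.73

# Round-tripping recovers the original linear value:
print(round(display_to_linear(mid), 2))  # 0.5
```

This is also why renders look "washed out" when gamma gets applied twice, or too dark when it never gets applied at all.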
I know these images don't do this tutorial justice, but I do have a better understanding of how this technology works, and how it can be applied to certain situations if the need for it arises.
Also (side note), another thing to consider is that the range of values a computer screen can display can affect a render, or even a product. This got me thinking about how a lot of screens in media wouldn't be the same, and not just in the sense of settings (e.g. contrast, exposure, saturation), so would a product that needed to be rendered have to be tested on multiple screens in order to get as true a render as possible across as many screens as it can portray on?