PapersWeLove - Rendering Synthetic Objects Into Real Scenes - Paul Debevec.pdf
1. Adam Hill ~ Improving Enterprises
Papers We Love, Dallas
I love this paper because there is very little advanced math at its core. (Sorry,
Michael.) It cleverly uses already-established techniques to make beautiful images. It
was also the first paper I ever read from SIGGRAPH, and the first whose results I saw
in a real-world demo.
3. Abstract
“We present a method that uses measured scene radiance and
global illumination in order to add new objects to light-based
models with correct lighting. The method uses a high dynamic
range image-based model of the scene, rather than synthetic
light sources, to illuminate the new objects.”
4. But first, a small detour into
Computer Graphics 101
(from 1998)
By now rendering has a long history of improvements, but I
will try to give you a taste of how things were back in 1998 and
some fundamentals of CG
5. CG 101 - Scenes
Frames to be rendered are stored in Scenes
They contain:
● Cameras
● Lights
● Objects
These are stored in some file.
6. CG 101 - Camera
The point of view of the renderer.
View frustum - Volume to be
rendered, bounded by the near
and far clipping planes.
The scene is laid out in world
space (3D), then projected
into screen space (2D)
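As a rough illustration (not from the talk), the world-to-screen projection can be sketched with a minimal pinhole-camera model; the `project` helper here is hypothetical:

```python
def project(point, focal_length=1.0):
    """Perspective-project a 3D camera-space point (x, y, z) onto the
    image plane at z = focal_length. z is the distance from the camera."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is on or behind the camera")
    return (focal_length * x / z, focal_length * y / z)

# A point twice as far away projects half as large:
near = project((2.0, 1.0, 2.0))  # (1.0, 0.5)
far = project((2.0, 1.0, 4.0))   # (0.5, 0.25)
```

Real renderers do this with 4x4 projection matrices and clip against the view frustum, but the divide-by-depth is the core idea.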
7. CG 101 - Lights
Lights were special
purposed, just like
photography - key, fill,
back and background
8. CG 101 - Lights II
There were lots of lights in the scene, often
linked to specific objects and for specific effects
Lights themselves are normally not visible in the render;
they only contribute illumination to the scene
9. CG 101 - Objects
● Shaders
○ Materials / Rendering models - Lambert, Gouraud,
Blinn, Phong
■ Color, Specularity, Diffuse and Bump
This worked, but it made things look flat and plasticky, so a lot of post
processing was needed to “fix” how rendered images looked.
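A minimal sketch of that era's local shading, using the classic Phong model named above (a hypothetical `phong_shade` helper; scalar intensities instead of RGB, one light, no shadows):

```python
def phong_shade(n, l, v, kd=0.7, ks=0.3, shininess=32):
    """Classic Phong shading for one light.
    n, l, v are unit vectors: surface normal, direction to the light,
    and direction to the viewer. kd/ks are diffuse/specular strengths."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Lambert's diffuse term: proportional to cos(angle to the light)
    ndl = dot(n, l)
    diffuse = kd * max(ndl, 0.0)

    # Phong's specular lobe: reflect l about n, compare with view direction
    r = tuple(2 * ndl * ni - li for ni, li in zip(n, l))
    specular = ks * max(dot(r, v), 0.0) ** shininess
    return diffuse + specular

# Light, normal, and viewer all aligned: full diffuse + full specular
head_on = phong_shade((0, 0, 1), (0, 0, 1), (0, 0, 1))  # 1.0
```

Note there is no interaction between surfaces at all — each point is shaded in isolation, which is exactly why everything came out flat and plasticky.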
11. But that was not good enough
What if we could simulate rays of light?
How hard could that be?
12. Turns out... It wasn’t hard, but...
Ray tracing was
time consuming.
You were literally tracing light
rays, recursively, from your
eye back into the scene
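The recursion can be sketched with a toy scene; the `Hit`/`Scene` interfaces here are hypothetical stand-ins for real ray-geometry intersection tests:

```python
class Hit:
    """Stand-in for a ray/surface intersection (hypothetical interface)."""
    def __init__(self, color, reflectivity, bounce_ray):
        self.color, self.reflectivity, self.bounce_ray = color, reflectivity, bounce_ray

class Scene:
    """Toy scene: maps a ray label to its Hit instead of doing geometry tests."""
    background = 0.1
    def __init__(self, hits):
        self.hits = hits
    def intersect(self, ray):
        return self.hits.get(ray)

MAX_DEPTH = 3  # stop bouncing after a few reflections

def trace(ray, scene, depth=0):
    """The shape of classic recursive ray tracing: shade the hit point,
    then recurse along the reflected ray and add its contribution."""
    if depth >= MAX_DEPTH:
        return 0.0
    hit = scene.intersect(ray)
    if hit is None:
        return scene.background  # ray escaped to the environment
    return hit.color + hit.reflectivity * trace(hit.bounce_ray, scene, depth + 1)

# One mirror-ish surface whose reflection ray escapes to the background:
scene = Scene({"eye_ray": Hit(color=0.5, reflectivity=0.5, bounce_ray="up")})
pixel = trace("eye_ray", scene)  # 0.5 + 0.5 * 0.1
```

The expense the slide is pointing at is that every pixel fires one of these recursions, and each bounce can spawn more intersection tests.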
14. So Global Illumination was invented
It accounts for the indirect light contributions to a
scene that basic ray tracing methods miss. And it
didn't recompute everything all the
time
Lots of different algorithms, speed vs accuracy
trade offs. (Caching computations, finite element
methods, etc...)
15. GI Examples
Without Global Illumination - No green wall
contribution to the scene. Can't see the shadow of
the light fixture.
With Global Illumination - A lot more light interaction
16. DONE! :-)
Global illumination solved a lot of problems but
we were still placing lots of lights to make
synthetic objects look like they belonged in
natural settings.
So this guy going to SIGGRAPH had
a cunning plan
19. Abstract
“We present a method that uses measured scene radiance and
global illumination in order to add new objects to light-based
models with correct lighting. The method uses a high dynamic
range image-based model of the scene, rather than synthetic
light sources, to illuminate the new objects.”
20. What Paul Debevec discovered
You can use a HDR image to capture the light
model of the scene (radiance and irradiance)
and use global illumination to render the
synthetic objects.
21. How?
Separate the scene into three parts:
Capture the light model from
the distant scene.
Have an accurate model of the
local scene.
Have an accurate model of
the synthetic objects.
Global illumination simulates
interaction between the
pieces.
Ignore light heading back
toward the distant scene.
22. 1: Capture and model the light of the
distant scene via HDR
High Dynamic Range Imaging (HDR) - using photographic
and computational techniques to increase the dynamic
range of an image.
This is almost impossible to show because consumer
projectors uniformly suck, but we will try.
24. Combine them into a single HDR “image”
This is actually a file with more
color information than you can
display on regular monitors
Instead of an integer for each pixel
in each color channel, it now has
floats
Now it can be used by the renderer
to calculate radiance information
for the scene
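A hedged sketch of the merge step, in the spirit of Debevec and Malik's SIGGRAPH '97 HDR recovery but simplified to assume a linear camera response (the `merge_exposures` helper and its triangle weighting are illustrative, not the paper's exact algorithm):

```python
def merge_exposures(samples):
    """samples: list of (pixel_value_0_to_255, exposure_time_seconds) for
    the SAME scene point across a bracketed exposure series.
    Returns a floating-point relative radiance, assuming linear response."""
    def weight(z):
        # Trust mid-range pixels; distrust values near 0 (noise) or 255 (clipped)
        return min(z, 255 - z)

    usable = [(z, t) for z, t in samples if weight(z) > 0]
    # Each exposure estimates radiance as value / exposure_time;
    # combine them as a weighted average.
    num = sum(weight(z) * (z / t) for z, t in usable)
    den = sum(weight(z) for z, t in usable)
    return num / den

# The same scene point seen at three shutter speeds:
radiance = merge_exposures([(200, 1 / 30), (100, 1 / 60), (50, 1 / 125)])
```

The point of the float result is exactly what the slide says: the merged value is no longer clamped to 0-255, so a sunlit window and a dark corner can coexist in one file.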
26. More light probes
Grace Cathedral, San Francisco
Dynamic range: 200,000:1
Eucalyptus Grove, UC Berkeley
Dynamic range: 5000:1
27. Now measure the light
Remap the spherical
light map to a cube
map (p.7) and use it to
determine the incident
light contribution to the
scene
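At the heart of that remapping is the cube-map lookup: a direction picks the face whose axis has the largest magnitude. A minimal sketch (hypothetical `cube_face` helper; real code would also compute the 2D coordinates within the face):

```python
def cube_face(d):
    """Which cube-map face a direction vector (x, y, z) lands on:
    the axis with the largest absolute component wins, signed by its sign.
    This is the standard cube-map convention."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= ax and ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"

face = cube_face((-2.0, 0.5, 0.1))  # "-x": the strong negative-x component wins
```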
28. 2: Approximate the material based
local scene model
To describe the materials you need to provide the
parameters for its Bidirectional Reflectance Distribution
Function (BRDF).
Remember those shaders from earlier? Those are specific
examples of a BRDF. But there is a more general
version, and for that we need...
MATH!
29. Bidirectional Reflectance Distribution Function
Bidirectional means that the camera
and the light source could swap
positions and the function gives you
the same result.

f_r(w_i, w_r) = dL_r(w_r) / dE_i(w_i) = dL_r(w_r) / (L_i(w_i) cos θ_i dw_i)

where L is radiance, or power per unit
solid-angle-in-the-direction-of-a-ray per unit
projected-area-perpendicular-to-the-ray, E is irradiance,
or power per unit surface area, and θ_i is the angle
between w_i and the surface normal n. The index w_i
indicates incident light, whereas the index w_r indicates
reflected light.
The bidirectional reflectance distribution function
is a four-dimensional function that defines how
light is reflected at an opaque surface
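For a concrete instance: the simplest BRDF is a Lambertian (perfectly diffuse) surface, whose f_r is the constant albedo / π for every direction pair. A small numeric sketch (hypothetical helpers; scalar radiance, single light sample):

```python
import math

def lambert_brdf(albedo):
    """A perfectly diffuse surface reflects the same radiance in every
    direction, so f_r is constant: albedo / pi (the pi keeps energy
    conserved when integrated over the hemisphere)."""
    return albedo / math.pi

def reflected_radiance(f_r, incident_radiance, cos_theta_i, solid_angle):
    """One-sample version of the reflectance relation from the definition:
    dL_r = f_r * L_i * cos(theta_i) * dw_i."""
    return f_r * incident_radiance * cos_theta_i * solid_angle

# A 50%-albedo surface lit head-on by a source subtending pi steradians:
out = reflected_radiance(lambert_brdf(0.5), 10.0, 1.0, math.pi)  # 5.0
```

The shaders from the earlier slides (Lambert, Phong, Blinn) are exactly such closed-form f_r choices; measured BRDFs replace the formula with data.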
30. Radiance and Irradiance
Radiance is a measure of the quantity of radiation that passes through
or is emitted from a surface and falls within a given solid angle in a
specified direction.
Irradiance is the power of electromagnetic radiation per unit area
(radiative flux) incident on a surface.
These are both measurements of flux - energy (photons) per unit
time, into or out of a surface, per unit area.
31. How to get the BRDF
There are devices that measure surface reflectance
characteristics - spectroradiometers - and let you calculate
the BRDF for each color channel. (Used in ‘The Matrix’)
You can also assume a reflectance model, render, and
adjust. There are many collections of parameters for
various materials as a starting point. (It’s CG’s version of
bubble sort)
32. 3: Complete material based models
of the objects
This is the easy part.
You built ‘em, you know what they are made of
and how they are shaped. :-)
33. Render!
I am not about to teach a course on rendering methods.
Let’s just say this is a solved problem :-)
You render all 3 components and composite them together
But...
34. Differential Rendering
If you can’t get a good approximation of the
local scene, it is often adequate to compute only
the change the synthetic objects make: the
difference between the local scene rendered with
the objects and rendered without them.
35. More (simple) MATH! (p.6)
If LS_b = just the local scene background (the HDR image),
then the error between the background and the modeled local scene rendered without any objects is

Err_ls = LS_noobj - LS_b

This error is the difference between the BRDF characteristics of the actual local scene as compared to
the modeled local scene.
So the final version should be

LS_final = LS_obj - Err_ls

And given the first equation we can rewrite it as

LS_final = LS_b + (LS_obj - LS_noobj)

Ok... it’s probably better to show you
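Before the pictures, the final equation can be sketched per pixel (hypothetical helper; assumes the three inputs are floating-point radiance values for corresponding pixels):

```python
def differential_composite(background, with_objects, without_objects):
    """Per-pixel differential rendering:
    LS_final = LS_b + (LS_obj - LS_noobj).
    Modeling errors that appear identically in both renderings
    cancel out in the difference."""
    return [b + (w - wo)
            for b, w, wo in zip(background, with_objects, without_objects)]

# Pixel 0: the synthetic object casts a shadow (darker with the object);
# pixel 1: it adds a reflection (brighter with the object).
final = differential_composite([1.0, 1.0], [0.8, 1.1], [1.0, 1.0])
```

Where the object changes nothing, with and without renders agree and the real photograph passes through untouched - that is the whole trick.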
38. Art and OMG! We’re Living in the Future
Rendering With Natural Light
http://www.pauldebevec.com/RNL/
Radeon 9700 RNL Demo
Fiat Lux
http://www.pauldebevec.com/FiatLux/movie/
Unity5 Demo
39. Links & Questions
Paul Debevec
http://www.pauldebevec.org
Radiance, BRDF, and the Rendering Equation - Lischinski
http://www.cs.huji.ac.il/~danix/advanced/notes2.pdf
Image Based Lighting Tutorial - Debevec
http://ict.usc.edu/pubs/Image-Based%20Lighting.pdf