In spite of the dramatic growth of virtual and augmented reality (VR and AR) technology, content creation for immersive and dynamic virtual environments remains a significant challenge. In this dissertation, we present our research in fusing multimedia data, including text, photos, panoramas, and multi-view videos, to create rich and compelling virtual environments.
First, we present Social Street View, which renders geo-tagged social media in its natural geo-spatial context provided by 360° panoramas. Our system takes into account visual saliency and uses maximal Poisson-disc placement with spatio-temporal filters to render social multimedia in an immersive setting. We also present a novel GPU-driven pipeline for saliency computation in 360° panoramas using spherical harmonics (SH). Our spherical residual model can be applied to virtual cinematography in 360° videos.
We further present Geollery, a mixed-reality platform that renders an interactive mirrored world in real time with three-dimensional (3D) buildings, user-generated content, and geo-tagged social media. We conduct a user study with 20 participants to qualitatively evaluate Social Street View and Geollery. The study identifies several use cases for these systems, including immersive social storytelling, experiencing different cultures, and crowd-sourced tourism.
We next present Video Fields, a web-based interactive system to create, calibrate, and render dynamic videos overlaid on 3D scenes. Our system renders dynamic entities from multiple videos using early and deferred texture sampling, and can be used for immersive surveillance in virtual environments. Furthermore, we present the VRSurus and ARCrypt projects, which explore the applications of gestures, haptic feedback, and visual cryptography in virtual and augmented reality environments.
Finally, we present our work on Montage4D, a real-time system for seamlessly fusing multi-view video textures with dynamic meshes. We use geodesics on meshes with view-dependent rendering to mitigate spatial occlusion seams while maintaining temporal consistency. Our experiments show significant enhancement in rendering quality, especially for salient regions such as faces. We believe that Social Street View, Geollery, Video Fields, and Montage4D will greatly facilitate several applications such as virtual tourism, immersive telepresence, and remote education.
Fusing Multimedia Data Into Dynamic Virtual Environments
1. Fusing Multimedia Data Into
Dynamic Virtual Environments
Ruofei Du
ruofei@cs.umd.edu
Dissertation Committee: Dr. Varshney, Dr. Zwicker, Dr. Huang, Dr. JaJa, and Dr. Chuang
THE AUGMENTARIUM
VIRTUAL AND AUGMENTED REALITY LABORATORY
AT THE UNIVERSITY OF MARYLAND
COMPUTER SCIENCE
UNIVERSITY OF MARYLAND | UMIACS | GVIL
6. Motivation
Popularity of VR and AR devices
4.5 million VR and AR devices shipped worldwide
https://www.statista.com/statistics/671403/global-virtual-reality-device-shipments-by-vendor
16. Outline
Fusing Multimedia Data Into
Dynamic Virtual Environments
PANORAMAS | MULTIVIEW VIDEOS | SOCIAL MEDIA | MESHES
Montage4D
ACM I3D’18
To appear in JCGT’18
Social Street View
Best Paper Award
ACM Web3D’16
Spherical Harmonics Saliency
Best Student Poster Award
ACM I3D’18
Under Review, IEEE TVCG’18
17. Outline
Fusing Multimedia Data Into
Dynamic Virtual Environments
PANORAMAS | MULTIVIEW VIDEOS | SOCIAL MEDIA | MESHES
Video Fields
ACM Web3D’16
Montage4D
ACM I3D’18
To appear in JCGT’18
Social Street View
Best Paper Award
ACM Web3D’16
Geollery
Under Review, ACM CHI’18
Spherical Harmonics Saliency
Best Student Poster Award
ACM I3D’18
Under Review, IEEE TVCG’18
18. Outline
Fusing Multimedia Data Into
Dynamic Virtual Environments
PANORAMAS | MULTIVIEW VIDEOS | SOCIAL MEDIA | MESHES
Video Fields
ACM Web3D’16
Montage4D
ACM I3D’18
JCGT’18
Haptics and Gestures
• VRSurus (CHI EA’16)
• HandSight (ECCVW’14, TACCESS’16, SIGACCESS’16)
Foveated Rendering
• Kernel Foveated Rendering (I3D’18)
Learning Sketchy Scenes
• Sketchy Scenes (ECCV’18)
• LUCCS (Under Review, TOG’18)
Visualization and Cryptography
• AtmoSPHERE (CHI EA’15)
• TopicFields
• ARCrypt (Under Review, VR’19)
Social Street View
Best Paper Award
ACM Web3D’16
Geollery
Under Review, ACM CHI’19
Under Review, IEEE VR’19
Spherical Harmonics Saliency
Best Student Poster Award
ACM I3D’18
Under Review, IEEE TVCG’18
HAPTICS IN VR,
VISUAL CRYPTOGRAPHY IN
AR, VISUALIZATION, ETC.
19. Outline
Fusing Multimedia Data Into
Dynamic Virtual Environments
PANORAMAS | MULTIVIEW VIDEOS | SOCIAL MEDIA | MESHES
Social Street View
Best Paper Award
ACM Web3D’16
20. Social Street View: Blending Immersive Street Views
with Geotagged Social Media
Ruofei Du and Amitabh Varshney
{ruofei, varshney} @ cs.umd.edu
www.SocialStreetView.com
In Proceedings of the International Conference of Web3D 2016. [Best Paper Award]
36. Related Work
VR and Visual Memories
Immersive virtual environments can improve social interactions, and spatial awareness is closely related to visual memories.
37. Related Work
VR and Visual Memories (cont.)
Very little work has been carried out in designing immersive interfaces that interleave visual navigation of our surroundings with social media content.
48. Related Work
Visualization of Geo-tagged Information (cont.)
Registers user-annotated text and images to a particular point in 3D space.
49. Related Work
Visualization of Geo-tagged Information (cont.)
However, both the 3D model and the annotations are predefined for rendering such a scene.
53. Related Work
Visualization of Geo-tagged Information (cont.)
They use an offline 3D reconstruction algorithm or existing 3D models to register the photos onto the meshes.
Such methods usually suffer from hours of processing time for a single location.
60. Conception, Architecture, and Implementation
Social Street View
A mixed-reality system that depicts geo-tagged social media in immersive 3D web environments.
61. Blending Multiple Modalities of Street View + Social Media
Depth maps, normal maps, and road orientation
GPS coordinates and creation time
63. Achieving Cross-platform Compatibility with WebGL + Three.js
Supported devices include smartphones, tablets, desktops, high-resolution large-area wide-field-of-view tiled display walls, and head-mounted displays.
78. SQL Database
Indexed by B+ Trees
Logarithmic insertion & query
Longitude, latitude, and timestamps are indexed by B+ trees.
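As a rough sketch of this indexing scheme (hypothetical schema and values; SQLite, used here for brevity, builds B-tree rather than B+-tree indices, with the same logarithmic behavior):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE social_media (
        id INTEGER PRIMARY KEY,
        longitude REAL, latitude REAL, timestamp INTEGER, url TEXT
    )""")
# One index per queried column: logarithmic insertion and range lookup.
conn.execute("CREATE INDEX idx_lng ON social_media (longitude)")
conn.execute("CREATE INDEX idx_lat ON social_media (latitude)")
conn.execute("CREATE INDEX idx_time ON social_media (timestamp)")

conn.execute("INSERT INTO social_media VALUES (1, -77.0365, 38.8977, 1460000000, 'a.jpg')")
conn.execute("INSERT INTO social_media VALUES (2, 116.3913, 39.9075, 1460000500, 'b.jpg')")

# Range query: candidates inside a bounding box around a panorama, newest first.
rows = conn.execute("""
    SELECT id FROM social_media
    WHERE longitude BETWEEN ? AND ? AND latitude BETWEEN ? AND ?
    ORDER BY timestamp DESC""", (-77.1, -77.0, 38.8, 39.0)).fetchall()
print(rows)  # [(1,)]
```

The bounding-box query only prunes candidates; exact distances are computed with the haversine formula below.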
79. Haversine Formula
Andrew, 1805
The distance between a social media item and a street view panorama is computed with the haversine formula, which defines the great-circle distance on the surface of a sphere.
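A minimal implementation of the haversine distance (the landmark coordinates in the example are approximate):

```python
from math import radians, sin, cos, asin, sqrt

def haversine(lng1, lat1, lng2, lat2, radius_km=6371.0):
    """Great-circle distance between two points on a sphere (haversine formula)."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlng = radians(lng2 - lng1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlng / 2) ** 2
    return 2 * radius_km * asin(sqrt(a))

# Distance from the Washington Monument to the U.S. Capitol: roughly 2.3 km.
d = haversine(-77.0353, 38.8895, -77.0091, 38.8899)
print(round(d, 1))
```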
94. Maximal Poisson-disk Sampling
Gamito et al.
Removes visual clutter and occlusion.
The Poisson-disk distribution is uniform: the minimum distance between each pair of social media items is greater than R, and sampling terminates when it reaches maximal coverage.
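A CPU sketch of the idea using simple dart throwing (the system itself uses the GPU-based Pixel-Pie algorithm described next; max_failures is an assumed termination heuristic):

```python
import random

def poisson_disc_sample(candidates, r, max_failures=1000):
    """Greedy dart throwing: accept a candidate only if it is at least r away
    from every accepted sample; stop after many consecutive rejections,
    approximating maximal coverage."""
    accepted, failures = [], 0
    pool = candidates[:]
    random.shuffle(pool)
    for p in pool:
        if all((p[0]-q[0])**2 + (p[1]-q[1])**2 >= r*r for q in accepted):
            accepted.append(p)
            failures = 0
        else:
            failures += 1
            if failures >= max_failures:
                break
    return accepted

random.seed(7)
pts = [(random.random(), random.random()) for _ in range(500)]
samples = poisson_disc_sample(pts, r=0.2)
# Every pair of accepted samples is at least r apart.
print(all((a[0]-b[0])**2 + (a[1]-b[1])**2 >= 0.04
          for i, a in enumerate(samples) for b in samples[:i]))
```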
96. Pixel-Pie Algorithm
Removes samples when occlusion occurs.
Pixel-Pie by Ip et al. uses the GPU's depth-testing feature for efficient sampling.
97. Project Social Media Pictures
By Maximal Poisson-disc Sampling
Sample in regions that are neither sky nor ground, with a limit on depth values;
place social media as billboards upon the building geometries;
cast soft shadows as if the lighting were at 45° to the normal vectors.
124. Social Media Coverage
100 panoramas for each algorithm
In the last algorithm, all the images are separated from each other and uniformly distributed on the building surfaces.
The last two algorithms largely reduce the coverage of the saliency map to less than 20%.
125. What could be the potential applications of Social Street View?
131. ... dinner started off with
amazing oysters paired with my
favorite Ruinart blanc de blancs
champagne
By frankiextah on Instagram
132. Business Advertising
Museums, restaurants, real estate ...
Social Street View enhances future consumers’ visual memories and makes it easier for them to seek the “amazing oysters” around this place.
136. In conclusion, Social Street View provides a novel solution for automatically fusing geo-tagged social media into immersive 3D environments.
137. We envision that social media and panoramic videos will be significant parts of the big data requirements for VR and AR applications.
138. Outline
Fusing Multimedia Data Into
Dynamic Virtual Environments
PANORAMAS | MULTIVIEW VIDEOS | SOCIAL MEDIA | MESHES
Spherical Harmonics Saliency
Best Student Poster Award
ACM I3D’18
Under Review, IEEE TVCG’18
139. Spherical Harmonics for Saliency Computation and Virtual
Cinematography in 360° Videos
Ruofei Du and Amitabh Varshney
{ruofei, varshney} @ cs.umd.edu
Best Student Poster Award at ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D), 2018.
Under Review, IEEE Transactions on Visualization and Computer Graphics.
145. Motivation
Almost 90% of the pixels are outside the
binocular field of view

Visual Medium           Horizontal FoV   Vertical FoV   Ratio Beyond FoV
Human Eyes              114°             135°           76.25%
HTC Vive, Oculus Rift   85°              95°            87.53%
Samsung Gear VR         75°              85°            90.16%
Google Cardboard        65°              75°            92.48%

Almost 90% of the pixels are outside a typical HMD's field of view (FoV).
149. Background
Saliency Maps, Itti et al., PAMI 1998
The goal of saliency detection is to model the visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system.
151. Motivation
Classic saliency approaches are not consistent (Itti et al.’s model)
The classic Itti et al. model treats the input as a rectilinear image.
156. Motivation
Classic saliency approaches are not consistent (Itti et al.’s model)
A spherical rotation may cause both false negatives and false positives.
157. Motivation
How can we build an efficient and consistent
saliency model in the spherical space?
[Figure: saliency maps of an input frame under horizontal clipping and spherical rotation, comparing Itti et al.’s model with ours]
161. Algorithm
Extracting Static Features
Compute SH coefficients using prefix sums on the GPU in real time, in O(L² log(NM)) time.
Here, f_{i,j} is the feature value and H_{i,j} is the precomputed value of the SH basis functions at each pixel.
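The projection itself can be sketched on the CPU with NumPy, here with only the first two SH bands and a direct sum in place of the GPU prefix scan (grid sizes and band count are illustrative):

```python
import numpy as np

def sh_basis(theta, phi):
    """First two bands of real spherical harmonics (l = 0, 1)."""
    return np.stack([
        0.2820948 * np.ones_like(theta),            # Y_0^0
        0.4886025 * np.sin(theta) * np.sin(phi),    # Y_1^-1
        0.4886025 * np.cos(theta),                  # Y_1^0
        0.4886025 * np.sin(theta) * np.cos(phi),    # Y_1^1
    ])

def project_feature_map(f):
    """Project an equirectangular feature map onto SH coefficients.
    f has shape (N, M): N rows span theta in (0, pi), M columns span phi in (0, 2pi)."""
    n, m = f.shape
    theta = (np.arange(n) + 0.5) * np.pi / n
    phi = (np.arange(m) + 0.5) * 2 * np.pi / m
    T, P = np.meshgrid(theta, phi, indexing="ij")
    H = sh_basis(T, P)                               # precomputable, like H_{i,j}
    dA = np.sin(T) * (np.pi / n) * (2 * np.pi / m)   # solid-angle weight per pixel
    return (H * f * dA).sum(axis=(1, 2))             # c_lm = sum_ij f_ij H_ij dA

# Sanity check: a constant map projects onto Y_0^0 only, c_00 = 0.2820948 * 4*pi.
c = project_feature_map(np.ones((64, 128)))
print(np.round(c, 3))  # approximately [3.545, 0, 0, 0]
```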
171. Results
Timing Comparison Between Itti et al.’s Model and Our SSR Model
The average timing per frame:

Resolution    Itti et al. (CPU)   SSR (CPU)   SSR (GPU)
1920×1080     104.46 ms           21.34 ms    10.81 ms
4096×2048     314.94 ms           48.18 ms    13.20 ms
7680×3840     934.26 ms           69.53 ms    26.58 ms
172. Virtual Cinematography
How can saliency maps be used?
How can saliency maps guide automatic camera control for 360° videos?
173. Virtual Cinematography
Most salient objects are not reliable
The most salient objects may vary from frame to frame, due to varying occlusions, colors, and self-movement.
174. Virtual Cinematography
Camera jittering
An approach that relies on just tracking the most salient objects may incur rapid camera motion and, worse still, may induce motion sickness in virtual reality.
177. Evaluation
Compute 2048 potential camera positions
32×64 camera positions sampled uniformly over the sphere.
The optimal position considers both saliency coverage and temporal movement.
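A brute-force sketch of this evaluation (the saliency input is represented here as hypothetical weighted salient directions, and the FoV and weighting constants are assumptions, not the paper's values):

```python
import numpy as np

def sph_to_vec(theta, phi):
    """Unit direction vector from spherical coordinates."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def best_camera(salient_pts, weights, prev_dir, fov=np.pi / 2, move_weight=0.5):
    """Score 32x64 candidate directions on the sphere: maximize saliency
    coverage inside the FoV, penalize angular movement from prev_dir."""
    best, best_score = None, -np.inf
    for i in range(32):
        for j in range(64):
            d = sph_to_vec((i + 0.5) * np.pi / 32, (j + 0.5) * 2 * np.pi / 64)
            inside = np.arccos(np.clip(salient_pts @ d, -1, 1)) < fov / 2
            coverage = weights[inside].sum()
            movement = np.arccos(np.clip(prev_dir @ d, -1, 1))
            score = coverage - move_weight * movement
            if score > best_score:
                best, best_score = d, score
    return best

# Hypothetical input: one salient point along +x; the camera was already at +x,
# so the chosen direction should stay close to +x.
pts = np.array([[1.0, 0.0, 0.0]])
w = np.array([1.0])
cam = best_camera(pts, w, prev_dir=np.array([1.0, 0.0, 0.0]))
print(np.round(cam, 2))
```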
183. Conclusion
Spherical Harmonics Saliency
In conclusion, we have presented a novel GPU-driven pipeline which employs spherical harmonics to directly compute the saliency maps for 360° videos in the SO(2) space.
184. Conclusion
Spherical Harmonics Saliency
Our method remains consistent for challenging cases like horizontal clipping, spherical rotations, and equatorial bias, and can be computed on the GPU in real time.
186. Conclusion
Spherical Harmonics Saliency
We present a novel spatiotemporal optimization model to ensure large spatial saliency coverage and reduce temporal jitters of the camera motion.
187. Outline
Fusing Multimedia Data Into
Dynamic Virtual Environments
PANORAMAS | MULTIVIEW VIDEOS | SOCIAL MEDIA | MESHES
Montage4D
ACM I3D’18
To appear in JCGT’18
188. Montage4D: Interactive Seamless Fusion of Multiview Video Textures
Ruofei Du†‡, Ming Chuang‡¶, Wayne Chang‡, Hugues Hoppe‡§, and Amitabh Varshney†
†Augmentarium and GVIL | UMIACS | University of Maryland, College Park
‡Microsoft Research, Redmond  ¶Apple Inc.  §Google Inc.
189. Introduction
Fusion4D and Holoportation
Escolano et al. Holoportation: Virtual 3D Teleportation in Real-time (UIST 2016)
[Figure: the Holoportation pipeline. At site A, 8 capture pods record RGB and depth; depth estimation and segmentation produce per-view RGB, depth, and mask inputs; volumetric fusion and color rendering yield mesh, color, and audio streams, which are sent over the network to site B for remote rendering.]
190. Fusing multiview video textures onto dynamic meshes under real-time constraints is a challenging task.
200. Related Work
3D Texture Montage
Normal-Weighted Blending (200 FPS):
w_i = V · max(0, n̂ · v̂_i)^α
where V is the visibility test, n̂ is the surface normal vector, v̂_i is the view direction of texture camera i, and α is a blending exponent.
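A direct reading of this weighting as code (normalization across cameras is added so the weights can be used for blending; the example camera setup is hypothetical):

```python
import numpy as np

def blend_weights(normal, view_dirs, visible, alpha=2.0):
    """Normal-weighted blending: w_i = V_i * max(0, n . v_i)^alpha, normalized."""
    n = normal / np.linalg.norm(normal)
    w = np.array([vis * max(0.0, float(n @ v)) ** alpha
                  for v, vis in zip(view_dirs, visible)])
    s = w.sum()
    return w / s if s > 0 else w

# Hypothetical vertex facing +z, seen by a head-on camera, a camera 45 degrees
# off, and a grazing camera that fails the visibility test.
views = [np.array([0.0, 0.0, 1.0]),
         np.array([0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)]),
         np.array([1.0, 0.0, 0.0])]
w = blend_weights(np.array([0.0, 0.0, 1.0]), views, visible=[1, 1, 0])
print(np.round(w, 2))  # [0.67 0.33 0.  ]
```

The head-on camera dominates, the oblique camera contributes less, and the occluded camera contributes nothing; it is exactly this hard visibility cut that produces the seams analyzed next.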
213. Majority Voting (Color Seams)
Seams: Causes
Each vertex is assigned 8 colors coming from the 8 cameras. These colors are classified into clusters in LAB color space with a 0.1 threshold. The mean color of the largest cluster is called the majority voting color.
214. Majority Voting (Color Seams)
The triangle vertices have different majority voting colors, which may be caused by either mis-registration or self-occlusion.
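The voting step can be sketched as a greedy clustering (assuming LAB values normalized to [0, 1]; comparing each color against the running cluster mean is a simplification):

```python
import numpy as np

def majority_voting_color(lab_colors, threshold=0.1):
    """Greedily cluster the per-camera colors of a vertex in LAB space and
    return the mean of the largest cluster (the 'majority voting color')."""
    clusters = []
    for c in lab_colors:
        for cl in clusters:
            if np.linalg.norm(np.mean(cl, axis=0) - c) < threshold:
                cl.append(c)
                break
        else:
            clusters.append([c])
    largest = max(clusters, key=len)
    return np.mean(largest, axis=0)

# Hypothetical LAB samples from 8 cameras: six agree, two are outliers
# (e.g. from mis-registration or self-occlusion).
colors = np.array([[0.50, 0.10, 0.10]] * 6 + [[0.90, 0.40, 0.40]] * 2)
print(np.round(majority_voting_color(colors), 2))  # [0.5 0.1 0.1]
```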
217. Field of View (Background Seams)
One or two triangle vertices lie outside the camera’s field of view or in the subtracted background region while the rest do not.
226. Geodesics
For diffusing the seams
On triangle meshes, this is challenging because of the computation of tangent directions, and because shortest paths are defined on edges instead of vertices.
227. Geodesics
For diffusing the seams
We use the algorithm by Surazhsky and Hoppe to compute approximate geodesics. The idea is to maintain only 2–3 shortest paths along each edge to reduce the computational cost.
232. Desired Texture Field
For a frame at time t, for each camera i and each vertex v, we define the Desired Texture Field D_v^i(t), which combines the visibility test, the visible fraction of view i, the alignment between the user camera’s view direction and the texture camera’s view direction, and the geodesic distance to the seams.
236. Temporal Texture Field
Temporally smooth the texture fields
For a frame at time t, for each camera i and each vertex v, we define the Temporal Texture Field (exponential smoothing):
T_v^i(t) = (1 − λ) · T_v^i(t − 1) + λ · D_v^i(t)
where T_v^i(t − 1) is the texture field of the previous frame and λ = 1/FPS is the temporal smoothing factor.
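The smoothing recurrence is straightforward to sketch (scalar weights for brevity):

```python
def temporal_texture_field(desired, t_prev, fps=30.0):
    """Exponential smoothing of per-vertex, per-camera texture weights:
    T(t) = (1 - lam) * T(t - 1) + lam * D(t), with lam = 1 / FPS."""
    lam = 1.0 / fps
    return [(1 - lam) * tp + lam * d for tp, d in zip(t_prev, desired)]

# A camera's desired weight jumps from 0 to 1; the temporal field ramps up
# gradually instead of popping, which hides transition seams.
T = [0.0]
for _ in range(30):
    T = temporal_texture_field([1.0], T)
print(round(T[0], 3))  # after one second at 30 FPS: 0.638
```

After one second the weight has covered about 63% of the jump (1 − (1 − 1/30)^30), so camera transitions spread over a few seconds rather than a single frame.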
242. Note: these results use the default parameters of Floating Textures, which may not be optimal for our datasets. Still, for an optical-flow-based approach, it would be better if seam vertices were assigned lower texture weights.
247. Experiment
Breakdown of a typical frame
Most of the time is spent on communication between the CPU and the GPU.
248. In conclusion, Montage4D uses seam identification, geodesic fields, and temporal texture fields to provide a practical texturing solution for real-time 3D reconstruction. In the future, we envision that Montage4D will be useful for fusing massive multi-view video data into VR applications such as remote business meetings, remote training, and live broadcasting.
249. Outline
Fusing Multimedia Data Into
Dynamic Virtual Environments
PANORAMAS | MULTIVIEW VIDEOS | SOCIAL MEDIA | MESHES
Video Fields
ACM Web3D’16
Montage4D
ACM I3D’18
To appear in JCGT’18
Haptics
• VRSurus (CHI EA’16)
• HandSight (ECCVW’14, TACCESS’16, SIGACCESS’16)
Foveated Rendering
• Kernel Foveated Rendering (I3D’18)
Learning Sketchy Scenes
• Sketchy Scenes (ECCV’18)
• LUCCS (Under Review, TOG’18)
Visualization and Cryptography
• AtmoSPHERE (CHI EA’15)
• TopicFields (In Submission to VR’18)
• ARCrypt (In Submission to VR’18)
Social Street View
Best Paper Award
ACM Web3D’16
Geollery
Under Review, ACM CHI’18
Spherical Harmonics Saliency
Best Student Poster Award
ACM I3D’18
Under Review, IEEE TVCG’18
HAPTICS IN VR,
VISUAL CRYPTOGRAPHY IN
AR, VISUALIZATION, ETC.
250. Outline
Fusing Multimedia Data Into
Dynamic Virtual Environments
PANORAMAS | MULTIVIEW VIDEOS | SOCIAL MEDIA | MESHES
Social Street View
Best Paper Award
ACM Web3D’16
Geollery
Under Review, ACM CHI’18
251. Geollery: An Interactive Social Platform in Mixed Reality
Ruofei Du, David Li, and Amitabh Varshney
{ruofei, dli7319, varshney} @ umd.edu
www.Geollery.com
Under Review, ACM CHI Conference on Human Factors in Computing Systems, 2018.
259. VRSurus: Enhancing Interactivity and Tangibility of
Puppets in Virtual Reality
Ruofei Du and Liang He
University of Maryland, College Park
ACM CHI Conference on Human Factors in Computing Systems, 2015.
4th place in UIST 2014 Student Innovation Contest.
262. Conclusion
Fusing Multimedia Data Into
Dynamic Virtual Environments
PANORAMAS | MULTIVIEW VIDEOS | SOCIAL MEDIA | MESHES
Video Fields
ACM Web3D’16
Montage4D
ACM I3D’18
To appear in JCGT’18
Haptics and Gestures
• VRSurus (CHI EA’16)
• HandSight (ECCVW’14, TACCESS’16, SIGACCESS’16)
Foveated Rendering
• Kernel Foveated Rendering (I3D’18)
Learning Sketchy Scenes
• Sketchy Scenes (ECCV’18)
• LUCCS (Under Review, TOG’18)
Visualization and Cryptography
• AtmoSPHERE (CHI EA’15)
• TopicFields (In Submission to VR’18)
• ARCrypt (In Submission to VR’18)
Social Street View
Best Paper Award
ACM Web3D’16
Geollery
Under Review, ACM CHI’18
Spherical Harmonics Saliency
Best Student Poster Award
ACM I3D’18
Under Review, IEEE TVCG’18
HAPTICS IN VR,
VISUAL CRYPTOGRAPHY IN
AR, VISUALIZATION, ETC.
273. Peer-reviewed Publications
1. Ruofei Du, Ming Chuang, Wayne Chang, Hugues Hoppe, and Amitabh Varshney. Montage4D: Interactive Seamless
Fusion of Multiview Video Textures. Proceedings of ACM SIGGRAPH Symposium on Interactive 3D Graphics and
Games (I3D), pp. 124–133, 2018.
2. Ruofei Du, Ming Chuang, Wayne Chang, Hugues Hoppe, and Amitabh Varshney. Montage4D: Real-time Seamless
Fusion and Stylization of Multiview Video Textures. Under Review. Journal of Computer Graphics Techniques, 2018.
3. Ruofei Du and Amitabh Varshney. Spherical Harmonics for Saliency Computation and Virtual Cinematography in 360°
Videos. [Best Student Poster Award] ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D),
2018. Under Review, IEEE Transactions on Visualization and Computer Graphics, 2018.
4. Ruofei Du and Amitabh Varshney. Social Street View: Blending Immersive Street Views with Geo-Tagged Social
Media. [Best Paper Award] In Proceedings of the 21st International Conference on Web3D Technology, pp. 77–85,
2016.
5. Ruofei Du, David Li, and Amitabh Varshney. Geollery: Designing an Interactive Mirrored World with Geo-tagged
Social Media. Under Review, ACM CHI Conference on Human Factors in Computing Systems, 2018.
6. Xiaoxu Meng, Ruofei Du, Matthias Zwicker, and Amitabh Varshney. Kernel Foveated Rendering. Proceedings of the
ACM on Computer Graphics and Interactive Techniques, 1(5), pp. 1–20, 2018.
7. Changqing Zou, Qian Yu, Ruofei Du, Haoran Mo, Yi-Zhe Song, Tao Xiang, Chengying Gao, Baoquan Chen, and Hao
Zhang. SketchyScene: Richly-Annotated Scene Sketches. In European Conference of Computer Vision (ECCV), pp.
421–436, 2018.
274. Peer-reviewed Publications
8. Ruofei Du, Sujal Bista, and Amitabh Varshney. Video Fields: Fusing Multiple Surveillance Videos into a Dynamic Virtual Environment. Proceedings of the 21st International Conference on Web3D Technology, pp. 165–172, 2016.
9. Ruofei Du and Liang He. VRSurus: Enhancing Interactivity and Tangibility of Puppets in Virtual Reality. Proceedings of the 34th Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI), pp. 2454–2461, 2016.
10. Ruofei Du, Kent Wills, Max Potasznik, and Jon Froehlich. AtmoSPHERE: Representing Space and Movement Using Sand Traces in an Interactive Zen Garden. Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, pp. 1627–1632, 2016.
11. Changqing Zou, Haoran Mo, Ruofei Du, Xing Wu, Chengying Gao, and Hongbo Fu. LUCCS: Language-based User-customized Colorization of Scene Sketches. arXiv preprint, arXiv:1808.10544, Under Review, ACM Transactions on Graphics, 2018.
12. Lee Stearns, Ruofei Du, Uran Oh, Catherine Jou, Leah Findlater, David A. Ross, Rama Chellappa, and Jon E. Froehlich. Evaluating Haptic and Auditory Directional Guidance to Assist Blind People in Reading Printed Text Using Finger-Mounted Cameras. ACM Transactions on Accessible Computing (TACCESS), 9(1):1, 2014.
13. Lee Stearns, Ruofei Du, Uran Oh, Yumeng Wang, Leah Findlater, Rama Chellappa, and Jon E. Froehlich. The Design and Preliminary Evaluation of a Finger-Mounted Camera and Feedback System to Enable Reading of Printed Text for the Blind. European Conference on Computer Vision (ECCV), Workshop on Assistive Computer Vision and Robotics, pp. 615–631, 2014.
14. Leah Findlater, Lee Stearns, Ruofei Du, Uran Oh, David Ross, Rama Chellappa, and Jon E. Froehlich. Supporting Everyday Activities for Persons with Visual Impairments Through Computer Vision-Augmented Touch. Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility (ASSETS 2015), pp. 383–384, 2015.
276. Acknowledgement
Collaborators, advisors, labmates,
and committee members
Ben Cutler
bcutler@microsoft.com
Spencer Fowers
sfowers@microsoft.com
Wayne Chang
wechang@microsoft.com
Sameh Khamis
sameh.khamis@gmail.com
Shahram Izadi
shahram.izadi.uk@gmail.com
Sujal Bista
Sujal@cs.umd.edu
Amitabh Varshney
varshney@cs.umd.edu
Eric Krokos
ekrokos@cs.umd.edu
Hugues Hoppe
hhoppe@gmail.com
Ming Chuang
mingchuang82@gmail.com
Matthias Zwicker
zwicker@cs.umd.edu
Furong Huang
furongh@cs.umd.edu
Committee Augmentarium / Family
MSR
Hsueh-Chien Cheng
hccheng@cs.umd.edu
Mentors / Collaborators
Sai Yuan
syuan@umd.edu
Xuetong Sun
xtsunl@cs.umd.edu
Xiaoxu Meng
xmeng525@umiacs.umd.edu
Jon Froehlich
jonf@cs.washington.edu
Leah Findlater
leahkf@uw.edu
Sida Li
sidali@umiacs.umd.edu
Eric Lee
ericlee@umiacs.umd.edu
Liang He
edigahe@gmail.com
Joseph F. JaJa
josephj@umd.edu
David Li
ericlee@umiacs.umd.edu
Gregory Kramida
algomorph@gmail.com
Jonathan Heagerty
ericlee@umiacs.umd.edu
Shuo Li
li3shuo1@gmail.com
Alexander Rowden
alrowden@terpmail.umd.edu
277. Thank you
Ruofei Du
ruofei@cs.umd.edu
Committee: Dr. Varshney, Dr. Zwicker, Dr. Huang, Dr. JaJa, and Dr. Chuang
278. Fusing Multimedia Data Into
Dynamic Virtual Environments
Ruofei Du
ruofei@cs.umd.edu
Committee: Dr. Varshney, Dr. Zwicker, Dr. Huang, Dr. JaJa, and Dr. Chuang
289. Thank you
With a Starry Night Stylization
Ruofei Du
ruofei@cs.umd.edu
Amitabh Varshney
varshney@cs.umd.edu
Wayne Chang
wechang@microsoft.com
Ming Chuang
mingchuang82@gmail.com
Hugues Hoppe
hhoppe@gmail.com
• www.montage4d.com
• www.duruofei.com
• shadertoy.com/user/starea
• github.com/ruofeidu
• I am graduating in December 2018.
295. Geollery: An Interactive Social Platform in Mixed Reality
Ruofei Du, David Li, and Amitabh Varshney
The Augmentarium| UMIACS | University of Maryland, College Park
{ruofei, dli7319, varshney} @ umd.edu
www.Geollery.com