Abstract: How we perceive the visual environment is more complicated than how computers create it. Realistic display and, more recently, generation of visual content have achieved significant success. However, existing solutions suffer from high energy consumption and misalignment with creators' intentions. In this talk, I will discuss some of our research on developing computational models of human visual perception. Building on these models, we create energy-efficient, user-adaptive virtual environments and generative AI algorithms.
Speaker Biography: Qi Sun is an assistant professor at New York University. Before joining NYU, he was a research scientist at Adobe Research. He received his PhD from Stony Brook University. His research interests lie in VR/AR, generative AI, computer graphics, and visual perception. He is a recipient of the IEEE Virtual Reality Best Dissertation Award. With colleagues, his research has been recognized with several best paper and honorable mention awards at ACM SIGGRAPH, IEEE ISMAR, IEEE VR, and IEEE VIS. The research is supported by NSF, NASA, DARPA, and industry.
http://qisun.me/ | https://www.immersivecomputinglab.org/