Rai Sato's Portfolio

about me.


Rai Sato (佐藤 来) is a Ph.D. student at the Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology (KAIST), South Korea (Feb 2023 –).

My research aims to enhance the auditory experience in Extended Reality (XR) by developing a novel rendering technique that makes virtual sounds presented in XR perceptually indistinguishable from those in the physical world. This is achieved through a real-time, seamless acoustic space inference mechanism, enabling a more immersive and natural sound experience without any sense of incongruity.
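As a toy illustration of what such a rendering step involves (a minimal sketch, not ARAE itself; the function names, the exponential-decay impulse-response model, and the RT60 value below are illustrative assumptions), the following Python snippet convolves a dry, anechoic virtual sound with a room impulse response whose reverberation time stands in for acoustic parameters inferred from the listener's physical space:

```python
import numpy as np
from scipy.signal import fftconvolve

def synthetic_rir(rt60: float, fs: int = 48000) -> np.ndarray:
    """Toy room impulse response: exponentially decaying noise whose
    energy falls by 60 dB after rt60 seconds. A real AAR system would
    infer the acoustic parameters from the environment in real time."""
    n = int(rt60 * fs)
    t = np.arange(n) / fs
    decay = 10.0 ** (-3.0 * t / rt60)  # amplitude reaches -60 dB at t = rt60
    rng = np.random.default_rng(0)
    return rng.standard_normal(n) * decay

def render_virtual_sound(dry_source: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Convolve an anechoic virtual sound with the (estimated) room
    impulse response so it inherits the room's reverberation."""
    wet = fftconvolve(dry_source, rir, mode="full")
    return wet / np.max(np.abs(wet))  # normalize to avoid clipping

fs = 48000
dry = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s, 440 Hz tone
wet = render_virtual_sound(dry, synthetic_rir(rt60=0.6, fs=fs))
```

The perceptual goal is that the convolved output, played back in the real room, is indistinguishable from a physical source; achieving this seamlessly requires the impulse response (or its parameters) to be inferred and updated in real time rather than fixed in advance, as in this sketch.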

XR represents a blend of real and virtual environments, facilitating human-computer interaction via cutting-edge technology and wearable devices. As the Metaverse emerges as a novel avenue for social connection, it holds the potential to impact many facets of our lives, extending beyond entertainment and digital arts. To tap into this potential, it is crucial to grasp the cognitive processes governing how we select and interpret multisensory information in the real world; this understanding can then guide the creation of immersive content for Metaverse users.

Yet current XR technologies and studies predominantly emphasize visual experience, sidelining the holistic multisensory interactions, notably auditory ones, that shape our perception of reality. Crafting realistic audio and accompanying tactile sensations is pivotal for authentic virtual spatial experiences. When users interact with dynamic soundscapes that adapt to environmental cues, their sense of “being there” is heightened. Such fluid, integrated interactivity can amplify the authenticity of XR encounters and foster a communal experience within the Metaverse.

Specifically, I am currently focusing on the following research themes:

  • Real-time spatial sound rendering system for Auditory Augmented Reality (AAR): Augmented reality Room Acoustic Estimator (ARAE)
  • Immersive audio reproduction and perception
  • Aural heritage preservation using 6DoF audio representation