Title: A graph-based real-time spatial sound framework
Author: Cowan, Brent B. D.
Advisor: Kapralos, Bill
Date issued: 2020-01-01
Date accessioned: 2020-02-27
Date available: 2022-03-29
URI: https://hdl.handle.net/10155/1136
Language: en
Subjects: Spatial sound; Real-time; Acoustic modeling; Sound propagation; GPU
Type: Dissertation

Abstract:
Given the importance of sound, and our ability to localize sound sources in three dimensions in the real world, incorporating spatial sound in virtual environments can increase realism, strengthen the sense of “presence” or “immersion”, and improve task performance as well as navigation speed and accuracy. However, despite advances in the real-time simulation of spatial sound, current solutions are computationally expensive and often rely on specialized hardware; as a result, spatial sound cues are often overlooked in virtual environments and games, notwithstanding their importance. Here, a novel spatial sound rendering framework is introduced that approximates spatial sound for virtual environments while conforming to the physical laws of sound propagation. The framework employs graphs to reduce computation time, and each node in the graph is processed in parallel on the graphics processing unit (GPU), making the method suitable for real-time immersive virtual applications such as video games and virtual simulations. Results of a user study conducted with human participants (the intended users of any spatial sound method) to test the effectiveness of the framework introduced here indicate that it leads to improved player performance over traditional panning (binaural sound cues only) and ray-cast occlusion in a 3D first-person video game.
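The abstract does not detail the propagation algorithm itself; as a rough illustration of the graph idea, the sketch below propagates a per-node gain through a node graph using iterative relaxation, written serially in Python. The `Node` class, the `propagate` function, the edge attenuation factors, and the toy scene are all illustrative assumptions rather than the dissertation's actual data structures; in the framework described above, the per-node update would run as one GPU thread per node rather than a Python loop.

```python
# Illustrative sketch only: the abstract states that sound propagates through
# a graph with every node processed in parallel on the GPU, but does not give
# the data structures. Everything below (Node, propagate, attenuation values)
# is a hypothetical serial stand-in for that idea.

class Node:
    def __init__(self):
        # Incoming edges: (source_node_index, attenuation) pairs, where an
        # attenuation in [0, 1] models losses along that edge (distance,
        # partial occlusion, transmission through an opening, ...).
        self.incoming = []

def propagate(nodes, source, max_passes=None):
    """Iteratively relax per-node gains until the graph settles.

    Each pass, every node gathers its neighbours' gains from the previous
    pass and keeps the least-attenuated offer. Because each node's update is
    independent of the others within a pass, the inner loop is exactly the
    kind of work that maps onto one GPU thread per node.
    """
    n = len(nodes)
    gain = [0.0] * n
    gain[source] = 1.0  # unit energy at the sound source
    for _ in range(max_passes or n - 1):  # n - 1 passes always suffice
        new_gain = [
            max([gain[i]] + [gain[j] * att for j, att in nodes[i].incoming])
            for i in range(n)
        ]
        if new_gain == gain:  # converged early, stop relaxing
            break
        gain = new_gain
    return gain  # gain[listener] would drive the volume heard at the listener

# Toy scene: source at node 0, listener at node 2, with a clear path through
# a doorway node (1) and a heavily occluded direct edge from 0 to 2.
nodes = [Node(), Node(), Node()]
nodes[1].incoming = [(0, 0.8)]             # source -> doorway
nodes[2].incoming = [(0, 0.1), (1, 0.7)]   # occluded direct, and via doorway
print(propagate(nodes, source=0)[2])       # 0.56: the doorway path dominates
```

A priority-queue (Dijkstra-style) search would compute the same best-path gain serially; the relaxation form is chosen here only because its independent per-node updates mirror the parallel, per-node GPU processing the abstract describes.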