ImmersAV is an open-source toolkit for immersive audiovisual composition. It is built around a focused compositional approach combining generative audio, raymarched visuals and interactive machine learning. Examples of work created with the toolkit can be found in the GitHub repository.
Aims:
- Provide well-defined, independent areas for generating audio and visual material.
- Provide a class that can be used to generate data for, and send and receive data from:
  - machine learning algorithms
  - VR hardware sensors
  - audio and visual processing engines
- Allow direct rendering on a VR headset.
Dependencies:
- OpenVR
- Csound 6
- OpenGL 4
- glm
- GLFW 3
- GLEW
- CMake 3
- RapidLib
- libsndfile
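As a rough sketch of how a CMake build might locate some of these dependencies (package, module and target names are assumptions that vary by platform and install method, so this is illustrative, not the toolkit's actual build script):

```cmake
cmake_minimum_required(VERSION 3.10)
project(immersav_example CXX)

# Packages with common CMake find support.
find_package(OpenGL REQUIRED)
find_package(GLEW REQUIRED)
find_package(glfw3 REQUIRED)
find_package(glm REQUIRED)

# Libraries without ubiquitous find modules are often located manually;
# library names here (openvr_api, csound64, sndfile) are typical but
# platform-dependent.
find_library(OPENVR_LIB openvr_api)
find_library(CSOUND_LIB csound64)
find_library(SNDFILE_LIB sndfile)

add_executable(immersav_demo src/main.cpp)
target_link_libraries(immersav_demo
    OpenGL::GL GLEW::GLEW glfw glm::glm
    ${OPENVR_LIB} ${CSOUND_LIB} ${SNDFILE_LIB})
```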
See the GitHub repository for detailed installation instructions, workflow documentation and walkthroughs.