Ventriloquy I

Performed live at Seeing Sound 5, Bath Spa University, 2018.

Sound and visuals move in a continuous manner, at times coming together in unison, at times moving independently. A continuum of fluctuating modal equilibrium creates tension and release as it swings in and out of balance. When audio and visuals temporally align, their movement is shared in a form of cross-modal ventriloquism. This binding of audio and visual elements takes place in the audience's mind: an audiovisual soliloquy experienced across the senses.

Ventriloquy is a generative audiovisual composition exploring the perceptual binding and separation of abstract audio and visual elements. The grinding, sometimes coarse audio is combined with continuously undulating, textured 3D primitive shapes. The system used to perform the piece was built with interactive machine learning, which allows many audio and visual parameters to be mapped quickly and intuitively to the performer's input. The audio and visual elements are generated simultaneously, bound together by the machine learning algorithm. The performer trains the model according to their own perception and artistic intuition, choosing examples that exhibit strong cross-modal correspondences. The trained model then outputs continuous audio and visual parameters in real time, driven by the performer's three-dimensional movement.
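
As a rough illustration of this kind of interactive machine learning mapping, here is a minimal sketch assuming a regression-based workflow (in the spirit of tools such as Wekinator): the performer records example pairs of 3D position and combined audio/visual parameters, trains a regressor on those examples, and then queries it continuously during performance. The parameter count, example values, and function names are illustrative assumptions, not the actual system used for the piece.

```python
# Minimal sketch of a regression-based interactive ML mapping (assumed workflow,
# not the actual Ventriloquy implementation).
import numpy as np
from sklearn.neural_network import MLPRegressor

NUM_PARAMS = 12  # assumed number of combined audio + visual parameters

# Example pairs chosen by the performer: each 3D position is bound to a full
# set of audio/visual parameters that felt cross-modally coherent. In practice
# these would be snapshots captured during training, not random values.
positions = np.array([
    [0.1, 0.2, 0.9],
    [0.8, 0.5, 0.1],
    [0.4, 0.9, 0.6],
])
parameters = np.random.default_rng(0).uniform(size=(len(positions), NUM_PARAMS))

# Train a small multilayer perceptron to interpolate between the examples.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(positions, parameters)

def map_performer_input(xyz):
    """Return a continuous audio/visual parameter vector for a 3D position."""
    out = model.predict(np.asarray(xyz, dtype=float).reshape(1, -1))[0]
    return np.clip(out, 0.0, 1.0)  # keep parameters in a normalised range

# In performance this would run every frame, feeding a synthesis engine and
# a 3D renderer simultaneously so both remain bound to the same gesture.
print(map_performer_input([0.5, 0.5, 0.5]))
```

Because a single regressor produces the whole parameter vector, the audio and visual outputs always move together as the performer moves, which is one way the cross-modal binding described above can be realised.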
