Ancestral (r)Evocations
Erika Tan, 2024
Mac Mini
Python Data Input
Pure Data patch
Displayed at Tate Modern as part of the Museum X Machine X Me conference
Ancestral (r)Evocations GitHub Repository
This work focuses on data sonification through a ‘DIY’ diagnostic tool consisting of fragmented instruments and mechanised parts, and on the live feedback of machine-learned museum soundscapes. Working closely with artist Erika Tan, I constructed an archival sonification schema. The commissioned work developed as a practice-led research project supported by the Decolonising Arts Institute’s Transforming Collections Artist Research Residencies.
An act of interrogation, the work is a diagnostic probing into the depths of museal collections, their processes and their current states of ‘health and well-being’. Ancestral (r)Evocations gathers and scrapes collections data referencing ‘Southeast Asia’ from British institutions (Tate and the Wellcome Collection), bringing together computational processes and human–computer collaboration in which data, digitised and physical materials, speculation and generative processes create a series of loosely subjective and firmly indexical sound and image events.
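As a minimal sketch of the gathering step, here is one way such records could be harvested in Python, assuming the Wellcome Collection’s public catalogue works endpoint (the query term, page count and record handling are illustrative; the project’s actual harvesting, including the Tate data, is not reproduced here):

    import requests

    # Wellcome Collection catalogue API works endpoint (public).
    # Pagination is simplified; Tate harvesting is omitted from this sketch.
    API = "https://api.wellcomecollection.org/catalogue/v2/works"

    def gather(query: str = "Southeast Asia", pages: int = 3) -> list[dict]:
        """Collect catalogue records referencing the query term."""
        records = []
        for page in range(1, pages + 1):
            resp = requests.get(API, params={"query": query, "page": page})
            resp.raise_for_status()
            records.extend(resp.json().get("results", []))
        return records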
I was commissioned to develop the ‘semantic sound’ layer: collections data is translated into numerical vectors and ‘labelled’ (a machine learning process) according to the presence (or absence) of racialised collections data statements, separating the data into binaries which feed new systems of sonification and machine learning. Sounds collected in the Tate Modern repository were used to train a Realtime Audio Variational autoEncoder (RAVE), a neural network model that learns to re-synthesise its training corpus, artificially, in real time.
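A minimal sketch of that binary split and vectorisation, assuming a simple record structure (the field name "data_statement" and the toy vector features are hypothetical; the residency’s actual labelling pipeline is richer than this):

    def label_record(record: dict) -> int:
        """Binary label: 1 if the record carries a racialised collections-data
        statement, 0 if it lacks one. 'data_statement' is a hypothetical field."""
        return 1 if record.get("data_statement") else 0

    def vectorise(record: dict) -> list[float]:
        """Translate a record into a numerical vector for sonification:
        the binary label plus toy text statistics (illustrative only)."""
        text = str(record.get("description", ""))
        return [float(label_record(record)), float(len(text)), float(len(text.split()))]

    # The two binaries then feed separate sonification / ML streams:
    # corpus_with = [vectorise(r) for r in records if label_record(r) == 1]
    # corpus_without = [vectorise(r) for r in records if label_record(r) == 0]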
The model learns a compressed, low-dimensional representation of the high-dimensional audio input. This compressed “manifold” is navigated through its “latent space”: movement within this space modifies the audio output, corresponding to different learned representations of the archival training data. Semantically meaningful movements through the latent space can be automated using the translated numerical vectors, creating a generative soundscape that is constantly changing yet supports the conceptual framework of the artwork. These ‘live’ components feed back machine-learned, generated sounds of ‘Tate Modern’ alongside a more musical track of reconfigured recordings of the instruments.
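As a sketch of how such automated latent movement can work, assuming a RAVE model exported to TorchScript (the standard export format of the acids-ircam/RAVE tooling; the filename, control values and input buffer are hypothetical):

    import torch

    # Load an exported RAVE model (TorchScript); filename is hypothetical.
    model = torch.jit.load("rave_tate_soundscape.ts").eval()

    # A data-derived control vector, one offset per latent dimension,
    # produced upstream from the labelled collections data (values illustrative).
    control = torch.tensor([0.8, -0.3, 0.0, 0.5])

    with torch.no_grad():
        # Placeholder input buffer (batch, channels, samples); in the work
        # this would be live or archival audio.
        audio_in = torch.randn(1, 1, 2**16)
        z = model.encode(audio_in)          # latent shape: (batch, dims, frames)

        # 'Semantically meaningful movement': bias the first latent
        # dimensions by the data-derived offsets before decoding.
        n = min(control.shape[0], z.shape[1])
        z[:, :n, :] += control[:n].view(1, -1, 1)

        audio_out = model.decode(z)         # re-synthesised soundscape

In an installation setting the same traversal is typically driven in real time from within the patch environment (for Pure Data, plausibly via IRCAM’s nn~ external, which hosts exported RAVE models), with the Python layer supplying the data-derived control vectors.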
Here the potentials and pitfalls of AI and machine learning (ML) in relation to collections data and national collections connect the project to the broader Transforming Collections research project of which it is a part, which responds to research questions around bias within museum collections and interpretation. The potential of AI and ML to amplify or continue existing patterns was an underlying question to return to; and as the latest in a lineage of technologies that can be put to work to maintain and extend museal control over objects within collections, its use is itself under interrogation.