New Art City
Virtual Art Space

Catalog view is an alternative 2D representation of our 3D virtual art space. This page is friendly to assistive technologies and does not include the decorative elements used in the 3D gallery.

Space Title

SOUND OBSESSED: Sonic Innovation Archive

Within the World Titled SOUND OBSESSED: Sonic Innovation Archive
Credited to SOUND OBSESSED
Opening date November 17th, 2022
View 3D Gallery
Main image for SOUND OBSESSED: Sonic Innovation Archive

Statement:

Welcome to the SOUND OBSESSED sonic innovation archive, an evolving collection by musicians and sound artists working at the intersection of art, sound, science, and technology. The collection features stories and milestones from some of the latest innovative works in the sonic arts, and some of the artists will be releasing music in novel ways with new tools in the coming months. Thank you for supporting the intricate journey that comes with innovation work. We hope you enjoy these in-depth stories of how these artists are changing the way we create, perceive, and work with sound, while considering how the evolution of technology impacts us individually, collectively, and societally.

SOUND OBSESSED: sonically obsessed with space and time

Artworks in this space:


Artwork title

HI RISE HYPNABYTE

Artist name SONAMB
Artwork Description:

R&D for the piece began in lockdown through late-night internet explorations, learning web coding and visual machine learning whilst experiencing sleep deprivation and questioning mental health in relation to the machine. The techno-emotional states experienced, such as anxiety and inertia, were documented through the creation of audiovisual vignettes. The piece draws critical inspiration from Jonathan Crary's 24/7, a book positing sleep as the last defence against capitalism, and from MIT's Dream Lab research into sleep sound implantation.
This video is the lead artwork for the SLEEPSTATES album, accompanied by the track 'Hypnabyte', which is the global audio on the Control interface of the net-art piece.

Artwork title

Aura Machine Neural Network Architecture

Artist name SONAMB
Artwork Description:

This image represents a speculative neural network architecture in which the sounds we hear in our dreams are extracted to be used as sonic training data whilst we are sleeping. This model, trained on our private inner dream-sonic worlds, crosses boundaries of consciousness and control, and is able to output new sonic auras and realities previously uncharted.

Artwork title

Å//Ä//Ā

Artist name Yawä//Zē
Artwork Description:

This project is a container/archive & hybrid A/V immersive documentary about the past/present/future explorations & works by the human-AI duo Yawä//Zē and all the collaborators on their journey.
The first volume of data is expected to be released in December 2022 on UK label Furthur Electronix, containing 13 records released in chapters on vinyl, an introductory guide to the project, and all the methods/techniques related to it.

Every track/video/object is connected as an artchain,
Every song has an art object, with visual and digital ownership associated to it,
Each piece has a location and time code, as well as a QR code that leads you to an ecological NFT collection coming on 12/21/2022.

All audiovisual recordings, as well as objects, are created by Yawä and processed by the environment and the artificial intelligence software //Zē at different moments in time and with different creative techniques, as described in the book that accompanies the work.

The project is divided into three categories depending on how each element has been processed/made.

Å// Primitive works & intuitive learning: rituals, expeditions, automatic abstract paintings and jams made by Yawä.
Ä// Every artwork has nature as a creative force: patterns and natural frequencies, Colundi research, tapes processed by time in different locations.
Ā// An abstract approach to algorithmic systems & data visualization: all works made by the lofi AI software //Zē, processing the datasets and memories made by Yawä.

Artwork title

Simulation IV: Firm 1

Artist name Matthew D. Gantt
Artwork Description:

Simulation IV: FIRM is a sonic environment, virtual kinetic sculpture, and brief thru-composed etude exploring musical gesture in digital space. Created by bridging game engine level-building, found media objects and generative MIDI/OSC sequencing, this work attempts to draw these tools away from narrative design and instead foreground their material and associative natures.

Artwork title

Simulation IV: Firm 2

Artist name Matthew D. Gantt
Artwork Description:

Simulation IV: FIRM is a sonic environment, virtual kinetic sculpture, and brief thru-composed etude exploring musical gesture in digital space. Created by bridging game engine level-building, found media objects and generative MIDI/OSC sequencing, this work attempts to draw these tools away from narrative design and instead foreground their material and associative natures.

Artwork title

Action Sequencer

Artist name Participative Audio Lab
Artwork Description:

The Action Sequencer is a proto open media ecology that served as a working prototype for the development of the Participative Audio Lab.
In essence, an open media ecology refers to a media experience where the creative process is openly distributed; by leveraging the full accessibility potential of the internet, we can allow a space of interaction and enable a new distribution dynamic for the creative process, thereby creating a sense of open interconnection and interdependence within this cultural structure.

Unfolding from the Action Sequencer, the Participative Audio Lab aims to develop a fully fledged open media ecology where not only the creative process is openly distributed, but the revenue structures and distribution infrastructure are also open source, allowing the community to participate in the future of the media ecology itself.

-----

The Participative Audio Lab is a group that develops open-source tools allowing artists to create and distribute their own participative musical experiences without the need for coding skills. Our inauguration will take place at CTM Festival 2023 in the first week of February.

We strive for a future where artists have freer and wider possibilities of distribution, where they can not only connect with their audience through standard recording/reproduction formats, but also complement these models with distribution mediums that leave some parameters of their songs open for participation.

We believe that by finding new forms of control over the creative process, we can open a new layer of intimacy between artists and their audiences, as an alternative dynamic to the industrialised forms of music distribution and platform control.

The Participative Audio Lab was initiated thanks to a grant project titled »Prototyping Sonic Institutions« organised by Black Swan and CTM Festival.

Artwork title

MuchDesigner - Parametric Music Modules Built in TouchDesigner for Eurorack

Artist name The Glad Scientist
Artwork Description:

I've been researching new ways to visualize and interact with the creation of sonic waveforms in a parametric way. Diving deep into eurorack modular synths in 2022, I decided to create a system that allows creative coding to take a major role in the sound-creation process. Rather than making a one-off audiovisual instrument just for myself, I wanted to make a set of modules in TouchDesigner that could feasibly do what might cost $1000s in physical modules, but with new twists and fun, engaging interfaces.

Artwork title

UCHRONIA I: BLOOD MOON

Artist name ecolagbohrsac2021
Artwork Description:

A crypto-token unsealing a construct where Nick Drake can be observed performing a few tunes from his upcoming album Blood Moon and enjoying an evening at a South London park with friends.

Artwork title

Nada Telepathic

Artist name The Glad Scientist
Artwork Description:

Real-time brainwave musical improvisation with the Nada Telepathic custom module. Built by the artist, this tool allows for direct transfer of brainwaves (EEG) and 3D positional head-tracking to a modular synth (CV). 

Artwork title

SLEEPSTATES

Artist name SONAMB
Artwork Description:

Title track from the SLEEPSTATES album, featuring AI feminist voices from the electrical imaginary, sound sculpture, and broken radio transmissions. The video was created in collaboration with Izzy Bolt; the piece premiered at MUTEK Distant Arcades in 2020 and is now the lead full AV piece on the SLEEPSTATES.NET platform, beginning the SleepCycle of playable AV pieces depicting techno-emotional states between human and machine.


[[ MILESTONE 1 ]] Extensive GAN implementations for PyTorch
[[ MILESTONE 2 ]] Audio data with extracted features that can be fed to models
[[ MILESTONE 3 ]] Image processing to video translation and generative style transfer
[[ MILESTONE 4 ]] New techniques within Max/MSP environments to build bio-organic sounds, visuals and beats
[[ MILESTONE 5 ]] Use of audio manipulation with torchaudio
[[ MILESTONE 6 ]] Minimal prompt usage of DALL-E 2 for Breathe New Life
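
Milestones 2 and 5 concern feature extraction and audio manipulation with torchaudio. As a rough, hypothetical sketch only (Skulptor's actual pipeline is not published, and the file name is a placeholder), extracting log-mel features that could feed a generative model might look like this in Python:

```python
# Hypothetical sketch: mel-spectrogram feature extraction with torchaudio.
# "input.wav" is a placeholder file, not part of the artist's materials.
import torch
import torchaudio

waveform, sample_rate = torchaudio.load("input.wav")

# Collapse to mono so the feature shape is independent of channel count.
mono = waveform.mean(dim=0, keepdim=True)

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate,
    n_fft=1024,
    hop_length=256,
    n_mels=80,
)(mono)

# Log-compression is a common normalisation before feeding a model.
features = torch.log1p(mel)
print(features.shape)  # (1, 80, num_frames)
```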

Skulptor (Ana Roman) is an AI artist/creative technologist specializing in generative art, machine learning, and motion graphics. They utilize various datasets and MIDI/modular sequencing systems to produce visualizations and sounds. Skulptor was recently featured in the She Knows Tech Summit, Argentina's DanzFloor Music Tech Magazine, and DJ Magazine. The multi-instrumentalist is also a MIDI specialist and Ableton certified instructor. Skulptor is known for spatial, generative, and bio-organic sounds inspired by conditions of over-progress, isolationism, and coded bias.

Artwork title

It's Not that Fake Deep

Artist name Skulptor
Artwork Description:

Humanity is no longer a progressive randomization system made up of autonomous life choices. Data mines. Permissionless bodies. Learned behaviors. These components comprise the trinity of surveillance capitalism on humanity.

Artwork title

Breathe New Life

Artist name Skulptor
Artwork Description:

The feeling machines touch life-giving regeneration. 


# TECHNICAL JOURNEY #
AIVA and MUBI AI music generator sound design elements are layered under Skulptor's own productions. Some of the generative models and sounds were made with conditional GANs along with Pygan, Python, and Torch Audio. Built environments inside Max/MSP have been utilized for sound and visualizations.
= MOST CHALLENGING =
Programming.
{{ UNEXPECTED DISCOVERY }}
Freedom with programming and machine learning: challenging oneself to synthesize modularity and machine learning together.
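
For context, a conditional GAN conditions its generator on a class label so a single model can produce several categories of output. The sketch below is a generic, minimal PyTorch illustration of that idea, not Skulptor's actual model; the latent size, class count, and output length are invented:

```python
# Generic class-conditional GAN generator (illustrative only).
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=100, n_classes=4, out_features=1024):
        super().__init__()
        # Learn an embedding per class so labels can steer generation.
        self.label_embed = nn.Embedding(n_classes, latent_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim * 2, 512),
            nn.ReLU(),
            nn.Linear(512, out_features),
            nn.Tanh(),  # outputs in [-1, 1], e.g. a short audio frame
        )

    def forward(self, z, labels):
        # Condition the noise vector on the class label by concatenation.
        cond = torch.cat([z, self.label_embed(labels)], dim=1)
        return self.net(cond)

g = ConditionalGenerator()
z = torch.randn(8, 100)             # batch of noise vectors
labels = torch.randint(0, 4, (8,))  # batch of class labels
print(g(z, labels).shape)           # torch.Size([8, 1024])
```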

[[ MILESTONE 1 ]] Interactive distribution testing with a Pure Data web-compiler interface
[[ MILESTONE 2 ]] Web-based interactive music system utilizing the tone.js and p5.js libraries
[[ MILESTONE 3 ]] Mechanism for saving and querying participations, implemented with a remote server database

# TECHNICAL JOURNEY #
The technical challenge and creative premise of this project is to generate an open-source musical composition experience that is as accessible as possible: can we make widely accessible, remote musical experiences where participation is pre-designed into distribution?
= MOST CHALLENGING =
We wanted to make a graphical interface similar to the constellation feeling of John Cage's Concert for Piano and Orchestra (1958). It was hard to maintain this GUI structure while keeping a music-composition action file under 10 kb and a non-disturbing participation registration layer on top.
{{ UNEXPECTED DISCOVERY }}
There was a moment during development when a bug appeared and the GUI looked like a broken treble clef. It felt really ironic, considering the intention to make music a free-access creative experience. (https://twitter.com/p2p_lab/status/1537441778030104583)
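
Milestone 3 above mentions a mechanism for saving and querying participations against a remote server database. The Lab's real implementation is not public; the following minimal Python sketch (endpoint name, request fields, and SQLite storage are all assumptions) shows the general shape of such a service:

```python
# Hypothetical participation-saving endpoint (not the Lab's actual code).
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
DB = "participations.db"

def init_db():
    with sqlite3.connect(DB) as con:
        con.execute(
            "CREATE TABLE IF NOT EXISTS participations "
            "(id INTEGER PRIMARY KEY, piece TEXT, params TEXT, ts TEXT)"
        )

@app.route("/participate", methods=["POST"])
def participate():
    # A participation is assumed to be a JSON blob of open parameters.
    data = request.get_json()
    with sqlite3.connect(DB) as con:
        con.execute(
            "INSERT INTO participations (piece, params, ts) VALUES (?, ?, ?)",
            (data["piece"], str(data["params"]), data.get("ts", "")),
        )
    return jsonify({"status": "saved"})

if __name__ == "__main__":
    init_db()
    app.run()
```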

The Participative Audio Lab is a group that develops open-source tools allowing artists to create and distribute their own participative musical experiences without the need for coding skills. Its inauguration will take place at CTM Festival 2023 in the first week of February. The Participative Audio Lab was initiated thanks to a grant project titled »Prototyping Sonic Institutions« organised by Black Swan and CTM Festival.

Atay Ilgun is a multidisciplinary artist [web3 / AI / quantum music / installation / phygital fashion / live A/V music shows] and curator from London, currently working on a series of multimedia works as the ENGLAND'S COUNCIL OF LEGISLATION AND GOVERNING BODY OF HYPER REAL SIMULATIONS AND CONSTRUCTS [ecolagbohrsac2021], a memeification of the emergence of doom, mythic themes that populate the archetypal strata of the modern technological psyche, hidden power structures, world economic order, art markets, hypermedia hell, and late-capitalism euphorialand. He is the creator of one of the first AI artworks released as an NFT and of the first utility-token/token-gated metaverse performance.

[[ MILESTONE 1 ]] 2018 - 2019: Development of M Ξ T A P L Ξ X, which lays the R&D and conceptual ground for this project too.
[[ MILESTONE 2 ]] 2020 - 2021: Wider acknowledgement through appearances across academia and online lists such as The Definitive Timeline of Early NFTs on Ethereum.
[[ MILESTONE 3 ]] 2021: Started working on EUPHORIA, a set of multimedia works.
[[ MILESTONE 4 ]] 2021 - 2022: Experiments with Jukebox / GPT-3 and so forth.
[[ MILESTONE 5 ]] Mid-2022: Invitation to the DALL·E 2 Artist Programme.

# TECHNICAL JOURNEY #
Running on simple JavaScript code and following the mechanical core of the previous project A.I.F.X, this work marks the beginning of a series of token-gated virtual constructs. Even though to most the pieces can be regarded as artworks in their own right, their core function is to grant access to a WebXR experience. Depending on the availability and ID of the token, users are able to access a simulation in which, in a nonexistent reality, Nick Drake performs a number of tunes from his 2023 album Blood Moon, with auto-tuned vocals, metallic bird sounds and so forth, all created using OpenAI's Jukebox [a neural net that generates music, including rudimentary singing] and adorned by ecolagbohrsac2021.
= MOST CHALLENGING =
Build sizes and their optimisation for web/browser.
{{ UNEXPECTED DISCOVERY }}
Not exactly sure if this would count as a 'discovery', but since the training of Jukebox involved some talking as well, when the songs dissolve into chat but maintain the characteristics of someone known or trained with [Kanye West, for example, though it obviously wasn't! :-)], the spookiness of those people speaking gibberish, but still in a very emotional tone, was very moving. On the construct there is a scene based on this dialogue, and I find that the most moving and rather unexpected too.
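
Token gating of this kind generally reduces to an on-chain ownership check before the WebXR build is served. As a hedged sketch only (the project's actual contract, RPC endpoint, and token standard are not stated here; ERC-721 and all addresses below are placeholder assumptions), the check could look like this with web3.py:

```python
# Hypothetical ERC-721 token-gating check; addresses are placeholders.
from web3 import Web3

RPC_URL = "https://mainnet.infura.io/v3/YOUR_KEY"        # placeholder
CONTRACT = "0x0000000000000000000000000000000000000000"  # placeholder
ERC721_ABI = [{
    "name": "ownerOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "address"}],
}]

def can_enter(wallet: str, token_id: int) -> bool:
    """Grant access to the construct only to the token's current owner."""
    w3 = Web3(Web3.HTTPProvider(RPC_URL))
    nft = w3.eth.contract(
        address=Web3.to_checksum_address(CONTRACT), abi=ERC721_ABI
    )
    owner = nft.functions.ownerOf(token_id).call()
    return owner.lower() == wallet.lower()
```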

The Glad Scientist is a 1/2 alien 1/2 robot Puerto Rican media artist, musician, toolmaker, and community organizer. As seen at major international festivals and venues, their work reminds us that technology’s purpose is to bring us closer to our souls and release us from linear patterns of thought and life into our quantum selves.

[[ MILESTONE 1 ]] CV Interfacing in TouchDesigner
[[ MILESTONE 2 ]] Module 1 UX Design
[[ MILESTONE 3 ]] Module 1 Instrument Coding
[[ MILESTONE 4 ]] Module 1 UI Design
[[ MILESTONE 5 ]] Documentation (WIP)
[[ MILESTONE 6 ]] Brainwave Interfacing in TouchDesigner
[[ MILESTONE 7 ]] Module 2 UX Design
[[ MILESTONE 8 ]] Module 2 Instrument Coding
[[ MILESTONE 9 ]] Module 2 UI Design (WIP)
[[ MILESTONE 10 ]] Documentation (WIP)
[[ MILESTONE 11 ]] VR Interfacing in TouchDesigner
[[ MILESTONE 12 ]] Module 3 UX Design (WIP)
[[ MILESTONE 13 ]] Module 3 Instrument Coding
[[ MILESTONE 14 ]] Module 3 UI Design
[[ MILESTONE 15 ]] Documentation (WIP)
[[ MILESTONE 16 ]] Self-Replicating AI in TouchDesigner (WIP)
[[ MILESTONE 17 ]] Train Original Models (WIP)
[[ MILESTONE 18 ]] Setup Conditional Autoprompter in TouchDesigner (WIP)
[[ MILESTONE 19 ]] Module 4 UX Design (WIP)
[[ MILESTONE 20 ]] Module 4 Instrument Coding (WIP)
[[ MILESTONE 21 ]] Module 4 UI Design (WIP)
[[ MILESTONE 22 ]] Documentation (WIP)

# TECHNICAL JOURNEY #
MODULE 1: Pixel Affection
The first module is called Pixel Affection and is based on the concept of lucid dreaming. It splits the brain into two sides: one that is playful and flexible, and one that has structure and firm mathematics. These "sides of the brain" take 3D shapes and extract their points into a time continuum, creating 6 waveforms that are sent via CV (control voltage) to any compatible modular system. It was important to build with a completely modular mindset, expecting that the outgoing waveforms be used in myriad ways and combinations. To aid in the visualization of this brain, everything in the module is color-coded blue and pink.

The blue side (structured) traverses the shape's points in an ordered number of astral points, with 8 changeable modes for reordering those points (by x/y/z, random, reverse, closest neighbor, etc.). Esoterically, this is called the bouncy mode. Now that we are bouncing, much of the other control set relates to controlling and shaping this bounce. The available knobs are jitter, drowsiness, snoooze, REM, neurons, and Skiing. Each has its own way of smoothing or sharpening the waveforms, wavefolding, introducing purposeful turbulence, etc. The Despierta button awakens this side of the brain, which starts a new dream cycle (waveform traversal). There are two additional modes called Foreverever and Deeep, which loop the waveforms indefinitely and/or extend them over a greater time field. These are particularly useful when controlling long movements in an instrument change, or for ambient compositions.

The pink side (playful) focuses less on using the shape as scaffolding for the output waveforms and more on the shape as the control surface itself. Here we have the option to switch between 5 different shapes (sphere, grid, cube, fractal, and twist). Once we have the shape, we can begin manipulating it in a variety of ways. The knobs control the segments (turning it into a special unique snowflake), a noise force (an imprinted impact of the passing wind), twist (its reaction to surrounding stimuli), REM, and X and Y rotation speeds. This side of the brain generally outputs livelier waves, and is a more "pure" representation of the present-moment traversal of the XYZ coordinates between the vertices of the shape as it moves, twists, rotates, and transforms.
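
The core mechanism described above (reading a 3D shape's vertices out over time as control waveforms, with modes that reorder the traversal) can be illustrated outside TouchDesigner. The numpy sketch below is a simplified stand-in, not the module's code; the point cloud and the single nearest-neighbour mode are assumptions:

```python
# Simplified stand-in for the shape-to-waveform idea in Pixel Affection.
import numpy as np

def nearest_neighbour_order(points):
    """Greedy nearest-neighbour traversal of a point cloud."""
    remaining = list(range(len(points)))
    order = [remaining.pop(0)]
    while remaining:
        last = points[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(points[i] - last))
        remaining.remove(nxt)
        order.append(nxt)
    return points[order]

# A sphere-like point cloud standing in for the module's selected shape.
rng = np.random.default_rng(0)
pts = rng.normal(size=(256, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

path = nearest_neighbour_order(pts)
x_wave, y_wave, z_wave = path.T  # three CV-style control waveforms
```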

MODULE 2: Nada Telepathic
The second module, named Nada Telepathic, is a brainwave (EEG) controlled module utilizing pure brain control of modular synth systems. Similarly to Pixel Affection, all outputs are sent as CV waveforms to increase the creative capacity of the musician's brain (literally :P). The outputs are split into the dominant waveforms that the brain generates in dB over time (alpha, beta, gamma, delta, theta), plus waveforms representing head movements to the left, right, up and down. This data is processed in a sensitive manner to maintain accurate brain data while also being cognizant of the real possibility of spikes in the data due to contact-point disconnections while wearing EEG devices. The EEG device it is designed with is the Muse 2 headset, but extensions may be developed if desired. While programming is complete, this module's interface is still a work in progress, but many interesting ideas have come to mind and are being explored. :)

MODULE 3: VelociRaptor Card Captor
The third module, VelociRaptor Card Captor (VRCC for short), is a direct interface to my preferred VR modular system (an open-source branch of SoundStage VR, tailored for performance). A strange Russian doll of interfaces, echoing back to what feels ancient and is often deemed extinct (VR), this is a module with interfaces inside virtual reality and TouchDesigner, with passthrough capability to the real-world hardware modular system. This module is in very early prototypes, and while the functionality and controls are there, everything could change in terms of interface, and SoundStage could end up being replaced entirely by TouchDesigner or custom Unreal Engine software any day now.

MODULE 4: Don't Drink the Gray Goos(e)
The fourth module, Don't Drink the Gray Goos(e) (DDGGe), is designed to be a self-learning drum sampler and sequencer built on top of Stable Diffusion. This one is the furthest from production; the aim is to create a streamlined process to first train a model, and then to seek out newly created samples through dynamic prompt generation based on the behavior and qualities of the existing drum sequences and timbres. While this is the most purely digital of the modules, it will still have unique CV outputs designed for controlling FX modules that come after the sample playback in the signal chain.
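
The EEG side of Nada Telepathic, splitting brain activity into alpha/beta/gamma/delta/theta control signals, can be approximated with standard band-power analysis. A hedged Python sketch (band edges are textbook values; the module's actual Muse 2 processing chain is not public):

```python
# Hypothetical EEG band-power extraction; not the module's actual chain.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg: np.ndarray, fs: float = 256.0) -> dict:
    """Integrate the power spectral density over each standard EEG band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = float(np.trapz(psd[mask], freqs[mask]))
    return out

# Two seconds of synthetic noise standing in for one Muse 2 channel.
signal = np.random.default_rng(1).normal(size=512)
print(band_powers(signal))
```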

= MOST CHALLENGING =
I think something I didn't anticipate being so challenging was organizing my code to make sense to me the next day. It was very easy to get lost in how everything was wired and in the esoteric themes I am working with.
{{ UNEXPECTED DISCOVERY }}
Honestly, unfolding 3D space into 4D space (time) was the biggest discovery and mind-blowing moment for me. Another great surprise was that I can use emojis as text for all the buttons, which makes me very happy inside and adds a visual/iconic vibe to everything to increase creativity.

Matthew D. Gantt is an artist, composer, and educator based between NYC and Troy, NY. His practice focuses on sound in virtual spaces, generative systems facilitated by idiosyncratic technology, and digital production presets as sonic readymades. He worked as a studio assistant to electronics pioneer Morton Subotnick from 2016 - 2018, releases music with Orange Milk and Oxtail Recordings, and teaches experimental music and media in both institutional and grassroots contexts. Gantt’s work has been featured in The Wire magazine, Pop Matters, Exclaim!, Tiny Mix Tapes, Bandcamp New and Notable and similar.

Research for this piece has focused on bridging Ableton Live and Max/MSP with the Unreal Engine via OSC and MIDI, allowing sequences or gestures in the DAW to 'modulate' the virtual environment, and events or information from the environment to recursively affect the DAW. This type of bi-directional or 'feedback' communication has led to an ongoing interest in cybernetics and cybernetic systems in the context of music composition as well.
[[ MILESTONE 1 ]] Bi-directional OSC bridge from UE4 to Ableton/Max/MSP
[[ MILESTONE 2 ]] MIDI sequencing of virtual objects/camera changes
[[ MILESTONE 3 ]] Modulating the virtual environment via Ableton API/automation
[[ MILESTONE 4 ]] Virtual physics actions/updates in UE4 driving MIDI information in the DAW

# TECHNICAL JOURNEY #
This work is part of a larger series exploring new compositional practices for working with virtual space. In Simulation IV, the Unreal Engine is connected to Ableton Live and Max/MSP via a bi-directional OSC bridge. Information from Ableton (MIDI sequencing, generative CC/API modulation, automation lanes, etc.) is sent to a virtual environment created in Unreal to 'spawn' 3D objects, change camera angles and apertures, adjust virtual physics parameters, and similar. A separate OSC channel goes from Unreal to Ableton, reporting object status, velocity, collisions and similar, which is then mapped into sonic timbre, stereo-field location, note density, etc. This approach offers affordances for linking to spatial sound toolsets like IRCAM's SPAT, the ICST plugins, and similar. Due to the non-linear and generative nature of these compositions, they are captured in real time with a Blackmagic hardware card, as opposed to traditional offline rendering.
= MOST CHALLENGING =
Beyond the initial stumbling blocks of building a somewhat novel inter-software routing, conceptualizing and creating a workflow that felt both robust and streamlined enough for rapid ideation took a long time to develop and find a rhythm with.
{{ UNEXPECTED DISCOVERY }}
Virtual physics as a compositional parameter! Using MIDI from Ableton/Max to launch objects into space, and using their resulting motion to map back into Ableton, became both creatively and conceptually exciting, and expanded my thinking from a simple 'I want to use a DAW to move objects around in 3D space' into 'how can this approach eventually lead to an inter-related system of emergent sound-and-space gestures' in the spirit of cybernetics and feedback.
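
The OSC half of this bridge can be prototyped with the python-osc library. The sketch below is illustrative only: the addresses, ports, and collision-to-note mapping are invented, and the artist's actual routing runs between UE4, Ableton Live, and Max/MSP rather than this toy handler:

```python
# Illustrative one-direction leg of a bi-directional OSC bridge.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

to_daw = SimpleUDPClient("127.0.0.1", 9001)  # e.g. Max/MSP or Ableton

def on_collision(address, *args):
    # Map a game-engine collision event (object id, velocity) to note data.
    obj_id, velocity = args
    to_daw.send_message("/note", [int(obj_id) % 128, min(int(velocity), 127)])

dispatcher = Dispatcher()
dispatcher.map("/unreal/collision", on_collision)

# Listen for events coming out of the game engine.
BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
```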

Yawä//Zē (b. Granada, 1988) is an experimental artist and electronic music composer based in Ibiza. His main field of interest is an abstract approach to algorithmic systems, data visualization, and DIY electronics. He explores Colundi frequencies and psychoacoustics, as well as ideas of time, perception, and ecological sound environments. Currently working mostly on his own lofi interpretation of AI software, called //Zē, Yawä is decrypting part of his recordings & experiments as a pre-release of the project (Å // Ä // Ā), in which he explores different areas and processes within art and experimental audiovisual technologies. The first collection of data will be released on UK label Furthur Electronix in December 2022.

[[ MILESTONE 1 ]] I spent most of my youth in Madrid as a DJ and promoter before I moved to Ibiza in 2010, where I started very actively hosting public events with tech talks, synthesizer meetings, art shows and vinyl fairs together with my friend Teo Molina, Director of Mixmag Spain and F&B magazines.
[[ MILESTONE 2 ]] From 2017 I started collaborating with different artists on the island such as Jose Padilla, Chris Coco or, more recently, my friend Sergio Garcia aka Uf0. Some of these tracks are coming soon on labels like Furthur Electronix or Altered Sense.
[[ MILESTONE 3 ]] Nowadays I'm mostly working on a hardware interface to control and play with //Zē; I will share it at some point in the project after some more years of training.

# TECHNICAL JOURNEY #
My workflow has been evolving since I started recording my first tapes as a kid. Most of my early years were focused on trying to copy all my musical influences; my first experiments are closer to industrial noise, ambient, techno, electro and braindance, so my first tracks are basically very simple jams taping synths and samplers.

Around 2015 I started experimenting with Colundi frequencies, a kind of cult/philosophy shared by Aleksi Perala & Grant Wilson Claridge of Rephlex records. This discovery drove me to learn more about microtunings, and from that time I started building my own tunings and a new microtonal setup, focusing on the best machines available to me and selling many of my early synthesizers.

In recent years I have been working on the idea of my own AI software for live-coding my memories and helping me with my explorations through data visualization, machine learning and neural network techniques, meant to be an open-source, ecological, decentralized lofi project with many collaborators in the making...

Currently, my studio workflow is basically to create my own waveforms with my own custom tunings, play/record some jams with my machines, and tape what I like. I process the tapes by burying/degenerating them in different special locations around the island, and I retrieve them after some months/years. After cleaning and digitizing all this data, I let the AI make her version with the datasets I made from these early recordings.

The project has been made with a variety of analog and digital gear, but currently these are my main machines:
1950s oscillators for creating/sampling sine waves
FM synths such as the Yamaha DX11 and SY77, and the ASR-10 for sampling and effects
other microtonal synths such as the Monologue and Minilogue XD
drum machines
several 4-tracks and reel-to-reels
piano micros
a microtonal box to retune my machines
Python 3 scripts
SuperCollider free code
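
The first step of that workflow, rendering waveforms at custom (non-12-TET) tunings, is easy to sketch in Python. The frequencies below are placeholders, not actual Colundi values, and the rendering is deliberately minimal:

```python
# Minimal sketch: render sine tones at a custom tuning and write a WAV.
import wave
import numpy as np

SR = 44100
custom_tuning_hz = [432.0, 448.8, 466.2]  # placeholder frequencies

def render_sine(freq: float, seconds: float = 2.0) -> np.ndarray:
    t = np.arange(int(SR * seconds)) / SR
    return 0.5 * np.sin(2 * np.pi * freq * t)

audio = np.concatenate([render_sine(f) for f in custom_tuning_hz])
pcm = (audio * 32767).astype(np.int16)

with wave.open("custom_tuning.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)   # 16-bit samples
    wf.setframerate(SR)
    wf.writeframes(pcm.tobytes())
```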


Vicky Clarke, aka SONAMB, is a sound artist working with DIY electronics, sound sculpture and human-machine systems. She explores our relationship to technology through sonic materiality, considering human agency in seemingly autonomous systems, post-industrialisation, and the techno-emotional states we experience through these interactions. Her work takes the form of self-built instruments and sculpture, live performance, research and installation. Winner of the Oram Award 2020, for the past two years she has been exploring musique concrète and machine learning through her AURAMACHINE research projects with the British Council, the University of Manchester and PRiSM, RNCM. Her debut album, SLEEPSTATES, under the new moniker SONAMB, is out now.

[[ MILESTONE 1 ]] R&D
The concept for SLEEPSTATES.NET began in lockdown, with R&D taking place through late-night internet explorations learning web coding and visual machine learning. At that time I was experiencing sleep deprivation and questioning my mental health in relation to the machine and my increased time spent online, staring at a screen. I began documenting the techno-emotional states I experienced, such as anxiety and inertia, through the creation of audiovisual vignettes. The piece is an antidote to platform capitalism and draws upon critical inspiration from 24/7 by Jonathan Crary, a book positing sleep as the last defence against capitalism, and readings into MIT's Dream Lab research into sleep sound implantation. I began thinking about a platform users could go to that was non-extractive: an interface that was non-capitalist and aimed at supporting sleep, citizen autonomy and freedom, rather than one that aims to extract user data and monetise our emotions. The space is designed as a sanctuary to check in with your machine addiction levels and 'transmit your frequency' by breathing into the machine, contributing to a collective white-noise sleep aid that protects our dream spaces and sleeping states.

[[ MILESTONE 2 ]] CREATIVE - DIY AUDIO SYSTEMS
As well as online networks, the audio was created over three years across physical locations, featuring transmission recordings between Manchester, Berlin and St Petersburg: explorations of signals and noise textures using DIY electronics and interfaces built in the three locations, comprising broken radio transmissions, algorithmic noise experiments and internet-materiality explorations. These methodologies and approaches give the piece a distinct aesthetic and tell a story of life before and during lockdown, a reflection on our waking and networked selves, and a connection to time, place and the electrical imaginary.

[[ MILESTONE 3 ]] TECH-AUDIO: BUILDING MACHINE LEARNING DATASETS
The album features a narrator, a speculative feminist AI who guides you through the states. The voice is trained using GPT-2 on a dataset of alchemical and machine learning texts, self-help YouTube videos for sleep, and key techno-feminist texts such as the Cyborg Manifesto. The texts were then recorded via free text-to-speech sites online. This, along with the 'moon generator', the citizen-collected dataset of internet-scraped pink moons, was my first exploration into building datasets for training machine learning models. This was a formative process for my practice, which has since developed into a specialism in sonic AI, specifically neural synthesis, having undertaken a two-year residency with NOVARS, centre for innovation in sound at the University of Manchester, exploring musique concrète and machine learning; see www.auramachine.blog for the research site.
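
Milestone 3 describes generating the narrator's text with GPT-2 trained on a curated corpus. The artist's fine-tuned model is not public; as a hedged sketch of the same mechanism using the stock GPT-2 model from Hugging Face transformers (the prompt is invented):

```python
# Hedged sketch: text generation with stock GPT-2, not the artist's model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Close your eyes. The machine breathes with you, and"
lines = generator(prompt, max_length=60, num_return_sequences=3)

for line in lines:
    print(line["generated_text"])
```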

# TECHNICAL JOURNEY #
The SLEEPSTATES.NET platform is designed to be experienced individually via the user's laptop, emphasising their personal interaction and emotional connection with the machine. It is best viewed on 13" and 15" browser screen sizes and works best in Google Chrome. The platform interface comprises:
SLEEPCYCLE: 4 x audiovisual pieces
SLUMBER STATUS: check your 'machine addiction level', 'current mood' and 'hypnogram'
BUTTONS: 3 x Windows 98 popups inc. 'ML moon generator' and 'Transmit Frequency'
ABOUT: 1 x about the platform and project, with a link to the Bandcamp album
It is important to state that this platform is not extractivist in any way: the user is not tracked and no data is stored. The only permission required on the site is microphone access for the 'breathe into the machine' popup. The piece was created using various creative coding and digital tools, including Runway, p5.js, Blender and TouchDesigner, plus original web design code using Vue and Nuxt. The piece was begun via a remote residency for Manchester International Festival during lockdown, followed by creative web-design mentoring with Studio Treble supported by Arts Council England.

Artwork title

'Sittin On A' 'Wire' (Teaser)

Artist name Portrait XO
Artwork title

Co-Creating with AI (lyrics and melodies)

Artist name Portrait XO

Portrait XO (she/they) is an independent researcher and artist who creates musical and visual works with traditional and non-traditional methods. In collaboration with Dadabots, they won the 'Best Experiment' award at the VUT Indie Awards 2021 and the Eurovision AI Song Contest jury vote for 'most creative use of AI' in 2020. Her development into AI audiovisual art evolved through several artist residencies, from NEW NOW FESTIVAL and BBA Gallery in 2021, and Factory Berlin x Sonar+D in 2020. She researches computational creativity and human-machine collaboration, and explores new formats & applications for forward-thinking art and sound. She holds a monthly radio residency with her art & activism collective CO:QUO (CO CREATE STATUS-QUO) on Refuge Worldwide Radio, is growing a community of hybrid artists at SOUND OBSESSED, and is a founding member of The IASAS (International Association of Synaesthetes, Artists, and Scientists). Her debut research-based AI audiovisual album 'WIRE' is set to release in Web 3.0 & all traditional formats on December 9th 2022 as a first of its kind: NFT to vinyl.

Dadabots: Zack and CJ met at Berklee College, back when they used to play instruments. After falling down the rabbit hole of python, theano, arXiv, and github, a benevolent alien abducted them in a UFO and granted them unlimited GPU credits, which they wasted on generating noise music. Their musicianship has since deteriorated, and they are embarrassingly out of practice.

[[ MILESTONE 1 ]] Collaboration with Dadabots - SampleRNN training: curating an audio dataset of 1 hour of vocals (dynamic expression + frequency range) and training for 2 1/2 days to output 10 hours of AI-generated audio
[[ MILESTONE 2 ]] AI audio curation + songwriting (call + response)
[[ MILESTONE 3 ]] Songwriting, composing, and producing
[[ MILESTONE 4 ]] Instagram playable filter in collaboration with Cibelle Cavalli Bastos
[[ MILESTONE 5 ]] AI audiovisuals for live performances with Pollinations.AI
[[ MILESTONE 6 ]] AI audiovisuals for music videos (in collaboration with Mikael Brain for a cinematic music video) + further experiments with Pollinations.AI
[[ MILESTONE 7 ]] BBA Artist Residency January 2021 - filmed select visitors and dancers for new text-to-image VQGAN + CLIP music videos
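
Milestone 1, curating an hour of vocals into training data for an autoregressive waveform model, amounts to slicing audio into fixed-length, normalised windows. A hedged Python sketch (the file name and window length are illustrative; Dadabots' actual SampleRNN preprocessing is their own):

```python
# Hypothetical dataset slicing for a SampleRNN-style waveform model.
import torch
import torchaudio

waveform, sr = torchaudio.load("vocals.wav")  # placeholder source file
mono = waveform.mean(dim=0)

WINDOW = sr * 8  # 8-second training excerpts (illustrative choice)
windows = mono[: len(mono) - len(mono) % WINDOW].reshape(-1, WINDOW)

# Normalise each excerpt so the model sees a consistent dynamic range.
peaks = windows.abs().amax(dim=1, keepdim=True).clamp(min=1e-6)
windows = windows / peaks
print(windows.shape)  # (num_excerpts, WINDOW)
```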

# TECHNICAL JOURNEY #
Each track title on this album is named after the AI-generated audio clip whose lyrics and melodies inspired the song. Each song prompted long contemplations about what it means to be a human co-creating with an AI that generates another version of myself this way. I feel like I've grown a new relationship with myself through the lens of AI that keeps pushing the boundaries of whatever I thought I knew about myself, technology, and how we relate to each other. In my constant states of confusion, I'm trying to make sense of it all by sonifying the dance between art and technology.

The process of surrendering to the unpredictable glitches of AI generations of my voice, from a machine learning model with no algorithmic understanding of music theory that created 10 hours of strange audio purely through waveform prediction, has been the most fascinating, intimate journey for me as a songwriter. This research-based experimental album has opened new perspectives on how I can use my voice in ways I never imagined. Using AI this way, by training my own dataset, has offered a window into the most intimate journey of self-reflection: on how I write and sing, on listening to how the AI presents my repetitive patterns, and on surrendering to endless goosebump moments of hearing a machine try to sing like me. I was tired of my old patterns of songwriting and producing; I wanted to break free from form. This experiment felt like Dadabots took everything I ever knew about myself, broke me down through their model into a billion pieces, and gave me endless fragments of mirrors of myself in fascinating ways.

Technically, composing and producing this album has been the most interesting journey. It's the first time I completely surrendered to not knowing what the outcome would be and built around what I heard to be the most inspiring/fascinating. I kept the production as stripped down as I could, picking fewer elements than I usually do to highlight the inspiring AI-generated sounds. When lockdown happened, I was introduced to Thomas Haferlach, who started pollinations.ai, which birthed all the AI audiovisuals I perform live with + some special music videos that are releasing soon. I fell in love with a two-step process of using their Lucid Sonic Dreams model, then feeding that output into the text-to-image VQGAN + CLIP (Video) model, where I could get really granular with how audio-reactive and customized the visual outcome could be.

= MOST CHALLENGING =
The most challenging part at first was not knowing whether I would be able to compose cohesive-sounding music that I found musical. The first set of audio I heard was really noisy, and this entire process required a lot of patience to listen through a ton of audio. I picked my favourite gems and 'wired' them together to create this album. While I found the curation process itself time consuming, the amount of inspiring moments felt endless. Once I had all my favourite audio curated, songwriting and producing happened really quickly, with a lot of hair-raising moments. Those are the moments I live for: the sparks of inspiration that trigger flow. There was endless flow.

The other challenging aspect was using Google Colab notebooks + Pollinations.AI for all the visuals I made. This was really time consuming and required a lot of patience, including restarting sessions that lost connection. At the time Google Colab was pretty new; there were no Pro and Pro+ subscription models yet, and the open-source code that was available mostly couldn't restart from where I left off when I lost connection. But as I was extremely isolated in Berlin, this entire project was the one source of unpredictability that was inspiring and exciting. It kept me alive during really difficult times.

{{ UNEXPECTED DISCOVERY }}
Hearing my AI-generated vocals combine vocal techniques in ways I can't reproduce. Every track on the album came from my favourite goosebump moments of lyrics and melodies I've never sung before. Each title was very specific to thoughts and feelings I was experiencing at the time: strange coincidences that drew out parts of me in interesting ways. Throughout this project, I've discovered new ways of expressing myself through co-creation with not just machines, but with an 'other' version of myself through the lens of AI.

Artwork title

Portrait XO 'Back To' Process Video

Artist name Portrait XO
Artwork title

WEAVE

Artist name Noah Pred x Jeff Warren
Artwork Description:

Narrated by Jeff Warren with soundscape entwined by Noah Pred, WEAVE investigates the potential of sound as a tool to explore consciousness, hone attention, and ultimately expand awareness. The remixing of sound is offered as a metaphor for exploring and adjusting one’s own engagement with self / world. The 12-minute piece is a kind of guided sound meditation, moving through terrains of effort, challenge, release, and dissolution. 

The sonic backdrop was constructed using custom generative tools developed by Pred. Each idea in Warren’s narration corresponds to changes in the music, and vice versa. In addition, each musical element was created from samples of Jeff’s recorded narration, reëmbedding the soundscape as another emanation of the same consciousness, exploring itself. This multilayered, interwoven process points toward recursive effects arguably inherent to experience.


-------------------------------------------------------



Jeff Warren teaches meditation. He co-wrote Meditation for Fidgety Skeptics, wrote The Head Trip, and founded The Consciousness Explorers Club. His mission is to empower people to care about their mental health through the creative application of meditation and personal growth practices. He also teaches people how to guide and share practice in community.

Noah Pred has been envisioning the future through electronic music for over twenty years. He teaches music production in his adopted hometown of Berlin, where he also programs custom generative patches to fuel his creative practice. He provides that same generative fuel through his sound design outlet, Manifest Audio. His interactive multimedia works have been featured in recent years at MUTEK, despace, and Refraction Festival.

Pred and Warren met in Toronto some fifteen years ago, and their first collaboration presaged WEAVE, with Pred creatively soundtracking Warren's narrated radio piece on deep ocean life for the CBC. The culmination of years of ongoing conversation, this first foray into sonic meditation indicates new paths forward for both.


Artwork title

WEAVE

Artist name Noah Pred and Jeff Warren

# TECHNICAL JOURNEY # The musical components of the piece were created using sample-based virtual instruments populated with short snippets of Jeff's voice taken from the recorded narration. Once Noah adjusted the resulting sounds, they were triggered by Noah's custom Max for Live devices: Pattern Engine, Chance Engine, and Pulse Engine. Noah's real-time audio-to-MIDI device, X-Translate, was also used to adapt certain phrases of Jeff's narration into musical gestures of their own. = MOST CHALLENGING = Collaborating over distance during the pandemic presented the usual spate of digital challenges. From there, the project went through numerous drafts and refinements to ensure adequate space was made for the listener before landing on the final version presented here. {{ UNEXPECTED DISCOVERY }} Noah: Using the vibrations of Jeff’s voice to generate musical artifacts to then trigger related musical instruments (also constructed from Jeff’s voice) worked more fluidly than expected. Jeff: I had no idea sound could be used so effectively to capture the nuances of consciousness – I also enjoyed the weirdly recursive way the musical content referenced its own form!
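To illustrate the audio-to-MIDI idea in principle, here is a minimal offline sketch, assuming the librosa and pretty_midi Python libraries with hypothetical file names. The artists worked in Max for Live, so this is an analogue of the concept rather than the actual X-Translate device: pitch-track a recorded phrase and emit MIDI notes that could drive a voice-based sampler.

```python
# Offline analogue of an audio-to-MIDI step: track the pitch of a spoken
# phrase and convert it into MIDI notes. File names are placeholders.
import librosa
import pretty_midi

audio, sr = librosa.load("narration_phrase.wav", sr=None, mono=True)

# Probabilistic YIN pitch tracking over a generous vocal range;
# f0 is NaN and voiced_flag is False on unvoiced frames.
f0, voiced_flag, _ = librosa.pyin(
    audio, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
times = librosa.times_like(f0, sr=sr)

midi = pretty_midi.PrettyMIDI()
inst = pretty_midi.Instrument(program=0)

note = None
for t, pitch, voiced in zip(times, f0, voiced_flag):
    if voiced and pitch > 0:
        number = int(round(librosa.hz_to_midi(pitch)))
        if note is None or note.pitch != number:
            if note is not None:          # close the previous note
                note.end = t
                inst.notes.append(note)
            note = pretty_midi.Note(velocity=90, pitch=number, start=t, end=t)
    elif note is not None:                # silence closes any open note
        note.end = t
        inst.notes.append(note)
        note = None
if note is not None:                      # close a note left open at the end
    note.end = times[-1]
    inst.notes.append(note)

midi.instruments.append(inst)
midi.write("narration_phrase.mid")
```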

[[ MILESTONE 1 ]] Developed generative tools to express the narration [[ MILESTONE 2 ]] Developed narrative [[ MILESTONE 3 ]] Developed instruments created from samples of narration [[ MILESTONE 4 ]] Refined narrative [[ MILESTONE 5 ]] Recorded final outputs [[ MILESTONE 6 ]] Arranged and mixed final outputs [[ MILESTONE 7 ]] Mastered final outputs

Noah Pred is a Juno Award-nominated electronic artist, sound designer, developer, and generative artist. Jeff Warren is a New York Times best-selling author, founder of the Consciousness Explorers Club, co-host of the Consciousness Explorers podcast, and a meditation practitioner providing the Daily Trip for Calm. Friends for over a decade, their conversations over the years inevitably led to investigations of sound, perception, attention, and the human experience of reality.

Artwork title

Albert.DATA

Artist name Albert.DATA
Artwork Description:

Albert.DATA is the new artistic identity of Albert Barque-Duran (1989).

Albert is an artist and a researcher in Creative Technologies and Digital Art, currently based in Barcelona.

Albert, the human one, earned a PhD and completed a postdoc in Cognitive Science at the Centre for Mathematical Neuroscience at City, University of London (UK), and has been a Visiting Postgraduate Researcher at Harvard University (USA) and the University of Oxford (UK).
Albert's artistic research focuses on: (1) human-machine interaction during artistic and creative processes, (2) Artificial Intelligence's (AI) aesthetic artifacts, (3) perception and aesthetics under sensory conflicts, and (4) experimental formats and aesthetics in virtual environments using game engines.

He has exhibited and performed at Sonar+D (Barcelona, Spain), Ars Electronica (Linz, Austria), ZKM (Karlsruhe, Germany), Creative Reactions (London, UK), Cricoteka (Krakow, Poland), Albumarte (Rome, Italy), SciArt Center (New York, USA), IGNITE Fest (Medellin, Colombia), Nuits Sonores (Lyon, France), Mobile World Congress (Barcelona, Spain), Ming Contemporary Art Museum (Shanghai, China), DMA (Daejeon, South Korea), and more.
Albert was awarded the Artist Residency at Sonar+D x Factory Berlin in 2020 and received an award from "We Are Europe" (Creative Europe Programme of the European Union), which named him one of the 64 young "Culture Activists" in Europe in 2019. He has also been internationally recognized by the "Re:Humanism Prize" for work on the relationship between AI and Art; by "We Are Equals-MUTEK Music Academy"; by the "Art of Neuroscience" award for bridging science and art; and by the Catalan Government with the "International Award City of Lleida" for the significant cultural impact of his projects.

Artwork title

SLOWLY FADING INTO DATA

Artist name Albert.DATA
Artwork Description:

'SLOWLY FADING INTO DATA'

ABSTRACT:
‘Slowly Fading into Data’ is a speculative audiovisual project expressed in multiple artistic formats: a Retro-Game & Arcade Installation, a Debut Music Album, a Live A/V Performance, and a Short Film. These audiovisual experiences result from Albert.DATA’s artistic research on disembodiment, extended cognition, hybrid beings, and synthetic identities. The artist and researcher presents an avant-garde, ambient, and contemplative story of a human slowly mutating into data: a transformative journey about non-human forms, the boundaries and extensions of experience, existence, and identity. The creation of new audiovisual instruments to interpret the last human reverberations. A step forward for the disintegration of the self.

OVERVIEW:
The Retro-Game & Arcade Installation is based on a narrative-driven, interactive-storytelling, action-adventure experience. The project’s goal is to challenge standard audiovisual production methods by combining "low-tech" (an 8-bit retro console) with "high-tech" (AI and procedural sound design) using game engine technologies. Specifically, the experience presents 8-bit artistic game assets designed and produced using AI techniques, and it also features 8-bit procedural music and sound design that experiments with and exploits the sound hardware of this iconic family of consoles.

The debut Music Album is a conceptual sonic project that delves into the past to peer into the future. Using research on 8-bit music and neural audio synthesis, Albert.DATA proposes an avant-garde, ambient, and ominous experience that bends the perception of time. The creation of new digital instruments to interpret the last human reverberations. A memory trip based on contemplation and ecstasy.

The Live A/V Performance (45 minutes) is an experimental audiovisual show that invites us into an ambient, ethereal, abstract, and contemplative story of a human slowly mutating into data: a transformative journey about the meaning of transfiguring into a non-human form, the boundaries and extensions of experience, existence, and identity. The performance aims to challenge standard audiovisual production methods by combining, in real time, "low-tech" (an 8-bit retro console) with "high-tech" (AI and procedural sound design), implementing 8-bit sound technologies alongside neural audio synthesis.

The Short Film is an experimental cinematography project that aims at merging all the other expressions of the ‘Slowly Fading into Data’ project into one single audiovisual experience. It consists of 6 different short pieces that combine scenes from the live a/v performance and fictional material. The goal is to challenge the canonical audiovisual production methods by combining retro formats (8mm Film) with state-of-the-art cinema technology (Virtual Production and Artificial Intelligence).

Artwork title

SLOWLY FADING INTO DATA

Artist name Albert.DATA

# TECHNICAL JOURNEY # This project aims to challenge canonical audiovisual production methods by combining retro formats, such as 8mm Film and 8-bit retro consoles, with state-of-the-art cinema technology, such as Artificial Intelligence techniques and Virtual Production processes (next-gen game engines). Furthermore, the project also draws on research in 8-bit music and neural audio synthesis. More specifically, it uses a mix of established and custom pipelines based on diffusion-based and GAN (Generative Adversarial Network) models for image synthesis, and on DDSP (Differentiable Digital Signal Processing) and RAVE (Realtime Audio Variational autoEncoder) models for sound synthesis. = MOST CHALLENGING = “Slowly Fading into Data” is an artistic research project with the aim of continuing and extending my research on audiovisual perception and experimental aesthetics in non-standard environments. Specifically, the goal is to investigate the transformation of our artistic cognitive practices in virtual contexts, where the process of cognition is mediated by digital artifacts. The concept of “game space” refers to an interactive world offering a field of action to which players must adapt (Adams, 2019). It has a space-time delimitation emerging from various entities like nature, objects, characters, players, and other elements. Nevertheless, the traditional discourse within a game space is limited to the spatiotemporal manipulations of the interactive world from the position of players (Williams, 2017). Hence, these spacetime representations define the way in which we can move around that world, the point of view from which we perceive it, and the channels of interaction with the world (Sharp, 2014). However, recent research in cognitive science offers a fascinating perspective that could challenge this order. This project experiments with the tension between retro formats and state-of-the-art technologies with the aim of pushing the boundaries of standard conventions of human perception. {{ UNEXPECTED DISCOVERY }} To experience altered states of consciousness.
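As an illustration of the sound-synthesis side, here is a minimal sketch of running audio through a pretrained RAVE model via the TorchScript export documented by the open-source RAVE project. The model file name and latent perturbation are hypothetical, and the input audio should match the sample rate the model was trained on.

```python
# Timbre transfer with a pretrained RAVE model exported to TorchScript.
# "some_rave_model.ts" is a placeholder; input should be mono audio at
# the model's training sample rate.
import torch
import torchaudio

torch.set_grad_enabled(False)

model = torch.jit.load("some_rave_model.ts").eval()

audio, sr = torchaudio.load("input.wav")           # (channels, samples)
x = audio.mean(dim=0, keepdim=True).unsqueeze(0)   # (batch=1, 1, samples)

z = model.encode(x)                  # compress audio into the latent space
z = z + 0.15 * torch.randn_like(z)   # gently perturb latents toward new timbres
y = model.decode(z)                  # resynthesize audio from the latents

torchaudio.save("output.wav", y.squeeze(0), sr)
```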

Albert.DATA is an artist and a researcher in Creative Technologies and Digital Art. He earned a PhD and completed a postdoc in Cognitive Science at City, University of London (UK), and his artistic research focuses on: (1) human-machine interaction during artistic and creative processes, (2) Artificial Intelligence's (AI) aesthetic artifacts, (3) perception and aesthetics under sensory conflicts, and (4) experimental formats in virtual environments using game engines. Albert was awarded the Artist Residency at Sonar+D x Factory Berlin in 2020 and received an award from "We Are Europe", which named him one of the 64 young "Culture Activists" in Europe in 2019.

Artwork title

Marvin's Dream

Artist name Margaret Kirchmeier, Die Wilde Jagd, Thomash Haferlach, EXZ, Elliot & Pollinations
Artwork Description:

You are invited to dive into an immersive 4-dimensional fable that takes place in a future universe brimming with robots, aliens, and other inanimate objects. We follow the story of the robot Marvin as they reflect on their own existence and dream of surprising adventure.

Marvin's Dream is an immersive 4D fable that follows the story of a robot named Marvin as they dream of a world without hierarchy between humans and machines. The story is narrated by 95-year-old Margaret Kirchmeier, who has read stories to multiple generations of children and grandchildren. The script is written by an open-source neural network that has learned to write from various texts, including fairy tales, fables, and science fiction. The 4D soundtrack is created by Thomash, EXZ, Sebastian Lee Philipp, and Margaret Kirchmeier, who bring the dreamt universe to life with their unique styles and experiences in electronic and psychedelic music.

Artwork title

Marvin's Dream

Artist name Margaret Kirchmeier, Die Wilde Jagd, Thomash Haferlach, EXZ, Elliot & Pollinations

[[ MILESTONE 1 ]] Research and development of the artificial intelligence writer, including training and testing on various texts and styles. [[ MILESTONE 2 ]] Development of the 4D soundtrack, including the creation of themes and sound design by Thomash, EXZ, and Sebastian Lee Philipp. [[ MILESTONE 3 ]] Integration of the neural network-written script and 4D soundtrack into a cohesive and immersive experience. [[ MILESTONE 4 ]] Testing and fine-tuning of the experience, including the integration of Margaret Kirchmeier's narration and any additional creative or technical elements.

Marvin dreams of a world in which humans are not superior but just one of many possible forms of existence. This removal of hierarchy between humans and machines is mirrored in the story’s creation process. The script itself is entirely written by an open-source neural network that has learned to write from massive amounts of raw text from fairy tales, fables, and science fiction.
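As a toy illustration of that kind of script generation (the specific open-source network and training corpus behind Marvin's Dream are not named here), a sketch using GPT-2 via the Hugging Face transformers library:

```python
# Toy text generation with an open-source language model via transformers.
# GPT-2 stands in for the unnamed network used in the project; the prompt
# is an invented example.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Marvin the robot dreamed of a world without hierarchy, where"
result = generator(prompt, max_new_tokens=120, do_sample=True, temperature=0.9)

print(result[0]["generated_text"])
```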

Fly through this space to learn about the latest sonic innovation from our community of hybrid artists.