Clarice Assad


Sandbox Residency

About Sandbox

A new initiative for engaging composers and community in the creation of new music.

The 2022–23 season will bring the launch of the SPCO’s Sandbox Composer Residency program — a new initiative for engaging composers and community in the creation of new music. Over the next several seasons, three composers will participate in innovative and intensive multi-week residencies with the SPCO, in a program designed to foster a spirit of shared discovery and the creation of a significant body of new music. Viet Cuong, Clarice Assad and Gabriela Lena Frank have been selected to work with SPCO musicians as the inaugural Sandbox composers.

My Project:
The Evolution of AI

I’m excited to have the opportunity to work with the Saint Paul Chamber Orchestra and write a new work exploring one of our time’s most fascinating and challenging subjects: Artificial Intelligence. The development of AI, and its rapidly changing pace, is already transforming human roles in society. Significant disruptions are about to reach us in various fields. It is essential to question these developments shaping our world for years to come, whether through robots, machine learning, or other means. In the piece, I will use electronic musical instruments and wearables as well as AI software to generate ideas rather than come up with them on my own. Stay tuned.

– Clarice Assad.

The Evolution of AI

i. reboot \ ii. data collection \ iii. machine learning \
iv. integration



The Evolution of AI is in four movements. The piece begins with “i. Reboot,” when the AI (represented by Assad as a performer) is born or becomes aware of its existence. It conveys the idea of starting anew, just as a computer or electronic device goes through a reboot to refresh and begin again.

Resources, Gear and More



The following IG videos are demonstrations of gear I experimented with to create this piece. Not all of it made it into the final cut, but the videos are still fun and interesting to watch.

Percussion Suit

Wave Midi Ring (By Genki)

Dubler USB Microphone

Robert Hüttl is the maker of this suit. It includes ten dynamic pads. It is a wired suit (which makes certain performance scenarios difficult), but it is highly responsive, and because it is MIDI, it can trigger a great variety of sounds. Sadly, the suit did not make it into the piece.



The Genki Wave ring is a dream for any music performer looking to add dramatic flair to their act. It frees the performer from controlling sounds on a keyboard or another MIDI-controlling device.





This is a unique microphone. It is essentially a MIDI controller, but one that uses the voice as the input, which is wild. There is a lot of potential here and many tricks to learn. The company is called DUBLER. I chose not to go with the Dubler for this piece because I will already be using the TC Helicon and the Genki ring.



Image of Assad generated by ChatGPT and Midjourney based on a real picture.

Relevant Links to AI Models and Software with Description


Courses – Broad introductory and intermediate-to-advanced courses for people wanting to get to know AI


ChatGPT Pro Prompts: A plethora are available for free with a single Google search


MIT Management Executive Education, SMU, and other institutions are beginning to offer M.A. programs in Creative Technology



Art | Video – Creates brand pictures in minutes.

Discord Midjourney: a text-to-image AI.

DALL·E: Creating images from text

HeyGen: Create videos with AI

No Code MBA



SPIRITT: Build an app just by describing it


Ad & Social Creatives – Generating Conversion


Adalo: Custom Responsive Apps and Publishing On-brand AI content. 


Voice and Music

REVOICER: Realistic AI text-to-speech online for podcasts and videos


SCRIBEBUDDY: Automatically transcribes any audio — Zoom calls, Google Meet sessions, podcasts, live speech


SOUNDRAW: Royalty-free, AI-generated music



Amper Music


Mubert: free version of its AI-powered music streaming service



What is Artificial Intelligence (AI)?

Artificial intelligence (AI) is the ability of a computer or a robot controlled by a computer to do tasks that are usually done by humans because they require human intelligence and discernment.

How AI works its way into music, and how I structured the piece to reflect it

Data Collection: To utilize AI in music, you need substantial musical data. This includes audio recordings, sheet music, MIDI files, and music-related metadata. The data serves as the foundation for training AI models.

Data Preprocessing: Like other AI applications, the collected musical data must be cleaned, organized, and formatted appropriately. This might involve converting audio data into a suitable digital format or extracting relevant features like pitch, tempo, and timbre.

Algorithm Selection: Depending on the specific task, different AI techniques can be used in music. Some popular methods include machine learning algorithms like deep neural networks for music generation, classification, and analysis.

Music Generation: AI can be used to generate music automatically.
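As a small illustration of the preprocessing step described above, here is a sketch in Python (the note data is made up) that turns raw MIDI note numbers into a pitch-class histogram — one simple pitch feature an AI model could learn from. Real pipelines would use dedicated libraries and far richer features.

```python
# Toy "data preprocessing" step: raw MIDI note numbers -> pitch-class
# histogram (C=0, C#=1, ... B=11). Illustrative only; the example notes
# are hypothetical training data.

def pitch_class_histogram(midi_notes):
    """Count how often each of the 12 pitch classes occurs."""
    histogram = [0] * 12
    for note in midi_notes:
        histogram[note % 12] += 1  # MIDI octaves repeat every 12 semitones
    return histogram

# C major arpeggio: C4, E4, G4, C5
notes = [60, 64, 67, 72]
print(pitch_class_histogram(notes))
# → [2, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]  (two Cs, one E, one G)
```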



There are so many other uses of AI in music, and it is important to point them out as well: 


You can use a generative model like a Variational Autoencoder (VAE) or a Recurrent Neural Network (RNN) to create new melodies, harmonies, or entire compositions based on the patterns learned from the training data. 
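A first-order Markov chain is a far simpler model than a VAE or an RNN, but it demonstrates the same core principle named above: learn note-to-note patterns from training data, then sample new sequences from those patterns. This is only an illustrative sketch with a made-up training melody, not how a production music model works.

```python
# Minimal generative-model sketch: a first-order Markov chain over MIDI
# notes. It learns "which note tends to follow which" from a training
# melody, then samples a new melody from those learned transitions.
import random
from collections import defaultdict

def train_markov(melody):
    """Record, for each note, the notes that followed it in training."""
    transitions = defaultdict(list)
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new melody of up to `length` notes from the model."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # dead end: no known continuation
            break
        melody.append(rng.choice(options))
    return melody

# Hypothetical training melody (MIDI: C4 D4 E4 ... patterns around C major)
training = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
model = train_markov(training)
print(generate(model, start=60, length=8))
```

The generated melody stays within the vocabulary and transitions of the training data — which is exactly the "patterns learned from the training data" idea, in miniature.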


Music Analysis: AI can be applied to analyze existing music. For example, you can use a model to transcribe audio into sheet music, detect chords, identify musical genres, or even recognize emotions conveyed in the music. 


Music Recommendation: AI can create personalized music recommendations for users based on their listening preferences and behavior. Recommendation systems use collaborative filtering and user profiling techniques to suggest relevant songs or artists. 
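The collaborative-filtering idea mentioned above can be boiled down to comparing listeners' tastes as vectors. The sketch below (with invented listening counts) uses cosine similarity, one common building block; real recommenders are, of course, far more sophisticated.

```python
# Toy collaborative filtering: listeners whose listening-count vectors
# point in similar directions get similar recommendations.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two listening-count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical play counts for songs A, B, C, D:
alice = [5, 3, 0, 0]
bob   = [4, 3, 0, 1]
carol = [0, 0, 4, 5]

# Alice's taste is much closer to Bob's than to Carol's, so a recommender
# would suggest what Bob also likes (song D) to Alice.
print(cosine_similarity(alice, bob) > cosine_similarity(alice, carol))  # → True
```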


Music Enhancement: AI can be employed to improve the quality of audio recordings. For example, it can be used for audio denoising, upscaling low-resolution audio, or enhancing the sound quality of old recordings.

User Interaction: AI-powered music applications can interact with users in innovative ways. This includes chatbots that compose music based on user input or AI systems that respond to emotional cues from listeners.
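To give a feel for the denoising idea in "Music Enhancement," here is a crude classical stand-in: a moving-average filter that smooths a jittery signal. AI denoisers are trained neural networks, not simple averages — this sketch only illustrates the input-to-cleaner-output shape of the task, on made-up sample values.

```python
# Crude denoising stand-in: smooth a signal by averaging each sample
# with its neighbours. (A learned AI denoiser would replace this filter.)

def moving_average(signal, window=3):
    """Average each sample with its neighbours within `window` samples."""
    half = window // 2
    smoothed = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        segment = signal[lo:hi]
        smoothed.append(sum(segment) / len(segment))
    return smoothed

# A "noisy" signal alternating between 0 and 1 comes out much flatter:
noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
print(moving_average(noisy))
```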


Testing and Refinement: After implementing an AI music application, it’s essential to test its performance and gather user feedback. Based on the results, the system can be refined and further trained to enhance its capabilities.

Deployment and Integration: Once the AI music application is thoroughly tested and refined, it can be deployed in real-world scenarios.


Integration into music platforms, software, or devices allows users to experience the benefits of AI-generated or AI-enhanced music.

Throughout these steps, collaboration between musicians, musicologists, and AI experts is crucial to ensure that the AI systems align with musical aesthetics and maintain artistic integrity. AI in music opens up exciting possibilities for creativity, analysis, and personalized musical experiences.


Coming Soon