
How an AI experiment reimagined J. Dilla’s Hip-Hop’s avant-garde classic ‘Donuts’

January 20, 2024

image generated by MidJourney

This article was originally posted on my LinkedIn newsletter, ‘Cultural Codex.’ The late hip-hop icon J. Dilla is hailed as a production pioneer for the unorthodox jazz-tinged sampling and off-kilter rhythms showcased on his 2006 instrumental album “Donuts.” Now, over 15 years after his untimely death from complications of lupus, an artificial intelligence (AI) musical experiment has set out to revive Dilla’s avant-garde sound by training machine learning algorithms on his musical catalog.

The “Artificial Donuts” project, led by Nobody & The Computer, employs advanced AI techniques to recreate Dilla’s avant-garde sound. Using Facebook’s MusicGen, part of the AudioCraft suite, the project analyzes Dilla’s extensive musical catalog and generates new compositions that reflect his idiosyncratic style.

J Dilla: A trailblazer in hip-hop production

Born James Dewitt Yancey in Detroit in 1974, J Dilla grew up immersed in music, with an opera-singer mother and a jazz-bassist father. He dove into beatmaking and hip-hop production as a teen, experimenting on samplers in his basement, soon joining the local group Slum Village and attracting the attention of acclaimed artists like Questlove and Q-Tip with his unorthodox style.

Dilla was among the pioneers manipulating the Akai Music Production Center (MPC) to manually chop, filter, and layer samples in highly syncopated ways previously unexplored in hip-hop. Unlike traditional boom-bap rhythms, his drums often landed just behind or ahead of the beat.

The technology behind the project

The resulting project, “Artificial Donuts,” comes from the artist Nobody & The Computer (Nobody), who leveraged AI tools like Facebook’s MusicGen to analyze Dilla’s original works and generate fresh beats and vocals modeled after his idiosyncratic style. As described by Facebook, “MusicGen is a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen comprises a single-stage transformer LM with efficient token interleaving patterns, eliminating the need for cascading several models, e.g., hierarchically or upsampling.” This unified model generates high-quality music conditioned on textual or melodic inputs, giving finer control over the output.
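The “token interleaving patterns” in that quote can be pictured with a small sketch. This is a deliberately simplified illustration of the delay pattern MusicGen’s paper describes, not Meta’s implementation; the function names, the `PAD` marker, and the toy token values are all made up for the example:

```python
# Toy illustration of the "delay" token-interleaving pattern: MusicGen
# models K parallel codebook streams with a single transformer by
# shifting stream k right by k steps, so one token per stream is
# predicted at each position instead of cascading K separate models.

PAD = -1  # marks positions outside a stream's shifted range

def delay_interleave(streams):
    """Shift the k-th codebook stream right by k steps, padding the edges."""
    k_total = len(streams)
    return [
        [PAD] * k + list(stream) + [PAD] * (k_total - 1 - k)
        for k, stream in enumerate(streams)
    ]

def delay_deinterleave(rows):
    """Invert the delay pattern, recovering the original aligned streams."""
    k_total = len(rows)
    t_steps = len(rows[0]) - (k_total - 1)
    return [rows[k][k:k + t_steps] for k in range(k_total)]
```

With three toy streams [1,2,3], [4,5,6], [7,8,9], the interleaved rows become [1,2,3,PAD,PAD], [PAD,4,5,6,PAD], and [PAD,PAD,7,8,9], and deinterleaving recovers the originals. The staggering is what lets a single-stage model handle all the streams at once.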

Unpacking MusicGen and AudioCraft

AudioCraft consists of three models with distinct functions: MusicGen, AudioGen, and EnCodec. MusicGen, trained on Meta-owned and specifically licensed music, creates music from textual prompts. AudioGen, trained on a large database of public sound effects, generates audio from text prompts. EnCodec is the neural audio codec underpinning both; per Meta, its improved decoder enables higher-quality music generation with significantly fewer artifacts.
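EnCodec’s job, compressing audio into the discrete token streams that MusicGen then models, is built on residual vector quantization (RVQ). The following is a hand-rolled toy sketch of the RVQ idea using plain scalars and made-up codebooks; the real EnCodec quantizes learned neural features with learned codebooks:

```python
# Toy residual vector quantization (RVQ), the core idea behind EnCodec's
# discrete tokens. Each stage quantizes the residual error left by the
# previous stage, so later codebooks add progressively finer detail.
# The scalar values and hand-picked codebooks here are illustrative only.

def rvq_encode(value, codebooks):
    """Encode a scalar as one codebook index per quantization stage."""
    indices, residual = [], value
    for codebook in codebooks:
        # choose the entry closest to what is left to encode
        idx = min(range(len(codebook)), key=lambda i: abs(codebook[i] - residual))
        indices.append(idx)
        residual -= codebook[idx]
    return indices

def rvq_decode(indices, codebooks):
    """Reconstruct by summing the chosen entry from each stage."""
    return sum(codebook[i] for codebook, i in zip(codebooks, indices))
```

For example, with a coarse codebook [-1.0, 0.0, 1.0] and a fine codebook [-0.25, 0.0, 0.25], the value 0.8 encodes to indices [2, 0] (coarse 1.0, then correction -0.25) and decodes to 0.75, closer than the coarse stage alone. Stacking more stages is how EnCodec trades token count for fidelity.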

Capturing the Essence of Dilla’s Style

The project started by training an AI model, using Facebook’s MusicGen, on Dilla’s “Donuts” to generate new beats inspired by the late artist’s signature sound. Dilla, among the most innovative hip-hop producers ever, was known for his unorthodox jazz-infused sampling and syncopated rhythms. MusicGen uses neural networks to analyze input songs and produce original melodies, harmonies, and rhythms. After the initial success, Nobody trained the model on nearly Dilla’s entire catalog.

The AI-generated tracks have a remarkably Dilla-esque quality, with glitchy drums and fuzzy vinyl textures. When prompted to make a “slow punk ballad” beat, MusicGen outputs an ethereal, melancholy track that almost ventures into shoegaze territory before snapping back on beat.

But Nobody & The Computer wanted to push the technology even further. He separated Dilla’s vocals from his instrumentals using specialized AI tools, then had the AI generate an entirely new verse mimicking Dilla’s signature style. The lyrics touch on classic hip-hop themes, like the struggle to pursue your dreams against adversity, and the verse repeatedly name-checks the late producer over jazzy piano chords: “Dilla’s still here, his style’s still dear / Bringing that realness year after year.”

Visual Artistry through AI

Animated visual components also relied on machine learning, processing characteristics of psychedelic donut images to produce similar swirling graphics. These deep learning pipelines bridge multiple senses, leveraging sight, sound, and language to channel Dilla’s essence more holistically through AI mediums.

While Nobody & The Computer’s album “Artificial Donuts” innovatively uses AI to reinterpret J Dilla’s influential sound, it was trained on the late artist’s audio without permission from his estate. Even though the experiment is non-commercial, its unauthorized use of Dilla’s creative works raises complex questions around AI fair use and copyright law, issues that will keep surfacing as these tools replicate more iconic cultural touchstones.

Conclusion

In the end, Nobody & The Computer’s experiment blending hip-hop artistry and AI technology resulted in the album “Artificial Donuts,” an ambitious attempt to recreate J Dilla’s acclaimed instrumental work “Donuts” with machine learning algorithms.

While no technology can fully capture a singular creative vision like J Dilla’s, the “Artificial Donuts” project highlights evolving possibilities for human-AI collaboration in music. It is an example of machine learning being applied not to replace artists but to enhance, expand, and memorialize their influence. By training algorithms on iconic works and then generating new sounds and visuals, AI allows for ambitious creative preservation and experimentation.

Written by: Tarik Moody
