

Tyler Perry Freezes $800M Studio Plans Due to OpenAI Video Model Sora

February 24, 2024

© Rubenstein, photographer Martyna Borkowski

In a recent Hollywood Reporter article, entertainment mogul Tyler Perry revealed a startling decision to put a massive $800 million expansion of his Atlanta studio on hold. The reason behind this sudden shift? The emergence of OpenAI's groundbreaking text-to-video model, Sora. Perry, known for his forward-thinking approach and successful entertainment ventures, has expressed both awe and concern at the capabilities of Sora, recognizing its potential to revolutionize the industry while also threatening traditional jobs.

Understanding Sora: The AI That’s Changing the Game

OpenAI’s Sora is not just another AI video tool; it’s a leap forward in generative AI technology. Drawing from a technical paper on Sora, we learn that this model is a generalist in visual data, capable of generating videos and images across diverse durations, aspect ratios, and resolutions. Unlike previous models, which focused on narrow categories or shorter clips, Sora can produce up to a full minute of high-definition video.

Sora works a bit like how Instagram or Snapchat filters change your photos. But instead of just adding dog ears to your selfie, Sora generates whole videos. It breaks videos down into little patches, then learns how to combine them in new ways to make new videos. This approach is inspired by large language models, the AI systems that are good at understanding and generating text. Sora does the same thing, but with video pieces instead of words.

Turning Visual Data into Patches

Think of a video as a big, detailed painting. Imagine if you could break that painting into tiny squares (patches), each containing a piece of the overall image. Sora does something similar with videos. It takes the entire video and breaks it down into small, manageable pieces. This is inspired by how some AI systems learn to understand and generate text by breaking it down into smaller chunks, like sentences or words.
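To make the "tiny squares" idea concrete, here is a minimal sketch of patchifying a video with numpy. This is illustrative only: the function name `patchify` and the patch size are our own choices, and Sora's real patch extraction operates on a compressed latent representation rather than raw pixels.

```python
import numpy as np

def patchify(video, patch=4):
    """Split a video (frames, height, width) into flat spacetime patches.

    Illustrative stand-in for Sora's patch extraction, which actually
    works on a compressed latent space, not raw pixels.
    """
    f, h, w = video.shape
    assert f % patch == 0 and h % patch == 0 and w % patch == 0
    # Carve the video into patch x patch x patch cubes...
    cubes = video.reshape(f // patch, patch, h // patch, patch, w // patch, patch)
    # ...group the cube axes together...
    cubes = cubes.transpose(0, 2, 4, 1, 3, 5)
    # ...and flatten each cube into one token-like vector.
    return cubes.reshape(-1, patch ** 3)

video = np.zeros((8, 16, 16))   # 8 grayscale frames, each 16x16 pixels
tokens = patchify(video)
print(tokens.shape)             # (32, 64): 32 patches of 64 values each
```

Each row of `tokens` plays the role a word plays for a language model: a small, uniform chunk the model can learn to predict and recombine.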

Why Break Videos into Patches?

By breaking videos down into these tiny patches, Sora can focus on learning from and generating each piece of the video puzzle. This method makes it easier for Sora to handle videos of all shapes and sizes – long or short, square or wide. It’s a bit like how you can more easily solve a puzzle by tackling one piece at a time rather than trying to figure out the whole picture simultaneously.

How Sora Learns from Patches

Once Sora has all these patches, it uses a “diffusion model” to learn how to create new video patches from scratch. Imagine you have a noisy, unclear picture. Each step of the diffusion model makes this picture a little clearer until you end up with a clean image. Sora does this with video patches, starting with something messy and refining it step by step until it has a clear, detailed video patch.
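The "messy to clear, step by step" loop can be sketched in a few lines. In a real diffusion model, a trained neural network predicts how to remove noise at each step; here we substitute a toy `denoise_step` that simply blends toward a known target, so the gradual refinement is visible without any training.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(noisy, target, strength=0.2):
    """One refinement step: nudge the noisy patch toward the clean one.

    A stand-in for a trained denoiser -- the real model must *predict*
    the noise, since it has no clean target to peek at.
    """
    return noisy + strength * (target - noisy)

clean = np.ones((4, 4))          # the patch we want to recover
x = rng.normal(size=(4, 4))      # start from pure noise
for step in range(50):           # refine a little at a time
    x = denoise_step(x, clean)

print(np.abs(x - clean).max())   # remaining error is now tiny
```

Fifty small corrections turn static into a clean patch; Sora runs the same kind of iterative refinement over its video patches.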

Transformers: The Brain Behind the Operation

To put all these patches in the right order and make a coherent video, Sora uses a technology called “transformers.” Think of a transformer as the director of a movie, deciding where each scene goes to make the story flow smoothly. Transformers are good at understanding how different pieces fit together, whether those pieces are words in a sentence or patches in a video.
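The mechanism that lets a transformer play "director" is self-attention: every patch token looks at every other token and mixes in the most relevant ones. Below is a minimal single-head sketch with no learned weights (a real transformer would project tokens through trained query, key, and value matrices first).

```python
import numpy as np

def self_attention(tokens):
    """Minimal single-head self-attention over patch tokens.

    Simplified sketch: real transformers apply learned query/key/value
    projections before this step.
    """
    d = tokens.shape[-1]
    # Score how strongly each patch should attend to every other patch.
    scores = tokens @ tokens.T / np.sqrt(d)
    # Softmax turns scores into attention weights that sum to 1 per patch.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is a weighted mix of all input tokens.
    return weights @ tokens

tokens = np.arange(12.0).reshape(4, 3)  # 4 patch tokens, 3 features each
out = self_attention(tokens)
print(out.shape)                        # (4, 3): same shape, context-mixed
```

Because every token sees every other token, information from the first frame can shape the last one, which is how the model keeps objects and scenes consistent across a whole clip.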

What Makes Sora Different?

The key differentiators for Sora in AI video tools are its speed, cost-effectiveness, and the sheer scope of its creative possibilities. Unlike traditional CGI or VFX processes that can be time-consuming and expensive, Sora promises to deliver high-quality content at a fraction of the cost and time. This efficiency could lead to significant budget cuts for productions, enabling creators to tell more ambitious stories without the associated financial constraints. Moreover, Sora’s intuitive interface allows writers and filmmakers to bring their stories to life without requiring specialized technical skills in video production. This democratization of content creation could open doors for a new wave of storytellers who previously lacked the resources to realize their visions.

The Implications for Hollywood and Beyond

Tyler Perry’s pause on his studio expansion reflects a broader concern within the entertainment industry about the rapid advancement of AI technologies like Sora. While the potential for innovation and cost savings is clear, there is also a palpable fear about the displacement of jobs across various departments, from set construction to location scouting.

Perry’s call for industry-wide collaboration and regulation highlights the need for a balanced approach to integrating AI into the creative process. As studios and production companies grapple with the implications of AI, the conversation must also include the workforce that could be affected by these changes.

The entertainment industry stands at a crossroads, with AI poised to redefine the landscape. As Tyler Perry suggests, it’s not just about embracing the technology but also about ensuring that the human element of filmmaking is preserved and protected. The rise of Sora and similar AI tools may offer exciting new opportunities, but it also demands a thoughtful response to safeguard the livelihoods of those who have built their careers in this ever-evolving field.


Written by: Tarik Moody

