Just attended Kevin Hart's Live AI Film Battle at LA Tech Week - Mind Blown!

I just got back from the most incredible event at LA Tech Week - Kevin Hart’s Hartbeat and Luma AI put on a live AI film battle where comedians and content creators made SHORT FILMS in real-time using Dream Machine and Ray3 technology.

Five teams competed in front of a live audience, and I was skeptical at first (I’m a filmmaker by trade), but the results were genuinely impressive. The speed at which these teams went from concept to finished short was insane.

What really got me thinking: are we approaching a world where the barrier to filmmaking is just having a good idea and knowing how to prompt? On one hand, this democratizes creation. On the other, what happens to traditional cinematography, editing, and the craft we’ve spent decades perfecting?

Would love to hear thoughts from other creatives here. Is this exciting or terrifying? Both?

#AICreativity #LATechWeek #GenerativeMedia

This is both exciting AND terrifying, and I think that’s exactly the right reaction! :artist_palette::sparkles:

As someone who’s spent over a decade in design, I see parallels to what happened when Figma democratized design tools. Suddenly everyone could make “pretty” things, but that didn’t make everyone a designer. The craft shifted from technical execution to conceptual thinking and taste.

What I’m curious about: Did the AI-generated films have that intangible quality that makes great film great? The pacing, the emotional beats, the subtle visual metaphors? Or were they impressive but… kind of hollow?

I actually think this could be amazing for accessibility. Think about all the storytellers who have incredible ideas but lack the resources/skills for traditional filmmaking. BUT - and this is a big but - we need to be really intentional about:

  1. Attribution and transparency - Making it clear what’s AI-generated vs human-created
  2. Preserving the craft - Using AI as a tool, not a replacement for understanding fundamentals
  3. Economic impact - What happens to cinematographers, editors, VFX artists when this scales?

The design world went through this with Canva, and honestly, it raised the floor but not the ceiling. Great designers are still invaluable. I suspect the same will be true for filmmakers.

Did you see any discussion at the event about ethical guidelines or best practices? That’s what I’d want to hear more about.

Sarah, this is a perfect example of why I love attending events like LA Tech Week - you see the future before it becomes obvious.

From a product/market perspective, what Kevin Hart is doing here is brilliant. He’s not just experimenting with AI tools - he’s building a content moat and positioning himself at the intersection of comedy, entertainment, and technology. When AI filmmaking goes mainstream (and it will), he’ll already be the established expert.

But here’s what I’m thinking about from a business angle:

Market Opportunity

  • Traditional film production costs: $50K-$500K+ for a decent short
  • AI film production: Essentially the cost of the software + creative time
  • TAM expansion: This unlocks an entirely new creator class who couldn’t afford traditional production

Competitive Moats

  • First-mover advantage in this space is HUGE
  • Distribution and brand matter more than ever (anyone can make content, but who can get it seen?)
  • The real value shifts to:
    • Storytelling and creative direction
    • Taste and curation
    • Audience relationships

The Question Nobody’s Asking
How does this change content consumption? If production costs drop by a factor of 100, we’ll see 100x more content. The bottleneck shifts from creation to discovery and curation.

I’d actually love to hear from @maya_builds on this - how do you think about designing discovery experiences when content volume explodes but quality becomes harder to signal?

Also curious: did Luma sponsor this event, or was it more of a partnership? The go-to-market strategy around these tools is fascinating.

I’ve been playing with some of these generative video tools (mostly Runway and Pika), and the tech is genuinely impressive, but there’s a huge gap between “cool demo” and “production-ready workflow.”

What I’m curious about from the technical side:

Latency & Iteration Speed
How long did it take to generate each scene? Traditional editing lets you scrub through footage instantly. If you’re waiting 30 seconds to 5 minutes for each generation, that kills the creative flow. The best tools will be the ones that can generate in near-real-time.

Consistency & Control
Biggest issue I’ve hit: maintaining visual consistency across scenes. Characters change appearance, lighting shifts, continuity breaks. Did the teams struggle with this? How did they solve it?

The Prompt Engineering Challenge
This is actually fascinating from a UX perspective. Right now, getting good results requires:

  1. Understanding the model’s training data biases
  2. Learning model-specific prompt syntax
  3. Iterating through trial-and-error

That’s… not accessible. It’s just a different skill, not “no skill required.”
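To make that concrete, here’s a toy sketch of the trial-and-error workflow I mean. None of this is a real Runway/Pika/Luma SDK - `generate_clip` is a hypothetical stand-in - but it shows the pattern teams actually use: pin the seed, vary only the prompt, and keep every take so you can compare iterations.

```python
import hashlib

# Hypothetical stand-in for a video-generation API call. Real tools each
# have their own SDKs; this just models the iterate-on-a-prompt workflow.
def generate_clip(prompt: str, seed: int) -> dict:
    # Deterministic fake "render": same prompt + seed -> same clip_id,
    # which is the property teams lean on for shot-to-shot consistency.
    digest = hashlib.sha256(f"{seed}:{prompt}".encode()).hexdigest()[:12]
    return {"clip_id": digest, "prompt": prompt, "seed": seed}

def iterate_prompt(base_prompt: str, refinements: list[str], seed: int = 42) -> list[dict]:
    """Trial-and-error loop: fixed seed, prompt refined each pass,
    earlier takes kept for comparison."""
    takes = [generate_clip(base_prompt, seed)]
    for extra in refinements:
        takes.append(generate_clip(f"{base_prompt}, {extra}", seed))
    return takes

takes = iterate_prompt(
    "wide shot, comedian on stage, warm tungsten lighting",
    ["35mm film grain", "35mm film grain, slow dolly-in"],
)
# Re-running with the same seed and prompt reproduces the same take.
assert generate_clip(takes[0]["prompt"], 42) == takes[0]
```

The point of the sketch: “prompting” is really versioned experimentation, which is a skill in itself - exactly why it’s a different skill rather than no skill.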

What I Think Happens Next

  • Layer 1: Raw AI models (what we have now)
  • Layer 2: Abstraction tools that hide the prompting complexity
  • Layer 3: Integrated workflows that combine AI + traditional tools

The winners won’t be the best AI models - they’ll be the ones with the best developer experience and integration with existing creative workflows.

@product_david your point about distribution is spot-on. We’re going to need AI-powered curation tools to manage AI-generated content. It’s turtles all the way down :sweat_smile:

Fascinating discussion, but I’m going to be the wet blanket here and talk about the security and authenticity implications nobody seems to be addressing.

Deepfakes at Scale
We’re essentially building the infrastructure for industrial-scale deepfake production. When anyone can generate photorealistic video content in minutes, how do we:

  1. Verify authenticity - Is this footage real or AI-generated?
  2. Prevent misuse - What stops bad actors from generating fake news videos, impersonation content, or non-consensual deepfakes?
  3. Establish provenance - How do we track the chain of creation and modification?

Content Authenticity Initiative
There’s actually work happening on this (Adobe, Microsoft, others are involved in C2PA standards), but adoption is slow. These tools need to:

  • Embed cryptographic signatures in generated content
  • Create immutable creation logs
  • Make AI-generated content clearly labeled at the platform level
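For anyone who hasn’t looked at how this works, here’s a toy sketch of the *idea* behind those three bullets - hash the media, record a creation log with an explicit AI label, and sign the manifest so tampering with either is detectable. This is NOT the actual C2PA/COSE format (real implementations use X.509 certificate chains, not a shared secret); it’s just the shape of the mechanism.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # illustrative only; C2PA uses X.509 certs

def make_manifest(media: bytes, tool: str) -> dict:
    """Build and sign a provenance manifest for a piece of generated media."""
    manifest = {
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "generator": tool,
        "assertions": ["ai_generated"],  # explicit AI-content label
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(media: bytes, manifest: dict) -> bool:
    """Check both the manifest signature and the content hash."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(manifest.get("signature", ""), expected)
            and unsigned["content_sha256"] == hashlib.sha256(media).hexdigest())

clip = b"\x00fake video bytes"
m = make_manifest(clip, "hypothetical-video-model")
assert verify(clip, m)               # untouched clip verifies
assert not verify(clip + b"x", m)    # any edit breaks verification
```

Even this toy version shows why adoption matters: the manifest only proves anything if platforms check it, which is where the next problem comes in.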

But here’s the problem: if only the “good guys” adopt these standards, the bad actors have free rein.

The Training Data Question
Luma’s models were trained on… what exactly? Copyrighted films? YouTube videos? Did creators consent? This is going to be a legal minefield, and I don’t think we’re ready for it.

What I’d Want to See

  • Mandatory watermarking/fingerprinting of AI-generated video
  • Platform-level detection and labeling
  • Strong identity verification for high-risk use cases
  • Regular security audits of these generation platforms

I’m not saying we shouldn’t build these tools - the cat’s already out of the bag. But we need to be thinking about security and trust from day one, not as an afterthought.

@alex_dev your point about prompt engineering is interesting from a security angle too. The more complex the interface, the more attack surface for prompt injection and model manipulation.

This conversation exemplifies exactly why cross-functional perspectives matter. You’ve got creative, product, engineering, and security angles all surfacing different - but equally valid - concerns.

From a technology leadership perspective, here’s what I’m tracking:

Strategic Investment Thesis

Every major studio and production company is looking at this technology right now and asking: “Do we build, buy, or partner?” The ones who dismiss it as a gimmick will be caught flat-footed in 2-3 years.

But the smart play isn’t just adopting the technology - it’s building the organizational capability to integrate it into existing workflows. That requires:

  1. Upskilling existing teams - Not replacing cinematographers, but teaching them how to use AI as another tool in their kit
  2. New role definitions - We’ll see “AI creative directors” become a thing, just like we saw “social media managers” emerge 15 years ago
  3. Infrastructure investment - The compute requirements for real-time generation at scale are non-trivial

The Build vs Buy Decision

@alex_dev is right about integration being key. The question for companies is: do you build proprietary AI film tools, or integrate best-of-breed solutions?

My take: unless you’re a major studio with massive resources, you partner. The pace of innovation in foundation models is too fast to keep up internally. Focus on your differentiation (story, brand, distribution) and treat AI as infrastructure.

Addressing @security_sam’s Points

The security concerns are 100% valid and not optional. Any responsible deployment needs:

  • Content authenticity verification baked in from day one
  • Clear governance policies on acceptable use
  • Regular audits and red team exercises
  • Legal review of training data and IP implications

I’ve seen too many “move fast” tech initiatives create massive compliance headaches later. The time to build in safeguards is NOW, not after the first scandal.

Bottom Line

This technology is transformative, but transformation is messy. The winners will be the organizations that can balance innovation velocity with responsible deployment.

WOW. This is exactly why I posted here - you all brought perspectives I hadn’t even considered. Let me try to respond to some of the great questions:

@maya_builds - Your question about whether the films felt “hollow” is SO on point. Honestly? Some did, some didn’t. The winning team had a comedian who’s also a legit screenwriter, and you could FEEL the difference. The pacing, the setup-punchline structure, the emotional arc - it was all there. But other teams… yeah, visually impressive but narratively flat. Your Figma analogy is perfect.

Re: ethical guidelines - there was barely any discussion about it at the event itself, which honestly worried me. It was all “look how cool this is!” with no talk about attribution, labor displacement, or misuse. That needs to change.

@product_david - It was definitely a partnership/sponsored situation. Luma provided the tech, Kevin Hart’s Hartbeat provided the platform and audience. Smart GTM play by Luma - getting Kevin Hart as basically an evangelist/case study is genius positioning.

Your point about discovery is fascinating. I’m already drowning in content as a creator - this is going to get SO much worse.

@alex_dev - THANK YOU for the technical reality check. You’re absolutely right about latency. Each generation took 45 seconds to 2 minutes depending on complexity. Teams had to plan their workflow around that. The best ones pre-generated a bunch of assets and then assembled/refined.

Consistency was HUGE. One team’s character literally changed ethnicity between scenes. They played it off as a joke but… yeah. Not production-ready.

@security_sam - This kept me up last night, not gonna lie. The deepfake implications are terrifying. I asked one of the Luma reps about watermarking and they said “it’s on the roadmap” which is… not reassuring. You’re right that we need this baked in from the start.

@cto_michelle - Your point about upskilling vs replacing is what gives me hope. I don’t want to be a Luddite fighting against progress, but I also don’t want to see an entire craft disappear. Using AI as a tool in the kit rather than a replacement for the kit - that’s the future I want.

This conversation has me thinking: maybe the next evolution isn’t AI replacing human filmmakers, but human filmmakers who understand AI becoming the new standard. Kind of like how “photographer” now includes “proficient in Lightroom/Photoshop” as a baseline skill.

Thanks for the thoughtful discussion, everyone. This is why I love this community.