AI Video Generation: A Lower-Carbon Path for Advertising Production


8 Nov 2025 - by Jason Attar

The Carbon Question We Need to Ask

As the advertising industry grapples with its environmental responsibilities, a significant development has emerged: AI-generated video content appears to offer substantially lower carbon emissions compared to traditional production methods. But what do the numbers really tell us, and how should we interpret them?

Understanding the Traditional Production Footprint

Let's start with what we know about traditional advertising production. According to AdGreen's comprehensive data from analysing over 2,300 completed projects:

  • Travel and transport account for over 70% of production emissions

  • Materials contribute approximately 10%

  • Film spaces and accommodation each represent around 7%

The carbon footprint varies dramatically by production scale. A typical advertising campaign including a TV shoot can generate up to 200 tonnes of CO2e, though smaller productions average considerably less. A standard digital campaign for a luxury brand can exceed 320 tonnes of CO2e over one month.

For context, TV production creates 8,200 kg of CO2e for every hour of broadcast-ready content. A typical 30-second commercial, depending on its complexity, locations, and crew size, generally produces between 20 and 100 metric tonnes of CO2e.

The AI Production Alternative

AI video generation operates on fundamentally different principles. Synthesia reports that generating a minute of AI video averages around 0.00025 kg of CO2e – about 200 times more carbon efficient than boiling a kettle.

When we look at video production specifically: if traditional production methods had been used for the 136,120 hours of video generated through Synthesia in 2024, it would have led to an additional 215,712 metric tonnes of CO2 being released – equivalent to the emissions of 42,086 UK homes.
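Taken at face value, those figures can be cross-checked with a few lines of arithmetic. This is only a sanity-check sketch: the inputs are the numbers quoted above, and the per-home value is derived from them rather than independently sourced.

```python
# Back-of-envelope check of the quoted figures (all inputs are the
# article's own numbers; the per-home value is derived, not sourced).
AI_KG_PER_MIN = 0.00025            # reported kg CO2e per minute of AI video
HOURS_GENERATED_2024 = 136_120     # hours of video generated in 2024
AVOIDED_TONNES = 215_712           # reported traditional-production emissions avoided
UK_HOMES_EQUIV = 42_086            # reported UK-household equivalence

# Emissions on the AI side for the same volume of footage, in tonnes
ai_tonnes = HOURS_GENERATED_2024 * 60 * AI_KG_PER_MIN / 1000

# Implied annual footprint per UK home, in tonnes CO2e
tonnes_per_home = AVOIDED_TONNES / UK_HOMES_EQUIV

print(f"AI-side emissions for the same footage: ~{ai_tonnes:.1f} t CO2e")
print(f"Implied annual footprint per UK home: ~{tonnes_per_home:.1f} t CO2e")
```

The implied figure of roughly 5 tonnes CO2e per home per year is broadly in line with commonly cited UK household averages, which suggests the quoted totals are internally consistent.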

The key difference lies in what's NOT required:

  • No crew flights or ground transportation

  • No location scouting or permits

  • No physical sets, props, or costumes

  • No catering for 30+ person crews

  • No generators or production vehicles

  • No hotel accommodations

Critical Context: The Complete Picture

AI's Growing Energy Demands

While individual AI generations are efficient, we must acknowledge the broader context:

  • Google's 2024 environmental impact report revealed a 13 per cent increase in carbon emissions year over year, partly due to its generative AI initiatives

  • The energy demands of text-to-video generators quadruple when the length of a generated video doubles

  • Training GPT-3 consumed 1,287 megawatt hours of electricity, generating about 552 tonnes of carbon dioxide

Important Assumptions and Limitations

The comparison between AI and traditional production involves several key assumptions:

  1. Quality parity: These calculations assume the output serves similar purposes, which may not always be the case

  2. Infrastructure costs: AI emissions don't fully account for data centre construction and GPU manufacturing

  3. Human displacement: The social and economic impacts of replacing human creative work aren't captured in carbon metrics

  4. Rebound effects: Lower costs might lead to increased production volume, potentially offsetting carbon savings

The Business Case Beyond Carbon

The carbon advantage is just one factor. AI video generation also offers:

  • Speed: Minutes instead of weeks for production

  • Cost efficiency: Dramatically lower production budgets

  • Accessibility: Democratising video creation for smaller businesses

  • Iteration: Easy testing and refinement of content

A Balanced Path Forward

This isn't about replacing all traditional production—that would be neither realistic nor desirable. Traditional production brings irreplaceable elements:

  • Human creativity and emotional depth

  • Complex narratives and nuanced performances

  • Cultural authenticity and artistic vision

  • High-stakes brand moments requiring premium quality

Instead, we should view AI as an additional tool in the production toolkit, particularly valuable for:

  • Internal communications and training videos

  • Social media content and quick turnarounds

  • Personalised content at scale

  • Pre-visualisation and concept testing

  • Lower-budget campaigns that previously couldn't afford video

Industry Progress and Adaptation

The traditional production industry isn't sitting still. From LED lighting to virtual production stages, the industry is finding ways to reduce its impact. AdGreen's carbon calculator and industry initiatives show genuine commitment to improvement.

However, even with these innovations, the structural differences make it challenging for physical productions to match AI's carbon efficiency for certain types of content.

Recommendations for Advertisers

  1. Assess each project individually: Consider the content needs, quality requirements, and carbon implications

  2. Use AI for appropriate content: Internal videos, social content, and rapid iterations are ideal candidates

  3. Maintain traditional production: For hero campaigns and emotionally complex narratives

  4. Measure and report: Use tools like AdGreen's calculator to track emissions across all production types

  5. Invest in innovation: Support development of both cleaner AI systems and greener traditional methods

The Data-Driven Reality

When we look at the numbers objectively:

  • A traditional 30-second commercial: 20-100 metric tonnes CO2

  • An AI-generated equivalent: 1-5 metric tonnes CO2

  • Potential reduction: 80-95%

These aren't marginal improvements—they represent a fundamental shift in production emissions for appropriate content types.
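The reduction figures above follow from a one-line formula. A minimal sketch, using the quoted ranges; note that the exact percentage depends on which ends of the two ranges you pair:

```python
def percent_reduction(traditional_t: float, ai_t: float) -> float:
    """Carbon saving of an AI-generated spot relative to a traditional one."""
    return (1 - ai_t / traditional_t) * 100

# Illustrative pairings from the ranges quoted above
print(percent_reduction(60, 3))   # mid-range traditional vs mid-range AI
print(percent_reduction(20, 5))   # least favourable pairing for AI
print(percent_reduction(100, 1))  # most favourable pairing for AI
```

Pairing the range ends gives results from about 75% up to 99%, bracketing the headline 80-95% figure.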

Moving Forward Responsibly

The advertising industry faces a choice: embrace AI video generation where appropriate while continuing to value and invest in human creativity where it matters most. This isn't an either/or decision—it's about using the right tool for the right job, with carbon footprint as one important consideration among many.

As we navigate this transition, transparency and honest assessment will be crucial. The numbers suggest AI video generation can significantly reduce production emissions for certain content types. The challenge now is implementing this technology thoughtfully, ensuring we maintain creative excellence while reducing our environmental impact.

Traditional production will continue to play a vital role in advertising. But for the growing volume of content that doesn't require full production resources, AI offers a lower-carbon alternative that the industry can no longer afford to ignore.

What's your organisation's approach to balancing creative needs with carbon reduction? How are you evaluating AI tools in your production pipeline?


#SustainableAdvertising #AIProduction #CarbonFootprint #GreenMarketing #AdvertisingInnovation #ClimateAction #AdTech #Sustainability #tvadvertising

A Glimpse Into 2035: The AI-Curated Decade

By 2035, the Slopocalypse will have matured, and the lines between human and AI creation will blur further. Imagine:

  • Hyper-personalised streams: Your content feed isn’t just recommended, it’s actively curated by a blend of human super-curators and advanced AI.

  • World-curators as rockstars: not those who make the worlds, but those who curate the best journeys through them.

  • Decentralised curation economies: Communities funding, discovering, and promoting via “curator DAOs.”

  • Authenticity premium: Human imperfection and rawness command luxury value.

  • Human + AI partnerships: The most innovative work emerges from symbiosis, not replacement.

  • Meaning-making as a skill: Critical curation taught alongside literacy and numeracy.

  • Regulatory scrutiny: Bias, manipulation, and filter bubbles spark constant debate.

In this future, the critic isn’t just reborn — they are everywhere, guiding and co-creating the very fabric of cultural experience.

Conclusion: Looks Are Free, Taste Is Not

The Slopocalypse is coming. Songs, shows, and worlds will flood our feeds. They will look and sound incredible. Most will mean nothing.

In that chaos, curators hold the keys. Creation is cheap. Curation is empire-building.

The critic of the AI age won’t just review films or songs. She’ll be a cross-media guide, sequencing stories across formats, mapping worlds, programming experiences.

The next cultural power brokers won’t be the ones who generate the most. They’ll be the ones who curate the best.

jason@scaryrobots.co.uk

www.scaryrobots.co.uk

2028: The 19-Year-Old With Borrowed Wi-Fi Who Released Five Features in ONE YEAR! - And Rewrote the Film Business

Scary Robots

August 7, 2025

By Jason Attar

In this fictitious article we wanted to imagine where AI-driven content production might be heading, and to test how a single raw-talent creator could break through with cheap, powerful tools; how agents and curators might shift the deal-making game; how streaming giants could be forced to rethink release windows and what they actually do as a core business; and how audiences might embrace stories delivered in ways that simply didn’t exist even six months ago. What follows is one speculative, but we think plausible, timeline of that near-future.

How a film obsessed teen from Mexico City, a London talent agent with a home-built app, and a fresh profit model cracked open Hollywood in nine months

In September 2028, the marquee of the TCL Chinese Theatre in Hollywood lit up with a name most executives first met on their For You feeds: Emilia Reyes, 19 — Mexico City. The remarkable bit isn’t the venue. It’s the cadence: tonight is her fifth feature film this year.

No trust fund. No backlot. Not even her own internet connection. She rendered nights on a café hotspot, uploaded from a public library, and built a global audience by shipping films and ‘how-to’ breakdowns in a tight loop.

This isn’t a fairy-tale; it’s a systems story — what happens when creation is cheap, curation is king, and distribution is native to the audience. All wrapped up in a bundle of natural story-telling flair and a heap of hard work!

The Release Calendar (five in nine months)

  • Feb 2028 — Feature #1 (neo-noir; ES/EN dubs)

  • Apr 2028 — Feature #2 (folk thriller; ES/PT)

  • Jun 2028 — Feature #3 (romance/drama; ES/EN/FR + localised AI character variants)

  • Aug 2028 — Feature #4 (docu-fiction; ES/EN)

  • Sep 2028 — Feature #5 (tonight’s Hollywood première)

Between features, Emilia posted twice weekly ‘how we did it’ videos. Each film became marketing for the next, and training data for her taste.

The Discovery (curators first, agent second)

London, late 2027. Junior agent Hannah Clarke was combing AI-film curator playlists when a 62-minute noir from @EmiliaMakes auto-played. It had rough edges, confident blocking, and a real ending. A Discord watch-club dropped a craft breakdown that night. Her first call with Emilia happened from a free Wi-Fi zone at a Mexico City library.

This wasn’t luck. Months earlier, Hannah had taught herself just enough Python and video APIs on YouTube to build a tiny tracker that sat in her Chrome browser. It scraped public signals from trusted curators, clustered filmmakers by velocity (views, shares, saves, re-edits), and emailed her a daily Signal Report of ‘what’s heating up’. Hannah had a hunch that someone like Emilia would come along and she wanted to be there first. After Feature #2, she signed her.

By spring 2028, CAA’s tech arm quietly acquired Hannah’s little tool and developed it into their main way of scouting emerging AI film-making talent. That week, Hannah bought her first two-bedroom flat for cash!

‘I didn’t “discover” her,’ Hannah says. ‘Curators did. I just treated their signal like a green-light.’

The Stack (why five didn’t look like ‘content’)

Emilia’s films don’t look ‘AI’. They look intentional — and 95 per cent of every film is synthetic.

  • Writing: human outline → LLM beat passes → human polish; comment-mined pickups.

  • Visual world-build: text-to-3D engines for sets; diffusion models for plates; procedural lighting tuned by retention heat-maps.

  • Cinematography: two-minute phone clips as reference, then AI camera-solve for framing and movement. No permits, no roadblocks, no dawn calls.

  • Performance: micro-cast for key emotive anchors; everywhere else, consented voice-and-likeness synth from a licensed pool.

  • Post: auto rough-cut → human rhythm pass; model-generated score with live-guitar flourishes; overnight caption & dub bundles in ten languages by Film #3.

  • Localisation 2.0: wardrobe, props and UI elements re-render per language track; provenance ledger logs every swap.

  • Rights: watermarking + consent tokens chained to each asset; an exportable ledger Hannah can send to guilds and buyers.

Tools make iterations cheap. Taste makes choices.

Film One: Two Jobs, One Wi-Fi, and a Kitchen-Table Crew

Before any curator noticed, Film #1 ran on borrowed GPUs and optimism. Emilia noticed that Hugging Face had free, non-watermarked demos of the best open-source text- and image-to-video models. She took liberties! She also worked two jobs, evenings stacking shelves and weekends bussing tables, to fund ‘render credits’: cloud cycles, model licences, and enough data on a pre-paid SIM to push dailies from a café hotspot. She cut at night and slept on the morning bus to school.

‘Mija, it’s too much,’ her mum said, watching her queue another batch render at dawn. ‘It’s one year,’ Emilia replied. ‘I can carry one year.’

During month one, a tiny team materialised. A classmate obsessed with spreadsheets became a pipeline wrangler overnight. A barista who’d watched her storyboard in Stable Diffusion asked if she needed text-prompt polish. Her cousin, a gamer with colour-sense, learnt LUT design on YouTube. They saw the momentum — and the cost of joining was only curiosity, time and a log-in.

By the time Hannah and the curators saw it, Film #1 already carried something algorithms can’t fake: earned belief.

The Online Profit Model (and the breakout)

Per-feature (illustrative, 2028 dollars):

  • Film #1: ~14 M YouTube views (first 60-90 days); gross revenue ≈ $122 k

  • Film #2: ~21 M views; ≈ $189 k

  • Film #3: ~33 M views; ≈ $276 k

  • Film #4: ~48 M views (45 days); ≈ $400 k

After costs, Film #4 let Emilia clear roughly $100 k personally — the tipping point.
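Those gross figures imply a fairly steady revenue rate per view. A quick derivation from the (fictional) numbers above:

```python
# Implied revenue per 1,000 views, derived from the figures quoted above.
films = {
    "Film #1": (14_000_000, 122_000),
    "Film #2": (21_000_000, 189_000),
    "Film #3": (33_000_000, 276_000),
    "Film #4": (48_000_000, 400_000),
}
for name, (views, gross_usd) in films.items():
    rpm = gross_usd / views * 1000  # revenue per thousand views
    print(f"{name}: ~${rpm:.2f} per 1,000 views")
```

Every film lands between roughly $8 and $9 per 1,000 views, which is the internal consistency you would expect if the revenue model, rather than one viral fluke, is doing the work.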

Patreon spiked off Film #4. Within a fortnight, member revenue and donations covered the Chinese Theatre hire, DCP mastering, travel, and a chunk of the ten-language localisation for Film #5. The community literally crowd-funded Hollywood.

Proprietary Tech (the teen’s secret sauce)

Between Films #2 and #3, Emilia hacked little processes that became an edge: attention heat-maps that auto-re-timed beats, viewer telemetry to generate trailer cuts, and a ‘one-click dub-pack’ bundling ten languages plus cultural notes. By early summer, Amazon Prime’s ‘Make Your Own’ team offered to integrate her workflow into their creator pipeline. Hannah politely declined: Emilia was locked in post-production of Film #4 and pre-production of Film #5.

Curators Are the New Gatekeepers (and that’s healthy)

Festivals still matter, but the first line of trust lives with curator ecosystems: YouTube essayists, newsletter editors, Discord clubs. They don’t ask ‘Is it AI?’ They ask: Does it move me in 90 seconds, and deepen over 90 minutes? Curators premièred Films #1–#3, hosted Q&As, and syndicated dubs. Platforms and brands followed the signal.

Platforms vs. Studios: Emilia's New Playground

By 2028 the word platform no longer means “a website where you watch things”. The heavy-weight services — YouTube, TikTok, Disney+, Amazon Prime Video, even Fortnite’s content layer — have converged into global content-delivery networks (CDNs) with bolt-on commerce and identity. Think of them as five-layer stacks:

  1. CDN 2.0 These networks move petabytes of 4K, HDR and volumetric video to any device in roughly 200 milliseconds. The moment a title surges, traffic is re-routed to edge caches automatically, so Emilia’s première is viewable everywhere within the same minute and there are no “territories” to clear.

  2. Programmable Payments Built-in wallets handle micro-rentals, tips, merchandise, pay-per-view and revenue-share down to the individual frame. Royalties settle nightly in stablecoins or local fiat. A cousin in Guadalajara and a fan in Lagos can each pay £1 for the alternative ending, and Emilia sees the split before breakfast.

  3. Identity & Rights Graph Every viewer, curator, advertiser and creative asset carries a wallet-level identifier. Rights licences and consent tokens live on the same graph. If Emilia swaps a synthetic actor’s face for the French dub, the ledger updates instantly and a guild audit that once took months now runs in minutes.

  4. Taste API Watch-time, skip-time and mood metrics are exposed (with viewer consent) so curators and app-makers can build their own front-ends. A Berlin essayist can spin up a “LatAm AI Noir” channel that surfaces Emilia’s first film plus four stylistic cousins — no gatekeeper required.

  5. On-Demand Compute Spare GPU packets at the edge are auctioned in real time for up-res, live translation or alternate endings rendered on request. A colour-blind viewer can ask for an accessibility-friendly grade, or a latecomer can request a 30-second recap that’s generated while the title card still rolls.

Studios, by contrast, have evolved into full-time amplifiers rather than production pipelines. They broker attention instead of bandwidth and make their living in four big arenas:

  • Packaging & Compliance – turning an indie hit into an Oscar-qualifying run; clearing music, union paperwork, insurance and tax rebates across thirty regions.

  • Co-marketing – staging billboard take-overs, festival parties and limited-edition NFT memorabilia that still move the cultural needle.

  • Award-Season Ops – supplying campaign editors, screeners, lobbyists and the “For Your Consideration” budgets most independents can’t stomach alone.

  • Risk Pooling – pre-paying for Dolby Atmos mixes, theatrical QC and the small minority of projects that still need human stunt co-ordinators or IMAX reshoots.

In short: platforms push data at light-speed and settle cash instantly; studios manufacture cultural heat. Their fastest route to relevance is still to attach themselves to what curators have already proven — but in 2028 that attachment happens after the YouTube watch-clubs have voted with their view-hours, not before a green-light meeting in Burbank.

The £1.6 M ‘Your Ending’ Deal

Before tonight’s première of film #5, Hannah quietly closed a $2 M (≈ £1.6 M) development deal with Netflix for a viewer-choice spin-off of Film #4. It’s a bet on format, not just a single story.

Netflix development execs were stunned when they logged onto Emilia's dedicated, bespoke Film #4 workspace platform: an AI chatbot for talking through scripts, lore, and what Emilia wants and likes aesthetically; instant video spin-ups of new ideas for character moments and story; end-of-session email notifications with requests, summaries, creative sign-off, and next steps... already sent. How, they asked among themselves, could a kid have this level of AI workflow automation?

Emilia's team had already grown to 17 people across six countries. Three of them were AI architects, MCP and agentic specialists dedicated to workflow solutions for Emilia's projects, and two worked solely on data analysis. Emilia wanted more!

Quality at Speed (how five didn’t collapse into spam)

Emilia’s success rests on an audience-native structure, a willingness to ‘kill your darlings’ by generating ten options and deleting nine, and teaching while building through her ‘how-to’ channel. She keeps a tiny human core for heart-shots and lets synthetic voices handle the rest.

‘AI gives me choices. Directing is choosing.’ — Emilia

The Night on Hollywood Boulevard (Sep 2028)

Film #5 sold out in hours. The Q&A ran half English, half Spanish. The crowd clapped like they recognised a new normal. On the pavement, a Disney studio EVP floated a 40-film slate deal.

Hannah glanced at the queue round the block, the Patreon-funded première, the Netflix prototype cheque in her bag, and the Amazon invite on her phone. She wanted a little time to talk things through with Emilia before their next moves. They needed to stop at a station and breathe.

Emilia smiled — tired, grateful, certain.

‘Forty films from now? Maybe we still don’t need a Disney deal,’ she said. ‘But if someone wants to help us teach, translate, and reach — let’s talk.’

Because in 2028, that’s the business: make five good ones. Teach what you’ve learnt. Localise like you mean it. Let the audience pay for the next première. The rest is amplification, and the heart of a woman who knows story and loves film.

Somewhere, right now, another teenager on a library or coffee shop network is hovering over Export. This time, everyone knows what that click can mean.
