The Making of ‘House of David’: How Wonder Project Employed AI to Create a Biblical Epic
"I actually think the way to get people to work again is to innovate,” says creator Jon Erwin.
In what is arguably among the more extensive uses of AI in series production to date, Jon Erwin’s Wonder Project employed a range of AI tools and techniques for Season 2 of the biblical drama House of David, building on its applications in Season 1. Here, in an expansive conversation with The Creative + Tech Orbit, he urges filmmakers to “replace fear with curiosity” as he delves into the production workflow and shares his views on related subjects, including copyright and jobs.
Of course, there are seemingly unlimited ways AI can be applied, and Erwin sums up the Wonder Project approach: “We do not use text-to-image, text-to-video. We’re only using this technology to amplify and augment and change things that we own, that we can prove that we own.”
With this strategy, Erwin suggests that while AI is frequently cited for efficiencies, ultimately it’s about creativity. “I know opinions vary on all this stuff, but I can tell you for myself, this is a creatively superior way to work, and that was the big takeaway from Season Two,” he says. “You’re more tethered to your imagination. [That’s] the primary reason for anyone to consider using it. It just happens to be faster and cheaper.”
House of David Season 1 contains 73 shots that involved the use of AI, and that number ballooned during Season 2 to roughly 212 AI-generated shots, plus more than 100 additional shots that tapped AI to generate the environments for an LED volume. “It made virtual production much more affordable and much faster. We were able to generate worlds the morning that we shot,” says Erwin.
His overall approach is that “anything that can be done practically, we do practically,” while AI-based applications were employed for a range of reasons, including safety and scale.
Take for example the epic battle in Season 2 Episode 1. “I directed that episode, and I wanted to truly show the scope and scale of these biblical battles,” Erwin says. “When you read the Bible, it’s like, there were 100,000 people on one side of the field of battle and 100,000 people on the other. That’s a staggering amount of people. And so I wanted to tell the story of David in the midst of this battle, having killed Goliath, caught in this car crash of a quarter million people.
“And I wanted to do something much more than a five-minute montage,” he continues. “I actually wanted the entirety of the episode to be this battle, starting at dusk, going into darkness, and then ending in sunrise. So there was just no way to even conceivably attempt to do this without innovating.”
Erwin declined to share the budget, but asserts that the series would have been cost prohibitive without the new tech. Live action was lensed on location in Greece with fewer than 200 extras at a given time. “We use generative AI to just amplify and augment all the decisions that we’re making. And then we use traditional VFX methods fused into the process,” he says. “The generative elements matched our show in a way that you’re cutting in and out of AI footage and live action and visual effects. … It’s that fusion that makes it feel really real.”
The uses of AI in the battle sequence included epic wide shots, as well as select closeups such as horses’ hooves pounding close to camera. “Anytime that you have that many horses charging, and you add low light and dust, there’s safety concerns,” says Erwin.
The filmmaker explains that with Wonder Project’s workflow, he doesn’t view AI as visual effects; rather, it’s like having another production unit. “So when you’re directing real actors and making real decisions with your department heads, you have this ‘virtual’ unit,” he relates, noting that this expanded the collaboration where typically, “by the time the VFX process comes around, your core department heads, like your production designer or your director of photography, they’ve moved on to other gigs.”
As they are generating these shots in parallel with production, he adds, “You’re sending batches of digital principal photography to editorial, and they’re editing it alongside all the other footage.”
He notes that AI applications also provide additional tools for previz and preproduction. “You can visualize your script much earlier,” he suggests, likening this to the iterative process used by Disney’s Pixar Animation Studios to produce its animated movies. “Pixar iteratively makes the content visually (before beginning to animate) because of the cost of animation. I think we’re going to be able to start doing that, where we can prove the ideas out visually and work out major problems with the material visually first, before green light. And I think that this would de-risk the industry.”
A Range of Tools
Wonder Project’s workflow involves a collection of AI technology. “We use a lot of different tools, and I think the creativity in how we use them is in the stack,” Erwin explains, noting there are effectively three types of AI tools for production: image generators, video generators and uprez systems.
He’s been using and testing tools including Google’s toolset (he describes Nano Banana/Gemini 2.5 Flash as “game changing”), Luma (“They’re the only company doing HDR, and they’re really applying a filmmaker-first approach”) and Magnific (“an incredible uprez tool”), among many others. (In related news, Luma AI announced on Thursday that it has raised $900 million in Series C funding, led by HUMAIN with “significant” participation from AMD Ventures and existing investors Andreessen Horowitz, Amplify Partners and Matrix Partners.)
As an example of how AI tools might be employed, Erwin says the filmmakers might start with an asset build in Unreal Engine, “then we use generative AI tools to make that asset look photoreal and to bring it to life, and then we use some traditional VFX tools like Nuke or After Effects on the back end, to composite the shots together.”
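To make that hand-off concrete, the staged flow Erwin describes could be modeled as an ordered list of steps, as in the minimal Python sketch below. This is purely illustrative; the tool names, purposes and file paths are placeholders drawn from his description, not Wonder Project’s actual pipeline code.

```python
# Hypothetical sketch of the Unreal -> generative AI -> compositing hand-off described above.
# Tool names and file paths are illustrative placeholders, not a real production pipeline.
shot_pipeline = [
    {"tool": "Unreal Engine", "purpose": "build and render the base asset",
     "output": "renders/shot0420_base.exr"},
    {"tool": "generative AI (image-to-image)", "purpose": "make the owned render look photoreal",
     "output": "renders/shot0420_photoreal.exr"},
    {"tool": "Nuke / After Effects", "purpose": "composite with live action and traditional VFX",
     "output": "comp/shot0420_final.exr"},
]

for step in shot_pipeline:
    # A real pipeline would invoke each tool here; this just prints the hand-off order.
    print(f"{step['tool']}: {step['purpose']} -> {step['output']}")
```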
With the speed at which these tools are developing, Erwin says filmmakers need to keep training. “The cool thing about the technology is you’ll make a discovery when you’re in a state of R&D that can be immediately applicable to your show.”
Guidelines and Recommendations
Partner Amazon has AI guidelines that Wonder Project needed to follow. Erwin didn’t share specifics about Amazon’s recommendations but points to Netflix’s published guidelines (https://partnerhelp.netflixstudios.com/hc/en-us/articles/43393929218323-Using-Generative-AI-in-Content-Production#h_01K1BTNMC21630W4ZWFFS0EYP2), adding, “all the studios are similar; a lot of studios want guarantees that models won’t be trained, and they just want to make sure the process is legal and ethical.”
He admits that the rapid pace of tech development does create some “complexities” where this is concerned. “The biggest thing that I would recommend for other filmmakers is you really want to be able to prove your chain of title the same way you would prove it as a script. So we do not use text-to-image, text-to-video. We’re only using this technology to amplify and augment and change things that we own, that we can prove that we own,” he says. “You want to be able to prove your inputs and your prompts.”
Wonder Project has developed an in-house workflow with these requirements in mind. “It tracks all the steps, and its output delivers content to editorial. A big part of our process is just tracking the evolution of each shot within the tools used, all the way back to their source and then all the way to final pixel on the screen.
“You’re absolutely able to copyright the work if you’re working on things that you own,” he advises. “If you’re using the tools to augment things that you already own and can prove that you own, then that is not an issue. … You need to be able to prove where you started and that you own where you started.”
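As a rough illustration of that kind of chain-of-title tracking, the sketch below shows one way a per-shot provenance manifest could record the owned source, each tool step with its inputs and prompt, and a content hash of every output so a shot can be traced from source to final pixel. This is an assumed structure for illustration only, not Wonder Project’s actual in-house system, and all file names are placeholders.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def file_sha256(path: str) -> str:
    """Content hash so each intermediate can later be matched to what was actually delivered."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def log_step(manifest: dict, tool: str, inputs: list, prompt: str, output: str) -> None:
    """Append one tool step, capturing its inputs, the prompt used, and a hash of the result."""
    manifest["steps"].append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "inputs": inputs,
        "prompt": prompt,
        "output": output,
        "output_sha256": file_sha256(output),
    })

# Placeholder files and folders so this illustrative script runs end to end.
Path("renders").mkdir(exist_ok=True)
Path("manifests").mkdir(exist_ok=True)
Path("renders/shot0420_photoreal.exr").write_bytes(b"placeholder pixels")

# Hypothetical manifest for one shot, starting from an asset the production owns.
manifest = {
    "shot_id": "ep1_battle_0420",
    "owned_source": "unreal/battlefield_v12.uasset",
    "steps": [],
}
log_step(
    manifest,
    tool="generative AI (image-to-image)",
    inputs=["renders/shot0420_base.exr"],
    prompt="photoreal enhancement of the owned Unreal render",
    output="renders/shot0420_photoreal.exr",
)
Path("manifests/ep1_battle_0420.json").write_text(json.dumps(manifest, indent=2))
```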
Distribution and Jobs
For House of David, Wonder Project is also innovating with its distribution model, having launched a channel with Amazon, Wonder Project on Prime, with a sort of profit-sharing model. “We announced that we had half a million subscribers in the first three weeks (of Season 2’s Oct. 5 debut), which was far ahead of our goals.” This was followed by a release on Prime. “It’s a cool way to think about being incentivized to create inexpensive content,” Erwin says.
On the thorny subject of AI’s impact on the job market, the filmmaker relates that if House of David “accepted the status quo, nobody’s employed (because the series would be cost prohibitive). As a result of innovating to make content as epic as we can at a reduced price point, the show employed over 600 people.”
“I am an adamant believer that the reason that job loss is a real factor in our industry right now is because things have just gotten too expensive and the risk profile is too high. So I actually think the way to get people to work again is to innovate,” he argues, contending “it will allow studios to make more creative decisions, more original decisions, and bet on emerging voices for filmmakers, and I just think you will have more green lights, which will lead to more jobs in the industry.”
Asked about the unions, he reports that Wonder Project works with SAG-AFTRA, the DGA and the WGA, as well as “major unions applicable to the region that we’re shooting.” (In the case of House of David, Greece.)
Summing up, Erwin notes that while the tools are rapidly changing, he doesn’t see everything changing. “If I were to deliver some bad news to certain types of executives,” Erwin says, “AI is not a button. I don’t think we’ll ever be in a place where you could hit a button and say, ‘generate the next Terminator movie.’ You still need your people, you still need your department heads. All the core influences in how you make a movie are still completely necessary. I think actually they’ll be even more necessary.”
(Photos: Wonder Project/Amazon MGM Studios)