Will Actors be replaced with AI?
The past, present, and possible futures of photorealistic animation
The Screen Actors Guild strike brings to the forefront valid apprehensions that actors have about the future of their careers in an AI-dominated landscape. From losing jobs to AI-generated avatars to ethical concerns about the use of their digital likeness, these fears are not without foundation. For instance, the Black Mirror episode "Joan is Awful," featuring Salma Hayek and Annie Murphy, vividly portrays this disconcerting scenario. In the episode, the characters discover that they've inadvertently signed away their rights to their AI-generated personas. These digital versions can be utilized in future projects without their knowledge, approval, or additional payment. Such a reality not only threatens job security but also poses a series of ethical quandaries.
How probable is it that this dystopian scenario, reminiscent of a Black Mirror episode, will actually materialize? And what is the expected timeframe for such an occurrence?
Rather than merely offering unsupported predictions, my goal is the proverbial one of teaching you how to fish. I intend to arm you with the analytical tools and frameworks you'll need to make sense of AI's evolving role in the film industry. This approach will allow you to understand the multiple avenues AI might explore, as well as the underlying axioms and fundamental patterns that will guide its trajectory. We'll go beyond immediate trends to consider the long-term landscape, giving you the means to think critically about the changes we may see in both the next year and the next generation.
Non-Linear Lifecycles within Genres of Content
The adoption of AI in filmmaking and animation is likely to follow a nuanced, non-linear trajectory, heavily influenced by the visual and emotional complexity of each genre. Comic book-inspired media, with visually simpler formats like that of "Sin City," are poised to be the earliest adopters of fully AI-integrated characters and processes. In fact, I’ve already seen several startups working on this, including a previz exemplar capable of producing multiple episodes per day— and it’s not bad. Cutout animation series such as "South Park," which have straightforward animation styles, are also well-suited for early adoption of AI-generated content - although it will take significantly more time for AI to write like the writers of South Park. Anime shows like "Naruto" sit somewhere in the middle of the spectrum; they feature stylized but expressive characters that could benefit from AI, although full integration may take slightly longer. On the other hand, high-fidelity animated films like Disney's "Frozen" present a considerable challenge due to their intricate visuals and emotional depth, necessitating a more extended timeline for full AI incorporation. Similarly, stylized action films like "300" or "Avatar," which blend intricate CGI with live-action elements, would require a significant amount of time for seamless AI integration. At the zenith of visual complexity, photorealistic live-action films akin to the "Star Wars" sequels could take a decade or more to achieve flawless incorporation of AI characters without encountering uncanny valley issues.
A THEME TO WATCH. Over the next two decades, the adoption of new technologies will probably begin with animated productions before finally extending to live-action films. It's more likely that startups, rather than established franchises, will be the pioneers in experimenting with these advancements.
Next up…the teaching you how to fish bits.
Before exploring the complex landscape of live-action cinema and its evolving relationship with AI technologies, let's lay the groundwork with some essential axioms, identify the three key types of AI, and discuss the stages of AI's development. These foundational principles aim to clarify the prevailing trends influencing AI's role in the movie industry. Moreover, they provide a framework that allows you to potentially arrive at different conclusions from mine, yet still adhere to the same guiding axioms and structures.
A PATTERN TO REMEMBER. In the realm of computing, the initial machines were often built for special purposes and specific tasks. To those working closely with these machines, the journey from specialized, large-scale computers to what we now recognize as personal computers was marked by countless unseen, incremental steps. Engineers and industry insiders were fully aware of the slow but steady advances in microprocessor design, memory storage, and user interface. However, for the general public, the release of the IBM Personal Computer in 1981 came as a sudden and transformative event. What appeared to them as an overnight revolution was, in reality, the culmination of years of methodical, behind-the-scenes iteration and development. To insiders, each step was a logical progression, but to the consumer, the leap from massive, purpose-specific machines to accessible, general-use personal computers felt like a watershed moment.
Much like the subtle shift from specialized to general-use computers, AI's role in filmmaking is also evolving in a "boiling the frog" fashion. To industry insiders, each incremental step towards integrating AI is evident. However, the general audience may only notice these gradual advances when they culminate in a revolutionary film. What appears as a sudden leap to the public is actually years of behind-the-scenes work. This trajectory emphasizes the need to grasp AI's evolving role in filmmaking.
Let us define the AI and Filmmaking Axioms that will govern the incremental steps
Always on the hunt for cost-cutting measures and profit maximization, film studios are likely to embrace AI in a piecemeal, evolutionary manner. And as a result of the current strike, rather than abruptly replacing human artists and performers with the sudden, apoplexy-inducing arrival of a “Joan is Awful” reality, studios may strategically opt for a more gradual, "boil the frog" approach.
That is, they are likely to begin by utilizing budgetary pressure to force innovation— e.g. the incorporation of AI in very specific and carefully chosen aspects of scenes or genres. This nuanced strategy would be informed by key guiding principles such as the Distance, Importance, Shadow, Focus, and Time axioms. Such a measured approach allows for a more calculated entry of AI into the realm of filmmaking, adjusting its role based on a variety of contributing factors and thereby smoothing the industry's transition into this groundbreaking phase.
The Distance Axiom dictates that background actors or extras are the most susceptible to being replaced by AI. This is because they are often further away from the camera, falling under the 'fidelity ramp' where their facial features and actions are less scrutinized. Anecdotal evidence can be found in massive crowd scenes, such as those in "The Lord of the Rings" trilogy. With the Distance Axiom in play, it becomes increasingly feasible for AI to generate these vast assemblies of extras, thereby diminishing the need for human involvement in such roles.
On the other hand, the Importance Axiom posits that lead actors who bring unique attributes to their roles are less vulnerable to immediate replacement. Consider Tom Cruise's iconic sprint in various action movies; capturing this unique way of running would require a more intricate AI model. Therefore, lead actors, especially those who bring distinct physical or emotional elements to a film, can expect to have their roles safeguarded for a longer time.
The 'Shadow Axiom' posits that lighting conditions will play a crucial role in the speed at which AI can credibly stand in for human actors. Let's take a future installment of the "Mission: Impossible" series as an example. In a dimly lit action sequence where Tom Cruise's character is supposed to elude his pursuers, stunt doubles usually take on the high-risk maneuvers. However, these shadowy scenes are the perfect environment for AI to take over, as the darkness can effectively conceal any minor AI-generated flaws. The stunt doubles might be the first to experience the impact of AI replacement, since the challenging lighting conditions actually make it easier to opt for a computer-generated character over a human. This shows that what we consider limitations today could become launching pads for AI adoption in the film industry tomorrow.
The Focus Axiom suggests that actors appearing in blurred or out-of-focus shots could be replaced more quickly by AI. For example, a bartender who appears out of focus in the background of a romantic comedy would be an ideal candidate for AI replacement. The less detail the camera and, consequently, the audience focuses on them, the easier it is for AI to generate a convincing substitute without plunging into the 'uncanny valley.'
Lastly, the Time Axiom suggests that the quicker a character moves across the screen or the less screen time they have, the easier it will be for AI to convincingly replace that character. This axiom holds serious implications for the roles of certain actors. For instance, background characters, extras, or minor roles that appear only briefly in fast-moving scenes may be among the first to be replaced by AI-generated characters. Even in blockbuster films, this could mean that stunt doubles, who often appear in fast-paced action sequences, might find their roles becoming obsolete. As for lead actors, although they are central to the narrative and often occupy substantial screen time, the Time Axiom could influence how they are digitally represented in quicker, less scrutinizable scenes, thereby reducing the physical demands and potentially the compensation tied to those specific scenes.
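To make these five axioms concrete, here is a minimal, purely illustrative sketch of how they could be combined into a single "replaceability" score for a given shot. Everything in it is hypothetical: the field names, the weights, and the example numbers are mine, not drawn from any real production tool, and they exist only to show how the axioms push in different directions (Distance, Shadow, Focus, and limited screen Time favor an AI stand-in, while Importance resists it).

```python
# Illustrative only: a toy score combining the Distance, Importance, Shadow,
# Focus, and Time axioms. Weights and field names are hypothetical.
from dataclasses import dataclass


@dataclass
class ShotContext:
    distance_from_camera: float  # 0.0 = extreme close-up, 1.0 = deep background (Distance)
    narrative_importance: float  # 0.0 = nameless extra, 1.0 = iconic lead (Importance)
    darkness: float              # 0.0 = brightly lit, 1.0 = heavy shadow (Shadow)
    out_of_focus: float          # 0.0 = tack sharp, 1.0 = fully blurred (Focus)
    screen_time_share: float     # 0.0 = a few frames, 1.0 = on screen constantly (Time)


def replaceability_score(shot: ShotContext) -> float:
    """Return a 0-1 score; higher means an AI stand-in is more plausible today."""
    favors_ai = (
        0.25 * shot.distance_from_camera
        + 0.20 * shot.darkness
        + 0.20 * shot.out_of_focus
        + 0.15 * (1.0 - shot.screen_time_share)
    )
    # The Importance axiom works against replacement, so it is subtracted.
    resists_ai = 0.20 * shot.narrative_importance
    return max(0.0, min(1.0, 0.20 + favors_ai - resists_ai))


# A blurred background bartender scores far higher than a lead in a bright close-up.
bartender = ShotContext(0.9, 0.05, 0.3, 0.95, 0.02)
lead_closeup = ShotContext(0.05, 0.95, 0.1, 0.0, 0.6)
print(round(replaceability_score(bartender), 2), round(replaceability_score(lead_closeup), 2))
```

The point of the sketch is not the particular numbers but the shape of the argument: the more a role hides behind distance, shadow, blur, and brevity, the sooner the economics tilt toward replacement.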
Three Types of AI
Understanding the different types of AI innovations is also crucial for grasping how artificial intelligence will evolve in the realm of live-action filmmaking. These categories of AI offer a lens through which we can anticipate not only the technological advancements but also the shifting roles of human talent and traditional processes within the industry. Each type of AI innovation carries its unique implications, shaping the way films are created, edited, and even conceptualized.
I wrote about this in my previous article - “Will AI Eat My Job” - so if you’ve read it, you can skip a bit ahead.
The first category, Analytical AI, is focused on assessment and refinement. It doesn't generate new content but instead provides valuable insights for enhancing existing assets. This form of AI can be instrumental in various stages of filmmaking, such as script evaluation, casting, and performance analysis. In addition, it offers business-related functionalities like market and audience feedback analysis, serving as an invaluable tool for studios and independent filmmakers alike.
The second is Integrated AI, which melds seamlessly with the existing tools and workflows in the industry. It operates more like an upgrade to the current technology, making the existing processes more efficient and effective without drastically altering the foundational approach. For example, in 3D animation software like Maya, Integrated AI could offer features like auto-rigging for character animation or predictive motion paths, enhancing the capabilities of animators and visual effects artists.
Lastly, there is Orthogonal AI, the most disruptive among the three. This form of AI introduces entirely new methods and paradigms, often making existing tools and processes obsolete. Unlike the other two types, Orthogonal AI doesn't aim for mere integration or analysis; it radically changes how work is done. It opens up avenues for revolutionary changes and novel workflows that were previously beyond imagination, paving the way for groundbreaking storytelling techniques and visual spectacles.
— skip to here.
Patterns in Technological Epochs
To fully realize AI-led characters in cinema, each essential component—be it hair, eyes, lips, or other nuanced aspects like movement—will go through an asynchronous but similar path of evolution. Just as the emergence of one technology might pave the way for advancements in another, each of these elements will undergo their own developmental lifecycle, from Pre-AI to Post-AI. This asynchronous maturation means that while one component may advance quickly due to technological breakthroughs or increased investment, others may lag behind until their own eureka moments arrive.
In many cases the “Joan is Awful” scenario is not possible until all components (hair, lips, eyes, voice, etc) have completed their lifecycles.
In the Pre-AI phase, technological innovations often arise from the unique demands of storytelling, usually driven by a director's vision. For instance, Motion Capture (MoCap) was initially a highly proprietary and expensive technology, largely confined to big-budget films. The ambition of James Cameron's "Avatar" propelled significant advancements in MoCap technology, making it crucial for capturing the intricate facial expressions and movements of the Na'vi characters. Initially costing up to $60,000 per suite, MoCap has since been democratized, with more affordable and user-friendly systems entering the market, some costing as little as $400 and compatible with free open-source software—although it should be noted these systems are currently only capable of previz quality.
In the Post-AI phase, each component, like MoCap, will enter a chrysalis phase and ultimately navigate a series of incremental evolutionary milestones resulting in a transformation into an entirely new technology. The journey begins with scholarly white papers suggesting diverse approaches for weaving AI into CGI pipeline systems, not just to capture physical actions but also to understand the subtleties of emotional expression. Following these proposals, extensive training data is collected (e.g. ingesting every movie ever made), and preliminary models are developed and tested. One method will eventually rise to prominence, becoming the cornerstone of proprietary applications. As this approach matures, it will make its way into industry-standard tools such as Houdini and Maya, elevating their capabilities to new heights. Ultimately, this AI-driven methodology will become ubiquitous, integrating into the SMURF stack and reaching a broader range of platforms.
This isn't some distant vision; it's an incremental but relentless transformation that is currently in motion, turning today’s MoCap into a comprehensive 'AI acting' platform capable of delivering performances rich in emotional nuance.
Ok, now we are fully loaded with genre mapping, axioms, and technological epoch patterns - how will all of this unfold?
A THEME TO REMEMBER. Importantly, the evolutionary pathways for Analytical AI, Integrated AI, and Orthogonal AI in the context of visual effects and filmmaking are likely to diverge significantly due to their differing roles and functions.
Analytical AI and Integrated AI, designed to work within existing pipelines, will most likely follow a "boil the frog" trajectory as described above—slowly and subtly becoming indispensable by making incremental improvements and efficiencies in existing workflows. The introduction of these AI types could be so gradual that their full integration might occur almost imperceptibly (like little plugins… ehm Christian), becoming a natural part of the VFX production cycle.
On the other hand, Orthogonal AI, which aims to overhaul existing paradigms, will likely pursue a more disruptive route. It has the potential to introduce entirely new methods and tools that don't just enhance but fundamentally transform the industry, possibly rendering traditional workflows obsolete. This dichotomy suggests that while Analytical and Integrated AI may gently push the industry along its current path, Orthogonal AI holds the potential to reroute it entirely.
Here’s what it will look like for Analytical AI and Integrated AI
The journey to a fully-realized AI 'brain actor' won't happen overnight; it will be a phased evolution. Think about how CGI initially modified actors' appearances in specific shots or scenes. For example, in "The Curious Case of Benjamin Button," CGI was used to age Brad Pitt's character backward. More recently, in "Star Wars: Rogue One," the late Peter Cushing was virtually resurrected to reprise his role as Grand Moff Tarkin. Initially, this type of technology would likely be introduced as proprietary plugins in traditional VFX workflows. Over time, these capabilities would extend into industry-standard software like Houdini, Maya, and Adobe products. Eventually, they'd become accessible to broader audiences through platforms in the SMURF stack, such as Blender and Unreal Engine.
However, the adoption and application of this groundbreaking technology will not be linear. Just as CGI started by enhancing or altering small aspects of films before becoming integral to storytelling, AI will likely start by filling in for wide-angle or fast-action shots and then incrementally take on more significant roles. This evolutionary trajectory follows the axioms and patterns of AI development, with each technological milestone bringing us closer to the AI 'brain actor.'
Despite the promising technological roadmap, several non-technical barriers could slow or even halt this evolution. Audience preference for authentic human performances, legal issues surrounding likeness rights, and ethical considerations from the actors themselves could all serve as roadblocks. Therefore, while the technical path toward this future may be foreseeable, its realization is far from guaranteed.
On the other hand, Orthogonal AI is a totally different animal.
Orthogonal AI platforms like Runway ML are upending the conventional wisdom about how filmmaking tools should integrate into existing workflows. Unlike Analytical and Integrated AI, which slide into current systems and gradually refine them—a "boil the frog" approach—Orthogonal AI could disrupt the industry from the fringes. Reminiscent of Clayton Christensen's "Innovator's Dilemma," these technologies may initially appear as toys or novelties to industry veterans. Yet, they're evolving at an exponential pace. Fast-forward five years, and it's conceivable that platforms like Runway ML could autonomously generate an entire movie, from script to post-production, without human input. In this envisioned future, the 'talent'—be it actors, writers, directors, editors, or cinematographers—aren't even invited to the table; they're replaced by AI mirrors of themselves.
This could spell catastrophe for everyone involved in the traditional filmmaking process. No union, guild, contract, or agent can shield the industry from the ramifications of a fully mature Orthogonal AI system. In such a scenario, current systems of labor, creativity, and intellectual property rights would be thrown into disarray, perhaps irreversibly so.
Perhaps the most unanticipated impact would be the disappearance of the entire VFX industry as we know it. In a world powered by Orthogonal AI, there would be no need for industry-standard software like Houdini, Maya, Blender, or Unreal Engine, effectively obsoleting an entire subset of specialized skills and tools.
Peering into the future, the landscape of film and television is on the cusp of revolutionary change in what we could term the 'After AI epoch.' Over the next decade, prepare for monumental shifts. Companies have already embarked on the colossal task of data gathering, siphoning every film, TV show, and live event into a vast reservoir of training data for AI algorithms. During the forthcoming experimental and consolidation stages, AI's capabilities will extend to re-creating complete physical traits of actors—from their skeletal structure and muscle dynamics to even specific mannerisms and injuries. The AI will catalog these traits along with signature expressions and movements, cross-referenced with the contexts in which they were initially observed.
Now, envisage the following innovations that could stem from this technological metamorphosis:
Ultra-Realistic Digital Avatars: Future AI could design avatars that perfectly mimic the physical attributes of real-world actors, from bone structure to muscle movement.
Contextual Emote Archiving: A sophisticated database could catalog each actor's unique expressions and movements, setting the stage for AI to insert these nuanced traits into digital scenes effortlessly.
Injury and Physical Condition Simulation: Imagine AI that could not just mimic but predict how an actor would move under specific physical conditions, like injuries, aging, de-aging, adding an unprecedented layer of realism.
Facial Nuance Capture: The ability to simulate even the tiniest facial expressions, effectively making digital actors indistinguishable from their human counterparts.
Actor DNA Blending Software: Consider tools that enable the fusion of physical and emotional traits from multiple actors to create entirely new, yet utterly convincing, digital personas.
Text-to-Performance Tools: Building upon text-to-speech technology, these tools could take a screenplay and autonomously generate a full digital performance, incorporating the idiosyncratic styles of chosen actors or blends thereof.
And in reaction to these advancements, we need to ask important questions.
Ownership: Who legally owns the AI-generated likeness of an actor, including skeletal structures, facial expressions, and mannerisms? And how exactly do they hold it - do they put it on their Google Drive?
Digital Rights Management: How will the rights of these digital personas be managed, especially when they're derived from human actors? Will you need to send Tom Cruise a PIN code if you're directing one of his films and want to shoot a scene with his AI character?
Digital Certificates: Could there be a certification system ("Digital Certs") to verify the authorized use of an actor's likeness? E.g. a PIN code on steroids - akin to a zero-knowledge proof on a Layer 2 blockchain (more about this later)
Control over Certificates: Who would have the authority to issue and revoke these "Digital Certs"? Is it Amazon? Google? The US Postal Service?
Distance and Lighting: Do actors have the right to specify conditions under which their AI likeness can be used, such as only in shadow or from a distance? And what about blurriness?
Interaction with AI Characters: Do actors have a say in how their digital likeness interacts with other AI-generated characters or real actors? (E.g. the cheerleader pooping at the wedding scene from “Joan is Awful”)
Data Privacy: How is the vast amount of personal data collected on actors for AI usage secured and who has access to it? Does it need to be stored on a secured blockchain (again with the blockchain - but it’s important - more on this later)
DNA Blending: If AI technology mixes the features of multiple actors to create new characters, would each actor have to give explicit approval? Brad Cruise anyone?
Policing and Certification: What mechanisms will be in place to enforce these rules and could movies be "certified" based on their ethical use of AI and actor likenesses? Will there be AI Character Police - part of Homeland Security?
Posthumous Rights: What happens to an actor's digital likeness after their death? Do heirs have control over the continued usage of this likeness? I’m not touching this one.
These questions point to complex challenges at the intersection of technology, law, and artistry, which need to be carefully navigated as we step into this transformative era.
A THEME TO REMEMBER. Lawyers, agents, and actors are busily working to end the strike - but whatever we do, it won’t hold - as each incremental advancement in AI will bring new ethical, legal, and moral challenges.
It only takes one.
The film industry is often characterized as a lumbering giant, resistant to change due to the presence of unions, guilds, and corporate pressures. I can’t count how many times writers, directors, and actors have told me about Silicon Valley’s many attempts to “disrupt” Hollywood - only to fail. But this is different: it’s not a startup taking on Hollywood, it’s a new paradigm that will impact all of humanity.
Despite Hollywood's general resistance to change, historical data tells another story—that resistance is a strong but brittle shield, one that ultimately shatters when money matters.
While resistance to new technologies and business models certainly exists, it tends to build until it reaches a critical tipping point and then breaks. Once that tipping point is crossed, change unfolds at a startling pace. In this context, the axiom "It only takes one" rings especially true; a single breakthrough can catalyze a massive shift. For example, resistance to streaming services was high until Netflix debuted its game-changing series "House of Cards." Suddenly, the floodgates opened. Amazon followed suit with its own hits, "Transparent" and "The Man in the High Castle," and Hulu entered the limelight with "The Handmaid's Tale." Each of these milestones accelerated the transition from a world dominated by live television to one where streaming reigns supreme.
A THEME TO REMEMBER. It is reasonable to anticipate that in the realm of AI's role in filmmaking, the "It only takes one" axiom will hold true as well. A singular groundbreaking film or TV show leveraging AI technologies could very well serve as the catalyst that triggers a sea change across the industry.
The axiom that "it only takes one" to catalyze industry-wide change holds especially true when considering the trajectory towards adopting AI characters in entertainment. This shift will not unfold in a linear or predictable manner; instead, behind the scenes, dramatic leaps in innovation will be kept under tight wraps for competitive advantages. By the time these advancements break cover via a groundbreaking film or television show, they'll cause a double shockwave. Firstly, the public will be astonished by the sudden leap in storytelling depth and character realism. Secondly, even industry insiders could find themselves startled, as this landmark moment is likely to be the result of proprietary innovations by a single or a handful of studios. The aftermath will trigger a scramble to catch up that will be both swift and disruptive, effectively ushering in a new era in visual storytelling.
This paradigm shift will also send ripples through the industry's workforce, compelling a drastic re-evaluation of required skill sets. In the Pre-AI epoch, mastering intricate software like Houdini, Cinema 4D, and 3DS Max was the gold standard. VFX artists and technicians invested years in honing specialized skills in areas like texturing, modeling, and animation. But once the tipping point is reached—a la the "It only takes one" axiom—the ensuing tidal wave of AI adoption will dramatically recalibrate what is considered essential knowledge and skills, leading to a new, AI-centric era in filmmaking.
In the Post-AI epoch, the landscape changes dramatically: today's tools become irrelevant because the work moves to a higher abstraction layer, driven by AI rather than by humans. The role of artists will evolve into that of curators and conductors, orchestrating AI tools to achieve their vision. The expertise required will center more on understanding how to guide and refine AI algorithms, focusing on the art of storytelling rather than the minutiae of technical execution.
This dramatic shift in necessary skills will likely lead to a period of upheaval and retraining, as the industry adjusts to a new paradigm. Some roles may become obsolete, while entirely new roles could emerge, leading to both exciting opportunities and existential challenges for professionals in the field.
A SCARY FUTURE. Just as the advent of the internet made travel agencies obsolete, a full-fledged implementation of ORTHOGONAL AI could potentially do the same for today’s VFX artists.
The Impact
The potential here is not just evolutionary, but revolutionary.
A THEME TO REMEMBER: Corporations, e.g. studios, are under constant pressure for growth and profitability. Right now, it’s more expensive to create a CGI character than it is to film an actor - but what will corporations do if AI technology makes it possible to produce pure CGI movies, capable of perfectly mimicking any actor, at 1/100th the cost?
While the trajectory of AI's capabilities seems limitless to the nerds in basements building it, the path is fraught with ethical dilemmas. For those impacted by AI, the uncomfortable reality is that little can prevent a company from ingesting every movie ever filmed to use as training data, and, worse, enforcing the copyright, intellectual property, and moral rights of the actors and creators involved presents nearly insurmountable challenges. Furthermore, the opacity of machine learning models poses an accountability issue: it would be exceedingly difficult, if not impossible, to definitively prove what training data was used to develop a particular AI model. This lack of transparency could lead to an array of ethical and legal complications, from unauthorized usage of someone's likeness to potential biases in the AI's behavior. Hence, while the technological pathway seems unobstructed, the ethical, legal, and societal hurdles could prove to be significant barriers.
So other than freaking out and attempting to make AI illegal - what can we do?
In many ways, AI could be likened to the GMOs of the film industry. Utilized judiciously, it can satiate our ever-growing appetite for content while maintaining profitability. However, if deployed recklessly and without ethical considerations, it has the potential to "poison our souls," so to speak, by diluting the essence of human creativity and expression. One potent safeguard against the irresponsible use of AI in filmmaking might be the implementation of labeling, much like what we see with GMO foods today.
A THEME TO REMEMBER. In the same vein that many if not most consumers opt for organic foods despite their higher price tags, there could be a future where an 'organic' label on a movie—a guarantee of human artistry without AI interference—could become a decisive factor for audiences.
In the future, films and television shows might come with a sort of "USDA-like" label, which would disclose the extent of AI involvement in their creation. This label could provide a breakdown, perhaps even a percentage score, to denote how much of the film or show was AI-generated—detailing everything from environments, effects, weather conditions, and props to extras, secondary characters, and even primary characters. Just as nutritional labels guide consumers in making informed food choices, these AI-labels on entertainment could serve as crucial decision-making tools for audiences. For instance, some viewers might specifically seek out films that use minimal AI in order to experience something they perceive as more "authentic," while others might be excited to explore what AI can bring to the table in terms of innovation and realism. Either way, this kind of transparent labeling could foster a more informed and engaged viewership, allowing people to align their entertainment choices with their personal values and expectations.
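As a thought experiment, here is one minimal sketch of what the data behind such a label might look like: per-category AI fractions rolled up into a weighted headline percentage, roughly the way nutrition facts roll up into calories. The category names, weights, and numbers are all invented for illustration; no such labeling standard exists today.

```python
# Illustrative only: one hypothetical shape for an "AI involvement" label.
# Category names, weights, and numbers are invented for the example.
AI_FRACTION_BY_CATEGORY = {
    "environments": 0.80,        # fraction of that category generated by AI
    "effects_and_weather": 0.65,
    "props": 0.40,
    "extras": 0.90,
    "secondary_characters": 0.25,
    "primary_characters": 0.00,
}

# How much each category contributes to the headline score (sums to 1.0).
CATEGORY_WEIGHTS = {
    "environments": 0.15,
    "effects_and_weather": 0.15,
    "props": 0.10,
    "extras": 0.15,
    "secondary_characters": 0.20,
    "primary_characters": 0.25,
}


def overall_ai_percentage(fractions: dict, weights: dict) -> float:
    """Weighted headline number a viewer might see at the top of the label."""
    return 100 * sum(fractions[c] * weights[c] for c in fractions)


def print_label(fractions: dict, weights: dict) -> None:
    print("AI INVOLVEMENT LABEL")
    for category, fraction in fractions.items():
        print(f"  {category.replace('_', ' '):22s} {fraction:5.0%}")
    print(f"  {'overall (weighted)':22s} {overall_ai_percentage(fractions, weights):5.1f}%")


print_label(AI_FRACTION_BY_CATEGORY, CATEGORY_WEIGHTS)
```

Whether the headline number is weighted this way, or whether primary characters should dominate it, is exactly the kind of standards fight such a label would provoke; the sketch only shows that the disclosure itself is mechanically simple.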
A THEME TO REMEMBER. In this world, we have the power to shape our own narrative, steering clear of reducing life to mere cost-efficiency. By mastering technology, particularly AI, we can bend it to serve our loftiest aspirations. Imagine a future where AI serves as a robust tool for filmmaking, lowering the cost barriers that limit storytelling today. In doing so, it could pave the way for a diverse tapestry of stories to be told, all without sacrificing the irreplaceable human touch that brings these narratives to life.
As always, the ultimate defense against the potential overreach of technology like AI lies in the value society places on art. While laws and digital rights can offer some shield to artists, these can only go so far in a world that is indifferent to the erosion of human creativity. Artists possess a powerful tool to counteract this indifference: their art.
In other words, Hollywood should do what it does best - make a movie about it—and change minds now.
By crafting narratives rich in human experience—filled with complexities, nuances, and those ineffable qualities that technology cannot mimic—they make a compelling case for the indispensable value of human artistry. The audience plays a crucial role in this ecosystem. If they opt for content that resonates deeply, essentially 'voting with their views,' then they signal a societal preference for a future where human art remains irreplaceable. Thus, the trajectory of the art world in the age of AI will not be dictated solely by technology, but by the collective ethical and aesthetic choices we make as a society.
A THEME TO REMEMBER. In the coming one or two decades, the role of "real" actors in the entertainment industry will diverge along two distinct paths, deeply influenced by societal values and generational shifts.
One path assumes a sort of allergic "GMO reaction" against AI in the arts—a craving for the organic, the authentic, the human. This could occur if audiences continue to value human talent and the intangible qualities that a real actor brings to the screen, akin to the mass negative reaction to GMO foods in favor of organic foods. This would result in corporations merely producing more organic movies because they are more profitable— as voted by the consumer dollar.
On the opposite end, acting as a profession could fade into obscurity, a casualty of the "1% problem" seen elsewhere in capitalism. Under the influence of the Generational Axiom, younger audiences who aren't imbued with the current celebrity culture might not care whether the characters they watch are human or AI-generated. This disinterest could be as profound as the way many of them currently ignore network television or are indifferent to cultural icons like sitcoms or even Star Wars.
The "Generational Axiom" posits that it takes just one generation for a cultural norm to vanish, underscoring the ephemeral nature of collective values and beliefs. Each new generation is akin to a blank slate, unmarked by the traditions and norms of its predecessors unless deliberately taught. This means that cultural values have to be actively rebirthed and re-instituted with every emerging generation. For instance, if a generation comes of age without ever having valued real human acting, the long-standing norm of cherishing and seeking authenticity in performances could dissipate, floating away like a whisper in the wind. In such a scenario, what was once a deeply ingrained cultural expectation can easily become an antiquated notion in one single generation, its absence barely noticed by those who never experienced its value in the first place.
If this second trajectory takes hold, the few A-list actors who have attained celebrity status prior to this paradigm shift (the 1%) could become even wealthier, essentially immortalized in AI form for those older generations who still care. The rest of the acting world—the 99% of extras, character actors, and up-and-comers—could find themselves without a career pipeline, as AI takes over these roles. Once the older generation that values "celebrity" in the traditional sense dies out, the demand for human actors could evaporate entirely. Therefore, the future of acting is teetering on a generational fulcrum, and it only takes one generation to tip the scales, making it crucial for us to understand and anticipate these changes.
But have hope.
When it comes to the emotional resonance of art, there's an ineffable quality that's deeply rooted in our human experience. Regardless of technological wizardry, the moment audiences realize that the artistry before them is not an outpouring of human emotion but a construct of machine algorithms, a kind of emotional uncanny valley emerges. This disconnect underscores the irreplaceable value of the human element in art—it's what truly enables us to experience emotions on a profound level, to laugh genuinely, and to ponder the intricate facets of human existence. Even for a younger generation that may not have grown up valuing human performances, their first encounter with genuine human artistry could be a watershed moment, reigniting a universal human-to-human connection that algorithms can't replicate.
Much like the youngest generation today largely rejects platforms like Facebook and Twitter, contrasting sharply with the preferences of the preceding generation, we should anticipate a generational oscillation as AI and human creativity come into increasing contact and competition.
Ultimately, the question isn't just whether technology has the ability to digitally substitute writers and actors, or even whether corporations might misuse technology for profit; because it will and they will; what truly matters is our collective opinion on the subject.
Just because we can, doesn’t necessarily mean we should.
Well my fingers hurt. And this is going to take 20 years to happen anyway - so I’m going to get a beer… maybe two.
About The Brief
What is The Brief?
I release a weekly digest every Friday, tailored for professionals ranging from executives to writers, directors, cinematographers, editors, and anyone actively involved in the film and television domain. This briefing offers a comprehensive yet accessible perspective on the convergence of technology and its implications for the movie and TV industry. It serves as an efficient gateway to understanding the nexus between Hollywood and Silicon Valley.
What’s the format of The Brief?
In the evolving landscape of Film and Television, concerns about the repercussions of rapid technological advancements are growing. Many in the industry fear that innovations, like AI, could threaten job security, while there's an unease that corporations might put profit margins ahead of fair compensation. But history, particularly from past technological waves in Silicon Valley, offers us valuable lessons. In this weekly analysis, I'll juxtapose historical context with present-day developments, aiming to provide clear and informed insights into how the industry is being reshaped — acknowledging the challenges but also spotlighting the new horizons.
Who am I?
I'm Steve Newcomb, perhaps most recognized for founding Powerset, which was later acquired by Microsoft and transformed into Microsoft Bing. I had the privilege of being on the pioneering team that witnessed the inaugural email sent via a mobile device. My journey also led me to SRI (Stanford Research Institute), where we laid the groundwork for contemporary speech recognition technology. Additionally, I was a co-founder of the debut company to introduce a 3D physics engine in Javascript. I've held positions on the board of directors and contributed funding to massive open source initiatives like NodeJS and even the largest such project, jQuery. My experience extends to academia, having been a senior fellow at the University of California, Berkeley's engineering and business faculties. Recently, I ventured into Layer 2 internet protocols and assisted a company named Matter Labs in securing $440 million in funding to bolster their endeavors.
What am I doing besides writing these posts?
Typically, I allocate a year between groundbreaking ventures. My exploration for the upcoming project commenced in May 2023, and the sole certainty is its nexus with the film, television, SMURF, and AI domains. Sharing insights on my research endeavors helps me discern between feasible prospects and mere illusions.
If you are interested in contacting me, or have a topic that you’d like me to cover, my email is steve.e.newcomb@gmail.com.