    The Problem with AI “Artists” – O’Reilly

    By Admin | January 15, 2026

    A performance reel. Instagram, TikTok, and Facebook accounts. A separate contact email for enquiries. All staples of an actor’s website.

    Except these all belong to Tilly Norwood, an AI “actor.”

    This creation represents one of the newer AI trends: AI “artists” that eerily resemble real humans (which, according to their creators, is the goal). Eline Van der Velden, the creator of Tilly Norwood, has said that she is focused on making the creation “a big star” in the “AI genre,” a distinction that has been used to justify AI-created artists as not taking jobs away from real actors. Van der Velden has explicitly said that Tilly Norwood was made photorealistic to provoke a reaction, and it’s working: talent agencies are reportedly looking to represent it.

    And it’s not just Hollywood. Major producer Timbaland has created his own AI entertainment company and launched his first “artist,” TaTa, with the music created by uploading his own demos to the platform Suno, reworking them with AI, and adding lyrics afterward.

    But while technologically impressive, the emergence of AI “artists” risks devaluing creativity as a fundamentally human act, and in the process, dehumanizing and “slopifying” creative labor.

    Heightening Industry at the Expense of Creativity

    The generative AI boom is deeply tied to the creative industries, with profit-hungry machines monetizing every movie, song, and TV show as much as they possibly can. This, of course, predates AI “artists,” but AI is making the agenda even clearer. One of the motivations behind the Writers Guild strike of 2023 was countering the threat of studios replacing writers with AI.

    For industry power players, employing AI “artists” means less reliance on human labor—cutting costs and making it possible to churn out products at a much higher rate. And in an industry already known for poor working conditions, there’s significant appeal in dealing with a creation they do not “need” to treat humanely.

    Technological innovation has always carried the risk of eliminating certain jobs, but AI “artists” are a whole new monster for the industry. It isn’t just about speeding up processes or certain tasks but about excising human labor from the product. In an industry where it is already notoriously hard to make a living as a creative, demand for human work will become even scarcer—and that’s not even considering the consequences for the art itself.

    The AI “Slop” Takeover

    The interest in making money over quality has always prevailed in the industry; Netflix and Hallmark aren’t making all those Christmas romantic comedies with the same plot because they’re original stories, nor are studios embracing endless amounts of reboots and remakes based on successful art because it would be visionary to remake a ’90s movie with a 20-something Hollywood star. But these productions still have their audiences, and in the end, they require creative output and labor to be made.

    Now, imagine that instead of these rom-coms cluttering Netflix, we have AI-generated movies and TV shows starring creations like Tilly Norwood, with soundtracks whose voices, lyrics, and production were all generated by AI.

    The whole model of generative AI depends on regurgitating and recycling existing data. Admittedly, it’s a technological feat that Suno can generate a song and Sora can convert text to video; what it is NOT is a creative renaissance. AI-generated writing is already taking over, from essays in the classroom to motivational LinkedIn posts, and in addition to ruining the em dash, it consistently puts out material of low, robotic quality. AI “artists” “singing” and “acting” is the next uncanny destroyer of quality and will likely alienate audiences, who turn to art to feel connection.

    Art has a long tradition of being used as resistance and a way of challenging the status quo; protest music has been a staple of culture—look no further than the civil rights and antiwar movements in the United States in the 1960s. It is so powerful that political actors attempt to suppress it and punish artists. Iranian filmmaker Jafar Panahi, who won the Palme d’Or at the Cannes Film Festival for It Was Just an Accident, was sentenced to prison in absentia in Iran for making the film, and this is not the first punishment he has received for his films. Will studios like Sony or Warner Bros. release songs or movies like these if they can just order marketing-compliant content from a bot?

    A sign during the writers’ strike famously said “ChatGPT doesn’t have childhood trauma.” An AI “artist” may be able to carry out a creator’s agenda to a limited extent, but what value does that have coming from a generated creation with no lived experiences and emotions—especially when lived experience is what drives the motivation to make art in the first place?

    To top it off, generative AI is not a neutral entity by any means; we’re in for a lot of stereotypical and harmful material, especially without the input of real artists. The fact that most AI “artists” are portrayed as young women with specific physical features is not a coincidence. It’s an intensification of the longstanding trend of making virtual assistants—from ELIZA to Siri to Alexa to AI “artists” like Tilly Norwood or Timbaland’s TaTa—“female,” which reinforces the trope of relegating women to “helper” roles designed to cater to the needs of the user, a clear manifestation of human biases.

    Privacy and Plagiarism

    Ensuring that “actors” and “singers” look and sound as human as possible in films, commercials, and songs requires that they be trained on real-world data. Tilly Norwood creator Van der Velden has defended herself by claiming that she only used licensed data and went through an extensive research process, looking at thousands of images for her creation. But “licensed data” does not automatically make taking the data ethical; look at Reddit, which signed a multimillion-dollar contract to allow Google to train its AI models on Reddit data. The vast data of Reddit users is not protected, just monetized by the organization.

    AI expert Ed Newton-Rex has discussed how generative AI consistently steals from artists, and has proposed measures to ensure that the data used to train creative models is licensed or in the public domain. There are ways for individual artists to protect their online work: including watermarks, opting out of data collection, and taking measures to block AI bots. While these strategies can keep data more secure, considering how vast generative AI is, they’re probably more a safeguard than a solution.
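
    One concrete form that “blocking AI bots” can take is a robots.txt file that asks known AI training crawlers to stay away. The sketch below is illustrative rather than part of the original piece; the user-agent names (GPTBot, Google-Extended, CCBot) are publicly documented AI crawler identifiers, but any such list needs to be kept current, and compliance is voluntary on the crawler’s side.

    # robots.txt sketch: ask common AI training crawlers not to fetch this site
    # (illustrative only; verify user-agent names against each company's current docs)
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /

    # Regular crawlers such as search engines remain unaffected
    # (an empty Disallow value permits everything)
    User-agent: *
    Disallow:

    Because honoring robots.txt is voluntary, this is exactly the kind of safeguard-rather-than-solution measure described above.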

    Jennifer King of Stanford’s Institute for Human-Centered Artificial Intelligence has suggested ways to protect data and personal information more generally, such as making “opt out” the default for data sharing and pushing for legislation that focuses not just on transparency of AI use but on its regulation—likely an uphill battle with the Trump administration trying to take away state AI regulations.

    This is the ethical home that AI “artists” are living in. Think of all the faces of real people that went into making Tilly Norwood. A company may have licensed that data for use, but the artists whose “data” is their likeness and creativity likely never consented (at least not directly). In this light, AI “artists” are a form of plagiarism.

    Undermining Creativity as Fundamentally Human

    Looking at how art was transformed by technology before generative AI, it could be argued that this is simply the next step in a process of change rather than something to be concerned about. But photography, animation, typewriters, and all the other inventions used to justify the onslaught of AI “artists” did not eliminate human creativity. Photography was not a replacement for painting but a new art form, even if it did concern painters. There’s a difference between having a new, experimental way of doing something and extensively using data (particularly data taken without consent) to make creations that blur the lines of what is and isn’t human. For instance, Rebecca Xu, a professor of computer art and animation at Syracuse who teaches an “AI in Creative Practice” course, argues that artists can incorporate AI into their creative process. But as she warns, “AI offers useful tools, but you still need to produce your own original work instead of using something generated by AI.”

    It’s hard to understand exactly how AI “artists” benefit human creativity, which is a fundamental part of our expression and intellectual development. Just look at the cave art from the Paleolithic era. Even humans 30,000 years ago who didn’t have secure food and shelter were making art. Unlike other industries, art did not come into existence purely for profit.

    The arts are already undervalued economically, as is evident from the lack of funding in schools. Today, a kid who may want to be a writer will likely be bombarded with marketing from generative AI platforms like ChatGPT to use these tools to “write” a story. The result may resemble a narrative, but there’s not necessarily any creativity or emotional depth that comes from being human, and more importantly, the kid didn’t actually write. Still, the very fact that this AI-generated story is now possible curbs the industrial need for human artists.

    How Do We Move Forward?

    Though profit-hungry power players may be embracing AI “artists,” the same cannot be said for public opinion. The vast majority of artists and audiences alike are not interested in AI-generated art, much less AI “artists.” The power of public opinion shouldn’t be underestimated; the writers’ strike is probably the best example of that.

    Collective mobilization will thus likely be key to challenging AI “artists” against the interests of studios, record labels, and other members of the creative industry’s ruling class. There have been wins already, such as the Writers Guild of America strike in 2023, which resulted in a contract stipulating that studios can’t use AI as a credited writer. And because music, film, and television are full of stars, often with financial and cultural power, the resistance being voiced in the media could benefit from more actionable steps; for example, a prominent production company run by an A-list actor could pledge not to include any AI-generated “artists” in its work.

    Beyond industry and labor, pushing back on the notion that art is unimportant unless you’re a “star” can also play a significant role in changing conversations around it. This means funding art programs in schools and libraries so that young people know that art is something they can do, something that is fun and that brings joy—not necessarily to make money or a living but to express themselves and engage with the world.

    The fundamental risk of AI “artists” is that they will become so commonplace that it will feel pointless to pursue art, and that much of the art we consume will lose its fundamentally human qualities. But human-made art and human artists will never become obsolete—that would require fundamentally eliminating human impulses and the existence of human-made art. The challenge is making sure that artistic creation is not relegated to the margins of life.


