How will the upcoming AI legislation around the world, like voice-cloning prevention and requirements to disclose technical details of models, affect open-source or self-hosted models?

  • Riskable@programming.dev
    15 hours ago

    Like I said initially, how do we legally define “cloning”? I don’t think it’s possible to write a law that prevents it without also creating vastly more unintended consequences (and problems).

    Let’s take a step back for a moment to think about a more fundamental question: Do people even have the right to NOT have their voice cloned? To me, that is impersonation, which is perfectly legal (in the US) as long as you don’t claim it’s the actual person. That is, if you impersonate someone, you can’t claim it’s actually that person, because that would be fraud.

    In the US—as far as I know—it’s perfectly legal to clone someone’s voice and use it however TF you want. What you can’t do is claim that it’s actually that person because that would be akin to a false endorsement.

    Realistically—from what I know about human voices—this is probably fine. Voice clones aren’t that good. The most effective method is to clone a voice and use it in a voice changer, with a voice actor who can mimic the original person’s accent and inflection. But even that has flaws that a trained ear will pick up.

    Ethically speaking, there’s really nothing wrong with cloning a voice on its own. From an ethics standpoint, the act itself has no impact on anyone; it’s just a different way of speaking or singing.

    It feels like it might be bad to sing a song using something like Taylor Swift’s voice, but in reality it will have no impact on her or her music-related business.