The good news, for me at least, is that the computer thinks I have a nice personality. According to an app called MorphCast, I was, in a recent meeting with my boss, generally “amused,” “determined,” and “interested,” though—sue me—occasionally “impatient.” MorphCast, you see, purports to glean insights into the depths and vagaries of human emotion using AI. It found that my affect was “positive” and “active,” as opposed to negative and/or passive. My attention was reasonably high. Also, the AI informed me that I wear glasses—revelatory!

The bad news is that software now purports to glean insights into the depths and vagaries of human emotion using AI, and it is coming to watch you. If it isn’t already: MorphCast, for example, has licensed its technology to a mental-health app, a program that monitors schoolchildren’s attention, and McDonald’s, which launched a promotional campaign in Portugal that scanned app users’ faces and offered them personalized coupons based on their (supposed) mood. It is one of many, many such companies doing similar work—the industry term is emotion AI, or sometimes affective computing.

Some products analyze video of meetings or job interviews or focus groups; others listen to audio for pitch, tone, and word choice; still others can scan chat transcripts or emails and spit out a report about worker sentiment. Sometimes, the emotion AI is baked in as a feature in multiuse software, or sold as part of an expensive analytics package marketed to businesses. But it’s also available as a stand-alone product, and the barrier to entry is shin-high: I used MorphCast at no cost, taking advantage of a free trial, and with no special software. At no point was I compelled to ask my interlocutors if they consented to being analyzed in this way (though I did ask, because of my good personality).

  • elfpie@beehaw.org · 3 hours ago

    Please, people, don’t discuss the new polygraph as if it were more real than the old one. They are both tools, but they are also both, first and foremost, lies. It’s just a guess, but I believe there’s probably someone at every stage of the implementation saying it doesn’t quite work like that, or that it doesn’t work at all. But who cares? The product they are selling is justifiable exploitation.

    • sleepundertheleaves@infosec.pub · 2 hours ago

      > The product they are selling is justifiable exploitation.

      Thank you. So, so much of AI is designed to manufacture a justification for what the entity using it already wanted to do.

      Will AI replace actors and scriptwriters? No, but it gives studios an excuse to fire expensive, unionized workers and quietly replace them with cheap overseas labor.

      Will AI replace coders? No, but it gives big tech an excuse to fire expensive, experienced, American coders earning American salaries, and replace them with a dozen new graduates out of Hyderabad.

      Will AI replace police work? No, but it’ll send bad facial recognition results falsely labeling innocent black men as criminals, and give racist police more excuses to arrest black men for being black in public.

      Will AI replace military intelligence? No, but it’ll rubber stamp an “assessment” saying any village or church or hospital or fishing boat or girls’ school we want to destroy is a legitimate military target, and give the US military that fig leaf of justification that lets the people actually pulling the trigger sleep at night.

      Finally, will AI replace sound judgment about the human being in front of you? No, but it sure as hell will provide “objective” support for whatever prejudices you already have.

  • Korhaka@sopuli.xyz · 17 hours ago

    I’ve already lost out on a job for failing a test that’s as scientifically useless as a horoscope at predicting anything. In my last job, shortly before I left, we got a new head of L&D who sent out loads of these things.

    I frequently had to bite my tongue when she spoke, as it was an absolute load of shit. But it turns out the ghouls in HR love this useless crap.

    • astronaut_sloth@mander.xyz · 21 hours ago

      The only legitimate uses I can think of are research or clinical settings. It could be really useful for social or psychological research, or for monitoring patient status. That’s not how it will be used for the most part, but it is a place it could be useful. Like any tool, it can be used for good things and bad (how revelatory…)

      The problem with the AI industry (and modern, unregulated capitalism in general) is that as soon as someone has a potentially useful tool, they look everywhere for every possible use with no regard for societal consequences. Thinking about the ramifications of using a tool doesn’t increase shareholder value. In fact, trying to ensure your tool is used only in positive ways actively harms shareholder value. Greed perverts all that is good.

      • NaibofTabr@infosec.pub · 14 hours ago

        > It could be really useful for various social or psychological research

        The only application I can see for such research would be to extend and refine the dystopian use cases. What else would such research be used for? It will only feed back into the cycle of privacy invasion and the surveillance state.

        > … or monitoring patient status.

        Impersonal patient-status monitoring (beyond vital signs like heart rate, which we can already track much more easily) will not have any practical benefit. The most likely outcome is that it will be used to justify reduced nurse staffing.