

Once you’ve traded your principles for proximity to power, do you even run your own company?
No. See also: Shareholder Primacy.


The company’s response was an auto-reply: “Legacy Media Lies.”
Funny, that seems to be the correct answer to the headline.


My point is that xAI is making the case for AI regulation for us.
Ha, regulation? With what governing body? Congress is hopeless, because apathy has gripped a majority of the voting public, and there’s still a large portion of morons who thought MAGA was a good idea.


So, are we saying we’re still going to be happy with a system that you can bypass with “ignore all previous instructions” or some stupid magic phrase like that?


Nobody is bitching about photoshopping, something that has existed for decades and that almost anybody can do to put some person’s face on a naked body or into whatever situation they want. Suddenly, journalists are inventing a new moral panic around LLMs, saying people can do whatever they want with pictures, despite the fact that this technology already existed; it’s just a little bit easier now. It’s not a new problem, so reporting on it as one just shifts the blame onto a new boogeyman.
See, the magic formula is to slap the word “AI” on a headline and boom, instant attention! It doesn’t matter what it’s about, whether it’s a new problem, or whether it’s only tangentially related to the root cause… As long as you’re talking shit about every angle of AI in the most extreme way possible, mission accomplished. It’s outrage reporting, because it offers no solutions or historical context. The sole purpose is the outrage, because outrage generates clicks. It’s too hard for journalists to think outside the outrage box.


These people and journalists don’t seem to remember celebrity photoshops. Or maybe they do, and want to continue feeding the outrage machine.
Pardon me if I duck out of my Two Minutes Hate for today.


LLM liability is not exactly cut-and-dried, either. It doesn’t really matter how many rules you put on an LLM to stop it from doing something; people will find a way to break them and get it to do the thing it said it wouldn’t. For fuck’s sake, have we really forgotten the lessons of Asimov’s I, Robot short stories? Almost every one of them was about how the “unbreakable” three laws were in fact very breakable, because absolute laws don’t make sense in every context. (While I hate using AI fiction in LLM comparisons, this one fits.)
Ultimately, the responsibility lies with the person who told it to do the thing and got the thing they asked for. LLMs are a tool, nothing more. If somebody buys a hammer and misuses it by bashing somebody’s brains in, we arrest the person who committed the murder. If there’s a security hole on a website that a hacker used to steal data, then depending on how negligent the company was, it bears some liability for not protecting its data well enough. But the hacker 100% broke the law and would get convicted, if caught.
Regardless of all of that, LLMs aren’t fucking sentient and these dumbass journalists need to stop personifying them.
“Made redundant”? WTF, BBC?