How did he calculate the 70% chance? Without an explanation, this opinion is about as valuable as a Reddit post. It’s just marketing fluff: get people talking about AI so that a small percentage of them convert into people interested in AI. Let’s call it clickbait talk.
First he talks about a high chance that humans get destroyed by AI, then follows with a prediction that AGI will arrive in 2027 (only three years from now). No. Just no. There is a looong way to go before general intelligence. But isn’t he trying to sell you on why AI is great? He follows with:
“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,”
Ah yes, he does.
I feel this is all just a scam to pump up the value of AI stocks. No one in the media seems to talk about the hallucination problem, the problem of limited training data for new models (Habsburg AI), the energy constraints, etc.
It’s all uncritical belief that “AI” will just become smart eventually. This technology is built on hype, nothing more. There are limitations, and they have reached them.