
The other AI apocalypse
AI doesn’t have to destroy us; we’re doing a fine job of that on our own.
Anthropic explored a human-AI centipede ‘in the wild’
Anthropic’s latest paper, titled “Values in the Wild: Discovering and Analyzing Values in Real-World Language Model Interaction,” may be the team’s most noteworthy work since the “Constitutional AI” paper.
The amazing irreplaceability of the human condition
Almost all people can be fooled at least some of the time. Patience may be the best approach as we prepare for the impending onslaught of claims surrounding the emergence of “human-level” AI.
AGI should be neither seen nor heard
The only path forward is for advanced AI systems to become so useful that we forget we’re using them. Simply put: AI needs to fade into the background.
‘The world abounds with quantities’ — DeepMind is more ambitious than ever
You’ve got to hand it to DeepMind. They’re the Apple of laboratories. While everyone else is clamoring to occupy space in the public’s head, it’s toiling mysteriously in the background.
Whenever AI makes you ‘feel’ something, you’re being manipulated
In other words: You’re the middle segment in a human-AI centipede.
Should you be polite to AI? Here’s what the research says
If the basic idea is to get the most utility out of chatbots, it seems counterintuitive to train them to waste tokens and time outputting polite language.
Putting LLMs inside of robots won’t solve the embodiment problem
Chatbots don’t actually exist. No, we’re not trying to create a conspiracy theory. What we mean is this: large language models (LLMs) aren’t “entities” or “beings.”
AI isn’t coming for your job, mediocrity is
LLMs are too stupid to be useful for professional work, but that won’t stop content creators from shooting themselves in the foot with them.