Very interesting article which does agree with my opinion of AI
flip.it/-pibbY— Nutria Boudin aka Greg Mayes (@nutriaboudin.bsky.social) July 14, 2025 at 7:49 PM
Bit of a weighty topic for the off-hours post, but it’s also a fun read. Per Rolling Stone, “What Is Up With These Tech Billionaires? This Astrophysicist Has Answers”:
Fresh off a Ph.D. in astrophysics, science journalist Adam Becker moved to Silicon Valley with an academic’s acclimation to hearing the word “no.” “In academic science, you need to doubt yourself,” he says. “That’s essential to the process.” So it was strange to find himself suddenly surrounded by a culture that branded itself as data-oriented and scientific but where, he soon came to realize, the ideas were more grounded in science fiction than in actual science and the grip on reality was tenuous at best. “What this sort of crystallized for me,” says Becker, “was that these tech guys — who people think of as knowing a lot about science — actually, don’t really know anything about science at all.”
In More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, published this spring, Becker subjects Silicon Valley’s ideology to some much-needed critical scrutiny, poking holes in — and a decent amount of fun at — the outlandish ideas that so many tech billionaires take as gospel. In so doing, he champions reality while also exposing the dangers of letting the tech billionaires push us toward a future that could never actually exist. “The title of the book is More Everything Forever,” says Becker. “But the secret title of the book, like, in my heart is These Fucking People.”
Over several Zooms, Rolling Stone recently chatted with Becker about these fucking people, their magical thinking, and what the rest of us can do to fight for a reality that works for us.
A lot of people who move to Silicon Valley get swept up in its vibe. How did you avoid it?
I did sort of see the glittering temptation of Silicon Valley, but there’s a toxic positivity to the culture. The startup ethos out here runs on positive emotion, and especially hype. It needs hype. It can’t function without it. It’s not enough that your startup could be widely adopted. It needs to change the world. It has to be something that’s going to make everything better. So this ends up becoming an exercise in meaning-making, and then people start talking about these startups — their own or other people’s — in semi-religious or explicitly-religious terms. And it was just a shock to see all of these people talking this way. It all feels plastic and fake. I thought, Oh wow, this is awful. I want to watch these people and see what the hell they’re up to. I want to understand what is happening here, because this is bad…
Underpinning a lot of that toxic positivity was this idea that if you just make more tech, eventually tech will improve itself and become super-intelligent and godlike. [The technocrats] subscribe to a kind of ideology of technological salvation — and I use that word “salvation” very deliberately in the Christian sense. They believe that technology is going to bring about the end of this world and usher in a new perfect world, a kind of transhumanist, algorithmically-guaranteed utopia, where every problem in the world gets reduced to a problem that can be solved with technology. And this will allow for perpetual growth, which allows for perpetual wealth creation and resource extraction.
These are deeply unoriginal ideas about the future. They’re from science fiction, and I didn’t know how seriously people were taking them. And then I started seeing people take them very, very seriously indeed. So, I was like, “OK, let me go talk to actual experts in the areas these people are talking about.” I talked to the experts, and: Yeah, it’s all nonsense…
There’s also no particular reason to believe that the kinds of machines that we are building now and calling “AI” are sufficiently similar to the human brain to be able to do what humans do. Calling the systems that we have now “AI” is a kind of marketing tool. You can see that if you think about the deflation in the term that’s occurred just in the last 30 years. When I was a kid, calling something “AI” meant Commander Data from Star Trek, something that can do what humans do. Now, AI is, like, really good autocomplete.
That’s not to say that it would never be possible to build an artificial machine that does what humans do, but there’s no reason to think that these can and a lot of reason to think that they can’t. And the self-improvement thing is kind of silly, right? It’s like saying, “Oh, you can become an infinitely good brain surgeon by doing brain surgery on the brain surgery part of your brain.” …
So what we’re calling “artificial intelligence” is really just kind of like an advanced version of spellcheck?
Yeah, in a way. I mean, this is not even the first time in the history of AI that people have been having conversations with these machines and thinking, “Oh wow, there’s actually something in there that’s intelligent and helping me.” Back in the 1960s, there was this program called Eliza, that basically acted like a very simple version of a therapist that just reflects everything that you say back to you. So you say, “Hey Eliza, I had a really bad day today,” and Eliza says, “Oh, I’m really sorry to hear that. Why did you have a really bad day today?” And then you say, “I got in a fight with my partner,” and they say, “Oh, I’m really sorry to hear that. Why did you get in a fight with your partner?” I mean, it’s a little bit more complicated than that but not a lot more complicated than that. It just kind of fills in the blanks. These are stock responses — something that’s very clearly not thinking. And people would say, “Oh, Eliza really helped me. I feel like Eliza really understands who I am.”…
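The fill-in-the-blanks behavior Becker describes can be sketched in a few lines of Python. This is a hypothetical toy in the spirit of ELIZA — the patterns and responses below are made up for illustration, not Weizenbaum's actual 1966 script:

```python
import re

# A few illustrative pattern -> response rules. Real ELIZA had a much larger
# script with ranked keywords, but the mechanism was essentially this.
RULES = [
    (re.compile(r"i had (.+)", re.I),
     "Oh, I'm really sorry to hear that. Why did you have {0}?"),
    (re.compile(r"i got in (.+)", re.I),
     "Oh, I'm really sorry to hear that. Why did you get in {0}?"),
    (re.compile(r"i feel (.+)", re.I),
     "Tell me why you feel {0}."),
]

# Swap first-person words for second-person so the echo reads naturally.
SWAPS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(SWAPS.get(w.lower(), w) for w in fragment.split())

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please, go on."  # stock fallback when nothing matches

print(eliza("I had a really bad day today"))
# -> Oh, I'm really sorry to hear that. Why did you have a really bad day today?
print(eliza("I got in a fight with my partner"))
# -> Oh, I'm really sorry to hear that. Why did you get in a fight with your partner?
```

There is no model of the conversation anywhere in there — just pattern matching and pronoun swapping — which is exactly why people projecting understanding onto it alarmed Weizenbaum.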
Sam Altman gave a talk two or three years ago, and he was asked a question about global warming, and he said something like, “Oh, global warming is a really serious problem, but if we have a super-intelligent AI, then we can ask it, ‘Hey, how do you build a lot of renewable energy? And hey, how do you build a lot of carbon capture systems? And hey, how do we build them at scale cheaply and quickly?’ And then it would solve global warming.” What Sam Altman is saying is that his plan for solving global warming is to build a machine that nobody knows how to build and can’t even define and then ask it for three wishes.
But they really believe that this is coming. Altman said earlier this year that he thinks that AGI is coming in the next four years. If a godlike AI is coming, then global warming doesn’t matter. All that matters is making sure that the godlike AI is good and comes soon and is friendly and helpful to us. And so, suddenly, you have a way of solving all of the problems in the world with this one weird trick, and that one weird trick is the tech that these companies are building. It offers the possibility of control, it offers the possibility of transcendence of all boundaries, and it offers the possibility of tremendous amounts of money…
There’s a lot of delusional thinking at work, and it’s really, really easy to believe stuff that makes you rich. But there’s also a lot of groupthink. If everybody around you believes this, then that makes it more likely that you’re going to believe it, too. And then if all of the most powerful people and the wealthiest people and the most successful people and the most intelligent-seeming people around you all believe this, it’s going to make it harder for you not to believe it.
And the arguments that they give sound pretty good at first blush. You have to really drill down to find what’s wrong with them. If you were raised on a lot of science fiction, especially, these ideas are very familiar to you — and I say this as a huge science fiction fan. And so when you start looking at ideas like super-intelligent AI or going to space, these ideas carry a lot of cultural power. The point is, it’s very easy for them to believe these things, because it goes along with this picture of the future that they already had, and it offers to make them a lot of money and give them a lot of power and control. It gives them the possibility of ignoring inconvenient problems, problems that often they themselves are contributing to through their work. And it also gives them a sense of moral absolution and meaning by providing this grand vision and project that they’re working toward. They want to save humanity. [Elon] Musk talks about this all the time. [Jeff] Bezos talks about this. Altman talks about this. They all talk about this. And I think that’s a pretty powerful drug. Then throw in, for the billionaires, the fact that when you’re a billionaire, you get insulated from the world and from criticism because you’re surrounded by sycophants who want your money, and it becomes very hard to change your mind about anything…
So what you’re telling me is that I’m not gonna get to live on Mars.
Yeah, that’s right. You’re not going to. But you shouldn’t be disappointed because Mars sucks. Mars fucking sucks. Just to name a few of the problems: gravity is too low, the radiation is too high, there’s no air, and the dirt is made of poison…
What can we do about all this? Are we all just basically fucked?
Well, look, the billionaires have an enormous amount of power and money, but there’s a lot more of us than there are of them. Also, we can think critically, and so I think there’s a few different things that we can do. In the short term we need to organize. One of the things that these guys are completely terrified by — and it’s one of the reasons they love AI — is the idea of labor organization. They don’t want workers rising up. They don’t want to have to deal with workers at all, and so I think labor organizing is really important. I think political organizing is really important. We need to build political power structures that can counterbalance the massively outsized power of this really very small community of individuals who just have massive amounts of wealth. And I know that that sounds kind of facile, but I really do think it’s what we have to do, and historically it is how [people] have always combated the very wealthy and their fantasies of power…
And then in the longer term — hopefully not that far away, if we get to a place where we have political power to balance these guys out — I think we’ve got to tax their wealth away. They did not earn that money alone. They needed the infrastructure and community that the rest of us provide and they also, frankly, needed a lot of government investment. They are the biggest welfare queens in existence, right? Silicon Valley got enormous amounts of government spending to benefit it over the years, both on infrastructure and in buying products and whatnot. The government built the internet. The government was the biggest client of Silicon Valley back when it was first starting up through buying computer chips for the space program. The government built the space program without which you wouldn’t be able to have something like SpaceX. So I think it’s time to stop giving them handouts and start saying, “What we invested, the bill has come due.”
Launch this fucking guy at the sun.
— Clean Observer (@hammbear2024.bsky.social) July 22, 2025 at 6:58 PM
M. Bouffant
Here’s a video of Becker w/ Wajahat Ali. youtu.be/jchwkmkTNTc?si=4M-Zf6UrQz3EGhWo
Kristine
This book is on my tbr pile. Looks like it will be a ride.
SpaceUnit
At this point I just want a computer program that can direct me to a good cave.
bjacques
Clearly, what will save humanity is getting rid of billionaires, or at least some of them.
Anyway, you can’t stake your life to a Saviour Machine.
Martin
To a large degree I think there’s just a recognition that someone is going to build the fraud machine and make a fuckload of money doing it, but I could build the fraud machine first, and because I believe I’m fundamentally a good person, and that other guy might not be, that’s a service to humanity. Oh, and the fuckload of money I’ll make wasn’t my motivation at all, but I’ll keep it all the same.
Most of these people believe they are building a machine that will destroy capitalism and likely society. And if someone does build an AGI, that’s the end of the economy. We don’t know how to function in that space:
“It’s easier to imagine the end of the world than the end of capitalism”
Ken_L
A marginally sillier solution than Jeb! Bush’s, who assured us some latter-day Edison would invent the fix in his garage.
Baud
Stop building energy intensive AI
Ramona
Thanks for this! I put “Adam Becker” in the YouTube search box and I am now watching an interview of him on “Commonwealth Club World”. He’s talking about finding a publisher of creationist articles financed by Peter Thiel.
Baud
@Ramona:
Adam Becker is a creationist?
Baud
Instead of asking AI, we could ask China.
Ramona
@Baud: Typing on a tablet is hard. He was surprised this magazine was publishing creationist dreck alongside good science articles to lure people in, went searching for the culprit responsible for financing it, and found out that it was Peter Thiel. He said that is when he realized that the so-called tech geniuses had no clue about science or tech but only knew how to make money.
(Sigh)
Baud
@Ramona:
Thank you!
I hope I live long enough to see a society where people don’t find outlandish things interesting. That science magazine should be utterly discredited.
Martin
@Baud: I would note that China exercises considerably more control over its power grid than most of the US does. Texas has 2.5x as much renewable capacity as California despite having ¾ of its population, and is still building fossil fuel plants because, in an unregulated market, the Jevons paradox is a real problem to be dealt with.
Essentially, Texas is coping with the economic benefits of renewables creating opportunity for higher economic returns in crypto and AI. China banned crypto mining and has limitations on AI so that it can realize those renewable benefits to offset fossil fuels. CA hasn’t banned those, but its market doesn’t allow those economic returns to manifest, because rates are tied to conservation, which means utilities don’t want more per-capita demand on their grid: that undercuts their profitability.
But most of the US does not subscribe to the belief that GDP can decouple from energy use, even though that was proven decades ago and is constantly reproven around the world. It remains a GOP article of faith.
Baud
@Martin:
Yeah, we’re burdened by entrenched interests that aren’t forward looking at all.
Baud
@Baud:
Counterpoint. No one is perfect.
JoyceH
This article really clarified something I’ve been sensing. Musk has always baffled me because he doesn’t sound like a scientist, he sounds like a teenage sci-fi fan. The media would gush about what a visionary he was, and his amazing new ideas were just stuff from mainstream sci-fi from the 60s through 80s. And DOGE? Genuine technicians are meticulous, and DOGE was always nothing but careless and sloppy.
MagdaInBlack
Thank you, A.L.
I just glommed on to this story the other day due to the YouTube video M. Bouffant posted at #1.
MagdaInBlack
Also from Wajahat Ali, the Curtis Yarvin connection to this techbro wet dream:
thelefthook.substack.com/p/curtis-yarvin-and-the-intellectual?utm_source=publication-search
Matt McIrvin
@Baud: I love outlandish things. I don’t BELIEVE them. Not without good reason.
Matt McIrvin
@JoyceH: I’m about the same age as Elon Musk and I consumed all the same science fiction and popular futurism. It was blatantly obvious that if it was a shiny idea about THE FUTURE! that was current around the years 1977-1985, Elon Musk was going to propose it as a supposedly serious project eventually. But not everyone has had the same level of exposure to, like, Omni magazine.
p.a
“AI” marketing, younger brother of the aughts’ “Hi-Tech,” the grandkid of “New and Improved!”
Nothing new under the capitalist sun…
Baud
@Matt McIrvin:
I hear you. I was thinking more about evil outlandish, like Trump or tech bros or fascists or QAnon etc.
There can be fun outlandish that isn’t evil and powerful. I should have thought of a better word.
What Have the Romans Ever Done for Us?
@JoyceH: It does explain DOGE though…if you’re living a sci-fi fantasy boondoggle, having an entity around that is an unbiased arbiter of what’s fact and what’s fiction, real vs. fantasy, true vs. made up is inconvenient.
Federal bureaucracies are at their core (or were) dedicated to being reality based. It’s something these guys have in common with Trump – they’re selling different fantasies, but both are selling fantasies, and having unbiased experts around to tell people these are fantasies built on bullshit rather than a feasible path to reality is extremely inconvenient. So they have to be degraded and destroyed, and that doesn’t have to be done meticulously.
Baud
@What Have the Romans Ever Done for Us?:
Also why secular society and government is a threat to right wing religions.
Matt McIrvin
@Baud: You’re not wholly wrong though. Sometimes I hear people talking about how they miss the days when conspiracy theories were fun and harmless, like they were just about aliens abducting cattle. But they never were–that stuff was laying the groundwork for believing that vaccines are poisoning your children and Hillary Clinton is harvesting adrenochrome from babies. It’s the underlying worldview that’s the problem.
satby
How come the only thing I can think of is this quote from Corinthians?
An entire valley of overgrown children high on their own farts. And because they’re insanely wealthy we’re trapped in their funhouse.
Baud
@Matt McIrvin:
Tales about bigfoot and the Loch Ness Monster were never partisan or used in service of MegaCorp. That’s really what’s changed.
Maybe the big problem with “technocratic” policy making often associated with Dems isn’t that it is incapable of solving problems or relies too heavily on the market, but that it doesn’t satisfy the human need for wonder.
Just a thought.
Baud
@satby:
Children would be an upgrade to the current MOTUs.
MagdaInBlack
@Baud: It is true that the Right does give me a lot more to wonder about…..
Baud
@MagdaInBlack:
I wonder if they were fed lead chips as children.
Ramalama
I was in grade school, staying over at a friend’s house, when I had a chance to play the computer game Eliza. Though I think my friend called it “psychiatrist.” It was very cool at first, but I saw how it fed me back my lines, and I spent entirely too long getting a robot to say back to me, “tell me why you feel like a pervert’s motherfucking asshole is troubling you?”
MagdaInBlack
@Baud: 😄 Yeah, that too.
divF
Just after the WSJ Epstein article broke, I watched the movie Chinatown (1974), for the first time since it came out. I was struck by the following lines:
Jake Gittes: Why are you doing it? How much better can you eat? What could you buy that you can’t already afford?
Noah Cross: The future, Mr. Gittes! The future.
Then a few moments later in the same scene …
Noah Cross: You see, Mr. Gittes, most people never have to face the fact that at the right time and the right place, they’re capable of ANYTHING.
mappy!
If wet
Then dream
Else restart
nest, loop.
Matt McIrvin
@Baud: Real science has produced information filled with wonder–something that popularizers like Carl Sagan were always good at conveying–and the liberal technocrats are usually the ones who want to fund it! Musk, Mr. Future Boy, is the one who regards all that nerd stuff as overhead and was slashing it to the bone.
Of course, the nuts and bolts of the most wondrous science are usually tedious, like the nuts and bolts of anything, and it takes a special kind of personality to tolerate that.
Matt McIrvin
@Ramalama: The author of Eliza became a harsh critic of AI research, and one of the reasons was that when he had people play with this really primitive bot, he could already see how willing many people were to ascribe it intentionality and confess secrets to it. Set off his Spidey sense.
Baud
@Matt McIrvin:
I couldn’t agree more, but I’m a nerd.
Matt McIrvin
I have come to dislike Arthur C. Clarke’s dictum that “any sufficiently advanced technology is indistinguishable from magic.” It’s sort of on par with manifest destiny as an attractive but dangerous idea that is useful for science-fiction writers. Technology you don’t understand may seem like magic to you. But it always has rules and a price. It’s using natural interactions, not supernatural ones. It doesn’t mean you can just propose that there’s a magical solution to any problem you might imagine. The AI boom is convincing a lot of people that they’re going to get solutions to every problem out of a black box, without having to pay scientists to do research. And I think it’s more a “sufficiently advanced technology” superstition than anything rational.
Gvg
Also power corrupts. Absolute power corrupts absolutely.
This is why I think slave ownership, even with “good intentions” to free them, is damning, and why the past version of marriage, where the man had all the rights, was bad for men morally. It’s also why the ones with power who could see that are so special. Rare. Both Roosevelts stood up to wealth even though they were wealthy. Truman integrated the armed forces even though he was white, Kennedy and LBJ led civil rights, etc.
The constitution is full of checks and balances because nothing was supposed to have unchecked power. That’s for government. But everything else needs checks too.
stinger
@SpaceUnit:
At this point, I just want accurate Closed Captioning.
Rudi666
Don’t know if it was here or somewhere else, but Ed Zitron writes and podcasts about the AI bullshit.
From the Google:
google.com/search?client=firefox-b-1-d&q=ed+zitron
The ROT Economy: Ed Zitron’s Unflinching Take on Tech’s Broken Priorities
youtube.com/watch?v=-WPTQXsFgFo
Another Scott
@Matt McIrvin: +1
I remember as a kid sneaking looks at Playboy once and seeing an illustrated article about how in the far future of The Year Two Thousand, all the young, attractive adult people would be wearing clothing made from sheer and transparent and translucent fabrics and everyone would be happy.
Because of course readers of Playboy were interested in thinking about a future like that.
🤪
It’s the Springfield Monorail all over again. The finance guys shoveling money into OpenAI want their huge returns, so Sam says, You Betcha! Huge returns are just around the corner, once we get our next $50B tranche. Don’t let your earlier investment go to waste – we’re almost there! Don’t miss out!!…
Meanwhile, in the real world, …
Thanks.
Best wishes,
Scott.
bluefoot
@Baud: This came up at my last company R&D town hall. There’s a big push by the company to use in-house AI tools, and a lot of people are concerned about the environmental impact with little return. Several people brought it up in the Q&A, but my impression is that leadership doesn’t want the company to be “left behind.”
Baud
@bluefoot:
Yeah. Unlike crypto, I can see uses for AI. But I’m not sure it’s worth the cost, and it’s way overhyped.
Dave
@Martin: This is exactly it. My brother is part of the tech bro set, and there is a common belief that an actual AGI would either be Skynet or view us as incidental casualties while it does what it wants. But they all figure someone else will make it anyway, so: maybe I’ll get it right, because I’m smart and special, so it might as well be me that makes it.
There is this entirely unexamined belief that pure logic must prefer cruelty. It’s very revealing of their internal, often unexamined, biases.
So great that we have given them such influence.
stinger
Part of the problem stems from when this new field of mathematics and logic was labeled “computer science.” Silicon Valley isn’t a group of scientists. They’re coders.
I enjoyed Red Mars, Green Mars, Blue Mars. But the books basically start after that bit on the blackboard, “Then a miracle occurs.”
Another Scott
@Baud: I’m not sure. Ancient Astronauts building the pyramids and all that always had a “here’s what really happened, and what the scientists and experts and “they” and the Government won’t tell you” aspect that is essentially reactionary and right wing.
While scandal sheets have existed as long as there have been newspapers, the biggest difference these days is that political parties around the world have explicitly embraced it and too much of the media pushes it. Truth doesn’t matter to these people chasing power and riches. And too many institutions have been captured or cowed.
Grr…
Thanks.
Best wishes,
Scott.
hells littlest angel
Not only are the techbros basing their hopes and dreams on science fiction, they’re basing them on science fiction from the 1940s.
lowtechcyclist
@satby:
Well, it’s certainly apropos!
Valley of the Dolts.
different-church-lady
What kills me about all this is how freaking obvious it is to anyone with half a brain.
different-church-lady
@Martin: The only reason these guys would want to destroy capitalism is because capitalism can’t function unless the money is at least partially shared.
bluefoot
@Baud: In my sector, AI/ML can be useful for specific cases where the training data is high quality and large enough. But we’re not looking for agentic AI to solve our problems for us. But we’re scientists, and we’re paid to think for ourselves and come up with new ideas/solutions/areas to pursue with the aim of developing new therapies and diagnostics for disease. We are very much aware of what we don’t know (and how much science there is to figure out!) and the limits of ML/AI…hence the pushback against company leadership.
then again, maybe I’m just an old fart with an onion on my belt.
The Audacity of Krope
Romulans? Rick of Rick and Morty?
JetsamPool
Yes, it is pretty obvious that these tech bros want a sci-fi future. And not a Star Trek, post-scarcity future, but a 1980s-vintage, direct-to-TV cyberpunk thriller with a billionaire elite, where everyone else lives in a poorly lit urban setting with various sci-fi gimmicks making their lives worse. And the police have Cybertrucks patrolling the streets, just so those plastic rear bed panels can slide back menacingly, revealing a machine gun. Why else would Elno build them to look like that?
This whole AI business smells like a bubble. If they train it on Nazi propaganda, of course they are going to get a Skynet-type result.
dm
@Matt McIrvin: His (Joseph Weizenbaum’s) book, Computer Power and Human Reason, has a sub-title that pretty much fits the tech-billionaire delusion: “From judgment to calculation”
moonbat
The problem of a person’s inability to distinguish fact from fiction has always been with us. But give that person more money than they could spend in 1,000 lifetimes (and steady doses of designer drugs), and it becomes an issue for the rest of us. I say, tax these idiots back into a more realistic world view.
“Aliens built the pyramids” is the modern iteration of “Phoenicians built Great Zimbabwe” back at the turn of the 20th century, i.e., racism dressed up as pseudoscientific “theory,” today made popular by entertainment media that gives the more idiotic “theories” the most attention.
BUT, and this is a big but, it is possible, even necessary, to have fantasy in a universe where an unblinking gaze at the state of things would crush anyone’s spirit if they could not imagine something better. This discussion made me think of a passage from the very humanist author Terry Pratchett’s novel Hogfather, where Death and Susan discuss the issue, that begins: “Humans need fantasy to be human. To be the place where the falling angel meets the rising ape.”
BellyCat
This is a profound insight.
(Are you feeling OK?!?! //)
kmax
Programmer: please solve global warming.
AI: humans are the problem
.
.
Missiles launch…
MinuteMan
@Baud:
Look for the GOP administration to roll out sanctions against members of the court like they did when arrest warrants were issued for Bibi and his merry gang of war criminals.
Baud
@BellyCat:
No, I’m a little under the weather today.
Emily B.
@moonbat: I love that passage from HOGFATHER! I think about it a lot these days.
Somehow I missed reading Terry Pratchett all these years—but discovered his novels just in time for our current hellscape. Thank god (pick one).
wenchacha
@Another Scott: Where are the Illuminati people? The Tri- lateralists? They ought to be in on all the fun.
apocalipstick
@Matt McIrvin:
I’ve always thought that was the point of Clarke’s rule: that technology seems like magic because we do not understand it, and thus do not understand its limitations and costs.
And magic always has rules and costs, at least in good stories.
apocalipstick
@Baud:
One of my bugaboos is calling everything ‘AI’. “These scenes were created using AI.” You mean they are animated?
A friend of mine told me seriously how he uses AI prompts to write letters of reference. We’ve had those for years; they’re called templates.
The Pale Scot
This has all been covered before, Sam. The answer is forty-two.
alex j.
These TITAN TECH BROs are so used to snapping their fingers and having some fringe food sourced from halfway across the planet arrive piping hot, or their lattes handed to them at exactly exactly exactly the “correct” temperature EVERY TIME…..they think they mentally willed those very items into existence. If they WANT it, that demand/command alone means it will magically appear, like a flunky handing you a coffee cup. Their very thoughts defy space-time……
Captain C
@hells littlest angel: Or complete misreadings of more recent sci fi. For example, Lone Skum claims to be a big fan of Iain M. Banks’ Cultureverse, and yet Skum’s understanding of said ‘verse seems to be pretty much as wrong as possible. In the Culture, people like him and Thiel would be placed in an immersive VR setup so they wouldn’t be able to harm anyone else.
WTFGhost
@The Pale Scot: Right. And it’s always hard to understand the answer, if you don’t quite know what question you’re trying to ask.
@Emily B.: Anoia is one of my favorite, and I find cleaning drawers to be a useful form of prayer, sometimes.
Captain C
@satby: Nominated.
Captain C
@bluefoot: Also nominated.
ruckus
@Baud:
You are most likely very, very correct – but.
Look at the world today, look what we are doing here, now. It wasn’t all that long ago that the backyard fence we talked over was replaced with what we are doing here. Most of us have never met and likely never will, if for no other reason than distance.
Look how different the day to day world is from over 1/2 a century ago. Think about what we are doing here. Look at a current car and compare it to one 20-25 yrs old. Do you still have a land line phone? Now none of these are things that we didn’t do before; it’s just how, and with whom, we do them. This is the world’s largest backyard fence. But the conversation is just a tad different. That tad is doing a hell of a lot of work.
Look at the day to day world around you. A bunch of it is the same things done 50-75 years ago – it is after all humanity. But the scope is entirely different. And a lot of the change is that we can see, discuss and understand differently than we did in the lifetime of humans alive today. I learned electronics when vacuum tubes were sold in grocery stores. How many here never saw that? The world changes with knowledge (and sometimes pompous arrogance). How old is what we are doing here, on the world’s biggest backyard fence? How much has changed within the lifetime of humans alive today? How much has it changed from the first people that lived here to today? Just a tad. In my lifetime (and yes I’m an old…) the world has changed significantly. And yes humans still are, although some seem less human. They really aren’t, we can just see them more clearly.
WTFGhost
@alex j.: Well, and, this is not unlike Jefferson building a mansion where there was no water. He just calculated the number of slaves needed to haul the water to fill cisterns, or whatever storage they used.
Like an old riddle: you’re a new 2nd Louie, and you have a sergeant who runs two 10-person squads. You have an m-foot pole you must place in a hole n feet deep; you have three lengths of rope (none the same length, of course). How do you get the pole up?
“Sergeant! Get that pole up!”
It’s no longer your problem, and now, it looks easy.
WTFGhost
@Baud: What have I told you about playing Minecraft in real life? You get back from that mystical cavern, under the earth, and all weather upon it, and get back into the real world with the rest of us!
Unless you can share the secret entrance, of course.
azlib
I am a retired IT professional and I have always thought the current “AI” bubble is a fraud. I do need to read this guy’s book. Another good book is “Empire of AI” by Karen Hao. Really good insight into OpenAI’s delusions and fantasies. They really do believe AGI is right around the corner, even though AGI is ill defined. The quest for AGI is simply a way to extract more money from gullible investors.
No One of Consequence
I love this discussion, but I am wondering in earnest if it is worth arguing in good faith on this forum. Having decades of BJ onions under my belt, I know better than to ask this question.
Too many good folks here have commented, and I will assume also in good faith. Except Baud.
The indistinguishable-from-magic quote: I also agree with the follow-on commenter who said it was an expression of practical equivalence. Most of us, having (let’s say) a high school education, would mostly assume that the laws of physics, as we learned them, do not suspend themselves arbitrarily. Things falling up would raise eyebrows. If an alien landed and bopped on out of his spacecraft, which had (of course) landed in a cow pasture, and you were there minding your own business when you observed the alien point a device at a cow, which immediately started to fall up, why, you might reasonably assume that the alien has some technology beyond our current understanding that makes things fall up.
Now, let’s say it wasn’t a spacecraft and an alien, but instead a crusty old dude in a purple robe with stars and moons on his pointed hat, who cruises in on a small cloud of purple and green swirling fog and points a rather obvious-looking wand at one of the cows, which immediately starts to fall up. Why, one might reasonably assume that this little wizard has fantastical cow-manipulating magic abilities.
I, for one, always took this quote to mean that the only difference between these two unbelievable examples is the dress of the cow manipulator.
But, then again, I am getting older. So fare us all. So far.
-NOoC
No One of Consequence
Sorry, had too much fun with my (rather strained) example above. I believe the skepticism is warranted. I have been through a few tech bubbles now. AI will undoubtedly burst to some extent, short of a new model release that truly moves the needle. I’m not sure what that would entail, myself.
I did want to pose a (in my opinion) very significant question for those who think AGI impossible or even highly unlikely: Based on our current scientific (evidence-based) understanding of the human brain, the structure provided by our genetics is one thing, but the plasticity of our neural networks, which make effective use of the unreliable substrate of biology, is another. Is it so unreasonable to think that digital circuits (considered merely as value-holder mechanisms) are not so different from dendrites, given the complex and constant re-wiring our own brains are doing every minute?
I ask this in all earnest seriousness. I am not so certain. An LLM alone probably isn’t going to get us to AGI, and I also agree that AGI is fairly loosely defined by most folks, even in their own minds. However, I also believe that our own human wetware isn’t so different from an attempt to create functional neural networks in silicon. I’m not being hand-wavy about the specifics; I am very much interested in the specifics. But my own thinking on the matter is closer to: if/when we see awareness or consciousness in an AI model/system, we may not recognize it. It will be alien in a way we can really only guess at.
I posit such emergent capabilities may not arise until models are given the ability to self-modify and to ‘experience’ temporally. I used to think corporeal sensory inputs or real-world ambulation/manipulation would be required, but now I am no longer so sure. Would they be necessary? Perhaps, to arrive at a human-recognizable consciousness.
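The “value-holder” analogy above can be made concrete with a toy artificial neuron (my own illustrative sketch, nothing from the thread): the “dendrites” are just stored numbers, and “plasticity” is nothing more than nudging those numbers after each experience.

```python
# A minimal artificial neuron (illustrative sketch, not any real AI system):
# the "connections" are just stored numbers (weights), and "re-wiring"
# is nothing more than nudging those numbers after each experience.

def step(x):
    return 1 if x > 0 else 0

# Two-input neuron learning the OR function by the classic perceptron rule.
weights = [0.0, 0.0]
bias = 0.0
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

for _ in range(10):  # a few passes over the "experiences"
    for inputs, target in examples:
        output = step(weights[0] * inputs[0] + weights[1] * inputs[1] + bias)
        error = target - output
        # "Plasticity": strengthen or weaken each connection toward the target.
        weights = [w + 0.1 * error * x for w, x in zip(weights, inputs)]
        bias += 0.1 * error

print([step(weights[0] * a + weights[1] * b + bias) for (a, b), _ in examples])
# → [0, 1, 1, 1]
```

Of course, a real brain is vastly more complicated than one thresholded sum, but the point of the analogy is that the adjustable-value mechanism itself is simple in both substrates.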
I am not having a spittle-flecked nerdgasm about AI; I am skeptical as well, but I am also very intrigued and quite surprised at the progress that has been and is being made. Is the whole thing worth it? There will be problem spaces where AI will be able to provide value. Real-world, recognizable value. AlphaFold is a good example of that. If you could give the AI molecular rules, materials engineering / chemistry / protein synthesis could just iterate blindly through possibilities, some of which would work and could be investigated further. (Again, my understanding of any of these three sciences/fields is minimal.)
We do indeed live in interesting times. I am concerned with the deepfake BS and money to push it for the next round of elections. Gone be some straight stoopid shit being flung by monkeys on meth.
-NOoC
artem1s
Nothing new about any of this. Atlas Shrugged’s main character was based on this kind of magical thinking. The Industrial and Robber Baron ages spawned all kinds of these assholes. Before this there were the alchemists. Some of the shit that came out of the Age of Enlightenment and the Renaissance is truly nuts. We also got some really great thinkers and helpful stuff too. But grifters always find a way to exploit the next big thing. Since we spent the last 75 years sinking more money into football, porn, cat videos, social media, and Ponzi schemes than into education and science, we’ve been overdue for a collapse back into a dark age. How hard it is to come back depends on how much gets destroyed. If we are very, very lucky we won’t lose all the stored scientific research data when the energy grid collapses under the weight of bitcoin and AI server farms.
BellyCat
FTFY. You’re welcome.
Martin
@Baud: Pretty sure China is simply hitting the duck curve that has plagued CA. We too dialed back on new generation so we could dump all of our resources into batteries, because we were overgenerating on solar so badly that the solar investors’ ability to pay off their projects was being threatened (when you overgenerate, the wholesale price of electricity goes to zero and your panels earn no money during that time). Once we get enough grid battery installed and the overgeneration problem gets back to a manageable size, you’ll see solar/wind + battery get cranked back up again. Texas has the same problem, but it’s a bit harder for them to solve: solar overgeneration is much more predictable than wind overgeneration, so the battery costs to counter the solar problem are lower than the costs to counter the wind one. China has a lot of wind and tends to look more like Texas’s curve than California’s.
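For what it’s worth, the overgeneration mechanism can be sketched with toy numbers (illustrative only, not real CAISO or ERCOT data): net load is demand minus solar output, and in the hours where solar exceeds demand the wholesale price collapses toward zero, which is exactly when the panels earn nothing.

```python
# Toy duck-curve sketch (made-up hourly numbers, not real grid data):
# the wholesale price collapses toward zero whenever solar output
# exceeds demand, so panels earn nothing in those midday hours.

hours = list(range(24))
# Stylized demand (MW): morning ramp, softer midday, evening peak.
demand = [18, 17, 16, 16, 17, 19, 22, 25, 26, 25, 24, 23,
          23, 23, 24, 26, 29, 32, 33, 31, 28, 24, 21, 19]
# Stylized solar output (MW): zero at night, peaking around noon.
solar = [0, 0, 0, 0, 0, 1, 4, 9, 15, 20, 22, 26,
         27, 26, 23, 18, 12, 6, 2, 0, 0, 0, 0, 0]

for h in hours:
    net_load = demand[h] - solar[h]  # what non-solar generators must cover
    # Crude price model: a flat price normally, zero on overgeneration.
    price = 0.0 if net_load <= 0 else 40.0
    if net_load <= 0:
        print(f"{h:02d}:00  overgeneration of {-net_load} MW, price -> $0/MWh")
```

The shape of the net-load series, deep midday belly and steep evening ramp, is the “duck” in question; a battery buys (or stores) during the belly and discharges on the ramp.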
Martin
@No One of Consequence:
So, the experts who are not invested in the outcome argue that the current AI progress does not lead to AGI. The reasoning part is lacking, as are introspection and a bunch of other things.
AGI is loosely defined, but the inflection point for society is not. An AI that can, by itself, design a better AI is the inflection point. It may lack other human characteristics, but once you hit that point, we’re off to the economic races. You now have a machine that can make technological progress faster than humanity can; whatever gaps exist in robotics or AI, it alone will fill, and the owner of that AI will in time own all future progress in humanity. That’s an untenable state, and if we haven’t already thrown our economic systems in the garbage, that moment is when it’ll have to happen.
So we’ll recognize it because we’ve got a pretty good sense of ‘here’s where everything breaks’ but otherwise I agree that it won’t map onto human consciousness in a 1:1 manner.
Moondoggus
@Baud:
as a fan of Star Trek, this sounds like M5.
ask it how to stop global warming and its answer is to turn itself off, but it won’t let you and starts killing people instead.
as an aside, M5 was programmed using the engrams of a crazy person.
These LLMs are programmed using the content of the Internet. Does the Internet seem like a sane place to you?
Moondoggus
@Martin:
there’s a paper published in Nature showing that LLM-generated data used to train an LLM leads to an LLM that hallucinates all the time. The proof has to do with assumptions about the training data that are violated when the training data comes from an LLM.
the researchers conclude that eventually, LLMs will have to be trained on curated data, since much of the content on the Internet will be LLM-generated.
if an LLM can be used to curate the data, then the technology might be able to sustain itself.
otherwise, we may be nearing peak LLM effectiveness, and subsequent ingestion of data without trustworthy human validation will result in garbage, or an LLM that cannot keep up with new information.
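The collapse mechanism being described can be illustrated with a toy simulation (my own sketch, not the Nature paper’s actual method): fit a simple distribution to one “generation’s” output, then train the next generation only on samples drawn from that fit. Finite-sample fitting keeps losing the tails, so the distribution degenerates over generations.

```python
import random
import statistics

# Toy "model collapse" loop (illustrative sketch, not the Nature paper's
# method): each generation fits a Gaussian to the previous generation's
# samples, then the next generation trains only on data drawn from that fit.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(20)]  # generation 0: "human" data

variances = []
for generation in range(500):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    variances.append(sigma ** 2)
    # The next generation never sees human data, only model output.
    data = [random.gauss(mu, sigma) for _ in range(20)]

# Sampling error compounds: the fitted distribution keeps losing variance,
# so later generations cluster ever more tightly around one value.
print(f"variance: generation 0 = {variances[0]:.3f}, "
      f"generation 499 = {variances[-1]:.6f}")
```

The small sample size (20 points per generation) is deliberate; it exaggerates the same tail-loss effect that, at vastly larger scale, is the worry with LLMs ingesting their own output.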