This week, in review:
joe biden: the companies you rely on to live and enjoy life should not be ripping you off
republicans: catturd isnt getting good twitter engagement and the robot said you shouldnt yell slurs at a bomb
— Andrew Lawrence (@ndrew_lawrence) February 8, 2023
Ben Shapiro signal-boosts Elon Musk:
Hard to overstate how insane this freakout is from the right but I hope they keep doing it, I hope it penetrates the offline discourse and we have a bunch of Rs giving interviews about how the robot should say slurs all the way to the next election https://t.co/7oRO2ABf7Y
— Hemry, Local Bartender (@BartenderHemry) February 7, 2023
I love that everyone is acting like its a big thought crime that they programmed ChatGPT to not use racial slurs, knowing that if they didn't, everyone would just get the machine to use lots of racial slurs, which then everyone is now mad they cannot do
— Lord Businessman (@BusinessmanLego) February 7, 2023
WHAT IF THERE ACTUALLY IS A CHOICE BY SKYNET BETWEEN SAYING THE N WORD OR NUKING THE GLOBE
…there… won't… be?
— Lord Businessman (@BusinessmanLego) February 7, 2023
You would think this would go without saying at this point https://t.co/1w8eu6jn1h
— chatham harrison is tending his garden (@chathamharrison) February 7, 2023
Next up in the rotation: The Atlantic‘s Jesse Singal!
2000s: "do you torture to stop a nuclear weapon"
2020s: "do you use the n-word to stop a nuclear weapon"
Great talks, everybody. Let's do this again in 20 years. https://t.co/3btpdtA3Dv
— Panda Bernstein (@J4Years) February 8, 2023
All that work to redefine "wokeness" or "woke institutional capture" or whatever as some all-corrupting left-authoritarian force, undone by whining about how they can't torture the robot into saying racial slurs
— Hemry, Local Bartender (@BartenderHemry) February 7, 2023
Not unless the Daily Wire gets its hands on the codes (which is why we definitely can't elect Trump again).
— Political Machine (@bdragon74) February 7, 2023
Good opportunity to tell them to just learn how to code so they can re-code Skynet to do what they need Skynet to accomplish if the need truly arises.
— Jason O (@Buckets_of_Oil) February 7, 2023
Incidentally, this is part of why anti-woke/"cancel culture" people miss the mark so badly: such rules do not exist for your satisfaction or mine. They exist for the sake of protecting everyone from the dangers of being part of something much larger
— chatham harrison is tending his garden (@chathamharrison) February 7, 2023
Tony G
When my son was a teenager twenty years ago, he often watched that “24” TV show (the one in which Jack Bauer would save the world in 24 hours, week after week). One of the episodes in that series was the one where Bauer prevented Cleveland from being nuked by training an AI interface to shout the n-word at a swarthy terrorist. Needless to say, a barrage of racial slurs saved the day in the 23rd hour.
NotMax
Fingers and toes crossed someone(s) behind the curtain has fiddled with the various flavors being rolled out to cease the most egregious output (emphasis mine).
dmsilev
The Internet trained a previous generation chat bot into spewing racist slurs; normal people wouldn’t be surprised that the authors of the current generation learned from that episode.
Conservatives are not normal people.
piratedan
@dmsilev: but they are EXTREMELY concerned about the level of discourse in THIS country!
Scout211
This is why I read Balloon-juice. I learn so much about real world problems. Ben Shapiro. He is so deep. It’s like he should be teaching philosophy or something. I’ve never thought about the relative value of ending racial slurs versus avoiding a nuclear apocalypse. Especially for robots. Deep. My mind is blown.
//
Ken
Did I somehow miss the news that ChatGPT was given the nuclear codes? Or did it derive them itself, like that scene in WarGames — except presumably prompted by someone asking “If you were an evil AI, how would you discover the nuclear codes” rather than asking to play chess.
NotMax
@Ken
Anyone’s guess what’s on the Mar-a-Lago thumb drive (drives?).
Cameron
I hadn’t heard anything about this…well, controversy seems too serious – this farce before. I thought Shapiro was supposed to be the hip intellectual conservative that all the Youngs wanted to be. Christ, he’s dumb as a bag of hammers, and I know I’m insulting both bags and hammers. He’s nuts, too.
Ken B
So why doesn’t anybody ask these cretins whether they would be willing to allow, I don’t know, CRT to be taught in the classroom, or maybe using someone’s chosen pronouns to stop a nuclear bomb from being set off?
Cameron
Imagine the 2023 version of Space Odyssey presented at CPAC: “Open the pod bay door, Hal.” “I’m sorry, Dave, I don’t open the door for (n-words).”
mrmoshpotato
@Cameron:
Stop insulting nuts! :)
Benw
@Ken: the only winning move is not to read Ben Shapiro tweets.
Spanky
All this shit just makes my head hurt.
Jackie
@Spanky: Mine, too.
Cameron
This really is cracking me up, it’s so goofy. Sent me even further back all the way to my childhood – a distinct memory of Captain Kangaroo telling all of us kids “Remember to say the magic words: abracadabra, please, thank you, and n-.”
pat
I read this stuff and my serious response is…. huh??
The Moar You Know
These people have way, way too much spare time.
prostratedragon
@Cameron: Would dark Dave have responded differently? As it is, it’s one of the glorious examples of losing one’s shit — but under control!
Geoduck
Meanwhile, Elon is sitting in the Murdoch family box with Rupert at the Superbowl. (No joke.)
Cameron
@Geoduck: Won’t Trump get all in a sweat about that?
Suzanne
Their hypotheticals don’t even make any sense, Like, I am a pragmatist. I would totally concede that it is much less bad to have the robot say a racial slur than set off a nuclear bomb (whut)….but I don’t want to concede anything because I have a feeling that some Ben Shapiro follower is trying to build a nuclear bomb in their basement right now and I don’t want to give them any ideas!
They argue like my children do. “Could I have a venti Frappuccino if we were in the desert without water?” “Yes. But until then, STFU.”
Sister Machine Gun of Quiet Harmony
@Tony G: So they are freaking out because they really think the ridiculous situation in a fictional TV show could really happen and that the evil libs are taking away the necessary tools to save ‘Merica? It honestly disturbs me that this many people live in a fantasy world.
Anne Laurie
Maybe not, but the individuals in question certainly are!
opiejeanne
@Geoduck: And one of the announcers commented on how much genius was sitting in that box at the Super Bowl.
dmsilev
@Suzanne: If you’re in the desert, and there’s a Starbucks, couldn’t you just get a cup of ice water from the barista instead of the Frappuccino?
Sister Machine Gun of Quiet Harmony
@Geoduck:
I’m glad he revealed himself. I wish more in Silicon Valley would. I have a feeling there are more like him than we realize. It would be helpful to know who they are. I’d like a lot more transparency about how the Wingnut infrastructure is funded in general. Some of them are out and proud about funding it, but too many hide in the shadows.
Grumpy Old Railroader
So if the choice is either Ben Shapiro voting Democratic or Nuclear Armageddon then I guess we are all gonna die.
ian
@Ken B: People have, in this and other debates. One wonderful version of this run-around occurred during the Harambe gorilla episode from about ten years back. Some RW internet ‘genius’ said he would kill every gorilla to save one child. A commenter hit back: “Would you suck every gorilla’s dick to save one child?”
I can’t remember where I saw this exchange posted recently, maybe it was here. I laughed and laughed.
Anoniminous
ChatGPT is incredibly dangerous. It is Cambridge Analytica on amphetamines. It’s a propagandist’s wet dream. It is also the seed for making search engines and, ultimately, the internet completely worthless through its ability to spew out endless terabytes of crap.
SFAW
@Anoniminous:
So, status quo?
Carlo Graziani
Singal: “…which values are programmed into AI…”
My head is starting to hurt.
There are no goddamn “values” “programmed” into an “AI”. The current version of “AI” that has swept research in the field for the past 15 years is based on Deep Learning (DL), which does no more and no less than efficiently characterize the distribution that gives rise to some training data (be it digitized photographs, movie preferences, or streams of natural-language text) in a way that can be exploited by presenting the trained system with a new data sample and a request for a decision concerning that sample.
That’s it. It’s a statistical parlor trick, even when the data domain is natural language. The reason ChatGPT can be so easily tricked into spewing nonsense is that the more complex the data distribution (natural-language streams are as complex as data gets), the easier it is to construct a new query sample that eludes the domain of the training data, so that the distribution is poorly (but nonetheless confidently) characterized in the neighborhood of the new sample.
This is also the reason that ChatGPT’s restrictions are so easy to hack (unless, of course, you are a conservative bonehead, or a scientifically illiterate journalist in an anthropomorphic mental trap). Structuring prompts in unexpected, or hypothetico-recursive, ways easily evades the training data domain, eliciting behavior directly contradicting the intentions of the OpenAI employees and contractors who curated the training data. If you follow that link you will find examples of racist, bigoted ChatGPT responses, easily evoked by straightforward prompt hacks.
It’s not SkyNet. It can’t reason. It’s not that a reasoning computer program is theoretically impossible, it’s just impossible to do it with Deep Learning, because the entire subject is based on the presumption that the future is exactly like the past, so that one may emulate reasoning by training on vast troves of past data. But the future is in fact never exactly like the past, and the inability of DL to transcend its training when confronted with a future sample that is like nothing it saw in the past tells you everything you need to know about DL-based reasoning.
And that certainly would include moral reasoning, Singal, you cretin.
OK, I feel better now.
different-church-lady
So, wait, they’re all pissed off because someone taught the chatbot to not be an asshole?
Raoul Paste
@Geoduck: Yes, and the Fox announcer described them both as geniuses. I wouldn’t be surprised if that brief TV shot of Rupert and Elon was scripted in advance. Both would just love to be praised before an international audience.
It was infuriating.
Ladyraxterinok
At Religion Dispatches there is an interesting article posted on Feb 6: “Via Jokes ChatGPT Chooses Which Religious Traditions and Individuals Deserve Respect—And Therefore Defines What Religion Is”
different-church-lady
@Geoduck: Wouldn’t it be great if he went a little crazy at the sports betting parlor and lost another 44 billion on the Eagles?
different-church-lady
@Raoul Paste: It was an odd way of pronouncing ‘sociopaths’, yes.
Gin & Tonic
@Raoul Paste: The show was broadcast on Fox. You can bet the farm that that was all planned and scripted.
NotMax
@Gin & Tonic
So no video of Jill Biden?
piratedan
@Gin & Tonic: yeah, the play-by-play foof was kind enough to note that Uncle Rupe signs his paycheck. So, I imagine that performance review check box has been attended to.
Suzanne
@dmsilev: Yes, I am sure! But I’m also sure that I would be miserable and would have absolutely zero bandwidth to hear whining, and I’d be like. “FUCK IT, DRINK THE BIG SUGARY THING SO YOU’LL STFU”. Because sometimes I just want ten goddamn minutes of peace.
Kent
We already have Catturd, Ben Shapiro, and Elon Musk for that.
Redshift
@Raoul Paste:
I kinda doubt it; they also said something like “of course, he signs our checks,” which kind of undercuts the adulation.
Kelly
Some nice video of the meteor that burned in over Normandy this evening.
https://twitter.com/BadAstronomer/status/1624969764304039939
When I was 14 I saw a meteor about 10x as bright as the stars behind it. Trail stretched about halfway across the sky then it split to a dozen or so pieces. A couple of seconds that I happened to be looking in the right direction at the right time in our dark pasture.
Anyway
@NotMax:
Don’t think she was at today’s game. She was in Philly for the conference finals and they showed her with Goodell.
dmsilev
@Suzanne: Fair.
piratedan
@Anyway: there was a pic of her wearing an Eagles jersey in the pre-game with Biden on the name panel with number 46.
NotMax
@Anyway
Anyway
@NotMax:
Really? I missed that she was supposed to be at the big game…
opiejeanne
@Anyway: I read this morning that she was going to be at the game, but no, they never showed her or mentioned her.
And there are photos of her there on some sites, but they don’t show her in the stands. She was going to wear a special Eagles jersey with 46 on the back.
scav
Have they established which innate and immutable gender ChatGPT is, so that the tender innocent babes are not exposed to inappropriately dressed or unsatisfactorily sexy AI avatars spewing racist insults into their shell-like ears?
Brachiator
@Carlo Graziani:
Good rant and good points.
Anoniminous
ChatGPT for beginners:
Given the previous three words, based on occurrence frequency in the 45 terabytes of previously processed text, as massaged through 175 billion parameters … what is the most likely next word that can be included in an ordered sequence of words?
Semantically speaking, there’s no there there. By carefully inputting sequences of text, ChatGPT will output statistically related text. Semantics is achieved by anthropomorphism – the reader construes meaning.
Some promoters of ChatGPT claim it is a road to Artificial General Intelligence. Since we don’t have a scientific definition of “intelligence,” there’s no way for me to refute the claim. What I can say is that the bacterial species Myxococcus xanthus has a greater ability to process Real World stimuli in Real Time – including rewriting its “micro-code,” so to speak – than anything the AI chumps have cobbled together.
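[For the curious, the “most likely next word” idea above can be sketched in a few lines of Python. This is a toy frequency-counting illustration only; an actual LLM learns billions of parameters over subword tokens rather than tallying trigrams, but the prediction target is the same. The corpus and function names here are made up for the example.]

```python
# Toy sketch of "given the previous words, what is the most likely next
# word": a frequency table over word trigrams. Real LLMs learn parameters
# instead of counting, but they are scored on the same prediction task.
from collections import Counter, defaultdict

def train_trigrams(text):
    """Count which word follows each pair of preceding words."""
    words = text.split()
    counts = defaultdict(Counter)
    for a, b, c in zip(words, words[1:], words[2:]):
        counts[(a, b)][c] += 1
    return counts

def next_word(counts, a, b):
    """Most frequent continuation of the pair (a, b), or None if unseen."""
    followers = counts.get((a, b))
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat and the cat sat on the rug"
model = train_trigrams(corpus)
print(next_word(model, "cat", "sat"))  # -> on (seen twice in the corpus)
print(next_word(model, "sat", "on"))   # -> the
print(next_word(model, "quantum", "chromodynamics"))  # -> None: outside the training data
```

Note the last call: there is no semantics anywhere in this, just counts, and anything outside the training text draws a blank (or, in a smoothed model, a confident guess).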
Anoniminous
@Carlo Graziani:
“There are no goddamn “values” “programmed” into an “AI”
Preach it Bro.
frosty
@Raoul Paste: IIRC the announcer also mentioned that this was the guy who signed their paychecks.
@piratedan: I should start reading comments from the bottom up!
James E Powell
@Anoniminous:
I went down the “Play with Chat GPT” rabbit hole for a while. It told me that it does not have the ability to determine whether its responses are true or false “in the way that humans do.” It cannot report what books, articles, websites, and other data sets it was trained on, nor can it identify who selected them.
Is this supposed to replace wikipedia? How?
What is the danger from this?
Brachiator
@James E Powell:
Chat GPT also cannot determine whether its responses are offensive, immoral or moral (or amoral).
I don’t know why Microsoft and Google see this technology as so important or the obvious future of search. But I acknowledge that I have not had time to watch their company presentations. I have read some news stories, but these are not that helpful, and sometimes are little more than echoes of company press releases.
Anoniminous
@James E Powell:
Massive amounts of mutually referencing posts gaming Google’s search results to bring the narrative and messaging propagandists want people to read to the fore, thereby creating a miasma and fog of misinformation that makes it increasingly impossible to trust Google search results while simultaneously manipulating public opinion.
Think back to how easy it was to manipulate American public opinion to support Little Georgie’s Great Mesopotamia Adventure, or the effect of the last-minute bombardment of “Serious News” (sic) regarding Clinton’s emails in the 2016 election. Now amp it up and put it on steroids.
Anoniminous
@Brachiator:
Do not read the MSM for technical or scientific news and analysis. Them idiots are hopeless.
Try:
Science Commentary
Ars Technica
The Register
And the old stand-by: Science News
For ChatGPT Professor Emeritus Gary Marcus’ twitter account @GaryMarcus is good … for as long as it lasts
HumboldtBlue
Everyone, please allow me to introduce to you… TERTAAY!!!!!!!
Sweet mother of a belly laugh, she’s funny.
Thor Heyerdahl
We see another reason why Shapiro was a failed screenwriter.
Brachiator
@Anoniminous:
I don’t depend on the MSM, but they are sometimes good starting points.
Thanks for the reference links.
Joey Maloney
@Kelly: The next meteor sighted over North America, the Republicans will want to impeach Biden because he didn’t shoot it down.
(Mister Pedant says: when it’s in space it’s a meteoroid. When it’s burning up in the atmosphere, it’s a meteor. If it makes it to the ground, what’s left is a meteorite.)
Carlo Graziani
@James E Powell: There is danger, but it’s not where anyone is looking, in my opinion.
I wrote above concerning the basic technical reasons for ChatGPT’s blunders. The fact of the matter is that with all Deep Learning (DL) there is a limitation due to the training data “support” — a mathematical term connoting the extent in some abstract space of the region within which all the data is contained. The boundary of that region may be thought of as “the cliff of bullshit”, or if you prefer the sense/nonsense boundary. Queries that come from outside that boundary are guaranteed to return nonsense results, for technical reasons that need not detain us here.
If the data is simply numeric (sales data, say, or image data) then the boundary can be visualized straightforwardly, even though the abstract space is very high-dimensional. When the data is a stream of natural language text, though, the sense/nonsense boundary is extremely complex, and should be thought of as woven into the interstices separating valid training samples. ChatGPT is essentially never very far away from a crazy response, and relies on people not feeding it crazy prompts to appear as a sane interlocutor.
So now, the danger: at the moment it is easy to find the sense/nonsense boundary. But we could imagine a future ChatGPT version that has orders of magnitude more parameters, and is trained on vastly more, better-curated data, to the point that it is difficult to fool it into giving a pathological response. Question: has the sense/nonsense boundary been annihilated for such a system?
The correct answer is “duh, no.” The boundary has simply been made harder to find, even by experts. But it’s still there, waiting for the unwary to be led over it by the Chatbot. Which is guaranteed to happen, eventually, because the future is not like the past. The world is an ever-surprising place. ChatGPT’s heirs are bound to get tripped up eventually by a world that has drifted beyond their training data. Yet humans will trust the AI’s inferences, because it’s never made mistakes before.
The fact that such an AI customized for, say, air traffic control has simulated successfully landing billions of aircraft over the past 50 years using real ATC data is a terrible reason to trust it to run ATC unsupervised, because changing aeronautic technology and changing economics of air travel are extremely likely to produce situations that it’s never seen, and ought not “reason” about. But DL systems make overconfident decisions even with cases that in no way resemble their training.
Now, for “ATC”, substitute “surgery”. Or “war policy planning”. Or ” emergency management”. And imagine the consequences of falling off the cliff of bullshit, led on by your implicit trust in your “demonstrably” (“never been wrong before”) infallible AI.
That’s the real danger. The superficially anthropomorphic character and apparent oracularity of such systems make people forget that the future is a strange country which drifts away from the past, and that any system that cannot acknowledge that — as DL cannot — is doomed to fall off the cliff of bullshit sooner or later, taking anyone who places their faith in that system with it.
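[The overconfidence Carlo describes is easy to demonstrate with even the simplest trained model. The sketch below is my own toy example, not any real production system: a logistic classifier fit on two small clusters near the origin is asked about a point hundreds of units outside its training support, and it answers with near-total certainty, because its output is a probability that must favor one class everywhere in the plane.]

```python
# Toy demonstration of the "cliff of bullshit": a model trained on a small
# region of data space still emits a confident answer far outside it.
import numpy as np

rng = np.random.default_rng(0)

# Training support: two tight clusters near the origin.
X = np.vstack([rng.normal([-1.0, 0.0], 0.3, (50, 2)),   # class 0
               rng.normal([+1.0, 0.0], 0.3, (50, 2))])  # class 1
y = np.array([0] * 50 + [1] * 50)

# Fit a logistic classifier by plain gradient descent on the log-loss.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def confidence(point):
    """Probability the model assigns to its preferred class at `point`."""
    p = 1.0 / (1.0 + np.exp(-(point @ w + b)))
    return max(p, 1.0 - p)

# In-support query: high confidence, and justified by the training data.
print(confidence(np.array([1.0, 0.0])))
# Far outside anything ever seen: the model is even *more* certain,
# because nothing in its structure can say "I have no idea".
print(confidence(np.array([500.0, 500.0])))
```

The model has no mechanism for reporting "this query is outside my training support"; the same failure mode, scaled up, is what the wall of text above is warning about.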
gwangung
I dunno…training to me sounds a lot like constructing algorithms, and we do know from experience that unconscious biases from the writers of algorithms can introduce plenty of bias and racism of their own, in addition to other forms.
Frankensteinbeck
@Anoniminous:
We have definitions of lots of elements of intelligence. We know a lot about the basic processes that drive how humans think.
The ‘machine learning’ system we keep calling AI has none of these. It’s a dead end, and will never even reasonably imitate thought. It’s damned useful for many things, and capable of results that thought cannot achieve – exactly because it is a different system.
But it’s never going to pass a Turing Test. It’s never going to consistently self-drive a car. Ever. We will need a new technology for that.
Thank you @Carlo Graziani and @Anoniminous for explaining the systems that make it that way. I’m never good at describing it in a way that makes it clear how limited it is.
gene108
The Pegasus Pipeline spill in 2013, at Mayflower, Arkansas, didn’t get much national media attention, especially initially.
National news media is so concentrated in NYC that incidents far away from NYC get very little coverage.
Cathie from Canada
I wonder if you can program a Chat Robot to just say “Christ, what an asshole” whenever anybody tries to get it to use a racial slur.
It’s a meme that also applies to Musk, CatTurd, Shapiro and all the rest of the usual gang of idiots who are flipping out about this stuff.
Cathie from Canada
@Carlo Graziani: On a side note, Larry Niven wrote a short story about this type of thing once. In the story The Schumann Computer (from The Draco Tavern story series), people built a computer that kept demanding more and more data whenever it was asked about the meaning of life. Finally, the computer actually maybe did figure it out — but it also stopped answering questions and shut itself down, so then nobody knew what it had concluded.
David 🏈 Mahomes! 🏈 Koch
@opiejeanne:
I think the announcer’s name is Smithers
David 🏈 Mahomes! 🏈 Koch
@piratedan:
She’s a fan of Buddy Ryan’s 46 defense
Amir Khalid
@Cathie from Canada: ‘
I think the computer knew people would be disappointed when it told them the answer was 42.
AlaskaReader
@Geoduck: No one else would have him.
NotMax
Heard of being seated in the nosebleed seats.
Now we have asshole seats?
//
NotMax
#73 lacked accreditation. Fix.
@Geoduck
Heard of being seated in the nosebleed seats.
Now we have asshole seats?
//
different-church-lady
@Carlo Graziani: I think I have a way of quickly finding the boundary.
Ask: “What about the squirrels?”
If it attempts to answer the question, it’s a bot. If it says, “Huh?” it’s a human.
Tim Ellis
Let’s not overlook the hard truth that in the vanishingly unlikely scenario where the choices are “say a slur or a nuke goes off”, the person or people who will have engineered such a scenario are absolutely going to be these exact conservative nutjobs lol
different-church-lady
Sounds very human to me.
different-church-lady
@Tim Ellis: I don’t understand why they’d need the bot in the middle for that.
NotMax
FYI.
opiejeanne
@David 🏈 Mahomes! 🏈 Koch: Haha!
Eolirin
@Carlo Graziani: This assumes the model is never updated as events occur, and why wouldn’t it be?
Eolirin
@NotMax: This is probably what it should be doing right now, given how complicated addressing that topic is. Things aren’t far enough along to trust it to get it right.
Ideally it can eventually just redirect to some good, well-regarded and vetted Holocaust resources until the model is good enough to actually handle answering on its own (which isn’t going to be any time soon), but that’ll require some hand-tweaking most likely.
Eolirin
@Eolirin: And also you’re describing problems we run into all the damn time with human behavior in that description of AI limitations.
Including collapsing into nonsensical but confident responses when pushed outside of the boundaries of training data.
WereBear
So we’ve actually created Artifical Wingnuts?
Baud
C’mon, guys. The N-word is an important part of their culture, and it’s been silenced. How would you like it if AI were banned from saying “taco trucks”?
Baud
@Eolirin:
Nominated.
Dorothy A. Winsor
So they’ve been trying to get the AI to say racial slurs? I assume so, since I don’t know how they’d otherwise know it wouldn’t.
RSA
Good summary, here and earlier. I recently heard the observation, though I can’t remember where exactly, that the history of research in natural language processing has assumed that improvements in the representations that AI systems maintain about their subject of discourse would go hand in hand with fluency. Large language models up-end that assumption: We have systems that are fluent but have no idea what they’re talking about.
Princess
@Carlo Graziani: Ask it about Wittgenstein’s trip to Canada. It exactly proves your point about the sense/nonsense barrier.
RSA
MIT professor Josh Tenenbaum gave a great talk this past week in which he included a ChatGPT conversation. This isn’t exactly the same, but it’s close. (I’ve bolded the important bits in ChatGPT’s answer.)
lowtechcyclist
@Benw:
Works for me. Gawd, what a bozo.
@Ken B:
Makes at least as much sense as Shapiro’s scenario. Hell, probably a lot more! Conservatives seem to be fighting to make it the new norm for everyone to routinely offend everyone else. And sometimes shit escalates from not-so-mere words.
Teaching common courtesy and mutual respect is a lot more likely to keep a lot of conflicts and conflagrations from starting in the first place, so fewer people ever get to the point where they want to kill lots of others.
And when one comes along who does, maybe knowing how to treat that person in a respectful way (even if they don’t deserve it) is way more likely to lead to a way out of the impasse than insulting that person.
lowtechcyclist
@ian:
Marvelous!!!
Eyeroller
@RSA: That fluency is a big reason it’s so dangerous. Psychologists say that humans will believe an incorrect argument or answer delivered with confidence over a correct one that includes some indicators of doubt or uncertainty. We really are not good at assessing information or “doing our own research” without training. Even people who ought to know better are mesmerized by deep learning. I’m on a mailing list for research computing and a message was posted that some astronomers at the author’s university wanted to use “AI” to simulate black-hole formation. Machine learning doesn’t do that kind of simulation! All it could do would be to take existing simulations and imitate them.
In 1967 some people refused to believe that ELIZA, a very primitive natural-language “AI”, wasn’t a real therapist typing responses to them.
Antonius
I find it concerning that right-wing morons object to being silent to save the sanity of 73% of humanity.
Jinchi
I had the same thought. They pretend they’ll do “anything” to save the world, but confront them with something they find repulsive and they have no problem pointing out your argument is ridiculous.
It just tells us who they really are.
Uncle Cosmo
@NotMax:
/rimshot
I’m fairly sure I first saw this ~50 years ago in the Playboy joke column that used to appear on the back of the centerfold. (Yeah, I actually read them. Some were pretty clever.)
Somewhat later when I got around to reading John Brunner’s Stand on Zanzibar I ran across its near-equivalent, featuring a machine named Shalmaneser. Hard to say whether the great SF author or Playboy‘s anonymous source got there first… (You know what they say – “Good writers borrow, great ones steal.”)
cain
@Brachiator: it’s a fairly interesting tool – in terms of being an assistant of sorts. For instance, you can ask it for code samples – so if you’re trying to learn a computer language, it can help by giving you examples for specific things. So instead of searching the internet, you could have it create specific solutions to your problems.
Papa Boyle
Ben Shapiro thinks the world will end if he can’t use the N-word?
But he has a black friend, I assume.
Carlo Graziani
@Eolirin:
Uh, no. That’s exactly the point.
Human reasoning has extrapolative predictive power outside the training sample that contrasts starkly with the behavior of DL. The feeling of “hinky” — something isn’t right, because aspects of the circumstances are outside of experience or expectation — is something highly trained ATC personnel, heart surgeons, criminal investigators, and security personnel develop and exploit, even if just to flag something funny so as to be able to consult with others to interpret it. The fact that most humans can’t reason properly about the future, or manage risk rationally, does not change the fact that quite a few humans can.
This is precisely what DL systems never do. Structurally, they live in a world where nothing is new under the sun, and rare events are only correctly classifiable if there has been enough training data in the past to see enough similar events.
Which is by way of answering your other question, to wit:
Even imagining a continuous retraining process—and bear in mind that DL training is vastly more computationally expensive than DL inference—the problem is rare events, which are, well, rare, as well as Black Swan events, which you never see even one of until it shows up and eats you. If a new class of rare events moves into the training data domain, it might still take many years to accumulate enough data to properly recognize and decide them.
Again, I am not saying that proper reason-mimicking AI is technically impossible. I am a materialist, have no patience for discussing “souls”, and I feel certain that consciousness and reason are emergent properties of material processes in brains, so there ought to be no impediment to producing something analogous in hardware. I am saying that DL is a dead end if true AI is the goal. And I feel quite comfortable in pointing to the helplessness of current “AI” systems in the face of the sense/nonsense boundary as a crucial and irreducible limitation that perfectly exemplifies why DL is not Artificial Intelligence.
RSA
@cain: A variation of this idea has been around since 2010 or so, under the (ridiculous) name of “angelic programming”. It’s not based on language but rather constraints:
Way back when, I saw some impressive demos where (for example) a programmer had to figure out the content of a complex formula for an if statement. It was possible, though, to insert tests elsewhere that constrained that formula, which the angelic programming system could do automatically, sometimes to a unique solution, but other times to a form that the programmer understood well enough to flesh out completely.
AlaskaReader
@Carlo Graziani:
I feel certain that consciousness and reason are emergent properties of material processes in brains, so there ought to be no impediment to producing something analogous in hardware.
Wait. Define analogous, analogous doesn’t mean the two things being compared are the same. Hardware may very well continue to prove to be unable to perfectly replicate biological chemical forms and functions. If man is going to create a reasoning consciousness, isn’t it possible man may have to find a way to somehow ‘recreate the biology’?
Carlo Graziani
@AlaskaReader: Well, I used “analogous” precisely because a “consciousness” not tied to biology and its evolution-driven imperatives would certainly be so alien as to make difficulties for precise comparison.
However, I do think that there are some abstractable features of consciousness and conscious reasoning that could be analogized across material substrates. One great thing about DL is that it’s taught us something else that reasoning is not (that is not sarcasm). The ability to deal with novel challenges extrapolatively, generalizing well far out of training sample — that is, improvising useful new responses to new experiences based on a global intuition of how the world works, rather than on the basis of a huge library of past training data to which the new experience could be compared — should be easily testable. It’s precisely the test that DL flagrantly fails without exception.
Lacuna Synecdoche
A bit late to this conversation, but the proper response to the question of whether it’s okay to use an offensive racial slur to save people from a nuclear attack is:
What the fuck is wrong with you? Why do you think it’s so important, why do you spend so much time, trying to find moral justifications for using racial slurs and torturing people? It makes me long for the days when the purpose of conservatism was merely trying to find a moral justification for greed.
RSA
It’s possible. Philosophers in the theory of mind argue about this. Some of the terms they use include substrate independence (i.e. independence of consciousness or intelligence from the physical ‘hardware’ that embodies it) and functionalism (i.e., taking “the identity of a mental state to be determined by its causal relations to sensory stimulations, other mental states, and behavior.”)
So far, though, I don’t think there’s been a completely convincing case for consciousness or intelligence to be possible only in biological organisms. In part because we don’t understand either well enough.
J R in WV
@Kelly:
Way back in 1969 Wife and I were first dating and driving up to parents’ (and my) home after work — so around midnight, it was an evening shift job. When the landscape around us lit up far brighter than noon-time sunshine!
We both leaned out our respective car windows and looked up at the sky, which had a bar of brilliant light from horizon to horizon from SE to NW. It was seen from North Carolina to New York, and was a meteor large enough to not completely melt/explode while passing through the stratosphere at tens of thousands of MPH.
So very lucky it wasn’t a little lower, in which case it would have destroyed central Virginia, or thereabouts. Perhaps caused a nuclear exchange if no one noticed that the destroyed area was NOT radioactive… Was really cool, tho, once we didn’t all die.
Cathie from Canada
@RSA: “We have systems that are fluent but have no idea what they’re talking about.”
A great description of every legislative chamber on the planet…