That was quite a day.
Here is what I want to talk about: what am I missing? What is it about AI that everyone has lost their goddamned minds over? What practical application does it have that can be in use in under 20 years that cannot be done by people?
I’ve been an early adopter of every piece of tech since I can remember using the old Pr!me mainframe and teletypes in the ’70s, but I can’t figure out what the urgent need is for this tech that we are ready to burn everything down to get there. Someone fill me in. I mean, if I want analyses riddled with errors I can just read student papers or anything from the Heritage Foundation.
TBone
I almost threw my phone out into the street (it is an older Android dumbphone) when the stupid AI “Clippy” helper thingy popped up outta nowhere and started asking me if I wanted “help” from it. GTFOH with that bullshit!
I don’t know what the real name of the “helper” is because I shut that shit down pronto!
dmsilev
It’s cheaper than a person. That’s the pitch.
Granted, in many cases it’s both cheaper and vastly less accurate/useful, but the hucksters conveniently forget the second part.
Suzanne
My profession has been trying to figure out a way to use AI for a while. Haven’t really found a compelling use for it yet. The hard part is making choices, and that can’t be automated.
Eolirin
Generative AI is mostly going to be a transitional technology, not an end point. Things should cool down a lot once that becomes clearer and energy starts getting focused more on what comes next instead of building ever larger LLMs.
But I do think there’s real potential that the language comprehension abilities and some of the in-context learning we’re seeing with existing techniques will eventually translate into properly useful digital assistants and act as a translation layer with other, more abstract algorithms and data sets. That stuff will have value in the same way a search engine has value, except cleverer than Google when it worked right and capable of handling far more complex queries. The really interesting stuff won’t be the LLMs themselves, though.
Makes a lot more sense than crypto at least. ;p
Edit: The biggest value in things like LLM-style automation will actually be things that can be directly validated as part of the output pipeline; that’s actually something like *programming*, not writing. It’ll still run into issues with badly specified design, but programmers already run into that problem. And in the best case it could result in far more reliable code than humans can put out, though it’s still a fair ways away from that.
Matt
It’s a two-headed situation:
The second category are shitting their collective pantaloons this week because a Chinese effort appears to have delivered roughly comparable results at one-fiftieth the capital outlay, raising questions about why these clowns need hundreds of billions of dollars more in investment.
grumbles
A small clique of extremely wealthy people are neo-feudal accelerationists who believe they can survive the coming climate/war cataclysm and rule what’s left, and they want to get on with it before they need hip replacements. Among other dubious ideas, they think AI is going to be labor to keep the high standard of living they of course require without the need for hundreds of millions of folks working on the upstream inputs to all those high-tech toys. Oh, and Mars, bitches.
It is sort of a secular Rapture cult for billionaires, cycled through the bowels of ’90s-era Extropianism and updated with current tech and resentments.
So that’s why you’re seeing crazy investments from certain names.
On the other hand, LLM programming tooling is getting good, fast. I do think my profession as a whole is going to be making a lot less money soon, with a split between grunt line programmers who mostly drive an LLM and a much smaller group of architects who kinda understand what’s going on.
dmsilev
In my neck of the woods, machine learning definitely has found uses, basically teasing out subtle patterns in large datasets or complicated sets of equations that are resistant to other techniques.
Two very big differences from the current AI hype. First, when these models find a proposed solution to a problem, it’s relatively straightforward to validate how good that solution is, which is vastly different from the AI chatbots that like to hallucinate things. Second, the scale and scope are vastly smaller; nobody is recommissioning Three Mile Island or fleets of coal plants to power a protein-folding solver or whatever, nor are they scraping every last byte of every vaguely public website for training data in the hope that it might incrementally improve performance.
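To make the validation point concrete, here’s a toy sketch (scikit-learn, with a bundled dataset and a model picked purely for illustration, not anything from my actual work). The held-out score is an objective quality measure, which is exactly what a chatbot’s free-form answer doesn’t give you:

```python
# Toy sketch: small-scale ML with built-in validation via scikit-learn.
# Dataset and model are placeholders; the point is that the cross-validation
# score tells you directly how good the fitted model is.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)           # small bundled tabular dataset
model = RandomForestRegressor(n_estimators=200, random_state=0)

scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("held-out R^2 per fold:", scores.round(2))  # objective quality measure
```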
Eolirin
@dmsilev: Yeah, gen AI is mostly a dead end in the long term, imo. At best a translation layer.
Except for things like coding.
frosty
I have no idea what that means but it sounded great!
comrade scotts agenda of rage
@grumbles:
Techbros, and the many “initiatives” they fund, will be the death of us all.
TBone
OT but I really can’t catch a freakin’ break. Noah The Love Cat started having a watery, bright yellow discharge out of one nostril after his 6pm feeding. His nose has been so dry the past few days, I’d been wiping gently with a cool, wet tissue and applying a very thin layer of Vaseline. Once per day. Also, he’s been loudly snoring for several days (unusual – he’s always snored a tiny bit but this has been ridiculous). I texted the vet voicing my concern and then, when I didn’t hear back right away (it’s a 24 hr. hospital), of course I looked online and goddamnit I scared myself reading the possibilities. So I put the phone down and crawled into bed next to my lil snoring buddy, wiped his nose and his ass again, and settled in to watch my beloved Samantha Brown and then a George Raft noir. Finally relaxed and….RING goes the landline! It was the vet, and yes, reflux through the nostril is a possibility if tube placement is wonky. Among other scary possibilities. I told the vet I am too damned tired to do anything tonight but they’ll get him in first thing in the A.M. if he’s still having bright yellow discharge out of his nose. The only change made today was a new supplement called Liver Hepato that the vet recommended earlier and Chewy just delivered today. It is a dark brown, mucky liquid substance and, since I don’t trust supplements, I am the suspicious!!! GAH!
Steve LaBonne
@grumbles: This is another opportunity to recommend N. K. Jemisin’s marvelous novella “Emergency Skin”. Without giving any spoilers, it shows how desirable it would be for these fucking parasites to actually fuck off to another planet.
Anonymous At Work
@dmsilev: It won’t complain to HR about work hours or work conditions. Nothing to harass but no danger in unionization either. Also drives down the bargaining power of remaining employees.
The big change with the DeepSeek AI is that it consumes FAR FAR less power to do its job. Far less than people imagined would be possible for years yet. That brings down the scale of the enterprises needed to use it and find problems for which it is a legitimate solution.
middlelee
We don’t need any of it. Someone else (possibly a BJ commenter) said what I’m about to write.
“One search on ChatGPT uses 10X the energy of a Google search.
Training one AI model produces the same amount of carbon dioxide as 300 round trip flights between NY and SF and five times the lifetime emissions of a car.
We don’t need AI art. We don’t need AI grocery lists. We don’t need AI self-driving cars. We don’t need ChatGPT or Gemini or Grok or DALL-E or whatever “revolutionary” technology already exists inside our own human brains.
We need the earth.”
grumbles
@frosty if you’re curious, some links:
https://en.wikipedia.org/wiki/Accelerationism
https://www.wired.com/1994/10/extropians/
VFX Lurker
Practical uses:
Image editing – noise removal
Image editing – object removal
Searching through a photo collection for “cat” or “food” to find all the photos you took of cats and food
Coding assist (but ONLY if you yourself can tell the difference between good and bad code)
…but yes, too many expect too much from this tech.
Steve LaBonne
@TBone: I am sorry you’re going through this. Hoping for the best.
randy khan
My perspective is that there are several kinds of AI, and some are more useful than others.
There are myriad little AI things being added to various kinds of software. It seems to be pretty good for image editing and video editing. I am less impressed by things like suggested text in emails, partly because I can usually type the text as fast as I can read the suggestions, decide, and accept them.
There also are expert systems that can be very helpful to professionals like doctors, particularly for things that come up only rarely in their work (unusual diseases, for instance), and ensure that they don’t miss something.
Generative AI, though, seems to need a lot of work to me, and is subject to just making things up. (I think the term for it in the field – hallucinations – is perfectly evocative.) And I’ve seen too many examples just from plain searches where the AI summaries are laughably bad.
Eolirin
@grumbles: Yeah, you can rigorously validate programming output. A lot of the issues with LLMs can be made to go away when you can do that, it just takes extra steps.
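For instance, a minimal sketch of that validation loop (the `candidate` string stands in for LLM output, and the test cases are the ground truth; all names here are made up):

```python
# Minimal sketch: accept LLM-generated code only if it passes known test cases.
# `candidate` stands in for model output; in practice you'd loop, re-prompting
# the model with the failures until the tests pass or you give up.
candidate = """
def add(a, b):
    return a + b
"""

namespace = {}
exec(compile(candidate, "<llm-output>", "exec"), namespace)  # load the generated function

tests = [((1, 2), 3), ((-1, 1), 0), ((0, 0), 0)]
ok = all(namespace["add"](*args) == expected for args, expected in tests)
print("accept" if ok else "reject and re-prompt")
```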
TBone
@Steve LaBonne: thank you! Might as well stay up until his midnight feeding now, I guess. My consolation prize: crunchy organic (they were on sale) ginger snaps! I’ll have to brush my teeth again later if I can still stand up.
Eolirin
@middlelee: That’s actually a pretty small amount of energy output, and there’s a focus on getting those things to go down.
There’s orders of magnitude more cars than models being trained. And there’s orders of magnitude more google searches currently happening than chatGPT use. We can make the same arguments about the entire internet, or computers. We don’t need any of these things, really. We don’t need movies, and we don’t need video games, and we don’t need TV.
It’s kind of a bullshit argument; what we actually need is a green energy grid, top to bottom. We need it for what already exists and we need it even more for newer, more energy-intensive technologies. But even without those, if we don’t have a completely carbon-emission-free grid just for what we already use, we’re fucked. AI isn’t really the core problem. A lack of infrastructure investment is.
Eolirin
@randy khan: Some of Nvidia’s DLSS techniques are technically genAI, I think, so that’s actually another valid use case for it.
Sister Machine Gun of Quiet Harmony
Machine Learning (one type of AI) has a ton of uses from fraud detection to predicting resource use so you can plan appropriately.
Large Language Models, the type of AI that’s causing all the big hype, have a more limited set of uses, but they are really, really useful. Some industries, such as insurance and health care, have a lot of data that is just text/narrative. Other companies have some data like this, such as customer complaints or security incidents. Until LLMs, that data mostly just sat in storage because it cost far too much money to have people spending countless hours combing through it to find useful patterns, or searching through it to find out if it is useful. LLMs let you do both of those things quickly. You can extract categories from tens of thousands of forms, such as types of customer complaints, so that you can track issues over time. You can quickly search through tons of old records to find relevant information, or summarize thousands of scanned legal documents in minutes. This is a big productivity booster. That’s why it’s a big deal.
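As a rough illustration of the category-extraction use (the model and labels below are placeholders, not anything a specific insurer actually runs), a zero-shot classifier can tag free-text records without any custom training:

```python
# Sketch: tagging free-text complaints with categories, no custom training.
# Model choice and labels are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

labels = ["billing", "delivery delay", "product defect", "rude service"]
complaint = "My order arrived two weeks late and the box was crushed."

result = classifier(complaint, candidate_labels=labels, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")   # ranked categories, usable for tracking over time
```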
I’ll also add that Google Translate uses an LLM too. LLMs have helped with decoding ancient languages, as well.
Acronym
Off the apparent subject, but I find myself asking what an effective opposition to this TFG regime looks like, and how it would function. And who would be effective in this role. I feel like there is not adequate pushback to the flooded zone. Thoughts?
Layer8Problem
Hey, my first big system was a Pr1me 750! I became acquainted with my partner of many years in front of its line printer. And I came this close to hauling off a retired campus Model 33 teletype. Said partner put a stop to that by saying “And then you’ll do what with that, besides park it in my basement?” Ah, knowing the price of everything and the value of nothing . . .
Chris
For rich people, the reason they go ga-ga over AI is the idea that it lets them cut costs and fuck employees.
From the beginning I’ve understood AI primarily through the Google Translate analogy. When Google Translate became good enough, a ton of professional translators lost their jobs, because now Google Translate could just do it for them. Of course, Google Translate couldn’t actually do it for them, and in any professional setting you needed an actual human to go over its translations and correct them, and it turns out that takes almost as long and requires just as much expertise as actually translating the text yourself. But now, you’ve got a pretext for paying these people a lot less, because they’re no longer “real” translators, they’re just proofreaders “making a correction here and there” while Google Translate did the “real” work.
Rich people want to use AI the same way on a whole bunch of jobs. Not an actual replacement for them, but a pretext for cutting them and paying them less.
It’s also, as I said yesterday, a buck-passing mechanism. Whether you’re firing a thousand employees, or denying coverage to a shitload of clients, or bombing a civilian aid convoy in the Gaza Strip, you now get to say “it wasn’t me – it was the AI!” Of course it wasn’t the AI; you’d never accept a conclusion from the algorithm if it told you to do something you didn’t want to do. But now you have something you can blame, and therefore yet another layer between the decision-makers and responsibility for their decisions. (As if corporate personhood wasn’t enough).
That’s what the rich people love about AI, I think.
As far as ordinary people who go ga-ga over this stuff?
They think we’ve invented Skynet. Or at least we’re building up to inventing Skynet. A real functional actual artificial being. And that’s cool, because, well, it just is! Don’t you like science-fiction? Don’t you wish droids were real? It’s awesome, that’s what!
(I think a bunch of rich people, who are themselves nerds for this kind of stuff, have similarly convinced themselves that they can invent Skynet if they just throw enough money at the problem. But mostly this is just a way to hype the product and convince people that we’re somehow on the verge of living in a science-fiction movie).
Timill
@Layer8Problem: You hook it up to the PDP-8, of course…
Gloria DryGarden
@Sister Machine Gun of Quiet Harmony: I love knowing about these uses for AI, that are not problematic.
google translate is pretty rough, though. I’ve used it in a hurry between my L1 and L2, and gosh it makes a mash of meanings, sometimes losing meaning altogether. I wouldn’t know it in other languages. I don’t trust it. But very cool to know it helps w ancient languages. And it beats nothing, when I don’t know the other language.
it would be cool to use it in a blast of research, ASAP, to record the many languages going extinct right now that have only 5 speakers left, or only 100. There’s much valuable richness in these other languages.
Gloria DryGarden
@TBone: I am sorry it’s gone off track from a straight uphill graph of improvements. Your schedule sounds really intense. Sometimes catnaps help, little 20 minute bits. Not always though.
I hope both vets can help you, and that the homeopathic vet answers next time you try them.
holding up the hopes and positives for y’all.
I’ve been frolicking on blue sky and there’s a whole feed of haiku, called micro poetry. You might like it.
Smiling Happy Guy (aka boatboy_srq)
@dmsilev: It’s way faster at plagiarizing, too, and it can plagiarize in a way that’s difficult to challenge.
Written works will be at major risk from AI crawling whatever resource it’s pointed at. That’s novels, histories, training manuals, confidential correspondence, classified documents, the works.
Remember when it was bad to copy someone else’s essay or research paper? AI will do it automatically.
If anything, AI is the revenge of the lazy, less-studious tech bros against all the educators and authorities whose work they could never match. Now they don’t even have to try: just release their new toy on a document repository and it will produce, if not the same calibre of work, certainly some of the same words and phrases.
Afterthought: AI will also be highly useful for rectifying all the ALEC, AEI, and Heritage model legislation that the legislators they push it on aren’t diligent enough to modify for their own jurisdictions. No more state laws in Oklahoma with “Georgia” all over the page.
Chris
@Steve LaBonne:
Obligatory repost of my pet topic: the world of Firefly makes complete sense if you figure that the exodus from Earth-that-was was the project of some Silicon Valley techbro types (presumably slightly more competent than the current set), with everyone in the new planets being descended from them and the people they carefully selected.
It explains the Alliance: of course these people would eventually organize themselves into a military-corporate dystopia.
It explains the war: of course they’d rule so obnoxiously and intrusively that they’d eventually trigger one of those.
But it explains the Independents, too. They’re smart enough to realize they’re being screwed, but they, too, are products of the same techbro civilization, which means they have no frame of reference for their resistance or how to organize themselves, other than vague generic libertarian and anti-government platitudes of the kind Mal regularly spews. So of course they’re never able to get their shit together, and get their ass handed to them when they fight a war.
(Meanwhile, the people back on Earth-that-was are presumably picking up the pieces and rebuilding themselves into something semi-functional now that all the insane people have left).
By the way, notice the opening narration? “Earth-that-was could no longer sustain our numbers, we were so many?” Compare and contrast the techbro culture’s angst about overpopulation…
Chris
@Matt:
If you’ve got a strong enough stomach, read up on the stories of lavish post-apocalyptic bunkers being marketed to modern day billionaires… and how those billionaires’ first questions are invariably how they can ensure that the staff and their guards don’t overthrow them, and whether such ludicrous devices as shock collars and deadman switches should be used to deter such behavior.
It’s not hard to guess that these people would love to live like Trade Federation viceroys with an army of droids serving them and no human element to worry about.
Yeah, at a certain level the whole thing is just the way our economy works now: billionaires obsessively chasing Next Big Things that are somehow going to be as huge and make them as successful as the invention of Facebook or iPods or personal computers, except nobody’s come up with anything like that in a decade and their attempts to replicate that success keep getting more and more crazy. AI is just the current version.
hitchhiker
Maybe 10 yrs ago I was working for a company that thought it could re-invent driver training for teenagers. (No one is shocked to find out that there’s a U-shaped curve for crashes vs. age. Teens and old people wreck cars.)
Anyway, in that context I first learned about how much data our cars are collecting about us. More here.
So when I think of AI, my head goes to that unimaginably huge database, growing exponentially every hour of the day. AI could be deployed on it to figure out how to use it to exploit us in ways I don’t even want to think about. (Look what Fuckbook did during the 2016 election.)
During those years, btw, I bought a mini-cooper. I remember the first time I drove it, thinking, “It feels like I’m driving my iPhone!” I wasn’t wrong.
Mr. Bemused Senior
Of course LLMs “hallucinate.” They are statistical models manipulating symbols. There is no understanding, no attachment to meaning. They work amazingly well in certain applications, but intelligence they are not.
Captain Sunshine
I wish I could remember who said it, somewhere on the Internet, but I do think this holds a lot of weight in the “what is AI for?” answer set:
AI exists to allow wealth access to expertise, and to prevent expertise from accessing wealth.
This was said (I think) back during the writers’ and actors’ strike, when the unions were trying to get clauses removed from their contracts that allowed studios to retain images and recordings of performers, allegedly to train AI to digitally replace those actors. To buy their voices and faces without paying the actors again. And the use of any available text leading to plagiarism and derivative works without paying writers again is already a problem. See: schools.
Students I know are complaining that their teachers are making them turn in more work that is handwritten rather than typed, and it makes them mad. And their handwriting is worse because they’ve had less practice actually writing anything.
I do like this quote, too: I don’t want AI to write and draw for me so I can clean and do dishes. I want AI to clean and do dishes so I can write and draw.
NotMax
OT.
Oh joy. Just checked the forecast (bolding in original).
Cool and breezy northerly winds will weaken today and become southerly on Wednesday. A potent winter low pressure system will develop west of the islands Wednesday night, pass by just north of Kauai on Thursday, then drag a cold front eastward across the islands Thursday night and Friday.
This system will bring a threat of flash flooding, damaging southerly winds, and strong to severe thunderstorms Wednesday night through Friday. Much cooler and drier weather will move in behind the front Friday night and Saturday and continue through early next week.
As the winds turn southeast Wednesday, we could see volcanic haze extending from the Big Island down over Maui County and Oahu.
Wind gusts from these developing rain bands and thunderstorms will likely exceed gale force in some areas.
Gloria DryGarden
Another of my beginner type questions, here. If you need to scroll on pie, do it.
I’m hearing that tech folks say AI is the biggest danger to our world, even more imminent than the climate crisis. (Maybe that episode of “Diary of a CEO” that talked about it came out before anyone predicted the super-fast shock and awe of an 8-day authoritarian takeover attempt.) It seemed to be about AI being able to teach itself, and being able to take over everything. Does anyone know much about that?
Not a very soothing topic, sorry. Answers welcome.
beef
Modern “deep learning” systems:
LLM assistants are damned useful for solo programmers. They’ll basically allow you to breeze through the boilerplate and focus on the actual logic.
LLMs are pretty useful for summarizing text. Give one a legal document and ask it to highlight the atypical clauses. Now you know what to pay the attorney to look at carefully. Similarly, the new LLMs are pretty good at cutting through the bullshit and throat clearing in the scientific literature. They’re good at recognizing novelty, because they have basically memorized the non novel stuff.
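A rough sketch of that summarization workflow (the model choice is mine and the input file is hypothetical; a real legal-review pipeline would need far more care):

```python
# Sketch: first-pass summarization of a long document before human review.
# "lease_agreement.txt" is a hypothetical input; model choice is illustrative.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

contract_text = open("lease_agreement.txt").read()
chunk = contract_text[:4000]             # crude cut to respect model length limits
summary = summarizer(chunk, max_length=150, min_length=40, do_sample=False)
print(summary[0]["summary_text"])        # skim this, then pay the attorney for the hard parts
```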
deep learning models (not in this case LLMs) are quietly changing the way we look for solutions of complicated differential equations. They’re basically the gold standard in chemical modelling and drug discovery.
Image generation is an order of magnitude easier than it was. The copyright issues are a mess, but it’s far easier than it used to be to make illustrations. We might as a society end up putting more decorations and illustrations on the things we make. I’d be ok with a little beautification of boring docs, or of better illustrations in math papers.
kindness
My previous employer spent the last 10 years trying to use AI to replace the humans who work there. Thankfully I retired and ‘the robot’ still doesn’t work. Yeah, the bosses at said place were uniformly assholes.
toine
I work in IT for a non-IT company and I have been worried since they have jumped on the AI bandwagon… AI this, AI that… it is all just buzzwords and magical thinking on the part of the suits who do not understand the tech. It certainly has uses in my everyday work life, but it will most certainly not “transform” the industry. It’s the latest magic-bullet tech that will change everything until it doesn’t. A brand new Silicon Valley generated tech-bubble that is sure to burst. I guess the only upside is that it is corporations that are getting fleeced this time. Although, in the end, it will probably prove that trickle-down economics actually do work when someone needs to get screwed over…
Gloria DryGarden
@Captain Sunshine: in college 9 or so years ago, my teachers had plagiarism checking software. One teacher showed me a bunch of it. It showed a percentage and several other markers, for a given paper.
in that case, you’d want it in electronic form. I don’t know what has changed now with all the AI available.
Such an interesting concept to consider (eek!).
Eolirin
@Gloria DryGarden: Most of those fears are very overblown. Current technology isn’t capable of doing anything like that at all.
The worst case scenario right now is that humans use AI that has too high of a failure rate in mission critical areas and bad things happen because it does the wrong things. Not with any maliciousness, but because it’s not good enough for what it’s being tasked to do.
Gloria DryGarden
@beef: but can AI write a good clear summary, for the abstract portion of scientific papers?
Eolirin
@Mr. Bemused Senior: I hate to break it to you, but there’s good reason to think that human cognition isn’t fundamentally different, and that our sense of meaning is basically a statistical illusion. It’s all just information processing anyway. If the AI is worse at something, it’s more down to a lack of sophistication in the current techniques.
Or, that something really requires having a lived history, which is the one thing we’re not really able to give AI at the moment, though you start sticking models capable of evolving in real time (a technical challenge we’d still have to solve) into a robot, and even that goes away.
Eolirin
@Gloria DryGarden: Yes, but sometimes it’ll include information that’s not present in the paper, currently. That needs to get better before it’s fully useful.
eclare
Ever since I saw that ChatGPT answered “glue” when asked how to keep cheese from slipping off of pizza, I have wondered the same. I just don’t get the usefulness or charm.
Then I saw that the oligarch bros love these websites and I knew to stay away.
Gloria DryGarden
@Eolirin: I think writing a good abstract can be challenging. But it’s good communication practice for the experts. And done well, it can give a taste of the complex information, but makes it suitable for laypeople, or those with a different expertise.
A little odd to think it would contain things that aren’t in the paper. That would be a bug to clear up.
Mr. Bemused Senior
@Eolirin: what exactly our neurons actually do is not something we understand. Yes, there are electrical signals but how information is represented and processed in living systems is far from clear.
I don’t discount the possibility of a sentient machine. We are made of the same atoms as everything else after all. My point is, this technology isn’t even close. These machines are automata. They are not alive.
Kelly
@Timill: Ah nostalgia, a PDP-8 with a teletype at high school was my computer. I’m a retired IT guy now.
VFX Lurker
Me, I can see AI failing in spectacular ways and taking users down with it. Exhibit A: those lawyers who tried to offload their work onto ChatGPT.
AI will ruin a lot of lazy people’s lives.
Jay
@Gloria DryGarden:
“AI” as it currently exists is basically just a summarized “Google search” of the web, or a database search in closed systems. Depending on what it was “trained on”, it can provide good results (data searches in a closed system), or devolve into real Nazi/CT shit because of the web.
Machine learning, a practical form of “AI”, is different. If properly programmed and monitored, it can use sensors and “machine vision” (IR, etc.) to watch a production line and respond with “meets quals” or “defective”.
Again, it is the human interaction that makes it useful, or worthless.
Years ago, we had an issue where QA failed, for weeks, all our sheet metal enclosures, from board trays to safety shields.
Took me a while, but I finally got through to MGMT and QA that sheet metal is measured in degrees of an angle, not thousandths of an inch. All the rejected sheet metal met qualifications and worked because the Engineers screwed up their specs. I.e., on the assembly line, torque down the screws properly and the sheet metal fit.
sentient ai from the future
clippy got swole on a diet of cryptocurrency
AI is dependent on the same hardware approach that gave us the crypto bubble
i’ve been trying to read Kahneman and Judea Pearl and just don’t have the fucking time to do the background work while my transgender middle schooler’s access to medical care is under assault.
i do know, however, that literally all of these AI motherfuckers have to get in line with TSMC and say “here’s my specification, please produce it, here is my money, thank you” – nvidia, amd, amazon, google, all of these motherfuckers. because tsmc fucking owns the foundry market, especially at the practical edge of a 3nm process.
you techbros want to build fancy parallel processing tulips? tsmc got you, just pay your money.
sentient ai from the future
@VFX Lurker:
https://arxiv.org/pdf/2311.16119
Redshift
@Eolirin:
Google includes an “AI” answer in nearly every search result, so it doesn’t seem like that can possibly be true any more.
And tech companies aren’t pursuing contracts to buy the output of power plants because the energy needed is insignificant. Maybe it’s a problem that will get solved, but I don’t see the evidence that it has been already.
sentient ai from the future
@Eolirin: “our sense of meaning is basically a statistical illusion”
except that all of those concurrent senses provide a context that a ML algorithm, no matter how many vectrons it can plurve*, cannot possibly pick up on.
*i made this lingo up
Eolirin
@Mr. Bemused Senior: It’s a whole lot less mysterious than that. We know an awful lot about how neurons function. They’re pretty dumb state machines. The different receptors hang around waiting for a chemical messenger to float into the synapse and bind to them, and if they do, a fixed number of things happen, some of which cause an electrical charge to run down their length, which release chemical messengers at the other end.
Nervous systems and brains are complex, but mostly because of all of the emergent properties that arise from a trillion points of connection and many different kinds of cells all interacting with each other all at once; there are a lot of chemical messengers, produced in all sorts of ways, and transported in all sorts of ways. But we don’t know how LLMs work in the same way; what’s happening with the data processing becomes opaque at even that scale, just like it does for us.
I think ‘alive’ is a dumb criterion too. Is the weather alive? Is it a kind of intelligence? It all depends on how you define those things. The climate of planet Earth is certainly as complex and ever-changing as our brains are, if not more so. We can say, well, the climate doesn’t have intention, but do we, really? We’re certainly not free from the influence of our environment or history. We’re not unconstrained in our choices. We have no idea how to quantify intention or consciousness. Is the feeling that we have a will just an artifact of how our brains work? Conversely, maybe the climate does have intention? How would we know?
LLMs are not nearly as sophisticated as all that, yet. There aren’t nearly as many systems working in conjunction with each other as we have. We probably don’t want our AI to have some of the systems that drive our behavior, even. We don’t really need our AIs to have motivation and want to seek out rewards for having novel experiences, for instance. That’ll make them less like us, inherently. But that’s not really a distinction that matters that much either. All information processing, all transformation from one state to another through a system of rules, is a kind of intelligence, and none of that intelligence is more or less valid. It’s all automata all the way down. You just get weird and interesting outcomes from complex rule interactions when you’ve reached a certain kind of scale. All of it’s beautiful in its own way.
sentient ai from the future
@Redshift: there is a URL-encoded flag you can include, “&udm=14”, that at least doesn’t spit the “ai” results back at you. i have no idea whether they go into the training tank or not.
but better to construct things yourself, because the plugins and the website that is purporting to include it on all your searches might be convenient but they are tracking all your searches themselves, for who knows what end.
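constructing the search URL yourself is a one-liner, something like this (the query is just an example):

```python
# Building the Google search URL by hand, with udm=14 ("web" results view),
# instead of trusting a third-party plugin to add it for you.
from urllib.parse import urlencode

def plain_search_url(query: str) -> str:
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(plain_search_url("balloon juice AI thread"))
# https://www.google.com/search?q=balloon+juice+AI+thread&udm=14
```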
Eolirin
@sentient ai from the future: I mean, multimodal models are a thing, so… only currently. :P
sentient ai from the future
@Eolirin: spoken like someone who’s never spent 10 hours shifting 64th notes on a sequencer for a musical composition 3 people would wind up hearing.
Eolirin
@Redshift: There’s an expectation of exponential growth in demand.
I don’t think it’s actually going to happen, with the current products at least. Maybe if they make something more useful. I think a lot of those infrastructure investments are going to overshoot and cost those companies a lot of money.
But we’ll see.
Redshift
@Gloria DryGarden:
Hmm, what I’ve heard is tech CEOs saying that, not tech folks. I think it’s two kinds of BS. The first is hype – their systems are only so-so now, but the next generation will surely be intelligent enough to be dangerous.
The second is a dodge about regulation. They testify to Congress that regulation is needed (they’re being so responsible), but only to protect us from the real threat of the coming truly intelligent systems, and not regulation to deal with them sucking up data without permission, reproducing bias, etc. Pay no attention to the man behind the curtain!
Eolirin
@sentient ai from the future: You’re saying *never*, when there’s nothing that’ll stop that from happening *eventually*. This isn’t a technical problem to be solved per se.
We’re already starting to put these things into robots.
Memory is a bigger issue.
And then that there will be fundamental deviations from humanness in how we design these systems. We don’t want them to be like us to that extent. There wouldn’t be a place for us anymore if we did that.
sentient ai from the future
@Redshift: any c-suite dipshit saying this is saying it because they are attempting to evade prioritization of actual people’s uses for electricity over server farms’ usage.
if the climate crisis is less important than keeping a notional all-powerful ai in line, then you can use whatever dirty or radioactive fuel you want to generate the power to keep that notional all-powerful ai from becoming all-powerful.
it’s a neat trick, but uses too many mirrors.
BellyCat
Sage advice from plumbers: “Sewage flows downhill.”
sentient ai from the future
@Eolirin: if i said “never” (did i, really?) i meant “asymptotic”
Eolirin
@sentient ai from the future: Through implication, but sure. It’ll be asymptotic mostly by design though.
BellyCat
@Eolirin: Motivating AIs toward desired outcomes would be trivially easy (and opaque to users).
Oh… I now understand the allure. /s
sentient ai from the future
DONT TELL THEM ABOUT ROKO’S BASILISK
danielx
@BellyCat:
AND everything over 40 hours is time and a half.
The Audacity of Krope
It isn’t about practical applications. Our broligarchy is populated by sci-fi futurists who want their Ultron from the Avengers, Skynet from Terminator, or Brainiac from Superman.
For something so many find aspirational from fiction, I could only find a couple of positive examples (not listed here).
Marc
I spent a couple of hours today setting up a not-so-powerful AMD desktop with all of 16 GB of memory and a minimalist AMD Vega Mobile GPU to dual boot into Ubuntu, then installed the smallest pre-built language model (DeepSeek-R1-Distill-Qwen-1.5B). I’ll be damned, the thing runs at quite a reasonable speed. It’s a bit verbose, but it works. The main thing this model appears to lack is the ability to remember prior queries; you can’t ask it to refine an answer.
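For anyone curious, here’s roughly what driving that distilled model from Python looks like with the Hugging Face transformers library (a sketch under assumptions, not necessarily my exact setup). The lack of memory is just statelessness: to refine an answer you re-send the whole conversation each time:

```python
# Sketch: running DeepSeek-R1-Distill-Qwen-1.5B via transformers.
# Details (prompting, generation settings) are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

history = [{"role": "user", "content": "Explain entropy in one paragraph."}]
inputs = tok.apply_chat_template(history, return_tensors="pt",
                                 add_generation_prompt=True)
out = model.generate(inputs, max_new_tokens=200)
reply = tok.decode(out[0, inputs.shape[-1]:], skip_special_tokens=True)

# "Refining" an answer = appending it plus a follow-up and generating again;
# the model itself remembers nothing between calls.
history += [{"role": "assistant", "content": reply},
            {"role": "user", "content": "Now shorter, please."}]
```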
Martin
That doesn’t matter. In the investor community AI is an intellectual property (IP) land grab, with the recognition that if someone did create a general intelligence, under the current rules of capitalism and ownership, that entity would effectively own all IP going forward. It’s the last great cash grab before the rules of the game get forever broken.
The problem is that the open source community has stayed pretty close to the tech folks on AI, taking away the opportunity to cash grab. This was identified as a problem two years ago, but investors tend to be pretty stupid as a group. The latest model out of China appears to be pretty good and takes a lot of the money-making opportunity out of the air.
Like crypto and diamonds and single-issue Wu-Tang albums, it doesn’t matter if it doesn’t do anything useful.
Pittsburgh Mike
@Sister Machine Gun of Quiet Harmony: Great comment.
I’d just add that there are lots of things that fall into the area of ‘pattern recognition’ beyond fraud detection. Reading radiology images is one area of interest, and computers don’t get tired and miss things. Machine learning (a form of AI) is also responsible for things you’re just used to now: reading checks from a photo, and understanding spoken speech.
Note also that most of these pattern matching applications require much less training than a full-on LLM.
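A toy version of that point: a digit classifier (the baby cousin of reading checks) trains in a few seconds on a laptop. The dataset and model below are just for illustration:

```python
# Toy pattern-recognition example: handwritten digit classification with
# scikit-learn's small bundled dataset. Training takes seconds, not data centers.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(gamma=0.001).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")   # ~0.99 on this toy set
```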
Pittsburgh Mike
@Captain Sunshine:
Isn’t this broadly speaking what all computer software does? I can’t compute logarithms myself, but I have a machine that can do that near instantaneously. It’s a way of encapsulating expertise in an easy to duplicate fashion. And it doesn’t only help the wealthy — it can help the expert reduce errors by checking their work, for example.
Ramalama
@Smiling Happy Guy (aka boatboy_srq): yes, theft of creative work. In trying to build a website as the definitive archive for artwork by my uncle, now in his 70s, I learned how easy it is for machines to replicate/steal his work in a much more invasive way than screenshots.
U Chicago came up with free software to better protect art online via something called glazing. Once I get the correct titles of his paintings I’m going to start the hella long process of glazing every goddamned painting. Either that, or no website, and his modern-day Singer Sargent-like work leans against the walls in a closet or up in the attic when not hanging on the walls of some enthusiastic patrons.
For this I’m outraged.
RSA
I say that the main danger from AI isn’t AI per se but rather (a) how thoughtless use of AI can degrade our society and (b) how bad actors can use AI to amplify their actions. AI in its current state is best viewed not as having agency but as being a potentially powerful tool.
Ramalama
@TBone: you and Noah the Lovecat hang in there!
side note: The Cure’s song The Lovecats is one of my favorite pep me up tunes. I can’t post a link, if it exists, as I’m under the covers in bed and don’t want to make noise.
David_C
I use AI for my genealogy work. I’ve done things like reading handwritten Civil War pension files (always checking the work and filling in words the machine finds challenging), adding other information about an ancestor, and coming up with a narrative – which battles he was in and that kind of thing. Land and probate records scanned by FamilySearch are not searchable, and AI will summarize these legal documents. I also use AI to clean up my own text for blog posts. Last night I had a good session analyzing what scant information I had for a g-g-g-grandmother of mine, with the AI giving reasons for its conclusions and pointing me to more records.
At work I’m just scratching the surface. As long as I still have a job, I’m putting tons of data into a database and will work with data scientists to put it all together into a systems biology story to answer questions like: what measurements using noninvasive techniques will show underlying injury to the lung?
Central Planning
I used deepseek last night to write some python code for me.
I was trying to write a function in Excel since the calculation doesn’t natively exist as a function. I wasted probably 4 hours of time trying to find working samples on the internet, or trying to figure out the error in the ones I tried. No luck.
I then realized I could just export the data, run it through a Python app to do the calculations, and paste the results back in. DeepSeek created that code for me and it worked the first time.
Then, I was telling my kid about that and they said “Why don’t you just use the Python library to read/write the data in your spreadsheet?” One query back to Deepseek and I had that code too.
So: a total of maybe 10 minutes of work with AI got me two different solutions. I wasted 4 hours trying to get one solution to work without AI help.
Also, deepseek didn’t give me the middle finger after a bunch of queries. ChatGPT stops and tells me I need to pay for a subscription.
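The library my kid presumably meant is something like openpyxl; this is my guess at the shape of it, with made-up file, sheet, and column names:

```python
# Guessed sketch of reading/writing the spreadsheet directly with openpyxl.
# File, sheet, and column layout are made up for illustration.
from openpyxl import load_workbook

wb = load_workbook("data.xlsx")
ws = wb["Sheet1"]

# Read column A (skipping the header), compute, write results to column B.
for row in ws.iter_rows(min_row=2, max_col=1):
    cell = row[0]
    if cell.value is not None:
        ws.cell(row=cell.row, column=2, value=cell.value * 1.08)  # stand-in calculation

wb.save("data_out.xlsx")
```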
Mick McDick
There is substance to AI despite the hype! Are you aware of LIDAR (light detection and ranging)? This tech has been used to accurately map jungle floors, piercing the treetop canopy via laser, and the resulting map reveals manmade structures evidenced by unnatural right-angle mounds covered with leaves and detritus and hidden by canopy. A human examines the resultant LIDAR visual output and looks for the right angles.
BUT–they feed LIDAR output to an AI to train it, and once trained, the AI can “see” more hidden structures than any person. The AI can’t see, of course; it examines patterns of 1’s and 0’s that we can’t see, and looks for patterns similar to the confirmed structures that it trained on. It sees differently–it’s as if it uses echolocation like a bat, or the olfactory powers of a bloodhound. And it is tireless and fast, like any computer. Advantage: AI.
This is a trivial use, interesting only to archaeology nerds like me. BUT–they can use this on X-rays, sonograms, and MRIs. Train AI on known heart and cancer patients and it can spot things lost on a simple visual examination. Yeah, tests of this application were a success, from what I’ve read.
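Schematically, the training setup is the same whether the patches come from LIDAR rasters or X-rays: a small convolutional net scoring patches as structure vs. background. Everything below is placeholder shapes and fake data, just to show the shape of the idea:

```python
# Schematic patch classifier (PyTorch). Real pipelines need real labeled data;
# the tensors here are random stand-ins.
import torch
import torch.nn as nn

patch_classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),            # two classes: structure vs. background
)

patches = torch.randn(8, 1, 64, 64)        # fake batch of 64x64 elevation patches
labels = torch.randint(0, 2, (8,))         # fake labels from "confirmed sites"

loss = nn.CrossEntropyLoss()(patch_classifier(patches), labels)
loss.backward()                            # one training step's worth of gradients
```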
But mostly AI will be used for us to amuse ourselves to death with weirdo memes and teen wet dreams. So there’s that. Graphic illustrators will be out of work. Literature will suffer, too (and hasn’t it suffered enough in the 20th century?).
artem1s
Nothing new. It’s just a very expensive Ponzi scheme developed to attract vulture capitalist dollars. They got bored putting money into real estate – so 2000s. After all the crypto pump-and-dumps and fraudsters like SBF scared the VC away, the techbros were desperate for another hook to reel them in. AI and virtual crap like Second Life/Meta is just the latest hook. It’s the latest Brooklyn Bridge sale scam. Also, too, probably a fair amount of Russian mob money being laundered. Bezos, Soutpiel, Fuckerberg, Branson, et al. are waging their own personal robber baron ‘arms race’. We’re nearly at the point where one of them will cause a Chernobyl-level disaster, and hopefully most of them will come crashing down and their monopolies will get broken up.
EM
“What practical application does it have that can be in use in under 20 years that cannot be done by people?”
Seriously? There are components of my job that would literally take me all week to do that I get done now in 15 minutes on Monday morning.
It fixed my dryer for me yesterday. Seriously. I used GPT advanced voice on my phone with the visual feature, and it analyzed exactly what was wrong and walked me through how to fix it, verbally, in about 5 minutes. Previously I would have spent about $200 calling the guy down to fix it.
I’m currently fighting a foreclosure case. It’s serving as my lawyer. I actually got a compliment the other day, with someone asking what law firm I was using because the defense I mounted was so efficient.
I just did a research project, using Gemini deep research and perplexity, that typically would have taken me a week. It was done in 10 minutes. And then I put it into notebook LM and I can iterate and analyze the information I just collected in a manner that, once again would have previously taken days, but now I can do it about 15 minutes.
The other day I was traveling in an area I didn’t know very well. I whipped out GPT on my phone, just showed it the surrounding area, and asked it where I was and how to get where I needed to go, and it verbally walked me through everything, identifying every specific landmark along the way that I was showing it.
I’m in the middle of creating a marketing campaign, where in the past it would take me probably, oh I don’t know, a month to put together the initial personalized emails I need to do, and now I pretty much automate away and it gets done in 20 minutes, after creating the knowledge base.
Part of me loves the fact that 95% of the population isn’t using this and thinks it’s some kind of a gimmick. It makes me feel like I have superpowers. When I show most people what I’m doing with it, they’re in shock. The first reaction is: I had no idea that you could even do this right now. Nothing special about me at all. I just really find it remarkable that the vast majority of the population just isn’t using this.
Kayla Rudbek
@VFX Lurker: yeah, image processing and analytics is a practical use of it (food tracking for diabetes, mammogram analysis, things like that) or for taking biochemical or chemistry articles and trying to come up with new compounds or new synthesis methods. Other than that, don’t see much practical use for it (too damned prone to hallucinations, too energy-inefficient).
No One of Consequence
Love this post and all the comments. A little bit of potential insight from a different set of inputs (life experience and education, by definition, unique) — take these for what you will, and question their validity.
Discussions of current understanding of the human brain are interesting in their relation to Artificial Intelligence. As posited here by another jackal, our understanding of the brain is significant, but incomplete. To my knowledge, we still cannot point to consciousness. Cannot engineer its arising. Cannot agree upon the necessary prerequisites, even.
I posit that General Artificial Intelligence may not arise without some corresponding physical construct that interacts with and manipulates the physical world.
My thinking and logic may be flawed, but without means to effect change in the physical world and to experience the results of that change, crucial value determinations won’t arise naturally.
Could a consciousness arise without Instinct? A priori knowledge. Uncertain. Humans have a priori knowledge. Certainly other mammals do. Lower life forms do as well. Honeybees certainly have a limited amount of neural material to work with, but a simplified language has evolved.
I don’t believe that consciousness will arise out of mass-data-gathering and language prediction capabilities.
That said, our own grey matter is a bio-electro-chemical chain of reactions, at base. We understand many of the mechanisms, but even the plasticity of a neural network in our own wetware continues to amaze and confound us. We cannot point to where the Mind exists. We know it is in the brain, but cannot create an artificial mind, because we don’t quite understand how our own works.
Much more thinking on this topic is required on my end, and more education. I do know this, though: DeepSeek is a different spin on what some of the larger tech interests have been chasing. Large Language Models trained on staggering amounts of data. If my understanding is correct, DeepSeek trained their own AI by querying ChatGPT and other LLMs to refine itself. So they didn’t build their own LLM; they used other LLMs to systematically train their AI.
I.e.: DeepSeek wouldn’t exist if other tech interests had not already created their own LLMs and made them accessible.
Please clobber the living Luddite out of my misunderstandings or misrepresentations or flawed logic. I really am seeking greater understanding here.
Peace,
-NOoC
Miss Bianca
@Chris: As a dyed-in-the-wool, juice-of-the-grape Firefly fan, I approve this backstory.
Marc
Pretty much all LLMs are trained this way these days, whether proprietary or open source. The two big things the DeepSeek folks did were to optimize the hell out of the same basic algorithms that everyone else uses, which reputedly allows training at least 20X faster, and to come up with a viable method to “distill” an LLM down to a much smaller size, so it can be run using far fewer resources than prior implementations.
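For the curious, the textbook form of that distillation idea looks like the sketch below (generic knowledge distillation in PyTorch; DeepSeek’s actual recipe reportedly leans on generated training data, so treat this as the classic version, not theirs):

```python
# Classic knowledge distillation: push a small student's output distribution
# toward a big teacher's. Logits here are random stand-ins for real models.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions, then minimize their KL divergence.
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature**2

student_logits = torch.randn(4, 32000, requires_grad=True)  # fake vocab-sized outputs
teacher_logits = torch.randn(4, 32000)
distillation_loss(student_logits, teacher_logits).backward()
```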
Personally, I use smaller ML models that take data from various sensors, LIDAR (as someone else suggested, now a $5 chip), microphones ($2), inertial measurement units (now a $2 chip), and cameras ($5), to navigate around indoor spaces. I also use ML for management games (how to bankrupt your company in 10 easy steps) where I’m simulating aspects of human behavior. DeepSeek alters the equation, as with a bit of work I can now run much more sophisticated models. I use synthetically generated data to train my models, so I’m not dependent on “borrowing” other people’s text.
I don’t believe the nonsense about consciousness evolving yet, the “experts” are still mostly blowing smoke to up their corporate valuations. I suspect someone will figure it out someday, but mostly we will still be dealing with convincing emulations.
As to why this has caused a problem in Silicon Valley: well, few of these companies can afford to hire PhD-level “AI” experts exclusively to write the code (not that it would make much difference), so they hire a lot of people with undergraduate CS degrees (at $500K+ per year) from a small set of elite schools, most of whom are nice nerdy white or Asian guys who took one or two “AI” classes and can’t program their way out of a paper bag, but they can talk a good line. They “optimize” by throwing more $30K GPU cards at the problem (Google uses tens of thousands of them).
I’ve had a successful 50+ year career in this field precisely because I could clean up and optimize the messes left behind by the elite CS types.
No One of Consequence
@Marc: Thank you so very much for replying. This helps me immensely. My own background is in tech, but not in a coding capacity. Sales, consulting and project management, etc.
However, I greatly enjoyed the Philosophy of Technology class I took in about 1990. We had far-flung discussions regarding security, the Panopticon, hacker ethics, etc. Fascinating.
Now, I see the convergence of (or at least the promise of or inklings of) a couple of philosophy class concepts: mind design (a bit metaphysics too) and AI. Throw in recent breakthroughs with nuclear fusion for power concerns and there is a point in the future where we might have both the power (electrical) and power (computational) to do what now seems impossible: true General AI.
I posit that will require, by necessity, if not by definition: consciousness. Not Turing test, but real, honest-to-goodness consciousness.
Anyway, I am looking for work, and looking for a career shift, and I think with the recent news, AI or AI-adjacent analyst work should be what I am angling for. I have been an application and support analyst for a dozen years or so, and am looking for the next act in my life.
One of my takeaways from the Philosophy of Technology class was that military-grade technology eventually becomes Street Tech. Once that happens, clever monkeys figure out new or novel uses for that tech. DeepSeek is an example of (forgive me) a paradigm shift, in that top-level stuff is getting to street level faster than it can be defenestrated from the C-suite office spaces.
Am I a silly mid-fifth decade sod, or have you different thoughts?
Again, much appreciated, thank you.
-NOoC
12xuser
AI is just billionaire FOMO. The big players (Google, Microsoft, Apple, etc.) are hoping to cash in on the Next Big Thing and don’t want to be in second place when there may be only one winner. So, they are throwing fucktons of money at it and trying to get people locked in to their platforms before they have anything that’s worth a shit. I don’t know why the freakout about China, except that it exposes the fact that Big American Tech doesn’t know what the fuck they are doing.
Marc
Honestly, I don’t have a good answer for you, other than to say I’m a silly early-seventh-decade guy who just retired after 20+ years of working with Civil Engineering PhD students on various computing activities, including AI-adjacent techniques. My personal belief is that most big “AI” companies believe too much of their own hype, but at a much smaller scale the same techniques can be applied to much simpler day-to-day problems (like navigating indoor/outdoor spaces without vision or hearing). Things will likely evolve much like the last AI boom (“Expert Systems” in the 80s/90s), in that the big companies will eventually fail as they won’t be able to accomplish their stated goals, but the underlying techniques will live on as part of a savvy programmer’s toolkit. And that’s the thing DeepSeek R1 demonstrates: you can solve much more complicated problems with a lot less computing power by using clever optimizations. There will be openings for those who understand this.
No One of Consequence
@Marc: Fair enough. I am soliciting any learned opinions whether or not I like them. Historically (and sometimes histrionically) Balloon-Juice has been a surprisingly good source of knowledge and not a little wisdom. I listen to retirees whenever they offer their opinions on General Principle.
I was fortunate enough to get into Stanford’s first online foray into massive online class presentation, which was Introduction to AI, taught by Professors Peter Norvig and Sebastian Thrun. I took the advanced track and got a solid B. At the time I had a newborn to contend with, but managed to get through the course. Sadly, I didn’t have any coding chops to test things with. I’d like to remedy that, so I am trying to pick up Python, as well as SQL. Making progress.
As a participant in the dot boom through the dot gone, I am well aware of The Hype, and generally don’t believe much of it. Independent of General AI, I do believe that AI has been/is/will be useful going forward. I’d rather be involved with it, I think, than look to another complex application and mission to support that doesn’t involve it.
I’m a Gentleman Analyst, looking for an employer with a cause and mission to believe in, and to bend my back and brain advancing the enterprise however I may.
Paul in KY
They don’t have to pay actual ‘living’ people, best I can tell.