I’m gonna make it profitable for REAL this time, bro, I swear! I realise that there are more entertaining clown shows happening at the moment, but so far during the Betwixtmas period, I’ve mostly been paying attention to the pressure building inside the GenAI bubble. Some picky bits for you all to graze on:
Yesterday, according to CNBC, OpenAI published a statement on its website with more detail about its controversial intention to transition from a non-profit organisation to a for-profit public benefit corporation (PBC) in 2025. The post admits that this transition will require “more capital than we’d imagined.”
Mmhm. (OpenAI is currently valued at $157 billion, and raised $6.6 billion in October.)
“OpenAI expects about $5 billion in losses on $3.7 billion in revenue this year, CNBC confirmed in September. Those numbers are increasing rapidly.
By transforming into a Delaware PBC ‘with ordinary shares of stock,’ OpenAI says it can pursue commercial operations, while separately hiring a staff for its nonprofit arm and allowing that wing to take on charitable activities in health care, education and science.
The nonprofit will have a ‘significant interest’ in the PBC ‘at a fair valuation determined by independent financial advisors,’ OpenAI wrote.”
Sure this seems unsustainable but keep in mind the massive losses today are positioning them for much bigger losses tomorrow.
— A.R. Moxon, Bluesky Elderberry (@juliusgoat.bsky.social) December 28, 2024 at 11:47 AM
The staggering gap between incomings and outgoings isn’t the only thing at issue, obviously. OpenAI has bled top talent in recent months, with one former exec quoted in the CNBC article above saying that “safety culture and processes have taken a backseat to shiny products.” OpenAI is also embroiled in on-again, off-again lawsuits brought against it by Elon Musk, who wants to stop the company’s transition to a for-profit model. It’s also being sued by a consortium of U.S. news publishers, Canadian news publishers, The Intercept, a major Indian news agency, and the comedian Sarah Silverman, among many others. (You can see a running list here.)
But wait! There’s more.
Tech journalist/PR guy/podcaster Ed Zitron has been yelling about the GenAI threat to the tech industry and overall economy (often at great length) for the better part of two years now. This morning on Bluesky, he pointed out that OpenAI’s CEO Sam Altman, in addition to going cap in hand to investors for eye-watering sums of capital, has also been bankrolling several other startups from his own personal line of credit. Many of these startups do business with OpenAI. A June Wall Street Journal article laid this out. Click here to read the whole thing sans paywall, or just read the nut grafs that Ed Zitron posted below:
Hundreds of millions of dollars from his personal line of credit being passed out like candy to startups. Seems like a big risk… for JPMorgan Chase. And anyone else relying on JPMorgan Chase. Possibly. Someone who is good at the economy please help me budget this, etc.
To top this all off, OpenAI’s flagship product and its many imitators are sucking up more water and electricity than some small nations, per this September 2024 environmental impact report from the UN. The entire project is a planet-killing scam, and there’s still vanishingly little evidence that most people like the product or want to use it. The Irish journalist Séamas O’Reilly had an entertaining take on ChatGPT yesterday, calling it the 21st-century answer to past grifts like the learned pig or the talking dog. His whole article is worth a read, but I nodded along hard to this:
“I hate AI because it does not work at most of the things its promoters claim it does, and many of the things it does do are explicitly evil. Its missteps not only kill but dissolve the fragile fabric of trust in information we have left. The jargon of AI boosterism, like NFTs and cryptocurrency before them, has seized the imaginations of punters and investors who believe they’re being led to a world of ease and profit that will change the world and make them filthy rich in the process. It’s the last true ‘something for nothing’ we have left, delivered via mechanisms so abstruse to the lay person that its powers can be described with the folkloric hyperbole of a magic chicken.”
In theory, you could say that ChatGPT and other such products are the first step toward artificial general intelligence (AGI). Ali Alkhatib, an editor at Logic(s) Magazine with an academic background in ethical and justice issues surrounding algorithms and the use of data, points out that this line of argument is hard to justify when we can’t even agree what AGI will look like, or what AI as we currently know it actually is. We get wrapped around the axle trying to define it technically, he argues in a recent blog post, when “we should shed the idea that AI is a technological artefact with political features and recognise it as a political artefact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralised structures of power.”
That has truthiness for me, but it all just seems like a big dumb money rush. I don’t really hear a lot of strategy here in what Google CEO Sundar Pichai said to employees about the AI arms race their company is doubling down on next year:
“I think it’s really important we internalize the urgency of this moment, and need to move faster as a company. The stakes are high. These are disruptive moments. In 2025, we need to be relentlessly focused on unlocking the benefits of this technology and solve real user problems.”
Have you ever heard more bland, fluffy management-speak than that? It could’ve been written by ChatGPT.
Use this thread to vent about tech or talk about whatever. The Child and I are going for a walk.
Baud
I’ll give people wrong information for half the cost.
satby
In my limited experience so far, the more AI infests something, the worse it works.
Kosh III
Praise be to Skynet the Almighty.
mappy!
Big A, there’s a reason why it’s called artificial…
Another Scott
Someone noted a while ago that the history of America is wrapped up in the chasing of the next great pile of wealth. The Fountain of Youth, beaver pelts, land in Ohio, sugar, tobacco, gold, egret feathers, railroads, Coca-Cola, bigger suburban homes, software monopolies and per-processor contracts, etc, etc.
Someone else noted that if you take away the bubbles then the US economy’s performance is pretty pathetic, so bubbles get boosted. Boring but necessary work is neglected.
True? Dunno, but it sure is truthy.
Trouble is, the pace is increasing, governments cannot keep up, and the damage is larger and more intense. With the scammers in charge here and in too many other places, it’s a dangerous time.
Hang in there, everyone.
Best wishes,
Scott.
catclub
@satby: I would have thought that programming, which has a (mostly) well-defined language and the possibility of writing fairly well-defined goals, would be taken over by AI. But I think if that were happening I would have heard about it.
Princess
The thing that worries me about AI is that, although it never gets more than 85% of what it’s saying correct, that’s enough to get a college or high school student a B+ and completely disincentivize almost all of them from learning how to read, write, analyze texts, come up with arguments, or think critically. Sometimes I think making us all stupider is the actual point.
Princess
Also too, I don’t see how a language model trained on everything, which includes fiction, propaganda, fantasy, etc., can ever be trusted to get things right. Eventually it’s going to be trained on its own errors.
WereBear
AI was a business con. Buy this incredible software and look how many people you can fire!
So they did. And now they find out the self-checkouts cost them more money!
I never used them. I don’t know how to do that, I’m not efficient or in tune with all the ways it can go wrong. I’m insulted that you make me do everything. Shall I stock a shelf before I can buy from it? Can you at least open the carton?
It does not surprise me. They all fell for the blood machine lady, that WeWork thing, it’s a history of con artists as much as economics.
Career of Musk…
catclub
Over the last 100-130 years or so, the US economy has clearly been less pathetic than all the other nations’ economies.
So I say, not very truthy.
sentient ai from the future
The truest part there is the bit about shifting authority and autonomy.
Because the places that are AI’s best use-cases are those where business logic would be best served by having an unaccountable intermediary, to deflect responsibility for decisions that otherwise would need someone to be responsible for them.
Like with UnitedHealth, put an AI in the chain of denials and then blame it, and suddenly you make more money while everyone else is trying to hold this fiction accountable for the death and suffering that lines your pockets.
moonbat
As an academic I was appalled by the touted potential of AI to replace students’ need to produce original (at least to them) research with a prompt. But the longer it is out there and the more lazy students try to cheat with it, the more I realize how truly shitty the product is. AI cheat papers are as easily spotted as regular cheat papers and they all get the same grade: 0.
It truly is a vast waste of resources to produce something stunningly mediocre to bad.
WereBear
@moonbat: We are mere consumers and deserve no better.
catclub
Buy? then why is OpenAI losing money?
moonbat
@WereBear: I should add that the vast majority of my students have already seen through both the con of AI and its mediocrity. They are insulted by it by and large.
Bless them, BTW.
MattF
Want an eyeball-scanning cryptocurrency? It’s for your own good, we promise.
Starfish (she/her)
Tech has had no new ideas for at least two years now. If you look at the past two Google I/Os, Sundar Pichai and others are talking about AI a lot, and there is no innovation in their other products.
If you go back before two years, they would discuss the limitations and biases of Machine Learning and how they are trying to account for them more.
They gave up on that to push out products.
It’s almost like people need to go back and read the Stochastic Parrots paper that got Timnit Gebru and Margaret Mitchell fired from Google. All the authors on that paper have been deeply skeptical of whatever it is that is going on right now. They warned everyone, several years ago.
comrade scotts agenda of rage
This. A thousand times this. Places targeted for data farms, if they know enough in advance, *all* resist them for various reasons but one is they know their electricity rates are gonna skyrocket.
I tell all my Electricity Uber Alles pals, when they’re screeching about everybody residential going entirely over to ’lectricity (forgetting/ignoring how that played out in the late 70s) and how that’s so much better for the planet: where are all these electrons gonna come from? Have you looked at the latest deal-dealing idea the techbros have? Do you realize that’s gonna compete with your 2-3 mini splits? How long will utility companies keep their coal plants online (beyond their previous plans to shut them down) in order to feed the proverbial beast?
We discuss kWh rates nationally on the Bolt EV forum quite frequently, with several owners remarking that the mere hint of a data mining center coming to their area has resulted in hikes in rates.
And yet, government officials will welcome these bastards not unlike how they welcome any “big” company with egregious tax breaks and concessions.
stinger
Goody! Just what we need for “charitable activities in health care, education and science.”
dm
Google has some excellent AI products. They’re just not consumer products. illuminate.google.com, given a PDF, will produce dialogue, podcast-like, that is a decent summary — enough to know if you want to read the actual paper for details or not.
Google Notebook, I think, is a decent assist to scholars, in that it takes your (in digital form) notes, and finds relevant stuff. Google has started to realize that, beyond a certain point, it’s not how much training data you gobble up, it’s the ability to give a user a large “context window” — that is, “here are my notes from the last fifteen years of study. I want to make the following argument. Show me what I’ve read that will help me make that argument.” It helps that Google Notebook was partly designed by a writer to support his writing.
When Google abandons these, as they do so many things, I will be a little sad.
But they’re not consumer market-exploding hundred-billion dollar things, and I don’t see how they could be. I could see maybe a few million people willing to spend maybe $300-600 a year for something like this, but I’m hard pressed to see a much bigger market.
Jeffg166
It’s going to be one hell of a depression when the bubble pops.
dm
@catclub: Over on Mastodon I saw a post by a programmer who has been using Copilot for a year, trying to give it a fair shake, and had decided to turn it off. Like all things labelled “AI” these days, it gets 90% right, saving 90% of your time. It’s the 10% it gets wrong that turns into a productivity sink, requiring hours to days of debugging.
My own experience is that it’s great for quick little projects that I probably wouldn’t have done on my own, given the work of finding the right open-source components and reading the necessary tutorials. It gets 90% of those right, too, but it’s also pretty good at responding to the error messages from the 10% it gets wrong, which cuts down on the errors.
But I don’t think it will help me find the memory leaks or the thread timing bugs in the commercial product I work on.
Betty Cracker
I appreciate Rose for posting on this topic and the contributions of the commenters in this thread. The AI issue has been driving me nuts for a long time, but I don’t have anything more coherent to say about it than BLAAARGH!
@sentient ai from the future:
That rings true to me too. Also this:
Sort of on topic: I was saying to Bill the other day that it seemingly has become easier to convince us (the public writ large) to accept unacceptably dangerous shit by enclosing it in tech bubble wrap to deflect debate.
Self-driving cars. Gambling apps. AI in fucking everything. These are realities foist upon us without a broad-based debate about whether it’s safe or even a good idea. I think money in politics explains that at least partly.
mapanghimagsik
Babbel has been using AI for their A1/A2 chats and that’s been working pretty well. But I also think they’re using earlier models that don’t need to be re-trained as often or overlaid with new additions quite so much. It’s been fun not only practicing the language, but also trying to get past the guardrails.
Phylllis
@dm: I’ve found it useful as a prompt method to get started on a grant narrative or other work-related proposal. I then take that and make it ‘my work’. What I’m starting to see is folks using AI to write their proposals, with little or no effort to personalize it. This is going to be an issue for grant makers very soon.
Having said that, I really soured on the whole grant model a few years ago. No longer interested in spinning weaknesses and issues into a compelling narrative that the grant dollars will magically solve decades-long systemic problems in 3-5 years, because it won’t.
dm
OpenAI, and most of the other household names have this theory that the models will just keep getting more capable as they get trained on more and more data. For the first couple of years, that appeared to be true. And they’re trying to get people to make $100 billion bets that it will keep happening.
But there’s no argument why that should be the case (and it doesn’t appear to have continued happening, either — progress has slowed a great deal).
More convincing, sure — they produce more plausible sounding stuff because they produce more probable stuff and avoid less probable stuff. But more capable — “understanding”, “reasoning” — no.
comrade scotts agenda of rage
@Betty Cracker:
I carp on this constantly: how tech bros (Thiel being the most active) have captured a fair number of self-professed progressives on certain economic policy ideas, which are straight-up libertarian, deregulatory trickle-down bullshit being laundered for the Totebagger Radio crowd.
And they’ve done that with money in politics, shitloads of money, which in itself is a result of 40+ years of tax policy helping create the wealth inequality which is at the root of this.
artem1s
I would argue the problem is bubbles combined with an education system and MSM that are little more than giant Skinner boxes, combined with unregulated capitalism, unrestrained monopolies, endless inherited wealth instead of earned income, and a greed-is-good culture that benefits and encourages the advancement of narcissists and conmen. We’ve bred multiple generations of CEOs hyper-fixated on getting their investors to push the right button (gambling/advertising), wearing the right clothes (hoarding/consumerism), getting likes and being noticed (narcissism – name your self-centered addiction of choice) while using their products (influencers) and perpetually rebuying and reinvesting (Ponzi schemes). And despite the inevitable multiple bubble collapses they somehow persist in believing that there is some perpetual-motion-generating deus ex machina (fuck you Ayn Rand) that will make it possible to eliminate the need for any labor on their part, or paychecks to workers to do it for them (no way they want the poors to be gazillionbillionaires lazing about all day too), and those that thought it all up will be able to laze away their days like a bunch of Golgafrinchans on their B Ark to Mars.
It’s laughable how much of a cliché these guys are. If any of them had actually read any books or understood the ones they were reading were fables, parables and metaphors, not literal training manuals, they might not have been roped into these Ponzi schemes. No one should be able to inherit any money that is greater than the cost of a decent graduate education. After that every dollar should get put into educating those that don’t have billions to waste on their idiot failsons.
Fair Economist
@catclub:
The rate of basic intro questions “how do I do xxx” has dropped off on programming forums lately, so AI is probably used frequently for repetitive scutwork (including a lot of tests and assignments). When used for serious programming it seems to produce lots of bugs and no overall productivity improvements.
I think AI could be quite valuable if used for real purposes. For example, Google can do pretty good AI image descriptions now. The problem is that almost all work is still being directed at LLMs, and they are a parlor trick.
stinger
@catclub:
It seems as if we’ve done so via the wholesale destruction of the environment. For example, the near-extinction of bison and bald eagles and white-tail deer, and the actual extinction of passenger pigeons, when the native peoples had eaten and otherwise used them all for millennia, but at a level that allowed for their continued existence.
This really is nothing to brag on.
dm
@Phylllis: Yes. I can see that it’s a good tool for starting the process, but that, for the results to be any good, you have to put in a lot of work. Terence Tao, looking at the mathematical capabilities of some of the more advanced models, compares them to a mediocre graduate student assistant. I suspect “mediocre assistant” is about all we can expect from the current paradigm.
Starfish (she/her)
@Betty Cracker: Our policy makers are cowards who refuse to regulate this stuff. AI has no business in K-12 education because it is not proven to teach anyone anything.
comrade scotts agenda of rage
@stinger:
An article in National Geographic earlier this year on the Whooping Crane and the efforts to save it (they tracked a newborn chick’s migratory path) is another example of your point.
We’re going to great lengths to try and save magnificent creatures such as the Whooping Crane all the while not really changing our overall trajectory as a people and culture. Or so it seems some days.
different-church-lady
Oh if only that was right…
Fair Economist
@comrade scotts agenda of rage:
The piggishness of AI doesn’t change the economics of electricity. It’s money saving at this point to close down a coal plant and replace it with renewables, and renewables are cheaper than any new fossil fuel plant, which is why the US has stopped building them. By 2030 or so it will be economic to close down gas power plants and replace with renewables. So renewables are still going to wipe fossil away; at most demand from data centers will keep some fossil plants online a bit longer since we’ll have to rev up renewables more than anticipated. This is bad, but in the long term it won’t matter much.
Bill Gates can fantasize about nuclear power for data centers as much as he wants, but the incredibly high price for nuclear power means that any data center using it will be uncompetitive. Nothing will happen with that other than tricking some investors.
No Nym
I am in no way qualified to comment on technical features of AI, but I work in a procurement-related field as a (human) writer. My last two employers have had writers using AI in a way that is training it to do our job of writing RFP responses. As writers, we are always stunned at how bad the AI results are for anything beyond generating some ideas for an outline, but execs are convinced it will get better as we aggregate more of our written responses for it to emulate, and then they won’t have to pay humans to think critically and write with purpose anymore.
Meanwhile, the culture and schools increasingly lose their ability to educate people for critical thinking, so in the future I imagine humans will be grunting at each other and throwing rocks while “the robot” generates wealth for our overlords on Mars or wherever. I think it is part of the childish fantasy that we will be able to live a life devoid of work or responsibility and somehow we will be taken care of. I suspect that’s not how things are going to work out.
stinger
@comrade scotts agenda of rage: I can’t help but think of places such as Italy, Greece, Japan, where the land has been used intensively for millennia without destroying the things that made it desirable in the first place. Unfortunately, our societal and governmental structures encourage “growth” without even examining likely and potential costs. AI being a prime example.
Kristine
Altman et al claimed they couldn’t make any money if they had to pay for all the works they scraped to build their LLMs. So they stole them—two of my books got chewed up in one maw—and they still can’t turn a profit.
And as a bonus, climate takes a hit and our electric bills go up.
Really, really tired of Move Fast/Break Stuff.
different-church-lady
I think the true engine of the A-I craze is that these people honestly believe it’s going to be a machine that does their desk jobs for them, while they continue to have bonuses and benefits anyway.
stinger
@No Nym:
Yes, all of this.
Fair Economist
@sentient ai from the future:
I saw a series of vids from a YouTuber complaining that YouTube had demonetized them. She had a hard time finding out why, and eventually got a computer form with a long list of reasons, and the reason was “other”. She communicated with several people at Google/YouTube, and none could find out why she had been demonetized, or change it. Eventually, after several weeks, she got remonetized.
When I saw that, my thought was “they’re using AI for demonetization”.
comrade scotts agenda of rage
@Fair Economist:
I know all of that. I’m not suggesting “Build more coal plants” or even worse “Build nuke plants!”
My point is that this past year, utilities across Flyover Country have indicated they’re not going to close down coal/gas plants as soon as they’d anticipated because of the surge in demand for electricity.
And there’s an ongoing debate about the uncertainty of demand coming from data centers/AI/fucking crypto, etc. This is coming straight from the utility CEOs, none of whom I trust, granted, who are probably looking for more concessions from feckless Public Utility Commissions. But they do have an underlying believable point when they ask: if residential usage goes up 3-fold because everybody magically converts to ’lectricity overnight, while all the other industrial demands (i.e., techbro) ramp up, where are they gonna get that production?
Said utilities could start by making it easier for more residential solar load to be installed but no, they don’t do that (ask any PG&E customer or Xcel customer) because they can’t charge for that.
You can look at Xcel’s CO portfolio as an example. It’s clear they’ve been aggressive in cutting back on fossil sources while greatly expanding wind. But solar? It remains at 7% today, just like it did 8 years ago, and one big reason for that is the impediments they put in the way of residential and commercial solar that’s owned by the property/building owner.
TBone
This caught my other eye today: hacking our Christmas lights
https://www.theregister.com/2024/12/25/joyce_christmas_lights/
Fair Economist
@No Nym:
One of the striking things about current AI is how bad it is at learning. It requires terabytes of data and millions of computer hours for it to learn something that a human can learn taking a college course. It can’t learn from a set of data that isn’t absolutely enormous, which is very different from humans or even animals, which can pick up tricks from just a few tries. What your bosses want won’t happen without a major change to the technology.
LeftCoastYankee
My understanding is that when they update the LLM models, the new version has to be retrained by the user, and things like the frequency and sequence of that training may or may not work the same (as in produce the same result). So it’s a near black-box (“hazy grey box?”) tech.
It is the last throes of the “computers is magic!” con. Programming has developed so that it’s repeatable, transparent and more mechanical, and the business consumers have gotten more sophisticated over time. In other words, actual work to build and sell.
For 35 years we’ve heard that technology moves too fast for government oversight. A lot of that is money, but it’s generational too (1 more reason to move on from gerontocracy). Funding tech that actually helps people*, breaking up monopolies so they are actually in competition for customers and not investors, etc.
*An AI which could successfully generate real time non-gibberish closed captioning, for example.
Fair Economist
@comrade scotts agenda of rage: It’s certainly true that utilities can completely fuck over the transition to renewables, and most seem to be doing the best they can. It’s getting to the point where going off the grid to solar + battery is getting economical. In AL, where the government is completely controlled by Republicans, they’re planning to put a fee on houses NOT connected to the grid to try to stop that.
comrade scotts agenda of rage
@LeftCoastYankee:
By that I take it you mean something close to the Universal Translator from Star Trek? That would be handy.
Smiling Happy Guy (aka boatboy_srq)
@Baud: You already give people wrong information free of charge.
comrade scotts agenda of rage
@Fair Economist:
You mean the good Republicans in AL, aka the severely emotionally disturbed kid in the overcrowded classroom that is ‘Murka, want to *tax* people? Oh wait, that’s right, “fees” are another rebranding of “taxes” from the Reagan “Revolution” that are so baked into our system now.
I’ll come in again…
Princess
@dm: I’ve used Google Notebook on my own writing and I can tell you that I hope their fee-per-use product is better, because Notebook sucks. Yes, it’s about 85% accurate. Which is kind of amazing for a machine. But that 15% is the difference between useful and unusable. If I use it on my own work, I know which 15% is bullshit. If I use it on someone else’s, I have no clue, which makes it useless.
No Nym
@Fair Economist: “One of the striking things about current AI is how bad it is at learning.”
So true! I piloted a medical scribe program for a group of physicians for a few years, and the reason the university would not fully fund a medical scribe program for the doctors was they were sure that “technology” was going to be able to do all the documentation for doctors eventually, so why have people do it? I can’t tell you how many times both I and the physicians were struck by the difference in quality and value of human observation and note-taking over automated or pre-populated responses in health care. A non-sentient recording of the patient visit would have no ability to make human observations or connections.
Smiling Happy Guy (aka boatboy_srq)
@moonbat: Speaking as a technologist, working in a market segment where intellectual property is valued, I think the waste is the whole point. Consuming energy, adding to planetary heat, and embroiling users in copyright infringement and IP theft litigation, all generates income for fossil fuel and extraction industries and for legal professionals, none of whom have an equivalently rosy future without AI to find them work. AI is crypto without the stench of money laundering.
Betty Cracker
@LeftCoastYankee: Agree 100% that the gerontocracy is doing us no favors on the tech regulation front. Individuals who are older certainly can and do understand it and get how deeply it’s embedded in every facet of life. But as a cohort, they’re pretty clueless on the topic and lack direct experience of the effects, as the embarrassing comments that come out of committee hearings so frequently demonstrate.
Gvg
@Another Scott: if you take away the bubbles, the US is the world power with a huge economy, and we started as a tiny unimportant colony on a frontier, and part of the reason we won free is we weren’t valuable enough to waste resources fighting for when the empire had more valuable possessions to fight for at the time. The bubbles haven’t been that significant until we became powerful enough to impact the world economy sometimes. Even then, it wouldn’t have mattered if the rest of the world hadn’t been doing the same stupid stuff even more so.
messing with the soundness of the dollar and the debt ceiling fights, threatening defaults etc, is MUCH more serious than any damn bubble.
Starfish (she/her)
@Fair Economist: I think we saw the same video. What was funny about this is that the YouTube algorithm was boosting this demonetized video because it was getting a lot of engagement. Boosting the demonetized video made that watch time less profitable for YouTube.
Smiling Happy Guy (aka boatboy_srq)
@Fair Economist: We looked at a couple AI solutions. NONE were designed to produce code, let alone improved code. And all of them sucked up way more material, including IP, than either they were supposed to or than we could accept.
And – equally – NONE of the proponents, evangelists or advocates recognized the abject uselessness of their products in the context where we wanted to use it. They were so high on having an AI product that everyone ought to want that the inutility just did not register.
AI is a digital thneed.
Chris Johnson
@catclub: How have you not heard about Microsoft Copilot?
Another Scott
@catclub: I was talking in an absolute, rather than a relative sense.
I can’t find what I was thinking of, but I did find this Buttonwood piece at The Economist (from 2014). There’s some nonsense goldbuggery thrown in, but this makes sense to me:
There are a lot of feedback loops these days that encourage bubbles. And lots of money to be made if one gets in early enough (and out at the right time)! :-/
FWIW.
Thanks.
Best wishes,
Scott.
Phylllis
@Smiling Happy Guy (aka boatboy_srq): This is reminiscent of what happened in K-12 education during the pandemic. Prior to March 2020, the talk was all about ‘hand the kids devices with programs/apps and watch achievement soar’. What we learned is that even a mediocre teacher produces more positive academic results than any device or program.
Smiling Happy Guy (aka boatboy_srq)
@Phylllis: DINGDINGDINGDINGDING
Starfish (she/her)
@catclub: There are some repetitive and dull tasks in programming that are in reasonably popular formats that have been stable for years.
Those things can be automated.
However, a lot of programming is not repetitive, dull tasks. Not all programming is done in reasonably popular languages. AND the languages get updated.
AI can help you go a little bit faster, but is it writing long term maintainable code? Naah.
Rose Judson
@dm:
I turned Copilot off and suddenly found that I couldn’t automatically save Word documents to the cloud anymore because the privacy settings involved were linked to the AI access. Cloud storage is useful to me when I need to work on my laptop away from my home office.
Now I have to email things to myself again before I go anywhere to work. It’s a small thing, but inconvenient.
Smiling Happy Guy (aka boatboy_srq)
@Rose Judson: Resetting those permissions is straightforward, if a PITA.
LeftCoastYankee
@comrade scotts agenda of rage:
That would be handy!
Better speech to text would be a start.
Chris Johnson
I know a BUNCH of audio DSP coders, from my work as one.
It seems to me quite a few of these folks are trying to use AI/Copilot/etc as a ‘rough draft machine’, with the understanding that it can whip together a broken thing and save them some typing and then they’ll take over.
I’m not even gonna speculate on game programmers: WOOF. Rider has AI now: I own it but don’t use it because I got it for Unity, which I’ve abandoned, and kept it for Epic’s Unreal stuff, which is an unfamiliar dialect of C++ loaded with special tech, and which is too big a reach for me to get into.
If you include hack programmers, I’m betting well over 50% of programmers are using AI routinely, including being led down garden paths and writing awful buggy stuff they don’t understand. It specifically enables you to jump into stuff you don’t understand, and pretend you’re doing it.
Expect more software bugs going forward.
Gvg
@stinger: Not sure about all of them, but Greece grows olives and traditionally had goats; both use up the soil and hang on when other plants give up, but don't help maintain or increase fertility. Greece is less fertile agriculturally than it was in ancient times. They could hardly have chosen crops that were less sustainable, except maybe cotton. Traditional agriculture was kind of hit or miss, and luck was involved in whether a people settled on good long-term crops. Think how many "peoples" are no longer around.
Also, traditional pest control that gets called organic (but is mostly avoided by organic farmers now) could be much more toxic than anything the chemical companies put out today. Some of the fruit-tree sprays were seriously bad stuff. When you starve if you don't save your crop every single year, you do a lot of things.
That's the thing to keep in mind about sustainability: food is life, and no crop is death, unless we have trade and food preservation. Do not assume that past agriculture was good for the ecology just because it went on a long time. The records suggest the ecology could have changed for the worse over time. I have read this about Greece; I don't know about the others you mentioned, though there are others I have heard of. Some tribes in the Amazon seem to have figured out a way to increase fertility (biochar/black soils) that is not well understood yet; it's being researched while people try to imitate it. Other modern research is always ongoing too.
dm
@Rose Judson: Sorry, I was referring to the part of Copilot used for coding — probably built in to VSCode. Like you, I wouldn’t really trust it for writing prose.
@Princess: It’s good to hear from someone with real experience with it. I haven’t used Notebook LM for anything.
LeftCoastYankee
@Betty Cracker:
Very true. I think it’s generational more than age related. Maybe it’s about seeing both the Before and After.
Fair Economist
@Starfish (she/her):
Not necessarily. They can still run ads and make money. Demonetization means the *creator* doesn’t get a cut.
It was odd that their algorithm boosted the video (I don’t follow her and I don’t think I’ve seen anything from her before, although I do occasionally watch vids on historical clothing or costuming). I expect they’ll change that before too long, to maximize their ability to exploit and minimize their accountability.
dm
Just noticed the “Subprime AI” tag. Chef’s kiss.
Starfish (she/her)
@Fair Economist: Yes! It was someone that I don't follow, and the only reason that I saw the video was that it was probably algorithmically boosted.
It was Jill Bearup. Here is when she got her monetization back.
Kayla Rudbek
@Princess: “Hapsburg AI” is what people are calling it because it’s so inbred, and it’s already happening.
Kayla Rudbek
@dm: as I said below, there is now a noted problem called “Hapsburg AI” where the AI is taking AI output and feeding that in as AI input (so the database is as inbred as the Spanish Hapsburgs). There’s speculation that this could destroy AI as a useful tool and all I can say as an intellectual property attorney is, bring on the popcorn so I can have something to do during all the continuing education classes that I’m going to have to sit through on this.
Kayla Rudbek
@comrade scotts agenda of rage: yes, this is what frustrates me, that there are Real Serious Problems out there to solve in the world that engineers and scientists could be turned loose on to fix, but instead we get more INEFFICIENT ways to push electrons around and play stupid games
catclub
Compared to whom? China? Russia? India? Are they much better in that regard?
kalakal
@Chris Johnson:
Pretty similar* to how I often use AI in graphics. I’ll use Dalle3 or SD then get busy with Painter or Affinity and a graphics tablet
*similar because the AI bits are often only used as partial sources
catclub
There is a project, I think in Niger, to dig half-moons in near-desert land that catch water and completely turn around desertification.
laura
AI is theft.
Rose Judson
@dm: I added the tag to BJ, but credit for the term itself goes to Ed Zitron.
dm
@Rose Judson: Zitron is like a tech-journalism marriage of Hunter S Thompson and Charlie Pierce. “Rot economy” is another Zitronism that needs to become as well-known as Doctorow’s “enshittification”.
El Cruzado
@Starfish (she/her): Big Tech is out of ideas; this is the closest they've got to a World Domination idea right now, and it's getting dumb to watch them maintain the farce.
There’s use for the technologies in smaller scale, more focused applications with clear informational contexts and human review. But generative AI is a dead end and the huge planet-killing models have already peaked. The Internet is too full of slop already to serve as a training dataset, and solving the LLM endogamy issues may not be possible at all.
moonbat
@Phylllis: I can attest to that. During the pandemic there was a segment of education administrators who, ever eager to cut the costs of having human educators educate, were saying, "We should have been doing this all along. Preplanned, programmed, digitally delivered is the future!" Then learning outcomes tanked, student engagement tanked, and parents began demanding in-person classes again.
For all our sci-fi fantasies we have not yet been able to escape the reality that we are learning animals, finite in lifespan, capable of endless curiosity and we respond best/learn best from others like ourselves. I am not the most brilliant lecturer by a long shot, but I love my subject, really love it. Students respond to that — even students who don’t give a flip about my subject.
Central Planning
@dm: The digital signage unit at my makerspace can automatically play a Google Sheets doc for the signage. Our event calendar has an API to pull data out, and you can also use APIs to push data into a Google Sheet.
I tried to write the Python code to do it and was unsuccessful. Not even getting close to getting the data into the sheet.
My next thought was to try ChatGPT. That was unsuccessful too. It used the wrong API calls and when I gave it the errors I got, it would apologize and give me some other code that didn’t work.
I then had the great idea to use Google’s AI, Gemini, to help me. Their AI MUST be able to generate the correct code for its products. You will not be surprised to learn that didn’t work either.
So, we are stuck doing it by hand. We are no better or worse off, aside from the time I wasted.
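For the record, the shape of what I was attempting looks roughly like this. This is a sketch, not the code I actually ran; the event field names are made up, and the upload step assumes the third-party gspread client with a service-account credentials file:

```python
# Sketch: flatten makerspace calendar events into rows for a Google Sheet.
# The field names ("title", "start", "location") are hypothetical; adapt
# them to whatever your calendar API actually returns.

def events_to_rows(events):
    """Turn a list of event dicts into a header row plus one row per event."""
    header = ["Title", "Start", "Location"]
    return [header] + [
        [e.get("title", ""), e.get("start", ""), e.get("location", "")]
        for e in events
    ]

if __name__ == "__main__":
    # The upload itself would use the gspread library, which needs a
    # service-account credentials file and a sheet shared with that account:
    #
    #   import gspread
    #   gc = gspread.service_account(filename="credentials.json")
    #   ws = gc.open("Signage Events").sheet1
    #   ws.clear()
    #   ws.append_rows(events_to_rows(my_events))
    pass
```

Separating the data-shaping from the API call at least makes it obvious whether the failure is in your code or in the auth/API plumbing, which in my case it was.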
TBone
WTF who is drafting what
TBone
@TBone: from the WaPoo link at the link:
https://environmentamerica.org/center/media-center/statement-biden-admin-drafting-a-rushed-flawed-plan-to-fast-track-data-centers-and-power-plants-for-ai/