Balloon Juice

Come for the politics, stay for the snark.

Artificial Intelligence & You: Don’t Take My Word For It…

by Major Major Major Major | January 7, 2020 at 12:39 pm | 69 Comments

This post is in: Media, Politics, Science & Technology, Tech News and Issues


This is part two of a series of posts on artificial intelligence.

A month ago, I wrote a post about our upcoming reckoning with AI-driven text generation. Today, computer security expert & cryptographer Bruce Schneier was kind enough to write a follow-up for the Atlantic, “The Future of Politics Is Robots Shouting at One Another”:

Presidential-campaign season is officially, officially, upon us now, which means it’s time to confront the weird and insidious ways in which technology is warping politics. One of the biggest threats on the horizon: Artificial personas are coming, and they’re poised to take over political debate. The risk arises from two separate threads coming together: artificial-intelligence-driven text generation and social-media chatbots. These computer-generated “people” will drown out actual human discussions on the internet.

[…]

Over the years, algorithmic bots have evolved to have personas. They have fake names, fake bios, and fake photos—sometimes generated by AI. Instead of endlessly spewing propaganda, they post only occasionally. Researchers can detect that these are bots and not people based on their patterns of posting, but the bot technology is getting better all the time, outpacing tracking attempts. Future groups won’t be so easily identified. They’ll embed themselves in human social groups better. Their propaganda will be subtle, and interwoven in tweets about topics relevant to those social groups.

Combine these two trends and you have the recipe for nonhuman chatter to overwhelm actual political speech.

It’s a short read, and I recommend clicking over.

Schneier provides many excellent links to AI-driven dis/misinformation campaigns that have already happened or are already underway.  One of his examples is the public comments on the FCC’s proposal to end net neutrality, which were flooded by pro-Trump content. Around half the signatories were fake. Over a million comments were written by a shoddy AI from an easy-to-detect template. The FCC (which like many government bodies is organized to be controlled by the president’s party) did not care.
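
Template-spun comments like those are detectable precisely because they share long runs of identical wording. As a rough, invented illustration (this is not the method any actual analysis used, and the sample comments below are made up), near-duplicates spun from one template can be flagged by comparing overlapping word shingles:

```python
from itertools import combinations

def shingles(text, k=3):
    """Set of k-word shingles (overlapping word windows) for one comment."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_templated(comments, threshold=0.5, k=3):
    """Return index pairs of comments that share more than `threshold` of their shingles."""
    sets = [shingles(c, k) for c in comments]
    return [(i, j) for i, j in combinations(range(len(comments)), 2)
            if jaccard(sets[i], sets[j]) > threshold]

comments = [
    "The unprecedented regulatory power of net neutrality is a threat to the free internet.",
    "The unprecedented regulatory power of net neutrality is a danger to the free internet.",
    "I rely on a free and open internet every day and support net neutrality.",
]
print(flag_templated(comments))  # → [(0, 1)]
```

At the scale of a million comments you would need locality-sensitive hashing rather than all-pairs comparison, but the principle is the same: template output clusters tightly, organic comments don’t.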

Schneier touches on an important discussion point: what will the effect of all this be on society? Nobody knows, and the technologies are going to improve significantly faster than our ability to study their effects.

The best analyses indicate that they did not affect the 2016 U.S. presidential election. More likely, they distort people’s sense of public sentiment and their faith in reasoned political debate. We are all in the middle of a novel social experiment.

That data, of course, is four years old.

We’re already at the point where it’s easier to generate passable garbage than it is to detect and remove it. This will only get worse as we go from ‘passable garbage’ to simply ‘passable content’. Already, “it’s just a Russian bot” is used to dismiss any number of arguments we see online. What happens when the Russian (and Saudi, Chinese, North Korean, Republican, Hindu nationalist, etc., etc.) bots reach whatever the tipping-point level of sophistication is? When anything written by a non-verified source is instantly suspect?

Barring a dramatic shift in user authentication standards, we may soon find that the majority of political content (by volume) is written by computers. What happens then?


Reader Interactions

69 Comments

  1.

    john b

    January 7, 2020 at 12:49 pm

    It doesn’t help that reporting these users / comments is laughably ineffective.

  2.

    Goku (aka Amerikan Baka)

    January 7, 2020 at 12:49 pm

    Barring a dramatic shift in user authentication standards, we may soon find that the majority of political content (by volume) is written by computers. What happens then?

    We track where all this is originating from and physically destroy the servers and/or computers, or infect them with a virus that destroys their software?

  3.

    cleek

    January 7, 2020 at 12:50 pm

    wait until ‘deep fake’ video is easy, cheap and convincing enough to look real when viewed in the narrow space of a FB feed.

    politics will be impossible if you can’t know what’s real and what’s not.

  4.

    Goku (aka Amerikan Baka)

    January 7, 2020 at 12:50 pm

     

    Test

  5.

    J R in WV

    January 7, 2020 at 12:53 pm

    I have loved Bruce’s stuff for years now, like 20 of them…

    When I was still working I felt like I needed to track all the types of malware out there, and he was one of the many sources I tracked continuously. Less so now, only responsible for our own laptops instead of 800 users of our sophisticated custom systems across the whole state. But still, Bruce is great for this kind of stuff.

    Thanks for helping us stay current on bots and malware!

  6.

    The Moar You Know

    January 7, 2020 at 12:53 pm

    And friends of mine wonder why I’ve been disengaging from the internet, especially social media, for the last few years.

     

    I work in the field, is why.  I know what’s coming

     

    Schneier touches on an important discussion point: what will the effect of all this be on society?

    The very best case scenario, and I think this is a remote possibility at best, is that people realize that social media of any sort is no longer reliable for anything and they stop using it and start talking to their neighbors more.

    Like I said, extremely unlikely.

  7.

    Major Major Major Major

    January 7, 2020 at 12:54 pm

    @Goku (aka Amerikan Baka): Basically none of that is a thing.

  8.

    Goku (aka Amerikan Baka)

    January 7, 2020 at 12:55 pm

    @cleek:

    This reminds me of Jeff Goldblum’s line from Jurassic Park about never thinking about whether you should just because you could do something

  9.

    dm

    January 7, 2020 at 12:55 pm

    The media that carry this material will become recognized as the supermarket tabloids they are, probably, and become less persuasive. It’s not like we were immune to yellow journalism before. While we’re waiting for that to happen we’ll see a great deal of stupidity.

  10.

    Baud

    January 7, 2020 at 12:56 pm

    The Internet helped weaken information gatekeepers (which is good) but it hasn’t replaced the role they served in weeding out less credible content (which is bad).

    The key question in my mind is how to wean people off of the addiction of treating information as credible simply because you wish the information to be true.

  11.

    JaySinWA

    January 7, 2020 at 12:59 pm

    Mandatory Statement: “I, for one, welcome our new robot overlords.”

    One goal of disinformation, as I understand it, is to destroy trust in any source. An end to any objectively verifiable truth.

  12.

    chris

    January 7, 2020 at 1:00 pm

    @Major Major Major Major: Does an AI require an actual physical location?

  13.

    Major Major Major Major

    January 7, 2020 at 1:00 pm

    @dm: The media that carry this material will become recognized as the supermarket tabloids they are, probably, and become less persuasive. It’s not like we were immune to yellow journalism before.

    This will be different because of sheer volume. The whole concept of truth existing anywhere will be compromised.

  14.

    Major Major Major Major

    January 7, 2020 at 1:02 pm

    @chris: Does an AI require an actual physical location?

    Yes and no. I can spin up a virtualized computer designed for AI use via Amazon or Google with the click of a button. This “computer” will “exist” on a globally-distributed hardware grid which is designed to be physically fault-tolerant.

  15.

    Goku (aka Amerikan Baka)

    January 7, 2020 at 1:08 pm

    @Major Major Major Major:

    I had a feeling it wasn’t that simple. This stuff is pretty scary. I’m sorry if I come across sounding pretty ignorant on these things

  16.

    Sebastian

    January 7, 2020 at 1:09 pm

    There are two major scourges on human civilization right now:

    *) untraceable cash. It allows for unrestrained corruption.
    *) anonymity on the internet. It allows for unrestrained behavior and bots.

    I say it’s time to remove both.

  17.

    Sebastian

    January 7, 2020 at 1:11 pm

    @Baud:

     

    Wired Magazine has an article up about the limits of AI. An interesting tidbit was that Facebook is using AI to make their products more addictive.

     

    It would be interesting to regulate those algos in the same way we regulate nicotine or other controlled substances.

  18.

    chris

    January 7, 2020 at 1:11 pm

    @Major Major Major Major: Thanks, that’s kinda what I thought.

    “I say we take off and nuke the entire site from orbit. It’s the only way to be sure.”

  19.

    chris

    January 7, 2020 at 1:14 pm

    Pertinent.

    Facebook confirms if a politician were to run an ad with a deepfake would *not* be taken down, even if it violated this new policy. If posted as organic content they'd evaluate "weighing the public interest value against the risk of harm." https://t.co/KEoSvsKKEb — Hadas Gold (@Hadas_Gold), 7 January 2020

  20.

    MattF

    January 7, 2020 at 1:17 pm

    Calls to mind the robocall plague. My own ‘solution’ has been to stop answering calls from anyone I don’t know, and I suspect that we’re headed in a similar direction with high-level bots on social media. And, you know, it’s not so terrible. I’ve mostly disengaged from the obvious cesspools and stick now to trusted sources.

  21.

    Citizen_X

    January 7, 2020 at 1:32 pm

    And people wonder why the Butlerian Jihad started to look like a solution.

  22.

    Kelly

    January 7, 2020 at 1:35 pm

    Is there any work on a “white hat” AI to detect the bad bots? Sorta like virus detection?

  23.

    VeniceRiley

    January 7, 2020 at 1:38 pm

    Won’t someone think of the poor human disinformation workers who will be jobless?

  24.

    H.E.Wolf

    January 7, 2020 at 1:40 pm

    @Sebastian:

    Anonymity is a double-edged sword, isn’t it? It protects the vulnerable (survivors of domestic violence; targets of misogyny, racism, and other forms of hatred) and it also protects predators and disinformation promulgators.

    Lack of anonymity might slow down the latter groups’ participation on the internet. It will certainly increase the risks of participation for the former groups.

    I’ve noticed other tactics and strategies that are currently being deployed. It’s interesting to see posters at our local libraries (both city and University) that offer skills training in evaluating online sources and recognizing disinformation.

    Meanwhile, I also noticed the in-post statement that the FCC takes on the coloration of the governing party. Let’s get out the vote!

  25.

    JGabriel

    January 7, 2020 at 1:41 pm

    Major^4 @ Top:

    A month ago, I wrote a post about our upcoming reckoning with AI-driven text generation. Today, computer security expert & cryptographer Bruce Schneier was kind enough to write a follow-up for the Atlantic, “The Future of Politics Is Robots Shouting at One Another“ …

    I think we may need a stronger Turing test. Clearly all that one needs to do to convince someone that an AI is actually a human is for it to target a conservative, and spew right-wing disinfo.

    Perhaps we should stipulate that a truly convincing AI would be one that can convert a conservative to a social democrat.

  26.

    H.E.Wolf

    January 7, 2020 at 1:45 pm

    @Citizen_X:

    Not to mention the eugenics. What could go wrong with either of those two solutions? :)

  27.

    JGabriel

    January 7, 2020 at 1:49 pm

    @The Moar You Know:

    The very best case scenario, and I think this is a remote possibility at best, is that people realize that social media of any sort is no longer reliable for anything and they stop using it and start talking to their neighbors more.

    I’m not sure that would have much efficacy in a red state where most of the neighbors watch Fox News.

    I mean, does it really matter whether they get their lies from Facebook or Fox & Friends? Either way, they’re still just as wilfully gullible and wilfully malinformed.

  28.

    West of the Cascades

    January 7, 2020 at 1:53 pm

    It seems that, given a party in government willing to address false AI “speech” through regulation (e.g. prohibiting bots from posting, and putting the responsibility to monitor this back on the on-line forum providers, e.g. by removing the liability shield in Section 230 of the 1996 Communications Decency Act), it ought to pass constitutional muster because there’s no argument I can see that anonymous, false speech that is not performed by a human is protected under the First Amendment. Of course, the current SCOTUS might find that the regulation isn’t valid under the Commerce Clause, but that’s another story …

  29.

    Major Major Major Major

    January 7, 2020 at 1:57 pm

    @Kelly: AIs leave fingerprints on the content they generate, but it’s on a per-model basis. So we can measure the likelihood of GPT-2-generated content (see earlier post) but would need a totally different detector for each other generator we want to detect.
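
A toy sketch of what “per-model” means in practice, using a word-bigram model as a stand-in for a real detector (actual detectors use the generator’s own network; everything below, including the sample corpus, is invented for illustration): fit a model on text known to come from one generator, then score new text by how likely that model finds it.

```python
import math
from collections import Counter

def fit_bigram_model(corpus):
    """Count word bigrams in text known to come from one specific generator."""
    words = corpus.lower().split()
    return Counter(zip(words, words[1:])), Counter(words)

def avg_log_likelihood(text, model, vocab_size=10_000):
    """Mean log-probability of `text` under the fitted model (add-one smoothing).
    Higher (less negative) means the text looks more like that generator's output."""
    bigrams, unigrams = model
    words = text.lower().split()
    total = sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size))
                for a, b in zip(words, words[1:]))
    return total / max(len(words) - 1, 1)

# Pretend this is a sample of one bot generator's known output.
model = fit_bigram_model("net neutrality is bad for freedom " * 50)

bot_like = avg_log_likelihood("net neutrality is bad", model)
human_like = avg_log_likelihood("my cat ignored the news today", model)
print(bot_like > human_like)  # → True: the bot-like text scores higher
```

The key limitation is right there in the code: this model only fingerprints the generator it was fit on. Swap in a different generator and you have to fit a new detector from scratch.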

  30.

    Major Major Major Major

    January 7, 2020 at 2:00 pm

    @West of the Cascades: there’s no argument I can see that anonymous, false speech that is not performed by a human is protected under the First Amendment.

    Pretty sure if we’re ruling on something that wacky, it’ll be calvinball constitutional analysis on all sides.

  31.

    Just Chuck

    January 7, 2020 at 2:05 pm

    When anything written by a non-verified source is instantly suspect?

    Optimistic take is maybe the news orgs will get down and do their fucking job and treat unverified information with the skepticism it deserves.

    Realistic depressing take is that it’ll just continue to dig us all deeper into our own private realities where truth doesn’t matter, à la Facebook.

  32.

    Just Chuck

    January 7, 2020 at 2:08 pm

    @Sebastian: How do you propose to remove internet anonymity?  Mandatory registration?  Is your name really Sebastian?  Prove it.

  33.

    Another Scott

    January 7, 2020 at 2:13 pm

    @The Moar You Know: +1.

    What with more stories about “influencers”, ginned-up wars against neighbors, and all the rest, why would anyone believe anything that’s not posted by (and not just a forward by) very close real-life friends and relatives?

    I’ve never been tempted by FB.  I have occasionally been tempted by Twitter, but have always successfully resisted.  I like to think that people will realize that just as there is not enough time (and benefits) to watch 1000 channels and Amazon Prime Video and Apple TV+ and Netflix and all the rest, there’s better things to do than being bombarded by fake stuff online.

    Here’s hoping!

    Thanks M^4.

    Cheers,

    Scott.

  34.

    moops

    January 7, 2020 at 2:27 pm

    Actually a Turing Test might be helpful now. None of the current chatbots are up to a basic human-driven Turing test.

  35.

    Aardvark Cheeselog

    January 7, 2020 at 2:27 pm

    We are so fvcked.

  36.

    Sebastian

    January 7, 2020 at 2:30 pm

    @Just Chuck:

     

    yeah, I was waiting for the old usenet taunt “why aren’t you de-anonymizing first?!”

  37.

    Kelly

    January 7, 2020 at 2:31 pm

    @Major Major Major Major: The big surprise to me is the number of people that get taken in by crazy BS from out of nowhere. My personal BS detection depends on a chain of trust. I have people I’ve followed for years so I have had a chance to audit their information. Many of them were prominent before the internet. Norman Ornstein is a good example. When Norman Ornstein links to our Adam Silverman, Adam becomes more credible. However if your chain of trust starts with Alex Jones…

  38.

    schrodingers_cat

    January 7, 2020 at 2:34 pm

    BJP has already weaponized social media in India. Twitter and WhatsApp. It’s a centralized operation. Artificial intelligence has not been necessary to turn WhatsApp into Radio Rwanda. Every day husband kitteh wakes up with elebenty messages on evil liberals and Muslims from his elderly relatives in India.

    An example from today: mega Hindi movie star Deepika Padukone went to JNU this evening (it’s night in India right now) and silently stood behind the injured student body president, and within hours the BJP IT cell had a boycott of her newest release (Jan 10) trending on Twitter.

  39.

    Roger Moore

    January 7, 2020 at 2:36 pm

    @JGabriel: 

    I think we may need a stronger Turing test. Clearly all that one needs to do to convince someone that an AI is actually a human is for it to target a conservative, and spew right-wing disinfo.

    If you read the original paper where Turing proposed his test, he made it clear the tester was supposed to be a skeptical person doing their best to determine who was a real person and who was a computer. That rules out conservatives responding to right-wing disinformation.

  40.

    Kelly

    January 7, 2020 at 2:37 pm

    @Another Scott:

    I’ve never been tempted by FB.

    I haven’t either. My wife is on it daily. Out here in the boondocks it has some value for sharing local goings on and that’s where pics of the grandkids get posted. However she picks up a bit of nonsense or old news that upsets her every couple of weeks.

  41.

    Major Major Major Major

    January 7, 2020 at 2:38 pm

    @schrodingers_cat: Artificial intelligence has not been necessary to turn WhatsApp into Radio Rwanda.

    BJP is already using AI on Twitter.

    ETA: just based on personal observation, I haven’t looked for any papers on it

  42.

    Brachiator

    January 7, 2020 at 2:39 pm

    Their propaganda will be subtle, and interwoven in tweets about topics relevant to those social groups.

    I have suspected this of Balloon Juice for some time now. ;)

    Barring a dramatic shift in user authentication standards, we may soon find that the majority of political content (by volume) is written by computers. What happens then?

    Good question. Political discourse becomes nothing but spam. Does this drive people away, or would people continue to read, and to engage, known spam AI? What if computer driven discourse becomes engaging and coherent? Could people actually learn anything from it?

    Of course, we have already seen what happens when political AI goes wrong.  You get Mitt Romney, Mittbot 2000.

  43.

    schrodingers_cat

    January 7, 2020 at 2:42 pm

    @Major Major Major Major: Possible. There are many accounts that say identical stuff. That they’re not real people is kinda obvious. How did you tell?
    Plus if you switch to a local language the replies are kinda nonsensical and the # of troll accounts responding drops precipitously.

  44.

    mapaghimagsik

    January 7, 2020 at 2:48 pm

    What happens then? Skynet becomes self-aware!

    Sorry. It’s my go-to answer for almost any AI/ML/ES system doing something.

  45.

    Major Major Major Major

    January 7, 2020 at 2:52 pm

    @schrodingers_cat: Perhaps “AI” is overselling it, but there are definitely loads of bots operating from templates. Here’s a brief Economist article (free with registration): https://www.economist.com/asia/2019/04/11/indias-election-campaign-is-being-fought-in-voters-pockets

  46.

    Goku (aka Amerikan Baka)

    January 7, 2020 at 2:56 pm

    @Another Scott:

    Trust me, you’re not missing much with FB. I prefer to be anonymous online and I keep in contact with friends and family via other means such as phone, text, and email, so I’ve never understood the appeal myself.

    Now, streaming services like Amazon Prime are a different story and true enough you wouldn’t have enough time to watch all of the content, but you don’t have to. Most tv shows/movies suck outright

  47.

    Goku (aka Amerikan Baka)

    January 7, 2020 at 2:58 pm

    @Major Major Major Major:

    Isn’t what most people call “AI” actually just “machine learning”? At least that’s what I’ve read

  48.

    Brachiator

    January 7, 2020 at 2:59 pm

    @schrodingers_cat:

    BJP has already weaponized social media in India. Twitter and WhatsApp. It’s a centralized operation.

    This reminds me of a Forbes story from 2018 that stuck in my mind.

    India leads the world in the number of internet shutdowns, with over 100 reported incidents in 2018 alone, according to a report by Freedom House, a U.S.-based non-profit that conducts research and advocacy on democracy, political freedom, and human rights.

    Add to this the attempts to manipulate and control social media and you have a large ongoing war against informed citizens.

    A recent BBC news story provides a vivid idea of what is happening:

    India’s longest shutdowns:

    • 136 days and counting: Internet services were suspended on 4 August in Jammu and Kashmir this year

    • 133 days: An internet shutdown in Indian-administered Kashmir which lasted from 8 July to 19 November in 2016

    • 99 days: Authorities shut off the internet in India’s West Bengal state from 18 June to 25 September in 2017

      ETA: apologies for the bad formatting.

  49.

    Just Chuck

    January 7, 2020 at 3:03 pm

    @Sebastian: I don’t care if you do it first or ever, I still pose the original question of how you propose to de-anonymize on any scale at all.

  50.

    RSA

    January 7, 2020 at 3:07 pm

    @Goku (aka Amerikan Baka): Isn’t what most people call “AI” actually just “machine learning”? At least that’s what I’ve read

    Machine learning is the currently most successful area of AI (though it overlaps with a number of other fields), so it’s caught people’s attention.  More generally, ML is only part of AI, a larger scientific and engineering discipline.

  51.

    Just Chuck

    January 7, 2020 at 3:07 pm

    @Brachiator: I dunno, the Mittbot wasn’t calibrated for empathy and had some weird issues about the height of trees, but overall the quality of his engineering seems superior to the current generation of NutJobBots.  Those may emulate human behavior better, but they don’t seem to have any filters on outright psychosis.

  52.

    Major Major Major Major

    January 7, 2020 at 3:10 pm

    @Goku (aka Amerikan Baka): Isn’t what most people call “AI” actually just “machine learning”?

    There’s no “just” about machine learning. It refers to computers taking data and inferring their own rules for operating on it. Basically any computer intelligence will utilize this.

    What many people mean is deep learning, maybe, which is a particular implementation of neural networks.
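
For what “computers taking data and inferring their own rules” looks like at its absolute smallest, here’s a perceptron (one of the oldest ML algorithms, long predating deep learning) learning the AND rule purely from labeled examples. Nothing here comes from the post; it’s a minimal illustration:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Infer a linear rule (weights + bias) from labeled examples."""
    w, b = [0.0] * len(samples[0][0]), 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when correct; otherwise nudge the rule toward the data
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Apply the learned rule to a new input."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Labeled examples of a rule the program is never told explicitly: logical AND.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```

A deep network is, loosely, many such units stacked in layers with a smarter update rule; the “inferring rules from data” part is the same.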

  53.

    Just Chuck

    January 7, 2020 at 3:22 pm

    One of my favorite Edsger Dijkstra aphorisms is: “The question of whether machines can think is about as relevant as the question of whether submarines can swim”.

    In other words, it doesn’t matter whether a computer does it like us, it matters that they do it better.  Does a computer truly “understand” chess?  Who knows, but we do know it can kick our asses at it.  It segues into the notion of consciousness: the whole idea of computers becoming “self aware” is a nebulous human thing that a computer simply has no _need_ for.  If any AI developed a “consciousness”, it would be so foreign and different to us that we likely would never recognize it.  It has no human body, so none of the human wants or needs, so why would anything it “thinks” for itself be at all familiar to us?

    My biggest worry with AI is not “what will it do with us when it gets its own will?”, it’s the IMHO far more current and pertinent concern: Who does it work for?  Right now they’re working for some pretty bad actors.

     

    Still, my favorite evil-AI story has to be Harlan Ellison’s “I Have No Mouth and I Must Scream”.  Imagine an AI that wakes up and feels emotions millions of times more powerful than any human can imagine.  Actually just one emotion: Hate.

  54.

    Goku (aka Amerikan Baka)

    January 7, 2020 at 3:24 pm

    @RSA:

    @Major Major Major Major:

    I see. Thanks. So, machine learning is a branch of AI, then?

    I know AI is typically divided into two categories: strong and weak AI, with strong AI being the one that is self-aware and most familiar to the public in fiction

  55.

    Just Chuck

    January 7, 2020 at 3:33 pm

    @Goku (aka Amerikan Baka): “Strong AI” is generally defined as either “stuff we don’t know how to do”, or “stuff we can’t even define how we do ourselves”.  I think one of the results of AI research we’ll find is not that we’ll make machines into something special, but that we’ll find out that we ourselves are not as special as we think.  We just, uh, think we are.

  56.

    Major Major Major Major

    January 7, 2020 at 3:34 pm

    @Just Chuck:

    One of my favorite Edsger Dijkstra aphorisms is: “The question of whether machines can think is about as relevant as the question of whether submarines can swim”.

    Heh, I don’t think I’ve seen that one before.

    I find the question of whether computers can think to be functionally equivalent to the question of whether humans can think. Which is actually an open question in the philosophy of mind.

  57.

    Bill Arnold

    January 7, 2020 at 4:03 pm

    @Sebastian:

    *) untraceable cash. It allows for unrestrained corruption.
    *) anonymity on the internet. It allows for unrestrained behavior and bots.
    I say it’s time to remove both.

    This makes political opposition non-viable. Do I need to enumerate the ways? Even in a free society, many people have vindictive employers or neighbors, and internet-driven harassment is also a thing.
    (And yeah, I’m using my real name, here.)

    This is a few years old but still good. I’d go with https://protonmail.com/ (Switzerland) rather than gmail; gmail will sometimes do random 2fa to an expired burner.
    Twitter Activist Security – Guidelines for safer resistance (thaddeus t. grugq, Jan 30, 2017)

  58.

    BellyCat

    January 7, 2020 at 4:54 pm

    @Bill Arnold: What is “…random 2fa to an expired burner”?

  59.

    BellyCat

    January 7, 2020 at 4:58 pm

    Not a Tweeter, but does Twitter’s “verified account” lean in the right direction? (assuming it works properly).

    If so, can one currently only allow comments from or filter tweets from verified users?

  60.

    Major Major Major Major

    January 7, 2020 at 5:06 pm

    @BellyCat: Twitter never really explained how that works, and largely has stopped doing it; it’s considered to be a big failure as far as verification policies go.

    Facebook tries to verify humans too, believe it or not. Twitter’s program, for its many faults, seems to have only verified actual humans though.

  61.

    RSA

    January 7, 2020 at 5:15 pm

    @Just Chuck:

    One of my favorite Edsger Dijkstra aphorisms is: “The question of whether machines can think is about as relevant as the question of whether submarines can swim”.

    Dijkstra was very quotable! Here’s Turing, from 1950:

    I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

    The new form of the problem can be described in terms of a game which we call the ‘imitation game’…

    We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this…? These questions replace our original, ‘Can machines think?’

  62.

    RSA

    January 7, 2020 at 5:27 pm

    @Just Chuck: Just to follow up to @Goku (aka Amerikan Baka):

    When I started grad school almost three decades ago, one of the conventional divisions in AI was between symbolic approaches (such as search, logic, and classical planning) and what we might call numerical or optimization-based approaches (here I’ll lump together connectionism, probabilistic reasoning, most of machine learning). This isn’t a great division, in part because it’s so far from being strict. To be honest, I’m not sure it’s possible to divide up AI into a small number of pieces. There’s a huge amount of flow and interplay between so many of its branches.

    Strong versus weak AI is more of a philosophical distinction than one made by people working in the field.

  63.

    Bill Arnold

    January 7, 2020 at 6:02 pm

    @BellyCat:
    To make an anonymous gmail account you need an anonymous phone number that can be texted to (if that’s the current procedure). This is often a burner phone, used once for creating an anonymous account (or an email/twitter account pair) from a reasonably anonymous location (not one’s home) (with any web stuff done over tor), then discarded. Unfortunately, google has started to do apparently random checks on email accounts, e.g. with a two-factor authentication code sent to the phone number on file. If the phone no longer exists, one is locked out of the gmail account, unless there is a recovery email, which in turn would need to be anonymous.
    This is more paranoia than most people are willing to deal with. protonmail might be a reasonable compromise (if not compromised by intelligence agencies) because it has reasonably strong (Swiss) privacy policies even if not set up with an anonymous phone. Free 500MB email account, a bit slow and awkward to access.

  64.

    BellyCat

    January 7, 2020 at 6:05 pm

    @Major Major Major Major: Interesting. As a non-Twitter person, I’m unsure if my second question is possible. Can one limit (or filter) threads to only verified users?

  65.

    Formerly disgruntled in Oregon

    January 7, 2020 at 6:06 pm

    Require CAPTCHA for every social media login and post

    ”I’m not a robot”

  66.

    Bill Arnold

    January 7, 2020 at 6:06 pm

    @BellyCat:

    Not a Tweeter, but does Twitter’s “verified account” lean in the right direction?

    Depends on what you mean by right. It blocks a lot of people from being politically active. And there are some quite old, quite anonymous accounts on twitter that are quite good. Bluecheck is not an indicator of quality. Many bluecheck accounts are quite vile fountains of misinformation.

  67.

    BellyCat

    January 7, 2020 at 6:07 pm

    @Bill Arnold: Thanks for the explanation!

    ETA: Double-edged sword about verification. Naïvely, I wonder about the possibility and/or benefits of allowing anonymous users while verifying people’s true identity in some secure database. (Balloon Juice almost has some kind of informal equivalent!)

  68.

    Another Scott

    January 7, 2020 at 9:04 pm

    Relatedly, coming to a store near you, … CNet:

    Neon’s super realistic digital people are real. Well, sort of.

    The mysterious company, emerging from the Samsung Technology and Advanced Research Labs (aka STAR Labs), debuted late Monday at CES 2020 here in Las Vegas. It described its technology, also called Neon, as “a computationally created virtual being that looks and behaves like a real human, with the ability to show emotions and intelligence.”

    Basically, Neon makes video chatbots that look and act like real people. Neons aren’t all-knowing smart assistants, androids, surrogates or copies of real humans, the company said in an FAQ shared with reporters. They can’t tell you the weather or how old Abraham Lincoln was when he died.

    “Neons are not AI assistants,” the company said. “Neons are more like us, an independent but virtual living being, who can show emotions and learn from experiences. Unlike AI assistants, Neons do not know it all, and they are not an interface to the internet to ask for weather updates or to play your favorite music.”

    Instead, they’re designed to have conversations and behave like real humans. They form memories and learn new skills but don’t have a physical embodiment, at least not now. Neons can help with “goal-oriented tasks or can be personalized to assist in tasks that require human touch.” They can act as teachers, financial advisers, health care providers, concierges, actors, spokespeople or TV anchors.

    […]

    Or posters in FYWP arguments!!1

    Greeaaat…

    Cheers,
    Scott.

  69.

    Matt

    January 9, 2020 at 2:54 am

    Some of it is a question of filtering, and what people are selecting for when they read political commentary.

    I assume people don’t want to be spammed by 10k bots spewing the same talking point over and over again to the point that they can’t read what their human neighbors are writing, but does that mean we should cut out all computer generated feedback? What if an acceptably strong AI is made that provides insightful political commentary that I would care to read? Plenty of humans create garbage content that I want to filter out, as letters to the editor in most any newspaper demonstrate.

    I would hope that people would want to filter commentary by some mixture of insightful+intelligent, representative of how people are likely to vote, and representative of a diverse mix of viewpoints. We can’t easily automate measuring that, but that is where I would want to start.
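    Purely as an illustration, that mixture could be sketched as a weighted score. The weights and feature names below are invented, and the genuinely hard, unautomated part (actually measuring insight, representativeness, and diversity) is untouched; this only shows how the combination might work.

    ```python
    def comment_score(insight, representativeness, diversity,
                      w_insight=0.5, w_rep=0.3, w_div=0.2):
        """Combine per-comment features (each scored in [0, 1]) into a
        single ranking score. Weights are arbitrary placeholders."""
        return (w_insight * insight
                + w_rep * representativeness
                + w_div * diversity)

    # Two hypothetical comments: a is insightful but unrepresentative,
    # b is representative boilerplate.
    a = comment_score(insight=0.9, representativeness=0.4, diversity=0.7)
    b = comment_score(insight=0.2, representativeness=0.9, diversity=0.1)
    print(a > b)  # True under these weights
    ```

    Changing the weights is where the cultural argument lives: a platform that sets w_rep near 1 recreates the filter-bubble default the comment warns about.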

    Of course, most people if left to choose for themselves will want what is entertaining and agrees with their existing viewpoints. We then have the cultural issue of setting expected or default filters to push people to not fall into their own solipsistic filter bubbles.
