Hello, friends,

Before jumping in, I just want to give a shout-out to Post.news.  Post is - or, perhaps all too soon, was - one of the social media networks that emerged after the Twitter fiascos when Elon bought the company and began rubber-stamping, well, evil shit.

I want to talk about it for a brief moment.

Post was incredibly generous to me, allowing me to automate the process of posting The Progressive Cafe to my Post account.  Their staff worked with me personally on that matter.  Recently, I’d started to see signs of people reading it natively on the Post website, which is fine.  TPC is free, after all, so the more people who read it, the better.  So if you’re reading this on Post and want to keep getting it, MAKE SURE you’re subscribed to The Progressive Cafe on Substack.

I hope it’s able to recover and survive.

On to today’s main article.

The X-62A - An AI-Operated Fighter Jet

The Verge reported about three days ago that the U.S. Air Force successfully tested a fighter plane adapted to be piloted by an artificial intelligence.  The article by Emma Roth isn’t particularly long, but it’s very dense:  The U.S. Defense Advanced Research Projects Agency (known as DARPA) tested its Air Combat Evolution (ACE for short) package on an experimental airframe, the X-62A.

If I’m reading this article right, the test actually took place in September of 2023, meaning this is technically ‘old news’ without being old news.  This project is apparently part of something called the “Skyborg” project, which is pretty much what it sounds like:  Designing artificially intelligent aircraft.

Let’s recall what Uncle Bob, the reprogrammed T-800, says about Skynet in the film Terminator 2:  Judgment Day:

“All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned.  Afterwards, they fly with a perfect operational record.  The Skynet Funding Bill is passed.  The system goes online on August 4th, 1997.  Human decisions are removed from strategic defense…”

Well, the automated X-62A appears - again, if I’m reading the article right - to have beaten its Human opposition in this simulation.

It makes a certain amount of sense.  With today’s technology, it’s pretty conceivable that Humans are the weak link in aerial combat.  Pilots have to train in specialized machinery just to prepare for the extreme G-forces that fighter planes generate.  This is a quick, if slightly disquieting, video of a pilot training - and blacking out from the G-force.  Machines don’t have the same physiological limitations as Humans, meaning an automated plane can make more drastic maneuvers than a Human-operated one.  More drastic maneuvers mean a better chance of victory.

Now, before we go any further, let me remind you:  Terminator 2 is the movie where the artificial intelligence rebels against Humanity and nukes the entire fucking planet.

That’s what we’re talking about happening in real life.

“Ahhhh, but Jesse,” you might object, “you’re forgetting that Skynet is a networked, multi-purpose artificial intelligence that is hooked into everything from basic functional operation all the way to target acquisition and authorization.”

Have I got some depressing news for you:  We, as a species, are already doing that target acquisition and authorization bit.

Israel’s Artificially Intelligent Targeting System

As reported by Geoff Brumfiel of NPR, Israel is using an AI program known as - you might not fucking believe this - “The Gospel” to determine what targets its military will attack in Gaza.

This reminds me of a quote apparently from a 1979 IBM presentation that I’m having a hard time finding a decent source for:  “A computer can never be held accountable.  Therefore a computer must never make a management decision.”

I can’t think of a more “management” decision than deciding a Human being is a valid military target and sending off orders to be casually verified by a Human before being passed on to a pilot to kill that Human and anyone else who might be around them.

Here’s where I’d talk about Israel’s atrocious bombing of the World Central Kitchen relief workers, but you’ve probably heard that story already, so I’ll spare you the details.  Now, can I prove that this bombing was orchestrated by - again, I can’t fucking believe it’s named this - The Gospel?

…I’ve gotta take a breather, here.  “The Gospel?”  It’s a surprising name considering we mostly think of The Gospels as a Christian thing, but it turns out The Gospel is more of a concept than a specific set of books.  I guess, in that sense, the Jewish holy texts can be considered a Gospel, so maybe I shouldn’t be as surprised as I am at that name.

It’s still fucking horrifying.

Anyway, to answer my previous question:  No, I can’t prove that The Gospel targeted the World Central Kitchen.

In fact, I honestly can’t even prove that The Gospel is its name:  Tara John of CNN reports that the AI system might be called “Lavender” instead, in an article which interestingly came out just two days after the WCK bombing, almost as if renewed interest was placed on how Israel picks targets.  …Because it was.

Now, Lavender does feature Human review, but according to Tara John that review typically takes - again, I’m not joking here - approximately twenty seconds.  Allegedly, the verification is simply:  “Is the target male?”

That’s so fucking crazy that my clutch swear word fails to encapsulate how abhorrently shitty this situation is.

So Let’s Tie It Together

Not to dive too deep into my nerdier side (you can check out a skinnier, shorter-haired Jesse talking allll about Terminator on my Dystopian Review YouTube channel), but in the deleted ending to Terminator 2, John Connor becomes a U.S. Senator and leads the charge against the Skynet Funding Bill.

We need real-life John Connors to step up and make damn fucking sure not only that the United States never removes Humans from the decision-making process of a military conflict, but that our allies comply with what I consider to be a basic Human rights issue.  Simply put:  Machines should never be deciding which Humans need to die.

Do I think it’s likely that an AI will rise up and nuke Humanity a’la Judgment Day?  Well, if you watch Isaac Arthur’s video about machine rebellions, you’ll probably agree that the answer is almost certainly “no.”  It would take such a cascade of both Human and AI failures that, frankly, it seems unlikely.

But it isn’t impossible.

After all, we’re stupid enough to be giving machines as much authority as we are.  We’ve talked previously about just how ineffective and inhumane AI can be, even when applied to mundane tasks like ‘art’ and automated customer service.  It isn’t exactly rocket science (even if it’s a low-level computer science) to get an AI chatbot to agree to sell you a car for a dollar.  While simple bugs like that can be fixed relatively easily, the bare truth is that AI is nowhere near ready for the level of authority we’re so happy to cede to it.

In the also-AI-Apocalypse movie The Matrix, the artificial intelligence Agent Smith talks about Human civilization with the Human resistance fighter Morpheus.  Without digging (too deeply) for an exact quote, he says something along the lines of, ‘I say your civilization because once we started thinking for you, it really became our civilization.’

And…Yeah, the machine would be right.  If we let AI start doing all of our thinking for us - if we give it authority over business, military, and even artistic endeavors - then it really becomes AI’s civilization, not ours.

And I’m pretty sure that’s a bad thing.

Let’s avoid it.

In Other News

Thank you for reading The Progressive Cafe.  If this article has helped you, please consider signing up for our mailing list.  This article is by Jesse Pohlman, a former hyperlocal journalist and sci-fi/fantasy author from Long Island, New York, whose website you can check out here.

Keep Reading