v.01a: Random Precision

[Partial transcript of an interview with John DeGrill, CEO of ASI, conducted with The Techist Podcast eleven years pre-Alpha.]

So you’re not impressed?

Not particularly, no.

Seriously?

Look, I think it’s neat, but it’s not breaking any new ground.

But we’re speaking with AIs.

We’ve always been speaking with AIs. We’ve just been doing so in some made-up language of keywords and search terms. That’s what algorithms are: AI. The only thing - the only thing - that’s different here is that now we’re able to do so in plain language.

So it’s just-

We’ve used Large Language Models to drive Natural Language Processing, and then hooked them up to the same information engines. It’s just Google with a translation layer.

But that’s not really true, is it? It’s-

It’s exactly true! We’re-

But Google’s just a list of responses - these AIs are so lifelike.

That’s the thing though - they’re not lifelike. They’re built from our own conversational language, so they probabilistically interpret that language to guess the best next phrase. That’s not life, that’s just call and response. They sound like us because the entire model is built from chopped-up bits and pieces of our own language. But there’s no meaning there. It’s hollow.

You’ve seen the chats though. There’s something more there than just word salad.

Don’t say Turing.

It passes!

It passes a nearly century-old computing exercise - congratulations to everyone.

No, seriously, we’ve seen it time and time again - these new AIs are passing the Turing test every day.

Because they’re just us, remixed! It sounds like us because it IS us! But it’s a tape recording, not a real voice. We’re talking about the Turing Test, but what we should be asking ourselves is if we as a species are passing the mirror test! Because that’s all this is! This is our own reflection and we’re just staring at it going “it’s so lifelike!”

Then why-

People will anthropomorphize a fence post if you put googly eyes on it.

No, why-

And give it a name. Greg the fence. Fency.

Why are they expressing feelings then? Wants? Needs? Desires? Ennui, even?

Because we do. Because they’re us. And because our brains are built to identify patterns and create meaning. We are pattern-recognition monkeys who look up at the random distribution of stars in the sky, and our brains connect the dots and create images and stories from static whether we want to or not. So we see fragments of ideas, fragments of language, presented together as a response to a specific question, and we connect those dots like it’s a human creating a singular, complete response. We are built to assume words connect with meaning, so when there is a meaning vacuum, we fill that vacuum. But it’s just language broken apart and reassembled by a computer - we’re bringing the feelings, because the entirety of human existence - hundreds of thousands of years - is built upon the idea that complex language speaks to intelligence.

Pun intended?

Pardon?

“Language speaks to intelligence.”

No, but let’s pretend it was and that I’m as smart as that sounded in the moment.

So it’s not-

No.

-alive?

Fuck no. Jesus Christ, fuck no.