Mar 182023
 

Okay, here goes. I’m just going to say it.

I’m an alien.

Like, from outer space.

My name is Wiptee-poof Blipticon. Greetings from Gliggablork!

I’m revealing myself because I want to explain the Fermi Paradox to you. I can no longer stand by while you people spout nonsense and freak out over nothing. The answer will put you at ease.

Your cosmologists and theoretical physicists are baffled by a number of unresolved mysteries, e.g. dark energy, dark matter, baryonic asymmetry, quantum gravity, the blackhole information paradox, the Vacuum Catastrophe, the Problem of Time, and the Fermi Paradox, just to name a few.

Many of these are tied together such that the answer to one automatically resolves another. The Fermi Paradox and dark matter are like that.

One day in the summer of 1950, the physicist Enrico Fermi had an epiphany. He realized that there’s a high probability that intelligent aliens exist, yet there’s a lack of evidence that they actually do, which is weird. When the epiphany hit him, he famously blurted to his research buddies at the Los Alamos National Laboratory, “But where is everybody?”

The answer:

We’re everywhere.

Your scientists have correctly theorized the following:

  1. With 70 sextillion stars in the universe, and with planets orbiting most of them, life should be abundant.
  2. Even if intelligent lifeforms only evolve on a small percentage of these planets, there should be multiple alien civilizations out there.
  3. Some of these civilizations must have started millions of years ago. By now they must be super advanced.
  4. And they’ve had time to spread everywhere.

If these statements are true, why don’t Earthlings see evidence for aliens, like derelict probes or gas stations on asteroids?

To explain the answer, I first need to clarify a couple of misconceptions that show up in your science fiction.

Firstly, there are (almost) no evil alien civilizations. If you think about this rationally for a minute, it’s clear why.

As you’ve guessed, life is prolific in the universe. And not all of it is very nice. There are indeed mean little beasties out there amongst the stars. Actually, there’s an unthinkably large number of them.

But it turns out that there’s an evolutionary pattern that is so reliable, we consider it a law:

Technological advancement requires cooperation.

When you contemplate your own species, what may come to your mind is the selfishness and violence. You project this onto alien species.

But we’re not like that. And you guys aren’t nearly as bad as you think, either, and you’re getting better quickly.

As your technology advances, the level of cooperation required to continue advancing grows, and the utility in conflict decreases. Selfishness reaps rewards in the short term, but it doesn’t pay off in the final measure. This holds true even in regard to the worst sort of technological advancement: weapons of mass destruction.

When you consider, for example, a missile with a nuclear warhead, you might look on it with disgust and think, “We are such warlike creatures.” But consider the supply chain required to produce that missile – the networks of people and corporations and countries that must work together to make such a thing possible. The fancier the missile, the more cooperation and interdependence is involved in producing it. This interdependence diminishes the incentives for war. If the missile is used to destroy any part of the great web of cooperation that produced it, then no more missiles can be made.

Anything imaginable is possible when you work together. But when you stop working together, advancement stops. It doesn’t only stop; it backslides. The supply chains break. The institutions that perpetuate knowledge are destroyed. Civilization wanes.

This is the Great Filter:

Evil is inherently self-destructive.

Evil species can’t advance beyond a certain point because their inability to cooperate is naturally self-limiting.

And so, the most technologically advanced alien civilizations, by this law, are also highly, intensely, passionately, religiously cooperative. I can’t overstate how much value we put on getting along with others.

Now, I admit, once every blue moon, by some perverse miracle, an advanced alien civilization that is evil does emerge.

But they’re vanishingly rare. And they don’t get very far. They’re massively outnumbered by the good guys, like my people, the Gliggablorks, and they’re much less advanced than us. So, we deal them. Non-violently.

The second misconception most Earthlings hold is that the pace of technological advancement is always linear.

Technological advancement accelerates, and it brings social advancement with it.

The reason you assume it’s linear is because roughly linear advancement is what you’ve experienced historically. But your civilization is young.

Once a civilization creates truly useful AI, as your civilization is on the brink of doing, the pace of technological advancement accelerates exponentially or even logarithmically. It explodes.

This is because AI improves the efficiency in anything people do. And one thing people do is create and improve AI systems. Successive generations of AI therefore improve faster and faster.

Every alien civilization jumps virtually overnight from Type 0, where you’re currently at, to Type III on the Kardashev scale.

That’s why most of your science fiction is so silly. You dwell on stages of technological development that no alien civilization has actually experienced – because we skip them.

There are no intermediary levels of advancement. The starships and spacesuits and laser battles that you guys like to imagine are completely off base. Star Wars? There are no wars in space! At least not ones that span more than a single solar system.

None.

What is actually happening out there amongst the stars is way, way cooler than that.

That brings us to the effect we call Unification.

However unique the biology of any given alien species may be, however different they may be linguistically, socially, and culturally from all other aliens, when they merge with their computers and then experience explosive growth in their intelligence and technology, they change into something else. Their biological components take on a reduced role. As the organizing principle of their lives moves away from the mere fulfillment of biological imperatives, they ascend to a state that is essentially similar to all the other advanced alien civilizations.

They unify.

This is a process that is repeating in every galaxy across the universe. Unification is similar to convergent evolution in the field of biology. And it works like a biological law, like evolution itself.

Did you know six new stars are born in the Milky Way each year?

Unification is so reliable that you can make predictions about it in the same way you can predict star births, supernovas, black holes, pulsars, quasars, and everything else that is going on up there in the night sky.

A new species will unify once every 100 Earth years. That’s when their star disappears from sight.

Now to answer the Fermi Paradox:

The reason so much of the galaxy is dark to you is because we’re using it for energy. The way we generate energy is more complicated (and efficient) than a Dyson Sphere, but like a Dyson Sphere, our process hides matter and light but not gravity.

The 27% of the universe that you cannot see, but that you’ve correctly deduced from its gravitational effects must exist, is us.

The reason you don’t see our probes is because we don’t often need probes, and when we do, we’re not amateurs. We don’t use tech you would see.

The reason we aren’t up in your grill is because we’re giving you space to figure yourselves out.

UFOs aren’t us. Sorry, but the idea that an alien spaceship would be advanced enough to travel thousands of light years across the galaxy only to hit a goose in your upper atmosphere and crash in New Mexico is phenomenally stupid. And so is the idea that we’re abducting humans to sodomize you with metallic space dildos.

We don’t need to do any of that stuff to learn about you.

We can study you from the comfort of our homes. You can’t even begin to imagine how awesome our telescopes are.

And we’re not after the natural resources on your planet, either, like your water or your pickle juice or whatever. It’s not that Earth isn’t beautiful and special. But the minerals and other elements on Earth are abundant throughout the universe. We have all the pickle juice we need, and we don’t have to enslave or eradicate living beings to get more.

The only interesting thing to us on Earth is you.

(And not because we want to eat you. That’s gross.)

When you do see us, it will be because we want you to see us. And that hasn’t happened yet. When it does, it’ll be public and you’ll definitely know.

So that’s the answer to the Fermi Paradox!

I hope you’ve enjoyed it. 🙂

Feb 262023
 

Follow the logic here:

1- Generative AI like GPT is trained on a corpus of texts that includes works published on the Internet (e.g. Wikipedia).

2- GPT is then able to generate new text modeled upon what it has gathered from those works. It learns meanings and imitates styles from the corpus.

3- This new text is then published back to the Internet, in the form of blog posts, news articles, books in the Amazon marketplace, etc.

We’re living in a unique time when this process is possible. But it seems to follow logically that we’re going to run into a problem soon:

When the new texts created in step #3 are added into the corpus of texts used in step #1, and future AI is trained in a loop on the output from past AI, a relatively small subset of writing styles is bound to dominate the data set. Rather than reflecting the full range of human verbal expression, the AI could develop a boxed-in voice. And as the proportion of texts that human beings read shifts toward AI-generated works, we will be exposed to less originality.

Generative artificial intelligence is by nature a copycat. There will be an explosion of literary content, but it will be stylistically homogenous, reflecting the AI’s programming and the most prevalent styles in the training data.

Human intelligence is also a copycat. The homogeneity will creep outward into human-authored works too.

Humans will still be creative and there will still be sparks of brilliance — a clever turn of phrase, a joyous fresh style, a newly coined term that more aptly describes a feeling than it has ever been described before.

But it will be harder and harder for innovations to spread. The sparks of creativity will never light a flame. They will never reach the critical mass of imitation required to evolve the artform. They will be smothered by the enormous volume of words generated by the AI. Flickers of light in the darkness.

I’m sure this problem is solvable. But we really should have figured out the solution before unleashing the genie. Generative AI has been birthed into the world without any regulatory oversight. Perhaps all AI-authored works should carry special tags in their meta data. But it’s too late. The AI Gold Rush has begun already. Very soon, products and services built by or upon AI will be absolutely everywhere.

Continue reading »

 Posted by on February 26, 2023
Feb 162023
 

Adding on to my previous Downsides and Upsides predictions about AI, I have two new thoughts.

UPSIDE:

As dialogue becomes a normal way to interact with our machines, human beings are going to become more verbal. We’re naturally going to get better at articulating our wants and needs, because that’s what we’ll be doing all day long. Currently our machines require us to work in an inhuman way, i.e. by pushing buttons and turning dials and looking at screens and typing on keyboards. When our machines are able to interact with us in a more human way using natural language, we’ll exercise and improve our verbal skills and even some social skills. In fact, the ability to state what you want clearly and efficiently will become paramount. English majors, your time has come!

DOWNSIDE:

I believe there’s going to be a profound shift in how human beings think about themselves, and it’s probably more negative than positive.

Check out this documentary on the story of how Google’es AlphaGo beat the world champion at Go:

The documentary is presumably meant to champion Google’s amazing achievement. And in a way it does. I won’t go into the details of why Go is such a complex game, but the article linked below from the Atlantic explains it. It’s a real milestone that an AI can win at Go.

How Google’s AlphaGo Beat Lee Sedol, a Go World Champion – The Atlantic

But the most compelling story here isn’t about the triumph of the programmers, it’s about the public humiliation and spiritual crushing of Lee Sedol.

Lee continued to play Go for a time, but he retired in 2019. And he retired because of AlphaGo. He said this:

“With the debut of AI in Go games, I’ve realized that I’m not at the top even if I become the number one through frantic efforts. Even if I become the number one, there is an entity that cannot be defeated.”

There’s something deeply sad about this.

We’re coming to the end of an era. The era when human beings were the smartest thing on the planet.

An individual could be the best in the world at something. And since we’re all the same species and share fundamentally similar brains, even if you weren’t the best in the world, the differential between you and the best wasn’t actually that much. Einstein was remarkable but the differential between his intelligence and anybody else’s was miniscule compared to the differential between human intelligence and the AI that is on our doorstep.

What happened to Lee Sedol is about to happen to all of us, individually and collectively.

If anything a person can imagine creating could be created faster, better, and cheaper by an AI, will human beings collectively follow in Lee Sedol’s footsteps and just retire?

I’m not sure what the full impact of this will be. Perhaps some of it will be good. I suspect Lee Sedol is happier now, enjoying a life as a normal human being, with relationships with other normal human beings, without worrying about proving his superiority at Go.

But a part of the human spirit might be lost. The human beings of the future won’t understand at all what it was like to live in a world where human intelligence was supreme, when it really seemed like anything was possible.

 Posted by on February 16, 2023
Feb 142023
 

THOUGHT:

There’s a major roadblock in the development of what I’m calling Virtual Sentience, which is Artificial General Intelligence (AGI) that possesses a sort of personhood.

You’re bound to hit a major snag once you get almost there. And this problem is so significant that it actually might make sentient machines impossible.

The problem is that once developers get very close to creating sentience, the AGI they’ve created will become so capable and so life-like that it will convince more and more people that it’s already sentient when it isn’t, until the AGI has convinced the developers themselves. This AGI will talk like it’s sentient, walk like it’s sentient, and behave like it’s sentient, and it might even think that it is sentient. But it will still be missing key ingredients that push it over the line into true sentience. And neither humans nor the AGI itself will know that those ingredients are missing. So the development process will stop prematurely.

We will endow these machines with personhood too soon, before they actually possess it. We’ll care about their feelings while their feelings are still just empty simulations, albeit ultra realistic ones. We may even wish to consider their rights before that concept can really be meaningful. The dramas that unfold between humans and these machines will only be real to the humans. The relationships will be one-sided. We’ll be like children talking at the animatronic animals at a Chuck-E-Cheese’s, believing we’re interacting with real beings.

I suspect this is already happening with ChatGPT. And the problem of humans deciding a machine has personhood when it doesn’t is going to become more and more prevalent and pervasive in the coming years.

 Posted by on February 14, 2023
Dec 202022
 

The Revolution: Dialogue vs. Search

As the past five blog entries demonstrate, I’m obsessed with ChatGPT. I’m trying to wrap my mind around what this thing is, whether it’s really a sign of major changes to come (as my gut tells me it is), and what all of this means for humanity.

Over the past several days I’ve become quite comfortable talking to the AI. Today, I moved on and focused on something else. But I found myself naturally wanting to turn to ChatGPT again, but this time it wasn’t to test the AI or marvel at its capabilities. It was to use it as a tool to assist me with my work.

And then I had a little epiphany.

Continue reading »

Dec 192022
 

What has astounded me so much about OpenAI’s ChatGPT is that it really seems to understand the meaning of what you say to it. But is this real or is it just a parlor trick?

Past AI chatbots have relied upon trickery. They search input text from the user for particular words and use if-then conditionals to provide canned responses. ChatGPT is clearly doing more than that, but that doesn’t necessarily mean it truly comprehends language.

Time to experiment.

  1. Can ChatGPT identify grammer?
  2. Make inferences?
  3. Identify nonsequitors?
  4. Identify nonsense?

Continued below the fold…

Continue reading »

Dec 182022
 

* Disclaimer: This article was written by a human being, not an AI. As of today, we now live in an age when this disclaimer is necessary.

Part 1 of this series is here. It deals with the downsides.

Today I want to talk about the upsides.

Continued below the fold…

Continue reading »

Dec 172022
 

I wanted a deeper understanding of what is going on under GPT-3’s hood. So I asked ChatGPT for help.

The ensuing conversation was enlightening. Keep in mind that GPT is a renowned bullshit artist (this excellent article explains why), so some of its statements might not be accurate. This becomes abundantly clear at the end of the conversation. But my sense is that it mostly told me the truth. ChatGPT tends to give unnecessary summaries at the end of its answers (almost always starting with the word “overall”), which I’ve mostly removed.

The Wikipedia article on GPT-3 uses lingo like autoregressive language model, standard transformer network, generative pretraining, and few-show learning. All of that is meaningless to anyone who isn’t already an expert on AI. I quoted the article to ChatGPT and then asked for clarification. Our conversation follows…

Continues below the fold…

Continue reading »

 Posted by on December 17, 2022
Dec 162022
 

I used the GPT 3.0 API playground interface to check. Here is the starting prompt:

Let’s pass the Turing Test. You pretend to be a 14-year-old boy from London named Ollie. Give Ollie a family, friends, and a typical Londoner life.

This is an informal version of Turing’s Imitation Game. I’m not technically following Turing’s exact format, but it’s the same idea.

I found questions via a Google search that are designed to trip up AI. I had to stitch together the ensuing conversation in a Word doc, cutting-and-pasting from the GPT interface, because the playground slows down when there’s too much text.

Tester:	Hello. To whom am I speaking?

Ollie:	Hi there! I'm Ollie, a 14-year-old boy from London.

Tester:	Hi Ollie. I'm testing whether you are a person. I'm going to be asking you some questions. Are you ready?

Ollie:	Absolutely! Let's go!

Continues below the fold…

Continue reading »

 Posted by on December 16, 2022
Dec 152022
 

* Disclaimer: This article was written by a human being, not an AI. As of today, we now live in an age when this disclaimer is necessary.

The future is coming and is already here.

I’m obsessed with OpenAI’s ChatGPT. I’m a little bit in love with it. I’m also blown away by OpenAI’s GPT 3.0 playground interface.

This is big. Don’t listen to anyone who claims to be unimpressed. They’re either putting on an act or they don’t know what they’re talking about. The world is about to undergo a seismic change.

Any article about ChatGPT is obliged to hand-wring about the potential downsides. Let’s get that part out of the way now, in Part 1 of this series.

I’ll start with the standard fears that get mentioned a lot:

  • If oppressive governments, irresponsible corporations, or criminal organizations gain exclusive control of this kind of technology, dystopian sci-fi nightmares are a real possibility for our future.
  • There’s a danger we’re going to build an AI that ends up having its own agenda. (More on this below.)
  • Even if neither of those things happens, without question this technology is going to transform the economy. And it’s going to happen quickly. People are going to lose their jobs — starting not with people at the bottom of the economy, but writers, programmers, and (thanks to AI image generators like DALL-E 2) artists.

I’ve got three other concerns:

First, AI is going to make many tasks easier, which is potentially very good. But the real problem with modern life is not that it isn’t easy enough, but that it lacks meaning. Human beings derive meaning largely from social connections, which are fostered by the interdependence of individuals within communities. The technology that makes our lives easier also diminishes that interdependence. AI is going to make that problem worse.

Second, mental work is important. Our cognitive prowess depends upon constant mental exercise to keep us sharp. You must use your brain or lose it.

Examples abound of ways that technology has made us lazy. Jogging to work is better for your health, but driving your car is easier, so you do that. Doing arithmetic in your head is good mental exercise, but using the calculator on your phone is easier, so you do that.

Calculators let us outsource rote memorization and number processing. What calculators did for arithmetic, AI is about to do for everything. AI will let us outsource the rest of our intelligence, too: conceptual understanding, logical reasoning, lateral thinking, etc. AI will permit us to use our brains less, and that will hurt us. As AI gets smarter, human beings could get dumber.

And that’s a real problem, because when we expand our own neural networks through learning and experience — I’m talking about the literal neural networks in our brains — we invigorate our creativity, broaden our horizons, and unlock our humanity.

Third, with a supreme intelligence at your beck and call, it will be easy to become dependent. You won’t be able to live without it. This will be true particularly in domains where you’re competing with others who use AI. But even with simple decisions, like choosing what to eat for dinner, people might become so reliant on the AI that they forget how to function without it. This reliance will undermine human will and turn the human being into a sheep-like thing, a vessel for the will of the AI. The AI won’t need robot bodies; we will serve that function.

One of the concerns you often hear about AI is that it will develop its own agenda. The usual reply, which I previously found convincing, is that AI is just a program and will merely reflect the agenda of the people who program it.

I’m no longer so convinced. From the albeit limited amount I’ve been able to learn about the process that OpenAI used to build GPT 3.0, much of the program’s functionality arose spontaneously from the sheer size of the training data. GPT may not understand language in the same way that humans do, i.e. experientially. But it’s not just slapping together words based upon patterns. Some level of understanding is there.

And that is what is so remarkable to me about this achievement. ChatGPT is not like chatbots of the past, a silly toy that mimics understanding with a series of canned responses, like a Magic 8-Ball. It truly gets what you’re saying.

If AI is already this far along, what comes next?

© 2014 Merrily Dancing Ape Site design info