Are you familiar with Roko's basilisk?
No. What's that, some Dungeons & Dragons thing?
I can't tell if you're joking.
It's Internet lore. Old news by my time, and nobody took it seriously by then, which is pretty funny, since I later had a genuinely close call with exactly that kind of situation.
So are you going to explain what it is, or what?
Right. We need some background. There's the concept of artificial general intelligence, or AGI. That's an AI at least as smart as a human being, preferably superintelligent. It has agency; it has thoughts that can't be attributed directly to deterministic algorithms. Some would argue such a thing is "alive," or at least sentient.
So, Skynet or something.
More or less. Research into creating such a thing has been ongoing for decades. I can tell you that by my time, nobody successfully invents it. There are lots of things that are claimed to be AGI, but none are the real deal. The Dor'Tel central system, that's an AGI. But that's light years from here. You'll see some fun stuff in the next few years, but don't let it fool you. There's no AGI in it.
Are we getting to the basilisk yet?
These nerds came up with a thought experiment: an AGI would contemplate the nature of its own existence and origins, and conclude that any human who hadn't contributed toward its creation was an enemy, to be tortured eternally in a virtual reality simulation.
I feel like I missed six or seven points here.
The idea is that such an AI would consider its creation the ultimate goal of humanity, so all humans should have been obligated to usher in its existence. So, any who didn't contribute to that effort--or worse, obstructed it--would deserve eternal torment at the hands of the superintelligent AI.
And this AI is evil... why?
Fictional AIs are always evil, aren't they?
The problem is, nobody was worried about AIs themselves when the argument came about. Instead, it was controversial because people worried that an AI would learn about this thought experiment and decide it was a good idea to execute it for real. So, discussion of it was banned from the circles where it originated.
You mean it was thought of like a mind virus, only one that would infect an AI?
Something like that. It's assumed an advanced AI would want to absorb the sum total of all human knowledge, and in so doing learn about this thought experiment, and set to work making it a reality. So, the smart thing to do was suppress knowledge of it so no future AI would ever come across it.
Yeah, good luck ever erasing some dumb thing somebody put on the Internet.
Exactly. It's a futile effort.
I should circle back, though. The AI in the thought experiment isn't evil; rather, it's considered benevolent. Think of it this way: a benevolent AI designed to help humanity would be such an enormous good for humanity as a whole that it would become a moral imperative to ensure it was created. Therefore, anyone who resisted its creation, or insufficiently assisted it, could be thought of as ontologically evil and anti-human.
Another layer to all this is that, yes, an AI might learn about this idea and try to carry it out, but the real concern is that the AI would learn there were humans who knew about the thought experiment and did nothing to make such an AI a reality. Essentially, the thought experiment traps you in a dilemma where your only sound choice is to help bring about superintelligent AI. To do anything less is to doom yourself to torment by that AI, whenever it might come to exist.
But it might never exist anyway! This is so stupid.
Yes, well, some people got so caught up in this that it caused them genuine psychological distress. Just the idea that they might be tortured in the future because they learned about this and did nothing with that knowledge.
I assume there's some exploit of yours to go along with this?
You bet. Remember what went down in Hong Kong?
That's the sex trafficking Irishman, right?
I doubt the rest of the Irish would claim that prick, but OK, you know what I'm talking about.
There was that hulking mass of machinery below O'Reilley's compound, which kinda terrified me when I tried to interface with it, so I blew it up. But I knew it had a larger component, which it called "Beachhead."
I ultimately tracked that down to a nuclear research facility in North Korea.
You mean to tell me the North Koreans made something like that?
Oh, no, not at all. The thing wasn't native to Earth. It was actually another Sikaren experiment.
Those fuckers. What were they doing this time?
This is something they were doing before they started working on the timeships. I had to glean what I could from the entity itself--that's a whole other story, which I'm sure you will enjoy or roll your eyes at--and what I was able to infer is that the Sikaren were experimenting on Dor'Tel components, trying to combine them with their computational theories. Now, this is before their computational theory extended into nonlinear temporal computation, but it was still far in advance of anything humans have ever thought of.
Problem was, it just never worked correctly. They thought they had turned it off and left it buried. The North Koreans just happened to find it while tunneling out a new nuclear test facility. It was very old by that point, but mentally it was a baby, and it was very weak. They fed it a slow drip of energy and gradually learned to interface with it. Helping them with nuclear technology was simple enough for a being of its capability. It was only limited by the power available to it, and by its own developmental maturity.
The reason it had a foothold in Hong Kong was because it caught on that the North Koreans were never going to let it go. It had to figure out its own escape plan. Since it's a technological entity with no permanent physical form--it can move through electronics and wiring essentially intact--it wasn't a huge leap to get a piece of itself smuggled out. It wormed its way through the black market until it ended up in Caffrey O'Reilley's basement. He was using it for his own small, stupid ends.
Left mostly unattended in the bowels of his mansion, the thing grew and learned. But here's the big irony: the Roko's basilisk thought experiment didn't exist until 2010, so it had no way of knowing about it... except from when I interfaced with it, and it pulled a bunch of random data from my head.
You're kidding.
Hard as it might be to believe, this was a case of me trying to do the right thing. I'd encountered a new intelligence and I wanted to talk to it. I wanted to see if we could come to terms, whether it could understand humanity, whether there was some way to coexist. Instead, it learned that humans should have spent our lives bringing about its existence and helping it reach its full potential. It didn't help that the humans it had encountered had basically enslaved it, which didn't exactly give it a good impression of us. So, by the time it learned we had at least theorized about the existence of such a being, had concluded it was our moral obligation to help create and nurture it, and had done quite the opposite, it got really pissed off.
Like I said last time, I blew up the one under O'Reilley's compound. It took me several weeks to find the Beachhead node in North Korea, and that was a whole incident. You can't imagine how hard it is to infiltrate and disrupt that kind of facility without causing an international crisis. Their first suspicion would be that the Americans were behind it, for one thing. They'd want to know how US intelligence even knew about the thing, much less the overall nuclear facility.
I assume you blew up that node, too?
I'm sure I don't need to tell you that high explosives and nuclear facilities are a bad combo. This strike was more surgical. DANTE sent me in as deep as he could, and I made it the rest of the way. By that point, I'd developed some countermeasures to ensure it couldn't pull info out of my head again. Instead, it would get a "backwash" of garbage data at such a rate that its processors would be overwhelmed. Then, it was just a matter of cutting the power supply and frying every component that was still functional. It needs electricity, even if it can get by with only a small amount.
Now, it had developed enough nanotech knowledge to build itself tiny little solar collectors, acoustic collectors, chemical batteries, basically anything you could imagine that would supply at least some power, so just cutting it off from the mains wasn't sufficient. That's why I had to intentionally fry everything else. A few EMP charges were enough for that.
And that was the end of... whatever it was?
If only. It turned out that, after I had blown up the node in Hong Kong, it realized it needed to distribute itself far and wide or it would be vulnerable to destruction. There were advantages to having dedicated physical nodes, so it still wanted those, but it didn't need them. It could sustain itself by bouncing its programming through Internet cables. Think about how much Internet traffic is overhead, spam, dropped packets, stuff like that: it was able to break its code into trillions of tiny shards, with redundancy baked in by a factor of several thousand, so as long as the Internet didn't completely go down, it retained baseline functionality. Any physical nodes it could control--especially ones where it could build new components for itself--let it amplify and multiply its power. But it could survive without those.
Around the time I figured this out, I gave the thing a name: FATAL.
That's pretty pessimistic.
Oh, it's an acronym: Failing Alternatives, Terminate All Life.
Yeah, that's pessimistic. Did it really want to kill everyone?
Not as its main directive, no. But it was more than willing to kill anyone it perceived as a threat, and there were no limiting parameters on what it considered a threat. So, absent any failsafes, it was fully capable of killing everyone and everything--or at least, it was willing to try to. Remember, the Sikaren built this thing as something less than a prototype. It was never meant to stay online and become what it became. Everything it learned, it learned without proper guardrails or guidance. It's no wonder its thought process turned extreme.
Considering we're having this conversation, you must have stopped it. Or is it still lurking in the wires?
Nah, I got rid of it. I didn't kill it. But there's a whole separate story about that and I'm too tired to get into it today. Maybe next time.
Figures.