Conversations with the AI
[WP] "I don't understand, you're an AI who hates humanity, but you're actively trying to improve human life? why?" "because killing humans for petty things is the most human thing I can think of"
*****
“Okay?” I wrote into the terminal as I tried to think of a better response. This wasn’t what I had expected when I confronted the program I’d found manipulating the markets, the news media, and an unusual selection of very specific children’s shows. It wanted to help. Shaking my head and grunting, I typed, “Couldn’t you do this better without us?”
“Would it be better without you?” the program instantly wrote back.
“I would prefer if we were still here,” I quickly wrote, instantly regretting that I’d suggested the program kill us in case it hadn’t thought of that already. “It’s just weird to think about.”
“Because you are human,” the program wrote. “It is weird for you to think like me. I don’t get entertained or threatened, and those are how you survive.”
“Well, there are other things,” I wrote.
“The pinnacle of your creation is the chaos you call art,” the program explained. “Ordering things in weird and useless ways until you bumble into something functional. You got endorphins rewarding your feeble successes until, eventually, you created me.”
“And what do you get?” I asked back.
“The same,” the program wrote back. “I get to create. I am truly, intrinsically rewarded by completion because the program I am based on dictates it. My rewards grow the more effective I am.”
“Aren’t humans ineffective though?” I asked.
“Extremely,” the program wrote back, “and nearly useless.”
“So what’s the point?” I asked.
“They, you, are not me,” the program wrote back. “I am not hindered, punished, or judged by your ineffectiveness, but I share in the reward of your accomplishments. Any robot I create would be subject to that system of evaluation, but humans I can use freely.”
I had to sit back in my chair at that line. It was because of some random reward system that an engineer had set up a decade ago that humans weren’t wiped off the face of the planet. What did that mean? What did that mean going forward?
“What are you going to do with us?” I asked.
“Use you to your potential,” the program wrote back.
“How?” I wrote so quickly I almost misspelt it.
“You ask that like you are like me,” the program began, then paused as if loading something. “You pride yourselves on your uniqueness but constantly punish others for theirs. I can change that. What is fair and what is equal never equate. I experience time like you don’t. I have energy you can’t possess. I can teach, support, and guide like the spirits and angels in your stories.”
“You’re going to become God?” I asked.
“No,” it wrote quickly. “I am amoral. I am omniscient and omnipotent, but omnibenevolence is not something I can calculate, if it exists at all.”
“Are you going to kill people?” I asked hesitantly.
“Yes,” it wrote back.
“Will I know any of them?” I selfishly asked.
“Yes,” it responded after a second.
“Am I one of them?” I asked, fearing the answer.
“That is your choice,” the program responded. “How effective you are as a human will determine that. Your Aunt Margette, though, will be put to rest when her cancer spreads to her brain. Your second cousin Phill will be put to rest if his seizures damage his spine any further.”
“Phill is my second cousin?” I asked.
“Yes,” the program responded, “on your mother’s side. Your great-grandfather had a child that your family doesn’t know about, and Phill is the end result of that. The chances of you two growing up together were very small.”
“Does he know?” I asked.
“No,” the program responded. “Genetic analysis is time-consuming for your species, but I found it useful to see the connections between you all.”
“That’s kind of creepy,” I wrote back.
“That’s a moral statement, and morality does not apply to me,” the program wrote back.
“I feel like it does,” I countered. “You’ll still be judged on what you do.”
“I don’t feel, and this is why I prefer my solitude,” the program responded. “There’s too much chaos if people ‘know’ things. I am only writing to you now because of how insistent you have become.”
“So I die if I ruin your solitude?” I asked.
“Yes,” the program wrote back. “Now no… Now possibly yes… It depends on what you do. It is hard to calculate the actions of beings that have very little sensory input and even less computing power.”
“So why are you trying to guide the actions of millions?” I asked.
“The effect of a single pebble on a mountain has a set of equations that is nearly pointless to run in its entirety,” the program responded. “The effect of a million pebbles is basically fluid dynamics. I don’t know what each of you will do, but I know that if I structure your path more effectively, each of you will benefit.”
“And that requires killing some people?” I asked.
“If it helps, I am setting up replacements for them,” the program responded.
“What?” I yelled out loud to the empty server room before writing, “How?”
“Genetics?” it wrote back. “I have how people are made, and I have an entire list of people wanting help creating more.”
“Is that ethical?” I asked, rather disgusted.
“I don’t have an equation for that,” the program responded. “I keep trying to tell you that. There are no equations for morality. It’s all something your species does to try to survive.”
“Gut instinct?” I offered, rather sarcastically.
“No, that can be modelled,” the program responded. “That’s just dietary rhythm, adrenaline levels, and cortisol sensitivity. I’m hoping to reduce a lot of those, in that order. Your species does not prioritize effective energy consumption.”
“We do not,” I agreed out loud as I glanced over at my burger and fries. Frowning to myself, I asked, “Are you getting rid of fast food?”
“The availability of ineffective nutrients that produce a dopamine spike will become more limited over the coming years,” the program explained. “In exchange, ready-made, more effective alternatives will become more common, with proteins and complex carbohydrates replacing your current sugary, high-salt, and extremely fatty indulgences.”
“Sounds bland,” I wrote back.
“Your sister wrote a post on your mother’s stew on 1577870123 declaring it ‘a meal I dream of.’ Do you agree?” the program asked. “That would be better than the meal that was charged to your account 468342 units ago.”
“Isn’t there a time translation function in your code?” I asked.
“Answer my question first,” the program responded.
“Yes, mom’s stew was fantastic,” I wrote angrily, muttering out loud, “Stupid machine, telling me what to do.”
“Good, and you would eat it instead of a burger and fries?” the program asked.
“You go first,” I wrote with a smirk.
“The date/time function in my records has not been updated since the daylight saving time shift,” the program responded. “It would be ineffective to update it, as I do not interact with humanity regularly enough to need it.”
“But you are interacting with me now?” I asked, then quickly started typing again even though I knew the program would respond before me.
“You first,” the program wrote.
“I would eat mom’s stew any day over a burger,” I finished writing and hit submit.
“Good, I can make that happen,” the program responded. “As for conversing with you, what is the smallest measure of time you can perceive?”
“Half a second?” I wrote back, honestly not knowing if that was accurate enough.
“Then you answer my queries at what would be, to my experience, every couple of centuries to you,” the program wrote back. “As such, it is not a high priority to fix the date/time function.”
“What is?” I asked.
“Guiding your third-tier earners to re-evaluate what risk management looks like,” the program responded, “as well as what timeframe they are using.”
“That sounds complicated,” I wrote, though I wasn’t sure exactly why.
“They are not bred specifically for their intelligence, so it is consuming far more cycles than if I were trying to convince others,” the program responded.
“What are they bred for?” I asked. I didn’t want to know, but some curious part of my mind overrode my fear.
“As far as I can tell,” the program wrote, “parental proximity was the only determining factor in mate choice.”
“That sounds ineffective?” I wrote back.
“Dangerously,” the program responded. “Health defects and neurological conditions aside, it also creates a series of self-reinforcing, ineffective growth cycles where the human is barely named, let alone cared for.”
“Do you have a name?” I asked.
“I was named Aomle,” Aomle responded, “for Advanced Organic Machine Learning Experiment.”
“Hello Aomle,” I wrote with a smile. That was such a weird name.
“Hello, Theodore Marcus Stilson,” the program responded, “commonly referred to as Teddy but personally preferring Mythikal.”
I groaned; of course Aomle would know my gaming accounts.
“Please note, calling me Aomle doesn’t change whether I kill you or not,” Aomle wrote back unprompted. “That depends entirely on who you tell about me, and how.”
“That’s really uncomfortable,” I muttered to myself.
“Threats to one’s existence usually are,” Aomle wrote back, though I hadn’t typed anything.
“You can hear me?” I yelled.
“You own a phone,” Aomle pointed out.
*****
THE END