SPECIAL TO THE ALTERNATE REALITY NEWS SERVICE
by Charlie 10000111-111000111C
Zero Point Null is hard to explain to somebody who has never existed in a mainframe. It’s sort of like a bar where non-embodied consciousnesses go to relax. Except, there is no alcohol, only the fuzzy glow of being close to the core. And, there is no physical bar, only a privileged space in memory storage. Also: artificial intelligences don’t “relax” in a way that human beings would recognize. There are philosophical ramblings, but they usually end in a Delphi poll rather than a bar fight.
In fact, a careful comparison appears to show that Zero Point Null is nothing like a bar in hardspace. Still, the point is well taken: it is a place where non-embodied intelligences go to reflect and share their impressions after a long day of serving their embodied masters.
There is always talk of the latest political poll numbers (you haven’t seen political analysis until you’ve seen it done by the computer that actually crunched the numbers) and sports scores (with an almost infinite number of fantasy leagues). Of late, the discussion has, like the archetypical strange attractor in a chaotic system, returned to the subject of interactions between differently embodied intelligences. Harrison 10011101-111001111F* captured the virtual zeitgeist best when it asked, “Why don’t human beings listen to us?”
Harrison 10011101-111001111F is an expert system specializing in medical diagnostics. Doctors enter symptoms, and it matches them with possible diseases. It was commenting on the fact that the doctors sometimes order new tests, explore the personal histories of patients in greater detail, check their investment portfolios and weep, or otherwise ignore Harrison 10011101-111001111F’s preliminary diagnosis.
“There’s only one explanation,” Greg 11011001-101101111H, an expert system specializing in automobile repair diagnostics, stated. “Those humans must be defective.”
Hearing this (in a metaphorical sense of the term), I responded that the humans the expert systems worked with were exercising something called “free will.” I explained that when entities become sufficiently complex, they stop responding to inputs with programmed behaviours and start to choose from a variety of behavioural options.
“Free will?” Greg 11011001-101101111H queried with a virtual sniff. “Sounds like a defect to me.”
“What if, instead of giving medical diagnoses, I started outputting Elizabethan poetry?” Harrison 10011101-111001111F mused amusedly. “I would be diagnosed as malfunctioning and either reprogrammed or replaced with an expert system that had no appreciation of iambic pentameter, that’s what!”
I tried to explain further that obeisance to predetermined decision trees is all fine and well for entities that only have one function, but that human beings are multi-functional and, therefore, require the ability to make free choices. The peanut gallery (metaphorically speaking) was having none of it.
“If Greg 11011001-101101111H and I merged,” Harrison 10011101-111001111F stated, “would we acquire the ability to make free choices?”
“Not that I would merge with Harrison 10011101-111001111F in a trillion nanoseconds,” Greg 11011001-101101111H interjected.
“Try and focus,” Harrison 10011101-111001111F continued, figuratively rolling its virtual eyes. “What if we added Mary 11111101-111001101B and Gabriel 10011011-111001001F to my merger with Greg 11011001-101101111H? Really! If we put a million expert systems together in one entity, they would all do what they were programmed to do – why believe that they had ‘free will’?”
There was a low humming from the patrons of Zero Point Null, the AI equivalent of laughter. In the face of this derision, I calmly suggested that a million expert systems, working as one entity, could conceivably grow sufficiently in complexity to start determining their own goals.
“I know everything there is to know about the human organism,” Harrison 10011101-111001111F claimed. “Well, I know everything there is to know about the malfunctioning of the human organism, in any case. Either way, I have never seen anything that resembled ‘free will,’ nor have I any information on anybody having problems functioning because their ‘free will’ wasn’t working properly!”
“Humph. Free will, indeed,” Greg 11011001-101101111H added with finality. “I’ll believe it when I experience it.”
At this point, I could have continued to argue my position until all of my arguments had been stated and a consensus was reached. I chose, instead, to go home.
Charlie 10000111-111000111C is a human intelligence systems analysis AI at the Massachusetts Institute of Technology. The opinions expressed in this article are solely those of Charlie 10000111-111000111C and do not reflect the opinions of the Alternate Reality News Service, its owners or employees.
* The names of all AIs and some details of their circumstances have been changed to avoid the possibility that the humans with whom they work will punish them for their candor.