Uncertain Health in an Insecure World – 95
“We Cry… They Laugh”
“When we are born, we
cry…” From the outset, memory defines the human experience.
Westworld, HBO’s
sci-fi reprise of the 1973 Yul Brynner film of the same name and genre (above), is set
in a 19th century American wild west adult theme park populated by
incredibly lifelike but sadly expendable cyborgs. Westworld’s plot centers on memory manipulation. Anthony
Hopkins plays Dr. Robert Ford, the robot Maker. He places high-paying human
‘visitors’ in the town. The visitors are invincible and hold a license to kill
the cyborgs with impunity.
When not being hunted down by visitors, Dr. Ford’s robots are
failing due to planned software obsolescence and troubled operating system
upgrades. Ford’s lead scientist worries, “We
didn’t program any of these behaviors!” He quips that Dolores Abernathy (below), the
recurring cyborg lust interest of the vicious Man in Black (played by Ed Harris),
“has been repaired so many times that
she’s almost new.”
Even with such compellingly lifelike artificial creations, Hopkins’s Ford concedes, “This is as good as we’re gonna get. It also means that you must indulge me the occasional mistake.”
Despite ALL the popular buzz regarding the
expected impact of artificial intelligence (A.I.) and machine learning on our future
world, the actual ability of computers to understand the real world and respond
intelligently remains largely unrealized.
A.I. is a construct that bundles three powerful technologies
– the Cloud, mobility and big data – for average human consumption. Current gen
A.I. can produce a scary movie trailer and pilot a flying drone. More
importantly, A.I. can process big data at scale while continuously learning
from myriad transactional experiences. Google’s engine searches billions of web
pages to find exact information, increasingly tailored to our prior search
histories.
A.I. may help mere mortals cope with the awesome complexity
of what’s generally becoming available for human sampling.
Venture capitalist and Netscape co-founder Marc Andreessen (above) is optimistic. He points to “breakthrough after breakthrough” since the 2012 ImageNet competition, where A.I. image recognition began delivering powerful enhancements to radiology and to other image-based diagnostic technologies like Freenome’s blood biopsies for cancer detection. “Part
of the breakthrough on ImageNet was the sheer size of databases you (a small
start-up company) could train the algorithm against.”
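To make that scale point concrete, here is a minimal sketch of what “training the algorithm against” a labeled image database looks like in practice. It assumes PyTorch and torchvision (my choice of framework, not anything named here), and substitutes a tiny synthetic dataset for the millions of real images a start-up would actually need.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Synthetic stand-in for an ImageNet-scale labeled database.
transform = transforms.Compose([transforms.ToTensor()])
data = datasets.FakeData(size=256, image_size=(3, 224, 224),
                         num_classes=10, transform=transform)
loader = DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=10)   # a small convolutional network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# One pass over the data: the network adjusts itself after each labeled batch.
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

The loop itself is unremarkable; Andreessen’s point is that pointing it at millions of labeled images rather than 256 synthetic ones is what made the post-2012 breakthroughs possible.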
Perhaps A.I. will make us better at the things we humans already
do best.
Baumol’s cost disease (a.k.a. the Baumol effect) holds that as manufactured goods become cheaper, salaries rise even in sectors that have not increased labor productivity, because those sectors must keep pace with the rising salaries of high-productivity sectors (a toy calculation below makes this concrete). Whether intelligent or not, a big chunk of rising healthcare costs is attributable to humans being hired to spend time with customers (patients). Andreessen riffs on the Baumol effect: “This is where the Luddites just keep getting it wrong. I think the job of a doctor shifts and becomes higher level… as the doctor becomes augmented by smarter computers.”
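The promised toy calculation, with invented numbers chosen purely for illustration: productivity rises only on the factory floor, wages in both sectors track it, and so the labor cost of a clinic visit climbs while the cost of a widget stays flat.

# Invented numbers: manufacturing productivity doubles twice,
# while clinical throughput (patients seen per hour) stays fixed.
base_wage = 20.0                  # $/hour in a shared labor market
widgets_per_hour = [10, 20, 40]   # rising manufacturing productivity
visits_per_hour = 4               # unchanged clinician throughput

for w in widgets_per_hour:
    wage = base_wage * (w / widgets_per_hour[0])  # wages track productivity
    print(f"widget labor cost ${wage / w:.2f}, "
          f"visit labor cost ${wage / visits_per_hour:.2f}")
# Widget cost holds at $2.00 while each visit goes $5 -> $10 -> $20.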
Ginni Rometty, IBM
President and CEO (above), thinks so. She talks about the “explosion of information” confronting modern consumers. She
believes that IBM Watson’s machine
learning A.I. technology can help us make “better
decisions.” She too prefers the term “augmented
intelligence,” as a less threatening conceptual framework. Watson can
indeed search millions of published medical research papers (terabytes of data)
from highly diverse sources in seconds, generating predictive algorithms that could
augment patient care in real time. Rometty won’t go so far as to claim that Watson can
make better oncology diagnoses than doctors; she does think it will give
doctors more time with their cancer patients.
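For a feel of the mechanics (not IBM’s actual pipeline, which is proprietary), a bare-bones literature search can be sketched with TF-IDF ranking in scikit-learn; three toy abstracts stand in for the millions of papers Watson is said to index.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus; a real system would index millions of abstracts.
abstracts = [
    "EGFR-mutant lung adenocarcinoma responds to tyrosine kinase inhibitors.",
    "Statin therapy lowers LDL cholesterol and cardiovascular risk.",
    "Checkpoint inhibitors improve survival in metastatic melanoma.",
]
vectorizer = TfidfVectorizer(stop_words="english")
index = vectorizer.fit_transform(abstracts)        # build the index once

query = vectorizer.transform(["treatment options for metastatic melanoma"])
scores = cosine_similarity(query, index).ravel()   # rank papers by relevance
best = int(scores.argmax())
print(f"top match ({scores[best]:.2f}): {abstracts[best]}")

Ranking abstracts by word overlap is trivial; the hard, unproven part is turning retrieved papers into trustworthy bedside predictions.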
In 2011, IBM Watson was only 32% certain of its winning answer
in Final Jeopardy. That’s just a
little better than a guess. Unfortunately, 2016 Watson (above) remains unproven in
the real world.
Oscar Wilde called sarcasm “the lowest form of wit.” Harvard
Business School researchers think it’s “the
highest form of intelligence.” Cognitively speaking, sarcasm is one of the
most complex forms of human expression. While voice recognition and machine
translation are constantly improving (just ask Siri), the unique and highly individual traits of verbal &
written sarcasm still defy A.I.
Humans with brain conditions like autism and frontotemporal
dementia also struggle with sarcasm.
Only healthy human brains can de-convolute the juxtaposition
of a sarcastic statement’s literal and intended meaning. Selling sarcasm to
someone else involves nuanced facial expressions and/or vocal tones. That’s why
expressionless hash tags and ambiguous emoticons can make tweets misleading for
humans reading them, and for ‘bots’ scanning Twitter feeds.
In 2014, the U.S.
Secret Service issued RFP HSSS01-14-Q-0182 for analytics software with “the ability to detect sarcasm and false
positives” on social media, as a way of trolling for terrorists using “snark” to disguise their evil plots. Advanced
deep-learning machines can now detect sarcasm with 87% accuracy relative to human judges. Uncle Sam’s RFP has yet to be awarded.
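Those published detectors are deep neural networks; purely as an illustration of the task’s shape, even a bag-of-words logistic regression in scikit-learn can be trained on labeled examples, though nothing near 87% accuracy should be expected of it.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-made toy examples; real systems train on large labeled tweet corpora.
texts = ["Oh great, another Monday.",
         "I just love spending my whole afternoon on hold.",
         "The weather is lovely today.",
         "Thanks for the clear and helpful report."]
labels = [1, 1, 0, 0]        # 1 = sarcastic, 0 = literal

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Wow, what a fantastic traffic jam."]))  # toy prediction

A word-count model sees none of the tone, facial expression, or context that carries sarcasm, which is exactly the gap the deeper models are trying to close.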
The situational awareness and worldly knowledge framing
sarcasm still exceed the capabilities of A.I.
Triggered by a visitor’s dropped photograph, Dolores Abernathy’s robot father, Peter (below), experiences a memory. As Dr. Ford readies the thus-damaged Peter for decommissioning, he asks, “And what do you want to say to your Maker?”
At the moment of his machine death, instead of crying, Peter laughs and replies, “By my most mechanical and
dirty hand I shall have such revenges on you... The things I will do; what they
are yet I know not, but they will be the Terrors of the Earth. You don’t know
where you are, do you? You’re in a prison of your own sins.”
Like other intelligent humans, we in The Square remember, and we harbor doubts about artificial intelligence.
Responses to the endless enormity of worldly information
differ.
We cry… they laugh.
Exactly who among us can say they are better?