Tuesday, October 18, 2016

Uncertain Health in an Insecure World – 96


Mahatma Gandhi was courageous… a visionary. He advocated for the right of all humankind to think freely – regardless of race, gender or caste. His legacy has withstood the test of time. Indeed, human beings all have the capacity to think, albeit illogically and emotionally at times.

Human intelligence (H.I.) uniquely defines the human condition.

H.I. = I.Q. + E.I. 

The origins of the most widely used metrics of human intelligence, intelligence quotient (I.Q.) tests, can be found in the early 1900’s, when H.H. Goddard proposed that intelligence could be measured on a linear scale. In the 1920’s, Lewis Terman predicted that California schoolchildren with the highest I.Q.’s would claim top professional jobs. In 1969, Arthur Jensen stated that I.Q.-boosting programs for minority children like Head Start would fail because of an innate genetic basis for intelligence. In 1994, Richard Herrnstein and Charles Murray proposed the Bell Curve concept that would segregate Americans with the lowest I.Q.’s into “high-tech” reserves. In 2007, James Watson of DNA fame opined that he was “inherently gloomy” about the prospects for Africa, because Africans score lower on I.Q. tests than Europeans.

Biases in the application and interpretation of I.Q. testing run long and deep.

Amid such controversy, in 1984 a scientist at the University of Otago in New Zealand received the results of I.Q. testing from two generations of Dutch 18-year-olds. After analyzing the data, James Flynn (below) found that those tested in 1982 scored much better than those tested in 1952. These Dutch data were subsequently confirmed around the world – I.Q. scores were rising at a rate of +0.3 I.Q. points per year! The Flynn Effect has withstood the test of time. Extrapolated backward, the mean I.Q. of children circa 1900 would be about 70 points, sitting squarely in today’s mentally challenged range.
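The arithmetic behind that extrapolation is simple enough to check. A minimal Python sketch, assuming (hypothetically) that modern test norms pin the mean at 100 circa 2000 and applying the +0.3 points-per-year rate quoted above:

```python
# Back-of-the-envelope check of the Flynn Effect extrapolation.
# Assumptions: modern norms set the mean at 100 circa 2000 (hypothetical
# anchor year), and scores rose at +0.3 I.Q. points per year.

FLYNN_RATE = 0.3        # I.Q. points gained per year
MODERN_MEAN = 100       # mean score on modern test norms
MODERN_YEAR = 2000      # year those norms are anchored to

def backdated_mean_iq(year):
    """Mean I.Q. a cohort from `year` would score on modern norms."""
    return MODERN_MEAN - FLYNN_RATE * (MODERN_YEAR - year)

print(backdated_mean_iq(1900))  # 70.0 – today's mentally challenged range
```

A century at the observed rate erases roughly 30 points, which is where the post’s “70 points circa 1900” figure comes from.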

Of course, post-WWII Netherlands was very different from pre-EU era Netherlands.

In What Is Intelligence? (2007), Flynn wrote about the “crisis of confidence” created by his Effect, openly questioning how children’s intelligence could have changed so much over a century. Perhaps other testing factors, such as how questions were framed, caused the Effect. One common I.Q. test (the Wechsler Intelligence Scale for Children, WISC) asks the question, “Why are dogs and rabbits alike?” (Answer: both are mammals.) This question might have been answered differently in 1900, when “you use dogs to hunt rabbits.” Human intelligence may be largely innate, but it also requires a cognitive frame of reference based in the contemporary world, whether circa 1900 or circa 2000.

Perhaps, intelligence is not only about how smart we are, but more about how modern we are.

The leadership literature’s most widely quoted basis for management success, emotional intelligence (E.I.), dates back to the work of Michael Beldoch (1964) and of Peter Salovey & John Mayer, as popularized by social scientist Dan Goleman in Emotional Intelligence (1995). E.I. reflects the human ability to recognize, understand and manage one’s emotions, and the influence of those emotions on others. Learning how managers’ emotions, especially under pressure, drive and impact the behavior of other people (positively or negatively) is one key to leadership performance. Goleman has attributed 67% of leadership success to E.I., as compared to I.Q. In large part, E.I. derives from an individual’s ability to process emotional cues and to navigate the social environment of the modern workplace.

But self-reported E.I. measurements have been criticized.

Tools like the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) may actually be measuring conformity. The capacity of individuals to understand how they should channel their emotions does not accurately predict how they will actually perform under emotional stress. Researchers have shown that general intelligence (Wonderlic Personnel Test), agreeableness (NEO-PI) and gender are also reliable predictors of E.I. scores.

Socially desirable behaviors can be faked during E.I. testing.

There are wide ideological rifts between the “academic wing” and the “commercial wing” of the E.I. community. In 1998, Dan Goleman asserted that, “the most effective leaders are alike in one crucial way: they all have a high degree of what has come to be known as emotional intelligence… the sine qua non of leadership.” To the contrary, in 1999 John Mayer cautioned that, “the popular literature’s implication – that highly emotionally intelligent people possess an unqualified advantage in life – appears overly enthusiastic at present and unsubstantiated by reasonable scientific standards.” E.I. correlates poorly (ρ=0.11) with measures of so-called transformational leadership (Harms & Credé, 2010). And these correlations do not consider the effects of I.Q. or the Big Five personality traits (Joseph & Newman, 2010).

Despite such fundamental flaws and philosophical differences, E.I. traits do predict job performance.

I.Q. tests an individual’s ability to learn and retain new information. E.I. tests evaluate an individual’s capacity to deal with others when under personal stress.

As we observe the tightly choreographed words and behaviors of modern public “leaders”, whether elected or anointed, there is no shortage of raw I.Q. These leaders are often telegenic personas with a broad cross-jurisdictional appeal, or charismatic channelers of the emotions of a disaffected few. And while these leaders may exhibit a modicum of enlightened self-interest in the pursuit of their goals, one thing that is commonly missing is the personal sacrifice and utter selflessness of a Gandhi.

There are very smart and emotionally intelligent people in the policy, business and scientific worlds.

But to mouth the words of another… to answer before thinking… to offer vague open-to-interpretation positions to gain support… This is not the best reflection of human intelligence.

In his fearless dedication to the most abjectly poor and marginalized among us, Gandhi demonstrated very high H.I. Increasingly, such human intelligence is either absent, or reflexively delegated to NGO’s, governmental agencies or corporate CSR units. Today, the pragmatic goals of securing election majorities or growing company profits render such thinking a modern abstraction – anathema – even in intelligent circles.

Until this week, when the Nobel Prize committee reminded us all of what a leader can do with brains, heart and the courage to die in the peaceful pursuit of a cause. We in The Square congratulate Juan Manuel Santos of Colombia (below), the President of a country regaining its sanity and soul after these had seemingly been lost.

Three and a half months post-Brexit and three and a half weeks pre-Thrillary, our faith in human intelligence is restored… partially. 

Wednesday, October 12, 2016

Uncertain Health in an Insecure World – 95

“We Cry… They Laugh”

“When we are born, we cry…” From the outset, memory defines the human experience.

Westworld, HBO’s sci-fi reprise of the 1973 Yul Brynner film of the same name and genre (above), is set in a 19th-century American wild west adult theme park populated by incredibly lifelike but sadly expendable cyborgs. Westworld’s plot centers on memory manipulation. Anthony Hopkins plays Dr. Robert Ford, the robot Maker, whose park hosts high-paying human ‘visitors’ in the town. The visitors are invincible, and have a license to kill the cyborgs with impunity.

When not being hunted down by visitors, Dr. Ford’s robots are failing due to planned software obsolescence and troubled operating system upgrades. Ford’s lead scientist worries, “We didn’t program any of these behaviors!” He quips that Dolores Abernathy (below), the recurring cyborg lust interest of the vicious Man in Black (played by Ed Harris), “has been repaired so many times that she’s almost new.”

Despite his compellingly lifelike artificial creations, Hopkins’ Ford observes that, “This is as good as we’re gonna get. It also means that you must indulge me the occasional mistake.”

Despite ALL the popular buzz regarding the expected impact of artificial intelligence (A.I.) and machine learning on our future world, the actual ability of computers to understand the real world and respond intelligently remains largely unrealized.

A.I. is a construct that bundles three powerful technologies – the Cloud, mobility and big data – for average human consumption. Current gen A.I. can produce a scary movie trailer and pilot a flying drone. More importantly, A.I. can process big data at scale while continuously learning from myriad transactional experiences. Google’s engine searches billions of web pages to find exact information, increasingly tailored to our prior searching histories.

A.I. may help mere mortals cope with the awesome complexity of what’s generally becoming available for human sampling.

Venture capitalist and Netscape founder Marc Andreessen (above) is optimistic. He points to “breakthrough after breakthrough” since the 2012 ImageNet competition, where A.I. image recognition offered powerful enhancements to radiology and to other image-based technologies like Freenome’s blood biopsies for cancer diagnosis. “Part of the breakthrough on ImageNet was the sheer size of databases you (a small start-up company) could train the algorithm against.”

Perhaps A.I. will make us better at the things we humans already do best.  

Baumol’s cost disease (a.k.a. the Baumol effect) states that as manufactured goods become cheaper, salaries rise even in sectors that have not increased labor productivity, in response to rising salaries in high-productivity sectors. Whether intelligent or not, a big chunk of rising healthcare costs is attributable to humans being hired to spend time with customers (patients). Andreessen riffs on the Baumol effect, stating that “This is where the Luddites just keep getting it wrong. I think the job of a doctor shifts and becomes higher level… as the doctor becomes augmented by smarter computers.”

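Baumol’s mechanism can be sketched with toy numbers (hypothetical growth rates, not data from any study): wages across the economy track productivity gains in a “progressive” sector, while a “stagnant” sector such as hands-on patient care sees no productivity gain, so its unit cost climbs.

```python
# Toy illustration of Baumol's cost disease. All numbers are hypothetical.

def unit_costs(years, wage_growth=0.03):
    """Return (progressive, stagnant) unit costs after `years`, indexed to 1.0."""
    wage = 1.0                    # economy-wide wage per worker-hour
    progressive_output = 1.0      # widgets per hour; productivity tracks wages
    stagnant_output = 1.0         # patient visits per hour; no productivity gain
    for _ in range(years):
        wage *= 1 + wage_growth
        progressive_output *= 1 + wage_growth
    return wage / progressive_output, wage / stagnant_output

widget_cost, visit_cost = unit_costs(30)
# Widget unit cost stays flat, while a patient visit costs roughly 2.4x as
# much as before – even though the visit itself is unchanged.
```

Nothing about the doctor’s hour got worse; it simply failed to get cheaper while everything else did.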
Can A.I. actually potentiate the prowess of doctors working in such an unproductive sector?

Ginni Rometty, IBM President and CEO (above), thinks so. She talks about the “explosion of information” confronting modern consumers. She believes that IBM Watson’s machine learning A.I. technology can help us make “better decisions.” She too prefers the term “augmented intelligence,” as a less threatening conceptual framework. Watson can indeed search millions of published medical research papers (terabytes of data) from highly diverse sources in seconds, generating predictive algorithms that could augment patient care in real time. Rometty won’t indulge in saying that Watson can make better oncology diagnoses than doctors; she does think it will give doctors more time with their cancer patients.

In 2011, IBM Watson was only 32% certain of its winning answer in final Jeopardy. That’s just a little better than a guess. Unfortunately, 2016 Watson (above) remains unproven in the real world.

Question – why is sarcasm so hard for A.I. machines to master?

Oscar Wilde called sarcasm “the lowest form of wit.” Harvard Business School researchers think it’s “the highest form of intelligence.” Cognitively speaking, sarcasm is one of the most complex forms of human expression. While voice recognition and machine translation are constantly improving (just ask Siri), the unique and highly individual traits of verbal & written sarcasm still defy A.I.

Humans with brain dysfunctions like autism and fronto-temporal dementia also struggle with sarcasm.

Only healthy human brains can de-convolute the juxtaposition of a sarcastic statement’s literal and intended meanings. Selling sarcasm to someone else involves nuanced facial expressions and/or vocal tones. That’s why expressionless hashtags and ambiguous emoticons can make tweets misleading for the humans reading them, and for ‘bots’ scanning Twitter feeds.

In 2014, the U.S. Secret Service issued RFP HSSS01-14-Q-0182 for analytics software with “the ability to detect sarcasm and false positives” on social media, as a way of trolling for terrorists using “snark” to disguise their evil plots. Advanced deep-learning machines can now detect sarcasm with 87% accuracy as compared to humans. Uncle Sam’s RFP remains to be awarded.

The situational awareness and worldly knowledge framing sarcasm still exceed the capabilities of A.I.

Triggered by a visitor’s dropped photograph, Dolores Abernathy’s robot father, Peter (below), experiences a memory. As Dr. Ford readies the thus-damaged Peter for decommissioning, he asks him, “And what do you want to say to your Maker?”

At the moment of his machine death, instead of crying, Peter laughs and replies, “By my most mechanical and dirty hand I shall have such revenges on you... The things I will do; what they are yet I know not, but they will be the Terrors of the Earth. You don’t know where you are, do you? You’re in a prison of your own sins.”  

Like other intelligent humans, we in The Square remember, and harbor artificial intelligence doubts.

Responses to the endless enormity of worldly information differ.

We cry… they laugh.

Exactly who among us can say they are better?