I recently had the opportunity to present a high-level overview of Artificial Intelligence (AI) to a large ecclesiastical and charitable organization as part of a day-long program on digital transformation. During a break, one of the audience members approached me and asked:

“How do you efficiently and effectively deploy AI [directed to your customers], but not be creepy?”

I paused for a second. I understood the nature of the question, but I wanted to understand it better as it applied to their mission. The conversation that followed was interesting in the context of their operations. It also illuminated the negative and unintended psychological, and possibly moral, implications that a highly effective and efficient implementation of AI or analytics can have. Most importantly, it illustrated the continued need for deeper human understanding and oversight of how we apply AI and analytics to our customers and communities.

Most of us have heard the legendary example of analytics for targeted marketing that worked too well. The story goes that a large retailer launched a targeted marketing campaign for baby registries based on customer analytics and basket analysis. Somewhere in the Midwest, an irate father came into a store demanding to see a manager. He heatedly asked the store manager why the store was sending baby registry advertisements to his daughter, who was a minor. The manager apologized profusely and the irate father went home. A week or so later, as the story goes, the father came back, mildly asking to see the same manager. He had returned to apologize; unbeknownst to him, his daughter was in fact pregnant. In analyzing its customer base, the retailer had apparently discovered that a sudden shift in purchasing habits toward unscented products and away from scented ones, coupled with other factors, was very highly correlated with pregnancy. Speaking as a guy who loves data, analytics, and the joy of discovery, I can picture myself as the data scientist in marketing, wrapped in the warm embrace and elation of discovery, only to be called into a meeting. Suddenly there is the inescapable realization: “Oh shit, maybe we should have put in a control variable for age, from 18 to somewhere south of the average age of menopause?” I spent three years of my life in undergrad and graduate school massaging data on the international arms trade and never found anything as highly correlated, or remotely as interesting, as being able to predict pregnancy from seemingly unrelated buying habits. I’m certainly not going to judge.
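That age guardrail could have been a one-line check. As a minimal sketch, assuming a hypothetical campaign pipeline (the function name, age band, and score threshold are all invented for illustration, not the retailer's actual system):

```python
# Hypothetical guardrail: before a sensitive campaign (e.g. a baby
# registry) goes out, suppress customers whose known age falls outside
# a plausible band -- regardless of how confident the model is.

def eligible_for_campaign(age, score, min_age=18, max_age=45, threshold=0.8):
    """True only if the model score clears the threshold AND the
    customer's age is inside the allowed band."""
    if age is None:  # unknown age: err on the side of caution
        return False
    return min_age <= age <= max_age and score >= threshold

customers = [
    {"id": 1, "age": 16, "score": 0.95},   # high score, but a minor
    {"id": 2, "age": 29, "score": 0.91},
    {"id": 3, "age": 34, "score": 0.40},   # inside the band, low score
]
targets = [c["id"] for c in customers
           if eligible_for_campaign(c["age"], c["score"])]
print(targets)  # only customer 2 passes both checks
```

The point is that the ethical fix here is trivial technically; the hard part is someone thinking to ask for it before the mailing goes out.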

The organization I was addressing offers, as part of its outreach program, the ability for seeking individuals to contact it via online chat. If the person is interested enough, the chat session can be transferred to a voice call with the same volunteer who was answering questions via chat. Separately, they also have community outreach programs. On one occasion a seeker had contacted the organization via chat, and after some time the conversation progressed to a person-to-person call. At about the same moment the caller was asked if they would like to meet with someone in person, a missionary knocked on the caller’s door. Now, a person such as myself might consider this a moment of divine providence, but at a minimum it can be laughed about as a funny coincidence. The question posed to me was a very practical and pragmatic one, based on an understanding of human nature and the target audience. Missionaries and volunteers carry tablets or smartphones. With a little automation and analytics, it would be very easy to replicate a similar workflow, dispatching the nearest missionary or volunteer to respond in the moment of curiosity or need. The understandable concern is this: once you move from the realm of divine providence or coincidence to the programmatic, and it is known that this level of coordination or capability exists, what is the net psychological effect on the individual considering first contact? Will they reconsider reaching out in a moment of need or interest because, rightly or wrongly, there is already a perception of increased commitment or reduced anonymity from the outset? I imagine this would be a risk for any engagement model, especially one targeted at outreach or intervention. It’s fundamental human nature.
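To be concrete about how little automation this would take: the dispatch workflow described above is essentially a nearest-available lookup over volunteer locations. A minimal sketch, with an assumed haversine distance and entirely invented data (nothing here reflects the organization's actual system):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_volunteer(seeker, volunteers):
    """Return the closest available volunteer, or None if nobody is free."""
    available = [v for v in volunteers if v["available"]]
    if not available:
        return None
    return min(available,
               key=lambda v: haversine_km(seeker["lat"], seeker["lon"],
                                          v["lat"], v["lon"]))

volunteers = [
    {"name": "A", "lat": 40.75, "lon": -111.89, "available": True},
    {"name": "B", "lat": 40.60, "lon": -111.80, "available": False},
    {"name": "C", "lat": 40.70, "lon": -111.90, "available": True},
]
seeker = {"lat": 40.71, "lon": -111.90}
print(nearest_volunteer(seeker, volunteers)["name"])
```

The technical barrier is that low; the real question, as above, is whether you should.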

In a recent AI predictions for 2019 webcast by Splunk, it was suggested that AI would not necessarily replace human work, but that our work would necessarily come to include oversight of AI. In the MIT Technology Review article “What will it take to build a virtuous AI?”, Pedro Domingos of the University of Washington, author of the book The Master Algorithm, is quoted as saying, “I don’t think it’s that hard to encode ethical considerations into machine learning algorithms as part of the objective functions that they optimize.” From a purely technical perspective, I have no argument with that point. My question is: whose ethics, though? Last year I found myself reading the book The Looming Tower as I flew back and forth to several customers in the financial district. One of the many things I took away from that book was the unfortunate fact that morals and ethics, even when spelled out in black and white and sourced from holy writ, can be sadly interpreted and made highly malleable and self-justifying. Therefore the big question is: can we as human beings formalize our ethical beliefs in a halfway coherent or complete way? Call me a cynic, but I kind of doubt it. The MIT Review suggests one option is to have AI learn ethics from our behaviors. Really? That doesn’t sound very promising to me either. I know the type of man I want to be or should be, the decisions I should make, and then there is what I actually do. Like most individuals, for good or bad, our actions and our ideals don’t measure up to one another. Oversight of AI is highly necessary, but how we do it and what ethics we apply are going to be very individualized, highly situational, and open to interpretation. Probably more so than we would care to admit.
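Domingos’s point about objective functions can be sketched numerically. In this toy example, an “ethical consideration” is folded into the objective as a penalty term; the candidate policies, harm scores, and penalty weight are all invented for illustration. Note that the open question in the text, whose ethics, is exactly the choice of the weight `lam` below:

```python
# Toy objective: business utility minus a weighted penalty for
# ethical "harm". Everything here is illustrative, not a real model.

def objective(utility, harm, lam):
    """Score a policy: raw utility discounted by a harm penalty."""
    return utility - lam * harm

policies = {
    "aggressive": {"utility": 10.0, "harm": 6.0},
    "moderate":   {"utility": 7.0,  "harm": 2.0},
    "cautious":   {"utility": 4.0,  "harm": 0.5},
}

# The "optimal" policy flips entirely depending on how heavily
# someone chose to weight harm -- the math is easy, the weight is not.
for lam in (0.0, 1.0, 3.0):
    best = max(policies,
               key=lambda p: objective(policies[p]["utility"],
                                       policies[p]["harm"], lam))
    print(lam, best)
```

Encoding the penalty really is the easy part, as Domingos says; picking `lam`, and deciding whose notion of harm it encodes, is where the individualized, situational judgment comes in.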