Wednesday, January 18, 2017

Big tech vendors still failing a simple CX test

I put myself in the position of customer yesterday - trying to trade with some of the leading technology companies of our time.
I tried 20 in total.

Most of these companies would claim they would, at a minimum, help you understand your customer. Some would claim to be leaders in improving customer experience. Yet all but one failed a very simple test: to contact them, I had to fill in an online web form.

I hate online web forms with a vengeance.
To me they say: "You, customer, must do things my way if you want to trade with me."

  • They force my compliance with your data fields.
  • They mean I have no record of the correspondence (you do), not even which email or phone number I have given you (that's my data, that is).
  • They tell me what I should tell you.
  • They won't accept the realities of unstructured data.
  • They don't allow me to send attachments.
  • They make me act in the way you want and say only what you'll allow.
  • They cost me time in later clarifications an email and attachment could have resolved.

As I came across example after example yesterday I understood how far so many still have to go on the journey to putting the customer first (not customer-centric, but customer as partner). The door to their businesses had been designed to serve those on the inside better than those on the outside.

It put me in mind of a restaurant I once had lunch at where I saw half of the customers turn away because the door was designed to be pushed rather than pulled open. Embarrassed customers tried the door and when it didn't work the way they expected, they turned and walked. If the proprietors reversed the door they would double their trade.

Food for thought for CX and UX designers.

Wednesday, January 11, 2017

The Internet of Experience - and the role of trust

With the rise of Virtual Reality the value of experience over things increases still further.

Kevin Kelly argues we are headed for a future beyond the internet of things which becomes the internet of experiences. He explains the shift in VR as taking (for example) a gaming experience and shifting it from something we watch to something that happens to us.

Perhaps in a similar way our social platforms will shift from things we contribute to and consume content from into social environments in which experiences happen to us - from a simple conversation with a friend as if they were in the room with us, to the sharing of an immersive experience with others as we try to solve a problem, fix a date for a trip or simply entertain each other with the stories of our experiences.

Given a world of always-on tracking and recording - of behaviour and of experiences - we may be able to rely on replaying the actuality of a recent experience rather than retelling the story from our recollections, complete with an overlay of stimuli to share how we felt (who knows, it could even prompt your friends' heart rate and blood pressure to fluctuate as yours had - with suitable medical constraints).

This quickly takes us into the challenges of the Experiencing Self versus the Narrative Self discussed in my recent series of posts on the Four Dimensions of Customer Experience and illustrates once again how much we must catch up in our understanding of experience in order to improve it and select which of our 'selves' we should be designing experiences for.

Even online purchases will become an immersive experience happening to you, shaped specifically for you (probably for the decision-making Narrative Self).
That experience will be available anywhere anytime, just as e-commerce has become available everywhere through the miniaturization of computing to enable access on your mobile and tablet.

VR will follow the same route - starting out as helmets, suits and gloves in specially built rooms to deliver truly immersive experiences, the equivalent of the room-sized computers of earlier decades. In time VR could be delivered by any connection to the skin - a patch under your watch, perhaps. If we can figure out a way of fooling the body's systems of perception at brain level, who needs the bulky headgear?

Instead of granting an app access to our Facebook profile we may find ourselves being asked for access to our central nervous system. Anyone asking for that is going to have to build up one helluva legacy of trust.

Looks like the Trust-focused output of the 10 Principles of Open Business is going to be relevant for a long time to come...

Wednesday, January 04, 2017

Jobs expressing humanity are safe from AI

There is so much we still have to learn about the workings of our brains (let alone our minds) that I wonder how close we really are to creating a machine capable of learning in quite the way we do.

2017 seems very likely to be the year of AI (though more likely we will see implementations of its less 'intelligent' bedfellow, Deep Learning, in platforms of Cognitive Computing).

Robert Epstein (a senior research psychologist at the American Institute for Behavioral Research and Technology in California) reminds us that throughout history we have tried to understand how we think through the metaphors of the latest technology. The six major ones over the past 2,000 years have been: spirit, humours, automata, electricity, telecommunication and, finally, digital.

He argues this final construction, with its language of uploads and storage and information processing and retrieval, has given rise to an unreal view.

Instead, he states:

"As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types:
(1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens);
(2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars);
(3) we are punished or rewarded for behaving in certain ways.

"We become more effective in our lives if we change in ways that are consistent with these experiences – if we can now recite a poem or sing a song, if we are able to follow the instructions we are given, if we respond to the unimportant stimuli more like we do to the important stimuli, if we refrain from behaving in ways that were punished, if we behave more frequently in ways that were rewarded.

"Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been 'stored' in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions.

"When called on to perform, neither the song nor the poem is in any sense 'retrieved' from anywhere in the brain, any more than my finger movements are 'retrieved' when I tap my finger on my desk. We simply sing or recite – no retrieval necessary."
I am interested in this for two reasons:

1. Professionally. For its impact on the weighting we should give each of The 4 Dimensions of Experience I am working on for deployment in the development of improved Customer Experience.

If we can't be sure of how the brain works we certainly can't be sure of an algorithm gathering such a complete set of data about our preferences and needs that it could make better decisions for us than we could. I don't argue that a technical replication of the brain's functions is impossible but it remains improbable while we don't know what it is we are trying to replicate. We can approximate intelligence in this respect (quite literally developing proxies for it) but we can't create a copy of it functionally.

So what does this mean for the value of the Experiencing Self (the one behind the third of my four dimensions, Sensitivity)?

We can argue it remains important because our Sensitivity has been shaped by the total of our experiences (gathered by our Experiencing Self and conceivably far better stored by digital means than by patchy human ones).

That Sensitivity - whether we remember how it was derived or not - is our base setting against which our Narrative Self does its Peak-End Rule calculations when we recall an experience.

Therefore striving to improve experiences for the Experiencing Self (i.e. at each step) will still have an impact on the overall experience recalled by the Narrative Self - even if the impact may not be as great as changes made at the peak and end points of the experience.
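That weighting can be made concrete with a toy calculation - a minimal sketch using the simple average-of-peak-and-end form of the rule, with invented step scores:

```python
def recalled_score(step_scores):
    # Peak-End Rule: the remembered experience is roughly the
    # average of the peak moment and the final moment.
    return (max(step_scores) + step_scores[-1]) / 2

baseline = [4, 5, 6, 5, 7]
print(recalled_score(baseline))            # (7 + 7) / 2 = 7.0

# Improve every step a little: the peak and end rise with the rest.
every_step_better = [s + 1 for s in baseline]
print(recalled_score(every_step_better))   # (8 + 8) / 2 = 8.0

# Improve only the final step: a bigger lift for less total effort.
end_only_better = [4, 5, 6, 5, 9]
print(recalled_score(end_only_better))     # (9 + 9) / 2 = 9.0
```

Step-by-step improvements do move the recalled score, but only by moving the peak and the end - which is why targeted changes at those two points punch above their weight.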

2. Philosophically. I have argued, for example, that if an algorithm were better able to know what is best for us, perhaps we should let it vote for us. Or even govern us?

We have to consider what measures should be applied to 'best for us'. Algorithms could manage our calorie intake to match our output and only ever suggest the 'right' thing to do for your safety, longevity and even your sanity. But here I am using 'right' rather than 'best'. What the algorithm can't know - because we don't know how we do this ourselves - is how we acquire tastes and proclivities. Why some love and some hate Marmite; what we find attractive, funny, challenging, boring. An algorithm can copy the outputs, but it would struggle to innovate within the collection of concepts that makes us uniquely human.

The algorithm could learn to approximate an understanding of us (e.g. at its most basic: presented with object A, subject 1 did not purchase, therefore offer object B next time) but this is not knowing what is best for us - it's simply learning how we have behaved in the past.
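A minimal sketch of that kind of behavioural approximation might look like the following (the class, item names and purchase histories are all invented for illustration):

```python
from collections import defaultdict

class NaiveRecommender:
    """Replays regularities in past behaviour; it never models *why*
    a customer prefers anything."""

    def __init__(self):
        # co_purchases[x][y] counts how often y appeared alongside x
        self.co_purchases = defaultdict(lambda: defaultdict(int))

    def record_purchase_history(self, purchases):
        # Every pair of items bought together strengthens the link.
        for a in purchases:
            for b in purchases:
                if a != b:
                    self.co_purchases[a][b] += 1

    def suggest(self, item):
        # Offer whatever most often accompanied `item` in other
        # histories - learned behaviour, not knowledge of what is best.
        candidates = self.co_purchases[item]
        if not candidates:
            return None
        return max(candidates, key=candidates.get)

r = NaiveRecommender()
r.record_purchase_history(["object A", "object B"])
r.record_purchase_history(["object A", "object B"])
r.record_purchase_history(["object A", "object C"])
print(r.suggest("object A"))  # object B - it co-occurred most often
```

Everything here is a statistical echo of past behaviour; nothing in it represents a preference, a taste, or a reason.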

So maybe this gives us a hint about the kind of fulfilling roles which will be left for us humans when the machines are running flat-out to make all the wealth: craft and artisanal manufacture - things with limited but genuine appeal to a few (the ad hoc self-forming interest groups the web allows to form globally serve this well, too), art and literature, film and drama, sport and sculpture, fashion, architecture (the interesting bits) and, of course, the most interesting, inspired and inspiring bits of science, maths, geography, history, economics, politics and more.

Wherever the expression of what it is to be the human you are offers an advantage, that will remain safe from the algorithm - at least until we really understand how our brains work.

Friday, December 23, 2016

Artificial Intelligence could make you happier

As Artificial Intelligence improves, Bots become more effective and algorithms develop the capability to know ourselves better than we do, huge challenges for society emerge.
In recent months I have discussed several of those challenges here:
1. Can an algorithm do a better job of serving our best interests than we can?
(Could Your Next Vote Be Your Last?)
2. Which version of ourselves should hold eminence?
(The Fourth Dimension of Experience)
3. And - exploring some of the challenges discussed in the upcoming book What To Do When Machines Do Everything* - I discussed the challenges for work in The Technology Storm That Will Blow Trump's Promises Away

At the heart of all of this is how we derive meaning because it is core to why we worry about the rise of the machines.

Some see machines as the new bogeyman. They'll get so clever they decide they don't need us. I'm more optimistic than that, preferring instead to see a far future in which 'we' are as much part of the machine as the machine is part of us - an evolution which makes us digital and releases us from the constraints of the physical world. I grant - that's a long way off. But that goal demands a relationship with technology nearer equality than master and servant on either side. The bogeyman is a risk, but a manageable one.

Some see economic threat: they will take my job. It's hard to say yet how far-reaching that will be into blue- and white-collar roles, but given the markets are already primarily run by algorithms, and key decisions for financial institutions and Governments alike are already the reserve of machines, no one should feel too certain of their future.

Again, I greet this with optimism. The machines we envision - self-driving cars and trucks, self-operating manufacturing, warehousing, customer service and delivery, robot farming and mining, AI health services and so on - will generate huge cost savings, increased efficiencies and a closer match between supply and demand in real time (driving out waste).

How will you pay for it? Well, in abundance, would we actually need to pay? Money is the token the market uses to allocate resources. If the market has a more effective way to deliver that (data, and ever-improving AI decisioning built on it) we may not need the old tokens. And if we did, perhaps we'd all get a comfortable base on which we can earn additional credits by performing tasks and behaviours the algorithm chooses to reward (those being to our own benefit - as it knows what is best for us).

I know this all sounds distant and scary, but if you told early capitalists they would one day be trading in a series of ones and zeros behind which there was nothing physical to pick up and carry away - not even enough promissory notes, let alone gold - they would have been terrified, too.

Others see a threat to meaning: there is the obvious tradition of the Protestant work ethic to consider. Ask someone what they do and they will tell you their line of work. The French ask 'what do you do in life?' Yet we still answer - businessman, binman, pilot - rather than husband, father, son.
Another way to consider this - as raised by my good friend Ted Shelton - is in reference to the central statement of the American Declaration of Independence.
    "We hold these truths to be self-evident: that all men are created equal; that they are endowed by their Creator with certain unalienable rights; that among these are life, liberty, and the pursuit of happiness."
It is worth breaking that down in the context of algorithms which have the potential to know us better than we know ourselves. Who or what defines the limits of our liberty?

But perhaps more important, in the context of this discussion, is what constitutes happiness?

A meaningful life is surely a happy life. So a life filled with the right kind of work is a happy life?

But is work the necessary route to fulfillment? Some may feel service to others provides their true fulfillment. They may use their 'spare' time to do exactly that.

Others may find their fulfillment in the service of a God or religion. Others find happiness in making others happy - particularly their nearest and dearest.

So provided we retain the freedom to pursue our happiness, work may be less the critical element to our identity, our construction of self-worth, our definition of meaning, than we often believe.

And if this is true, if we can disentangle ourselves from the concept that work=meaning, then we can plan a future in which the machines do the work (by which we also mean generate the wealth) and we pursue our happiness (among that abundance).

Merry Christmas.

Disclosure: *What To Do When Machines Do Everything is written by three fellow Cognizant employees: Malcolm Frank, Ben Pring and Paul Roehrig. Everything I express here and elsewhere online is my own view and my own view only, and should not be considered representative of Cognizant's corporate voice.

Friday, December 09, 2016

Could your next vote be your last?

My recent focus on trying to understand the constituent parts of experience (particularly in relation to the experience of customers), combined with the impact of the capabilities of both Cognitive Computing and Artificial Intelligence, raises challenging questions about the primacy of the self and therefore of liberal democracy.
This starts from the premise that we don't know ourselves particularly well - and therefore we may not be best placed to know what is in our best interests.
And that's built out of the Open Business principle of Trust. Trust is built from the belief that the entity you are dealing with has your best interest at heart (this is what partnership requires, too).
So first - why don't we know ourselves particularly well, and why does that matter? Anyone who has read my articles on the third and fourth dimensions of customer experience will have had a reminder of the work, from Daniel Kahneman onwards, showing how we take short cuts all the time when making decisions. We recall experience using the Peak-End Rule: we average our peak score and our score at the end. We don't aggregate the sum of our experiences.
We have evolved to experience this way to enable us to survive in fast moving environments. It was the most effective way of dealing with the data.
Wouldn't it be better if we could take account of all our experiences when making a decision? Like whether to turn left or right at the next junction.
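As a toy illustration of the gap between the two (the moment-by-moment scores are invented; the rule is applied in its simple average-of-peak-and-end form):

```python
def peak_end_recall(moment_scores):
    # Peak-End Rule: the remembered score of an experience is roughly
    # the average of its most intense moment and its final moment.
    return (max(moment_scores) + moment_scores[-1]) / 2

def full_average(moment_scores):
    # What taking account of *all* our experiences would look like.
    return sum(moment_scores) / len(moment_scores)

# A long, mostly mediocre experience with one high point and a strong finish:
experience = [3, 3, 3, 9, 3, 3, 8]

print(peak_end_recall(experience))  # (9 + 8) / 2 = 8.5
print(full_average(experience))     # 32 / 7 ≈ 4.57
```

The same experience scores 8.5 when recalled but under 5 when every moment counts - which is exactly the kind of systematic gap an algorithm with access to the full record would not suffer from.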
Google Maps already does a better job of this. It (potentially) takes the sum of all the experiences of all the drivers on the road and plots your routes in the best interests of all. It does this very even-handedly. There's no way to upgrade so that everyone else gets sent out of your way, for example.
It makes better decisions for us than we do. In Google we trust.
Ok, so why not let Google select our partners? By storing and being able to access and analyse all of our experiences (at least those shared with Google - which are plentiful enough) Google could claim to know us better than our Narrative Self (the one that makes decisions based on recalling experience in its short-cutting Peak-End Rule way). It also has everyone else's experiences and outcomes to draw upon for its calculation.
Should you marry prospective partner A or B?
Those using dating sites are already handing over much of this cognitive spade work to algorithms. In Google we trust?
And if you want to hand the decision making to the algorithm for the selection of your life partner, why not to cast your vote?
If the algorithm knows your best interests better than you know yourself, why not let it make the right choice for you - uninfluenced by your short-cutting Narrative Self?
En masse, why bother with voting at all? Are we ready for Government by Algorithm?
Humans have been, for a long time, the best things we had available to gather and interpret data.
Control (via Trust) has tended to concentrate with those who both have access to data and can interpret it for practical benefit. Priests could interpret the word of God to give you temporal guidance. Astrologers could read the stars to tell you when best to plant your crop. As economies grew more complex, being able to read helped you make better decisions. Bureaucracies grew - measuring, recording and predicting data about fields and roads and cities and people and incomes and food production and disease and health and threats and technologies - and the instruments of Government grew around these data warehouses.
Now, to predict the complexities of the weather, the markets, the needs of the people, we turn to algorithms. They have become faster and better at interpreting more and more data than the best human agencies.
So why not be Governed by Google? By knowing us better than we know ourselves it can provide for us better than we can choose for ourselves. If only Google cars were on the roads, we would need a fraction of the cars currently produced (most are parked at any one time) and we would all get to where we wanted to go faster, with less pollution.
Give it control of our health and we would all live longer, happier lives, and our medical care could be delivered at a fraction of the current costs. Take a look at what Google DeepMind is currently working with the NHS to deliver, for one small example of the improvement the algorithm could bring.
Give it control of the economy and imagine the potential for supply to meet demand and the wastage that would cut.
This feels really uncomfortably like centralised, command and control economics to those in the liberal tradition.
And it's hard to deny that's very much what it is. But the difference is there is no politburo, no five year plan - no numbers set by politicians. This would be an economy run in the best interests of those engaged in it by a benign dictatorship of an algorithm which genuinely has your best interests at heart. The command and control is the needs and desires of the people.
When the time comes that the algorithm really could do a better job of governing us than our politicians, would you be prepared to make your next vote your last vote?

The rate of change is so rapid it's difficult for one person to keep up to speed. Let's pool our thoughts, share our reactions and, who knows, even reach some shared conclusions worth arriving at?