I ran one of my regular London Knowledge Cafés recently at the Rubens Hotel in London, sponsored by Joyce Harmon of Core. Conrad Taylor, one of the Café regulars, very kindly wrote up a great account of the evening.
You can find an abbreviated version of his account below; the full post is here.
A big thanks to Joyce and Conrad, and of course to the speaker, Andrew Driver.
Cloudy with a hint of fog
A personal account of a Gurteen Knowledge Café by Conrad Taylor
David Gurteen promotes the practice of ‘Knowledge Cafés', a kind of discussion workshop structured to encourage creative conversations around a topic, with the aim of bringing the knowledge of the participants to the surface and sharing ideas and insights between them. In its process, a Gurteen Knowledge Café is related to the World Café process originated by Juanita Brown and David Isaacs in 1995, but the Gurteen café meetings are run with shorter table-group sessions and smaller attendance overall. Not only does this make a Gurteen Knowledge Café easier to organise and to host, but with typically forty or so people in the room it is also possible to close out the event with a discussion in the round.
For a number of years David Gurteen has run a series of occasional Knowledge Café events in London. The principle is that an organisation hosts the meeting, providing the venue and some refreshments, and the meeting is open to all comers and free to attend. Note that the Café methodology lends itself very well to internal organisational knowledge sharing, but David's London Café series is left deliberately open and free, encouraging networking and inter-networking.
The most recent Gurteen Knowledge Café event was held on the evening of 16th April 2015 at the Rubens Hotel by Buckingham Palace, and like the previous café event was generously hosted by Core. Core is a Microsoft business partner company with special interests in secure mobile working for government and business, virtualised managed IT services and the like (see http://www.core.co.uk).
To seed the series of round-table discussions at a Gurteen Knowledge Café, the normal practice is for a presenter, who is generally from the hosting organisation, to speak quite briefly to the proposed topic, winding up with some open questions which the participants can then discuss. In this case, the meeting had been given the title ‘Cloudy with a chance of fog?' (an explanation follows shortly!). After Joyce Harman of Core had welcomed us and David Gurteen outlined the process for the Café (generally about half the people who come have not attended one of these events before), Core's senior technology strategist Andrew Driver gave the talk.
There is, of course, an advertising process which David runs through his mailing list, so that people are aware of the event and attracted to it. It makes sense then for me to directly quote the topic synopsis which had been circulated about this meeting:
If you send and receive email, share photos or documents from your computer, or do your banking or shopping online, you are using 'Cloud' computing.
Hotmail, SkyDrive (now OneDrive), iCloud and Dropbox are all examples of cloud computing which we now take for granted.
This is IT consumerisation: allowing an individual or a business to buy their IT the way they might buy any other subscription-based product.
Now we have the 'Internet of Things', the idea of everyday objects like cars and toasters being connected to everything else. What next?
What are the wider implications for the future?
As well as the many benefits of a more connected world, should we be concerned about a future shaped by Machine Learning and Artificial Intelligence?
Further, what is the gap between what we believe and reality?
Table group conversations
The next, arguably the main phase of a Gurteen Knowledge Café, is the point at which the audience stops being an audience, and the table group conversations begin. The idea is that we sit four or five to each table (the small tables at the Rubens pushed us towards three and four per table), and we share whatever ideas come to us around the topic, for ten minutes until David Gurteen blows his magic whistle.
Some of the people at each table should then move to another table, and the reconstituted groups continue for another ten minutes. Quite often this second session includes a period of people sharing what just happened conversationally at the first table group they found themselves in. Chances are that the conversational trend was different at various first-round tables, so the conversation ‘re-fractures' in new directions. After another ten minutes, David blows the whistle again and a third session is initiated.
There will always be some people who say more and who may dominate the table conversation, but having small table groups tends to mitigate that. At the same time, the groups are big enough that no individual feels forced to contribute.
I made notes and recordings at each of the table groups I sat with. However, it makes better sense to skip to the final in-the-round session, which in a sense gathered all the conversations together. It's never a complete picture, because in the larger group some feel ill at ease speaking out, and back-and-forth reactions will foreground some issues and throw dust over the traces of others. But as a lightly-managed method for knowledge sharing, it does pretty well…
In the round
Following the table groups session, we gathered our seats into a big circle and David asked us to share as we wished. This session lasted about 40 minutes.
The first person to speak said that in the conversations he had had, the issues seemed to be less technical than socio-political. For example, machine learning might make middle class and managerial professionals redundant, and this could result in serious social dislocations.
Andrew referred to a recent conversation with a man at a client organisation that was moving its email out to the Microsoft Office 365 system; the man feared he might be left with nothing to do. No, said Andrew: at present you use the systems you have to facilitate communication in the business, and surely you will continue to have the same job, just using a different technical system.
Several people chipped in with worries about what machine learning and machine ‘intelligence' might do to a tier of middle-class support jobs: among paralegals, legal researchers and journalists, for example. The top fee earners won't be threatened, but the ranks who support them might indeed be replaced by expert automated systems.
One rather scary aspect of machine intervention is represented by the research trend towards ‘autonomous killer robots': drones, missile batteries and battlefield weapons which are coming close to being granted the power to decide whether or not to kill. They may be constrained by their coding, but when there is a need to react quickly, quicker perhaps than human judgement allows, how long will this remain the case? South Korea has automated gun emplacements along its border with the North (the Samsung SGR-A1 system), currently under human control but capable of being made autonomous.
One lady mentioned that South Korea may be the only country which has actually developed an ethical framework for robotic behaviour, possibly akin to what the science fiction author Isaac Asimov put forward in ‘I, Robot' and other books. For South Korea it is significant not just because of the defence system mentioned above, but also because they hope to drive towards each Korean home having a robot by 2020.
Robert Harris, in his novel ‘The Fear Index', suggests that we may set the morals and parameters of robotic systems, but it may still be the case that a system decides its behaviours for itself. The scenario is based on automated decisions in the investment banking industry. Now, one hopes that good decisions would be coded in; but it is often the case that we have lost control of the code, and no-one knows how it is working.
As a thought experiment, someone imagined a self-driving car. A small child runs out in front of the car and the car must act. To the left is a bus stop with eight people in the queue; to the right is a precipitous cliff. Which choice should the vehicle make, and would it make that choice?
One of us raised the issue of how different generations think about privacy behaviours and privacy laws.
The conversation took a turn towards the second question Andrew had launched at us, about the gap between perception or belief on the one hand and reality on the other. Challenged to explain, Andrew expanded by saying that he was often in conversations with people whom he might have expected to have a wider vision, but was coming to appreciate that many senior and experienced people have their mindset in a kind of rut, ill-prepared for what is about to bring radical change. For himself, he thinks it behoves us to show an interest in our future.
Someone recalled the perceptual experiment that asks people to count the number of times that a basketball is passed, and hardly anyone charged with this task notices that someone in a gorilla suit walks right through the shot. It's what we might call ‘entrained thinking', the captivating power of mental models; and though mental models have their uses, so does naïvety. Assumptions undermine our ability to understand the world, especially in novel contexts and arrangements.
I asked if any of the table groups had addressed the question of ‘the Internet of Things' and someone replied that yes, on her table they thought it had the potential to create some large security risks and loopholes.
David Gurteen said that as an iPhone user, he recently became aware that when his phone is plugged in to charge in the same room, ‘Siri' (the natural-language control interface for the phone) is listening to every second of time and his every word. Siri has imperfect ears, and might hear David and his wife use a phrase in dinner conversation and interject, ‘How can I help you?' I raised the recent news stories about the Samsung voice-control TVs and the talking Barbie doll, both of which use an Internet link to a natural-language processing system ‘in the Cloud' and which therefore are also continuously listening to whichever human is in the same room (though perhaps soon they will start to listen and react to each other).
Someone remarked that there is a kind of trade-off between gaining increased machine help and losing our privacy and control over our own information. A trade-off along those lines may be perfectly acceptable, were we able to decide about it ourselves. But do we really understand the terms of the trade-off? And who is in charge of those terms? Until Edward Snowden enlightened us, how much did we understand about how those trade-offs were handing vast amounts of information about us to the security organisations?
What, for example, are we to make of the harvesting and mass pooling of our medical records and genetic data? It has some huge potential to advance medical science through Big Data analysis.
We had a bit of a debate about whether ‘radical transparency' with respect to our data is asymmetric (they want to know everything about Us but don't let us know much about Them), or whether the information flow is more symmetric than that.
In closing out, Andrew Driver suggested we check out a book by Peter Fingar called ‘Process Innovation in the Cloud', which is related to an article called ‘Everything has changed utterly'. The book, he suggested, is not that exciting, but the article is worth a look.
At this point David Gurteen thanked our hosts; he got people's unanimous agreement that it was OK to share emails amongst us, but we demurred at him sharing those with his toaster. And so we rose, and spent some more valuable informal time networking with the aid of wine and beer generously provided by our hosts.