I ran one of my regular London Knowledge Cafés recently at the Rubens Hotel in London, sponsored by Joyce Harmon of Core. Conrad Taylor, one of the Café regulars, very kindly did a great write-up of the evening. A big thanks to Joyce and Conrad, and of course to the speaker, Andrew Driver.
It gives a good overview of how the Cafe is run and the details of the conversation in this particular Cafe.
Cloudy with a hint of fog
A personal account of a Gurteen Knowledge Café hosted by Core.
David Gurteen promotes the practice of ‘Knowledge Cafés', a kind of discussion workshop which is structured to encourage creative conversations around a topic, with the aim of bringing the knowledge of the participants to the surface and sharing ideas and insights between them. In its process, a Gurteen Knowledge Café is related to the World Café process originated by Juanita Brown and David Isaacs in 1995, but the Gurteen café meetings are run with shorter table-group sessions and smaller attendance overall. Not only does this make a Gurteen Knowledge Café easier to organise and to host, but with typically forty or so people in the room it is also possible to close out the event with a discussion in the round.
For a number of years David Gurteen has run a series of occasional Knowledge Café events in London. The principle is that an organisation hosts the meeting, providing the venue and some refreshments, and the meeting is open to all comers and free to attend. Note that the Café methodology lends itself very well to internal organisational knowledge sharing, but David's London Café series is left deliberately open and free, encouraging networking and inter-networking.
The most recent Gurteen Knowledge Café event was held on the evening of 16th April 2015 at the Rubens Hotel by Buckingham Palace, and like the previous café event was generously hosted by Core. Core is a Microsoft business partner company with special interests in secure mobile working for government and business, virtualised managed IT services and the like (see http://www.core.co.uk).
To seed the series of round-table discussions at a Gurteen Knowledge Café, the normal practice is for a presenter, who is generally from the hosting organisation, to speak quite briefly to the proposed topic, winding up with some open questions which the participants can then discuss. In this case, the meeting had been given the title ‘Cloudy with a chance of fog?' (explanation follows shortly!) After Joyce Harman of Core had welcomed us and David Gurteen outlined the process for the Café (generally about half the people who come have not attended one of these events before), Core's senior technology strategist Andrew Driver gave the talk.
There is, of course, an advertising process which David runs through his mailing list, so that people are aware of the event and attracted to it. It makes sense then for me to directly quote the topic synopsis which had been circulated about this meeting:
‘If you send and receive email, share photos or documents from your computer, or do your banking or shopping online, you are using ‘Cloud' computing.
‘Hotmail, Skydrive (now OneDrive), iCloud and Dropbox are all examples of cloud computing which we now take for granted.
‘This is IT consumerisation: allowing an individual or a business to buy their IT the way they might buy any other subscription-based product.
‘Now we have the 'Internet of Things', the idea of everyday objects like cars and toasters being connected to everything else. What next?
‘What are the wider implications for the future?
‘As well as the many benefits of a more connected world, should we be concerned about a future led by terms such as Machine Learning and Artificial Intelligence?
‘Further, what is the gap between what we believe and reality?'
Defining the Cloud
Now, I had been thinking about the assertions in the above text, and doing some reading around, and it seems that the term ‘cloud computing' only really became current towards the end of the 2000s, when ‘software as a service' (SaaS), remote storage and computation-on-demand services became available over the Internet. By this definition, I considered my early use of email and file transfer (via SMTP and FTP) not to be that ‘cloudy', so I asked Andrew early in his presentation for clarification.
Andrew's usage is a very wide one; as far as he is concerned, it is a newly-minted term, but it describes arrangements that have been around for a long time. He said, ‘Cloud computing is whenever you have a collection of computers performing a function [for you], but they are not directly your responsibility.' By this token, the advent of the Internet itself was ‘cloudy' because it pooled the resources of all the participating networks (owned by companies, universities etc), and the routers forwarding data between them; every one of these items may have been owned by someone, but nobody owned The Internet per se.
(I wonder how far the envelope will stretch; and speaking of envelopes, if we remove the requirement for computers to be involved, was the Royal Mail even in Victorian times ‘cloudy' because once you dropped the envelope in the post-box, the business of who fed the horse, drove the train or tramped the pavements to get your letter to Aunt Emily was not your concern?)
Wherever we draw that line, it is clear that people and organisations increasingly use online remote services, some free of charge and some paid for by subscription, to host email accounts and web pages, back up large amounts of data and so on. Helping companies to do this big-time is one of the reasons Core is in business.
One can also, said Andrew, have a ‘personal cloud'; at home he has a several-terabyte Western Digital ‘My Cloud' drive attached to his home network (that is, Network-Attached Storage or NAS) where all his Stuff is kept. He can also access that remotely.
Turning to ‘a hint of fog', Andrew revealed Fog Computing as a concept that's been swirling about only in the last couple of years (and yes, there is a Wikipedia page about it). It refers to the sharing of computing resources between larger numbers of smaller machines and devices that are often rather more local to each other towards the ‘edges' of the Internet, in comparison to the more established model of cloud computing reliant on big central data centres. An article in IEEE Spectrum (1), for example, highlights the service provided by Symform, which federates the computing resources of its customers and uses them as a distributed storage resource with good redundancy built in, making it less likely that data will be lost in, for example, a large natural disaster. (Note that a data centre's backups are not *that* secure if the back-up disk sits in the next rack to the primary disk.)
Andrew then moved on to the idea of ‘the Internet of Things' (sometimes called the Internet of Everything). This is a vision in which things that are not traditional computing devices are hooked up to the Internet to send and receive messages and data. One example might be a local authority using Internet connections to link its borough-wide CCTV cameras to a control centre rather than having dedicated fixed cables. But there are also stranger ones: Andrew described a refrigerator on the Microsoft campus which barcode-scans the container of milk as you remove it to make your tea, and weighs it as you replace it, to compute when it is necessary to resupply the fridge with a fresh bottle. (Andrew didn't say whether it can also sniff the milk to see if it has gone off.)
We had a bit of fun wondering whether a network-attached toaster might constitute a security risk (horrible thoughts of hackers mounting a Denial of Toast attack). I think it's a bit ludicrous to speak of everything talking to everything else. But generally speaking, we can expect more and more devices (sensors, environmental monitors, GPS locators, whatever next?) to communicate with relevant end-points using the Internet.
There are going to be a lot of emergent applications in health and social care, for example, helping to guarantee safety in independent living for elderly and vulnerable people. Londoners in particular have seen many aspects of cloud-connected things improve the capital's public transport, with GPS-tagged buses, Oyster and contactless-card payments, and bus-stops which offer increasingly accurate estimates of when buses will arrive (though I confess I do have a giggle when the bus-stop ‘crashes' and displays its IP address and an error code).
Machine Learning and AI
Machine Learning and Artificial Intelligence are two more terms with contested scope and meanings, and they do not necessarily imply anything cloudy at all. However, Andrew was focusing on what they might imply in the context of a ‘cloudy world', when the software services which we access are programmed to try to learn more about us and our preferences. Facebook and Google do this to direct advertising at you in a more targeted way. I have recently passed some sort of age threshold, so that Facebook no longer offers me dates with attractive African ladies, and has started to suggest solutions to incontinence and ways of paying for my funeral.
Should we worry? Professor Stephen Hawking has recently offered the opinion that we should be careful about what might emerge from machine intelligence, for machine intelligences may not turn out to be those benign guardians, those ‘machines of loving grace', that the poet Richard Brautigan dreamed of in 1967. Might it be more like the Cyberdyne Systems ‘Skynet' computing cloud envisaged in the ‘Terminator' film franchise?
Some of us are wary of sharing too much about ourselves online lest machine intelligences get us in their sights; but as Andrew said, many of the younger people we know, the ones who seem permanently wired into social media, seem to care far less.
And so in conclusion, Andrew opened up to us the question, should we welcome the universality of connection, the wired things surrounding us, the machine intelligences keeping an eye on us? What sort of understanding do we have of where this is all going? What is the gap between what we may believe, and what is in fact true or probable?
Table group conversations
The next, arguably the main phase of a Gurteen Knowledge Café, is the point at which the audience stops being an audience, and the table group conversations begin. The idea is that we sit four or five to each table (the small tables at the Rubens pushed us towards three and four per table), and we share whatever ideas come to us around the topic, for ten minutes until David Gurteen blows his magic whistle.
Some of the people at each table should then move to another table, and the reconstituted groups continue for another ten minutes. Quite often this second session includes a period of people sharing what just happened conversationally at the first table group they found themselves in. Chances are that the conversational trend was different at various first-round tables, so the conversation ‘re-fractures' in new directions. After another ten minutes, David blows the whistle again and a third session is initiated.
There will always be some people who say more and who may dominate the table conversation, but having small table groups tends to militate against that. However, the groups are big enough not to put undue pressure on people to feel forced to contribute.
I made notes and recordings at each of the table groups I joined. However, it makes better sense to skip to the final in-the-round session, which in a sense gathered all the conversations together. It's never a complete picture, because in the larger group some feel ill at ease speaking out, and back-and-forth reactions will foreground some issues and throw dust over the traces of others. But as a lightly-managed method for knowledge sharing, it does pretty well…
In the round
Following the table groups session, we gathered our seats into a big circle and David asked us to share as we wished. This session lasted about 40 minutes.
The first person to speak said that in the conversations he had had, the issues seemed to be less technical than socio-political. For example, machine learning might make middle class and managerial professionals redundant, and this could result in serious social dislocations.
Andrew referred to a recent conversation with a contact at a client organisation which was moving its email out to the Microsoft Office 365 system; the man feared he might be left with nothing to do. No, said Andrew; at present you use the systems you have to facilitate communication in the business, and surely you will continue to have the same job, but using a different technical system.
Several people chipped in with worries about what machine learning and machine ‘intelligence' might do for a tier of middle class support jobs: amongst paralegals, legal researchers and journalists for example. The top fee earners won't be threatened, but the ranks who support them might indeed be replaced by expert automated systems.
One rather scary aspect of machine intervention is represented by the research trend towards ‘autonomous killer robots': drones, missile batteries and battlefield weapons which are coming close to being granted powers to decide whether to kill or not. They may be constrained by their coding, but when there is the need to react quickly, quicker perhaps than human judgement would allow, how long will this remain the case? South Korea has automated gun emplacements along its border with the North (the Samsung SGR-A1 system), currently under human control but capable of being made autonomous.
One lady mentioned that South Korea may be the only country which has actually developed an ethical framework for robotic behaviour, possibly akin to what the science fiction author Isaac Asimov put forward in ‘I, Robot' and other books. For South Korea it is significant not just because of the defence system mentioned above, but also because they hope to drive towards each Korean home having a robot by 2020.
Robert Harris, in his novel ‘The Fear Index', suggests that we may control the morals and parameters of robotic systems, but it may still be the case that a system decides its behaviours for itself. The scenario is based on automated decisions in the investment banking industry. Now, one hopes that good decisions would be coded in; but it is often the case that we have lost control of the code, and no-one knows how it is working.
As a thought experiment, someone imagined a self-driving car. A small child runs out in front of the car and the car must act. To the left is a bus stop with eight people in the queue; to the right is a precipitous cliff. Which choice should the vehicle make, and would it make that choice?
One of us raised the issue of how different generations think about privacy behaviours and privacy laws.
The conversation took a turn towards the second question Andrew had launched at us, about the gap between perception or belief on the one hand, and reality on the other. Challenged to explain, Andrew expanded by saying that he was often in conversations with people whom he might have expected to have a wider vision, but he was coming to appreciate that many senior and experienced people have their mindset in a kind of rut, ill-prepared for what is about to bring radical change. For himself, he thinks it behoves us to show an interest in our future.
Someone recalled the perceptual experiment that asks people to count the number of times that a basketball is passed, and hardly anyone charged with this task notices that someone in a gorilla suit walks right through the shot. It's what we might call ‘entrained thinking', the captivating power of mental models; and though mental models have their uses, so does naïveté. Assumptions undermine our ability to understand the world, especially in novel contexts and arrangements.
I asked if any of the table groups had addressed the question of ‘the Internet of Things' and someone replied that yes, on her table they thought it had the potential to create some large security risks and loopholes.
David Gurteen said that as an iPhone user, he recently became aware that when his phone is plugged in to charge in the same room, ‘Siri' (the natural-language control interface for the telephone) is listening to his every word. Siri has imperfect ears, and might hear David and his wife use a phrase in dinner conversation and interject, ‘How can I help you?' I raised the recent news stories about the Samsung voice-control TVs and the talking Barbie doll, both of which use an Internet link to a natural language processing software system ‘in the Cloud' and which therefore are also continuously listening to whichever human is in the same room (though soon, they start to listen and react to each other).
Someone remarked that there is a kind of trade-off between gaining increased machine help and losing our privacy and control over our own information. A trade-off along those lines may be perfectly acceptable, were we able to decide about it ourselves. But do we really understand what the terms of the trade-off are? And who is in charge of those terms? Until Edward Snowden enlightened us, how much did we understand about how those trade-offs were handing vast amounts of information about us to the security organisations?
What, for example, are we to make of the harvesting and mass pooling of our medical records and genetic data? It has some huge potential to advance medical science through Big Data analysis.
We had a bit of a debate about whether ‘radical transparency' with respect to our data is asymmetric (they want to know everything about Us but don't let us know much about Them), or whether the information flow is more symmetric than that.
In closing out, Andrew Driver suggested we check out a book by Peter Fingar called ‘Process Innovation in the Cloud', which is related to an article called ‘Everything has changed utterly'. The book, he suggested, is not that exciting, but the article is worth a look.
At this point David Gurteen thanked our hosts; he got people's unanimous agreement that it was OK to share emails amongst us, but we demurred at him sharing those with his toaster. And so we rose, and spent some more valuable informal time networking with the aid of wine and beer generously provided by our hosts.
Endnote: Privacy, protection and control
To the above I will add that at my first table group there was a strong focus on issues of privacy and confidentiality in email communications and in personal data in the cloud. For example, medical records are supposed to be kept securely, and this raises worries when suggestions are made that these could be kept ‘in the cloud'. Indeed, the most popular GP records system in the UK, EMIS, is moving to a cloud-based model for data storage, and this does provide substantial protection against data loss (for example in the case of a fire at the surgery). But just exactly where is the data being stored, and who can take a look at it?
Many cloud storage providers use servers based in the USA. When George W Bush signed the USA Patriot Act into force in 2001, its Title II in particular gave unprecedented powers to the US government agencies to snoop on the communications and data of individuals and organisations. This has caused concern amongst organisations in the European Union, which through Directive 95/46/EC has fairly stiff provisions in favour of protecting personal data. Companies operating in the European Union are not allowed to send personal data to countries outside the European Economic Area unless there is a guarantee that it will receive adequate levels of protection.
There is an agreement called ‘US-EU Safe Harbor' which was negotiated between the US Department of Commerce and the EU; this is supposed to provide a fast-track way to assure European customers that American cloud service providers will comply with Directive 95/46/EC, but it has been subject to at least two critical and sceptical reviews as regards compliance and enforcement. It seems still very important to know where your data is, even if ‘cloud theory' says it isn't!
A further point that came up in one of my table groups was how companies use our information, and whether we mind about that. As has already been remarked, young people seem less concerned about privacy than older people, but perhaps that is a bit of a caricature, and it is more significant to know what value our information has to them, and what we get in return. Maybe we don't mind if, by allowing a supermarket to associate our identity with our purchasing habits when we swipe a club card, we get access to special offers. But what about the recent sale of hospital data to commercial entities? Even if it is anonymised, said one person, we are ultimately the providers of that data, so should we not get some remuneration or benefit for allowing our data to join the pool?
(1) IEEE article about Fog Computing: http://spectrum.ieee.org/tech-talk/computing/networks/what-comes-after-the-cloud-how-about-the-fog