Does the newsworthiness of an event trump our right to privacy? And do aspects of smart cities create a potential threat to individual rights to privacy?

In a recent seminar on crisis communications, a delegate challenged the media’s use of at-scene photos and video footage taken by members of the public, on the grounds that they may compromise the privacy of the people who appear in them. There’s surely an element of justice in this challenge, but the presenter’s reply was that you are 'fair game' once you step outside your door. The irony is that the delegates at that very seminar were given the choice by the organisers not to appear in the promotional shots and footage being taken around them. So, does the newsworthiness of an event trump our right to privacy? This presenter clearly thought so, and his view probably reflects a pragmatic truth as far as media practice is concerned.

The argument often rolled out is that using at-scene 'citizen journalist' imagery is 'in the public interest'. But who (other than editors) gets to decide that, and what does the phrase mean? Does it mean that showing the images would be 'in the public’s (best) interest' (which seems rather paternalistic) or that they would be 'interesting to the public' (which is, at best, merely pandering to public curiosity)? It is also worth pointing out that these decisions about whether or not to publish are made by people working in the commercial and highly competitive business of selling news, so the potential for conflicts of interest must be acknowledged. We can’t address that issue in detail within the scope of this article, but it is useful to connect the general idea of data capture, its use and its impact on individual privacy to another area of resilience interest – smart cities.

But let’s rewind a moment. In this year’s World Economic Forum survey of global risks, the section on 'future shocks' discusses the emergence of the 'Digital Panopticon'. It says…

"Facial recognition, gait analysis, digital assistants, affective computing, microchipping, digital lip reading, fingerprint sensors – as these and other technologies proliferate, we move into a world in which everything about us is captured, stored and subjected to artificial intelligence algorithms" (Emphasis added).

That means shared and used. After pointing out that this has both beneficial and adverse potential, it suggests that…

"Geopolitically, the future may hinge on how societies with different values treat these new reservoirs of data"

Reservoirs is the right word. 'Smart' cities (or at least those that are getting smarter) are already gathering data on a huge scale about citizens’ movements, habits, preferences, behaviours and consumption patterns. It’s been called 'urban big data'. Back in 2014 a researcher pointed out that the…

"ubiquitous collection of data about city processes may produce panoptic cities, in which systems that seek to enable more effective…governance (may) also threaten to stifle rights to privacy, confidentiality and freedom of expression". (Emphasis added)

This is the quandary: who would not want smart systems that speed up travel, integrate our services, reduce congestion, cut costs, improve services and generally make cities better, more efficient and even cleaner? For one thing, smart cities are probably more resilient. In fact, smartness is a dimension of a city’s resilience – like security, safety, prosperity and communitarianism. The BSI publication Smart Cities Overview – Guide (PD 8100:2015) says that to achieve smartness a city needs to be 'instrumented', meaning the maximum use of sensors and data collection technologies. This data should be 'opened up', 'made easier to aggregate' and made easier to be 'visualized and accessed'. It does, of course, refer to the data's use for 'appropriate' purposes and, by implication, by appropriate people, but it doesn’t really develop that line of thought.

So, aspects of the smart city agenda create a potential threat to individual rights to privacy. That seems clear, and it means some sort of consensus about safeguards is needed. But the general lack of debate on the subject means we may be sleepwalking into a future that gradually but inexorably devalues privacy and makes it possible for the less well-intentioned to exploit our lack of it.

The best examination of this issue is by Liesbet van Zoonen in an article published in 2016. She found that citizens in smart cities make clear distinctions about the city’s collection and use of data. They aren’t too bothered about systems that collect and use impersonal data in order to improve services, but in her area of study (Rotterdam) she found a very high level of concern about personal data collection and the proliferation of active (smart and integrated) surveillance.

The boundary between these two 'domains' – impersonal data used to improve services, and personal data gathered by and used for surveillance – is ill-defined in practice, and the technologies of the Digital Panopticon are rapidly making the latter easier and more efficient. Van Zoonen’s distinction should give leaders in city resilience and smartness a framework for understanding public perceptions of this risk and for designing effective communication strategies to address them. So, what are the safeguards?

Are these fears justified? Well, connect them to our earlier feature on the Sendai Framework for Disaster Risk Reduction, which used various sources to suggest that the global trend appears to be in the direction of less privacy and fewer individual rights. In 2014 the UN stated that 'mass surveillance' (as opposed to 'targeted' surveillance for legitimate security and safety purposes) was a 'clear violation' of the core privacy rights that are protected by various conventions – including Article 8 of the European Convention on Human Rights.

So, at some point the smart and resilient cities agendas will need to build and maintain confidence in their ability to protect people from the consequences of this visible risk to their privacy and their private lives.