WhatsApp, GPS and the art of urban navigation

Blog December 30, 2013 9:00 am

WhatsApp!! [Image taken from this page]

Jared McCormick from Harvard Anthropology has an interesting piece on the uses that WhatsApp, a messaging application for smart-phones (which I know some of us at HASTS use!), is put to in Lebanon.  The whole thing is worth a read, but what really got my interest was this paragraph (my emphasis):

"Share location."  [Photo used from  his article.]

“Share location.”  [Photo used from his article and not from McCormick's original piece]

While we all tacitly understand that by carrying a phone we are trackable, this becomes clearer as smartphones allow for a tactile interaction with GPS. What is baffling, often times across class divides, are the ways in which our actual physical location becomes rendered on digital interpretations of space: on a colored screen, with a pulsing blue dot representing ourselves. This logic, portrayed through the cartography of services such as Google Maps, can be incomprehensible to someone who lacks the necessary literacy to read, interact, and decipher maps. This can then recast the physical-spatial representations we all have in our minds with the visual and experiential images we come to interact with in the city.

I remember “sharing my location” with some friends in Cairo last year, and they WhatsApp-ed back wanting to know my street name or a local landmark. Having just arrived in the city, I had neither and advised them to follow the “blue dot” (them) on the map and to start walking along a path through a network of streets and buildings towards the red dot (me). While this example might seem somewhat insignificant, I think it foretells an altered way of learning, being, and moving in the city. These virtual representations of our physical environments are like an electronic guide, to be followed on our screens, as we step over curbs, through traffic, and around corners, all the while connected and existing in space in a different way. How many of us have seen someone walking on the street clearly navigating their path while looking at the map in their hand on their smartphone? How does this change our movement as well as our understanding of and connection to the city? On the same thought, how many of us remember phone numbers now that our phones do this for us? Thus, in what ways will the use, reliance, and integration of GPS services and products change our perception and knowledge of the spatial world?
This is especially relevant as more locations, roads, and spots become geo-referenced and overlaid onto these virtual maps.

Street signs are prominently displayed on every intersection in most cities in the US. [Photo taken from this page.]

I find this especially striking because one of my central a-ha moments when I moved from Mumbai to New York City in 2002 was the realization that my own experience of public transport in the two cities — so similar in other ways — was extraordinarily different.  Names of streets were prominently displayed on every intersection in New York and therefore used as landmarks.  Not so in Mumbai, where landmarks were often theaters and schools.  Maps of the train system overlaid on the geography, so common in New York, were hardly to be seen in Mumbai.  In New York, instructions to an unknown location would be in terms of going “uptown” or “downtown”, while in Mumbai, they would involve terms like “left” and “right”.  All of this made the phenomenological experience of navigating in these two cities radically different.

In Cognition in the Wild, his analysis of the process of ship navigation, Edwin Hutchins breaks down the problem of navigation (whether the unit of analysis is the ship or just a single human being) into two computational steps. 1  First, where am I (or where is the ship)?  Second, what do I do next to get to my destination (or what do we do next to get the ship to its destination)?

A woman using a sextant to determine the latitude and longitude. Courtesy of the Commons Archive at Flickr.

And the map, when it is available and widely used, is a key part of solving this first problem.  For Hutchins’ actors in the control-room of the ship, figuring out where they are now is a fairly protracted process.  It involves various operators stationed at different observation points on the ship, who use their instruments to obtain readings, which they then convey to those in the control room.  The people in the control room then use these readings, as well as more instruments, and finally a map to locate the point where they are. 2  For a pedestrian or a car-driver, finding out where he is now depends on the kind of environment he is in.  A pedestrian in New York might be able to look at the street names and use a map to know where he is.  A pedestrian in Mumbai might need to ask another passerby, or find a landmark, or even look at the address printed on the billboards of the shops that he sees.  The availability of GPS applications on the smart-phone does transform this problem a little bit.  First, the map itself is more easily available and portable (provided of course that you have a wireless or mobile connection), and on opening up the application, you instantly get the blue dot that tells you where you are on the map.

The GPS-based location app on the smart-phone. The moving blue dot changes my relationship to the map representation. [Image taken from this article.]

The second step — what do I do next? — is even more difficult.  For now, even if we just think about pedestrians (and leave Hutchins’ ship-navigation team out of it), it becomes clear that it isn’t that easy for pedestrians to know what to do even if they know where they are on the map (provided the map is readily available, as it is in New York City).  To know what to do, they need to establish some kind of correspondence between the map and what they see around them; they need to orient themselves using the map.  This is where things get difficult.  To orient myself to the map, I often find myself walking a block in a certain direction to see what the next street is — testing the waters, or the road, one might call it — and then, looking back at the map, I know what to do next.  (And since Murphy’s Law often holds true, my “test walk” is often in the direction opposite to the one I really need to head towards in order to get to my destination.)

I acquired a smart-phone only a few months ago and I’ve realized that it makes the little “test walk” unnecessary.  Because as I start walking, the little blue dot on my smart-phone moves as well.  (Again, determining whether it has really moved or whether this is just noise is something I need to figure out.)  And by establishing a correspondence between my movement and the dot’s movement, I can figure out my next move.

What McCormick’s piece points out though is that this ability to figure out my next move using the movement of the dot on the GPS application is not necessarily something that is only possible in a map-saturated culture like the US.  It could allow a different kind of navigation even in places where street-names aren’t so prominently displayed (or used in everyday contexts), where maps aren’t so ubiquitous, and where landmarks like schools, theaters and shops often provide a sense of orientation as one navigates through the city.

Just another way in which location-based computing applications are reconfiguring knowledge and power relations.

Some other interesting works on how people navigate (feel free to suggest others in comments!):

  1. Ed Hutchins.  Cognition in the Wild (1995).
  2. Janet Vertesi.  Mind the Gap: The London Underground Map and Users’ Representations of Urban Space, Social Studies of Science 38 (2008).
  3. Kevin Lynch.  The Image of the City (1960).
  4. Colin Ellard.  You Are Here: Why We Can Find Our Way to the Moon, but Get Lost in the Mall (2010).

  1. Here, he is following the three-level organization suggested by computer scientist David Marr.  Marr suggests that every perceptual system (e.g. vision) can be decomposed into three levels: first, what problem is the system solving?  This is the computational level of analysis.  Second, what algorithms are being used to solve the problem?  This is the algorithmic level of analysis.  Finally, how are the algorithms implemented?  This is the implementation level of analysis.  Hutchins is thinking about the problem of navigation at the computational level of analysis, although he is thinking not about individual human beings but about socio-technical systems like ships.

  2. This is where Cognition in the Wild seems to me a little dated, because today all they would need is a GPS system.  And indeed, the US military was one of the first to use the GPS system before opening it up so that it could be used by everyone else.
