By Ian Besler, Post-Graduate Fellow
The city of Google Los Angeles is bursting with palm trees, edged by sparkling beaches, and even features a Hollywood sign, much like the conventional Los Angeles that many people know so well from film and television.
There’s a bustling port at its south end, teeming with massive cargo ships and the spectrum color palette of thousands of stacked, corrugated shipping containers. There’s a valley sprawling with single-family homes, light industrial warehouses, and car dealerships at its north end. There’s a basin south of that, and a range of foothills at the terminus of the Santa Monica Mountains dividing the valley on the north from the basin on the south.
But there’s also something strange tucked into that range of foothills in Google Los Angeles. Another kind of division. There’s an edge. A frontier.
Like the manifestation of superstitions supposedly held by nervous sailors in stories of early global sea exploration, we eventually come upon an unsettling and disorienting digital edge while exploring Google Earth. This boundary is common to most Google cities. Some edges are more complex and considered than others, weaving along boulevards and thoughtfully avoiding housing developments. The edge of Google Los Angeles, on the other hand, is callous and doesn’t divert for mere homes and office buildings. In Alhambra, east of Los Angeles, the edge erases roughly half of the 12-story steel and glass headquarters of the Los Angeles County Department of Public Works, bisecting the tower almost exactly along its diagonal.
Prior to the ascent of ubiquitous digital earth browsing platforms (such as Google Earth and Bing Maps) and mobile mapping and traffic navigation applications (such as Google Maps, Apple Maps, and Waze), the comprehensive rendering of the city would have been a comparatively rare space of encounter. This might have commonly consisted of a flat map published by Rand McNally or Thomas Bros. Maps, publishers of The Thomas Guide, the once-coveted fixture in the automobile of any serious Southern California motorist.
Any other point of encounter would have been novel or privileged—say the panoramic vista from the viewing deck of some tourist attraction or monument (The Gateway Arch in St. Louis or the Eiffel Tower), or the vista on display in some executive office suite (see Antonioni’s Zabriskie Point, for some cinematic examples).
The view afforded from the seat of a passenger airliner or an adequately elevated interstate overpass would have been a much more common vantage from which to experience and examine the city, its expanses, and its boundaries — albeit a relatively brief and transient one. The frame of the airplane or automobile window — a geometry of steel encased in plastic or some similar finishing material, surrounding a glass barrier — dictates rather forcefully the experience of this view; it’s difficult to feel connected to a landscape when the frame reinforces a sense of separation.
“This generally intellectual character of the panoramic vision is further attested by the following phenomenon, which Hugo and Michelet had moreover made into the mainspring of their bird’s-eye views: to perceive Paris from above is infallibly to imagine a history; from the top of the Tower, the mind finds itself dreaming of the mutation of the landscape which it has before its eyes; through the astonishment of space, it plunges into the mystery of time, lets itself be affected by a kind of spontaneous anamnesis: it is duration itself which is panoramic.” – Roland Barthes
Otherwise, the rare exhibition, say the General Motors Futurama display at the New York World’s Fair, or the U.S. Army Corps of Engineers Bay Model in Sausalito, California, or The Great Train Story at the Museum of Science and Industry in Chicago, would have been a singular occasion to study the built environment as an abstraction. These are scale models—objects that occupy a presence in a way distinct from images, but here, too, a frame is always present, often composed of similar materials and construction methods. The difference is that generally, in the case of scale models, for the viewer to take in the panoramic vision on offer, one must look in rather than looking out.
“Gone are the days when the only way to get a bird’s eye view of your favorite city was from the window of a penthouse apartment or helicopter. Now you can soar above the skyline by simply opening Google Earth on your desktop or mobile phone.” – Google
In Google Maps and Google Earth, innumerable hours can disappear in exploration of a mediated rendering of the city, generally through the frame of the computer or mobile screen. This is a rendering of a landscape that is accountable to a dataset of exact characteristics as they existed at the moment that the high resolution photographs were taken and the captured imagery interpolated to create the photogrammetric model. The viewer can control the scale depicted in the frame such that the rendering of the entire planet (roughly 40,000 miles above the Earth’s surface) can transition to a version of the view at eye level (roughly 6 feet above the Earth’s surface) in approximately 15 seconds — the duration being determined by one’s Internet bandwidth capacity and a speed setting in the application’s preferences, which runs the gamut from “slow” to “fast.” Nearly eighteen years after the technology to render an entire planet surface was first demonstrated, the motto “from outer space to in your face” is still powerfully apt.
The set of video works, “The Resolution Frontier,” makes explicit use of the filmic tracking shot to identify the edge of the model’s resolution in Google Earth and to activate it as a threshold — this is the frontier at which algorithmic geomodeling ends and the handmade model is allowed a tentative stay of execution, until, inevitably, Google’s scanning efforts envelop the entire surface of the Earth.
A video composition generated from Google Earth (a “tour” in the software’s parlance) is not so much composed by the user — as we might regard composition as a process of framing and recording in most time-based work — as it is scripted. The interfaces that Google Earth provides seem to resist the suggestion of the framing of a perspective or the movement through space of a recording device (as opposed to “Cameras” that users can create in most 3D software, such as Autodesk’s Maya or Robert McNeel & Associates’ Rhinoceros). Rather, the software provides property windows, “Get Info” dialog boxes for manipulating “views” of a “path.” Properties relating to the perceived speed of movement (from “Slow” to “Fast”) and rendering rate of the software, among others, are manipulated in the Preferences window.
It’s clear from the company’s promotional material and tutorials that Google imagines touring as a wonderful tool in the hands of well-intentioned land-use advocates and other motivated generators of content.
A search on YouTube for “Google Earth” yields an abundance of videos exposing “secret places,” “strange discoveries,” “hidden places,” and “Google Earth secrets.” Through rapid shifts in location, bombastic or evocatively mysterious soundtracks, wild gesticulation with screen-recorded cursor arrows, and provocatively suggestive captions (in default Arial typeface, rife with misspellings) these videos insist that human-made disasters, elaborate conspiracies, and pre-historic symbology can be collected, constellations drawn, and rational conclusions reached — a powerful affordance, by any standards, for one piece of software. Such is the cultural influence that media generated with Google Earth possesses, despite being a resolutely consumer-grade tool.
Amazingly, Google Earth — despite whatever loyalties its name implies — is indifferent to which astronomical body it renders. Google Earth allows users to view topographical renderings of not only the Earth, but also the Moon and Mars, via the “Explore” sub-menu (“Exploring” being perhaps an imperfect term for carrying out computational processes). In other words, if a user is “Exploring” the Moon in Google Earth, any pin, data point, or track of points that exists in the software’s library will be projected, almost as if by default, onto that astronomical body — with distances scaled in proportion to the difference in radius and surface area between the Earth and the body called upon for exploration.
According to Google Earth’s re-projection of the Earth’s land masses onto the Moon, the Apollo 11 landing in Mare Tranquillitatis would have taken place somewhere in the north of the Democratic Republic of Congo, roughly 34 miles (on the Moon’s surface) southeast of the river port of Bumba. Coincidentally, four of the six manned Moon landings would, according to Google Earth’s projection, have taken place in Africa. The Apollo 12 and Apollo 14 landing sites would have been in the Atlantic Ocean.
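That “roughly 34 miles” figure can be checked by hand: re-projection here amounts to reusing the Apollo 11 site’s selenographic latitude and longitude as if they were terrestrial coordinates, and then measuring the great-circle distance on the Moon’s radius rather than the Earth’s. A minimal sketch in Python, assuming published coordinates for the Apollo 11 landing site and approximate coordinates for Bumba; the exact figures are illustrative, not drawn from the essay:

```python
import math

MOON_RADIUS_KM = 1737.4  # mean lunar radius
KM_PER_MILE = 1.609344

def haversine_km(lat1, lon1, lat2, lon2, radius_km):
    """Great-circle distance between two lat/lon points (in degrees)
    on a sphere of the given radius, via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Apollo 11 landing site, Mare Tranquillitatis (selenographic degrees)
apollo11 = (0.6741, 23.4730)
# Bumba, Democratic Republic of the Congo (approximate)
bumba = (2.19, 22.47)

# Same angular coordinates, but distance measured on the lunar sphere
km = haversine_km(*apollo11, *bumba, MOON_RADIUS_KM)
print(f"{km:.1f} km ≈ {km / KM_PER_MILE:.1f} miles")  # ≈ 55 km ≈ 34 miles
```

The Apollo 11 point also falls south and east of Bumba, consistent with the bearing given above; on the Earth’s radius the same angular separation would be nearly four times as far.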
What’s compelling about this activity — of inadvertently remapping American Moon landings across the continent of Africa and the Atlantic basin — is that it suggests an ever-expanding set of misuses, hacks, and appropriations of software affordances in digital maps and models. These misuses push back against Google’s constriction of the user agency that was once so central to Google Earth as a tool for creating a depiction of the city.
This essay was written as part of a post-graduate research fellowship in the Media Design Practices program at Art Center College of Design in Pasadena, California, with advice and support from Erin Besler, Anne Burdick, Walton Chiu, Tim Durfee, Ben Hooker, Kevin Wingate, and Mimi Zeiger. A version of this essay was originally published on Medium.