The digital culture we now live in was hard to imagine twenty years ago, when the Internet was hardly used outside science departments, interactive multimedia was just becoming possible, CDs were a novelty, mobile phones unwieldy luxuries and the World Wide Web did not exist. The social and cultural transformations made possible by these technologies are immense. During the last twenty years, these technological developments have begun to touch on almost every aspect of our lives. Nowadays most forms of mass media (television, recorded music and film) are produced and even distributed digitally; and these media are beginning to converge with digital forms, such as the Internet, the World Wide Web, and video games, to produce a seamless digital mediascape.
At work we are surrounded by technology, whether in offices or in supermarkets and factories, where almost every aspect of planning, design, marketing, production and distribution is monitored or controlled digitally. Galleries and museums are far from exempt from the effects of these technological transformations. Indeed, it might be suggested that museums and galleries are profoundly affected and that the increasing ubiquity of systems of information manipulation and communication presents particular challenges to the art gallery as an institution. At one level these challenges are practical: how to take advantage of the new means of dissemination and communication these technologies make possible; how to compete as a medium for cultural practice in an increasingly media-saturated world; how to engage with new artistic practices made possible by such technologies, many of which present their own particular challenges in terms of acquisition, curation and interpretation.
Arguably, at another level the challenges are far more profound: they concern the status of institutions such as art galleries in a world where such technologies radically bring into question the way museums operate. This is particularly true of ‘real-time’ technologies with the capacity to process and present data at such a speed that the user feels the machine’s responses to be more or less immediate. Real-time computing underpins the whole apparatus of communication and data processing by which our contemporary techno-culture operates. Without it we would have no e-mail, word processing, Internet or World Wide Web, no computer-aided industrial production and none of the invisible ‘smart’ systems with which we are surrounded. ‘Real time’ also stands for the more general trend towards instantaneity in contemporary culture, involving increasing demand for instant feedback and response, one result of which is that technologies themselves are beginning to evolve ever faster. The increasing complexity and speed of contemporary technology are a source of both euphoria and anxiety.
This is reflected in the recent work of a number of influential commentators. Richard Beardsworth states that ‘[O]ne of the major concerns of philosophical and cultural analysis in recent years has been the need to reflect upon the reduction of time and space brought about by contemporary processes of technicisation, particularly digitalisation’.1 In an essay published, ironically perhaps, on-line, the literary theorist J. Hillis Miller describes some of the symptoms of our current technological condition:
As this epochal cultural displacement from the book age to the hypertext age has accelerated we have been ushered ever more rapidly into a threatening living space. This new electronic space, the space of television, cinema, telephone, videos, fax, e-mail, hypertext, and the Internet, has profoundly altered the economies of the self, the home, the workplace, the university, and the nation-state’s politics. These were traditionally ordered around the firm boundaries of an inside-outside dichotomy, whether those boundaries were the walls between the home’s privacy and all the world outside or the borders between the nation-state and its neighbours. The new technologies invade the home and the nation. They confound all these inside/outside divisions. On the one hand, no one is so alone as when watching television, talking on the telephone, or sitting before a computer screen reading e-mail or searching an Internet database. On the other hand, that private space has been invaded and permeated by a vast simultaneous crowd of ghostly verbal, aural, and visual images existing in cyberspace’s simulacrum of presence. Those images cross national and ethnic boundaries. They come from all over the world with a spurious immediacy that makes them all seem equally close and equally distant. The global village is not out there, but in here, or a clear distinction between inside and out no longer operates. The new technologies bring the unheimlich ‘other’ into the privacy of the home. They are a frightening threat to traditional ideas of the self as unified and as properly living rooted in one dear particular culture-bound place, participating in a single national culture, firmly protected from any alien otherness. They are threatening also to our assumption that political action is based in a single topographical location, a given nation-state with its firm boundaries, its ethnic and cultural unity.2
French philosopher Bernard Stiegler points to a ‘technicisation of all domains’ being ‘experienced on a massive scale’. This is leading to ‘countless problems’, including:
The installation of a generalised ‘state of emergency’ caused not only by machines that circulate bodies but by data-transport networks: the growing paucity of ‘messages’, illiteracy, isolation, the distancing of people from one another, the extenuation of identity, the destruction of territorial boundaries; unemployment – robots seeming designed no longer to free humanity from work but to consign either to poverty or stress; threats surrounding choices and anticipations, owing to the delegation of decision-making procedures to machines that are on the one hand necessary since humanity is not fast enough to control the processes of informational change (as is the case for the electronic stockmarket network), but on the other hand also frightening since this decision making is combined with machines for destruction (for example in the case of polemological networks for the guidance of ‘conventional’ or non-‘conventional’ missiles, amounting to an imminent possibility of massive destruction); and, just as preoccupying, the delegation of knowledge, which not only modifies radically the modes of transmission of this knowledge but seems to threaten these forms with nothing less than sheer disappearance.3
This includes the ‘extraordinary influence on behaviour by the media, which controls the production of news that is transmitted without delay to enormous population masses of quite diverse cultural origins, by professionals whose activity is ‘rationalised’ following exclusively market-oriented criteria within an ever more concentrated industrial apparatus’. Stiegler suggests that:
In this age of contemporary technics, it might be thought that technological power risks sweeping the human away. Work, family and traditional forms of communities would be swept away by the deterritorialisation (that is, by destruction) of ethnic groups, and also of nature and politics (not only by the delegation of decision making but by the ‘marketisation’ of democracy), the economy (by the electronisation of the financial activity that now completely dominates it), the alteration of space and time (not only inter-individual spaces and times, by the globalisation of interactions through the deployment of telecommunication networks, the instantaneity of the processes, the ‘real time’ and the ‘live’, but also the space and time of the ‘body proper’ itself, by tele-aesthesia or ‘tele-presence’).4
Friedrich Kittler suggests that the digitisation and circulation of information made possible by the installation of optical fibre networks is driven by Pentagon plans to construct a communications network that would not be disrupted by the electro-magnetic pulse that accompanies a nuclear explosion. This, in turn, is fundamentally altering our experiences of the media:
Before the end, something is coming to an end. The general digitisation of channels and information erases the differences among individual media. Sound and image, voice and text are reduced to surface effects, known to consumers as interface. Sense and the senses turn into eyewash. Their media-produced glamour will survive for an interim as a by-product of strategic programs. Inside the computers themselves everything becomes a number: quantity without image, sound, or voice. And once optical fiber networks turn formerly distinct data flows into a standardised series of digitised numbers, any medium can be translated into any other. With numbers, everything goes. Modulation, transformation, synchronisation: delay, storage, transposition; scrambling, scanning, mapping – a total media link on a digital base will erase the very concept of medium. Instead of wiring people and technologies, absolute knowledge will run as an absolute loop.5
Kittler does at least concede that ‘there still are media; there is still entertainment’. Literary theorist Bernhard Siegert is somewhat more apocalyptic in that he sees the development of real-time networks leading to the end of art altogether:
The impossibility of technologically processing data in real time is the possibility of art. As long as processing in real time was not available, data always had to be stored intermediately somewhere – on skin, wax, clay, stone, papyrus, linen, paper, wood, or on the cerebral cortex – in order to be transmitted or otherwise processed. It was precisely in this way that data became something palpable for human beings, that it opened up the field of art. Conversely it is nonsensical to speak of the availability of real-time processing, insofar as the concept of availability implies the human being as subject. After all, real-time processing is the exact opposite of being available. It is not available to the feedback loops of the human senses, but instead to the standards of signal processors, since real-time processing is defined precisely as the evasion of the senses.6
Meanwhile Andreas Huyssen suggests that one response to the ever-greater ubiquity of real-time systems is an increasing interest in memory. Writing about the building of Holocaust memorials Huyssen observes that:
Both personal and social memory today are affected by an emerging new structure of temporality generated by the quickening pace of material life on the one hand and by acceleration of media images and information on the other. Speed destroys space, and it erases temporal distance. In both cases, the mechanism of physiological perception is altered. The more memory we store on data banks, the more the past is sucked into the orbit of the present, ready to be called up on the screen. A sense of historical continuity or, for that matter, discontinuity, both of which depend on a before and an after, gives way to the simultaneity of all times and spaces readily accessible in the present.7
Elsewhere Huyssen proposes that:
Our obsession with memory functions as a reaction formation against the accelerating technical processes that are transforming our Lebenswelt (lifeworld) in quite distinct ways. [Memory] represents the attempt to slow down information processing, to resist the dissolution of time in the synchronicity of the archive, to recover a mode of contemplation outside the universe of simulation, and fast-speed information and cable networks, to claim some anchoring space in a world of puzzling and often threatening heterogeneity, non-synchronicity, and information overload.8
Huyssen thus suggests one idea about what the role of the museum or gallery might be in our current technological conditions: a ‘place of resistance to’ and ‘contemplation outside’ of the effects of ‘accelerating technical processes’. Indeed, museums and galleries deal with things, objects, whose very materiality would seem to make them resistant to the transformations wrought on other discourses by electronic and digital media. And it would seem from visiting a gallery such as Tate Modern that art is still very much a matter of producing such objects: paintings, sculptures and so on.
But the status of the museum or gallery in relation to ‘the accelerating technical processes that are transforming our life-world’ is more complex. As an archive, a form of artificial, external memory, the museum or gallery cannot stand outside of, or separate and resistant to, the technical means that structure our memories. In the mid-1980s Jacques Derrida flagged for urgent attention:
[T]he immense questions of artificial memory and of modern modalities of archivation which today affects, according to a rhythm and with dimensions that have no common measure with those of the past, the totality of our relation to the world (on this side of or beyond its anthropological determination): habitat, all languages, writing, ‘culture’, art (beyond picture galleries, film libraries, video libraries, record libraries), literature (beyond libraries), all information or informatisation (beyond ‘memory’ data banks), techno-sciences, philosophy, (beyond university institutions) and everything within the transformation which affects all relations to the future.9
Derrida pursues this theme in his book Archive Fever, where he suggests that:
[T]he archive is not only the place for stocking and for conserving an archivable content of the past which would exist in any case, such as, without the archive, one still believes it was or will have been. No, the technical structure of the archiving archive also determines the structure of the archivable content even in its very coming into existence and in its relationship to the future. This archivisation produces as much as it records the event.10
[W]e should not close our eyes to the unlimited upheaval under way in archival technology. It should above all remind us that the said archival technology no longer determines, will never have determined, merely the moment of the conservational recording, but rather the very institution of the archivable event. This archival technique has commanded that which in the past even instituted and constituted whatever there was as anticipation of the future.11
The gallery is as performative as it is constative. It creates the past it supposedly simply shows by what it chooses to accept as a donation, to buy, to curate, conserve, and display. Thus it affects not just our understanding of and access to the past, but also our relation to the future by choosing the legacies that are available to us and to future generations. And this is not just a question of taste, fashion, finances and so on. It is fundamentally bound up with the structure of the gallery as an institution, in terms of its understanding of its role, its intentions and duties, and even its physical embodiment. For example, the most cursory comparison between the history of post-war art and the Tate’s holdings will demonstrate that, for all its intentions to represent, as best it is able, art of that period, there are many forms of practice with which it has failed to engage at all, or has engaged only partially or belatedly. These include Cybernetic Art, Robotic Art, Kinetic Art, Telematic Art, Computer Art and net.art.
It is far from coincidental that all these and others I have not mentioned are practices that emerged either in reaction against or in response to the increasing importance and ubiquity of information and communications technologies, such as telephony, television, computing, networking and so on. It is not, of course, that Tate is deliberately following a policy of exclusion in terms of the above. It is rather that an institution founded in and for the very different conditions of art production and reception of the late nineteenth century is simply not properly equipped to show such work, at least not as it is presently constituted.
Yet such work has a history that goes back to the Second World War. The War had necessitated a number of important technological developments, including digital computing and radar, as well as related discourses such as Cybernetics, Information Theory and General Systems Theory. In the decades that followed the War artistic responses to the possibilities that these technologies and ideas offered proliferated. These were often facilitated or inspired by the emigration of artists and designers connected to Kineticism and the Bauhaus to the United States after the War. In the 1950s and early 1960s John Cage developed work that engaged with notions of interaction and multimedia and with the possibilities of electronics, such as his famous ‘silent piece’, 4’33”. His work was one of the main inspirations not just for other composers working with electronic means but also for artists interested in process, interaction and performance, such as Allan Kaprow and those involved with the Fluxus Group.
In the United States the 1950s also saw some of the first electronic artworks, made by, among others, Ben Laposky and John Whitney Sr, as well as some of the first experiments in computer-generated music, by Max Mathews at Bell Labs. Meanwhile, in Europe, composers such as Pierre Boulez, Edgard Varèse and Karlheinz Stockhausen were also experimenting with electronics, while artists such as Jean Tinguely, Pol Bury, Nicolas Schöffer, Takis, Otto Piene, Julio Le Parc, Tsai Wen-Ying, and Len Lye (also known as a filmmaker), and groups such as Le Mouvement, the ‘New Tendency’, ZERO and the Groupe de Recherche d‘Art Visuel, started to explore the possibilities of Kineticism and cybernetics for art. This work was accompanied and encouraged by the work of theorists such as Abraham Moles in France and Max Bense in Germany, both of whom wrote works in which information theory and cybernetics were applied to art. Bense was able to put his ideas into practice through his founding of the Stuttgart University Art Gallery. During his two-decade-long tenure as head of the Gallery it held some of the very first exhibitions of computer art.
In Britain a generally pastoral and anti-technological attitude had prevailed in the arts since the nineteenth century, though there were exceptions such as the Vorticist movement in the early twentieth century. But the major force for promoting technological and systems ideas in this country was the short-lived but influential ‘Independent Group’ (IG), which was a loose collection of young artists, designers, theorists and architects connected with the Institute of Contemporary Arts. Through shows and discussions at the ICA and elsewhere, advanced ideas about technology, media, information and communications theories and cybernetics were presented and debated. The most famous exhibition with which the IG was connected was This is Tomorrow at the Whitechapel Art Gallery in 1956, which explored many of these ideas with great panache.
The needs of nuclear defence in particular, and military funding more generally, had led to the development of the computer as an interactive visual medium, rather than simply a ‘number cruncher’. Along with other technological developments this produced an increased interest in the possibilities of such technology as a tool for art. In 1965 and 1966 the first exhibitions of computer art were held at the Stuttgart University Art Gallery and the Howard Wise Art Gallery in New York. In the late 1960s the increasing sophistication and availability of technologies such as computers and video and the ideas of theorists such as Buckminster Fuller and Marshall McLuhan gave further impetus to the development of art practices involving both the technologies themselves and related concepts. It is possible to discern the emergence of a utopian ‘systems aesthetic’, in which the combination of new technologies and ideas about systems, interaction and process would produce a better world. Artists, composers, filmmakers, scientists, architects and designers all seized upon the possibilities of new technologies and ideas to produce work that either involved such technology or alluded to the world it was helping to bring about.
Among the more important artists and groups were Roy Ascott, David Medalla and Gordon Pask in Britain, all of whom employed ideas derived from Cybernetics; Lilian Schwartz, Edward Zajac, Charles Csuri, Ken Knowlton, Leon Harmon and Michael Noll, who pioneered computer graphics in the United States, while Manfred Mohr and others connected with Max Bense did the same in Germany; the filmmakers Stan Vanderbeek and Len Lye; and the Fluxus members Wolf Vostell and Nam June Paik, who were among the first to use televisions in their work. Paik, whose work also involved other technologies such as tape, was also one of the first artists to take advantage of the development of portable video cameras, using them to produce some of the first video art, a practice taken up by other young artists of the time, including Les Levine and Bruce Nauman. At the same time other technologies, such as electronics, lasers and light systems, were exploited by artists such as Vladimir Bonacic, Otto Piene and Dan Flavin. One of the most important developments of the period was that of large-scale multimedia environments. Among those involved in such work were Robert Rauschenberg, Robert Whitman, John Cage, La Monte Young and Marian Zazeela with their Theater of Eternal Music, Mark Boyle, and groups such as USCO and Pulsa. This type of work intersected with developments in psychedelic rock music and underground entertainment. Many of those later considered to be part of Conceptual Art were then allied with these kinds of projects.
Some of the most important work was undertaken under the aegis of ‘Experiments in Art and Technology’ (EAT), a group founded by Billy Klüver and Robert Rauschenberg and dedicated to fostering collaborations between artists and engineers. In 1966 EAT held its famous show 9 Evenings at the Armory in New York. Over the eponymous nine evenings a series of collaborative happenings was staged, involving both artists and engineers. In the years that followed a number of major exhibitions involving new technologies were held, including The Machine as Seen at the End of the Mechanical Age at MOMA, New York in 1968, which was accompanied by a show of work commissioned by EAT, Some More Beginnings, at the Brooklyn Museum. In the same year the legendary exhibition Cybernetic Serendipity, curated by Jasia Reichardt, was held at the ICA in London. A year later there was Art by Telephone in Chicago and Event One in London (the latter organised by the Computer Arts Society, the British equivalent of EAT). In 1970 the critic and theorist Jack Burnham organised Software, Information Technology: Its New Meaning for Art at the Jewish Museum in New York. Like Cybernetic Serendipity this show mixed the work of scientists, computer theorists and artists with little regard for disciplinary demarcations. In 1971 the results of Maurice Tuchman’s five-year Art and Technology programme were shown at the Los Angeles County Museum.
Jack Burnham and Jasia Reichardt were also among those who produced critical works on the subject of art, science and technology. Burnham published his magnum opus Beyond Modern Sculpture in 1968. At around the same time Reichardt published a number of works, including a special issue of Studio International to accompany her exhibition, while Gene Youngblood published Expanded Cinema, an extraordinarily prescient vision of experimental video and multimedia. How important this area was then considered is demonstrated by the fact that Thames and Hudson published two books on art and technology within two years of each other: Science and Technology in Art Today by Jonathan Benthall in 1972 and Art and the Future by Douglas Davis in 1973, the same year in which Stewart Kranz produced his monumental work Science & Technology in the Arts: A Tour Through the Realm of Science/Art.
It is hard to recapture the utopian energy and belief embodied in these exhibitions and publications. As far as Reichardt, Burnham, Davis and others were concerned, the future of art lay in engaging with the concepts, technologies and systems through which society was increasingly organised. Yet the apogee of this thoroughly utopian project also represented the beginning of its demise, and the replacement of its idealism and techno-futurism with the irony and critique of Conceptual Art. To begin with at least, it was hard to distinguish between conceptual art and systems art. Indeed, for much of the time they were virtually interchangeable. But by 1970 the difference was beginning to become clear. In that year, the same year as Burnham’s Software show, Kynaston McShine curated an exhibition at MOMA with a name that placed it firmly in the systems art area. Information may have sounded very systems-oriented, and showed some of the same people as Software, but it did not include the technologists and engineers who had also been included in the earlier show. Furthermore, the general attitude evinced by the artists towards technology was increasingly distanced and critical. Perhaps one of the last gasps of systems art came in 1971, when Robert Morris, now considered a paradigmatic conceptual artist, had a show at the Tate. Though it did not involve technology per se the show was almost entirely concerned with interaction, feedback and process, with visitors encouraged to climb on and manipulate the works on display. (A show further from the arid intellectualism that supposedly characterises Conceptual Art is hard to imagine.) Famously, the show was closed after five days, and only reopened in a far less interactive form.
Thus the early 1970s saw the apparent disappearance of systems art, and its supersession by other approaches. Its failure, if it can be so described, can be put down to a number of factors: the quality of much of the work itself; the failure of the exhibitions to work as intended; a rejection on the part of artists of the collaborations with industry necessary to realise projects and exhibitions; a suspicion of the technocratic pretensions of systems art, and of cybernetics, with its roots in the military-industrial-academic complex, and of technologies such as computers as means of perpetuating an instrumental and scientistic view of the world, particularly in light of their use in the Vietnam War and elsewhere; and, finally, difficulties in collecting, conserving and commodifying such work. The souring of the counter-culture in the early 1970s and the economic crises of the same period did little to encourage any kind of technologically based utopianism. In the 1970s and 1980s video art was gradually subsumed by the mainstream art world, but new media, electronic, computer and cybernetic art was largely ignored. Such art continued to be made and taught, but it was mostly shown in specialist and trade shows such as Siggraph, the annual conference organised by the Association for Computing Machinery for those with an interest in graphics. Many of those who had considered themselves to be artists working with technology ended up working in industry.
Yet, at another level, systems art can be said to have succeeded incredibly well, though not as art. The economic crises led to a restructuring of capitalist economies and global finance, which was aided by the increasing ubiquity of networked computing. This in turn heralded the beginnings of what became known as the post-industrial economy, in which information became the dominant mode of production (in the developed countries at least), as predicted by pundits such as Alvin Toffler and Daniel Bell. The techno-utopianism of the 1960s art world re-emerged in the 1970s in relation to developments such as the personal computer and the Internet, through which technologies developed during the Cold War by the ‘Military-Industrial-Academic complex’ were appropriated and repurposed by the neo-liberal end of the counter-culture. The late 1970s saw not just those developments but also the beginnings of computer special effects, video games and user-friendly systems, as well as cultural responses such as Cyberpunk fiction, Techno music and Deconstructive graphic design. At the end of the decade two French academics, Simon Nora and Alain Minc, wrote a report for President Giscard d‘Estaing which declared the ‘computerisation of society’ and the advent of ‘telematics’, meaning the coming together of computers and telecommunications. Work made in this period includes that of Douglas Davis, Harold Cohen and his program ‘Aaron’, Stelarc, Jeffrey Shaw (whose works include The Legible City), Lilian Schwartz, Paul Brown and Robert Adrian X.
It is around this time that discourses such as Poststructuralism and Postmodernism began to emerge, partly as a critical response to the ubiquity and power of information technologies and communications networks. The writings of Derrida, Baudrillard, Jameson, Deleuze and Guattari, and Lyotard, whatever the differences in their approaches and their ostensible subject matter, all imply a critique of systems and communications theories. It was possibly the space opened up by this critical approach that began to make systems art of interest to the mainstream art world again. In 1979 the first Ars Electronica festival was held in Linz, Austria, with the aim of examining the artistic applications of computers and electronic technologies. In 1985 the philosopher Jean-François Lyotard curated a massive exhibition at the Beaubourg, Les Immatériaux, which aimed to show the cultural effects of new technologies of communication and information. It was also around this time that the Tate put on its first show of computer-generated art, the 1983 exhibition of work produced by Harold Cohen’s ‘Aaron’, an artificial-intelligence program which drives a drawing machine.
But it was really at the end of the 1980s and the beginning of the 1990s that systems art began to re-emerge. This period also saw the beginnings of the World Wide Web (WWW), though it would take a few years for it to become widely available. In Liverpool in 1988 Moviola, an agency for the commissioning, promotion, presentation and distribution of electronic media art, was founded, under whose aegis Video Positive, an annual festival of such art, was held. (Moviola later transmogrified into the Foundation for Art and Creative Technology or FACT.) In the same year the first International Symposium on the Electronic Arts (ISEA) was held. A year later the Zentrum für Kunst und Medientechnologie (ZKM) was founded in Karlsruhe, Germany, which remains a major centre for media and technology arts.
In 1990 a similar institution was opened in Japan, the NTT InterCommunication Centre in Tokyo, while the San Francisco Museum of Modern Art held its first show of new media art. Throughout the 1990s the Walker Art Center in Minneapolis showed digital and new media works. It was also around this time that the first use of computers for the public display of information was undertaken at the National Gallery in London. In 1993 the Guggenheim in New York held an exhibition, Virtual Reality: An Emerging Medium, followed three years later by Mediascape. In 1994 the first Lovebytes festival of electronic art was held in Sheffield, and in 1997 the Barbican Art Gallery put on the Serious Games: Art, Technology and Interaction exhibition, curated by Beryl Graham. In Hull the Time-Based Arts Centre was started, with a remit to concentrate on new media arts and the intention to build a large Centre for Time-Based Arts. Last year FACT opened a new media arts centre in Liverpool, while the Baltic in Gateshead has committed itself to increasing its involvement in new media arts, as will West Bromwich’s new arts space The Public (formerly c/Plex) when it opens. It is noticeable that the only institution in London putting on gallery displays of such work is the Science Museum.
Perhaps the most important event in terms of digital art practice at this time was the development of the first user-friendly web browser in 1994. The World Wide Web had been developed as a result of the pioneering ideas of Tim Berners-Lee, a British scientist at the European Organization for Nuclear Research (CERN) in Switzerland. Berners-Lee was interested in using the Internet to allow access to digital documents. To this end he developed a version of the Standard Generalised Markup Language (SGML) used in publishing, which he called Hypertext Markup Language or HTML. This allowed users to make texts and, later on, pictures available to viewers with appropriate software, and to embed links from one document to another. The emergence of the Web coincided almost exactly with the collapse of the Soviet Union, and it was the new-found sense of freedom and the possibilities of cross-border exchange, as well as funding from the European Union and NGOs such as the Soros Foundation, that helped foster the beginnings of net art in Eastern Europe, where much of the early work was done.
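The markup scheme Berners-Lee devised was deliberately simple: plain text interspersed with tags, with links embedded via the anchor element. A minimal, hypothetical HTML document of the kind an early browser could render might look like this (the title and link target are invented for illustration):

```html
<!-- A hypothetical minimal page: text marked up with tags,
     plus a hyperlink embedded via the anchor (a) element -->
<html>
  <head>
    <title>A Minimal Hypertext Document</title>
  </head>
  <body>
    <p>This sentence contains a
      <a href="another-document.html">link to another document</a>,
      the embedded connection that made the Web navigable.</p>
  </body>
</html>
```

It was this ability to jump from one document to another, anywhere on the network, that net artists would later seize upon as a medium in its own right.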
When ‘user-friendly’ browsers such as Mosaic and Netscape came out in the early to mid 1990s the possibilities of the Web as a medium were seized upon by a number of artists, who, in the mid 1990s, started producing work under the banner of ‘net.art’. This meant work that was at least partly made on and for the Web and could only be viewed on-line. The term ‘net.art’ was supposedly coined by Vuk Cosic in the mid 1990s to refer to artistic practices involving the World Wide Web, after he had received an email composed of ASCII gibberish, in which the only readable element was the words ‘net’ and ‘art’ separated by a full stop. Since then there has been an extraordinary efflorescence of work done under the banners of network art, net.art or net art, from Vuk Cosic, Olia Lialina, Alexei Shulgin, Rachel Baker, Heath Bunting, Paul Sermon, 0100101110101101.org, Natalie Bookchin, Lisa Jevbratt, Radioqualia, ®™ark, Matt Fuller, Thomson and Craighead, and many others.
At the same time, discussions and commentary about technology and art have proliferated through email lists such as Rhizome, Nettime and CRUMB (Beryl Graham and Sarah Cook’s digital curation list based at Sunderland University), as well as publications such as Mute. As in the late 1960s and early 1970s there have been a number of important publications on this area by, among others, Lev Manovich, Christiane Paul, Oliver Grau, Stephen Wilson, Edward Shanken, and Michael Rush, as well as PhDs in Art History departments on, for example, net.art (Josephine Berry, Manchester) and Computer Art (Nick Lambert, Oxford). Art history departments here and abroad are now starting to look seriously at this area, as shown by the Arts and Humanities Research Board’s recent decision to award Birkbeck College, University of London, a large research grant to study early British computer art, as well as many similar projects elsewhere. As the above suggests, there is a great deal of interesting and important work going on in this area, in terms of actual art practice as well as of institutional and academic engagement.
Such work reflects the fact that our lives are now so bound up with what Donna Haraway calls the ‘integrated circuit’ of hi-tech capital. It would be hard to overstate the extent to which the reality of our lives is entirely governed by technologically advanced processes and systems, from ubiquitous and increasingly invisible computer networks to mobile telephony to genetic manipulation, nanotechnology, artificial intelligence and artificial life. These technologies are intimately bound up with broader issues of globalisation, surveillance, terrorism, pornography and so on. The work undertaken in the 1960s and 1970s now looks remarkably prescient in its attention to the meaning and potential of new technologies, networks and paradigms of representation and engagement.
Yet, despite this and the proliferation of current practice in this area, such work is still under-represented at Tate. Obviously there have been welcome developments, including the net.art commissions and the Art and Money Online show, as well as the increasing interest in film, video and photography. But, welcome as these moves to encompass various forms of new media practice are, they mostly fail to engage with the kind of work mentioned previously. In particular, work that is interactive or process-based, or that involves networks, systems and feedback, is generally not catered for. The new media works now being collected and displayed by Tate are almost entirely static, even if they are time-based, in that they do not alter in response to interaction or their environment. This is true even of the net.art commissions.
But this is not an attempt to blame a particular institution for some kind of failure of perception and action. There are all sorts of good reasons why Tate should be wary of the work I have described. There are many difficulties in its collection, curation and display; there are other forms of art practice that have equal claim to Tate's attention; and its historical and contemporary importance may not be obvious. Tate has also been exemplary in organising different means by which such work and its curation can be discussed, including the Matrix: Intersections of Art and Technology series of talks and the series of talks by well-known curators of new media art held at Tate Modern in autumn 2003. But I believe that there are also compelling reasons why Tate should be thinking about how to engage with such work. It has a long and important history, which intersects at crucial points with other better-known forms of art practice. Indeed, those practices would be very different without this kind of work. Renewed interest in it will enhance and deepen our understanding of artistic developments in the post-war era. I would go so far as to suggest that no attempt to understand art of that period can be undertaken without taking into account such work.
Furthermore such practice, both in its historical and current manifestations, is of great importance in its capacity to engage with and reflect upon our current technological condition. This is one of the reasons why there are such a large number of artists working in this area. It is also why any move to collect and display such work is likely to prove very popular, especially among younger people. New technologies affect almost everybody, whatever their age, at work, at home or elsewhere. For most people in Britain under twenty-five or even thirty years of age, a world without video games, computer special effects, the Internet, the World Wide Web, mobile phones and so on, is almost unimaginable. The ubiquity of such technologies is symptomatic of deeper issues such as globalisation, genetic manipulation, and bio-terrorism, that are the concern of many people, young or old.
One of the ironies of net.art is that, despite being supposedly responsive to current developments, it repeats the gestures of previous avant-gardes. As I put it in my book Digital Culture:
Practically every trope or strategy of the post-war avant-garde has found new expression through net.art, including Lettriste-style hypergraphology, involving the representation of codes and signs, Oulipian combinatorial and algorithmic games, Situationist pranks, Fluxian or Johnsonian postal strategies, staged technological breakdowns, such as previously rehearsed in video art, virtual cybernetic and robotic systems, parodies and political interventions.12
But this repetition, far from being a reason to condemn network art, is precisely what gives it its strength, much as similar acts of repetition gave strength to the neo-avant-garde of the 1960s, which re-enacted gestures and strategies first performed by the so-called historical avant-garde of the 1920s and 1930s. In his book The Return of the Real, Hal Foster describes the latter in terms of Nachträglichkeit, the Freudian term for deferred action, by which an experience only assumes a traumatic dimension upon repetition and the delayed assumption of a sexual meaning. As Foster puts it:
‘[O]ne event is only registered through another that recodes it; we come to be who we are only in deferred action’. Historical and neo-avant-gardes are constituted in a similar way, as a continual process of protension and retension, a complex relay of anticipated futures and reconstructed pasts – in short, in a deferred action that throws over any simple scheme of before and after, cause and effect, origin and repetition.13
He continues that, ‘[O]n this analogy the avant-garde work is never historically effective or fully significant in its initial moments. It cannot be because it is traumatic – a hole in the symbolic order of its time that is not prepared for it, that cannot receive it, at least immediately, at least without structural change.’ Thus despite its continued repressions, failures and supersessions, the avant-garde continues to return, but, as Foster puts it, ‘it returns from the future’. It opens out the future to the contingent and the incalculable and thus the promise of the to-come. The avant-garde is the archive of the future.
The same might be said about net.art. Commenting on net.artist Vuk Cosic’s training as an archaeologist and Cosic’s own proclamation of the similarities between net.art and archaeology, Julian Stallabrass suggests that:
Net art, then, is seen as an archaeology of the future, drawing on the past (especially of modernism), and producing a complex interaction of unrealised past potential and Utopian futures in a synthesis that is close to the ideal of Walter Benjamin.14 [my emphasis]
‘Archaeology’ is of course cognate with ‘archive’, and both are concerned with the preservation of the material remains of the past. Net art delineates the conditions of archiving in our current regime of telecommunications. Derrida reminds us that the question of the archive is:
[A] question of the future, the promise of the future itself, the question of a response, of a promise and a responsibility for tomorrow. The archive: if we want to know what that meant, we will only know in times to come. Perhaps.15
This text has been written for the proceedings of the international conference "New Perspectives, New Technologies", organized by the Doctoral School Ca' Foscari - IUAV in Arts History and held in Venice and Pordenone, Italy, in October 2011.
The "portal" designed by Antenna Design to show net based art in the exhibition "Art Entertainment Network", Walker Art Center, Minneapolis, 2000. Courtesy Walker Art Center, Minneapolis.
In the late nineties and during the first decade of this century the term “new media art” became the established label for the broad range of artistic practices that are created with, or in some way deal with, new media technologies. Providing a more detailed definition here would inevitably mean addressing topics beyond the scope of this paper, which I discussed extensively in my book Media, New Media, Postmedia (Quaranta 2010). By way of introduction to the issues discussed in this paper, we can summarize the main argument put forward in the book: that this label, and the practices it applies to, developed mostly in an enclosed social context, sometimes called the “new media art niche” but better described as an art world in its own right, with its own institutions, professionals, discussion platforms, audience and economic model, and its own idea of what art is and should be; and that only in recent years has the practice managed to break out of this world and get presented on the wider platform of contemporary art.
It was at this point in time, and mainly thanks to curators who were actively involved in the presentation of new media art in the contemporary art arena, that the debate about “curating new media (art)” took shape. This debate was triggered by the pioneering work of curators – from Steve Dietz to Jon Ippolito, Benjamin Weil and Christiane Paul – who at the turn of the millennium curated seminal new media art exhibitions for contemporary art museums; and it was – and still is – nurtured by CRUMB – “Curatorial Resource for Upstart Media Bliss” – a platform and mailing list founded by Beryl Graham and Sarah Cook in 2000 within the School of Arts, Design, Media and Culture at the University of Sunderland, UK. As early as 2001, CRUMB organized the first ever meeting of new media curators in the UK as part of BALTIC's pre-opening program – a seminar on Curating New Media held in May 2001.
In the context of this paper, our main reference texts will be CRUMB-related publications, from the proceedings of “Curating New Media” (2001) to Rethinking Curating. Art After New Media (2010), a recent book by Beryl Graham and Sarah Cook; and New Media in the White Cube and Beyond, a book edited by Christiane Paul in 2008. Instead of addressing the specific issues and curatorial models discussed in these publications, we will try to focus on the very foundations of “curating new media”, exploring questions like: does new media art require a specific curatorial model? Does this curatorial model follow the way artists working with new media currently present themselves on the contemporary art platform? How much could “new media art” benefit from a non-specialized approach? Are we curating “new media” or curating “art”?
Vuk Ćosić, History of Art for Airports, 1997. Web project, screenshot.
A medium based definition
“The lowest common denominator for defining new media art seems to be that it is computational and based on algorithms.” (Paul 2008: 3)
“[...] in this book, what is meant by the term new media art is, broadly, art that is made using electronic media technology and that displays any or all of the three behaviours of interactivity, connectivity and computability, in any combination.” (Graham, Cook 2010: 10)
Whatever one may think about new media art, when it comes to curating the definition becomes strictly technical and medium-based. New media art is the art that uses new media technologies as a medium – period. No further complexity is admitted. Beryl Graham and Sarah Cook, for example, in the continuation of the paragraph quoted, seem to be well aware of the sociological complexity of new media art, but willingly put this aside to focus instead on the art that displays “the three behaviours of interactivity, connectivity and computability”, wherever it is shown and whatever it has been labeled. This is no surprise, because – especially when it comes to museum departments – curating has always been medium-based. This model generally works, though it has drawn criticism from curators, especially when the complexity of the medium in question does not allow such simplification. In 2005, writing about video art, David A. Ross said: “Most often, at this point in time, video art is a term of convenience valued by museum conservators who have a professional need to devise proper storage and conservation standards for this specific medium, but even in this situation it is inadequate” (Gianelli, Beccaria 2005: 14-15). It is inadequate, Ross goes on, because video has become a ubiquitous medium, one that often makes its appearance in what would be better defined as “mixed media sculptural installations.” The same can also be said for other contemporary art forms such as performance and installation, but it applies to new media even more – a definition that, even in its strictly technical sense, applies to a wide range of forms and behaviors, from computer animation to robotics, from internet-based art to biotechnologies.
Of course, both Paul and Graham / Cook – and, generally speaking, all good new media art curators – are fully aware of this complexity, and this awareness shapes their theoretical writing. It is exactly because of this that Graham and Cook, in their book, focus on behaviors rather than on specific forms and languages. At the same time, they are fully aware of new media art's resistance to the white cube and the specific kind of space it offers. As Christiane Paul puts it: “Traditional presentation spaces create exhibition models that are not particularly appropriate for new media art. The white cube creates a “sacred” space and a blank slate for contemplating objects. Most new media is inherently performative and contextual.” (Paul 2008: 56) Paul goes even further, arguing that new media art does not just resist the white cube, but even the kind of understanding provided by the contemporary art world: “New media could never be understood from a strictly art-historical perspective: the history of technology and media sciences plays an equally important role in this art's formation and reception. New media art requires media literacy.” (Paul 2008: 5).
Paul responds to this situation by painting a picture of the curator as less a caretaker of objects and more a mediator, interpreter or producer (Paul 2008: 65). But what does this mediation apply to? Paul implicitly responds to this question when she talks about the average museum / gallery audience, and their common criticisms of the new media art they encounter there. According to Paul, “the museum / gallery audience for new media art might be divided roughly into the following categories: the experts who are familiar with the art form; the fairly small group of those who claim a “natural” aversion to computers and technology and refuse to look at anything presented using them; a relatively young audience that is highly familiar with virtual worlds, interfaces and navigation paradigms but not necessarily accustomed to art that involves these aspects; and those who are open to and interested in the art but need assistance using it and navigating it.” (Paul 2008: 66, my italics). This paragraph already shows that, in most cases, what's at stake is differing levels of familiarity with technology among the audience. This is even more evident when Paul starts considering “recurring criticisms” of new media art – well summed up by the titles of the subsequent chapters: “it's all about technology”; “it doesn't work”; “it belongs in a science museum”; “I work on a computer all day – I don't want to see art on it in my free time”; “I want to look at art – not interact with it”; “where are the special effects?”
Paul concludes that “the intrinsic features of new media art ultimately protect it from being co-opted by the art establishment” (Paul 2008: 74). Yet, this argument can lead us to another, equally (or maybe even more) legitimate conclusion: that technology ultimately prevents new media art from being understood by the contemporary art audience.
Moving the focus
“The hype surrounding the technology driving new media art hasn't helped its long term engagement with the art world...” (Graham, Cook 2010: 39)
This is where a strictly medium-based definition obviously leads. If new media art is rooted in the active use of technology as a medium, there is no way to do without it; and if technology is the main obstacle between new media art and the art audience, all new media curating has to do is attenuate the impact of the technology, and make the art feel more “at home”, albeit artificially. Or, as Vuk Cosic puts it, talking about net-based art: “In my view, when you show online stuff in a gallery space, which is not online, you essentially put it in the wrong place. It's not at home. It's not where it is supposed to be. It's decontextualized; it's shown in a glass test-tube. So whatever you do is just an attempt to make it look more alive. You either move the test-tube or have some fancy lighting. And this is how it works for me.” (Cook, Graham, Martin 2002: 42).
An easy argument against this could be that technology won't always be new. We got used to TV monitors and projectors in galleries; we will get used to computers as well. The youngsters currently drawing their first pictures on an iPhone at the age of two will eventually grow up, and new media art will look more natural to them than it does to us. Yet this is only true up to a point. The hype surrounding “new media” has not died down over the last two decades; quite the contrary, it burgeons every time a new gadget is launched on the market, reaching an ever wider audience. And so far, the art world's resistance to new media art has not been greatly affected by the fact that everyone living in developed countries knows Google, and half of them have a Facebook account.
So, the questions at stake are: if technology is the problem, can curating allow the art audience to access new media art without technology, or at least reduce the impact of technology on the perception of the work? Can the curator become a mediator between art that tackles the social, political and cultural implications of technology, and the art audience, rather than between technology and the art audience, as in the model described by Paul and Graham / Cook? If this is possible, it can only happen, of course, outside of the strictly medium-based definition outlined before, and in the context of a definition that focuses more on new media art's critical engagement with new media and the information age, and on its ability to reach different audiences in different ways: not just the contemporary art audience, but also, on the one hand, the more specialized audience attending new media art events and, on the other, the “bored at work network” that can be reached online.
In other words, if new media curating wants to better serve the practice it supports and the audiences it addresses, it has to shift its focus from the use of technology to other features that are intrinsic to new media art, but that have been sidestepped by the debate around new media curating so far. It has to be more about curating the art that deals with new media, and less about curating the actual new media themselves. Furthermore, it has to take advantage of the intrinsic variability of new media and the adaptability of artists capable of speaking different languages (something that should not be mistaken for conformism) in order to facilitate the presentation of their art to different audiences, and foster a better, broader understanding of their work.
Electroboutique's presentation in the show "Holy Fire. Art of the Digital Age", Brussels, iMAL 2008. Image courtesy the author.
“The professional tends to classify and to specialize, to accept uncritically the groundrules of the environment. The groundrules provided by the mass response of his colleagues serve as a pervasive environment of which he is contentedly unaware. The 'expert' is the man who stays put.” (McLuhan, Fiore 1967 (2001): 92)
But why has the debate around new media curating, which, as we said above, involves curators active in the field of contemporary art and well aware of the problems that the art audience can experience when faced with technology, not yet grasped this point? It is probably just a case of them uncritically accepting the groundrules of this arena, namely the new media art world. Their ideal audience is probably still that described by Paul as “the experts who are familiar with the art form” – that is, the niche audience of new media art. They probably still place media literacy above art literacy as a condition for understanding a piece of new media art.
Unfortunately, this approach does not fit in with their declared mission, which is to bring new media art to a broader audience and forge dialogue with other forms of contemporary art. Of course, this mission also includes increasing the audience's familiarity with technology as a medium for art, but it is not limited to that. We could go even further, and say that this is just the last stage of a long journey undertaken to show the contemporary art audience the extraordinary impact of media and technologies on the world we live in, the importance of increasing our awareness of them for a better understanding of contemporary society and, as a consequence, the topical nature of the art that engages with them critically, in terms of both medium and content.
This might lead us to conclude that there is no need for the specific figure of the “new media curator”: a contemporary art curator open to new languages and with a good level of media literacy can do an even better job, in terms of picking out what is relevant to a contemporary art audience, working with the artist to find a good way of “translating” the work for the white cube, and forging dialogue with other forms of contemporary art. Perhaps this will be the case in the future. At the present time, the cultural insularity of new media art and the existence of two different art worlds mean that specialized curators are still necessary. But new media curating should be reframed in terms of mediating between two art worlds and two different cultures, rather than mediating between the art audience and technology. It should be about bringing new media art to the art audience in a way that enables it to be accepted as art, and that also obliges people to reconsider their preconceptions about what can be accepted as art. With or without technologies.
Oliver Laric, Kopienkritik, 2011. Installation, Skulpturhalle Basel. Image courtesy the artist
Follow the artists
“My interest in technology is in its relationship with culture and its effects on society, and in many cases that can be communicated in things other than code.” (O'Dwyer 2012: 7)
Artists are already showing curators the way along this path. At some point, the artists formerly known as new media artists started taking the problem of how to present their art in the white cube more seriously, and realized that sometimes putting technology aside was not just a compromise with the market, or a way of watering down their works and making them more palatable to the masses, but the right thing to do. It was a process that took time, involved trial and error, and ultimately meant accepting failure; it was eventually facilitated by the emergence of a new generation of artists who enjoyed both bits and atoms, and who didn't see “new” and “old” media in opposition, but as lines of inquiry that should be pursued together, and that can sometimes converge, sometimes diverge, and sometimes criss-cross. A complete, or at least representative, list of examples would go far beyond the scope of this short paper, so I will provide just two recent, random examples. Around the time I started writing this text, I received two press releases: the first announcing that Berlin-based artist Oliver Laric, in conjunction with The Collection and Usher Gallery in Lincoln, had just won the Contemporary Art Society's £60,000 “commission to collect” award; and the second announcing a new work by US-born, Paris-based artist Evan Roth, currently on display at the Science Gallery in Dublin. Though the “new media artist” label would be problematic for both, it is hard to dispute the fact that the two artists in question originally attracted the interest of a community of “experts” with their (mostly net-based) early practice. Thanks to the CAS grant, Laric will now be able to create a new work of art for The Collection and Usher Gallery's permanent collection.
According to the press release, the work “will employ the latest 3D scanning methods to scan all of the works in The Collection and Usher Gallery's collections – from classical sculpture to archeological finds – with the aim of eliminating historical and material hierarchies and reducing all the works to objects and forms. These scans will be made available to the public to view, download and use for free from the museum's website and other platforms, without copyright restrictions, and can be used for social media and academic research alike. Laric will use the scans himself to create a sculptural collage for the museum, for which the digital data will be combined, 3D printed and cast in acrylic plaster.” The commission allows Laric to bring his ongoing project Versions, started in 2009 with a video essay and developed in subsequent years with other videos, sculptures and installations, to a new level. Versions looks at the issues around copyright, originality and repetition through history, up to the digital age. With the project for The Collection and Usher Gallery, he will give the gallery's audience the chance to learn and think about 3D scanning, digital manipulation, sharing, and the shifting relationship between the physical and the digital, all in the familiar form of a sculptural installation. The online audience, on the other hand, will be able to enjoy and interact with this amazing collection of digital material.
Evan Roth, Angry Birds All Levels, 2012. Ink on tracing paper, 188cm x 150cm. Installation view at the Science Gallery, Dublin, Ireland. Photo by Seb Lee-Delisle, image courtesy Evan Roth.
Angry Birds All Levels (2012) is the telling title of Evan Roth's latest work, consisting of 300 sheets of tracing paper and black ink attached to the wall in a grid with small nails. According to the Science Gallery website, it is “a visualization of every finger swipe needed to complete the popular mobile game of the same name. The gestures are visualized on sheets of paper the same size as the iPhone the game was originally created for. Angry Birds is part of a larger series that Roth has been working on over the last year called Multi-Touch Paintings. These compositions are created by performing simple routine tasks on multi-touch handheld computing devices [ranging from unlocking the device to checking Twitter] with inked fingers. The series is a comment on computing and identity, but also creates an archive of this moment in history when we have started to manipulate pixels directly through gestures that we were unfamiliar with just over 5 years ago.” Even if it is on show in a science museum, nobody would ever say it belongs there.
In both works, technology is part of the creative process and one of the issues at stake (but not the only one). In both cases, technology does not feature in the gallery, not out of convenience or for marketing reasons, but because this is what works best for the artwork itself.
In most cases, artists arrived at this point under their own steam, with little help from curators. Are new media curators ready to help them take the next step? If so, they should probably start by focusing on their art rather than their media.
 “Artworks showing these behaviors, but that may be from the wider fields of contemporary art or from life in technological times are included, however.” (Graham, Cook 2010: 10)
 As Paul explains: “If a museum visitor is unfamiliar with technology, it automatically becomes the focus of attention – an effect unintended by the artist.” (Paul 2008: 67)
 “Art that breaks with the conventions of contemplation and purely private engagement shocks the average museumgoer, disrupting the mind-set that art institutions so carefully cultivated.” (Paul 2008: 71)
 The “bored at work network” has been theorized by artist and researcher Jonah Peretti in the context of the Contagious Media Project. Cf. http://contagiousmedia.org/.
 A take on the way new media art circulates in the art market was offered by the exhibition Holy Fire. Art of the Digital Age, which I curated together with Yves Bernard for the iMAL Centre for Digital Cultures & Technologies in Brussels, Belgium (April 18-30, 2008). Cf. Bernard, Quaranta 2008.
 The press release is available in the News section of the website of the Contemporary Art Society: “Rising star Oliver Laric scoops Contemporary Art Society’s prestigious £60,000 Annual Award 2012 with The Collection and Usher Gallery, Lincoln”, November 20, 2012, http://www.contemporaryartsociety.org/news.
 Cf. http://sciencegallery.com/game/angrybirds.