Wednesday, May 21, 2008

Speed Dating

To my regular readers, I apologize.

I'm afraid I've written a very long post about a series of very short presentations.

In any case, I've decided that I suffer from attention surplus disorder, so that all attempts to enlighten and entertain me in faster formats are inevitably lost on me as an audience member.

In the upcoming Virtualpolitik book about digital rhetoric, in the chapter about PowerPoint, I write about the transnational "pecha kucha" speed presentation style that the lore says was developed by Astrid Klein and Mark Dytham for designers and architects in Japan, which has since become a global phenomenon. I have yet to see a real full-blown pecha kucha event, but this year's "SoftWhere" conference at the Software Studies Workshop was intended -- at least in principle -- to emulate the image-laden twenty-slides-of-twenty-seconds-each ideal. I'm also reading the new book Software Studies: A Lexicon this week, which incorporates selections from many of the presenters, so I made sure to make the trip south to attend this pre-HASTAC event at UC San Diego today.

Although a surprising amount of Warren Sack's talk was strangely taken up with images of book covers, such as The Postmodern Condition or Machine Dreams, he did have a number of provocative observations about the difference between what he called "digital ideology" and "digital life" or the vernacular experience of daily culture. As Sack described this cultural shift, “It is not only possible but usual” for a person to be in "two places at the same time." He also noted that area codes no longer indicate "where we live" but rather "where we bought the phone." For those teaching the grand narratives of the Western canon, such as my colleagues in the Humanities Core Course, Sack argued that certain kinds of stories about epics, quests, allegories of being lost, and separation from family members may eventually become "strange to future generations" with GPS-enabled cellular telephones and other ubiquitous computing devices. Sack claimed that "the digital" is "not just a condition of scientific knowledge" but a phenomenon that changes what we know as "common sense.” Of course, given the black-box nature of many programs and the ways that popular culture often mistranslates ideas from computational sciences, there are ways that this "common sense" can also be read differently.

When Sack argued that there was a critical role to be played by software studies to get beyond what he characterized as the limitations of “the aesthetics of computer scientists," he was using a very specific disciplinary lens intentionally to further his critique of those who generally teach software in the academy. For example, his definition of "functionalism," which for Sack was a negative term, hearkened back to Adorno's criticism of early twentieth-century art that eschewed ornament and yet couldn't recognize that avoiding style was a style itself, and effaced the role that the word "functionalism" has played in cybernetics and systems theory. In our discussion afterward, Sack quite reasonably argued that these interdisciplinary discourses between anthropologists and information scientists ultimately played a relatively minor role in teaching computer science as a profession and that it was necessary for artists to assert their pedagogical role as well. In his talk he argued that software instruction was often limited to "speed," "efficiency," and "correctness" borrowed from the worlds of business and mathematics, and that even concepts like "interactivity," "user-friendliness," and "realism" were generally treated in a cursory manner.

When it came to pretty pictures, there was much more to be seen in the presentation by conference impresario Lev Manovich, who shared his cosmopolitan impressions of the "new architectural and spatial imagination." Given the current fascination among academics for DIY production, Manovich made a particularly apt point that all the "excitement about amateur content" misses the importance of the role of the "global professional culture universe" in shaping a "software society" that may be distinct from a "knowledge society." This interest in "support networks" and "contemporary lifestyles" creates a somewhat different transnational map of the culture industry from the narratives of Manuel Castells and one that is oriented around design questions dear to my heart, such as this blog posting that asks why one of my favorite things in the world (literally) can't be better designed: the overly humble hotel minibar. Manovich seemed to be almost riffing on Henry Jenkins's famously fan-oriented invitation to the reader, "Welcome to Convergence Culture," with his own explicit "Welcome to Cultural Analytics" after he sketched out seven trends. As he pointed out, statistics are becoming a part of "personal life" for many people in social media environments where data can be mined and graphed.

In the visuals for Ian Bogost's talk, there wasn't as much evidence of the rhetorical way he uses his Leica, but it was a noteworthy presentation nonetheless, which previewed his upcoming MIT Press book, co-authored with Nick Montfort, in the new "platform studies" series. As a person trained in the study of literature, Bogost is well aware of the importance of "material constraint" to "expression" and cited a range of examples that included mnemonic devices for oral-formulaic poetry, the language experiments of the Oulipo, and the chemical properties of film emulsion. Bogost also offered a helpful taxonomy for software studies, in which my work tends to occupy the upper and -- probably fair to say -- most superficial level: 1) Reception/operation, 2) Interface, 3) Form/function, 4) Code, 5) Platform.

In the book with Montfort, Bogost looks closely at the Atari system and how cost considerations played a role in how software was written, particularly because of the role played by a piece of equipment called the television interface adapter, or TIA. Because the early Atari lacked a frame buffer, the programmer had to construct every scan line and thus create displays that depended upon rows of color. Bogost argued that this was particularly significant because it shaped the very genre of many seminal games. In the case of the adaptation of Adventure, which had been a text-based game, these constraints were particularly important in shaping the history of videogame design.
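The TIA point can be made concrete with a toy simulation. The sketch below is hypothetical Python of my own, not actual 6502 assembly or anything from the Bogost/Montfort book: it only illustrates why a machine without a frame buffer, whose color registers can be rewritten between scan lines but not easily within one, tends toward displays built from horizontal bands of color.

```python
# Toy illustration (not real TIA code): without a frame buffer, the Atari
# programmer changed color registers as the electron beam drew each scan
# line, so the natural unit of the display is a horizontal band of color.

NUM_SCANLINES = 192   # visible lines in a typical NTSC frame
LINE_WIDTH = 160      # TIA pixels per line

def render_frame(color_changes):
    """Build a frame line by line, the way a 2600 kernel 'races the beam'.

    color_changes maps a scan-line index to the value loaded into the
    background color register before that line is drawn; the last value
    written persists until it is changed again.
    """
    frame = []
    current_color = 0
    for line in range(NUM_SCANLINES):
        # Register writes happen during horizontal blanking, between
        # lines -- so each rendered line is a solid row of one color.
        current_color = color_changes.get(line, current_color)
        frame.append([current_color] * LINE_WIDTH)
    return frame

# Sky above, ground below: two register writes yield two bands of color.
frame = render_frame({0: 0x84, 120: 0xD6})
```

Even this crude model shows why early 2600 graphics read as stacked stripes: variation comes cheaply down the screen and expensively across it.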

This engagement with what has been for too long denigrated as "technological determinism" continued with the next presentation by Anne Helmond of the Institute for Networked Cultures, which sent me a wonderful surprise package of their publications a few months ago, thanks to Geert Lovink. Helmond is involved in research that looks at how blogging engines may be shaping the very rhetorical practices of blogging itself, particularly when it comes to the topographies of what has been called "the commentosphere." On the practical level of pedagogy and publication, Cynthia Nie also discussed this issue at a meeting of the Digital Educators Consortium in December of last year.

DEC colleague Mark Marino followed up with a presentation about work being done on the collaborative blog Critical Code Studies. However Marino was also willing to undermine the gravitas that otherwise might be claimed by his new field by joking about the pretensions of yet another field in the Critical _____ Studies mold, which The Valve has called the "Critical X Studies" paradigm. Certainly, as someone affiliated with the Critical Information Studies movement with Siva Vaidhyanathan, I find myself interrogating this model as well. At the end of the Virtualpolitik book, I look at how postwar "Information Science" became "Information Studies" and the resultant loss in the field's interdisciplinary ambitions and possibilities for collaboration, so that -- although this move away from the absolutist ideology of science may have been salutary -- it also came with certain costs in that it encouraged a centrifugal tendency toward intellectual fragmentation. Marino also reminded audience members of how Espen Aarseth pushed against the colonization of game studies by critical theory. Although Marino felt that including "rhetoric, economics, and politics" is important, he is willing to acknowledge his own anxiety that maybe it is all "just a metaphor." As Marino said, “Why don’t I just use cooking?” as a semiotically rich code system through which to understand the world.

In "#include Genre," Jeremy Douglass, Marino's frequent collaborator at the blog WRT: Writer Response Theory, talked about the role that "quotation, intertextuality, and transclusion" play in understanding how particular genres of digital media evolve. As models, Douglass looked at Raph Koster's work on the development of the 3D shooter and Jesper Juul's research on matching tile games.

Since media theorist Benjamin H. Bratton recently edited Paul Virilio's Speed and Politics, it was perhaps somewhat ironic that the accelerated pecha kucha format seemed to undermine the impact of his idea-rich talk, which was often illustrated with stock figures whose eyes were covered with anonymizing rectangles. On the Tarde/Durkheim balance, Bratton announced himself as being in favor of "more Tarde" and "less Durkheim" and thus more interested in emergent networks than in the nation-state. (He also cited the work of Bruno Latour.) As examples he pointed to the conditions of late modernity in which "maps are instrumental mechanisms for the chain of representation."

Yet it could be argued that by declaring that "All Design is Interface Design," Bratton simultaneously privileged the "point of contact that governs the conditions of exchange" and nullified the prospects of treating it as a discrete object of study in relation to the underlying source code. Instead his work focuses on different conjugations of assemblages and the "distribution of the sensible and insensible" across many types of interface. In other words, instead of turtles all the way down, one could say that perhaps for Bratton it seemed to be a matter of interfaces all the way down. Instead of a hierarchy like the Bogost/Montfort ordering, Bratton provided a series of interpretive approaches to interfaces that included those for "groups of people" and "groups of groups of people" that indicated a much more sophisticated cultural analytics approach than the available time allowed for explication.

As a rhetorician interested in how government institutions simultaneously serve as regulators and media-makers, I had obvious reasons to take notes on the talk by Kelly Gates about how Ocean Systems' line of "government solutions" products for the Avid video editing system, and more specifically the dTective software package, was used to transform surveillance video into evidence admissible in a court of law through nonlinear editing techniques. She argued that "postproduction enables an emerging class of law enforcement specialists" who share media in collaborative work environments. For Gates, this involves optimizing both "visual opacity" and "visual acuity," although she also claimed that these Hollywood-style editing techniques were not necessarily "disruptive to the relationship between image and reality."

Many years ago, while still a graduate student, I was a research assistant for a Joyce scholar, so I was utterly charmed by the fact that Nick Montfort abandoned the pecha kucha format entirely in favor of running a Python computer program, which can be downloaded here, that emitted little but variations of Molly Bloom-style "yeses" for his appointed time. For electronic authors, Montfort argued that the challenge was not writing a sentence or a series of sentences but rather composing a "distribution."
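Montfort's own program is not reproduced here, but a minimal Python sketch in the same spirit (the word lists and function names are my own invention) shows what "composing a distribution" might mean in practice: the author specifies a space of possible utterances rather than a fixed text, and each run samples from that space.

```python
# A hypothetical sketch, not Montfort's actual program: the "author"
# defines a distribution over Molly Bloom-style affirmations, and the
# program draws variations from it.
import random

OPENERS = ["", "and ", "then ", "O "]
YESES = ["yes", "Yes", "yes I said", "yes I will"]
CLOSERS = ["", ".", " yes."]

def emit(rng):
    """Sample one utterance from the space of possible 'yeses'."""
    return rng.choice(OPENERS) + rng.choice(YESES) + rng.choice(CLOSERS)

rng = random.Random(1904)  # seeded so repeated runs give the same stream
lines = [emit(rng) for _ in range(5)]
```

The authorial work here lies entirely in shaping the lists and their combination, which is one way to read Montfort's claim that the electronic author writes a distribution rather than a sentence.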

When it came to snappy academic one-liners, Peter Lunenfeld's talk on "Counterprogramming" had perhaps too many to write down in six and a half minutes. He also was unafraid to pay homage to the visual power of kitsch in his slides, which included a Bruce Lee statue erected in honor of "universal peace" that was unveiled in Bosnia. Yet bon mots aside, Lunenfeld was certainly willing to give serious credit to three previous conferences as inspiration: Street Talk: An Urban Computing Happening in 2004, the UCLA workshop on Design for Forgetting and Exclusion in 2007, and a 2008 conference on The Workaround as a Social Relation.

He also previewed his forthcoming book from MIT Press, The Secret War Between Downloading & Uploading, which looks at a long history of the computer as a culture machine. He divides this chronology into a pageant of six sections: "The Patriarchs" (Vannevar Bush and J.C.R. Licklider), "The Plutocrats" (Thomas Watson Sr. and Thomas Watson Jr.), "The Aquarians" (Alan Kay and Douglas Engelbart), "The Hustlers" (Bill Gates and Steve Jobs), "The Hosts" (Linus Torvalds and Tim Berners-Lee), and "The Searchers" (Larry Page and Sergey Brin).

As I argue in the following passage of the Virtualpolitik book, there is another sense in which Bush and Licklider can be seen as "patriarchs."

It is noteworthy that – even in Bush’s wildest imagination – the Memex owner does not know how to type, since the man inserts “longhand analysis” into the burgeoning hypertextual document that Bush describes. After all, typing was considered a skill consigned to women in the twentieth-century workplace. Even fifteen years later, J.C.R. Licklider was still assuming that hand-drawn symbols and speech recognition technologies would be necessary to achieve what he called “man-computer symbiosis,” since “one can hardly take a military commander or a corporation president away from his work to teach him to type."

Because Bush discusses how cultural prejudices and institutional forms of blindness can stymie technological innovation, it is particularly ironic that he cannot see the consequences of his own gender ideologies and what may well be an unconscious set of beliefs that he holds about the femininity of certain labor practices. Although Bush’s interest in perfecting voice input devices and writing tablets may seem prescient in the current age of ubiquitous computing and intuitive interface design, at the time it meant that many of the inventions that he imagined would be unable to get off the drawing board for decades, so that Bush was essentially arguing that funding and effort be directed to impractical pie-in-the-sky technologies.


Although Lunenfeld well understood the appeal of the digital humanities movement that emulates "big science" by producing "big humanities," he argued that there was "still a lot that can be done on the small." In particular he cautioned against a "rush to replace psychoanalysis with cognitive science" exemplified in humanists' current fascination with brain mapping, the most dubious of which he singled out for criticism: the "godscan." For Lunenfeld, there are also other political and cultural stakes to be attuned to outside the academy, particularly when "the market is out there and pushing machines" to become a mobile mix of shopping mall and television screen.

Unfortunately, by the time we got to the double-serving presentation by Casey Reas and Ben Fry, I could do little more than promise myself that I would order their book Processing from MIT Press and check out their website for open source creative tools at Processing.org, which explains the goal of their software as follows:

Processing is an open source programming language and environment for people who want to program images, animation, and interactions. It is used by students, artists, designers, researchers, and hobbyists for learning, prototyping, and production. It is created to teach fundamentals of computer programming within a visual context and to serve as a software sketchbook and professional production tool.


Given my work on digital libraries, I was also sorry that Matthew Kirschenbaum only had six and a half minutes to talk about what he jokingly called "Critical Storage Studies," which involved scholarship about "storage, inscription, forensics, and materiality" and allows for the fact that software exists as physically inscribed objects, which can even be seen as palimpsests. Kirschenbaum has a major role in the big-budget Preserving Virtual Worlds Project with Illinois, Maryland, Stanford, and RIT.

Michael Mateas's talk about "Authoring and Expression" touched on some of the connections between rhetoric and software studies that he was just beginning to work with when I met him at the Digital Arts and Culture conference in 2005. By suggesting that "an architecture is a machine to think with," Mateas examines "authorial and interpretive affordances" and takes the traditional triad of author-text-audience in classic rhetoric into the realm of artist–system–audience. Furthermore, as a creator keenly aware of the role of ideology, Mateas asks: if “the space of the sayable” is constrained, "what does it mean to consciously design this?" He followed up this idea by pointing out that "establishing a sign system" is a privileging activity, although audiences can still play an active role. Although there is a risk of solipsism by asserting that it is "narrating to ourselves that defines progress and representation," Mateas is not necessarily a radical relativist. For him, "computation is always double" and involves the "relation between code machine and rhetorical machine" in which there is an active circulation of signs. In what could be read as a gentle jab at the serious games with which his work is often associated, Mateas noted that there are problems with claiming to know about "learning" and "planning" in AI architecture, since this certainty kills the very circulation of signs. As his final examples of the importance of thinking about "craft practices" and "representational practices," he pointed to "weird languages" that include esoteric programming languages such as Chef and Shakespeare.

As the day concluded, Noah Wardrip-Fruin, who is experimenting with blog-based peer review for his book Expressive Processing, had the final words. Since government and institutional rhetoric is my specialty, I'm not sure that I entirely agreed with his reading that asserted the superiority of The Restaurant Game to the ACM's letter of protest about the "Total Information Awareness Program," given the relative sizes of their constituencies, but Wardrip-Fruin's willingness to test out ideas in open forums certainly shows his sensitivity to public sphere issues. He also earned points for bravery by using automated timing for his slides so that the session ended with a strict-construction pecha kucha presentation.

Update: In addition to some interesting conversations about the conference that are linked to this posting on Facebook, there have been some noteworthy reactions in the blogosphere. Benjamin Bratton's reflections about the bifurcations of the project, along with his own definition of software, are here. Anne Helmond had a more concise review of the day's proceedings for the Institute of Network Cultures here.


3 Comments:

Anonymous Noah W-F said...

Hi Liz -

I probably wasn't quite clear in my presentation, but the idea I was trying to present is the main one of the statistical AI section of Expressive Processing.

Noah

4:35 PM  
Blogger Liz Losh said...

Thanks for the link! I enjoyed reading your fuller treatment of both the TIA program and The Restaurant Game.

5:04 PM  
