Saturday, May 31, 2008

The Gory Details

Although official "Safety Alert" mass e-mails from the University of California used to be brief and cryptic compositions with very little information about the details of crimes on campus, these missives now use a different rhetoric that presents complex narratives of the events that occurred, complete with times, places, persons, and even emotional affects, as this official sample demonstrates.

On Thursday, May 29, 2008, a female UCI student was waiting out in front of Langson Library late in the afternoon when she was approached by a young man. He introduced himself as "Danny", a student from another (unknown) school. Danny and the UCI student talked in front of Langson Library for two hours. Danny then suggested they visit the Anteater Pub to eat or drink and they walked over to the Pub at 7:30pm.

While at the Anteater Pub, they continued talking and consumed several beers each (no food). At 9:30pm, the female student was tired and told Danny she had to go home; Danny offered her a ride and she accepted. They walked to Danny's car in Parking Lot #1. Danny's car is only described as silver in color (no further description). Danny drove the female student out of Parking Lot #1 and made a hasty turn into Parking Lot 3-A (adjacent to the Merage School of Business and across from the Social Science Parking Structure). The parking lot was empty. The female objected and asked why he had turned into the empty parking lot; Danny told the victim he wanted to talk some more.

Once parked in Lot 3-A, the student and Danny talked for another hour or so. Danny then unzipped the victim's jeans and reached in, touching her pubic area several times; no penetration occurred. The victim verbally resisted and Danny stopped. Danny then put his hand into the victim's shirt, grabbing her breasts. Fearing for her safety, the victim fled the silver car and ran toward Langson Library at 11pm. Danny followed the victim on foot for 100 yards, calling to her to stop; he then returned to his car and left the area. The victim made good her escape and arrived at Langson Library.

The victim was upset over the incident, and did not immediately seek police assistance. She sat outside Langson Library for two hours before being spotted by others who notified police at 1am. The victim provided the below suspect description; she was not injured.

Although I support providing these more elaborated narratives of alleged sexual assaults, in the interest of feminist consciousness-raising as well as law enforcement, I also worry that the novelistic level of detail in this version may encourage Internet "spoilers" to be skeptical of the account and indulge in victim-blaming, so that information excess spurs disputing the authority of a given source almost as much as information paucity can.

Labels: , , ,

Friday, May 30, 2008

The Hive Archive

The final roundtable session of the Social Computing Workshop was introduced by Nancy Van House of U.C. Berkeley, who has been working on the archeology of the Presidio with her students but has faced pedagogical challenges on the project because her graduate students proved not to be sophisticated "digital natives" when faced with the laundry list of tool literacies that were necessary to do the work: Macs, Google docs, digital cameras, Nokia N95 cameraphones, GPS, GIS, Flickr, Picnik, Bluetooth, video, digital audio recorders, Sophie, the Sophie reader, and Wordpress blogging. She even admitted that Cathy Marshall's research and advice about backing up data had taught her that her own expert personal archiving practices were flawed. Yet these "reputation mechanisms" also have promise in stimulating academic research and teaching. For example, she showed an interesting slide of a reconstruction of an archaeological site being excavated in Turkey that had been visualized in Second Life.

Although Van House lamented that there was "no business model for free and open information" and that our relative "autonomy as academics" made it hard to understand market pressures, Citizendium's Larry Sanger presented a much more optimistic narrative of the near future. He predicted that by 2020 or 2030 there would be "enormous amounts of free and credible content." Although the credibility problem may not be completely solved and one might have to pay a fee to get books, he listed on the blackboard all the cultural valuables that were being digitized: books, journals, encyclopedias, and archives. Although he conceded that it would continue to be true that the Internet would be "mostly crap, since that’s what people are interested in, just like television," he thought that the fact that people are increasingly likely to "expect information to be free" would make the development of a vast online research archive inevitable.

In response, some in the audience questioned whether it was really possible to make value judgments about "good" and "bad" content, since -- as Tad Hirsch noted -- political ideology and repressive government could play a role in suppressing certain important kinds of cultural imaginaries. However, Sanger defended his point and further asserted that "neutrality was not about relativism but about tolerance." As the conversation progressed, Alan Liu showed some tricky case studies, such as the obvious bias in the articles on "The Theory of Evolution," "The Earth," and other scientific subjects in Conservapedia and the Wikipedia lockdown of the article on Muhammad because it contained images of the prophet that offended some Islamic readers. As a further caution, William Warner opined that too many sources of interest to humanities scholars would still be barred from this utopian archive because of copyright restrictions on music and video.

Labels: , , ,

Applied Humanities

When Tad Hirsch opened his talk by describing design as "applied humanities," he was guaranteed a warm reception at the UCSB Social Computing Workshop. As a specialist in tactical media, Hirsch has done important work in "activist infrastructure" by using mobile phones and site-specific sensors. His most recent project, Dialup Radio, provides links and a demo number to hear "freedom phone" broadcasts in Zimbabwe that connect citizens anonymously to news sources and sites of grassroots organizing. He also developed Speakeasy, which is designed to link new immigrants in a Chinatown community to multilingual volunteers with social service expertise to whom calls can be routed. Hirsch said that he was drawn to mobile phone networks because he thought that they offered a "ubiquitous kind of accessibility" that provided more "opportunities for shared projects," although they were too frequently underused for this purpose. Hirsch is part of the Zones of Emergency project at MIT that also includes VP friend Trebor Scholz, and his other projects range from the whimsy of Tripwire, which consists of monitoring devices for aircraft noise that are disguised as coconuts, to the gravitas of an unnamed 2004 project that caused him to be served with a subpoena.

Hirsch described himself as committed to "promoting social change through a particular form of collective action." In some important ways, he said, "all computing is social," and this fact is demonstrated in the "hows and whys of technology." As an example, Hirsch pointed to the work of fellow participant and UC Irvine graduate student Brian Rajski, who is writing a dissertation about social computing in the Cold War mainframe era. As he said, "notions of collaboration and communication and community are intertwined with the history of computing," because "early computing systems were dependent on sharing." According to Hirsch, the contemporary incarnation of these practices also "increases the number and the frequency of social contacts," particularly now that they are "evolving to include more than text."

He also noted the "proliferation of free and open source software tools," such as Drupal, that provide "novel applications or sites for collaboration," particularly now that blogging software and content-management systems can be combined. From Hirsch we learned about Crabgrass, a social networking web application for movements working for social justice, and about Yahoo's Fire Eagle service, in which the user can choose to share location information with the API, although "location-awareness [is] still largely a dream." He discussed the sharing potential of "sensorware" as well, such as the Cambridge Mobile Urban Sensing project, in which individuals can monitor air quality and potentially other information about the local environment. He also praised free and open data visualization tools through which "random groups of people are creating new data sets," such as Many Eyes.

Hirsch emphasized that it was important not to neglect "traditional kinds of advocacy" in pursuit of technophilic coolware. For example, he pointed to the online medical community PatientsLikeMe, where those with the same condition can upload and share their own data, from symptoms to medications, in order to find common "points of advocacy" for new kinds of treatments and clinical trials, just as AIDS activists had done in earlier decades.

Of course, he conceded that these kinds of projects face "fundamental challenges" based on privacy, security, and history, because they potentially place a participant at risk. In political hot spots, dissidents can face dire consequences for breaking unjust laws, and patients could be manipulated by pharmaceutical companies posing as peers. Hirsch closed with some basic questions: "What are the rules of engagement? How can these processes be manipulated?"

If Hirsch argued that all computing was social, Peter Kollock countered that the dot.com crash seemed to show that computing was still actually insufficiently social. In his remarks, he looked back to the one billion dollars invested in what was breezily called "B2B," an attempt to build online markets for the wholesale industry, which Kollock called "an astonishing disaster," because "they didn’t realize that it was an exercise in social computing" and could only see the process as an exercise in efficiency. In their naive model of the market, online exchanges would "reduce marketplace friction for both buyers and sellers," as one chart showed. However, by "wanting to make it all about price" they coded "behavioral realities" out of the interface and compounded their errors with 1) a failure to model and 2) a failure to harvest extant social wisdom in existing systems, which people were not aware of if they weren’t traders. For example, they couldn't see "favors as risk management device" or the reality that commodities weren't really commodities that could be aggregated, because even gasoline was "modified and had flavors" and had geographical considerations to manage. He pointed to the wisdom of Valdis Krebs of Orgnet.com and Matthew Mahoney of SocialText as better contemporary models that avoided the mistakes of the earlier B2B approach.

These presentations stimulated a lot of conversation about emergent behavior. Alan Liu asked, "Are these kinds of consequences predictable?" Liu pointed to the work of Paul Strassman and its willingness to acknowledge that some things are formalizable and some are not, so that it is important to accept the necessity of inefficiency, particularly if behind-the-scenes activities are making the system work. To follow this point, Larry Sanger emphasized the importance of recognizing the distinction between collaboration and aggregation.

There was also a lot of talk about improvisation and the importance of flexibility, particularly in virtual environments. As Liu pointed out, IBM found itself surprised by what people did in its office space in Second Life, and, in the university, virtual worlds create needs for new kinds of pedagogical rules. "Do hands have to be raised?" "Should flying be forbidden?" Liu argued that these questions about authority meant that "by definition it is going to be emergent." Furthermore, as others pointed out, "some groups desire to be loosely confederated" and "optimization" may not be "really what people want." In closing, medievalist Carol Pasternack reminded the group that the old dichotomy between determinism and free will was still relevant in the current world of social computing.

Labels: , , ,

Blue Sky Diving

Rama Hoetzlein and Pablo Colapinto led the "Blue Sky" session for the UCSB Social Computing Group, which was designed to show their brainstorming about "engineering architectures of participation." Some of their insights were explicitly borrowed from the world of corporate consulting and market research. For example, they cited my high school classmate John Battelle on his work on the "database of intentions" in which "not everything is voluntary." They also mentioned learning from Forrester Research about the continuum from "people" to "objectives," "strategies," and -- finally -- "technologies," to which they added the term "issues." As Alan Liu pointed out, perhaps the Social Computing Group was still excessively focused on the middle part of that spectrum, given their interest in purposive actions in the public sphere.

To show the critical role that design and art can play in discourses about social computing, which are often dominated in academic professional and research associations by those in the social sciences and information sciences, they showed Hoetzlein's Quanta, which can serve as a "knowledge dissemination project" that may be more meaningful than many Web 2.0 applications, although it includes some of the same interpretive mechanisms, such as timelines, which can also be important features of many social networking sites.

I've put in boldface many of the key terms of the Blue Sky presentation, such as extending, teaching, distributing, to show the literacies that they were sketching out, which included ubiquitous computing technologies. They also thought that more traditional web-based practices of involving, commenting, journaling merited attention, such as the Annotated NY Times, where the group found readers deconstructing the Spitzer scandal, and the MySpace variant of what I have criticized as "Facebook journalism."

The group included measuring, predicting, conspiring as important concepts, in what might initially seem an unlikely triumvirate, because these terms acknowledge how "trust metrics" and "quantum computing" that measures the immeasurable may be linked to certain ways of navigating the Internet that go back to the etymology of "conspiring" in "breathing together." In the geopolitical sense, this could mean what the Atlantic Monthly has characterized as "Jihad 2.0" or -- as the audience pointed out -- it could be a veiled woman using Bluetooth technology to show her face to everyone with a compatible device for ubiquitous communication. They argued that organizing, rewarding, policing were also key functions and showed Santiago Sierra's analog artwork about putting laboring unpaid people literally in a box.

The group argued that visualizing, abstracting should not be tied to current versions of "the social graph," since it might just as easily turn out to be something entirely different, such as "The Social Giraffe," which represents a "different kind of geometry" or "whole other animal." In that spirit of fancy, which should not be a bad word in social computing -- although it often is -- the group added fantasizing, narrating to the list and displayed Matthew Johnson's "Liberty City vs. New York City" Flickr set.

It is interesting to note that they prefaced examples of their own imaginative creations with a discussion of high-risk social computing that was illustrated with the cockpit of the Atlantis space shuttle to make the point that the "more you represent information" the more you are potentially in danger of "the glass cockpit syndrome," which deprives users of the "knowledge that they are actually flying." Certainly, given that pilots in many airports take off and land in response to e-mail messages, our own proximity to this kind of social computing is worth recognizing. (This syndrome is important for other highly mediated fields outside of avionics that also depend on simulations for training, such as medical technology.) According to Liu, books like Gene Rochlin's Trapped in the Net could be useful for understanding this pathology.

Beyond all or nothing, fly-or-crash scenarios, the group asked "how do we season our information?" In other words, how do we handle all the nuances of our informational personalities and dispositions? To demonstrate this point, they showed a mock-up of "Dis-Play," "an information free-for-all," in which the user can click on someone to assume their identity and adjust their "grain of salt slider" to indicate the appropriate level of skepticism for messages from the outside world. As they suggested, one could generate one's online presence for a month and then "go to Bermuda," much as guests at fancy hotels can check their Blackberries at the front desk. As they jocularly suggested, perhaps you might want to “automate love life but have more control over your blog.”

Inspired by "hardware for intelligence," such as QR codes or vision systems, the group also proposed "The Social Spectrum Camcorder," which would combine identity detection, exposure settings, annotated community feedback, intuitive adaptive filtering, and a geospatial engine to generate a product that could be a marked-up digital standard. They also suggested installations for graveyards or libraries where messages could be sent from the past to the present such as “go six bookcases back,” so that more of the world would become "a game or a puzzle," and one could "leave information behind on your own time."

Other hypothetical projects involved interrogating spaces and connecting notions of graphs to awareness of a number of different ontologies in which we can orient our "friends" (or whatever we would call them) in relation to music or careers or disciplines. They also talked about a social dynamics simulation called Social Evolution that is based on grid computing but functions like The Sims video game, although the computer runs the character rather than the player. Finally they showed "Chalk," which dematerialized the computer interface entirely, so that an e-link could cause chalk drawings by children to appear on sidewalks at other playgrounds throughout the world.

Slides, including the group's inventions, are here.

In a gentle critique of the designers, Bill Warner pointed out that too often policy is put last, which is particularly regrettable given the economies of attention at work in our society. Elsewhere during the day, Warner also noted the dangers of "end of history" arguments and the uncritical acceptance of ideologies of liberalism, especially given the existence of current intellectual property regimes.

Labels: , , , ,

Raising the (Side) Bar

The trope of the "side bar" in the margins of online documents proved to be a useful figure as a way to visualize self-presentation and connectivity between agents in the Social Computing Workshop about wikis, social networking, and social bookmarking today.

IBM's Joan DiMicco introduced the first roundtable sessions with a reminder of the fact that traditional HCI "user-centered design" with a "user-needs" model had only been relatively recently supplanted by the current computer as "communication tool" paradigm. DiMicco worked with the group that created Beehive, an internal social networking site for the company with 30,000 members to suit a corporate culture that increasingly depended on a consulting rather than production role. Later in the day, she discussed how members of this online community frequently revolted against top-down dictates yet also called for more governance within the system, despite possible redundancy with pre-existing employee codes of conduct.

DiMicco asserted that designers could structure "what two people see about each other" and thus had "tremendous power to control the type of communication." (Perhaps the most disturbing example of these constraints that I can think of would be the limited lexicon of allowed vocabulary in Club Penguin, Disney's highly popular social networking site for children.)

DiMicco also introduced the following provocative questions to spur discussion:
  • How does a system engender trust? Encourage competition?
  • Should a system allow for deception?
  • Is the online community for group polarization or group critique?
  • How do you motivate users to participate? Persuasion, rewards, or neither?
The fact that DiMicco was considering possible social goods to be gained from deception was certainly refreshing in a group that was often extremely earnest about presenting an authentic online identity and seemed to be resistant to the salubrious effects of some dissimulation.

As an academic, I could use a social networking site that does two things Facebook can't: 1) recognize hierarchies and asymmetries in social relationships defined by institutions and 2) represent the dissensus in the university that makes for productive dialogue and debate among people who aren't merely "friends." (DiMicco very helpfully pointed out the existence of Essembly.com, but I was looking for something that was better suited to more subtle theoretical arguments rather than to relatively crude face-offs between members of differing political parties or "foes" in the system.)

DiMicco was joined by workshop organizer Alan Liu who described a "dream" that literally woke him up at night with a glorious vision of something that he described as "not a document but an identity," which included information about "authors, audiences, and genres." Perhaps more successfully than Ted Nelson has done with his Xanadu system, Liu was able to show several models to demonstrate the "action on the sidebar" that goes far beyond the simple text-encoding approaches of the past. As Liu said, "a template is a personality" that is humanized by virtue of having both a "head" and a "foot."

At the simplest level, this could be a blogroll that indicates an individual's discursive connections to others, as this blog does in a column to the right. Liu argued that it can also take more elaborated and collective forms, such as the set of reading tools on the sidebar of the Open Journal System. Although he admitted that there were many technical challenges, he was not opposed to automating this process, at least in principle. He did point out, however, that his own testing of Xobni (or "inbox" backward, since the firm chose not to use the Web 2.0 Company Name Generator) had come with some frustrations, since the software for organizing his mail into a sidebar of recognizable human conversations, people, and documents apparently slowed his system to a crawl.

Given our dependence on the code and systems of others, Liu very justifiably complained that too often what he called our "communities of treasured people" were stuck to particular proprietary applications or relegated to a sidebar that trivializes rather than acknowledges others. He even saw value in the portable system that he was imagining in "carrying around honored and valued dead," such as his recently deceased colleague Richard Helgerson. Thus, following Bruno Latour, he argued that "agents" may be the right way to think about making social connections more transparent to include posthumous participants. Particularly when a flavor-of-the-month mentality may dictate trends in online behavior, in which sites like Friendster fall into disuse, Liu's hesitance to commit to any of the current offerings for his dreamlike sidebar of cherished social connections is certainly understandable.

At one point, Liu asked attendees to raise their hands to indicate how many of them were active users of Facebook, and then he documented the moment with the photograph below. Liu is clearly a Facebook lurker, since he has no identifying picture or network to separate him from dozens of Alan Lius around the globe, but I was shocked to see how few other people, particularly of my age, counted themselves as active participant-observers. I don't know if that would be true of any other meeting of Internet researchers that I have attended during the past two years, and in the case of the recent Software Studies workshop, discussion about the conference continued on Facebook. Although I would agree that Facebook is obviously ad-driven data-collecting proprietary software, it is somewhat different in that its feeds and tagging features (and more intimate venue) make the act of citation more clearly related to the act of discussion. Unlike the perpetual "Zero Comments" syndrome that Geert Lovink describes in his book by the same name, I often get more comments on a blog posting inside the walled garden of Facebook than I do in the more impersonal space of the World Wide Web.

Yet Wikipedia co-founder and current Citizendium head Larry Sanger expressed concern that the group was too eager to collapse distinctions between "education" and "entertainment," and even described his own blog as not "individual" or "personal." Of course, from reading the Citizendium Blog, I might be inclined to disagree, especially when Sanger includes items like his public appearance at Oxford with anti-Web 2.0 gadfly Andrew Keen with a title like "Sanger versus/and Keen at Oxford" and a "should be fun" link to the event.

Medievalist Carol Pasternack observed that documents and the conversations surrounding them have frequently not been separable during the history of literature and that certain aspects of our social media behavior are consistent with the practices of the Middle Ages. She seconded Liu's point about the valorization of friendship for fellow scholars in the humanities who are long dead, but she also concurred with many at the workshop that forgetting could be as important as memory in many cases to preserve a functioning social document system.

As a group, we looked at a number of examples of tools for visualizing social networks, such as TouchGraph, which looked cool but did not show people's relationships to institutions. For example, my name is apparently unencumbered by professional associations, academic institutions, or scholarly presses. Liu also pulled up Where's George, which tracks the travels of a given dollar bill through user input of the relevant serial numbers; it is also a literal reminder of the "viral" character of Internet culture, in that it has also been used to model the progress of a pandemic. At the level of code, we learned about Facebook Markup Language or "FBML" and how the new P5 Guidelines from the Text Encoding Initiative incorporate "personagraphy."

Labels: ,

Street Cred

The UC Santa Barbara Social Computing Workshop convened today for a wide-ranging discussion about technologies that "can be defined as the deployment of network communication systems for the purpose of allowing communities of people to interact within particular domains of knowledge for one or more goals." Led by digital pioneer Alan Liu, who launched the Voice of the Shuttle, the sessions of the day were intended to get beyond merely defending traditional academic gate-keeping and aspired to more creative brainstorming activities that looked toward the future as well as back to the past.

Credibility was perhaps the most obvious theme for many of the attendees. Several of the suggested readings designed to foster discussion related to UCSB's MacArthur-funded project about Credibility Online, which is headed up by Miriam Metzger and Andrew Flanagin, who contributed a lot to the discussion. During sessions, Metzger talked about how we "manage multiple bounded identities," the problems with umbrella terms like "credibility" and "social computing," and the importance of being able to "visualize disagreement."

Pre-workshop suggested readings had included "The Hidden Order of Wikipedia," "Digital Media and Youth: Unparalleled Opportunity and Unprecedented Responsibility," "The New Metrics of Scholarly Authority," and a range of other articles and links. My only criticism of these reference points might be that too often they emphasized receptive rather than productive literacy. From my own anecdotal experiences I might suggest that assessing credibility and content-creation are often related activities and that initiatives for information literacy and distributed digital media production should not be separated in ways that they often are. In connection with this concern, Kathy Im of the MacArthur Foundation pointed me to the work of the University of Michigan's Soo Rieh, who is testing the hypothesis that forms of Internet participation and judgment may be related.

The other big agenda item involved constructive criticism of the campus's IGERT Proposal for a Social Computing Research Training program. Many attendees at the conference cautioned that this kind of interdisciplinary research program can fall prey to excessive rationalism and formalism, especially if the orientation of the inquiry focuses exclusively on online behaviors that are obviously purposive. For example, some called for inclusion of emotional considerations, meme-sharing, and the formation of collective identities on the principle that "expressive" practices should not be relegated to "entertainment." They also urged incorporation of the study of virtual worlds, ubiquitous computing, and games in the proposal, since social computing is already much more than use of the World Wide Web. Audience members also noted the disciplinary absences of philosophy and ethnography in the current draft of the project.

(More photos of the social computing group being social are here.)

Labels: , ,

Thursday, May 29, 2008

Flood Watch


A flood is a powerful image, important to the cosmological stories of many societies and significant for the mythology of digital culture as well. In the opening of Pierre Lévy's Cyberculture, he describes what he sees as a flood of information with which we are engulfed, which is "fluid, virtual, simultaneously gathered and dispersed." Lévy argues that the necessary response to this overwhelming deluge is for all of us to create our own arks, although -- thanks to social media and principles of collective intelligence -- these arks "exchange signals" and "impregnate one another."

So it is appropriate that Virtualpolitik friend Mark Marino has created a preliminary demo for the LA Flood Metro Crisis watch blog that goes along with a cell-phone based interactive story, in which, at each stage of the unfolding disaster, listeners can choose a different character's point of view from which to hear narration of the cataclysmic events and commentary. These scenes are populated by a cast that includes a jaded homeowner, a breezy weatherman, and a morbid newscaster. A locative component is planned for the future, so that participants can feel more spatially situated and perhaps even caught up in the suspense of such fast-moving catastrophic effects of nature.

Of course, distributed warning systems designed to send information to thousands of citizens' telephones automatically have been the subject of several UC research projects that are designed to improve disaster response. Marino seems to lampoon this kind of dehumanized official rhetoric, because one of the choices on the telephone menu takes the audience member to a government web page that is robotically read aloud by a machine reader, URL backslashes and all.

Labels: , , ,

Wednesday, May 28, 2008

Foreign Correspondent

When I think about the cosmopolitanism of university life, I often think of one particular Harvard dining hall conversation with Thant Myint-U, particularly when I am reading the newspaper about repression and military rule in Southeast Asia. Now, Thant Myint-U is known as a political pragmatist who did a stint as a UN official and writes editorials such as "The Burma Dilemma" and "Saving Burma the right way" in major newspapers, but when I met him he was a fellow college student, albeit one whose life experiences and global perspective made me self-conscious about my Pasadena provincialism. Now, like many authors, Thant Myint-U uses Facebook to keep in touch with his Ivy League peers to network and promote his publications. (It's also interesting to note the fact that the Wikipedia entry on Burma is currently locked in response to an apparent edit war over the very name of the nation, since many would place the information under "Myanmar," the name associated with the regime.)

Labels: , , ,

Typos Matter

In stories such as "Moody's facing more heat over debt ratings" and "Moody's Computer Glitch Prompts Stock Tumble," journalists are trying to explain how a particular glitch in the lines of code, which many reporters are comparing to a "typo," could have caused a number of risky investments tied to the U.S. mortgage crisis to be improperly labeled triple-A. The problem has to do with a complex debt instrument known as a CPDO or "Constant Proportion Debt Obligation," which is explained here at my pick for risk communication blog-of-the-month, riskopedia.
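None of the coverage publishes the actual rating code, of course, but a toy example (entirely invented here, not Moody's model) illustrates how a one-character slip in a model's logic can flip an output from junk to triple-A:

```python
def rating(expected_loss):
    """Map a toy expected-loss fraction to a letter grade.

    The thresholds are invented for illustration only.
    """
    if expected_loss < 0.001:
        return "AAA"
    elif expected_loss < 0.01:
        return "AA"
    return "BBB"


def buggy_rating(expected_loss):
    # The "typo": a stray minus sign makes every loss look negligible,
    # so even a risky instrument comes back triple-A.
    return rating(-expected_loss)


print(rating(0.05))        # a risky instrument rates BBB
print(buggy_rating(0.05))  # the same instrument, mis-rated AAA
```

The point of the sketch is simply that a defect invisible in a diff of two nearly identical files can silently invert billions of dollars of risk assessments.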

Labels: , ,

Tuesday, May 27, 2008

Collision Detection or Why Europeans are Better at Cheek-Kissing

I consider myself to be a patriotic U.S. citizen, but there are certainly some shortcomings in the nation’s vernacular body language that are hard to ignore. For example, in this country, people are far worse at cheek-kissing than their European counterparts. It's safe to say that even the most socially awkward European is likely to be a better cheek-kisser than the most suave American.

So, on behalf of my countrymen, I offer the following analysis that breaks the problem down into a series of conditional tests to avoid the most common pitfalls in this amicable nicety and to try to explicate what I see as the rules of this particular social interaction.

To do so, I'll use the construct of "collision detection," which may be unfamiliar to readers who've never taken a videogame design class like the basic one I'm taking now; they can think of it as an algorithm that checks for the intersection of two solids. Contemporary versions of classic games like Pac-Man, Space Invaders, and many others are built with the understanding that the impact of one programmed object on another determines how the game’s action unfolds. (“Collision Detection” is also the title of a popular blog by Clive Thompson, which you may notice on the Virtualpolitik blogroll.)
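For readers new to the construct, a minimal sketch of such an intersection test might look like the following (circle-on-circle, with invented names; real game engines use a variety of bounding shapes and grid checks):

```python
def circles_collide(x1, y1, r1, x2, y2, r2):
    """Detect a collision between two circular objects.

    Two circles intersect when the distance between their centers is
    no greater than the sum of their radii; comparing squared values
    avoids computing a square root on every frame.
    """
    dx, dy = x2 - x1, y2 - y1
    return dx * dx + dy * dy <= (r1 + r2) ** 2


# Two unit-radius objects whose centers are 1.5 apart: collision.
print(circles_collide(0, 0, 1, 1.5, 0, 1))  # True
# The same pair three units apart: no collision.
print(circles_collide(0, 0, 1, 3, 0, 1))    # False
```

A game loop runs tests like this every frame, and the result of each check determines what happens next on screen.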

From the recipient's perspective, the collision of a typical cheek-kiss can be broken down into three basic aspects: the location of the impact, the type of object encountered, and a report of damage after the event.

Location is generally the first mistake made by a U.S. native. The ear, the neck, and the mouth are not acceptable targets. A good cheek-kisser does not get extra points for aiming at trickier locations. For something that represents boundaries and social convention, stick to the basics and avoid the arcane. A smooch near one's nostril is particularly unwelcome.

Second, many errors are made in the actual cheek-kiss itself, errors that make what could be a merely clumsy moment considerably more so. I say, if the objective is to kiss the cheek, kiss the cheek. Do not pucker so much that you seem to be avoiding the close contact that you've already initiated. On the other hand, clenched-lipped mashing in which an offending nose drills into the victim's cheekbone or eye socket is also wrong.

Finally, there is the matter of the impression that you leave behind when you withdraw. A slightly damp cheek is fine. Adults are perfectly capable of not drooling or slobbering, and an utterly dry physiological calling card is a little too reminiscent of a robotic uncanny-valley type of encounter that leaves no trace. Obviously, however, a bruise is a bad memento.
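Tongue firmly in cheek, the three aspects above can be rendered as the promised series of conditional tests; every category name and threshold here is my own invention for illustration:

```python
def evaluate_cheek_kiss(location, contact, aftermath):
    """Run the three etiquette checks as conditional tests.

    Returns a list of detected faults; an empty list means the
    collision was socially successful. All labels are invented.
    """
    faults = []
    # Test 1: location of impact -- aim for the cheek, nothing arcane.
    if location not in ("left_cheek", "right_cheek"):
        faults.append("bad location: " + location)
    # Test 2: type of object encountered -- neither air-kiss avoidance
    # nor clenched-lipped mashing.
    if contact in ("exaggerated_pucker", "clenched_lipped_mash"):
        faults.append("bad contact: " + contact)
    # Test 3: damage report -- slightly damp is fine; drool, bruises,
    # and an utterly dry robotic non-trace are not.
    if aftermath in ("slobber", "bruise", "bone_dry"):
        faults.append("bad aftermath: " + aftermath)
    return faults


print(evaluate_cheek_kiss("left_cheek", "gentle", "slightly_damp"))  # []
print(evaluate_cheek_kiss("nostril", "clenched_lipped_mash", "bruise"))
```

The joke, of course, is that social convention really does run something like this program, only with far fuzzier inputs.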

In other words, this procedural rhetoric probably deserves more attention from Americans. What should be a gesture of warmth and affection is too often treated as a perfunctory exercise of patriarchal decorum, and -- like mechanically opening a door or pulling out a chair -- there are ways that this incursion into another person's private space can be executed so that it is perceived as a sort of hostile act.

I use a trivial example to make a serious point, one that is worth keeping in mind in our era of jostling modernity, which Walter Benjamin once characterized as composed of "a series of shocks." Gestures of welcome and leave-taking should ideally ease those jolts rather than contribute to their repercussions, something to which continental sensibilities may be somehow better attuned. Moreover, a collision-detection reading of the norms of daily etiquette can be applied to many other situations as well.

I'm working on a paper now about Cruel 2B Kind, the alternate reality game created by Ian Bogost and Jane McGonigal, in which common forms of effusive politeness, such as giving compliments and blowing kisses, can actually serve as weapons in their game of "benevolent assassination." Unlike the cheek-kissing case, the rules that are in play involve interactions between strangers, but -- as I indicate in this snippet from the abstract -- the game also facilitates a kind of useful defamiliarization of all kinds of standard operating procedures at work in civil society.

Alternate reality games or ARGs are interactive narratives that use the real world as a platform for community activity to tell stories that may be affected by participants' concepts or responses. These games frequently equip players with resources from computational media, distributed networks, and mobile technologies in order to coordinate events and actors in real time.

Because these games are generally staged in the built environments of cities, ARGs frequently comment on other, less obvious forms of “mixed reality” that are present in invisible architectures of control that dictate how social roles are assigned, economic resources are allocated, physical distance between human beings is maintained, and rules governing the general strategies of cooperation and competition appropriate for urban dwellers are formalized.


So I guess I'm interested in games because, as McGonigal says, "reality is broken." Too often I feel that my colleagues in literature departments see games merely as quaint or cool objects of academic study that can be divorced from their own behavior and the social conventions that they accept. But, as I previously argued in "On Procedurality," maybe the lesson of game studies is that we should all be analyzing our neighborhoods and personal interactions instead.

Update: Bogost points out that The Wall Street Journal ran a related article on this subject, "Americans Learn the Global Art Of the Cheek Kiss."

Labels: , , ,

Monday, May 26, 2008

Seven Minutes of Terror

The "Seven Minutes of Terror" that were part of a niche audience media event yesterday describe the time that it took for the Phoenix lander to travel from entry into the Martian atmosphere to touchdown on the planet's surface for its mission to study the presence of water there.

Since I write about the digital rhetoric of JPL in the book, which in past decades has included the skillful use of computer animation and successful early webcasting initiatives for persuasive purposes, I watched the climactic events in the mission control room carefully. For purposes of comparison, I viewed the "live" happenings simultaneously on the Discovery Channel on television and on NASA TV on the web. Although it was less chatty, I thought the web-based government in-house coverage was much better than the network product, since you could actually see the screens of the team leaders and get a sense of their engagement with their labor practices. Besides favoring almost any shot other than that point of view, the broadcast version tried to inject false drama by cutting away to talk about past failures and by hyping the critical "seven minutes" that took place before the successful conclusion. Unfortunately, unlike a live tour of the facility under normal circumstances, neither presentation did much to interpret the changing contents of the screens or explicate the distant data that was being represented in graphs and strings of numbers. (In contrast, see below for an image that I shot at JPL last year, where helpful attendants explained what all the colors we were seeing meant.)

Labels: , , ,

YouTube Literacy Test



There have been rock videos from big name bands before that use YouTube celebrities to complement the words of the song, but this is a particularly clever treatment of the genre that includes Miss South Carolina, the "Chocolate Rain" guy, the Numa Numa dance guy, the "Evolution of Dance" guy, and many others.

And besides, as someone who lived in Los Angeles all throughout her twenties during the launch of what was once a local band, I have great affection for that particular Weezer brand of disaffection.

However, some might argue that this appropriation of the appropriators is inappropriate, given that the band's handlers in the past harassed at least one creator of a Weezer fan vid.

Labels: , , ,

Sunday, May 25, 2008

The Future of Writing

As I've argued before, attention to writing in the academy needs to be taken even more seriously in the age of digital media and distributed networks, since poorly thought-out multimodal compositions have the potential to travel far outside the writing classroom. This is also an exciting time for publishing, in which scholarly books and journal articles are benefiting from new forms of peer review and access by the general public. I'm planning to take part in this University of California conference being organized by Jonathan Alexander, and I hope that other colleagues who are readers of this blog will submit an abstract to him in the coming week.

Deadline: Sunday, June 1st, 2008

Call for Papers and Call for Digital Artwork

CFP: “The Future of Writing,” University of California, Irvine
November 6-7, 2008

Networked communications technologies have become a significant part of American life, resulting in a nearly unprecedented generation of a variety of multimediated texts, many graphically rich and collaboratively written. The Pew Internet and American Life Project reports that “Internet penetration has now reached 73% for all American adults. Internet users note big improvements in their ability to shop and the way they pursue hobbies and personal interests online.” The emergence and growing use of social networking sites have contributed to a significant rise in the production of individual and group Websites through which people and communities construct, debate, and disseminate online identities, personal ideas, and group values. Again, Pew reports that “Internet users ages 12 to 28 years old have embraced the online applications that enable communicative, creative, and social uses.”

“The Future of Writing” is a mini-conference (November 6-7, 2008) designed to bring together scholars across the UC system and a cadre of nationally recognized experts to explore how the new communications technologies, particularly the Internet, are challenging previous conceptions of what “writing” is. Through a range of panels, demonstrations, and an art exhibit, participants will consider the following: How are new communications technologies changing the way people “compose,” “write,” and “author”? How do collaborative writing spaces and social networking challenge the concepts of “text” and “author”? How are emerging emphases on visual literacies shifting what we think of as writing? And, finally, how do such changes and shifts challenge us as instructors to reconsider and potentially re-conceive educational spaces?

We invite proposals for panels (70 mins) and individual presentations (15 mins) that engage the conference themes and that address -- theoretically, pedagogically, or both -- what the “future of writing” might (or could, or should) be.

We also invite proposals for digital art work that addresses the themes of the conference. Please submit a URL (linking to photos of work you wish to present) with an accompanying abstract describing how your piece speaks to the “future of writing.”

Please limit your proposal abstract to 300-500 words and submit it via email, by June 1, to Dr. Jonathan Alexander, UC Irvine: jfalexan

There will be no conference registration fees. Participants from out of town will be expected to secure their own lodging.

This conference is sponsored by UC, Irvine’s HumaniTech and the Office of the Campus Writing Coordinator.

For more information, contact Dr. Jonathan Alexander at jfalexan.

Labels: ,

Booby Prize

Back home in Santa Monica and wearing my Computers and Writing t-shirt while unpacking my C&W tote and water bottle, I am reminded of how this conference now emulates the swag-bag activities associated with other technology-related events. Although, given the unisex offerings, it's fair to say that it is still true that "Tech t-shirts aren't sexy enough," and women who pay the same fees are given less usable garb than men. Kudos, however, to CalIT2, which actually produces a women's version of its shirt, one that I suspect will be out of my closet more often than the shirt I have on at the moment.

I also noticed how networking between bloggers could have been much better orchestrated. For purposes of reporting on the conference, there should have been a kind of "news pool" to divvy up the writing labor. I would have been glad to have an assignment, since I like collaborative blogging. As it is, I met bloggers like Daniel Anderson by accident in the hallways or at the bar. (Check out Anderson's Vimeo version of his Sophie presentation on "Transforming the Teaching of Literature" here.)

Labels: , ,

Shuttered Shutterbugs

Driving through Atlanta after I left the conference, I was struck by this combination defunct photography establishment and derelict crack house. As digital photography places activities tied to the production of images in users' homes or remote locations accessible by the web, sites in urban and suburban neighborhoods that once handled processing are falling into disrepair or oblivion.

Labels: ,

Saturday, May 24, 2008

Portfolio's Complaint

Kathleen Yancey gave the second keynote at the Computers and Writing Conference about a topic that I've been thinking about seriously during the past month, as my university contemplates designing and adopting a new "e-portfolio system." On my campus, I've been arguing that there are a lot of tricky issues with portfolios that involve negotiating public and private audiences and complementing the social networking software that already shapes much of our students' informal -- and yet intensely engrossing and meaningful -- writing. Furthermore, if the system takes off, many students might want access to these materials after graduation, so universities may need to think about "life-long learning" and access to university online communities and computer networks for many years to come.

In "Inventing the Self, Co-Inventing the University: Electronic Portfolios, New Composings, and the 21st Century University," Yancey took the famed trope of David Bartholomae about "inventing the university" and combined it with interesting case studies about the self-fashioning taking place in successful programs that use electronic portfolios, such as LaGuardia College, Louisiana State, and Wolverhampton in the U.K. Yancey argued that "stickiness" could be an important factor, particularly since engaging students in reflection and reiteration seems to be linked to positive numerical indicators such as higher scores on writing examinations and better rates of retention and completion. In trying to schematize a general pattern of the textual process and product with traditional elements, such as "deliver," "arrange," and "invent," Yancey cited N. Katherine Hayles in arguing for a transformative type of composition.

Yancey asserted that such portfolios should be "a place to do work" and not an "archive or showcase." To illustrate her point, she noted that the medical establishment has long understood that portfolios are a critical part of professional development activities. According to Inside Higher Ed, even the GRE will be adding what they call "non-cognitive qualities," such as "knowledge and creativity, communication skills, team work, resilience, planning and organization, and ethics and integrity." And Oregon State has embraced the online Insight Resume as a predictive tool for graduating high school seniors.

However, Yancey also expressed concerns that too often writing portfolios are linked to the wrong kinds of assessment activities, particularly in the current political climate in which a shrinking pool of FIPSE money is tied to overgrown versions of the flawed "No Child Left Behind" policy of the Bush Administration. Yet Yancey voiced her own high hopes for VALUE: Valid Assessment of Learning in Undergraduate Education, but cautioned against "online assessment systems that pass as e-portfolios," particularly if they lack opportunities for students to remix content and experiment with cultural memes. (Of course, regular readers may know that I'm no fan of Margaret Spellings either, so I was certainly sympathetic to this political portion of her talk.)

As she concluded, she indicated anxieties about how software choices may also -- perhaps unintentionally -- shape the results of students' communicative efforts. For example, certain kinds of proprietary software packages give users little control over the visual, which Yancey said was "important for personal presentation." She also warned that some software solutions facilitate a troubling preoccupation with data mining rather than education.

Labels: , ,

By the Book

Gail Hawisher and Cynthia Selfe are well-known as pioneers and frequent collaborators in the Computers and Writing community and have now launched a new initiative, The Computers and Composition Digital Press, to which a panel at the conference was devoted. To dramatize some of the issues involved in the project, the team has created a number of Mac vs. PC parodies, "Print and Digital," including "lost under the bed," "dvd drama," and "the more things change." Another short film shown at the panel depicted Hawisher and Selfe in dialogue, while fencers were engaged in swordplay in the background. They explained how the fencing metaphor could be seen as relevant to their endeavor because they were soliciting material that was "nimble," "agile," and "pointed." Since I blog over at Sivacracy, which also represents a partnership with the Institute for the Future of the Book, it was interesting to hear that the institute was involved, given their good track record at getting buy-in from academic publishers who are important as providers of the additional imprimatur that can be needed in the university's reputation economy.

There were a number of other interesting speakers in the line-up. Heidi McKee discussed the importance of facilitating the most international possible reach in the editorial and distribution policies of online publication efforts in order to critique the "geopolitics of academic writing." I liked the fact that she cited one of my favorite compositionists, Suresh Canagarajah, on the dynamic between center and periphery in order to question the excessively US-centric contents of Kairos, Computers and Composition Online, The WAC Clearinghouse, and The Alliance of Digital Humanities. Dickie Selfe distributed a list of questions for "author/creatives" and for "editors" that got at many of the critical issues about life expectancy and sustainability that are too often ignored by web publishers. Patrick Berry discussed the "stance that they adopt as learners" taken by those working with software and the importance of adopting an attitude of humility, particularly when wrestling with Drupal, a type of abasement with which I have sympathized here. Finally, Melanie Yergeau presented on the challenge of balancing "accessibility" with "usability" and shared an interesting statistic that 98.8% of computers are currently equipped with Flash (as opposed to 55.6% for Shockwave).

Labels: , ,

The Sophist in Azeroth

At the Computers and Writing Conference, Douglas Eyman won the outstanding dissertation prize, which is no wonder, since the recent PhD from Michigan State and assistant professor at George Mason University has been known in digital rhetoric circles for many years as the editor of the peer-reviewed online journal Kairos. Eyman's talk, "Gaming and Writing: An Ecological Framework," detailed the five major aspects of what he called game ecologies: environmental action, para-textual development, documentation, infrastructural processing, and research. With a series of case studies derived from the MMO World of Warcraft, Eyman listed a range of relevant areas of interest for compositionists: writing about games, writing around games, writing inside games, and writing games themselves. As his in-world character, "Sophist," Eyman has been considering what Annette Vee has called "proceduracy" by analyzing the game's requirements for persuasive appeals, in-game documents, text-based communication, and the interface itself as an example of multimodal discourse. He noted that a study of composition readers published between 2003 and 2006 included no references to games, despite covering other topics related to artifacts from popular culture and entertainment such as advertising and films. Too often, Eyman complained, writing specialists only treated "writing on games" and did so superficially, by focusing exclusively on trite topics such as videogames and violence. Although education and literacy specialists were examining game-based fan fiction, websites, short story competitions, and online discussion, faculty in rhetoric and composition rarely considered the conjunctions of rhetoric and literacy, as in the case of web pages designed for recruiting new members to a guild.

Eyman is now launching "Digital Games/Digital Rhetoric: A Consortium of Scholars in Games and Writing Studies" and encourages researchers from both groups to contact him to spread the word. The rest of the profession may be catching up with him. At Michigan State, they are developing Ink as a "free online multiplayer game for writing & community." (Your first task as a player apparently involves writing a press release.) Meanwhile, the textbook behemoth Bedford-St. Martin's is launching Peer Factor, an online peer-review game that claims that it "provides immediate and tangible feedback that is pedagogically sound but also fun and engaging." To its credit, according to Eyman, the latter acknowledges that "learning to game the system" is an integral part of game play, so that -- as Mia Consalvo has argued -- cheating is recognized as a form of literacy.

Labels: , , ,

Game Day

Although a lot of work has been done in connecting games to relevant issues in the writing classroom about literacy and rhetorical competence, there were still only a few panels at the Computers and Writing conference that specifically addressed this subject, but I was pleased to see that they were generally well-attended and favored with lively question-and-answer sessions. The panel on which my own paper appeared, "Structuring Play, Playing with Structure: Working (with) Videogames," also featured Matt Barton and World Building: Space and Community veteran Lee Sherlock. Barton talked about the relationship between "roll play" and "role play" in games that generated random behaviors and the value of complicated statistical systems that modeled desirable traits for students to emulate, such as "abstraction," "experimentation," "collaboration," and "system thinking." Sherlock argued that compositionists could use serious games as a model to encourage rhetorico-dialectical inquiry. He had students do writing that produced a specific genre in the game field, the design doc, and compared the objectives served by his assignment to those elaborated in the WPA Outcomes Statement for First-Year Composition, which has been very influential in the field of writing instruction and curricular planning. My own talk, "The Fourth Wall: Can Open Source Do Virtual Reality?" (slides here) was in some ways the most pessimistic, because I argued that -- although 3D models, animations, simulations, and games had significant rhetorical force as means of persuasion in the public sphere in many venues (digital film effects, courtroom evidence, reenactments in the news, debates about urban development, advertising and corporate promotion, satire, political rhetoric, and environment-creation for spectacle, deliberation, and pedagogy) -- it was difficult to give students any training in these sophisticated professional software packages, given the time constraints of the writing classroom.

Labels: , ,

Friday, May 23, 2008

Begging Bowl

For faculty members working with electronic media, one of the central questions has become how their work will be evaluated in comparison to traditional print sources produced by their peers and whether e-scholarship will be weighted appropriately, given the labor-intensive character and potentially large audiences of digital texts.

Unfortunately, not much seems to have improved since "Tenure and Promotion Cases for Composition Faculty Who Work with Technology," the classic study that invented "five fictional tenure and promotion cases of composition faculty who work with computer technology — addressing their contributions in the area of teaching, scholarship, and service" and showed them to "real department chairs, deans, and personnel committee chairs," who were "writing anonymously and frankly about how the case would be evaluated at their institutions." The study found that even widely-read online journals that have made a point of rigorous peer-review were discounted, sometimes arbitrarily, in contentious committee meetings, and that such publications were considered by many in the academy as "essentially no scholarship or at best scholarship of a spurious kind." (I've published in some of the journals used in the fictional c.v.'s, so reading this study several years ago helped me understand that it was the presence of page numbers, not peer review, that might matter most to some parties.)

A special panel at the Computers and Writing conference about "New Media Scholarship Stakeholders: Departmental, Editorial, and Authorial Issues" tried to address these concerns with updated reflections on the current state of affairs. Catherine C. Braun of Ohio State opened the panel with a study of her own that had asked faculty members who were already tenured how they would evaluate digital scholarship that was already published. Braun inquired specifically about how faculty members would apply specific criteria from their own institutions to these electronic publications in order to encourage more discussion about "shared values." Although she said that there was respect for certain kinds of high-profile digital bibliographic types of scholarship, which cynics might note may also be grant-worthy as text-encoding initiatives, she found that more essayistic digital compositions were often judged very harshly.

Braun focused on one faculty member who was using a specific rubric with four criteria ("originality," "lucidity," "intellectual depth," and "significant contribution to the field") and gave him three online pieces to review. This faculty member intensely disliked the first sample, "Teaching Writing in the Space of Blackboard" by Evan Davis and Sarah Hardy of Hampden-Sydney College, and characterized it as mostly how-to, not original, and shallow. He also asked a question that surprised her, because it indicated a certain kind of web-savvy that she had not anticipated hearing from such a staid colleague: "Why is this a hypertext?" Although I enjoyed reading it, I might add that since the piece was built in Microsoft's FrontPage software, it may also defeat its own purpose of getting its constituency to think critically about how proprietary code may shape pedagogy.

Despite its intentionally distracting and disconcerting multimodal appearance, which included a ticker at the bottom of the page, the same faculty member praised Braun's second example, Anthony Ellertson's "Some Notes on Simulacra Machines, Flash in FYC & Tactics of Spaces of Interruption," because he liked the way it theorized and liked the ethos of the piece, particularly when Ellertson speaks in its video clips.

Finally, Ellen Cushman's "Composing New Media: Cultivating Landscapes of the Mind" was dismissed quickly, according to Braun, because the faculty member was not willing to "play with the text" after becoming irritated with not being able to install the necessary Shockwave plugin on his office computer. Braun argued that this faculty member might have come to a different conclusion if he went through all the criteria he had listed, but I'm afraid that I also found the Shockwave piece slow to load and difficult to navigate, and I'm accustomed to reading online hypertext and finding it of value to my own thinking.

Next up, moderator and Kairos editor Cheryl Ball gave a presentation on "New Media Scholarship: Taxonomies, Heuristics & Strategies to Connect (?) Authors, Editors, Departments, & Tenure Committees." Ball is an advocate for what she calls a "digital tenure binder," and her talk moved through several examples of heuristics designed to foster understanding of what constitutes quality in digital work. Ball covered the visual rhetoric heuristic of Kristen Arola, which included terms for a rhetorical reading, such as "audience," "purpose," "context," "emphasis," "arrangement," "proximity," "organization," etc. She also reviewed Jim Kalmbach's 2006 "Types of Hypertexts in Kairos," a list that commends my own piece on the rhetoric of September 11th, which now looks like a very dated model to me by Arola's criteria. Although Steve Anderson's 2007 list from "Regeneration: Multimedia Genres and Emerging Scholarship" may emphasize the cinematic in ways difficult for scholars of written composition to emulate, Ball asserted that it still represents useful "argumentative" and "essayistic" categories of discourse relevant to writing faculty. Ball also discussed Alison Warner's "Constructing a Tool for Evaluating Scholarly Webtexts" and its influence on the "Suggested Guidelines for Online Publications" in "Best Practices for Online Journal Editors" from the Council of Editors of Learned Journals (CELJ), which accounts for different audiences and the desire for content about both institutional affiliations and expertise in web design. Finally, Ball looked back to the late Ernest L. Boyer's Scholarship Reconsidered: Priorities of the Professoriate (1990) to explain her own recently premiered "Digital Scholarship Axes," which I have reproduced below. (Thanks to "genevieve is" for the image.)


Virginia Kuhn's talk on "revising new media (or “huh, it’s finished!”)" also raised a number of salient issues about how digital works are read and valued in the academy. As Ball pointed out, Kuhn has been an author of her own heuristic in her contribution to the Kairos Manifesto Issue, "The Components of Scholarly Multimedia," which includes "conceptual core," "research component," "form//content," and "creative realization" as critical elements. In her presentation, Kuhn talked about the practical difficulties involved in creating a "Gallery" for the 2006 Conference on College Composition and Communication, which appeared in Kairos as "From Gallery to Webtext." Kuhn examined nuts-and-bolts issues that included compression, Mac/PC compatibility, and conveying an impression of uniformity with such a range of experimental texts. As her talk title indicated, this was also about the material challenges of revision when working with new media. She detailed some of the technical difficulties in bringing text, sound, image, and video together in Victor Vitanza's "Writing the Tic," Tim Richardson's "Bereshith," and Byron Hawk's "Rhetoric of Revolution" for the web. Kuhn also discussed the work of her own students at USC, including Evan Bregman, creator of "Immersive Flow: Narrative Through Interactivity."

Many of Kuhn's USC students, including Bregman, used the Sophie reader and authoring system to create large-scale multimodal works. Sophie is the free and open software that many see as a logical successor to Bob Stein's earlier TK3 Reader.

During the question-and-answer session, audience members discussed issues about access, particularly in light of prominent protests about "anti-publishing" by closed online journals, such as Nick Montfort's "Digital Media, Games, and Open Access" and danah boyd's "open-access is the future: boycott locked-down academic journals." Some in the audience claimed that writing faculty find themselves in a strange double-bind, in that open access work that is available free of charge is frequently devalued, and -- at the same time -- work creating textbooks is similarly shunned, ironically because it is seen as too personally lucrative to merit credit as scholarly research done for academic promotion.

Labels: ,

Deaf Ears and Blindfolded Eyes

Jay David Bolter gave the opening keynote at the Computers and Writing conference, where he began by recalling his earlier work in the electronic literature movement among enthusiasts for what was then the nascent medium of hypertext. Like one of the keynote speeches at the ACM Hypertext conference in 2007, however, Bolter conceded that these pioneers also made some fundamental mistakes, particularly in missing the importance of the World Wide Web.

Although hypertextual writing had at one point declared itself to be the definitive literary avant-garde, with figures like postmodern fiction-writer Robert Coover declaring “The End of Books” in The New York Times, even successes promulgated by the group -- such as Patchwork Girl -- had little impact on the literary establishment. Soon critics like Laura Miller in “www.claptrap.com” were ridiculing the group's pretensions. Although the newspaper and the encyclopedia had been transformed by the Internet, along with the production lines of books, Bolter said that the texts of the literary world had changed little in response to the advent of electronic distributed networks and remained in the realm of belles lettres and traditional forms of publication. Genres like digital poetry may have been at the forefront of the electronic literature movement, but they never got beyond a small group of practitioners.

Bolter's recollections of the "heyday of hypertext" also looked back at how space was foregrounded and questioned. As Bolter pointed out, "our culture’s notions of writing" have been not only "expanding to multimedia but also other forms of inscription," and he thought it was important to consider "new kinds of writing surfaces" and practices of "down to earth" composition that are literally grounded in physical locations, in order to get beyond the old paradigm of cyberspace publicized by William Gibson, which was divorced from the everyday physical world and focused on the graphic representation of data as "otherworld" or "nonspace of the mind." Bolter criticized how this vision of virtual reality represented a "shutting out of the world" that was literalized by goggles that blocked the viewer's eyes. Of course, in the book Virtual Realism, Michael Heim has challenged this notion of VR. Yet, according to Bolter, the idea of "post-symbolic communication" and its associated metaphors was spread throughout the popular consciousness by Jaron Lanier, among others.

Bolter argued that he and his colleagues at Georgia Tech were challenging the notion that computing and abstraction were necessarily connected and that digital identities were bodiless entities cut off from public speech and political life. Although Bolter discussed the work of Ian Bogost and Gonzalo Frasca in the context of procedural rhetoric and authorship through processes, much of the second half of his talk involved work being done by himself and his colleagues at the Augmented Environments Laboratory. Bolter explained that projects being undertaken there had been influenced by the ideas of Howard Rheingold on social media and Paul Dourish on locative technologies and also incorporated concepts from ubiquitous computing (Mark Weiser), mixed and augmented reality (Steven Feiner), wearable computing (Steven Mann), and tangible computing (Hiroshi Ishii).

Bolter's own work on what he called "task-based AR" focuses on perceptual digital media in service of 1) informal education, 2) entertainment, and 3) expression. Projects include "The Voices of Oakland," set in an Atlanta burial site, "Four Angry Men," an interactive VR drama based on the famed play and film about a group of deliberating jurors, and "Subterranean Voices," which has the voices of local poets embedded in each MARTA subway stop.

Labels: , , , ,

Copping to It

As I've written before in this blog, anonymous user-generated content is being collected and indexed about a number of figures of authority, since students can rate their professors and lawyers can rate their judges. Now comes RateMyCop.com, which allows those who have had run-ins with the law to search for an officer by name, badge or employee number, department, or state and to "review the interaction you had with an officer" after the fact. You can also give your cop of choice a variable number of appropriately six-pointed sheriff-style stars.

However, none of the cops I knew personally were in the database, so it was difficult to gauge how the system worked. I did notice that there seemed to be some hoax names and gag accounts of fanciful arrests, but there were also a number of testimonials to sensitive policing in abuse cases and responsiveness to the needs of a community, hopefully not submitted by the officers' loved ones.

(I learned about this website from the radio show Digital Village on KPFK.)

Labels: ,

Thursday, May 22, 2008

Georgia on My Mind

Unfortunately I will not be at the annual HASTAC conference this weekend -- reporting on talks and demos there -- as I am at the Computers and Writing conference at the University of Georgia. Since this blog is in the HASTAC bibliography, I feel guilty for not being there at my home U.C. Irvine campus, but I also have loyalties to the Computers and Writing community, which recognized this blog with the John Lovas Memorial Award last year.

Labels: , ,

Wednesday, May 21, 2008

Speed Dating

To my regular readers, I apologize.

I'm afraid I've written a very long post about a series of very short presentations.

In any case, I've decided that I suffer from attention surplus disorder, so that all attempts to enlighten and entertain me in faster formats are inevitably lost on me as an audience member.

In the upcoming Virtualpolitik book about digital rhetoric, in the chapter about PowerPoint, I write about the transnational "pecha kucha" speed presentation style that the lore says was developed by Astrid Klein and Mark Dytham for designers and architects in Japan, which has since become a global phenomenon. I have yet to see a real full-blown pecha kucha event, but this year's "SoftWhere" conference at the Software Studies Workshop was intended -- at least in principle -- to emulate the image-laden twenty-slides-of-twenty-seconds-each ideal. I'm also reading the new book Software Studies: A Lexicon this week, which incorporates selections from many of the presenters, so I made sure to make the trip south to attend this pre-HASTAC event at UC San Diego today.

Although a surprising amount of Warren Sack's talk was taken up with images of book covers, such as The Postmodern Condition or Machine Dreams, he did have a number of provocative observations about the difference between what he called "digital ideology" and "digital life" or the vernacular experience of daily culture. As Sack described this cultural shift, “It is not only possible but usual” for a person to be in "two places at the same time." He also noted that area codes no longer indicate "where we live" but rather "where we bought the phone." For those teaching the grand narratives of the Western canon, such as my colleagues in the Humanities Core Course, Sack argued that certain kinds of stories about epics, quests, allegories of being lost, and separation from family members may eventually become "strange to future generations" with GPS-enabled cellular telephones and other ubiquitous computing devices. Sack claimed that "the digital" is "not just a condition of scientific knowledge" but a phenomenon that changes what we know as "common sense.” Of course, given the black-box nature of many programs and the ways that popular culture often mistranslates ideas from computational sciences, there are ways that this "common sense" can also be read differently.

When Sack argued that there was a critical role to be played by software studies to get beyond what he characterized as the limitations of “the aesthetics of computer scientists," he was using a very specific disciplinary lens intentionally to further his critique of those who generally teach software in the academy. For example, his definition of "functionalism," which for Sack was a negative term, hearkened back to Adorno's criticism of early twentieth-century art that eschewed ornament and yet couldn't recognize that avoiding style was a style itself, and effaced the role that the word "functionalism" has played in cybernetics and systems theory. In our discussion afterward, Sack quite reasonably argued that these interdisciplinary discourses between anthropologists and information scientists ultimately played a relatively minor role in teaching computer science as a profession and that it was necessary for artists to assert their pedagogical role as well. In his talk he argued that software instruction was often limited to "speed," "efficiency," and "correctness" borrowed from the worlds of business and mathematics, and that even concepts like "interactivity," "user-friendliness," and "realism" were generally treated in a cursory manner.

When it came to pretty pictures, there was much more to be seen in the presentation by conference impresario Lev Manovich, who shared his cosmopolitan impressions of the "new architectural and spatial imagination." Given the current fascination among academics with DIY production, Manovich made a particularly apt point that all the "excitement about amateur content" misses the importance of the role of the "global professional culture universe" in shaping a "software society" that may be distinct from a "knowledge society." This interest in "support networks" and "contemporary lifestyles" creates a somewhat different transnational map of the culture industry from the narratives of Manuel Castells, one that is oriented around design questions dear to my heart, such as this blog posting that asks why one of my favorite things in the world (literally) can't be better designed: the overly humble hotel minibar. Manovich seemed to be almost riffing on Henry Jenkins's famously fan-oriented invitation to the reader, "Welcome to Convergence Culture," with his own explicit "Welcome to Cultural Analytics" after he sketched out seven trends. As he pointed out, statistics are becoming a part of "personal life" for many people in social media environments where data can be mined and graphed.

In the visuals for Ian Bogost's talk, there wasn't as much evidence of the rhetorical way he uses his Leica, but it was a noteworthy presentation nonetheless, which previewed his upcoming MIT Press book, co-authored with Nick Montfort, in the new "platform studies" series. As a person trained in the study of literature, Bogost is well aware of the importance of "material constraint" to "expression" and cited a range of examples that included mnemonic devices for oral-formulaic poetry, the language experiments of the Oulipo, and the chemical properties of film emulsion. Bogost also offered a helpful taxonomy for software studies, in which my own work tends to occupy the upper and -- probably fair to say -- superficial levels: 1) Reception/operation, 2) Interface, 3) Form/function, 4) Code, 5) Platform.

In the book with Montfort, Bogost looks closely at the Atari system and how cost considerations played a role in how software was written, particularly because of the role played by a piece of hardware called the Television Interface Adaptor, or TIA. Because the early Atari lacked a frame buffer, the programmer had to construct every scan line and thus create displays that depended upon rows of color. Bogost argued that this was particularly significant because it shaped the very genre of many seminal games. In the case of the adaptation of Adventure, which had been a text-based game, these constraints were particularly important in shaping the history of videogame design.
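One way to picture the constraint Bogost describes is a toy sketch in which a "frame" must be rebuilt one scan line at a time, with the line's state set before each row is drawn. This is a minimal illustration in Python, not actual Atari 2600 code; the frame dimensions, band height, and color names here are invented for the example.

```python
# Toy model of scanline-based rendering without a frame buffer.
# On the Atari 2600, the TIA held state for only a single scan line,
# so the program had to reload its color/graphics registers per row.

def render_frame(num_lines=192, band_height=32, palette=("red", "gold", "green")):
    """Build a frame as rows of color, one scan line at a time."""
    frame = []
    for line in range(num_lines):
        # The line's "register" state must be decided before it is drawn;
        # changing color only at band boundaries keeps per-line work small,
        # which is one way the hardware nudged designs toward horizontal bands.
        color = palette[(line // band_height) % len(palette)]
        frame.append(color)
    return frame

frame = render_frame()
```

Because each line's state is rebuilt on the fly, vertical variation is cheap while arbitrary per-pixel layouts are expensive, which suggests how the platform could shape the look, and even the genre, of the games written for it.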

This engagement with what has been for too long denigrated as "technological determinism" continued with the next presentation by Anne Helmond of the Institute for Networked Cultures, which sent me a wonderful surprise package of their publications a few months ago, thanks to Geert Lovink. Helmond is involved in research that looks at how blogging engines may be shaping the very rhetorical practices of blogging itself, particularly when it comes to the topographies of what has been called "the commentosphere." On the practical level of pedagogy and publication, Cynthia Nie also discussed this issue at a meeting of the Digital Educators Consortium in December of last year.

DEC colleague Mark Marino followed up with a presentation about work being done on the collaborative blog Critical Code Studies. However, Marino was also willing to undermine the gravitas that otherwise might be claimed by his new field by joking about the pretensions of yet another field in the Critical _____ Studies mold, which The Valve has called the "Critical X Studies" paradigm. Certainly, as someone affiliated with the Critical Information Studies movement with Siva Vaidhyanathan, I find myself interrogating this model as well. At the end of the Virtualpolitik book, I look at how postwar "Information Science" became "Information Studies" and the resultant loss in the field's interdisciplinary ambitions and possibilities for collaboration, so that -- although this move away from the absolutist ideology of science may have been salutary -- it also came with certain costs in that it encouraged a centrifugal tendency toward intellectual fragmentation. Marino also reminded audience members of how Espen Aarseth pushed against the colonization of game studies by critical theory. Although Marino felt that including "rhetoric, economics, and politics" is important, he was willing to acknowledge his own anxiety that maybe it is all "just a metaphor." As Marino said, “Why don’t I just use cooking?” as a semiotically rich code system through which to understand the world.

In "#include Genre," Jeremy Douglass, Marino's frequent collaborator at the blog WRT: Writer Response Theory, talked about the role that "quotation, intertextuality, and transclusion" play in understanding how particular genres of digital media evolve. As models, Douglass looked at Raph Koster's work on the development of the 3D shooter and Jesper Juul's research on matching tile games.

Since media theorist Benjamin H. Bratton recently served as editor of a new edition of Paul Virilio's Speed and Politics, it was perhaps somewhat ironic that the accelerated pecha kucha format seemed to undermine the impact of his idea-rich talk, which was often illustrated with stock figures whose eyes were covered with anonymizing rectangles. On the Tarde/Durkheim balance, Bratton announced himself as being in favor of "more Tarde" and "less Durkheim" and thus more interested in emergent networks than in the nation-state. (He also cited the work of Bruno Latour.) As examples he pointed to the conditions of late modernity in which "maps are instrumental mechanisms for the chain of representation."

Yet it could be argued that by declaring that "All Design is Interface Design," Bratton simultaneously privileged the "point of contact that governs the conditions of exchange" and nullified the prospects of treating it as a discrete object of study in relation to the underlying source code. Instead his work focuses on different conjugations of assemblages and the "distribution of the sensible and insensible" across many types of interface. In other words, instead of turtles all the way down, one could say that perhaps for Bratton it seemed to be a matter of interfaces all the way down. Instead of a hierarchy like the Bogost/Montfort ordering, Bratton provided a series of interpretive approaches to interfaces that included those for "groups of people" and "groups of groups of people" that indicated a much more sophisticated cultural analytics approach than the available time allowed for explication.

As a rhetorician interested in how government institutions simultaneously serve as regulators and media-makers, I had obvious reasons to take notes on the talk by Kelly Gates about how the Ocean Systems line of "government solutions" products for the Avid video editing system -- specifically the dTective software package -- was used to transform surveillance video into evidence admissible in a court of law through nonlinear editing techniques. She argued that "postproduction enables an emerging class of law enforcement specialists" who share media in collaborative work environments. For Gates, this involves optimizing both "visual opacity" and "visual acuity," although she also claimed that these Hollywood-style editing techniques were not necessarily "disruptive to the relationship between image and reality."

Many years ago, while still a graduate student, I was a research assistant for a Joyce scholar, so I was utterly charmed by the fact that Nick Montfort abandoned the pecha kucha format entirely in favor of running a Python computer program, which can be downloaded here, that emitted little but variations of Molly Bloom-style "yeses" for his appointed time. For electronic authors, Montfort argued that the challenge was not writing a sentence or a series of sentences but rather composing a "distribution."
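Montfort's actual program is linked above; as a purely hypothetical sketch of what composing a "distribution" rather than a sentence might look like, a few lines of Python can sample weighted variants of Molly Bloom-style affirmations. The phrase list and weights below are my own invention, not Montfort's.

```python
import random

# Hypothetical sketch of a generative "yes" program: the author
# composes a distribution of variants, and the machine emits samples.

VARIANTS = ["yes", "Yes", "yes yes", "and yes", "I said yes", "yes I will Yes"]
WEIGHTS = [6, 3, 2, 2, 1, 1]  # invented weights favoring the plain "yes"

def soliloquy(n, seed=None):
    """Emit n weighted samples from the distribution, joined as a stream."""
    rng = random.Random(seed)
    return " ".join(rng.choices(VARIANTS, weights=WEIGHTS, k=n))

print(soliloquy(8, seed=1))
```

Seeding the generator makes any given run reproducible, but the author's real compositional act lies in shaping the variant list and its weights rather than in fixing any one output.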

When it came to snappy academic one-liners, Peter Lunenfeld's talk on "Counterprogramming" had perhaps too many to write down in six and a half minutes. He also was unafraid to pay homage to the visual power of kitsch in his slides, which included a Bruce Lee statue erected in honor of "universal peace" that was unveiled in Bosnia. Yet bon mots aside, Lunenfeld was certainly willing to give serious credit to three previous conferences as inspiration: Street Talk: An Urban Computing Happening in 2004, the UCLA workshop on Design for Forgetting and Exclusion in 2007, and a 2008 conference on The Workaround as a Social Relation.

He also previewed his forthcoming book from MIT Press, The Secret War Between Downloading & Uploading, which looks at a long history of the computer as a culture machine. He divides this chronology into a pageant of six sections: "The Patriarchs" (Vannevar Bush and J.C.R. Licklider), "The Plutocrats" (Thomas Watson Sr. and Thomas Watson Jr.), "The Aquarians" (Alan Kay and Douglas Engelbart), "The Hustlers" (Bill Gates and Steve Jobs), "The Hosts" (Linus Torvalds and Tim Berners-Lee), and "The Searchers" (Larry Page and Sergey Brin).

As I argue in the following passage of the Virtualpolitik book, there is another sense in which Bush and Licklider can be seen as "patriarchs."

It is noteworthy that – even in Bush’s wildest imagination – the Memex owner does not know how to type, since the man inserts “longhand analysis” into the burgeoning hypertextual document that Bush describes. After all, typing was considered a skill consigned to women in the twentieth-century workplace. Even fifteen years later, J.C.R. Licklider was still assuming that hand-drawn symbols and speech recognition technologies would be necessary to achieve what he called “man-computer symbiosis,” since “one can hardly take a military commander or a corporation president away from his work to teach him to type."

Because Bush discusses how cultural prejudices and institutional forms of blindness can stymie technological innovation, it is particularly ironic that he cannot see the consequences of his own gender ideologies and what may well be an unconscious set of beliefs that he holds about the femininity of certain labor practices. Although Bush’s interest in perfecting voice input devices and writing tablets may seem prescient in the current age of ubiquitous computing and intuitive interface design, at the time it meant that many of the inventions that he imagined would be unable to get off the drawing board for decades, so that Bush was essentially arguing that funding and effort be directed to impractical pie-in-the-sky technologies.


Although Lunenfeld well understood the appeal of the digital humanities movement that emulates "big science" by producing "big humanities," he argued that there was "still a lot that can be done on the small." In particular he cautioned against a "rush to replace psychoanalysis with cognitive science" exemplified in humanists' current fascination with brain mapping, the most dubious of which he singled out for criticism: the "godscan." For Lunenfeld, there are also other political and cultural stakes to be attuned to outside the academy, particularly when "the market is out there and pushing machines" to become a mobile mix of shopping mall and television screen.

Unfortunately, by the time we got to the double-serving presentation by Casey Reas and Ben Fry, I could do little more than promise myself that I would order their book Processing from MIT Press and check out their website for open source creative tools at Processing.org, which explains the goal of their software as follows:

Processing is an open source programming language and environment for people who want to program images, animation, and interactions. It is used by students, artists, designers, researchers, and hobbyists for learning, prototyping, and production. It is created to teach fundamentals of computer programming within a visual context and to serve as a software sketchbook and professional production tool.


Given my work on digital libraries, I was also sorry that Matthew Kirschenbaum only had six and a half minutes to talk about what he jokingly called "Critical Storage Studies," which involved scholarship about "storage, inscription, forensics, and materiality" and allows for the fact that software exists as physically inscribed objects, which can even be seen as palimpsests. Kirschenbaum has a major role in the big-budget Preserving Virtual Worlds Project with Illinois, Maryland, Stanford, and RIT.

Michael Mateas's talk about "Authoring and Expression" touched on some of the connections between rhetoric and software studies that he was just beginning to work with when I met him at the Digital Arts and Culture conference in 2005. By suggesting that "an architecture is a machine to think with," Mateas examines "authorial and interpretive affordances" and takes the traditional triad of author-text-audience in classic rhetoric into the realm of artist–system–audience. Furthermore, as a creator keenly aware of the role of ideology, Mateas asks: if “the space of the sayable” is constrained, "what does it mean to consciously design this?" He followed up this idea by pointing out that "establishing a sign system" is a privileging activity, although audiences can still play an active role. Although there is a risk of solipsism in asserting that it is "narrating to ourselves that defines progress and representation," Mateas is not necessarily a radical relativist. For him, "computation is always double" and involves the "relation between code machine and rhetorical machine" in which there is an active circulation of signs. In what could be read as a gentle jab at the serious games with which his work is often associated, Mateas noted that there are problems with claiming to know about "learning" and "planning" in AI architecture, since this certainty kills the very circulation of signs. As his final examples of the importance of thinking about "craft practices" and "representational practices," he pointed to "weird languages" that include esoteric programming languages such as Chef and Shakespeare.

As the day concluded, Noah Wardrip-Fruin, who is experimenting with blog-based peer review for his book Expressive Processing, had the final words. Since government and institutional rhetoric is my specialty, I'm not sure that I entirely agreed with his reading that asserted the superiority of The Restaurant Game to the ACM's letter of protest about the "Total Information Awareness Program," given the relative sizes of their constituencies, but Wardrip-Fruin's willingness to test out ideas in open forums certainly shows his sensitivity to public sphere issues. He also earned points for bravery by using automated timing for his slides so that the session ended with a strict-construction pecha kucha presentation.

Update: In addition to some interesting conversations about the conference that are linked to this posting on Facebook, there have been some noteworthy reactions in the blogosphere. Benjamin Bratton's reflections about the bifurcations of the project, along with his own definition of software, are here. Anne Helmond had a more concise review of the day's proceedings for the Institute of Network Cultures here.

Labels: , , , , , ,