Reality Engines
Yesterday's "Event Webs" conference, orchestrated by Ramesh Jain, promised to explore how the notion of "events" shapes "human understanding of history, science, culture, and even personal experience" but poses particular kinds of representational challenges that are also relevant to Jain's areas of research in computer vision, multimedia information systems, and experiential environments. If you've written code, the language about "objects" and "events" that dominated the conference would probably have been quite familiar, although it was often used in ways that challenged the analogy between human cognition and the computer.
In his talk about the neurological properties of "objects and events," Donald Hoffman tried to debunk what he called "the camera theory of vision." Instead, he argued that vision operated as what he characterized as a "reality engine," powered by a massive array of as many as one hundred trillion parallel processors in the human brain. Hoffman showed a number of optical illusions and examples of change blindness to demonstrate that the recognition of objects and the recognition of events actually use different pathways in the cerebrum. He argued that the idea that our perceptions give us an accurate view of the "truth" or "reality" of the world is a mistake, even on evolutionary grounds.
Using another computational metaphor from a field in which you can never have "good, fast, and cheap," he argued that "the truth is expensive" and that, to get an adequate representation of the relevant information in their environments, animals developed what he called "species-specific hacks" involving space, time, objects, colors, and events -- none of which are aspects of reality. Hoffman described how he had been using computer models to explore "evolutionary game theory," matching up truth-seeing animals against "quick hack animals" -- perhaps a highbrow version of the popular kitten war meme. In Hoffman's match-ups, the truth-seeing animals always lost.
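Hoffman did not share his actual models, but the logic of the match-up can be sketched in a few lines. The toy below is purely illustrative: it assumes a fitness payoff that is non-monotonic in resource quantity (too little starves, too much poisons), so an animal that perceives the "truth" about quantities can be beaten by one using a cheap hack tuned directly to fitness. The function names and the specific payoff curve are my own inventions, not Hoffman's.

```python
import random

def fitness(x):
    # Hypothetical payoff, peaked at an intermediate resource
    # quantity -- deliberately NOT monotonic in the "true" amount.
    return x * (1.0 - x)

def truth_strategy(a, b):
    # Sees the true quantities and prefers "more resource."
    return a if a > b else b

def hack_strategy(a, b):
    # Sees only a fitness-tuned signal and prefers higher payoff.
    return a if fitness(a) > fitness(b) else b

def run(trials=100_000, seed=0):
    # Present both strategies with the same random pairs of
    # resources and compare their average realized fitness.
    rng = random.Random(seed)
    truth_score = hack_score = 0.0
    for _ in range(trials):
        a, b = rng.random(), rng.random()
        truth_score += fitness(truth_strategy(a, b))
        hack_score += fitness(hack_strategy(a, b))
    return truth_score / trials, hack_score / trials
```

Under these assumptions the hack strategy never does worse than the truth-seeing one and usually does strictly better, echoing the outcome Hoffman reported for his match-ups.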
Hoffman also emphasized the promise and limitations of what he called a "User Interface Theory" of perception. He compared our mechanisms for understanding reality to a Windows desktop in which "an interface is there to hide the truth" because users don't want to see the intricacies of how the software actually works. As Hoffman put it, "It is useful, because it is not true." This didn't mean that people should disregard their perceptions: one should not step in front of a speeding train any more than one should drag an icon for a valuable electronic file containing one's thesis into the waste basket on one's desktop. This distinction between "taking something seriously" and "taking it literally" was also picked up in the subsequent talk by Pulitzer Prize-winning scholar of religion Jack Miles. In the question-and-answer session, participants tried to relate these "cognitive layers" to the advent of language and particular technologies in human history.
Given the discussion about interfaces, it was appropriate to see Lev Manovich, author of the classic work on the subject, The Language of New Media, in the audience; he brought up the problem of how to represent history not only as breaks and events but also as continuous changes or "continuous functions." During lunch, I got to see a preview of Manovich's talk on "cultural analytics: tracking and visualizing global cultural patterns," which tries to suggest some answers to his central question: how can we track "global digital culture" (or cultures), with its billions of cultural objects and hundreds of millions of contributors? He showed different kinds of gorgeous webs, maps, and layered representations that would allow the user to drill down to see specific artifacts. Sample objects of study from different kinds of transnational communities of influence could include scientific papers, design portfolios, and nightclub architecture. Cynic that I am, I argued that it might also be important to represent reaction and disagreement as well as the more easy-to-represent patterns of influence. His blog, databeautiful, is also worth checking out for anyone interested in thinking about how data representation and data mining could be used outside of computer science.
Photograph courtesy of Peter Krapp.
Labels: conferences, database aesthetics, information aesthetics, interdisciplinarity