
Wednesday, June 25, 2014

Looking #Throughglass, Part 3 of 3: Risk, Doubt, and Technic Fields

In my last post, I discussed the expectations that Google Glass creates in relation to the internet of things. In this final section, things will take a slightly more philosophical turn by way of Glass's paradoxical weakness.

Connection. Integration. Control. They are related but they are not the same. One of the pitfalls of a posthuman ontology is that the three are often confused with each other, or we believe that if we have one, we automatically have one or both of the others. A connection to any kind of system (whether technological, social, emotional, etc. or any combination thereof) does not necessarily mean one is integrated with it, and neither connection nor integration will automatically instill a sense of control. In fact, a sense of integration can have quite the opposite effect, as some begin to feel compelled to check their email, or respond to every signal from their phone or tablet. Integrating a smart home or child tracker into that system can, at times, exacerbate that very feeling. Explicating the finer differences among connection, integration, and control will be the subject of another entry/series. For now, however, we can leave it at this: part of the posthuman experience is to have an expectation of a technological presence of some kind.

The roots of the word “expect” lie in the Latin expectare, from ex- “thoroughly” + spectare “to look” (etymonline.com). So, any time we are “looking for” a technological system of any kind -- whether because we want to find a WiFi network (vending machine, ATM, etc.) or because we don't want to find any obvious sign of a technological device or system (save for the most rudimentary and simple necessities) -- we are, generally, in a state of looking for or anticipating some kind of technological presence. 

Wide-scale adoption of certain technologies and their systems of use is a very important aspect of making a specific technology ubiquitous. Think about email. For each of us, when did email and the internet become an important -- if not the main -- means of retrieving and storing information, communication, and entertainment? How much of the adoption of that technology came about by what seemed to be an active grasping of it, and how much by something foisted upon us in a less voluntary way? The more ubiquitous the technology feels, the more we actively -- yet unconsciously -- engage with it.

And in the present day, we expect much, much more from the internet than we did before. Even in other technological systems: what do we expect to see on our cars? What will we expect to see in 10 years’ time? 

In this context, the successful technology or technological system is one that creates expectations of its future iterations. Much like the film Inception, all a company needs to do is plant the idea of a technology in the collective consciousness of a culture. But that idea needs to be realistic enough to occupy that very narrow band between the present and the distant future, making the expectation reasonable. For example, cost-effective flying cars may be feasible in the near future in and of themselves, but we also know that wide-scale adoption of them would be contingent upon a major -- and unrealistic -- shift in the transportation infrastructure: too many other things would have to change before the technology in question could become widespread. 

In this case, Glass -- subtly, for now -- points to a future in which the technological presences around us are evoked at will. Most importantly, that presence (in the internet of things) is just "present enough" now to make the gap between present and future small enough to conceptually overcome. It is a future that promises connection, integration, and control harmoniously fused, instantiated by an interface that is ubiquitous yet non-intrusive. 

In the present, in terms of everyday use, this is where Glass falls short for me. It is intrusive. Aesthetically, they've done all they can given the size limitations of the technology, but its user interface is not fluid. I think its reliance on voice commands is at fault. Although the voice recognition present in Glass is impressive, there are sometimes annoying errors. But errors aside, using voice as the main user control system for Glass is a miss. Voice interaction with a smartphone, tablet, or computer can be quite convenient at times, but -- especially with smartphones -- it is infrequently used as the primary interface. No matter how accurate the voice recognition is, it will always lack what a touch-interface has: intimacy.

Now this may seem counterintuitive. Really, wouldn't it be more intimate if we could speak to our machines naturally? In some ways, yes, if we could speak to them naturally. Spike Jonze’s Her presents an incredible commentary on the kind of intimacy we might crave from our machines (yet another entry to be written ... so many topics, so little time!).  But the reality of the situation, in the present, is that we do not have that kind of technology readily available. And voice interfaces -- no matter how much we train ourselves to use them or alter our speech patterns so that we’re more easily understood -- will always already lack intimacy for two main reasons. 

First, voice commands are public: they must be spoken aloud. If there is no one else in the room, the act of speaking aloud is still, on some level, public. It is an expression that puts thoughts “out there.” It is immediate, ephemeral, and cannot be taken back. Even when we talk to ourselves, in complete privacy, we become our own audience. And sometimes hearing ourselves say something out loud can have a profound effect. A technological artifact with a voice interface becomes a “real” audience in that it is an “other” to whom our words are directed. Furthermore, this technological other has the capacity to act upon the words we say. These are, after all, voice commands. A command implies that the other to whom the command is directed will enact the will of the speaker. Thus, when we speak to a device, we speak to it with the intent that it carry out the command we have given it. But, in giving commands, there is always a risk that the command will not be carried out, either because the other did not hear or understand it, or -- as could be a risk in future AI systems -- because it does not want to carry it out. Of course, any technological device comes with a risk that it won't perform in the ways we want it to. But it’s the public nature of the voice command that makes that type of interface stand out and augments its failure. I propose that, even subconsciously, there is a kind of performance anxiety that occurs in any voice interface. With each utterance, there is a doubt that we will be understood, just as there is always an underlying doubt when we speak to another person. However, with another person, we can more naturally ask for clarification, and/or read facial expressions and nonverbal cues in order to clarify our intentions. 

The doubt that occurs with voice commands is only exacerbated by the second reason why voice interfaces lack intimacy, one more rooted in the current state of voice recognition systems: the very definite lag between the spoken command and when the command is carried out. The more “naturally” we speak, the longer the lag as the software works to make sense of the string of words we have uttered. The longer the lag, the greater the doubt. There is an unease that what we have just said will not be translated correctly by the artifact. Add to this the aforementioned performance anxiety, and we have the ingredients for that hard-to-describe, disconcerting feeling one often gets when speaking to a machine. I have no doubt that this lag will one day be closed. But until then, voice commands are too riddled with doubt to be effective. And, all philosophical and psychological over-analysis aside, these lags get in the way. They are annoying. Even when the gaps are closed, I doubt this will ameliorate the more deeply rooted doubt that occurs when commands are spoken aloud, publicly. 

For now, the real intimacy of interface between human and machine comes in the tactile. Indeed, the visual is the primary interface and the one which transmits the most information. However, on the human side, the tactile = intimacy. Thus, when trying to navigate through menus on Glass, the swipe of a finger against the control pad feels much more reliable than having to speak commands verbally. Having no middle ground in which to quickly key in information is a hindrance. If we think about the texts we send, how many of them are we willing to speak aloud? Some, clearly, contain private or sensitive information. Keying in information provides the illusion of a direct connection with the physical artifact, and, in practical terms, is also “private” in that others can’t easily determine what the individual is keying into his or her screen. 

This aspect of privacy may not be at the forefront of our minds as we text, but it is in our minds when we text. We trust that the information we're entering into -- or through -- the artifact is known to us, the artifact itself, and a potential audience. If we make a mistake in typing a word or send a wrong command, we can correct it rather quickly. Of course, there is still a potential for a bit of anxiety that our commands will not be carried out, or understood. But the “failure” is not as immediate or public in most cases as it would be with a command or message that is spoken aloud. Repeating unrecognized commands via voice is time consuming and frustrating.

Furthermore, a physical keying in of information is more immediate, especially if the device is configured for haptic feedback. Touch "send," and one can actually “feel” the acknowledgement of the device itself. Touching the screen is reinforced by a visual cue that confirms the command. Add any associated sounds the artifact makes, and the entire sequence becomes a multisensory experience. 

At present, technology is still very artifactual, and I believe that the tactile aspect of our interactions with technological systems is one of the defining factors in how we ontologically interact with those systems. Even if we are interacting with our information in the cloud, it is the physical interface through which we bring that information forth that defines how we view ourselves in relation to that information. Even though Glass potentially “brings forth” information in a very ephemeral way, it is still brought forth #throughglass, and once it has been evoked, I believe that -- in the beginning at least -- there will have to be a more physical interaction with that information somehow. In this regard, I think the concept video below from Nokia really seems to get it right. Interestingly, this video is at least 5 years old, and this clip was part of a series that the Nokia Research Center put together to explore how mobile technology might evolve. I can't help but think that the Google Glass development team watched this at some point. 



My first reaction to the Nokia video was that this is what Glass should be. This technology will come soon, and Glass is the first step. But Nokia’s vision of “mixed reality” is the future which Glass prepares us for, and -- for me -- highlights three things which Glass needs for it to be useful in the present:

Haptic/Gesture-based interface. Integral in Nokia’s concept is the ability to use gestures to manipulate text/information that is present either on the smartglass windows of the house, or in the eyewear itself. Even if one doesn't actually “feel” resistance when swiping (although in a few years that may be possible via gyroscopic technology in wristbands or rings), the movement aspect brings a more interactive dynamic than just voice. In the video, the wearer’s emoticon reply is sent via a look, but I would bet that Nokia’s researchers envisioned a more detailed text being sent via a virtual keyboard (or by a smoother voice interface).
Full field-of-vision display. This was my biggest issue with Glass. I wanted the display to take up my entire field of vision. The danger to this is obvious, but in those moments when I’m not driving, walking, or talking to someone else, being able to at least have the option of seeing a full display would make Glass an entirely different -- and more productive -- experience. In Nokia's video, scrolling and selection are done via the eyes, but moving the information and manipulating it is done gesture-haptically across a wider visual field.
Volitional augmentation. By this, I mean that the user of Nokia Vision actively engages -- and disengages -- with the device when needed. Despite Google’s warnings to Glass Explorers not to be “Glassholes,” users are encouraged to wear Glass as often as possible. But there’s a subtle implication in Nokia’s video that this technology is to be used when needed, and in certain contexts. If this technology were ever perfected, one could imagine computer monitors being almost completely replaced by glasses such as these. Imagine for a moment what a typical day at work would be like without monitors around. Of course, there would be some as an option and for specific applications (especially ones that required a larger audience and/or things that could only be done via a touchscreen), but Nokia’s vision re-asserts choice into the mix. Although more immersive and physically present artifactually, the "gaze-tracking eyewear" is less intrusive in its presence, because engaging with it is a choice. Yes, engaging with Glass is a choice, but its non-intrusive design implies an “always on” modality. The internet of things will always be on. The choice to engage directly with it will be ours. Just as it is your choice as to whether or not to check email immediately upon rising. Aside from the hardware, what I find the most insightful here is the implication of personal responsibility (i.e. an active and self-aware grasping) toward technology.

If Google Glass morphed into something closer to Nokia’s concept, would people abuse it, wear it all the time, bump into things, get hit by cars, lose any sense of etiquette, and/or dull already tenuous social skills? Of course. But Nokia’s early concept here seems to be playing for a more enlightened audience. Besides, at this level of technological development, one could imagine a pair of these glasses being "aware" of when a person was ambulatory and defaulting to very limited functionality. 

Overall, Glass is the necessarily clunky prototype which creates an expectation for an effective interface with the internet of things. Although it may not be practical for me in the present, it does make me much more receptive to wearing something that is aesthetically questionable so that I might have a more effective interface when I choose to have it. It is, however, a paradoxical device. Its non-intrusive design impedes a smooth interface, and the hyper-private display that only the wearer can see is betrayed by very public voice commands. Its evoking of the information provided by the internet of things is impeded by too much empty space. 

But in that failure lies its success: it creates an expectation that brings technological otherness down from the clouds and integrates it into the very spaces we occupy. Over half a century ago, Martin Heidegger implied in The Question Concerning Technology that the essence of technology does not reside in the artifact, but in the individual’s own expectation of what the artifact or system would bring forth. He would be horrified by Glass, because it “sets in order” our topological spaces, objectifying them, and rendering them into information. The optimist in me would disagree, but only with the caveat that engaging with the “technic fields” that an internet of things would emit must be a choice, and not a necessity. That is to say, it is the responsibility of the individual to actively engage and disengage at will, much like the somewhat Hyperborean user depicted in Nokia’s Mixed Reality project. 

Philosophically speaking, this type of technology potentially offers an augmented integration with our topologies. It highlights the importance of the physical spaces we occupy and the ways in which those spaces contribute to how and why we think the way we do. Used mindfully, such technologies will also allow us to understand the impact that our human presence has on our immediate environment (i.e. the room, house, building, etc. we occupy), and how those spaces affect the broader environments in which they are found. 

Now, will Glass just sit on my shelf from now on? No. I do have to say that more apps are being developed every day that increase the functionality of Glass. Furthermore, software updates from Google have made Glass much more responsive. So I will continue to experiment with them, and if the right update comes along with the right app, then I may, at some point, integrate them into my daily routine.

#Throughglass, however, the future is in the past-tense.


[I would like to express my appreciation and gratitude to Western State Colorado University and the faculty in Academic Affairs who made this possible by providing partial funding for obtaining Glass; and to the faculty in my own department -- Communication Arts, Languages, and Literature -- for being patient with me as I walked through the halls nearly bumping into them. The cyborg in me is grateful as well.]




Sunday, July 21, 2013

Hide and Seek, Part 1: All Those Years They Were Here First

Hide and seek.
Trains and sewing machines.
All those years they were here first.

Oily marks appear on walls
Where pleasure moments hung before.
The takeover, the sweeping insensitivity of this still life.

- Imogen Heap, "Hide and Seek"

As the editors of the collection for which I'm writing prepare their final comments and suggestions for my essay, I've been thinking about some of the possible trajectories of the "posthuman determinism" I propose; specifically, the ethical ones.  Since I used hoarding as an ongoing example in the piece, one of my editors sent along a talk by Jane Bennett.  The subject of her talk was hoarding, and its relationship to the "vibrant materialism" about which Bennett writes.  I was taken by her statement that hoarders are "preternaturally attuned to the call of things," as well as her theory that hoarders are under a kind of "animistic taboo" in their attachment to things.  Very loosely, Bennett's idea is that despite a prevalent consumerism in our culture, too much or too strong of an attachment becomes a taboo of sorts.

This made me realize that her overall approach generally skews toward artifacts rather than objects. That is to say, manufactured or made objects, rather than more natural objects.  Interestingly, however, although there is the occasional hoarder who hoards rocks or leaves, for the most part people are aghast at the hoard as a series of artifacts:  of what use is the stuff? Why are you holding onto this (useless) piece of junk?

But if we stand back and put all phenomenal objects (i.e. objects that have extension, and can be physically apprehended), on a level playing field, we find that we have a very culturally constructed -- even politicized -- hierarchy of objects.  The environmentalist values the object of the tree, the river, the field as higher than the artifact of the iPhone, the refrigerator, or the shards of lead paint.  And that, in specific circles, is desirable, right, and just.  Whereas the person who finds comfort surrounding him or herself with artifacts is "shallow," "superficial," or, in the least philosophical sense of the term, "materialistic."  Obviously, an obsession with artifacts can have detrimental effects, not the least of which are the very philosophical ones which Heidegger outlines in The Question Concerning Technology. Indeed, falling prey to a predatory consumerism can have terrible effects on us psychologically and culturally.

Now, putting all phenomenal objects on an equal conceptual footing as objects -- making no distinction between "object" and "artifact" or "natural" and "artificial" -- we can, perhaps, alter our approach.  Why is feeling a sense of well-being from one group of objects better, or even more "normal" than feeling a sense of well-being from another group of objects?  Is there something inherently better in one group than another?  Why, exactly, is it better to feel comfort from trees, rocks, and grass than it is from a big-screen TV, a warm blanket, and soft, fuzzy socks?  Is it because we have deemed one "natural" and the other "artificial"?  Because one is made of certain materials which are safer than another?  Or because one group has come into existence apparently without the aid of human intervention, whereas the other has come into existence at the cost of human health and dignity?  Fair assessments, absolutely.  However, even a cursory investigation of the "natural" scenario can transform those objects into instruments of death and destruction:  natural bacteria in the water can make a person gravely ill.  The tree can fall over and crush whoever's under it, etc.

These are reductionist, broad-stroke examples, of course.  I use them as points of departure.  Because if we simply view objects as "other," then we will always fall into a myriad of binary systems by which such objects are classified, or worse, an endless Marxist exercise in subjugation and valuation.  As a posthumanist, I see the subject-object/self-other dichotomy as the product of an obsolete worldview.  Denigrating the value of ANY objects to our well-being as humans can have negative consequences as well.

There are times when an object, for whatever reason, can, psychologically, make us feel good, or give us a sense of place and well-being.  A comfortable room, a soft blanket, a childhood teddy-bear, or even a cell phone or tablet that does all we need it to do, can -- albeit momentarily -- "complete" us.  I've written repeatedly that, from a posthuman standpoint, an artifact "works" for us when we feel no boundary between it and us.  We are in union with it.  Immediately disqualifying a physical artifact as a source of an existential moment simply because it is physical is to cut off an entire field of study.

Furthermore, ignoring the very real materialism around us through haphazardly elevating any object -- including a natural one -- can have serious consequences.  Heidegger maintains that applying a "setting-in-order" to nature sets us on a path to the "standing reserve," or a blind objectification of the world around us.  I agree.  However, I also believe that the variables in this equation can be flipped: bracketing "nature" as something that is other than a materialism is to actually miss out on the authentic vitality of those objects.  To plant, to harvest, to conserve are all manipulations of the physical materiality of the objects of nature.  Weeds don't commit suicide because they are choking out our tomatoes.  Rainwater does not gather magically in the right place and irrigate our terraced garden.  Non-native vegetables, fruits, and legumes do not plant themselves.  Furthermore, the "natural" enzymes in the manure we spread over the soil will just as easily make us ill if we don't wash our hands (with soap) after handling it.  Walking barefoot, not bathing, and eschewing vaccines only myopically represent humanity's "natural" state.

Objects are objects.  They always already precede us in the world (or even the lifeworld).  This is where Bennett's work comes in really handy. In her characterization of objects as having a "vital materiality," she's not referring to a kind of anthropomorphic animus; instead, she's pointing to the unique materialisms of each, and how the material character of objects affects us.  She does so without using an existential, subjective conceit: those objects are already in the world we occupy.  We do not "bring them forth." They are already there.

As for me, I take this a step further.  The "I" is manifested in a world of objects -- not the least of which is the physical body.  But that self-aware I, that Dasein, is composed by and through the physical objects around us at any given moment.  And as I write through this idea, I'm starting to understand more clearly Bennett's contention that a heightened awareness of the objects around us could be linked in some way to Freud's death-drive, or even Sartre's "being-in-itself" vs. "being-for-itself."  Perhaps we don't want the objects to be part of us as much as we want to fall back into the world of objects.

More of that in Part 2 ...







Saturday, November 3, 2012

Gelassenheit, Serendipity, and Multitasking.

A student of mine came across a Heideggerian term which I haven't thought about in a long time.  He was reading an article that linked Heideggerian existentialism with theology.  I've read these things before, and the arguments of the authors usually hinge upon an attribution of purpose or telos to the Dasein which isn't necessarily justifiable. These interpretations of Heidegger are also contingent upon a general ignoring of anything and everything Heidegger has ever said about death.   But that's another post.  What struck me about this article was the appearance of the term "Gelassenheit," which the author of the article unforgivably translated as "openness or attunement," which is -- I believe -- more than just a stretch of a translation.  Then again, the article was trying to shoehorn Heideggerian ontology into a theological perspective.  Just because Heidegger borrowed the term from Meister Eckhart, it's not a crack in his atheism.  But that's a rant for another post.

After double-checking with several sources -- including two German-speaking colleagues -- I found that my own translation as "stillness" or "calm" was more accurate in that context.  The implications of the slanted translation definitely affected the arguments of the article.  But it did inspire me to think more about this term, and to track it down in Heidegger's writing -- and I've got Country Path Conversations (Davis translation) queued up on my Kindle.

But this use of Gelassenheit serendipitously worked with some things I've been wrestling with in terms of technology, distributed cognition, and a theory of posthuman determinism I've been thinking about.  I am finally getting to read Nicholas Carr's The Shallows: What the Internet is Doing to Our Brains, but from my own personal experience, I know that certain ways of using technology can mess with my concentration.  I realized last year as I started to write again that multitasking had really screwed up my ability to write.  I had to painstakingly extricate myself from multitasking and re-evaluate the way I used technology.  It took more will power than I care to mention.  But I did it.  And now I find myself much more aware of how I use my own technology than I was before -- specifically, my behaviors:  How and why do I multitask?  Was there ever a moment when multitasking was useful, or even necessary?

But as I thought more about multitasking, it became clear that we multitask even when we don't think we're multitasking:  music on in the background?  Guilty.  Screensaver on while doing something else?  Guilty.  Texting or surfing on a smartphone while the TV is on?  Guilty.  Once again, serendipity came into play as I started reviewing McLuhan for my Communication & Theatre class AND found McLuhan cited in a source I was using for the book chapter I'm working on.  McLuhan's theories of hot and cool media and autoamputation are still useful and valid.  But I think they have a wider application than what we often attribute to him.

But from what I've read so far of The Shallows, I think that there will be a very interesting crossover between Carr's work on how the brain re-wires itself in light of technological media, and my own ideas on the role of topological spaces and distributed cognition.  I also think that there just might be some hope regarding more effective ways to use the technological artifacts and applications which Carr believes re-wire our minds in counter-productive ways.  This idea of Gelassenheit may be a part of it, but there's a distinct possibility that Heidegger's "stillness" may itself be a red herring.

Regardless, part of this requires some experiments on my part -- some of which I'm already engaged in.  I haven't been engaged in them long enough for them to become habit, but I will say that the few things I'm doing differently have increased my concentration a great deal.

More soon.


Saturday, September 15, 2012

A little bit on process; and a turn of thought

There are telltale signs that I am at the "saturation point" for material for a piece I'm working on:  I have trouble finishing sentences; I cannot think of the right word for things; I sleep fitfully, and when I do sleep, I'm plagued by very odd dreams.  That's when I know that my subconscious is working overtime on the broad landscape of material I've been reading and annotating in previous weeks.  And the fact that I'm working on this at the start of an academic year, when my classes are starting out and I'm trying to figure out the best pedagogical approaches to the material is just exacerbating my overall inability to articulate myself verbally.

Wednesday morning at 5:30am, after dreams in which I was occupying two spaces at once, my eyes popped open and I could see (and hear) the complete introductory paragraph to the chapter I'm writing.  It was an odd experience, and the little pad I keep on my nightstand would never be able to handle the heft of the paragraph in question.  I ran to my study, grabbed a pen, and started to scribble the paragraph down as best as I could, knowing that only 20% of it might make it into the finished piece; but I also knew that there were some key phrases that would act as "markers" for other ideas.  After the paragraph was done, I sketched out a very rudimentary structure/flow chart of ideas.

What's most interesting, however, is that for this particular piece, some of my best nuggets have come from my more "editorial" notes -- where I comment on an author's style or rhetorical choices; or where I document my own difficulty in understanding a point, or in articulating an analysis (i.e. "This is a really tricky bit, I can avoid this argument or try to walk the reader through it").  But there was one particular essay I was reading where, after a very promising first two-thirds, the author abruptly stops a deep and thorough philosophical meditation to show "an example" of the philosophy in action in some obscure film of which I had never heard.  I was frustrated, because it seemed he was so close to something really profound in the piece, and then there was this ... example.

It made me think of Heidegger's prolonged deconstruction of Trakl's "A Winter Evening," and also my concluding chapter in Posthuman Suffering, where I went on a somewhat meandering analysis of one scene in A.I.: Artificial Intelligence.  So I put myself back in that space and tried to think about what was going on when I watched that film and when I was composing that chapter.  The short answer was: a lot.  Actually, it was that particular scene in that film, as well as the ATM epiphany in DeLillo's White Noise, which became what I thought were seminal moments:  seeds for the larger book.  But after all of this reading, writing, dreaming, stammering, and procrastinating, my thinking is beginning to turn and I'm realizing that "seed" is very much the wrong word; and that, perhaps, through an elusive temporal sleight-of-hand (or is that "sleight-of-mind"?), what we see in "perfect examples" of our theories are not examples at all -- or at least not examples of what we think they're examples of.  

So as I work through this, I'll be bumping into things and becoming even more inarticulate.

As for the blog, I'm not sure if I'll be updating during the writing of the actual chapter.  I'm playing that by ear.  So if you don't hear from me until October, you'll know why.

Friday, July 6, 2012

Aokigahara Forest, Part 1: You Always Find Something

I came across this video recently. It's been making the rounds on various blogs and Tumblr, mostly in the context of suicide prevention and various religious discourses. On an emotional level, it's quite moving. On an intellectual level -- especially in the context of posthumanism -- it's fascinating. Watch the video before proceeding. Heed the disclaimers, though, because there is some imagery that might be disturbing to some.



From a posthuman perspective, I was struck the most by geologist Azusa Hayano's statements regarding the tape that had been wound around trees and tree-limbs:

"People who are indecisive about dying, wrap this tape on trees along their way so they can find their way out ... In most cases, if you follow the tape, you find something at the end. Either you find a dead body, or you find traces that someone was there. You always find something."

Hayano's statement here is telling. For him, indecision and doubt are marked by the existence of physical evidence/objects. He later implies that those who are determined to commit suicide walk into the forest without any kind of objects to leave behind. They seem to walk in and "disappear" completely. From a posthuman perspective, it would make sense that doubt would be marked by objects. Objects have the quality of holding one to a specific time and place. On a practical level, they can mark goals or destinations (literally, a starting line or finish line, or monument, or landmark). On an affective level, they become representations of a specific moment in time and space, and take on a kind of emotional burden. As I've said previously in Posthuman Suffering, we seek out technology as a means to "lift out" pain and suffering from our human selves. We want technology not to alleviate that suffering, but to actually suffer with us. Note, that's with us, not for us.

For objects, the relationship is similar, I think. But because of the physically simple nature of a single object -- in this case, the tape -- the practical/affective boundary is indistinct. The tape in Aokigahara Forest allegedly serves a singular practical purpose: to allow the individual to find his or her way out should they decide not to kill themselves. But how can we not assign a more profound, symbolic purpose to them as well: traces of lives; tendrils that anchor the individual to the possibility of living. The tape becomes, literally, a lifeline back to the world. And, given Japanese technoculture, that 'world' is one represented by technology, or that which is not nature (Hayano discusses this toward the end of the video, more on that later).

I can't stress enough exactly how important the affective/practical nature of that lifeline is. For the individuals who want out of the Aokigahara, the tape is their only way to navigate back to the main trail system. Furthermore, it seems as if those who do actually decide to leave the forest still leave objects behind. Granted, those objects may simply be trash, but they are evidence of their presence nonetheless. Even the tape remains behind. Are these texts? Testimonials to moments of despair? I wonder how many of those who decide not to kill themselves ultimately pick up after themselves and take the tape with them. I would be inclined to believe not many at all; but that is pure speculation. The objects seem anonymous enough not to identify a specific individual (and thus not bring the potential for some kind of public shame), but personal enough that they represent an individual life or path. I'm reminded here of the Police's "Message in a Bottle." At the moment of despair, "100 billion bottles washed up on the shore," testaments of loneliness for 100 billion castaways.

On the other side, however, for those who do succumb, these lifelines are only lifelines in the past tense. For the living, the lifelines represent the final moments of a person, and mark the final trajectory of a life. And, in such a fashion, what is found at the end of the line are "things." As Hayano stated, "you always find something," whether it's scattered objects or a corpse. And in that moment, the person is now definitively a "thing." That thingness is even more emphasized by the state of the human found.  And it's up to the people on the other end to bestow upon the corpse its human-in-the-past-tense status.  If the corpse is never found, it becomes indistinguishable from the forest itself. In a similar register to the Heideggerian "they," the corpse-as-thing, devoid of a certain essential "human-ness" (very problematic term here, I know), becomes obfuscated by the otherness of the forest itself.  It is only found when there is someone else there to find it; and furthermore, it is found because it is not of the forest -- but is distinguished in the same fashion as the other artifacts, which themselves are markers of a once-present human.  Poignantly, the suicides of the forest are rendered into debris; the debris of modern life.

More in the next post.

Sunday, June 24, 2012

The shape of thoughts

After I wrote Posthuman Suffering, I wasn't exactly sure where to go from there.  Since the core material of the book was my dissertation, and my dissertation took years (and years and years) to write, my ideas were already evolving from my key argument that technology is more of an ontology than an epistemology.  What I hadn't realized at the time was that my own idea of "ontological" was heavily influenced by an existential perspective.  The world "out there" was always already filtered through consciousness.  So, as I'm so fond of saying in my philosophy classes, "the world is out there in our consciousness."

But after I gained some distance from the book, and after teaching several classes and having a lot of very good class discussions with the students in those classes, I began to feel that this "primacy of consciousness" often presented a conceptual brick wall of sorts, especially when it came to otherness.  In Heidegger's The Question Concerning Technology, instrumental technology (that is to say, technological artifacts themselves -- aka, "stuff") and technology-as-concept are very quickly separated.  As I've said in my book, Heidegger implies that "the technological" is itself an epistemology.  It is a way that humans "know" the world.  At the time, I fully supported this opinion, but even then knew there was a bit more to it.  But given the state of technology at the time of his writing, I doubted that Heidegger could come to any other conclusion.  The ubiquity of virtual, "always on" technology (I'll get into that in another post) was not yet visible to him.  Yet, taking into account the ideas of Donna Haraway in A Manifesto for Cyborgs and N. Katherine Hayles's great How We Became Posthuman, I started orbiting around the idea that how we "are" -- our "Be-ing" -- is itself shaped by the technological.

What I couldn't see then was that I was still putting consciousness first.  How we express the self is dictated by our technological systems -- but in a more traditionally existential way.  I had located that expression of self in terms of mindedness, and not something greater.  Like many existentialists, I gave myself a pass by always inserting some disclaimer about the physicality of the "wetware of the brain" (Hayles' excellent phrase).  No matter how many times I said it, though, there was always something gnawing at me.  Materialism became that pea under the mattress.  Simply cordoning off a more materialist perspective into the biological body was not enough.  There was just too much stuff.  And that stuff had more than an affective pull on us.

I was able to keep all of that at bay for quite a while, actually.  I had a Philosophy program to help develop, maintain, and grow; classes to teach; and tenure to worry about.  But then, around the time that my wife and I bought our first house, I could no longer ignore that gnawing feeling.  We had been living in the same rental for six years before we moved into our new place.  It was the move into the new space -- a dramatically different space than our old rental -- which affected not only my thinking, but the way in which that thinking unfolded.

That's when I started thinking about the shape of thoughts.

Thursday, June 21, 2012

The Space Between

For my latest project, I've been thinking a lot about the concept of the interface -- that elusive space which separates the "human" from the "object" it is manipulating.  The simplest and most successful type of interface, in Heideggerian terms, is one which disappears as the object is manipulated.  Some interfaces are so intuitive or effective that we don't even conceptually think of them as interfaces.  Think about the handle of a hammer.  Normally, we see the handle as part of the hammer, and not as a handle in and of itself.  Why?  Because it intrinsically works ... until you have a blister, or have the first pangs of arthritis, or are forced to use your non-dominant hand.  Then you become very aware of the hammer not being a hammer, but instead being an impediment to what you want to do.  Oddly, though, more often than not, the "impediment" is localized in the person/limb wielding the hammer and not the hammer itself.  If you have a blister on your hand, and you need to do some work, do you tend to think "this blister is killing me" or "I can't do this because of this blister", or do you think "I could do this if the hammer handle were softer and padded"?  Chances are, you go for one of the former ideas first, and the latter comes only after figuring out how to supplement the handle to make up for your pain/deficiency.

There are several roads I can take with this idea, and in the past I've found myself torn between looking at the interface on the smallest, most basic of levels (where a specific aspect of the body meets an object), and looking at it from a broader, conceptual level (where we think of ourselves in relation to the object, on an epistemological and ontological level).  I've usually opted for the conceptual only because that was more comfortable.  But, as I've started delving into some new texts, I'm realizing that focusing on the conceptual pulls away from the physical, and potentially privileges thought and thinking in Cartesian ways.

So I'm purposely paying attention to the ways in which I interact with objects on a daily basis, and specifically thinking about how the physical interaction with the object changes the "shape of thinking."  I was inspired by Andy Clark's Supersizing the Mind: Embodiment, Action, and Cognitive Extension, and am now reading Jane Bennett's Vibrant Matter: A Political Ecology of Things.  I want to be careful, though, because I don't want the piece on which I'm currently working to become a critical analysis of either.  I want to take the idea of posthuman topologies in a new direction.