At the suggestion of a colleague, I recently read J.G. Ballard's "The Enormous Space." The short story, about a man who decides that he's never going to leave his house again, has a "Bartleby, the Scrivener" meets Don DeLillo vibe to it, where -- as his self-imposed isolation sets in -- he starts to explore the space of his home more intimately, with predictably hallucinogenic results. But his initial explorations resonate with work in New Materialism and Object-Oriented Ontology, particularly as he explores his own relationship with his physical environment.
I believe the story has gotten more attention in the shadow of COVID and its resultant quarantines (which, as of today, June 24th, 2020, people in the United States have seemingly become bored with and "prefer not to" follow). But the ongoing, slow collapse of the United States is something for another entry. I also believe that strict quarantines will be in effect again in some states after death tolls reach a level that registers on even the most fervent pro-life, evangelical conservatives' radar: that is to say, when enough of the right people die for the "all lives matter" crowd to actually notice; and/or when "bathing in the blood of Jesus" is no longer the necessary tonic to mitigate the long, slow, isolated, and painful COVID deaths of loved ones. I have no doubt those deaths will be inevitably and preposterously blamed on Hillary Clinton, Barack Obama, and somehow Colin Kaepernick and the Black Lives Matter movement.
On some level, however, I think that broader politically- and religiously-based science denial is linked to the same emotions that people felt when they were compelled to stay home: an abject fear of seeing things as they are. Now that's a philosophically loaded statement, I know: can we ever see things "as they are"? Let's not get mired in the intricacies of phenomenology here, though. Those who were in quarantine for any length of time were suddenly faced with the reality of their living spaces. Those home environments were no longer just spaces in which we "crashed" after work, or the spaces which we meticulously crafted based on home decor magazines. Whether living in a "forever home," a "tiny house," or the only space a budget would allow, people were faced with the "reality" of those spaces -- spaces which became the material manifestation of choices and circumstances. Those spaces were no longer just the places we "had" or "owned" or "rented"; they became the places where people actually lived. We were thrust into an uninvited meditation on the difference between occupying a space and living in one.
Much like Geoffrey Ballantyne in "The Enormous Space," we found ourselves subject to the spaces which previously remained "simply there." Some, I know, went on J.A.K. Gladney-like purges as they suddenly realized just how useless -- and heavy -- many of the objects around them were, and instead of finding themselves surrounded by the fruits of their labor, they were instead trapped by the artifacts of the past. How many people during quarantine fumbled through their possessions, timidly fondling knickknacks, looking for some kind of Kondo-joy? Others, I'm sure, went the opposite route and ordered MORE things from the internet to serve as an even more claustrophobic cocoon of stuff to block out all the other stuff which they couldn't bring themselves to face -- let alone touch and purge. Still others continued to fail to notice their surroundings at all, yet found themselves suffering random anxiety and panic attacks -- blaming the fear of COVID rather than the fact that their surrounding spaces were becoming increasingly smaller as the detritus of daily life "at home" collected around them.
Those spaces ... the spaces in which we "live" ... which were once relegated to the role of a background to the present, were suddenly thrust into the foreground, reclaiming us and our subjectivity. They didn't just become present, they became the present -- a present in which we were implicated; a present with which we may have grown unfamiliar. And, given the circumstances, can you blame anyone for not being too keen on the present? Whether it's seeing more unrest on the news or on social media, or being compelled to haplessly homeschool your own children, the present isn't always that much fun.
I think, though, that there is at least one positive thing that we can learn from Geoffrey Ballantyne: that it is possible for us to more consciously occupy the present moment instead of trying to avoid it. While I don't advocate the extremes to which Geoffrey goes (no spoilers here, but you may never look at your freezer the same way again), I do think that there is something to be said for noticing and engaging the spaces in which we are implicated. The spaces in which we "live" should be the ones with which we engage rather than just treat as some kind of visual or ontological backdrop. Engaging with our spaces is a way of seeing things as they are. It's a way of being aware.
In my last post, I discussed the expectations that Google Glass creates in relation to the internet of things. In this final section, things will take a slightly more philosophical turn by way of Glass's paradoxical weakness.
Connection. Integration. Control. They are related but they are not the same. One of the pitfalls of a posthuman ontology is that the three are often confused with each other, or we believe that if we have one, we automatically have one or both of the others. A connection to any kind of system (whether technological, social, emotional, etc. or any combination thereof) does not necessarily mean one is integrated with it, and neither connection nor integration will automatically instill a sense of control. In fact, a sense of integration can have quite the opposite effect, as some begin to feel compelled to check their email, or respond to every signal from their phone or tablet. Integrating a smart home or child tracker into that system can, at times, exacerbate that very feeling. Explicating the finer differences among connection, integration, and control will be the subject of another entry/series. For now, however, we can leave it at this: part of the posthuman experience is to have an expectation of a technological presence of some kind.
The roots of the word “expect” lie in the Latin expectare, from ex- “thoroughly” + spectare “to look” (etymonline.com). So, any time we are “looking for” a technological system of any kind -- whether because we want to find a WiFi network (vending machine, ATM, etc.) or because we don't want to find any obvious sign of a technological device or system (save for the most rudimentary and simple necessities) -- we are, generally, in a state of looking for or anticipating some kind of technological presence.
Wide-scale adoption of certain technologies and their systems of use is a very important aspect of making a specific technology ubiquitous. Think about email. For each of us, when did email and the internet become an important -- if not the main -- means of retrieving and storing information, communication, and entertainment? How much of the adoption of that technology came about by what seemed to be an active grasping of it, and how much by something foisted upon us in a less voluntary way? The more ubiquitous the technology feels, the more we actively -- yet unconsciously -- engage with it.
And in the present day, we expect much, much more from the internet than we did before. The same holds for other technological systems: what do we expect to see in our cars? What will we expect to see in 10 years' time?
In this context, the successful technology or technological system is one that creates expectations of its future iterations. Much like the film Inception, all a company needs to do is plant the idea of a technology in the collective consciousness of a culture. But that idea needs to be realistic enough to occupy that very narrow band between the present and the distant future, making the expectation reasonable. For example, cost-effective flying cars may be feasible in the near future in and of themselves, but we also know that wide-scale adoption of them would be contingent upon a major -- and unrealistic -- shift in the transportation infrastructure: too many other things would have to change before the technology in question could become widespread.
In this case, Glass -- subtly, for now -- points to a future in which the technological presences around us are evoked at will. Most importantly, that presence (in the internet of things) is just "present enough" now to make the gap between present and future small enough to conceptually overcome. It is a future that promises connection, integration, and control harmoniously fused, instantiated by an interface that is at once ubiquitous and non-intrusive.
In the present, in terms of everyday use, this is where Glass falls short for me. It is intrusive. Aesthetically, Google has done all it can given the size limitations of the technology, but the user interface is not fluid. I think its reliance on voice commands is at fault. Although the voice recognition present in Glass is impressive, there are sometimes annoying errors. But errors aside, using voice as the main user control system for Glass is a miss. Voice interaction with a smartphone, tablet, or computer can be quite convenient at times, but -- especially with smartphones -- it is infrequently used as the primary interface. No matter how accurate the voice recognition is, it will always lack what a touch interface has: intimacy.
Now this may seem counterintuitive. Really, wouldn't it be more intimate if we could speak to our machines naturally? In some ways, yes, if we could speak to them naturally. Spike Jonze’s Her presents an incredible commentary on the kind of intimacy we might crave from our machines (yet another entry to be written ... so many topics, so little time!). But the reality of the situation, in the present, is that we do not have that kind of technology readily available. And voice interfaces -- no matter how much we train ourselves to use them or alter our speech patterns so that we’re more easily understood -- will always already lack intimacy for two main reasons.
First, voice commands are public: they must be spoken aloud. If there is no one else in the room, the act of speaking aloud is still, on some level, public. It is an expression that puts thoughts “out there.” It is immediate, ephemeral, and cannot be taken back. Even when we talk to ourselves, in complete privacy, we become our own audience. And sometimes hearing ourselves say something out loud can have a profound effect. A technological artifact with a voice interface becomes a “real” audience in that it is an “other” to whom our words are directed. Furthermore, this technological other has the capacity to act upon the words we say. These are, after all, voice commands. A command implies that the other to whom the command is directed will enact the will of the speaker. Thus, when we speak to a device, we speak to it with the intent that it carry out the command we have given it. But, in giving commands, there is always a risk that the command will not be carried out, either because the other did not hear it, understand it, or -- as could be a risk in future AI systems -- does not want to carry it out. Of course, any technological device comes with a risk that it won't perform in the ways we want it to. But it’s the public nature of the voice command that makes that type of interface stand out and augments its failure. I propose that, even subconsciously, there is a kind of performance anxiety that occurs in any voice interface. With each utterance, there is a doubt that we will be understood, just as there is always an underlying doubt when we speak to another person. However, with another person, we can more naturally ask for clarification, and/or read facial expressions and nonverbal cues in order to clarify our intentions.
The doubt that occurs with voice commands is only exacerbated by the second reason why voice interfaces lack intimacy, one which is more rooted in the current state of voice recognition systems: the very definite lag between the spoken command and when the command is carried out. The more “naturally” we speak, the longer the lag as the software works to make sense of the string of words we have uttered. The longer the lag, the greater the doubt. There is an unease that what we have just said will not be translated correctly by the artifact. Add to this the aforementioned performance anxiety, and we have the ingredients for that hard-to-describe, disconcerting feeling one often gets when speaking to a machine. I have no doubt that this lag will one day be closed. But until then, voice commands are too riddled with doubt to be effective. And, all philosophical and psychological over-analysis aside, these lags get in the way. They are annoying. And even when the gaps are closed, I doubt this will ameliorate the more deeply rooted doubt that occurs when commands are spoken aloud, publicly.
For now, the real intimacy of interface between human and machine comes in the tactile. Indeed, the visual is the primary interface and the one which transmits the most information. However, on the human side, the tactile = intimacy. Thus, when trying to navigate through menus on Glass, the swipe of a finger against the control pad feels much more reliable than having to speak commands aloud. Having no middle ground in which to quickly key in information is a hindrance. If we think about the texts we send, how many of them are we willing to speak aloud? Some, clearly, contain private or sensitive information. Keying in information provides the illusion of a direct connection with the physical artifact, and, in practical terms, is also “private” in that others can't easily determine what the individual is keying into his or her screen.
Whether or not this aspect of privacy is at the forefront of our minds as we text, it is in our minds when we text. We trust that the information we're entering into -- or through -- the artifact is known only to us, the artifact itself, and a potential audience. If we make a mistake typing a word or send a wrong command, we can correct it rather quickly. Of course, there is still a potential for a bit of anxiety that our commands will not be carried out, or understood. But the “failure” is not as immediate or public, in most cases, as it would be with a command or message that is spoken aloud. Repeating unrecognized commands via voice is time-consuming and frustrating.
Furthermore, a physical keying in of information is more immediate, especially if the device is configured for haptic feedback. Touch "send," and one can actually “feel” the acknowledgement of the device itself. Touching the screen is reinforced by a visual cue that confirms the command. Add any associated sounds the artifact makes, and the entire sequence becomes a multisensory experience.
At present, technology is still very artifactual, and I believe that the tactile aspect of our interactions with technological systems is one of the defining factors in how we ontologically interact with those systems. Even if we are interacting with our information in the cloud, it is the physical interface through which we bring that information forth that defines how we view ourselves in relation to that information. Even though Glass potentially “brings forth” information in a very ephemeral way, it is still brought forth #throughglass, and once it has been evoked, I believe that -- in the beginning at least -- there will have to be a more physical interaction with that information somehow. In this regard, I think the concept video below from Nokia really seems to get it right. Interestingly, this video is at least 5 years old, and this clip was part of a series that the Nokia Research Center put together to explore how mobile technology might evolve. I can't help but think that the Google Glass development team watched this at some point.
My first reaction to the Nokia video was: this is what Glass should be. This technology will come soon, and Glass is the first step. But Nokia's vision of “mixed reality” is the future which Glass prepares us for, and -- for me -- it highlights three things which Glass needs for it to be useful in the present:
Haptic/Gesture-based interface. Integral in Nokia’s concept is the ability to use gestures to manipulate text/information that is present either on the smartglass windows of the house, or in the eyewear itself. Even if one doesn't actually “feel” resistance when swiping (although in a few years that may be possible via gyroscopic technology in wristbands or rings), the movement aspect brings a more interactive dynamic than just voice. In the video, the wearer’s emoticon reply is sent via a look, but I would bet that Nokia’s researchers envisioned a more detailed text being sent via a virtual keyboard (or by a smoother voice interface).
Full field-of-vision display. This was my biggest issue with Glass. I wanted the display to take up my entire field of vision. The danger to this is obvious, but in those moments when I’m not driving, walking, or talking to someone else, being able to at least have the option of seeing a full display would make Glass an entirely different -- and more productive -- experience. In Nokia's video, scrolling and selection is done via the eyes, but moving the information and manipulating it is done gesture-haptically across a wider visual field.
Volitional augmentation. By this, I mean that the user of Nokia Vision actively engages -- and disengages -- with the device when needed. Despite Google's warnings to Glass Explorers not to be “Glassholes,” users are encouraged to wear Glass as often as possible. But there's a subtle implication in Nokia's video that this technology is to be used when needed, and in certain contexts. If this technology were ever perfected, one could imagine computer monitors being almost completely replaced by glasses such as these. Imagine for a moment what a typical day at work would be like without monitors around. Of course, there would be some as an option and for specific applications (especially ones that required a larger audience and/or things that could only be done via a touchscreen), but Nokia's vision re-asserts choice in the mix. Although more immersive and physically present as an artifact, the "gaze-tracking eyewear" is less intrusive in its presence, because engaging with it is a choice. Yes, engaging with Glass is a choice, but its non-intrusive design implies an “always on” modality. The internet of things will always be on. The choice to engage directly with it will be ours, just as it is your choice whether or not to check email immediately upon rising. Aside from the hardware, what I find most insightful here is the implication of personal responsibility (i.e., an active and self-aware grasping) toward technology.
If Google Glass morphed into something closer to Nokia's concept, would people abuse it, wear it all the time, bump into things, get hit by cars, lose any sense of etiquette, and/or dull already tenuous social skills? Of course. But Nokia's early concept here seems to be playing to a more enlightened audience. Besides, at this level of technological development, one could imagine a pair of these glasses being "aware" of when a person was ambulatory and defaulting to very limited functionality.
Overall, Glass is the necessarily clunky prototype which creates an expectation for an effective interface with the internet of things. Although it may not be practical for me in the present, it does make me much more receptive to wearing something that is aesthetically questionable so that I might have a more effective interface when I choose to have it. It is, however, a paradoxical device. Its non-intrusive design impedes a smooth interface, and the hyper-private display that only the wearer can see is betrayed by very public voice commands. Its evoking of the information provided by the internet of things is impeded by too much empty space.
But in that failure lies its success: it creates an expectation that brings technological otherness down from the clouds and integrates it into the very spaces we occupy. Over half a century ago, Martin Heidegger implied in The Question Concerning Technology that the essence of technology does not reside in the artifact, but in the individual's own expectation of what the artifact or system would bring forth. He would be horrified by Glass, because it “sets in order” our topological spaces, objectifying them, and rendering them into information. The optimist in me would disagree, but only with the caveat that engaging with the “technic fields” that an internet of things would emit must be a choice, and not a necessity. That is to say, it is the responsibility of the individual to actively engage and disengage at will, much like the somewhat Hyperborean user depicted in Nokia's Mixed Reality project.
Philosophically speaking, this type of technology potentially offers an augmented integration with our topologies. It highlights the importance of the physical spaces we occupy and the ways in which those spaces contribute to how and why we think the way we do. Used mindfully, such technologies will also allow us to understand the impact that our human presence has on our immediate environment (i.e. the room, house, building, etc. we occupy), and how those spaces affect the broader environments in which they are found.
Now, will Glass just sit on my shelf from now on? No. I do have to say that more apps are being developed every day that increase the functionality of Glass. Furthermore, software updates from Google have made Glass much more responsive. So I will continue to experiment with them, and if the right update comes along with the right app, then I may, at some point, integrate them into my daily routine.
#Throughglass, however, the future is in the past-tense.
[I would like to express my appreciation and gratitude to Western State Colorado University and the faculty in Academic Affairs who made this possible by providing partial funding for obtaining Glass; and for the faculty in my own department -- Communication Arts, Languages, and Literature -- for being patient with me as I walked through the halls nearly bumping into them. The cyborg in me is grateful as well.]
My Google Glass "review" of course became something else ... so I've broken it down into three separate entries. Part 1 looks primarily at the practical aspects of Glass based on my own hands-on use. Part 2 will examine the ways in which Glass potentially integrates us into the "internet of things." Finally, Part 3 will be more of a meditation on the expectations which present technology like Glass instills, and the topologies of interface.
And a bit of a disclaimer to any Glass power-users who may stumble upon this blog entry: I'm a philosopher, and I'm critiquing Glass from a very theoretical and academic perspective. So read this in that context. The technological fanboy in me thinks it's an awesome achievement.
Now, carry on.
I think the reason that my Google Glass entry has taken so long has nothing to do with my rigorous testing, nor with some new update to its OS. It's a question of procrastination, fueled by an aversion to having to critique something I so badly wanted to like. I should have known something was up when, in every Google Glass online community in which I lurked, examples of how people actually used Glass consisted of pictures of their everyday lives, tagged "#throughglass." It became clear early on that I was looking for the wrong thing in Glass: something that would immediately and radically alter the way in which I experienced the world, and would more seamlessly integrate me with the technological systems which I use. That was not the case for two reasons: 1) the practical -- as a technological artifact, Glass's functionality is limited; and 2) the esoteric -- it caused a kind of temporal dissonance for me in which its potential usurped its use.
I'll boil down the practical issues to a paragraph for those not interested in a more theoretical take on things. For me, Glass was a real pain to use -- literally. While I appreciate that the display was meant to be non-intrusive, its position in a quasi-space between my normal and peripheral vision created a lot of strain. It also didn't help that the display is set on the right side. Unfortunately for me, my left eye is dominant, which could explain much of the eye strain I was experiencing. But still, having to look to my upper right to see what was in the display was tiring. Not to mention the fact that the eye-positioning is very off-putting for anyone the wearer happens to be around: conversation is instantly broken by the wearer's perpetual glancing to the upper right, which looks even more odd to the person with whom one is speaking. The user interface consists of “cards” which can be swiped through using the touch-pad on the right temple of Glass. The series of taps and swipes is actually very intuitive. But the lack of display space means that there is only a very limited virtual “desktop” at any given time. And the more apps that are open, the more swiping one has to do. Once Glass is active, the user “gets its attention” by saying “okay Glass,” and then speaking various -- limited -- voice commands. The bulk of Glass's functionality is voice-based, and its voice recognition is impressive. However, there is a limited number of commands Glass will recognize. Glass is able to perform most of the functions of “Google Now” on a smartphone, but not quite as well, and it lacks a more intuitive visual interface through which to see the commands being performed. In fact, it seems to recognize fewer commands than Google Now, which was a difficult shift for me to make given my frequent use of the Google Now app. Battery life is minimal: a couple of hours of heavy use, tops. One might be able to squeeze six out of it if used very, very sparingly.
On the plus side, the camera and video functionality are quite convenient. Being able to snap pics, hands free (via a wink!), is very convenient. As a Bluetooth headset tethered to a phone, it’s quite excellent. It is also an excellent tool for shooting point-of-view pictures and video. I cannot stress enough that there are several potential uses and applications for Glass in various professions. In the hospitality industry, the medical field, even certain educational settings, Glass would be a powerful tool, and I have no doubt that iterations of Glass will be fully integrated into these settings.
For my own use, practically speaking, Glass isn't. Practical, that is. No. It's not practical at all. But in that lack of practicality lies what I see as Glass’s most positive asset: its recalibration of our technological expectations of integration, connection, and control.
Yes, in Glass we get a hint of what is to come. As a fan of all things Google, I think it was brave of them to be the first to make this technology available to the public. Why? Because no one who did this kind of thing first could ever hope to get it right. This is the type of technology which is forged by the paradoxical fires of disappointment from technological skeptics and fanatical praise from the early adopters who at first forced themselves to use Glass because they had so much faith in it. Those true "Glass Explorers" (a term coined by Google) integrated Glass into their daily lives despite its limitations.
But as I started using Glass, I experienced a kind of existential temporal distortion. When I looked at this pristine piece of new technology, I kept seeing it through my eyes two to five years into the future. Strangely, one of the most technologically advanced artifacts I've held in my hands made me think, ‘How quaint. I remember when this was actually cutting edge.’ It was a very disorienting feeling. And I couldn't shake it. The feeling persisted the more I used it. I found myself thinking ‘wow, this was clunky to use; how did people use this effectively?’ I was experiencing the future in the present, but in the past tense.
Temporal dissonance. My #throughglass experience wasn't one of documenting the looks of curious strangers, or of my dog bounding about, or even of a tour of my office. Mine was pure temporal dissonance. The artifact felt already obsolete. By its tangible proof of concept, it had dissolved itself into the intangible conceptual components which would be seamlessly integrated into other artifacts. #Throughglass, I was transported to the future, but only because this artifact felt like it was already a thing of the past. If you have an old cell phone around -- whether it's a past Android-based smartphone or an older flip phone -- take it out. Hold it. Then turn it on, and try to navigate through its menus. That awkwardness, that odd, almost condescending nostalgia? That partially describes what I felt when I started using this advanced technology. And this was a new feeling for me. The only term I can think of to describe it is “pre-nostalgia.”
Personally, there were other factors which, for me, worked against Glass. Aesthetically, I could not get over how Glass looked. For the amount of technology packed into them, I think that the engineers did an excellent job of making them as non-intrusive as possible. But still, in my opinion, they looked positively goofy. I promised myself that I would only wear them around campus -- or in certain contexts. But there really isn't a context for Glass ... yet. Until a company or an industry starts a wide-scale adoption of Glass (which will only come when developers create the right in-house systems around its use, such as integrating it into various point-of-sale platforms for the hospitality industry, or into the medical records systems for doctors, etc.), Glass will remain delightfully odd to some, and creepily off-putting to others. I wonder if the first people who wore monocles and then eyeglasses were looked upon as weirdly as those who wear Glass in public today? Probably.
Personally, this aspect really disturbed me. Was it just my vanity that was stopping me from wearing them? When I did wear them in public, most people were fascinated. Was I just being too self-conscious? Was I becoming one of those people who resists the new? Or was I just never meant to be in the avant-garde, not psychologically ready enough to be on the forefront of a shift in culture?
Although inspired by Bennett's vital materialism, I'd like to think about why objects give us comfort from the position of "distributed cognition" which I've written about in previous entries (once again, owing much to Andy Clark's work). If we follow the hoarder scenario, there is that jarring moment when the extent of the hoard is thrust into the hoarder's perception by some outside actant. It's at this moment that the hoarder is forced to see these objects as individual things, and the overall seriousness and magnitude of the problem becomes apparent. I think that even non-hoarders get a glimpse of this when faced with having to move from one dwelling to another. Even people who aren't pack rats are confronted with the task of having to -- in some form or another -- account for each object that is owned. Dishes can't be packed away in sets. Books can't be moved in their bookcases. Everything has to be taken out, manipulated, and handled. The process is exhausting, no matter how healthy the individual is.
The objects become more "present" in their consecutive singularities. And in each instance, we have to make an effort to justify the existence of each object. And that's it, isn't it? It is up to us to justify that this object is worth the effort of dusting off, packing, unpacking, etc. In this way, the objects seem dependent upon us, since we are the ones burdened with bestowing purpose on those objects. Objects cannot justify themselves. They are, for lack of a better term, insensitive. We, however, are sensitive; and some of us, as explained by Bennett, are more sensitive than others. Perhaps this helps us to understand the hoarder mentality, especially the tears that are shed when something that seems to be non-functioning, decomposing junk is cast away. The hoarder has become invested in the objects themselves -- and bestowed sensitivity upon them. To throw them away is to abandon them.
But here we come dangerously close to the more existentialist viewpoint that it is the subject who bestows value upon the object: that is to say, the act of bringing an object into being is to automatically bestow upon it value. But, let's pause on the moment and process of "bringing." Etymologically speaking, "bring" implies a carrying. There must be a thing (even in the loosest sense) to be carried. The object is at least as important as the subject. Now, I don't want to just flip the model and say it's the thing which brings "I" into being, because that's not necessarily anything new. Hegel implies a version of this in aspects of his Herrschaft und Knechtschaft [Lord and bondsman ... or "master/slave"] dialectic. And there really is no way around the "I": the embodied "I" is a kind of locus of a specific bio-cognitive process. The particular I, of itself at the present moment, is made manifest by the phenomenal environment around it.
The objects by which we're surrounded are (not "represent", but phenomenally, functionally, are) a secondary material substrate through which our cognition is made manifest. A "first" material substrate would be our physiological, embodied brains. But, beyond that, our surrounding environments become an "outboard brain" which helps to carry our cognition.
I cannot stress enough that I'm not speaking metaphorically. The phenomenal world we occupy at any given moment partially constitutes a larger, distributed substrate through which cognitive processes occur. That environment is not "taken in" or "represented": it constitutes the very mechanisms of cognition itself. The process happens as instantly as thought itself, and is highly recursive -- meaning that the better and more efficiently a distributed cognition works, the less visible and more illusory it becomes. The more illusory the process, the greater our sense of autonomy. So, if something goes awry at any point in the process (whether environmentally, emotionally, or physically ... or any cumulative combination of them), then our sense of autonomy is skewed in any number of directions: an inflated/deflated sense of one's presence, an inflated/deflated sense of the presence of objects, skewed senses of efficacy, body dysmorphic disorder, etc. When the hoarder, or even the "normal" individual having to pack up his or her belongings, suddenly must account for each individual object, it causes a breakdown in the recursivity of the distributed cognitive process. The illusion of an autonomous self is dissipated by the slowdown -- and eventual breakdown -- of the mind's capacity to efface the processes which constitute it. Try to accomplish any complex task while simultaneously analyzing each and every physical, mental, and emotional point during the process: the process quickly breaks down. The process of constituting a viable self is quite possibly the most complex in which a human can be engaged.
What, then, are the implications for posthumanism? What I'm getting at here is something which a follower of mine, Stephen Kagen, so eloquently said in a response to my Aokigahara Forest entries: "my bias is that the artificial distinction between human, technology, and nature breaks down when examined closely."
Yes it does. The distinction is, in my opinion, arbitrary.
Technologically speaking -- and from a posthuman standpoint -- this is very important. Current technological development allows us to manipulate matter on an unprecedentedly small scale: machines the size of molecules have already been created. It is theoretically possible for these machines to physically manipulate strands of DNA, or the structures of cells. The boundary between human and machine is now -- literally -- permeable. At the same time, developments in artificial intelligence continue, as we begin to see robots that are learning by physically exploring the spaces around them.
The distinction does, in fact, break down. Posthumanism steps in as a mode of inquiry where the arbitrary condition of the subject/object divide is the starting point -- not the end point. Ontologically and ethically, the lack of boundary between self and other is no longer just a theoretical construct. It means viewing our environments on the micro- and macro-level simultaneously. We must fuse together what we have been warned must remain separate. The smart phone is no more or less "native" to the space I occupy than the aspens in the distance. Within my locus of apprehension, the landscape includes every "thing" around me: things that grow, breathe, reproduce, talk, walk, reflect light, take up space, beep, light up, emit EMFs, decay, erode, pollute, and pollinate. And, the closer and more recursively these various objects -- or gestalts of objects -- occupy this sphere of apprehension, the more integrated they are into my cognition. They manifest my topological self.
And I think this is also where one can start to articulate the distinction between posthumanism and transhumanism. More on that in another post.
I've never been a real fan of Kant, but every time I cover some of his philosophy in any of my classes, I keep coming back to his more poetic turn of phrase about the dove thinking that, without resistance, it could fly higher:
Mathematics gives us a shining example of how far, independently of experience, we can progress in a priori knowledge. It does, indeed, occupy itself with objects and with knowledge solely in so far as they allow of being exhibited in intuition, but this circumstance is easily overlooked, since this intuition can itself be given a priori, and is therefore hardly to be distinguished from a bare and pure concept. Misled by such a proof of the power of reason, the demand for the extension of knowledge recognises no limits. The light dove, cleaving the air in her free flight, and feeling its resistance, might imagine that its flight would be still easier in empty space. (from The Critique of Pure Reason)
In many ways, one could say that we need our own topological spaces as a kind of "resistance" for our embodied cognition. Importantly, I'm not using the term "resistance" pejoratively. I'm thinking of it in the same register as Kant. The spaces give our biological cognition something to "push against" that is more than simply the body in which it (primarily) seems to be housed. Of course, too much resistance can be counterproductive. But too little can be equally problematic. "Too much" resistance would be a space which has too many distractions -- whether they be things which are superficially distracting (e.g., noise, uncomfortable environmental conditions); or more subtle, emotional distractions (good or bad memories).
But what would "too little" resistance be? At first I thought that a space that was too comfortable, either physically or emotionally, would provide little of the resistance I'm thinking about. Then I wondered if it would be a space that was somewhat empty -- devoid of objects and distractions, a kind of topological tabula rasa. The latter, however, isn't really feasible in any practical sense -- unless we're talking about exceptional situations such as prolonged solitary confinement or sensory deprivation. But if we remain within the confines of the Geneva Conventions, I'm thinking that the former is more suitable: too much comfort, or even familiarity, offers little resistance. And, when we're too comfortable in a particular place, where does our motivation come from? What do we "push against" in order to really think? On top of that, we also need to consider that the desirability of these two aspects is situationally contingent. Sometimes, we need a bit of ease. We also need to take into consideration the fact that certain types of people will rely more heavily on the physical "outer" spaces around them than others.
It's important to note, however, that it's not so much the presence of the "stuff," as it is how effectively it's utilized: how efficacious is the environment to our thinking?
Regardless of the degree of integration with our topological spaces, those spaces act in a similar fashion to how Kant's "experience" provides a priori knowledge with something to push against. Even though I'm not one for a priori knowledge, I think that Kant's metaphor is useful. Affectively, no resistance = no ambition; no drive; no motivation. Topologically, the spaces we occupy provide that needed resistance for our biological cognition. The phenomena around us provide the stimuli through which a distributed cognition is woven.
I wrote this entry over two days while I was on my semester break in Denver. I was sitting in a Starbucks, amid the traffic noise, reading through articles to help with a revision of an anthology chapter I was working on. I found myself thinking again about place and topology. And I came to an unsurprising, but somewhat disappointing conclusion: I think better in cities. My mind can make high-end and productive connections so much more quickly when I'm surrounded by a more urban landscape. I find my thoughts more centered, more precise, and less encumbered by counter-productive meanderings.
The reason this is "disappointing" is a more personal one. For an academic who lives in the middle of the mountains, to know that my best thinking doesn't happen in the place where I live is concerning. What would I be like if I taught in an institution located someplace else? How different would my teaching be? I already know that my research would be more productive. So yes, there is something a little concerning there.
But academically, and in the scope of the chapter I'm working on, this is really par for the course in terms of the topological nature of distributed cognition. This also helps to prove an important point that I think a lot of scholars working on "thing theory" may be overlooking. It's quite tempting to view the types of thoughts we have as analogous to the types of spaces we're occupying at that moment. That is to say, it seems to make sense that if we're in someplace that is quiet, serene, and pastoral, our thoughts should be similar. But think for a moment about people who visit places that are serene and quiet and find themselves even more stressed and agitated than they would be in their more "native" environment. For me, being in downtown Denver doesn't give me more "cosmopolitan" thoughts. More accurately, the downtown atmosphere seems to match up to, and be more conducive to, a more native modality of thought. That is to say, I can concentrate on things more easily. I can sustain deeper, more complex thought for a longer period of time. At least -- and this is a very important caveat -- it seems that I can. It feels like that's the case. I feel more "me." It would be interesting to perform a more formalized study using various memory and concentration tasks. Perhaps in May.
In terms of the chapter I'm working on, however, I think it's important to move beyond what I mentioned above, and move away from characterizing types of thoughts per se and instead concentrate on the specific thought process that brings forth a specific self-ing of one's specific lebenswelt. Actually, a more accurate way to put it would be to simply say "brings forth a particular, individual lebenswelt."
So, let's take the cultural construction of hoarders on reality television, for example. In most of the shows I've seen, the hoarders themselves seem to fall into two categories: 1) the hoarder who has concluded on his or her own that he or she can no longer live this way; or 2) the hoarder who has been thrust into an intervention due to some outside circumstance (e.g., a health/fire scare where rescuers could not get into the dwelling in a timely fashion; or a local municipality threatening to condemn the property due to neighbors' complaints). In the former case, the hoarder is more self-aware and knows that how they are living is -- within the larger cultural framework -- "wrong." Even if they don't see their existence as uncomfortable or unsanitary, they have had some kind of insight or interaction that tells them that their own sense of "home" or "comfort" is somehow sociopathic. In the latter case, however, one can usually see a complete lack of awareness on the part of the hoarder that what they are doing is "wrong," "unhealthy," "sick," or "crazy." In fact, interventions for these hoarders have an added facet of difficulty, in that the hoarder fully and actively works against the team's efforts to help them clean things up.
But in both cases, there is a disconnect between the perception of it being wrong in terms of how "society" sees the issue and what the hoarder is actually feeling. That is to say, the hoarder feels at home in his or her hoard. It feels right to put more things in the home. The squalor and decomposing matter around them doesn't affect them in the ways it does an outsider. Yet, the hoarder is told that it is wrong and feels a certain kind of socially-instituted shame about their condition. And I'm sure that much of that is engineered as well by the reality television industry itself. If we were to really think about it, in many cases, the thing that separates a hoarder from a collector is socioeconomic standing and/or the perception of a culturally-constructed notion of "squalor." The millionaire who owns multiples of the same car, or who has a facility filled with "collections," is simply that, a collector -- probably because he or she can afford to keep the collection in perfect condition -- sans mummified animal carcasses and rodent droppings.
But, in terms of the hoard itself, for whatever psychological reason, the hoard is intrinsically related to a sense of self and well-being. The stress from the removal of the hoard comes from the breakdown of that self and well-being. The hoarder's own habits and highly personal and protected "being" is suddenly held under scrutiny, and deemed "abnormal." Shame and/or resistance follows. Removal of the hoard becomes a highly stressful enterprise, causing the hoarder to often just shut down as the hoard is removed, or to actively thwart the efforts of the removal team. This is where we, as viewers, often feel the greatest sense of superiority, and when we get to judge the hoarder as "crazy" or pity the mental illness that brought them there. However, we might be able to find a bit more compassion if we found ourselves having to voluntarily remove one of our own limbs. The "oneness" of our physical bodies is, in most cases, intrinsic to our senses of self. For psychological reasons, the sense of that bodily oneness for hoarders -- pathologically -- is more acutely distributed among the hoard.
I am hesitant, though, to put too much credence in the way in which reality television portrays the hoarder. That being said, precisely because the situation is engineered by the show's producers to bring about as much "drama" as possible, its very artificiality is what makes it compelling and useful for examining the role of the hoard. To present the hoarders' living situation in such an artificial, constructed manner -- constructed for consumption by an audience -- further highlights its pathology through multiple frames.
This entry has spanned two days. And now I find myself sitting in the same spot in the same Starbucks, facing a four-hour drive back to the mountains. As I drive back over snow-covered mountain passes, I probably won't be able to keep track of the shifts in the modality of my thinking. I'll just know that when I sit at my desk at home and look out over the breathtaking landscape, something will be missing.
There are telltale signs that I am at the "saturation point" for material for a piece I'm working on: I have trouble finishing sentences; I cannot think of the right word for things; I sleep fitfully, and when I do sleep, I'm plagued by very odd dreams. That's when I know that my subconscious is working overtime on the broad landscape of material I've been reading and annotating in previous weeks. And the fact that I'm working on this at the start of an academic year, when my classes are starting out and I'm trying to figure out the best pedagogical approaches to the material is just exacerbating my overall inability to articulate myself verbally.
Wednesday morning at 5:30am, after dreams in which I was occupying two spaces at once, my eyes popped open and I could see (and hear) the complete introductory paragraph to the chapter I'm writing. It was an odd experience, and the little pad I keep on my nightstand would never be able to handle the heft of the paragraph in question. I ran to my study, grabbed a pen, and started to scribble the paragraph down as best as I could, knowing that only 20% of it might make it into the finished piece; but I also knew that there were some key phrases that would act as "markers" for other ideas. After the paragraph was done, I sketched out a very rudimentary structure/flow chart of ideas.
What's most interesting, however, is that for this particular piece, some of my best nuggets have come from my more "editorial" notes -- where I comment on an author's style or rhetorical choices; or where I document my own difficulty in understanding a point, or in articulating an analysis (e.g., "This is a really tricky bit; I can avoid this argument or try to walk the reader through it"). But there was one particular essay I was reading where, after a very promising first two-thirds, the author abruptly stops a deep and thorough philosophical meditation to show "an example" of the philosophy in action in some obscure film I had never heard of. I was frustrated, because it seemed he was so close to something really profound in the piece, and then there was this ... example.
It made me think of Heidegger's prolonged deconstruction of Trakl's "A Winter Evening," and also my concluding chapter in Posthuman Suffering, where I went on a somewhat meandering analysis of one scene in A.I.: Artificial Intelligence. So I put myself back in that space and tried to think about what was going on when I watched that film and when I was composing that chapter. The short answer was: a lot. Actually, it was that particular scene in that film, as well as the ATM epiphany in DeLillo's White Noise, which became what I thought were seminal moments: seeds for the larger book. But after all of this reading, writing, dreaming, stammering, and procrastinating, my thinking is beginning to turn and I'm realizing that "seed" is very much the wrong word; and that, perhaps, through an elusive temporal sleight-of-hand (or is that "sleight-of-mind"?), what we see in "perfect examples" of our theories are not examples at all -- or at least not examples of what we think they're examples of.
So as I work through this, I'll be bumping into things and becoming even more inarticulate.
As for the blog, I'm not sure if I'll be updating during the writing of the actual chapter. I'm playing that by ear. So if you don't hear from me until October, you'll know why.
[This is the final installment of the Aokigahara posts]
"Studying how people co-exist with nature is part of environmental research. I was curious why people kill themselves in such a beautiful forest. I still haven't found an answer to that."
Sometimes my literary theorist upbringing can really do me a disservice. I was initially going to jump all over this statement and say that Hayano is actually not looking to see how people co-exist, simply because he is finding the things that seem not to belong to nature. After all, it is the objects themselves which lead Hayano either to decomposing corpses (which, as per the end of the video, are transformed into objects or markers of pity, "Sometimes I feel sorry for them"), or to nothing. But even if nothing is found, Hayano populates the empty space with a narrative of what might have happened. By no means am I faulting him for this, or even criticizing him for it in an academically snarky way. On the contrary, this is what humans do. This is how our brains operate. And I believe that this narrative-making is actually the manifestation of a truly human instinct. All living things have a survival instinct. But if we're going to really figure out what makes humans specifically human, it would be our unique way of creating narratives (whether literary as in the creation of myths, or scientific in the creation of theories and postulates). I also think that the way in which humans utilize objects is an aspect of that narrative.
What Hayano is studying is our co-existence with nature, and, by his own admission, he cannot seem to reconcile the beauty of the forest with what he sees as the ugliness of decomposition. Instinctively, we should see decomposition as ugly: decomposing corpses are toxic and can pollute water and food supplies. They can attract vermin and other scavengers which are detrimental to health. Yet, one of the ongoing markers of "humanity," or at least advanced thought, has been ritualized burial. And what is ritualized burial but an attempt to re-integrate the body with the earth, and simultaneously ameliorate a sense of loss? The two are not diametrically opposed. We memorialize the dead as a way to simultaneously remember (bring to mind) and forget (return the body to the earth, and mythically, return the soul to whence it came, or liberate the life-force). But we don't really want to forget, do we? We need that act of burial to mark an end, to find closure, and to leave a remnant behind which can focus our memories when we need them to be focused.
Now, let's think of this in more posthuman terms. To circumvent the deleterious effects of decomposition, we find elaborate ways to either preserve or dispose of the dead. Whether we choose a "green" burial and commit the body naked into a pit to foster decomposition; or we burn it to ash, make the ash into a diamond, and wear it around our necks; or we mummify the dead to be unearthed millennia later and put on display in a museum, they are all, essentially, the same action: a reintegration into the landscape as object, and an integration of loss into the conceptual landscape of self. Mourning is a reconstitution of self in light of an absence. Instinctively, I want to jump to the emotional/affective loss. But I'm going to resist that urge again and focus on the physical loss.
The closer I am to the person who has died, the more likely their physical presence is attached to the idea of them. In fact, I can actually get a visceral response when I think about the people whom I love the most simply not being there physically. In the topography of everyday life, the physical presence of others around us is more important, I believe, than even emotional connection. I think that existentialism may have done us a disservice in that it has elevated the concept of the consciousness to a point where it becomes unduly synonymous with the emotional, intellectual, and conceptual self. We become so focused on getting over a loss on a conceptual level, that the sheer weight of physical absence is overlooked. If thinking, then, is truly distributed over the specific topological spaces we occupy (as per Andy Clark's work), then the physical absence of a person with whom we've shared a specific space would have a profound effect on the mechanism of thinking itself. The process of cognition would occur with a major piece missing, literally. One would be thinking with a piece of his or her mind missing.
One last bit from Hayano to bring this home:
"I think the way we live in society these days has become more complicated. Face-to-face communication used to be vital, but now we can live our lives being online all day. However, the truth of the matter is we still need to see each other's faces, read their expressions, hear their voices, so we can fully understand their emotions. To coexist."
To some extent, every physical object that constitutes our immediate, regular environments is a part of the thinking self -- literally. A truly distributed cognition system consists of the biological brain, body, and physical objects within a person's specific living environment. One can say that the machine through which one "co-exists" with others virtually also makes up part of that cognition system, but somehow, that virtual presence is qualitatively different than having a face-to-face, "real life" interaction with someone. A distributed cognition might explain why an online, virtual presence "just isn't the same" as the "live" alternative.
Aokigahara, and the suicides therein, gives us an admittedly extreme way to recharacterize the lebenswelt (or life-world: our lived experience in our specific, individual space and time). I don't pretend to be able to get into the minds of these individuals, or the inherent anguish they have experienced. But as someone familiar with the impact of loss (especially the kind involved with suicide), I know it is the intricacies of the physicality of loss which often remain unexplored or de-emphasized. If we miss the role of physicality in the tragedy of death, we'll be even less inclined to see the role it has in our everyday interaction with the world around us.
The old house was a one-floor, low-ceilinged, 70s, wood-paneled pre-fab. One bathroom. Two bedrooms. We converted the master bedroom into a study, where the two of us would work. The second bedroom became our master bedroom -- and we shoehorned various dressers and bureaus in there along with our full-sized bed. All but the kitchen and two other walls were covered in dark wood paneling. The house itself was conveniently located. The rent was low. We had a yard. When all was said and done, it wasn't a bad temporary place. We figured that within five years, we'd either get other jobs and move elsewhere, or establish roots here and buy a house of our own. As soon as my wife was tenured, we realized that our next move would be to a house of our own in town.
As soon as the two of us reached the mutual conclusion that we were in Gunnison for the long haul, the house suddenly felt much, much smaller. We'd bang into doors, doorknobs, and walls. We'd get frustrated by our lack of space (and privacy) in the shared study. We found it increasingly difficult to keep the place clean. What was once a sheltering little island in the midst of uncertainty had become cramped, dark, and annoying. In one of our deeper intellectual conversations, we realized that we had "outgrown" the house. I think my exact words to my wife were "we're bigger than this house now." In retrospect, I should have said, "our thinking is bigger than this house now."
Suddenly, the term "big ideas" became a bit more literal. My wife was about to become the Chair of my department, and I was on the verge of some major changes to the Philosophy program -- as well as moving full-steam ahead on tenure. The things we were thinking about -- logistical, professional, personal, and intellectual -- had a broader scope. And that might explain why the house with which we ended up falling in love (something you're not supposed to do) was brand new, had an open floor plan, and featured dramatically high ceilings, sweeping angles, and an interior flooded with daylight. We never envisioned ever liking a place like this. We had always been attracted to darker homes with lots of nooks and crannies, filled with alcoves and hidden spaces. Luckily, all the houses like that we saw in town required at least $100,000 worth of remodeling. We stepped into the new house on a whim and some advice from a co-worker. As we walked in, I expected my wife to immediately hate it, since neither of us was really into the open floor-plan model. But as I turned and saw her more wide-eyed than I'd seen her in years, walking and making a complete 360 while staring at the cathedral ceiling, I thought we might be onto something.
I have thought a lot about whether or not the effect of living in the new space was just an emotional response. It was our first house. It was so dramatically different than our old one. We had budgeted for new furniture as well -- so there was just such a sense of new-ness to everything; of course we'd be more psychologically happy. It had been a very long and hard road through grad school. We had "made it." On top of that, our building on campus had just received a complete renovation. So every space in which we worked or thought was completely different than it had been.
But, for me at least, there was a clarity in my thinking that I hadn't had before. I was able to think about more things without getting too freaked out. Even the fact that we had just written the largest check of our lives and committed to a mortgage that my feeble math skills said we could afford didn't send me into any panic attacks. I had a better perspective. And the scope of that perspective seemed to grow steadily as we settled in at the new place. In retrospect, my thinking changed most dramatically in my capacity to make connections. I was now more able to connect my own research with class material. I was also better able to help students dealing with their own topics in my philosophy classes. I was even seeing improvements in my pedagogy, and seemed to spontaneously emerge from a few "teaching ruts" into which I knew I had fallen. There was -- for lack of a better word -- a different texture to my thinking.
It's this desire to describe that "texture" -- the topography of thinking in material spaces -- which is currently driving my work. This is also why Andy Clark's work resonates so strongly with me now. I don't think the new living/working space(s) I occupy are affecting my thinking in the way one object affects another. I think that the spaces I'm occupying comprise my thinking itself. I'm working with the idea that what we call a "self" is actually woven into the material spaces our bodies occupy. What we call thinking is as contingent upon topological spaces traditionally located "outside" of the self as it is on our biological bodies.
So these last three posts have served as a kind of extended introduction to how I got where I am. Time to move beyond the explication. And there's so much beyond it.
After I wrote Posthuman Suffering, I wasn't exactly sure where to go from there. Since the core material of the book was my dissertation, and my dissertation took years (and years and years) to write, my ideas were already evolving from my key argument that technology is more of an ontology than an epistemology. What I hadn't realized at the time was that my own idea of "ontological" was heavily influenced by an existential perspective. The world "out there" was always already filtered through the consciousness. So, as I'm so fond of saying in my philosophy classes, "the world is out there in our consciousness."
But after I gained some distance from the book, and after teaching several classes and having a lot of very good class discussions with the students in those classes, I began to feel that this "primacy of consciousness" often presented a conceptual brick wall of sorts, especially when it came to otherness. In Heidegger's The Question Concerning Technology, instrumental technology (that is to say, technological artifacts themselves -- aka, "stuff") and technology-as-concept are very quickly separated. As I've said in my book, Heidegger implies that "the technological" is itself an epistemology. It is a way that humans "know" the world. At the time, I fully supported this opinion, but even then knew there was a bit more to it. Given the state of technology at the time of his writing, though, I doubt that Heidegger could have come to any other conclusion. The ubiquity of virtual, "always on" technology (I'll get into that in another post) was not yet visible to him. Yet, taking into account the ideas of Donna Haraway in A Manifesto for Cyborgs and N. Katherine Hayles's great How We Became Posthuman, I started orbiting around the idea that how we "are" -- our "Be-ing" -- is itself shaped by the technological.
What I couldn't see then was that I was still putting consciousness first. How we express the self is dictated by our technological systems -- but in a more traditionally existential way. I had located that expression of self in terms of mindedness, and not something greater. Like many existentialists, I gave myself a pass by always inserting some disclaimer about the physicality of the "wetware of the brain" (Hayles' excellent phrase). No matter how many times I said it, though, there was always something gnawing at me. Materialism became that pea under the mattress. Simply cordoning off a more materialist perspective into the biological body was not enough. There was just too much stuff. And that stuff had more than an affective pull on us.
I was able to keep all of that at bay for quite a while, actually. I had a Philosophy program to help develop, maintain, and grow; classes to teach; and tenure to worry about. But then, around the time that my wife and I bought our first house, I could no longer ignore that gnawing feeling. We had been living in the same rental for six years before we moved into our new place. It was the move into the new space -- a dramatically different space than our old rental -- which affected not only my thinking, but the way in which that thinking unfolded.
That's when I started thinking about the shape of thoughts.