Tuesday, June 17, 2014

Looking #Throughglass, Part 1 of 3: Practicalities, Temporalities, and Pre-nostalgia

My Google Glass "review" of course became something else ... so I've broken it down into three separate entries. Part 1 looks primarily at the practical aspects of Glass, based on my own hands-on use. Part 2 will examine the ways in which Glass potentially integrates us into the "internet of things."  Finally, Part 3 will be more of a meditation on the expectations that present-day technology like Glass instills, and the topologies of interface.

And a bit of a disclaimer to any Glass power-users who may stumble upon this blog entry: I'm a philosopher, and I'm critiquing Glass from a very theoretical and academic perspective. So read this in that context. The technological fanboy in me thinks they're an awesome achievement.

Now, carry on.

I think the reason that my Google Glass entry has taken so long has nothing to do with my rigorous testing, nor with some new update to its OS. It's a question of procrastination, fueled by an aversion to critiquing something I so badly wanted to like. I should have known something was up when, in every Google Glass online community in which I lurked, examples of how people actually used Glass consisted of pictures of their everyday lives, tagged "#throughglass." It became clear early on that I was looking for the wrong thing in Glass: something that would immediately and radically alter the way in which I experienced the world, and would more seamlessly integrate me with the technological systems I use. That was not the case, for two reasons: 1) the practical -- as a technological artifact, Glass's functionality is limited; and 2) the esoteric -- it caused a kind of temporal dissonance for me, in which its potential usurped its use.

I'll boil down the practical issues to a paragraph for those not interested in a more theoretical take on things. For me, Glass was a real pain to use -- literally. While I appreciate that the display was meant to be non-intrusive, its position in a quasi-space between my normal and peripheral vision created a lot of strain. It also didn't help that the display sits on the right side and, unfortunately for me, my left eye is dominant. That could explain much of the eye strain I was experiencing. But still, having to look to my upper right to see what was in the display was tiring. Not to mention that the eye-positioning is very off-putting for anyone the wearer happens to be around: conversation is instantly broken by the wearer's perpetual glancing to the upper right, which looks even more odd to the person with whom one is speaking. The user interface consists of "cards" which can be swiped through using the touch-pad on the right temple of Glass. The series of taps and swipes is actually very intuitive. But the small display means that only a very limited amount of virtual "desktop" is visible at any given time, and the more apps that are open, the more swiping one has to do. Once Glass is active, the user "gets its attention" by saying "okay Glass," and then speaking various -- limited -- voice commands. The bulk of Glass's functionality is voice-based, and its voice recognition is impressive. However, there is a limited number of commands Glass will recognize. Glass is able to perform most of the functions of "Google Now" on a smartphone, but not quite as well, and it lacks a more intuitive visual interface through which to see the commands being performed. In fact, it seems to recognize fewer commands than Google Now, which was a difficult shift for me to make given my frequent use of the Google Now app. Battery life is minimal. As in, a couple of hours of heavy use, tops. One might be able to squeeze six out of it if used very, very sparingly.
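(For the developer-minded: those "cards" are what Glassware pushes onto the wearer's timeline, typically through Google's Mirror API. Below is a rough sketch of what inserting a single card looks like -- the access token is a hypothetical placeholder standing in for a full OAuth 2.0 flow, and this is only an illustration of the card model, not code from my own use of Glass.)

import requests

# Rough sketch: inserting a single "card" (timeline item) through the
# Mirror API. ACCESS_TOKEN is a hypothetical placeholder; real Glassware
# obtains one through a full OAuth 2.0 authorization flow.
ACCESS_TOKEN = "ya29.EXAMPLE"

card = {
    # Each timeline item is rendered as one swipeable card on the display.
    "text": "Hello from the timeline -- this appears as a single card."
}

resp = requests.post(
    "https://www.googleapis.com/mirror/v1/timeline",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    json=card,
)
print(resp.status_code)  # a 2xx status would indicate the card was created

Swiping through the timeline on the device is essentially just moving through that list of items one card at a time, which is why the more apps pushing cards, the more swiping one has to do.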

On the plus side, the camera and video functionality are genuinely useful. Being able to snap pics, hands free (via a wink!), is very convenient. As a Bluetooth headset tethered to a phone, it's quite good, and it is an excellent tool for shooting point-of-view pictures and video. I cannot stress enough that there are many potential uses and applications for Glass in various professions. In the hospitality industry, the medical field, even certain educational settings, Glass would be a powerful tool, and I have no doubt that iterations of Glass will be fully integrated into these settings.

For my own use, practically speaking, Glass isn't. Practical, that is. No. It's not practical at all.  But in that lack of practicality lies what I see as Glass’s most positive asset: its recalibration of our technological expectations of integration, connection, and control.

Yes, in Glass we get a hint of what is to come. As a fan of all things Google, I think it was brave of them to be the first to make this technology available to the public. Why? Because no one who did this kind of thing first could ever hope to get it right. This is the type of technology which is forged in the paradoxical fires of disappointment from technological skeptics and fanatical praise from early adopters who at first forced themselves to use Glass because they had so much faith in it. Those true "Glass Explorers" (a term coined by Google) integrated Glass into their daily lives despite its limitations.

But as I started using Glass, I experienced a kind of existential temporal distortion. When I looked at this pristine piece of new technology, I kept seeing it through my eyes two to five years into the future. Strangely, one of the most technologically advanced artifacts I've held in my hands made me think, 'How quaint. I remember when this was actually cutting edge.' It was a very disorienting feeling, and I couldn't shake it. The feeling persisted the more I used it. I found myself thinking, 'Wow, this was clunky to use; how did people ever use this effectively?' I was experiencing the future in the present, but in the past tense.

Temporal dissonance. My #throughglass experience wasn't one of documenting the looks of curious strangers, or of my dog bounding about, or even of a tour of my office. Mine was pure temporal dissonance. The artifact felt already obsolete. By being a tangible proof of concept, it had dissolved itself into the intangible conceptual components which would be seamlessly integrated into other artifacts. #Throughglass, I was transported to the future, but only because this artifact felt like it was already a thing of the past. If you have an old cell phone around -- whether it's an early Android smartphone or an older flip phone -- take it out. Hold it. Then turn it on, and try to navigate through its menus. That awkwardness, that odd, almost condescending nostalgia? That partially describes what I felt when I started using this advanced technology. And this was a new feeling for me. The only term I can think of to describe it is "pre-nostalgia."

There were other factors which, for me, worked against Glass. Aesthetically, I could not get over how Glass looked. For the amount of technology packed into them, I think the engineers did an excellent job of making them as non-intrusive as possible. But still, in my opinion, they looked positively goofy. I promised myself that I would only wear them around campus -- or in certain contexts. But there really isn't a context for Glass ... yet. Until a company or an industry starts a wide-scale adoption of Glass (which will only come when developers create the right in-house systems around its use, such as integrating it into various point-of-sale platforms for the hospitality industry, or into the medical records systems used by doctors), Glass will remain delightfully odd to some, and creepily off-putting to others. I wonder whether the first people who wore monocles and then eyeglasses were looked upon as weirdly as those who wear Glass in public today. Probably.

Personally, this aspect really disturbed me. Was it just my vanity that was stopping me from wearing them? When I did wear them in public, most people were fascinated. Was I just being too self-conscious? Was I becoming one of those people who resists the new? Or was I just never meant to be in the avant-garde, not psychologically ready to be at the forefront of a shift in culture?

Some possible answers to that in Part 2, "The Steel Against the Flint, Sparking Expectation."

Tuesday, April 15, 2014

Updates: Tenure, Google Glass, and a Very Positive Review

Just some updates of a personal, professional, and academic nature.

First of all, a couple of weeks ago, I was awarded tenure and promotion!  So after that little bit of news, I took a bit of a breather from everything (aside from classes, grading, and my usual semester duties).  Tenure is an interesting feeling; definitely a good one, but much more loaded than I originally thought it would be.

Secondly, a few months back, the office of Academic Affairs at Western State Colorado University generously contributed partial funds to help me acquire Google Glass. I've been using them pretty regularly and am now composing what I hope will be a series of posts about them. Just a warning, though: these will not be a standard "user review."  You can get that anywhere.  I've been thinking long and hard about how I was going to write about Glass. But, as usual, some classroom discussion regarding technology inspired me, and now I know exactly how I'm going to go about my blog posts regarding Glass. Despite the fact that we're entering that chaotic end-of-the-semester rush, I'm hoping to get the first post out within the next week or so.

Finally, I am really happy about a recent review of Posthuman Suffering and the Technological Embrace. Even though the book came out in 2010, it's good to see that it still has legs. This particular review appeared in The Information Society: An International Journal. All of the reviews have been positive, but this one really seemed to understand my intentions on a much deeper level. So I'm really pleased about that.

So yes, although I've been quiet, good things have been happening. And look for my Google Glass entries soon!

Monday, January 20, 2014

The Internet of Things and the Great Recalibration

I've been playing catch-up since my tenure application and my class preps for the Spring semester, but I've finally been able to re-engage with my usual sites, and all of the fantastic content in my Google+ communities.

One thing that's been coming up in various iterations is the concept of the "internet of things." In a nutshell, the term loosely (and, I think, perhaps a little misleadingly) refers to a technological interconnectivity of everyday objects -- clothes, appliances, industrial equipment, jewelry, cars, etc. -- now made possible by advances in microprocessor miniaturization. This idea has been around for quite some time, and has been developing steadily even though the general public might have been unaware of it. RFID chips in credit cards, black boxes in cars, even traffic sensors and cameras: they have all been pinging under our general perception for years -- almost like a collective unconscious. But now, various patterns and developments have aligned to bring the concept itself into public awareness. While WiFi or even internet access is far from ubiquitous, we are becoming "connected enough" for these technologies to gain traction and -- as Intel, Google, and a host of other tech companies hope -- become something we expect. And I believe it is this expectation of connectedness which will once and for all mark the end of an antiquated notion of privacy and anonymity.

Yes, I know. Snowden. The NSA. Massive black and grey operations poring through every text we send, every dirty little Snap we take, every phone call we make, and email we send. But I believe the bluster and histrionics people are going through are actually the death-throes of an almost Luddite conception of what "privacy" and "information" actually are. 

This thought came to me long ago, but I wasn't able to really articulate it until this past semester, when I was covering Kant in my intro to philosophy course. In the landscape of Western philosophy, Kant created a seismic shift with a very subtle, even elegant, yet really sneaky rearticulation of one specific philosophical concept: a priori knowledge. Instead of characterizing a priori knowledge as an innate concept like infinity or freedom, he presented it as an innate capacity or ability. That is to say, the concept of "freedom" isn't in itself a priori, but our capacity to reason about it is. Of course, it's more complicated than that, but generally speaking, my students come to realize that Kant essentially recalibrated the spectrum of a priori/a posteriori knowledge. And Western philosophy was never the same again. The potential relativism of empiricism was contained, while the solipsisms of rationalism were dissipated.

I believe that we are witnessing a similar seismic shift in our conception of what information is and, by extension, what we consider to be "private." Only history will be able to determine whether this shift was a leap or an evolutionary creep forward. Regardless, I'm hoping that as more material objects become woven into the fabric of the data cloud, it will act as a way to recalibrate people's thoughts on what exactly information is and, more specifically, how that information doesn't "belong" to us.

Our information is as susceptible to "loss" or "destruction" as our bodies are. Our information can degrade just as our bodies can. We can "protect" "our" information only insofar as we can protect our bodies from various dangers. Granted, the dangers can be very different; however, we have as much chance of keeping our information private as we have of keeping our "selves" private. Of course, biologically, in the phenomenal world, we can live "off the grid" and be as far away from others as possible. But the cost is paranoia and a general distrust of humanity: essentially, a life of fear. Similarly, there is no way to completely protect our information without also withdrawing it completely from a technified world. But again, at what cost? I think it's one similar to that paid by all of those who sit in their compounds, armed to the teeth, waiting for a collapse of civilization that will never come.

The internet of things, as it evolves, will slowly grow our expectations of connectivity. We will opt in to smart cars, clothes, houses ... and I'm sure one day, trees, forests, animals ... that seem to intuitively adapt to our needs. From the dawn of time, we have always altered the physical world to our needs. What we see happening today is no different, except that we now have a discourse with which to self-reflexively question our own motives. I've always wondered whether there was some kind of "cusp generation" of early humanity who distrusted cultivation and agriculture as a ceding of humanity's power to nature itself: an old hunter looking at his grandchildren planting things, thinking that they were putting too much faith, reliance, and attention in dirt -- and, probably, that the things they grew would somehow eventually kill them (and I'm sure there was a sense of pure satisfaction for the paleo-Luddite when someone choked to death on a vegetable, or got food poisoning).

Our expectations of connectivity will overcome our attachment to "private" information. The benefits will outweigh the risks; just as the benefits of going outside outweigh the benefits of being a hermit. 

I'm not saying that we should start waving around our social security numbers or giving our bank account numbers to foreign princes who solicit us over spam. We don't walk into a gang zone waving around cash, or dangle our children in front of pedophiles. We must protect our "information" as much as we can, while realizing that reasonable safeguards do not -- by any stretch of the imagination -- equal anonymity. If we wish to be woven into an internet of things, then we must more actively recalibrate our notions of "privacy" and even "anonymity." And given the historical development of civilization itself, we will cede aspects of privacy or invisibility in order to gain a greater sense of efficacy. An internet of things that more efficiently weaves us into the world of objects will heighten that sense of efficacy. It already has. When our cars customize themselves for us when we open the door, or when our houses adjust all manner of ambient conditions to our liking, or even when Google autocompletes our searches based on our geographical location or past searches, our sense of efficacy is heightened; as is our sense of expectation.

As for what this recalibration brings, I believe it will -- like other technological developments -- be part of a larger field of advancements which will allow us to become more ontologically ready for even bigger leaps forward. Perhaps after a few decades of a more widespread, almost ubiquitous internet of things, the emergence of an AI will actually seem more natural to us. In the more immediate future, I think it will ease fears of various transhuman values; augmentation of our biology will not be as threatening for some as it might be today.

In any movement, there is an avant-garde -- literally the "advance guard" or "fore-guard": the innovators and dreamers who experiment and push ahead. And often, like Kant, they allow cultures to recalibrate their expectations and values, and to rethink old notions and standards. Each time we use a credit card, click "I agree" on a terms-of-service box, or sign in to one of our various web accounts, we're pushing that advance ever forward ... and that's not a bad thing.

Thursday, December 12, 2013

Moments

Some colleagues and I at Western State Colorado University were interviewed by one of our students, Justin Sutton (Communication Arts major/Philosophy minor), for a mini-documentary. I think he did a great job with it, and I was honored to be involved.

I write so much about space, that time can sometimes be overlooked.  I think I'll have to remedy that soon.

Tuesday, December 10, 2013

Good news!

It's official!  The article I've been working on, "Posthuman Topologies: Thinking Through the Hoard" will be published in Lexington Books' upcoming anthology Radical Interface: Transdisciplinary Interventions on Design, Mediation, and the Posthuman!

Here's an abstract of the chapter:
No matter how deeply we push into posthuman explanations of technology as an underlying epistemology or ontology, “interface” remains a fundamental difficulty.  It represents a seemingly insurmountable topological space which exists between the human and the technological artifact which the human manipulates.  Posthumanism’s tendency to subsume the technological into the self as an epistemological or ontological modality reveals its vestigial humanist conceits:  (I know) through technology; or (I am) technologically.  To fully emerge from the humanist shadow, we must rethink “the human” as a function which occurs across substrates, non-anthropocentrically distributing cognition/selfhood/being through our topological environments.  Being, thinking, etc. are as contingent upon the spaces we occupy as they are upon the biological wetware of the brain.  This radical re-imagining of being requires us to start with a posthuman perspective and move on, rather than characterize the posthuman as the destination. 
To achieve this, we must -- perhaps counterintuitively -- re-emphasize technology as an artifact on equal discursive footing with the ontological self or “mind.”  When we start a posthumanist analysis of interface with the “object” or “artifact” rather than the human using it, we can more readily achieve a discourse of the “distributed self,” which takes the shape of the environment it occupies; a self which morphs across a spatio-temporal continuum and is as affected by phenomena traditionally considered “outside” of it as it is by the biological processes which sustain it.  In such a scenario, “interface” is rendered moot, and becomes a signifier for arbitrary and shifting designations of that which is and isn’t the self.  
I'm really excited about this one!  The collection is under contract right now and I'll post more details as to a publishing date and availability as soon as I know them.  

Sunday, December 1, 2013

The intensity of things -- a quick update

I apologize to everyone for the long gap between posts.  The truth is, I've had two very major things going on that have taken up all of my attention:  my application for tenure and the article I've been working on for an upcoming anthology.  The deadline for tenure was a month ago and the deadline for my final draft of my article was today.  Add to that my regular duties at my University and it made for a very hectic few months.

I'm keeping mum on the article until everything is finalized, since even when there is a press and contracts, things can shift unexpectedly.  I'll know more about the final status of the article in a few weeks.  As for tenure, I'll know the final result of that in the Spring.

I never take anything for granted.

But based on what I have written, I've been thinking a lot about "intensities" lately. That's the term I've been orbiting around post-article, and I'll be mulling it over through the next couple of weeks -- not just in the scope of the intensity of objects. I'm thinking more of the intensity that objects can help foster, or instantiate.

Yes, more about that is coming in my next posts -- and I promise there won't be such a wait for the next one.

Wednesday, August 21, 2013

Hide and Seek, Part 2: The Sweeping Insensitivity of This Still Life

Hide and seek.
Trains and sewing machines.
All those years they were here first.

Oily marks appear on walls
Where pleasure moments hung before.
The takeover, the sweeping insensitivity of this still life.

- Imogen Heap, "Hide and Seek"

Although inspired by Bennett's vital materialism, I'd like to think about why objects give us comfort from the position of "distributed cognition" which I've written about in previous entries (once again, owing much to Andy Clark's work). If we follow the hoarder scenario, there is that jarring moment when the extent of the hoard is thrust into the hoarder's perception by some outside actant. It's at this moment that the hoarder is forced to see these objects as individual things, and the overall seriousness and magnitude of the problem becomes apparent. I think that even non-hoarders get a glimpse of this when faced with having to move from one dwelling to another. Even people who aren't pack rats face the task of having to -- in some form or another -- account for each object that is owned. Dishes can't be packed away in sets. Books can't be moved in their bookcases. Everything has to be taken out, manipulated, and handled. The process is exhausting, no matter how healthy the individual is.

The objects become more "present" in their consecutive singularities. And in each instance, we have to make an effort to justify the existence of each object. And that's it, isn't it?  It is up to us to justify that this object is worth the effort of dusting off, packing, unpacking, etc.  In this way, the objects seem dependent upon us, since we are the ones burdened with bestowing purpose on those objects.  Objects cannot justify themselves.  They are, for lack of a better term, insensitive. We, however, are sensitive; and some of us, as explained by Bennett, are more sensitive than others. Perhaps this helps us to understand the hoarder mentality, especially the tears that are shed when something that seems to be non-functioning, decomposing junk is cast away.  The hoarder has become invested in the objects themselves -- and bestowed sensitivity upon them.  To throw them away is to abandon them.

But here we come dangerously close to the more existentialist viewpoint that it is the subject who bestows value upon the object:  that is to say, the act of bringing an object into being is to automatically bestow upon it value.  But, let's pause on the moment and process of "bringing." Etymologically speaking, "bring" implies a carrying. There must be a thing (even in the loosest sense) to be carried.  The object is at least as important as the subject.  Now, I don't want to just flip the model and say it's the thing which brings "I" into being, because that's nothing necessarily new.  Hegel implies a version of this in aspects of his Herrschaft und Knechtschaft  [Lord and bondsman ... or "master/slave"] dialectic.  And there really is no way around the "I": the embodied "I" is a kind of locus of a specific bio-cognitive process.  The particular I, of itself at the present moment, is made manifest by the phenomenal environment around it.

The objects by which we're surrounded are (not "represent", but phenomenally, functionally, are) a secondary material substrate through which our cognition is made manifest.  A "first" material substrate would be our physiological, embodied brains.  But, beyond that, our surrounding environments become an "outboard brain" which helps to carry our cognition.

I cannot stress enough that I'm not speaking metaphorically.  The phenomenal world we occupy at any given moment partially constitutes a larger, distributed substrate through which cognitive processes occur.  That environment is not "taken in" or "represented":  it constitutes the very mechanisms of cognition itself.  The process happens as instantly as thought itself, and is highly recursive -- meaning that the better and more efficiently a distributed cognition works, the less visible and more illusory it becomes. The more illusory the process, the greater our sense of autonomy.  So, if something goes awry at any point in the process (whether environmentally, emotionally, or physically ... or any cumulative combination of them), then our sense of autonomy is skewed in any number of directions: an inflated/deflated sense of one's presence, an inflated/deflated sense of the presence of objects, skewed senses of efficacy, body dysmorphic disorder, etc.  When the hoarder, or even the "normal" individual having to pack up his or her belongings, suddenly must account for each individual object, it causes a breakdown in the recursivity of the distributed cognitive process.  The illusion of an autonomous self is dissipated by the slowdown -- and eventual breakdown -- of the mind's capacity to efface the processes which constitute it.  Try to accomplish any complex task while simultaneously analyzing each and every physical, mental, and emotional point during the process:  the process quickly breaks down.  The process of constituting a viable self is quite possibly the most complex in which a human can be engaged.

What, then, are the implications for posthumanism? What I'm getting at here is something which a follower of mine, Stephen Kagen, so eloquently said in a response to my Aokigahara Forest entries: "my bias is that the artificial distinction between human, technology, and nature breaks down when examined closely."

Yes it does.  The distinction is, in my opinion, arbitrary.

Technologically speaking -- and from a posthuman standpoint -- this is very important. Current technological development allows us to manipulate matter at an unprecedentedly small scale: machines the size of molecules have already been created. It is theoretically possible for these machines to physically manipulate strands of DNA, or the structures of cells. The boundary between human and machine is now -- literally -- permeable. At the same time, developments in artificial intelligence continue, as we begin to see robots that learn by physically exploring the spaces around them.

The distinction does, in fact, break down. Posthumanism steps in as a mode of inquiry where the arbitrary condition of the subject/object divide is the starting point -- not the end point. Ontologically and ethically, the lack of boundary between self and other is no longer just a theoretical construct. It means viewing our environments on the micro- and macro-level simultaneously. We must fuse together what we have been warned must remain separate. The smartphone is no more or less "native" to the space I occupy than the aspens in the distance. Within my locus of apprehension, the landscape includes every "thing" around me: things that grow, breathe, reproduce, talk, walk, reflect light, take up space, beep, light up, emit EMFs, decay, erode, pollute, and pollinate. And the closer and more recursively these various objects -- or gestalts of objects -- occupy this sphere of apprehension, the more integrated they are into my cognition. They manifest my topological self.

And I think this also is where one can start to articulate the distinction between posthumanism and transhumanism.  More of that in another post.