Friday, June 20, 2014

Looking #Throughglass, Part 2 of 3: Steel Against Flint, Sparking Expectation

In my last post, I discussed the practicalities of Google Glass and explained the temporal dissonance -- or "pre-nostalgia" -- I experienced while using them, and I left off questioning my own position regarding the potential cultural shift that Glass gestures toward. This post picks up that discussion and moves toward the idea of the internet of things. If you haven't read the first post yet, it will definitely give this one some context ... and be sure to read the disclaimer!

I don't think that Google was going for immediate, wide-scale adoption and a sudden, tectonic paradigm shift with Google Glass -- though I suspect Google would have been thrilled if it had gone that way. Instead, I think there's something much more subtle (and smart) going on.

While Apple is very good at throwing a technological artifact out there, marketing it well, and making its adoption a trend in the present, Google seems to be out to change how we imagine the future at its inception point. Glass potentially alters our expectations of how we evoke the technological systems we use, eventually creating an expectation of ubiquity -- even for those who don't have it. I've noticed that Google rolls out technological systems and applications that are useful and work well, but that also make one think, "wow, now that I can do this, it would be even better if I could integrate it with that." And, at least in my experience, soon after (if not immediately), there's an app available that fulfills that need, albeit tentatively at first. And when an app maker really nails it, Google acquires them and integrates the app into its systems. For the Google-phobic, it is quite Borg-like.

And while resistance may be futile, it also sparks inspiration and imagination. It is the engine of innovation. I think that Glass wasn't so much a game-changer in itself as it was the steel against the flint of our everyday technological experiences. It was the first wave of a large-scale expeditionary force sent to map out the topography for the internet of things. In an internet of things, objects themselves are literally woven into the technological spectrum via RFID-like technology of varying complexity. I've written about it in this post, and there's also a more recent article here. By giving Glass this kind of "soft opening" -- not quite public, but not quite geared to hard-core developers either -- Google 1) allowed for even more innovation as people used Glass in ways engineers and developers couldn't foresee; and, more importantly, 2) made even non-users aware of a potential future where this system of use is indeed possible and, perhaps, desirable. It is a potential future in which a relatively non-intrusive interface "evokes" or "brings out" an already present, ubiquitous technological field that permeates the topology of everyday life. This field is like another band of non-visible light on the spectrum, like infrared or ultraviolet: it can't be seen with the naked eye, but the right kind of lens will bring it out and make visible that extra layer that is present.

Google had been working toward this with its "Google Goggles" app, which allowed the user to snap a picture with a smartphone, at which point Google would analyze the image and overlay relevant information on the screen. With Glass, however, the act of "projecting" or "overlaying" this information could potentially be smooth enough, fast enough, and intuitive enough to make it seem as if the information is somehow emanating from the area itself.

Now this is very important. In the current iteration of Glass, one must actively touch the control pad on the right temple of the frames. Alternately, one can tilt one's head backward to a certain degree and Glass activates. Either way, the gesture is an evocative one. The user actively brings forth information. Despite the clunky interface, there is never a sense of "projection onto" the world. It is definitely more a bringing forth. As previously stated, most of Glass's functions are engaged via a voice interface. I think this is where Glass's main flaw lies, but more on that in part three.

But, in a more abstract sense, all of Glass's functionality has an overall feel of tapping into an already-present technological field or spectrum that exists invisibly around us. There's no longer a sense that one is accessing information from "the cloud" and projecting or imposing that information onto the world. Instead, Glass potentially allows us to see that the cloud actually permeates the physical world around us. The WiFi or 4G networks are no longer conduits to information, but the information itself, which seems to be everywhere.

This is an important step in advancing the wide-scale cultural acceptance of the internet of things. Imagine iterations of this technology embedded in almost every object around us. It would be invisible -- an "easter egg" of technological being and control that could only be uncovered with the right interface. Culturally speaking, we have already become accustomed to such technologies with our cell phones. Even without wires, contact was still available. And when texting, sending pictures, emails, etc. became part of the cell/smartphone experience, the most important marker had been reached: the availability of data, of our information, at any moment, from almost anywhere. This is a very posthuman state. Think about what happens when the "no service" icon pops up on a cell phone; not from the intellectual side, but emotionally. What feelings arise when there is no service? A vague unease, perhaps? Or, alternatively, a feeling of freedom? Either way, this affective response is a characteristic of a posthuman modality. There is a certain expectation of a technological presence and/or connection.

Also at play is Bluetooth and home networking WiFi technology, where devices seem to become “aware of each other” and can “connect” wirelessly -- augmenting the functionality of both devices, and usually allowing the user to be more productive. Once a TV, DVR, Cable/Satellite receiver, or gaming console is connected to a home WiFi network, the feeling becomes even more augmented. Various objects have a technological “presence” that can be detected by other devices. The devices communicate and integrate. Our homes are already mini-nodes of the internet of things. 
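The device-discovery pattern described above can be sketched in miniature. This is only a toy illustration (all names are hypothetical, and this is not any real home-networking or Bluetooth API): each device announces its presence and capabilities to a shared registry, and any device can query the registry for peers offering a capability it wants to use.

```python
# Toy sketch of devices becoming "aware of each other" on a home network.
# All class and device names are hypothetical illustrations.
class Registry:
    """A shared directory where devices announce themselves."""

    def __init__(self):
        self._devices = {}

    def announce(self, name, capabilities):
        # A device registers its presence and what it can do.
        self._devices[name] = set(capabilities)

    def discover(self, capability):
        # Any peer can ask: which devices around me offer this capability?
        return sorted(n for n, caps in self._devices.items() if capability in caps)


home = Registry()
home.announce("living-room-tv", ["display", "audio"])
home.announce("game-console", ["display", "input"])
home.announce("phone", ["audio", "input"])

print(home.discover("display"))  # ['game-console', 'living-room-tv']
```

Real protocols (Bluetooth service discovery, DNS-SD/Zeroconf on WiFi) are far more involved, but the felt effect for the user is this: objects detect one another and offer to connect.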

Slowly, methodically, technologies are introduced which condition us to expect the objects around us to be "aware" of our presence. As this technology evolves, the sphere of locality will grow smaller and more specific. Consumers will be reminded by their networked refrigerator that they are running low on milk as they walk through the dairy aisle of a supermarket. Twenty years ago, this very concept would have seemed beyond belief. But now it is within reach. And furthermore, we are becoming conditioned to expect it.
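The dairy-aisle scenario can be sketched as a simple geofence check. This is a hedged sketch, not any vendor's actual system: assume a networked refrigerator that publishes inventory levels and a phone that knows its location; a reminder fires only when an item is depleted and the user is within a small radius of a store. Every name, coordinate, and threshold here is a hypothetical illustration.

```python
# Hypothetical sketch: fridge inventory + user location -> aisle reminder.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt


@dataclass
class Place:
    name: str
    lat: float
    lon: float


def distance_m(a: Place, b: Place) -> float:
    """Great-circle distance between two points in meters (haversine)."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))


def reminders(inventory: dict, user: Place, store: Place, radius_m: float = 50.0):
    """Out-of-stock items, surfaced only when the user is near the store."""
    if distance_m(user, store) > radius_m:
        return []
    return [item for item, qty in inventory.items() if qty == 0]


store = Place("supermarket", 40.7128, -74.0060)
user = Place("me", 40.7128, -74.0061)  # a few meters from the store
print(reminders({"milk": 0, "eggs": 6}, user, store))  # ['milk']
```

The point of the sketch is the coupling: neither the fridge nor the phone is remarkable alone; the "awareness" emerges when their data meet at the right place and moment.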

Next up: explorations of connection, integration, and control, and -- in my opinion -- Glass's biggest weakness (hint: it has nothing to do with battery life or how goofy it looks). Go check out the final installment: "Risk, Doubt, and Technic Fields."

Tuesday, June 17, 2014

Looking #Throughglass, Part 1 of 3: Practicalities, Temporalities, and Pre-nostalgia

My Google Glass "review" of course became something else ... so I've broken it down into three separate entries. Part 1 looks primarily at the practical aspects of Glass, based on my own hands-on use. Part 2 will examine the ways in which Glass potentially integrates us into the "internet of things."  Finally, Part 3 will be more of a meditation on the expectations that present technology like Glass instills, and on the topologies of interface.

And a bit of a disclaimer to any Glass power-users who may stumble upon this blog entry: I'm a philosopher, and I'm critiquing Glass from a very theoretical and academic perspective. So read this in that context. The technological fanboy in me thinks they're an awesome achievement.

Now, carry on.

I think the reason that my Google Glass entry has taken so long has nothing to do with rigorous testing, nor with some new update to its OS. It's a question of procrastination, fueled by an aversion to critiquing something I so badly wanted to like. I should have known something was up when, in every Google Glass online community in which I lurked, examples of how people actually used Glass consisted of pictures of their everyday lives, tagged "#throughglass." It became clear early on that I was looking for the wrong thing in Glass: something that would immediately and radically alter the way in which I experienced the world, and would more seamlessly integrate me with the technological systems I use. That was not the case, for two reasons: 1) the practical -- as a technological artifact, Glass's functionality is limited; and 2) the esoteric -- it caused a kind of temporal dissonance for me, where its potential usurped its use.

I'll boil down the practical issues to a paragraph for those not interested in a more theoretical take on things. For me, Glass was a real pain to use -- literally. While I appreciate that the display was meant to be non-intrusive, its position in a quasi-space between my normal and peripheral vision created a lot of strain. It also didn't help that the display is set on the right side; unfortunately for me, my left eye is dominant, which could explain much of the eye strain I was experiencing. But still, having to look to my upper right to see what was in the display was tiring. Not to mention that the eye-positioning is very off-putting for anyone the wearer happens to be around: conversation is instantly broken by the wearer's perpetual glancing to the upper right, which looks even more odd to the person with whom one is speaking. The user interface consists of "cards" which can be swiped through using the touch-pad on the right temple of Glass. The series of taps and swipes is actually very intuitive. But the lack of display space means that there is only a very limited virtual "desktop" at any given time, and the more apps that are open, the more swiping one has to do. Once Glass is active, the user "gets its attention" by saying "okay Glass," and then speaking various -- limited -- voice commands. The bulk of Glass's functionality is voice-based, and its voice-recognition is impressive. However, there is a limited number of commands Glass will recognize. Glass is able to perform most of the functions of "Google Now" on a smartphone, but not quite as well, and it lacks a more intuitive visual interface through which to see the commands being performed. In fact, it seems to recognize fewer commands than Google Now, which was a difficult shift for me to make given my frequent use of the Google Now app. Battery life is minimal. As in, a couple of hours of heavy use, tops. One might be able to squeeze six out of it if used very, very sparingly.

On the plus side, the camera and video functionality are quite good. Being able to snap pics hands-free (via a wink!) is very convenient. As a Bluetooth headset tethered to a phone, it's quite excellent, and it is also an excellent tool for shooting point-of-view pictures and video. I cannot stress enough that there are several potential uses and applications for Glass in various professions. In the hospitality industry, the medical field, even certain educational settings, Glass would be a powerful tool, and I have no doubt that iterations of Glass will be fully integrated into these settings.

For my own use, practically speaking, Glass isn't. Practical, that is. No. It's not practical at all.  But in that lack of practicality lies what I see as Glass’s most positive asset: its recalibration of our technological expectations of integration, connection, and control.

Yes, in Glass we get a hint of what is to come. As a fan of all things Google, I think it was brave of them to be the first to make this technology available to the public. Why? Because no one who did this kind of thing first could ever hope to get it right. This is the type of technology forged in the paradoxical fires of disappointment from technological skeptics and fanatical praise from early adopters who at first forced themselves to use Glass because they had so much faith in it. Those true "Glass Explorers" (a term coined by Google) integrated Glass into their daily lives despite its limitations.

But as I started using Glass, I experienced a kind of existential temporal distortion. When I looked at this pristine piece of new technology, I kept seeing it through my eyes two to five years in the future. Strangely, one of the most technologically advanced artifacts I've ever held in my hands made me think, 'How quaint. I remember when this was actually cutting edge.' It was a very disorienting feeling. And I couldn't shake it; the feeling persisted the more I used it. I found myself thinking, 'wow, this was clunky to use; how did people ever use this effectively?' I was experiencing the future in the present, but in the past tense.

Temporal dissonance. My #throughglass experience wasn't one of documenting the looks of curious strangers, or of my dog bounding about, or even of a tour of my office. Mine was pure temporal dissonance. The artifact felt already obsolete. By its tangible proof of concept, it had dissolved itself into the intangible conceptual components which would be seamlessly integrated into other artifacts. #Throughglass, I was transported to the future, but only because this artifact felt like it was already a thing of the past. If you have an old cell phone around -- whether it's a past Android-based smartphone or an older flip phone -- take it out. Hold it. Then turn it on, and try to navigate through its menus. That awkwardness, that odd, almost condescending nostalgia? That partially describes what I felt when I started using this advanced technology. And this was a new feeling for me. The only term I can think of to describe it is "pre-nostalgia."

Personally, there were other factors which worked against Glass for me. Aesthetically, I could not get over how Glass looked. For the amount of technology packed into them, I think the engineers did an excellent job of making them as non-intrusive as possible. But still, in my opinion, they looked positively goofy. I promised myself that I would only wear them around campus -- or in certain contexts. But there really isn't a context for Glass ... yet. Until a company or an industry starts a wide-scale adoption of Glass (which will only come when developers create the right in-house systems around its use, such as integrating it into various point-of-sale platforms for the hospitality industry, or into medical records systems for doctors), Glass will remain delightfully odd to some, and creepily off-putting to others. I wonder if the first people who wore monocles and then eyeglasses were looked upon as weirdly as those who wear Glass in public today. Probably.

Personally, this aspect really disturbed me. Was it just my vanity that was stopping me from wearing them? When I did wear them in public, most people were fascinated. Was I just being too self-conscious? Was I becoming one of those people who resists the new? Or was I just never meant to be in the avant-garde, not psychologically ready enough to be on the forefront of a shift in culture?

Some possible answers to that in Part 2, "Steel Against Flint, Sparking Expectation."

Tuesday, April 15, 2014

Updates: Tenure, Google Glass, and a Very Positive Review

Just some updates of a personal, professional, and academic nature.

First of all, a couple of weeks ago, I was awarded tenure and promotion!  So after that little bit of news, I took a bit of a breather from everything (aside from classes, grading, and my usual semester duties).  Tenure is an interesting feeling; definitely a good one, but much more loaded than I originally thought it would be.

Secondly, a few months back, the office of Academic Affairs at Western State Colorado University generously contributed partial funds to help me acquire Google Glass. I've been using them pretty regularly and am now composing what I hope will be a series of posts about them. Just a warning, though: these will not be a standard "user review."  You can get that anywhere.  I've been thinking long and hard about how I was going to write about Glass. But, as usual, some classroom discussion regarding technology inspired me, and now I know exactly how I'm going to go about my blog posts regarding Glass. Despite the fact that we're entering that chaotic end-of-the-semester rush, I'm hoping to get the first post out within the next week or so.

Finally, I am really happy about a recent review of Posthuman Suffering and the Technological Embrace. Even though the book came out in 2010, I'm happy that it still has legs. This particular review appeared in The Information Society: An International Journal. All of the reviews have been positive, but this one really seemed to understand my intentions much more intrinsically. So I'm really happy about that.

So yes, although I've been quiet, good things have been happening. And look for my Google Glass entries soon!

Monday, January 20, 2014

The Internet of Things and the Great Recalibration

I've been playing catch-up since my tenure application and my class preps for the Spring semester, but I've finally been able to re-engage with my usual sites, and all of the fantastic content in my Google+ communities.

One thing that's been coming up in various iterations is the concept of the "internet of things." In a nutshell, the term loosely (and, I think, perhaps a little misleadingly) refers to the technological interconnectivity of everyday objects -- clothes, appliances, industrial equipment, jewelry, cars, etc. -- now made possible by ever-smaller microprocessors. This idea has been around for quite some time, and has been developing steadily even if the general public has been largely unaware of it. RFID chips in credit cards, black boxes in cars, even traffic sensors and cameras: they have all been pinging under our general perception for years -- almost like a collective unconscious.  But now, various patterns and developments have aligned to bring the concept itself into public awareness. While WiFi or even internet access is far from ubiquitous, we are becoming "connected enough" for these technologies to gain traction and -- as Intel, Google, and a host of other tech companies hope -- become something we expect. And I believe it is this expectation of connectedness which will once and for all mark the end of an antiquated notion of privacy and anonymity.

Yes, I know. Snowden. The NSA. Massive black and grey operations poring through every text we send, every dirty little Snap we take, every phone call we make, and email we send. But I believe the bluster and histrionics people are going through are actually the death-throes of an almost Luddite conception of what "privacy" and "information" actually are. 

This thought came to me long ago, but I wasn't able to really articulate it until this past semester, when I was covering Kant in my intro to philosophy course. In the landscape of western philosophy, Kant created a seismic shift with a very subtle, even elegant, yet really sneaky rearticulation of one specific philosophical concept: a priori knowledge. Instead of characterizing a priori knowledge as an innate concept like infinity or freedom, he presented it as an innate capacity or ability. That is to say, the concept of "freedom," isn't in itself a priori, but our capacity to reason about it is. Of course, it's more complicated than that, but generally speaking, my students come to realize that Kant essentially recalibrated the spectrum of a priori/a posteriori knowledge. And Western philosophy was never the same again. The potential relativism of empiricism was contained, while the solipsisms of rationalism were dissipated.  

I believe that we are witnessing a similar seismic shift in our conception of what information is, and by extension, what we consider to be "private." Only history will be able to determine if this shift was a leap or an evolutionary creep forward. Regardless, I'm hoping that as more material objects become woven into the fabric of the data cloud, that it acts as a way to recalibrate people's thoughts on what exactly information is, more specifically, how that information doesn't "belong" to us. 

Our information is as susceptible to "loss" or "destruction" as our bodies are. Our information can degrade just as our bodies can. We can "protect" "our" information only insofar as we can protect our bodies from various dangers.  Granted, the dangers can be very different; however, we have as much chance of keeping our information private as we have of keeping our "selves" private.  Of course, biologically, in the phenomenal world, we can live "off the grid" and be as far away from others as possible. But the cost is paranoia and a general distrust of humanity: essentially, a life of fear.  Similarly, there is no way to completely protect our information without also withdrawing it completely from a technified world.  But again, at what cost?  I think it's one similar to that paid by all those who sit in their compounds, armed to the teeth, waiting for a collapse of civilization that will never come.

The internet of things, as it evolves, will slowly grow our expectations of connectivity.  We will opt in to smart cars, clothes, houses ... and, I'm sure, one day trees, forests, animals ... that seem to intuitively adapt to our needs. From the dawn of time, we have always altered the physical world to our needs.  What we see happening today is no different, except that we now have a discourse with which to self-reflexively question our own motives. I've always wondered whether there was some kind of "cusp generation" of early humanity that distrusted cultivation and agriculture as a ceding of humanity's power to nature itself: an old hunter watching his grandchildren planting things, thinking that they were putting too much faith, reliance, and attention in dirt -- and, probably, that the things they grew would somehow eventually kill them (I'm sure the paleo-Luddite felt a sense of pure satisfaction when someone choked to death on a vegetable, or got food poisoning).

Our expectations of connectivity will overcome our attachment to "private" information. The benefits will outweigh the risks; just as the benefits of going outside outweigh the benefits of being a hermit. 

I'm not saying that we should start waving around our social security numbers or giving our bank account numbers to foreign princes who solicit us over spam. We don't walk into a gang zone waving around cash, or dangle our children in front of pedophiles.  We must protect our "information" as much as we can, while realizing that reasonable safeguards do not -- by any stretch of the imagination -- equal anonymity. If we wish to be woven into an internet of things, then we must more actively recalibrate our notions of "privacy" and even "anonymity." And given the historical development of civilization itself, we will cede aspects of privacy or invisibility in order to gain a greater sense of efficacy. An internet of things that more efficiently weaves us into the world of objects will heighten that sense of efficacy. It already has. When our cars customize themselves for us as we open the door, or when our houses adjust all manner of ambient conditions to our liking, or even when Google autocompletes our searches based on our geographical location or past searches, our sense of efficacy is heightened -- as is our sense of expectation.

As for what this recalibration brings, I believe it will -- like other technological developments -- be part of a larger field of advancements which will allow for us to become more ontologically ready for even bigger leaps forward. Perhaps after a few decades of a more widespread, almost ubiquitous internet of things, the emergence of an AI will actually seem more natural to us. I think in the more immediate future, it will ease fears of various transhuman values; augmentation of our biology will not be as threatening for some as might be today.

In any movement, there is an avant-garde -- literally the "advance guard" or "fore-guard" -- the innovators and dreamers who experiment and push ahead. And often, like Kant, they allow cultures to recalibrate their expectations and values, and to rethink old notions and standards. Each time we use a credit card, click "I agree" on a terms-of-service box, or sign in to various web accounts, we're pushing that advance ever forward ... and that's not a bad thing.

Thursday, December 12, 2013

Moments

Some colleagues and I at Western State Colorado University were interviewed by one of our students, Justin Sutton (Communication Arts major/Philosophy minor), for a mini-documentary.  I think he did a great job with it, and I was honored to be involved.




I write so much about space that time can sometimes be overlooked.  I think I'll have to remedy that soon.

Tuesday, December 10, 2013

Good news!

It's official!  The article I've been working on, "Posthuman Topologies: Thinking Through the Hoard" will be published in Lexington Books' upcoming anthology Radical Interface: Transdisciplinary Interventions on Design, Mediation, and the Posthuman!

Here's an abstract of the chapter:
No matter how deeply we push into posthuman explanations of technology as an underlying epistemology or ontology, “interface” remains a fundamental difficulty.  It represents a seemingly insurmountable topological space which exists between the human and the technological artifact which the human manipulates.  Posthumanism’s tendency to subsume the technological into the self as an epistemological or ontological modality reveals its vestigial humanist conceits:  (I know) through technology; or (I am) technologically.  To fully emerge from the humanist shadow, we must rethink “the human” as a function which occurs across substrates, non-anthropocentrically distributing cognition/selfhood/being through our topological environments.  Being, thinking, etc. are as contingent upon the spaces we occupy as they are upon the biological wetware of the brain.  This radical re-imagining of being requires us to start with a posthuman perspective and move on, rather than characterize the posthuman as the destination. 
To achieve this, we must -- perhaps counterintuitively -- re-emphasize technology as an artifact on equal discursive footing with the ontological self or “mind.”  When we start a posthumanist analysis of interface with the “object” or “artifact” rather than the human using it, we can more readily achieve a discourse of the “distributed self,” which takes the shape of the environment it occupies; a self which morphs across a spatio-temporal continuum and is as affected by phenomena traditionally considered “outside” of it as it is by the biological processes which sustain it.  In such a scenario, “interface” is rendered moot, and becomes a signifier for arbitrary and shifting designations of that which is and isn’t the self.  
I'm really excited about this one!  The collection is under contract right now and I'll post more details as to a publishing date and availability as soon as I know them.  

Sunday, December 1, 2013

The intensity of things -- a quick update

I apologize to everyone for the long gap between posts.  The truth is, I've had two very major things going on that have taken up all of my attention:  my application for tenure and the article I've been working on for an upcoming anthology.  The deadline for tenure was a month ago and the deadline for my final draft of my article was today.  Add to that my regular duties at my University and it made for a very hectic few months.

I'm keeping mum on the article until everything is finalized, since even when there is a press and contracts, things can shift unexpectedly.  I'll know more about the final status of the article in a few weeks.  As for tenure, I'll know the final result of that in the Spring.

I never take anything for granted.

But based on what I have written, I've been thinking a lot about "intensities" lately.  And that's the term I've been orbiting around post-article.  I'll be mulling that over through the next couple of weeks -- not just in the scope of the intensity of objects.  I'm thinking more of the intensity that objects can help foster, or instantiate.

Yes, more about that is coming in my next posts -- and I promise there won't be such a wait for the next one.