Wednesday, August 20, 2014

Perseverance and Writing Regularly, Part 2 of 2: Reading

I've gotten a fair amount of positive feedback on my previous post. And it seems that the blog post I was working on regarding connection, interface, and control has morphed into a much larger project: at least another article, but more likely, the start of an outline for my second book. It became clear pretty quickly that a blog wasn't going to be a workable platform for the material; it's deep, and needs footnotes and long arcs of theoretical unpacking. With that said, however, Posthuman Being will be a perfect space for me to explore singular aspects of what I'll be covering. Anyway, in the process of doing some very preliminary research and preparation for the project, I started thinking about an aspect of the writing process that is all too often glossed over: reading.

I started thinking about a concept that had come up a few times in my research, but always as a tangent or "scenic route" in my line of argument. I had visited it a few times in grad school, but had more pressing matters at hand. And in my later projects, other deadlines always loomed, which precluded anything but the straightest line through my points. I remembered a book that I had "read" in grad school by one of our faculty. And then, in that procrastinatory way, I did some googling to find out where the author was now teaching. This particular academic was a bit of an interdisciplinary chameleon, taking the shape of whatever department or institution housed him. As far as I knew, he wasn't in my field, so to speak ... at least, the last time I had checked he wasn't. Until I found his faculty page.

And there it was: a description of his current work. And in a scant few sentences was the very preliminary and tentative thesis I had come up with as I was outlining my latest project. I had almost forgotten that terrible punch-in-the-gut feeling of seeing what you thought was an original idea already solidly articulated in someone else's words. With a grimace, I actually said, out loud, "FUCK! I've been SCOOPED!"

For those not familiar with academia, or just getting started as grad students, getting scooped is what happens when that "original" idea you thought you had -- generally the one on which you had psychologically banked all of your aspirations AND which had, for a brief moment, made you feel like you weren't a total failure -- turns out to have been put forward by someone else. The first time it happens, it's just a terrible feeling. But you learn pretty quickly that getting scooped is actually a good and necessary stage in research. It's a lesson in humility, but, on the flip side, it can also be quite an affirming thing. It means that the idea you've come up with not only had legs at some point, but that other people have already researched it, and thus have a treasure trove of new sources nicely listed for you in the bibliographies and works cited pages of their books and articles. I like to think of those bibliographies as maps. Others think of them as archaeological layers that require excavation. Regardless, the only path through them is to read them.

When I returned to this particular scholar's book, it was apparent that I had found some of it useful in grad school, since I recognized my own lightly-penciled notes in the margins. The reading process in grad school was always such a rushed affair. While I can't speak for all of my classmates at the time, I'm pretty sure most of us awkwardly and greedily blew through the majority of the books we read, trying to figure out where we would situate ourselves in our specific discursive landscapes. Other times, we scavenged, looking for the one or two quotes that said what we needed them to say to make us look not-so-dumb.

As Ph.D. students, we were required to have three reading lists, each consisting of 50-75 sources. One list covered the theoretical/philosophical foundations of each grad student's field of inquiry; the second, the primary sources that demonstrated the particular movement, trend, or cultural phenomenon the student was explaining; and the third, a more far-reaching collection that pointed toward the potential future of the field. Theoretically, we were expected to have carefully read every one of those 150-225 sources. And comprehensive exams -- which had both written and oral components and covered about 3-4 weeks -- consisted of questions based on those sources. In my program, once we finished the comprehensives, we were then "ABDs" and had license to start our dissertations, which, in most cases, were born of the written exams. "Reading for exams" was sometimes tedious, and it was very difficult, for me at least, to stay balanced between taking detours based on other sources found in my main sources and staying focused on my supposed research field. It was like navigating Scylla and Charybdis: on one side, you could be sucked down into the abyss of tangential reads; on the other, you could become too focused on a narrow question and have gaping holes in your research.

I remember little from those days, other than alternating bouts of deep anger and abject despair. When I leafed through the books I was reading at the time, I found all of my very heavy penciled underlining -- scratched into the pages with marginal notes consisting of "NO!" and "DEAR GOD!" and "WRONG." But then, in other books, the notes were far less angry. The lines, far less dark. The only place where there was any kind of emphasis was in the asterisks I put in the margin to designate that section as very important, or the repeatedly circled page number that indicated THIS was a place to concentrate. This was a part to transcribe word for word, longhand, onto an index card or sheets of note paper (I preferred the latter, since I liked to annotate my notes and then annotate my annotations). Those books rose to the surface. I found myself referring back to them more often. They became a center. The other, lesser books orbited around them. Some of those books had other books orbiting around them like moons. It became a solar system of thought. And I began to see concentric patterns through various ideas. Some of them intersected predictably, as dictated by the sources themselves. But others, no. I saw intersections that others didn't.

The notes from those books became more involved and complicated. I started cross-referencing more and more. And I realized, fleetingly at first, that there was something that, maybe, possibly ... perhaps ... these authors and legendary experts might have, if I wasn't mistaken ... overlooked? Something as simple as a "slippage" of a term. Why did they use this word THIS way in this part of the paragraph but use it differently later? And why does the other author seem to be avoiding this "complicated" issue? And why has that author deferred analysis of an idea for "later research"?

And in my head (and on paper ... lots of paper), I started to sketch out, figuratively and literally, a network of connections among all of the most pertinent texts I'd read and the "gaps" and "slippages" therein. There were maybe a dozen or so books, articles, and chapters that were tightly woven together in the center of that much larger network. They were conversing with each other in ways that -- apparently -- only I could see. And I had to explain how they did, exhaustively, writing for hours spanning dozens of pages that would only be jettisoned later. But it didn't matter, because I had worked out the relationships among them. I explained how they were connected. And then I turned to the why ... taking on the role of meta-critic and fleshing out the academic and cultural reasons why those particular texts should be put in conversation with each other.

And then a member of my dissertation committee asked, pointedly, "so what?"

Okay, allow me the indulgence of using a stream of consciousness here to represent a longer, gut-wrenching process -- with certain particulars purposely left out to better illustrate the flow:

BECAUSE IT'S AWESOME, that's why! How cool is it that these texts intersect in these ways?! I mean, can't you see how awesomely cool this is?! How all of this work has shown that all of these things are connected and that one can keep connecting the connections to see how they're all connected?! I mean, really, I've been working so hard on this to uncover these connections that there can't possibly be a reason why I'd spend all of this time and nearly push myself to the edge of death just to show you connections that don't matter ... at all ... to anyone ... but me. Kill me. I suck. You kind of suck, too. Because you let me go on and on doing all of this work as I found connections to connections and created all of this discourse just to be shown that it doesn't matter at all. Why did you let me keep going? As if YOU know anything about this anyway. What was the last text by A you ever read? And, by the way, your reading of B and C is completely off because you didn't notice that each is defining their terms slightly differently, which shows a cultural predilection toward X cultural belief, which is an unexplored area of Y that could explain why Z important intellectual/cultural/academic crisis is making people scratch their heads. Oh, wait. That. There is that. Huh. That is kind of awesome, actually.

Again, the above intellectual, emotional, and psychological process spanned days -- if not weeks -- of consideration. But at the end of it, and with another committee member's incredible advice, there it was: a research question; a working thesis. Ideally, it should have formed during my classwork. But sometimes -- especially for me -- it didn't work out that way. Still, my classes had given me a very solid theoretical foundation, which helped me read more soundly, and with a better sense of where the text at hand belonged within the broader discourse. To some extent, however, from that point forward, everything that I read was a means to an end: always within the shadow of the research question/thesis I had posed -- all toward helping me answer the withering "so what" question.

Psychologically, there also came a point when I had to take a stand and stop reading. This was a kind of compounded temptation, because: 1) I had become so used to analyzing texts through my thesis that anything and everything could be connected to it, which, narcissistically, made me want to read more -- because it was just my own ideas being reflected back to me; 2) if I kept reading, I didn't have to write. And writing is hard. Having the willpower to say "no" and stop ordering more books was one of the most difficult things to do. Especially since, once in a while, it was necessary to pick something else up -- especially if it was one that was named prominently by other authors in the network of texts I had. In my case, writing about technology made it extremely difficult to stop reading, because every year there were new innovations, and since posthumanism was an emerging field, there was always a new journal article or book on the horizon. Stopping to write was like pulling off the highway and watching everyone else pass me by. That's where all that perseverance comes in.

I recently met with a student getting her entrance essay ready for grad school. She was struggling with something related to getting scooped: feeling like everything had already been said, and that there was nothing new to say. I told her that I like to think about academic discourse in cartographical terms: each field is a larger territory on a map. There is a network of highways through them. When you first start out, it's like getting onto a gigantic eight-lane highway: a mass of people are all moving along well-travelled roads in the same direction. But as you travel farther, you turn off the main highway onto smaller ones, with less traffic. Eight lanes become four; four lanes become two. The further you go, the less travelled the road, until the pavement ends and you're on a dirt road. Even further, you end up on foot and on a trail. Some even go off-trail and explore. There will almost always be a few others around, but really, you can't get to your master's thesis or your doctoral dissertation without having to travel some well-worn roads to get to your little clearing within a larger territory. I didn't just teleport to posthumanism (neither did any of the others in my field). It started with the superhighway of English, then the still-giant highway of literary theory, which forked into philosophy, then into existentialism, which sent me on a very scenic path through the philosophy of technology. And then there I was, along with just a few other souls who got to posthumanism through very different routes, all of them marked by the roads becoming smaller and less travelled. And as for the occasional moments of getting lost, backtracking, and jumping back on major roads, well, that's part of it, too.

This November will mark nine years since I finished my Ph.D. The work didn't stop when I was done. I honed my dissertation down into something much better. I once again had to answer the "so what" question, this time to my editor. I was also in the unique position of writing on something with which the editorial board was unfamiliar. I got an email that basically said: we're on the fence. Can you convince us why we should publish it? I crafted an email in one sitting that was my strongest writing ever. It was clear, focused, and disciplined. And, with the exception of a few fixed typos, it became, verbatim, the preface of my book. It was, essentially, the answer to "so what?"

Now it's clear that I'm starting to write another book. But within the process of writing is reading. The process is different this time, in that I already know the answer to the gut-wrenching "so what?" question. And, man, it's awesome. But I also know that I have to read so much more. I can only cover the same ground for so long, and the evolution of my ideas needs mass quantities of discourse to feed it. The great part is that now I don't have to read under grad school or probationary faculty pressures. This journey is definitely my own.

Sometimes I think that people outside of academia believe that academics produce thought out of nothing. That we just walk around, see something interesting, say, "hey, I'm gonna write all about this," and sit down and start writing another book or article. But to produce, one also must consume. And an academic must consume far more, proportionally, than he or she produces. Just pick up any academic book and look at the bibliography. Yes, all of that is what that particular author had to read in order to give you the one book you're holding in your hand. All of those texts were points on a map.


Monday, July 28, 2014

Perseverance and Writing Regularly, Part 1 of 2: (A Post on the Writing Process)

The main function of this blog has always been as a space where I can tease out certain ideas that may or may not be ripe for deeper, more academically solid exploration. I also envisioned it as a place where I can talk about the writing process itself, especially since several of my readers are or were students of mine. I am currently revising a multi-part post on connection, interface, and control. So don't worry, I'll be back to technological themes soon. In the space between finishing up the first draft and beginning a major revision, I had a moment to reflect more deeply upon being granted tenure and promotion. I wasn't sure if I was going to actually post this entry, but after a really interesting dinner conversation with a colleague and some exchanges with students, I decided to give it a go.

A challenge that comes with working at a teaching -- rather than a research -- institution is that my main focus is the classroom. With a 4/4 teaching load (that's 4 classes per semester; whereas at a research university it may be 2/2, 2/3, or some variation of that depending on rank, seniority, grants, etc.), it's not easy to find the time to research or write. Summer breaks are that time. Winter breaks used to be that time as well, but the brutally short break between the Fall and Spring semesters at my current institution makes that difficult. Summers are also the time for class preparation, and for simply catching up with every project at home that I couldn't get done during the academic year. Add to that visits from family and friends, travel/vacations, and whatever "emergency" committee or task force one is called to serve on, and the time can fill up very quickly.

With tenure comes a little bit of a break. An "invitation" to be on a committee during a break is just that, rather than a veiled requirement (i.e. "this will be really good for your tenure application"). So, for the first time since 2005, I have finally had the time, and motivation, to write on a regular basis again. I am somewhat ashamed to admit that I haven't written daily for an extended period of time since I was writing my dissertation. I definitely wrote, but it came in desperate spurts among grading, writing committee reports, class preparation, and the week or two before deadlines. There were always summers, but it's amazing how quickly I fell into bad habits of waiting until the very end of the break to actually write. The idea that the pressure of a deadline will "force" one to get things done is a myth students and some academics are very good at perpetuating. Accomplished scholars who say they write that way may be revising that way, but they aren't composing that way.

The past two months have been a revelation in regard to my writing process. Since I already have a piece coming out soon in this anthology, I am under no deadlines. I have kept campus commitments to an absolute minimum. I have been able to make writing a priority in my day. It is my first project in the morning at least 5 days a week, and I write for a minimum of an hour. The first product of this was my previous 3-part Google Glass review. But it's in the post(s) on which I'm currently working that the real benefits of prioritizing my writing and research have become apparent. I have started to work through some of the more complicated aspects of technological/human interface that I wasn't able to in my first book. Of course, much of that comes from just knowing more based on the reading I've done since then, and being able to make more connections to established philosophy thanks to all of the classes I've taught between the last book and now.

It's clear that the level at which I'm working now is much deeper than in my previous pieces. I attribute that to my slowed-down and regular approach. Sometimes I think that my background in English works against me: no matter how much I know about process and writing, no matter what advice I give to students regarding giving oneself time to write, there is still that romanticized vision of the exhausted writer "birthing" some kind of tome that comes only when one occupies the borders of sanity. And after that overwrought, cathartic blast, we hope that there is something salvageable in the mess.

But after a couple of months of slow, steady, and regular writing, I find that 6 hours of writing spread out over 5-6 days is just so much better than 6 hours of writing done in a single, coffee-fueled, trembling day (or night). The embarrassing part is that, when I look back on it, it was the former, more methodical technique that allowed me to finish my dissertation, rather than the latter. The main difference was that I was writing for two or three hours at a time then. Some days there was literally nothing in the tank, and most of my time was spent thinking through a particularly difficult problem.  Other days, I would labor over one or two paragraphs for the full session. There were also times when I would write voluminously in those hours. It varied, but it was a set, scheduled process. Doing it every day allowed me to finish. Success came with an awareness of my process and a commitment to finishing it up. In retrospect, my writing process matured. It made me ready for the next level not just in my writing, but in my career.

As flawed as academia may be, there is something to be said for its "hierarchy." As I've said to every student whom I've counselled regarding grad school and Ph.D. work, the dissertation is not simply about carving out a niche in a given field, or just being able to answer the "so what?" question when you've come up with something new. Writing a dissertation is a process designed to push an academic to his or her limits intellectually, emotionally, and professionally. It is a crucible, an arena, a battlefield, and a very personal hell, where you are perpetually harassed by your own demons while still at the mercy of circumstance (your advisor decides to take a sabbatical? Too bad. One of your committee members decides to work at another institution? Oh well. You or your partner are diagnosed with something horrific? Tough break). If there is one word that captures the point of the entire process, it's perseverance.

For a perilously long time, I was ABD: "all-but-dissertation." This is an informal term (yes, there are those ABDs who actually want to put this as a suffix on their business cards, thinking it carries weight), which means that all the requirements for the Ph.D. have been fulfilled except for the dissertation. It is when the student is solely responsible for his or her progress. It is the most dangerous time for any Ph.D. student, because it is when the perseverance I mentioned is most tested. The negative psychological backslide that can occur during the ABD phase is insidious. I found myself wondering why "they just couldn't let me finish," and lamenting "but I just want to teach!" I began to question and deride the entire Ph.D. process as antiquated, elitist, and unfair. I amassed a pile of teaching experience, however, desperately using it as an excuse not to face my writing ... and also hoping that magically, the dozens of courses I had taught would somehow make my lack of a Ph.D. something that search committees would ignore. I became satisfied with less and less at the teaching jobs I did have: I was taking jobs out of guilt -- at least if I made money and was 'busy,' it meant that I hadn't stalled. I even thought I could make a permanent career out of my adjunct, ABD teaching. When my wife completed her Ph.D., I offset my jealousy with even more magical thinking: yes, that was her path. For what I want to do, I don't really need the Ph.D. at all. 

But after truly hitting bottom, and being faced with some very serious ultimatums (one having to do with being dropped from my Ph.D. program), I rebuilt from the ground up. I sought counseling, and uncovered deeply entrenched issues that were hindering me. I faced my fears and actively engaged my dissertation committee. I started writing regularly. It took 3 years of rebuilding before I was on track again. But with the help of committed and compassionate faculty, an excellent therapist, and a partner who had been through the process herself and really, really loved me, I found my rhythm. I found a way to put all of the work I had done previously to use. Circumstances also finally aligned toward the end of those 3+ years. My wife was offered a tenure-track position 2,000 miles from where we lived. If I had any hope of being employed at the same place, I would have to finish my dissertation within a year of moving there. Within 4 months of moving, and despite the chaos of unpacking and settling into a new place, I wrote regularly, drawing from every false start and red herring in my research, and finished. Eleven years had passed from the day I took my first graduate-level class.

Getting the Ph.D. is more of a personal milestone than a professional one, because having a Ph.D. doesn't guarantee anyone a job. Ever. In most academic fields, the tenure-track job market is abysmal, and repeated runs through the job search process can be utterly demoralizing. The Ph.D. does, however, improve one's chances dramatically -- and in many academic fields, it is an absolute necessity for finding a tenure-track job. And having one "in hand" versus "defending in August" does make one more attractive to a search committee. But when that tenure-track job is found, the professional gauntlet truly begins. I'm lucky to be at a teaching university because the tenure process is five, rather than seven (or more), years. For those five years, I was an Assistant Professor (aka "probationary" or "junior" faculty). In a nutshell, that means that at the end of any one of those five years, I could have been let go without any reason given. And that does happen sometimes. So during those probationary years, any junior faculty member will take on any and every project that is thrown his or her way: extra committee work, extra-curricular activities, moderating a club, volunteering, etc. And if a trusted mentor, department chair, or any administrator shows up at your office door with an "opportunity that would really support your tenure," you say yes. Of course, at a teaching institution, you are being judged primarily by your teaching evaluations, with research and professional development a slightly distant second. But never mind that all of those commitments mean less time for your classroom preparation, or that you have to leave students in the dust to run to a meeting. You balance it. You do it because you've already proven that you can balance yourself during the dissertation process. You dig deep. You persevere.

I did my fair share of work, and with the help of a particularly insightful administrator, I chose my committee service well. I did take a few risks here and there, and had one or two minor -- and ultimately resolved -- disagreements with colleagues, but I pushed myself. I squeezed in some writing where I could, and managed to completely revise my dissertation and get it published and then write two more articles: one was rejected at the very last stage; the other is the one included in the new collection. I applied for tenure with a strong portfolio. Putting that portfolio together was much more time consuming and emotionally draining than I expected. That process, plus all of my other duties, pushed me to a point that was very similar to the final weeks of dissertation writing: that place where you have to once again dig very deep for that last bit of motivation and energy. But I could look back to my dissertation process and know that I had it in me to finish. I had excellent support from my spouse, colleagues, and even students. All of my supporters reiterated a variation of a theme: "You earned this." Yes. I had earned it, and I would persevere.

Being granted tenure is not an "end." Just like getting the Ph.D. or first tenure-track position is not an end. It is a new process of self-evaluation and professional development, but one that comes with the privileges one has earned in the process. There is more freedom to engage in both research and course development. Student evaluations -- while still very important -- no longer hold such psychological weight. There is room for experimentation and trying out things that one has always wanted to try. Projects can be more long-term. Professional evaluations are set at longer intervals. And, as I hinted at earlier, one can be more selective as to the types of service in which one engages. With rank and seniority, there are more opportunities for leadership in committees and campus-wide initiatives. However, as an "associate professor," there is still one more level above. This is very institution-specific. At a research-level university or college, being promoted to "full professor" is contingent upon criteria particular to each institution. Some require major publishing and/or research achievements; others require commendable teaching. Regardless, if one wants to move on from "Associate Professor" to "Professor," it is another round of evaluation and assessment. One earns more opportunities. And yes, the system is flawed. Institutionalized sexism, racism, classism, etc. persist even in the most liberal-leaning academic edifices. But at least academia tends to be more aware of these issues than many other places.

When I sat down to write this post, I intended to make it solely about my writing process -- not necessarily about the journey up the academic ladder. But for me at least, the two are absolutely intertwined. I gained confidence from my writing successes, which bolstered my ownership of my own expertise, which, in turn, pushed me to take more risks through my writing. The process is circular and iterative: it reinforces an identity. I try to think back to the moment in grad school when I turned myself around. But really, it was a series of moments over the course of weeks ... perhaps months. One incremental advance after another. But they compounded. And with each iteration, I became slightly more confident in my voice and my subsequent identity. This process really never ends.

It took tenure, promotion, and a summer without any major commitments for me to gain the perspective necessary to verify what I always suspected: one's identity is something that is always a process. I experienced my greatest failures and made my worst decisions when I lost sight of that fact, and passively allowed circumstance and my environment to shape me without actively engaging in the process of that shaping. Each success reinforces my identity as an academic. That identity isn't static. It is an ongoing and active process of evolution, with every stage being a regeneration.

I rather like where I am now. Yet, I will persevere.


Wednesday, June 25, 2014

Looking #Throughglass, Part 3 of 3: Risk, Doubt, and Technic Fields

In my last post, I discussed the expectations that Google Glass creates in relation to the internet of things. In this final section, things will take a slightly more philosophical turn by way of Glass's paradoxical weakness.

Connection. Integration. Control. They are related but they are not the same. One of the pitfalls of a posthuman ontology is that the three are often confused with each other, or we believe that if we have one, we automatically have one or both of the others. A connection to any kind of system (whether technological, social, emotional, etc. or any combination thereof) does not necessarily mean one is integrated with it, and neither connection nor integration will automatically instill a sense of control. In fact, a sense of integration can have quite the opposite effect, as some begin to feel compelled to check their email, or respond to every signal from their phone or tablet. Integrating a smart home or child tracker into that system can, at times, exacerbate that very feeling. Explicating the finer differences among connection, integration, and control will be the subject of another entry/series. For now, however, we can leave it at this: part of the posthuman experience is to have an expectation of a technological presence of some kind.

The roots of the word “expect” lie in the Latin expectare, from ex- “thoroughly” + spectare “to look” (etymonline.com). So, any time we are “looking for” a technological system of any kind -- whether because we want to find a WiFi network (vending machine, ATM, etc.) or because we don't want to find any obvious sign of a technological device or system (save for the most rudimentary and simple necessities) -- we are, generally, in a state of looking for or anticipating some kind of technological presence.

Wide-scale adoption of certain technologies and their systems of use is a very important aspect of making a specific technology ubiquitous. Think about email. For each of us, when did email and the internet become important -- if not the main -- means of retrieving and storing information, communication, and entertainment? How much of the adoption of that technology came about by what seemed to be an active grasping of it, and how much by what seemed to be something foisted upon us in a less voluntary way? The more ubiquitous the technology feels, the more we actively -- yet unconsciously -- engage with it.

And in the present day, we expect much, much more from the internet than we did before. Even in other technological systems: what do we expect to see on our cars? What will we expect to see in 10 years’ time? 

In this context, the successful technology or technological system is one that creates expectations of its future iterations. Much like the film Inception, all a company needs to do is plant the idea of a technology in the collective consciousness of a culture. But that idea needs to be realistic enough to occupy that very narrow band between the present and the distant future, making the expectation reasonable. For example, cost-effective flying cars may be feasible in the near future in and of themselves, but we also know that wide-scale adoption of them would be contingent upon a major -- and unrealistic -- shift in the transportation infrastructure: too many other things would have to change before the technology in question could become widespread.

In this case, Glass -- subtly, for now -- points to a future in which the technological presences around us are evoked at will. Most importantly, that presence (in the internet of things) is just "present enough" now to make the gap between present and future small enough to conceptually overcome. It is a future that promises connection, integration, and control harmoniously fused, instantiated by an interface that is ubiquitous yet non-intrusive.

In the present, in terms of everyday use, this is where Glass falls short for me. It is intrusive. Aesthetically, Google has done all it can given the size limitations of the technology, but the user interface is not fluid. I think its reliance on voice commands is at fault. Although the voice recognition present in Glass is impressive, there are sometimes annoying errors. But errors aside, using voice as the main user control system for Glass is a miss. Voice interaction with a smartphone, tablet, or computer can be quite convenient at times, but -- especially with smartphones -- it is infrequently used as the primary interface. No matter how accurate the voice recognition is, it will always lack what a touch-interface has: intimacy.

Now this may seem counterintuitive. Really, wouldn't it be more intimate if we could speak to our machines naturally? In some ways, yes, if we could speak to them naturally. Spike Jonze’s Her presents an incredible commentary on the kind of intimacy we might crave from our machines (yet another entry to be written ... so many topics, so little time!).  But the reality of the situation, in the present, is that we do not have that kind of technology readily available. And voice interfaces -- no matter how much we train ourselves to use them or alter our speech patterns so that we’re more easily understood -- will always already lack intimacy for two main reasons. 

First, voice commands are public: they must be spoken aloud. If there is no one else in the room, the act of speaking aloud is still, on some level, public. It is an expression that puts thoughts “out there.” It is immediate, ephemeral, and cannot be taken back.  Even when we talk to ourselves, in complete privacy, we become our own audience. And sometimes hearing ourselves say something out loud can have a profound effect. A technological artifact with a voice interface becomes a “real” audience in that it is an “other” to whom our words are directed. Furthermore, this technological other has the capacity to act upon the words we say. These are, after all, voice commands.  A command implies that the other to whom the command is directed will enact the will of the speaker. Thus, when we speak to a device, we speak to it with the intent that it carry out the command we have given it. But, in giving commands, there is always a risk that the command will not be carried out, either because the other did not hear it, understand it, or -- as could be a risk in future AI systems -- does not want to carry it out. Of course, any technological device comes with a risk that it won't perform in the ways we want it to. But it’s the public nature of the voice command that makes that type of interface stand out and augments its failure. I propose that, even subconsciously, there is a kind of performance anxiety that occurs in any voice interface. With each utterance, there is a doubt that we will be understood, just as there is always an underlying doubt when we speak to another person. However, with another person, we can more naturally ask for clarification, and/or read facial expressions and nonverbal cues in order to clarify our intentions. 

The doubt that occurs with voice commands is only exacerbated by the second reason why voice interfaces lack intimacy, something more rooted in the current state of voice recognition systems: the very definite lag between the spoken command and when the command is carried out. The more “naturally” we speak, the longer the lag as the software works to make sense of the string of words we have uttered. The longer the lag, the greater the doubt. There is an unease that what we have just said will not be translated correctly by the artifact. Add to this the aforementioned performance anxiety, and we have the ingredients for that hard-to-describe, disconcerting feeling one often gets when speaking to a machine. I have no doubt that this lag will one day be closed. But until then, voice commands are too riddled with doubt to be effective. And, all philosophical and psychological over-analysis aside, these lags get in the way. They are annoying. Even when the gaps are closed, I doubt their closure will ameliorate the more deeply rooted doubt that occurs when commands are spoken aloud, publicly.

For now, the real intimacy of interface between human and machine comes in the tactile. Indeed, the visual is the primary interface and the one which transmits the most information. However, on the human side, the tactile = intimacy. Thus, when trying to navigate through menus on Glass, the swipe of a finger against the control pad feels much more reliable than having to speak commands aloud. Having no middle ground in which to quickly key in information is a hindrance. If we think about the texts we send, how many of them are we willing to speak aloud? Some, clearly, contain private or sensitive information. Keying in information provides the illusion of a direct connection with the physical artifact, and, in practical terms, is also “private” in that others can’t easily determine what the individual is keying into his or her screen.

Whether or not this aspect of privacy is at the forefront of our minds as we text doesn't matter; it is in our minds when we text. We trust that the information we're entering into -- or through -- the artifact is known to us, the artifact itself, and a potential audience. If we make a mistake in typing a word or send a wrong command, we can correct it rather quickly. Of course, there is still a potential for a bit of anxiety that our commands will not be carried out, or understood. But the “failure” is not as immediate or public, in most cases, as it would be with a command or message that is spoken aloud. Repeating unrecognized voice commands, by contrast, is time consuming and frustrating.

Furthermore, a physical keying in of information is more immediate, especially if the device is configured for haptic feedback. Touch "send," and one can actually “feel” the acknowledgement of the device itself. Touching the screen is reinforced by a visual cue that confirms the command. Add any associated sounds the artifact makes, and the entire sequence becomes a multisensory experience. 

At present, technology is still very artifactual, and I believe that it is the tactile aspect of our interactions with technological systems which is one of the defining factors in how we ontologically interact with those systems. Even if we are interacting with our information in the cloud, it is the physical interface through which we bring that information forth that defines how we view ourselves in relation to that information. Even though Glass potentially “brings forth” information in a very ephemeral way, it is still brought forth #throughglass, and once it has been evoked, I believe that -- in the beginning at least -- there will have to be a more physical interaction with that information somehow. In this regard, I think the concept video below from Nokia really seems to get it right. Interestingly, this video is at least 5 years old, and this clip was part of a series that the Nokia Research Center put together to explore how mobile technology might evolve. I can't help but think that the Google Glass development team had watched this at some point.

[Embedded video: Nokia Research Center "Mixed Reality" concept]

My first reaction to the Nokia video was this is what Glass should be. This technology will come soon, and Glass is the first step. But Nokia’s vision of “mixed reality” is the future which Glass prepares us for, and -- for me -- highlights three things which Glass needs for it to be useful in the present:

Haptic/Gesture-based interface. Integral in Nokia’s concept is the ability to use gestures to manipulate text/information that is present either on the smartglass windows of the house, or in the eyewear itself. Even if one doesn't actually “feel” resistance when swiping (although in a few years that may be possible via gyroscopic technology in wristbands or rings), the movement aspect brings a more interactive dynamic than just voice. In the video, the wearer’s emoticon reply is sent via a look, but I would bet that Nokia’s researchers envisioned a more detailed text being sent via a virtual keyboard (or by a smoother voice interface).
Full field-of-vision display. This was my biggest issue with Glass. I wanted the display to take up my entire field of vision. The danger to this is obvious, but in those moments when I’m not driving, walking, or talking to someone else, being able to at least have the option of seeing a full display would make Glass an entirely different -- and more productive -- experience.  In Nokia's video, scrolling and selection is done via the eyes, but moving the information and manipulating it is done gesture-haptically across a wider visual field.
Volitional augmentation. By this, I mean that the user of Nokia Vision actively engages -- and disengages -- with the device when needed. Despite Google’s warnings to Glass Explorers not to be “Glassholes,” users are encouraged to wear Glass as often as possible. But there’s a subtle implication in Nokia’s video that this technology is to be used when needed, and in certain contexts. If this technology were ever perfected, one could imagine computer monitors being almost completely replaced by glasses such as these. Imagine for a moment what a typical day at work would be like without monitors around. Of course, there would be some as an option and for specific applications (especially ones that required a larger audience and/or things that could only be done via a touchscreen), but Nokia’s vision re-inserts choice into the mix. Although more immersive and physically present artifactually, the "gaze-tracking eyewear" is less intrusive in its presence, because engaging with it is a choice. Yes, engaging with Glass is a choice, but its non-intrusive design implies an “always on” modality. The internet of things will always be on. The choice to engage directly with it will be ours, just as it is your choice whether or not to check email immediately upon rising. Aside from the hardware, what I find most insightful here is the implication of personal responsibility (i.e. an active and self-aware grasping) toward technology.

If Google Glass morphed into something closer to Nokia’s concept, would people abuse it, wear it all the time, bump into things, get hit by cars, lose any sense of etiquette, and/or dull already tenuous social skills? Of course. But Nokia’s early concept here seems to be playing for a more enlightened audience. Besides, at this level of technological development, one could imagine a pair of these glasses being "aware" of when a person was ambulatory and defaulting to very limited functionality.

Overall, Glass is the necessarily clunky prototype which creates an expectation for an effective interface with the internet of things. Although it may not be practical for me in the present, it does make me much more receptive to wearing something that is aesthetically questionable so that I might have a more effective interface when I choose to have it. It is, however, a paradoxical device. Its non-intrusive design impedes a smooth interface, and the hyper-private display that only the wearer can see is betrayed by very public voice commands. Its evoking of the information provided by the internet of things is impeded by too much empty space.

But in that failure lies its success: it creates an expectation that brings technological otherness down from the clouds and integrates it into the very spaces we occupy. Over half a century ago, Martin Heidegger implied in The Question Concerning Technology that the essence of technology does not reside in the artifact, but in the individual’s own expectation of what the artifact or system would bring forth. He would be horrified by Glass, because it “sets in order” our topological spaces, objectifying them, and rendering them into information. The optimist in me would disagree, but only with the caveat that engaging with the “technic fields” that an internet of things would emit must be a choice, and not a necessity. That is to say, it is the responsibility of the individual to actively engage and disengage at will, much like the somewhat Hyperborean user depicted in Nokia’s Mixed Reality project.

Philosophically speaking, this type of technology potentially offers an augmented integration with our topologies. It highlights the importance of the physical spaces we occupy and the ways in which those spaces contribute to how and why we think the way we do. Used mindfully, such technologies will also allow us to understand the impact that our human presence has on our immediate environment (i.e. the room, house, building, etc. we occupy), and how those spaces affect the broader environments in which they are found. 

Now, will Glass just sit on my shelf from now on? No. I do have to say that more apps are being developed every day that increase the functionality of Glass. Furthermore, software updates from Google have made Glass much more responsive. So I will continue to experiment with them, and if the right update comes along with the right app, then I may, at some point, integrate them into my daily routine.

#Throughglass, however, the future is in the past-tense.


[I would like to express my appreciation and gratitude to Western State Colorado University and the faculty in Academic Affairs who made this possible by providing partial funding for obtaining Glass, and to the faculty in my own department -- Communication Arts, Languages, and Literature -- for being patient with me as I walked through the halls nearly bumping into them. The cyborg in me is grateful as well.]

Friday, June 20, 2014

Looking #Throughglass, Part 2 of 3: Steel Against Flint, Sparking Expectation

In my last post, I discussed the practicalities of Google Glass, and explained the temporal dissonance -- or "pre-nostalgia" -- I experienced while using them, and I left off questioning my own position regarding the potential cultural shift that Glass gestures toward. This post picks up that discussion, moving toward the idea of the internet of things. If you haven't read the first part yet, it will definitely give this post some context ... and be sure to read the disclaimer!

I don’t think that Google was going for immediate, wide-scale adoption resulting in a sudden, tectonic paradigm shift with Google Glass.  I think if it had gone that way, Google would have been thrilled. Instead, I think there’s something much more subtle (and smart) going on.

While Apple is very good at throwing a technological artifact out there, marketing it well, and making its adoption a trend in the present, Google seems to be out to change how we imagine the future at its inception point. Glass potentially alters our expectations of how we evoke the technological systems we use, eventually creating an expectation of ubiquity -- even for those who don't have it. I've noticed that Google rolls out technological systems and applications that are useful and work well, but that also make one think, “wow, now that I can do this, it would be even better if I could integrate it with that.” And, at least in my experience, soon after (if not immediately), there’s an app available that fulfills that need, albeit tentatively at first. And when that app maker really nails it, Google acquires them and integrates the app into their systems. For the Google-phobic, it is quite Borg-like.

And while resistance may be futile, it also sparks inspiration and imagination. It is the engine of innovation. I think that Glass wasn't so much a game-changer in itself as it was the steel against the flint of our everyday technological experiences. It was the first in a large-scale expeditionary force sent to map out the topography for the internet of things. In an internet of things, objects themselves are literally woven into the technological spectrum via RFID-like technology of varying complexity. I've written about it in this post, and there’s also a more recent article here. Giving Glass this kind of “soft opening” -- one that wasn't quite public but wasn't quite geared to hard-core developers -- 1) allowed for even more innovation as people used Glass in ways engineers and developers couldn't foresee; but, more importantly, 2) made even non-users aware of a potential future where this system of use is indeed possible and, perhaps, desirable. It is a potential future in which a relatively non-intrusive interface “evokes” or “brings out” an already present, ubiquitous, technological field that permeates the topology of everyday life. This field is like another band of non-visible light on the spectrum, like infrared or ultraviolet. It can’t be seen with the naked eye, but the right kind of lens will bring it out and make visible that extra layer that is present.

Google had been working on this with its “Google Goggles” app, which allowed the user to snap a picture with a smartphone, at which point Google would analyze the image and overlay relevant information on the screen. However, potentially with Glass, the act of “projecting” or “overlaying” this information would be smooth enough, fast enough, and intuitive enough to make it seem as if the information is somehow emanating from the area itself. 

Now this is very important. In the current iteration of Glass, one must actively touch the control pad on the side of the right temple of the frames. Alternatively, one can tilt one’s head backward to a certain degree and Glass activates. Either way, the gesture is an evocative one. The user actively brings forth information. Despite the clunky interface, there is never a sense of “projection onto” the world. It is definitely more a bringing forth. As previously stated, most of Glass’s functions are engaged via a voice interface. I think this is where the main flaw of Glass lies, but more on that in part three.

But, in a more abstract sense, all of Glass’s functionality has an overall feel of tapping into an already-present technological field or spectrum that exists invisibly around us. There’s no longer a sense that one is accessing information from “the cloud” and projecting or imposing that information onto the world. Instead, Glass potentially allows us to see that the cloud actually permeates the physical world around us. The WiFi or 4G networks are no longer conduits to information, but the information itself, which seems to be everywhere.

This is an important step in advancing the wide-scale cultural acceptance of the internet of things. Imagine iterations of this technology embedded in almost every object around us. It would be invisible -- an “easter egg” of technological being and control that could only be uncovered with the right interface. Culturally speaking, we have already become accustomed to such technologies with our cell phones. Without wires, contact was still available. And when texting, sending pictures, emails, etc. became part of the cell/smartphone experience, the most important marker had been reached: the availability of data, of our information, at any moment, from almost anywhere. This is a very posthuman state. Think about what happens when the “no service” icon pops up on a cell phone; not from the intellectual side, but emotionally. What feelings arise when there is no service? A vague unease perhaps? Or, alternatively, a feeling of freedom? Either way, this affective response is a characteristic of a posthuman modality. There is a certain expectation of a technological presence and/or connection.

Also at play are Bluetooth and home WiFi networking technologies, where devices seem to become “aware of each other” and can “connect” wirelessly -- augmenting the functionality of both devices, and usually allowing the user to be more productive. Once a TV, DVR, cable/satellite receiver, or gaming console is connected to a home WiFi network, the feeling becomes even more pronounced. Various objects have a technological “presence” that can be detected by other devices. The devices communicate and integrate. Our homes are already mini-nodes of the internet of things.

Slowly, methodically, technologies are introduced which condition us to expect the objects around us to be “aware” of our presence. As this technology evolves, the sphere of locality will grow smaller and more specific. Consumers will be reminded by their networked refrigerator that they are running low on milk as they walk through the dairy aisle in a supermarket. Twenty years ago, this very concept would have seemed beyond belief. But now, it is within reach. And furthermore, we are becoming conditioned to expect it.

Next up: explorations of connection, integration, and control, and -- in my opinion -- Glass's biggest weakness (hint, it has nothing to do with battery life or how goofy it looks). Go check out the final installment: "Risk, Doubt, and Technic Fields"

Tuesday, June 17, 2014

Looking #Throughglass, Part 1 of 3: Practicalities, Temporalities, and Pre-nostalgia

My Google Glass "review" of course became something else ... so I've broken it down into three separate entries. Part 1 looks primarily at the practical aspects of Glass, based on my own hands-on use. Part 2 will examine the ways in which Glass potentially integrates us into the "internet of things." Finally, Part 3 will be more of a meditation on the expectations which present technology like Glass instills, and on the topologies of interface.

And a bit of a disclaimer to any Glass power-users who may stumble upon this blog entry: I'm a philosopher, and I'm critiquing Glass from a very theoretical and academic perspective. So read this in that context. The technological fanboy in me thinks they're an awesome achievement.

Now, carry on.

I think the reason that my Google Glass entry has taken so long has nothing to do with my rigorous testing, nor with some new update to its OS. It's a question of procrastination, fueled by an aversion to having to critique something I so badly wanted to like. I should have known something was up when, in every Google Glass online community in which I lurked, the examples of how people actually used Glass consisted of pictures of their everyday lives, tagged "#throughglass." It became clear early on that I was looking for the wrong thing in Glass: something that would immediately and radically alter the way in which I experienced the world, and would more seamlessly integrate me with the technological systems which I use. That was not the case, for two reasons: 1) the practical -- as a technological artifact, Glass’s functionality is limited; and 2) the esoteric -- it caused a kind of temporal dissonance for me in which its potential usurped its use.

I'll boil down the practical issues to a paragraph for those not interested in a more theoretical take on things. For me, Glass was a real pain to use -- literally. While I appreciate that the display was meant to be non-intrusive, its position in a quasi-space between my normal and peripheral vision created a lot of strain. It also didn't help that the display is set on the right side. Unfortunately for me, my left eye is dominant, which could explain much of the eye strain I was experiencing. But still, having to look to my upper right to see what was in the display was tiring. Not to mention the fact that the eye-positioning is very off-putting for anyone the wearer happens to be around: conversation is instantly broken by the wearer's perpetual glancing to the upper right, which looks even more odd to the person with whom one is speaking. The user interface consists of “cards” which can be swiped through using the touch-pad on the right temple of Glass. The series of taps and swipes is actually very intuitive. But the lack of display space means that only a very limited amount of virtual “desktop” is available at any given time. And the more apps that are open, the more swiping one has to do. Once Glass is active, the user “gets its attention” by saying “okay Glass,” and then speaking various -- limited -- voice commands. The bulk of Glass’s functionality is voice-based, and its voice recognition is impressive. However, there is a limited number of commands Glass will recognize. Glass is able to perform most of the functions of “Google Now” on a smartphone, but not quite as well, and it lacks a more intuitive visual interface through which to see the commands being performed. In fact, it seems to recognize fewer commands than Google Now, which was a difficult shift for me to make given my frequent use of the Google Now app. Battery life is minimal. As in, a couple of hours of heavy use, tops. One might be able to squeeze six hours out of it if used very, very sparingly.

On the plus side, the camera and video functionality are quite good. Being able to snap pics hands-free (via a wink!) is very convenient. As a Bluetooth headset tethered to a phone, it works quite well, and it's an excellent tool for shooting point-of-view pictures and video. I cannot stress enough that there are several potential uses and applications for Glass in various professions. In the hospitality industry, the medical field, even certain educational settings, Glass would be a powerful tool, and I have no doubt that iterations of Glass will be fully integrated into these settings.

For my own use, practically speaking, Glass isn't. Practical, that is. No. It's not practical at all.  But in that lack of practicality lies what I see as Glass’s most positive asset: its recalibration of our technological expectations of integration, connection, and control.

Yes, in Glass we get a hint of what is to come. As a fan of all things Google, I think it was brave of them to be the first to make this technology available to the public. Why? Because no one who does this kind of thing first could ever hope to get it right. This is the type of technology that is forged in the paradoxical fires of disappointment from technological skeptics and fanatical praise from the early adopters who at first forced themselves to use Glass because they had so much faith in it. Those true "Glass Explorers" (a term coined by Google) integrated Glass into their daily lives despite its limitations.

But as I started using Glass, I experienced a kind of existential temporal distortion. When I looked at this pristine piece of new technology, I kept seeing it through my eyes two to five years into the future. Strangely, one of the most technologically advanced artifacts I've held in my hands made me think, 'How quaint. I remember when this was actually cutting edge.' It was a very disorienting feeling, and I couldn't shake it; it persisted the more I used Glass. I found myself thinking, 'Wow, this was clunky to use; how did people ever use this effectively?' I was experiencing the future in the present, but in the past tense.

Temporal dissonance. My #throughglass experience wasn't one of documenting the looks of curious strangers, or of my dog bounding about, or even of a tour of my office. Mine was pure temporal dissonance. The artifact felt already obsolete. By being a tangible proof of concept, it had dissolved itself into the intangible conceptual components which would be seamlessly integrated into other artifacts. #Throughglass, I was transported to the future, but only because this artifact felt like it was already a thing of the past. If you have an old cell phone around -- whether it's an early Android-based smartphone or an older flip phone -- take it out. Hold it. Then turn it on, and try to navigate through its menus. That awkwardness, that odd, almost condescending nostalgia? That partially describes what I felt when I started using this advanced technology. And this was a new feeling for me. The only term I can think of to describe it is "pre-nostalgia."

There were other factors which, for me, worked against Glass. Aesthetically, I could not get over how Glass looked. For the amount of technology packed into them, I think the engineers did an excellent job of making them as non-intrusive as possible. But still, in my opinion, they looked positively goofy. I promised myself that I would only wear them around campus -- or in certain contexts. But there really isn't a context for Glass ... yet. Until a company or an industry starts a wide-scale adoption of Glass (which will only come when developers create the right in-house systems around its use, such as integrating it into point-of-sale platforms for the hospitality industry, or into medical records systems for doctors), Glass will remain delightfully odd to some, and creepily off-putting to others. I wonder if the first people who wore eyeglasses, and later monocles, were looked upon as strangely as those who wear Glass in public today. Probably.

Personally, this aspect really disturbed me. Was it just my vanity that was stopping me from wearing them? When I did wear them in public, most people were fascinated. Was I just being too self-conscious? Was I becoming one of those people who resist the new? Or was I just never meant to be in the avant-garde -- not psychologically ready to be at the forefront of a shift in culture?

Some possible answers to that in Part 2, "The Steel Against the Flint, Sparking Expectation."

Tuesday, April 15, 2014

Updates: Tenure, Google Glass, and a Very Positive Review

Just some updates of a personal, professional, and academic nature.

First of all, a couple of weeks ago, I was awarded tenure and promotion! After that bit of news, I took a breather from everything (aside from classes, grading, and my usual semester duties). Tenure is an interesting feeling; definitely a good one, but much more loaded than I originally thought it would be.

Secondly, a few months back, the office of Academic Affairs at Western State Colorado University generously contributed partial funds to help me acquire Google Glass. I've been using them pretty regularly and am now composing what I hope will be a series of posts about them. Just a warning, though: these will not be a standard "user review." You can get that anywhere. I've been thinking long and hard about how I was going to write about Glass. But, as usual, some classroom discussion regarding technology inspired me, and now I know exactly how I'm going to go about my blog posts regarding Glass. Despite the fact that we're entering that chaotic end-of-the-semester rush, I'm hoping to get the first post out within the next week or so.

Finally, I am really happy about a recent review of Posthuman Suffering and the Technological Embrace. Even though the book came out in 2010, I'm glad that it still has legs. This particular review appeared in The Information Society: An International Journal. All of the reviews have been positive, but this one really seemed to grasp my intentions much more fully. So I'm really happy about that.

So yes, although I've been quiet, good things have been happening. And look for my Google Glass entries soon!

Monday, January 20, 2014

The Internet of Things and the Great Recalibration

I've been playing catch-up since my tenure application and my class preps for the Spring semester, but I've finally been able to re-engage with my usual sites, and all of the fantastic content in my Google+ communities.

One thing that's been coming up in various iterations is the concept of the "internet of things." In a nutshell, the term loosely (and, I think, perhaps a little misleadingly) refers to the technological interconnectivity of everyday objects -- clothes, appliances, industrial equipment, jewelry, cars, etc. -- now made possible by advances in microprocessor miniaturization. The idea has been around for quite some time, and it has been developing steadily even though the general public might have been unaware of it. RFID chips in credit cards, black boxes in cars, even traffic sensors and cameras: they have all been pinging away beneath our general perception for years -- almost like a collective unconscious. But now, various patterns and developments have aligned to bring the concept itself into public awareness. While WiFi, or even internet access, is far from ubiquitous, we are becoming "connected enough" for these technologies to gain traction and -- as Intel, Google, and a host of other tech companies hope -- become something we expect. And I believe it is this expectation of connectedness which will once and for all mark the end of an antiquated notion of privacy and anonymity.
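
As a toy illustration of that quiet pinging -- and it is only a toy: the device, field names, and endpoint below are invented, and no particular protocol is implied -- imagine an everyday object reporting a tiny sliver of state every so often:

```python
import json
import random
import time

# Hypothetical connected thermostat quietly reporting a sliver of state.
# Everything here (device ID, fields, endpoint) is made up for illustration;
# the point is how small and constant these "pings" are.

COLLECTION_ENDPOINT = "https://example.com/telemetry"   # placeholder URL


def read_temperature_c():
    # Stand-in for a real sensor read.
    return round(20.0 + random.uniform(-1.5, 1.5), 2)


def build_ping(device_id):
    return json.dumps({
        "device": device_id,
        "temperature_c": read_temperature_c(),
        "timestamp": int(time.time()),
    })


if __name__ == "__main__":
    for _ in range(3):                      # three pings, then stop (demo only)
        payload = build_ping("thermostat-42")
        print(f"would POST to {COLLECTION_ENDPOINT}: {payload}")
        time.sleep(1)                       # a real device might wait minutes
```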

Yes, I know. Snowden. The NSA. Massive black and grey operations poring through every text message, every dirty little Snap we take, every phone call we make, and every email we send. But I believe the bluster and histrionics people are going through are actually the death throes of an almost Luddite conception of what "privacy" and "information" actually are.

This thought came to me long ago, but I wasn't able to really articulate it until this past semester, when I was covering Kant in my intro to philosophy course. In the landscape of Western philosophy, Kant created a seismic shift with a very subtle, even elegant, yet really sneaky rearticulation of one specific philosophical concept: a priori knowledge. Instead of characterizing a priori knowledge as an innate concept like infinity or freedom, he presented it as an innate capacity or ability. That is to say, the concept of "freedom" isn't in itself a priori, but our capacity to reason about it is. Of course, it's more complicated than that, but generally speaking, my students come to realize that Kant essentially recalibrated the spectrum of a priori/a posteriori knowledge. And Western philosophy was never the same again. The potential relativism of empiricism was contained, while the solipsisms of rationalism were dissipated.

I believe that we are witnessing a similar seismic shift in our conception of what information is, and, by extension, of what we consider to be "private." Only history will be able to determine whether this shift was a leap or an evolutionary creep forward. Regardless, I'm hoping that as more material objects become woven into the fabric of the data cloud, the shift will recalibrate people's thinking about what exactly information is -- and, more specifically, about how that information doesn't "belong" to us.

Our information is as susceptible to "loss" or "destruction" as our bodies are. Our information can degrade just as our bodies can. We can "protect" "our" information only insofar as we can protect our bodies from various dangers. Granted, the dangers can be very different; however, we have as much chance of keeping our information private as we have of keeping our "selves" private. Of course, biologically, in the phenomenal world, we can live "off the grid" and be as far away from others as possible. But the cost is paranoia and a distrust of humanity in general: essentially, a life of fear. Similarly, there is no way to completely protect our information without also withdrawing it completely from a technified world. But again, at what cost? I think it's one similar to that paid by all of those who sit in their compounds, armed to the teeth, waiting for a collapse of civilization that will never come.

The internet of things, as it evolves, will slowly grow our expectations of connectivity. We will opt in to smart cars, clothes, houses ... and, I'm sure, one day trees, forests, animals ... that seem to intuitively adapt to our needs. From the dawn of time, we have altered the physical world to our needs. What we see happening today is no different, except that we now have a discourse with which to self-reflexively question our own motives. I've always wondered if there was some kind of "cusp generation" of early humanity that distrusted cultivation and agriculture, seeing it as a ceding of humanity's power to nature itself: an old hunter looking at his grandchildren planting things, thinking that they were putting too much faith, reliance, and attention in dirt -- and, probably, that the things they grew would somehow eventually kill them. (I'm sure the paleo-Luddite felt a sense of pure satisfaction whenever someone choked to death on a vegetable or got food poisoning.)

Our expectations of connectivity will overcome our attachment to "private" information. The benefits will outweigh the risks, just as the benefits of going outside outweigh those of being a hermit.

I'm not saying that we should start waving around our social security numbers or giving our bank account numbers to foreign princes who solicit us over spam. We don't walk into a gang zone waving around cash, or dangle our children in front of pedophiles. We must protect our "information" as much as we can, while realizing that reasonable safeguards do not -- by any stretch of the imagination -- equal anonymity. If we wish to be woven into an internet of things, then we must more actively recalibrate our notions of "privacy" and even "anonymity." And given the historical development of civilization itself, we will cede aspects of privacy or invisibility in order to gain a greater sense of efficacy. An internet of things that more efficiently weaves us into the world of objects will heighten that sense of efficacy. It already has. When our cars customize themselves for us as we open the door, or when our houses adjust all manner of ambient conditions to our liking, or even when Google autocompletes our searches based on our geographical location or past searches, our sense of efficacy is heightened -- as is our sense of expectation.
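
A toy sketch of that "car that customizes itself when you open the door" moment -- with profile fields and IDs that are entirely hypothetical, and far simpler than anything a real vehicle would use -- looks something like this:

```python
# Hypothetical sketch: an environment quietly adapting to whoever shows up.
# Profiles, field names, and fob IDs are invented for illustration only.

PROFILES = {
    "driver-a": {"seat_position": 7, "mirror_tilt": 3, "cabin_temp_c": 21},
    "driver-b": {"seat_position": 2, "mirror_tilt": 5, "cabin_temp_c": 24},
}


def on_door_open(key_fob_id, cabin):
    profile = PROFILES.get(key_fob_id)
    if profile is None:
        return  # unknown driver: leave everything as-is
    for setting, value in profile.items():
        cabin[setting] = value  # small, ambient adjustments, no button pressed


if __name__ == "__main__":
    cabin_state = {"seat_position": 0, "mirror_tilt": 0, "cabin_temp_c": 18}
    on_door_open("driver-a", cabin_state)
    print(cabin_state)   # the quiet customization that builds expectation
```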

As for what this recalibration brings, I believe it will -- like other technological developments -- be part of a larger field of advancements which will allow us to become more ontologically ready for even bigger leaps forward. Perhaps after a few decades of a more widespread, almost ubiquitous internet of things, the emergence of an AI will actually seem more natural to us. In the more immediate future, I think it will ease fears surrounding various transhuman values; augmentation of our biology will not be as threatening to some as it might seem today.

In any movement, there is an avant-garde -- literally the "advance guard" or "fore-guard" -- the innovators and dreamers who experiment and push ahead. And often, like Kant, they allow cultures to recalibrate their expectations and values, and to rethink old notions and standards. Each time we use a credit card, click "I agree" on a terms-of-service box, or sign in to various web accounts, we're pushing that advance ever forward ... and that's not a bad thing.