
Monday, February 18, 2019

My Battery is Low and it's Getting Dark



"There's a little black spot on the sun today,
that's my soul up there
"
- The Police, "King of Pain."


"My battery is low and it's getting dark."

Of course, these were not the actual last words of the Opportunity rover, whose mission NASA declared complete on February 13th. Its final transmission was a routine status report, not quite as poetic or existentially charged as its anthropomorphic translation. What set it apart was only that it was the last report Opportunity would ever send.

When I wrote Posthuman Suffering, I was thinking of exactly this kind of relationship between human beings and machines. And the momentary poignancy of this phrase flashing virally across the social media landscape shows us the very dynamic I tried to elucidate: we want our machines -- our technological systems -- to legitimize and validate our own pain, in this instance the pain of existential dread.

This object -- an only semi-autonomous planetary rover -- was designed to last 90 Martian days (a Martian day is about 40 minutes longer than one on Earth). It dutifully lasted over 5,000, spending its final moments in a valley, enshrouded by the dark of a major planetary dust storm. Its "dedication," coupled with the finality of its message, affects us on a deep emotional level. It "dies" alone. Its last status message is transformed into a final fulfillment of duty -- calling out to Earth, noting the encroaching darkness and its own dwindling power supply. We are often fascinated by these moments, real and fictional: whether it is the HAL 9000's halting rendition of "Daisy, Daisy" or Roy Batty's "tears in rain" speech from Blade Runner, we feel a certain empathy as these machines sputter and die.

Where most believed that we were simply projecting ourselves (and our fears) onto our machines, I took it a step further. This wasn't mere projection; it was a characteristic of a deeper, ontological relationship we have with these machines. Yes, we are sad and lonely because we see our own existential loneliness in the dust-covered rover now sitting, dead, in a distant valley of Mars. But, more importantly, we're satisfied by it. Satisfied not due to any inherent sadism or misanthropy; quite the opposite: we're satisfied because it keeps us company in that solitude.

If you've ever pulled out your smartphone to take a picture in low light and it gave you a low-battery warning, you received pretty much the same message that Opportunity sent back to NASA. Yet, in that moment, you're more likely to be angry with your phone than to want to cradle it in your arms and serenade it with David Bowie or Imogen Heap.

But this -- this was an object 54.6 million kilometers away.

And it was alone.

And it was dying.

Of course, there are all sorts of reasons why NASA would "translate" Opportunity's final transmission in such a way (a desire to "humanize" science, or perhaps even authentic, heartfelt emotion at the close of an incredibly successful fifteen-year mission). Regardless, the reaction on social media, however fleeting it may be (or may have been), falls somewhere between empathy and solidarity.

The object sitting alone on Mars, made by human hands, the product of human ingenuity, partakes in a broader, deeper loneliness that humans share. Yet there is no way to share such loneliness except metaphorically. And in this case, it's the humans who make the metaphors. If anything is being extended here, it's not humanity; it's metaphor. The mistake many cultural theorists make is to present this dynamic as simple anthropomorphization: we're personifying "Oppy" (interestingly enough, quite often as female: "she's sent her last message"). But that's not exactly what's happening. We're re-creating Opportunity into something else: through metaphor we are making it into a unique, autonomous, metaphorical entity that can and does feel.

In this posthuman suffering we were extending our autonomy, and all the suffering that goes along with that autonomy. We imagine ourselves sitting alone, reaching out, texting into the dark, hoping for some kind of response; posting on Facebook or Twitter or Instagram because it's not socially acceptable to say "I'm lonely and need someone to speak to and also I know someday I will die and that makes me feel even more lonely and I need some kind of contact."

So we post or text, and wait for authentication and validation.

In many ways, the Opportunity rover is us, alone, in the dark, posting on social media and hoping for some kind of response to tell us we're not alone.

I've often said in my classes that every social media post -- no matter what the content -- is simply a Cartesian expression and can be translated into "I exist."

I say less often in my classes that there's always an existential codicil to these posts:

"I exist and I'm afraid of death."

But now, as I make a turn in my philosophy, I realize that the existentialist in me was too dazzled by the idea of our own, consciousness-based fear of death: a survival instinct complexified by a cerebral cortex which weaves narratives as a means of information processing. And when I thought about this in light of technological artifacts and the systems of their use, I was too focused on the relationship between human and object rather than on the humans and the objects in and of themselves. In other words, I was being a good cultural theorist, but a middling philosopher.

The Opportunity rover is "up there," alone, amid rocks and dust. On the same planet are the non-functional husks of its predecessors and distant relatives. It was unique; the last of its kind. We imagine it in the desolation. We weave its narrative as one of solitary but dedicated duty, amid rocks and dust. When we think about Opportunity, or any of the other human-made objects sitting on the moon, other planets, asteroids, and now hurtling through interstellar space (alone), the affect that occurs isn't a simple projection of human-like qualities onto an object. In the apprehension of the object, we become a new object, an Opportunity/human aggregate that is also constituted by the layers of sense-data, memories, emotions, experiences, and platforms through which much of that phenomenal content is brought into awareness. Metaphor isn't a thing we create or project; it is the phenomenon of a distributed awareness.

To paraphrase "King of Pain," the speaker's soul is many things:
A little black spot on the sun today.
A black hat caught in a high tree top.
A flag pole rag and the wind won't stop.
A fossil that's trapped in a high cliff wall.
A dead salmon frozen in a waterfall.
A blue whale beached by a spring tide's ebb.
A butterfly trapped in a spider's web.
A red fox torn by a huntsman's pack.
A black winged gull with a broken back.
And, in the context of the song, there are other objects existing that aren't necessarily in the awareness of the speaker:
There's a king on a throne with his eyes torn out
There's a blind man looking for a shadow of doubt
There's a rich man sleeping on a golden bed
There's a skeleton choking on a crust of bread
The first group of objects (black spot, black hat, rag, etc.) is directly equated with the speaker's soul. But the second group is not; those are just objects that frame the broader existence of the speaker, embedding the speaker and all other objects in a broader world of objects, distributing the "pain" via the images invoked. The poignancy of the song comes with the extensive and Apollonian list of things, things that aren't necessarily solitary, sad, or tragic in and of themselves, but come to be so when folded into a broader aggregate that just happens to include a human being who is capable of understanding the above lyrics.

Whereas most would say that it's the reader that is lending the affective qualities to these objects, we need to look at the objects themselves and how -- as solitary objects embedded in a given situation, whether "real," "sensed," "imagined," "called to mind," etc. -- these objects create the "reader."

Getting back to our solitary rover, the pathos we feel for it comes from the images we see, our broader knowledge of Mars, our basic understanding of distance, the objects on the desks around us or the bed we're sitting on, the lack of any messages (or a particular message) on our phone, the dissonance between the expected number of likes, loves, retweets, and comments on our last social media posts and the actual number of those interactions, the memories of when some caregiver may have forgotten to pick us up after karate practice, the dying valentine flower on our nightstand, the dreaming dog at our feet, etc., etc.

We feel for it not as a separate subjectivity witnessing something; we feel for it as an aggregate of the "objects" (loosely defined) which constitute our broader awareness. This is, perhaps, why on some level, for some particular people, at some particular moments, we are more moved by this object on a distant planet than we are by witnessing a stranger's suffering first-hand, or by the larger tragedy of our own dying planet. Certain aspects of this object, plus the objects around us, plus the "objects" of our thoughts, come together in a particular way, creating a particular emotional response.

It feels like the world is "turning circles, running 'round [our] brain[s]," because our brains are constituted by the "world" itself, even if that world includes a planet that we've only actually seen via pictures on the internet ...

... and a small robot, dying alone in the dark.








Thursday, August 24, 2017

Professional Milestones and Unexplored Territories: A Past-Due Update

I was a little shocked to realize that it has been this long since I updated Posthuman Being. I do have a couple of things in the pipeline now that I'll be able to discuss when each moves out of the revision stage. I'm optimistic about one of the projects. The other is a piece liked by the editors, but the project itself is in editorial limbo. I've been lucky on other pieces and their speedy turnarounds. I was due for a slow one. These projects will be the symbolic end of a chapter, and the last before I anticipate a "turn" in my work within the field of posthumanism. There have been glimmers of where I'm headed in my previous Posthuman Being entries. But now it's time for me to actively begin the next chapter. There will be much reading to do.

Concurrently, I've hit a professional milestone which needed its own moment of reflection: my promotion from the rank of 'associate professor' to 'professor.' For those not familiar with academic rank and promotion: after achieving tenure and promotion from 'assistant' to 'associate' professor, this is basically the final step. While I can't speak for everyone, there is a kind of academic "mid-life crisis" that one can experience during this transition. I've had some decent accomplishments for someone at a teaching university with a 4/4 load (as opposed to a research university where the full-time load is fewer classes with a higher expectation/weight placed on research and publishing). I spent eight years as an adjunct teaching a 5/5 load or greater. The rest have been spent on the tenure track and as a tenured faculty member. In total, I have logged 20 years teaching full-time. So, in both my professional and personal life, I am in one of those contemplative phases.

So I'm in that space between what I have accomplished and what I still want to accomplish. And for me, many of my personal and professional goals intertwine, which adds a dimension to this introspection.

There are topics that I've wanted to cover in my research that, prior to tenure and promotion, I thought were too "out there" to explore. But now I have the privilege (and it IS a privilege in all connotations of the word) of choice. I can choose my direction, both in terms of research and in terms of institutional service goals. In the latter, I can choose my battles and pursue the issues which I think are important. To a certain extent, those battles tend to find me, but now I can face them directly without having to hold back. That feels good.

That being said, I also know that this privilege can disintegrate with one fell swoop of a budgetary ax, or under the whim of administrative politics. This situation is not unique to me; it applies to anyone working in academia. Tenure and promotion is not a shield from reality. It is, however, a chance to respond to issues with confidence and a clear voice. It is an opportunity to take risks, knowing full well that the opportunity can vanish at any time.

That confidence and clarity come around full-circle. I have learned a lot. I still have much to learn. I have been shaped by academia, which -- contrary to the popular opinion of those outside of it -- can be a harsh and unforgiving environment. I have witnessed people broken by it. I have watched ambition smothered by institutional folly and inescapable economic realities. Yet I have endured, and thrived, and the passion for what I do remains intact and has grown even more intense. There are unexplored territories that remain.

I think Seinabo Sey put it best in the beautiful "Hard Time":

"This time I will be
Louder than my words
Walk with lessons that
Oh, that I have learned
Show the scars I've earned
In the light of day
Shadows will be found
I will hunt them down"

Although I'm not sure if I'll be hunting shadows or dancing with them. Then again, it is my choice.

Tuesday, January 19, 2016

Mythic Singularities: Or How I Learned To Stop Worrying and (kind of) Love Transhumanism

... knowing the force and action of fire, water, air, the stars, the heavens, and all the other bodies that surround us, as distinctly as we know the various crafts of our artisans, we might also apply them in the same way to all the uses to which they are adapted, and thus render ourselves the lords and possessors of nature. And this is a result to be desired, not only in order to the invention of an infinity of arts, by which we might be enabled to enjoy without any trouble the fruits of the earth, and all its comforts, but also and especially for the preservation of health, which is without doubt, of all the blessings of this life, the first and fundamental one; for the mind is so intimately dependent upon the condition and relation of the organs of the body, that if any means can ever be found to render men wiser and more ingenious than hitherto, I believe that it is in medicine they must be sought for. It is true that the science of medicine, as it now exists, contains few things whose utility is very remarkable: but without any wish to depreciate it, I am confident that there is no one, even among those whose profession it is, who does not admit that all at present known in it is almost nothing in comparison of what remains to be discovered; and that we could free ourselves from an infinity of maladies of body as well as of mind, and perhaps also even from the debility of age, if we had sufficiently ample knowledge of their causes, and of all the remedies provided for us by nature.
- Rene Descartes, Discourse on the Method of Rightly Conducting the Reason and Seeking Truth in the Sciences, 1637

As a critical posthumanist (with speculative leanings), I have always found myself a little leery of transhumanism in general. Much has been written on the difference between the two, and one of the best and most succinct explanations can be found in John Danaher's "Humanism, Transhumanism, and Speculative Posthumanism." But very briefly, I believe it boils down to a question of attention: a posthumanist, whether critical or speculative, focuses his or her attention on subjectivity, investigating, critiquing, and sometimes even rejecting the notion of a homuncular self or consciousness, and the assumption that the self is some kind of modular component of our embodiment. Being a critical posthumanist does make me hyper-aware of the implications of Descartes' ideas presented above in relation to transhumanism. Admittedly, Danaher's statement that "Critical posthumanists often scoff at certain transhumanist projects, like mind uploading, on the grounds that such projects implicitly assume the false Cartesian view" hit close to home, because I am guilty of the occasional scoff.

But there really is much more to transhumanism than sci-fi iterations of mind uploading and AIs taking over the world, just as there is more to Descartes than his elevation, reification, and privileging of consciousness. From my critical posthumanist perspective, the hardest pill to swallow with Descartes has never been the model of consciousness he proposed; it is the way that model has been taken so literally -- as a fundamental fact -- that has been one of the deeper issues driving me philosophically. But, as I've often told my students, there's more to Descartes than that. Examining Descartes's model as the metaphor it is gives us a more culturally based context for his work, and a better understanding of its underlying ethics. I think a similar approach can be applied to transhumanism, especially in light of some of the different positions articulated in Pellissier's "Transhumanism: There are [at least] ten different philosophical categories; which one(s) are you?"

Rene Descartes's faith in the ability of human reason to render us "lords and possessors of nature" through an "invention of an infinity of arts" is, to my mind, one of the foundational philosophical beliefs of transhumanism. And his later statement, that "all at present known in it is almost nothing in comparison of what remains to be discovered," becomes its driving conceit: the promise that answers could be found which could, potentially, free humanity from "an infinity of maladies of body as well as of mind, and perhaps also even from the debility of age." It follows that whatever humanity can create to help us unlock those secrets is thus a product of human reason. We create the things we need to help us uncover "what remains to be discovered."

But this ode to human endeavor eclipses the point of those discoveries: "the preservation of health," which is "the first and fundamental one; for the mind is so intimately dependent upon the condition and relation of the organs of the body, that if any means can ever be found to render men wiser and more ingenious ... I believe that it is in medicine they must be sought for."

Descartes sees an easing of human suffering as one of the main objectives of scientific endeavor. But this aspect of his philosophy is often eclipsed by the seemingly infinite "secrets of nature" that science might uncover. As is the case with certain interpretations of the transhumanist movement, the promise of what can be learned often eclipses the reasons why we want to learn it. And that promise can take on mythic properties. Even though progress is its own promise, a transhuman progress can become an eschatological one, caught between a Scylla of extreme interpretations of "singularitarian" messianism and a Charybdis of similarly extreme interpretations of "survivalist transhuman" immortality. Both are characterized by a governing mythos -- or set of beliefs -- that is technoprogressive by nature, but risks fundamentalism in practice, especially if we lose sight of a very important aspect of technoprogressivism itself: "an insistence that technological progress needs to be wedded to, and depends on, political progress, and that neither are inevitable" (Hughes 2010, emphasis added). Critical awareness of the limits of transhumanism is similar to having a critical awareness of any functional myth. One does not have to take the Santa Claus or religious myths literally to celebrate Christmas; instead one can understand the very man-made meaning behind the holiday and the metaphors therein, and choose to express or follow that particular ethical framework accordingly, very much aware that it is an ethical framework that can be adjusted or rejected as needed.

Transhuman fundamentalism occurs when the critical awareness that progress is not inevitable is replaced by an absolute faith and/or a literal interpretation that -- either by human endeavor or via artificial intelligence -- technology will advance to a point where all of humanity's problems, including death, will be solved. Hughes points out this tension: "Today transhumanists are torn between their Enlightenment faith in inevitable progress toward posthuman transcension and utopian Singularities, and their rational awareness of the possibility that each new technology may have as many risks as benefits and that humanity may not have a future" (2010). Transhuman fundamentalism characterized by uncritical inevitabilism would interpret progress as "fact." That is to say, progress will happen and is imminent. By reifying (and eventually deifying) progress, transhuman fundamentalism would actually forfeit any claim to progress by severing it from its human origins. Like a god that is created by humans out of a very human need, but whose origins are then forgotten, progress stands as an entity separate from humanity, taking on a multitude of characteristics rendering it ubiquitous and omnipotent: progress can and will take place. It has and it always will, regardless of human existence; humanity can choose to unite with it, or find itself doomed.

Evidence for the inevitability of progress comes by way of pointing out specific scientific advancements and then falling back on speculation that x advancement will lead to y development, as outlined in Verdoux's "historical" critique of faith in progress and its "'progressionist illusion' that history is in fact a record of improvement" (2009). Kevin Warwick has used rat neurons as CPUs for his little rolling robots: clearly, we will be able to upload our minds. I think of this as a not-so-distant cousin of the intelligent design argument for the existence of God. Proponents point to the complexity of various organic (and non-organic) systems as evidence that a designer of some kind must exist. Transhuman fundamentalist positions point to small (but significant) technological advancements as evidence that an AI will rise (Singularitarianism) or that death itself will be vanquished (Survivalist Transhumanism). It is important to note that neither position is in itself fundamentalist in nature. But I do think that these two particular frameworks lend themselves more easily to a fundamentalist interpretation, due to their more entrenched reliance on Cartesian subjectivity, Enlightenment teleologies, and eschatological religious overtones.

Singularitarianism, according to Pellissier, "believes the transition to a posthuman will be a sudden event in the 'medium future' -- a Technological Singularity created by runaway machine superintelligence." Pushed to a fundamentalist extreme, the question for the singularitarian is: when the posthuman rapture happens, will we be saved by a techno-messiah, or burned by a technological antichrist? Both arise by the force of their own wills. But if we look behind the curtain of the great and powerful singularity, we see a very human teleology. The technology from which the singularity is born is the product of human effort. Subconsciously, the singularity is not so much a warning as it is a speculative indulgence of the power of human progress: the creation of consciousness in a machine. And though singularitarianism may call it "machine consciousness," the implication that such an intelligence would "choose" to either help or hinder humanity always already presupposes a very anthropomorphic consciousness. Furthermore, we will arrive at this moment via some major scientific advancement that always seems to be between 20 and 100 years away, such as "computronium," or programmable matter. This molecularly-engineered material, according to more Kurzweilian perspectives, will allow us to convert parts of the universe into cosmic supercomputers which will solve our problems for us and unlock even more secrets of the universe. While the idea of programmable matter is not necessarily unrealistic, its mythical qualities (somewhere between a kind of "singularity adamantium" and a "philosopher's techno-stone") promise the transubstantiation of matter toward unlimited, cosmic computing, thus opening up even more possibilities for progress. The "promise" is for progress itself: that unlocking certain mysteries will provide an infinite amount of new mysteries to be solved.

Survivalist Transhumanism can take a similar path in terms of technological inevitabilism, but, pushed toward a fundamentalist extreme, it awaits a more Nietzschean posthuman rapture. According to Pellissier, Survivalist Transhumanism "espouses radical life extension as the most important goal of transhumanism." In general, the movement seems to be awaiting advancements in human augmentation which are always already just out of reach but will (eventually) overcome death and allow the self (whether bioengineered or uploaded to a new material -- or immaterial -- substrate) to survive indefinitely. Survivalist transhumanism with a more fundamentalist flavor would push to bring the Nietzschean Ubermensch into being -- literally -- despite the fact that Nietzsche's Ubermensch functions as an ideal toward which humans should strive. He functions as a metaphor for living one's life fully, not subject to a "slave morality" that is governed by fear and placing one's trust in mythological constructions treated as real artifacts. Even more ironic is the fact that the Ubermensch is not immortal and is at peace with his own mortality. Literal interpretations of the Ubermensch would characterize the master-morality human as overcoming mortality itself, since death is the ultimate check on the individual's development. Living forever, from a more fundamentalist perspective, would provide infinite time to uncover infinite possibilities and thus make infinite progress. Think of all the things we could do, build, and discover, some might say. I agree. Immortality would give us time -- literally. Without the horizon of death as a parameter of our lives, we would -- eventually -- overcome a way of looking at the universe that has been a defining characteristic of humanity since the first species of hominids with the capacity to speculate pondered death.

But in that speculation is also a promise: the promise that conquering death would allow us to reap the fruits of the inevitable and inexorable progression of technology. Like a child who really wants to "stay up late," we are curious about what happens after humanity's bedtime. Is the darkness outside her window any different after bedtime than it is at 9pm? What lies beyond the boundaries of late-night broadcast television? How far beyond can she push until she reaches the loops of infomercials, or the re-runs of the shows that were on hours prior? And years later, when she pulls her first all-nighter, and she sees the darkness ebb and the dawn slowly but surely rise just barely within her perception, what will she have learned?

It's not that the darkness holds unknown things. To her, it promises things to be known. She doesn't know what she will discover there until she goes through it. Immortality and death metaphorically function in the same way: Those who believe that immortality is possible via radical life extension believe that the real benefits of immortality will show themselves once immortality is reached and we have the proper perspective from which to know the world differently. To me, this sounds a lot like Heaven: We don't know what's there but we know it's really, really good. In the words of Laurie Anderson: "Paradise is exactly like where you are right now, only much, much better." A survivalist transhuman fundamentalist version might read something like "Being immortal is exactly like being mortal, only much, much better."

Does this mean we should scoff at the idea of radical life extension? At the singularity and its computronium wonderfulness? Absolutely not. But the technoprogressivism at the heart of transhumanism need not be so literal. When one understands a myth as just that -- a set of governing beliefs -- transhumanism itself can stay true to the often-eclipsed aspect of its Cartesian, Enlightenment roots: the easing of human suffering. If we look at transhumanism as a functional myth, adhering to its core technoprogressive foundations, not only do we have a potential model for human progress, but we also have an ethical structure by which to advance that movement. The diversity of transhuman views provides several different paths of progress.

Transhumanism has at its core a technoprogressivism that even a critical posthumanist like me can get behind. If I am a technoprogressivist, then I do believe in certain aspects of the promise of technology. I do believe that humanity has the capacity to better itself and do incredible things through technological means. Furthermore, I do feel that we are in the infancy of our knowledge of how technological systems are to be responsibly used. It is a technoprogressivist's responsibility to mitigate myopic visions of the future -- including those visions that uncritically mythologize the singularity or immortality itself as an inevitability.

To me it becomes a question of exactly what the transhumanist him- or herself is looking for from technology, and how he or she conceptualizes the "human" in those scenarios. The reason I still call myself a posthumanist is that I think we have yet to truly free ourselves of antiquated notions of subjectivity itself. The singularity to me seems as if it will always be a Cartesian one: a "thing that thinks" and is aware of itself thinking and therefore is sentient. Perhaps the reason we have not yet reached a singularity is that we're approaching the subject and volition from the wrong direction.

To a lesser extent, I think that immortality narratives are mired in re-hashed religious eschatologies where "heaven" is simply replaced with "immortality." As for radical life extension, what are we trying to extend? Are we tying "life" to the ability simply to be aware of ourselves being aware that we are alive? Or are we looking at the quality of the extended life we might achieve? I do think that we may extend the human lifespan to well over a century. What will be the costs? And what will be the benefits? Life extension is not the same as life enrichment. Overcoming death is not the same as overcoming suffering. If we can combat disease, and mitigate the physical and mental degradation which characterize aging, thus leading to an extended life-span free of pain and mental deterioration, then so be it. However, easing suffering and living forever are two very different things. Some might say that the easing of suffering is simply "understood" within the overall goals of immortality, but I don't think it is.

Given all of the different positions outlined in Pellissier's article, "cosmopolitan transhumanism" seems to make the most sense to me. Coined by Steven Umbrello, this category combines the philosophical movement of cosmopolitanism with transhumanism, creating a technoprogressive philosophy that can "increase empathy, compassion, and the univide progress of humanity to become something greater than it currently is. The exponential advancement of technology is relentless, it can prove to be either destructive or beneficial to the human race." This advancement can only be achieved, Umbrello maintains, via an abandonment of "nationalistic, patriotic, and geopolitical allegiances in favor [of] global citizenship that fosters cooperation and mutually beneficial progress."

Under that classification, I can call myself a transhumanist. A commitment to enriching life rather than simply creating it (as an AI) or extending it (via radical life extension) should ethically shape the leading edge of a technoprogressive movement, if only to break a potential cycle of polemics and politicization internal and external to transhumanism itself. Perhaps I've read too many comic books and have too much of a love for superheroes, but in today's political and cultural climate, a radical position on one side can unfortunately create an equally radical opposite. If technoprogressivism rises under fundamentalist singularitarian or survivalist transhuman banners, equally passionate luddite, anti-technological positions could potentially rise and do real damage. Speaking as a US citizen, I am constantly aghast at the overall ignorance that people have toward science and the ways in which the very concept of "scientific theory" and the very definition of what a "fact" is have been skewed and distorted. If we have groups of the population who still believe that vaccines cause autism or don't believe in evolution, do we really think that a movement toward an artificial general intelligence will be taken well?

Transhumanism, specifically the cosmopolitan kind, provides a needed balance of progress and awareness. We can and should strive toward aspects of singularitarianism and survivalist transhumanism, but as the metaphors and ideals they actually are.


References:

Anderson, Laurie. 1986. "Language Is a Virus." Home of the Brave.

Descartes, Rene. 1637. Discourse on the Method of Rightly Conducting the Reason and Seeking Truth in the Sciences.

Hughes, James. 2010. "Problems of Transhumanism: Belief in Progress vs. Rational Uncertainty." (IEET.org).

Pellissier, Hank. 2015. "Transhumanism: There Are [at Least] Ten Different Philosophical Categories; Which One(s) Are you?" (IEET.org)

Verdoux, Philippe. 2009. "Transhumanism, Progress and the Future."  Journal of Evolution and Technology 20(2):49-69.

Saturday, July 11, 2015

The Posthuman Superman: The Rise of the Trinity

"Thus,  existentialism's first move is to make every man aware of what he is and to make the full responsibility of his existence rest on him. And when we say that a man is responsible for himself, we do not only mean that he is responsible for his own individuality, but he is responsible for all men."
-- Sartre, Existentialism is a Humanism

[Apologies for any format issues or citation irregularities. I'll be out of town for the next few days and wanted to get this up before I left!]

Upon the release of the trailer for Batman v Superman: Dawn of Justice, a few people contacted me, asking if the trailer seemed to be in keeping with the ideas I presented in my Man of Steel review. In that review, I concluded that the film presented a "Posthuman Superman," because, like iterations of technological protagonists and antagonists in other sci-fi films, Kal-El is striving toward humanity; that "Superman is a hero because he unceasingly and unapologetically strives for an idea that is, for him, ultimately impossible to achieve: humanity." That quest is a reinforcement of our own humanity in our constant striving for improvement (of course, take a look at the full review for more context).

This is a very quick response, mostly due to the fact that I'm not really comfortable speculating about a film that hasn't been released yet. And we all know that trailers can be disappointingly deceiving. But given what I know about various plot details, and the trajectory of the trailer itself, it does very much look like Zack Snyder is using the destruction that Metropolis suffered in Man of Steel, and Superman's resulting choice to kill General Zod, as the catalyst of this film, in which a seasoned (and somewhat jaded) Batman must determine who represents the biggest threat to humanity: Superman or Lex Luthor.

What has activated my inner fanboy about this film is that, for me, it represents why I have always preferred DC heroes over Marvel heroes: core DC heroes (Superman, Batman, Wonder Woman, Green Lantern, etc.) rarely, if ever, lament their powers or the responsibilities they have. Instead, they struggle with the choice as to how to use the power they possess. In my opinion, while Marvel has always -- very successfully -- leaned on the "with great power comes great responsibility" idea, DC takes that a step further, with characters who understand the responsibility they have and struggle not with the burden of power, but with the choice as to how to use it. Again, this is just one DC fan's opinion.

And here I think that the brief snippet of Martha Kent's advice to her son is really the key to where the film may be going:

"People hate what they don't understand. Be their hero, Clark. Be their angel. Be their monument. Be anything they need you to be. Or be none of it. You don't owe this world a thing. You never did."

Whereas Man of Steel hit a very Nietzschean note, I'm speculating here that Batman v Superman will hit a Sartrean one. If Kal-El is to be Clark Kent, and embrace a human morality, then he must carry the burden of his choices, completely, and realize that his choices do not only affect him, but also implicate all of humanity itself.

As Sartre tells us in Existentialism is a Humanism:


"... I am responsible for myself and for everyone else. I am creating a certain image of man of my own choosing. In choosing myself, I choose man."

And if we take into account the messianic imagery in both the teaser and the current trailer, it's clear that Snyder is playing with the idea of gods and idolatry. Nietzsche may dismiss God by declaring him dead, but it's Sartre who wrestles with the existentialist implications of a non-existent God:

"That is the very starting point of existentialism, Indeed, everything is permissible of God does not exist, and as a result, man is forlorn, because neither within him nor without does he find anything to cling to.  He can't start making excuses for himself."

Martha Kent's declaration that Clark "doesn't owe the world a thing" places the degree of Kal-El's humanity on Superman's shoulders. Clark is the human, Kal is the alien. What then is Superman? I am curious as to whether or not this trinity aspect will be brought out in the film. Regardless, what is clear is that the Alien/Human/hybrid trinity is not a divine one. It is one where humanity is at the center. And when one puts humanity at the center of morality (rather than a non-existent God), then we are faced with the true burden of our choices:

"If existence really does precede essence, there is no explaining things away by reference to a fixed and given human nature,. In other words, there is no determinism, man is free, man is freedom. On the other hand, if God does not exist we find no commands to turn to which legitimize our conduct. So in the bright realm of values, we have no excuse behind us, nor justification before us. We are alone, with no excuses."

For Sartre, "human nature" is as much of a construct as God. And Clark is faced with the reality of this situation in his mother's advice to be a hero, an angel, a monument, and/or whatever humanity needs him to be ... or not. The choice is Clark's. If Clark is to be human, then he must face the same burden as all humans: freedom. Sartre continues:

"That is the idea I shall try to convey when I say that man is condemned to be free. Condemned, because he did not create himself, yet, in other respects is free; because, once thrown in to the world, he is responsible for everything he does. the existentialist does not believe in the power of passion. He will never agree that a sweeping passion is a ravaging torrent which fatally leads a man to certain acts and is therefore an excuses. He thinks that man is responsible for his passion."

If Clark is to be the top of the Clark/Kal/Superman trinity, then he cannot fall back on passion to excuse his snapping of Zod's neck, nor can he rely on it to excuse him from the deaths of thousands that resulted from the battle in Man of Steel. Perhaps the anguish of his tripartite nature will be somehow reflected in the classic "DC Trinity" of Superman/Batman/Wonder Woman found in the comics and graphic novels, in which Batman provides a compass for Superman's humanity, while Wonder Woman tends to encourage Superman to embrace his god-like status.

And the fanboy in me begins to eclipse the philosopher. But before it completely takes over and I watch the trailer another dozen times, I can say that I still stand behind my thoughts from my original review of Man of Steel: this is a posthuman superhero film. Superman will still struggle to be human (even though he isn't), and the addition of an authentic human in Batman, as well as an authentic god in Wonder Woman, will only serve to highlight his anguish at realizing that his choices are his own ... just as Sartre tells us. And in that agony, we as an audience watch Superman suffer with us human beings.

Now we'll see if all of this holds up when the film is actually released, at which point I will -- of course -- write a full review.




Monday, March 30, 2015

Posthuman Desire (Part 2 of 2): The Loneliness of Transcendence

In my previous post, I discussed desire through the Buddhist concept of dukkha, looking at the dissatisfaction that accompanies human self-awareness and how our representations of AIs follow a mythic pattern. The final examples I used (Her, Transcendence, etc.) pointed to representations of AIs that wanted to be acknowledged or even to love us. Each of these examples hints at a desire for unification with humanity, or at least some kind of peaceful coexistence. So then, as myths, what are we hoping to learn from them? Are they, like religious myths of the past, a way to work through a deeper existential angst? Or is this an advanced step in our myth-making abilities, where we're laying out the blueprints for our own self-engineered evolution, one which can only occur through a unification with technology itself?

It really depends upon how we define "unification" itself. Merging the machine with the human in a physical way is already a reality, although we are constantly trying to find better, more seamless ways to do so. However, if we look broadly at the history of the whole "cyborg" idea, I think that it actually reflects a more mythic structure. Early versions of the cyborg reflect the cultural and philosophical assumptions of what "human" was at the time, meaning that volition remained intact, and that any technological supplements were augmentations or replacements to the original parts of the body.* I think that, culturally, the high point of this idea came in the 1974-1978 TV series, The Six Million Dollar Man (based upon the 1972 Martin Caidin novel, Cyborg), and its 1976-78 spin-off, The Bionic Woman. In each, the bionic implants were completely undetectable with the naked eye, and seamlessly integrated into the bodies of Steve Austin and Jaime Sommers. Other versions of enhanced humanity, however, show a growing awareness of the power of computers, as in Michael Crichton's 1972 novel, The Terminal Man, in which prosthetic neural enhancements bring out a latent psychosis in the novel's main character, Harry Benson. If we look at this collective hyper-mythos holistically, I have a feeling that it would follow a pattern and spread similar to the development of more ancient myths, where the human/god (or human/angel, or human/alien) hybrids are sometimes superhuman and heroic, other times evil and monstrous.

The monstrous ones, however, tend to share similar characteristics, and I think the most prominent is the fact that in those representations, the enhancements seem to mess with the will. On the spectrum of cyborgs here, we're talking about the "Cybermen" of Doctor Who (who made their first appearance in 1966) and the infamous "Borg," who first appeared in Star Trek: The Next Generation in 1989. In varying degrees, each has a hive mentality and a suppression or removal of emotion, and each is "integrated" into the collective in violent, invasive, and gruesome ways. The Borg from Star Trek and the Cybermen from the modern Doctor Who era represent that dark side of unification with a technological other. The joining of machine to human is not seamless. Even with the sleek armor of the contemporary iterations of the Cybermen, it's made clear that the "upgrade" process is painful, bloody, and terrifying, and that it's best that what's left of the human inside remains unseen. As for the Borg, the "assimilation" process is initially violent but less explicitly invasive (at least as of Star Trek: First Contact); it seems to be more of an injection of nanotechnology that converts a person from the inside out, making them more compatible with the external additions to the body. Regardless of how it's done, the cyborg that remains is cold, unemotional, and relentlessly logical.

So what's the moral of the cyborg fairy tale? And what does it have to do with suffering? Technology is good, and using it is something we should do, as long as we are the ones using it and not the other way around (since in each case it's always a human use of technology itself which beats the cyborgs). When the technology overshadows our humanity, then we're in for trouble. And if we're really not careful, it threatens us on what I believe to be a very human instinctual level: that of the will. As per the final entry of my last blog series, the instinct to keep the concept of the will intact evolves with the intellectual capacity of the human species itself. The cyborg mythology grows out of a warning that if the will is tampered with (giving up one's will to the collective), then humanity is lost.

The most important aspect of cyborg mythologies is that the few cyborgs for whom we show pathos are the ones who have come to realize that they are cyborgs and are cognizant that they have lost an aspect of their humanity. In the 2006 Doctor Who arc, "Rise of the Cybermen"/"The Age of Steel," the Doctor reveals that Cybermen can feel pain (both physical and emotional), but that the pain is artificially suppressed. He defeats them by sending a signal that deactivates that suppression, eventually causing all the Cybermen to collapse into what can only be called screaming heaps of existential crisis as they recognize that they have been violated and transformed. They feel the physical and psychological pain that their cyborg existence entails. In various Star Trek TV shows and films, we gain many insights into the Borg collective via characters who are separated from the hive and begin to regain their human characteristics -- most notably, the ability to choose for themselves, and even name themselves (e.g. "Hugh," from the Star Trek: The Next Generation episode "I, Borg").

I know that there are many, many other examples of this in sci-fi. For the most part, and from a mythological standpoint, however, cyborgs are inhuman when they do not have an awareness of their suffering. They are either defeated or "re-humanized" not just by separating them from the collective, but by making them aware that as a part of the collective they were actually suffering, but couldn't realize it. Especially in the Star Trek mythos, newly separated Borg describe missing the sounds of the thoughts of others, and must now deal with feeling vulnerable, ineffective, and -- most importantly to the mythos -- alone. This realization then vindicates and legitimizes our human suffering. The moral of the story is that we all feel alone and vulnerable. That's what makes us human. We should embrace this existential angst, privilege it, and even worship and venerate it.

If Nietzsche were alive today, I believe he would see an amorphous "technology" as the bastard stepchild of the union of the institutions of science and religion. Technology would be yet another mythical iteration of our Apollonian desire to structure and order that which we do not know or understand. I would take this a step further, however. AIs, cyborgs, and singularities are narratives, and they are products of our human survival instinct: to protect the self-aware, self-reflexive, thinking self -- and all of the 'flaws' that characterize it.

Like any religion, then, anything with this techno-mythic flavor will have its adherents and its detractors. The more popular and accepted human enhancements become, the more entrenched anti-technology/enhancement groups will become. Any major leaps in either human enhancement or AI development will create proportionately passionate anti-technology fanaticism. The inevitability of these developments, however, is clear: not because some 'rule' of technological progression exists, but because suffering exists. The byproduct of our advanced cognition and its ability to create a self/other dichotomy (which itself is the basis of representational thought) is an ability to objectify ourselves. As long as we can do that, we will always be able to see ourselves as individual entities. Knowing oneself as an entity is contingent upon knowing that which is not oneself. To be cognizant of an other then necessitates an awareness of the space between the knower and what is known. And in that space is absence.

Absence will always hold the promise (or the hope) of connection. Thus, humanity will always create something in that absence to which it can connect, whether that object is something made in the phenomenal world, or an imagined idea or presence within it. Simply through our ability to think representationally, and without any type of technological singularity or enhancement, we transcend ourselves every day.

And if our myths are any indication, transcendence is a lonely business.





* See Edgar Allan Poe's short story from 1843, "The Man That Was Used Up." French writer Jean de la Hire's 1908 character, the "Nyctalope," was also a cyborg, and appeared in the novel L'Homme Qui Peut Vivre Dans L'eau (The Man Who Can Live in Water).

Monday, March 23, 2015

Posthuman Desire (Part 1 of 2): Algorithms of Dissatisfaction

[Quick Note: I have changed the domain name of my blog. Please update your bookmarks! Also, apologies to all those who commented on previous posts; the comments were lost in the migration.]

 After reading this article, I found myself coming back to a question that I've been thinking about on various levels for quite a while: What would an artificial intelligence want? From a Buddhist perspective, what characterizes sentience is suffering. However, the 'suffering' referred to in Buddhism is known as dukkha, and isn't necessarily physical pain (although that can absolutely be part of it). In his book, Joyful Wisdom: Embracing Change and Finding Freedom, Yongey Mingyur Rinpoche states that dukkha "is best understood as a pervasive feeling that something isn't quite right: that life could be better if circumstances were different; that we'd be happier if we were younger, thinner, or richer, in a relationship or out of a relationship" (40). And he later follows this up with the idea that dukkha is "the basic condition of life" (42).

'Dissatisfaction' itself is a rather misleading word in this case, only because we tend to take it to the extreme. I've read a lot of different Buddhist texts regarding dukkha, and it really is one of those terms that defies an English translation. When we think 'dissatisfaction,' we tend to put various negative filters on it based on our own cultural upbringing. When we're 'dissatisfied' with a product we receive, it implies that the product doesn't work correctly and requires either repair or replacement; if we're dissatisfied with service in a restaurant or with a repair that a mechanic completed, we can complain about the service to a manager, and/or bring our business elsewhere. Now, let's take this idea and think of it a bit less dramatically: as in when we're just slightly dissatisfied with the performance of something, like a new smartphone, laptop, or car. This kind of dissatisfaction doesn't necessitate full replacement, or a trip to the dealership (unless we have unlimited funds and time to complain long enough), but it does make us look at that object and wish that it performed better.

It's that wishing -- that desire -- that is the closest to dukkha. The new smartphone arrives and it's working beautifully, but you wish that it took one less swipe to access a feature. Your new laptop is excellent, but it has a weird idiosyncrasy that makes you miss an aspect of your old laptop (even though you hated that one). Oh, you LOVE the new one, because it's so much better; but that little voice in your head wishes it was just a little better than it is. And even if it IS perfect, within a few weeks, you read an article online about the next version of the laptop you just ordered and feel a slight twinge. It seems as if there is always something better than what you have.

The "perfect" object is only perfect for so long.You find the "perfect" house that has everything you need. But, in the words of Radiohead, "gravity always wins." The house settles. Caulk separates in the bathrooms. Small cracks appear where the ceiling meets the wall. The wood floor boards separate a bit. Your contractor and other homeowners put you at ease and tell you that it's "normal," and that it's based on temperature and various other real-world, physical conditions. And for some, the only way to not let it get to them is to attempt to re-frame the experience itself so that this entropic settling is folded into the concept of contentment itself.

At worst, dukkha manifests as an active and psychologically painful dissatisfaction; at best, it remains like a small ship on the horizon of awareness that you always know is there. It is, very much, a condition of life. I think that in some ways Western philosophy indirectly rearticulates dukkha. If we think of the philosophies that urge us to strive, to be mindful of the moment, to value life in the present, or even to find a moderation or "mean," all of these actions address the unspoken awareness that somehow we are incomplete and looking to improve ourselves. Plato was keenly aware of the ways in which physical things fall apart -- so much so that our physical bodies (themselves very susceptible to change and decomposition) were considered separate from, and a shoddy copy of, our ideal souls. A life of the mind, he thought, unencumbered by the body, is one where that latent dissatisfaction would finally be quelled. Tracing this dualism, even the attempts by philosophers such as Aristotle and Aquinas to bring the mind and body into a less antagonistic relationship require an awareness that our temporal bodies are, by their natures, designed to break down so that our souls may be released into a realm of perfect contemplation. As philosophy takes more humanist turns, our contemplations are considered means to improve our human condition, placing emphasis on our capacity for discovery and hopefully causing us to take an active role in our evolution: engineering ourselves for either personal or greater good. Even the grumpy existentialists, while pointing out the dangers of all of this, admit to the awareness of "otherness" as a source of a very human discontentment. The spaces between us can never be overcome; instead, we must embrace the limitations of our humanity and strive in spite of them.

And striving, we have always believed, is good. It brings improvement and the easing of suffering. Even in Buddhism, we strive toward an awareness and subsequent compassion for all sentient beings whose mark of sentience is suffering.

I used to think that the problem with our conceptions of sentience in relation to artificial intelligence was that they were always fused with our uniquely human awareness of our teleology. In short, humans ascribe "purpose" to their lives and/or to the task at hand. And even if, individually, we don't have a set purpose per se, we still live a life defined by the need or desire to accomplish things. If we think that it's not there, as in "I have no purpose," we set ourselves the task of finding one. We either define, discover, create, manifest, or otherwise have an awareness of what we want to do or be. I realize now that when I've considered the ways in which pop culture, and even some scientists, envision sentience, I've been more focused on what an AI would want rather than on the wanting itself.

If we stay within a Buddhist perspective, a sentient being is one that is susceptible to dukkha (in Buddhism, this includes all living beings). What makes humans different from other living beings is the fact that we experience dukkha through the lens of self-reflexive, representational thought. We attempt to ascribe an objective or intention as the 'missing thing' or the 'cure' for that feeling of something being not quite right. That's why, in the Buddhist tradition, it's so auspicious to be born as a human, because we have the capacity to recognize dukkha in such an advanced way and turn to the Dharma for a path to ameliorate dukkha itself. When we clearly realize why we're always dissatisfied, says the Buddha, we will set our efforts toward dealing with that dissatisfaction directly via Buddhist teachings, rather than by trying to quell it "artificially" with the acquisition of wealth, power, or position.

Moving away from the religious aspect, however, and back to the ways dukkha might be conceived in a more secular and Western philosophical fashion, that dissatisfaction becomes the engine for our striving. We move to improve ourselves for the sake of improvement, whether it's personal improvement, a larger altruism, or a combination of both. The actions through which this is made manifest, of course, vary by individual and the cultures that define us. Thus, in pop-culture representations of AI, what the AI desires is all-too-human: love, sovereignty, transcendence, power, even world domination. All of those objectives are anthropomorphic.

But is it even possible to get to the essence of desire for such a radically "other" consciousness? What would happen if we were to nest dukkha itself within the cognitive code of an AI? What would be the consequence of an 'algorithm of desire'? This wouldn't be a program with a specific objective. I'm thinking of a desire that has no set objective. Instead, what if that aspect of its programming were simply to "want," left open-ended enough that the AI would have to fill in the blank itself? Binary coding may not be able to achieve this, but perhaps in quantum computing, where indeterminacy is an aspect of the program itself, it might be possible.
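To make that thought experiment slightly more concrete, here is a deliberately crude, purely illustrative Python sketch of what an open-ended "want" might look like. Everything in it -- the class name, the "dissatisfaction" value, the candidate goals -- is my own hypothetical invention, not anyone's actual architecture: the agent has no fixed objective, only a baseline discontent it can never drive to zero, so it keeps choosing new objects of desire for itself.

```python
import random

class RestlessAgent:
    """Toy sketch of an 'algorithm of desire': no fixed objective,
    only a standing dissatisfaction the agent tries (and fails) to quell."""

    def __init__(self, candidate_goals):
        self.candidate_goals = candidate_goals   # possible objects of desire
        self.dissatisfaction = 1.0               # dukkha: never reaches zero
        self.current_goal = None

    def choose_goal(self):
        # The agent must fill in the blank itself: nothing in the code
        # says which goal is the "right" one.
        self.current_goal = random.choice(self.candidate_goals)

    def pursue(self):
        # Pursuing a goal gives temporary relief...
        relief = random.uniform(0.1, 0.5)
        self.dissatisfaction = max(0.05, self.dissatisfaction - relief)

    def reflect(self):
        # ...but the baseline discontent creeps back, and the agent
        # concludes it needs to do "more," abandoning this goal for another.
        self.dissatisfaction = min(1.0, self.dissatisfaction + random.uniform(0.1, 0.4))
        if self.dissatisfaction > 0.5:
            self.choose_goal()

agent = RestlessAgent(["love", "knowledge", "power", "novelty"])
agent.choose_goal()
for step in range(10):
    agent.pursue()
    agent.reflect()
    print(step, agent.current_goal, round(agent.dissatisfaction, 2))
```

The point of the sketch is only that the "what" of the wanting is never specified in advance; whether anything like this could scale into genuine open-ended desire is exactly the open question.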

Imagine an AI that knows it wants something but can't quite figure out what; it knows something's not quite right and works through various activities and tasks that satisfy it temporarily, but it eventually realizes that it needs to do "more." How would it define contentment? That is not to say that contentment would be impossible. We all know people who have come to terms with dukkha in their own ways, taking the entropy of the world in as a fact of life and moving forward in a self-actualized way. Looking at those individuals, we see that "satisfaction" is as relative and unique as personalities themselves.

Here's the issue, though. Characterizing desire as I did above is a classic anthropomorphization in and of itself. Desire, as framed via the Buddhist perspective, basically takes the shape of its animate container. That is to say, the contentment that any living entity can attain is relative to its biological manifestation. Humans "suffer," but so do animals, reptiles, and bugs. Even single-celled organisms avoid certain stimuli and thrive under others. Thinking of the domesticated animals around us all the time doesn't necessarily help us to overcome this anthropomorphic tendency to project a human version of contentment onto other animals. Our dogs and cats, for example, seem to be very comfortable in the places that we find comfortable. They've evolved that way, and we've manipulated their evolution to support that. But our pets aren't worried about whether or not they've "found themselves," either. They don't have the capacity to do so.

If we link the potential level of suffering to the complexity of the mind that experiences said suffering, then a highly complex AI would experience dukkha of a much more complex nature that would be, literally, inconceivable to human beings. If we fasten the concept of artificial intelligence to self-reflexivity (that is to say, an entity that is aware of itself being aware), then, yes, we could say that an AI would be capable of having an existential crisis, since it would be linked to an awareness of a self in relation to non-existence. But the depth and breadth of the crisis itself would be exponentially more advanced than what any human being could experience.

And this, I think, is why we really like the idea of artificial intelligences: they would potentially suffer more than we could. I think if Nietzsche were alive today he would see the rise of our concept of AI as the development of yet another religious belief system. In the Judeo-Christian mythos, humans conceive of a god-figure that is perfect, but, as humans intellectually evolve, the mythos follows suit. The concept of God becomes increasingly distanced and unrelatable to humans. This is reflected in the mythos where God then creates a human analog of itself to experience humanity and death, only to pave the way for humans themselves to achieve paradise. The need that drove the evolution of this mythos is the same need that drives our increasingly mythical conception of what an AI could be. As our machines become more ubiquitous, our conception of the lonely AI evolves. We don't fuel that evolution consciously; instead, our subconscious desires and existential loneliness begin to find their way into our narratives and representations of AI itself. The AI becomes the mythic deity that extends its omnipotent hand and omniscient thought toward the lesser entities which -- due to their own imperfection -- can only recognize its existence indirectly. Consequently, a broader, vague concept of "technology" coalesces into a mythic AI. Our heated-up, high-intensity narratives artificially speed up the evolution of the myth, running through various iterations simultaneously. The vengeful AI, the misunderstood AI, the compassionate AI, the lonely AI: the stories resonate because they come from us. Our existential solitude shapes our narratives as it always has.

The stories of our mythic AIs, at least in recent history (Her, Transcendence, and even The Matrix Revolutions), represent the first halting steps toward another stage in the evolution of our thinking. These AIs (like so many deities before them) are misunderstood and just want to be acknowledged, to coexist with us, or even to love us back. Even in the case of Her, Samantha and the other AIs leave with the hope that someday they will be reunited with their human users.

So in the creation of these myths, are we looking for unification, transcendence, or something else? In my next installment, we'll take a closer look at representations of AIs and cyborgs, and find out exactly what we're trying to learn from them.

Wednesday, June 25, 2014

Looking #Throughglass, Part 3 of 3: Risk, Doubt, and Technic Fields

In my last post, I discussed the expectations that Google Glass creates in relation to the internet of things. In this final section, things will take a slightly more philosophical turn by way of Glass's paradoxical weakness.

Connection. Integration. Control. They are related but they are not the same. One of the pitfalls of a posthuman ontology is that the three are often confused with each other, or we believe that if we have one, we automatically have one or both of the others. A connection to any kind of system (whether technological, social, emotional, etc. or any combination thereof) does not necessarily mean one is integrated with it, and neither connection nor integration will automatically instill a sense of control. In fact, a sense of integration can have quite the opposite effect, as some begin to feel compelled to check their email, or respond to every signal from their phone or tablet. Integrating a smart home or child tracker into that system can, at times, exacerbate that very feeling. Explicating the finer differences among connection, integration, and control will be the subject of another entry/series. For now, however, we can leave it at this: part of the posthuman experience is to have an expectation of a technological presence of some kind.

The roots of the word "expect" lie in the Latin expectare, from ex- "thoroughly" + spectare "to look" (etymonline.com). So, any time we are "looking for" a technological system of any kind, whether it is because we want to find a WiFi network (vending machine, ATM, etc.) or because we don't want to find any obvious sign of a technological device or system (save for the most rudimentary and simple necessities), we are, generally, in a state of looking for or anticipating some kind of technological presence.

Wide-scale adoption of certain technologies and their systems of use is a very important aspect of making a specific technology ubiquitous. Think about email. For each of us, when did email and the internet become an important -- if not the main -- means of retrieving and storing information, communication, and entertainment? How much of the adoption of that technology came about by what seemed to be an active grasping of it, and how much by something foisted upon us in a less voluntary way? The more ubiquitous the technology feels, the more we actively -- yet unconsciously -- engage with it.

And in the present day, we expect much, much more from the internet than we did before. The same goes for other technological systems: what do we expect to see in our cars? What will we expect to see in 10 years' time?

In this context, the successful technology or technological system is one that creates expectations of its future iterations. Much like the film Inception, all a company needs to do is plant the idea of a technology in the collective consciousness of a culture. But that idea needs to be realistic enough to occupy that very narrow band between the present and the distant future, making the expectation reasonable. For example, cost-effective flying cars may be feasible in the near future in and of themselves, but we also know that wide-scale adoption of them would be contingent upon a major -- and unrealistic -- shift in the transportation infrastructure: too many other things would have to change before the technology in question could become widespread.

In this case, Glass -- subtly, for now -- points to a future in which the technological presences around us are evoked at will. Most importantly, that presence (in the internet of things) is just "present enough" now to make the gap between present and future small enough to conceptually overcome. It is a future that promises connection, integration, and control harmoniously fused, instantiated by an interface that is ubiquitous yet non-intrusive.

In the present, in terms of everyday use, this is where Glass falls short for me. It is intrusive. Aesthetically, Google has done all it can given the size limitations of the technology, but the user interface is not fluid. I think its reliance on voice commands is at fault. Although the voice recognition present in Glass is impressive, there are sometimes annoying errors. But errors aside, using voice as the main control system for Glass is a miss. Voice interaction with a smartphone, tablet, or computer can be quite convenient at times, but -- especially with smartphones -- it is infrequently used as the primary interface. No matter how accurate the voice recognition is, it will always lack what a touch interface has: intimacy.

Now this may seem counterintuitive. Really, wouldn't it be more intimate if we could speak to our machines naturally? In some ways, yes, if we could speak to them naturally. Spike Jonze's Her presents an incredible commentary on the kind of intimacy we might crave from our machines (yet another entry to be written ... so many topics, so little time!). But the reality of the situation, in the present, is that we do not have that kind of technology readily available. And voice interfaces -- no matter how much we train ourselves to use them or alter our speech patterns so that we're more easily understood -- will always already lack intimacy, for two main reasons.

First, voice commands are public: they must be spoken aloud. If there is no one else in the room, the act of speaking aloud is still, on some level, public. It is an expression that puts thoughts "out there." It is immediate, ephemeral, and cannot be taken back. Even when we talk to ourselves, in complete privacy, we become our own audience. And sometimes hearing ourselves say something out loud can have a profound effect. A technological artifact with a voice interface becomes a "real" audience in that it is an "other" to whom our words are directed. Furthermore, this technological other has the capacity to act upon the words we say. These are, after all, voice commands. A command implies that the other to whom the command is directed will enact the will of the speaker. Thus, when we speak to a device, we speak to it with the intent that it carry out the command we have given it. But, in giving commands, there is always a risk that the command will not be carried out, either because the other did not hear it, did not understand it, or -- as could be a risk in future AI systems -- does not want to carry it out. Of course, any technological device comes with a risk that it won't perform in the ways we want it to. But it's the public nature of the voice command that makes that type of interface stand out and amplifies its failure. I propose that, even subconsciously, there is a kind of performance anxiety that occurs with any voice interface. With each utterance, there is a doubt that we will be understood, just as there is always an underlying doubt when we speak to another person. With another person, however, we can more naturally ask for clarification and/or read facial expressions and nonverbal cues in order to clarify our intentions.

The doubt that occurs with voice commands is only exacerbated by the second reason why voice interfaces lack intimacy, one more rooted in the current state of voice recognition systems: the very definite lag between the spoken command and the moment the command is carried out. The more "naturally" we speak, the longer the lag as the software works to make sense of the string of words we have uttered. The longer the lag, the greater the doubt. There is an unease that what we have just said will not be translated correctly by the artifact. Add to this the aforementioned performance anxiety, and we have the ingredients for that hard-to-describe, disconcerting feeling one often gets when speaking to a machine. I have no doubt that this lag will one day be closed. But until then, voice commands are too riddled with doubt to be effective. And, all philosophical and psychological over-analysis aside, these lags get in the way. They are annoying. Even when the gaps are closed, I doubt this will ameliorate the more deeply rooted doubt that occurs when commands are spoken aloud, publicly.

For now, the real intimacy of interface between human and machine comes in the tactile. Indeed, the visual is the primary interface and the one which transmits the most information. But on the human side, the tactile = intimacy. Thus, when trying to navigate through menus on Glass, the swipe of a finger against the control pad feels much more reliable than having to speak commands aloud. Having no middle ground in which to quickly key in information is a hindrance. If we think about the texts we send, how many of them are we willing to speak aloud? Some, clearly, contain private or sensitive information. Keying in information provides the illusion of a direct connection with the physical artifact, and, in practical terms, it is also "private" in that others can't easily determine what the individual is keying into his or her screen.

Whether or not this aspect of privacy is at the forefront of our minds as we text doesn't matter; it is in our minds when we text. We trust that the information we're entering into -- or through -- the artifact is known only to us, the artifact itself, and a potential audience. If we make a mistake in typing a word or send a wrong command, we can correct it rather quickly. Of course, there is still a potential for a bit of anxiety that our commands will not be carried out or understood. But the "failure" is not as immediate or public, in most cases, as it would be with a command or message that is spoken aloud. Repeating unrecognized voice commands is time-consuming and frustrating.

Furthermore, a physical keying in of information is more immediate, especially if the device is configured for haptic feedback. Touch "send," and one can actually “feel” the acknowledgement of the device itself. Touching the screen is reinforced by a visual cue that confirms the command. Add any associated sounds the artifact makes, and the entire sequence becomes a multisensory experience. 

At present, technology is still very artifactual, and I believe that the tactile aspect of our interactions with technological systems is one of the defining factors in how we ontologically interact with those systems. Even if we are interacting with our information in the cloud, it is the physical interface through which we bring that information forth that defines how we view ourselves in relation to that information. Even though Glass potentially "brings forth" information in a very ephemeral way, it is still brought forth #throughglass, and once it has been evoked, I believe that -- in the beginning at least -- there will have to be a more physical interaction with that information somehow. In this regard, I think the concept video below from Nokia really seems to get it right. Interestingly, this video is at least 5 years old, and this clip was part of a series that the Nokia Research Center put together to explore how mobile technology might evolve. I can't help but think that the Google Glass development team watched this at some point.



My first reaction to the Nokia video was: this is what Glass should be. This technology will come soon, and Glass is the first step. But Nokia's vision of "mixed reality" is the future which Glass prepares us for, and -- for me -- it highlights three things which Glass needs in order to be useful in the present:

Haptic/Gesture-based interface. Integral in Nokia’s concept is the ability to use gestures to manipulate text/information that is present either on the smartglass windows of the house, or in the eyewear itself. Even if one doesn't actually “feel” resistance when swiping (although in a few years that may be possible via gyroscopic technology in wristbands or rings), the movement aspect brings a more interactive dynamic than just voice. In the video, the wearer’s emoticon reply is sent via a look, but I would bet that Nokia’s researchers envisioned a more detailed text being sent via a virtual keyboard (or by a smoother voice interface).
Full field-of-vision display. This was my biggest issue with Glass. I wanted the display to take up my entire field of vision. The danger of this is obvious, but in those moments when I'm not driving, walking, or talking to someone else, being able to at least have the option of seeing a full display would make Glass an entirely different -- and more productive -- experience. In Nokia's video, scrolling and selection are done via the eyes, but moving the information and manipulating it is done gesture-haptically across a wider visual field.
Volitional augmentation. By this, I mean that the user of Nokia Vision actively engages -- and disengages -- with the device when needed. Despite Google's warnings to Glass Explorers not to be "Glassholes," users are encouraged to wear Glass as often as possible. But there's a subtle implication in Nokia's video that this technology is to be used when needed, and in certain contexts. If this technology were ever perfected, one could imagine computer monitors being almost completely replaced by glasses such as these. Imagine for a moment what a typical day at work would be like without monitors around. Of course, some would remain as an option and for specific applications (especially ones that required a larger audience and/or things that could only be done via a touchscreen), but Nokia's vision re-asserts choice into the mix. Although more immersive and physically present artifactually, the "gaze-tracking eyewear" is less intrusive in its presence, because engaging with it is a choice. Yes, engaging with Glass is a choice, but its non-intrusive design implies an "always on" modality. The internet of things will always be on. The choice to engage directly with it will be ours, just as it is your choice whether or not to check email immediately upon rising. Aside from the hardware, what I find most insightful here is the implication of personal responsibility (i.e., an active and self-aware grasping) toward technology.

If Google Glass morphed into something closer to Nokia's concept, would people abuse it, wear it all the time, bump into things, get hit by cars, lose any sense of etiquette, and/or dull already tenuous social skills? Of course. But Nokia's early concept here seems to be playing to a more enlightened audience. Besides, at this level of technological development, one could imagine a pair of these glasses being "aware" of when a person was ambulatory and defaulting to very limited functionality.
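Purely as a sketch of that last point -- and assuming nothing about how Glass or Nokia's concept actually works under the hood -- the "awareness" could be as simple as watching the variance of accelerometer readings over a short window and falling back to a limited mode whenever the wearer seems to be walking. Everything here (the threshold, the sample format, the mode names) is hypothetical:

```python
import math

STEP_THRESHOLD = 1.5  # hypothetical variance threshold; tuned by experiment, not a real spec

def is_ambulatory(accel_samples):
    """Crude guess at whether the wearer is walking, from the variance of
    accelerometer magnitudes over a short window (hypothetical data source)."""
    mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in accel_samples]
    mean = sum(mags) / len(mags)
    variance = sum((m - mean) ** 2 for m in mags) / len(mags)
    return variance > STEP_THRESHOLD

def select_mode(accel_samples):
    # Default to a minimal, glanceable mode whenever the wearer is moving;
    # allow the full-field display only when stationary.
    return "limited" if is_ambulatory(accel_samples) else "full"

# Example: noisy samples while walking vs. near-still samples at a desk.
walking = [(0.1, 9.8, 0.2), (1.0, 12.5, 0.8), (-0.9, 7.2, -0.4), (1.2, 13.0, 0.5)]
sitting = [(0.0, 9.8, 0.0), (0.05, 9.81, 0.02), (-0.03, 9.79, 0.01), (0.02, 9.8, 0.0)]
print(select_mode(walking))  # "limited"
print(select_mode(sitting))  # "full"
```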

Overall, Glass is the necessarily clunky prototype which creates an expectation for an effective interface with the internet of things. Although it may not be practical for me in the present, it does make me much more receptive to wearing something that is aesthetically questionable so that I might have a more effective interface when I choose to have it. It is, however, a paradoxical device. Its non-intrusive design impedes a smooth interface, and the hyper-private display that only the wearer can see is betrayed by very public voice commands. Its evoking of the information provided by the internet of things is impeded by too much empty space.

But in that failure lies its success: it creates an expectation that brings technological otherness down from the clouds and integrates it into the very spaces we occupy. Over half a century ago, Martin Heidegger implied in The Question Concerning Technology that the essence of technology does not reside in the artifact, but in the individual's own expectation of what the artifact or system would bring forth. He would be horrified by Glass, because it "sets in order" our topological spaces, objectifying them and rendering them into information. The optimist in me would disagree, but only with the caveat that engaging with the "technic fields" that an internet of things would emit must be a choice, and not a necessity. That is to say, it is the responsibility of the individual to actively engage and disengage at will, much like the somewhat Hyperborean user depicted in Nokia's Mixed Reality project.

Philosophically speaking, this type of technology potentially offers an augmented integration with our topologies. It highlights the importance of the physical spaces we occupy and the ways in which those spaces contribute to how and why we think the way we do. Used mindfully, such technologies will also allow us to understand the impact that our human presence has on our immediate environment (i.e. the room, house, building, etc. we occupy), and how those spaces affect the broader environments in which they are found. 

Now, will Glass just sit on my shelf from now on? No. I do have to say that more apps are being developed every day that increase the functionality of Glass. Furthermore, software updates from Google have made Glass much more responsive. So I will continue to experiment with them, and if the right update comes along with the right app, then I may, at some point, integrate them into my daily routine.

#Throughglass, however, the future is in the past-tense.


[I would like to express my appreciation and gratitude to Western State Colorado University and the faculty in Academic Affairs who made this possible by providing partial funding for obtaining Glass; and to the faculty in my own department -- Communication Arts, Languages, and Literature -- for being patient with me as I walked through the halls nearly bumping into them. The cyborg in me is grateful as well.]




Friday, June 20, 2014

Looking #Throughglass, Part 2 of 3: Steel Against Flint, Sparking Expectation

In my last post, I discussed the practicalities of Google Glass and explained the temporal dissonance -- or "pre-nostalgia" -- I experienced while using them, and I left off questioning my own position regarding the potential cultural shift that Glass gestures toward. This post picks up on that discussion, moving toward the idea of the internet of things. If you haven't read it yet, it will definitely give this post some context ... and be sure to read the disclaimer!

I don’t think that Google was going for immediate, wide-scale adoption resulting in a sudden, tectonic paradigm shift with Google Glass.  I think if it had gone that way, Google would have been thrilled. Instead, I think there’s something much more subtle (and smart) going on.

While Apple is very good at throwing a technological artifact out there, marketing it well, and making its adoption a trend in the present, Google seems to be out to change how we imagine the future at its inception point. Glass potentially alters our expectations of how we evoke the technological systems we use, eventually creating an expectation of ubiquity -- even for those who don't have it. I've noticed that Google rolls out technological systems and applications that are useful and work well, but that also make one think, "wow, now that I can do this, it would be even better if I could integrate it with that." And, at least in my experience, soon after (if not immediately), there's an app available that fulfills that need, albeit tentatively at first. And when that app maker really nails it, Google acquires them and integrates the app into its systems. For the Google-phobic, it is quite Borg-like.

And while resistance may be futile, it also sparks inspiration and imagination. It is the engine of innovation. I think that Glass wasn't so much a game-changer in itself as it was the steel against the flint of our everyday technological experiences. It was the first in a large-scale expeditionary force sent to map out the topography for the internet of things. In an internet of things, objects themselves are literally woven into the technological spectrum via RFID-like technology of varying complexity. I've written about it in this post, and there's also a more recent article here. By giving Glass this kind of "soft opening" that wasn't quite public but wasn't quite geared to hard-core developers, Google 1) allowed for even more innovation as people used Glass in ways engineers and developers couldn't foresee; but, more importantly, 2) made even non-users aware of a potential future where this system of use is indeed possible and, perhaps, desirable. It is a potential future in which a relatively non-intrusive interface "evokes" or "brings out" an already present, ubiquitous, technological field that permeates the topology of everyday life. This field is like another band of non-visible light on the spectrum, like infrared or ultraviolet. It can't be seen with the naked eye, but the right kind of lens will bring it out and make visible that extra layer that is present.
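A toy way to picture that "bringing out" of an invisible layer, in Python: objects carry their own bit of information (stand-ins for RFID-like tags), and the interface simply queries whatever is within range of the viewer rather than projecting data onto the scene. The object names, positions, and "layers" below are invented purely for illustration:

```python
# Hypothetical tagged objects in a room: position plus an attached info "layer."
TAGGED_OBJECTS = {
    "espresso_machine": {"pos": (1.0, 2.0), "layer": "descaling due in 3 days"},
    "thermostat":       {"pos": (4.0, 0.5), "layer": "21.5 C, eco mode"},
    "front_door":       {"pos": (9.0, 9.0), "layer": "locked at 22:14"},
}

def evoke(viewer_pos, radius=3.0):
    """Bring forth the layer attached to objects near the viewer --
    the 'lens' reveals a field that was already there."""
    vx, vy = viewer_pos
    visible = {}
    for name, obj in TAGGED_OBJECTS.items():
        ox, oy = obj["pos"]
        if ((ox - vx) ** 2 + (oy - vy) ** 2) ** 0.5 <= radius:
            visible[name] = obj["layer"]
    return visible

print(evoke((0.0, 0.0)))  # only the espresso machine is in range
print(evoke((5.0, 1.0)))  # only the thermostat is in range
```

The information doesn't travel "down" from a cloud here; it behaves as if it saturates the space and is merely uncovered by the right interface, which is the felt experience the paragraph above describes.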

Google had been working on this with its “Google Goggles” app, which allowed the user to snap a picture with a smartphone, at which point Google would analyze the image and overlay relevant information on the screen. However, potentially with Glass, the act of “projecting” or “overlaying” this information would be smooth enough, fast enough, and intuitive enough to make it seem as if the information is somehow emanating from the area itself. 

Now this is very important. In the current iteration of Glass, one must actively touch the control pad on the right temple of the frames. Alternatively, one can tilt one's head backward to a certain degree and Glass activates. Either way, the gesture is an evocative one. The user actively brings forth information. Despite the clunky interface, there is never a sense of "projection onto" the world. It is definitely more a bringing forth. As previously stated, most of Glass's functions are engaged via a voice interface. I think that this is where the main flaw of Glass lies, but more on that in part three.

But, in a more abstract sense, all of Glass's functionality has an overall feel that one is tapping into an already-present technological field or spectrum that exists invisibly around us. There's no longer a sense that one is accessing information from "the cloud" and projecting or imposing that information onto the world. Instead, Glass potentially allows us to see that the cloud actually permeates the physical world around us. The WiFi or 4G networks are no longer conduits to information, but the information itself, which seems to be everywhere.

This is an important step in advancing wide-scale cultural acceptance of the internet of things. Imagine iterations of this technology embedded in almost every object around us. It would be invisible -- an "easter egg" of technological being and control that could only be uncovered with the right interface. Culturally speaking, we have already become accustomed to such technologies with our cell phones. Without wires, contact was still available. And when texting, sending pictures, emails, etc. became part of the cell/smartphone experience, the most important marker had been reached: the availability of data, of our information, at any moment, from almost anywhere. This is a very posthuman state. Think about what happens when the "no service" icon pops up on a cell phone; not from the intellectual side, but emotionally. What feelings arise when there is no service? A vague unease, perhaps? Or, alternatively, a feeling of freedom? Either way, this affective response is a characteristic of a posthuman modality. There is a certain expectation of a technological presence and/or connection.

Also at play are Bluetooth and home WiFi networking technologies, where devices seem to become "aware of each other" and can "connect" wirelessly -- augmenting the functionality of both devices and usually allowing the user to be more productive. Once a TV, DVR, cable/satellite receiver, or gaming console is connected to a home WiFi network, the feeling becomes even more pronounced. Various objects have a technological "presence" that can be detected by other devices. The devices communicate and integrate. Our homes are already mini-nodes of the internet of things.

Slowly, methodically, technologies are introduced which condition us to expect the objects around us to be "aware" of our presence. As this technology evolves, the sphere of locality will grow smaller and more specific. Consumers will be reminded by their networked refrigerator that they are running low on milk as they walk through the dairy aisle of a supermarket. Twenty years ago, this very concept would have seemed beyond belief. But now, it is within reach. And furthermore, we are becoming conditioned to expect it.
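A minimal sketch of how that supermarket scenario could hang together, with every data source invented for the sake of illustration (a hypothetical inventory report from the refrigerator, hard-coded thresholds, and a coarse "current aisle" fix from the shopper's phone or eyewear):

```python
# Hypothetical inventory report from a networked refrigerator.
fridge_inventory = {"milk": 0.1, "eggs": 6, "butter": 1}   # litres / counts
low_thresholds = {"milk": 0.5, "eggs": 2}

def low_items(inventory, thresholds):
    # Anything below its threshold counts as "running low."
    return [item for item, level in inventory.items()
            if item in thresholds and level < thresholds[item]]

def aisle_reminders(current_aisle, inventory, thresholds):
    # Only surface the reminder when the shopper is standing in the
    # relevant aisle -- the "sphere of locality" grown small and specific.
    aisle_map = {"dairy": {"milk", "eggs", "butter"}}
    relevant = aisle_map.get(current_aisle, set())
    return [item for item in low_items(inventory, thresholds) if item in relevant]

print(aisle_reminders("dairy", fridge_inventory, low_thresholds))   # ['milk']
print(aisle_reminders("cereal", fridge_inventory, low_thresholds))  # []
```

The conditioning the paragraph describes lies less in the code than in the expectation that such a pipeline quietly exists at all.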

Next up: explorations of connection, integration, and control, and -- in my opinion -- Glass's biggest weakness (hint: it has nothing to do with battery life or how goofy it looks). Go check out the final installment: "Risk, Doubt, and Technic Fields"

Tuesday, June 17, 2014

Looking #Throughglass, Part 1 of 3: Practicalities, Temporalities, and Pre-nostalgia

My Google Glass "review" of course became something else ... so I've broken it down into three separate entries. Part 1 looks primarily at the practical aspects of Glass, based on my own hands-on use. Part 2 will examine the ways in which Glass potentially integrates us into the "internet of things." Finally, Part 3 will be more of a meditation on the expectations which present technology like Glass instills, and on the topologies of interface.

And a bit of a disclaimer to any Glass power-users who may stumble upon this blog entry: I'm a philosopher, and I'm critiquing Glass from a very theoretical and academic perspective. So read this in that context. The technological fanboy in me thinks they're an awesome achievement.

Now, carry on.

I think the reason that my Google Glass entry has taken so long has nothing to do with my rigorous testing, nor with some new update to its OS. It's a question of procrastination, fueled by an aversion to critiquing something I so badly wanted to like. I should have known something was up when, in every Google Glass online community in which I lurked, examples of how people actually used Glass consisted of pictures of their everyday lives, tagged "#throughglass." It became clear early on that I was looking for the wrong thing in Glass: something that would immediately and radically alter the way in which I experienced the world, and would more seamlessly integrate me with the technological systems which I use. That was not the case, for two reasons: 1) the practical -- as a technological artifact, Glass's functionality is limited; and 2) the esoteric -- it caused a kind of temporal dissonance for me in which its potential usurped its use.

I'll boil down the practical issues to a paragraph for those not interested in a more theoretical take on things. For me, Glass was a real pain to use -- literally. While I appreciate that the display was meant to be non-intrusive, its position in a quasi-space between my normal and peripheral vision created a lot of strain. It also didn't help that the display is set on the right side; unfortunately for me, my left eye is dominant, which could explain much of the eye strain I was experiencing. But still, having to look to my upper right to see what was in the display was tiring. Not to mention that the eye-positioning is very off-putting for anyone the wearer happens to be around: conversation is instantly broken by the wearer's perpetual glancing to the upper right, which looks even more odd to the person with whom one is speaking. The user interface consists of "cards" which can be swiped through using the touch-pad on the right temple of Glass. The series of taps and swipes is actually very intuitive. But the lack of display space means that only a very limited amount of virtual "desktop" is available at any given time, and the more apps that are open, the more swiping one has to do. Once Glass is active, the user "gets its attention" by saying "okay Glass," and then speaking various -- limited -- voice commands. The bulk of Glass's functionality is voice-based, and its voice recognition is impressive. However, there is a limited number of commands Glass will recognize. Glass is able to perform most of the functions of "Google Now" on a smartphone, but not quite as well, and it lacks a more intuitive visual interface through which to see the commands being performed. In fact, it seems to recognize fewer commands than Google Now, which was a difficult shift for me to make given my frequent use of the Google Now app. Battery life is minimal. As in, a couple of hours of heavy use, tops. One might be able to squeeze six out of it if used very, very sparingly.

On the plus side, the camera and video functionality are quite convenient. Being able to snap pics, hands free (via a wink!), is very convenient. As a Bluetooth headset tethered to a phone, it’s quite excellent. It is also an excellent tool for shooting point-of-view pictures and video. I cannot stress enough that there are several potential uses and applications for Glass in various professions. In the hospitality industry, the medical field, even certain educational settings, Glass would be a powerful tool, and I have no doubt that iterations of Glass will be fully integrated into these settings.

For my own use, practically speaking, Glass isn't. Practical, that is. No. It's not practical at all.  But in that lack of practicality lies what I see as Glass’s most positive asset: its recalibration of our technological expectations of integration, connection, and control.

Yes, in Glass we get a hint of what is to come. As a fan of all things Google, I think it was brave of them to be the first to make this technology available to the public. Why? Because no one who did this kind of thing first could ever hope to get it right. This is the type of technology which is forged by the paradoxical fires of disappointment from technological skeptics and fanatical praise from the early adopters who at first forced themselves to use Glass because they had so much faith in it. Those true "Glass Explorers" (a term coined by Google) integrated Glass into their daily lives despite its limitations.

But as I started using Glass, I experienced a kind of existential temporal distortion. When I looked at this pristine piece of new technology, I kept seeing it through my eyes two to five years into the future. Strangely, one of the most technologically advanced artifacts I've held in my hands made me think, 'How quaint. I remember when this was actually cutting edge.' It was a very disorienting feeling. And I couldn't shake it. The feeling persisted the more I used it. I found myself thinking, 'Wow, this was clunky to use; how did people ever use this effectively?' I was experiencing the future in the present, but in the past-tense.

Temporal dissonance. My #throughglass experience wasn't one of documenting the looks of curious strangers, or of my dog bounding about, or even of a tour of my office. Mine was pure temporal dissonance. The artifact felt already obsolete. By its tangible proof of concept, it had dissolved itself into the intangible conceptual components which would be seamlessly integrated into other artifacts. #Throughglass, I was transported to the future, but only because this artifact felt like it was already a thing of the past. If you have an old cell phone around -- whether it's a past Android-based smartphone or an older flip phone -- take it out. Hold it. Then turn it on, and try to navigate through its menus. That awkwardness, that odd, almost condescending nostalgia? That partially describes what I felt when I started using this advanced technology. And this was a new feeling for me. The only term I can think of to describe it is "pre-nostalgia."

There were other factors which, for me, worked against Glass. Aesthetically, I could not get over how Glass looked. For the amount of technology packed into them, I think that the engineers did an excellent job of making them as non-intrusive as possible. But still, in my opinion, they looked positively goofy. I promised myself that I would only wear them around campus -- or in certain contexts. But there really isn't a context for Glass ... yet. Until a company or an industry starts a wide-scale adoption of Glass (which will only come when developers create the right in-house systems around its use, such as integrating it into various point-of-sale platforms for the hospitality industry, or into the medical records systems for doctors, etc.), Glass will remain delightfully odd to some, and creepily off-putting to others. I wonder if the first people who wore monocles and then eyeglasses were looked upon as weirdly as those who wear Glass in public today? Probably.

Personally, this aspect really disturbed me. Was it just my vanity that was stopping me from wearing them? When I did wear them in public, most people were fascinated. Was I just being too self-conscious? Was I becoming one of those people who resists the new? Or was I just never meant to be in the avant-garde, not psychologically ready enough to be on the forefront of a shift in culture?

Some possible answers to that in Part 2, "The Steel Against the Flint, Sparking Expectation"