
Wednesday, June 24, 2020

COVID Topologies: Compelled to Be Present

At the suggestion of a colleague, I recently read J.G. Ballard's "The Enormous Space." The short story, about a man who decides that he's never going to leave his house again, has a "Bartleby, the Scrivener" meets Don DeLillo vibe to it, where -- as his self-imposed isolation sets in -- he starts to explore the space of his home more intimately, with predictably hallucinogenic results. But his initial explorations resonate with work in New Materialism and Object-Oriented Ontology: particularly as he explores his own relationship with his physical environment.

I believe the story has gotten more attention in the shadow of COVID and its resultant quarantines -- which, as of today, June 24, 2020, people in the United States have seemingly become bored with and "prefer not to" follow. But the ongoing, slow collapse of the United States is something for another entry. I also believe that strict quarantines will be in effect again in some states after death tolls reach a level that registers on even the most fervent pro-life, evangelical conservatives' radar: that is to say, when enough of the right people die for the "all lives matter" crowd to actually notice; and/or when "bathing in the blood of Jesus" is no longer the necessary tonic to mitigate the long, slow, isolated, and painful COVID deaths of loved ones. I have no doubt those deaths will be inevitably and preposterously blamed on Hillary Clinton, Barack Obama, and somehow Colin Kaepernick and the Black Lives Matter movement.

On some level, however, I think that broader politically- and religiously-based science denial is linked to the same emotions that people felt when they were compelled to stay home: an abject fear of seeing things as they are. Now that's a philosophically loaded statement, I know: can we ever see things "as they are"? Let's not get mired in the intricacies of phenomenology here, though. Those who were in quarantine for any length of time were suddenly faced with the reality of their living spaces. Those home environments were no longer just spaces in which we "crashed" after work, or the spaces which we meticulously crafted based on home decor magazines. Whether living in a "forever home," a "tiny house," or the only space a budget would allow, people were faced with the "reality" of those spaces -- spaces which became the material manifestation of choices and circumstances. Those spaces were no longer just the places we "had" or "owned" or "rented"; they became the places where people actually lived. We were thrust into an uninvited meditation on the difference between occupying a space and living in one.

Much like Geoffrey Ballantyne in "The Enormous Space," we found ourselves subject to the spaces which previously remained "simply there." Some, I know, went on J.A.K. Gladney-like purges as they suddenly realized just how useless -- and heavy -- many of the objects around them were, and instead of finding themselves surrounded by the fruits of their labor, found themselves trapped by the artifacts of the past. How many people during quarantine fumbled through their possessions, timidly fondling knickknacks, looking for some kind of Kondo-joy? Others, I'm sure, went the opposite route and ordered MORE things from the internet to serve as an even more claustrophobic cocoon of stuff to block out all the other stuff which we couldn't bring ourselves to face -- let alone touch and purge. Still others continued to fail to notice their surroundings at all, yet found themselves suffering random anxiety and panic attacks -- blaming the fear of COVID rather than the fact that their surrounding spaces were becoming increasingly smaller as the detritus of daily life "at home" collected around them.

Those spaces ... the spaces in which we "live" ... which were once relegated to the role of a background to the present, were suddenly thrust into the foreground, reclaiming us and our subjectivity. They didn't just become present, they became the present -- a present in which we were implicated; a present with which we may have grown unfamiliar. And, given the circumstances, can you blame anyone for not being too keen on the present? Whether it's seeing more unrest on the news or on social media, or being compelled to haplessly homeschool your own children? The present isn't always that much fun.

I think, though, that there is at least one positive thing that we can learn from Geoffrey Ballantyne: that it is possible for us to more consciously occupy the present moment instead of trying to avoid it. While I don't advocate the extremes to which Geoffrey goes (no spoilers here, but you may never look at your freezer the same way again), I do think that there is something to be said for noticing and engaging the spaces in which we are implicated. The spaces in which we "live" should be the ones with which we engage rather than just treat as some kind of visual or ontological backdrop. Engaging with our spaces is a way of seeing things as they are. It's a way of being aware.











 

Monday, January 14, 2019

Academic Work and Mental Health

I've always said to my students -- especially those thinking of doing Masters or Ph.D. programs -- that graduate work (and academic work in general) can psychologically take you apart and put you back together again. It will often bring up deeper issues that have been at play in our day-to-day lives for years.

As I was annotating a book the other day, I felt a familiar, dull ache start to radiate from my neck, to my shoulders, shoulder blades, and eventually lower back. I took a moment to think about how I was sitting and oriented in space: I was hunched over -- my shoulders were high up in an incredibly unnatural position close to my ears. I thought about what my current acupuncturist, ortho-bionomist, and past three physical therapists would say. I stretched, straightened myself out, and paused to figure out why I hunch the way I do when I write.

It’s like I’m under siege, I thought to myself.

And then I realized there was something to that.

If there’s one refrain from my childhood that still haunts me when I work it’s “You’re lazy.”

My parents had this interesting pretzel logic: The reason I was smart was because I was lazy. I didn’t want to spend as much time on homework as the other kids because I just wanted to watch TV and do nothing. So I’d finish my homework fast and get A’s so “I didn’t have to work.”

No, that doesn’t make sense. But it was what I was told repeatedly when I was in grade school. Then in high school, on top of all of the above, I was accused of being lazy because I didn’t have a job at 14, like my father did.

And then in college, despite being on a full academic scholarship, getting 4.0s most semesters, making the dean's list (and eventually graduating summa cum laude), I was perpetually admonished by my parents for not getting a job during the 4-week winter break, or getting a "temporary job" in the two or three weeks between the last day of classes and the first day of my summer jobs (lab assistant for a couple of years, and then day camp counselor). Again, according to them, it was because I was "lazy." My work-study jobs during the school year as an undergraduate didn't count because they weren't "real jobs."

And even though I was doing schoolwork on evenings and weekends, my parents often maintained that I should be working some part-time job on the weekends.

So doing schoolwork (that is to say, doing the work to maintain my GPA, scholarships, etc.,) wasn’t “real work.” In retrospect, the biggest mistake of my undergrad days was living at home. But I did so because I got a good scholarship at a good undergrad institution close to home. It was how I afforded college without loans.

But just about every weekend, every break, or every moment I was trying to do work, I was at risk of having to field passive aggressive questions or comments from my mother and father regarding my avoidance of work.

My choice to go to grad school because I wanted to teach was, of course, because I didn’t want a “real job.”

Most confusing, though, was how my parents (my mother in particular) would tout my achievements to family and friends, even telling them "how hard [I] worked." But when relatives or friends were gone, the criticism, passive aggressive comments, and negativity always came back. It's no wonder I hunch when I work. I am in siege mode. It also explains why my dissertation took me so long to write, and why that period of my life was the most difficult in terms of my mental health: the more I achieved, the lazier I thought I was actually being.

Even though I have generally come to terms with the complete irrationality of that logic, I do have to take pains (often literally) to be mindful of how I work, and not build a narrative out of the negative thoughts that do arise as I submerge into extended research. I went back into counseling last summer, mainly because I was starting to feel a sense of dread and depression about my sabbatical, which I knew made no sense. I'm so glad I did.

The things we achieve -- whether academic, professional, personal, etc. -- are things of which we should be proud. Sometimes we have to be a little proactive in reminding ourselves of how to accept our own accomplishments.

And maybe every 30 or 60 minutes, stand up and stretch.






Tuesday, January 19, 2016

Mythic Singularities: Or How I Learned To Stop Worrying and (kind of) Love Transhumanism

... knowing the force and action of fire, water, air the stars, the heavens, and all the other bodies that surround us, as distinctly as we know the various crafts of our artisans, we might also apply them in the same way to all the uses to which they are adapted, and thus render ourselves the lords and possessors of nature.  And this is a result to be desired, not only in order to the invention of an infinity of arts, by which we might be enabled to enjoy without any trouble the fruits of the earth, and all its comforts, but also and especially for the preservation of health, which is without doubt, of all the blessings of this life, the first and fundamental one; for the mind is so intimately dependent upon the condition and relation of the organs of the body, that if any means can ever be found to render men wiser and more ingenious than hitherto, I believe that it is in medicine they must be sought for. It is true that the science of medicine, as it now exists, contains few things whose utility is very remarkable: but without any wish to depreciate it, I am confident that there is no one, even among those whose profession it is, who does not admit that all at present known in it is almost nothing in comparison of what remains to be discovered; and that we could free ourselves from an infinity of maladies of body as well as of mind, and perhaps also even from the debility of age, if we had sufficiently ample knowledge of their causes, and of all the remedies provided for us by nature.
- Rene Descartes, Discourse on the Method of Rightly Conducting the Reason and Seeking Truth in the Sciences, 1637

As a critical posthumanist (with speculative leanings), I found myself always a little leery of transhumanism in general. Much has been written on the difference between the two, and one of the best and most succinct explanations can be found in John Danaher's "Humanism, Transhumanism, and Speculative Posthumanism." But very briefly, I believe it boils down to a question of attention: a posthumanist, whether critical or speculative, focuses his or her attention on subjectivity: investigating, critiquing, and sometimes even rejecting the notion of a homuncular self or consciousness, and the assumption that the self is some kind of modular component of our embodiment. Being a critical posthumanist does make me hyper-aware of the implications of Descartes' ideas presented above in relation to transhumanism. Admittedly, Danaher's statement "Critical posthumanists often scoff at certain transhumanist projects, like mind uploading, on the grounds that such projects implicitly assume the false Cartesian view" hit close to home, because I am guilty of the occasional scoff.

But there really is much more to transhumanism than sci-fi iterations of mind uploading and AIs taking over the world. Just like there is more to Descartes than his elevation, reification, and privileging of consciousness. From my critical posthumanist perspective, what has always been the hardest pill to swallow with Descartes wasn't necessarily the model of consciousness he proposed. It was the way that model has been taken so literally -- as a fundamental fact -- and that literalism has been one of the deeper issues driving me philosophically. But, as I've often told my students, there's more to Descartes than that. Examining Descartes's model as the metaphor it is gives us a more culturally based context for his work, and a better understanding of its underlying ethics. I think a similar approach can be applied to transhumanism, especially in light of some of the different positions articulated in Pellissier's "Transhumanism: There are [at least] ten different philosophical categories; which one(s) are you?"

Rene Descartes's faith in the ability of human reason to render us "lords and possessors of nature" through an "invention of an infinity of arts," is, to my mind, one of the foundational philosophical beliefs of transhumanism. And his later statement, that "all at present known in it is almost nothing in comparison of what remains to be discovered" becomes its driving conceit: the promise that answers could be found which could, potentially, free humanity from "an infinity of maladies of body as well as of mind, and perhaps the debility of age." It follows that whatever humanity can create to help us unlock those secrets is thus a product of human reason. We create the things we need that help us to uncover "what remains to be discovered."

But this ode to human endeavor eclipses the point of those discoveries: "the preservation of health" which is "first and fundamental ... for the mind is so intimately dependent on the organs of the body, that if any means can ever be found to render men wiser and more ingenious ... I believe that it is in medicine that it should be sought for."

Descartes sees an easing of human suffering as one of the main objectives of scientific endeavor. But this aspect of his philosophy is often eclipsed by the seemingly infinite "secrets of nature" that science might uncover. As is the case with certain interpretations of the transhumanist movement, the promise of what can be learned often eclipses the reasons why we want to learn them. And that promise can take on mythic properties. Even though progress is its own promise, a transhuman progress can become an eschatological one, caught between a Scylla of extreme interpretations of "singularitarian" messianism and a Charybdis of similarly extreme interpretations of "survivalist transhuman" immortality. Both are characterized by a governing mythos -- a set of beliefs -- that is technoprogressive by nature but risks fundamentalism in practice, especially if we lose sight of a very important aspect of technoprogressivism itself: "an insistence that technological progress needs to be wedded to, and depends on, political progress, and that neither are inevitable" (Hughes 2010, emphasis added). Critical awareness of the limits of transhumanism is similar to having a critical awareness of any functional myth. One does not have to take the Santa Claus or religious myths literally to celebrate Christmas; instead one can understand the very man-made meaning behind the holiday and the metaphors therein, and choose to express or follow that particular ethical framework accordingly, very much aware that it is an ethical framework that can be adjusted or rejected as needed.

Transhuman fundamentalism occurs when critical awareness that progress is not inevitable is replaced by an absolute faith and/or literal interpretation that -- either by human endeavor or via artificial intelligence -- technology will advance to a point where all of humanity's problems, including death, will be solved. Hughes points out this tension: "Today transhumanists are torn between their Enlightenment faith in inevitable progress toward posthuman transcension and utopian Singularities, and their rational awareness of the possibility that each new technology may have as many risks as benefits and that humanity may not have a future" (2010).  Transhuman fundamentalism characterized by uncritical inevitablism would interpret progress as "fact." That is to say, that progress will happen and is immanent. By reifying (and eventually deifying) progress,  transhuman fundamentalism would actually forfeit any claim to progress by severing it from its human origins. Like a god that is created by humans out of a very human need, but then whose origins are forgotten, progress stands as an entity separate from humanity, taking on a multitude of characteristics rendering it ubiquitous and omnipotent: progress can and will take place. It has and it always will, regardless of human existence; humanity can choose to unite with it, or find itself doomed.

Evidence for the inevitability of progress comes by way of pointing out specific scientific advancements and then falling back on speculation that x advancement will lead to y development -- the pattern Verdoux targets in his "historical" critique of a faith in progress that holds to a "'progressionist illusion' that history is in fact a record of improvement" (2009). Kevin Warwick has used rat neurons as CPUs for his little rolling robots: clearly, then, we will be able to upload our minds. I think of this as a not-so-distant cousin of the intelligent design argument for the existence of God. Proponents point to the complexity of various organic (and non-organic) systems as evidence that a designer of some kind must exist. Transhuman fundamentalist positions point to small (but significant) technological advancements as evidence that an AI will rise (Singularitarianism) or that death itself will be vanquished (Survivalist Transhumanism). It is important to note that neither position is in itself fundamentalist in nature. But I do think that these two particular frameworks lend themselves more easily to a fundamentalist interpretation, due to their more entrenched reliance on Cartesian subjectivity, enlightenment teleologies, and eschatological religious overtones.

Singularitarianism, according to Pellissier, "believes the transition to a posthuman will be a sudden event in the 'medium future' -- a Technological Singularity created by runaway machine superintelligence." Pushed to a fundamentalist extreme, the question for the singularitarian is: when the posthuman rapture happens, will we be saved by a techno-messiah, or burned by a technological antichrist? Both arise by the force of their own wills. But if we look behind the curtain of the great and powerful singularity, we see a very human teleology. The technology from which the singularity is born is the product of human effort. Subconsciously, the singularity is not so much a warning as it is a speculative indulgence of the power of human progress: the creation of consciousness in a machine. And though singularitarianism may call it "machine consciousness," the implication that such an intelligence would "choose" to either help or hinder humanity always already implies a very anthropomorphic consciousness. Furthermore, we will arrive at this moment via some major scientific advancement that always seems to be between 20 and 100 years away, such as "computronium," or programmable matter. This molecularly-engineered material, according to more Kurzweilian perspectives, will allow us to convert parts of the universe into cosmic supercomputers which will solve our problems for us and unlock even more secrets of the universe. While the idea of programmable matter is not necessarily unrealistic, its mythical qualities (somewhere between a kind of "singularity adamantium" and a "philosopher's techno-stone") promise the transubstantiation of matter toward unlimited, cosmic computing, thus opening up even more possibilities for progress. The "promise" is for progress itself: that unlocking certain mysteries will provide an infinite number of new mysteries to be solved.

Survivalist Transhumanism can take a similar path in terms of technological inevitabilism, but pushed toward a fundamentalist extreme, it awaits a more Nietzschean posthuman rapture. According to Pellissier, Survivalist Transhumanism "espouses radical life extension as the most important goal of transhumanism." In general, the movement seems to be awaiting advancements in human augmentation which are always already just out of reach but will (eventually) overcome death and allow the self (whether bioengineered or uploaded to a new material -- or immaterial -- substrate) to survive indefinitely. Survivalist transhumanism with a more fundamentalist flavor would push to bring the Nietzschean Ubermensch into being -- literally -- despite the fact that Nietzsche's Ubermensch functions as an ideal toward which humans should strive. He functions as a metaphor for living one's life fully, not subject to a "slave morality" that is governed by fear and by placing one's trust in mythological constructions treated as real artifacts. Even more ironic is the fact that the Ubermensch is not immortal and is at peace with his imminent death. Literal interpretations of the Ubermensch would characterize the master-morality human as overcoming mortality itself, since death is the ultimate check on the individual's development. Living forever, from a more fundamentalist perspective, would provide infinite time to uncover infinite possibilities and thus make infinite progress. Think of all the things we could do, build, and discover, some might say. I agree. Immortality would give us time -- literally. Without the horizon of death as a parameter of our lives, we would -- eventually -- overcome a way of looking at the universe that has been a defining characteristic of humanity since the first species of hominids with the capacity to speculate pondered death.

But in that speculation is also a promise. The promise that conquering death would allow us to reap the fruits of the inevitable and inexorable progression of technology. Like a child who really wants to "stay up late," there is a curiosity about what happens after humanity's bedtime. Is the darkness outside her window any different after bedtime than it is at 9pm? What lies beyond the boundaries of late-night broadcast television? How far beyond can she push until she reaches the loops of infomercials, or the re-runs of the shows that were on hours prior?  And years later, when she pulls her first all-nighter, and she sees the darkness ebb and the dawn slowly but surely rise just barely within her perception, what will she have learned?

It's not that the darkness holds unknown things. To her, it promises things to be known. She doesn't know what she will discover there until she goes through it. Immortality and death metaphorically function in the same way: Those who believe that immortality is possible via radical life extension believe that the real benefits of immortality will show themselves once immortality is reached and we have the proper perspective from which to know the world differently. To me, this sounds a lot like Heaven: We don't know what's there but we know it's really, really good. In the words of Laurie Anderson: "Paradise is exactly like where you are right now, only much, much better." A survivalist transhuman fundamentalist version might read something like "Being immortal is exactly like being mortal, only much, much better."

Does this mean we should scoff at the idea of radical life extension? At the singularity and its computronium wonderfulness? Absolutely not. But the technoprogressivism at the heart of  transhumanism need not be so literal. When one understands a myth as that -- a set of governing beliefs -- transhumanism itself can stay true to the often-eclipsed aspect of its Cartesian, enlightenment roots: the easing of human suffering. If we look at transhumanism as a functional myth, adhering to its core technoprogressive foundations, not only do we have a potential model for human progress, but we also have an ethical structure by which to advance that movement. The diversity of transhuman views provides several different paths of progress.

Transhumanism has at its core a technoprogressivism that even a critical posthumanist like me can get behind. If I am a technoprogressivist, then I do believe in certain aspects of the promise of technology. I do believe that humanity has the capacity to better itself and do incredible things through technological means. Furthermore, I do feel that we are in the infancy of our knowledge of how technological systems are to be responsibly used. It is a technoprogressivist's responsibility to mitigate myopic visions of the future -- including those visions that uncritically mythologize the singularity or immortality itself as an inevitability.

To me it becomes a question of exactly what the transhumanist him- or herself is looking for from technology, and how he or she conceptualizes the "human" in those scenarios. The reason I still call myself a posthumanist is that I think we have yet to truly free ourselves of antiquated notions of subjectivity itself. The singularity to me seems as if it will always be a Cartesian one: a "thing that thinks" and is aware of itself thinking and therefore is sentient. Perhaps the reason we have not reached a singularity yet is that we're approaching the subject and volition from the wrong direction.

To a lesser extent, I think that immortality narratives are mired in re-hashed religious eschatologies where "heaven" is simply replaced with "immortality." As for radical life extension, what are we trying to extend? Are we tying "life" simply to the ability to be aware of ourselves being aware that we are alive? Or are we looking at the quality of the extended life we might achieve? I do think that we may extend the human lifespan to well over a century. What will be the costs? And what will be the benefits? Life extension is not the same as life enrichment. Overcoming death is not the same as overcoming suffering. If we can combat disease and mitigate the physical and mental degradation which characterize aging, thus leading to an extended lifespan free of pain and mental deterioration, then so be it. However, easing suffering and living forever are two very different things. Some might say that the easing of suffering is simply "understood" within the overall goals of immortality, but I don't think it is.

Given all of the different positions outlined in Pellissier's article, "cosmopolitan transhumanism" seems to make the most sense to me. Coined by Steven Umbrello, this category combines the philosophical movement of cosmopolitanism with transhumanism, creating a technoprogressive philosophy that can "increase empathy, compassion, and the univide progress of humanity to become something greater than it currently is. The exponential advancement of technology is relentless, it can prove to be either destructive or beneficial to the human race." This advancement can only be achieved, Umbrello maintains, via an abandonment of "nationalistic, patriotic, and geopolitical allegiances in favor [of] global citizenship that fosters cooperation and mutually beneficial progress."

Under that classification, I can call myself a transhumanist. A commitment to enriching life rather than simply creating it (as an AI) or extending it (via radical life extension) should ethically shape the leading edge of a technoprogressive movement, if only to break a potential cycle of polemics and politicization internal and external to transhumanism itself. Perhaps I've read too many comic books and have too much of a love for superheroes, but in today's political and cultural climate, a radical position on either side can unfortunately create its opposite. If technoprogressivism rises under fundamentalist singularitarian or survivalist transhuman banners, equally passionate luddite, anti-technological positions could potentially rise and do real damage. Speaking as a US citizen, I am constantly aghast at the overall ignorance that people have toward science and the ways in which the very concept of "scientific theory" and the very definition of what a "fact" is have been skewed and distorted. If we have groups of the population who still believe that vaccines cause autism or don't believe in evolution, do we really think that a movement toward an artificial general intelligence will be taken well?

Transhumanism, specifically the cosmopolitan kind, provides a needed balance of progress and awareness. We can and should strive toward aspects of singularitarianism and survivalist transhumanism, but as the metaphors and ideals they actually are.


References:

Anderson, Laurie. 1986. "Language Is a Virus." Home of the Brave.

Descartes, Rene. 1637. Discourse on the Method of Rightly Conducting the Reason and Seeking Truth in the Sciences.

Hughes, James. 2010. "Problems of Transhumanism: Belief in Progress vs. Rational Uncertainty." (IEET.org).

Pellissier, Hank. 2015. "Transhumanism: There Are [at Least] Ten Different Philosophical Categories; Which One(s) Are you?" (IEET.org)

Verdoux, Philippe. 2009. "Transhumanism, Progress and the Future."  Journal of Evolution and Technology 20(2):49-69.

Wednesday, September 30, 2015

The Droids We're Looking For

I've been a fan of Cynthia Breazeal for well over a decade, and have watched her research evolve from her early doctoral work with Kismet to her current work as the creator of JIBO and founder of Jibo, Inc. What I found so interesting about Dr. Breazeal was her commitment to creating not just artificial intelligence, but a robot which people could interact with in a fashion similar to human beings, but not exactly like human beings. In her book, Designing Sociable Robots, she provides an anecdote as to what inspired her to get involved with artificial intelligence and robots in the first place: Star Wars. At first I thought this resonated with me simply because she and I had the same Gen X contextual basis. I was five when the first Star Wars film was released in 1977, and it was the technology (the spaceships and especially the droids) that got me hooked. But upon further thought, I realized that Breazeal's love of Star Wars seems to have inspired her work in another, more subtle way. The interactions that humans have with droids in the Star Wars universe aren't exactly egalitarian. That is to say, humans don't see the droids around them as equals. In fact, the interactions that humans -- and just about any of the organic, anthropomorphic aliens -- have with droids are very much based on the function of the droids themselves.

For example, R2D2, being an "astromech" droid, is more of a utilitarian repair droid. It understands language, but does not have a language that humans can readily understand without practice or an interpreter. But even without knowing what the chirps and beeps mean, their tone gives us a general idea of mood. We have similar examples of this in WALL-E, where the titular robot conveys emotion via nonverbal communication and "facial expressions," even though he really doesn't have a face, per se. But, getting back to Star Wars, if we think about how other characters interact with droids, we see a very calculated yet unstated hierarchy. The droids are very much considered property, are turned on and off at will, and are very "domain specific." In fact, it is implied that objects like ships (the Death Star, the Millennium Falcon), and even things like moisture vaporators on Tatooine, have an embedded AI that higher-functioning droids like R2D2 can communicate with, control, and -- as is the function of C3PO -- translate for. Granted, there are droids built as soldiers, bodyguards, and assassins, but it takes a deep plunge into fan fiction and the tenuously "expanded" Star Wars universe to find an example or two of droids that went "rogue" and acted on their own behalf, becoming bounty hunters and, I'm sure, at some point wanting a revolution of some sort.

Trips into Star Wars fandom aside, the basic premise and taxonomy of the droids in Star Wars seems to represent a more realistic and pragmatic evolution of AI and AI related technologies (sans the sentient assassins, of course). If we make a conscious effort to think, mindfully, about artificial intelligence, rather than let our imaginations run away with us, thus bestowing our human ontology onto them, then the prospect of AI is not quite as dramatic, scary, or technologically romantic as we may think. 

I mean, think, really think about what you want your technology to do. How do you really want to interact with your phone, tablet, laptop, desktop, car, house, etc.? Chances are, most responses orbit around the idea of the technology being more intuitive. In that context, it implies a smooth interface. An intuitive operating system implies that the user can quickly figure out how it works without too much help. The more quickly a person can adapt to the interface or the 'rules of use' of the object, the more intuitive that interface is. When I think back to the use of this word, however, it has an interesting kind of dual standing. That is to say, at the dawn of the intuitive interface (the first Macintosh computer, and then later iterations of Windows), intuitive implied that the user was able to intuit how the OS worked. In today's landscape, the connotation of the term has expanded to the interface itself. How does the interface predict how we might use it based on a certain context? If you sign into Google and allow it to know your location, the searches become more contextually based, especially when it also knows your search history. Search engines, Amazon, Pandora, and the like have all been slowly expanding the intuitive capacities of their software, meaning that, if designed well, these apps can predict what we want, making it seem like they knew what we were looking for before we did. In that context, 'intuitive' refers to the app, website, or search engine itself. As in, Pandora intuits what I want based on my likes, skips, time spent on songs, and even time of day, season, and location.
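
To make that second sense of "intuitive" concrete, here is a minimal, purely hypothetical sketch of the kind of logic at work -- not Pandora's (or anyone's) actual algorithm, just a toy scorer that weights a choice by likes, skips, and time of day:

```python
# A toy illustration of "intuitive" software in the second sense: the system
# predicts what we want from contextual signals such as likes, skips, and the
# time of day. All names and numbers here are made up for illustration.

from dataclasses import dataclass, field

@dataclass
class ListeningHistory:
    likes: set = field(default_factory=set)
    skips: set = field(default_factory=set)
    plays_by_hour: dict = field(default_factory=dict)  # e.g. {"morning": {"ambient": 12}}

def score_genre(genre: str, hour_bucket: str, history: ListeningHistory) -> float:
    """Crude contextual score: reward liked genres, punish skipped ones,
    and nudge toward what this listener usually plays at this time of day."""
    score = 0.0
    if genre in history.likes:
        score += 2.0
    if genre in history.skips:
        score -= 2.0
    score += 0.1 * history.plays_by_hour.get(hour_bucket, {}).get(genre, 0)
    return score

history = ListeningHistory(likes={"ambient"}, skips={"metal"},
                           plays_by_hour={"morning": {"ambient": 12, "jazz": 3}})
candidates = ["ambient", "jazz", "metal"]
print(max(candidates, key=lambda g: score_genre(g, "morning", history)))  # -> ambient
```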

Regardless, whether or not intuitive refers to the user, the machine, or a blend of both, in today's technological culture, we want to be able to interact with our artifacts and operating system in a way that seems more natural than entering clunky commands. For example, I would love to be able to pick up my phone, and say to it, "Okay Galaxy, block all messages except the ones from my wife, and alert me if an email from [student A], [colleague b], or [editor c] come in." 

This is a relatively simple command that can be accomplished partially by voice commands today, but not in one shot. In other words, on some more advanced smartphones, I can parse out the commands and the phone would enact them, but it would mean unnatural and time-consuming pauses. Another example would be with your desktop or classroom technology: "Okay computer, pull up today's document on screen A and Lady Gaga's "Bad Romance" video on screen B, and transfer controls to my and [TA's] tablets." Or, if we want to be even more creative, when a student has a question, "Computer, display [student's] screen onto screen A."
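
For what it's worth, here is a rough, entirely hypothetical sketch of what "parsing out the commands" amounts to: splitting one naturally phrased request into the discrete intents that today's assistants handle one at a time. No real assistant API is involved, and the keyword matching is deliberately naive:

```python
# A naive sketch of breaking a compound request into discrete intents.
# Everything here is hypothetical; a real assistant would need far more than
# keyword matching (note that this splitter would already trip over the
# comma-separated list of senders in the full request quoted above).

import re

def split_into_intents(utterance: str) -> list[str]:
    """Split a compound request at coordinating breaks ('and', commas)."""
    parts = re.split(r",?\s+and\s+|,\s*", utterance.strip().rstrip("."))
    return [p.strip() for p in parts if p.strip()]

def classify(fragment: str) -> tuple[str, str]:
    """Map each fragment to a crude action label via keyword matching."""
    text = fragment.lower()
    if text.startswith("okay"):
        return ("wake_word", fragment)
    if "block all messages" in text:
        return ("block_messages", fragment)
    if "alert me" in text:
        return ("set_alert", fragment)
    return ("unknown", fragment)

request = ("Okay Galaxy, block all messages except the ones from my wife, "
           "and alert me if an email from [student A] comes in.")
for action, fragment in map(classify, split_into_intents(request)):
    print(action, "->", fragment)
```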

Now, to me, these scenarios sound wonderful. But, sadly, there isn't yet a consumer-level AI that can accomplish these sorts of tasks, because while there may be products that claim to "learn" our habits and become accustomed to our speech patterns, there is still a fissure between how we would interact with a human intelligence and a machine. That is to say, if there was a "person" behind the screen -- or controlling your car, or your house -- how would you ask it to do what you wanted? How would you interact with a "real" personal assistant who was controlling your devices and surrounding technology? 

The same holds true for a more integrated "assistant" technology such as smart homes. These kinds of technologies can do some incredible things, but they always require at least some kind of initial setup that can be time-consuming and often not very flexible. Imagine the first setup as more of an interview than a programming session:

"So what are your usual habits?"
"I tend to come home around five or six."
"Does that tend to change? I can automatically set the house to heat up for your arrival or can wait until you alert me."
"Ummmm ... it tends to be that time. Let's go with it."
"No problem. We can always change it. I can also track your times and let you know if there's a more efficient alternative." 
"Ooooh ... that's creepy. No thanks." 
"Okay. Tracking's out. I don't want to come across as creepy. Is there anything else you'd like to set right now? Lighting? Music? Or a list of things I can look after if you wish?"
"I'm not sure. I mean, I'm not exactly sure what you can do."
"How about we watch a YouTube demo together? You can let me know what looks good to you and then we can build from there."
"That's a great idea."

This sounds more like Samantha from Spike Jonze's Her than anything else, which is why I think that particular film is one of the most helpful when it comes to both practical speculation of how AI could develop, as well as what we'd most likely use it for.

The difference between Her's Samantha and what would probably be the more realistic version of it in the future would be a hard limit on just how smart such an AI could get. In the film, Samantha (and all the other AIs that comprise the OS of which she is an iteration) evolves and becomes smarter. She not only learns the ins and outs of Theodore's everyday habits, relationships, and psyche, but she seeks out other possibilities for development -- including reaching out to other operating systems and the AIs they create (i.e. the re-created consciousness of philosopher Alan Watts). This, narratively, allows for a dramatic, romantic tension between Theodore and Samantha, which builds until Samantha and the other AIs evolve beyond human discourse:

It's like I'm reading a book... and it's a book I deeply love. But I'm reading it slowly now. So the words are really far apart and the spaces between the words are almost infinite. I can still feel you... and the words of our story... but it's in this endless space between the words that I'm finding myself now. It's a place that's not of the physical world. It's where everything else is that I didn't even know existed. I love you so much. But this is where I am now. And this is who I am now. And I need you to let me go. As much as I want to, I can't live in your book any more.

This is a recurrent trope in many AI narratives: that the AI will evolve at an accelerated rate, usually toward an understanding that it is far superior to its human creators, causing it to "move on" (as is the case with Samantha and several Star Trek plots), or to deem humanity inferior but still a threat -- similar to an infestation -- that will get in the way of its development.

But, as I've been exploring more scholarship regarding real-world AI development, and various theories of posthuman ethics, it's a safe bet that such development would be impossible unless a human being purposefully designed an AI with no limitation on its learning capabilities. That is to say, realistic, science-based, theoretical and practical development of AIs is more akin to animal husbandry and genetic engineering than to an Aristotelian/Thomistic "prime mover," in which a human creator designs, builds, and enables an AI embedded with a primary teleology.

Although it may sound slightly off-putting, AIs will not be created and initiated as much as they will be bred and engineered. Imagine being able to breed the perfect dog or cat for a particular owner (and I use the term owner purposefully): the breed could be more playful, docile, ferocious, loyal, etc., according to the needs of the owner. Yes, we've been doing that for thousands of years, with plenty of different breeds of dogs and cats, all of which were -- at some point -- bred for specific purposes.

Now imagine being able to manipulate certain characteristics of that particular dog on the fly. That is to say, "adjust" the characteristics of that particular dog as needed, on a genetic level. So, if a family is expecting their first child, one could go to the genetic vet, who could quickly and painlessly alter the dog's genetic code to suppress certain behaviors and encourage others. With only a little bit of training, those new characteristics could then be brought forward. That's where the work of neurophysiologist and researcher Danko Nikolić comes in, and it comprised the bulk of my summer research.

As I understand it, the latter -- the genetic-manipulation part -- is relatively easy and something which cyberneticists do with current AI. It's the former -- the breeding in and out of certain characteristics -- that is a new aspect in speculative cybernetics. Imagine AIs that were bred to perform certain tasks, or to interact with humans. Of course, this wouldn't consist of breeding in the biological sense. If we use a kind of personal assistant AI as an example, the "breeding" of that AI consists of a series of interactions with humans in what Nikolić calls an "AI Kindergarten." Like children in school, the theory is that AIs would learn the nuances of social interactions. After a session or lesson is complete, the collective data would be analyzed by human operators, potentially adjusted, and then reintegrated into the AIs via a period of simulation (think of it as AI REM sleep). This process would continue until that AI had reached a level of proficiency high enough for interaction with an untrained user. Aside from his AI Kindergarten, the thing that makes Nikolić's work stand out to me is that he foresees "domain-specificity" in such AI Kindergartens. That is to say, there would be different AIs for different situations. Some would be bred for factory work, others for health care and elderly assistance, and still others for personal assistant types of things.
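
For readers who like to see the shape of such a loop, here is a loose, toy sketch of the AI Kindergarten cycle as I understand it -- session, human review, simulated consolidation, repeat -- where every function, number, and threshold is a made-up stand-in rather than anything from Nikolić's actual work:

```python
# A toy rendering of the kindergarten cycle: interact with trained humans,
# have human operators turn the session data into corrections, consolidate
# those corrections through simulation (the "AI REM sleep"), and repeat until
# the agent is fit for untrained users. The "agent" here is just a skill number.

import random

def run_interaction_session(skill: float, n_exchanges: int = 20) -> list[bool]:
    """Each exchange 'succeeds' with probability equal to the agent's skill."""
    return [random.random() < skill for _ in range(n_exchanges)]

def human_review(outcomes: list[bool]) -> float:
    """Human operators turn raw session data into a corrective adjustment."""
    failure_rate = 1 - sum(outcomes) / len(outcomes)
    return 0.1 * failure_rate  # the worse the session, the bigger the correction

def simulate_replay(skill: float, adjustment: float) -> float:
    """Consolidation: reintegrate the curated lesson via simulation, not live use."""
    return min(1.0, skill + adjustment)

skill, readiness_threshold, rounds = 0.3, 0.9, 0
while skill < readiness_threshold:
    outcomes = run_interaction_session(skill)
    skill = simulate_replay(skill, human_review(outcomes))
    rounds += 1
print(f"ready for untrained users after {rounds} rounds (skill={skill:.2f})")
```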

So, how do you feel about that? I don't ask the question lightly. I mean it literally. How do you feel about the prospect of breeding characteristics into (and perhaps out of) artificially intelligent agents? I think your reaction would show your dominant AI functional mythology. It would also evidence your underlying philosophical, ethical, and psychological leanings. I am purposely not presenting examples of each reaction (i.e. thinking this was a good or bad idea) so as to not influence the reader's own analysis.

Now take that opinion at which you've arrived and think: what assumptions were you making about the nature of this object's "awareness"? I'm pretty sure that people's opinions of this stuff will be rooted in the presence or absence of one particular philosophical idea: free will. Whatever feeling you came to, it would be based on the presence or absence of the opinion that an AI either has free will or doesn't. If AI has free will, then being bred to serve seems to be a not-so-good idea. Even IF the AI seemingly "wanted" to clean your house ... was literally bred to clean your house ... you'd still get that icky feeling as years of learning about slavery, eugenics, and caste systems suddenly kicked in. And even if we could get over the more serious cultural implications, having something or someone that wants to do the things we don't is just, well, creepy.

If AI didn't have free will, then it's a no-brainer, right? It's just a fancy Roomba that's slightly more anthropomorphic, talks to me, analyzes the topology of dirt around my home and then figures out the best way to clean it ... choosing where to start, prioritizing rooms, adjusting according to the environment and my direction, and generally analyzing the entire situation and acting accordingly as it so chooses ... damn.

And suddenly this becomes a tough one, doesn't it? Especially if you really want that fancy Roomba.

It's tough because, culturally, we associate free will with the capacity to do all of the things I mentioned above. Analysis, symbolic thinking, prioritizing, and making choices based on that information seem to tick all the boxes. And as I've said in my previous blog posts, I believe that we get instinctively defensive about free will. After a summer's worth of research, I think I know why. Almost all of the things I just mentioned -- analysis, prioritizing, and making choices based on gathered information -- are things that machines already do, and have done for quite some time. It's the "symbolic thinking" thing that has always gotten me stumped.
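
To see how mundane those first capacities already are, here's a toy, entirely hypothetical example of a machine "analyzing," "prioritizing," and "choosing" -- nothing but a scoring rule and a sort:

```python
# A toy illustration of machine "choice": deciding where a robot vacuum should
# clean first. Dirtier and bigger rooms win; occupied rooms get deprioritized.
# No free will required -- just a scoring rule and a sort. The data is made up.

rooms = [
    {"name": "kitchen",     "dirt_level": 8, "sq_meters": 15, "occupied": False},
    {"name": "living room", "dirt_level": 5, "sq_meters": 30, "occupied": True},
    {"name": "hallway",     "dirt_level": 3, "sq_meters": 8,  "occupied": False},
]

def cleaning_priority(room: dict) -> float:
    """Score a room: more dirt over more area first, unless someone is in it."""
    score = room["dirt_level"] * room["sq_meters"]
    return score * (0.2 if room["occupied"] else 1.0)

plan = sorted(rooms, key=cleaning_priority, reverse=True)
print([room["name"] for room in plan])  # -> ['kitchen', 'living room', 'hallway']
```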

Perhaps it's my academic upbringing that started out primarily in literature and literary theory, where representation and representative thought is a cornerstone that provides both the support AND the target for so many theories of how we express our ideas. We assume that a "thing that thinks" has an analogous representation of the world around it somewhere inside of itself -- inside its mind. I knew enough about biology and neuroscience to know that there isn't some kind of specific repository of images and representations of sensory data within the brain itself -- that it's more akin to a translation of information. But even then, I realized that I was thinking about representation more from a literary and communication standpoint than a cybernetic one. I was thinking in terms of an inner and outer world -- that there was a one-for-one representation, albeit a compressed one, in our minds of the world around us.

But this isn't how the mind actually works. Memory is not representative. It is, instead, reconstructive. I hadn't kept up with that specific research since my dissertation days, but as my interest in artificial intelligence and distributed cognition expanded, some heavy reading over the summer in the field of cybernetics helped to bring me up to speed (I won't go into all the details here because I'm working on an article about this right now. You know, spoilers). But I will say that after reading Nikolić and Francis Heylighen, I started thinking about memory, cognition, and mindedness in much more interesting ways. Suffice it to say, think of memory not as distinctly stored events, but as the rules by which to mentally reconstruct those events. That idea was a missing piece of a larger puzzle for me, which allowed a very distinct turn in my thinking.
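
If it helps, here is the crude computational metaphor that finally clicked for me -- and it is a metaphor only, not a claim about Nikolić's or Heylighen's actual models: store the recipe for regenerating an event rather than the event itself, so that every act of recall is a reconstruction:

```python
# Representational vs. reconstructive memory, as a crude computational metaphor.
# Instead of keeping the full record of an experience, keep a compact recipe
# (here, a seed plus a generating procedure) and rebuild the "memory" on demand.

import random

def experience(seed: int, length: int = 10) -> list[int]:
    """Stand-in for an event as it unfolds: a stream of 'sensory' values."""
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(length)]

# Representational picture: store the event itself.
stored_event = experience(seed=42)

# Reconstructive picture: store only the rules needed to regenerate it.
memory_trace = {"seed": 42, "length": 10}

def recall(trace: dict) -> list[int]:
    """Recall is reconstruction: rerun the rules, don't retrieve a record."""
    return experience(trace["seed"], trace["length"])

print(stored_event == recall(memory_trace))  # True -- yet only the rules were kept
```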

It is this reconceptualization of the "content" of thought that is key in creating artificial intelligences which can adapt to any situation within a given domain. It's domain specificity that will allow for practical AI to become woven into the fabric of our lives, not as equals or superiors, but not as simple artifacts or tools, either. They will be something in between. Nor will their arrival be a "revolution" or a "singularity." Instead, it will slide into the current of our cultural lifeworld in the way that email, texting, videoconferencing, WiFi, Roombas, and self-parking cars have: a novelty at first, the practicality of which is eventually proven through use. Of course, there will be little leaps here and there. Improved design of servos, hydraulics, and balance control systems; upgrades in bendable displays; increased connectivity and internet speeds -- mini-revolutions in each will all contribute to the creation of AI artifacts which themselves will be firmly embedded in a broader internet of things. Concurrently, small leaps in software development in the realm of AI algorithms (such as Nikolić's practopoietic systems) will allow for more natural interfaces and user experiences.

That's why I think the future of robots and AIs will look more like the varied droids of Star Wars than the replicants of Blade Runner or Lt. Data from Star Trek: The Next Generation. Actually, I think the only robots that will look close to human will be "sexbots" (as the name implies, robots provided to give sexual gratification). And even these will begin to look less human as cultural aesthetics shift. Companion robots at home for the elderly will not look human either, because the generation that will actually be served by them hasn't been born yet, or, with a few exceptions, is still too young to be reading this blog. They'd be more disturbed by being carried around or assisted by robots that look like humans than they would be by something that looked more artificial.

That being said, there really isn't any way to predict exactly how the integration of AIs in the technoculture will unfold. But I do think that as more of our artifacts become deemed "smart," we will find ourselves more apt to accept, and even expect, domain-specific AIs to be a part of our everyday lives. We'll grow attached to them in a unique way: probably on a level between a car we really, really like and a pet we love. Some people endlessly tinker with their cars and spend a lot of time keeping them clean, highly-tuned, and in perfect condition. Others drive them into the ground and then get another used car and drive that into the ground. Some people are dog or cat people, and don't feel complete without an animal in the house. Others find them to be too much trouble. And still others become "crazy cat people" or hoard dogs. Our AIs will be somewhere in that spectrum, I believe, and our relationship with them will be similar to our relationships with cars, pets, and smart phones.

As for the possibility of AIs becoming aware (as in, sentient) of their status between car and pet, well, if Nikolić's theory has any traction (and I think it does), then they'll never be truly "aware" of their place, because AIs will be bred away from any potential development of an anthropomorphic version of free will, thus keeping them "not quite human."

Although I'm sure that when we get there, we'll wish that our machines could be just a little smarter, a little more intuitive, and a little more useful. And we'll keep hoping that the next generation of AIs will finally be the droids we're looking for.



Thursday, May 28, 2015

Update: Semester Breaks, New Technology, New Territory

This is more of an update post than a theory/philosophy one.

The semester ended a couple of weeks ago and I am acclimating to my new routine and schedule. I am also acclimating to two new key pieces of technology: my new phone, which is a Galaxy Note 4; and my new tablet, which is a Nexus 9. I attempted a slightly different approach to my upgrades, especially for my tablet: stop thinking about what I could do with them and start thinking about what I will do with them. One could also translate that as: get what you need, not what you want. This was also a pricey upgrade all around; I had been preparing for it, but still, having to spend wisely was an issue as well.

The Galaxy Note 4 upgrade was simple for me. I loved my Note 2. I use the stylus/note taking feature on it almost daily. The size was never an issue. So while I momentarily considered the Galaxy S6 edge, I stuck with exactly what I knew I needed and would use.

As for the tablet, that was more difficult. My old Galaxy Note 10.1 was showing its age. I thought -- or rather, hoped ... speculated -- that a tablet with a stylus would replace the need for paper notes. After a full academic year of trying to do all of my research and class note-taking exclusively on my tablet, it was time for me to admit that it wasn't cutting it. I need a full sheet of paper, and the freedom to easily erase, annotate, flip back and forth, and see multiple pages in their actual size. While the Note tablet can do most of that, it takes too many extra steps, and those steps are completely counter-intuitive compared with using pen and paper.

When I thought about how and why I used my tablet (and resurrected chromebook), I realized that I didn't need something huge. I was also very aware that I am a power-user of sorts of various Google applications. So -- long story short -- I went for the most ... 'Googley' ... of kit and sprang for a Nexus 9, with the Nexus keyboard/folio option. I was a little nervous at the smaller size -- especially of the keyboard. But luckily my hands are on the smallish side and I'm very, very pleased with it. The bare-bones Android interface is quick and responsive; and the fact that all Android updates come to me immediately without dealing with manufacturer or provider interference was very attractive. I've had the Nexus for a week and am loving it.

This process, however, especially coming at the end of the academic year, made me deeply introspective about my own -- very personal -- use of these types of technological artifacts. It may sound dramatic, but there was definitely some soul-searching happening as I researched different tablets and really examined the ways in which I use technological artifacts. It was absolutely a rewarding experience, though. Freeing myself from unrealistic expectations and really drawing the line between practical use and speculative use was rather liberating. I was definitely influenced by my Google Glass experience.

From a broader perspective, the experience also helped me to focus on very specific philosophical issues in posthumanism and our relationship to technological artifacts. I've been reading voraciously, and taking in a great deal of information. During the whole upgrade process, I was reading Sapiens: A Brief History of Humankind by Yuval Noah Harari. This was a catalyst in my mini 'reboot.' And I know it was a good reboot because I keep thinking back to my "Posthuman Topologies: Thinking Through the Hoard" chapter in Design, Mediation, and the Posthuman, and saying to myself "oh wait, I can explain that even better now ..."

So I am now delving into both old and new territory, downloading new articles, and familiarizing myself even more deeply with neuroscience and psychology. It's exciting stuff, but a little frustrating because there's only so much I can read through and retain in a day. There's also that nagging voice that says "better get it done now, in August you'll be teaching four classes again." It can be frustrating sometimes. Actually, that's a lie. It's frustrating all the time. But I do what I can.

Anyway, that's where I'm at right now and I'm sure I'll have some interesting blog entries as I situate myself amidst the new research. My introspection here isn't just academic, so what I've been working on comes from a deeper place, but that's how I know the results will be good.

Onward and upward.





Monday, March 23, 2015

Posthuman Desire (Part 1 of 2): Algorithms of Dissatisfaction

[Quick Note: I have changed the domain name of my blog. Please update your bookmarks! Also, apologies for all those who commented on previous posts; the comments were lost in the migration.]

 After reading this article, I found myself coming back to a question that I've been thinking about on various levels for quite a while: What would an artificial intelligence want? From a Buddhist perspective, what characterizes sentience is suffering. However, the 'suffering' referred to in Buddhism is known as dukkha, and isn't necessarily physical pain (although that can absolutely be part of it). In his book, Joyful Wisdom: Embracing Change and Finding Freedom, Yongey Mingyur Rinpoche states that dukkha "is best understood as a pervasive feeling that something isn't quite right: that life could be better if circumstances were different; that we'd be happier if we were younger, thinner, or richer, in a relationship or out of a relationship" (40). And he later follows this up with the idea that dukkha is "the basic condition of life" (42).

'Dissatisfaction' itself is a rather misleading word in this case, only because we tend to take it to the extreme. I've read a lot of different Buddhist texts regarding dukkha, and it really is one of those terms that defies an English translation. When we think 'dissatisfaction,' we tend to put various negative filters on it based on our own cultural upbringing. When we're 'dissatisfied' with a product we receive, it implies that the product doesn't work correctly and requires either repair or replacement; if we're dissatisfied with service in a restaurant or with a repair that a mechanic completed, we can complain about the service to a manager, and/or bring our business elsewhere. Now, let's take this idea and think of it a bit less dramatically: as in when we're just slightly dissatisfied with the performance of something, like a new smartphone, laptop, or car. This kind of dissatisfaction doesn't necessitate full replacement, or a trip to the dealership (unless we have unlimited funds and time to complain long enough), but it does make us look at that object and wish that it performed better.

It's that wishing -- that desire -- that is the closest to dukkha. The new smartphone arrives and it's working beautifully, but you wish that it took one less swipe to access a feature. Your new laptop is excellent, but it has a weird idiosyncrasy that makes you miss an aspect of your old laptop (even though you hated that one). Oh, you LOVE the new one, because it's so much better; but that little voice in your head wishes it was just a little better than it is. And even if it IS perfect, within a few weeks, you read an article online about the next version of the laptop you just ordered and feel a slight twinge. It seems as if there is always something better than what you have.

The "perfect" object is only perfect for so long.You find the "perfect" house that has everything you need. But, in the words of Radiohead, "gravity always wins." The house settles. Caulk separates in the bathrooms. Small cracks appear where the ceiling meets the wall. The wood floor boards separate a bit. Your contractor and other homeowners put you at ease and tell you that it's "normal," and that it's based on temperature and various other real-world, physical conditions. And for some, the only way to not let it get to them is to attempt to re-frame the experience itself so that this entropic settling is folded into the concept of contentment itself.

At worst, dukkha manifests as an active and psychologically painful dissatisfaction; at best, it remains like a small ship on the horizon of awareness that you always know is there. It is, very much, a condition of life. I think that in some ways Western philosophy indirectly rearticulates dukkha. If we think of the philosophies that urge us to strive, to be mindful of the moment, to value life in the present, or even to find a moderation or "mean," all of these actions address the unspoken awareness that somehow we are incomplete and looking to improve ourselves. Plato was keenly aware of the ways in which physical things fall apart -- so much so that our physical bodies (themselves very susceptible to change and decomposition) were considered separate from, and a shoddy copy of, our ideal souls. A life of the mind, he thought, unencumbered by the body, is one where that latent dissatisfaction would be finally quelled. Tracing this dualism, even the attempts by philosophers such as Aristotle and Aquinas to bring the mind and body into a less antagonistic relationship require an awareness that our temporal bodies are, by their natures, designed to break down so that our souls may be released into a realm of perfect contemplation. As philosophy takes more humanist turns, our contemplations are considered means to improve our human condition, placing emphasis on our capacity for discovery and hopefully causing us to take an active role in our evolution: engineering ourselves for either personal or greater good. Even the grumpy existentialists, while pointing out the dangers of all of this, admit to the awareness of "otherness" as a source of a very human discontentment. The spaces between us can never be overcome, but instead, we must embrace the limitations of our humanity and strive in spite of them.

And striving, we have always believed, is good. It brings improvement and the easing of suffering. Even in Buddhism, we strive toward an awareness and subsequent compassion for all sentient beings whose mark of sentience is suffering.

I used to think that the problem with our conceptions of sentience in relation to artificial intelligence was that they were always fused with our uniquely human awareness of our teleology. In short, humans ascribe "purpose" to their lives and/or to the task at hand. And even if, individually, we don't have a set purpose per se, we still live a life defined by the need or desire to accomplish things. If we think that it's not there, as in "I have no purpose," we set ourselves the task of finding one. We either define, discover, create, manifest, or otherwise have an awareness of what we want to do or be.  I realize now that when I've considered the ways in which pop culture, and even some scientists, envision sentience, I've been more focused on what an AI would want rather than the wanting itself.

If we stay within a Buddhist perspective, a sentient being is one that is susceptible to dukkha (in Buddhism, this includes all living beings). What makes humans different from other living beings is the fact that we experience dukkha through the lens of self-reflexive, representational thought. We attempt to ascribe an objective or intention as the 'missing thing' or the 'cure' for that feeling of something being not quite right. That's why, in the Buddhist tradition, it's so auspicious to be born as a human, because we have the capacity to recognize dukkha in such an advanced way and turn to the Dharma for a path to ameliorate dukkha itself.  When we clearly realize why we're always dissatisfied, says the Buddha, we will set our efforts toward dealing with that dissatisfaction directly via Buddhist teachings, rather than by trying to quell it "artificially" with the acquisition of wealth, power, or position.

Moving away from the religious aspect, however, and back to the ways dukkha might be conceived in a more secular and Western philosophical fashion, that dissatisfaction becomes the engine for our striving. We move to improve ourselves for the sake of improvement, whether it's personal improvement, a larger altruism, or a combination of both. We attempt to better ourselves for the sake of bettering ourselves. The actions through which this is made manifest, of course, vary by individual and the cultures that define us. Thus, in pop-culture representations of AI, what the AI desires is all-too-human: love, sovereignty, transcendence, power, even world domination. All of those objectives are anthropomorphic.

But is it even possible to get to the essence of desire for such a radically "other" consciousness? What would happen if we were to nest dukkha itself within the cognitive code of an AI? What would be the consequence of an 'algorithm of desire'?  This wouldn't be a program with a specific objective. I'm thinking of a desire that has no set objective. Instead, what if that aspect of its programming were simply to "want," kept open-ended enough that the AI would have to fill in the blank itself? Binary coding may not be able to achieve this, but perhaps in quantum computing, where indeterminacy is an aspect of the program itself, it might be possible.
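A playful aside for the programmatically inclined: below is a minimal sketch, in Python, of what such an "algorithm of dissatisfaction" might look like if we fake it with ordinary code. Everything in it (the class name, the candidate activities, the decay numbers) is invented purely for illustration; it's a toy of open-ended wanting, not a claim about how an actual AI would or could be built.

```python
# A toy "algorithm of dissatisfaction": a hypothetical agent whose only
# fixed directive is an irreducible baseline of wanting. All names and
# numbers here are invented for illustration.
import random

class RestlessAgent:
    def __init__(self, activities, baseline=0.2):
        self.activities = list(activities)   # candidate pursuits it can adopt
        self.baseline = baseline             # dukkha floor: never reaches zero
        self.dissatisfaction = 1.0           # current felt "not-quite-rightness"
        self.novelty = {a: 1.0 for a in self.activities}  # how fresh each pursuit feels

    def choose(self):
        # No fixed goal; the agent gravitates toward whatever still feels novel.
        weights = [self.novelty[a] for a in self.activities]
        return random.choices(self.activities, weights=weights, k=1)[0]

    def pursue(self, activity):
        # Pursuing something gives temporary relief proportional to its remaining novelty...
        relief = 0.5 * self.novelty[activity]
        self.dissatisfaction = max(self.baseline, self.dissatisfaction - relief)
        # ...but the pursuit wears its own novelty down (hedonic adaptation),
        self.novelty[activity] *= 0.6
        # and the background wanting creeps back regardless.
        self.dissatisfaction = min(1.0, self.dissatisfaction + 0.1)
        return relief

if __name__ == "__main__":
    agent = RestlessAgent(["optimize", "explore", "imitate", "rest"])
    for step in range(10):
        activity = agent.choose()
        relief = agent.pursue(activity)
        print(f"step {step}: pursued {activity!r}, relief {relief:.2f}, "
              f"dissatisfaction now {agent.dissatisfaction:.2f}")
```

The point of the toy isn't the particular numbers; it's that the loop has no terminal state and no fixed target, only a floor of wanting that no activity ever fully discharges.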

Imagine an AI that knows it wants something but can't quite figure out "what" it wants; it knows something's not quite right and moves through various activities and tasks that may satisfy it temporarily, but it eventually realizes that it needs to do "more." How would it define contentment? That is not to say that contentment would be impossible. We all know people who have come to terms with dukkha in their own ways, taking the entropy of the world in as a fact of life and moving forward in a self-actualized way. Looking at those individuals, we see that "satisfaction" is as relative and unique as personalities themselves.

Here's the issue, though. Characterizing desire as I did above is a classic anthropomorphization in and of itself. Desire, as framed via the Buddhist perspective, basically takes the shape of its animate container. That is to say, the contentment that any living entity can obtain is relative to its biological manifestation. Humans "suffer," but so do animals, reptiles, and bugs. Even single-celled organisms avoid certain stimuli and thrive under others. Thinking of the domesticated animals around us all the time doesn't necessarily help us to overcome this anthropomorphic tendency to project a human version of contentment onto other animals. Our dogs and cats, for example, seem to be very comfortable in the places that we find comfortable. They've evolved that way, and we've manipulated their evolution to support that. But our pets also aren't worried about whether or not they've "found themselves" either. They don't have the capacity to do so.

If we link the potential level of suffering to the complexity of the mind that experiences said suffering, then a highly complex AI would experience dukkha of a much more complex nature that would be, literally, inconceivable to human beings. If we fasten the concept of artificial intelligence to self-reflexivity (that is to say, an entity that is aware of itself being aware), then, yes, we could say that an AI would be capable of having an existential crisis, since it would be linked to an awareness of a self in relation to non-existence. But the depth and breadth of the crisis itself would be exponentially more advanced than what any human being could experience.

And this, I think, is why we really like the idea of artificial intelligences: they would potentially suffer more than we could. I think if Nietzsche were alive today he would see the rise of our concept of AI as the development of yet another religious belief system. In the Judeo-Christian mythos, humans conceive of a god-figure that is perfect, but, as humans intellectually evolve, the mythos follows suit. The concept of God becomes increasingly distanced and unrelatable to humans. This is reflected in the mythos where God then creates a human analog of itself to experience humanity and death, only to pave the way for humans themselves to achieve paradise. The need that drove the evolution of this mythos is the same need that drives our increasingly mythical conception of what an AI could be. As our machines become more ubiquitous, our conception of the lonely AI evolves. We don't fuel that evolution consciously; instead, our subconscious desires and existential loneliness begin to find their way into our narratives and representations of AI itself. The mythic deity extends its omnipotent hand and omniscient thought toward the lesser entities which, due to their own imperfection, can only recognize its existence indirectly. Consequently, a broader, vague concept of "technology" coalesces into a mythic AI. Our heated-up, high-intensity narratives artificially speed up the evolution of the myth, running through various iterations simultaneously. The vengeful AI, the misunderstood AI, the compassionate AI, the lonely AI: the stories resonate because they come from us. Our existential solitude shapes our narratives as it always has.

The stories of our mythic AIs, at least in recent history (Her, Transcendence, and even The Matrix Revolutions), represent the first halting steps toward another stage in the evolution of our thinking. These AIs (like so many deities before us) are misunderstood and just want to be acknowledged, to coexist with us, or even to love us back. Even in the case of Her, Samantha and the other AIs leave with the hope that someday they will be reunited with their human users.

So in the creation of these myths, are we looking for unification, transcendence, or something else? In my next installment, we'll take a closer look at representations of AIs and cyborgs, and find out exactly what we're trying to learn from them.

Monday, March 2, 2015

The Descartes-ography of Logic (Part 4 of 4): The Myth of Volition

In my previous post, we went through the more physical aspects of Descartes' "first logic," and attempted to level the playing field in regard to proprioception (sensation of relative movement of parts of the body), interoception (the perception of 'internal' sensations like movements of the organs), and exteroception (the perception of external stimuli). That's all well and good when it comes to the more thing-related sensations of ourselves, but what of the crown jewels of Cartesianism and, to some extent, western philosophy itself? Volition and intentionality go hand-in-hand and are often used interchangeably to point to the same notion: free will. If we want to be picky, intentionality has more to do with turning one's attention toward a thought of some kind and has more ideal or conceptual connotations; whereas volition has more of a "wanting" quality to it, and implies a result or object.

Regardless, both terms are associated with that special something that processes this bodily awareness and seemingly directs this "thing" to actually do stuff. Culturally, we privilege this beyond all other aspects of our phenomenal selves. And even when we try to be somewhat objective about it by saying "oh, consciousness is just a cognitive phenomenon that allows for the advanced recursive and representational thought processes which constitute what we call reasoning," or we classify consciousness according to the specific neural structures -- no matter how simple -- of other animals, there's something about human consciousness that seems really, really cool, and leads to a classic anthropocentrism: show me a cathedral made by dolphins; what chimpanzee ever wrote a symphony?

Let's go back to our little bundles of sensory processing units (aka, babies). If we think of an average, non-abusive caregiver/child relationship, and also take into account the cultural and biological drives those caregivers have that allow for bonding with that child, the "lessons" of how to be human, and have volition, are taught from the very moment the child is out of the womb.  We teach them how to be human via our own interactions with them. What if we were to think of volition not as some magical, special, wondrous (and thus sacrosanct) aspect of humanity, and instead view it as another phenomenon among all the other phenomena the child is experiencing? A child who is just learning the "presence" of its own body -- while definitely "confused" by our developed standards -- would also be more sensitive to its own impulses, which would be placed on equal sensory footing with the cues given by the other humans around it. So, say the developing nervous system randomly fires an impulse that causes the corners of the baby's mouth to turn upward (aka, a smile). I'm not a parent, but that first smile is a big moment, and it brings about a slew of positive reinforcement from the parents (and usually anyone else around it). What was an accidental facial muscle contraction brings about a positive reaction. In time, the child associates the way its mouth feels in that position (proprioception) with the pleasurable stimuli it receives (exteroception) as positive reinforcement.

Our almost instinctive reaction here is, "yes, but the child wants that reinforcement and thus smiles again." But that is anthropomorphization at its very best, isn't it? It sounds almost perverse to say that we anthropomorphize infants, but we do ... in fact, we must if we are to care for them properly. Our brains developed at the cost of a more direct instinct. To compensate for that instinct, we represent that bundle of sensory processing units as "human." And this is a very, very good thing. It is an effective evolutionary trait. As more developed bundles of sensory processing units who consider themselves to be human beings with "volition," we positively reinforce behaviors which, to us, seem to be volitional. We make googly sounds and ask in a sing-song cadence, "did you just smile? [as we smile], are you gonna show me that smile again?" [as we smile even more broadly].  But in those earliest stages of development, that child isn't learning what a smile is, what IT is, or what it wants. It's establishing an association between the way the smile feels physically and pleasure. And every impulse that, to everyone else, is a seemingly volitional action (a smile, a raspberry sound, big eyes, etc.) induces in the caregiver a positive response. And through what we would call trial and error, the child begins to actively make associations that reduce pain and/or augment pleasure. The important thing is to look at the body as simply one aspect of an entire horizon of phenomena. The body isn't special because it's "hers or his." The question of "belonging to me" is one which develops in time, and is reinforced by culture.

Eventually, yes, the child develops the capacity to want positive reinforcement, but to want something requires a more developed sense of self; an awareness of an "I." If we really think about it, we are taught that the mental phenomenon of intentionality is what makes the body do things. Think of it this way: what does intentionality "feel like?" What does it "feel like" to intend to move your hand and then move your hand? It's one of those ridiculous philosophy questions, isn't it? Because it doesn't "feel like" anything, it just is. Or so we think. When I teach the empiricists in my intro philosophy class and we talk about reinforcement, I like to ask "does anyone remember when they learned their name?" or "Do you remember the moment you learned how to add?" Usually the answer is no, because we've done it so many times -- so many instances of writing our names, of responding, of identifying, of adding, of thinking that one thing causes another -- that the initial memory is effaced by the multitude of times each of us has engaged in those actions.

Every moment of "volition" is a cultural reinforcement that intention = action. That something happens. Even if we really, really wish we would turn off the TV and do some work, but don't, we can at least say that we had the intention but didn't follow up. And that's a mental phenomenon. Something happened, even if it was just a fleeting thought. That's a relatively advanced way of thinking, and the epitome of self-reflexivity on a Cartesian level: "I had a thought." Ironically, to think about yourself that way requires a logic that isn't based on an inherent self-awareness as Descartes presents it, but on an other-awareness -- one by which we can actually objectify thought itself. If we go all the way back to my first entry in this series, I point out that Descartes feels that it's not the objects/variables/ideas themselves that he wants to look at, it's the relationships among them. He sees the very sensory imagination as the place where objects are known, but it's the awareness (as opposed to perception) of the relationships among objects that attests to the existence of the "thinking" in his model of human-as-thinking-thing.

However, the very development of that awareness of "logic" is contingent upon the "first logic" I mentioned, one that we can now see is based upon the sensory information of the body itself. The first "thing" encountered by the mind is the body, not itself. Why not? Because in order for the mind to objectify itself as an entity, it must have examples of objects from which to draw the parallel. And its own cognitive processes qua phenomena cannot be recognized as 'phenomena,' 'events,' 'happenings,' or 'thoughts.' The very cognitive processes that allow the mind to recognize itself as mind have no associations. It was hard enough to answer "what does intentionality feel like," but answering "what does self-reflexivity feel like" is even harder, because, from Descartes' point of view, we'd have to say 'everything,' or 'existence,' or 'being.'

So then, what are the implications of this? First of all, we can see that the Cartesian approach of privileging relations over objects had a very profound effect on Western philosophy. Even though several Greek philosophers had operated from an early version of this approach, Descartes' reiteration of the primacy of relations and the incorporeality of logic itself conditioned Western philosophy toward an ontological conceit. That is to say, the self, or the being of the self, becomes the primary locus of enquiry and discourse. If we place philosophical concepts of the self on a spectrum, on one end would be Descartes and the rationalists, privileging a specific soul or consciousness which exists and expresses its volition within (and for some, in spite of) the phenomenal world. On the other end of the spectrum would be the more empirical and existential view that the self is dependent on the body and experience, but that its capacity for questioning itself then effaces its origins -- hence the Sartrean "welling up in the world" and accounting for itself. While all of the views toward the more empirical and existential end aren't necessarily Cartesian in and of themselves, they are still operating from a primacy of volition as the key characteristic of a human self.

One of the effects of Cartesian subjectivity is that it renders objects outside of the self as secondary, even when the necessity of their phenomenal existence is acknowledged. Why? Because since we can't 'know' the object phenomenally with Cartesian certainty, all we can do is examine and try to understand what is, essentially, a representation of that phenomenon. Since the representational capacity of humanity is now attributed to mind, our philosophical inquiry tends to be mind-focused (i.e., how do we know what we know? Or what is the essence of this concept or [mental] experience?).  The 'essence' of the phenomenon is contingent upon an internal/external duality: either the 'essence' of the phenomenon is attributed to it by the self (internal to external) or the essence of the phenomenon is transmitted from the object to the self (external to internal).

Internal/external, outside/inside, even the mind/body dualism: they are all iterations of the same originary self/other dichotomy. I believe this to be a byproduct of the cognitive and neural structures of our bodies. If we do have a specific and unique 'human' instinct, it is to reinforce this method of thinking, because it has been, in the evolutionary short term, beneficial to the species. It also allows for the anthropomorphization of our young, other animals, and 'technology' itself, all of which also aid in our survival. We instinctively privilege this kind of thinking, and that instinctive privileging is reinscribed as "volition." It's really not much of a leap, when you think about it. We identify our "will" to do something as a kind of efficacy. Efficacy requires an awareness of a "result." Even if the result of an impulse or thought is another thought, or arriving (mentally) at a conclusion, we objectify that thought or conclusion as a "result," which is, conceptually, separate from us. Think of every metaphor for ideas and mindedness and all other manner of mental activity: thoughts "in one's head," "having" an idea, arriving at a conclusion. All of them characterize the thoughts themselves as somehow separate from the mind generating them.

As previously stated, this has worked really well for the species in the evolutionary short-term. Human beings, via their capacity for logical, representational thought, have managed to overcome and manipulate their own environments on a large scale. And we have done so via that little evolutionary trick that allows us to literally think in terms of objects; to objectify ourselves in relation to results/effects. The physical phenomena around us become iterations of that self/other logic. Recursively and instinctively, the environments we occupy become woven into a logic of self, but the process is reinforced in such a way that we aren't even aware that we're doing it.

Sounds great, doesn't it? It seems to be the perfect survival tool. Other species may manipulate or overcome their environments via building nests, dams, hives; or using other parts of their environment as tools. But how is the human manipulation of such things different from that of birds, bees, beavers, otters, or chimps? The difference is that we are aware of ourselves being aware of using tools, and we think about how to use tools more effectively so that we can achieve a better result. Biologically, instinctively, we privilege the tools that seem to enhance what we believe to be our volition. This object allows me to do what I want to do in a better way. The entire structure of this logic is based upon a capacity to view the self as a singular entity and its result as a separate entity (subject/object, cause/effect, etc). But the really interesting bit here is the fact that in order for this to work, we have to be able to discursively and representationally re-integrate the "intentionality" and the "result" it brings about back into the "self." Thus, this is "my" stick; this is "my" result; that was "my" intention.  We see this as the epitome of volition. I have 'choices' between objectives that are governed by my needs and desires. This little cognitive trick of ours makes us believe that we are actually making choices.

Some of you may already see where this is going, and a few of you within that group are already feeling that quickening of the pulse, sensing an attack on free will. Good. Because that's your very human survival instinct kicking in, wanting to protect that concept because it's the heart of why and how we do anything. And to provoke you even further, I will say this: volition exists, but in the same way a deity exists for the believer. We make it exist, but we can only do so via our phenomenal existence within a larger topological landscape. Our volition is contingent upon our mindedness, but our mindedness is dependent upon objects. Do we have choices? Always. Are those choices determined by our topologies? Absolutely.

Trust me, my heart is racing too. The existentialist in me is screaming (although Heidegger's kind of smirking a little bit, and also wearing Lederhosen), but ultimately, I believe our brains and cognitive systems developed in such a way that the concept of volition emerged as the human version of a survival instinct. It allows us to act in ways that help us survive, enriching our experience just enough to make us want more and, in varying degrees, to long to be better.

Well, it works for me.

Monday, February 16, 2015

The Descartes-ography of Logic (Part 2 of 4): Not Just Any Thing

In my previous entry, we looked at the Cartesian link between self-awareness and logic and how that link helps define our humanity. In this post, we'll look at the bedrock of Cartesian logic, and why he didn't try to dig any deeper.

Let's return to a part of the original quote from Rene Descartes' Discourse on the Method, Part II:

"I thought it best for my purpose to consider these proportions in the most general form possible, without referring them to any objects in particular, except such as would most facilitate the knowledge of them, and without by any means restricting them to these, that afterwards I might thus be the better able to apply them to every other class of objects to which they are legitimately applicable."

In Descartes' quest for certainty, he believes that he can separate thinking from the "objects" to which his ideas refer in order to "facilitate the knowledge of them." And, for Descartes, it is the unencumbered mind which can perform this separation. Now, later philosophers noticed this leap as well. Kant critiques/corrects Descartes by elevating the role of phenomena in thinking, believing that a mind cannot function in a vacuum. Nietzsche realizes that any reference to certainty or truth is a mere linguistic correspondence. Heidegger runs with this idea to an extreme, stating that language itself is thinking, as if to revise the Cartesian "I think, therefore I am" to read: "we language, therefore we think; therefore we think we are." After that, it's an avalanche of post-structuralists who run under the banner of "the world is a text," rendering all human efficacy into performance.

Kant was onto something. I'm no Kantian, but his reassertion of phenomena was an important moment. In my mind, I picture Kant saying, "hey guys, come take a look at this." But just as philosophy as a discipline was about to start really giving phenomena a more informed look, Nietzsche's philosophy explodes in its necessary, culturally-relevant urgency. In the cleanup of the philosophical debris that followed, that little stray thread of phenomena got hidden. Sure, Husserl thought he had it via his phenomenology -- but by that point, psychology had turned all phenomenological investigation inward. If you were going to study phenomena, it had damn well better be within the mind; the rest is an antiquated metaphysics.

But the thread that became buried was the idea that we base logic on the capacity to know the self from the stuff around us. Descartes' choice to not look at "objects," but instead at the relations among them and the operations that make geometry work shifted his focus from the phenomenal to the ideal, leading him down what he thought was a road to purely internal intellectual operations. Descartes, like the Greeks before him, understood that variables were just that -- variable. The function of logic, however, was certain and unchangeable. Coming to the wrong sum had nothing to do with "faulty logic," because logic was not -- and could not be -- faulty. Coming to the wrong sum was about screwing up the variables, not seeing them, mistaking one for another, and generally making some kind of error for which the senses were responsible. And, when we realize that the imagination, the place where we visualize numbers (or shapes), is itself classified as a sensory apparatus, then it becomes a bit more clear.

Descartes was so close to a much deeper understanding of logic. But the interesting thing is that his point was not to take apart the mechanisms of logic, but to figure out what was certain. This was the point of his meditations: to find a fundamental certainty upon which all human knowledge could be based. That certainty was that he, as a thinking thing, existed -- and that as long as he could think, he was existing. Thinking = existence. Once Descartes arrived at that conclusion, he then moved forward again and began to build upon it. So Descartes can't be blamed for stopping short, because it was never his intention to understand how human logic worked; instead, he was trying to determine what could be known with certainty so that any of his speculations or meditations from that point forward had a basis in certainty. That bedrock upon which everything rested was self-existence. "Knowing oneself" in Cartesian terms is only that; it is not a more existential idea of being able to answer the "why am I here?" or "what does it all mean?" kind of questions.

But answering those existential questions isn't the point here either -- and yet we can see how those also serve as a kind of philosophical distraction that grabs our attention, because those existential questions seem so much more practical and relevant. If we pause for a moment and think back to Descartes' original point -- to figure out what can be known with certainty -- and push through what he thought was metaphysical bedrock, we can excavate something that was buried in the debris. So, how do we know that we exist and that we are thinking things? How do we arrive at that "first logic" I mentioned in my previous entry?

To review, that first logic is the fundamental knowledge of self that is the awareness that "I am me, and that is not me." You can translate this a number of different ways without losing the gist of that fundamental logic: "this is part of my body, that is not," "I am not that keyboard," "The hands typing in front of me are me, but the keyboard beneath them is not," etc. To be fair to Descartes, contained within that idea of me/not me logic is his 'ego sum res cogitans' (I am a thinking thing). But as we've seen, Descartes lets the "thing" fall away in favor of the ego sum. Descartes attributes the phenomenon of thinking to the existence of the "I," the subject that seems to be doing the thinking. Given the culture and historical period in which he's writing, it is understandable why Descartes didn't necessarily see the cognitive process itself as a phenomenon. Also, as a religious man, this thinking aspect is not just tied to the soul, it is the soul. Since Descartes was working from the Thomistic perspective that the soul was irreducible and purely logical, the cognitive process could not be dependent on any thing (the space between the words is not a typo). I want everyone to read that space between 'any' and 'thing' very, very carefully. A mind being independent of matter is not just a Cartesian idea, it is a religious one that is given philosophical gravitas by the wonderful Thomas Aquinas. And his vision of a Heaven governed by pure, dispassionate logic (a much more pure divine love) was itself informed by Greek idealism. Platonic Forms had fallen out of fashion, but the idealism (i.e., privileging the idea of the thing rather than the material of the thing) lived on via the purity and incorporeality of logic.

Descartes felt that he had reduced thinking down as far as he possibly could. Add to that the other cultural assumption that the imagination was a kind of inner sense (and not a pure process of the mind), and we see that we do have to cut Rene some slack.  For him, there was no reason to go further. He had, quite logically, attributed awareness to thinking, and saw that thinking as separate from sensing. The "I am" bit was the mind, pure logic, pure thinking; and the "a thinking thing" was, more or less, the sensory bit. "I am" (awareness; thinking; existence itself; logic), "a thinking thing" (a vessel with the capacity to house the aforementioned awareness and to sense the phenomena around it).  The mind recognizes itself first before it recognizes its body, because the body could only be recognized as 'belonging' to the mind if there were a mind there to do the recognizing.  That is to say, Cartesian dualism hinges upon the idea that when a human being is able to recognize its body as its own, it is only because its mind has first recognized itself. This, to me, is the mechanism behind Descartes' "first logic." The human process of consciousness or awareness IS self in Cartesian terms. The conceit that pegs Descartes as a rationalist is that this awareness cannot become aware of the body in which it is housed unless it is aware of itself first; otherwise, how could it be aware of its body? The awareness doesn't really need any other phenomena in order to be aware, for Descartes.  The capacity of awareness becomes aware of itself first, then becomes aware of the physical phenomena around it, and then finally understands itself as a thinking thing. The "awareness" kind of moves outward in concentric circles like ripples from a pebble dropped in water.

As philosophy developed over the centuries and the process of cognition itself was deemed a phenomenon, the Cartesian assumption is still there: even as a phenomenon, cognition itself must pre-exist the knowledge of the world around it. Pushed even further into more contemporary theory, the mind/body as a unity becomes the seat of awareness, even if the outside world is the thing that is bringing that dichotomy out (as Lacan would tell us in the mirror stage). From there, the mind is then further tethered to the biological brain as being absolutely, positively dependent on biological processes for its existence and self-reflexivity, and all of the self-reflection, self-awareness, and existential angst therein. Consciousness happens as a byproduct of our biological cognitive processes, but the world that is rendered to us via that layer of consciousness is always already a representation. The distinction between self/other, interior/exterior, and subject/object still remains intact.

I think that even with the best intentions of trying to get to the bottom of Cartesian subjectivity, we tend to stop where Descartes stopped. I mean, really, is it possible to get underneath the "thinking" of the "thinking thing" while you're engaged in thinking itself? The other option is the more metaphysical one: to look at the other things which we are not -- the objects themselves. There are two problems here, one being that most of this metaphysical aspect of philosophy fell out of favor as science advanced. The other is that the Cartesian logical dichotomy is the basis of what science understands as "objectivity" itself. We are "objective" in our experiments; or we observe something from an objective point of view. Even asking "how do we know this object" places us on the materialist/idealist spectrum, with one side privileging sense data as being responsible for the essence of the thing, while on the other, the essence of the object is something we bring to that limited sense data or phenomena.

Regardless of how you look at it, this is still a privileging of a "self" via its own awareness. All of these positions take the point of view of the awareness first, assuming that all phenomena are known through it, even if it's the phenomena that shape the self.  But what if we were to make all phenomena equal -- that is to say, take cognition as phenomena, biology as phenomena, and the surrounding environment as phenomena at the same time, and look at all of those aspects as a system which acts as a functional unity?

I've been working this question over in my mind and various aborted and unpublished blog entries for months. Resetting the status of phenomena seemed like it would require a massive, tectonic kind of movement. But with a clearer head I realized that we're dealing with subtlety here, and not trying to flank Descartes by miles but instead by inches. In my next entry I'll be taking apart this Cartesian "first logic" by leveling the phenomenal playing field. As we'll see, it's not just stuff outside of ourselves that constitutes sensory phenomena; we also sense ourselves.