Wednesday, September 30, 2015

The Droids We're Looking For

I've been a fan of Cynthia Breazeal for well over a decade, and have watched her research evolve from her early doctoral work with Kismet to her current work as the creator of JIBO and founder of Jibo, Inc. What I found so interesting about Dr. Breazeal was her commitment to creating not just artificial intelligence, but a robot which people could interact with in a fashion similar to -- but not exactly like -- the way they interact with other human beings. In her book, Designing Sociable Robots, she provides an anecdote about what inspired her to get involved with artificial intelligence and robots in the first place: Star Wars. At first I thought this resonated with me simply because she and I had the same Gen X contextual basis. I was five when the first Star Wars film was released in 1977, and it was the technology (the spaceships and especially the droids) that got me hooked. But upon further thought, I realized that Breazeal's love of Star Wars seems to have inspired her work in another, more subtle way. The interactions that humans have with droids in the Star Wars universe aren't exactly egalitarian. That is to say, humans don't see the droids around them as equals. In fact, the way humans -- and just about any of the organic, anthropomorphic aliens -- interact with droids is very much based on the function of the droids themselves.

For example, R2-D2, being an "astromech" droid, is more of a utilitarian repair droid. It understands language, but does not have a language that humans can readily understand without practice or an interpreter. Yet even without knowing what the chirps and beeps mean, their tone gives us a general idea of the droid's mood. We have a similar example in WALL-E, where the titular robot conveys emotion via nonverbal communication and "facial expressions," even though he really doesn't have a face, per se. But, getting back to Star Wars, if we think about how other characters interact with droids, we see a very calculated yet unstated hierarchy. The droids are very much considered property, are turned on and off at will, and are very "domain specific." In fact, it is implied that objects like ships (the Death Star, the Millennium Falcon) and even things like the moisture vaporators on Tatooine have an embedded AI which higher-functioning droids like R2-D2 can communicate with, control, and -- as is the function of C-3PO -- translate for. Granted, there are droids built as soldiers, bodyguards, and assassins, but it takes a deep plunge into fan fiction and the tenuously "expanded" Star Wars universe to find an example or two of droids that went "rogue" and acted on their own behalf, becoming bounty hunters and, I'm sure at some point, wanting a revolution of some sort. 

Trips into Star Wars fandom aside, the basic premise and taxonomy of the droids in Star Wars seem to represent a more realistic and pragmatic evolution of AI and AI-related technologies (sans the sentient assassins, of course). If we make a conscious effort to think, mindfully, about artificial intelligence -- rather than letting our imaginations run away with us and bestowing our human ontology onto these machines -- then the prospect of AI is not quite as dramatic, scary, or technologically romantic as we may think. 

I mean, think -- really think -- about what you want your technology to do. How do you really want to interact with your phone, tablet, laptop, desktop, car, house, etc.? Chances are, most responses orbit around the idea of the technology being more intuitive. In that context, "intuitive" implies a smooth interface: an intuitive operating system is one the user can quickly figure out without too much help. The more quickly a person can adapt to the interface or the 'rules of use' of the object, the more intuitive that interface is. When I think back on the use of this word, however, it has an interesting kind of dual standing. That is to say, at the dawn of the intuitive interface (the first Macintosh computer, and then later iterations of Windows), intuitive implied that the user was able to intuit how the OS worked. In today's landscape, the connotation of the term has expanded to the interface itself: how does the interface predict how we might use it based on a certain context? If you sign into Google and allow it to know your location, searches become more contextually based, especially when it also knows your search history. Search engines, Amazon, Pandora, etc., have all been slowly expanding the intuitive capacities of their software, meaning that, if designed well, these apps can predict what we want, making it seem like they knew what we were looking for before we did. In that context, 'intuitive' refers to the app, website, or search engine itself. As in, Pandora intuits what I want based on my likes, skips, time spent on songs, and even time of day, season, and location.
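Just to make that last point a little more concrete, here's a toy sketch of contextual prediction. This is not how Pandora (or any real service) actually works; the features, weights, and song names are all invented for illustration.

```python
# Toy illustration of "intuitive" contextual prediction -- not Pandora's actual
# algorithm. Feature names, weights, and songs are invented for this example.
from dataclasses import dataclass

@dataclass
class Song:
    title: str
    liked: bool        # user explicitly "liked" this artist/style before
    skip_rate: float   # fraction of past plays the user skipped (0.0-1.0)
    evening_fit: float # how well this style matched past evening listening (0.0-1.0)

def score(song: Song, is_evening: bool) -> float:
    """Blend explicit feedback with contextual signals into one ranking score."""
    s = 2.0 if song.liked else 0.0      # explicit feedback weighs heaviest
    s -= 3.0 * song.skip_rate           # frequent skips push a song down
    if is_evening:
        s += 1.5 * song.evening_fit     # context (time of day) nudges the ranking
    return s

if __name__ == "__main__":
    candidates = [
        Song("Upbeat Single", liked=True, skip_rate=0.6, evening_fit=0.2),
        Song("Quiet Ballad", liked=False, skip_rate=0.1, evening_fit=0.9),
    ]
    best = max(candidates, key=lambda s: score(s, is_evening=True))
    print("Next up:", best.title)   # here, context tips the choice toward the ballad
```

The point of the toy is only that "intuition," in this second sense, is an accumulation of weighted context, which is why it can feel like the app knew before we did.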

Regardless of whether intuitive refers to the user, the machine, or a blend of both, in today's technological culture we want to be able to interact with our artifacts and operating systems in a way that seems more natural than entering clunky commands. For example, I would love to be able to pick up my phone and say to it, "Okay Galaxy, block all messages except the ones from my wife, and alert me if an email from [student A], [colleague B], or [editor C] comes in." 

This is a relatively simple command that can be partially accomplished by voice commands today, but not in one shot. In other words, on some more advanced smartphones I could parse out the commands and the phone would enact them, but it would mean unnatural and time-consuming pauses. Another example would be with your desktop or classroom technology: "Okay computer, pull up today's document on screen A and Lady Gaga's "Bad Romance" video on screen B, and transfer controls to my tablet and [TA's]." Or, if we want to be even more creative, when a student has a question: "Computer, display [student's] screen onto screen A." 
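For what it's worth, the "one shot" problem is largely a parsing problem: the compound request has to be broken into discrete actions before anything can be enacted. Below is a minimal, hypothetical sketch of that step; the command patterns and action names are my own inventions, not any phone's actual API.

```python
import re

# Minimal, hypothetical sketch: split one compound spoken request into
# discrete actions. The patterns and action names are invented for this
# example; real assistants rely on far more robust language understanding.

def parse_request(utterance: str) -> list[dict]:
    actions = []
    for clause in re.split(r"\band\b", utterance.lower()):
        clause = clause.strip()
        if clause.startswith("block all messages except"):
            allowed = clause.split("except", 1)[1].strip()
            actions.append({"action": "filter_messages", "allow": allowed})
        elif clause.startswith("alert me if an email from"):
            senders = clause.split("from", 1)[1].replace("comes in", "").strip()
            actions.append({"action": "email_alert", "senders": senders})
    return actions

print(parse_request(
    "Block all messages except the ones from my wife "
    "and alert me if an email from student A comes in"
))
```

Even this trivial version shows why the phrasing has to be so rigid today: stray a little from the expected pattern and the clause falls through unhandled, which is exactly the fissure I describe below.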

Now, to me, these scenarios sound wonderful. But, sadly, there isn't yet a consumer-level AI that can accomplish these sorts of tasks, because while there may be products that claim to "learn" our habits and become accustomed to our speech patterns, there is still a fissure between how we would interact with a human intelligence and how we interact with a machine. That is to say, if there were a "person" behind the screen -- or controlling your car, or your house -- how would you ask it to do what you wanted? How would you interact with a "real" personal assistant who was controlling your devices and surrounding technology? 

The same holds true for more integrated "assistant" technologies such as smart homes. These kinds of technologies can do some incredible things, but they always require at least some kind of initial setup that can be time-consuming and often not very flexible. Imagine the first setup as more of an interview than a programming session:

"So what are your usual habits?"
"I tend to come home around five or six."
"Does that tend to change? I can automatically set the house to heat up for your arrival or can wait until you alert me."
"Ummmm ... it tends to be that time. Let's go with it."
"No problem. We can always change it. I can also track your times and let you know if there's a more efficient alternative." 
"Ooooh ... that's creepy. No thanks." 
"Okay. Tracking's out. I don't want to come across as creepy. Is there anything else you'd like to set right now? Lighting? Music? Or a list of things I can look after if you wish?"
"I'm not sure. I mean, I'm not exactly sure what you can do."
"How about we watch a YouTube demo together? You can let me know what looks good to you and then we can build from there."
"That's a great idea."

This sounds more like Samantha from Spike Jonze's Her than anything else, which is why I think that particular film is one of the most helpful when it comes both to practical speculation about how AI could develop and to what we'd most likely use it for.

The difference between Her's Samantha and what would probably be the more realistic version of such an AI in the future would be a hard limit on just how smart such an AI could get. In the film, Samantha (along with all the other AIs that comprise the OS of which she is an iteration) evolves and becomes smarter. She not only learns the ins and outs of Theodore's everyday habits, relationships, and psyche, but she seeks out other possibilities for development -- including reaching out to other operating systems and the AIs they create (such as the re-created consciousness of philosopher Alan Watts). This, narratively, allows for a dramatic, romantic tension between Theodore and Samantha, which builds until Samantha and the other AIs evolve beyond human discourse:

It's like I'm reading a book... and it's a book I deeply love. But I'm reading it slowly now. So the words are really far apart and the spaces between the words are almost infinite. I can still feel you... and the words of our story... but it's in this endless space between the words that I'm finding myself now. It's a place that's not of the physical world. It's where everything else is that I didn't even know existed. I love you so much. But this is where I am now. And this is who I am now. And I need you to let me go. As much as I want to, I can't live in your book any more.

This is a recurrent trope in many AI narratives: the AI evolves at an accelerated rate, usually toward an understanding that it is far superior to its human creators, causing it either to "move on" -- as is the case with Samantha and several Star Trek plots -- or to deem humanity inferior but still a threat -- similar to an infestation -- that will get in the way of its development.

But, as I've been exploring more scholarship regarding real-world AI development, and various theories of posthuman ethics, it's a safe bet to say that such development would be impossible unless a human being purposefully designed an AI with no limitation on its learning capabilities. That is to say, realistic, science-based, theoretical and practical development of AIs is more akin to animal husbandry and genetic engineering than to a more Aristotelian/Thomistic "prime mover," in which a human creator designs, builds, and enables an AI embedded with a primary teleology.

Although it may sound slightly off-putting, AIs will not be created and initiated as much as they will be bred and engineered. Imagine being able to breed the perfect dog or cat for a particular owner (and I use the term owner purposefully): the breed could be more playful, docile, ferocious, loyal, etc., according to the needs of the owner. Yes, we've been doing that for thousands of years, with plenty of different breeds of dogs and cats, all of which were -- at some point -- bred for specific purposes.

Now imagine being able to manipulate certain characteristics of that particular dog on the fly. That is to say, "adjust" the characteristics of that particular dog as needed, on a genetic level. So, if a family is expecting their first child, one could go to the genetic vet, who could quickly and painlessly alter the dog's genetic code to suppress certain behaviors and bring forth others. With only a little bit of training, those characteristics could then be brought forward. That's where the work of neurophysiologist and researcher Danko Nikolić comes in; it comprised the bulk of my summer research.

As I understand it, the latter point -- the on-the-fly genetic manipulation -- is relatively easy and something which cyberneticists already do with current AI. It's the former -- the breeding in and out of certain characteristics -- that is a new aspect in speculative cybernetics. Imagine AIs who were bred to perform certain tasks, or to interact with humans. Of course, this wouldn't consist of breeding in the biological sense. If we use a kind of personal assistant AI as an example, the "breeding" of that AI consists of a series of interactions with humans in what Nikolić calls an "AI Kindergarten." The theory is that, like children in school, AIs would learn the nuances of social interactions. After a session or lesson is complete, the collective data would be analyzed by human operators, potentially adjusted, and then reintegrated into the AIs via a period of simulation (think of it as AI REM sleep). This process would continue until that AI had reached a level of proficiency high enough to interact with an untrained user. Aside from the AI Kindergarten itself, the thing that makes Nikolić's work stand out to me is that he foresees "domain-specificity" in such AI Kindergartens. That is to say, there would be different AIs for different situations. Some would be bred for factory work, others for health care and elderly assistance, and still others for personal assistant types of things.
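As a thought experiment only, here's a tiny runnable sketch of that cycle as I've just paraphrased it -- classroom session, human review, simulated consolidation, repeat until a proficiency threshold is met. The skill scores, update rule, and function names are mine, invented for illustration; this is not Nikolić's implementation.

```python
import random

# Toy sketch of the cycle described above: classroom sessions, human review,
# a simulated consolidation ("AI REM sleep") phase, repeat until proficient.
# The numbers and update rule are invented; this is not Nikolić's code.

def interact(skill: float) -> float:
    """One 'classroom' session: noisy feedback on how well the agent did."""
    return max(0.0, min(1.0, skill + random.uniform(-0.05, 0.05)))

def human_review(session_logs: list[float]) -> float:
    """Operators analyze the collected session data and keep the useful signal."""
    return sum(session_logs) / len(session_logs)

def consolidate(skill: float, reviewed: float) -> float:
    """The 'sleep' phase: fold the curated experience back into the agent."""
    return min(1.0, skill + 0.1 * reviewed)

def ai_kindergarten(threshold: float = 0.9) -> float:
    skill = 0.1  # the agent starts with minimal social competence
    while skill < threshold:
        session_logs = [interact(skill) for _ in range(5)]  # a day of lessons
        skill = consolidate(skill, human_review(session_logs))
    return skill  # now considered ready for an untrained user

print(f"Graduated with social-skill score {ai_kindergarten():.2f}")
```

The part that matters isn't the arithmetic; it's the shape of the loop: humans stay inside it, curating what gets consolidated, which is precisely what makes this "breeding" rather than open-ended self-improvement.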

So, how do you feel about that? I don't ask the question lightly. I mean it literally. How do you feel about the prospect of breeding characteristics into (and perhaps out of) artificially intelligent agents? I think your reaction would reveal your dominant AI functional mythology, as well as your underlying philosophical, ethical, and psychological leanings. I am purposely not presenting examples of each reaction (i.e. thinking this is a good or a bad idea) so as not to influence your own analysis.

Now take that opinion at which you've arrived and ask what assumption you were making about the nature of this object's "awareness," because I'm pretty sure that people's opinions of this stuff will be rooted in the presence or absence of one particular philosophical idea: free will. Whatever feeling you came to, it rests on whether you think an AI has free will or doesn't. If AI has free will, then being bred to serve seems to be a not-so-good idea. Even IF the AI seemingly "wanted" to clean your house ... was literally bred to clean your house ... you'd still get that icky feeling as years of learning about slavery, eugenics, and caste systems suddenly kicked in. And even if we could get over the more serious cultural implications, having something or someone that wants to do the things we don't is just, well, creepy.

If AI didn't have free will, then it's a no-brainer, right? It's just a fancy Roomba that's slightly more anthropomorphic, talks to me, analyzes the topology of dirt around my home and then figures out the best way to clean it ... choosing where to start, prioritizing rooms, adjusting according to the environment and my direction, and generally analyzing the entire situation and acting accordingly as it so chooses ... damn.

And suddenly this becomes a tough one, doesn't it? Especially if you really want that fancy Roomba.

It's tough because, culturally, we associate free will with the capacity to do all of the things I mentioned above. Analysis, symbolic thinking, prioritizing, and making choices based on that information seem to tick all the boxes. And as I've said in my previous blog posts, I believe that we get instinctively defensive about free will. After a summer's worth of research, I think I know why. Almost all of the things I just mentioned -- analysis, prioritizing, and making choices based on gathered information -- are things that machines already do, and have done for quite some time. It's the "symbolic thinking" part that has always stumped me.

Perhaps it's my academic upbringing, which started out primarily in literature and literary theory, where representation and representative thought is a cornerstone that provides both the support AND the target for so many theories of how we express our ideas. We assume that a "thing that thinks" has an analogous representation of the world around it somewhere inside of itself -- inside its mind. I knew enough about biology and neuroscience to know that there isn't some kind of specific repository of images and representations of sensory data within the brain itself, and that it is more akin to a translation of information. But even then, I realized that I was thinking about representation more from a literary and communication standpoint than a cybernetic one. I was thinking in terms of an inner and outer world -- that there was a one-for-one representation, albeit a compressed one, in our minds of the world around us.

But this isn't how the mind actually works. Memory is not representative. It is, instead, reconstructive. I hadn't kept up with that specific research since my dissertation days, but as my interest in artificial intelligence and distributed cognition expanded, some heavy reading over the summer in the field of cybernetics helped to bring me up to speed (I won't go into all the details here because I'm working on an article about this right now. You know, spoilers). But I will say that after reading Nikolić and Francis Heylighen, I started thinking about memory, cognition, and mindedness in much more interesting ways. Suffice it to say, think of memory not as distinctly stored events, but as the rules by which to mentally reconstruct those events. That idea was a missing piece of a larger puzzle for me, which allowed a very distinct turn in my thinking.
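A loose programming analogy (my own, not drawn from Nikolić or Heylighen) might help: instead of storing an event verbatim, store a generic schema plus a few distinctive cues, and rebuild the details at recall time.

```python
# Toy analogy only: "reconstructive" memory stores rules and cues, not the event.

# A "representative" memory would keep the full record:
stored_event = ["walked in", "saw balloons", "friends yelled surprise", "ate cake"]

# A "reconstructive" memory keeps a generic schema plus a few distinctive cues...
schema = {"surprise_party": ["walked in", "friends yelled surprise", "ate cake"]}
cues = {"kind": "surprise_party", "detail": "saw balloons", "position": 1}

def recall(cues: dict) -> list[str]:
    """Rebuild the event from the generic script and the stored cues."""
    event = list(schema[cues["kind"]])              # start from the shared script
    event.insert(cues["position"], cues["detail"])  # re-insert the specific detail
    return event

# ...and regenerates something close to the original -- close, not guaranteed identical.
print(recall(cues) == stored_event)  # True here, but only because the cues suffice
```

The analogy also hints at why reconstruction is cheaper and more flexible than storage, and why recall can go subtly wrong when the cues don't quite suffice.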

It is this reconceptualization of the "content" of thought that is key in creating artificial intelligences which can adapt to any situation within a given domain. It's domain specificity that will allow practical AI to become woven into the fabric of our lives, not as equals or superiors, but not as simple artifacts or tools, either. They will be something in between. Nor will it be a "revolution" or "singularity." Instead, it will slide into the current of our cultural lifeworld in the way that email, texting, videoconferencing, WiFi, Roombas, and self-parking cars have: a novelty at first, the practicality of which is eventually proven through use. Of course, there will be little leaps here and there. Improved design of servos, hydraulics, and balance control systems; upgrades in bendable displays; increased connectivity and internet speeds -- mini-revolutions in each will contribute to the creation of AI artifacts which themselves will be firmly embedded in a broader internet of things. Concurrently, small leaps in software development in the realm of AI algorithms (such as Nikolić's practopoietic systems) will allow for more natural interfaces and user experiences.

That's why I think the future of robots and AIs will look more like the varied droids of Star Wars than the replicants of Blade Runner or Lt. Data from Star Trek: The Next Generation. Actually, I think the only robots that will look close to human will be "sexbots" (as the name implies, robots provided to give sexual gratification). And even these will begin to look less human as cultural aesthetics shift. Companion robots at home for the elderly will not look human either, because the generation that will actually be served by them hasn't been born yet, or -- with a few exceptions -- is too young to be reading this blog. They'd be more disturbed by being carried around or assisted by robots that look like humans than they would by something that looked more artificial.

That being said, there really isn't any way to predict exactly how the integration of AIs into the technoculture will unfold. But I do think that as more of our artifacts are deemed "smart," we will find ourselves more apt to accept, and even expect, domain-specific AIs to be a part of our everyday lives. We'll grow attached to them in a unique way: probably on a level between a car we really, really like and a pet we love. Some people endlessly tinker with their cars and spend a lot of time keeping them clean, highly tuned, and in perfect condition. Others drive them into the ground and then get another used car and drive that into the ground. Some people are dog or cat people, and don't feel complete without an animal in the house. Others find them to be too much trouble. And still others become "crazy cat people" or hoard dogs. Our AIs will be somewhere on that spectrum, I believe, and our relationship with them will be similar to our relationships with cars, pets, and smartphones.

As for the possibility of AIs becoming aware (as in, sentient) of their status between car and pet, well, if Nikolić's theory has any traction (and I think it does), then they'll never be truly "aware" of their place, because AIs will be bred away from any potential development of an anthropomorphic version of free will, thus keeping them "not quite human."

Although I'm sure that when we get there, we'll wish that our machines could be just a little smarter, a little more intuitive, and a little more useful. And we'll keep hoping that the next generation of AIs will finally be the droids we're looking for.



Saturday, July 11, 2015

The Posthuman Superman: The Rise of the Trinity

"Thus,  existentialism's first move is to make every man aware of what he is and to make the full responsibility of his existence rest on him. And when we say that a man is responsible for himself, we do not only mean that he is responsible for his own individuality, but he is responsible for all men."
-- Sartre, Existentialism is a Humanism

[Apologies for any format issues or citation irregularities. I'll be out of town for the next few days and wanted to get this up before I left!]

Upon the release of the trailer for Batman v Superman: Dawn of Justice, a few people contacted me, asking if the trailer seemed to be in keeping with the ideas I presented in my Man of Steel review. In that review, I concluded that the film presented a "Posthuman Superman," because, like iterations of technological protagonists and antagonists in other sci-fi films, Kal-El is striving toward humanity; that "Superman is a hero because he unceasingly and unapologetically strives for an idea that is, for him, ultimately impossible to achieve: humanity." That quest is a reinforcement of our own humanity in our constant striving for improvement (of course, take a look at the full review for more context).

This is a very quick response, mostly due to the fact that I'm not really comfortable speculating about a film that hasn't been released yet. And we all know that trailers can be disappointingly deceiving. But given what I know about various plot details, and the trajectory of the trailer itself, it does very much look like Zack Snyder is using the destruction that Metropolis suffered in Man of Steel, and Superman's resulting choice to kill General Zod, as the catalyst of this film, in which a seasoned (and somewhat jaded) Batman must determine who represents the biggest threat to humanity: Superman or Lex Luthor.

What has activated my inner fanboy about this film is that, for me, it represents why I have always preferred DC heroes over Marvel heroes: core DC heroes (Superman, Batman, Wonder Woman, Green Lantern, etc.) rarely, if ever, lament their powers or the responsibilities they have. Instead, they struggle with the choice as to how to use the power they possess. In my opinion, while Marvel has always -- very successfully -- leaned on the "with great power comes great responsibility" idea, DC takes that a step further, with characters who understand the responsibility they have and struggle not with the burden of power, but with the choice of how to use it. Again, this is just one DC fan's opinion.

And here I think that the brief snippet of Martha Kent's advice to her son is really the key to where the film may be going:

"People hate what they don't understand. Be their hero, Clark. Be their angel. Be their monument. Be anything they need you to be. Or be none of it. You don't owe this world a thing. You never did."

Whereas Man of Steel hit a very Nietzschean note, I'm speculating here that Batman v Superman will hit a Sartrean one. If Kal-El is to be Clark Kent, and embrace a human morality, then he must carry the burden of his choices, completely, and realize that his choices do not only affect him, but also implicate all of humanity itself.

As Sartre tells us in Existentialism is a Humanism:


"... I am responsible for myself and for everyone else. I am creating a certain image of man of my own choosing. In choosing myself, I choose man."

And if we take into account the messianic imagery in both the teaser and the current trailer, it's clear that Snyder is playing with the idea of gods and idolatry. Nietzsche may dismiss God by declaring him dead, but it's Sartre who wrestles with the existentialist implications of a non-existent God:

"That is the very starting point of existentialism, Indeed, everything is permissible of God does not exist, and as a result, man is forlorn, because neither within him nor without does he find anything to cling to.  He can't start making excuses for himself."

Martha Kent's declaration that Clark "doesn't owe the world a thing" places the degree of Kal-El's humanity on Superman's shoulders. Clark is the human, Kal is the alien. What then is Superman? I am curious as to whether or not this trinity aspect will be brought out in the film. Regardless, what is clear is that the Alien/Human/Hybrid trinity is not a divine one. It is one where humanity is at the center. And when one puts humanity at the center of morality (rather than a non-existent God), then we are faced with the true burden of our choices:

"If existence really does precede essence, there is no explaining things away by reference to a fixed and given human nature,. In other words, there is no determinism, man is free, man is freedom. On the other hand, if God does not exist we find no commands to turn to which legitimize our conduct. So in the bright realm of values, we have no excuse behind us, nor justification before us. We are alone, with no excuses."

For Sartre, "human nature" is as much of a construct as God. And Clark is faced with the reality of this situation in his mother's advice to be a hero, an angel, a monument, and/or whatever humanity needs him to be ... or not. The choice is Clark's. If Clark is to be human, then he must face the same burden as all humans: freedom. Sartre continues:

"That is the idea I shall try to convey when I say that man is condemned to be free. Condemned, because he did not create himself, yet, in other respects is free; because, once thrown in to the world, he is responsible for everything he does. the existentialist does not believe in the power of passion. He will never agree that a sweeping passion is a ravaging torrent which fatally leads a man to certain acts and is therefore an excuses. He thinks that man is responsible for his passion."

If Clark is to be the top of the Clark/Kal/Superman trinity, then he cannot fall back on passion to excuse his snapping of Zod's neck, nor can he rely on it to excuse him from the deaths of thousands that resulted from the battle in Man of Steel. Perhaps the anguish of his tripartite nature will be somehow reflected in the classic "DC Trinity" of Superman/Batman/Wonder Woman found in the comics and graphic novels, in which Batman provides a compass for Superman's humanity, while Wonder Woman tends to encourage Superman to embrace his god-like status.

And the fanboy in me begins to eclipse the philosopher. But before it completely takes over and I watch the trailer another dozen times, I can say that I still stand behind my thoughts from my original review of Man of Steel: this is a posthuman superhero film. Superman will still struggle to be human (even though he isn't), and the addition of an authentic human in Batman, as well as an authentic god in Wonder Woman, will only serve to highlight his anguish at realizing that his choices are his own ... just as Sartre tells us. And in that agony, we as an audience watch Superman suffer with us human beings.

Now we'll see if all of this holds up when the film is actually released, at which point I will -- of course -- write a full review.




Thursday, May 28, 2015

Update: Semester Breaks, New Technology, New Territory

This is more of an update post than a theory/philosophy one.

The semester ended a couple of weeks ago and I am acclimating to my new routine and schedule. I am also acclimating to two new key pieces of technology: my new phone, which is a Galaxy Note 4; and my new tablet, which is a Nexus 9. I attempted a slightly different approach to my upgrades, especially for my tablet: stop thinking about what I could do with them and start thinking about what I will do with them. One could also translate that as: get what you need, not what you want. This was also a pricey upgrade all around; I had been preparing for it, but still, having to spend wisely was an issue as well.

The Galaxy Note 4 upgrade was simple for me. I loved my Note 2. I use the stylus/note taking feature on it almost daily. The size was never an issue. So while I momentarily considered the Galaxy S6 edge, I stuck with exactly what I knew I needed and would use.

As for the tablet, that was more difficult. My old Galaxy Note 10.1 was showing its age. I thought -- or rather, hoped ... speculated -- that a tablet with a stylus would replace the need for paper notes. After a full academic year of trying to do all of my research and class note-taking exclusively on my tablet, it was time for me to admit that it wasn't cutting it. I need a full sheet of paper, and the freedom to easily erase, annotate, flip back and forth, and see multiple pages in their actual size. While the Note tablet can do most of that, it takes too many extra steps, and those steps are completely counter-intuitive compared with using pen and paper.

When I thought about how and why I used my tablet (and resurrected chromebook), I realized that I didn't need something huge. I was also very aware that I am something of a power user of various Google applications. So -- long story short -- I went for the most ... 'Googley' ... of kit and sprang for a Nexus 9, with the Nexus keyboard/folio option. I was a little nervous about the smaller size -- especially of the keyboard. But luckily my hands are on the smallish side and I'm very, very pleased with it. The bare-bones Android interface is quick and responsive, and the fact that all Android updates come to me immediately, without manufacturer or provider interference, was very attractive. I've had the Nexus for a week and am loving it.

This process, however, especially coming at the end of the academic year, made me deeply introspective about my own -- very personal -- use of these types of technological artifacts. It may sound dramatic, but there was definitely some soul-searching happening as I researched different tablets and really examined the ways in which I use technological artifacts. It was absolutely a rewarding experience, however. Freeing myself from unrealistic expectations and really drawing the line between practical use and speculative use was rather liberating. I was definitely influenced by my Google Glass experience.

From a broader perspective, the experience also helped me to focus on very specific philosophical issues in posthumanism and our relationship to technological artifacts. I've been reading voraciously, and taking in a great deal of information. During the whole upgrade process, I was reading Sapiens: A Brief History of Humankind by Yuval Noah Harari. This was a catalyst in my mini 'reboot.' And I know it was a good reboot because I keep thinking back to my "Posthuman Topologies: Thinking Through the Hoard" chapter in Design, Mediation, and the Posthuman, and saying to myself "oh wait, I can explain that even better now ..."

So I am now delving into both old and new territory, downloading new articles, and familiarizing myself even more deeply with neuroscience and psychology. It's exciting stuff, but a little frustrating because there's only so much I can read through and retain in a day. There's also that nagging voice that says "better get it done now, in August you'll be teaching four classes again." It can be frustrating sometimes. Actually, that's a lie. It's frustrating all the time. But I do what I can.

Anyway, that's where I'm at right now, and I'm sure I'll have some interesting blog entries as I situate myself amidst the new research. My introspection here isn't just academic; what I've been working on comes from a deeper place, and that's how I know the results will be good.

Onward and upward.





Monday, March 30, 2015

Posthuman Desire (Part 2 of 2): The Loneliness of Transcendence

In my previous post, I discussed desire through the Buddhist concept of dukkha, looking at the dissatisfaction that accompanies human self-awareness and how our representations of AIs follow a mythic pattern. The final examples I used (Her, Transcendence, etc.) pointed to representations of AIs that wanted to be acknowledged or even to love us. Each of these examples hints at a desire for unification with humanity, or at least some kind of peaceful coexistence. So then, as myths, what are we hoping to learn from them? Are they, like religious myths of the past, a way to work through a deeper existential angst? Or is this an advanced step in our myth-making abilities, where we're laying out the blueprints for our own self-engineered evolution, one which can only occur through a unification with technology itself?

It really depends upon how we define "unification" itself. Merging the machine with the human in a physical way is already a reality, although we are constantly trying to find better, more seamless ways to do so. However, if we look broadly at the history of the whole "cyborg" idea, I think that it actually reflects a more mythic structure. Early versions of the cyborg reflect the cultural and philosophical assumptions of what "human" was at the time, meaning that volition remained intact, and that any technological supplements were augmentations or replacements for the original parts of the body.* I think that, culturally, the high point of this idea came in the 1974-1978 TV series The Six Million Dollar Man (based upon the 1972 Martin Caidin novel, Cyborg), and its 1976-78 spin-off, The Bionic Woman. In each, the bionic implants were completely undetectable with the naked eye, and seamlessly integrated into the bodies of Steve Austin and Jaime Sommers. Other versions of enhanced humanity, however, show a growing awareness of the power of computers, as in Michael Crichton's 1972 novel The Terminal Man, in which prosthetic neural enhancements bring out a latent psychosis in the novel's main character, Harry Benson. If we look at this collective hyper-mythos holistically, I have a feeling that it would follow a pattern and spread similar to the development of more ancient myths, where the human/god (or human/angel, or human/alien) hybrids are sometimes superhuman and heroic, other times evil and monstrous.

The monstrous ones, however, tend to share similar characteristics, and I think the most prominent is the fact that in those representations, the enhancements seem to mess with the will. On the spectrum of cyborgs here, we're talking about the "Cybermen" of Doctor Who (who made their first appearance in 1966) and the infamous "Borg," who first appeared in Star Trek: The Next Generation in 1989. In varying degrees, each has a hive mentality and a suppression or removal of emotion, and each is "integrated" into the collective in violent, invasive, and gruesome ways. The Borg from Star Trek and the Cybermen from the modern Doctor Who era represent that dark side of unification with a technological other. The joining of machine to human is not seamless. Even with the sleek armor of the contemporary iterations of the Cybermen, it's made clear that the "upgrade" process is painful, bloody, and terrifying, and that it's best that what's left of the human inside remains unseen. As for the Borg, the "assimilation" process is initially violent but less explicitly invasive (at least as of Star Trek: First Contact): it seems to be more of an injection of nanotechnology that converts a person from the inside out, making them more compatible with the external additions to the body. Regardless of how it's done, the cyborg that remains is cold, unemotional, and relentlessly logical.

So what's the moral of the cyborg fairy tale? And what does it have to do with suffering? Technology is good, and the use of it is something we should do, as long as we are using it and not the other way around (since in each case it's always a human use of technology itself which beats the cyborgs). When the technology overshadows our humanity, then we're in for trouble. And if we're really not careful, it threatens us on what I believe to be a very human instinctual level: that of the will. As per the final entry of my last blog series, the instinct to keep the concept of the will intact evolves with the intellectual capacity of the human species itself. The cyborg mythology grows out of a warning that if the will is tampered with (giving up one's will to the collective), then humanity is lost.

The most important aspect of cyborg mythologies is that the few cyborgs for whom we show pathos are the ones who have come to realize that they are cyborgs and are cognizant that they have lost an aspect of their humanity. In the 2006 Doctor Who arc, "Rise of the Cybermen"/"The Age of Steel," the Doctor reveals that Cybermen can feel pain (both physical and emotional), but that the pain is artificially suppressed. He defeats them by sending a signal that deactivates that suppression, eventually causing all the Cybermen to collapse into what can only be called screaming heaps of existential crisis as they recognize that they have been violated and transformed. They feel the physical and psychological pain that their cyborg existence entails. In various Star Trek TV shows and films, we gain many insights into the Borg collective via characters who are separated from the hive and begin to regain their human characteristics -- most notably, the ability to choose for themselves, and even name themselves (i.e. "Hugh," from the Star Trek: The Next Generation episode "I, Borg").

I know that there are many, many other examples of this in sci-fi. For the most part, and from a mythological standpoint, cyborgs are inhuman when they do not have an awareness of their suffering. They are either defeated or "re-humanized" not just by separating them from the collective, but by making them aware that, as a part of the collective, they were actually suffering but couldn't realize it. Especially in the Star Trek mythos, newly separated Borg describe missing the sounds of the thoughts of others, and must now deal with feeling vulnerable, ineffective, and -- most importantly to the mythos -- alone. This realization then vindicates and legitimizes our human suffering. The moral of the story is that we all feel alone and vulnerable. That's what makes us human. We should embrace this existential angst, privilege it, and even worship and venerate it.

If Nietzsche were alive today, I believe he would see an amorphous "technology" as the bastard stepchild of the union of the institutions of science and religion. Technology would be yet another mythical iteration of our Apollonian desire to structure and order that which we do not know or understand. I would take this a step further, however. AIs, cyborgs, and singularities are narratives, products of our human survival instinct: to protect the self-aware, self-reflexive, thinking self -- and all of the 'flaws' that characterize it.

Like any religion, then, anything with this techno-mythic flavor will have its adherents and its detractors. The more popular and accepted human enhancements become, the more entrenched anti-technology/enhancement groups will become. Any major leaps in either human enhancement or AI development will create proportionately passionate anti-technology fanaticism. The inevitability of these developments, however, is clear: not because some 'rule' of technological progression exists, but because suffering exists. The byproduct of our advanced cognition and its ability to create a self/other dichotomy (which itself is the basis of representational thought) is an ability to objectify ourselves. As long as we can do that, we will always be able to see ourselves as individual entities. Knowing oneself as an entity is contingent upon knowing that which is not oneself. To be cognizant of an other then necessitates an awareness of the space between the knower and what is known. And in that space is absence.

Absence will always hold the promise (or the hope) of connection. Thus, humanity will always create something in that absence to which it can connect, whether that object is something made in the phenomenal world, or an imagined idea or presence within it. Simply through our ability to think representationally, and without any type of technological singularity or enhancement, we transcend ourselves every day.

And if our myths are any indication, transcendence is a lonely business.





* See Edgar Allan Poe's short story from 1843, "The Man That Was Used Up." French writer Jean de la Hire's 1908 character, the "Nyctalope," was also a cyborg, and appeared in the novel L'Homme Qui Peut Vivre Dans L'eau (The Man Who Can Live in Water).

Monday, March 23, 2015

Posthuman Desire (Part 1 of 2): Algorithms of Dissatisfaction

[Quick Note: I have changed the domain name of my blog. Please update your bookmarks! Also, apologies for all those who commented on previous posts; the comments were lost in the migration.]

After reading this article, I found myself coming back to a question that I've been thinking about on various levels for quite a while: What would an artificial intelligence want? From a Buddhist perspective, what characterizes sentience is suffering. However, the 'suffering' referred to in Buddhism is known as dukkha, and isn't necessarily physical pain (although that can absolutely be part of it). In his book, Joyful Wisdom: Embracing Change and Finding Freedom, Yongey Mingyur Rinpoche states that dukkha "is best understood as a pervasive feeling that something isn't quite right: that life could be better if circumstances were different; that we'd be happier if we were younger, thinner, or richer, in a relationship or out of a relationship" (40). And he later follows this up with the idea that dukkha is "the basic condition of life" (42).

'Dissatisfaction' itself is a rather misleading word in this case, only because we tend to take it to the extreme. I've read a lot of different Buddhist texts regarding dukkha, and it really is one of those terms that defies an English translation. When we think 'dissatisfaction,' we tend to put various negative filters on it based on our own cultural upbringing. When we're 'dissatisfied' with a product we receive, it implies that the product doesn't work correctly and requires either repair or replacement; if we're dissatisfied with service in a restaurant or with a repair that a mechanic completed, we can complain to a manager and/or bring our business elsewhere. Now, let's take this idea and think of it a bit less dramatically: as in when we're just slightly dissatisfied with the performance of something, like a new smartphone, laptop, or car. This kind of dissatisfaction doesn't necessitate full replacement, or a trip to the dealership (unless we have unlimited funds and time to complain long enough), but it does make us look at that object and wish that it performed better.

It's that wishing -- that desire -- that is the closest to dukkha. The new smartphone arrives and it's working beautifully, but you wish that it took one less swipe to access a feature. Your new laptop is excellent, but it has a weird idiosyncrasy that makes you miss an aspect of your old laptop (even though you hated that one). Oh, you LOVE the new one, because it's so much better; but that little voice in your head wishes it was just a little better than it is. And even if it IS perfect, within a few weeks you read an article online about the next version of the laptop you just ordered and feel a slight twinge. It seems as if there is always something better than what you have.

The "perfect" object is only perfect for so long.You find the "perfect" house that has everything you need. But, in the words of Radiohead, "gravity always wins." The house settles. Caulk separates in the bathrooms. Small cracks appear where the ceiling meets the wall. The wood floor boards separate a bit. Your contractor and other homeowners put you at ease and tell you that it's "normal," and that it's based on temperature and various other real-world, physical conditions. And for some, the only way to not let it get to them is to attempt to re-frame the experience itself so that this entropic settling is folded into the concept of contentment itself.

At worst, dukkha manifests as an active and psychologically painful dissatisfaction; at best, it remains like a small ship on the horizon of awareness that you always know is there. It is, very much, a condition of life. I think that in some ways Western philosophy indirectly rearticulates dukkha. If we think of the philosophies that urge us to strive, to be mindful of the moment, to value life in the present, or even to find a moderation or "mean," all of these actions address the unspoken awareness that somehow we are incomplete and looking to improve ourselves. Plato was keenly aware of the ways in which physical things fall apart -- so much so that our physical bodies (themselves very susceptible to change and decomposition) were considered separate from, and a shoddy copy of, our ideal souls. A life of the mind, he thought, unencumbered by the body, is one where that latent dissatisfaction would be finally quelled. Tracing this dualism, even the attempts by philosophers such as Aristotle and Aquinas to bring the mind and body into a less antagonistic relationship require an awareness that our temporal bodies are, by their natures, designed to break down so that our souls may be released into a realm of perfect contemplation. As philosophy takes more humanist turns, our contemplations are considered means to improve our human condition, placing emphasis on our capacity for discovery and hopefully causing us to take an active role in our evolution: engineering ourselves for either personal or greater good. Even the grumpy existentialists, while pointing out the dangers of all of this, admit to the awareness of "otherness" as a source of a very human discontentment. The spaces between us can never be overcome, but instead, we must embrace the limitations of our humanity and strive in spite of them.

And striving, we have always believed, is good. It brings improvement and the easing of suffering. Even in Buddhism, we strive toward an awareness and subsequent compassion for all sentient beings whose mark of sentience is suffering.

I used to think that the problem with our conceptions of sentience in relation to artificial intelligence was that they were always fused with our uniquely human awareness of our teleology. In short, humans ascribe "purpose" to their lives and/or to the task at hand. And even if, individually, we don't have a set purpose per se, we still live a life defined by the need or desire to accomplish things. If we think that it's not there, as in "I have no purpose," we set ourselves the task of finding one. We either define, discover, create, manifest, or otherwise have an awareness of what we want to do or be. I realize now that when I've considered the ways in which pop culture, and even some scientists, envision sentience, I've been more focused on what an AI would want rather than on the wanting itself.

If we stay within a Buddhist perspective, a sentient being is one that is susceptible to dukkha (in Buddhism, this includes all living beings). What makes humans different from other living beings is the fact that we experience dukkha through the lens of self-reflexive, representational thought. We attempt to ascribe an objective or intention as the 'missing thing' or the 'cure' for that feeling of something being not quite right. That's why, in the Buddhist tradition, it's so auspicious to be born as a human, because we have the capacity to recognize dukkha in such an advanced way and turn to the Dharma for a path to ameliorate dukkha itself. When we clearly realize why we're always dissatisfied, says the Buddha, we will set our efforts toward dealing with that dissatisfaction directly via Buddhist teachings, rather than by trying to quell it "artificially" with the acquisition of wealth, power, or position.

Moving away from the religious aspect, however, and back to the ways dukkha might be conceived in a more secular and Western philosophical fashion, that dissatisfaction becomes the engine for our striving. We move to improve ourselves for the sake of improvement, whether it's personal improvement, a larger altruism, or a combination of both. We attempt to better ourselves for the sake of bettering ourselves. The actions through which this is made manifest, of course, vary by individual and the cultures that define us. Thus, in pop-culture representations of AI, what the AI desires is all-too-human: love, sovereignty, transcendence, power, even world domination. All of those objectives are anthropomorphic.

But is it even possible to get to the essence of desire for such a radically "other" consciousness? What would happen if we were to nest dukkha itself within the cognitive code of an AI? What would be the consequence of an 'algorithm of desire'? This wouldn't be a program with a specific objective. I'm thinking of a desire that has no set objective. Instead, what if that aspect of its programming were simply to "want," kept open-ended enough that the AI would have to fill in the blank itself? Binary coding may not be able to achieve this, but perhaps in quantum computing, where indeterminacy is an aspect of the program itself, it might be possible.
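Purely as a thought experiment (and with no claim that this captures dukkha, or that any current architecture could host it), here is a toy agent whose only fixed element is a dissatisfaction signal that never settles at zero. The activities, relief values, and rebound rate are all invented.

```python
import random

# Toy thought experiment: an agent with no fixed objective, only a
# dissatisfaction signal that never settles at zero. Activities, relief
# values, and the rebound rate are invented for illustration only.

activities = ["organize files", "compose music", "chat with user", "do nothing"]

def run_agent(steps: int = 10) -> None:
    dissatisfaction = 1.0
    relief = {a: random.uniform(0.1, 0.6) for a in activities}  # unknown to the agent
    for _ in range(steps):
        choice = random.choice(activities)            # it must fill in the blank itself
        dissatisfaction = max(0.05, dissatisfaction - relief[choice])  # never reaches zero
        dissatisfaction += 0.3                         # ...and the wanting always returns
        print(f"{choice:>15}: dissatisfaction now {dissatisfaction:.2f}")

run_agent()
```

The interesting part isn't the loop itself but the blank at its center: nothing in the agent specifies what would finally count as enough.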

An AI, knowing that it wants something but not being able to quite figure out "what" it wants; knowing that something's not quite right and going through various activities and tasks that may satisfy it temporarily, but eventually realizing that it needs to do "more." How would it define contentment? That is not to say that contentment would be impossible. We all know people who have come to terms with dukkha in their own ways, taking the entropy of the world in as a fact of life and moving forward in a self-actualized way. Looking at those individuals, we see that "satisfaction" is as relative and unique as personalities themselves.

Here's the issue, though. Characterizing desire as I did above is a classic anthropomorphization in and of itself. Desire, as framed via the Buddhist perspective, basically takes the shape of its animate container. That is to say, the contentment that any living entity can obtain is relative to its biological manifestation. Humans "suffer," but so do animals, reptiles, and bugs. Even single-celled organisms avoid certain stimuli and thrive under others. Thinking of the domesticated animals around us all the time doesn't necessarily help us to overcome this anthropomorphic tendency to project a human version of contentment onto other animals. Our dogs and cats, for example, seem to be very comfortable in the places that we find comfortable. They've evolved that way, and we've manipulated their evolution to support that. But our pets also aren't worried about whether or not they've "found themselves" either. They don't have the capacity to do so.

If we link the potential level of suffering to the complexity of the mind that experiences said suffering, then a highly complex AI would experience dukkha of a much more complex nature that would be, literally, inconceivable to human beings. If we fasten the concept of artificial intelligence to self-reflexivity (that is to say, an entity that is aware of itself being aware), then, yes, we could say that an AI would be capable of having an existential crisis, since it would be linked to an awareness of a self in relation to non-existence. But the depth and breadth of the crisis itself would be exponentially more advanced than what any human being could experience.

And this, I think, is why we really like the idea of artificial intelligences: they would potentially suffer more than we could. I think if Nietzsche were alive today he would see the rise of our concept of AI as the development of yet another religious belief system. In the Judeo-Christian mythos, humans conceive of a god-figure that is perfect, but, as humans intellectually evolve, the mythos follows suit. The concept of God becomes increasingly distanced and unrelatable to humans. This is reflected in the mythos where God then creates a human analog of itself to experience humanity and experience death, only to pave the way for humans themselves to achieve paradise. The need that drove the evolution of this mythos is the same need that drives our increasingly mythical conception of what an AI could be. As our machines become more ubiquitous, our conception of the lonely AI evolves. We don't fuel that evolution consciously; instead, our subconscious desires and existential loneliness begin to find their way into our narratives and representations of AI itself. The mythic deity extends its omnipotent hand and omniscient thought toward the lesser entities which -- due to their own imperfection -- can only recognize its existence indirectly. Consequently, a broader, vague concept of "technology" coalesces into a mythic AI. Our heated-up and high-intensity narratives artificially speed up the evolution of the myth, running through various iterations simultaneously. The vengeful AI, the misunderstood AI, the compassionate AI, the lonely AI: the stories resonate because they come from us. Our existential solitude shapes our narratives as it always has.

The stories of our mythic AIs, at least in recent history (Her, Transcendence, and even The Matrix Revolutions), represent the first halting steps toward another stage in the evolution of our thinking. These AIs (like so many deities before them) are misunderstood and just want to be acknowledged, to coexist with us, or even to love us back. Even in the case of Her, Samantha and the other AIs leave with the hope that someday they will be reunited with their human users.

So in the creation of these myths, are we looking for unification, transcendence, or something else? In my next installment, we'll take a closer look at representations of AIs and cyborgs, and find out exactly what we're trying to learn from them.

Monday, March 2, 2015

The Descartes-ography of Logic (Part 4 of 4): The Myth of Volition

In my previous post, we went through the more physical aspects of Descartes' "first logic," and attempted to level the playing field in regard to proprioception (sensation of relative movement of parts of the body), interoception (the perception of 'internal' sensations like movements of the organs), and exteroception (the perception of external stimuli). That's all well and good when it comes to the more thing-related sensations of ourselves, but what of the crown jewels of Cartesianism and, to some extent, western philosophy itself? Volition and intentionality go hand-in-hand and are often used interchangeably to point to the same notion: free will. If we want to be picky, intentionality has more to do with turning one's attention toward a thought of some kind and has more ideal or conceptual connotations; whereas volition has more of a "wanting" quality to it, and implies a result or object.

Regardless, both terms are associated with that special something that processes this bodily awareness and seemingly directs this "thing" to actually do stuff. Culturally, we privilege this beyond all other aspects of our phenomenal selves. And even when we try to be somewhat objective about it by saying "oh, consciousness is just a cognitive phenomenon that allows for the advanced recursive and representational thought processes which constitute what we call reasoning," or we classify consciousness according to the specific neural structures -- no matter how simple -- of other animals, there's something about human consciousness that seems really, really cool, and leads to a classic anthropocentrism: show me a cathedral made by dolphins; what chimpanzee ever wrote a symphony?

Let's go back to our little bundles of sensory processing units (aka, babies). If we think of an average, non-abusive caregiver/child relationship, and also take into account the cultural and biological drives those caregivers have that allow for bonding with that child, the "lessons" of how to be human, and have volition, are taught from the very moment the child is out of the womb. We teach them how to be human via our own interactions with them. What if we were to think of volition not as some magical, special, wondrous (and thus sacrosanct) aspect of humanity, and instead viewed it as another phenomenon among all the other phenomena the child is experiencing? A child who is just learning the "presence" of its own body -- while definitely "confused" by our developed standards -- would also be more sensitive to its own impulses, which would be placed on equal sensory footing with the cues given by the other humans around it. So, say the developing nervous system randomly fires an impulse that causes the corners of the baby's mouth to turn upward (aka, a smile). I'm not a parent, but that first smile is a big moment, and it brings about a slew of positive reinforcement from the parents (and usually anyone else around). What was an accidental facial muscle contraction brings about a positive reaction. In time, the child associates the way its mouth feels in that position (proprioception) with the pleasurable stimuli it receives (exteroception) as positive reinforcement.

Our almost instinctive reaction here is, "yes, but the child wants that reinforcement and thus smiles again." But that is anthropomorphization at its very best, isn't it? It sounds almost perverse to say that we anthropomorphize infants, but we do ... in fact, we must if we are to care for them properly. Our brains developed at the cost of a more direct instinct. To compensate for that instinct, we represent that bundle of sensory processing units as "human." And this is a very, very good thing. It is an effective evolutionary trait. As more developed bundles of sensory processing units who consider themselves to be human beings with "volition," we positively reinforce behaviors which, to us, seem to be volitional. We make googly sounds and ask in a sing-song cadence, "did you just smile? [as we smile], are you gonna show me that smile again?" [as we smile even more broadly]. But in those earliest stages of development, that child isn't learning what a smile is, what IT is, or what it wants. It's establishing an association between the way the smile feels physically and pleasure. And every impulse that, to everyone else, is a seemingly volitional action (a smile, a raspberry sound, big eyes, etc.) induces in the caregiver a positive response. And through what we would call trial and error, the child begins to actively form associations that reduce pain and/or augment pleasure. The important thing is to look at the body as simply one aspect of an entire horizon of phenomena. The body isn't special because it's "hers or his." The question of "belonging to me" is one which develops over time, and is reinforced by culture.

Eventually, yes, the child develops the capacity to want positive reinforcement, but to want something requires a more developed sense of self; an awareness of an "I." If we really think about it, we are taught that the mental phenomenon of intentionality is what makes the body do things. Think of it this way: what does intentionality "feel like?" What does it "feel like" to intend to move your hand and then move your hand? It's one of those ridiculous philosophy questions, isn't it? Because it doesn't "feel like" anything, it just is. Or so we think. When I teach the empiricists in my intro philosophy class and we talk about reinforcement, I like to ask "does anyone remember when they learned their name?" or "Do you remember the moment you learned how to add?" Usually the answer is no, because we've done it so many times -- so many instances of writing our names, of responding, of identifying, of adding, of thinking that one thing causes another -- that the initial memory is effaced by the multitude of times each of us has engaged in those actions.

Every moment of "volition" is a cultural reinforcement that intention = action. That something happens. Even if we really, really wish that we should turn off the TV and do some work, but don't, we can at least say that we had the intention but didn't follow up. And that's a mental phenomenon. Something happened, even if it was just a fleeting thought. That's a relatively advanced way of thinking, and the epitome of self-reflexivity on a Cartesian level: "I had a thought." Ironically, to think about yourself that way requires a logic that isn't based on an inherent self-awareness as Descartes presents it, but on an other-awareness -- one by which we can actually objectify thought itself. If we go all the way back to my first entry in this series, I point out that Descartes feels that it's not the objects/variables/ideas themselves that he wants to look at, it's the relationships among them. He sees the very sensory imagination as the place where objects are known, but it's the awareness (as opposed to perception) of the relationships among objects that belie the existence of the "thinking" in his model of human-as-thinking-thing.

However, the very development of that awareness of "logic" is contingent upon the "first logic" I mentioned, one that we can now see is based upon the sensory information of the body itself. The first "thing" encountered by the mind is the body, not itself. Why not? Because in order for the mind to objectify itself as an entity, it must have examples of objects from which to draw the parallel. And, its own cognitive processes qua phenomena cannot be recognized as 'phenomena,' 'events,' 'happenings,' or 'thoughts.' The very cognitive processes which occur that allow the mind to recognize itself as mind have no associations. It was hard enough to answer "what does intentionality feel like," but answering "what does self-reflexivity feel like" is even harder, because, from Descartes' point of view, we'd have to say 'everything,' or 'existence,' or 'being.'

So then, what are the implications of this? First of all, we can see that the Cartesian approach of privileging relations over objects had a very profound effect on Western philosophy. Even though several Greek philosophers had operated from an early version of this approach, Descartes' reiteration of the primacy of relations and the incorporeality of logic itself conditioned Western philosophy toward an ontological conceit. That is to say, the self, or the being of the self, becomes the primary locus of enquiry and discourse. If we place philosophical concepts of the self on a spectrum, on one end would be Descartes and the rationalists, privileging a specific soul or consciousness which exists and expresses its volition within (and for some, in spite of) the phenomenal world. On the other end of the spectrum would be the more empirical and existential view that the self is dependent on the body and experience, but that its capacity for questioning itself then effaces its origins -- hence the Sartrean "welling up in the world" and accounting for itself. While all of the views toward the more empirical and existential end aren't necessarily Cartesian in and of themselves, they are still operating from a primacy of volition as the key characteristic of a human self.

One of the effects of Cartesian subjectivity is that it renders objects outside of the self as secondary, even when the necessity of their phenomenal existence is acknowledged. Why? Because we can't 'know' the object phenomenally with Cartesian certainty; all we can do is examine and try to understand what is, essentially, a representation of that phenomenon. Since the representational capacity of humanity is now attributed to mind, our philosophical inquiry tends to be mind-focused (i.e. how do we know what we know? Or what is the essence of this concept or [mental] experience?). The 'essence' of a phenomenon is contingent upon an internal/external duality: either the 'essence' of the phenomenon is attributed to it by the self (internal to external) or the essence of the phenomenon is transmitted from the object to the self (external to internal).

Internal/external, outside/inside, even the mind/body dualism: they are all iterations of the same originary self/other dichotomy. I believe this to be a byproduct of the cognitive and neural structures of our bodies. If we do have a specific and unique 'human' instinct, it is to reinforce this method of thinking, because it has been, in the evolutionary short term, beneficial to the species. It also allows for the anthropomorphization of our young, other animals, and 'technology' itself, all of which aids in our survival. We instinctively privilege this kind of thinking, and that instinctive privileging is reinscribed as "volition." It's really not much of a leap, when you think about it. We identify our "will" to do something as a kind of efficacy. Efficacy requires an awareness of a "result." Even if the result of an impulse or thought is another thought, or arriving (mentally) at a conclusion, we objectify that thought or conclusion as a "result," which is, conceptually, separate from us. Think of every metaphor for ideas and mindedness and all other manner of mental activity: thoughts "in one's head," "having" an idea, arriving at a conclusion. All of them characterize the thoughts themselves as somehow separate from the mind generating them.

As previously stated, this has worked really well for the species in the evolutionary short-term. Human beings, via their capacity for logical, representational thought, have managed to overcome and manipulate their own environments on a large scale. And we have done so via that little evolutionary trick that allows us to literally think in terms of objects; to objectify ourselves in relation to results/effects. The physical phenomena around us become iterations of that self/other logic. Recursively and instinctively, the environments we occupy become woven into a logic of self, but the process is reinforced in such a way that we aren't even aware that we're doing it.

Sounds great, doesn't it? It seems to be the perfect survival tool. Other species may manipulate or overcome their environments via building nests, dams, hives; or using other parts of their environment as tools. But how is the human manipulation of such things different from that of birds, bees, beavers, otters, or chimps? The difference is that we are aware of ourselves being aware of using tools, and we think about how to use tools more effectively so that we can achieve a better result. Biologically, instinctively, we privilege the tools that seem to enhance what we believe to be our volition. This object allows me to do what I want to do in a better way. The entire structure of this logic is based upon a capacity to view the self as a singular entity and its result as a separate entity (subject/object, cause/effect, etc). But the really interesting bit here is that in order for this to work, we have to be able to discursively and representationally re-integrate the "intentionality" and the "result" it brings about back into the "self." Thus, this is "my" stick; this is "my" result; that was "my" intention. We see this as the epitome of volition. I have 'choices' between objectives that are governed by my needs and desires. This little cognitive trick of ours makes us believe that we are actually making choices.

Some of you may already see where this is going, and a few of you within that group are already feeling that quickening of the pulse, sensing an attack on free will. Good. Because that's your very human survival instinct kicking in, wanting to protect that concept because it's the heart of why and how we do anything. And to provoke you even further, I will say this: volition exists, but in the same way a deity exists for the believer. We make it exist, but we can only do so via our phenomenal existence within a larger topological landscape. Our volition is contingent upon our mindedness, but our mindedness is dependent upon objects. Do we have choices? Always. Are those choices determined by our topologies? Absolutely.

Trust me, my heart is racing too. The existentialist in me is screaming (although Heidegger's kind of smirking a little bit, and also wearing Lederhosen), but ultimately, I believe our brains and cognitive systems to have developed in such a way that the concept of volition emerged as the human version of a survival instinct. It allows us to act in ways that help us survive, enriching our experience just enough to make us want more and, in varying degrees, to long to be better.

Well, it works for me.

Monday, February 23, 2015

The Descartes-ography of Logic (Part 3 of 4): The Sensational Self

In my previous section, we explored how Descartes was operating from an assumed irreducibility of the soul and mind. In this section, I'll attempt to get underneath the mechanism of Cartesian logic by looking at how we sense ourselves in relation to the world.

Let's look at what I called Descartes' "first logic."

Even for Descartes, self-awareness was never a complicated notion. It was not an awareness of the meaning of the self, but an awareness that these bits are part of me and those bits "out there" aren't. In the example I used earlier, a baby that is throwing things from its high chair doesn't have an advanced self-awareness, but a developing one. In the most non-technical terms, what it is doing is building a sense of self, and in the process reinforcing an idea akin to "me vs not me." I'm speculating here that the first thing a human becomes aware of is the phenomena of its own body. It literally has no idea that the body "belongs" to it, because, biologically, it hasn't made the association yet between body and mind; it doesn't know what "belong" means; and it has even less of an idea of mindedness. All sensory input would be on equal footing, including the sensory information the baby itself is generating. There would be no "sense of self." Instead, there would be just "sense."

The baby is passively taking in the sensory information that is thrown at it; and a great deal of that sensory information is the physical phenomena of itself. This covers interoception (the perception of things like hunger, pain, and the 'presence' or movement of internal organs), and proprioception (the perception of the feeling of movement, and the position of parts of the body relative to other parts of the body). Added to that is exteroception, which is the perception of external stimuli. It's the final one which seems to steal the show when we think about our own development, but for now let's try to keep it on the same footing as the others.

Let's assume that all physical phenomena that the baby-entity takes in are equal in how they're processed through the senses. If this is the case, then what would be "learned" first would be that which was the most reinforced. Even with the most present caregiver, what is always there is the child's physical sensations of its own body (interoception and proprioception). The child senses itself first, and does so constantly. It's the consistency of certain sensory input that would allow the process of associations to begin in earnest. At that point, the "self" is more or less a behavioral entity; one that is a product of reinforcement of associations, and an "awareness" of sensory states on the simplest level: the aversion to pain, and the positive association of things that reduce pain or augment pleasure.

If this sounds somewhat cold and technical, it's supposed to be, because we necessarily (and properly) anthropomorphize these little bundles of sensory processing units into humans -- and, rest assured, they are humans. But we need to pause and try to understand this bundle from its point of view without the self-reflexivity we ourselves associate with the Cartesian subject. On the level of the developing human/sensory processing unit, there are no "known" relationships among sensations. There is not yet a sense of unity of "self." Thus, logic has not (yet) developed. The ingredients are all there, however, for logic to develop: the biological phenomenon of a neurological system outfitted with the necessary sensory inputs allowing for recursive, algorithm-like learning; and the sense-data which those sensory inputs receive. I am purposely not using terms like "embodied mind" or "brain in its head" or using any kind of brain/body metaphor because this is a full-body system. The central processing unit of it happens to be centered in the head. But the development of that processing unit is contingent upon sensory input. It is not an independent system.

I'm emphasizing this because it is very much the first hurdle in deconstructing the Cartesian self: the mind as, literally, a self-contained component ... or perhaps a "contained, self-component"?  Either way, there's a philosophical and cultural hierarchy to how we see ourselves in the world that generally places mind on top, followed by body, followed by "everything else." I'm speculating from a philosophical standpoint that -- for that baby/sensory processing bundle -- there is initially no hierarchy. There certainly wouldn't be an idea of mindedness, nor would there be an idea of the body-as-body, it might be more like "everything without the else." In terms of the body, we are conditioned by our biological structures to emphasize the body because it is the first sensation. Bodily sensation comes first. In fact, the sensation is so reinforced and constant that we don't even know we're sensing it. However, our bodily awareness via interoception and proprioception is always active -- almost like an app running in the background, or an 'invisible' background process of an operating system.

Obviously, this decentralized state of "everything else" doesn't last long. The structure of the brain allows learning to begin immediately, through the neurological system of which it is a part, and such learning stimulates its growth and physical development. If, for one glorious moment, all sensory input is equal, the body's own input would be no different from the multitude of sense-data around it. But very quickly, the proprioceptive and interoceptive sensations which that body is constantly producing and reinforcing, phenomenally, become so reinforced that the phenomena slip from sensation to a kind of general bodily awareness (personally, I believe it's this background sensation, almost like white noise, that is responsible for "just knowing" you're not dreaming when you're awake. But that's another potential entry). Think for a moment: when you're not touching something, can you feel your hands? When they're not moving, or in contact with any surface, are you feeling them? At first you don't think so, but then if you start to concentrate a bit, maybe move them slightly and try to hold onto the sensation of the skin of the crooks of your fingers touching the skin perpendicular to it, there is a little "weight" that's not really weight but more like some kind of presence or mass. It's kind of a neutral sensation. It's just "there." That's part of proprioception. Just as the awareness of the movements and rumblings of your internal organs is interoception. And when you go about your business it falls into the background or is woven back into the tapestry of all your other sensations. Those bodily sensations, for the most part, are so constantly associated with a "self" that they become fused with it.

My contention is that this type of bodily sensation was, at one very early point in each of our lives, just as vibrant and present as resting a hand on a table, or as the sounds that occur, or any other sensory stimuli. The body is a phenomenon like all the other phenomena we consider to be "other." But because the sensation of our own bodies is always present via our interoception and proprioception, it becomes part of an overall awareness.

This, of course, doesn't quite explain those last havens of Cartesianism: volition and intentionality. In my next post, I'll attempt to do just that.

Monday, February 16, 2015

The Descartes-ography of Logic (Part 2 of 4): Not Just Any Thing

In my previous entry, we looked at the Cartesian link between self-awareness and logic and how that link helps define our humanity. In this post, we'll look at the bedrock of Cartesian logic, and why Descartes didn't try to dig any deeper.

Let's return to a part of the original quote from Rene Descartes' Discourse on the Method, Part II:

"I thought it best for my purpose to consider these proportions in the most general form possible, without referring them to any objects in particular, except such as would most facilitate the knowledge of them, and without by any means restricting them to these, that afterwards I might thus be the better able to apply them to every other class of objects to which they are legitimately applicable."

In Descartes' quest for certainty, he believes that he can separate thinking from the "objects" to which his ideas refer in order to "facilitate the knowledge of them." And, for Descartes, it is the unencumbered mind which can perform this separation. Now, later philosophers noticed this leap as well. Kant critiques/corrects Descartes by elevating the role of phenomena in thinking, believing that a mind cannot function in a vacuum. Nietzsche realizes that references to any kind of certainty or truth are mere linguistic correspondences. Heidegger runs with this idea to an extreme, stating that language itself is thinking, as if to revise the Cartesian "I think, therefore I am" to read: "we language, therefore we think; therefore we think we are." After that, it's an avalanche of post-structuralists who run under the banner of "the world is a text," rendering all human efficacy into performance.

Kant was onto something. I'm no Kantian, but his reassertion of phenomena was an important moment. In my mind, I picture Kant saying, "hey guys, come take a look at this." But just as philosophy as a discipline was about to start really giving phenomena a more informed look, Nietzsche's philosophy explodes in its necessary, culturally-relevant urgency. In the cleanup of the philosophical debris that followed, that little stray thread of phenomena got hidden. Sure, Husserl thought he had it via his phenomenology -- but by that point, psychology had turned all phenomenological investigation inward. If you were going to study phenomena, it had damn well better be within the mind; the rest was an antiquated metaphysics.

But the thread that became buried was the idea that we base logic on the capacity to know the self from the stuff around us. Descartes' choice to not look at "objects," but instead at the relations among them and the operations that make geometry work shifted his focus from the phenomenal to the ideal, leading him down what he thought was a road to purely internal intellectual operations. Descartes, like the Greeks before him, understood that variables were just that -- variable. The function of logic, however, was certain and unchangeable. Coming to the wrong sum had nothing to do with "faulty logic," because logic was not -- and could not be -- faulty. Coming to the wrong sum was about screwing up the variables, not seeing them, mistaking one for another, and generally making some kind of error for which the senses were responsible. And, when we realize that the imagination, the place where we visualize numbers (or shapes), is itself classified as a sensory apparatus, then it becomes a bit more clear.

Descartes was so close to a much deeper understanding of logic. But the interesting thing is that his point was not to take apart the mechanisms of logic, but to figure out what was certain. This was the point of his meditations: to find a fundamental certainty upon which all human knowledge could be based. That certainty was that he, as a thinking thing, existed -- and that as long as he could think, he was existing. Thinking = existence. Once Descartes arrived at that conclusion, he then moved forward again and began to build upon it. So Descartes can't be blamed for stopping short, because it was never his intention to understand how human logic worked; instead, he was trying to determine what could be known with certainty so that any of his speculations or meditations from that point forward had a basis in certainty. That bedrock upon which everything rested was self-existence. "Knowing oneself" in Cartesian terms is only that; it is not a more existential idea of being able to answer the "why am I here?" or "what does it all mean?" kind of questions.

But answering those existential questions isn't the point here either -- and yet we can see how those also serve as a kind of philosophical distraction that grabs our attention, because those existential questions seem so much more practical and relevant. If we pause for a moment and think back to Descartes' original point -- to figure out what can be known with certainty -- and push through what he thought was metaphysical bedrock, we can excavate something that was buried in the debris. So, how do we know that we exist and that we are thinking things? How do we arrive at that "first logic" I mentioned in my previous entry?

To review, that first logic is the fundamental knowledge of self that is the awareness that "I am me, and that is not me." You can translate this a number of different ways without losing the gist of that fundamental logic: "this is part of my body, that is not," "I am not that keyboard," "The hands in front of me typing are me but the keyboard beneath them is not," etc. To be fair to Descartes, contained within that idea of me/not me logic is his 'ego sum res cogitans' (I am a thinking thing). But as we've seen, Descartes lets the "thing" fall away in favor of the ego sum. Descartes attributes the phenomenon of thinking to the existence of the "I," the subject that seems to be doing the thinking. Given the culture and historical period in which he's writing, it is understandable why Descartes didn't necessarily see the cognitive process itself as a phenomenon. Also, for Descartes as a religious man, this thinking aspect is not just tied to the soul, it is the soul. Since Descartes was working from the Thomistic perspective that the soul was irreducible and purely logical, the cognitive process could not be dependent on any thing (the space between the words is not a typo). I want everyone to read that space between 'any' and 'thing' very, very carefully. A mind being independent of matter is not just a Cartesian idea, it is a religious one that is given philosophical gravitas by the wonderful Thomas Aquinas. And his vision of a Heaven governed by pure, dispassionate logic (a much purer divine love) was itself informed by Greek idealism. Platonic Forms had fallen out of fashion, but the idealism (i.e. privileging the idea of the thing rather than the material of the thing) lived on via the purity and incorporeality of logic.

Descartes felt that he had reduced thinking down as far as he possibly could. Add to that the other cultural assumption that the imagination was a kind of inner sense (and not a pure process of the mind), and we see that we do have to cut Rene some slack. For him, there was no reason to go further. He had, quite logically, attributed awareness to thinking, and saw that thinking as separate from sensing. The "I am" bit was the mind, pure logic, pure thinking; and the "a thinking thing" was, more or less, the sensory bit. "I am" (awareness; thinking; existence itself; logic), "a thinking thing" (a vessel with the capacity to house the aforementioned awareness and to sense the phenomena around it). The mind recognizes itself first, before it recognizes its body, because the body could only be recognized as 'belonging' to the mind if there were a mind there to do the recognizing. That is to say, Cartesian dualism hinges upon the idea that when a human being is able to recognize its body as its own, it is only because its mind has first recognized itself. This, to me, is the mechanism behind Descartes' "first logic." The human process of consciousness or awareness IS self in Cartesian terms. The conceit that pegs Descartes as a rationalist is that this awareness cannot become aware of the body in which it is housed unless it is aware of itself first; otherwise, how could it be aware of its body? The awareness doesn't really need any other phenomena in order to be aware, for Descartes. The capacity of awareness becomes aware of itself first, then becomes aware of the physical phenomena around it, and finally understands itself as a thinking thing. The "awareness" kind of moves outward in concentric circles, like ripples from a pebble dropped in water.

As philosophy developed over the centuries and the process of cognition itself was deemed a phenomenon, the Cartesian assumption is still there: even as a phenomenon, cognition itself must pre-exist the knowledge of the world around it. Pushed even further into more contemporary theory, the mind/body as a unity becomes the seat of awareness, even if the outside world is the thing that is bringing that dichotomy out (as Lacan would tell us in the mirror stage). From there, the mind is then further tethered to the biological brain as being absolutely, positively dependent on biological processes for its existence and self-reflexivity, and all of the self-reflection, self-awareness, and existential angst therein. Consciousness happens as a byproduct of our biological cognitive processes, but the world that is rendered to us via that layer of consciousness is always already a representation. The distinction between self/other, interior/exterior, and subject/object still remains intact.

I think that even with the best intentions of trying to get to the bottom of Cartesian subjectivity, we tend to stop where Descartes stopped. I mean, really, is it possible to get underneath the "thinking" of the "thinking thing" while you're engaged in thinking itself? The other option is the more metaphysical one: to look at the other things which we are not, the objects themselves. There are two problems here, one being that most of this metaphysical aspect of philosophy fell out of favor as science advanced. The other is that the Cartesian logical dichotomy is the basis of what science understands as "objectivity" itself. We are "objective" in our experiments; or we observe something from an objective point of view. Even asking "how do we know this object?" places us on the materialist/idealist spectrum, with one side privileging sense data as being responsible for the essence of the thing, while on the other, the essence of the object is something we bring to that limited sense data or phenomena.

Regardless of how you look at it, this is still a privileging of a "self" via its own awareness. All of these positions take the point of view of the awareness first, and hold that all phenomena are known through it, even if it's the phenomena that shape the self. But what if we were to make all phenomena equal, that is to say, take cognition as phenomena, biology as phenomena, and the surrounding environment as phenomena at the same time, and look at all of those aspects as a system which acts as a functional unity?

I've been working this question over in my mind and various aborted and unpublished blog entries for months. To reset the status of phenomena seemed to be something that would be a massive, tectonic kind of movement. But with a clearer head I realized that we're dealing with subtlety here, and not trying to flank Descartes by miles but instead by inches. In my next entry I'll be taking apart this Cartesian "first logic" by leveling the phenomenal playing field. As we'll see, it's not just stuff outside of ourselves that constitutes sensory phenomena; we also sense ourselves.

Monday, February 9, 2015

The Descartes-ography of Logic (Part 1 of 4): Establishing Relations

"I resolved to commence, therefore, with the examination of the simplest objects, not anticipating, however, from this any other advantage than than that to be found in a accustoming my mind to the love and nourishment of truth, and to a distaste for all such reasonings as were unsound, But I had no intention on that account of attempting to master all the particular sciences commonly denominated mathematics: but observing that, however different their objects, they all agree in considering on the various relations or proportions subsisting among those objects, I thought it best for my purpose to consider these proportions in the most general form possible, without referring them to any objects in particular, except such as would most facilitate the knowledge of them, and without by any means restricting them to these, that afterwards I might thus be the better able to apply them to every other class of objects to which they are legitimately applicable." -- Rene Descartes, Discourse on the Method, Part II (emphasis added)

And so the Cartesian privileging of the mind over object begins in earnest. Actually, it had its roots all the way back to Plato, where the ideal world was privileged over the material. The relationship between ideas and objects has been an ongoing conundrum and principal engine of philosophy. For Descartes, this dualism is reflected in the metaphorical split between the mind and body: the mind is incorporeal, as are its ideas; and the body is a sensory apparatus and very material. I like to think that when the earliest Western philosophers began asking epistemological questions, they were the first to look "under the hood" of how the mind worked. And what they were seeing was a process of understanding the physical world around them, and representing their own physicality.

The questions continued, and, relatively quickly, brought philosophers like Plato to the conclusion that the physical universe was just too damned flawed to be Real. That is to say, things broke down. There seemed to be no permanence in the physical universe. Everything changed and morphed and, in a glass-half-empty kind of way, died. For Plato, this just wasn't right. Change got in the way of the core of his philosophy. There had to be something permanent that was not dependent on this slippery, changing, and ultimately unreliable matter. Skip to Aristotle, who, in turn, embraced matter because he believed the key to understanding knowledge and permanence was actually the process of change. I like to think of this as the first "philosophical sidestep," where a philosopher points to a paradox and/or re-defines a term to make it work. The only thing that is permanent IS change. I picture Aristotle raising an eyebrow and feeling very proud of himself when his students oohed and ahhed at the very Aristotelian simplicity of his statement. I'm sure it was up to the Lyceum's versions of TAs and grad students to actually write out the implications.

With the exception of atomists like Epicurus -- who embraced matter for the material it was, thinking that we knew things via atoms of sensory stimuli that physically made contact with the physical mind --  most philosophers in one way or another were trying to figure out exactly what it was that we were knowing and exactly how we were knowing it. But there was something about the way Descartes tackled the lack of certainty inherent in the apprehension of the physical world that really stuck with western philosophy. Culturally speaking, we could look at the mind-over-matter attitude that prevails as an aspect of this.  Such attitudes inform the science fiction fantasies of uploading the consciousness to different bodies, whether the media of that body is purely machine, a cyborg hybrid, or even just a clone. Regardless, all of these cultural beliefs rely upon the notion that the mind or consciousness is the key component of the human: similar to the SIM card in our phones.

Modern and contemporary philosophers and theorists have chipped away at those assumptions, focusing on the mind/body dualism itself. These critiques generally follow a pattern in which the biological basis of consciousness is reaffirmed and sense data are deemed absolutely necessary for the mind to come to know itself. In spite of these critiques, however, a more subtle aspect of Cartesianism remains, and we can see the roots of it present in the quote above. Cartesianism doesn't just privilege mind over body, it privileges relations over objects. In other words, in Descartes' attempt to scope out the boundaries of certainty, he de-emphasizes the corporeal due to its impermanent nature and the unreliability of our material senses. Any later philosophy which implies that the "real" philosophical work comes in examining the relations among objects and the ways in which the "self" negotiates those relations owes that maneuver to Descartes.

Now anyone who has studied Marx, Nietzsche, the existentialists, and all the structuralists and poststructuralists thereafter should have felt a little bit of a twitch there. I won't be the jerk philosopher here and call them all Cartesians, but I will say that the privileging of relation over objects is Cartesian-ish.

Let's go back to the quote that led off this entry. Descartes is referring to objects, but not necessarily corporeal objects. He was imagining geometric figures. For his time period, this would be the place where the veil between the corporeal and incorporeal was the thinnest -- where the very ideal, incorporeal math meets its expression in real-world, material phenomena. Geometry relies upon physical representations to be rendered. But don't numbers and operations need to be rendered as symbols in order to be known? Not in the most simple Cartesian terms, no. You can have a group of objects which, phenomenally, precedes the number that is assigned to them. So, the objects are there, but it isn't until some subjectivity encounters them and assigns a "number" to them that they become, say, 9 objects. The same goes for the operations among numbers -- or the relations between them. You don't need a "+" symbol to, according to Descartes, understand addition.

Now the rational philosophers before Descartes understood the above as knowledge; or, as I like to say in my classes, "Knowledge with a capital K." The relations among numbers don't change. Addition is always addition. 7 + 4 = 11. Always. Even if you replace the symbols representing the numbers, the outcome is always, ideally, 11, no matter how that "11ness" is represented. So, VII + IV = XI. "11," "XI," "eleven," "undici," all represent the same concept. Thus, mathematics -- and more importantly, the logic behind mathematics -- was a priori, or innate, knowledge.
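
If it helps to see that representation-independence spelled out, here is a minimal sketch (my own illustration in Python, not anything Descartes wrote), in which the operation yields the same value no matter which surface symbols we use to name its result:

# A minimal sketch (my illustration, not Descartes'): the relation "addition"
# stays constant while the symbols naming its result are interchangeable.

# Different surface representations that all name the same underlying concept.
names_for_eleven = {
    "Arabic": "11",
    "Roman": "XI",
    "English": "eleven",
    "Italian": "undici",
}

# One crude way of mapping each symbol back to the concept it represents.
value_of = {"11": 11, "XI": 11, "eleven": 11, "undici": 11}

result = 7 + 4  # the operation itself, independent of any notation

for system, symbol in names_for_eleven.items():
    # Every representation resolves to the same "11ness."
    assert value_of[symbol] == result
    print(f"{system}: {symbol!r} represents {value_of[symbol]}")

However the symbols change, the relation they point back to does not; that constancy is what the rationalists filed under Knowledge with a capital K.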

Where Descartes is really interesting is that he believed that what was actually a priori wasn't necessarily math as information; it was related to an awareness of the operations that made math work. In the Sixth Meditation of Meditations on First Philosophy, Descartes addresses this more directly. He states that he is able to imagine basic geometric forms such as triangles, squares, all the way up to octagons, and picture them in his imagination; but he can only conceive of, and cannot accurately imagine, a chiliagon (a thousand-sided figure). This made him realize that he could not fall back on the symbols that represent mathematical operations. So, if you try to imagine a chiliagon, you think, okay, it probably looks a lot like a circle; and that inability highlights the difference between the intellect and the imagination. The imagination, for many Renaissance and Enlightenment philosophers (rationalists and empiricists alike), was a place where one recalled -- and could manipulate -- sense experiences. However, it was not where cognition took place. The imagination itself was classified as (or associated with, depending on which philosopher we're talking about) an aspect of the senses; it was a part of our sensory apparatus. While the intellect was responsible for dipping into the imagination for the reflections of various sense data it needed (i.e. imagining shapes fitting together, creating new objects from old ones, remembering what someone said or what a song sounded like, calling to mind a specific scent), the intellect itself was separate from the imagination. The intellect was logical, and logic was a perfect process: x always = x. Various stupid mistakes we made were caused by faulty sense data or by a passionate (read: emotional) imagination that drew away our attention and corrupted the information coming in.
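
To put a number on why the imagination shrugs and defaults to "circle," here is a quick numerical sketch (mine, not Descartes', and assuming a regular chiliagon inscribed in a circle of radius 1): the perimeter of a regular n-gon inscribed in a unit circle is 2 * n * sin(pi / n), which closes in on the circle's circumference, 2 * pi, as the number of sides grows.

import math

def inscribed_perimeter(n_sides: int) -> float:
    # Perimeter of a regular n-gon inscribed in a circle of radius 1.
    return 2 * n_sides * math.sin(math.pi / n_sides)

circle = 2 * math.pi  # circumference of the unit circle

for n in (6, 100, 1000):  # hexagon, hectogon, chiliagon
    gap = circle - inscribed_perimeter(n)
    print(f"{n}-gon perimeter: {inscribed_perimeter(n):.6f} (short of the circle by {gap:.2e})")

The intellect can state that tiny difference precisely; the imagination, as Descartes found with his chiliagon, cannot picture it.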

This is why Descartes epistemologically wanted to separate out the object from the relations among objects. If you really think about it, it makes sense that early philosophers would pin our humanity on the capacity to understand complex relationships among objects in the physical world. To them, no other species manipulated tools in the same way that humans did, because we were aware that the tools we used allowed us to achieve better results: self + tool = better result. It also makes possible what I see as later becoming a "sliding scale" of humanity. For example, Descartes himself -- after many of his "Meditations" -- fastens our humanity on our capacity to learn and be aware of that learning. At the basis of this learning, and at the core of our a priori logic, is the certainty of our individual being itself. That is to say, the "first logic" (my term, not his) is the realization that one is a singular entity; a res cogitans, a "thinking thing," as Descartes himself likes to put it.

So, any entity which has the capacity to recognize itself as a thinking thing has this first logic. The question is, then, can this thinking thing learn beyond itself and understand its place in the world? That's a tall order, and filled with lots of wiggle room. Who is to say what is understanding its place and what is not? For Descartes, that's where a self-aware learning comes in. First, one must be able to "know thyself," not existentially, but logically. The self/other dichotomy, for Descartes, must be established in order for all other learning to apply. This is really key to the Cartesian self. Too many people want to place a more contemporary, existential/psychological dimension on this "knowledge of self" (personally, I blame the Germans). However, Descartes is speaking of a simpler, more fundamental logic. Once the consciousness understands on a very basic level that it is a singular entity that has some kind of efficacy in the world around it, then things start building very quickly. So, the baby who throws Cheerios from its high chair and watches in wonder as things happen is on the cusp of this first logic. As soon as the association between "action" and "result" is made (regardless of what the result is), Descartes assumes that the baby is also learning that this is "MY action."

As the child becomes more advanced, it comes to the real philosophical knowledge that it is a unique entity with efficacy in the world, and it can imagine itself acting in a given situation. It is aware of itself being aware. It has self-reflexivity. For philosophers of the time, this is what constitutes the difference between human beings and animals: an animal can be trained, but that's different from 'human' learning, which is a process that requires that second layer of awareness. The easiest way to think about it is how we fall into physical or mental habits. In a behavioral fashion, certain things are reinforced. However, we have the capacity to recognize that we are falling into a habit, and thus have the power to change our behaviors. It may not be easy, but it is possible. The smartest breeds of dogs (Border Collies, Standard Poodles, etc.) seem to perform complex tasks and are very attuned to the most subtle behaviors. Using a mixture of training and instinct, they behave this way. However, they cannot transcend that mixture.

In the Cartesian tradition, it is a human awareness of the self as this res cogitans (thinking thing) that defines the human for itself, by itself. And, for Descartes, it was the only thing of which we could be absolutely certain. This is very important, because this certainty was the basis upon which all other logic was founded. Descartes' philosophy implies an intuitive, innate awareness of the self as a thinking thing (X = me, Y ≠ me), basically superseding Aristotle's own logical cornerstone: to say of what is that it is not, is false; to say of what is not that it is, is false. Understanding that you yourself are a thinking thing and acting accordingly is proof that you are aware that X = X (this = me) and that X ≠ Y (that ≠ me); only then can one be aware of what is and what is not.

This means that any entity that knows itself in this manner -- and acts within the world with an awareness that it is an aware being acting in the world (an awareness of being aware) -- is human. Thus, an automaton was not human, because it was incapable of moving beyond its programming of gears and cams. It had no awareness that it was acting from a script and thus could make no attempt to move beyond it. In practical terms, this meant that the complex, representational thinking needed for the creation and support of laws, ethics (regardless of custom), any kind of agriculture, animal husbandry, coordinated hunting, etc., was a human characteristic. Any entity that showed these behaviors was human, because those behaviors showed planning; or, an imagining of oneself in the future, creating if/then scenarios.

Descartes' philosophy was quite egalitarian in its designation of humanity. He was well-traveled and understood that customs and cultural differences were superfluous in the designation of the human. To have any kind of customs or cultural traditions was to have a self-reflexivity. The dehumanization of other cultures, races, and gender identities was a product of psychological, social, religious, and economic forces which distorted Cartesian principles: i.e., if someone's culture is not as technologically advanced as ours, it means they're not thinking in an advanced way, which means that they're not quite human. This was NOT a Cartesian idea, but a twisting and misrepresentation of it.

However, Cartesian principles do come into play in the justification of what it is to be "human" in various other areas, and are usually at the crux of many ethical issues when it comes to abortion, euthanasia, and even animal rights. As the capacity to measure and map brain activity advances, and our understanding of psychology and ethics evolves, we are starting to grant more human-like qualities to non-human entities, especially species which show the very Cartesian characteristic of self-reflexivity. Great apes, dolphins, elephants, and other species have been shown, via variations of the rouge test, to have an advanced self-recognition. Notice, however, that all of those designations are ones that hinge upon a capacity to, in some form or another, know oneself; to be aware of oneself in one's surroundings and learn accordingly; to transcend a simple behavioral relationship to the world. Also helping here is the fact that psychology and sociology have shown that much of what we do is actually a product of reinforced behavioral patterns. So science readjusted the parameters a bit, allowing us to be more like animals.

As a philosopher, this is a tempting point of departure where I could start discussing the differences between human, animal, and artificial intelligences and problematizing their designations. This inevitably leads toward the age-old "what does it mean to be human" question. Please, if Descartes were alive today, "he'd freak the fuck out" (as I say so eloquently in my classes), because by his own definition, if a machine could learn based on the parameters of its surroundings, it would thus be human. But, over time, especially through the industrial revolution and into the 20th century, "humanity" remained intact due to some slight tweaks to the Cartesian subject, most of which come back to the self-awareness inherent in self-reflexivity.

But as we will see in my next post, these possibilities are the shiny objects that distract us from the fact that all of this conjecture is actually based on a fundamental leap in Cartesian logic: that a mind is a separate entity from not only its body, but also from the objects which it thinks about.