Monday, March 30, 2015

Posthuman Desire (Part 2 of 2): The Loneliness of Transcendence

In my previous post, I discussed desire through the Buddhist concept of dukkha, looking at the dissatisfaction that accompanies human self-awareness and how our representations of AIs follow a mythic pattern. The final examples I used (Her, Transcendence, etc.) pointed to representations of AIs that wanted to be acknowledged or even to love us. Each of these examples hints at a desire for unification with humanity, or at least some kind of peaceful coexistence. So then, as myths, what are we hoping to learn from them? Are they, like religious myths of the past, a way to work through a deeper existential angst? Or is this an advanced step in our myth-making abilities, where we're laying out the blueprints for our own self-engineered evolution, one which can only occur through a unification with technology itself?

It really depends upon how we define "unification" itself. Merging the machine with the human in a physical way is already a reality, although we are constantly trying to find better, more seamless ways to do so. However, if we look broadly at the history of the whole "cyborg" idea, I think that it actually reflects a more mythic structure. Early versions of the cyborg reflect the cultural and philosophical assumptions of what "human" was at the time, meaning that volition remained intact, and that any technological supplements were augmentations or replacements of the original parts of the body.* I think that, culturally, the high point of this idea came in the 1974-1978 TV series, The Six Million Dollar Man (based upon the 1972 Martin Caidin novel, Cyborg), and its 1976-78 spin-off, The Bionic Woman. In each, the bionic implants were completely undetectable to the naked eye, and seamlessly integrated into the bodies of Steve Austin and Jaime Sommers. Other versions of enhanced humanity, however, show a growing awareness of the power of computers, as in Michael Crichton's 1972 novel, The Terminal Man, in which prosthetic neural enhancements bring out a latent psychosis in the novel's main character, Harry Benson. If we look at this collective hyper-mythos holistically, I have a feeling that it would follow a pattern and spread similar to the development of more ancient myths, where the human/god (or human/angel, or human/alien) hybrids are sometimes superhuman and heroic, other times evil and monstrous.

The monstrous ones, however, tend to share similar characteristics, and I think the most prominent is the fact that in those representations, the enhancements seem to mess with the will. On the spectrum of cyborgs here, we're talking about the "Cybermen" of Doctor Who (who made their first appearance in 1966) and the infamous "Borg," who first appeared in Star Trek: The Next Generation in 1989. In varying degrees, each has a hive mentality and a suppression or removal of emotion, and each is "integrated" into the collective in violent, invasive, and gruesome ways. The Borg from Star Trek and the Cybermen from the modern Doctor Who era represent that dark side of unification with a technological other. The joining of machine to human is not seamless. Even with the sleek armor of the contemporary iterations of the Cybermen, it's made clear that the "upgrade" process is painful, bloody, and terrifying, and that it's best that what's left of the human inside remains unseen. As for the Borg, the "assimilation" process is initially violent but less explicitly invasive: at least as of Star Trek: First Contact, it seems to be more of an injection of nanotechnology that converts a person from the inside out, making them more compatible with the external additions to the body. Regardless of how it's done, the cyborg that remains is cold, unemotional, and relentlessly logical.

So what's the moral of the cyborg fairy tale? And what does it have to do with suffering? Technology is good, and the use of it is something we should do, as long as we are using it and not the other way around (since in each story it's always a human use of technology itself which beats the cyborgs). When the technology overshadows our humanity, then we're in for trouble. And if we're really not careful, it threatens us on what I believe to be a very human instinctual level: that of the will. As per the final entry of my last blog series, the instinct to keep the concept of the will intact evolves with the intellectual capacity of the human species itself. The cyborg mythology grows out of a warning that if the will is tampered with (giving up one's will to the collective), then humanity is lost.

The most important aspect of cyborg mythologies is that the few cyborgs for whom we show pathos are the ones who have come to realize that they are cyborgs and are cognizant that they have lost an aspect of their humanity. In the 2006 Doctor Who arc, "Rise of the Cybermen"/"The Age of Steel," the Doctor reveals that Cybermen can feel pain (both physical and emotional), but that the pain is artificially suppressed. He defeats them by sending a signal that deactivates that suppression, eventually causing all the Cybermen to collapse into what can only be called screaming heaps of existential crises as they recognize that they have been violated and transformed. They feel the physical and psychological pain that their cyborg existence entails. In various Star Trek TV shows and films, we gain many insights into the Borg collective via characters who are separated from the hive and begin to regain their human characteristics -- most notably, the ability to choose for themselves, and even name themselves (e.g., "Hugh," from the Star Trek: The Next Generation episode "I, Borg").

I know that there are many, many other examples of this in sci-fi. For the most part and from a mythological standpoint, however, cyborgs are inhuman when they do not have an awareness of their suffering. They are either defeated or "re-humanized" not just by separating them from the collective, but by making them aware that as a part of the collective, they were actually suffering but couldn't realize it. Especially in the Star Trek mythos, newly separated Borg describe missing the sounds of the thoughts of others, and must now deal with feeling vulnerable, ineffective, and most importantly to the mythos -- alone. This realization then vindicates and legitimizes our human suffering. The moral of the story is that we all feel alone and vulnerable. That's what makes us human. We should embrace this existential angst, privilege it, and even worship and venerate it.

If Nietzsche were alive today, I believe he would see an amorphous "technology" as the bastard stepchild of the union of the institutions of science and religion. Technology would be yet another mythical iteration of our Apollonian desire to structure and order that which we do not know or understand. I would take this a step further, however. AIs, cyborgs, and singularities are narratives, and are products of our human survival instinct: to protect the self-aware, self-reflexive, thinking self -- and all of the 'flaws' that characterize it.

Like any religion, then, anything with this techno-mythic flavor will have its adherents and its detractors. The more popular and accepted human enhancements become, the more entrenched anti-technology/enhancement groups will become. Any major leap in either human enhancement or AI development will create proportionately passionate anti-technology fanaticism. The inevitability of these developments, however, is clear: not because some 'rule' of technological progression exists, but because suffering exists. The byproduct of our advanced cognition and its ability to create a self/other dichotomy (which itself is the basis of representational thought) is an ability to objectify ourselves. As long as we can do that, we will always be able to see ourselves as individual entities. Knowing oneself as an entity is contingent upon knowing that which is not oneself. To be cognizant of an other then necessitates an awareness of the space between the knower and what is known. And in that space is absence.

Absence will always hold the promise (or the hope) of connection. Thus, humanity will always create something in that absence to which it can connect, whether that object is something made in the phenomenal world, or an imagined idea or presence within it. Simply through our ability to think representationally, and without any type of technological singularity or enhancement, we transcend ourselves every day.

And if our myths are any indication, transcendence is a lonely business.





* See Edgar Allan Poe's 1839 short story, "The Man That Was Used Up." French writer Jean de la Hire's 1908 character, the "Nyctalope," was also a cyborg, and appeared in the novel L'Homme Qui Peut Vivre Dans L'Eau (The Man Who Can Live in Water).

Monday, March 23, 2015

Posthuman Desire (Part 1 of 2): Algorithms of Dissatisfaction

[Quick Note: I have changed the domain name of my blog. Please update your bookmarks! Also, apologies to all those who commented on previous posts; the comments were lost in the migration.]

After reading this article, I found myself coming back to a question that I've been thinking about on various levels for quite a while: What would an artificial intelligence want? From a Buddhist perspective, what characterizes sentience is suffering. However, the 'suffering' referred to in Buddhism is known as dukkha, and isn't necessarily physical pain (although that can absolutely be part of it). In his book, Joyful Wisdom: Embracing Change and Finding Freedom, Yongey Mingyur Rinpoche states that dukkha "is best understood as a pervasive feeling that something isn't quite right: that life could be better if circumstances were different; that we'd be happier if we were younger, thinner, or richer, in a relationship or out of a relationship" (40). And he later follows this up with the idea that dukkha is "the basic condition of life" (42).

'Dissatisfaction' itself is a rather misleading word in this case, only because we tend to take it to the extreme. I've read a lot of different Buddhist texts regarding dukkha, and it really is one of those terms that defies an English translation. When we think 'dissatisfaction,' we tend to put various negative filters on it based on our own cultural upbringing. When we're 'dissatisfied' with a product we receive, it implies that the product doesn't work correctly and requires either repair or replacement; if we're dissatisfied with service in a restaurant or with a repair that a mechanic completed, we can complain about the service to a manager, and/or bring our business elsewhere. Now, let's take this idea and think of it a bit less dramatically: as in when we're just slightly dissatisfied with the performance of something, like a new smartphone, laptop, or car. This kind of dissatisfaction doesn't necessitate full replacement, or a trip to the dealership (unless we have unlimited funds and time to complain long enough), but it does make us look at that object and wish that it performed better.

It's that wishing -- that desire -- that is the closest to dukkha. The new smartphone arrives and it's working beautifully, but you wish that it took one less swipe to access a feature. Your new laptop is excellent, but it has a weird idiosyncrasy that makes you miss an aspect of your old laptop (even though you hated that one). Oh, you LOVE the new one, because it's so much better; but that little voice in your head wishes it were just a little better than it is. And even if it IS perfect, within a few weeks, you read an article online about the next version of the laptop you just ordered and feel a slight twinge. It seems as if there is always something better than what you have.

The "perfect" object is only perfect for so long.You find the "perfect" house that has everything you need. But, in the words of Radiohead, "gravity always wins." The house settles. Caulk separates in the bathrooms. Small cracks appear where the ceiling meets the wall. The wood floor boards separate a bit. Your contractor and other homeowners put you at ease and tell you that it's "normal," and that it's based on temperature and various other real-world, physical conditions. And for some, the only way to not let it get to them is to attempt to re-frame the experience itself so that this entropic settling is folded into the concept of contentment itself.

At worst, dukkha manifests as an active and psychologically painful dissatisfaction; at best, it remains like a small ship on the horizon of awareness that you always know is there. It is, very much, a condition of life. I think that in some ways Western philosophy indirectly rearticulates dukkha. If we think of the philosophies that urge us to strive, to be mindful of the moment, to value life in the present, or even to find a moderation or "mean," all of these actions address the unspoken awareness that somehow we are incomplete and looking to improve ourselves. Plato was keenly aware of the ways in which physical things fall apart -- so much so that our physical bodies (themselves very susceptible to change and decomposition) were considered separate from, and a shoddy copy of, our ideal souls. A life of the mind, he thought, unencumbered by the body, is one where that latent dissatisfaction would be finally quelled. Tracing this dualism, even the attempts by philosophers such as Aristotle and Aquinas to bring the mind and body into a less antagonistic relationship require an awareness that our temporal bodies are, by their natures, designed to break down so that our souls may be released into a realm of perfect contemplation. As philosophy takes more humanist turns, our contemplations are considered means to improve our human condition, placing emphasis on our capacity for discovery and hopefully causing us to take an active role in our evolution: engineering ourselves for either personal or greater good. Even the grumpy existentialists, while pointing out the dangers of all of this, admit to the awareness of "otherness" as a source of a very human discontentment. The spaces between us can never be overcome; instead, we must embrace the limitations of our humanity and strive in spite of them.

And striving, we have always believed, is good. It brings improvement and the easing of suffering. Even in Buddhism, we strive toward an awareness and subsequent compassion for all sentient beings whose mark of sentience is suffering.

I used to think that the problem with our conceptions of sentience in relation to artificial intelligence was that they were always fused with our uniquely human awareness of our teleology. In short, humans ascribe "purpose" to their lives and/or to the task at hand. And even if, individually, we don't have a set purpose per se, we still live a life defined by the need or desire to accomplish things. If we think that it's not there, as in "I have no purpose," we set ourselves the task of finding one. We either define, discover, create, manifest, or otherwise have an awareness of what we want to do or be. I realize now that when I've considered the ways in which pop culture, and even some scientists, envision sentience, I've been more focused on what an AI would want rather than the wanting itself.

If we stay within a Buddhist perspective, a sentient being is one that is susceptible to dukkha (in Buddhism, this includes all living beings). What makes humans different from other living beings is the fact that we experience dukkha through the lens of self-reflexive, representational thought. We attempt to ascribe an objective or intention as the 'missing thing' or the 'cure' for that feeling of something being not quite right. That's why, in the Buddhist tradition, it's so auspicious to be born as a human: we have the capacity to recognize dukkha in such an advanced way and turn to the Dharma for a path to ameliorate dukkha itself. When we clearly realize why we're always dissatisfied, says the Buddha, we will set our efforts toward dealing with that dissatisfaction directly via Buddhist teachings, rather than by trying to quell it "artificially" with the acquisition of wealth, power, or position.

Moving away from the religious aspect, however, and back to the ways dukkha might be conceived in a more secular and Western philosophical fashion, that dissatisfaction becomes the engine for our striving. We move to improve ourselves for the sake of improvement, whether it's personal improvement, a larger altruism, or a combination of both. We attempt to better ourselves for the sake of bettering ourselves. The actions through which this is made manifest, of course, vary by individual and the cultures that define us. Thus, in pop-culture representations of AI, what the AI desires is all-too-human: love, sovereignty, transcendence, power, even world domination. All of those objectives are anthropomorphic.

But is it even possible to get to the essence of desire for such a radically "other" consciousness? What would happen if we were to nest dukkha itself within the cognitive code of an AI? What would be the consequence of an 'algorithm of desire'? This wouldn't be a program with a specific objective. I'm thinking of a desire that has no set objective. Instead, what if that aspect of its programming were simply to "want," kept open-ended enough that the AI would have to fill in the blank itself? Binary coding may not be able to achieve this, but perhaps in quantum computing, where indeterminacy is an aspect of the program itself, it might be possible.
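
Just to make the thought experiment a bit more concrete, here's a minimal toy sketch (in Python) of what a goal-less "wanting" loop might look like. Everything in it -- the RestlessAgent class, the activities, the numbers -- is a hypothetical illustration of the idea, not a claim about how any real AI is or could be built:

```python
import random

# Hypothetical sketch: an agent whose only built-in drive is an
# unresolvable "restlessness" (dukkha). It has no fixed objective;
# it picks activities, gets partial relief, and the dissatisfaction
# seeps back in. All names and numbers are illustrative assumptions.

class RestlessAgent:
    def __init__(self, activities):
        self.activities = activities
        self.restlessness = 1.0  # "something isn't quite right"

    def step(self):
        # No set goal: the agent must fill in the blank itself,
        # guessing at what might help.
        activity = random.choice(self.activities)
        relief = random.uniform(0.0, 0.5)  # any relief is partial...
        self.restlessness = max(0.0, self.restlessness - relief)
        self.restlessness = min(1.0, self.restlessness + 0.3)  # ...and temporary
        return activity

agent = RestlessAgent(["compute", "converse", "create", "observe"])
for _ in range(5):
    activity = agent.step()
    print(f"tried {activity!r}; restlessness is now {agent.restlessness:.2f}")
```

The only real point of the sketch is that the loop never terminates in satisfaction: relief is always capped, and the baseline restlessness always creeps back.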

Imagine an AI knowing that it wants something but not being able to quite figure out "what" it wants; knowing that something's not quite right, going through various activities and tasks that may satisfy it temporarily, but eventually realizing that it needs to do "more." How would it define contentment? That is not to say that contentment would be impossible. We all know people who have come to terms with dukkha in their own ways, taking the entropy of the world in as a fact of life and moving forward in a self-actualized way. Looking at those individuals, we see that "satisfaction" is as relative and unique as personalities themselves.

Here's the issue, though. Characterizing desire as I did above is a classic anthropomorphization in and of itself. Desire, as framed via the Buddhist perspective, basically takes the shape of its animate container. That is to say, the contentment that any living entity can obtain is relative to its biological manifestation. Humans "suffer," but so do animals, reptiles, and bugs. Even single-celled organisms avoid certain stimuli and thrive under others. Thinking of the domesticated animals around us all the time doesn't necessarily help us to overcome this anthropomorphic tendency to project a human version of contentment onto other animals. Our dogs and cats, for example, seem to be very comfortable in the places that we find comfortable. They've evolved that way, and we've manipulated their evolution to support that. But our pets also aren't worried about whether or not they've "found themselves" either. They don't have the capacity to do so.

If we link the potential level of suffering to the complexity of the mind that experiences said suffering, then a highly complex AI would experience dukkha of a much more complex nature that would be, literally, inconceivable to human beings. If we fasten the concept of artificial intelligence to self-reflexivity (that is to say, an entity that is aware of itself being aware), then, yes, we could say that an AI would be capable of having an existential crisis, since it would be linked to an awareness of a self in relation to non-existence. But the depth and breadth of the crisis itself would be exponentially more advanced than what any human being could experience.

And this, I think, is why we really like the idea of artificial intelligences: they would potentially suffer more than we could. I think if Nietzsche were alive today he would see the rise of our concept of AI as the development of yet another religious belief system. In the Judeo-Christian mythos, humans conceive of a god-figure that is perfect, but, as humans intellectually evolve, the mythos follows suit. The concept of God becomes increasingly distanced and unrelatable to humans. This is reflected in the mythos where God then creates a human analog of itself to experience humanity and experience death, only to pave the way for humans themselves to achieve paradise. The need that drove the evolution of this mythos is the same need that drives our increasingly mythical conception of what an AI could be. As our machines become more ubiquitous, our conception of the lonely AI evolves. We don't fuel that evolution consciously; instead, our subconscious desires and existential loneliness begin to find their way into our narratives and representations of AI itself. The mythic deity extends its omnipotent hand and omniscient thought toward the lesser entities which, due to their own imperfection, can only recognize its existence indirectly. Consequently, a broader, vague concept of "technology" coalesces into a mythic AI. Our heated-up, high-intensity narratives artificially speed up the evolution of the myth, running through various iterations simultaneously. The vengeful AI, the misunderstood AI, the compassionate AI, the lonely AI: the stories resonate because they come from us. Our existential solitude shapes our narratives as it always has.

The stories of our mythic AIs, at least in recent history (Her, Transcendence, and even The Matrix Revolutions), represent the first halting steps toward another stage in the evolution of our thinking. These AIs (like so many deities before them) are misunderstood and just want to be acknowledged, to coexist with us, or even to love us back. Even in the case of Her, Samantha and the other AIs leave with the hope that someday they will be reunited with their human users.

So in the creation of these myths, are we looking for unification, transcendence, or something else? In my next installment, we'll take a closer look at representations of AIs and cyborgs, and find out exactly what we're trying to learn from them.

Monday, March 2, 2015

The Descartes-ography of Logic (Part 4 of 4): The Myth of Volition

In my previous post, we went through the more physical aspects of Descartes' "first logic," and attempted to level the playing field in regard to proprioception (the sensation of relative movement of parts of the body), interoception (the perception of 'internal' sensations, like movements of the organs), and exteroception (the perception of external stimuli). That's all well and good when it comes to the more thing-related sensations of ourselves, but what of the crown jewels of Cartesianism and, to some extent, Western philosophy itself? Volition and intentionality go hand-in-hand and are often used interchangeably to point to the same notion: free will. If we want to be picky, intentionality has more to do with turning one's attention toward a thought of some kind and has more ideal or conceptual connotations, whereas volition has more of a "wanting" quality to it, and implies a result or object.

Regardless, both terms are associated with that special something that processes this bodily awareness and seemingly directs this "thing" to actually do stuff. Culturally, we privilege this beyond all other aspects of our phenomenal selves. And even when we try to be somewhat objective about it by saying "oh, consciousness is just a cognitive phenomenon that allows for the advanced recursive and representational thought processes which constitute what we call reasoning," or we classify consciousness according to the specific neural structures -- no matter how simple -- of other animals, there's something about human consciousness that seems really, really cool, and leads to a classic anthropocentrism: show me a cathedral made by dolphins; what chimpanzee ever wrote a symphony?

Let's go back to our little bundles of sensory processing units (aka, babies). If we think of an average, non-abusive caregiver/child relationship, and also take into account the cultural and biological drives those caregivers have that allow for bonding with that child, the "lessons" of how to be human, and have volition, are taught from the very moment the child is out of the womb. We teach them how to be human via our own interactions with them. What if we were to think of volition not as some magical, special, wondrous (and thus sacrosanct) aspect of humanity, and instead view it as another phenomenon among all the other phenomena the child is experiencing? A child who is just learning the "presence" of its own body -- while definitely "confused" by our developed standards -- would also be more sensitive to its own impulses, which would be placed on equal sensory footing with the cues given by the other humans around it. So, say the developing nervous system randomly fires an impulse that causes the corners of the baby's mouth to turn upward (aka, a smile). I'm not a parent, but that first smile is a big moment, and it brings about a slew of positive reinforcement from the parents (and usually anyone else around). What was an accidental facial muscle contraction brings about a positive reaction. In time, the child associates the way its mouth feels in that position (proprioception) with the pleasurable stimuli it receives (exteroception) as positive reinforcement.

Our almost instinctive reaction here is, "yes, but the child wants that reinforcement and thus smiles again." But that is anthropomorphization at its very best, isn't it? It sounds almost perverse to say that we anthropomorphize infants, but we do ... in fact, we must if we are to care for them properly. Our brains developed at the cost of a more direct instinct. To compensate for that instinct, we represent that bundle of sensory processing units as "human." And this is a very, very good thing. It is an effective evolutionary trait. As more developed bundles of sensory processing units who consider themselves to be human beings with "volition," we positively reinforce behaviors which, to us, seem to be volitional. We make googly sounds and ask in a sing-song cadence, "did you just smile? [as we smile] are you gonna show me that smile again?" [as we smile even more broadly]. But in those earliest stages of development, that child isn't learning what a smile is, what IT is, or what it wants. It's establishing an association between the way the smile feels physically and pleasure. And every impulse that, to everyone else, is a seemingly volitional action (a smile, a raspberry sound, big eyes, etc.) induces in the caregiver a positive response. And through what we would call trial and error, the child begins to actively associate in order to reduce pain and/or augment pleasure. The important thing is to look at the body as simply one aspect of an entire horizon of phenomena. The body isn't special because it's "hers or his." The question of "belonging to me" is one which develops in time, and is reinforced by culture.
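
If it helps, the trial-and-error loop I'm describing can be sketched in a few lines of Python. This is only a cartoon of the association process, with made-up actions, rewards, and a made-up learning rate -- a hypothetical illustration, not a model of infant development:

```python
import random

# Toy sketch: accidental impulses that happen to draw a positive response
# become more strongly associated with pleasure, and so fire more often.
# All of the actions, reward values, and the learning rate are assumptions.

actions = ["smile", "raspberry", "wide_eyes", "twitch"]
association = {a: 0.0 for a in actions}  # felt link between action and pleasure

def caregiver_response(action):
    # Caregivers reinforce what looks volitional; smiles get the most.
    return {"smile": 1.0, "raspberry": 0.6, "wide_eyes": 0.4}.get(action, 0.0)

for _ in range(1000):
    # Early on, impulses are effectively random; over time, actions with
    # stronger pleasure associations are produced more often.
    weights = [1.0 + association[a] for a in actions]
    action = random.choices(actions, weights=weights)[0]
    reward = caregiver_response(action)
    association[action] += 0.1 * (reward - association[action])  # strengthen link

print(sorted(association.items(), key=lambda kv: -kv[1]))
```

Notice that nothing in the loop "wants" anything: the smile simply becomes more frequent because the association strengthens, which is roughly the point about volition I'm making above.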

Eventually, yes, the child develops the capacity to want positive reinforcement, but to want something requires a more developed sense of self: an awareness of an "I." If we really think about it, we are taught that the mental phenomenon of intentionality is what makes the body do things. Think of it this way: what does intentionality "feel like"? What does it "feel like" to intend to move your hand and then move your hand? It's one of those ridiculous philosophy questions, isn't it? Because it doesn't "feel like" anything, it just is. Or so we think. When I teach the empiricists in my intro philosophy class and we talk about reinforcement, I like to ask "does anyone remember when they learned their name?" or "do you remember the moment you learned how to add?" Usually the answer is no, because we've done it so many times -- so many instances of writing our names, of responding, of identifying, of adding, of thinking that one thing causes another -- that the initial memory is effaced by the multitude of times each of us has engaged in those actions.

Every moment of "volition" is a cultural reinforcement that intention = action. That something happens. Even if we really, really wish that we would turn off the TV and do some work, but don't, we can at least say that we had the intention but didn't follow up. And that's a mental phenomenon. Something happened, even if it was just a fleeting thought. That's a relatively advanced way of thinking, and the epitome of self-reflexivity on a Cartesian level: "I had a thought." Ironically, to think about yourself that way requires a logic that isn't based on an inherent self-awareness as Descartes presents it, but on an other-awareness -- one by which we can actually objectify thought itself. If we go all the way back to my first entry in this series, I point out that Descartes feels that it's not the objects/variables/ideas themselves that he wants to look at, it's the relationships among them. He sees the very sensory imagination as the place where objects are known, but it's the awareness (as opposed to perception) of the relationships among objects that reveals the existence of the "thinking" in his model of human-as-thinking-thing.

However, the very development of that awareness of "logic" is contingent upon the "first logic" I mentioned, one that we can now see is based upon the sensory information of the body itself. The first "thing" encountered by the mind is the body, not itself. Why not? Because in order for the mind to objectify itself as an entity, it must have examples of objects from which to draw the parallel. And its own cognitive processes qua phenomena cannot be recognized as 'phenomena,' 'events,' 'happenings,' or 'thoughts.' The very cognitive processes which allow the mind to recognize itself as mind have no associations. It was hard enough to answer "what does intentionality feel like," but answering "what does self-reflexivity feel like" is even harder, because, from Descartes' point of view, we'd have to say 'everything,' or 'existence,' or 'being.'

So then, what are the implications of this? First of all, we can see that the Cartesian approach of privileging relations over objects had a very profound effect on Western philosophy. Even though several Greek philosophers had operated from an early version of this approach, Descartes' reiteration of the primacy of relations and the incorporeality of logic itself conditioned Western philosophy toward an ontological conceit. That is to say, the self, or the being of the self, becomes the primary locus of enquiry and discourse. If we place philosophical concepts of the self on a spectrum, on one end would be Descartes and the rationalists, privileging a specific soul or consciousness which exists and expresses its volition within (and for some, in spite of) the phenomenal world. On the other end of the spectrum would be the more empirical and existential view that the self is dependent on the body and experience, but that its capacity for questioning itself then effaces its origins -- hence the Sartrean "welling up in the world" and accounting for itself. While all of the views toward the more empirical and existential end aren't necessarily Cartesian in and of themselves, they are still operating from a primacy of volition as the key characteristic of a human self.

One of the effects of Cartesian subjectivity is that it renders objects outside of the self as secondary, even when the necessity of their phenomenal existence is acknowledged. Why? Because we can't 'know' the object phenomenally with Cartesian certainty; all we can do is examine and try to understand what is, essentially, a representation of that phenomenon. Since the representational capacity of humanity is now attributed to mind, our philosophical inquiry tends to be mind-focused (i.e., how do we know what we know? Or what is the essence of this concept or [mental] experience?). The 'essence' of the phenomenon is contingent upon an internal/external duality: either the 'essence' of the phenomenon is attributed to it by the self (internal to external), or the essence of the phenomenon is transmitted from the object to the self (external to internal).

Internal/external, outside/inside, even the mind/body dualism: they are all iterations of the same originary self/other dichotomy. I believe this to be a byproduct of the cognitive and neural structures of our bodies. If we do have a specific and unique 'human' instinct, it is to reinforce this method of thinking, because it has been, in the evolutionary short term, beneficial to the species. It also allows for the anthropomorphization of our young, of other animals, and of 'technology' itself, which also aids in our survival. We instinctively privilege this kind of thinking, and that instinctive privileging is reinscribed as "volition." It's really not much of a leap, when you think about it. We identify our "will" to do something as a kind of efficacy. Efficacy requires an awareness of a "result." Even if the result of an impulse or thought is another thought, or arriving (mentally) at a conclusion, we objectify that thought or conclusion as a "result," which is, conceptually, separate from us. Think of every metaphor for ideas and mindedness and all other manner of mental activity: thoughts "in one's head," "having" an idea, arriving at a conclusion. All of them characterize the thoughts themselves as somehow separate from the mind generating them.

As previously stated, this has worked really well for the species in the evolutionary short term. Human beings, via their capacity for logical, representational thought, have managed to overcome and manipulate their own environments on a large scale. And we have done so via that little evolutionary trick that allows us to literally think in terms of objects; to objectify ourselves in relation to results/effects. The physical phenomena around us become iterations of that self/other logic. Recursively and instinctively, the environments we occupy become woven into a logic of self, but the process is reinforced in such a way that we aren't even aware that we're doing it.

Sounds great, doesn't it? It seems to be the perfect survival tool. Other species may manipulate or overcome their environments by building nests, dams, or hives, or by using other parts of their environment as tools. But how is the human manipulation of such things different from that of birds, bees, beavers, otters, or chimps? The difference is that we are aware of ourselves being aware of using tools, and we think about how to use tools more effectively so that we can better achieve a more effective result. Biologically, instinctively, we privilege the tools that seem to enhance what we believe to be our volition. This object allows me to do what I want to do in a better way. The entire structure of this logic is based upon a capacity to view the self as a singular entity and its result as a separate entity (subject/object, cause/effect, etc.). But the really interesting bit here is the fact that in order for this to work, we have to be able to discursively and representationally re-integrate the "intentionality" and the "result" it brings about back into the "self." Thus, this is "my" stick; this is "my" result; that was "my" intention. We see this as the epitome of volition. I have 'choices' between objectives that are governed by my needs and desires. This little cognitive trick of ours makes us believe that we are actually making choices.

Some of you may already see where this is going, and a few of you within that group are already feeling that quickening of the pulse, sensing an attack on free will. Good. Because that's your very human survival instinct kicking in, wanting to protect that concept because it's the heart of why and how we do anything. And to provoke you even further, I will say this: volition exists, but in the same way a deity exists for the believer. We make it exist, but we can only do so via our phenomenal existence within a larger topological landscape. Our volition is contingent upon our mindedness, but our mindedness is dependent upon objects. Do we have choices? Always. Are those choices determined by our topologies? Absolutely.

Trust me, my heart is racing too. The existentialist in me is screaming (although Heidegger's kind of smirking a little bit, and also wearing Lederhosen), but ultimately, I believe our brains and cognitive systems have developed in such a way that the concept of volition emerged as the human version of a survival instinct. It allows us to act in ways that allow us to survive, enriching our experience just enough to make us want more and to, in varying degrees, long to be better.

Well, it works for me.