Monday, March 2, 2015

The Descartes-ography of Logic (Part 4 of 4): The Myth of Volition

In my previous post, we went through the more physical aspects of Descartes' "first logic," and attempted to level the playing field in regard to proprioception (sensation of relative movement of parts of the body), interoception (the perception of 'internal' sensations like movements of the organs), and exteroception (the perception of external stimuli). That's all well and good when it comes to the more thing-related sensations of ourselves, but what of the crown jewels of Cartesianism and, to some extent, western philosophy itself? Volition and intentionality go hand-in-hand and are often used interchangeably to point to the same notion: free will. If we want to be picky, intentionality has more to do with turning one's attention toward a thought of some kind and has more ideal or conceptual connotations; whereas volition has more of a "wanting" quality to it, and implies a result or object.

Regardless, both terms are associated with that special something that processes this bodily awareness and seemingly directs this "thing" to actually do stuff. Culturally, we privilege this beyond all other aspects of our phenomenal selves. And even when we try to be somewhat objective about it by saying "oh, consciousness is just a cognitive phenomenon that allows for the advanced recursive and representational thought processes which constitute what we call reasoning," or we classify consciousness according to the specific neural structures -- no matter how simple -- of other animals, there's something about human consciousness that seems really, really cool, and leads to a classic anthropocentrism: show me a cathedral made by dolphins; what chimpanzee ever wrote a symphony?

Let's go back to our little bundles of sensory processing units (aka, babies). If we think of an average, non-abusive caregiver/child relationship, and also take into account the cultural and biological drives those caregivers have that allow for bonding with that child, the "lessons" of how to be human, and have volition, are taught from the very moment the child is out of the womb. We teach them how to be human via our own interactions with them. What if we were to think of volition not as some magical, special, wondrous (and thus sacrosanct) aspect of humanity, and instead view it as another phenomenon among all the other phenomena the child is experiencing? A child who is just learning the "presence" of its own body -- while definitely "confused" by our developed standards -- would also be more sensitive to its own impulses, which would be placed on equal sensory footing with the cues given by the other humans around it. So, say the developing nervous system randomly fires an impulse that causes the corners of the baby's mouth to turn upward (aka, a smile). I'm not a parent, but that first smile is a big moment, and it brings about a slew of positive reinforcement from the parents (and usually anyone else around it). What was an accidental facial muscle contraction brings about a positive reaction. In time, the child associates the way its mouth feels in that position (proprioception) with the pleasurable stimuli it receives (exteroception) as positive reinforcement.

Our almost instinctive reaction here is, "yes, but the child wants that reinforcement and thus smiles again." But that is anthropomorphization at its very best, isn't it? It sounds almost perverse to say that we anthropomorphize infants, but we do ... in fact, we must if we are to care for them properly. Our brains developed at the cost of a more direct instinct. To compensate for that instinct, we represent that bundle of sensory processing units as "human." And this is a very, very good thing. It is an effective evolutionary trait. As more developed bundles of sensory processing units who consider themselves to be human beings with "volition," we positively reinforce behaviors which, to us, seem to be volitional. We make googly sounds and ask in a sing-song cadence, "did you just smile? [as we smile], are you gonna show me that smile again?" [as we smile even more broadly]. But in those earliest stages of development, that child isn't learning what a smile is, what IT is, or what it wants. It's establishing an association between the way the smile feels physically and pleasure. And every impulse that, to everyone else, is a seemingly volitional action (a smile, a raspberry sound, big eyes, etc) induces in the caregiver a positive response. And through what we would call trial and error, the child begins to actively form associations in order to reduce pain and/or augment pleasure. The important thing is to look at the body as simply one aspect of an entire horizon of phenomena. The body isn't special because it's "hers or his." The question of "belonging to me" is one which develops in time, and is reinforced by culture.
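For readers who like their thought experiments runnable, here's a deliberately crude toy sketch of that loop -- my own illustration only, not a model of actual infant development, and the impulse names and "caregiver response" numbers are pure contrivance. Random impulses fire; some happen to be met with a warmer response than others; the association strengthens; and the "rewarded" impulse starts to dominate without any "wanting" appearing anywhere in the code:

```python
import random

# A toy sketch (my illustration, not a developmental model): random motor
# impulses that happen to be met with a positive response become more
# strongly associated -- no "wanting" appears anywhere in the loop.

impulses = ["smile", "grimace", "kick", "raspberry"]
# hypothetical caregiver responses: 1.0 = delighted, 0.0 = indifferent
response = {"smile": 1.0, "grimace": 0.0, "kick": 0.0, "raspberry": 0.5}
strength = {i: 1.0 for i in impulses}  # association strengths start out equal

for _ in range(1000):
    # an impulse "fires" in proportion to how reinforced it already is
    impulse = random.choices(impulses, weights=[strength[i] for i in impulses])[0]
    # the reaction it happens to get nudges the association up or down
    strength[impulse] = max(0.05, strength[impulse] + 0.1 * (response[impulse] - 0.5))

print(sorted(strength.items(), key=lambda kv: -kv[1]))
```

Run it a few times and "smile" reliably drifts to the top of the list, which is the whole point: reinforcement does the work that we retroactively narrate as volition.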

Eventually, yes, the child develops the capacity to want positive reinforcement, but to want something requires a more developed sense of self; an awareness of an "I." If we really think about it, we are taught that the mental phenomenon of intentionality is what makes the body do things. Think of it this way: what does intentionality "feel like?" What does it "feel like" to intend to move your hand and then move your hand? It's one of those ridiculous philosophy questions, isn't it? Because it doesn't "feel like" anything, it just is. Or so we think. When I teach the empiricists in my intro philosophy class and we talk about reinforcement, I like to ask "does anyone remember when they learned their name?" or "Do you remember the moment you learned how to add?" Usually the answer is no, because we've done it so many times -- so many instances of writing our names, of responding, of identifying, of adding, of thinking that one thing causes another -- that the initial memory is effaced by the multitude of times each of us has engaged in those actions.

Every moment of "volition" is a cultural reinforcement that intention = action. That something happens. Even if we really, really wish we would turn off the TV and do some work, but don't, we can at least say that we had the intention but didn't follow up. And that's a mental phenomenon. Something happened, even if it was just a fleeting thought. That's a relatively advanced way of thinking, and the epitome of self-reflexivity on a Cartesian level: "I had a thought." Ironically, to think about yourself that way requires a logic that isn't based on an inherent self-awareness as Descartes presents it, but on an other-awareness -- one by which we can actually objectify thought itself. If we go all the way back to my first entry in this series, I point out that Descartes feels that it's not the objects/variables/ideas themselves that he wants to look at, it's the relationships among them. He sees the very sensory imagination as the place where objects are known, but it's the awareness (as opposed to perception) of the relationships among objects that attests to the existence of the "thinking" in his model of human-as-thinking-thing.

However, the very development of that awareness of "logic" is contingent upon the "first logic" I mentioned, one that we can now see is based upon the sensory information of the body itself. The first "thing" encountered by the mind is the body, not itself. Why not? Because in order for the mind to objectify itself as an entity, it must have examples of objects from which to draw the parallel. And, its own cognitive processes qua phenomena cannot be recognized as 'phenomena,' 'events,' 'happenings,' or 'thoughts.' The very cognitive processes that allow the mind to recognize itself as mind have no associations. It was hard enough to answer "what does intentionality feel like," but answering "what does self-reflexivity feel like" is even harder, because, from Descartes' point of view, we'd have to say 'everything,' or 'existence,' or 'being.'

So then, what are the implications of this? First of all, we can see that the Cartesian approach of privileging relations over objects had a very profound effect on Western philosophy. Even though several Greek philosophers had operated from an early version of this approach, Descartes' reiteration of the primacy of relations and the incorporeality of logic itself conditioned Western philosophy toward an ontological conceit. That is to say, the self, or the being of self, becomes the primary locus of enquiry and discourse. If we place philosophical concepts of the self on a spectrum, on one end would be Descartes and the rationalists, privileging a specific soul or consciousness which exists and expresses its volition within (and for some, in spite of) the phenomenal world. On the other end of the spectrum is the more empirical and existential view that the self is dependent on the body and experience, but that its capacity for questioning itself then effaces its origins -- hence the Sartrean "welling up in the world" and accounting for itself. While all of the views toward the more empirical and existential end aren't necessarily Cartesian in and of themselves, they are still operating from a primacy of volition as the key characteristic of a human self.

One of the effects of Cartesian subjectivity is that it renders objects outside of the self as secondary, even when the necessity of their phenomenal existence is acknowledged. Why? Because we can't 'know' the object phenomenally with Cartesian certainty, all we can do is examine and try to understand what is, essentially, a representation of that phenomenon. Since the representational capacity of humanity is now attributed to mind, our philosophical inquiry tends to be mind-focused (i.e. how do we know what we know? Or what is the essence of this concept or [mental] experience?). The 'essence' of the phenomenon is contingent upon an internal/external duality: either the 'essence' of the phenomenon is attributed to it by the self (internal to external) or the essence of the phenomenon is transmitted from the object to the self (external to internal).

Internal/external, outside/inside, even the mind/body dualism: they are all iterations of the same originary self/other dichotomy. I believe this to be a byproduct of the cognitive and neural structures of our bodies. If we do have a specific and unique 'human' instinct, it is to reinforce this method of thinking, because it has been, in the evolutionary short term, beneficial to the species. It also allows for anthropomorphization of our young, other animals, and 'technology' itself, all of which also aid in our survival. We instinctively privilege this kind of thinking, and that instinctive privileging is reinscribed as "volition." It's really not much of a leap, when you think about it. We identify our "will" to do something as a kind of efficacy. Efficacy requires an awareness of a "result." Even if the result of an impulse or thought is another thought, or arriving (mentally) at a conclusion, we objectify that thought or conclusion as a "result," which is, conceptually, separate from us. Think of every metaphor for ideas and mindedness and all other manner of mental activity: thoughts "in one's head," "having" an idea, arriving at a conclusion. All of them characterize the thoughts themselves as somehow separate from the mind generating them.

As previously stated, this has worked really well for the species in the evolutionary short-term. Human beings, via their capacity for logical, representational thought, have managed to overcome  and manipulate their own environments on a large scale. And we have done so via that little evolutionary trick that allows us to literally think in terms of objects; to objectify ourselves in relation to results/effects. The physical phenomena around us become iterations of that self/other logic. Recursively and instinctively, the environments we occupy become woven into a logic of self, but the process is reinforced in such a way that we aren't even aware that we're doing it.

Sounds great, doesn't it? It seems to be the perfect survival tool. Other species may manipulate or overcome their environments via building nests, dams, hives; or using other parts of their environment as tools. But how is the human manipulation of such things different from that of birds, bees, beavers, otters, or chimps? The difference is that we are aware of ourselves being aware of using tools, and we think about how to use tools more effectively so that we can achieve a better result. Biologically, instinctively, we privilege the tools that seem to enhance what we believe to be our volition. This object allows me to do what I want to do in a better way. The entire structure of this logic is based upon a capacity to view the self as a singular entity and its result as a separate entity (subject/object, cause/effect, etc). But the really interesting bit here is the fact that in order for this to work, we have to be able to discursively and representationally re-integrate the "intentionality" and the "result" it brings about back into the "self." Thus, this is "my" stick; this is "my" result; that was "my" intention. We see this as the epitome of volition. I have 'choices' between objectives that are governed by my needs and desires. This little cognitive trick of ours makes us believe that we are actually making choices.

Some of you may already see where this is going, and a few of you within that group are already feeling that quickening of the pulse, sensing an attack on free will. Good. Because that's your very human survival instinct kicking in, wanting to protect that concept because it's the heart of why and how we do anything. And to provoke you even further, I will say this: volition exists, but in the same way a deity exists for the believer. We make it exist, but we can only do so via our phenomenal existence within a larger topological landscape. Our volition is contingent upon our mindedness, but our mindedness is dependent upon objects. Do we have choices? Always. Are those choices determined by our topologies? Absolutely.

Trust me, my heart is racing too. The existentialist in me is screaming (although Heidegger's kind of smirking a little bit, and also wearing Lederhosen), but ultimately, I believe our brains and cognitive systems to have developed in such a way that the concept of volition emerged as the human version of a survival instinct. It lets us act in ways that allow us to survive, enriching our experience just enough to make us want more and to, in varying degrees, long to be better.

Well, it works for me.

Monday, February 23, 2015

The Descartes-ography of Logic (Part 3 of 4): The Sensational Self

In my previous section, we explored how Descartes was operating from an assumed irreducibility of the soul and mind. In this section, I'll attempt to get underneath the mechanism of Cartesian logic by looking at how we sense ourselves in relation to the world.

Let's look at what I called Descartes' "first logic."

Even for Descartes, self-awareness was never a complicated notion. It was not an awareness of the meaning of the self, but an awareness that these bits are part of me and those bits "out there" aren't. In the example I used earlier, a baby that is throwing things from its high chair doesn't have an advanced self-awareness, but a developing one. In the most non-technical terms, what it is doing is building a sense of self, and in the process reinforcing an idea akin to "me vs not me." I'm speculating here that the first thing a human becomes aware of is the phenomena of its own body. It literally has no idea that the body "belongs" to it, because, biologically, it hasn't made the association yet between body and mind; it doesn't know what "belong" means; and it has even less of an idea of mindedness. All sensory input would be on equal footing, including the sensory information the baby itself is generating. There would be no "sense of self." Instead, there would be just "sense."

The baby is passively taking in the sensory information that is thrown at it; and a great deal of that sensory information is the physical phenomena of itself. This covers interoception (the perception of things like hunger, pain, and the 'presence' or movement of internal organs), and proprioception (the perception of the feeling of movement, and the position of parts of the body relative to other parts of the body). Added to that is exteroception, which is the perception of external stimuli. It's the final one which seems to steal the show when we think about our own development, but for now let's try to keep it on the same footing as the others.

Let's assume that all physical phenomena that the baby-entity takes in are equal in how they're processed through the senses. If this is the case, then what would be "learned" first would be that which was the most reinforced. Even with the most present caregiver, what is always there is the child's physical sensations of its own body (interoception and proprioception). The child senses itself first, and does so constantly. It's the consistency of certain sensory input that would allow the process of associations to begin in earnest. At that point, the "self" is more or less a behavioral entity; one that is a product of reinforcement of associations, and an "awareness" of sensory states on the most simple level: the avoidance of pain, and the positive association of things that reduce pain or augment pleasure.

If this sounds somewhat cold and technical, it's supposed to be, because we necessarily (and properly) anthropomorphize these little bundles of sensory processing units into humans -- and, rest assured, they are humans. But we need to pause and try to understand this bundle from its point of view without the self-reflexivity we ourselves associate with the Cartesian subject. On the level of the developing human/sensory processing unit, there are no "known" relationships among sensations. There is not yet a sense of unity of "self." Thus, logic has not (yet) developed. The ingredients are all there, however, for logic to develop: the biological phenomenon of a neurological system outfitted with the necessary sensory inputs allowing for a recursive, algorithm-like learning; and the sense-data which those sensory inputs receive. I am purposely not using terms like "embodied mind" or "brain in its head" or using any kind of brain/body metaphor because this is a full-body system. The central processing unit of it happens to be centered in the head. But the development of that processing unit is contingent upon sensory input. It is not an independent system.

I'm emphasizing this because it is very much the first hurdle in deconstructing the Cartesian self: the mind as, literally, a self-contained component ... or perhaps a "contained, self-component"? Either way, there's a philosophical and cultural hierarchy to how we see ourselves in the world that generally places mind on top, followed by body, followed by "everything else." I'm speculating from a philosophical standpoint that -- for that baby/sensory processing bundle -- there is initially no hierarchy. There certainly wouldn't be an idea of mindedness, nor would there be an idea of the body-as-body; it might be more like "everything without the else." In terms of the body, we are conditioned by our biological structures to emphasize the body because it is the first sensation. Bodily sensation comes first. In fact, the sensation is so reinforced and constant that we don't even know we're sensing it. However, our bodily awareness via interoception and proprioception is always active -- almost like an app running in the background, or an 'invisible' background process of an operating system.

Obviously, this decentralized state of "everything else" doesn't last long. The structure of the brain allows learning to begin immediately, through the neurological system of which it is a part, and such learning stimulates its growth and physical development. If, in a glorious moment, all sensory input is equal, the body's own sensory input would be no different from the multitude of sense-data around it. But very quickly, the proprioceptive and interoceptive sensations which that body is constantly producing and reinforcing, phenomenally, become so reinforced that the phenomena slip from sensation to a kind of general bodily awareness (personally, I believe that it's this background sensation, almost like white noise, that is responsible for "just knowing" you're not dreaming when you're awake. But that's another potential entry). Think for a moment: when you're not touching something, can you feel your hands? When they're not moving, or in contact with any surface, are you feeling them? At first you don't think so, but then if you start to concentrate a bit, maybe move them slightly and try to hold onto the sensation of the skin of the crooks of your fingers touching the skin perpendicular to it, there is a little "weight" that's not really weight but more like some kind of presence or mass. It's kind of a neutral sensation. It's just "there." That's part of proprioception. Just as the awareness of the movements and rumblings of your internal organs is interoception. And when you go about your business it falls into the background or is woven back into the tapestry of all your other sensations. Those bodily sensations, for the most part, are so constantly associated with a "self" that they become fused with it.

My contention is that this type of bodily sensation was, at one very early point in each of our lives, just as vibrant and present as resting a hand on a table, or as the sounds that occur, or any other sensory stimuli. The body is a phenomenon like all other phenomena we consider to be "other." But because the sensation of our own bodies is always present via our interoception and proprioception, it becomes part of an overall awareness.

This, of course, doesn't quite explain those last havens of Cartesianism: volition and intentionality. In my next post, I'll attempt to do just that.

Monday, February 16, 2015

The Descartes-ography of Logic (Part 2 of 4): Not Just Any Thing

In my previous entry, we looked at the Cartesian link between self-awareness and logic and how that link helps define our humanity. In this post, we'll look at the bedrock of Cartesian logic, and why he didn't try to dig any deeper.

Let's return to a part of the original quote from Rene Descartes' Discourse on the Method, Part II:

"I thought it best for my purpose to consider these proportions in the most general form possible, without referring them to any objects in particular, except such as would most facilitate the knowledge of them, and without by any means restricting them to these, that afterwards I might thus be the better able to apply them to every other class of objects to which they are legitimately applicable."

In Descartes' quest for certainty, he believes that he can separate thinking from the "objects" to which his ideas refer in order to "facilitate the knowledge of them." And, for Descartes, it is the unencumbered mind which can perform this separation. Now, later philosophers noticed this leap as well. Kant critiques/corrects Descartes by elevating the role of phenomena in thinking, believing that a mind cannot function in a vacuum. Nietzsche realizes that any kind of references to any kind of certainty or truth are mere linguistic correspondences. Heidegger runs with this idea to an extreme, stating that language itself is thinking, as if to revise the Cartesian "I think, therefore I am" to read: "we language, therefore we think; therefore we think we are." After that, it's an avalanche of post-structuralists who run under the banner of "the world is a text," rendering all human efficacy into performance.

Kant was onto something. I'm no Kantian, but his reassertion of phenomena was an important moment. In my mind, I picture Kant saying, "hey guys, come take a look at this." But just as philosophy as a discipline was about to start really giving phenomena a more informed look, Nietzsche's philosophy explodes in its necessary, culturally-relevant urgency. In the cleanup of the philosophical debris that followed, that little stray thread of phenomena got hidden. Sure, Husserl thought he had it via his phenomenology -- but by that point, psychology had turned all phenomenological investigation inward. If you were going to study phenomena, it had damn well better be within the mind; the rest is an antiquated metaphysics.

But the thread that became buried was the idea that we base logic on the capacity to know the self from the stuff around us. Descartes' choice to not look at "objects," but instead at the relations among them and the operations that make geometry work shifted his focus from the phenomenal to the ideal, leading him down what he thought was a road to purely internal intellectual operations. Descartes, like the Greeks before him, understood that variables were just that -- variable. The function of logic, however, was certain and unchangeable. Coming to the wrong sum had nothing to do with "faulty logic," because logic was not -- and could not be -- faulty. Coming to the wrong sum was about screwing up the variables, not seeing them, mistaking one for another, and generally making some kind of error for which the senses were responsible. And, when we realize that the imagination, the place where we visualize numbers (or shapes), is itself classified as a sensory apparatus, then it becomes a bit more clear.

Descartes was so close to a much deeper understanding of logic. But the interesting thing is that his point was not to take apart the mechanisms of logic, but to figure out what was certain. This was the point of his meditations: to find a fundamental certainty upon which all human knowledge could be based. That certainty was that he, as a thinking thing, existed -- and that as long as he could think, he was existing. Thinking = existence. Once Descartes arrived at that conclusion, he then moved forward again and began to build upon it. So Descartes can't be blamed for stopping short, because it was never his intention to understand how human logic worked; instead, he was trying to determine what could be known with certainty so that any of his speculations or meditations from that point forward had a basis in certainty. That bedrock upon which everything rested was self-existence. "Knowing oneself" in Cartesian terms is only that; it is not a more existential idea of being able to answer the "why am I here?" or "what does it all mean?" kind of questions.

But answering those existential questions isn't the point here either -- and yet we can see how those also serve as a kind of philosophical distraction that grabs our attention, because those existential questions seem so much more practical and relevant. If we pause for a moment and think back to Descartes' original point -- to figure out what can be known with certainty -- and push through what he thought was metaphysical bedrock, we can excavate something that was buried in the debris. So, how do we know that we exist and that we are thinking things? How do we arrive at that "first logic" I mentioned in my previous entry?

To review, that first logic is the fundamental knowledge of self that is the awareness that "I am me, and that is not me." You can translate this a number of different ways without losing the gist of that fundamental logic: "this is part of my body, that is not," "I am not that keyboard," "The hands in front of me typing are me, but the keyboard beneath them is not," etc. To be fair to Descartes, contained within that idea of me/not me logic is his 'ego sum res cogitans' (I am a thinking thing). But as we've seen, Descartes lets the "thing" fall away in favor of the ego sum. Descartes attributes the phenomenon of thinking to the existence of the "I," the subject that seems to be doing the thinking. Given the culture and historical period in which he's writing, it is understandable why Descartes didn't necessarily see the cognitive process itself as a phenomenon. Also, as a religious man, this thinking aspect is not just tied to the soul, it is the soul. Since Descartes was working from the Thomistic perspective that the soul was irreducible and purely logical, the cognitive process could not be dependent on any thing (the space between the words is not a typo). I want everyone to read that space between 'any' and 'thing' very, very carefully. A mind being independent of matter is not just a Cartesian idea; it is a religious one that is given philosophical gravitas by the wonderful Thomas Aquinas. And his vision of a Heaven governed by pure, dispassionate logic (a much more pure divine love) was itself informed by Greek idealism. Platonic Forms had fallen out of fashion, but the idealism (i.e. privileging the idea of the thing rather than the material of the thing) lived on via the purity and incorporeality of logic.

Descartes felt that he had reduced thinking down as far as he possibly could. Add to that the other cultural assumption that the imagination was a kind of inner sense (and not a pure process of the mind), and we see that we do have to cut Rene some slack. For him, there was no reason to go further. He had, quite logically, attributed awareness to thinking, and saw that thinking as separate from sensing. The "I am" bit was the mind, pure logic, pure thinking; and the "a thinking thing" was, more or less, the sensory bit. "I am" (awareness; thinking; existence itself; logic), "a thinking thing" (a vessel with the capacity to house the aforementioned awareness and to sense the phenomena around it). The mind recognizes itself first before it recognizes its body, because the body could only be recognized as 'belonging' to the mind if there were a mind there to do the recognizing. That is to say, Cartesian dualism hinges upon the idea that when a human being is able to recognize its body as its own, it is only because its mind has first recognized itself. This, to me, is the mechanism behind Descartes' "first logic." The human process of consciousness or awareness IS self in Cartesian terms. The conceit that pegs Descartes as a rationalist is that this awareness cannot become aware of the body in which it is housed unless it is aware of itself first; otherwise, how could it be aware of its body? The awareness doesn't really need any other phenomena in order to be aware, for Descartes. The capacity of awareness becomes aware of itself first, and then becomes aware of the physical phenomena around it, then finally understands itself as a thinking thing. The "awareness" kind of moves outward in concentric circles like ripples from a pebble dropped in water.

As philosophy developed over the centuries and the process of cognition itself was deemed a phenomenon, the Cartesian assumption is still there: even as a phenomenon, cognition itself must pre-exist the knowledge of the world around it. Pushed even further into more contemporary theory, the mind/body as a unity becomes the seat of awareness, even if the outside world is the thing that is bringing that dichotomy out (as Lacan would tell us in the mirror stage). From there, the mind is then further tethered to the biological brain as being absolutely, positively dependent on biological processes for its existence and self-reflexivity, and all of the self-reflection, self-awareness, and existential angst therein. Consciousness happens as a byproduct of our biological cognitive processes, but the world that is rendered to us via that layer of consciousness is always already a representation. The distinction between self/other, interior/exterior, and subject/object still remains intact.

I think that even with the best intentions of trying to get to the bottom of Cartesian subjectivity, we tend to stop where Descartes stopped. I mean, really, is it possible to get underneath the "thinking" of the "thinking thing" while you're engaged in thinking itself? The other option is the more metaphysical one: to look at the other things which we are not -- the objects themselves. There are two problems here, one being that most of this metaphysical aspect of philosophy fell out of favor as science advanced. The other is that the Cartesian logical dichotomy is the basis of what science understands as "objectivity" itself. We are "objective" in our experiments; or we observe something from an objective point of view. Even asking "how do we know this object" places us on the materialist/idealist spectrum, with one side privileging sense data as being responsible for the essence of the thing, while on the other, the essence of the object is something we bring to that limited sense data or phenomena.

Regardless of how you look at it, this is still a privileging of a "self" via its own awareness. All of these positions take the point of view of the awareness first, and hold that all phenomena are known through it, even if it's the phenomena that shape the self. But what if we were to make all phenomena equal, that is to say, take cognition as phenomena, biology as phenomena, and the surrounding environment as phenomena at the same time, and look at all of those aspects as a system which acts as a functional unity?

I've been working this question over in my mind and various aborted and unpublished blog entries for months. To reset the status of phenomena seemed to be something that would be a massive, tectonic kind of movement. But with a clearer head I realized that we're dealing with subtlety here, and not trying to flank Descartes by miles but instead by inches. In my next entry I'll be taking apart this Cartesian "first logic" by leveling the phenomenal playing field. As we'll see, it's not just stuff outside of ourselves that constitutes sensory phenomena; we also sense ourselves.

Monday, February 9, 2015

The Descartes-ography of Logic (Part 1 of 4): Establishing Relations

"I resolved to commence, therefore, with the examination of the simplest objects, not anticipating, however, from this any other advantage than than that to be found in a accustoming my mind to the love and nourishment of truth, and to a distaste for all such reasonings as were unsound, But I had no intention on that account of attempting to master all the particular sciences commonly denominated mathematics: but observing that, however different their objects, they all agree in considering on the various relations or proportions subsisting among those objects, I thought it best for my purpose to consider these proportions in the most general form possible, without referring them to any objects in particular, except such as would most facilitate the knowledge of them, and without by any means restricting them to these, that afterwards I might thus be the better able to apply them to every other class of objects to which they are legitimately applicable." -- Rene Descartes, Discourse on the Method, Part II (emphasis added)

And so the Cartesian privileging of the mind over object begins in earnest. Actually, its roots go all the way back to Plato, where the ideal world was privileged over the material. The relationship between ideas and objects has been an ongoing conundrum and principal engine of philosophy. For Descartes, this dualism is reflected in the metaphorical split between the mind and body: the mind is incorporeal, as are its ideas; and the body is a sensory apparatus and very material. I like to think that when the earliest Western philosophers began asking epistemological questions, they were the first to look "under the hood" of how the mind worked. And what they were seeing was a process of understanding the physical world around them, and representing their own physicality.

The questions continued, and, relatively quickly, brought philosophers like Plato to the conclusion that the physical universe was just too damned flawed to be Real. That is to say, things broke down. There seemed to be no permanence in the physical universe. Everything changed and morphed and, in a glass-half-empty kind of way, died. For Plato, this just wasn't right. Change got in the way of the core of his philosophy. There had to be something permanent that was not dependent on this slippery, changing, and ultimately unreliable matter. Skip to Aristotle, who, in turn, embraced matter because he believed the key to understanding knowledge and permanence was actually the process of change. I like to think of this as the first "philosophical sidestep," where a philosopher points to a paradox and/or re-defines a term to make it work. The only thing that is permanent IS change. I picture Aristotle raising an eyebrow and feeling very proud of himself when his students oohed and ahhed at the very Aristotelian simplicity of his statement. I'm sure it was up to the Lyceum's versions of TAs and grad students to actually write out the implications.

With the exception of atomists like Epicurus -- who embraced matter for the material it was, thinking that we knew things via atoms of sensory stimuli that physically made contact with the physical mind -- most philosophers in one way or another were trying to figure out exactly what it was that we were knowing and exactly how we were knowing it. But there was something about the way Descartes tackled the lack of certainty inherent in the apprehension of the physical world that really stuck with western philosophy. Culturally speaking, we could look at the mind-over-matter attitude that prevails as an aspect of this. Such attitudes inform the science fiction fantasies of uploading the consciousness to different bodies, whether the medium of that body is purely machine, a cyborg hybrid, or even just a clone. Regardless, all of these cultural beliefs rely upon the notion that the mind or consciousness is the key component of the human: similar to the SIM card in our phones.

Modern and contemporary philosophers and theorists have chipped away at those assumptions, focusing on the mind/body dualism itself. These critiques generally follow a pattern in which the biological basis of consciousness is reaffirmed, and sense data are deemed absolutely necessary for the mind to come to know itself. In spite of these critiques, however, a more subtle aspect of Cartesianism remains, and we can see the roots of it present in the quote above. Cartesianism doesn't just privilege mind over body, it privileges relations over objects. In other words, in Descartes' attempt to scope out the boundaries of certainty, he de-emphasizes the corporeal due to its impermanent nature and the unreliability of our material senses. Any later philosophy which implies that the "real" philosophical work comes in examining the relations among objects and the ways in which the "self" negotiates those relations owes that maneuver to Descartes.

Now anyone who has studied Marx, Nietzsche, the existentialists, and all the structuralists and poststructuralists thereafter should have felt a little bit of a twitch there. I won't be the jerk philosopher here and call them all Cartesians, but I will say that the privileging of relation over objects is Cartesian-ish.

Let's go back to the quote that led off this entry. Descartes is referring to objects, but not necessarily corporeal objects. He was imagining geometric figures. For his time period, this would be the place where the veil between the corporeal and incorporeal was the thinnest -- where the very ideal, incorporeal math meets its expression in real-world, material phenomena. Geometry relies upon physical representations to be rendered. But don't numbers and operations need to be rendered as symbols in order to be known? Not in the most simple Cartesian terms, no. You can have a group of objects which, phenomenally, precedes the number that is assigned to them. So, the objects are there, but it isn't until some subjectivity encounters them and assigns a "number" to them that they become 9 objects.  The same goes for the operations among numbers -- or the relations between them. You don't need a "+" symbol to, according to Descartes, understand addition.

Now the rationalist philosophers before Descartes understood the above as knowledge; or, as I like to say in my classes, "Knowledge with a capital K." The relations among numbers don't change. Addition is always addition. 7 + 4 = 11. Always. Even if you replace the symbols representing the numbers, the outcome is always, ideally, 11, no matter how that "11ness" is represented. So, VII + IV = XI. "11," "XI," "eleven," "undici," all represent the same concept. Thus, mathematics -- and more importantly, the logic behind mathematics -- was a priori, or innate, knowledge.
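Just to make that representation-independence concrete, here's a toy illustration (mine, not Descartes', and the Roman-numeral converter and little word table are obviously my own contrivances):

```python
# A toy illustration (mine, not Descartes'): the same "11-ness" survives
# every change of notation.

ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    """Convert a Roman numeral to an integer, handling subtractive pairs like IV."""
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN[ch]
        # subtract when a smaller symbol precedes a larger one (IV, IX, ...)
        if i + 1 < len(numeral) and value < ROMAN[numeral[i + 1]]:
            total -= value
        else:
            total += value
    return total

number_words = {"eleven": 11, "undici": 11}  # English, Italian

assert 7 + 4 == 11
assert roman_to_int("VII") + roman_to_int("IV") == roman_to_int("XI") == 11
assert number_words["eleven"] == number_words["undici"] == 11
print("Different symbols, same concept:", 11)
```

However the symbols get swapped out, the relation -- addition -- and its result stay put, which is exactly the part Descartes wants to call certain.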

Where Descartes is really interesting is that he believed that what was actually a priori wasn't necessarily math as information, it was related to an awareness of the operations that made math work. In the Sixth Meditation of Meditations on First Philosophy, Descartes addresses this more directly. He states that he is able to imagine basic geometric forms such as triangles, squares, all the way up to octagons, and picture them in his imagination; but he can conceive of, yet cannot accurately imagine, a chiliagon (a thousand-sided figure). This made him realize that he could not fall back on the symbols that represent mathematical operations. So, if you try to imagine a chiliagon, you think, okay, it probably looks a lot like a circle; that inability highlights the difference between intellect and the imagination. The imagination, for many Renaissance and Enlightenment philosophers (both rationalists and empiricists alike), was a place where one recalled -- and could manipulate -- sense experiences. However, it was not where cognition took place. The imagination itself was classified as (or associated with, depending on which philosopher we're talking about) an aspect of the senses; it was a part of our sensory apparatus. While the intellect was responsible for dipping into the imagination for the reflections of various sense data it needed (i.e. imagining shapes fitting together, creating new objects from old ones, remembering what someone said or what a song sounded like, calling to mind a specific scent), the intellect itself was separate from the imagination. The intellect was logical, and logic was a perfect process: x always = x. Various stupid mistakes we made were caused by faulty sense data or by a passionate (read: emotional) imagination that drew away our attention and corrupted the information coming in.

This is why Descartes epistemologically wanted  to separate out the object from the relations among objects. If you really think about it, it makes sense that early philosophers would pin our humanity on the capacity to understand complex relationships among objects in the physical world. To them, no other species manipulated tools in the same way that humans did, because we were aware that the tools we used allowed us to achieve better results: self + tool = better result. It also makes possible what I see as later becoming a "sliding scale" of humanity. For example, Descartes himself -- after many of his "Meditations" -- fastens our humanity on our capacity to learn and be aware of that learning. At the basis of this learning and at the core of our a priori logic, is the certainty of our individual being itself. That is to say, the "first logic" (my term, not his), is the realization that one is a singular entity; a res cogitans, a "thinking thing" as Descartes himself likes to put it.

So, any entity which has the capacity to recognize itself as a thinking thing has this first logic. The question is, then, can this thinking thing learn beyond itself and understand its place in the world? That's a tall order, and filled with lots of wiggle room. Who is to say what is understanding its place and what is not? For Descartes, that's where a self-aware learning comes in. First, one must be able to "know thyself," not existentially, but logically. The self/other dichotomy, for Descartes, must be established in order for all other learning to apply. This is really key to the Cartesian self. Too many people want to attach a more contemporary, existential/psychological dimension to this "knowledge of self" (Personally, I blame the Germans). However, Descartes is speaking of a more simple, fundamental logic. Once the consciousness understands on a very basic level that it is a singular entity that has some kind of efficacy in the world around it, then things start building very quickly. So, the baby who throws Cheerios from its high chair and watches in wonder as things happen is on the cusp of this first logic. As soon as the association between "action" and "result" is made (regardless of what the result is), Descartes assumes that the baby is also learning that this is "MY action."

As the child becomes more advanced, it comes to the real philosophical knowledge that it is a unique entity with efficacy in the world, and it can imagine itself acting in a given situation. It is aware of itself being aware. It has self-reflexivity. For philosophers of the time, this is what constitutes the difference between human beings and animals: an animal can be trained, but that's different from 'human' learning, which is a process that requires that second layer of awareness. The easiest way to think about it is how we fall into physical or mental habits. In a behavioral fashion, certain things are reinforced. However, we have the capacity to recognize that we are falling into a habit, and thus have the power to change our behaviors. It may not be easy, but it is possible. The smartest breeds of dogs (Border Collies, Standard Poodles, etc.) seem to perform complex tasks and are very attuned to the most subtle behaviors. Using a mixture of training and instinct, they behave this way. However, they cannot transcend that mixture.

In a Cartesian tradition, it is a human awareness of the self as this res cogitans (thinking thing) that defines the human for itself, by itself. And, for Descartes, it was the only thing of which we could be absolutely certain. This is very important, because this certainty was the basis upon which all other logic was founded. Descartes' philosophy implies that an intuitive, innate awareness of the self as a thinking thing (X = me, Y ≠ me) basically supersedes Aristotle's own logical cornerstone: to say of what is that it is not, is false; to say of what is not that it is, is false. Understanding that you yourself are a thinking thing and acting accordingly is proof that you are aware that X = X (this = me) and that X ≠ Y (that ≠ me); only then can one be aware of what is and what is not.

This means that any entity that knows itself in this manner -- and acts within the world with an awareness that it is an aware being acting in the world (an awareness of being aware) -- is human. Thus, an automaton was not human, because it was incapable of moving beyond its programming of gears and cams. It had no awareness that it was acting from a script and thus could make no attempt to move beyond it. In practical terms, this meant that the complex, representational thinking needed for the creation and support of laws, ethics (regardless of custom), any kind of agriculture, animal husbandry, coordinated hunting, etc., was a human characteristic. Any entity that showed these behaviors was human, because those behaviors showed planning; or, an imagining of oneself in the future, creating if/then scenarios.

Descartes' philosophy was quite egalitarian in its designation of humanity. He was well-traveled and understood that customs and cultural differences were superfluous in the designation of the human. To have any kind of customs or cultural traditions was to have a self-reflexivity. The dehumanization of other cultures, races, and gender identities was a product of psychological, social, religious, and economic forces which distorted Cartesian principles: i.e., if someone's culture is not as technologically advanced as ours, it means they're not thinking in an advanced way, which means that they're not quite human. This was NOT a Cartesian idea, but a twisting and misrepresentation of it.

However, Cartesian principles do come into play in the justification of what it is to be "human" in various other areas, and are usually at the crux of many ethical issues when it comes to abortion, euthanasia, and even animal rights. As the capacity to measure and map brain activity advances, and our understanding of psychology and ethics evolves, we are starting to grant more human-like qualities to non-human entities, especially species which show the very Cartesian characteristic of self-reflexivity. Great apes, dolphins, elephants, and other species have been shown, via variations of the rouge test, to have an advanced self-recognition. Notice, however, that all of those designations are ones that hinge upon a capacity to, in some form or another, know oneself; to be aware of oneself in one's surroundings and learn accordingly; to transcend a simple behavioral relationship to the world. Also helping here is the fact that psychology and sociology have shown that much of what we do is actually a product of reinforced, behavioral patterns. So science readjusted the parameters a bit, allowing us to be more like animals.

As a philosopher, I find this a tempting point of departure, one from which I could start discussing the differences between human, animal, and artificial intelligences and problematizing their designations. This inevitably leads toward the age-old "what does it mean to be human" question. Please, if Descartes were alive today, "he'd freak the fuck out" (as I say so eloquently in my classes), because by his own definition, if a machine could learn based on the parameters of its surroundings, it would thus be human. But, over time, especially through the industrial revolution and into the 20th century, "humanity" has remained intact due to some slight tweaks to the Cartesian subject, most of which come back to the self-awareness inherent in self-reflexivity.

But as we will see in my next post, these possibilities are the shiny objects that distract us from the fact that all of this conjecture is actually based on a fundamental leap in Cartesian logic: that a mind is an entity separate not only from its body, but also from the objects which it thinks about.

Wednesday, August 20, 2014

Perseverance and Writing Regularly, Part 2 of 2: Reading

I've gotten a fair amount of positive feedback on my previous post. And it seems that the blog post I was working on regarding connection, interface, and control has morphed into a much larger project: at least another article, but more likely, the start of an outline for my second book. It became clear pretty quickly that a blog wasn't going to be a workable platform for the material; it's deep, and needs footnotes and long arcs of theoretical unpacking. With that said, however, Posthuman Being will be a perfect space for me to explore singular aspects of what I'll be covering.  Anyway, in the process of doing some very preliminary research and preparation for the project, I started thinking about an aspect of the writing process that is all-too-often glossed over: reading.

I started thinking about a concept that had come up a few times in my research, but always as a tangent or "scenic route" in my line of argument. I had visited it a few times in grad school, but had more pressing matters at hand. And in my later projects, other deadlines always loomed which precluded anything but the straightest line through my points. I remembered a book that I had "read" in grad school by one of our faculty. And then, in that procrastinatory way, I did some googling to find out where the author was now teaching. This particular academic was a bit of an interdisciplinary chameleon: taking the shape of whatever department/institution in which he was housed. As far as I knew, he wasn't in my field, so to speak ... at least, the last time I had checked he wasn't. Until I found his faculty page.

And there it was: a description of his current work. And in a scant few sentences was the very preliminary and tentative thesis I had come up with as I was outlining my latest project. I had almost forgotten that terrible punch-in-the-gut feeling of seeing what you thought was an original idea already solidly articulated in someone else's words. With a grimace, I actually said, out loud, "FUCK! I've been SCOOPED!"

For those not familiar with academia, or just getting started as grad students, getting scooped is what happens when that "original" idea you thought you had -- generally the one on which you had psychologically banked all of your aspirations AND had, for a brief moment, made you feel like you weren't a total failure -- has been put forward by someone else. The first time it happens, it's just a terrible feeling. But you learn pretty quickly that getting scooped is actually a very good and necessary stage in research. It's a necessary lesson in humility, but, on the flip side, can also be a quite affirming thing. It means that the idea you've come up with not only had legs at some point, but that other people have already researched it, and thus have a treasure trove of new sources nicely listed for you in the bibliographies and works cited pages of their books and articles. I like to think of those bibliographies as maps. Others, as archeological layers that require excavation. Regardless, the only path through them is to read them.

When I returned to this particular scholar's book, I apparently had found some of it useful in grad school, since I recognized my own lightly-penciled notes in the margins. The reading process in grad school was always such a rushed affair. While I can't speak for all of my classmates at the time, I'm pretty sure most of us awkwardly and greedily blew through the majority of the books we read, trying to figure out where we would situate ourselves in our specific discursive landscapes. Other times, we scavenged, looking for the one or two quotes that said what we needed them to say to make us look not-so-dumb.

As Ph.D. students, we were required to have three reading lists, each consisting of 50-75 sources. One list covered the theoretical/philosophical foundations of each grad student's field of inquiry; the second, the primary sources that demonstrated the particular movement, trend, or cultural phenomena the student was explaining; and the third, a more far-reaching collection that pointed toward the potential future of the field. Theoretically, we were expected to have carefully read every one of those 150-225 sources. And comprehensive exams -- which had both written and oral components and covered about 3-4 weeks -- consisted of questions based on those sources. In my program, once we finished the comprehensives, we were then "ABDs" and had license to start our dissertations, which, in most cases, were born of the written exams. "Reading for exams" was sometimes tedious and it was very difficult, for me at least, to remain balanced between taking detours based on other sources found in my main sources, and staying focused on my supposed research field. It was like Scylla and Charybdis: on one side, you could be sucked down into the abyss of tangential reads, but on the other you could become too focused on a narrow question and have gaping holes in your research.

I remember little from those days, other than alternating bouts of deep anger and abject despair. When I leafed through the books I was reading at the time, I found all of my very heavily penciled underlining -- scratched into the pages with marginal notes consisting of "NO!" and "DEAR GOD!" and "WRONG." But then, in other books, the notes were far less angry. The lines, far less dark. The only place where there was any kind of emphasis was in the asterisks I put in the margin to designate that section as very important, or the repeatedly circled page number that indicated THIS was a place to concentrate. This was a part to transcribe word for word, longhand, onto an index card or sheets of note paper (I preferred the latter, since I liked to annotate my notes and then annotate my annotations). Those books rose to the surface. I found myself referring back to them more often. They became a center. The other, lesser books orbited around them. Some of those books had other books orbiting around them like moons. It became a solar system of thought. And I began to see concentric patterns through various ideas. Some of them intersected predictably, as dictated by the sources themselves. But others, no. I saw intersections that others didn't.

The notes from those books became more involved and complicated. I started cross-referencing more and more. And I realized, fleetingly at first, that there was something that, maybe, possibly ... perhaps ... that these authors and legendary experts, might have, if I wasn't mistaken ... overlooked? Something as simple as a "slippage" of a term. Why did they use this word THIS way in this part of the paragraph but use it differently later? And why does the other author seem to be avoiding this "complicated" issue? And why has that author deferred analysis of an idea for "later research"?

And in my head (and on paper ... lots of paper), I started to sketch out, figuratively and literally, a network of connections among all of the most pertinent texts I'd read and the "gaps" and "slippages" therein. There were maybe a dozen or so books, articles, and chapters that were tightly woven together in the center of that much larger network. They were conversing with each other in ways that -- apparently -- only I could see. And I had to explain how they did, exhaustively, writing for hours spanning dozens of pages that would only be jettisoned later. But it didn't matter, because I had worked out the relationships among them. I explained how they were connected. And then I turned to the why ... taking on the role of meta-critic and fleshing out the academic and cultural reasons why those particular texts should be put in conversation with each other.

And then a member of my dissertation committee asked, pointedly, "so what?"

Okay, allow me the indulgence of using a stream of consciousness here to represent a longer, gut-wrenching process, with certain particulars purposely left out to better illustrate the flow:

BECAUSE IT'S AWESOME, that's why! How cool is it that these texts intersect in these ways?! I mean, can't you see how awesomely cool this is?! How all of this work has shown that all of these things are connected and that one can keep connecting the connections to see how they're all connected?! I mean, really, I've been working so hard on this to uncover these connections; there can't possibly be a reason why I'd spend all of this time and nearly push myself to the edge of death just to show you connections that don't matter ... at all ... to anyone ... but me. Kill me. I suck. You kind of suck, too. Because you let me go on and on doing all of this work as I found connections to connections and created all of this discourse just to be shown that it doesn't matter at all. Why did you let me keep going? As if YOU know anything about this anyway. What was the last text by A you ever read? And, by the way, your reading of B and C is completely off because you didn't notice that each is defining their terms slightly differently, which shows a cultural predilection toward X cultural belief, which is an unexplored area of Y that could explain why Z important intellectual/cultural/academic crisis is making people scratch their heads. Oh, wait. That. There is that. Huh. That is kind of awesome, actually.

Again, the above intellectual, emotional, and psychological process spanned days -- if not weeks -- of consideration. But at the end of it, and with another committee member's incredible advice, there it was: a research question; a working thesis. Ideally, it should have formed during my classwork. But sometimes -- especially for me -- it didn't work out that way. Still, my classes had given me a very solid theoretical foundation, which helped me read more soundly, and with a better sense of where the text at hand belonged within the broader discourse. To some extent, however, from that point forward, everything that I read was a means to an end: always within the shadow of the research question/thesis I had posed -- all toward helping me answer the withering "so what" question.

Psychologically, there also came a point when I had to take a stand and stop reading. This was a kind of compounded temptation, because: 1) I had become so used to analyzing texts through my thesis that anything and everything could be connected to it, which, narcissistically, made me want to read more -- because it was just my own ideas being reflected back to me; 2) if I kept reading, I didn't have to write. And writing is hard. Having the willpower to say "no" and stop ordering more books was one of the most difficult things to do. Especially since, once in a while, it was necessary to pick something else up -- particularly if it was named prominently by other authors in the network of texts I had. In my case, writing about technology made it extremely difficult to stop reading, because every year there were new innovations, and since posthumanism was an emerging field, there was always a new journal article or book on the horizon. Stopping to write was like pulling off the highway and watching everyone else pass me by. That's where all that perseverance comes in.

I recently met with a student getting her entrance essay ready for grad school. She was struggling with something related to getting scooped: feeling like everything had already been said, and that there was nothing new to say. I told her that I like to think about academic discourse in cartographical terms: each field is a larger territory on a map, and a network of highways runs through them. When you first start out, it's like getting onto a gigantic eight-lane highway: a mass of people all moving along well-travelled roads in the same direction. But as you travel farther, you turn off the main highway onto smaller ones, with less traffic. Eight lanes become four; four lanes become two. The farther you go, the less travelled the road, until the pavement ends and you're on a dirt road. Even further, you end up on foot and on a trail. Some even go off-trail and explore. There will almost always be a few others around, but really, you can't get to your master's thesis or your doctoral dissertation without travelling some well-worn roads to reach your little clearing within a larger territory. For me, I didn't just teleport to posthumanism (neither did any of the others in my field). It started with the superhighway of English, then moved into the still-giant highway of literary theory, then forked into philosophy, then into existentialism, which sent me on a very scenic path through the philosophy of technology, and then there I was, along with just a few other souls who got to posthumanism through very different routes -- but all of those routes were marked by the roads becoming smaller and less-travelled. And as for the occasional moments of getting lost, backtracking, and jumping back on major roads, well, that's part of it, too.

This November will mark nine years since I finished my Ph.D. The work didn't stop when I was done. I honed my dissertation down into something much better. I once again had to answer the "so what" question, this time to my editor. I was also in the unique position of writing on something with which the editorial board was unfamiliar. I got an email that basically said, we're on the fence. Can you convince us why we should publish it? I crafted an email in one sitting that was my strongest writing ever. It was clear, focused, and disciplined. And, with the exception of a few fixed typos, it became, verbatim, the preface of my book. It was, essentially, the answer to "so what?"

Now it's clear that I'm starting to write another book. But within the process of writing is reading. The process is different this time, in that I already know the answer to the gut-wrenching "so what?" question. And, man, it's awesome. But I also know that I have to read so much more. I can only cover the same ground for so long, and the evolution of my ideas needs mass quantities of discourse to feed it. The great part is that now, I don't have to read under grad school or probationary faculty pressures. This journey is definitely my own.

Sometimes I think that people outside of academia think that academics produce thought -- out of nothing. That we just walk around, see something interesting, and say, "hey, I'm gonna write all about this," and just sit down and start writing another book or article. But, to produce, one also must consume. And, for an academic, we must consume much more in proportion than what we produce. Just pick up any academic book and look at the bibliography. Yes, all of that is what that particular author had to read in order to give you the one book you're holding in your hand. All of those texts were points on a map.

Monday, July 28, 2014

Perseverance and Writing Regularly, Part 1 of 2: (A Post on the Writing Process)

The main function of this blog has always been as a space where I can tease out certain ideas that may or may not be ripe for deeper, more academically solid exploration. I also envisioned it as a place where I can talk about the writing process itself, especially since several of my readers are or were students of mine. I am currently revising a multi-part post on connection, interface, and control. So don't worry, I'll be back to technological themes soon. In the space between finishing up the first draft and beginning a major revision, I had a moment to reflect more deeply upon being granted tenure and promotion. I wasn't sure if I was going to actually post this entry, but after a really interesting dinner conversation with a colleague and some exchanges with students, I decided to give it a go.

A challenge that comes with working at a teaching -- rather than a research -- institution is that my main focus is the classroom. With a 4/4 teaching load (that's 4 classes per semester; whereas at a research university it may be 2/2, 2/3, or some variation of that depending on rank, seniority, grants, etc.), it's not easy to find the time to research or write. Summer breaks are that time. Winter breaks used to be that time as well, but the brutally short break between the Fall and Spring semesters at my current institution makes that difficult. Summers are also the time for class preparation, and for simply catching up with every project at home that I couldn't get done during the academic year. Add to that visits from family and friends, travel/vacations, and whatever "emergency" committee or task force upon which one is called to serve on campus, and the time can fill up very quickly.

With tenure comes a little bit of a break. An "invitation" to be on a committee during a break is just that, rather than a veiled requirement (i.e., "this will be really good for your tenure application"). So, for the first time since 2005, I have finally had the time, and motivation, to write on a regular basis again. I am somewhat ashamed to admit that I haven't written daily for an extended period of time since I was writing my dissertation. I definitely wrote, but it came in desperate spurts among grading, writing committee reports, class preparation, and the week or two before deadlines. There were always summers, but it's amazing how quickly I fell into bad habits of waiting until the very end of the break to actually write. The idea that the pressure of a deadline will "force" one to get things done is a myth students and some academics are very good at perpetuating. Accomplished scholars who say they write that way may be revising that way, but they aren't composing that way.

The past two months have been a revelation in regard to my writing process. Since I already have a piece coming out soon in this anthology, I am under no deadlines. I have kept campus commitments to an absolute minimum. I have been able to make writing a priority in my day. It is my first project in the morning at least 5 days a week, and I write for a minimum of an hour. The first product of this was my previous 3-part Google Glass review. But it's in the post(s) on which I'm currently working that the real benefits of prioritizing my writing and research have become apparent. I have started to work through some of the more complicated aspects of technological/human interface that I wasn't able to in my first book. Of course, much of that comes from just knowing more based on the reading I've done since then, and being able to make more connections to established philosophy due to all of the classes I've taught between the last book and now.

It's clear that the level from which I'm working now is much deeper than in my previous pieces. I attribute that to my slowed-down and regular approach. Sometimes I think that my background in English works against me: no matter how much I know about process and writing, no matter what advice I give to students regarding giving oneself time to write, there is still that romanticized vision of the exhausted writer "birthing" out some kind of tome that comes only when one occupies the borders of sanity. And after that overwrought, cathartic blast, we hope that there is something salvageable in the mess.

But after a couple of months of slow, steady, and regular writing, I find that 6 hours of writing spread out over 5-6 days is just so much better than 6 hours of writing done in a single, coffee-fueled, trembling day (or night). The embarrassing part is that, when I look back on it, it was the former, more methodical technique that allowed me to finish my dissertation, rather than the latter. The main difference was that I was writing for two or three hours at a time then. Some days there was literally nothing in the tank, and most of my time was spent thinking through a particularly difficult problem.  Other days, I would labor over one or two paragraphs for the full session. There were also times when I would write voluminously in those hours. It varied, but it was a set, scheduled process. Doing it every day allowed me to finish. Success came with an awareness of my process and a commitment to finishing it up. In retrospect, my writing process matured. It made me ready for the next level not just in my writing, but in my career.

As flawed as academia may be, there is something to be said for its "hierarchy." As I've said to every student whom I've counselled regarding grad school and Ph.D. work, the dissertation is not simply about carving out a niche in a given field; or just being able to answer the "so what?" question when you've come up with something new. Writing a dissertation is a process designed to push an academic to his or her limits intellectually, emotionally, and professionally. It is a crucible, an arena, a battlefield, and a very personal hell, where you are perpetually harassed by your own demons while still at the mercy of circumstance (your advisor decides to take a sabbatical? Too bad; one of your committee members decides to work at another institution? Oh well. You or your partner are diagnosed with something horrific? Tough break). If there is one word that describes the point of the process that captures all of this, it's perseverance.

For a perilously long time, I was ABD: "all-but-dissertation." This is an informal term (yes, there are those ABDs who actually want to put this as a suffix on their business cards, thinking it carries weight), which means that all the requirements for the Ph.D. have been fulfilled except for the dissertation. It is when the student is solely responsible for his or her progress. It is the most dangerous time for any Ph.D. student, because it is when the perseverance I mentioned is most tested. The negative psychological backslide that can occur during the ABD phase is insidious. I found myself wondering why "they just couldn't let me finish," and lamenting "but I just want to teach!" I began to question and deride the entire Ph.D. process as antiquated, elitist, and unfair. I amassed a pile of teaching experience, however, desperately using it as an excuse not to face my writing ... and also hoping that magically, the dozens of courses I had taught would somehow make my lack of a Ph.D. something that search committees would ignore. I became satisfied with less and less at the teaching jobs I did have: I was taking jobs out of guilt -- at least if I made money and was 'busy,' it meant that I hadn't stalled. I even thought I could make a permanent career out of my adjunct, ABD teaching. When my wife completed her Ph.D., I offset my jealousy with even more magical thinking: yes, that was her path. For what I want to do, I don't really need the Ph.D. at all. 

But after truly hitting bottom, and being faced with some very serious ultimatums (one having to do with being dropped from my Ph.D. program), I rebuilt from the ground-up. I sought counseling, and uncovered deeply entrenched issues that were hindering me. I faced my fears and actively engaged my dissertation committee. I started writing regularly. It took 3 years of rebuilding before I was on track again. But with the help of committed and compassionate faculty, an excellent therapist, and a partner who had been through the process herself and really, really loved me, I found my rhythm. I found a way to put all of the work I had done previously to use. Circumstances also finally aligned toward the end of those 3+ years. My wife was offered a tenure-track position 2,000 miles from where we lived. If I had any hope of being employed at the same place, I would have to finish my dissertation within a year of moving there. Within 4 months of moving and despite the chaos of unpacking and settling into a new place, I wrote regularly, drawing from every false start and red herring in my research, and finished. Eleven years had passed from the day I took my first graduate level class.

Getting the Ph.D. is more of a personal milestone than a professional one, because having a Ph.D. doesn't guarantee anyone a job. Ever. In most academic fields, the tenure-track job market is abysmal, and repeated runs through the job search process can be utterly demoralizing. The Ph.D. does, however, improve one's chances dramatically -- and in many academic fields, it is an absolute necessity for finding a tenure-track job. And having one "in hand" versus "defending in August" does make one more attractive to a search committee. But when that tenure-track job is found, the professional gauntlet truly begins. I'm lucky to be at a teaching university because the tenure process is five, rather than seven (or more), years. For those five years, I was an Assistant Professor (aka, "probationary" or "junior" faculty). In a nutshell, that means that at the end of any one of those five years, I could have been let go without any reason given. And that does happen sometimes. So during those probationary years, any junior faculty member will take on any and every project that is thrown his or her way: extra committee work, extra-curricular activities, moderating a club, volunteering, etc. And if a trusted mentor, department chair, or any administrator shows up at your office door with an "opportunity that would really support your tenure," you say yes. Of course, at a teaching institution, you are being judged primarily by your teaching evaluations, with research and professional development a slightly distant second. But never mind that all of those commitments mean less time for your classroom preparation, or that you have to leave students in the dust to run to a meeting. You balance it. You do it because you've already proven that you can balance yourself during the dissertation process. You dig deep. You persevere.

I did my fair share of work, and with the help of a particularly insightful administrator, I chose my committee service well. I did take a few risks here and there, and had one or two minor -- and ultimately resolved -- disagreements with colleagues, but I pushed myself. I squeezed in some writing where I could, and managed to completely revise my dissertation and get it published and then write two more articles: one was rejected at the very last stage; the other is the one included in the new collection. I applied for tenure with a strong portfolio. Putting that portfolio together was much more time consuming and emotionally draining than I expected. That process, plus all of my other duties, pushed me to a point that was very similar to the final weeks of dissertation writing: that place where you have to once again dig very deep for that last bit of motivation and energy. But I could look back to my dissertation process and know that I had it in me to finish. I had excellent support from my spouse, colleagues, and even students. All of my supporters reiterated a variation of a theme: "You earned this." Yes. I had earned it, and I would persevere.

Being granted tenure is not an "end." Just like getting the Ph.D. or first tenure-track position is not an end. It is a new process of self-evaluation and professional development, but one that comes with the privileges one has earned along the way. There is more freedom to engage in both research and course development. Student evaluations -- while still very important -- no longer hold such psychological weight. There is room for experimentation and trying out things that one has always wanted to try. Projects can be more long-term. Professional evaluations are set at longer intervals. And, as I hinted at earlier, one can be more selective as to the types of service in which one engages. With rank and seniority, there are more opportunities for leadership in committees and campus-wide initiatives. However, as an "associate professor," there is still one more level above. This is very institution-specific. At a research-level university or college, being promoted to "full professor" is contingent upon criteria particular to each institution. Some require major publishing and/or research achievements; others require commendable teaching. Regardless, if one wants to move on from "Associate Professor" to "Professor," it is another round of evaluation and assessment. One earns more opportunities. And yes, the system is flawed. Institutionalized sexism, racism, classism, etc. persist even in the most liberal-leaning academic edifices. But at least academia tends to be more aware of these issues than many other places.

When I sat down to write this post, I intended to make it solely about my writing process -- not necessarily about the journey up the academic ladder. But for me, at least, the two are absolutely intertwined. I gained confidence from my writing successes, which bolstered my ownership of my own expertise, which, in turn, pushed me to take more risks through my writing. The process is circular and iterative: it reinforces an identity. I try to think back to the moment in grad school when I turned myself around. But really, it was a series of moments over the course of weeks ... perhaps months. One incremental advance after another. But they compounded. And with each iteration, I became slightly more confident in my voice and my subsequent identity. This process really never ends.

It took tenure, promotion, and a summer without any major commitments for me to gain the perspective necessary to verify what I always suspected: one's identity is always a process. I experienced my greatest failures and made my worst decisions when I lost sight of that fact, and passively allowed circumstance and my environment to shape me without actively engaging in the process of that shaping. Each success reinforces my identity as an academic. That identity isn't static. It is an ongoing and active process of evolution, with every stage being a regeneration.

I rather like where I am now. Yet, I will persevere.

Wednesday, June 25, 2014

Looking #Throughglass, Part 3 of 3: Risk, Doubt, and Technic Fields

In my last post, I discussed the expectations that Google Glass creates in relation to the internet of things. In this final section, things will take a slightly more philosophical turn by way of Glass's paradoxical weakness.

Connection. Integration. Control. They are related but they are not the same. One of the pitfalls of a posthuman ontology is that the three are often confused with each other, or we believe that if we have one, we automatically have one or both of the others. A connection to any kind of system (whether technological, social, emotional, etc. or any combination thereof) does not necessarily mean one is integrated with it, and neither connection nor integration will automatically instill a sense of control. In fact, a sense of integration can have quite the opposite effect, as some begin to feel compelled to check their email, or respond to every signal from their phone or tablet. Integrating a smart home or child tracker into that system can, at times, exacerbate that very feeling. Explicating the finer differences among connection, integration, and control will be the subject of another entry/series. For now, however, we can leave it at this: part of the posthuman experience is to have an expectation of a technological presence of some kind.

The roots of the word "expect" come from the Latin expectare, from ex- "thoroughly" + spectare "to look" (etymonline.com). So, any time we are "looking for" a technological system of any kind -- whether because we want to find a WiFi network (or vending machine, ATM, etc.) or because we don't want to find any obvious sign of a technological device or system (save for the most rudimentary and simple necessities) -- we are, generally, in a state of looking for or anticipating some kind of technological presence.

Wide-scale adoption of certain technologies and their systems of use is a very important aspect of making that specific technology ubiquitous. Think about email. For each of us, when did email and the internet become important -- if not the main -- means of retrieving and storing information, communication, and entertainment? How much of the adoption of that technology came about through what seemed to be an active grasping of it, and how much through something foisted upon us in a less voluntary way? The more ubiquitous the technology feels, the more we actively -- yet unconsciously -- engage with it.

And in the present day, we expect much, much more from the internet than we did before. Even in other technological systems: what do we expect to see on our cars? What will we expect to see in 10 years’ time? 

In this context, the successful technology or technological system is one that creates expectations of its future iterations. Much like the film Inception, all a company needs to do is plant the idea of a technology in the collective consciousness of a culture. But that idea needs to be realistic enough to occupy that very narrow band between the present and the distant future, making the expectation reasonable. For example, cost-effective flying cars may be feasible in the near future in and of themselves, but we also know that wide-scale adoption of them would be contingent upon a major -- and unrealistic -- shift in the transportation infrastructure: too many other things would have to change before the technology in question could become widespread.

In this case, Glass -- subtly, for now -- points to a future in which the technological presences around us are evoked at will. Most importantly, that presence (in the internet of things) is just "present enough" now to make the gap between present and future small enough to conceptually overcome. It is a future that promises connection, integration, and control harmoniously fused, instantiated by an interface that is ubiquitous yet non-intrusive.

In the present, in terms of everyday use, this is where Glass falls short for me. It is intrusive. Aesthetically, they've done all they can given the size limitations of the technology, but its user interface is not fluid. I think its reliance on voice commands is at fault. Although the voice recognition present in Glass is impressive, there are sometimes annoying errors. But errors aside, using voice as the main user control system for Glass is a miss. Voice interaction with a smartphone, tablet, or computer can be quite convenient at times, but -- especially with smartphones -- it is infrequently used as the primary interface. No matter how accurate the voice recognition is, it will always lack what a touch-interface has: intimacy.

Now this may seem counterintuitive. Really, wouldn't it be more intimate if we could speak to our machines naturally? In some ways, yes, if we could speak to them naturally. Spike Jonze’s Her presents an incredible commentary on the kind of intimacy we might crave from our machines (yet another entry to be written ... so many topics, so little time!).  But the reality of the situation, in the present, is that we do not have that kind of technology readily available. And voice interfaces -- no matter how much we train ourselves to use them or alter our speech patterns so that we’re more easily understood -- will always already lack intimacy for two main reasons. 

First, voice commands are public: they must be spoken aloud. If there is no one else in the room, the act of speaking aloud is still, on some level, public. It is an expression that puts thoughts "out there." It is immediate, ephemeral, and cannot be taken back. Even when we talk to ourselves, in complete privacy, we become our own audience. And sometimes hearing ourselves say something out loud can have a profound effect. A technological artifact with a voice interface becomes a "real" audience in that it is an "other" to whom our words are directed. Furthermore, this technological other has the capacity to act upon the words we say. These are, after all, voice commands. A command implies that the other to whom the command is directed will enact the will of the speaker. Thus, when we speak to a device, we speak to it with the intent that it carry out the command we have given it. But, in giving commands, there is always a risk that the command will not be carried out, either because the other did not hear it, did not understand it, or -- as could be a risk in future AI systems -- does not want to carry it out. Of course, any technological device comes with a risk that it won't perform in the ways we want it to. But it's the public nature of the voice command that makes that type of interface stand out and augments its failure. I propose that, even subconsciously, there is a kind of performance anxiety that occurs in any voice interface. With each utterance, there is a doubt that we will be understood, just as there is always an underlying doubt when we speak to another person. However, with another person, we can more naturally ask for clarification, and/or read facial expressions and nonverbal cues in order to clarify our intentions.

The doubt that occurs with voice commands is only exacerbated by the second reason why voice interfaces lack intimacy. It is something more rooted in the current state of voice recognition systems: the very definite lag between the spoken command and when the command is carried out. The more "naturally" we speak, the longer the lag as the software works to make sense of the string of words we have uttered. The longer the lag, the greater the doubt. There is an unease that what we have just said will not be translated correctly by the artifact. Add to this the aforementioned performance anxiety, and we have the ingredients for that hard-to-describe, disconcerting feeling one often gets when speaking to a machine. I have no doubt that this lag will one day be closed. But until then, voice commands are too riddled with doubt to be effective. And, all philosophical and psychological over-analysis aside, these lags get in the way. They are annoying. Even when the gaps are closed, I doubt this will ameliorate the more deeply rooted doubt that occurs when commands are spoken aloud, publicly.

For now, the real intimacy of interface between human and machine comes in the tactile. Indeed, the visual is the primary interface and the one which transmits the most information. However, on the human side, the tactile = intimacy. Thus, when trying to navigate through menus on Glass, the swipe of a finger against the control pad feels much more reliable than having to speak commands verbally. Having no middle ground in which to quickly key in information is a hindrance. If we think about the texts we send, how many of them are we willing to speak aloud? Some, clearly, contain private or sensitive information. Keying in information provides the illusion of a direct connection with the physical artifact and, in practical terms, is also "private" in that others can't easily determine what the individual is keying into his or her screen.

Whether or not this aspect of privacy is at the forefront of our minds as we text doesn't matter; it is in our minds when we text. We trust that the information we're entering into -- or through -- the artifact is known to us, the artifact itself, and a potential audience. If we make a mistake in typing a word or send a wrong command, we can correct it rather quickly. Of course, there is still a potential for a bit of anxiety that our commands will not be carried out, or understood. But the "failure" is not as immediate or public in most cases as it would be with a command or message that is spoken aloud. Repeating unrecognized commands via voice is time consuming and frustrating.

Furthermore, a physical keying in of information is more immediate, especially if the device is configured for haptic feedback. Touch "send," and one can actually “feel” the acknowledgement of the device itself. Touching the screen is reinforced by a visual cue that confirms the command. Add any associated sounds the artifact makes, and the entire sequence becomes a multisensory experience. 

At present, technology is still very artifactual, and I believe that it is the tactile aspect of our interactions with technological systems which is one of the defining factors in how we ontologically interact with those systems. Even if we are interacting with our information in the cloud, it is the physical interface through which we bring that information forth that defines how we view ourselves in relation to that information. Even though Glass potentially "brings forth" information in a very ephemeral way, it is still brought forth #throughglass, and once it has been evoked, I believe that -- in the beginning at least -- there will have to be a more physical interaction with that information somehow. In this regard, I think the concept video below from Nokia really seems to get it right. Interestingly, this video is at least 5 years old, and this clip was part of a series that the Nokia Research Center put together to explore how mobile technology might evolve. I can't help but think that the Google Glass development team had watched this at some point.



My first reaction to the Nokia video was: this is what Glass should be. This technology will come soon, and Glass is the first step. But Nokia's vision of "mixed reality" is the future for which Glass prepares us, and -- for me -- it highlights three things which Glass needs to be useful in the present:

Haptic/Gesture-based interface. Integral in Nokia’s concept is the ability to use gestures to manipulate text/information that is present either on the smartglass windows of the house, or in the eyewear itself. Even if one doesn't actually “feel” resistance when swiping (although in a few years that may be possible via gyroscopic technology in wristbands or rings), the movement aspect brings a more interactive dynamic than just voice. In the video, the wearer’s emoticon reply is sent via a look, but I would bet that Nokia’s researchers envisioned a more detailed text being sent via a virtual keyboard (or by a smoother voice interface).
Full field-of-vision display. This was my biggest issue with Glass. I wanted the display to take up my entire field of vision. The danger to this is obvious, but in those moments when I’m not driving, walking, or talking to someone else, being able to at least have the option of seeing a full display would make Glass an entirely different -- and more productive -- experience.  In Nokia's video, scrolling and selection is done via the eyes, but moving the information and manipulating it is done gesture-haptically across a wider visual field.
Volitional augmentation. By this, I mean that the user of Nokia Vision actively engages -- and disengages -- with the device when needed. Despite Google's warnings to Glass Explorers not to be "Glassholes," users are encouraged to wear Glass as often as possible. But there's a subtle implication in Nokia's video that this technology is to be used when needed, and in certain contexts. If this technology were ever perfected, one could imagine computer monitors being almost completely replaced by glasses such as these. Imagine for a moment what a typical day at work would be like without monitors around. Of course, there would be some as an option and for specific applications (especially ones that required a larger audience and/or things that could only be done via a touchscreen), but Nokia's vision re-asserts choice into the mix. Although more immersive and physically present artifactually, the "gaze-tracking eyewear" is less intrusive in its presence, because engaging with it is a choice. Yes, engaging with Glass is a choice, but its non-intrusive design implies an "always on" modality. The internet of things will always be on. The choice to engage directly with it will be ours, just as it is your choice whether or not to check email immediately upon rising. Aside from the hardware, what I find the most insightful here is the implication of personal responsibility (i.e., an active and self-aware grasping) toward technology.

If Google Glass morphed into something closer to Nokia's concept, would people abuse it, wear it all the time, bump into things, get hit by cars, lose any sense of etiquette, and/or dull already tenuous social skills? Of course. But Nokia's early concept here seems to be playing for a more enlightened audience. Besides, at this level of technological development, one could imagine a pair of these glasses being "aware" of when a person was ambulatory and defaulting to very limited functionality.

Overall, Glass is the necessarily clunky prototype which creates an expectation for an effective interface with the internet of things. Although it may not be practical for me in the present, it does make me much more receptive to wearing something that is aesthetically questionable so that I might have a more effective interface when I choose to have it. It is, however, a paradoxical device. Its non-intrusive design impedes a smooth interface, and the hyper-private display that only the wearer can see is betrayed by very public voice commands. Its evoking of the information provided by the internet of things is impeded by too much empty space.

But in that failure lies its success: it creates an expectation that brings technological otherness down from the clouds and integrates it into the very spaces we occupy. Over half a century ago, Martin Heidegger implied in The Question Concerning Technology that the essence of technology does not reside in the artifact, but in the individual's own expectation of what the artifact or system would bring forth. He would be horrified by Glass, because it "sets in order" our topological spaces, objectifying them and rendering them into information. The optimist in me would disagree, but only with the caveat that engaging with the "technic fields" that an internet of things would emit must be a choice, and not a necessity. That is to say, it is the responsibility of the individual to actively engage and disengage at will, much like the somewhat Hyperborean user depicted in Nokia's Mixed Reality project.

Philosophically speaking, this type of technology potentially offers an augmented integration with our topologies. It highlights the importance of the physical spaces we occupy and the ways in which those spaces contribute to how and why we think the way we do. Used mindfully, such technologies will also allow us to understand the impact that our human presence has on our immediate environment (i.e. the room, house, building, etc. we occupy), and how those spaces affect the broader environments in which they are found. 

Now, will Glass just sit on my shelf from now on? No. I do have to say that more apps are being developed every day that increase the functionality of Glass. Furthermore, software updates from Google have made Glass much more responsive. So I will continue to experiment with them, and if the right update comes along with the right app, then I may, at some point, integrate them into my daily routine.

#Throughglass, however, the future is in the past-tense.


[I would like to express my appreciation and gratitude to Western State Colorado University and the faculty in Academic Affairs who made this possible by providing partial funding for obtaining Glass; and to the faculty in my own department -- Communication Arts, Languages, and Literature -- for being patient with me as I walked through the halls nearly bumping into them. The cyborg in me is grateful as well.]