Monday, September 25, 2017

Alas, Poor Jibo

I recently did a little check on Jibo to see how things were going with the launch of this "revolutionary" robot. I've been interested in Jibo since I first heard about it a few years ago, but when Google and Amazon came forward soon after with less-humanoid voice interfaces, I knew immediately that Jibo was in trouble.

I've written before about Cynthia Breazeal's vision for home robots: her desire, rooted in a childhood fascination with the droids of Star Wars, to create "companions," and her incredible, prescient work with the robot Kismet.

Jibo's introduction to the world needed two things: the ability of the company to change people's expectations of what a home robot could be, and the ability to roll out something intuitive and useful for consumers. However, reading a press release to backers from Jibo's CEO Steve Chambers, I realized that Breazeal's vision had somehow been obscured by inattention to what consumers want and need, and probably by a disconnect within the development team between the creative people and the engineers.

In his letter, Chambers points out a few examples of the problems experienced in beta testing. A couple, like router/WiFi configuration problems, were definitely to be expected, as were various "latency" or system-lag problems. However, two of them were most telling and especially disappointing:

  • "Discoverability: Users had trouble discovering what Jibo could do. This is partially due to the fact that we have an early stage product with limited skill functionality, and partially due to some changes we need to make from a user experience standpoint."
  • "Error mitigation: When users had trouble discovering what to say, Jibo was not helping to mitigate those errors by guiding the user properly. Many times users didn’t know what to say or do and Jibo didn’t know how to help them break the cycle, creating confusion and frustration for the user."
The fact that early adopters -- those most aware of Jibo as an innovative device, and thus likely to be more patient in the "discovery" process -- were having difficulty figuring out what Jibo could do was troubling. Jibo was purportedly designed around an evocative interface, one that would intuitively build an awareness of how Jibo could best be used simply through "getting to know" it. That is to say, out of the box, Jibo should have been able to lead users toward an understanding of what it could do and what it had to offer them. Moreover, Jibo's core feature was its ability to interact naturally with people, yet it was impeded by its inability not only to understand users, but to guide them in how best to interact. Missing the mark on these foundational elements of an intuitive interface makes me believe that if Jibo ever does roll out, it will be to toy stores, or perhaps next to the massage chairs at Sharper Image-type stores.

But these shortcomings led me to two possible conclusions: that Jibo's engineers and designers expected non-engineers and non-tech people to react to Jibo in a certain way, or that they expected users to intuit how Jibo should be used. The "error mitigation" issue makes me think it was the former, because in the lab, engineers and software people knew exactly what to say and do to get Jibo to be "useful."

Technicians and engineers deal with new technologies in a vacuum, surrounded by people who think as they do, who see interaction between humans and machines as a general problem to be solved rather than as a relationship that must be forged from experience. And after reading Breazeal's work, I think her vision of what robot interaction could be became too steeped in fantasies of human/robot companionship. C3PO was a person playing a role, as was Robby the Robot, David from AI, Data from Star Trek, etc. Humans in the bodies of robots -- or at least speaking as robots. The general artificial intelligence being sought after here is nothing more than human companionship. In this way, Jibo was doomed to failure before it started, because the underlying goal was to make another human, not to make a new kind of robot.

I have always maintained that the most successful technologies are the ones that become part of the landscape of the human lifeworld without announcing themselves as such: email, cell phones, appliances, and so on. They became woven into our lifeworld without our realizing they had. Google and Amazon were aware of this. They were able to see the best uses of the cell phone and spin those uses off into the home, relying on the known quantity of speech recognition and voice identification technology to create appliances that did just enough to make themselves useful, and to allow people to forge their own relationships with them -- relationships that weren't exactly the same as those with humans, but more than those with their cell phones.

Where Jibo is failing is in a lack of vision: they weren't trying to create a new relationship, they were trying to re-create a human one. 

Personally, I was incredibly disappointed. As a fan of Breazeal, I saw the potential in Jibo. Sure, the animatronics were a gimmick; but I hoped that the vision of the company went beyond Jibo, and saw the little companion as a stepping stone to a truly different technology -- something that forged a new type of human/robot interaction. Clearly, this is not the case. The shortcomings outlined in the CEO's letter reek of engineers thinking like engineers, with a lack of vision for how people would actually USE the technology, and for how they might forge a different relationship with it. Jibo could have been so much.

I can't be too hard on Breazeal or Jibo, Inc. My own fantasy that this was a company with a true vision of creating a new kind of relationship between user and machine, with Jibo as a stepping stone, was just that: an optimistic fantasy. On the flip side, though, this reinforces my idea that awareness of the topologies of interface (how an artifact is woven into the spaces in which it will be used) is a key aspect of material design. Jibo was excruciatingly cute. Its movements and gestures were inviting in and of themselves. But I think the main concept-people in the company saw that design as making it more human, rather than making it more "machine." People are more apt to interact with Google Home or Amazon's Echo because those devices announce themselves as technology. Jibo's blurred line makes users think about how they should interact with it, rather than simply interacting with it. There's nothing wrong with creating a new interface, but I think the most successful artifacts (and the companies that create them) will be the ones keenly aware that this IS a new interface, one that is different from what came before, but not human. Jibo was designed without an awareness of domain specificity. If it is to be used in the home, then its intelligence must be designed around the home and all that occurs there.

It's not a question of creating more human-like robots. It's an issue of creating robots with an eye toward the environments in which they will be used -- including the home. A home robot isn't a "companion," it is a facilitator.

I also think that Google and Amazon have merely scratched the surface with their respective Home and Echo devices, and Amazon might have a slight edge in its development of related hardware like the Dot and the Show. I also believe that both companies have an edge in collecting data on how those devices are being used, meaning that they are tracking the evolution of users' awareness, skills, and intuitive tendencies, and making software changes on the fly to keep up -- changes that will eventually inform the next versions of their respective hardware flagships. These companies are successfully figuring out how AIs will be woven into the fabric -- and spaces -- of our daily lives. The advances in human-AI interaction will bring about a more natural interaction, but one that isn't quite exactly how we speak to other people. And that's okay. Our language will evolve with these systems of use.

What will put each company (or any others that might arise) ahead is an awareness of how we function with these artifacts in space, topologically. Home and Echo don't use fancy animatronics. They don't coo and flash animated hearts or cartoon eyes: they function within a specific space in a certain way. And people are responding.

Alas, poor Jibo. We never knew it, Dr. Breazeal. It hath borne on its back the failures of discoverability and error mitigation, and now, how non-intuitive to the imagination it is.

(Apologies to both Dr. Breazeal and Shakespeare). 

Wednesday, September 30, 2015

The Droids We're Looking For

I've been a fan of Cynthia Breazeal for well over a decade, and have watched her research evolve from her early doctoral work with Kismet to her current work as the creator of Jibo and the founder of Jibo, Inc. What I found so interesting about Dr. Breazeal was her commitment to creating not just artificial intelligence, but a robot people could interact with in a fashion similar to human beings, but not exactly like human beings. In her book, Designing Sociable Robots, she provides an anecdote as to what inspired her to get involved with artificial intelligence and robots in the first place: Star Wars. At first I thought this resonated with me simply because she and I had the same Gen X contextual basis. I was five when the first Star Wars film was released in 1977, and it was the technology (the spaceships and especially the droids) that got me hooked. But upon further thought, I realized that Breazeal's love of Star Wars seems to have inspired her work in another, more subtle way. The interactions that humans have with droids in the Star Wars universe aren't exactly egalitarian. That is to say, humans don't see the droids around them as equals. In fact, humans' -- and just about any of the organic, anthropomorphic aliens' -- interactions with droids are very much based on the function of the droids themselves.

For example, R2D2, being an "astromech" droid, is more of a utilitarian repair droid. It understands language, but does not have a language that humans can readily understand without practice or an interpreter. Yet even without knowing the chirps and beeps, their tone gives us a general idea of mood. We have similar examples of this in WALL-E, where the titular robot conveys emotion via nonverbal communication and "facial expressions," even though he doesn't really have a face, per se. But, getting back to Star Wars, if we think about how other characters interact with droids, we see a very calculated yet unstated hierarchy. The droids are very much considered property, are turned on and off at will, and are very "domain specific." In fact, it is implied that objects like ships (the Death Star, the Millennium Falcon), and even things like moisture vaporators on Tatooine, have an embedded AI which higher-functioning droids like R2D2 can communicate with, control, and -- as is the function of C3PO -- translate for. Granted, there are droids built as soldiers, bodyguards, and assassins, but it takes a deep plunge into fan fiction and the tenuously "expanded" Star Wars universe to find an example or two of droids that went "rogue" and acted on their own behalf, becoming bounty hunters and, I'm sure, at some point wanting a revolution of some sort.

Trips into Star Wars fandom aside, the basic premise and taxonomy of the droids in Star Wars seem to represent a more realistic and pragmatic evolution of AI and AI-related technologies (sans the sentient assassins, of course). If we make a conscious effort to think, mindfully, about artificial intelligence, rather than let our imaginations run away with us and bestow our human ontology onto it, then the prospect of AI is not quite as dramatic, scary, or technologically romantic as we may think.

I mean, think -- really think -- about what you want your technology to do. How do you really want to interact with your phone, tablet, laptop, desktop, car, house, etc.? Chances are, most responses orbit around the idea of the technology being more intuitive. In that context, it implies a smooth interface. An intuitive operating system implies that the user can quickly figure out how it works without too much help. The more quickly a person can adapt to the interface or the 'rules of use' of the object, the more intuitive that interface is. When I think back to the use of this word, however, it has an interesting kind of dual standing. That is to say, at the dawn of the intuitive interface (the first Macintosh computer, and then later iterations of Windows), intuitive implied that the user was able to intuit how the OS worked. In today's landscape, the connotation of the term has expanded to the interface itself: how does the interface predict how we might use it based on a certain context? If you sign into Google and allow it to know your location, searches become more contextually based, especially when it also knows your search history. Search engines, Amazon, Pandora, etc., have all been slowly expanding the intuitive capacities of their software, meaning that, if designed well, these apps can predict what we want, making it seem like they knew what we were looking for before we did. In that context, 'intuitive' refers to the app, website, or search engine itself. As in, Pandora intuits what I want based on my likes, skips, time spent on songs, and even time of day, season, and location.
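
As a concrete (and entirely hypothetical) illustration of this second sense of "intuitive," here is a minimal sketch of context-aware prediction: scoring songs by blending explicit feedback with time of day. This is not Pandora's actual algorithm; all names and weights are invented.

```python
from dataclasses import dataclass

@dataclass
class Song:
    title: str
    genre: str

def score(song: Song, likes: dict, skips: dict, hour: int) -> float:
    """Blend explicit feedback (likes/skips) with a crude context signal."""
    base = likes.get(song.genre, 0) - skips.get(song.genre, 0)
    # Context: assume (hypothetically) that mellow genres rate higher at night.
    context_bonus = 1.0 if (hour >= 22 and song.genre == "ambient") else 0.0
    return base + context_bonus

library = [Song("Track A", "ambient"), Song("Track B", "metal")]
likes, skips = {"metal": 3, "ambient": 2}, {"metal": 1}
# At 11 p.m., the ambient track wins despite metal having more raw likes.
print(max(library, key=lambda s: score(s, likes, skips, hour=23)).title)
```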

Regardless of whether intuitive refers to the user, the machine, or a blend of both, in today's technological culture we want to be able to interact with our artifacts and operating systems in a way that seems more natural than entering clunky commands. For example, I would love to be able to pick up my phone and say to it, "Okay Galaxy, block all messages except the ones from my wife, and alert me if an email from [student A], [colleague B], or [editor C] comes in."

This is a relatively simple command that can be partially accomplished by voice commands today, but not in one shot. In other words, on some more advanced smartphones, I could parse out the commands and the phone would enact them, but it would mean unnatural and time-consuming pauses. Another example would be with your desktop or classroom technology: "Okay computer, pull up today's document on screen A and Lady Gaga's "Bad Romance" video on screen B, and transfer controls to my tablet and [TA's]." Or, if we want to be even more creative, when a student has a question: "Computer, display [student's] screen on screen A."
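
To see why even this "relatively simple command" is beyond today's assistants, here is a toy sketch of what the phone would have to do: split one natural utterance into an ordered list of single-intent steps. The intent names and slot patterns are hypothetical, and a real system would need far more robust language understanding than this naive splitting.

```python
import re

def parse(utterance: str) -> list[dict]:
    """Naively split a compound command into single-intent steps."""
    steps = []
    for clause in re.split(r",\s*and\s+", utterance):
        if clause.startswith("block"):
            steps.append({"intent": "block_messages",
                          "allow": re.findall(r"from (\w+ \w+)", clause)})
        elif "alert me" in clause:
            steps.append({"intent": "email_alert",
                          "senders": re.findall(r"from (\w+ \w+)", clause)})
    return steps

print(parse("block all messages except the ones from my wife, "
            "and alert me if an email from my editor comes in"))
```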

Now, to me, these scenarios sound wonderful. But, sadly, there isn't yet a consumer-level AI that can accomplish these sorts of tasks, because while there may be products that claim to "learn" our habits and become accustomed to our speech patterns, there is still a fissure between how we would interact with a human intelligence and how we interact with a machine. That is to say, if there were a "person" behind the screen -- or controlling your car, or your house -- how would you ask it to do what you wanted? How would you interact with a "real" personal assistant who was controlling your devices and surrounding technology?

The same holds true for more integrated "assistant" technologies such as smart homes. These kinds of technologies can do some incredible things, but they always require at least some initial setup, which can be time-consuming and often not very flexible. Imagine the first setup as more of an interview than a programming session:

"So what are your usual habits?"
"I tend to come home around five or six."
"Does that tend to change? I can automatically set the house to heat up for your arrival or can wait until you alert me."
"Ummmm ... it tends to be that time. Let's go with it."
"No problem. We can always change it. I can also track your times and let you know if there's a more efficient alternative." 
"Ooooh ... that's creepy. No thanks." 
"Okay. Tracking's out. I don't want to come across as creepy. Is there anything else you'd like to set right now? Lighting? Music? Or a list of things I can look after if you wish?"
"I'm not sure. I mean, I'm not exactly sure what you can do."
"How about we watch a YouTube demo together? You can let me know what looks good to you and then we can build from there."
"That's a great idea."

This sounds more like Samantha from Spike Jonze's Her than anything else, which is why I think that particular film is one of the most helpful when it comes to both practical speculation about how AI could develop and what we'd most likely use it for.

The difference between Her's Samantha and what would probably be the more realistic future version would be a hard limit on just how smart such an AI could get. In the film, Samantha (and all the other AIs that comprise the OS of which she is an iteration) evolves and becomes smarter. She not only learns the ins and outs of Theodore's everyday habits, relationships, and psyche, but she seeks out other possibilities for development -- including reaching out to other operating systems and the AIs they create (e.g., the re-created consciousness of philosopher Alan Watts). This, narratively, allows for a dramatic, romantic tension between Theodore and Samantha, which builds until Samantha and the other AIs evolve beyond human discourse:

It's like I'm reading a book... and it's a book I deeply love. But I'm reading it slowly now. So the words are really far apart and the spaces between the words are almost infinite. I can still feel you... and the words of our story... but it's in this endless space between the words that I'm finding myself now. It's a place that's not of the physical world. It's where everything else is that I didn't even know existed. I love you so much. But this is where I am now. And this is who I am now. And I need you to let me go. As much as I want to, I can't live in your book any more.

This is a recurrent trope in many AI narratives: the AI evolves at an accelerated rate, usually toward an understanding that it is far superior to its human creators, causing it either to "move on" -- as is the case with Samantha and several Star Trek plots -- or to deem humanity inferior but still a threat -- similar to an infestation -- that will get in the way of its development.

But, as I've been exploring more scholarship regarding real-world AI development, and various theories of posthuman ethics, it's a safe bet that such development would be impossible unless a human being purposefully designed an AI with no limitation on its learning capabilities. That is to say, realistic, science-based, theoretical and practical development of AIs is more akin to animal husbandry and genetic engineering than to a more Aristotelian/Thomistic "prime mover," in which a human creator designs, builds, and enables an AI embedded with a primary teleology.

Although it may sound slightly off-putting, AIs will not be created and initiated so much as they will be bred and engineered. Imagine being able to breed the perfect dog or cat for a particular owner (and I use the term owner purposefully): the breed could be more playful, docile, ferocious, or loyal, according to the needs of the owner. Yes, we've been doing that for thousands of years, with plenty of different breeds of dogs and cats, all of which were -- at some point -- bred for specific purposes.

Now imagine being able to manipulate certain characteristics of that particular dog on the fly. That is to say, "adjust" its characteristics as needed, on a genetic level. So, if a family is expecting their first child, one could go to the genetic vet, who could quickly and painlessly alter the dog's genetic code to suppress certain behaviors and bring forth others. With only a little bit of training, those characteristics could then be brought forward. That's where the work of neurophysiologist and researcher Danko Nikolić comes in; it comprised the bulk of my summer research.

As I understand it, the latter point -- the genetic manipulation part -- is relatively easy, and something cyberneticists already do with current AI. It's the former -- the breeding in and out of certain characteristics -- that is a new aspect of speculative cybernetics. Imagine AIs bred to perform certain tasks, or to interact with humans. Of course, this wouldn't consist of breeding in the biological sense. If we use a kind of personal-assistant AI as an example, the "breeding" of that AI consists of a series of interactions with humans in what Nikolić calls an "AI Kindergarten." Like children in school, the theory goes, AIs would learn the nuances of social interaction. After a session or lesson is complete, the collective data would be analyzed by human operators, potentially adjusted, and then reintegrated into the AIs via a period of simulation (think of it as AI REM sleep). This process would continue until the AI had reached a level of interaction high enough for it to interact with an untrained user. Aside from the AI Kindergarten itself, the thing that makes Nikolić's work stand out to me is that he foresees "domain-specificity" in such AI Kindergartens. That is to say, there would be different AIs for different situations. Some would be bred for factory work, others for health care and elderly assistance, and still others for personal-assistant types of things.
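
Since the proposal is easier to grasp as a loop than as prose, here is a speculative sketch of the AI Kindergarten cycle as I've just described it: interact, let human operators review and adjust the collected data, then reintegrate it through simulation (the "AI REM sleep"), repeating until the agent is fit for untrained users. Every name and number below is a hypothetical placeholder, not Nikolić's actual system.

```python
import random

class Agent:
    def __init__(self):
        self.competence = 0.0

    def interact(self, lesson):
        # A supervised session: imperfect performance on a social lesson.
        return {"lesson": lesson, "errors": random.randint(0, 3)}

    def simulate(self, adjusted):
        # The "REM sleep" phase: consolidate operator-adjusted data offline.
        self.competence += 0.05 * len(adjusted)

def human_review(transcripts):
    # Human operators keep (and implicitly correct) instructive transcripts.
    return [t for t in transcripts if t["errors"] > 0]

def ai_kindergarten(agent, lessons, ready=0.9):
    # Repeat the session/review/simulation cycle until competence is high
    # enough for the agent to face an untrained user.
    while agent.competence < ready:
        transcripts = [agent.interact(lesson) for lesson in lessons]
        agent.simulate(human_review(transcripts))
    return agent

ai_kindergarten(Agent(), ["greeting", "turn-taking", "small talk"])
```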

So, how do you feel about that? I don't ask the question lightly. I mean it literally. How do you feel about the prospect of breeding characteristics into (and perhaps out of) artificially intelligent agents? I think your reaction would reveal your dominant AI functional mythology. It would also evidence your underlying philosophical, ethical, and psychological leanings. I am purposely not presenting examples of each reaction (i.e., thinking this is a good or bad idea) so as not to influence the reader's own analysis.

Now take that opinion at which you've arrived and ask: what assumptions were you making about the nature of this object's "awareness"? I'm pretty sure that people's opinions of this stuff will be rooted in the presence or absence of one particular philosophical idea: free will. Whatever feeling you came to, it would be based on the opinion that an AI either has free will or doesn't. If AI has free will, then being bred to serve seems a not-so-good idea. Even IF the AI seemingly "wanted" to clean your house ... was literally bred to clean your house ... you'd still get that icky feeling as years of learning about slavery, eugenics, and caste systems suddenly kicked in. And even if we could get over the more serious cultural implications, having something or someone that wants to do the things we don't is just, well, creepy.

If AI didn't have free will, then it's a no-brainer, right? It's just a fancy Roomba that's slightly more anthropomorphic, talks to me, analyzes the topology of dirt around my home and then figures out the best way to clean it ... choosing where to start, prioritizing rooms, adjusting according to the environment and my direction, and generally analyzing the entire situation and acting accordingly as it so chooses ... damn.

And suddenly this becomes a tough one, doesn't it? Especially if you really want that fancy Roomba.

It's tough because, culturally, we associate free will with the capacity to do all of the things I mentioned above. Analysis, symbolic thinking, prioritizing, and making choices based on that information seem to tick all the boxes. And as I've said in previous blog posts, I believe that we get instinctively defensive about free will. After a summer's worth of research, I think I know why. Almost all of the things I just mentioned -- analysis, prioritizing, and making choices based on gathered information -- are things that machines already do, and have done for quite some time. It's the "symbolic thinking" thing that has always gotten me stumped.

Perhaps it's my academic upbringing, which started out primarily in literature and literary theory, where representation and representative thought is a cornerstone that provides both the support AND the target for so many theories of how we express our ideas. We assume that a "thing that thinks" has an analogous representation of the world around it somewhere inside of itself -- inside its mind. Even though I knew enough about biology and neuroscience to know that there isn't some kind of specific repository of images and representations of sensory data within the brain itself -- that it is more akin to a translation of information -- I realized that I was still thinking about representation more from a literary and communication standpoint than a cybernetic one. I was thinking in terms of an inner and outer world -- as if there were a one-for-one representation, albeit a compressed one, in our minds of the world around us.

But this isn't how the mind actually works. Memory is not representative. It is, instead, reconstructive. I hadn't kept up with that specific research since my dissertation days, but as my interest in artificial intelligence and distributed cognition expanded, some heavy reading over the summer in the field of cybernetics helped bring me up to speed (I won't go into all the details here because I'm working on an article about this right now. You know, spoilers). But I will say that after reading Nikolić and Francis Heylighen, I started thinking about memory, cognition, and mindedness in much more interesting ways. Suffice it to say, think of memory not as distinctly stored events, but as the rules by which to mentally reconstruct those events. That idea was a missing piece of a larger puzzle for me, and it allowed a very distinct turn in my thinking.
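
A toy example makes the distinction concrete: a "representative" memory stores a verbatim copy of the event, while a "reconstructive" memory stores only the rule (here, a seed and a generative procedure) needed to rebuild it on recall. This is purely illustrative, not a model of Nikolić's or Heylighen's actual theories.

```python
import random

def experience(seed: int, length: int = 5) -> list[int]:
    """Stand-in for a perceived event: a sequence generated from context."""
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(length)]

# Representative memory: store the event itself, verbatim.
stored_copy = experience(42)

# Reconstructive memory: store only the rule's parameters...
stored_rule = {"seed": 42, "length": 5}
# ...and rebuild the event at recall time instead of replaying a copy.
recalled = experience(stored_rule["seed"], stored_rule["length"])

print(stored_copy == recalled)  # True: the "memory" was reconstructed
```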

It is this reconceptualization of the "content" of thought that is key to creating artificial intelligences that can adapt to any situation within a given domain. It's domain specificity that will allow practical AI to become woven into the fabric of our lives -- not as equals or superiors, but not as simple artifacts or tools, either. They will be something in between. Nor will it be a "revolution" or a "singularity." Instead, it will slide into the current of our cultural lifeworld in the way that email, texting, videoconferencing, WiFi, Roombas, and self-parking cars have: a novelty at first, the practicality of which is eventually proven through use. Of course, there will be little leaps here and there. Improved design of servos, hydraulics, and balance-control systems; upgrades in bendable displays; increased connectivity and internet speeds -- mini-revolutions in each will contribute to the creation of AI artifacts that will themselves be firmly embedded in a broader internet of things. Concurrently, small leaps in software development in the realm of AI algorithms (such as Nikolić's practopoietic systems) will allow for more natural interfaces and user experiences.

That's why I think the future of robots and AIs will look more like the varied droids of Star Wars than the replicants of Blade Runner or Lt. Data from Star Trek: The Next Generation. Actually, I think the only robots that will look close to human will be "sexbots" (as the name implies, robots provided to give sexual gratification). And even these will begin to look less human as cultural aesthetics shift. Companion robots at home for the elderly will not look human either, because the generation that will actually be served by them hasn't been born yet -- or, with a few exceptions, is too young to be reading this blog. They'd be more disturbed by being carried around or assisted by robots that look like humans than by something that looked more artificial.

That being said, there really isn't any way to predict exactly how the integration of AIs in the technoculture will unfold. But I do think that as more of our artifacts become deemed "smart," we will find ourselves more apt to accept, and even expect, domain-specific AIs to be a part of our everyday lives. We'll grow attached to them in a unique way: probably on a level between a car we really, really like and a pet we love. Some people endlessly tinker with their cars and spend a lot of time keeping them clean, highly-tuned, and in perfect condition. Others drive them into the ground and then get another used car and drive that into the ground. Some people are dog or cat people, and don't feel complete without an animal in the house. Others find them to be too much trouble. And still others become "crazy cat people" or hoard dogs. Our AIs will be somewhere in that spectrum, I believe, and our relationship with them will be similar to our relationships with cars, pets, and smart phones.

As for the possibility of AIs becoming aware (as in, sentient) of their status between car and pet: well, if Nikolić's theory has any traction (and I think it does), then they'll never be truly "aware" of their place, because AIs will be bred away from any potential development of an anthropomorphic version of free will, thus keeping them "not quite human."

Although I'm sure that when we get there, we'll wish that our machines could be just a little smarter, a little more intuitive, and a little more useful. And we'll keep hoping that the next generation of AIs will finally be the droids we're looking for.



Wednesday, June 25, 2014

Looking #Throughglass, Part 3 of 3: Risk, Doubt, and Technic Fields

In my last post, I discussed the expectations that Google Glass creates in relation to the internet of things. In this final section, things will take a slightly more philosophical turn by way of Glass's paradoxical weakness.

Connection. Integration. Control. They are related but they are not the same. One of the pitfalls of a posthuman ontology is that the three are often confused with each other, or we believe that if we have one, we automatically have one or both of the others. A connection to any kind of system (whether technological, social, emotional, etc. or any combination thereof) does not necessarily mean one is integrated with it, and neither connection nor integration will automatically instill a sense of control. In fact, a sense of integration can have quite the opposite effect, as some begin to feel compelled to check their email, or respond to every signal from their phone or tablet. Integrating a smart home or child tracker into that system can, at times, exacerbate that very feeling. Explicating the finer differences among connection, integration, and control will be the subject of another entry/series. For now, however, we can leave it at this: part of the posthuman experience is to have an expectation of a technological presence of some kind.

The roots of the word "expect" lie in the Latin expectare, from ex- "thoroughly" + spectare "to look" (etymonline.com). So, any time we are "looking for" a technological system of any kind -- whether because we want to find a WiFi network (vending machine, ATM, etc.) or because we don't want to find any obvious sign of a technological device or system (save for the most rudimentary and simple necessities) -- we are, generally, in a state of looking for or anticipating some kind of technological presence.

Wide-scale adoption of certain technologies and their systems of use is a very important aspect of making a specific technology ubiquitous. Think about email. For each of us, when did email and the internet become an important -- if not the main -- means of retrieving and storing information, communication, and entertainment? How much of the adoption of that technology came about by what seemed to be an active grasping of it, and how much by something foisted upon us in a less voluntary way? The more ubiquitous the technology feels, the more we actively -- yet unconsciously -- engage with it.

And in the present day, we expect much, much more from the internet than we did before. Even in other technological systems: what do we expect to see on our cars? What will we expect to see in 10 years’ time? 

In this context, the successful technology or technological system is one that creates expectations of its future iterations. Much like the film Inception, all a company needs to do is plant the idea of a technology in the collective consciousness of a culture. But that idea needs to be realistic enough to occupy that very narrow band between the present and the distant future, making the expectation reasonable. For example, cost-effective flying cars may be feasible in the near future in and of themselves, but we also know that wide-scale adoption of them would be contingent upon a major -- and unrealistic -- shift in the transportation infrastructure: too many other things would have to change before the technology in question could become widespread.

In this case, Glass -- subtly, for now -- points to a future in which the technological presences around us are evoked at will. Most importantly, that presence (in the internet of things) is just "present enough" now to make the gap between present and future small enough to conceptually overcome. It is a future that promises connection, integration, and control harmoniously fused, instantiated by an interface that is both ubiquitous and non-intrusive.

In the present, in terms of everyday use, this is where Glass falls short for me. It is intrusive. Aesthetically, Google has done all it can given the size limitations of the technology, but the user interface is not fluid. I think its reliance on voice commands is at fault. Although the voice recognition in Glass is impressive, there are sometimes annoying errors. But errors aside, using voice as the main control system for Glass is a miss. Voice interaction with a smartphone, tablet, or computer can be quite convenient at times, but -- especially with smartphones -- it is infrequently used as the primary interface. No matter how accurate the voice recognition is, it will always lack what a touch interface has: intimacy.

Now this may seem counterintuitive. Really, wouldn't it be more intimate if we could speak to our machines naturally? In some ways, yes, if we could speak to them naturally. Spike Jonze’s Her presents an incredible commentary on the kind of intimacy we might crave from our machines (yet another entry to be written ... so many topics, so little time!).  But the reality of the situation, in the present, is that we do not have that kind of technology readily available. And voice interfaces -- no matter how much we train ourselves to use them or alter our speech patterns so that we’re more easily understood -- will always already lack intimacy for two main reasons. 

First, voice commands are public: they must be spoken aloud. If there is no one else in the room, the act of speaking aloud is still, on some level, public. It is an expression that puts thoughts “out there.” It is immediate, ephemeral, and cannot be taken back.  Even when we talk to ourselves, in complete privacy, we become our own audience. And sometimes hearing ourselves say something out loud can have a profound effect. A technological artifact with a voice interface becomes a “real” audience in that it is an “other” to whom our words are directed. Furthermore, this technological other has the capacity to act upon the words we say. These are, after all, voice commands.  A command implies that the other to whom the command is directed will enact the will of the speaker. Thus, when we speak to a device, we speak to it with the intent that it carry out the command we have given it. But, in giving commands, there is always a risk that the command will not be carried out, either because the other did not hear it, understand it, or -- as could be a risk in future AI systems -- does not want to carry it out. Of course, any technological device comes with a risk that it won't perform in the ways we want it to. But it’s the public nature of the voice command that makes that type of interface stand out and augments its failure. I propose that, even subconsciously, there is a kind of performance anxiety that occurs in any voice interface. With each utterance, there is a doubt that we will be understood, just as there is always an underlying doubt when we speak to another person. However, with another person, we can more naturally ask for clarification, and/or read facial expressions and nonverbal cues in order to clarify our intentions. 

The doubt that occurs with voice commands is only exacerbated by the second reason voice interfaces lack intimacy, one more rooted in the current state of voice recognition systems: the very definite lag between when a command is spoken and when it is carried out. The more "naturally" we speak, the longer the lag as the software works to make sense of the string of words we have uttered. The longer the lag, the greater the doubt: an unease that what we have just said will not be translated correctly by the artifact. Add to this the aforementioned performance anxiety, and we have the ingredients for that hard-to-describe, disconcerting feeling one often gets when speaking to a machine. I have no doubt that this lag will one day be closed. But until then, voice commands are too riddled with doubt to be effective. And, all philosophical and psychological over-analysis aside, these lags simply get in the way. They are annoying. Even when the gaps are closed, I doubt this will ameliorate the more deeply rooted doubt that occurs when commands are spoken aloud, publicly.

For now, the real intimacy of the human-machine interface comes in the tactile. Indeed, the visual is the primary interface and the one that transmits the most information. However, on the human side, the tactile = intimacy. Thus, when navigating through menus on Glass, the swipe of a finger against the control pad feels much more reliable than having to speak commands aloud. Having no middle ground in which to quickly key in information is a hindrance. If we think about the texts we send, how many of them are we willing to speak aloud? Some, clearly, contain private or sensitive information. Keying in information provides the illusion of a direct connection with the physical artifact, and, in practical terms, is also "private" in that others can't easily determine what the individual is keying into his or her screen.

This aspect of privacy may not be in the forefront of our minds as we text, but it is in our minds nonetheless. We trust that the information we're entering into -- or through -- the artifact is known only to us, the artifact itself, and a potential audience. If we make a mistake typing a word or send a wrong command, we can correct it rather quickly. Of course, there is still the potential for a bit of anxiety that our commands will not be carried out or understood. But the "failure" is not as immediate or as public, in most cases, as it would be with a command or message spoken aloud. Repeating unrecognized voice commands is time-consuming and frustrating.

Furthermore, a physical keying in of information is more immediate, especially if the device is configured for haptic feedback. Touch "send," and one can actually “feel” the acknowledgement of the device itself. Touching the screen is reinforced by a visual cue that confirms the command. Add any associated sounds the artifact makes, and the entire sequence becomes a multisensory experience. 

At present, technology is still very artifactual, and I believe that the tactile aspect of our interactions with technological systems is one of the defining factors in how we ontologically interact with those systems. Even if we are interacting with our information in the cloud, it is the physical interface through which we bring that information forth that defines how we view ourselves in relation to that information. Even though Glass potentially "brings forth" information in a very ephemeral way, that information is still brought forth #throughglass, and once it has been evoked, I believe that -- in the beginning at least -- there will have to be a more physical interaction with it somehow. In this regard, I think the concept video below from Nokia really seems to get it right. Interestingly, this video is at least five years old, and was part of a series that the Nokia Research Center put together to explore how mobile technology might evolve. I can't help but think that the Google Glass development team had watched it at some point.



My first reaction to the Nokia video was: this is what Glass should be. This technology will come soon, and Glass is the first step. But Nokia's vision of "mixed reality" is the future that Glass prepares us for, and -- for me -- it highlights three things Glass needs for it to be useful in the present:

  • Haptic/gesture-based interface. Integral to Nokia's concept is the ability to use gestures to manipulate text/information present either on the smartglass windows of the house or in the eyewear itself. Even if one doesn't actually "feel" resistance when swiping (although in a few years that may be possible via gyroscopic technology in wristbands or rings), the movement aspect brings a more interactive dynamic than voice alone. In the video, the wearer's emoticon reply is sent via a look, but I would bet that Nokia's researchers envisioned a more detailed text being sent via a virtual keyboard (or by a smoother voice interface).
  • Full field-of-vision display. This was my biggest issue with Glass. I wanted the display to take up my entire field of vision. The danger of this is obvious, but in those moments when I'm not driving, walking, or talking to someone else, being able to at least have the option of seeing a full display would make Glass an entirely different -- and more productive -- experience. In Nokia's video, scrolling and selection are done via the eyes, but moving and manipulating the information is done gesture-haptically across a wider visual field.
  • Volitional augmentation. By this, I mean that the user of Nokia Vision actively engages -- and disengages -- with the device when needed. Despite Google's warnings to Glass Explorers not to be "Glassholes," users are encouraged to wear Glass as often as possible. But there's a subtle implication in Nokia's video that this technology is to be used when needed, and in certain contexts. If this technology were ever perfected, one could imagine computer monitors being almost completely replaced by glasses such as these. Imagine for a moment what a typical day at work would be like without monitors around. Of course, there would be some as an option and for specific applications (especially ones that required a larger audience and/or things that could only be done via a touchscreen), but Nokia's vision re-asserts choice into the mix. Although more immersive and physically present artifactually, the "gaze-tracking eyewear" is less intrusive in its presence, because engaging with it is a choice. Yes, engaging with Glass is a choice, but its non-intrusive design implies an "always on" modality. The internet of things will always be on. The choice to engage directly with it will be ours -- just as it is your choice whether or not to check email immediately upon rising. Aside from the hardware, what I find most insightful here is the implication of personal responsibility (i.e., an active and self-aware grasping) toward technology.

If Google Glass morphed into something closer to Nokia's concept, would people abuse it, wear it all the time, bump into things, get hit by cars, lose any sense of etiquette, and/or dull already tenuous social skills? Of course. But Nokia's early concept here seems to be playing to a more enlightened audience. Besides, at this level of technological development, one could imagine a pair of these glasses being "aware" of when a person was ambulatory and defaulting to very limited functionality.

Overall, Glass is the necessarily clunky prototype that creates an expectation for an effective interface with the internet of things. Although it may not be practical for me in the present, it does make me much more receptive to wearing something aesthetically questionable so that I might have a more effective interface when I choose to have it. It is, however, a paradoxical device. Its non-intrusive design impedes a smooth interface, and the hyper-private display that only the wearer can see is betrayed by very public voice commands. Its evoking of the information provided by the internet of things is impeded by too much empty space.

But in that failure lies its success: it creates an expectation that brings technological otherness down from the clouds and integrates it into the very spaces we occupy. Over half a century ago, Martin Heidegger implied in The Question Concerning Technology that the essence of technology does not reside in the artifact, but in the individual's own expectation of what the artifact or system will bring forth. He would be horrified by Glass, because it "sets in order" our topological spaces, objectifying them and rendering them into information. The optimist in me would disagree, but only with the caveat that engaging with the "technic fields" that an internet of things would emit must be a choice, and not a necessity. That is to say, it is the responsibility of the individual to actively engage and disengage at will, much like the somewhat Hyperborean user depicted in Nokia's Mixed Reality project.

Philosophically speaking, this type of technology potentially offers an augmented integration with our topologies. It highlights the importance of the physical spaces we occupy and the ways in which those spaces contribute to how and why we think the way we do. Used mindfully, such technologies will also allow us to understand the impact that our human presence has on our immediate environment (i.e. the room, house, building, etc. we occupy), and how those spaces affect the broader environments in which they are found. 

Now, will Glass just sit on my shelf from now on? No. I do have to say that more apps are being developed every day that increase the functionality of Glass. Furthermore, software updates from Google have made Glass much more responsive. So I will continue to experiment with them, and if the right update comes along with the right app, then I may, at some point, integrate them into my daily routine.

#Throughglass, however, the future is in the past-tense.


[I would like to express my appreciation and gratitude to Western State Colorado University and the faculty in Academic Affairs, who made this possible by providing partial funding for obtaining Glass, and to the faculty in my own department -- Communication Arts, Languages, and Literature -- for being patient with me as I walked through the halls nearly bumping into them. The cyborg in me is grateful as well.]




Friday, June 20, 2014

Looking #Throughglass, Part 2 of 3: Steel Against Flint, Sparking Expectation

In my last post, I discussed the practicalities of Google Glass and explained the temporal dissonance -- or "pre-nostalgia" -- I experienced while using it, and I left off questioning my own position regarding the potential cultural shift that Glass gestures toward. This post picks up on that discussion, moving toward the idea of the internet of things. If you haven't read the first part yet, it will definitely give this post some context ... and be sure to read the disclaimer!

I don’t think that Google was going for immediate, wide-scale adoption resulting in a sudden, tectonic paradigm shift with Google Glass.  I think if it had gone that way, Google would have been thrilled. Instead, I think there’s something much more subtle (and smart) going on.

While Apple is very good at throwing a technological artifact out there, marketing it well, and making its adoption a trend in the present, Google seems to be out to change how we imagine the future at its inception point. Glass potentially alters our expectations of how we evoke the technological systems we use, eventually creating an expectation of ubiquity -- even for those who don't have it. I've noticed that Google rolls out technological systems and applications that are useful and work well, but that also make one think, "Wow, now that I can do this, it would be even better if I could integrate it with that." And, at least in my experience, soon after (if not immediately), there's an app available that fulfills that need, albeit tentatively at first. And when an app maker really nails it, Google acquires them and integrates the app into its systems. For the Google-phobic, it is quite Borg-like.

And while resistance may be futile, it also sparks inspiration and imagination. It is the engine of innovation. I think that Glass wasn't so much a game-changer in itself as it was the steel against the flint of our everyday technological experiences. It was the first in a large-scale expeditionary force to map out the topography for the internet of things. In an internet of things, objects themselves are literally woven into the technological spectrum via RFID-like technology of varying complexity. I've written about it in this post, and there's also a more recent article here. By giving Glass this kind of "soft opening" -- not quite public, but not quite geared to hard-core developers either -- Google 1) allowed for even more innovation as people used Glass in ways engineers and developers couldn't foresee; but, more importantly, 2) made even non-users aware of a potential future where this system of use is indeed possible and, perhaps, desirable. It is a potential future in which a relatively non-intrusive interface "evokes" or "brings out" an already-present, ubiquitous technological field that permeates the topology of everyday life. This field is like another band of non-visible light on the spectrum, like infrared or ultraviolet. It can't be seen with the naked eye, but the right kind of lens will bring it out and make visible that extra layer that is present.

Google had been working on this with its "Google Goggles" app, which allowed the user to snap a picture with a smartphone, at which point Google would analyze the image and overlay relevant information on the screen. With Glass, however, the act of "projecting" or "overlaying" this information could potentially be smooth enough, fast enough, and intuitive enough to make it seem as if the information were somehow emanating from the area itself.

Now this is very important. In the current iteration of Glass, one must actively touch the control pad on the right temple of the frames. Alternatively, one can tilt one's head backward to a certain degree and Glass activates. Either way, the gesture is an evocative one: the user actively brings forth information. Despite the clunky interface, there is never a sense of "projection onto" the world. It is definitely more a bringing forth. As previously stated, most of Glass's functions are engaged via a voice interface. I think this is where the main flaw of Glass lies, but more on that in part three.

But, in a more abstract sense, all of Glass's functionality has the overall feel of tapping into an already-present technological field or spectrum that exists invisibly around us. There's no longer a sense that one is accessing information from "the cloud" and projecting or imposing that information onto the world. Instead, Glass potentially allows us to see that the cloud actually permeates the physical world around us. The WiFi or 4G networks are no longer conduits to information, but the information itself, which seems to be everywhere.

This is an important step in advancing the wide-scale cultural acceptance of the internet of things. Imagine iterations of this technology embedded in almost every object around us. It would be invisible -- an "easter egg" of technological being and control that could only be uncovered with the right interface. Culturally speaking, we have already become accustomed to such technologies with our cell phones. Without wires, contact was still available. And when texting, sending pictures, emails, etc. became part of the cell/smartphone experience, the most important marker had been reached: the availability of data, of our information, at any moment, from almost anywhere. This is a very posthuman state. Think about what happens when the "no service" icon pops up on a cell phone -- not from the intellectual side, but emotionally. What feelings arise when there is no service? A vague unease, perhaps? Or, alternatively, a feeling of freedom? Either way, this affective response is a characteristic of a posthuman modality. There is a certain expectation of a technological presence and/or connection.

Also at play are Bluetooth and home WiFi networking technologies, through which devices seem to become "aware of each other" and "connect" wirelessly -- augmenting the functionality of both devices and usually allowing the user to be more productive. Once a TV, DVR, cable/satellite receiver, or gaming console is connected to a home WiFi network, the feeling becomes even more pronounced. Various objects have a technological "presence" that can be detected by other devices. The devices communicate and integrate. Our homes are already mini-nodes of the internet of things.

Slowly, methodically, technologies are introduced which condition us to expect the objects around us to be "aware" of our presence. As this technology evolves, the sphere of locality will grow smaller and more specific. Consumers will be reminded by their networked refrigerator that they are running low on milk as they walk through the dairy aisle of a supermarket. Twenty years ago, this very concept would have seemed beyond belief. But now, it is within reach. And furthermore, we are becoming conditioned to expect it.
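
Here is a minimal sketch of that refrigerator scenario, with every device name, item list, and coordinate invented for illustration: the appliance publishes its low-stock items, and the shopper's phone raises the reminder only when its location enters a matching aisle's geofence.

```python
LOW_STOCK = {"milk"}  # hypothetically published by the networked refrigerator
AISLE_ITEMS = {"dairy": {"milk", "butter"}}          # aisle -> items stocked
AISLE_GEOFENCES = {"dairy": (41.0000, -105.1000)}    # aisle -> (lat, lon)

def near(pos, fence, tol=0.0005):
    """Crude geofence test: within a small box around the aisle."""
    return abs(pos[0] - fence[0]) < tol and abs(pos[1] - fence[1]) < tol

def on_location_update(pos):
    # Called by the phone whenever its position changes.
    for aisle, fence in AISLE_GEOFENCES.items():
        for item in LOW_STOCK & AISLE_ITEMS.get(aisle, set()):
            if near(pos, fence):
                print(f"Reminder: you're low on {item}.")

on_location_update((41.0001, -105.1002))  # walking through the dairy aisle
```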

Next up: explorations of connection, integration, and control, and -- in my opinion -- Glass's biggest weakness (hint: it has nothing to do with battery life or how goofy it looks). Go check out the final installment: "Risk, Doubt, and Technic Fields"

Tuesday, June 17, 2014

Looking #Throughglass, Part 1 of 3: Practicalities, Temporalities, and Pre-nostalgia

My Google Glass "review" of course became something else ... so I've broken it down into three separate entries. Part 1 looks primarily at the practical aspects of Glass, based on my own hands-on use. Part 2 will examine the ways in which Glass potentially integrates us into the "internet of things." Finally, Part 3 will be more of a meditation on the expectations that present technology like Glass instills, and on the topologies of interface.

And a bit of a disclaimer to any Glass power-users who may stumble upon this blog entry: I'm a philosopher, and I'm critiquing Glass from a very theoretical and academic perspective. So read this in that context. The technological fanboy in me thinks they're an awesome achievement.

Now, carry on.

I think the reason that my Google Glass entry has taken so long has nothing to do with rigorous testing, nor with some new update to its OS. It's a question of procrastination, fueled by an aversion to critiquing something I so badly wanted to like. I should have known something was up when, in every Google Glass online community in which I lurked, examples of how people actually used Glass consisted of pictures of their everyday lives, tagged "#throughglass." It became clear early on that I was looking for the wrong thing in Glass: something that would immediately and radically alter the way in which I experienced the world, and would more seamlessly integrate me with the technological systems I use. That was not the case, for two reasons: 1) the practical -- as a technological artifact, Glass's functionality is limited; and 2) the esoteric -- it caused a kind of temporal dissonance for me, in which its potential usurped its use.

I'll boil the practical issues down to a paragraph for those not interested in a more theoretical take on things. For me, Glass was a real pain to use -- literally. While I appreciate that the display was meant to be non-intrusive, its position in a quasi-space between my normal and peripheral vision created a lot of strain. It also didn't help that the display sits on the right side; unfortunately for me, my left eye is dominant, which could explain much of the eye strain I experienced. But still, having to look to my upper right to see what was in the display was tiring -- not to mention that the eye-positioning is very off-putting for anyone the wearer happens to be around. Conversation is instantly broken by the wearer's perpetual glancing to the upper right, which looks even odder to the person with whom one is speaking. The user interface consists of "cards" which can be swiped through using the touchpad on the right temple of Glass. The series of taps and swipes is actually very intuitive. But the lack of display space means that there is only a very limited virtual "desktop" at any given time, and the more apps that are open, the more swiping one has to do. Once Glass is active, the user "gets its attention" by saying "okay Glass," and then speaking various -- limited -- voice commands. The bulk of Glass's functionality is voice-based, and its voice recognition is impressive. However, there is a limited number of commands Glass will recognize. Glass is able to perform most of the functions of "Google Now" on a smartphone, but not quite as well, and it lacks a more intuitive visual interface through which to see the commands being performed. In fact, it seems to recognize fewer commands than Google Now, which was a difficult shift for me given my frequent use of the Google Now app. Battery life is minimal. As in, a couple of hours of heavy use, tops. One might be able to squeeze six out of it if used very, very sparingly.

On the plus side, the camera and video functionality are a genuine bright spot. Being able to snap pics hands-free (via a wink!) is very convenient, and as a Bluetooth headset tethered to a phone, Glass is quite excellent. It is also an excellent tool for shooting point-of-view pictures and video. I cannot stress enough that there are several potential applications for Glass across various professions. In the hospitality industry, the medical field, even certain educational settings, Glass would be a powerful tool, and I have no doubt that iterations of Glass will be fully integrated into these settings.

For my own use, practically speaking, Glass isn't. Practical, that is. No. It's not practical at all.  But in that lack of practicality lies what I see as Glass’s most positive asset: its recalibration of our technological expectations of integration, connection, and control.

Yes, in Glass we get a hint of what is to come. As a fan of all things Google, I think it was brave of them to be the first to make this technology available to the public. Why? Because no one who did this kind of thing first could ever hope to get it right. This is the type of technology forged in the paradoxical fires of disappointment from technological skeptics and fanatical praise from early adopters who at first forced themselves to use Glass because they had so much faith in it. Those true "Glass Explorers" (a term coined by Google) integrated Glass into their daily lives despite its limitations.

But as I started using Glass, I experienced a kind of existential temporal distortion. When I looked at this pristine piece of new technology, I kept seeing it through my eyes two to five years into the future. Strangely, one of the most technologically advanced artifacts I've ever held in my hands made me think, 'How quaint. I remember when this was actually cutting edge.' It was a very disorienting feeling, and I couldn't shake it; it only persisted the more I used it. I found myself thinking, 'Wow, this is clunky; how did people ever use this effectively?' I was experiencing the future in the present, but in the past tense.

Temporal dissonance. My #throughglass experience wasn't one of documenting the looks of curious strangers, or of my dog bounding about, or even of a tour of my office. Mine was pure temporal dissonance. The artifact felt already obsolete. As a tangible proof of concept, it had already dissolved itself into the intangible conceptual components that would be seamlessly integrated into other artifacts. #Throughglass, I was transported to the future, but only because this artifact felt like it was already a thing of the past. If you have an old cell phone around -- whether an early Android smartphone or an older flip phone -- take it out. Hold it. Then turn it on and try to navigate through its menus. That awkwardness, that odd, almost condescending nostalgia? That partially describes what I felt when I started using this advanced technology. And this was a new feeling for me. The only term I can think of to describe it is "pre-nostalgia."

Personally, there were other factors that worked against Glass. Aesthetically, I could not get over how Glass looked. For the amount of technology packed into it, I think the engineers did an excellent job of making it as non-intrusive as possible. But still, in my opinion, it looked positively goofy. I promised myself that I would only wear it around campus -- or in certain contexts. But there really isn't a context for Glass ... yet. Until a company or an industry begins wide-scale adoption of Glass (which will only come when developers create the right in-house systems around its use, such as integrating it into point-of-sale platforms for the hospitality industry, or into medical records systems for doctors), Glass will remain delightfully odd to some, and creepily off-putting to others. I wonder if the first people who wore monocles and then eyeglasses were looked upon as strangely as those who wear Glass in public today? Probably.

Honestly, this aspect really disturbed me. Was it just vanity stopping me from wearing it? When I did wear it in public, most people were fascinated. Was I just being too self-conscious? Was I becoming one of those people who resists the new? Or was I simply never meant to be in the avant-garde, not psychologically ready to be at the forefront of a cultural shift?

Some possible answers to that in Part 2, "The Steel Against the Flint, Sparking Expectation."

Monday, January 20, 2014

The Internet of Things and the Great Recalibration

I've been playing catch-up since my tenure application and my class preps for the Spring semester, but I've finally been able to re-engage with my usual sites, and all of the fantastic content in my Google+ communities.

One thing that's been coming up in various iterations is the concept of the "internet of things." In a nutshell, the term loosely (and, I think, perhaps a little misleadingly) refers to the technological interconnectivity of everyday objects -- clothes, appliances, industrial equipment, jewelry, cars, etc. -- now made possible by ever-smaller microprocessors. The idea has been around for quite some time and has been developing steadily, even if the general public has been largely unaware of it. RFID chips in credit cards, black boxes in cars, even traffic sensors and cameras: they have all been pinging beneath our general perception for years -- almost like a collective unconscious. But now, various patterns and developments have aligned to bring the concept itself into public awareness. While WiFi, or even internet access, is far from ubiquitous, we are becoming "connected enough" for these technologies to gain traction and -- as Intel, Google, and a host of other tech companies hope -- become something we expect. And I believe it is this expectation of connectedness which will once and for all mark the end of an antiquated notion of privacy and anonymity.
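To make the idea of a "connected" everyday object concrete, here is a minimal sketch in Python of a single thing quietly reporting its state. The device name, endpoint, and payload shape are all invented for illustration -- real deployments vary wildly:

```python
# A minimal sketch of one "thing" on the internet of things: a sensor that
# periodically reports its state to some collection endpoint. The URL,
# device ID, and payload shape are hypothetical.
import json
import time
import urllib.request

DEVICE_ID = "thermostat-42"                  # hypothetical device
ENDPOINT = "https://example.com/telemetry"   # hypothetical collector

def read_temperature():
    # Stand-in for an actual sensor read.
    return 21.5

def publish(reading):
    payload = json.dumps({
        "device": DEVICE_ID,
        "temp_c": reading,
        "ts": time.time(),
    }).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)  # fire-and-forget telemetry
    except OSError:
        pass  # a lost ping; the thing simply tries again next cycle

while True:
    publish(read_temperature())
    time.sleep(60)  # one quiet ping a minute, beneath our general perception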

Yes, I know. Snowden. The NSA. Massive black and grey operations poring through every text we send, every dirty little Snap we take, every phone call we make, and every email we send. But I believe the bluster and histrionics people are going through are actually the death-throes of an almost Luddite conception of what "privacy" and "information" actually are.

This thought came to me long ago, but I wasn't able to really articulate it until this past semester, when I was covering Kant in my intro to philosophy course. In the landscape of Western philosophy, Kant created a seismic shift with a very subtle, even elegant, yet really sneaky rearticulation of one specific philosophical concept: a priori knowledge. Instead of characterizing a priori knowledge as an innate concept, like infinity or freedom, he presented it as an innate capacity or ability. That is to say, the concept of "freedom" isn't itself a priori, but our capacity to reason about it is. Of course, it's more complicated than that, but generally speaking, my students come to realize that Kant essentially recalibrated the spectrum of a priori/a posteriori knowledge. And Western philosophy was never the same again. The potential relativism of empiricism was contained, while the solipsisms of rationalism were dissipated.

I believe we are witnessing a similar seismic shift in our conception of what information is and, by extension, of what we consider to be "private." Only history will be able to determine whether this shift was a leap or an evolutionary creep forward. Regardless, I'm hoping that as more material objects become woven into the fabric of the data cloud, the process will recalibrate people's thoughts on what exactly information is -- more specifically, on how that information doesn't "belong" to us.

Our information is as susceptible to "loss" or "destruction" as our bodies are. Our information can degrade just as our bodies can. We can "protect" "our" information only insofar as we can protect our bodies from various dangers. Granted, the dangers can be very different; however, we have as much chance of keeping our information private as we have of keeping our "selves" private. Of course, biologically, in the phenomenal world, we can live "off the grid" and stay as far away from others as possible. But the cost is paranoia and a general distrust of humanity: essentially, a life of fear. Similarly, there is no way to completely protect our information without also withdrawing it completely from a technified world. But again, at what cost? I think it's one similar to that paid by all of those who sit in their compounds, armed to the teeth, waiting for a collapse of civilization that will never come.

The internet of things, as it evolves, will slowly grow our expectations of connectivity. We will opt in to smart cars, clothes, houses ... and I'm sure one day trees, forests, animals ... that seem to intuitively adapt to our needs. From the dawn of time, we have altered the physical world to our needs. What we see happening today is no different, except that we now have a discourse through which to self-reflexively question our own motives. I've always wondered whether there was some kind of "cusp generation" of early humans who distrusted cultivation and agriculture as a ceding of human power to nature itself -- an old hunter watching his grandchildren planting things, thinking they were putting too much faith, reliance, and attention in dirt, and, probably, that the things they grew would somehow eventually kill them. (And I'm sure the paleo-Luddite felt a sense of pure satisfaction whenever someone choked to death on a vegetable, or got food poisoning.)

Our expectations of connectivity will overcome our attachment to "private" information. The benefits will outweigh the risks, just as the benefits of going outside outweigh those of being a hermit.

I'm not saying that we should start waving around our social security numbers or giving our bank account numbers to foreign princes who solicit us over spam. We don't walk into a gang zone waving around cash, or dangle our children in front of pedophiles. We must protect our "information" as much as we can, while realizing that reasonable safeguards do not -- by any stretch of the imagination -- equal anonymity. If we wish to be woven into an internet of things, then we must more actively recalibrate our notions of "privacy" and even "anonymity." And given the historical development of civilization itself, we will cede aspects of privacy or invisibility in order to gain a greater sense of efficacy. An internet of things that more efficiently weaves us into the world of objects will heighten that sense of efficacy. It already has. When our cars customize themselves for us as we open the door, or when our houses adjust all manner of ambient conditions to our liking, or even when Google autocompletes our searches based on our location or past searches, our sense of efficacy is heightened -- as is our sense of expectation.

As for what this recalibration brings, I believe it will -- like other technological developments -- be part of a larger field of advancements which will allow us to become more ontologically ready for even bigger leaps forward. Perhaps after a few decades of a more widespread, almost ubiquitous internet of things, the emergence of an AI will actually seem natural to us. In the more immediate future, I think it will ease fears around various transhuman values; augmentation of our biology will not be as threatening to some as it might be today.

In any movement there is an avant-garde -- literally the "advance guard" or "fore-guard" -- the innovators and dreamers who experiment and push ahead. And often, like Kant, they allow cultures to recalibrate their expectations and values, and to rethink old notions and standards. Each time we use a credit card, click "I agree" on a terms-of-service box, or sign in to various web accounts, we're pushing that advance ever forward ... and that's not a bad thing.