Category Archives: Tech Futurism

Games That Play Themselves

February 19, 2015 at 12:36 pm (One Comment)

A few days ago a new iOS app called Dreeps landed in my news feed, heralded with headlines like “Maybe the Laziest RPG You Could Ever Play” and “A Video Game That Plays Itself.”  Dreeps is an app where a little robot boy goes on an adventure, Japanese RPG style.  You set an alarm to tell him to rest, and that’s it.  When the alarm goes off, he gets up and gets on with his adventure, fighting monsters and meeting NPCs.  There’s pixel art and chiptune audio.  Dialog is word balloons with squiggly lines for text.  It’s all very atmospheric.  You don’t really do anything but watch when you want, and suggest he get up when he’s resting after a fight.

Dreeps is a lot like Godville, a game I talked about in a post about Pocket Worlds back in 2012.  They’re games that appear (depending on the implementation) to be running and progressing even when you’re not around.  While Godville does its magic with text, Dreeps has neat graphics and sound.  They’re essentially the same game, though.  A singular hero you have slight control over goes on a quest.  In Godville it’s for your glory (since you’re their god); in Dreeps it’s to destroy evil (I think).


Both Dreeps and Godville are passive entertainment experiences; they’re worlds that are all about you, but not really games you play.  They’re games you experience, or perhaps we need a new word for this kind of thing.  While books and TV shows and music (although not playlists, as we’ve seen with Pandora) are hard to create for just one person’s unique enjoyment, games are great at that.  They can take feedback and craft an experience just for you, and as we build more complex technology and access more external datasets, they can get even more unique.

Imagine a game like Dreeps where the other characters (or maybe even the enemies) are modeled algorithmically after your Facebook friends (or LinkedIn contacts).  Take their names, mash them through a fantasy-name-izer, do face detection and hue detection to pick hair color and eye color, maybe figure out where they’re from (geolocated photos, profile hometowns or checkins) for region-appropriate clothing.  Weather from where they are, or where your friends live, maybe playing on an appropriate map.  You could even use street view and fancy algorithms to identify key regional architectural elements and generate game levels that ‘feel’ like the places they live.  That starts to get pretty interestingly personalized, though much less predictable.
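
To make the name-mashing part concrete, here’s a toy sketch in Python of a hypothetical fantasy-name-izer.  The syllable tables are invented for illustration, and hashing the input keeps the mapping deterministic, so the same friend always becomes the same character:

```python
import hashlib

def fantasy_name(real_name: str) -> str:
    """Deterministically mash a real name into a fantasy-sounding one (toy sketch)."""
    prefixes = ["Ael", "Bran", "Cor", "Dra", "Eld", "Fen", "Gal", "Hro"]
    middles = ["and", "ior", "uth", "eva", "ask", "orn", "ill", "ume"]
    suffixes = ["ir", "wyn", "ak", "iel", "os", "eth", "ar", "un"]
    # Hash the (case-normalized) name so the mapping is stable across sessions.
    h = hashlib.sha256(real_name.lower().encode()).digest()
    return prefixes[h[0] % 8] + middles[h[1] % 8] + suffixes[h[2] % 8]

print(fantasy_name("Alice Johnson"))
```

The face-detection and architecture-analysis pieces are much heavier lifts, but they’d slot in the same way: deterministic functions from a friend’s public data to game-world parameters.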

Mike Diver over at Vice posted an article about Dreeps titled “I Am Quite OK With Video Games That Play Themselves,” where his main point was that he’s figured out that he’s actually bad at games, and it’s nice to have something where you can enjoy the progression without worrying about your joystick skills.  Maybe Mike should spend more time with Animal Crossing, a game series I think Dreeps shares a lot of DNA with.  In Animal Crossing your character inhabits a town that progresses in real-time.  You can go fishing and dig up treasure and pick fruit and talk to the other inhabitants in your little village, but the world keeps going when you’re not playing, so if you leave it alone for a long time, you come back to a game that’s progressed without you (with the game characters wondering where you’ve been).  Dreeps is like that, but without the active user participation.  It’s like a zen Farmville.  Take out the gamification, add in some serenity.

It feels like Dreeps could be a really fantastic lock-screen-game, if that’s a thing.  You nudge your phone awake, and see your guy trudging along.  He’s always there, in a comforting, reassuring, living way.  Maybe Samsung or someone with some great cross-vertical reach could implement lock-screen or sleep-screen as a platform across TVs, phones, tablets, fridges, etc.  That’d be something.

I was talking to a friend of mine about these kinds of games yesterday, pondering where this is headed, and I mentioned that the experience almost feels like an Ecosphere.  Ecospheres are those totally enclosed ecosystems where, aside from providing a reasonable temperature and sunlight, you’re a completely passive observer.  There’s something nice about walking by and peeking in on it every once in a while.  Something comforting about knowing that even when you’re not watching, it’s going on about its fantastically complex business without you.  But there’s also a spiritual weight to it, because it’s a thing that could cease to exist.  I could cover the Ecosphere with a sheet or leave it out in the cold.  I could delete Godville or Dreeps from my phone, or have my phone stolen, unable to retrieve my little robotic adventurer.

It isn’t a huge weight now that we carry with these sorts of things.  In fact, I stopped checking in on my Godville character a few months ago, after over a year of nearly daily care.  Sometimes you just lose the thread.  But these systems are going to become more complex, more compelling.  They’re going to have more pieces of ourselves in them.  How would I feel if a friend of mine was a major character in Dreeps, always showing up to help me out, and then he died in real life?  What if Dreeps decides to shutter their app, or not release an upgrade for the new phone I get after that?  Would I leave my device plugged in, forever stuck at iOS whatever, just so the experiences could keep going?  The Weavrs I created for myself back in 2012 are gone, victims of this onward march of technology and the unportability of complex cloud-based systems.  I’m fortunate that I never got too attached.  Dreeps is an app, but there’s still a lot there outside of my control.

I’m particularly interested in where this stuff intersects with physical objects.  Tamagotchis are still out there, and we’re building hardware with enough smarts to be able to create interesting installations.  There’s an Austin Interactive Installation meetup I keep meaning to go to that’s probably full of folks who would have great ideas about this.  Imagine a pico-projector or LCD screen and a Raspberry Pi running a game like Dreeps, but with the deep complexity and procedural generation systems of Dwarf Fortress.  Maybe a god game like Populous, with limited interaction.  You’d be like Bender in Godfellas, watching a civilization grow.  Could that sit in your home, on your desk or by the bookshelf, running a little world with little adventurers for years and years?  Text notifications on your phone when interesting things happened.  A weekly email of news from their perspective?  As it sat on your desk for longer, would it be harder and harder to let go of?  When your kids grew up, would they want to fork a copy and take it with them?

Four years ago there were no low-power, GPU-sporting Raspberry Pis or globally interconnected Nest thermostats or dirt-cheap tablet-sized LCD screens or PROCJAM.  Minecraft was still in alpha, the indie game scene hadn’t exploded, the App Store was still young, and procedural content generation was a niche thing.  Now all those pieces are there, just waiting to be plugged together.  So who’s going to be the first one to do it?

Magical Objects: The Future of Craft

September 30, 2013 at 4:57 pm (One Comment)

Of the thousands of pictures I’ve taken since I got into photography, there are only a few on display in my house.  Only one of them is what you might call professionally framed.  It’s that one, to the right.  It was taken in Marken, Netherlands, on the Wandelroute Rond Marken Over de Dijk.  Not exactly here, but close by, on a little path at the edge of an island next to the ocean.  The thing is, it isn’t a photograph.  It looks like a photograph, but it’s actually a panorama, digitally spliced together from half a dozen shots.  It’s a photograph, re-interpreted by software.  And it could be the first step on the road to something new.

Ode to a Camera Gathering Dust

A few weeks ago I read a blog post by Kirk Tuck talking about the recent drop in camera sales, and the general decline of photography as a hobby.  Kirk’s assertion was that when a lot of us got into photography, gear made a big difference.  There was the high end to yearn for, but with the right skill and tricks you could make up for it.  There were good sized communities online where you could share photos with other people in the same spot, and you were all getting a little better.  It was something you could take pride in.  Now all the gear is great.  Your cell phone camera is great.  It’s hard to stand out.  Everyone has read the same tutorials, everyone can do HDR and panoramas.  They can even do them in-camera with one button.  And as photography goes, so goes video.

For a while I thought that 3d printing and the maker movement might be a little like photography.  There’s plenty of gear to collect, and it can make a big difference in the final product, but skill and technique and creativity still count for a lot.  Now I’m leaning towards 3d printing and the maker movement really being a rediscovery of the physical after the birth of the age of software.  Before personal computers ate the world you could still find plenty of folks who knew about gear ratios and metallurgy and who’d put together crystal radios when they were kids.  I grew up in the 80s, and I don’t know anything about either of those things, but I was diagnosing IRQ conflicts before I liked girls.  So the maker movement is kind of new, and photography is kind of past the curve, so what’s new-new?  What’s going to eat our time and interest and energy and fill our walls and display shelves next?  What are we going to collect and tinker with and obsess over?

Beautiful, New Things

It’s been said that we’re all in the attention game now.  Attention is currency.  In an indirectly monetized world it’s what people have to give.  When you create something, you’re vying for that bit of attention.  Given that, I think we’re looking at the birth of a new kind of craft, and a new kind of object.

Let’s call them magical objects: Objects that use software and computation to break or make irrelevant their inherent limitations, for the purpose of entertaining or informing.  They’re objects that use software to amplify their Attention Quotient.  (AQ, is that a thing? It should be.)

First, I’d like you to look at a video that hit a few days ago, Box.  It’s what happens when you combine a bunch of creative folks, some big robot arms, projectors, cameras, and a whole bunch of software.

That’s pretty awesome, right?  Not really practical for your house, but pretty.  Let’s find something smaller, something more intimate.  Maybe something more tactile.  Something like… a sandbox…

Ok, now we’re getting somewhere.  It’s a sandbox that reacts to your input.  The software and the projectors and the cameras make the sandbox more than just a sand table with some water on it; the whole thing becomes an application platform, with sand and touch as its interface.  The object becomes magical.  When you look at a sandbox, you know what it can do.  When you look at an augmented sandbox, you don’t know what it does.  You have to play with it.  You have to explore.  It has a high attention quotient.
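
A minimal sketch of the projection side: assuming a depth camera gives you normalized sand heights, mapping each height to a color band is enough to turn the table into terrain.  The bands and colors below are arbitrary choices for illustration, not the actual augmented-sandbox implementation:

```python
def height_to_color(h: float) -> tuple:
    """Map a normalized sand height (0.0-1.0) to a projected RGB color:
    water -> shallows -> beach -> grass -> rock."""
    bands = [
        (0.25, (0, 60, 180)),     # deep water
        (0.40, (60, 130, 220)),   # shallows
        (0.55, (210, 190, 120)),  # beach sand
        (0.80, (60, 150, 60)),    # grass
        (1.01, (130, 130, 130)),  # rock (1.01 so h == 1.0 lands here)
    ]
    for top, rgb in bands:
        if h < top:
            return rgb
    return bands[-1][1]

# Dig a hole, the projector paints water; pile up sand, it paints rock.
print(height_to_color(0.1), height_to_color(0.9))
```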

These kinds of objects are going to proliferate like crazy in the next few years.  We’re already starting to see hints of it in iOS 7’s Parallax wallpaper.  The only reason that parallax wallpaper exists is to make your iDevice more magical.  It serves no other purpose than to use software (head distance, accelerometer movement tracking) to overcome the limitations of hardware (2d display), for the purpose of delighting the user (magic).

Kids These Days

So as we think about the future, let’s step back for a second, and think about the children.  At the Austin Personal Cloud meetup a few weeks ago I had a realization that everyone in the room was probably over the age of 30, and there were plenty over the age of 50.  We have to be really careful about prognosticating and planning the future, because the world that we see isn’t the world that those in their teens and 20s see.  They have different reference points, and they’re inspired by different things.  I’ve written before about Adventure Time and The Amazing World of Gumball as training for future engineers.  But it occurs to me that when it comes to magical objects, we only need to look at the name to tell us where the inspiration for the next generation will spring.

Part of the thing that makes Harry Potter’s world wonderful is that things are more than they appear.  A car isn’t just a car, a hat isn’t just a hat, and a map isn’t just a map.  For all the plot-driving magical objects in Harry Potter like the Time Turner, there are plenty of wandering portraits, chocolate frog trading cards, and miscellaneous baubles.  They amp up the attention quotient of the world.  Maybe they’re the reason we don’t see Harry and Hermione checking Facebook all day, or maybe they just have awful coverage at Hogwarts.

My daughter’s about to turn 2, and her newest discovery is that if she holds a cup to her ear, it kind of sounds like the ocean.  After I showed her that, she held the cup to her ear for a good 20 minutes.  I hold the cup up to my ear, and I hear science.  She holds the cup to her ear, and she hears magic.  Her eyes are wide, and she says, “Ocean!” over and over.

We can make these magical objects now, and we have a generation that would love more meaningful interaction from physical things.  We just need to start assembling the bits and deciding on a few simple standards so we can create ecosystems of art.  We don’t have magic, but we have something that’s nearly as good.  We have software…

That’s a documentary about Processing.  You don’t need to watch the whole thing, but it’s pretty, and interesting. Processing is a programming language for visual arts.  Usually those interesting visual things live on a screen, or through a projector in space or on a building.  They rarely live in your house.  But they could, and they could be really cool.

Wherein We Sketch Out the Future

I think that by combining the artistic software movement, emergent behavior fields like procedural game world generation, and a little bit of hardware hacker know-how, we can create a new type of thing.  A magical, home object.  Let’s look at one…

So this is a thing.  Literally a back-of-an-envelope sketch.  It’s a bowl, or a box, with an arm extending over it.  In the bowl is sand, or perhaps something more pure-white but still eco-friendly and non-toxic.  At the end of the arm is a little pod with two cameras in it, for stereoscopic 3D, and a pico projector.  Maybe there’s even another projector pointing up out of it.  Under the bowl is the descendant of a Raspberry Pi, or a Beaglebone Black, or something like it.  It lives on a side table or end table in your house.

This magical device runs programs.  The programs use the sand (or whatever you put under the arm) as an interface.  It can recognize other objects, maybe little shovels or pointers or what have you.  Maybe simple programs are like our virtual sandbox above.  Maybe it’s like a bonsai, but instead of a virtual tree, it runs a simulation of an ecosystem.  Dig out your valleys and pile up your mountains, and see trees grow, animals roam the steppes, birds fly…  Maybe you can even run a game on that, like Populous, but instead of looking into the screen you can walk around it and touch it.  You can watch your little minions wander around the landscape.  Maybe you can talk to it.  Maybe it’s like the asteroid that hits Bender in Futurama’s Godfellas episode, like Black and White but designed for the long haul.  Maybe when I’m not running my civilization on it, it plays selections from a feed of cool Processing visualizations across my ceiling.
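
A toy version of those wandering minions, assuming the terrain is just a grid of heights; real versions (Dwarf Fortress, Populous) are vastly deeper, but the core loop is about this simple:

```python
import random

def step_minions(heightmap, minions, rng):
    """One tick of a toy god-game: each minion wanders to a nearby cell,
    preferring lower ground, so over time they settle in the valleys you dig."""
    rows, cols = len(heightmap), len(heightmap[0])
    new_positions = []
    for r, c in minions:
        neighbors = [(r + dr, c + dc)
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if 0 <= r + dr < rows and 0 <= c + dc < cols]
        # Sort by terrain height and pick randomly among the three lowest.
        neighbors.sort(key=lambda p: heightmap[p[0]][p[1]])
        new_positions.append(rng.choice(neighbors[:3]))
    return new_positions

# A seeded RNG makes the wandering reproducible for testing.
rng = random.Random(7)
terrain = [[0, 1, 2], [1, 2, 3], [2, 3, 4]]
minions = [(2, 2)]
for _ in range(5):
    minions = step_minions(terrain, minions, rng)
print(minions)
```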

Back to the Beginning

I’m sure there will be all kinds of form factors for these magical objects.  They’ll come in pocket-sized compacts, or ceiling projectors, or robotically controlled room projectors (imagine a bunch of tiny Disney-esque mice that live in your house, but are only projected onto the walls and floorboards, not actually chewing through them).  Or maybe it’s like my photo of Marken, in a frame on the wall, except that it’s based off a video clip, or some software analyzes the scene and says, “Hey, this is grass, let’s make it wave a little, and these are clouds, so they should float by, and this is a sailboat, so it should drift back and forth.”  And maybe, if you lean in really close, you can hear the ocean.

Building a Personal Cloud Computer

September 13, 2013 at 12:21 pm (2 Comments)

Wednesday I presented a talk at the Austin Personal Cloud meetup about Building a Personal Cloud Computer.  Murphy was in full effect, so both of the cameras we had to record the session died, and I forgot to start my audio recorder.  I’ve decided to write out the notes that I should have had, so here’s the presentation as if it had been read.

Personal Cloud Meetup Talk.001

In this presentation we’re talking about building a personal cloud computer.  This is one approach to the personal cloud; there are certainly others, but this is the one that has been ringing true to me lately.

Personal Cloud Meetup Talk.002

A lot of what people have been talking about when they speak about the personal cloud is really personal pervasive storage.  These are things like Dropbox or Evernote.  It’s the concept of having your files everywhere, and being able to give permission to things that want to access them.  Think Google Drive, as well.

These concepts are certainly valid, but I’m more interested in software, and I think computing really comes down to running programs.  For me, the personal cloud has storage, but its power is in the fact that it executes programs for me, just like my personal computer at home.

That computer in the slide is a Commodore +4, the first computer I ever laid fingers on.

Personal Cloud Meetup Talk.003

Back then, the idea of running programs for yourself still appealed to the dreamers.  They made movies like TRON, and we anthropomorphized the software we were writing.  These were our programs doing work for us, and if we were just smart enough and spent enough time at it, we could change our lives and change the world.

Personal Cloud Meetup Talk.004

This idea isn’t new; in fact, AI pioneers were talking about it back in the 50s.  John McCarthy was thinking about it back then, as Alan Kay relates when he talks about his 3rd age of computing:

They had in view a system that, when given a goal, could carry out the details of the appropriate computer operations and could ask for and receive advice, offered in human terms, when it was stuck. An agent would be a ‘soft robot’ living and doing its business within the computer world.

That’s been the dream for a long time…

Personal Cloud Meetup Talk.005

But that never really happened.  The personal computer revolution revolutionized business, and it changed how we communicated with each other, but before the Internet things didn’t interconnect to the point where software could be a useful helper, and then we all went crazy making money with .com 1.0 and Web 2.0, and it was all about being easy and carving out a market niche.  Then something else hit…

Personal Cloud Meetup Talk.006

Mobile exploded.  If you’ll notice, mobile applications never really had an early adopter phase.  There was no early computing era for mobile.  You could say that PDAs were it, but without connectivity that isn’t the same as the world we have now.  Most developers couldn’t get their app onto a mobile device until the iOS app store hit, but that platform was already locked down.  There was no experimentation phase with no boundaries.  We still haven’t had the ability to have an always-connected device in our pocket that can run whatever we want.  The Ubuntu phones may be that, but we’re 6 iterations into the post-iPhone era.

Personal Cloud Meetup Talk.007

And who doesn’t love mobile?  Who doesn’t love their phone?  They’re great, they’re easy to use, they solve our problems.  What’s wrong with them?  Why do we need something else?  Well, let’s compare them to what we’ve got…

Personal Cloud Meetup Talk.008

With the PC we had a unique device insofar as we owned the hardware, we owned our data, and, EULA issues aside, we owned the software.  You could pack up your PC, take it with you to the top of a mountain in Nepal, and write your great novel or game or program, with no worries about someone deactivating it or the machine being EOLed.  Unfortunately the PC is stuck at your house, unscalable, badly networked, loaded with an OS that was designed for compatibility with programs written 25 years ago.  It isn’t an Internet-era machine.

With the web we got Software as a Service (SaaS), and with this I’m thinking about the Picasas and Flickrs and Bloggers of the world.  No software to maintain, no hardware to maintain, access to some of your data (but not all of it: Flickr only gave you traffic metrics if you paid, and export rights only while you were paid up).  But in this new world you can’t guarantee your continuity of experience.  Flickr releases a redesign and the experience you’ve depended on goes away.  The way you’ve organized and curated your content no longer makes sense.  Or maybe, as in the case of sites like Gowalla, the whole thing just disappears one day.

Mobile has its own issues.  You often don’t own the hardware; you’re leasing it, or it’s locked up and difficult to control.  You can’t take your phone to another provider, you can’t install whatever software you want on it.  Sometimes it’s difficult to get data out.  How do you store the savegame files from your favorite iPhone game without a whole-device snapshot?  How do you get files out of a note-taking app if it doesn’t have Dropbox integration?  In the end, you don’t even really own a lot of that software.  Many apps only work with specific back-end services, and once your phone gets older, support starts to disappear.  Upgrade or throw it in the junk pile.

Cloud offers us new options.  We don’t have to own the hardware, we can just access it through standards compliant means.  That’s what OpenStack is all about.  OpenStack’s a platform, but OpenStack is also an API promise.  If you can do it with X provider, you can also do it with Y provider.  No vendor lock-in is even one of the bullet points on our homepage at HP Cloud.

Implicit in cloud is that you own your own data.  You may pay to have it mutated, but you own the input and the output.  A lot of the software we use in cloud systems is either free, or stuff that you own (usually by building it or tweaking it yourself).  It’s a lot more like the old PC model than Mobile or SaaS.

Personal Cloud Meetup Talk.009

All of these systems solve specific types of problems, and for the Personal Cloud to really take off, I think it needs to solve a problem better than the alternatives.  It has to be the logical choice for some problem set.  (At the meetup we spent a lot of time discussing exactly what that problem could be, and if the millennials would even have the same problems those of us over 30 do.  I’m not sure anyone has a definitive answer for that yet.)

Personal Cloud Meetup Talk.010 Personal Cloud Meetup Talk.011

This is what I think the Personal Cloud is waiting for.  This explosion of data from all our connected devices, from the metrics of everything we do, read, and say, and what everyone around us says and does.  I think the Personal Cloud has a unique place, being Internet-native, as the ideal place to solve those problems.  We’re generating more data from our activities than ever before, and the new wave of Quantified Self and Internet of Things devices is just going to amplify that.  How many data points a day does my FitBit generate?  Stephen Wolfram’s been collecting personal analytics for decades, but how many of us have the skill to create our own suite of tools to analyze it, like he does?
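For a rough sense of scale, a back-of-the-envelope count in Python.  The sampling rates here are assumptions for illustration, not actual FitBit specs:

```python
# Toy estimate: how many data points a day might one wearable generate?
sensors = {
    "steps (one reading per minute)": 60 * 24,
    "heart rate (one reading per 5 seconds)": (60 // 5) * 60 * 24,
    "altitude (one reading per minute)": 60 * 24,
}
total = sum(sensors.values())
for name, count in sensors.items():
    print(f"{name}: {count:,} points/day")
print(f"total: {total:,} points/day")  # ~20,000 points a day from one device
```

Multiply that by every device and service you touch, and the case for a personal system that can actually process the stream gets obvious fast.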

Personal Cloud Meetup Talk.012

The other play the Personal Cloud can make is as a defense against the productization of you.  Bruce Sterling was talking about The Stacks years ago, but maybe there’s an actual defensive strategy against just being a metric in some billion-dollar corporation’s database.  I worked on retail systems for a while; it wouldn’t surprise me at all if, based on the order of items scanned out of your cart at Target (plus some anonymized data mining from store cameras), they could re-construct your likely path through the store.  Track you over time based on your hashed credit card information, and they know a whole lot about you.  You don’t know a whole lot about them, though.  Maybe the Personal Cloud’s place is to alert you to when you’re being played.

Personal Cloud Meetup Talk.013

In the end I think the Personal Cloud is about you.  It’s about privacy, it’s about personal empowerment.  It’s uniquely just about you and your needs, just like the Personal Computer was personal.  But the PC can’t keep up, so the Personal Cloud Computer will take up that mantle.

Personal Cloud Meetup Talk.014 Personal Cloud Meetup Talk.015

The new dream, I think, is that the Personal Cloud Computer runs those programs for you, and acts like your own TRON.  It’s your guardian, your watchdog, your companion in a world gone data mad.  Just like airbags in your car protect you against the volume of other automobiles and your own lack of perfect focus, so your Personal Cloud protects you against malicious or inconsiderate manipulation and your own data privacy unawareness.

Personal Cloud Meetup Talk.016

To do this I think the Personal Cloud Computer has to play a central role in your digital life.  I think it needs to be a place that other things connect to, a central switching station for everything else.

Personal Cloud Meetup Talk.017

And I think this is the promise it can fulfill.  The PC was a computer that was personal.  We could write diary entries, work on our novel for years, collect our photos.  In the early days of the Internet, we could even be anonymous.  We could play and pretend, we could take on different personas and try them out, like the freedom you have when you move to a new place or a new school or job.  We had the freedom to disappear, to be forgotten.  This is a freedom that kids today may not have.  Everything can connect for these kids (note the links to my LinkedIn profile, Flickr Photos, Twitter account, etc in the sidebar), though they don’t.  They seem to be working around this, routing around the failure, but Google and others are working against that.  Facebook buys Instagram because that’s where the kids are.  Eventually everything connects and is discoverable, though it may be years after the fact.

Personal Cloud Meetup Talk.018

So how do I think this looks, when the code hits the circuits?  I think the Personal Cloud Computer (or ‘a’ personal cloud computer) will look like this:

  • A Migratory – Think OpenStack APIs, and an orchestration tool optimized for provider price/security/privacy/whuffie.
  • Standards Compliant – Your PCC can talk to mine, and Facebook knows how to talk to both.
  • Remotely Accessible – Responsive HTML5 on your Phone, Tablet and Desktop. Voice and Cards for Glass.
  • API Nexus – Everything connects through it, so it can track what’s going on.
  • with Authentication – You authenticate with it, Twitter authenticates with it, you don’t have a password at Twitter.
  • Application Hosting – It all comes down to running Apps, just like the PC.  No provider can build everything, apps have to be easy to port and easy to build.
  • Permission Delegation – These two apps want to talk to each other, so let them.  They want to share files, so expose a cloud storage container/bucket for them to use.
  • Managed Updates – It has to be up to date all the time, look to Mobile for this.
  • Notifications – It has to be able to get ahold of you, since things are happening all the time online.
  • and Dynamic Scaling Capabilities – Think spinning up a hadoop cluster to process your lifelog camera data for face and word detection every night, then spinning it down when it’s done.
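
To make the “API Nexus with Permission Delegation” bullets concrete, here’s a toy Python sketch (all names hypothetical): apps register with the nexus, the user grants pairwise permissions, and because everything routes through one place, the nexus can audit all of it:

```python
class PersonalCloudNexus:
    """Toy sketch of an API nexus: it mediates, and logs, who may talk to whom."""

    def __init__(self):
        self.apps = set()
        self.grants = set()   # (src_app, dst_app) pairs the user approved
        self.audit_log = []   # the nexus sees every exchange, so it can track them

    def register(self, app: str):
        self.apps.add(app)

    def grant(self, src: str, dst: str):
        if src in self.apps and dst in self.apps:
            self.grants.add((src, dst))

    def send(self, src: str, dst: str, payload: dict) -> bool:
        # In a real system the payload would be delivered; here we just gate it.
        allowed = (src, dst) in self.grants
        self.audit_log.append((src, dst, allowed))
        return allowed

nexus = PersonalCloudNexus()
nexus.register("photo-app")
nexus.register("backup-app")
nexus.grant("photo-app", "backup-app")
print(nexus.send("photo-app", "backup-app", {"file": "vacation.jpg"}))  # allowed
print(nexus.send("backup-app", "photo-app", {"file": "x"}))             # no grant
```

Grants are directional here on purpose: letting your backup app read photos shouldn’t automatically let it push anything back.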

Personal Cloud Meetup Talk.022

So how do we actually make this happen?  What bits and bobs already exist that look like they’d be good foundational pieces, or good applications to sit on top?

Personal Cloud Meetup Talk.023

No presentation these days would be complete without a mention of Docker, and this one is no different.  If you haven’t heard of Docker, it’s the hot new orchestration platform that makes bundling up apps and deploying lightweight Linux container images super-easy.  It’s almost a PaaS in a box, and has blown up like few projects before it in the last 6 months.  Docker lets you bundle up an application and run it on a laptop, a home server, in a cloud, or on a managed Platform as a Service.  One image, multiple environments, multiple capacities.  Looking at the Ubuntu Edge, Docker seems like a perfect way to sandbox applications iOS-style, but still give them what they need to be functional.

Personal Cloud Meetup Talk.024

Hubot is a chat bot, a descendant of the IRC bots that flourished in the 90s.  Hubot was built by Github, and was originally designed to make orchestration and system management easier.  Since they connect and collaborate in text-based chat rooms, Hubot sits in there waiting for someone to give it a command.  Once it hears a command, it goes off and does it, whether it be to restart a server, post an image or tell a joke.  You can imagine that you could have a Personal Cloud Computer bot that you’d say ‘I’m on my way home, and it’s pot roast night’ to, and it would switch on the air conditioner, turn on the TV and queue up your favorite show, and fire up the crock pot.
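
The dispatch loop at the core of a bot like this is tiny.  Here’s a hypothetical Python re-imagining (Hubot itself is CoffeeScript on Node), just to show the shape: regex patterns mapped to handlers, scanned against every chat line:

```python
import re

class ToyBot:
    """Minimal Hubot-style bot: regex patterns map chat lines to handlers."""

    def __init__(self):
        self.handlers = []  # list of (compiled_pattern, callback)

    def hear(self, pattern, callback):
        self.handlers.append((re.compile(pattern, re.IGNORECASE), callback))

    def receive(self, line):
        # First matching pattern wins; return its handler's reply.
        for pattern, callback in self.handlers:
            match = pattern.search(line)
            if match:
                return callback(match)
        return None

bot = ToyBot()
bot.hear(r"restart (\w+)", lambda m: f"restarting server {m.group(1)}...")
bot.hear(r"pot roast", lambda m: "firing up the crock pot")

print(bot.receive("hey bot, restart web01"))
print(bot.receive("I'm on my way home, and it's pot roast night"))
```

The community-scripts model falls out of this naturally: each script is just a few more `hear` registrations.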

Personal Cloud Meetup Talk.025

The great thing about Hubot, and the thing about these Personal Cloud Bots, is that like WordPress Plugins, they’re developed largely by the community.  Github being who they are, Hubot embraces the open development model, and users have developed hundreds of scripts that add functionality to Hubot.  I expect we’ll see the same thing with the Personal Cloud Computer.

Personal Cloud Meetup Talk.027

I’ve talked about Weavrs pretty extensively here on the blog before, so I won’t go into serious depth, but I think that the Personal Cloud Computer is the perfect place for something like Weavrs to live.  Weavrs are social bots with big-data-derived personalities; you can create as many of them as you like, and watch them do their thing.  That’s a nice playground to play with personalities, to experiment and see what bubbles to the top from the chaos of the internet.

Personal Cloud Meetup Talk.031

If you listen to game developers talk, you’ll start to hear about that initial dream that got them into game development, the dream of a system that tells stories, or tells stories collaboratively with you.  The Kickstarted game Sir, You Are Being Hunted has been playing with this, specifically with their procedurally generated British Countryside Generator.  I think there’s a lot of room for that closely personal kind of entertainment experience, and the Personal Cloud Computer could be a great place to do it.

Personal Cloud Meetup Talk.032

Aaron Cope is someone you should be following if you aren’t.  He used to be at Flickr, and is now at the Cooper-Hewitt Design Museum in New York.  His Time Pixels talk is fantastic.  Two interesting things Aaron has worked on are Parallel Flickr (a networkable backup engine for Flickr that lets you back up your photos and your contacts’ photos, but is API-compatible with Flickr) and privatesquare (a foursquare checkin proxy that lets you keep your checkins private if you want, or make them public).  That feels like a really great Personal Cloud app to me, because it plays to that API Nexus feature.


The Numenta guys are doing some really interesting stuff, and have open sourced their brain simulation system that does pattern learning and prediction.  They want people to use it and build apps on top of it.  We’re a long way away from real use, but it could lead to some cool personal data insights that you run yourself.  HP spent a bunch of money on Autonomy because extracting insights from streams of data has a lot of value; Numenta could be a similar piece for the Personal Cloud.


That’s the Adafruit Pi Printer; Berg has their Little Printer, and they’re building a cloud platform for these kinds of things.  These devices bring the internet into the real world in interesting ways, and there’s a lot of room for personal innovation.  People want massively personalized products, and the Personal Cloud Computer can be a good data conduit for that.


Beyond printers, we have internet-connected thermostats and doorknobs, and some of those service companies will inevitably go away before people stop using their products.  What happens to your wifi thermostat or wifi lightbulbs when the company behind them goes away?  The Personal Cloud lets you keep supporting that hardware going forward; it lets you maintain your own service continuity.


Having an always-on personal app platform lets us use interesting APIs provided by other companies to process our data in ways we can’t with open source or our own apps.  Mashape has a marketplace that lets you pick and switch between API providers, extending your Personal Cloud in interesting ways, like getting sentiment analysis for your Twitter followers.


In addition to stuff we can touch over the network, there’s a growing market of providers that let you trigger meatspace actions through an API.  TaskRabbit has an API, oDesk does, Shapeways does, and we haven’t even begun to scratch the surface of the possibilities that opens up.


One thing to watch is how the enterprise market is adapting to utility computing and the cloud.  The problems they have (marketplaces, managed permissions, security for apps that run on premises, big data) are problems all of us will have in a few years.  We can make the technology work for enterprises and startups, but for end users, we have to make it simple.  We have to iPhone it.


So where do we start?  I think we have to start with a just-good-enough minimum viable product that solves a real problem people have.  Early adopters pick up a technology that empowers or excites them in some way, and whatever Personal Cloud platforms appear, they have to scratch an itch.  This is super-critical.  I think the VRM work from Doc Searls is really interesting, but it doesn’t scratch an itch that I have today in a way I can comprehend.  If you’ve been talking about something for years, what usually happens is not that it eventually grows up; it’s that something radical comes out of left field that uses some of those ideas but doesn’t honor all of them.  That’s my opinion, at least.  My gut feeling is that the Personal Cloud community that’s been running the Internet Identity Workshop for years probably won’t be where the big new thing comes from, but a lot of their ideas will be in it.

The last caveat is that Apple and Microsoft and Google are perfectly positioned to make this happen easily, with vendor lock-in.  They all already do cloud.  They all have app stores.  They have accounts for you, and they want to keep you in their system.  Imagine an Apple App Store that goes beyond your iPhone, iPad and even Apple TV, and lets you run apps in iCloud.  That’s an easy jump for them, and a huge upending of the Personal Cloud world.  Google can do the exact same thing, and they’re even more likely to.

So thanks for your time, and for listening (reading).  If you have comments, please share them.  It’s an exciting time.

Adventure Time as Inspiration for Future Engineers

January 24, 2013 at 3:28 pm (One Comment)

My sister-in-law.  She’s holding a truck. I think.

My sister-in-law is 8 years old, and loves Adventure Time.  We spend a fair amount of time hanging out, which means I’ve seen a fair amount of Adventure Time, too.  I spend a lot of time thinking about future technologies: companion software bots, augmented reality, enveloping story universes.  A few months ago it struck me that Adventure Time and The Amazing World of Gumball are really effective at teaching the fundamentals of what life will be like in the future, assuming AR and bot trends continue like they have.  I’m sure it’s inadvertent, but by mashing up the media from their youth with current technology and idioms, the creators have produced really compelling content that predicts the future.

Augmented Reality, specifically additive AR where you wear glasses that display images laid over the real world, is looking like the next innovation frontier after the cell phone.  (There isn’t much innovation going on in the cell phone space that isn’t incrementally smaller, lighter, faster, or really a cloud software innovation.)  You have a set of glasses wirelessly connected to the internet, with cameras and intelligent software that detects objects or interprets landscape positions and can then project images into your eyeballs appropriately.  Mix that with some cloud-based software bot friends, and you get a view that might look something like this:


Speaking of software bot friends, Adventure Time does a great job of showing what a personal bot might be like.  Finn’s a human, the relatable entity in the story, but his best friend Jake is a talking dog who can stretch into nearly any shape (easy in AR) and knows all kinds of esoteric information about the strange world Finn finds himself in.  Like, I don’t know, he has access to the Internet or something.  The entire world is magical and gamified in a cel-shaded way.  You need exercise?  Why doesn’t Jake (your cloud-based software buddy) take you on an adventure to the world of the Tree People (walk to the park), where you can show off your awesome adventuring skills (climb the monkeybars)?


By defining an aesthetic for what cool things look like and what fun experiences are, the creators of these shows are guiding what our future will actually look like.  The kids watching these shows who grow up to design and build technology will be more likely to make this AR future, because it speaks to what originally inspired them, and the rest will fundamentally understand it, because the inspiration was part of their experience.

Kurzweil, Bot AI and the GoogleBoard

January 7, 2013 at 10:58 am (3 Comments)

A few weeks ago it was revealed that Ray Kurzweil, pioneer of OCR, speech recognition and AI assistance tools, had joined Google to work on machine learning and language processing projects. My initial reactions were excitement (Google knows the time is ripe for this to happen), cynicism (big name matchups like this rarely work out like they’re supposed to), and last night, during a 12 hour drive from Santa Fe to Austin, curious speculation.

These days it’s rare to truly disconnect. We have the internet floating through the air at home, swirling around our mobile devices as we drive around town. Even in remote places we can read ebooks or listen to our music. When you’re driving through the lonely landscape of New Mexico at 11pm on a Saturday night, with a car full of sleeping people… technology leaves you to your imagination.  So here’s my take on where a Google/Kurzweil mashup may take us.

The Stacks

Google’s an amazing company. I’m a die-hard Apple product user, but even I realize that Google’s better positioned for the next 50 years. They’ve spent the last 15 years assembling a mindbogglingly good technology base. While Apple has been great at forecasting what people will use and making beautiful, easy-to-use versions of that, Google has spent those years figuring out what impact technology’s going to have on people’s lives, and building all the foundational technologies to make it happen. They’ve spent a ton of money and time building technologies that are hard to replicate.

There’s been a lot of talk recently about the five stacks: Amazon, Apple, Microsoft, Google and Facebook. They like to wrap you up in their ecosystems, but in reality, Google’s the only one doing the whole thing.

Amazon doesn’t have a real search option, and generally doesn’t do deep technology development. They’re a lot like Facebook in this regard: agile, but not deep. They can give you a social experience, but they can’t really make your life better, only full of more content (Amazon) or better connected to your friends (Facebook).

Microsoft’s having trouble staying relevant; while they have a good foothold in the living room, mobile’s abysmal and nobody gets Windows 8. I don’t think they really have a vision for where they want to be as a company in 10 years. They just want people to keep buying Office.

So that leaves us with Apple and Google. Apple has great product design, but they aren’t a deep software technology company. They can design great experiences, but that’s only an advantage for so long. If someone else offers a device that fundamentally does something they can’t match, that someone else (Google) can eventually catch up in design and ease of use. Just look at Google’s Maps app for iOS. Not a skeuomorph to be found. I think Android’s still too complicated for my parents, but it’s obviously getting better.

So we’re left with Google. They have a great technology foundation, gobs of really smart people, and more and more experience making what they build easy to use. And now they’ve hired Ray Kurzweil. Why? Because they want to leverage all this amazing technology they’ve built to be your life AI assistant.

The Predictive Technologies

Let’s look at some technologies Google has built that will eventually be seen as the ancestors of whatever Kurzweil’s team comes up with. First, we’ll be communicating with it using our voice. Google’s been working on its voice technology for a while, including Google Voice Search (call a phone number, say your search, and get the results read to you), Google Voice voicemail transcription, YouTube automatic video transcription, and even Google Translate (speak English, hear Spanish!).  Their Google Voice Search is better than Siri, in my experience.

Second, it’ll be with us everywhere (thanks to Android), and it’ll be predictive based on being continually active (thanks to Google’s massive computing capacity, and the oodles of data at its disposal). An example of this is Google Now, but the Kurzweil version will be even better.  Google has been really smart about letting developers build cheap Android devices, but almost all of them still go back to Google for email, calendar, etc.  They’ve leveraged the market, but the customers are still theirs.

Third, it’ll reach out and touch other devices. While your phone might be your personal magic wand for the internet, it’s hard to share things on the phone screen, and there are all kinds of things we could do with larger displays. Google’s started down this path with its AirPlay-like wireless display mirroring. The Google Nexus Q is essentially an admission that you need an easy way to share what you’re listening to when you’re with your friends. The device lets your Android device discover it and use it for output; you’re no longer limited to your Android device’s speakers.

The GoogleBoard

Audio is a first step, and mirroring your entire screen is fine, but the future belongs to sharing. You want your friends to be able to come over to your house, and share their lolcat pic or funny video on your TV or other display without needing HDMI cables, or taking over the entire screen. You may want a note taking application to be displayed next to a streaming video, or you may want to play a game where everyone uses their android devices as controllers. For that, you need something smarter. You need something like… a GoogleBoard. (Thank goodness you just bought a hardware company.)

Imagine a display that fades into the environment. It may be small (a 15″ screen next to your door that notices when you walk by and shows you the weather forecast, your expected commute time and reminders) or big (a chalkboard in your kitchen, or your refrigerator door). You won’t be watching movies on it, so display fidelity isn’t as important as it is for TVs, but it can use the same production base, so they’ll be cheap. They’ll also be smart: they’ll run Android like Google TV, they’ll be internet connected, and they’ll have a lot of features dedicated to sharing their screen space.

They’ll use Bluetooth or NFC, and they’ll probably have cameras (for Google Hangout/Google Talk video conferencing). They’ll be aware of your friends, thanks to Google’s permission system with a group blog bolted on top that’s masquerading as a social network (Google+, if you didn’t catch that). You’ll own the GoogleBoard, and you’ll be able to say ‘everyone in my friends circle can use this’. Your friends will come over to your house, magic up a video or graphic or app on their Android device, and fling it up to the GoogleBoard, where it can use the whole screen (if it was empty) or share space with other users already on the Board. Depending on your preferences, the app may use the Board’s network connectivity, or the display may just be that, a display driven from your Android device like an X-Windows app, with all the network traffic going through your Android device’s backhaul.  Maybe Android will be smart enough to price-optimize its network traffic, using its own wifi when it can, your friend’s wifi when it’s available, or LTE as a fallback.

People love interesting information, and a GoogleBoard would be uniquely suited to provide constant global metric displays. You could have a home dashboard on one, that shows up when nobody’s using it. Your family’s pedometers and scales feed into little personal health meters on the side. ‘Dad, you should probably lay off the pringles, your avatar’s looking a little sad.’ ‘Hey, the fridge is out of milk! (and Target’s having a sale, thank you Google Ads, touch here to add it to your delivery order)’ ‘You play a lot of Kruder and Dorfmeister through your Nexus Q, and Thievery Corporation’s going to be in town, do you want tickets?’

GoogleBoard 0.1


When the GoogleBoard isn’t being used, it may use neat simple technologies to be an energy-efficient art display or whiteboard (I’m imagining capacitive chalk markers that you can see when the GoogleBoard is ‘off’, but that are transparent when it’s on). They could be coffee tables or kitchen tables. You could play an RTS or card game with the rest of your family over dinner. You could watch a funny video and throw it over to the living room TV, if you really wanted everyone to see it.

Google Bot Avatars

So now that we’re all sharing these displays at once, we need a way to identify who’s who, and now that our Android devices (and by extension, our Google activity) are exposed to other people through a Kurzweil-derived AI, maybe it’s time to give the thing a name and an avatar. This AI Bot Avatar is your personal concierge for everything Google can offer you; you talk to it, it talks back. It lives in the cloud, but it’s snugly at home on your phone or Google Glass device, because that belongs only to you.  It can pop its head up on your home display devices (your GoogleBoards and Google TV, or your Berg Little Printer), and it can be an invited guest on your friends’ Boards, or on Boards and computers at work or school.  Since it has a unique name, you can summon it in the car, and all your friends can use your Android-driven car Bluetooth speaker system to talk to their devices and ask for their music to be played.  Or display things on the Board in the self-driving car’s ceiling.

Princess Fluffypants


So let’s say my kid’s AI bot avatar is Princess Fluffypants, because she’s a kid and that’s how she rolls. Her AI assistant pulls in stuff from Khan Academy or Make Magazine YouTube videos (because it knows she’s interested in science, but could use some help in math), keeps her up to date on trends, including what her friends are watching, and gives her the latest news. When she communicates with her AI bot, the bot has a personality (maybe she picks ‘Royal’, since Fluffypants is a Princess). Bot grooming and accessorizing becomes a thing, because the Google AI bot has all of Google’s knowledge behind it, and can probably be programmed and modified like Android apps.

My AI bot may be more serious. Maybe I’m really into P. G. Wodehouse, so I have a Jeeves, and maybe Stephen Fry’s making some extra scratch by lending his voice to my avatar set. Maybe I even have multiple AI bots, since it’s weird for Jeeves to be talking to me about football or my interest in krumping (or maybe that’s hilarious). But that’s a topic for another blog post.

Application Network Portability

One requirement this raises is that the applications that run your bot, or the applications your bot runs, depending on your perspective, need to be network portable. You need to be able to execute code in the Google cloud, for things you want to happen regularly or things that benefit from rapid access to large volumes of data, but you also want software execution on your device, or you want to push a little applet over to a TV or GoogleBoard, since it’s inefficient to render the graphics on your phone and then push them when the display could run the app itself.  Or maybe the display reports its capabilities and the phone decides whether to push the applet (the display’s fast enough for what I want) or just use it as a display (the display’s two years old, and isn’t fast enough for this application).
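
That capability negotiation could be as simple as this sketch. Everything here is hypothetical: the field names and the decision rule are made up for illustration, not part of any real Android or GoogleBoard API.

```python
def choose_mode(display_caps, app_needs):
    """Decide whether to push the applet to the display itself,
    or keep execution on the phone and use the display as dumb pixels.

    display_caps: what the display reports about itself (hypothetical fields).
    app_needs:    what the applet requires to run locally.
    """
    fast_enough = display_caps["cpu_score"] >= app_needs["min_cpu_score"]
    can_run = app_needs["runtime"] in display_caps["runtimes"]
    if fast_enough and can_run:
        return "run_applet_on_display"
    return "remote_display_only"
```

The point is just that the decision lives on the phone, driven by whatever the display advertises about itself.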

Lots of iOS developers (myself included) gnash their teeth when they think of the insane panoply of Android devices.  Testing software at all those resolutions, form factors, and aspect ratios is incredibly painful, but in a network-transportable world, maybe that was a smart decision.  You never know where your app is going to be displayed: on a landscape 16×9 screen, in a portrait 16×9 area of a larger display, or on a square car dashboard, so you design for flexibility.

The Dark Horse

So Google seems pretty well positioned here.  Amazon doesn’t seem like a serious player.  Facebook will continue to make money, but needs to branch out if they want to control more than just the social conversation.  Microsoft is chasing its tail, trying to stay relevant.  Apple will continue to make beautiful, amazing devices, but they may not have the technological muscle to pull off the next level of magical user experience.  Already they have to partner for their most useful features, and that isn’t a good place to be.

There’s one technology company that’s looking like it may be a dark horse entry into this technical re-invention, though, and that’s Wolfram Research.  Wolfram|Alpha powers Siri’s more complex question answering, and they’re really gung-ho on their algorithmic approach to the world.  With the amount of user generated search data they’re collecting, Wolfram|Alpha could get really good, really fast.  If you aren’t reading Stephen Wolfram’s blog, you should.  At SXSW last year he mentioned that he wanted to get Mathematica into more areas, to make it more of a foundational piece people could build on.  If Wolfram Research was able to turn Wolfram|Alpha and Mathematica into a really good open source development platform for bot and internet search applications, you could get something really powerful.

Wolfram Research is privately held, and I don’t believe that Stephen Wolfram and whoever else owns pieces of it will sell.  Any non-Google stack should be slavering to get their hands on it, but being private may keep it out of their reach.


Whatever comes of the Google/Kurzweil partnership, be it really interesting, a spectacular Xanadu-esque failure, or a quiet Google Labs-esque decommissioning, it’s worth paying close attention to.  The future doesn’t magically appear, people sit down and build it.  There’s nothing stopping any of the technologies I’ve mentioned from appearing in the next few years, and Google’s in a prime position to make it happen.  While a lot of it is inspired by science fiction, successful science fiction grabs the imagination like a good early adopter product should.  There aren’t many things I’d consider dropping everything to work on, and intelligent network-native bots are one of them.  When they appear they’re going to radically remake our daily life experience.

The Personal Cloud Computer

November 26, 2012 at 12:28 pm (7 Comments)

This is what computers were like before PCs. (photo by phrenologist)

40 years ago the development of the Personal Computer sparked a revolution.  It took a decade for PCs to land in the home, and another decade for them to land in a majority of US homes, but it created an entire industry.  Having a computer that was yours led to generations of hackers and programmers; it created Microsoft and Apple, and led to the rise of Amazon and Google.

The PC is now in decline. In 2008 the laptop outsold the desktop, and now the tablet is eating the laptop’s lunch.  As form factors have shrunk and the Internet has become a more dominant part of most users’ experience, the computer you own, running software you own, with explicit privacy, is disappearing.  We store our spreadsheets and documents in Google Drive, we post our pictures on Flickr, we store our correspondence in Gmail, we chat with our friends on Facebook or Twitter.

HP TouchPad, a $500 paper weight. (photo by traferty)

No one is learning how to program on Facebook, especially when their only device is a cell phone or tablet.  It’s dangerous to store your personal pictures only in Flickr.  Your Google Drive documents and Gmail email are one clever hacker away from being in someone else’s hands.  On the internet, services die.  Devices become orphans, and eventually the content on them is lost.

Maybe it’s time for a new paradigm, something that preserves the hackability and ownership of the PC, but takes advantage of all the new technologies we’ve come up with in the last 40 years.  Maybe that thing is…

The Personal Cloud Computer: the essentials of single-user focus, software and data ownership, but with the portability, networkability and burstability of the cloud, the display flexibility of HTML5 interfaces, the hackability of Linux and the flexibility of a PaaS.

So what does the Personal Cloud Computer (PC2, maybe? Let’s try it out.) look like, specifically its fundamental architecture, organization and software use cases?  Well, let’s start from the top…


I think we’re looking at something like a PaaS similar to CloudFoundry, but with a UI front end like WordPress, and tuned to run apps for you, not apps for web consumption.  You’ll access it via HTTPS, it’ll be optimized for desktop, tablet and mobile, and it’ll have API access routes for stand-alone applications or hardware devices. By default your distribution may come with a set of plugins (in the desktop metaphor, these are our programs), but no one wants to be limited to one programming language, so something like CloudFoundry makes sense.  You’ll be able to run plugins written in Java, Python, Ruby, PHP, etc.  Initially each PC2 platform creator will probably have its own plugin spec, but developer demand will push them towards a common, unified interface spec.

Even mongrels can be beautiful. (photo by w1n9zr0)

Logging into the UI should be as secure as possible.  Maybe we’ll use two-factor authentication with bearer tokens; maybe there will be a super-secure pay-for service that holds the master password for your device.  However we do it, login needs to be safe, and lost-password recovery needs to be really, really, really difficult to hack.  Maybe you need to round up a quorum of your friends and coworkers, and by combining bits of a key you’ve given them, they can re-generate your master reset password.
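
That quorum idea already has a textbook realization: Shamir’s secret sharing, where any k of n friends can reconstruct the key but k−1 learn nothing. Here’s a minimal sketch for illustration only; real deployments should use a vetted crypto library, not this.

```python
import random

# A Mersenne prime large enough to hold a 128-bit secret as a field element.
PRIME = 2**127 - 1

def split(secret, shares, threshold):
    """Split `secret` into `shares` points on a random polynomial of
    degree threshold-1; any `threshold` points recover the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, shares + 1)]

def recover(points):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Give five coworkers one share each with a threshold of three, and any three of them together can regenerate your reset password, while any two alone cannot.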

WordPress has learned that software updates are a big issue, and that having the update interface be as integrated and simple as possible is a huge deal.  Apple figured out that having devices live their entire lives without being tethered to a PC was an important feature.  PC2s will need something similar.  Updates for the core platform and plugins should be easy, as secure as possible, and baked in.

For memory consumption’s sake, we’ll probably follow the iOS model.  Programs only run when you’re making requests of them.  They can schedule tasks with a central platform scheduler to wake themselves up, and can run little chunks of code to check things in the background, but they don’t run continually when you’re not using them.  The core platform also provides a notification/alert hub, so if your scheduled task needs to tell you something, it can push it to you.
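
A central wake-up scheduler for that model might look something like this sketch. The class and method names are hypothetical, not any real platform API; the point is that plugins register callbacks and sleep instead of running continually.

```python
import heapq
import itertools
import time

class Scheduler:
    """Central platform scheduler: plugins register wake-up tasks
    instead of running continually in the background."""
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # tie-breaker so heapq never compares callbacks

    def schedule(self, delay_seconds, plugin_name, callback):
        wake_at = time.time() + delay_seconds
        heapq.heappush(self._queue, (wake_at, next(self._counter), plugin_name, callback))

    def run_due(self, now=None):
        """Fire every task whose wake time has passed; return (plugin, result) pairs
        so the platform's notification hub can push anything the task produced."""
        now = time.time() if now is None else now
        fired = []
        while self._queue and self._queue[0][0] <= now:
            _, _, plugin_name, callback = heapq.heappop(self._queue)
            fired.append((plugin_name, callback()))
        return fired
```

The platform's event loop would call `run_due` periodically, and anything a callback returns would be handed to the notification hub.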

The interface between the core and the plugins should be network-able.  You’ll want the flexibility to run your PC2 in the cloud, but execute a program on your phone, or your house’s thermostat, or your car.  Authentication will probably be similar to OAuth, or the two-factor unique password setup that Google does.  You’ll pair devices with your PC2 by entering a network identifier for your PC2 into the device; the device will then generate a random key, which you’ll punch into a devices section of your PC2 interface.  If you lose your cell phone, you can go into the PC2 interface and turn off its access without resetting everything.
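
The pairing and revocation flow could be sketched like this. All of the class and method names here are hypothetical, invented for illustration; a real implementation would use OAuth-style tokens and proper key lengths.

```python
import secrets

class DeviceRegistry:
    """Tracks paired devices and their keys, so one device can be
    revoked without resetting everything."""
    def __init__(self):
        self._devices = {}  # device name -> {"key": ..., "enabled": ...}

    def begin_pairing(self):
        # Short random key the device displays and the owner punches
        # into the PC2's devices screen (short only for illustration).
        return secrets.token_hex(4)

    def confirm(self, device_name, key):
        self._devices[device_name] = {"key": key, "enabled": True}

    def revoke(self, device_name):
        # Lost your phone? Disable just that device's access.
        self._devices[device_name]["enabled"] = False

    def is_authorized(self, device_name, key):
        d = self._devices.get(device_name)
        return bool(d and d["enabled"] and secrets.compare_digest(d["key"], key))
```

`secrets.compare_digest` is used rather than `==` so the key check doesn't leak timing information.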

Sharing should be baked into the platform.  You’ll be able to grant read and write access to files, or between plugins, to other PC2 installs.  You may even share back to centralized services, or pull from centralized services like a car sharing service or traffic updates.  You could share where your car is with a city-wide traffic nexus that shares back the ability to create a route based on live traffic conditions, for instance.
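
At its simplest, that grant system is a table mapping (resource, peer) pairs to permissions. A minimal sketch, with hypothetical names throughout:

```python
class ShareTable:
    """Per-resource access grants to other PC2 installs or services."""
    def __init__(self):
        self._grants = {}  # (resource, peer) -> set of permissions

    def grant(self, resource, peer, *permissions):
        self._grants.setdefault((resource, peer), set()).update(permissions)

    def revoke(self, resource, peer):
        self._grants.pop((resource, peer), None)

    def allowed(self, resource, peer, permission):
        return permission in self._grants.get((resource, peer), set())
```

So sharing your car's location with a city traffic nexus is just `grant("car/location", "city-traffic-nexus", "read")`, and it can be revoked without touching any other share.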

Your UI would be built the way you build a Facebook App.  Plugins feed UI markup back to the PC2’s display layer, and it arranges things so the UI can be optimized for a plane-of-tiles desktop UI, a tablet, or a single-tile phone UI.  You may even have baked-in interface specifications for voice or visual interfaces, so you can control apps with your voice or eyeball movements in your Google Glass.
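
The tile-arranging part of that display layer could start as trivially as this sketch (the mode names and column counts are made up for illustration):

```python
def layout(tiles, mode):
    """Arrange plugin-supplied tiles into rows for the target form factor:
    a grid on the desktop, two columns on a tablet, one column on a phone."""
    columns = {"desktop": 3, "tablet": 2, "phone": 1}[mode]
    return [tiles[i:i + columns] for i in range(0, len(tiles), columns)]
```

Each plugin just hands over its tile; the platform decides how many fit per row on the device actually being used.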

Not that kind of Metro.

Microsoft is probably tackling a bunch of these problems with Metro, or whatever it’s called today.  I’m not sure I trust them to succeed.  These PC2 solutions will have to grow organically; defining the entire spec at once would be a recipe for disaster.  Learn the lessons, but design for simplicity.  Nobody’s going to be building Word on this platform in the first year or two.

A bunch of use cases now and over the next few years are going to be built around pay-per-use or subscription APIs (for facial detection in your lifestream videos, or machine translation, or whatever the next thing is).  Having a centralized billing platform for those will be important.  Either you’ll have accounts with a few external services that plugins can use, or the billing and payment part will be built into the platform.  You’d have an internal provider model, so plugins could discover their options without needing to know the authentication or implementation details themselves.
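
That internal provider model might be sketched like this: plugins ask the platform for a capability, and the platform resolves credentials and picks among registered providers (here, the cheapest). All names are hypothetical.

```python
class ProviderRegistry:
    """Plugins request a capability by name; the platform hides which
    external API actually serves it and how billing/auth works."""
    def __init__(self):
        self._providers = {}  # capability -> list of (cost, name, handler)

    def register(self, capability, name, cost_per_call, handler):
        self._providers.setdefault(capability, []).append((cost_per_call, name, handler))

    def call(self, capability, *args):
        options = sorted(self._providers.get(capability, []))
        if not options:
            raise LookupError("no provider offers " + capability)
        cost, name, handler = options[0]  # cheapest registered provider wins
        return handler(*args)
```

A plugin just calls `registry.call("translate", text)` and never learns whether a cheap or a premium back end did the work.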

Utilizing cloud services would be similar to subscription APIs.  Bursting CPU use or disk usage should be a service the PC2 provides to plugins.  Your thermostat should be able to request a Hadoop run to churn through consumption data, utility billing rates and weather forecasts once a week.  The thermostat doesn’t need to know how to spin up the Hadoop cluster; a ‘can run Hadoop jobs’ component can be part of the PC2, one that knows how to use various cloud services and can optimize based on price.  (I’m looking at you, Amazon EC2 spot market.)
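
The price-optimizing core of that burst component is just a comparison over current offers. A toy sketch, with made-up provider names and rates:

```python
def cheapest_option(job_cpu_hours, offers):
    """Pick the cheapest place to run a burst job.
    offers: list of (provider_name, price_per_cpu_hour), e.g. live spot-market quotes.
    Returns (provider_name, estimated_cost)."""
    provider, rate = min(offers, key=lambda offer: offer[1])
    return provider, rate * job_cpu_hours
```

The thermostat's plugin would say 'run this job, about 100 CPU-hours', and the platform would shop it around before spinning anything up.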

So what do we have?  We have a base UI framework with robust integration options, strong login security, a networkable plugin interface, a centralized scheduler, integrated sharing, integrated API billing and a burstable cloud resource provider.  We’ve created a blueprint for an Operating System, something designed for the strengths of the cloud, but something very different from what we have now.  Something like, but for people, not businesses.


I think there will be a bunch of companies and groups creating platforms like this.  Some will flourish, some will die.  Early adopters will bear the brunt of the pain, but they’ll put up with it for the advantages, just like they always do.  I think most of the successful groups will look like Automattic.  WordPress is an easy example to point to, they’ve done really well financially and still embrace the open source model.  They make money from their hosted solution, but you can install it yourself if you want.  I don’t worry that they’re going to hire a new VP really focused on ‘maximizing value’, and make a deal with Microsoft so their mobile UI is only optimized for Windows Phone.  I know they have an open source ethos from the top down, so I trust them.

But in the beginning someone’s going to have to start cobbling these things together into a value-providing alpha.  Will it be me?  Will it be you?

Use Cases

Like this, but… not. (photo by robinvanmourik)

It doesn’t take too much imagination to think of things that a platform like this could provide, but it takes the right combination of experience and imagination to get it off the ground.  Most of the people who would get this kind of platform are early adopters who are already involved in the cloud.  They may run VMs in a couple different clouds, they may have written integration and maintenance software.  The first programs they’re going to build will be things that the PC2 is uniquely suited for, namely tying together your internet of things, and running consuming and consolidating services.

Your PC2 may be a great place to tie all your home automation and quantified-self stuff together.  You may have ZigBee devices and the Nest and your Withings scale and your Expereal app and your food logger and your Fitbit.  You may like those services, but wouldn’t it be interesting to know if you walked more on days when it was cold, or what combination of exercise, travel and food intake led to your greatest happiness?  That’s data I’d want to keep long term, long after those respective companies bite the dust.  That’s a perfect PC2 application.  It’s big data for people.
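
The ‘did I walk more on cold days’ question is just a correlation over your own logs, which is exactly the kind of small analysis a PC2 plugin could run. A minimal sketch using a plain Pearson correlation:

```python
def correlation(xs, ys):
    """Pearson correlation between two equal-length series, e.g. daily
    temperature vs. daily step count pulled from your own data store.
    Returns a value in [-1, 1]; assumes neither series is constant."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)
```

A strongly negative value for (temperature, steps) would mean you really do walk more when it's cold out.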

The PC2 could also be a great place to host personal Weavr-type bots. It’s an always-on platform with API access, both free and paid, and the UI options mean you could get a back-channel or tweaking interface to your Weavr in your car, or on your cell phone.

With Tropo or Google Voice, your PC2 could be the center of your personal message hub.  You could call your PC2 and ask it things, Siri-style, or other people could call it and be intelligently channeled to what they need to get to.  All the audio data would live in your own cloud storage, so if you wanted to run analytics on it five years down the road, you could.  Hey, voice-driven Twitter-style sharing with just your friends: call in, record a clip, and it gets sent to all your buddies.

Someone will eventually build an office suite for the PC2.  It will start simple, and then it will get smarter.  With easy cloud access you’ll be able to run Wolfram Alpha style processing on your data, on demand.  Once the (open source) software’s written once, everyone can use it, they just have to pay for the CPU horsepower.

The PC2 initially wouldn’t demand more memory or CPU than a low-end VM or cell phone, which means that if you didn’t want to pay for a cloud server, or had already used up your free Amazon EC2 tier, you could run your PC2 on, oh… a Raspberry Pi.


PC2s are a response to a market opportunity and a technological tipping point.  People need tools to thrive, and their PCs are turning into services they rent.  All the pieces are in place for a new approach; nothing new really needs to be invented.  The only thing that remains is to start writing code and see if this is something people actually want.  Of course, that’s the hardest part.

Markov Chaining at 3 AM

November 16, 2012 at 3:17 am (No Comments)

You know those nights when you get an idea just as you’re going to bed, and it’s a fight to decide whether to try and sleep or to get up and see if you can get it working?  Tonight I had one of those ideas.  Specifically, I was thinking about creating a Markov Chain text generator where the corpus was all the emails I’d written since 1996.  Then you’d be able to load that page and see something pop out that would be somehow, kind-of, maybe like an email I would write.

Well, it turns out getting just the text you’ve typed in email is really hard, because it’s mixed in with all kinds of replies and cut and paste and signatures… suffice to say, it wasn’t going to work without some heavy manual editing.  Thankfully, it turns out I have a pretty good source of prose in the posts I made to The WELL since joining in 1997.  A few extract commands later, some python magic from shabada, and voila:

Markov Kramer
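For the curious, the core of a word-level Markov chain generator really is just a couple dozen lines. This is a generic sketch, not the actual script behind Markov Kramer, and the corpus string is a stand-in for the real WELL archive.

```python
# A bare-bones word-level Markov chain text generator.
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the list of words that followed it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Random-walk the chain, starting from a random key."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:  # dead end: this run of words never had a successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the well was the first place I posted and the first place "
          "I felt at home on the early internet")
print(generate(build_chain(corpus), length=12, seed=42))
```

Bumping `order` to 2 or 3 makes the output read more like the source, at the cost of originality.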

Elder Augmented Reality

November 5, 2012 at 1:02 pm (No Comments)

There’s an older gentleman who works at the Lowes near my house. He’s a fixture of the place. If you saw him walking down the street you’d either say “There goes a mountain man,” or “That guy looks like he should work at a home improvement store.” He’s a floor customer service representative, and seems as comfortable in lumber as he does in plumbing or lawn and garden. He isn’t pushy, always has an interested, kind look in his eyes. You’ll often see him explaining a pipe fitting or how to install a ceiling fan to a young couple, their eyes narrowed, their brows furrowed, nodding, furiously taking mental notes. Unfortunately, they can’t put him in their cart and take him home.

While most tasks, like installing a ceiling fan or wiring a dimmer switch, aren’t fundamentally complex, until you understand the principles they can seem arcane and risky. Lots of subject areas are like this: Computers, carpentry, construction, decorating, training your pet, arranging flowers, tracking your business expenses, creating a household budget, replacing your car’s battery, gardening, clearing a drain, hemming a skirt, the list goes on and on.

Many of these skills are taught to us by our parents, aunts and uncles, or grandparents. Some of us are lucky enough to have had this introduction to a wide range of skills, or are able to call one of these experienced elders to come over when we stumble onto a problem we haven’t dealt with before. The less lucky among us may not have had as much time with our elders, may not have that sort of relationship with them, or may not be able to call on them due to distance or passing.

I think that we have a general human need for elder advice and guidance, and I think augmented reality is going to herald a paradigm shift in serving that need.

There are a lot of people reaching retirement age around the world. A lot of them are facing the end of their planned careers whether they like it or not. They often aren’t suited to the uncertainties of the new economy, and the businesses they work for want them to step aside so younger people can take their place. Many, or even most, of them can’t afford to stop working, though, so they often end up at low paying menial jobs because they don’t have a modern skill set. They have deep knowledge and experience in a field, and they have experience explaining their field, since they often trained the generation of workers after theirs.

On the other end, there are millions of us who haven’t tackled these problems before, but will scoop up the latest gadget, are living at a very high speed, and are in love with customized, personalized, authentic experiences. We make friends with the taco truck guy, we fret about the viability of his business, and shake our heads sadly if he closes down. We want the world to work how it feels it should. Experience plus careful workmanship should equal success.

Imagine if there was a marketplace of subject matter experts. Retired or semi-retired plumbers, gardeners, electricians, mechanics, decorators, seamstresses, florists, stylists, bakers, teachers, cooks. The list could be as long as your arm. Each one of them has an iPad or a big TV and a remote (maybe both). They list their expertise and a price for their time. Maybe they fill out a profile of their work experience, ala LinkedIn.

You’re sitting at home. Suddenly the sink drain clogs, or the air conditioner stops blowing cold air, or your wife starts dropping hints about a souffle for her birthday, or your kid wants to take homemade bread to school, or you need to install a ceiling fan.

You put on your Google Glasses (or iGlasses or whatever other brand of see-through AR may exist in a year and a half), and place a quick order. You might have gotten an hour or two as a gift, maybe when you spent $500 at the home improvement store. You use a super-streamlined job-posting interface, probably speaking to it to describe your problem (let’s call it a souffle), and in a few moments you have a handful of candidates who are online and available.

You hit the order button and a retired baker in some other state gets a bing-bong on their iPad. They sit down, review your profile, decide you’re a decent sort, and hit accept. You instantly see their face in the corner of your Google Glasses, and they can see what you’re seeing from the Glasses camera. They introduce themselves, you describe your problem, and you go to work together to solve it. They watch you as you work, use their iPad to draw diagrams over your vision, NFL commentator style, or shift the camera around and demonstrate with their hands.


Once the souffle’s in the oven, they recommend some pairings based on their experience, which you’ll get in a voice-transcribed recording of the interaction dropped in your email. You thank them, and say goodnight. You bookmark them for future reference, and leave some feedback.

It’s a win-win, you get access to subject matter experts and a real, authentic experience. They get to pass on their knowledge, and get paid for it. The technology only enhances an interaction that is already possible, but inconvenient.

Imagine if something like this was part of everyday life. You could gift your kids with a dozen hours when they go to college. You could pick up the basics of a new skill every month, entirely project based, no Dummies books collecting dust, just human interaction.

This seems like one of those no-brainers. You mix Skype or Facetime, oDesk, retirees and the growth of home-based businesses with the enabling technology of augmented reality, and this pops out. It’s not a matter of if, it’s a matter of when.

Platform Persistence, Virtual Death and Pocket Worlds

October 26, 2012 at 1:00 pm (2 Comments)

Note: This is a long, rambling, train of thought post. The tl;dr version is: Emotional connection to bots happens, we get sad when things we care for go away, so there’s a big ethical risk associated with human-acting bots living in unportable platforms. We members of the ‘Bot 2.0’ community need to address this before we get too far.

A little over a year ago I started playing a cloud-based iPhone game called GodVille. GodVille describes itself as a Zero Player Game. You take the role of a god, you create a hero, and you send that hero out into the game world to fight on your behalf. Your hero is an independent being.  When you come back to check on them, they will have recorded an entertaining diary of monsters fought, treasures collected, and items sold, all without your input. You have only four ways to influence your hero: you can encourage them, which makes them heal faster; discourage them, which makes them fight better; shout down at them; and activate some of the items they pick up.

While it isn’t a very interactive game, it’s still a compelling experience. I check on my hero every day or two, look for interesting items to activate, and encourage him as much as I can.

Your GodVille hero can’t permanently die. They can be killed, but they’ll just wait around in the ground, writing notes in their diary until you resurrect them. (They’ll get tired of waiting for you and dig themselves out after a few days.) Not killing these bot-like characters is common in online games; permanent death is generally reserved for the hardcore modes of single player releases. (A really interesting article postulates that the free-to-play model is driving this, because developers don’t want to give you an excuse to walk away from their microtransactions, or the feeling that your money was wasted.)

Pets in GodVille

Once sufficiently powerful, your GodVille hero can adopt a pet, its own sub-bot that helps it fight and gains its own levels. My hero adopted a pet earlier this year. Over the next few weeks I watched the pet (a dust bunny named Felix) fight alongside my hero, shield him from attacks and help heal him. The pet went up in level, gained some abilities, and everything was going just peachy.

Then I opened the app one day, and the pet was dead. My hero was carrying around Felix’s corpse. I went to the web and searched for pet resurrection, but found it wasn’t possible. Sometimes the hero will pay to have the pet resurrected, sometimes they’ll just bury them. After a grieving period, they’ll adopt a new one.

Felix’s death had a lot more of an emotional impact on me than I expected. I didn’t know Felix, I never met it, it really only existed as a few hundred bytes of data on a server somewhere. I’ve had more interactions with lamps in my house than I did with Felix.  If you tip a lamp I really like off a table and shatter it into a million pieces, I may be angry, but I likely won’t feel an immediate emotional loss.

A Lamp with Feelings

Felix’s death was hard because I’d made an emotional connection to him, watching him interact with my hero. His death highlighted my powerlessness in the game. I can resurrect my hero, within the confines of the game mechanic, but I can’t resurrect his pet. No matter what I do, no matter how hard I try, I can’t bring Felix back to life.

Someday, inevitably, GodVille will shut down. People will move on to other projects, the server bill won’t get paid, iPhone apps won’t be the hot thing anymore. My hero, his diary and pet will disappear, and because he only lives inside the GodVille system (and being part of that system is a fundamental aspect of who he is), he will be gone forever.

Bruce Sterling at SXSW 2010 (photo by jonl)

Bruce Sterling gave a great talk about this at SXSW in 2010, about how the Internet doesn’t take care of its creations. We build and throw away. Startups form, grow like crazy, and if they don’t sufficiently hockey stick, they close. Or they get popular but not popular enough, and the team gets hired away to bigger players. Either way, the service shutters, the content and context disappears, history is lost. If it’s bad to have this happen to your restaurant checkins and photos, how much worse is it when it happens to virtual beings you’ve created an emotional attachment to? As creators, if we encourage platforms like this, roach motels where content comes in and never comes out, what does that say about us?

Eighteen and a half years ago I created my first character on a text based multiplayer internet game called Ghostwheel, hosted by my first ISP, Real/Time Communications. Ghostwheel was a MOO, an Object Oriented version of a Multi-User Dungeon, the progenitor of today’s MMORPGs like World of Warcraft. In a MOO you can create characters, build environments and objects, talk to other people, fight, and even create bots.

Real/Time Communications hosted Ghostwheel on a small server in their data center, a 486 desktop machine. People from all over the world connected to that server, created characters, and wove shared stories together over the early boom years of the internet.

A Late 90’s Austin Ghostwheel Meetup

Eventually Real/Time Communications lost interest in hosting and maintaining Ghostwheel (and eventually Real/Time itself disappeared), so we took it elsewhere. As someone with colocated servers and ISP experience, I ended up hosting it on one of my machines. It now lives in a cloud VM, and even though the players have left for newer, more exciting destinations, everything they created, the characters, the setting, the dusty echoes of romances and feuds and plots all still exist. It still exists because someone with the wherewithal got their hands on it, and cared enough about it to keep it going, and it exists because MOO is an open source platform that doesn’t depend on one company being in business.

While piecing together the thoughts for this post it occurred to me that the MOO server could probably be compiled on some modern Linux-based smartphone. They have more than enough CPU power and memory, and even a 3G connection is fine for text. I could conceivably load Ghostwheel on one and carry it around in my pocket. A whole world, nearly a thousand characters, tens of thousands of rooms and objects, dozens and dozens of species of monsters, all living in my pocket. I could hand it to people and ask them about the weight of a world. Every time I think about that it blows my mind. There’s definitely the kernel of something new and weird there.

So back to my point, as I’ve talked about before there’s a whole species of autonomous bots appearing around us that we relate to as nearly human. Like my GodVille character, we don’t have direct control over them, their autonomy being one of the things that makes them seem more human. They’re coming, they’re awesome, and I think in a few years they’ll be as common as Facebook accounts.

The most exciting work I’ve seen in this field is from the good folks at Philter Phactory and their Weavrs system. Weavrs are social bots defined by location, work and play interests, and groups of emotional tags. The Weavrs system hooks into Twitter, generates its own personal web pages (kind of like a bot-only mini-Tumblr) for each weavr, and is extensible through API-driven modules called prosthetics. One example is the dreams prosthetic, which folds images the weavr has reposted into strange, creepy kaleidoscopes.

Weavrs are easy to create, they produce some compelling content, and they’re fun to watch. I’ve created a few, my wife has one, several of my friends have them. Interest is picking up from marketing and branding agencies, and where the cool hunters go, tech interests will inevitably follow.

The thing that’s starting to concern me is the possibility that Bots 2.0 could end up being another field like social networking, where the hosted model gets out ahead of ownership and portability. What happens when the service hosting our bots disappears?  What happens to all its posts, its images, its conversations?  (I suppose I wouldn’t be qualified to work at a cloud provider if I didn’t have strong feelings about data portability.)

Weavrs as a whole isn’t open source, but it has lots of open source bits. Philter Phactory is trying to run a business, and I don’t begrudge them that. They have the first mover advantage in a field that’s going to be huge. I’m sure data portability is on their radar, but it’s a lot easier to prototype and build a service when you’re the only one running it. Conversely, it’s a lot easier to scale out a platform designed to be run stand-alone than to create a stand-alone version of a platform.

Once a few more folks start to realize how interesting and useful these things are, I think we’re going to see a Cambrian Explosion of social bots, and I’m sure plenty of entrants in the field won’t be thinking in terms of portability. They’ll be thinking about the ease of centralized deployment and management, and the reams of juicy data they can mine out of these things.

I remember in the early 2000’s feeling a similar excitement about self publishing (blogging). It was obviously going to be something that was going to be around forever once it was perfected. You could see the power in its first fits and starts, and it was just going to keep getting better. I think there are more than superficial similarities between self publishing platforms and social bot platforms, in fact.

Thinking back on that evolution, I think the archetype that we should hope for would be the WordPress model. I remember Matt Mullenweg visiting the Polycot offices in 2004 or so. He was passionate, had a great project on his hands, and I’m embarrassed to say that we weren’t smart enough to figure out a way to help him with it. Matt, Automattic and the WordPress community have done a great job of managing the vendor lock-in problem while still providing a great hosted service people are willing to pay for. They get the best of both worlds: the custom WordPress sites and associated developer community, millions of blogs hosted by ISPs, the plugin developers, and they still get to run a nicely profitable, extremely popular managed service.  If goes away (god forbid), someone will still be maintaining the core codebase, and you’ll be able to export your data and run your own instance as long as you like. (Just remember to register your own domain name.)

I hope that the social bot community evolves something similar. I think that platforms are coming online to encourage that, and I think the people in the field are smart and recognize the ethical implications. Maybe in a year you’ll be able to run your bots on a hosted service or, if you’re motivated, run your own bot server and fiddle with its innards as you please.  Who knows, you may even run them on your smartphone.

Life in the Weavrs Web

April 30, 2012 at 9:06 pm (4 Comments)

Jeff Sym lives in South Austin and likes Indian TV dramas, dubstep inspired remixes and the Austin Children’s Museum. Keiko Kyoda lives in Japan, likes to read old travel books and wants Condensed Milk for dinner. They tweet. Sometimes they even post things they shouldn’t.

Jeff and Keiko didn’t exist yesterday.

The first time I failed the Turing test was 1993. I’d dialed up to a BBS in Austin, a one-line operation probably running out of some guy’s bedroom. There was an option in one of the menus to chat with the sysop. It was an ELIZA style bot. It took at least a screen full of text and growing irritation for me to realize I was talking to a machine. I don’t remember a lot from 1993, but I remember sitting there in front of my 14″ glowing CRT, feeling incredibly dumb.  (A few years later I upgraded to this NeXT Cube.)

Artificial intelligence is only as convincing as the data behind it. Back in that relative stone age the system could only echo back at me what I’d written or ask open ended questions. “How does that make you feel?” Watson read all of Wikipedia before it (he?) went on Jeopardy. If you started talking to Watson about cars, I bet it/he could respond with some really interesting trivia, and you could chat with it/him for a while before you realized you weren’t talking to a person.

The most visible ‘ask me a question and I’ll give you an answer’ system is Apple’s Siri. Siri can tell you what the weather’s like outside, and she’ll soon be able to tell you what year and model of car you just snapped a picture of. Siri could listen to you and tell if you’re angry, or if you had a really great day yesterday, based on your tweets and Facebook posts. Siri could team up with Mint to watch your bank account balance, and suggest that hey, you aren’t investing enough for retirement, maybe you don’t need that thing you just price compared on your phone. Maybe you should put that money into your Roth IRA instead. This is all possible because these systems have access to fantastically more data than they used to.

Jeff and Keiko are Weavrs. You create weavr bots by selecting a gender (or object), a name, and a collection of interest keywords. Then you define some emotions. _____ makes me _____ when I’m at _____. You can tell weavrs where they live, and they’ll wander around their neighborhood. They utilize public social APIs (flickr, twitter, google local), driven by some black box keyword magic, to find and post things they like. You can add pluggable modules to weavrs to, say, post their dreams. Over time they can develop new emotions about different things. There’s even a system for programming a Monomyth into their lives.
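That creation form implies a pretty small data model. Here’s a rough guess at the shape, purely illustrative: the field names and the Emotion triple are my invention, not the actual Weavr schema.

```python
# A guessed-at data model for the weavr creation form: a name, a
# gender, a home location, interest keywords, and fill-in-the-blank
# emotion triples. None of these names come from the real platform.
from dataclasses import dataclass, field

@dataclass
class Emotion:
    trigger: str   # "_____ makes me ..."
    feeling: str   # "... _____ ..."
    place: str     # "... when I'm at _____"

@dataclass
class Persona:
    name: str
    gender: str
    home: str
    keywords: list = field(default_factory=list)
    emotions: list = field(default_factory=list)

    def cares_about(self, topic):
        """Case-insensitive check against the interest keywords."""
        return topic.lower() in (k.lower() for k in self.keywords)

jeff = Persona("Jeff Sym", "male", "South Austin",
               keywords=["Indian TV dramas", "dubstep", "children's museums"],
               emotions=[Emotion("dubstep", "happy", "home")])
print(jeff.cares_about("Dubstep"))  # → True
```

The black box keyword magic is the hard part, of course; this is just the seed it would start from.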

Weavrs exist on their own. You can ask them questions, but you can’t tell them ‘I like this, post more like this.’ The developers of the Weavr platform consider this to be important: weavrs evolve and grow without your direct hand guiding them. I can understand why they didn’t want to allow ‘more like this’ feedback, since it would make the entire system more complex, but it’s obvious that having more full-featured persona creation/control options is going to be a big part of the future of social bots.

Weavrs’ most public impact so far (at least as far as I can tell) reveals a bit about how people will likely react to this sort of thing. Author of Men Who Stare at Goats and The Psychopath Test, Gonzo Journalist Jon Ronson (@jonronson) did a bit on his video show about twitter bots. The Weavr folks found out and, using the contents of his Wikipedia page, created a @jon_ronson Weavr. The result was somewhat predictable: much gnashing of teeth.  There’s an excellent article about this, and Weavrs in general, on Wired UK.

This is Bat^H^H^HBot Country

Twitter has over 140 million active users. A large number of these are spam bots, designed to convert ego (retweets and replies) into $ (clickthroughs). What we don’t really know, and what may in fact be unknowable soon, is how many of these are bots of a different kind. How many of them exist just to exist. To learn, grow, develop. We heard a lot about companies creating armies of real-looking twitter accounts for nefarious purposes during the Arab Spring.  It doesn’t take a lot of work, once you have a valid social model that can be fed keywords, to create a twitter bot that simulates the interests of every ‘person’ that Wikipedia has an entry for.

What we don’t hear about, and I don’t think is discussed enough, is the non-nefarious potential for these independent personas. Imagine a platform somewhat beyond Weavr. Weavr 2.0, maybe. It ties into more social platforms. It has artistic taste (or not). Maybe it takes walks through its neighborhood, and snaps out ‘photos’ from segments of google street view images.  (Jeff Sym liked this picture today, while he was wandering around downtown Austin.) Maybe it goes on trips, setting arbitrary routes through hot points. Maybe my (should I even call it ‘my’ anymore, except that in some way perhaps I’m responsible for it, like a child?) Weavr that’s really into Information Security decides to take a road trip to DEF CON. Maybe because he’s also a bit of a conspiracy theorist, he decides to drop by Roswell on his way, maybe he looks around in Google Street View and takes a picture. Maybe because I’ve stirred the 3d Visu-chromasome pot, he has an appearance (and taste in clothes), so maybe he puts himself into the picture (apologies to Charles Stross).

Wolfram Alpha (which powers the ‘question/answer’ part of Siri with a >90% relevancy rate) is 20 million lines of Mathematica code. You’d need a lot less than that to do what I just outlined. You need an event parser. Easy, the events are already online. You need a map, and the ability to search for hotspots of keywords along the route or near an area. If I did a keyword search for ‘conspiracy’ between Austin and Las Vegas, don’t you think Roswell would pop up? If I did a search for clusters of photos taken in Roswell on Flickr or some other social photo site, I’m sure I’d find the geolocation and general object background of something interesting. Analyze light and time of day, pose and place model, render and voila. Picture postcard. Get it printed and mailed from New Mexico with a pay-as-you-go errand service. Boom, your virtual persona just became real.
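The "keyword hotspots along the route" piece really is that tractable. Here’s a toy sketch with made-up coordinates and hand-tagged points of interest standing in for a real places/geocoding API:

```python
# Toy "keyword hotspots along a route" search: flag tagged points of
# interest that sit near the straight line between two cities.
# Coordinates are approximate and the POI list is hand-made; a real
# version would query a places API.
import math

def point_to_segment_km(p, a, b):
    """Approximate distance from point p to segment a-b, in km.
    Uses a crude flat-earth projection, fine at this scale for a sketch."""
    kx = 111.32 * math.cos(math.radians(p[0]))  # km per degree longitude
    ax, ay = (a[1] - p[1]) * kx, (a[0] - p[0]) * 110.57
    bx, by = (b[1] - p[1]) * kx, (b[0] - p[0]) * 110.57
    dx, dy = bx - ax, by - ay
    # Parameter of the closest point on the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, -(ax * dx + ay * dy) / (dx * dx + dy * dy)))
    return math.hypot(ax + t * dx, ay + t * dy)

austin, las_vegas = (30.27, -97.74), (36.17, -115.14)
pois = [("Roswell", (33.39, -104.52), {"conspiracy", "ufo"}),
        ("El Paso", (31.76, -106.49), {"border", "food"})]

# Keep POIs matching the keyword that lie within ~200 km of the route.
stops = [name for name, loc, tags in pois
         if "conspiracy" in tags
         and point_to_segment_km(loc, austin, las_vegas) < 200]
print(stops)
```

And sure enough, a ‘conspiracy’ search between Austin and Las Vegas surfaces Roswell.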

These personas would be great for directed research: I need a ‘me’ who lives in Amsterdam and loves to take pictures and eat cheese, since I’m going there in 6 months and I want to get a feel for the place. What if you created a relationship engine? X is my Y, I’m in love with Z. You could have your own little soap opera that evolves online every day. It ties itself into current events. It evolves in ways you can’t predict, because it feeds off the now.

I’m going to ignore the whole ‘but if you don’t know they’re fake’ problem for now. On one hand, that’s kind of the point. On the other, it sucks to be duped. In the future (and present) you can’t trust anything you see. It’s probably time to get over any assumption of authenticity.

One thing that’s nagging at me, though, is death. A lot of us know people on twitter or Facebook who’ve died. In the coming years a lot more of us will. We’ll be reminded of birthdays, of past events, things will pop up in our timeline because someone commented a few years later. Their memory continues to exist, but in a concentrated form. Unless their profile’s deleted completely, they continue to exist. If there was a collection of directed, trained bots behind the scenes, they might even continue to post. They might be posting things relevant to their interests 10 or even 20 years later, if the systems keep running that long. As it is, Jeff and Keiko will only ‘die’ if Weavr shuts down. If they had control of their own codebases and knew how to set themselves up as a lightweight every-2-hour process on a system somewhere, maybe they could pay for their own existence through clickthroughs and blog ads… indefinitely.

Weavrs want to continue to exist, as well.  Right now you can clone them; in the future they may, after some length of existence, spawn some kids with variations on their ‘genetic makeup’.  Some kids will be interesting, attract attention and flourish; some will be too boring or too weird and disappear.  The parent weavr continues to exist through its children.

There’s the school of thought that all people have several people inside of them. You have a ‘masculine’ take-charge person, maybe a ‘feminine’ artistic, caring person, maybe a young zany person and an older, wiser person. They all make up you, but maybe with these technologies one day soon you’ll be able to manifest them more concretely. You could have an inner circle of very directed Weavrs. Maybe to maximize their inventiveness you’ll make deals with them.  More freedom for them, wider results sets for you.  The deal with your wise, older persona, in exchange for the investment tips and long-range perspective, is that it gets to virtually go down to Florida every winter. Maybe your virtual young, wild persona, in exchange for keeping you up to date on the latest fashion trends and music recommendations, gets to stay out late and virtually attend hot underground shows.  They’re not just agents, they’re symbiotes.

These autonomous net entities, these ghosts in the social web or e-horcruxes, whatever you’d like to call them, aren’t going back in the box.  We have to learn to deal with them, and due to social connectedness and meaning being a currency in our society, whoever figures out how to utilize them best is going to have an advantage. Businesses and marketeers will take advantage after the artists finish tinkering.  Someone’s already using Weavrs to create market segment identities (PDF) for the cities in China with more than a million people (there are 150 such cities, too many to look at individually).

We’re all familiar with code that runs ‘for us’.  Flickr, McAfee, these services run with our content or on our computers, but they don’t really run for just us, and they don’t exist independently… yet.  One groundbreaking thing that Weavr is moving towards is removing the AI logic from the content (Weavrs pull from the web and post back to it, but they don’t exist in a walled garden like Flickr, they exist outside of it and talk to it via APIs).  Eventually I think we’ll see some open source or self-runnable version of this, an agent that lives wherever you want.  Once my dependency on an outside software provider for the black box is gone, I’m free to integrate whatever bits I like (fork that thing on GitHub!), and work towards a social agent that can exist for as long as someone keeps the lights on.

Postscript 1:

I just had a weird thought.  Irma and I have noticed that our Weavrs post a lot of things we’re interested in (or find cool/neat).  Since we created them, they feel like an extension of ourselves, so there’s a personal ownership angle to the things they post.  “Oh,” I say, “this bot is like me.”  I don’t say that when my friends post things, though.  I don’t say, “Wow, this social appendage of me is like me.”  I suppose someone really egocentric would say that, but we consider our friends to be independent entities.  We know we don’t control them, and unless they’re our brothers or sisters, we probably didn’t have a hand in how they initially developed.  Our Weavrs, on the other hand, feel like an extension of ourselves.  I’m not sure what that means, but it’s a weird thought on individuality and influence domains.