Wednesday, March 16, 2011

The Economics of Dissent: How Twitter and Facebook Tipped the Revolutionary Equation


Perhaps it is time to update the phrase “The pen is mightier than the sword” to “The Internet is mightier than dictators.”

While this statement is made tongue-in-cheek, it is undeniable that we are living through a time of accelerated change. Suddenly, we are witnessing decades-long regimes being challenged by oppressed populations. It is not entirely clear what has changed, but the advent of the Social Internet seems to somehow be involved. Some see Twitter, Facebook and other online social applications as self-congratulating, delusional apps for the Silicon Valley nerd-o-sphere, whereas others view them as dictatorial kryptonite.

As is frequently the case, reality is somewhere in between. It is true that the Social Internet hasn’t changed the fundamental fabric of society. It is also unlikely that Twitter and Facebook are the revolutionary coordination weapon the world has been waiting for. Revolutions have always been the tipping of unstable systems, where some relatively minor event offers a coordination point around which dissent congeals. At the heart of the “Revolutionary Equation” is a perspective that revolutions are triggered and won based on information and signaling. Individuals revolt because they expect to make a difference and they expect to be sufficiently numerous to overcome their governments’ ability to suppress them. Twitter and Facebook have created an environment in which dissent can reach critical mass outside of governments’ ability to suppress it. The Social Internet has altered the “Revolutionary Equation” by reducing the cost of dissent and increasing the cost of suppressing it.


The Revolutionary Equation
People revolt as a function of three variables: discontent (general measure of dissatisfaction), cost of dissent (personal cost dispensed by the government to those who dissent), and expected mass (expected volume of people who are willing to express their dissent).
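
To make the three variables concrete, here is a toy version of one person's decision rule in Python. Everything in it - the names, the linear payoff, the crude odds estimate, and the regime_capacity parameter (the suppression limit discussed below) - is my own illustration, not a formal model:

```python
def will_dissent(discontent, cost_of_dissent, expected_mass, regime_capacity):
    """One person's revolt decision as a function of the three variables.

    discontent:      the benefit this person attributes to overthrowing
                     the government
    cost_of_dissent: the punishment the government can impose on a dissenter
    expected_mass:   how many people this person expects to join in
    regime_capacity: how many dissenters the government can suppress
    """
    # Crude odds that the revolt succeeds: the larger the expected crowd
    # relative to what the regime can repress, the better the bet.
    p_success = min(1.0, expected_mass / regime_capacity)
    # Dissent when the expected benefit outweighs the expected punishment
    # (punishment only lands if the revolt falls short).
    return discontent * p_success > cost_of_dissent * (1.0 - p_success)
```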

Discontent
Let’s consider a distribution of discontent among a population.

[Figure: a distribution of discontent across the population]

The main point of this graph is that any population will have a distribution of (dis)satisfaction. Happier countries will have a steeper, more concave curve, and less happy countries will have a flatter, more convex curve. Another way to think about discontent is as a measure of the benefit of overthrowing the government. Essentially, the least happy people attribute the highest utility to a revolution.


Cost of Dissent
A government has a relatively fixed ability to dispense "cost" for dissent.

[Figure: the cost a government can dispense as a function of the number of dissenters - roughly flat, then falling off a cliff]

Pretty much any government is willing to kill (or severely punish) some of its citizens to remain in power. However, no government can kill 100% of its population, so there is a limit to the amount of cost that a government can dispense. The illustration above is a gross oversimplification, but it essentially points out that at some point, governments fall off a cliff in terms of being able to punish dissent. Generally, when there are signs of unrest, the more repressive governments will quickly signal to the population that their cost curve stretches up and to the right. As an example, in its reaction to the protests against rigged elections in Iran, the government quickly signaled that it was willing to kill its own people in order to maintain control.
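
That cliff can be sketched crudely in code. The flat region and the collapse past capacity are the only features that matter here; the exact shape is an illustrative assumption of mine:

```python
def dispensable_cost(dissenters, regime_capacity):
    """Punishment the government can impose per dissenter."""
    if dissenters <= regime_capacity:
        # Small crowds can be fully punished: jail, violence, etc.
        return 1.0
    # Past capacity, per-person punishment falls off a cliff, since the
    # regime cannot repress more people than it can physically reach.
    return regime_capacity / dissenters
```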


Expected Mass and Utility of Dissent
In a stable state, it is not worthwhile for the majority of the population to revolt. You will always have a few anarchists, terrorists or heroes who are sufficiently unhappy that it is worthwhile for them to fight – they know they won’t succeed in overthrowing a regime, but will fight nonetheless.

[Figure: the expected mass of dissenters versus the cost the government is able to dispense]

People will revolt when their expected mass exceeds the cost that they expect the government is able or willing to dispense. When people revolt, they are essentially betting that their expected mass is greater than what the government is able to repress. If you assume that the most dissatisfied people dissent first, a revolution effectively happens once enough highly dissatisfied people are willing to dissent at a high cost that the government’s ability to suppress dissent breaks down.
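
Put together, this is a threshold cascade: each round, everyone looks at the visible crowd and re-runs the bet, and the crowd either stalls or snowballs. Here is a small simulation sketch - the uniform discontent distribution and every parameter are invented for illustration, not drawn from the post:

```python
import random

def simulate(population=10_000, regime_capacity=500,
             cost_of_dissent=0.9, seed=42):
    """Iterate the dissent decision until the crowd stops growing."""
    rng = random.Random(seed)
    # Discontent between 0 (content) and 1 (furious); uniform for simplicity.
    discontent = [rng.random() for _ in range(population)]
    # The few who revolt regardless of the odds - the anarchists and heroes.
    crowd = sum(1 for d in discontent if d > 0.99)
    while True:
        p_success = min(1.0, crowd / regime_capacity)
        joiners = sum(1 for d in discontent
                      if d * p_success > cost_of_dissent * (1 - p_success))
        if joiners <= crowd:
            return crowd
        crowd = joiners

# Lowering the cost of dissent (say, coordinating online instead of in a
# public square) tips the same population past the regime's capacity:
print(simulate(cost_of_dissent=0.9))  # the diehards stand alone
print(simulate(cost_of_dissent=0.2))  # the cascade reaches everyone
```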

[Figure: the tipping point, where the expected mass of dissenters overtakes the government's ability to suppress them]

Triggering events usually work through the expectation of mass - essentially, through how people perceive this graph. What usually happens next is a race between the population increasing the expected mass (visibility going up as people take to the streets) and the government reducing the expected mass (through information control or by breaking up protests) and signaling the ability/willingness to dispense high cost (by bringing out the army or killing/jailing people).

Tuesday, October 19, 2010

Depth-First vs. Breadth-First Discovery of the Future

Steve Jobs' rant about Android and the Google model (http://techcrunch.com/2010/10/18/steve-jobs-android-audio/) is fabulously thought-provoking - not to be outdone by Andy Rubin's hysterical response (http://twitter.com/#!/Arubin/status/27808662429).

On one hand, you have the Internet-bred nerds who claim that an open model is fundamentally superior & will eventually win. On the other hand, you have Steve, who convincingly articulates that open vs. closed isn't what matters - it's fragmented vs. integrated. He claims (and recent history does support him) that the integrated approach is more effective at delivering the best innovations to consumers.

Here is a proposed thought exercise. Imagine that progress is a 'search' for the discovery of continually better products for consumers. Think of it like the traversal of a tree - the root is today, and branching from today are a multitude of options for how to design the future. Each decision is an edge on this tree. Choose your level of detail, but imagine that each potential product decision is represented in this tree. In the Apple-style integrated approach, you have a consolidated decision-making process that decisively selects a *single* path - say, ten steps down the tree - where they bet humanity will elect to take its future. If you have a phenomenally visionary leader such as Steve, he is able to drive his part of the ecosystem to select a *single* path, and more often than not, this path is a successful one. On the other hand, you have the open/fragmented approach that Google is taking with Android, where the leader/organization takes only five steps decisively, but then lets the ecosystem try out a bunch of branches.

>> Fundamentally, the integrated approach is a depth-first search for the future, whereas the open approach is a breadth-first search for the future. 
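
To make the metaphor concrete, here is a toy rendering of the two search strategies over a product-decision tree. The tree, the 'appeal' scoring, and the exploration counts are all invented for illustration:

```python
BRANCHING = 3  # toy assumption: every product decision has three options

def children(path):
    """Each decision point branches into a few design options."""
    return [path + (option,) for option in range(BRANCHING)]

def appeal(path, taste):
    """How much one consumer likes this corner of the future; 'taste'
    stands in for the diversity of human preferences."""
    return sum(1 for step, want in zip(path, taste) if step == want)

def depth_first(depth, visionary_taste):
    """Integrated model: one leader decisively commits to a single path."""
    path, explored = (), 0
    for _ in range(depth):
        # Only the immediate options are weighed; one branch is chosen.
        path = max(children(path), key=lambda p: appeal(p, visionary_taste))
        explored += BRANCHING
    return path, explored

def breadth_first(depth):
    """Open model: the ecosystem tries every branch at every level."""
    frontier, explored = [()], 0
    for _ in range(depth):
        frontier = [c for p in frontier for c in children(p)]
        explored += len(frontier)  # every branch costs somebody effort
    return frontier, explored

# Ten steps down the tree: the integrated path weighs 30 options and lands
# on one future, while the open frontier burns through 88,572 explorations
# but ends up containing the best match for *every* possible taste.
path, cost = depth_first(10, (0, 1, 2, 0, 1, 2, 0, 1, 2, 0))
frontier, open_cost = breadth_first(10)
print(cost, open_cost)  # 30 88572
```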


The integrated approach plays two important roles
1) Unadulterated Expression of a Vision: Once in a very rare while, a true visionary comes along. These people are not just creative. They don't just see glimpses of the future. They are able to draw a complete, self-consistent view of what the future will look like. I feel that many who try to create the future (myself included) will have insights and ideas about how specific items will evolve. We might have a fantastic idea about how the spoon of the future will look - and the spoon we imagine is in a kitchen as we know it today, in a house as we know it today. But it's a damn nice spoon. Then, once in a while, you get someone like Steve, who is able to truly shed the shackles of the present. He tries to imagine the *entire* house of the future - unencumbered by the house of the present. He is able to take system-level evolutions and apply them in ways no one else would have even considered. This, in my opinion, is one of the key roles of the integrated approach. It enables the selection of a specific path down the tree of innovation - unencumbered by the constraints of the present.

2) Clear High Activation Energy Hurdles: In some cases, it takes a massive company to clear the activation energy required to open up a whole avenue for the future. For example, it took Apple Computer to get AT&T to finally open up the mobile market. All of us who'd spent years fighting to deal with the shit-show that was WAP and trying to convince carriers to give us space on their "home decks" had a quasi-religious experience the first time we interacted with the iPhone. It took Steve Jobs and the entire weight of Apple Computer to change the laws of physics in a world where any innovation immediately fell into the carrier black hole. There is no way that an Android-style open "let's throw a party and invite all of our friends to hack on your network" approach would have ever worked with the carriers (in addition to them losing their marbles every time Google was mentioned). It took Steve's "my way or the highway" style, as well as the carriers' realization that Apple was easily the best bet to tightly control its environment (can you imagine Google or MSFT trying to approve apps?).

For the company (Apple) following the integrated approach, there are two key advantages
1) Resource efficiency: Overall, the effort needed to end up at node x down the tree is lower than waiting for an open model to breadth-first its way to that same node. It is also arguable that the open model may never even end up at some ideal/stable node, given the baggage it carries from all of the wrong paths that were explored. When mistakes are made, it is often impossible to fully eradicate their impact on the system (I bet you can still find some code related to Clippy in Windows :). So, the integrated model gives you a pure-bred creature - no scars from past battles, no fat from past excesses. A lean, mean, future-discovering machine.

2) Head start: Moreover, the integrated approach gets Apple there earlier than everyone else. If Apple's bet is correct, being in the future a year ahead of the others allows them to benefit from the image of being an innovator (which they are), gives them more time to learn and continue running ahead, and lets them build a protective moat to block others from getting to that point through market share, filing patents, etc...

But eventually, the open model wins
Now that I've spent all this time discussing how awesome it is to be integrated and drive with a singular vision, I will still contend that the open approach will eventually win. I would argue that it is a question of timing and what phase of an innovation cycle we are talking about. Once in a while, you need massively discontinuous steps for the future to happen (ex. the introduction of the iPhone). However, once a new era begins, consumers at large have a sufficiently diverse set of interests that no singular vision is optimal. Steve Jobs designed the house of the future, but it only comes in white, with no fridge and no hot water. Now into the picture come all the roughnecks from Google and their contractors (HTC, Motorola, etc...), who give you tons of the great features they copied from Steve's new house - in any color - and also let you build the shed however you like. Oh, and you can get a solar heater for free, and it comes with a Froyo machine integrated!

< rant >
I do have to mention that all players involved tend to be disingenuous about their definition of openness and how they relate to it. On one hand, you have Google, which claims full openness but is conveniently decisive at key junctures. I do think that Google is the best search engine, but I can't remember seeing another search engine being offered when I was activating my Android (not that I would have chosen anything else - just sayin'). On the other hand, you have Steve railing against the open model, whereas the entire empire he's built over the last decade is based on a BSD-derived operating system and WebKit (crap! they're open source!) - two important open source projects.
< / rant >

My point here is that the integrated approach does enable the discontinuous innovations. However, these system-level steps are not needed every year. In fact, they are most often needed to overcome the cruft laid down by past innovators who are still trying to hold on to their crumbling empires. Once the integrated assault opens the gates, breadth-first exploration becomes a more efficient mechanism to discover successful paths to the future (for the current cycle). To see this clearly, compare an iPhone 3G with an iPhone 4. Then, compare last year's Android to the current versions (both hardware and software). While Steve has a staggering hit rate in terms of envisioning where the future lies, his vision is a singular one that has no way to keep up with the diversity of human interests and needs. I'm typing this on a MacBook Pro, but the switch from an iPhone to an Android 24 hours ago has so far been the most satisfying encounter I've had with technology this year (the iPad is a close second).

Tuesday, March 16, 2010

Objectives-Constrained Design - not Capacity-Constrained Design

We build too many things that don't matter.  We don't put a stake in the ground about what does.  And that gets in the way of delivering great products.



I've seen dozens of seemingly promising products come to existence.  And I've seen almost as many fail to gain traction with users.  Something that has often perplexed me is figuring out what differentiated the products that got mass consumer adoption overnight from the ones that didn't.  One of the things that stands out in my mind is that products tend to succeed thanks to a single core use case that really matters to users.  As developers and product people, we have a tendency to think through all of the angles, all of the potential use cases and edge cases.  We then imagine Jane User, who has some use case, and feel compelled to address her need.  Even worse, we'll consistently have someone on the team who *is* Jane User - and we can't bring ourselves to tell her 'sorry - your use case isn't going to be solved for today.'

What invariably happens is that we develop products in a capacity-constrained fashion because that's the easiest way to decide what's getting built and what isn't.  Here's what most of my projects have looked like:

  • Make a list of all potentially interesting and imaginable features
  • Clump them into logical groups (label them as "releases" to make myself feel smart)
  • Sort the clumps in order of priority (usually based on some bullshit set of metrics)
  • Look at the number of engineers on the team and see what we can "fit"
I've done this almost every time.  I still do it almost every time.  And I think this is a terrible way to build products.

Why is this bad?  The two reasons this approach sucks are:
(1) It's not a ranking problem:  Implicit in this approach is that we never take a real stance about what will make us succeed.  Ranking features is not a statement about what is core (quite the opposite). Ranking puts all of the features on a continuum, whereas in most cases, the one or two things that make something work are in a league of their own.  Even putting them in the same list as everything else is a mistake.  Big category winners (Google, Twitter, etc...) have often done non-core things pretty poorly.  We say they did well despite lacking xyz - maybe it's just the opposite...

(2) Engineering is the least of the costs:  Users don't give a rat's ass about how much effort went into building something (to my greatest chagrin).  For many consumer products that I've seen work, a fundamental characteristic is that they claim a piece of "mental real estate" in users' minds.  Google is the place I search.  Facebook is where I find my friends.  Evite is where I send invitations.  Twitter is where I make statements.  Reddit is where I procrastinate.  YouTube is video.  Etc...  Usually, claiming this real estate happens around a simple, eminently repeatable use case / interaction.  Fundamentally, 99% of my searches, tweets, video views, etc... are identical (different content, but identical tasks).  My claim here is that if we make a thesis about what matters, then as long as we haven't nailed that one thing, all other "features" are actually destructive, as they get in the way of the one feature that matters (even calling it a feature is a mistake IMO :).  If 99% of my visits to Google are to do a simple search, showing me a home page like Yahoo's means that you doubled the cost (in time) of using your search engine to give me features I never use.  Free doesn't come at zero cost ;).

After going on this rant, I will clarify that most of my projects have not been an example of the above, but it's something I've become more and more convinced of as time goes by. Perhaps the sequence for planning should look more like:

  • Make a list of the pieces of "mental real estate" we may want to conquer
  • Make a hypothesis about which one piece of mental real estate we're going after
  • Build purely for that piece of real estate
  • If we win, good job, now we can start adding all the little bells and whistles
  • If we don't get the piece of real estate, change our hypothesis, shed the junk we just built as much as possible and start over

Wednesday, February 3, 2010

Startup Lessons I didn't Learn from a Book

I gave a talk at Stanford in mid-December 2009, where I was asked to talk about things I learned from starting my second company (Mixer Labs). Below is the presentation, which captures some of the ideas/thoughts I felt were most interesting. Lots of subjectivity and generalizations, but I thought people might find it an interesting/amusing read. Some of the slides are lacking context - for which I might make an updated post at a later time.

Startup Lessons by Othman Laraki

Tuesday, December 8, 2009

Hello World

Alright - I've been meaning to give my blog some love for a while and never got around to it. Better late than never. The good thing is that I've got a nice backlog of half-written thoughts to feed some interesting posts to get started.

So, what's this blog about? I'd summarize it as a random combination of thoughts on entrepreneurship, technology, and amateurish philosophical musings. Below is a little backlog of topics that are coming down the tubes:

- Forget about Aspirin and Vitamins, you want to be selling Crack
- An alternate definition of life
- What if countries did M&A transactions?
- The raindrop (you) on the window (life)
- Cognitive dissonance and what is our children learning
- Implications of truly ubiquitous location
- Scarcity as a basic building block for social interaction