This inspired presentation by Harry Brignull should be required viewing for anyone who makes products that they want people to love. But I’d argue that more than “naming & shaming,” a business case needs to be made for treating customers with respect and creating user interfaces that try to help users do what they want. I think the success of companies like Zappos & Netflix is a testament to the fact that such a case can be made, and that while a/b testing can be very valuable, short-term gains achieved by tricking (and very likely frustrating and angering) your customers are dwarfed by the long-term value created by treating them humanely.
Also, check out his Dark Patterns site.
Persistent conflicts within your organization might not be a matter of personality or culture, as you may think. It could be that discord is the natural outcome of the way the “game” is structured.
For nearly eight years I worked at a small cable network, and the entire time, to varying degrees, there was some tension between the group that ran programming (i.e. which shows to acquire/produce, and how to schedule them) and the group charged with producing the promotional spots for that programming. (I was working on the digital side, so had a more or less neutral vantage point.) For any big programming event, everyone would wait with bated breath for the ratings to come in. If they were good, everyone was happy, and there were mutual congratulations all around. If not, things tended to be less sunny. The promo folks would grouse about the quality of the shows and the difficulty of trying to make a silk purse from a sow’s ear, while the programming folks pointed the blame at the promotional spots. More precisely — everyone acknowledged that the promos were creatively excellent, but what was questioned was the efficacy of the messaging in those spots. The criticism was generally that the producers of those spots were more interested in winning awards (and win awards they did) than in putting “butts in seats.”
This conflict led to back-channel sniping, heated meetings, and a general low-level hum of tension between the two groups that lasted during my entire eight-year tenure, despite some changes in the key personnel involved. Despite this persistence over time and changings-of-the-guard, it was generally thought to be a personality issue, or was perhaps more broadly attributed to the respective “cultures” of the departments. This way of thinking imagines some germ of conflict back in the beginning, which is then reproduced and magnified in those back-channel conversations, passing from person to person, like a disease.
It recently occurred to me that there might be a more fundamental, structural explanation for this conflict. That it was, in fact, inevitable given the way the reward structure and evaluation criteria were laid out. In short, the rules of the game guaranteed this conflict, much as the rules of Battleship ensure that one side will lose all its ships, or the rules of Monopoly that all the wealth will inevitably end up in the hands of one player.
Here’s why: both groups were evaluated by the same measure — the ratings. But there was no way to reliably isolate the impact of either group’s contribution to this measure. It’s a messy, abstracted, complex variable. On the other hand, the promo group *did* have another way to measure success, one which was clear and unambiguously attributable to their own efforts — those industry awards they were often faulted for chasing. Presented with two “success” paths — one in which success was clearly and solely their own, and one in which success (and failure) couldn’t easily be attributed to them, they tended towards the former. There’s nothing wrong with that — it’s the rational path for actors trying to maximize their own sense (and others’ perception) of accomplishment, and advance their careers. But for a business in which the ratings ultimately matter most, a structure which creates this sort of dissonance is flawed. Goals need to be aligned, and a structure put in place in which each contributor is reliably and objectively rewarded for their individual contribution to those goals.
In this case, I reckon the best solution would have been to make Promotion accountable to Programming, creating a sort of client/vendor relationship between the two, and making ratings solely the responsibility (for better or worse) of Programming. In this scenario, Promotion is structurally required to please Programming, and if their efforts aren’t meeting the greater needs, changes can be made. The larger point here is that it’s important to look at the structure of rewards within an organization and the methods by which success is measured and attributed, and that it may be hermeneutically useful to view the system as a game, with inherent rules that each individual actor is “playing” by and win conditions they are striving towards. The trick is to design (or redesign) the game so that everybody can win, the conditions for winning are clear and measurable, and everybody’s win condition contributes to the overall success of the enterprise.
About a year ago I made a series of tweets:
“we are creatures that model behavior. in a fixed group, such behavior is reinforced, and norms emerge. in an environment like twitter,” 9:55 AM Apr 2nd from Echofon
“…where the people i am following (and hence possibly modeling) are a) diverse (all from diff groups) and b) not the same as the people” 9:56 AM Apr 2nd from Echofon
“…following me, how can norms of behavior emerge? the problem of the reverse panopticon. need more characters to really get into this.” 9:57 AM Apr 2nd from Echofon
Finally, I’ve gotten around to explaining what it is I was thinking about:
Community consists of mutually-reinforced norms and modes of behavior. Within certain groups (e.g. family, peer groups, professional associations) these norms emerge iteratively and collaboratively (which is not to discount the variable power relations inherent in any such system) as behaviors are modeled and then reproduced until a certain equilibrium is reached — what some might call “community values,” but I mean it in a broader way than it’s commonly used in the public discourse.
Taken to an extreme in the online space, this can lead to the incestuous “echo chamber” effect found on so many political forums (on both the left and right), and to specialized argot and in-jokes impenetrable to an outsider (as seen, for example, on the discussion forums on woot.com — WTF are those people talking about?) But more often than not, this is where true communities form (as they model real-life communities where groups have a shared meeting place, and everyone is equally visible to everyone else.) Metafilter is a great example of this sort of community; it has a clear ethos and recognizable “voice,” despite being (largely) democratically-governed and the content entirely user-created.
In a loosely symmetrical system of relationships such as that enabled by Facebook, in which all connections (“friendships”) are mutually confirmed, but each individual belongs to a different (if often largely overlapping) peer group, a nice middle ground is established — since everyone you are “following” is also following you (unless explicitly hidden), there tends to be some semblance of normative equilibrium, without the homogenizing and rarefying effect exhibited in completely closed systems.
Different people will have radically different experiences of Facebook, depending on whom they’ve decided to surround themselves with (e.g. professional contacts? Friends? Family?), but these differences tend to be incremental based on the number of “hops” away from each node. Put simply, a friend of mine on FB is going to have a different experience of it than I will, but it will likely be less different than a friend of a friend, and so on.
But in a system like Twitter’s, in which relationships are asymmetrical (and therefore only incidentally reciprocal), the notion of a shared experience and mutually-reinforced mores which form the backbone of “community” goes out the window. While there may be 1000s of people viewing a particular tweet, the context of that tweet is completely different for each of the people viewing. What appears to be community, then, is in fact merely a self-constructed simulacrum of a community, in which the people you appear to surround yourself with are themselves surrounded by a completely different group of people, thereby allowing no actual communal norms to develop, except on the most macro-, system-wide level.
But what about the panopticon? In brief, the panopticon is a prison architecture proposed by Jeremy Bentham in the 18th century in which a series of cells extend radially around a central observation node. The prisoners can’t see each other, and can’t see their observer (or precisely when/if they are being observed,) but the observer at the center can see all.
Ignoring the social control purpose this architecture was originally intended for, and most theorists have concentrated on, I’m viewing it more simply as a structure of communication and consumption — of who is viewing, who is being viewed, and what is visible to each. Twitter can be viewed as an infinitely overlapping structure of reverse panopticons, with each participant at the center of his/her own universe, with no visibility outward back to the people who are watching them. There is no “conversation” per se (without a tedious, forensic reconstruction process), as each participant is experiencing and responding to a very different messaging landscape. In such a chaotic landscape, shared norms (a key component of a “community”) cannot emerge. For example, if I follow a bunch of dirty-mouthed comedians (as I do), I might easily get the sense that the ethos of Twitter is wild, profane, and uncompromisingly edgy. But then when I comment in kind, I may well shock the sensibilities of (say) the internet development professionals that follow me. Now multiply this dissonance by the number of individual nodes in the network, and you have a custerfluck of epic proportions, with millions of people shouting together, alone.
Now you might say — “that’s not my experience of Twitter! I feel like a part of a strong community, with a generally shared ethos and many, many mutual interactions.” That’s wonderful, but it also sort of perversely makes my point — due to the asymmetrical architecture inherent in Twitter, every participant’s experience of the product is going to be radically different, dependent on how they’ve structured and maintained their personal network. Surely it’s possible to create sub-networks that consist entirely of symmetrical relationships, with all the members of the group following and being followed by all the other members, but this arrangement is counter to the inherent architecture of the system (unlike a simple community forum, where it is the de facto structure,) and one would need to go to great lengths to accomplish it. Given that, one can no more speak of the “Twitter experience” than one could of the “telephone experience,” or the “pencil experience.”
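In graph terms, such a fully symmetrical sub-network is a clique in the graph of mutual follows — every pair of members following each other. A brute-force sketch (again with invented users, purely to illustrate the idea) shows how one might find them:

```python
from itertools import combinations

# Hypothetical follow graph: user -> set of accounts they follow.
follows = {
    "ann": {"ben", "cat", "dan"},
    "ben": {"ann", "cat"},
    "cat": {"ann", "ben"},
    "dan": {"ann"},
}

def mutual(a, b):
    """True if a and b follow each other (a symmetrical relationship)."""
    return b in follows[a] and a in follows[b]

def symmetric_groups(size):
    """Brute-force all groups of `size` users in which every pair is mutual."""
    return [g for g in combinations(sorted(follows), size)
            if all(mutual(a, b) for a, b in combinations(g, 2))]

print(symmetric_groups(3))  # [('ann', 'ben', 'cat')] -- a forum-like pocket
print(symmetric_groups(4))  # [] -- no fully symmetrical group of four exists
```

Clique-finding is NP-hard in general, which is a neat formal echo of the point above: the architecture makes these forum-like pockets something you must go to great lengths to construct, rather than the default shape of the system.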
All of this is not to say that Twitter is not an incredibly interesting and potentially useful tool (like the telephone or the pencil.) Just that it is architected in such a way as to make true community very difficult to achieve, and to promote the existence of Twitter micro-celebrities with thousands of followers that they don’t themselves follow. These celebrity nodes are where shared sensibilities might converge, but the followers aren’t themselves sharing a context — they are all observing and perhaps responding to the center (where an @aplusk or a @hodgman or a @scobleizer might sit,) but are invisible to one another. Is this a problem? If it is, is there anything to be done about it? I have some ideas, but this has gone on long enough for now. Curious to hear your thoughts, and thanks for indulging my rather rambly, admittedly somewhat pretentious, and not-fully-formed post…
If it’s optimized for a search engine, it’s *not* being optimized for me. I want my content to be HIO - human intelligence optimized.