Friday, March 20, 2009

Reacting to the Past at Barnard College

(via PaxSims) Barnard College runs an interesting program called "Reacting to the Past," a set of classroom roleplaying exercises based on historical situations:
“Reacting to the Past” (RTTP) consists of elaborate games, set in the past, in which students are assigned roles informed by classic texts in the history of ideas. Class sessions are run entirely by students; instructors advise and guide students and grade their oral and written work. It seeks to draw students into the past, promote engagement with big ideas, and improve intellectual and academic skills. Pioneered by Barnard College in 1996, the project is supported by a consortium of colleges and universities.

All of the games are set in the past, and thus might be regarded as history, but each game also explores multiple additional disciplines. Part of the intellectual appeal of RTTP is that it transcends disciplinary structures. In addition to games in the published series, the consortium seeks to expand the curriculum by supporting faculty workshops and collaboration on new game designs that explore a variety of historical moments in the humanities and sciences.
Neat stuff. There are some upcoming faculty conferences in different regions for those interested in using or contributing to the program (PaxSims has more about that). What struck me after watching the sample video at the RTTP site was that it was difficult to see what the structure of the exercises really was. Is this a roleplaying debating society, or are there some underlying game elements that aren't shown? Based on the accompanying web pages, it seems that there are some other aspects to the game that I just wasn't able to pick up on. This reminds me of some of the games designed by Megagame Makers (discussed in another post a while back), and I wonder if some of them (particularly the Washington Naval Conference one) could be transferred to this RTTP framework without much trouble. Perhaps that wouldn't be sweeping enough in scope for the RTTP program. I also wonder if the good folks at ChoMUN and MUNUC are aware of this... and vice versa....

Thursday, March 19, 2009

More on risk, uncertainty, and Nassim Taleb

To follow up on my previous post on Nassim Taleb and his work, there is a critical distinction that is often overlooked between risk and uncertainty.* Risk is quantifiable. Whether or not it has been properly measured, it refers to something that is measurable. Uncertainty is not quantifiable. Risk can be "bought down" or hedged in ways that uncertainty cannot. A standard example of risk is the sort of game you find at a casino, like roulette, where the odds are clearly known. The state of affairs that will result from a war (the outcome or outcomes), on the other hand, is uncertain, relying on too many variables, complex interactions, and unknown unknowns to be meaningfully quantified. The outcomes of risks have known probability distributions. Not so with uncertainties. Here's an example of these concepts in action, in the context of baseball.
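To make the distinction concrete, here is a minimal sketch in Python. The numbers describe an American roulette single-number bet, a textbook case of risk: because the distribution is fully specified, the expected value falls out of simple arithmetic in a way that no comparable calculation is available for a genuinely uncertain outcome.

```python
from fractions import Fraction

# Risk: American roulette, single-number bet. The probability
# distribution is fully known, so the expected value is exact.
pockets = 38                  # numbers 1-36 plus 0 and 00
p_win = Fraction(1, pockets)
payout = 35                   # a winning $1 bet pays 35 to 1
ev = p_win * payout + (1 - p_win) * (-1)
print(ev)                     # prints -1/19, about -5.26 cents per dollar

# Uncertainty: no analogous computation exists. The "expected outcome"
# of a war would require a probability distribution over outcomes that
# we do not and cannot know; putting numbers on it would only conceal
# that ignorance behind false precision.
```

The point of the contrast is that the roulette number is not just an estimate that happens to be good; it is derivable from the structure of the situation, which is exactly what uncertainty lacks.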

This conceptualization of risk and uncertainty is sometimes mapped onto the "known unknown" vs. "unknown unknown" divide, with known unknowns characterized as risks and unknown unknowns as uncertainties. But the distinction between known unknowns and unknown unknowns is based on the knowledge of the observer, while the distinction between risk and uncertainty is in some respect a difference in "knowability." There are plenty of "known uncertainties" in the "known unknown" category, where we are perfectly capable of identifying something we don't know which nonetheless has a probability distribution we do not or cannot know.

Part of Taleb's argument with regard to the financial crisis could be framed in these terms. Financial managers thought they had transformed some types of uncertainty into risk that could be reliably estimated through new statistical techniques. But they were operating with assumptions about the underlying probability distribution of events that were unwarranted, meaning that all they had managed to do was conceal significant uncertainties (unquantifiable indeterminacies) that ultimately came back to bite them. (See this Wired Magazine article for a fascinating description of how this came about.) The more generalized form of this argument is that we tend to operate as though we are facing risks, rather than uncertainties, in part as a result of our psychological biases. (Taleb discussed a variant of this a bit in the podcast I referred to in the earlier post.)

This may all seem rather esoteric, but in a world where there is a fresh appreciation for the limits of statistical knowledge there needs to be a way to act under uncertainty. The elemental caution reflected in Taleb's advice on operating in what he would call the "fourth quadrant" and what I would call conditions dominated by uncertainty is as reasonable a response as any to decision-making in this type of environment. When faced with less-structured problems, dominated by uncertainties and unknown unknowns, highly structured analytic tools are frequently neither appropriate nor helpful. Gaming can be one of the least-structured analytic approaches, which limits its outputs but allows it to constructively address issues characterized by deep uncertainty.


* These definitions do not conform to popular usage of either term, but they are generally used in this way in policy analysis and represent a helpful way to characterize different types of indeterminacy. Risk in this case is not limited to costs or negative events, but instead applies to probabilistic outcomes both positive and negative.

Wednesday, March 18, 2009

The uncertain world of Nassim Taleb

A trader for 20 years, Nassim Taleb became fascinated by decision research as a result of watching his colleagues and the financial industry in general. Two books (so far) have resulted from this fascination: The Black Swan and Fooled by Randomness. I haven't had the chance to read either just yet, but I recently found this podcast of an interview with Taleb, in which he covers some of the same ground he goes over in his books. He is not an eloquent speaker and is occasionally hard to follow, but I enjoyed this conversation greatly. Russ Roberts struck me as a skilled interviewer, though I'm not sure how accessible some parts of the discussion will be to those without some economics-related background (give it a listen anyway... it's thought-provoking and worthwhile).

Taleb's work is an example of the recent trend of books about decision-making research which are aimed at a (relatively) broad audience. Blink, by Malcolm Gladwell, is the best example of this phenomenon I can think of offhand. In addition to demonstrating a heightened sensitivity to the role played by psychological decision-making biases, Taleb is particularly concerned with the use of statistics and related tools for decision-making purposes for which they are ill-suited, in his opinion. He has a point. About 55 minutes into the podcast, he and Roberts talk a bit about Taleb's distinction between what he calls "ludic" and "non-ludic" (also referred to as "ecologic") environments. Ludic decision-making environments are those where the "rules" are known, with no ambiguity, such as in a standard board game. Non-ludic environments are those where the rules are unknown and/or don't apply. Instead of a lack of ambiguity, non-ludic environments are characterized by "real world uncertainty." One of Taleb's basic concerns is that people seem to approach many non-ludic situations as though they were operating in a ludic environment, using decision-making tools (such as certain applications of statistics, but also including innate psychological decision-making biases and heuristics) with a ludic basis that do not necessarily provide the right kind of decision support for a non-ludic environment. Another way of saying this might be that people too often ignore the existence of unknown unknowns, to their peril.

This article by Taleb (found via Mapping Strategy, which has some intriguing thoughts on the subject as well) enumerates more of his ideas, focusing on his case against the misuse of statistics rather than the decision-making biases he spends most of the podcast talking about. It's interesting stuff. Our limited ability to plan for and consider the future based solely on backwards-looking models suggests that other methods are necessary. Mapping Strategy sees this as an endorsement of alternative tools like scenario planning, and I see a role for gaming to play in trying to understand our messy, non-ludic, ecologic world.

Monday, March 16, 2009

Red Team Journal and hypergame analysis

The current incarnation of Red Team Journal looks like a very promising site and I hope they start to post more often. They don't deal directly with gaming, but many of the concepts they discuss with respect to red teaming and alternate analysis are directly applicable to the study of gaming.

As an example, hypergame analysis (an extension of game theory developed by Peter Bennett) is a way of thinking about conflicts where the two sides perceive themselves as playing different games (in the game theoretic sense*). The perceived rules, outcomes, and decisions may vary significantly between the participants. Some actors may correctly perceive (completely or incompletely) that their opponents perceive the conflict very differently, while others may (incorrectly) believe that both sides share the same basic framework. Fourth generation warfare is marked by this kind of split perspective, in contrast to more traditional forms of warfare where both sides largely shared the same basic assumptions and perception of the conflict. Representing that kind of split perspective in a game (not in the game theoretic sense) is a challenge, as discussed previously here. Hypergame analysis might provide some insight into how to structure this kind of exercise, and I hope that RTJ returns to the subject at some point.
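The core of Bennett's idea can be caricatured in a few lines of code: each player optimizes within the game it *believes* it is playing, and the mismatch only surfaces when the choices collide. The payoff numbers and the maximin decision rule below are invented purely for illustration, not drawn from Bennett's own formalism.

```python
# Toy hypergame sketch (all payoffs invented): each side chooses using
# the game it believes it is playing, which may not be the game the
# other side believes it is playing.

def maximin(payoffs):
    """Pick the row strategy that maximizes the worst-case payoff."""
    worst = [min(row) for row in payoffs]
    return worst.index(max(worst))

# Side A perceives a contest in which escalating (row 1) is clearly safer.
a_view = [[3, 0],
          [5, 1]]   # A's payoffs, as A imagines the game

# Side B perceives an entirely different game, one in which restraint
# (row 0) is the prudent choice.
b_view = [[4, 2],
          [3, 0]]   # B's payoffs, as B imagines the game

a_choice = maximin(a_view)   # 1 -- A escalates
b_choice = maximin(b_view)   # 0 -- B shows restraint
print(a_choice, b_choice)    # prints: 1 0
```

The interesting cases in hypergame analysis are exactly these: each choice is perfectly defensible within its holder's perceived game, and neither side's behavior makes sense to the other until the difference in perceived games is made explicit.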

* Once again, the terminological issue of gaming vs. game theory rears its ugly head.

Friday, March 13, 2009

Some semi-relevant links

A few links that are slightly off topic for this blog, but only slightly....
  • A school with a game-based curriculum is opening in NYC next fall. It's currently looking for teachers and students:
    Quest supports a dynamic curriculum that uses the underlying design principles of games to create highly immersive, game-like learning experiences for students. Games and other forms of digital media also model the complexity and promise of "systems." Understanding and accounting for this complexity is a fundamental literacy of the 21st century.
  • The U.S. Institute of Peace is developing what it calls the Open Simulation Platform (OSP), which would be an open source online simulation development tool, hopefully streamlining and simplifying the process of simulation generation and execution to allow more trainers and educators to make use of this type of tool: (via Wiggins)
    The problem describing the OSP is that it is different things to different audiences. To students it is an online world. (Albeit right now it is a very simple text and static image based world.) To instructors/facilitators it is an online library to select a simulation and tools to run a simulation. To simulation authors it is a guide (someday we hope a wizened guide) to help one construct meaningful training simulations. And finally, to a community we hope that it becomes an improved marketplace of ideas: a place where people can really debate their beliefs in more meaningful ways. When proving one’s point about something comes down to creating a realistic and sophisticated simulation that demonstrates that point, we will have arrived on that level.
  • The Economist recently ran a story about Alternate Reality Games (ARGs). These events generally blend online and real-world elements, usually involving both puzzles and narrative. Their (commercial) origins were as alternative advertising for films and the like, but they are being applied to a variety of other purposes. As is often the case with The Economist, it's a pretty good introduction to the subject:

    It was back in 2001 that the first commercial ARG, “The Beast”, a promotional campaign for Steven Spielberg’s film “A.I.: Artificial Intelligence”, began blurring the line between reality and fiction. Instead of formally announcing the start of a game, ARGs merely leave clues for potential players to follow: a subtle image on a poster, perhaps, or a cryptic message on a website. Fans must piece together the narrative—that’s the “alternate reality”—on their own. ARGs are characterised by their reliance on technology and teamwork, and are often shrouded in mystery until they end, weeks or even months later. Only then is the full story (and the product being promoted) revealed.

    Having started off as marketing tools for films and video games—as with “The Beast”, or “I Love Bees”, an ARG created to promote “Halo 2”, a video-game, in 2004—ARGs are now entering the mainstream. Consider “The Lost Ring”, commissioned by McDonald’s for the 2008 Olympics. Designed by Jane McGonigal, an ARG pioneer who used to work at 42 Entertainment, the game brought together players across six continents to uncover a story of amnesiac athletes and to recreate a supposedly lost (but actually fictional) ancient Olympic sport. “Most people’s experience of the Olympics is vicarious,” says Ms McGonigal. “I wanted to give people a more social and active way to experience them.” This ARG, linked to a global sporting event, sponsored by a multinational company and run in seven languages, shows how far ARGs have come.

Wednesday, March 11, 2009

Pirates, yellowcake, and the press at Patterson

Rob Farley has a post up about the recent crisis game at the Patterson School of Diplomacy and International Commerce at the University of Kentucky. He links to the in-game press reports about the action, which give a pretty good idea of how things worked (posts are listed in blog-style reverse chronological order, so start at the bottom of the page). What's most interesting to me about this setup is that they brought in journalism students to play the world media and produce their press reports within the game, interviewing the participants as things moved along. It's an interesting concept. The media are frequently represented in this sort of exercise, but in the examples I've seen (particularly Simulex at the Fletcher School, which I will have to describe in more detail sometime), the "media" is a tool of the control team, used for distributing information (and disinformation) about developments in the scenario to the participants, rarely if ever producing reporting ON the participants. Bringing in journalism students to do so seems like a great way to kill two birds with one stone. The other participants get a taste of operating in a global media environment, while the journalists have an international crisis staged for them to practice reporting on. And as a bonus, it resulted in an easily readable account of the weekend. This is not something that would work for every crisis game, perhaps not even many of them, but it's a nice feature for this one.

Monday, March 9, 2009

Pre-mortems and the Day After Game

The "pre-mortem" is a technique proposed by Gary Klein and promoted by Daniel Kahneman to legitimize dissent and identify previously unconsidered risks in projects or plans. Kahneman is a psychologist who is essentially the father of behavioral economics, and who won the Nobel Prize in Economics in 2002 for developing prospect theory with Amos Tversky, so when he gets interested in a decision-making tool it's a good sign that there is some real value to it. In an interview, Kahneman described it like this:

'I learnt a great idea from a friend of mine, Gary Klein. He recommends what he calls a 'pre-mortem'. It's a very clever idea. You get a group of people who have made a plan, it's not completely final, but they have a broad plan. And then you bring them together for a special session, 45 minutes is usually enough. You tell them, take a sheet of paper and now imagine the following: a year has passed, we have implemented the plan and it is a disaster.

'Now, write down on that sheet of paper what happened. Why did the plan turn out a disaster?

'It's a brilliant idea. Because you have a group of people who are now encouraged to think of difficulties and problems with the plan. And that solves the problem of legitimising dissent very elegantly, and it's easy to implement.

Simple and elegant. So much so that there isn't much written about it online. How much more is there to say? Kahneman himself discusses the pre-mortem in this video interview (the whole discussion is worthwhile, but the pre-mortem section starts with about 12:38 remaining). As one commenter paraphrased part of Kahneman's argument:
In organizations where the members are competitive, you expect people to think quite hard about the flaws in the idea and what could go wrong. In a room of twenty people you might expect three or four new ideas that can be used to readjust and improve the proposed plan of action.
The whole concept of the pre-mortem reminds me of a type of game that RAND developed called the Day After Game. This is a much more structured exercise than a pre-mortem, but a pre-mortem seems limited to considering specific plans that are already well-developed, while the Day After Game can be applied to more general problems or issues. The Day After Game is divided into three periods. In the first period, participants are confronted with an unfolding crisis and asked for their response. In the second, the "day after" (hence the name of the technique) the crisis hits is explored, with the consequences mapped out and the effectiveness (or more likely ineffectiveness) of the measures proposed in the first period considered. But it's the third phase that really separates the Day After Game from other crisis exercises, because then participants are asked to step back to before the crisis event and consider what could have been done between the present day and the hypothetical future date of the crisis in the first period to improve the likelihood of mounting an effective response. By considering the severity of the day after, it becomes a mechanism for focusing attention on the "day before." In this respect, the Day After Game is a means of generating "prospective hindsight" in a similar fashion as the pre-mortem exercise.

This methodology was originally developed as a means of surveying the Washington policy community by running a series of games with many repetitions, in an attempt to identify the major schools of thought about policy alternatives that would likely dominate discussion in Washington in the near term. Over time it seems to have evolved into something more focused on generating "prospective hindsight." The RAND report on the first use of this technique (examining nuclear proliferation) is available for free in .pdf format in three volumes. One other Day After Game report (on cybersecurity) is publicly available as well. There is a good description of the methodology followed in each report. RAND has apparently used the technique in a number of other contexts, as well.

I'm a little surprised that this type of game hasn't made many inroads (to my knowledge) into academic or classroom settings. It's a simple enough concept. There isn't any strategic interaction, since Day After Games are run with all the participants responding as a group to the scenario, but that's part of what makes it easier to execute.

Friday, March 6, 2009

Eliot Cohen on punditry (and the implications for gaming)

I missed this when it was published, but Eliot Cohen had a great op ed in the Wall Street Journal in January, shortly after stepping down as senior adviser to SecState Rice. It doesn't directly touch on gaming, but it provides some very useful suggestions for pundits and policy commentators on the outside of government looking in:
Most commentators have a radically imperfect view of what's going on. Those on the inside, including at the very top, know more, though less than one might think. Government resembles nothing so much as the party game of telephone, in which stories relayed at second, third or fourth hand become increasingly garbled as they crisscross other stories of a similar kind ("That may be what the Russian national security adviser said to the undersecretary for political affairs on Wednesday, but it's not how the Turkish foreign minister described the Syrian view to our ambassador to NATO on Thursday.") Add to this the effects of secrecy induced by security concerns, as well as by the natural desire to play one's cards close to one's vest, and the result is a well-nigh impenetrable murk of policy making.

But it's even murkier on the outside. "Occasionally an outsider may provide perspective; almost never does he have enough knowledge to advise soundly on tactical moves," Henry Kissinger once remarked. Or as the White House correspondent of one major national newspaper once confided to me, "We really don't have a clue what's going on in there."

What, then, is a pundit to do? The best commentary has an impact, less because it offers new ideas (most ideas have been considered, however incompletely, on the inside) than because it clarifies problems or solutions that the insiders have only vaguely or incompletely considered. A tight, well-written, and carefully reasoned examination of a policy problem will bring into focus an issue that the officials have not had the time, or often the literary skill, to capture precisely. That kind of analysis is very much worth reading.

Invariably, a pundit will prescribe solutions. In doing so, he should follow the advice of the late Raymond Aron, the wisest French policy intellectual of modern times: Never criticize a policy unless you can convincingly depict a better course of action. Aron, like many of the greatest commentators on policy, had virtually no experience in government, but great empathy for those in a position to decide. Empathy -- the capacity for imagining what it is like to be the other -- is an essential quality for the thoughtful pundit. Policy makers, of course, prefer sympathy, which is soothing, unnecessary and often harmful.

There's more in the full op ed. It's not hard to see how this could relate to gaming. In a sense, the question of how games can have a policy impact is implicated in this line of thinking. If a game "clarifies problems or solutions that the insiders have only vaguely or incompletely considered" or provides the opportunity for outside commentators to develop the empathy with decision-makers that Cohen describes, that could be a valuable contribution. It is extremely rare for high-level decision-makers to participate in games; the time just isn't available. Cases like the Sigma II game in 1964 are few and far between. Therefore, the utility of games from a policy perspective is at best indirect. Cohen's piece provides some food for thought on maximizing the usefulness of games to current policymakers.

(Found this piece thanks to Wiggins at Opposed Systems Design)

Update: Rex Brynen has more thoughts on this, including an excellent example from his own experience (and some very kind words) at PaxSims. See also the comment by Wiggins.

Thursday, March 5, 2009

Game theory courses (and some learning by doing... in Starcraft)

Game theory doesn't have anything directly to do with the political/military gaming this blog is about, but it's a useful tool in conceptualizing strategic interaction, which can come in very handy. Academic Earth has posted video of the lectures from two different elementary game theory courses, one from Yale, the other from Berkeley. The Berkeley course only has one lecture posted so far, but the Yale course appears to be complete. However, the Berkeley course offers something unique: it is the first class to be taught from the perspective of the video game Starcraft. The strategic interaction within the game forms the basis for the course's explanations of the basics of game theory, and familiarity with the game is highly recommended for students taking the class. I'm not sufficiently familiar with Starcraft to fully appreciate this, but it's a fascinating concept. The class website is here, with more information. It's enough to make me consider trying to hunt down a copy of the game.

For a more complete explanation of the difference between gaming and game theory and an example of how game theory and gaming together can complement each other as analytic tools, see the section of this post describing an article by Paul Bracken that utilizes both.

Tuesday, March 3, 2009

Sigma I-64 and Sigma II-64: gaming Vietnam

In 1964 the JCS hosted two particularly interesting wargames, Sigma I-64 and Sigma II-64. The Sigma wargames were designed to consider the issues surrounding the escalation of U.S. commitment in Vietnam. H.R. McMaster's article in the current issue of World Affairs mentions these games in the context of what they revealed about the weakness of "Graduated Pressure," a strategy of carefully calibrated use of force meant to efficiently change the behavior of the adversary:
The failure of Graduated Pressure was foretold even before it was implemented. In 1964, two eerily prophetic Pentagon war games exposed fatal flaws in the strategy. In those war games, Southeast Asia experts played the role of the North Vietnamese government. In response to limited bombing designed to signal American resolve, those experts decided to infiltrate large numbers of North Vietnamese Army soldiers into the Central Highlands of South Vietnam. This, in turn, impelled the commitment of American troops to the South. The war games concluded that the combination of enemy sanctuaries in North Vietnam, Cambodia and Laos, the enemy’s ability to sustain itself on meager provisions, its strategy of emphasizing political and military actions to avoid strength and attack weakness, and limitations on the application of American military power, would mire the United States in a protracted conflict with little hope for success. The game ended after five years of fighting with five hundred thousand troops committed in South Vietnam. Bundy, however, found the conclusion to be “too harsh.” Rather than force a reexamination of strategy, the results of the SIGMA I and SIGMA II war games appear, in retrospect, as a roadmap that civilian and military leaders followed along the path to failure in Vietnam.
McMaster goes on to note that "The SIGMA war games had no effect on American policy or strategy in Vietnam." Why is that? On the surface, the games were extremely well positioned to have a major impact. Certainly, there was a need for sober reexamination of the underlying assumptions of U.S. strategy. Harold Ford cited a CIA officer’s comment on Sigma I-64:
“Widespread at the war games were facile assumptions that attacks against the North would weaken DRV capability to support the war in South Vietnam, and that such attacks would cause the DRV leadership to call off the VC. Both assumptions are highly dubious, given the nature of the VC war.”
Gaming with a broad cross-section of participants can serve to identify and challenge the assumptions they bring with them, and in this case there were participants in the Sigma games who did not share the predominant view of the conflict. Sigma I-64 was played by “working level CIA, State, and military officers.” Sigma II-64 also involved high-level policy officials, including National Security Adviser McGeorge Bundy, DCI John McCone, General Curtis LeMay, General Earle Wheeler, A/S Defense John McNaughton, Deputy Secretary of Defense Cyrus Vance, Deputy CNO Admiral Horacio Rivero, Jr., A/S State William Bundy, and CIA deputy director for intelligence Ray Cline. This is an exceptionally high-level group to find participating in a wargame, but it does not appear that the Sigma wargames of 1964 had any direct impact on policy, despite their dire warnings about the potential consequences of U.S. actions and despite the participation of numerous military and civilian national security leaders. Without a willingness to be persuaded on the part of decision-makers, the games could not make a policy impact. While the results of Sigma I and II were uncannily prescient, the failure of participants to take the results seriously is not the most significant issue; policy-makers are right to be skeptical of a game's predictive value. But by failing to move policy-makers to reexamine their assumptions from a fresh perspective, the Sigma games represent a colossal missed opportunity.

(Thomas Allen's 1987 book, War Games, has a chapter on the Sigma games (pp. 193-208). See also Harold Ford, CIA and the Vietnam Policymakers: Three Episodes 1962-1968, pages 57, 58, and 67.)