Chaos

Probability Errors


Nicholas Taleb says:

 

"What causes severe mistakes is that outside the special cases of casinos and lotteries, you almost never face a single probability with a single (and known) payoff. You may face, say, a 5-percent probability of an earthquake of magnitude 3 or higher, a 2-percent probability of one of magnitude 4 or higher, and so forth. The same with wars: You have a risk of different levels of damage, each with a different probability. "What is the probability of war?" is a meaningless question for risk assessment."

 

This is definitely true, but I don't see how it could be incorporated into a debate round. Thoughts and ideas on this will improve policy debate.


Now it's way past my bedtime, so I might be reading the card wrong, but it looks like it's a reason to evaluate magnitude over probability.




http://www.the3nr.com/wiki/index.php?title=2009-2010_Wayzata_(MN)_-_Krishnan_Ramanujan_%26_Dru_Svoboda


You're both reading it wrong. Taleb's arguments about uncertainty aren't the focus here, and Dustin was tired.

 

Typical policy rounds assign a single static value to both probability and magnitude, without recognizing that overall risk is made up of many possible levels of damage, each with its own probability. Taleb says that is a dumb way to do risk analysis because it doesn't correspond to the real world.



I'm not sure whether this is what Taleb's talking about in the quote, but:

 

When we're talking about probability, what we're really talking about is a distribution of events. Unless that distribution is uniform, some events are more likely than others. If we knew the distribution, we would be able to say something about the likelihood of particular events happening.

 

Two simple examples of this are the binomial distribution and the Gaussian (normal) distribution. The binomial distribution describes the probability of a given number of positive outcomes in a series of independent yes/no trials - for example, the number of times tails comes up when you flip a (fair) coin a fixed number of times. In that case we can describe the probability of a particular event happening - if the coin is fair, then there's a 50% chance that any given flip comes up tails. The binomial distribution is what's called a discrete distribution, because we can identify all of the possible outcomes.
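To make that concrete, here's a minimal Python sketch of the coin-flip case (the flip counts below are just illustrative numbers I picked, not anything from Taleb):

```python
# Minimal sketch: binomial probability of k tails in n flips of a fair coin.
from math import comb

def binomial_pmf(k, n, p=0.5):
    """P(exactly k successes in n independent trials, each with success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binomial_pmf(1, 1))    # 0.5    - a single flip coming up tails
print(binomial_pmf(5, 10))   # ~0.246 - exactly 5 tails in 10 flips
```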

 

The normal distribution is the 'bell curve.' For many phenomena in the real world, we can use it to describe situations where we can't identify precisely every possible outcome, but we have a sense that the data is clustered around a particular value in the middle. Think about height, for example. Most people fall around some average height (say, American men are about 5'10''). About 68% of men are within three inches of this height (within one standard deviation); most American men you'll meet will be somewhere between 5'7'' and 6'1'', right? Of course, there are some people who are a bit taller or a bit shorter, and some people who are extremely tall or extremely short (but you don't encounter a lot of seven-foot-tall dudes). Those extremes are called the tails of the distribution. (Wikipedia is telling me that height is actually lognormal rather than normal, but whatever.)
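Here's a quick sketch of that 68% figure, assuming (purely for illustration) that height is normal with a mean of 70 inches and a standard deviation of 3 inches:

```python
# Sketch: how much of a normal distribution sits within one standard deviation.
# The mean (70") and standard deviation (3") are assumed values for the height example.
from statistics import NormalDist

height = NormalDist(mu=70, sigma=3)

within_one_sd = height.cdf(73) - height.cdf(67)
print(within_one_sd)          # ~0.683 - the "about 68%" between 5'7'' and 6'1''

print(1 - height.cdf(76))     # ~0.023 - the right tail: men 6'4'' or taller
```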

The normal distribution belongs to a group of distributions that's called continuous; there are other continuous distributions out there (some of which have a lot of values at one end and a long tail at the other - in other words, they're skewed. Think wealth distribution; there are a lot of people who are poor or middle class, and some EXTREMELY rich people).

 

The point of all of this, beyond pedantry and embarrassing myself when someone inevitably calls me out on a mistake I've made, is: I think Taleb is getting at the idea that most real-world events aren't described binomially, but rather by some distribution that has many outcomes. We could describe the probability of war as a yes/no proposition and assign a probability to that. But Taleb, if I'm reading his quote correctly, is saying we should do something different. For the purposes of risk assessment, a first step would be to identify possible scenarios and figure out what the relative probabilities of those scenarios are. Or we might think of the damage from war as being described by something closer to a normal distribution, if we thought the most likely outcome was somewhere in the middle, with a smaller possibility of a very low magnitude war or a very high magnitude war.

 

If we know the distribution and we know the 'magnitude' (the cost or benefit of a particular outcome), we can then calculate the expected value of particular outcomes, which is pretty much just

 

(the probability of a particular outcome) × (the value we assign to that outcome) = expected value of that outcome.
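As a rough sketch of how that plays out across a whole distribution of outcomes (the scenarios, probabilities, and impact numbers below are invented purely for illustration):

```python
# Sketch: expected value over a distribution of outcomes.
# All probabilities and impact values here are made up for illustration.
scenarios = {
    "no war":       (0.80,       0),
    "limited war":  (0.15,    -100),
    "major war":    (0.04,   -1000),
    "nuclear war":  (0.01, -100000),
}

expected_value = sum(p * value for p, value in scenarios.values())
print(expected_value)   # -1055.0 - dominated by the low-probability, high-magnitude tail
```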

 

In a debate context, this weighs impacts (outcomes) by both magnitude AND probability. Things that are REALLY REALLY BAD (extinction) may not be very likely, but they're so bad that we should avoid them (the expected value is so negative that it's not worth the risk - .000001 times negative infinity is still negative infinity, judge!). On the other hand, you could argue that the probability of certain events is so low that it should functionally be treated as zero (and thus have an expected value of zero), and that it therefore makes more sense to focus on higher-probability, perhaps less-extreme events. In this framework, there's no point in evaluating probability over magnitude; you look at both at the same time.

 

A lot of conventional impact weighing is about which impacts are important (assigning magnitude) and which impacts are likely (what's the distribution of outcomes). You could argue that the judge should be risk averse - since we can't know the distribution of events (it is sort of a weirdly Platonic concept), we should weigh worst-case scenarios more heavily. Of course, you could have a debate about what distribution is appropriate (which might get esoteric and, like I said, is more or less an empirical question - which distribution fits the data, and with what parameters, is something you can test mathematically, and that's hard to do in a debate round).


I think the best way to apply Taleb in debate is basically this: we, as debaters, have been trained in most instances to assign a single probability to an entire scenario. Let's take the following example:

U: US-China relations on brink

L: Plan unpopular in China, hurts relations

!: Low US-China relations lead to armed conflict over Taiwan, goes nuclear, extinction

 

Now, we would probably do something pretty rudimentary. For example, how likely is it that the link gets triggered? Are relations really on the brink? But let's think about this in a different way: as a sequence of events.

1) US-China relations on brink

2) China doesn't like the plan

3) Space policy spills over into other areas of relations (or is key to relations)

4) Low relations lead to war over Taiwan

5) US-China war about Taiwan escalates into nuclear war

6) US-China nuke war is an existential risk

 

So let's think about a world where things are going pretty well for the neg - they've won points 1-3. Point 4 is probably less likely even if they win points 1-3, because of a few questions. Do low relations lead to war? Are there other things that might cause war besides space policy - and a war over Taiwan specifically, at that? Aren't the US's and China's economies interconnected enough that there's not really going to be a war? This is the level where alt-cause or "___ prevents the impact" arguments come into play. So you might assign a probability like 60% to an armed conflict between China and the US. However, remember that they said Taiwan specifically. If you can prove that Taiwan isn't important enough for China and the US to risk war, you still prevent their scenario. Adding on this new layer, you might say that it's something like a 50% risk.

 

However, that's only for an armed conflict. What about point 5? There are a lot of reasons (as we all know) that the US and China REALLY don't want to start a nuclear war. Alt-cause claims go away at this level, but "___ prevents the impact" claims are probably just more true. So the risk of a nuclear war is maybe something like 20%. However, keeping in mind that we're only talking about Taiwan, that might be something like a 5% risk or less (even assuming doomsday debate logic). Taiwan really isn't important enough for nukes to be used, and ideally you have a card that says something to that effect.

 

Finally, troublesome point 6. Everyone has read impact defense cards against nuclear war, and Bostrom 2 is pretty good about explaining why only a Russia-US nuke war is an existential threat. After you've finished your slaughter of their probability, you should definitely have them down to less than a 1% risk.

 

So how do you frame it? Just as I explained above. Simply because they win that US-China relations would be hurt doesn't mean a war starts; if they win that, it doesn't mean it goes nuclear; and if they win that, it doesn't mean it spells extinction. It's the simplest and most common-sense way of doing risk analysis with probability.
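If it helps, here's a minimal sketch of that chain of conditional probabilities, using the rough percentages above (the figure for the final extinction step is an assumption I picked just to land under the 1% mark):

```python
# Sketch of the link chain: multiply the conditional probability of each step.
# The percentages are the rough figures from the analysis above, not real estimates;
# the final step is an assumed value chosen to land the total under 1%.
chain = [
    ("armed US-China conflict, given collapsed relations", 0.60),
    ("that conflict is specifically over Taiwan",           0.83),  # cumulative ~50%
    ("the Taiwan conflict goes nuclear",                    0.10),  # cumulative ~5%
    ("the nuclear exchange is an existential risk",         0.10),  # assumed
]

risk = 1.0   # assumes the neg has already won points 1-3 outright
for step, p in chain:
    risk *= p
    print(f"P(... and {step}) = {risk:.3f}")
# Final cumulative risk ~0.005, i.e. about half a percent of the terminal impact.
```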

 

 

 

 

 

TL;DR all debate impact scenarios are contrived lies and this Taleb quote just called you out on why.



Strub's post is really excellent. There are some mathematical terms in it that I'm not familiar with, but it describes Taleb's argument well.

 

I do disagree that probability debates as currently practiced are the same thing as debates over the average distribution of probability. There's never an analysis of multiple potential levels of magnitude, only an argument that they access the biggest one (extinction or value to life). Other less significant potential effects are rarely discussed. Competitively, this makes sense. Teams don't want to weaken their arguments by admitting that they're not certain that a war would happen, or that it might not cause extinction, or whatever.

 

I just wish that we could find a way to evaluate the distribution of magnitude along the axis of probability instead of generalizing all of our claims. Being able to recognize that a certain situation carries a 5% risk of an extinction-level war, a 10% risk of a nuclear war, and a 20% risk of a big war is a much more useful skill for the real world than anything current impact debates teach, because we never have the level of certainty that we always pretend to have as policy debaters.
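Something like the sketch below is what I have in mind - the 5%/10%/20% figures are the ones from the paragraph above, and the magnitude numbers are placeholders I made up:

```python
# Sketch: weighing a distribution of magnitudes rather than one generalized impact.
# Probabilities are the illustrative figures above; the magnitude values are made up.
outcomes = [
    ("extinction-level war",  0.05, 1_000_000),
    ("nuclear war",           0.10,    10_000),
    ("big conventional war",  0.20,     1_000),
]

expected_cost = sum(p * magnitude for _, p, magnitude in outcomes)
print(expected_cost)   # 51200.0 - the extinction tail dominates despite its low probability
```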

 

Also,

 

Bostrom 2 is pretty good at explaining why only Russia-US nuke war is an existential threat.

I lol'd.


Is the point of the book to give up on future predictions? Is a world without predictions better? Or what type of predictions should we be making?

His goal is to improve predictions, specifically economic predictions, in order to allow them to better accommodate finite human knowledge.

 

He advocates a diverse portfolio, and some other stuff related to exploiting structural trends within market systems. I haven't read too much of his work, so I can't go into much detail, because I don't really know.



Perhaps a better question then is, "What is the net present value of the cost of war?" or the like.



I feel like talking about the net value is both necessary for making decisions and misleading, because it obscures the fact that risk isn't homogeneous. If we can talk about the net value of war without reducing risk and its relation to magnitude to an overly narrow description, then this question is a good way of looking at it.
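For what it's worth, here's a minimal sketch of a net-present-value calculation over an expected stream of war costs - the cost stream and the 3% discount rate are invented for illustration:

```python
# Sketch: net present value of an expected stream of future war costs.
# The yearly costs and the discount rate are invented for illustration.
def npv(cash_flows, r):
    """Discount a list of per-year costs back to present value at rate r."""
    return sum(c / (1 + r) ** t for t, c in enumerate(cash_flows, start=1))

expected_yearly_cost = [50, 80, 80, 40, 20]   # expected cost in each of the next five years
print(npv(expected_yearly_cost, r=0.03))      # ~250 in present-value terms
```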


Thanks.

 

Seems like a great reason to K it up if a team isn't ready to engage that argument.

 

This links to you harder & K & Alt & framework. Ks grounded in historical analysis (assuming it is).

 

Part II:

 

Otherwise, obviously:

1. Read "Black Swan" cards back at them (i.e., the conclusion & other parts)

[insert card]

 

2. Social science predictions good ****

[insert card]

 

3. Turn - using the stock market as a basis for making decisions = bunk. (Your argument is more specific to economic predictions.)

A ) Even in the case of the supposedly unpredictable stock market, hedge funds consistently outperform the market.

B ) The stock market is more anonymous & less trust-driven than international politics. There are fewer checks on individual behavior, whereas alliances, relationships, & the media keep governments accountable.

 

4. Insert your own analysis about predicting the future being critical to life on the planet.

[insert]

 

5. Paradox/double bind - this links to them too (it links to them even harder if they run economics-related arguments, because that's where Taleb grounds his argument).

 

6. Doesn't assume a counter plan + disad + turns the case story.

 

7. Answer the arguments on a line-by-line basis.

 

Part III

 

It needs work, but it's not terribly shabby with cards or analysis in three spots.

 

As a side note, has anyone read Philip E. Tetlock's "Expert Political Judgment" (2005)?

Gregg Easterbrook's review in the New York Times said it "provided extensive detail on their dismal forecasting records."

 

Here is the Google search for "Expert Political Judgment" if you want to chase down some round-winning quotes.

 

Here is a CATO article referencing Tetlock:

http://www.cato-unbound.org/2011/07/11/dan-gardner-and-philip-tetlock/overcoming-our-aversion-to-acknowledging-our-ignorance/

 

The question, though, is that it seems better to be slightly better than random than to be purely random. And this raises the question of which types of predictions are good & bad - and which are hard or easy to make.

