MisterMills

Nuclear War Is Unfeasible?


I'm gonna break up my responses to you.

I edited my earlier post in response to the Dr. Pepper question you added.

 

 

Did my edit answer your question?

I'm going to answer this below and group it with your other argument.

Ohhh. I see where some confusion is coming from, now. To fix this, I should clarify that I was earlier just oversimplifying some of my views for convenience. I don't really believe there's ever zero risk of a link, per se. Rather, I believe there are situations where the risk of a link and the risk of a link turn are assumed to be equal because of the absence of evidence about which, if either, would happen. In those cases, my functional equivalent of "terminal defense" is present. I just refer to it as terminal defense because it's convenient and the rest isn't usually relevant.

Part of Dustin's point, and also mine, is that this is wrong. There can be a 0% risk of a link; there can also be a 0% risk of uniqueness, and a 0% risk of the impact.

 

Consider a politics scenario.

 

What does 0% risk of uniqueness look like? Well, "it already happened" could be one scenario! E.g.: if someone read health care politics and the aff got up and said, "Non-unique, it already passed," that would be a 100% uniqueness takeout. Alternatively, do you ever believe the argument "uniqueness overwhelms the link"? That is essentially the same thing: 100% defense on the uniqueness level that takes out the DA.

 

What does 0% risk of link look like? A blatant assertion, for example.

"Plan would be unpopular beause Dr. Doom from Marvel Universe would use his influence to mind-control the GoP". The link story is entirely fictitous in this scenario, and an aff that gets up and says, "Dr. Doom doens't exist and even if he did he would have no influence on the GoP" is sufficent to be a 100% takeout to this disad.

 

What does 0% risk of impact look like? Maybe China doesn't give a shit about a bill, maybe they don't even know it exists, maybe the bill doesn't actually DO anything. These can take out the i/l to the impact and are defensive arguments.

 

A lot of these scenarios are silly, but the point is that there are SCENARIOS in which there is 0% risk of an argument. Also, consider a DROPPED argument. If the neg drops the uniqueness evidence the aff read and doesn't extend their own, there is 0% risk of the DA. Similarly, there can be 0% risk of an advantage.

There are only a few different outcomes if we accept that extinction is inevitable.

A. Aff plan does nothing useful, we all die.

B. Aff plan does something useful, we survive.

C. We don't do the aff plan, we all die.

 

The only chance of survival is by voting affirmative, even if I think there's an equal risk of a link and a link turn present.

The link turn doesn't make things worse, so I err on the side of the link and hope and pray that the terminal defense was somehow, illogically, improbably, wrong.

 

I should also mention that I believe there is rationally no such thing as 100% certainty for any argument because known unknowns are (99.99999999% probably) inevitable.

I don't think you actually believe this. Given a chaotic scenario, I don't think you would start doing ANY ACTION POSSIBLE to save yourself. This is essentially the 'white noise' argument. To illustrate this I will use math (craaaaazy, I know).

 

1/3 = 0.333... (repeating)

1/3 × 3 = 0.999... (repeating)

0.999... (repeating) = 1

This is true.
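(For anyone who doubts that last step, here's the standard geometric-series derivation. This is my addition, but it's textbook material:

\[
0.\overline{9} \;=\; \sum_{n=1}^{\infty} \frac{9}{10^n} \;=\; 9 \cdot \frac{1/10}{1 - 1/10} \;=\; 1
\]

The partial sums get arbitrarily close to 1, and the infinite sum is exactly 1.)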

 

You may be right that there's a 'chance' doing something as idiotic as smashing a Dr. Pepper can on your head would save your life, but the point is that the probability of that is so low, it is overcome by other probabilities. E.g.: What if God is watching and this is a test of faith? What if this Dr. Pepper can in particular is actually a BOMB rigged to blow if you hit yourself in the head with it? What if, when you drowned THIS TIME, you got gills and superpowers?

 

The benefit of doing the action is overcome and swept away by the white noise of all the other possible scenarios. Simply put, 1/x as x → ∞ approaches zero even if it never reaches it. Low risk = no risk.
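Here's a toy version of the white-noise point in code. Every number here is invented by me purely to show the shape of the argument:

# Toy sketch: the tiny chance a desperate act helps is matched by equally
# tiny, unmodeled chances that it hurts, so the expected value washes out.
p_help = 1e-12                     # hypothetical: can-smashing saves your life
v_help = 1.0                       # value of surviving
noise = [(1e-12, -1.0),            # hypothetical: the can is secretly a bomb
         (1e-12, -1.0)]            # hypothetical: it's a divine test of faith
ev = p_help * v_help + sum(p * v for p, v in noise)
print(ev)                          # ~ -1e-12: the benefit is swept away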


I thought you were asking for clarification, not so that you could make arguments.

 

The known unknown argument goes conceded. You say uniqueness, but we can't even know that with 100% certainty. I am not 100% certain that Congress passed the healthcare bill, or that Obama is president, or that I exist, etc. 100% certainty requires infinite evidence, and that you would then never ever change your mind no matter what; that's the way probabilities and evidence work. It's literally impossible, based on the way human psychology operates, to develop a conviction so strong you would never change it (I think). Even if it were possible, that's not called rationality; that is called dogmatism, and it is bad.
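To formalize that a little (my framing, in Bayesian terms; nobody above put it this way): Bayes' theorem in odds form is

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}
\]

The posterior probability of H reaches 1 only if the likelihood ratio on the right is infinite, i.e. only if the evidence would be strictly impossible were H false. No finite pile of evidence does that.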

 

If things cannot get any worse and we accept that we do not have omniscience then we must conclude that in that situation some action is better than none. You can't refuse to evaluate the possible impact of one of your assumptions being false if you acknowledge that knowledge is limited, which means that taking actions that seem ridiculous, striving on against impossible odds, still makes sense in some situations, e.g. being in a concentration camp.

 

Probabilities are not overcome by other probabilities if those other probabilities only ever result in an outcome equivalent to the status quo. If things can't get any worse, white noise is irrelevant.

Taking a 1 in a quintillion chance of X doom is a good idea if the alternative is a 1 in a double quintillionbajillion chance. It's about a double bajillion times better, actually.

 

The reason calculus works isn't because 0.999... repeating equals 1, but because 0.999... repeating doesn't exist in the real world, where atoms are not infinitely divisible.

(Please don't make this about quantum physics. It would just complicate things even further, and I believe that amplitude streams aren't infinitely divisible.)

 

When there is no evidence that something happens, that does not mean there is a 0% probability. A 0% probability also requires infinite certainty; only a Time Lord could achieve it. It means that if it happened, all of spacetime would be ripped apart, existence would cease to have ever existed, and madness would spread across the face of the Earth as Cthulhu wreaked chaos throughout the land. Unpredictable things happen every day (not improbable ones, but ones for which evidence is literally unavailable to allow us to predict the outcome, or things that we cannot even conceptualize due to limited brainpower).

 

Interesting issues might occur when you apply known unknowns to the idea that things can't get any worse, but (without spending much time on it) I think this would only ever be relevant as a conditional probability, so the other would automatically outweigh, because one is a subset of the other and the subset is inherently less probable, since adding details makes things less likely. This is the part where my belief might be vulnerable; I'll think about this in detail later. I think I might have to get rid of my belief that concessions should be treated as terminal defense; I'm not yet sure how that would impact the overall discussion here.

 

Edit: Or, I could just choose to make concessions equivalent to C probability, where C is automatically a greater percentage than any other probability in the debate, and also less than 1.

That's what I'll do, there's no problem here then.


I thought you were asking for clarification, not so that you could make arguments.

I'm just trying to convince you otherwise because I think this is a bad model for debate. I was asking for clarification, but I disagree with what you proposed.

The known unknown argument goes conceded. You say uniqueness, but we can't even know that with 100% certainty. I am not 100% certain that Congress passed the healthcare bill, or that Obama is president, or that I exist, etc. 100% certainty requires infinite evidence, and that you would then never ever change your mind no matter what; that's the way probabilities and evidence work. It's literally impossible, based on the way human psychology operates, to develop a conviction so strong you would never change it (I think). Even if it were possible, that's not called rationality; that is called dogmatism, and it is bad.

This isn't a question of dogmatism; it's a question of fact. Things happened; even if the manner in which we describe them changes, that doesn't mean that things didn't happen.

 

Here are some things I know for certain are true:

-The following events occurred: the Holocaust, the Cold War, the American Revolution, Obama passed a stimulus, Obama signed START with Russia, Obama passed a health care bill.

-Anything we do now, in the year 2012, will not affect the fact that these events happened. They may have repercussions, but they will not change the fact that X years ago some event occurred.

 

Here's a hypothetical question for you: the 1AC gets up and reads a politics scenario as an advantage. Only problem: the scenario already happened. They are reading cards about START from last year and have really old uniqueness evidence. The 1NC gets up (just when you thought this debate couldn't get any dumber, lol, bear with me) and reads nothing but "START ALREADY HAPPENED" cards. How are you going to vote? Will you vote for an advantage that already happened?

If things cannot get any worse and we accept that we do not have omniscience then we must conclude that in that situation some action is better than none. You can't refuse to evaluate the possible impact of one of your assumptions being false if you acknowledge that knowledge is limited, which means that taking actions that seem ridiculous, striving on against impossible odds, still makes sense in some situations, e.g. being in a concentration camp.

There is a difference between doing something that has a LOW RISK of success and doing something that has NO RISK of success.

 

So, you brought up the concentration camp; let's use that as an example. Attempting to run away from a concentration camp has a LOW PROBABILITY of success, but the chance of getting away is not zero.

 

Rubbing feces on a wall to make a demon-contract with Cthulhu to rescue you from the camp has such a LOW PROBABILITY, it is zero.

 

How do we choose between these two impacts? Probability; relative probability. The probability of Cthulhu saving you is so low, it is also overcome by the unknown unknowns, things you don't know you don't know about. AKA: what if your demon-contract results in human extinction? These things cannot be quantified, but relatively it is safe to say that the probability of these events occurring is LOW.

 

A politics DA that already happened has a probability of ZERO; an advantage with no internal link to solving an impact also has a probability of ZERO. When I said low risk = no risk, I did not mean 1% = 0%; I meant that VERY low risk = no risk.

Probabilities are not overcome by other probabilities if those other probabilities only ever result in an outcome equivalent to the status quo. If things can't get any worse, white noise is irrelevant.

Taking a 1 in a quintillion chance of X doom is a good idea if the alternative is a 1 in a double quintillionbajillion chance. It's about a double bajillion times better, actually.

How do you know things can't get any worse? 1% RISK! Or in your case, .0001% risk! What if your action SPEEDS UP global warming and makes extinction happen FASTER?

 

The point is that voting for an aff whose plan text is "The United States federal government should eat sardines," followed by 8 minutes of "extinction inevitable" with no discussion of how it resolves those descriptions of the status quo, is nuts! The aff's internal link is the white noise we were just talking about.

The reason calculus works isn't because 0.999... repeating equals 1, but because 0.999... repeating doesn't exist in the real world, where atoms are not infinitely divisible.

I don't know what atoms have to do with statistics and calculating probability. Can you explain this one to me?

 

 

 

Just out of curiosity, how do you resolve these things:

-Counterplan links to the net benefit (let's say, .000000001% less than the plan, and no one makes any args about o/d)

-A 2AR that goes all in on "realism inevitable, alt doesn't spill over."

-A T debate in which the aff resolutely asserts that its heg advantage outweighs T because there's only a risk that voting aff LITERALLY increases American hegemony, and all the neg is going for is "loss of education outweighs" (this one, I think, gets to the heart of our discussion of your initial thought that existential risk comes first).


I don't think you're truly considering what I'm saying because nowhere in your post do you attempt to solve the problem of known unknowns or the other issues I talked about. You don't answer any of the arguments I'm making; you just attempt to confront me with scenarios where "common sense" might oppose my conclusions. I would prefer that you use arguments instead because I don't trust common sense with the fate of the human race; empirically, common sense does stupid things like racism. Folk wisdom is far from objective.

 

Much of your post, ex: "what if your demon contract results in human extinction" assumes that random action is always exactly as likely to make things worse as to make things better. But I believe that there are some cases, such as if there's a high probability of human extinction, random action could at its worst only manage to make things a little bit worse, and at its best could make things a bajillion times better. To address a different example of yours, I don't much care whether there's a nuclear war in 10 years or in 5. I am essentially solely concerned with avoiding a nuclear war altogether irrespective of the time it occurs.

 

Why do you believe that 100% certainty is possible, given the existence of known unknowns? How do you know with 100% certainty that Obama is the president?

Or is that not it, and your problem is just about my argument about white noise being irrelevant in some desperate situations where we absolutely need to try something?

 

Maybe if I mention that I believe unknown unknowns are always assumed to be balanced, while this is not the case with known unknowns, you would understand? I believe that.

 

You said:

 

When I said low risk = no risk, I did not mean 1% = 0%; I meant that VERY low risk = no risk.

 

I'm pretty sure that VERY low risk = almost zero. Any other conclusion seems inherently contradictory.

 

And, mathematically, .00000000000000000000001 is definitely not even almost zero, because

when compared to .0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001, it is actually very close to 1.
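To put concrete arithmetic on that (a quick check I'm adding, using roughly the two numbers above):

import math

a = 1e-23    # the first number above
b = 1e-103   # roughly the second number above (give or take a few zeros)

print(a / b)                               # ~1e+80: a is ~80 orders of magnitude larger than b
print(abs(math.log10(a)))                  # ~23: orders of magnitude separating a from 1
print(abs(math.log10(a) - math.log10(b)))  # ~80: orders of magnitude separating a from b

On a log scale, a is far closer to 1 than it is to b.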

 

Maybe the problem is just that you have the common yet totally bizarre idea that small numbers automatically don't matter, just because they are small. This is false. For example, in high-level physics problems, or when dealing with existential risk, small numbers can be very important. If scientists didn't use exponential notation and just rounded instead, the world would be a much scarier place. There is also no threshold for what we should consider small.

 

I also think it's relevant that even if the probability was Small, if the impact was Big then we might need special exceptions.

I hope that you, and DML, and onlookers, will read this: http://lesswrong.com...vs_dust_specks/ (To clarify, the author later disclosed that he preferred torture by a lot.)

It clarifies the importance of Big impacts, and although it doesn't do so in terms of probabilities it is still very relevant to what we're saying here.

 

I need you to tell me which of these problems is the one you're having with what I'm saying. I have a feeling that it's mostly just you not trying to think beyond common sense, but rather trying to justify common sense. That would be a mistake, reason should control what you consider to be sensible, and it should not be the other way around. But if I'm wrong, and you aren't relying heavily on common sense here, then please specify which of these things I identified it is that you think is inaccurate or logically flawed, and give the argument which shows that I'm inaccurate. Don't focus on examples, because those always engage the "common sense" meter, which I distrust.


I don't think you're truly considering what I'm saying because nowhere in your post do you attempt to solve the problem of known unknowns or the other issues I talked about. You don't answer any of the arguments I'm making; you just attempt to confront me with scenarios where "common sense" might oppose my conclusions. I would prefer that you use arguments instead because I don't trust common sense with the fate of the human race; empirically, common sense does stupid things like racism. Folk wisdom is far from objective.

Please do not downplay my argument that we should focus on more probable impacts as "folk wisdom" or "common sense" unless you are prepared to defend your analysis as apocalypticism.

 

The reason I present scenarios is because probability calculation is entirely worthless if it isn't applied to the real world. That is because there is no abstract math equation to measure risk (take the derivative of the function of probability × magnitude over the integral of an approaching train's timeframe).

 

Here are some historical examples of why your impact calculus is total bullshit:

-The Red Scare (the probability we were infiltrated by communists was VERY LOW, but WHAT IF WE WERE? Better go execute some Americans and deport them)

-The Salem witch trials (the probability there were actually witches was VERY LOW, but what if there were!?!? We would all go to hell and suffer eternal damnation!)

-Racial profiling (this random black person COULD BE A CRIMINAL, better arrest him before he commits a crime).

 

Known unknowns -are- white noise. We know an action CAN result in things, but there's also a CHANCE it can result in other things. My point is that when the PROBABILITY of an action resulting in positive change is LOW, it is OVERWHELMED by the probability it can result in negative change.

 

 

Much of your post, ex: "what if your demon contract results in human extinction" assumes that random action is always exactly as likely to make things worse as to make things better. But I believe that there are some cases, such as if there's a high probability of human extinction, random action could at its worst only manage to make things a little bit worse, and at its best could make things a bajillion times better. To address a different example of yours, I don't much care whether there's a nuclear war in 10 years or in 5. I am essentially solely concerned with avoiding a nuclear war altogether irrespective of the time it occurs.

Nuclear war in 5 years is twice as bad as nuclear war in 10 years. If you truly believe what you are saying, you should spend your ENTIRE LIFE researching how to become immortal. The probability you succeed is very low (sorry champ, as far as I know everyone dies), but the MAGNITUDE IF YOU SUCCEED IS VERY HIGH (immortality is awesome).

 

Why do you believe that 100% certainty is possible, given the existence of known unknowns? How do you know with 100% certainty that Obama is the president?

Obama being president is not a KNOWN UNKNOWN; it is a known known.

 

Do you know what known unknowns are?

 

Here are some examples:

1.) Terrorism: We KNOW terrorists are out there, but we don't KNOW THEIR PLANS. The product of terrorists with their plans is terrorism; thus terrorism is a KNOWN UNKNOWN.

2.) A murder case: A murder is the product of a killing with a murderer. We KNOW there is a murder (there's a dead body), but we DON'T KNOW WHO THE KILLER IS (the unknown). An unsolved murder case is a known unknown. We know that we don't know who the murderer is.

 

I am so sure Barack Obama is the president, I'll bet you $200 against your $20 that I'm right. This is the ultimate example of our argument. I am RELATIVELY CERTAIN (I'd call it 100% probability) Obama is the president, and I'm shooting for a low-magnitude gain ($20), whereas your probability is low if NOT zero, and your gain is large ($200 is high magnitude).
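Spelled out as expected values (my arithmetic, using the stakes above and my claimed near-certainty):

\[
EV_{\text{me}} \approx 1.00 \times \$20 \,-\, 0.00 \times \$200 = \$20,
\qquad
EV_{\text{you}} \approx 0.00 \times \$200 \,-\, 1.00 \times \$20 = -\$20
\]

High probability with low magnitude beats low probability with high magnitude here.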

 

 

 

Obama being president is a KNOWN KNOWN. Just like, for the purpose of a debate round, a conceded argument is.

Or is that not it, and your problem is just about my argument about white noise being irrelevant in some desperate situations where we absolutely need to try something?

In a hopeless scenario, you shouldn't just ACT; you should choose an action that has a probability of success over actions that have lower probabilities. This is why, if I saw a comet heading toward Earth, I wouldn't start banging my head on the concrete in the hope that it would save the planet, as the PROBABILITY it does anything is outweighed by the HIGHER PROBABILITY that I get a concussion.


Maybe the problem is just that you have the common yet totally bizarre idea that small numbers automatically don't matter, just because they are small. This is false. For example, in high-level physics problems, or when dealing with existential risk, small numbers can be very important. If scientists didn't use exponential notation and just rounded instead, the world would be a much scarier place. There is also no threshold for what we should consider small.

First off, good policy makers focus on PROBABILITY. That is why we don't spend all our money on asteroid detection when there are so many more probable threats of lower magnitude to deal with.

 

Second off, yeah, you are right, it is arbitrary, but that's why we are DEBATING ABOUT IT. That's why it needs to be applied to a specific scenario rather than talked about in generalities.

 

 

I also think it's relevant that even if the probability was Small, if the impact was Big then we might need special exceptions.

I hope that you, and DML, and onlookers, will read this: http://lesswrong.com...vs_dust_specks/ (To clarify, the author later disclosed that he preferred torture by a lot.)

It clarifies the importance of Big impacts, and although it doesn't do so in terms of probabilities it is still very relevant to what we're saying here.

First off, this article is not about probability theory. He's assuming that the likelihood of all events is 100% (choose what is worse: 3 gajillion people experiencing a dust speck, or 1 person being violently tortured); rather, this argument is that CONCENTRATION is worse than dispersion.

 

Second off, this argument actually concedes that structural violence is bad. E.g.: in capitalist societies we like having things like CARS; this increases our joy only by a small amount, but it affects MILLIONS OF AMERICANS. The problem is that the acquisition of that technology (oil, etc.) results in a lot of TERRIBLE THINGS happening to a select few people. This is the reason structural violence is bad: small gains for everyone else because of TERRIBLE things that happen to a select few.

 

Here's a better scenario for our discussion: you can make a choice.

Suppose you hear a RUMOR (aka uncertain probability) that a madman is going to KILL SOMEONE (high magnitude) unless you BRUTALLY BEAT (moderate magnitude) a bunch of impoverished kids (the probability is certain, since you are doing it).

 

Would you beat those kids?

I need you to tell me which of these problems is the one you're having with what I'm saying. I have a feeling that it's mostly just you not trying to think beyond common sense, but rather trying to justify common sense. That would be a mistake, reason should control what you consider to be sensible, and it should not be the other way around. But if I'm wrong, and you aren't relying heavily on common sense here, then please specify which of these things I identified it is that you think is inaccurate or logically flawed, and give the argument which shows that I'm inaccurate. Don't focus on examples, because those always engage the "common sense" meter, which I distrust.

I can list them:

1.) I think that unless an impact is probable, it shouldn't be considered regardless of the magnitude when there are so many other PRESENT IMPACTS worth considering.

2.) I think that very low risk should be considered NO RISK because it is outweighed by the WHITE NOISE of the other possible risks of doing an action.

3.) I believe in terminal defense.


I might have been using the wrong terminology earlier when I talked about known unknowns. I view unknown unknowns as a subset of known unknowns because surprise seems inevitable to me, and I'm 99% certain that I will learn something new in the future. But thinking about it and discussing it that way is probably not standard.

 

I don't believe you are really addressing my arguments because you're not demonstrating that 100% certainty is possible. I think this conversation has gone as far as it can. But that's okay.

 

*shakes hand*

 

I think the benefits of continuing to attempt to persuade you are outweighed by the costs (I want to watch TV). So after this then I'm done in this thread.

 

I feel obligated to address these before I stop the discussion though, because I feel offended that you accuse me of justifying these things.

 

Here are some historical examples of why your impact calculus is total bullshit:

-The Red Scare (the probability we were infiltrated by communists was VERY LOW, but WHAT IF WE WERE? Better go execute some Americans and deport them)

-The Salem witch trials (the probability there were actually witches was VERY LOW, but what if there were!?!? We would all go to hell and suffer eternal damnation!)

-Racial profiling (this random black person COULD BE A CRIMINAL, better arrest him before he commits a crime).

 

These are all balanced situations. I don't contend that all low probabilities are worth considering, usually because there are similar but opposite low probabilities that discount them. This is what you refer to as white noise. I argued above that in some situations, things cannot get any worse, and so the impact of hypothetical white noise is asymmetrically distributed. You responded by saying that "nuclear war in 5 years is twice as bad as nuclear war in 10 years". That misses the issue entirely, because what I was contending is that an extremely low possibility of no nuclear war whatsoever outweighed a near certainty of delaying extinction by five years. It doesn't matter much to me when nuclear war happens, avoiding it at all is what I consider key. White noise usually balances things out, but to assume that it always does is naive.

 

You can give examples where white noise balances out. But in the case of a high probability of inevitable nuclear war, I don't think things can get much worse.

 

Here are some specific examples of costs you're not considering that would balance or skew the above scenarios in my favor:

 

1. You're not considering the costs of executing Americans - it's more probable that such a policy would give the Reds an advantage.

2. You're not considering the just-as-probable possibility that there's a Professor God who doesn't like people killing "witches" without evidence.

3. This has other costs. The reason we want to stop criminals is quality of life. Prison is 99% likely to destroy the QOL of 1 person; 1 person is .001% likely to destroy the QOL of others.
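Spelling out the expected-value arithmetic behind that (my numbers, with a hypothetical N for how many people's QOL the freed person might destroy):

\[
E[\text{harm}_{\text{imprison}}] = 0.99 \times 1 = 0.99,
\qquad
E[\text{harm}_{\text{release}}] = 0.00001 \times N
\]

Imprisonment only comes out ahead if N > 0.99 / 0.00001 = 99,000 people.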

 

Consider also that if you decide in one instance to lock up a person with a .001% risk, you would need to implement a similar policy in all similar cases to be logically consistent.

 

Also, re: immortality, I'm considering cryonics at the moment. Nick Bostrom is a big fan of similar policies, I believe.

If I get freezedededededededededed, future tech might be able to extend my life a lot. That's as close as I know how to get (for now).


Hey, Chaos. Curious what you think about the opportunity costs of these actions. Taking random actions, in a situation where you are acting based on the chance that those actions might somehow make things better, naturally forecloses your opportunity to take actions that could be infinitely more valuable. So why take RANDOM action if we don't KNOW that it will make things better?


This thread is terrible. Be ashamed of yourselves.


Hey, Chaos. Curious what you think about the opportunity costs of these actions. Taking random actions, in a situation where you are acting based on the chance that those actions might somehow make things better, naturally forecloses your opportunity to take actions that could be infinitely more valuable. So why take RANDOM action if we don't KNOW that it will make things better?

 

All actions might have these super duper opportunity costs that we don't know about. If there is no information known about the hypothetical opportunity cost then there is no reason the possibility of one should alter your decisions. Any specific yet random action that is chosen is just as likely as any other specific yet random action to be the super duper option. You should just guess and choose one.


[image: zoidberg_bad.jpg]

 

But seriously, though, this got so derailed from the OP's problem. If you want to start an argument over impact calculus and the forms it can take, move it to another thread instead of screwing someone else over.

