Solax10

What was your first K?


This is amazing, I knew that there were a lot of K's but it's awesome seeing all the names and explanations of them! Thanks again to everyone!


For me, it depends on what's meant by "your". A few days before my very first tournament, I had broken my arm, and I was on oxycodone. I had missed almost all of the debate meetings, and I knew functionally nothing. My first round, I hit a race team. It was not a good tournament for me.

 

The first K that I ever ran was anthropocentrism/speciesism.

 

It was about a year later that I learned about the security K and others, and I've since gone on to make my own custom Ks (distributism as a technical variant on the cap K and the process K).


My first K was imperialism. Super easy to understand, yet teams we hit didn't know what to say about it. I'm pretty sure we haven't lost on it yet, but we also don't read it much anymore. 


Sorry to add on to this, but could you give an example of the complexity K?

My favorite example (correct me if I'm wrong) is that it basically says we need to look at things the way a DirecTV commercial does: if we do this, it'll lead to this, then to this, then to that, and so on. The K says the aff ignores most of the interconnections needed to implement a truly effective policy, that linear predictions themselves are bad, and that we should instead look specifically at every link in the chain that's needed for the policy to actually solve its internal links and impacts.

Edited by aram


You're welcome for that example, Alec. Lol, it says that every time we add another link to our chain of predictions, the chance of the event actually occurring gets smaller. I don't have DirecTV, but that doesn't necessarily mean I'll get punched over a can of soup.


I can't imagine this winning too many rounds. What is non-linear policymaking? This all seems pretty defensive to me.

It is defense, but it's very good defense, and it can complement other arguments, such as a security K or an environmental apocalypticism K, very well. Anything that can have a "political alarmism" link.

 

A very easy way to quantify the complexity argument is to ask them how many internal links are in any given advantage and how much faith they put into each link (the cards usually require even more events to happen, so just counting cards isn't enough if you want to make this better). Say there are 7 internal links and you grant a generous 75% probability to each one; you can literally do the math in round: .75^7 ≈ .1335, or just over a 13% probability that the impact happens at all. Qualify that with analysis of the impact evidence (there's usually very little risk that some international relations mistake or failure to act will actually enrage another nation to the point of literal conflict, and far less risk that it'll spiral into nuclear conflict via miscalc or whatever). If they think they're smart and say they're 100% certain of their advantage scenario, you can still cast doubt on it (and use numbers much friendlier to you for the calculation, like "let's say there's a 50% chance each step happens," which makes it .5^7 ≈ 0.78% risk of an impact), or make an even better security K link about knowledge/power and how "absolute scientific faith" props up a militaristic enframing of international relations problems, along the lines of a Burke K or Foucauldian IR.
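If you want to sanity-check the in-round math, the compounding above can be sketched in a few lines (a hypothetical illustration; the function name and the 75%/50% figures are just the example numbers from this post, not from any card):

```python
# Sketch of the "multiply out the internal links" math described above.
# Assumes each internal link is an independent event, which is the
# simplification the in-round calculation relies on.

def impact_risk(link_probs):
    """Multiply per-link probabilities to get the overall risk of the impact."""
    risk = 1.0
    for p in link_probs:
        risk *= p
    return risk

# 7 internal links, each granted a generous 75% chance:
print(round(impact_risk([0.75] * 7), 4))  # 0.1335, i.e. just over 13%

# The same chain at 50% per link:
print(round(impact_risk([0.5] * 7), 4))   # 0.0078, i.e. under 1%
```

The numbers shrink fast because every extra internal link multiplies the total risk down, which is the whole point of the argument.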

 

Here are my camp notes on the predictions K if you'd like to read them.



Yeah, but what's the impact to this? Like, I have a difficult time imagining the alternative. What's a nonlinear way of approaching predictions / IR?

 


I guess, but why not read a different K and then just 1-2 cards about risk? What's the unique advantage of having this as a standalone K (same questions as above, I guess)? I mean, what if they say something like, 'Yeah, political science is not physics, but it's the best we can do with our limited set of information. Nuclear war of course isn't 100% likely, and internal links are infinitely regressive; any scenario can be broken down into more and more internal links if you just think about it enough. Absent another way of understanding politics / IR, this is the best we can do, precisely because the impact IS unknown, but it's unethical not to act.'



also it seems like the neg team makes predictions too

like they're predicting that bad predictions lead to things like the Iraq war, right

lol why would you read a K that you can link to as well

 

seems like the Boggs card should beat this K. complexity sounds like anti-politics to me



it's not a question of whether they make predictions, it's how they make them

predictions are inevitable (to an extent), but if you make linear predictions then you leave yourself open to policy failure (or whatever impact you run)

from the books I have read, complexity theory just takes into account the many different possibilities of a given action when making its predictions

So the K doesn't link to itself

 


 

 


Completely untrue. Owen (or another prag card) would definitely be a good answer, since the K might result in policy paralysis when every prediction is "too linear" for the perfectionists. However, it's not really a matter of refusing to engage the state, and it's not revolutionary in any sense; it's just pointing out why the specific epistemology of the plan is bad and why the plan sucks on account of it.



Yeah, I get that part, but why is it bad to identify the strongest links in a complex web? Like, a tsunami in Japan could be caused by a butterfly's wings in New York, but my money is on global warming.



I think you're misinterpreting the argument. Refer to the notes I posted above. The argument isn't saying that link chains are always bad; it's saying that the way the aff's impact scenarios are represented is a very poor way of producing scholarship, one that lends itself to terrible mini-max impact debates (the race to the most nuclear war, AIDS/disease outbreak, or biodiversity scenarios).

 

In a nutshell: more internal links mean less risk of an impact; narrative bias muddies the judge's ability to make an objective decision; the actual risk of the impact is extremely low (refer to the math I did above as an example); and the 1% Doctrine is (a) a bad model for politics because it results in political paralysis (why step out of your house if there's a 0.78% chance a nuclear war would result?) and (b) has historically legitimated crisis-based politics that obscure the political motivations behind them (the example is the Iraq War and yellowcake: the "evidence" was completely fabricated, but the outcome was positive for American geopolitics, or so goes the argument). That's why the complexity "K" is something you usually read on-case.

 

Like... the complexity argument doesn't link to, say, "remove the embargo because it's causing poverty and starvation in Cuba." That's just a fact, with no internal link scenario; the embargo simply does cause mass death. The complexity argument does link to long chains, say economy or relations impacts, with increasingly small risks of events running from perception, to conflict, to nuclear warfare, to extinction (and many more, depending on what the initial links are).



No offense, but that's a worse form of scholarship that simply won't fly in any serious debate. I know people like to say poverty = structural and leave it at that, but saying "the embargo causes mass deaths, and that's a fact" is a nonstarter. Couldn't poverty also be a result of Castro's policies? Maybe Cuba doesn't have the resources of other Latin American countries and can't generate substantial wealth in a neoliberal economy (note that this doesn't judge whether neoliberalism is good or bad)? Maybe Cuba just bet on the wrong horse and lost a lot of money? You're assuming a reverse-causal argument for solvency that just doesn't seem to hold here.



We're derailing the conversation, and you're not responding to the warrants of the arguments I'm laying out. I recommend you read the complexity kritik that UTNIF made last year, read the notes I linked above, and PM me or start a new thread if you want to discuss the complexity argument further.


The first K I ever hit was your run-of-the-mill cap K.

The first K I ever ran was an imperialism K that basically said the aff perpetuates imperialism.

 

Another K I've hit is the psychoanalysis K. It's kind of similar to the security K, just a hell of a lot more complicated.


Stirner/egoism, Nietzsche, Junger, Schopenhauer, Human Rights Bad

 

In chronological order.

