WorthyWorthington

How do I argue structural violence as an impact


In my transition from novice to varsity, I've been given a lot of new arguments with a structural violence impact, and I was wondering what I should do to make this impact outweigh impacts such as nuclear war.


As a person who only read soft left impacts/arguments last year (i.e., defend a plan to solve a "small" systemic impact), I think making structural violence arguments work depends on two variables:

 

A) Logic/common sense: Most internal link chains that claim to solve war or other flash-in-the-pan impacts are really unlikely. Not only are those outcomes unlikely in general, but the advantage usually rests on a single internal link chain, while phenomena such as war and environmental destruction probably depend on thousands if not tens of thousands of interactions, so the likelihood that one relatively small interaction causes this large a reaction is extremely small. Most judges buy this line of reasoning, but they prefer that you not only state the thesis but also point out the flaws in the link chain (if you don't point out flaws, they will assume the internal link chain is true). So when the other team reads an advantage, point out logical problems such as alternative causes to their internal links, and read internal link defense that says their impacts are rare; when you go for these arguments, go for the weakest part of the link chain (for instance, not surveying trans people on government paperwork is not the nail in the coffin for counterterrorism strategy).

 

B) Framing: Not only do you have to win that their impact is unlikely, you also have to win that its unlikeliness is sufficient reason to reject the other team's argument. You should argue that the magnitude of an impact does not matter if it does not have a high chance of occurring. This argument comes in two main forms:

  

        a) Positive peace argument -- Focusing on large flashpoints of violence obscures everyday "little genocides." This is bad because these individual violences get swept under the rug, which is an unethical form of knowledge production, and they also spill up and likely lead to large flashpoints of violence, because violence rarely happens out of nowhere. Our xenophobia toward Chinese people may be a cause of war with China, and our treatment of Muslims may fuel Islamic extremism in other countries. That might mean the aff's form of calculation is a prerequisite to addressing truly existential risks.

        b) Complexity/kritiks of impacts -- Pretty straightforward: their impacts are empirically denied and probably too unlikely to happen. We can't understand every element of IR, and every additional internal link makes the chain less likely (see the quick sketch below).
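
To put rough numbers on how quickly a chain compounds (the probabilities below are made up purely for illustration), here is a minimal sketch:

```python
# Illustrative only: if an advantage needs every step of its internal link
# chain to be true, the overall probability is the product of the steps
# (generously assuming the steps are independent).

link_probabilities = [0.8, 0.7, 0.6, 0.5, 0.4]  # hypothetical chain of five links

overall = 1.0
for p in link_probabilities:
    overall *= p

print(f"Chance the whole chain holds: {overall:.3f}")  # ~0.067, i.e. under 7%
```

Even when each individual link looks plausible, the full chain ends up far less likely than any single step, which is the intuition behind going after the weakest link.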

Edited by Alwaysgoforinherency


You might want to be more specific about what you mean by structural violence. Having all of racism as an impact is different than feeding one soup kitchen's worth of people.


Question: is it illegitimate meta-gaming to argue that since most extinction predictions made in debate end up being wrong, the opponent's arguments should be ignored? I've contemplated that argument in the past, but I keep changing my mind on whether or not it's a valid point to make, and how judges should evaluate it.

Edited by Scarf


> Question: is it illegitimate meta-gaming to argue that since most extinction predictions made in debate end up being wrong, the opponent's arguments should be ignored? I've contemplated that argument in the past, but I keep changing my mind on whether or not it's a valid point to make, and how judges should evaluate it.

That's a logical fallacy and a misunderstanding of how probability works.

 

There are plenty of good framing cards about how focusing on large-magnitude events means we exclude and ignore structural violence (Scheper-Hughes is a good example). Just win some impact defense and point out why it's more important to focus on probability than magnitude; you can check cards like that (which should be floating around in camp impact files) to see how to articulate those kinds of arguments. Try framing it as a DA to / kritik of their impact calculus and show how it can internal link turn their scenarios.

 

For instance, ignoring the structural effects of poverty tends to correlate with increased domestic terrorism, since unemployed young adults are more easily recruited. Not the best example, but you should be able to think of something relevant to the aff/DA you're debating.



 

> Some shit about why induction is wrong that Hume would have loved

 

Yeah I'll bite and say that the problem of induction isn't a problem at all in CX and that you can easily gut-check this. Some philosophy professors might shit a brick, but it's pretty easy to have the ethos of "the adult in the room" by calling out how unlikely the impacts of the aff are. Pretty sure that if we're going to use pure logic in debate rounds, then slippery slopes and strawpersons would be called every few words in a duhb8 round.

 

 

tho I agree with the rest of the post. 

Edited by RainSilves


> Question: is it illegitimate meta-gaming to argue that since most extinction predictions made in debate end up being wrong, the opponent's arguments should be ignored? I've contemplated that argument in the past, but I keep changing my mind on whether or not it's a valid point to make, and how judges should evaluate it.

I think if it is framed as a DA to the type of pedagogy a team endorses (i.e., a focus on large, contrived impacts makes debaters focus on those scenarios instead of real-world problems), it is a legitimate position to argue, as long as you still explicitly answer their arguments. If debate is about argumentation, you should only evaluate arguments made in the round.


> Yeah I'll bite and say that the problem of induction isn't a problem at all in CX and that you can easily gut-check this. Some philosophy professors might shit a brick, but it's pretty easy to have the ethos of "the adult in the room" by calling out how unlikely the impacts of the aff are. Pretty sure that if we're going to use pure logic in debate rounds, then slippery slopes and strawpersons would be called every few words in a duhb8 round.
>
> tho I agree with the rest of the post.

It's not the problem of induction, it's anthropic bias. We can't observe worlds where we've gone extinct, so we tend to underestimate the risk of existential threats; Bostrom writes some decent stuff on this.


> It's not the problem of induction, it's anthropic bias. We can't observe worlds where we've gone extinct, so we tend to underestimate the risk of existential threats; Bostrom writes some decent stuff on this.

No, my criticism was directed at the argument 'people have been claiming X for years, X hasn't happened, so we should discount the probability of X.'

 

Regardless of the true probability of extinction or our biases about it, that argument form is logically flawed.


> That's a logical fallacy and a misunderstanding of how probability works.

 

What fallacy is it? Pretty sure it's okay. If Chicken Little says each day that the sky will fall, and it never does, it's rational to start ignoring Chicken Little. (Clarifying exception: if he says the sky will fall in 10 years, you're not allowed to start ignoring him tomorrow.) You can use any reference classes you like when you're trying to evaluate past data, as long as those reference classes are tightly coupled to causal reality. And doing research on the internet or through books is a causal process, so knowledge production methods can indeed be evaluated empirically.
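
To make the Chicken Little point concrete, here is a minimal sketch of the updating intuition; the prior and likelihoods are made-up numbers, purely illustrative:

```python
# Illustrative Bayesian update: how repeated failed doom predictions should
# lower our credence that the forecaster's claims are reliable.
# All numbers are hypothetical.

def credence_after_failures(prior_reliable: float, n_failed: int,
                            p_fail_if_reliable: float = 0.2,
                            p_fail_if_unreliable: float = 0.99) -> float:
    """Posterior P(forecaster is reliable) after n_failed predictions failed."""
    p = prior_reliable
    for _ in range(n_failed):
        numerator = p * p_fail_if_reliable
        denominator = numerator + (1 - p) * p_fail_if_unreliable
        p = numerator / denominator
    return p

for n in (0, 1, 2, 3):
    print(n, round(credence_after_failures(0.5, n), 3))
# Credence falls from 0.5 to roughly 0.17, 0.04, and 0.008.
```

The clarifying exception above still applies: the update only counts predictions whose deadlines have actually passed.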

 

(Or are you indeed talking about the problem of induction here? Because if so: neg wins on presumption.)

 

In what other way could the concept of probability exist? By your argument, I think we would have to exclude arguments about whether or not peer review boosts a study's credibility, or whether Fox News tends to produce bad content, and other similar commonplace ideas. That seems like a good sign your reasoning's gone wrong.

 

> It's not the problem of induction, it's anthropic bias. We can't observe worlds where we've gone extinct, so we tend to underestimate the risk of existential threats; Bostrom writes some decent stuff on this.

 

This is possibly a good answer, but anthropic reasoning confuses me. It always feels like a cheating non-answer, like it can't actually explain anything, only rationalize. Maybe this is just my bias, though. I should probably learn more about anthropic reasoning eventually.

 

Do you honestly believe that most extinction chain arguments made by debaters are essentially true, but we got lucky over and over again? If that's your belief, it seems like you should live your life like you'll be dead this time next year. But I'm skeptical you actually have such a short-term mindset. Although obviously such inconsistency isn't an argument per se against your ideas.

 

I feel like this argument requires something flawed, sort of like denying object permanence. If reality leads us to outcome X, and reality is causal, and we predicted Y beforehand, then we must not have had complete, correct information. You have to deny that reality is causal and that probability is in the mind, or else assert that there actually exist multiple different realities, if you want to say that the existence of anthropic arguments means the initial predictions were "correct" in some sense and that the past information was not flawed.

 

> I think if it is framed as a DA to the type of pedagogy a team endorses (i.e., a focus on large, contrived impacts makes debaters focus on those scenarios instead of real-world problems), it is a legitimate position to argue, as long as you still explicitly answer their arguments. If debate is about argumentation, you should only evaluate arguments made in the round.

 

I find this position desirable, but can't really find any ways to justify it that aren't arbitrary. We're allowed to point out if our opponent makes a fallacious argument, so we should also be allowed to point out if our opponent makes an argument that tends to be incorrect, or uses a method of researching arguments that tends to result in failed predictions.

Edited by Scarf


> No, my criticism was directed at the argument 'people have been claiming X for years, X hasn't happened, so we should discount the probability of X.'
>
> Regardless of the true probability of extinction or our biases about it, that argument form is logically flawed.

 

Your argument is literally the same thing as Hume's problem of induction. My argument is that you can ignore Hume since he's only right when thinking about formal logic and not the real world. 


You beat the shit out of your partner and scream "YOU SEE THIS SHIT JUDGE, THIS HAPPENS EVERY DAY AND IT'S ALL YOUR FAULT"


