DiamondLouisXIV

Can you help me develop a strat against the narrative


So I'm trying to wrap my head around narratives. I went against one for the first time yesterday but have heard debates with them several times. I wanted some suggestions on strats and specific arguments (links, etc.) that would be good. Attached below is a former version of the aff I went against, which is no longer in use.

 

Here were my thoughts. They are running their argument without the use of any fiat mechanism, which means they claim that everything they say (stopping genocide, etc.) will really happen. They do all of this just by talking about the problem and changing our mindsets, which somehow scales up and causes a reduction in surveillance.

 

My last strat was this: a border kritik (your plan text says 'USFG', which is a recognition of its existence and bites the K), FW (hey bro, you should actually do policy debate), methodology aka case (you have literally no solvency because you don't prove that this discussion leaves the room, and you have the BOP, so...), and then a BS CP (there is no reason your narrative justifies the ballot; my partner and I advocate a counter-heg narrative, except we don't recognize the existence of the United States or any other border, which is a NB to the CP)... not in that order.

 

I realize the contradiction between the CP and reading theory blocks that say narratives are bad, but they never argued it was abusive, and it's necessary to demonstrate that there is no unique reason to affirm. Plus, I said it only stands if theory fails.

 

I would love to hear some strat/argument suggestions on this because this ish blew my mind with how much it doesn't make sense.

bad.docx


I read a narrative on the Adolescents topic for LD, and I did similar stuff. The lit is there in support of the things they say, so it's not as silly as you make it out to be.

 

The main three strats I was scared of were a decadence K, a Tuck and Yang kritik of narratives of pain, and assorted Baudrillard. None of these were particularly intense or had amazing link stories, but they directly refute and turn the case if mishandled. A good team will have FW prepped to hell with turns, etc., so maybe don't go for that.


Narrative affs are BS.

 

1. 8 minutes of harms does not justify a ballot

2. No really, discourse doesn't solve anything.  Otherwise we'd have 'solved' "capitalism" like a decade ago.

3. Only actual solutions solve actual problems.  The refusal to engage in problem solving only perpetuates the problem.

4. Fundamentally abusive. No ground for negatives to compete with aff non-advocacy, because absolutely anything is compatible with 'talking'.

5. This precludes any discussion of real solutions from happening for strategic reasons (no one runs a CP when they know they'll lose the perm, for example), and thus is actually worse than the world where the narrative never happened.  

6. Encourages magical thinking about government. People who think merely ranting about an issue is productive fail at producing real change, and worse, they insist that policy makers 'do something', but never bother to investigate whether that 'something' is effective at all, which leads to serial policy failures and ever-expanding bureaucracy to oversee worthless programs. (Example: Head Start - no proven impact on educational outcomes, yet we're funding and administering it still.) It also creates the attitude that passing laws against something solves the problem legislated against. This is the kind of thinking that created and then exacerbated (repeatedly) the war on drugs, among dozens of other examples.

People need to stop running from debates. Just because they have a block to T doesn't mean that they're going to win, that framework isn't the better option, or that their answers aren't awful. Plot twist: most teams' answers to T/framework suck. Case in point, I recently judged a round where the neg went 1 off (FW) and the aff read 8 minutes of their A2 framework block without making a single argument. Even when teams do have something responsive prepared, that doesn't make it a good argument. No, seriously, what's the best answer that a (generic) narratives aff can make to framework? Probably something along the lines of 'demanding action means we can't focus on narratives' + 'narratives good.' This notion is fundamentally silly because you can obviously still combine a narrative with a call to arms. Is it really that hard to say, after 8 minutes of complaining about how bad surveillance is, that it should be reduced? Come on now.

 

Remember that every "impact turn" to T/FW needs a link, an internal link, and an impact. Teams are really, really bad at actually getting that link part down. It might actually be bad if we never, ever talked about personal experiences (which is the context their evidence is in), but that doesn't link to T.

 

Furthermore, don't forget that you get to leverage the topical version of the aff as an advantage CP. This is a woefully underused tool, because in most cases you can subsume a good portion of the aff's offense, but they can rarely access any of yours. The impact calculus you should be thinking about is the same as any DA+CP debate: does the risk of the solvency deficit outweigh the risk of the net benefit? Probably not.

I think your criticism is more of a call for better debates, which I think is a good one, but not one that's exclusionary to narratives. Everything you said is true, which emphasizes the point that it comes down to this: whichever team debates better wins. If that means the aff really does a good job turning FW, then they have a better shot at getting the ballot, and if they do a bad job, they probably won't.

Realistically, my response hinges on people being good at debate, so I think it's legitimate.

Probably 80% of my post was about why that's not really a threatening strategy. 


RE: the Head Start example in point 6 above:

What's the evidence that indicates that Head Start doesn't work? You can't refute Head Start's effectiveness without evidence. "Head Start doesn't work... because I don't have evidence either way" isn't very compelling.

 

Head Start is the opposite of magical thinking. There is lots of peer-reviewed science about the effectiveness of Head Start.

 

And here's just the tip of the iceberg of the evidence:

 

"Head Start generates a Return On Investment (ROI) that could make hedge fund managers envious. For every $1 invested in Head Start, America reaps a ROI ranging from $7 to $9." [1]

"James Heckman, a Nobel Laureate in Economics at the University of Chicago, recommended to the National Commission on Fiscal Responsibility and Budget Reform, ..."

Sorry, I would have included more, but the text formatting was a pain to reformat.
Edited by nathan_debate


The easiest answer to a narrative aff is to PIC out of the narrative and then read narratives bad. Like, if the narrative is about homophobia, you can say "PIC: endorse queer politics but reject the use of narratives." That prevents them from weighing their LGBT offense and lets you hone in on narratives bad. Then just bury them under a dozen narratives-bad turns (Wendy Brown writes ten trillion of them -- a lot focus on some of the criticisms Squirreloid levies, but in a way that fairly credits narratives instead of outright dismissing them).


 

RE: the Head Start evidence cited above:

Let's look at actual data, instead of just wishful thinking.  Because your source has no data at all, just some theory about how investing early leads to big dividends later.  The only thing approaching real data is the graph from a 2007 presentation, and it's not clear what data that's based on at all.

 

And while I have problems with their theory (it assumes that 0-3 year olds aren't already massively engaged in 'learning activities' of some sort, just through perception and experimentation - I can only assume the authors don't have children), those objections are mostly irrelevant; what we need is actual data.

 

Look here: http://www.acf.hhs.gov/sites/default/files/opre/head_start_report.pdf

 

This is the government's own assessment of the effectiveness of Head Start from 2012. I understand it includes double-blind trials, but I have not gone over the methodology with a fine-tooth comb.

 

The important pages to look at are xxiii and xxiv. These are the tables of impacts from participating in Head Start, with follow-ups at later grades.

 

Statistics Comments:

They used a significance threshold of p<.10, which is way too 'easy' to be significant on. Especially when you realize they've got over 60 variables they're looking at and 4 columns - roughly 240 tests - so you'd expect *24* significant results *by chance alone* if they actually measured all of those. (They didn't, and I haven't counted up how many they did measure, but it's enough that some of those "significant" results are certainly errors.)

 

p<.10 is also a weaker threshold than the accepted standard of significance for even a single measured variable (the accepted standard is p<0.05). The only reason to use such a weak standard is that they were desperate to show *some* significant result, so they had to relax their standards. And with that many tests, they should be reducing their effective p threshold even further.
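To put rough numbers on the multiple-comparisons point, here's a quick simulation sketch. The ~240 test count is just an assumption pulled from the "60 variables and 4 columns" estimate above, not the report's actual figure:

```python
import numpy as np

# If ~240 independent tests are run on data with NO true effects,
# how many come back "significant" at p < 0.10 vs. p < 0.05?
rng = np.random.default_rng(0)

n_tests = 240           # assumed: ~60 variables x 4 columns
n_simulations = 10_000

# Under the null hypothesis, p-values are uniform on [0, 1].
p_values = rng.uniform(size=(n_simulations, n_tests))

false_pos_10 = (p_values < 0.10).sum(axis=1).mean()
false_pos_05 = (p_values < 0.05).sum(axis=1).mean()

print(f"Average false positives at p<0.10: {false_pos_10:.1f}")  # ~24
print(f"Average false positives at p<0.05: {false_pos_05:.1f}")  # ~12
```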

 

Results Comments:

Despite biasing their statistics in favor of finding significant results where none were present, they only found 2 significant results for Grade 3 children who participated in Head Start, one of which is negative. Virtually all the significant early results disappear by third grade. In fact, they mostly disappear before third grade. That means there is no long-term return on Head Start - the gains don't translate into cumulative gains; instead, they vanish. In fact, most of the 'gains' are gone by the next year.

 

Once you correct for the statistical testing that is overwhelmingly biased in favor of finding significant results, it probably looks much, much worse besides. Even as it stands, the government's own assessment of the program is basically that it accomplishes nothing.

 

-----------------------

 

In the 2014 follow-up study, the government summarized their 2012 findings as "The Head Start Impact Study (HSIS) has shown that having access to Head Start improves children’s preschool experiences and school readiness in certain areas, though few of those advantages persisting through third grade (Puma et al., 2012)"

 

The report is here: http://www.acf.hhs.gov/sites/default/files/opre/hs_quality_report_4_28_14_final.pdf

 

Check out tables on pages 15-21.  Despite breaking their results down by program resources, very few significant effects persist through 3rd grade.  Of the two that did, one is only p<0.10 (they actually separate out different levels of significance), and so likely not a real effect.  Once you consider how many tests they did, even the single p<0.05 result looks potentially dubious.  (No mention in their methods of any correction for number of statistical tests done).

 

---------------------------

 

The 2010 impact study technical (method) report is here: http://www.acf.hhs.gov/sites/default/files/opre/hs_impact_study_tech_rpt.pdf

 

In the 2010 study we can get an idea of effect size (rather than just significance).  The tables on the top of page 5-52 have effect sizes for two of the measured variables.  All effect sizes are below 5% gains, frequently near or below 3%.  In short, the effect size is tiny.

 

This is mostly a methods paper - don't look for statistical testing of child performance here.

Edited by Squirrelloid


As a debater who has developed and used narratives in my aff before: that is how you lose. Straight up, take it seriously. View the poetry as cards itself; the narratives are read for a reason. I won most aff rounds up till doubles, where I got dropped on a 2-1 at Dowling on "poetry is the only thing with real solvency."


RE: Head Start:

 

Dude, you managed to ignore the studies the two-page article is citing, specifically citations 1, 8, and 10 (one actually includes multiple citations). You can't just lie about what the document is talking about. That's the equivalent of dropping cards in debate.

 

Also, that's the equivalent of telling the Washington Post or New York Times that their citing of a study doesn't count because they didn't do the study themselves or didn't publish all the data.

 

Plus, the study of Head Start is incredibly robust; it goes back something like 30 years. Using one 2010 study that goes the other way isn't an honest look at the data. That's the big picture on this question.

Edited by nathan_debate

I'm ignoring them because the author doesn't include enough detail to approach anything resembling warrants. ROI of $7 to $9 per dollar? Based on what? I can't tell; he never talks about what the study did or how it determined that. It's essentially useless. There's going to be academic disagreement (otherwise there wouldn't have been a government study in the first place), which means they're only choosing the most advantageous literature. The only meaningful way to make headway is to provide warrants - i.e., talk about the actual findings of the paper and the data used to reach them. Your author doesn't do that.

 

So no, a quick blurb that cherry-picks academic citations doesn't really have warrants. It's clearly not a full literature review; it's taken the most extreme findings to support its agenda. (And yes, it has an agenda - it's the National Head Start Association.) So just being able to cite something is insufficient to prove a point.

 

(And many of the citations are 'proving' incidental facts or measuring the wrong things. Some of the citations are not about preschool at all - see 2, 3, and 6 - and one is about any preschool, not just Head Start: 5. That's almost half the citations. Some of the studies about Head Start only measure 'school preparedness' or 'school readiness', i.e., end-of-Head-Start or start-of-kindergarten assessments, which the government study I've cited proves is a disparity that disappears either way and has no long-term advantage. There are at least two like that based on the titles of the studies, and possibly more use similar methodology.)

 

Finally, I'll note that all the cited studies appear to be correlation studies.  Correlation =/= causation.

 

Compared to a detailed government study which wanted to find positive benefits to Head Start and failed, yours is just highly-motivated interest group lobbying.

 

And a WP or NYT article which didn't explain the reasoning which led to a conclusion would be equally useless.  You don't have to do the research yourself.  But you do have to talk about it and explain how the conclusion was arrived at.

 

I'll finally note that you ignored all of the actual evidence I presented.  That's actually dropping cards.

 

-------------------

 

Edit: the first cited paper in footnote #1 from your source is here: http://home.uchicago.edu/ludwigj/papers/SRCD_Headstart_2007.pdf

 

It reviews studies on Head Start in the 1960s through the 1980s, concludes it was advantageous then, and then specifically says: "What remains unclear is how Head Start might affect the life chances of low-income children today. Head Start's impacts on children may change over time both because the program itself evolves, and, importantly, because the types of developmental environments – at home and in early childhood programs -- that children would experience if they are not in Head Start also change as more mothers enter the labor force and the range of other local, state and federal programs for young children expands (see, for example, Hill, Brooks-Gunn, & Waldfogel, 2003)." (p5)

 

They go on to try to deal with modern Head Start in the following way: "We try to answer this question in two ways, first by examining the short-term impacts that have been found for studies of other early childhood interventions where there is also evidence for long-term benefits in excess of program costs, and, second, by estimating directly the dollar value of a standard deviation increase in early childhood test scores." These are exactly the kinds of measures that the government studies I cited refute, because these early childhood disparities, especially test scores, dissipate by 3rd grade. I.e., there's no connection between these measures and later performance.

 

Note that the evidence implied in "also evidence for long-term benefits" is based on the Perry program findings, which is NOT Head Start, and they never test any of the differences between the two. (One of which is notably per-pupil funding, but availability/student selection is probably also significantly different.) They hedge on their use of Perry as follows: "Of course, long-term gains may not be proportional to short-term impacts, there are obvious differences in the samples of children that participated in the Perry Preschool and Head Start programs, and the long-term benefits that accrue to children in early childhood programs could be different across birth cohorts because of changes over time in things like labor market conditions, social program generosity or incarceration policies. Nonetheless, at a minimum, the Perry Preschool data raise the possibility that "small" short-term impacts might be sufficient for a program with the costs of Head Start to pass a benefit-cost test." (p7, emphasis mine)

 

Basically, they assume short-term gains measure something related to long-term performance, and they assume Head Start is comparable to Perry. The former is refuted by the studies I cited; the latter is completely untested afaik.

 

Indeed, I'm pretty sure the studies I cited blow up the internal links between long-term performance and short-term measurement in all of the Head Start studies, because they're all making those assumptions by necessity (when they're relevant to modern Head Start at all).

 

I could go on, but I shouldn't have to pick apart your sources if you can't be bothered to track them down and actually present evidence from them in the first place.

Edited by Squirrelloid


You could read some of the shit I read on my wiki as a counter-K:

1. Some type of rewriting-history K - look at the Vazquez card; it might be fire counter-advocacy that is competitive.

2. Black framework / framework - I don't know why topical version of the aff / switch-side doesn't solve this aff.

3. I don't know, maybe engage the aff on Wilderson - I see the link story being really good on a first read-through.


So I found this in the first study Squirreloid linked:

 

(Page 46 or 47 thereabouts)

"The analysis of main impacts generated a large number of individual statistical tests. Such conditions increase the probability that one or more statistically significant differences will emerge by random chance alone in the absence of a true impact—an event known as a “false discovery.” To guard against such false discoveries, Benjamini and Hochberg (1995) developed a statistical test designed to screen out marginally significant findings from large sets of impact estimates. This procedure was applied to the complete set of outcomes within each domain (cognitive, social-emotional, health, and parenting outcomes). This was done separately for each of the two study cohorts. Because the Benjamini-Hochberg test limits discovery of true impacts compared with conventional test procedures, we present findings both with and without the Benjamini-Hochberg procedure."

 

Which would seem to address some of the concerns cited. (Unless I'm looking at the wrong study)

 

 

Something else interesting is that they decided they had an endogeneity problem and used IV estimation (the part where they mention including covariates in the regression model), which carries its own criticisms.
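For anyone who hasn't seen it before, here is a minimal sketch of the Benjamini-Hochberg step-up procedure that passage describes, run on made-up p-values (purely illustrative, not numbers from the study):

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean mask of which tests survive the Benjamini-Hochberg
    step-up procedure at false discovery rate q."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k such that p_(k) <= (k/m) * q; all tests up to k pass.
    below = ranked <= (np.arange(1, m + 1) / m) * q
    keep = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        keep[order[:k + 1]] = True
    return keep

# Illustrative p-values only - not taken from the Head Start report.
p_vals = [0.001, 0.008, 0.02, 0.04, 0.06, 0.09, 0.30, 0.70]
print(benjamini_hochberg(p_vals, q=0.05))
# -> only the two smallest p-values survive at q = 0.05
```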

If they were truly guarding against false discoveries the way that passage describes, they wouldn't be using p<0.10 as their significance standard. (The only way to adjust your tolerance for false positives is to adjust your acceptable p-value, and they've adjusted it the wrong way.)

 

The proper way to handle it is to figure out what per-test p threshold results in p < SignificanceStandard for the whole study. I.e., what per-test p generates p < 0.05 for the whole study's results as a group. They didn't do that, and their lax significance standard strongly suggests they're actively looking for false positives.
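A rough sketch of that calculation, using the standard Bonferroni and Sidak corrections (my illustration, not something the report does); the ~240 test count is again an assumption rather than the report's actual figure:

```python
# Per-test p-value threshold needed so the chance of ANY false positive
# across the whole set of tests stays at about 0.05 (family-wise error rate).
family_alpha = 0.05
n_tests = 240  # assumed, as in the earlier sketch; not the report's actual count

bonferroni_threshold = family_alpha / n_tests              # ~0.00021
sidak_threshold = 1 - (1 - family_alpha) ** (1 / n_tests)  # ~0.00021

print(f"Bonferroni per-test threshold: {bonferroni_threshold:.5f}")
print(f"Sidak per-test threshold:      {sidak_threshold:.5f}")
# Either way, the required threshold is far stricter than the p < 0.10 the report used.
```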

 

Still, even if we accept their results as valid, it disproves the value of Head Start today.

Edited by Squirrelloid


Best solutions to narrative affs:

Cap

Biopolitics

Nietzsche

If ID politics, Wendy Brown

Psychoanalysis if you have a good link

Baudrillard - commodification of suffering

Framework

Case - narratives suck in debate, get out


There is a great file on Open Evidence called "Cap K vs Identity Teams" from Northwestern. There are cards there that reject narratives and show that they make racism worse because there is no advocacy against capitalism, which is the reason racism exists. This is the link: http://openev.debatecoaches.org/bin/download/2015/Northwestern/Cap%20K%20vs%20Identity%20Teams%20-%20Northwestern%202015.docx

Also, by any chance, have you seen John Spurlock's round 5 2NR speech at Berkeley in 2013? He makes some great analytical arguments against the idea of a narrative:


Depends?

Whether cap links to narratives depends on the narrative in question and its deployment; it can be good or not. At the same time, reading a K versus a critical aff without a 100% golden-ticket equivalent of a link isn't a good idea.

Honestly, having a case debate that indicts/says narratives are bad, plus answers to other parts of the aff, plus framework/T should be sufficient, and it's easily preparable for any aff because 90% of K impact turns to framework are literally the same other than the link/fancy words the 2A will use.
