Every debate culture has an argument structure.
One popular argument structure in American competitive debating is the Claim, Warrant, Impact model. In this structure, you open any argument by asserting something about the world in a concise and direct manner (your claim), proceed immediately to a rigorous demonstration that the claim is true (the warrant), and close the argument by showing the audience why the proven claim is important (the impact).
As a professional debate coach, I trained students on essentially this structure (with the minor aesthetic change of swapping the word “warrant” out for the more intuitive “analysis”). It’s an excellent structure!
But for years there was a ghostly presence in the impact part of this structure that I could not properly name. Students would regularly construct arguments that appeared to fit the Claim-Analysis-Impact structure, yet I would feel intuitively that something important was missing, something I found myself only inarticulately gesturing at in my feedback.
Consider the following example argument that employs Claim-Analysis-Impact structure:
Claim: Foreign aid helps low-income countries combat disease.
Analysis: This is true because combating diseases, like tuberculosis and malaria, requires financial backing that low-income countries lack. For example, it costs money to procure vaccines, to train doctors, and to employ those doctors to distribute those vaccines. Foreign aid provides the financial backing that low-income countries need to combat these diseases.
Impact: Combating diseases in low-income countries is important because disease is a major obstacle to economic growth. By sending aid to low-income countries, we accelerate their economic development.
How many impacts are there in this argument? Without hesitation I can detect two impacts: one official impact (promoting economic growth), and one intermediate impact hidden in the analysis section (combating disease). The argument follows CAI structure, but if I were coaching someone making this argument I’d tell them they messed up in two ways. First, by mislabelling the claim: if the final impact is going to be economic development, that is what the claim should assert the argument will prove. Second, by unnecessarily bundling two impacts together, one of them sitting mostly in the analysis section; the health impact could easily stand on its own two legs without being further strung out into an economic impact.
Consider another argument that seems to be structured acceptably but runs afoul of this thing-that-impacts-need-to-have that I can’t seem to name:
Claim: Foreign aid programs have extensive oversight mechanisms.
Analysis: Foreign aid programs have gotten better and better over the years in overseeing their funds. Modern programs use multiple layers of monitoring, including internal audits, third-party evaluations, and recipient country reporting requirements. For instance, USAID requires quarterly financial reports, annual performance reviews, and independent audits for all major grants.
Impact: These oversight mechanisms ensure that foreign aid isn’t misused or diverted from its intended purposes. This is important, because foreign aid can only be effective if it is not diverted.
How many impacts are there in this argument? On my read, this argument doesn’t have any impacts. How could that be? The argument has a conclusion labelled “impact”, but the conclusion does not constitute positive constructive material for a position on foreign aid. Notice that what foreign aid is intended for isn’t specified - we are left not knowing whether foreign aid is intended for good or bad things. Is foreign aid intended to bolster authoritarian regimes, or to save starving civilians in disasters? We are told only that foreign aid nowadays accomplishes whatever it sets out to accomplish. This passage could amplify an argument, or pre-emptively respond to misdirection-based objections to foreign aid. For example, if foreign aid is spent on good things, then the fact that there are oversight mechanisms would amplify that argument, since we would get more good things if funds are not diverted.
So what is this thing that you need to have to really qualify an argument as having an impact?
It took me quite a while to figure out a name that seems to carve it out, but here it is: the impact of an argument needs to advance an evaluable.
An evaluable is a thing with normative valence that is either positive or negative (it is straightforwardly good, or straightforwardly bad - never neutral), which can be abstract or concrete, long-lasting or short in its duration, probable or improbable, wide or narrow (in terms of how many things it affects), and deep or shallow (in its intensity of positive or negative effect per affected thing).
So, the following are all fairly straightforward evaluables:
Making grandma proud (a concrete narrow shallow good thing)
A paper cut (a concrete short-lived shallow narrow bad thing)
Systematic privacy violations (an abstract broad bad thing)
Protecting autonomy in medical decisions (an abstract good thing)
User-interface friction (a concrete shallow bad thing)
Mathematical elegance in a proof (an abstract good thing)
Biodiversity loss (an abstract long-lasting broad deep bad thing)
Reducing chances of nuclear war by 1% (an abstract probabilistic deep good thing)
More total injustice relative to a counterfactual (a very abstract bad thing)
Averting exactly 50 violent assaults (a very concrete narrow deep good thing)
Finally, a name for the thing!
If we reconsider our first example argument from earlier, we can see that the disease/development argument still has two evaluables: combating disease, and economic development. Both of these are normatively valenced, both positive, both evaluable on multiple dimensions. Combating disease is a deep impact on those affected, since it means the saving of lives, or averting suffering from disease. Economic development is broader but shallower, affecting more people in a smaller way per person.
But what about the second argument? If we rifle through the words of that passage, the phrase that emerges as the likeliest candidate for an evaluable is perhaps oversight. This would be an abstract evaluable, like justice or biodiversity: you could indeed have more or less of it. To stand on its own, though, oversight itself would need to be good. If oversight merely amplifies the good of the aid, then the argument still fails to state its evaluable (the good thing that aid does). Perhaps oversight is itself good because failing to deliver aid to its intended recipients wrongs the sender of the aid, whose intentions are thwarted. Thwarted intentions would then be the valence-flip-side of oversight - oversight is good and maintains intentions, lack of oversight means intentions are thwarted, which is bad.
What is so satisfying about evaluables is that this concept:
Lets one precisely identify the point of an argument.
Immediately invites rich, multidimensional consideration of any such point (e.g. breadth, depth, probability, duration).
Helps differentiate positive constructive material from other argumentation (e.g. rebuttal, pre-empting objections, amplification).
Clarifies for a speaker where they should drive their arguments to make them important, relevant, and clear.
All without the ambiguity of the word “impact”, a term that engendered no small amount of confusion in those I taught. No more ghostly presence!
Love this term, for all the reasons you outline! Seems both precise and descriptive (i.e. evaluables are the thing you evaluate at the end of the debate!).
However, I'm a bit confused by the example you presented - in the 2nd argument, while we might say it has an evaluable, it seems like a piece of a larger argument (rather than a complete argument in its own right). As you correctly point out, the oversight is genuinely meaningless if the foreign aid doesn't actually achieve anything; if a team only made this argument, I might say they have no impact/evaluable at all! Without a positive argument supporting the intended effects, it doesn't really matter if we get the intended result or not.
Some questions around this:
1) Are the "oversight" evaluable and a concrete impact of foreign aid (i.e. improved medical outcomes) the same kind of thing? It seems like we're not really adding them up so much as multiplying them (i.e. foreign aid can help treat disease, and we proved that about 50% of the money actually ended up with the doctors, so you ought to evaluate my argument at 50% efficacy). Should there be different concepts for these different kinds of evaluables?
2) Are there other example evaluables that aren't adequately covered by "impacts"?
Hey, thanks for the comment.
I'd clarify that the oversight argument is flawed because the valence of the implicit evaluable (whatever the aid actually consists of or does) is unknown, not because the aid doesn't do anything. Implicitly, the aid is doing something, but what it's doing is unspecified by the arguer.
I agree with you that if a team made that argument alone, they'd have no material. Prop would be vulnerable to an opp team arguing that aid is intended for bad things, and so its being well overseen just amplifies a bad outcome.
But to your question 1: prop could try to defend oversight itself! This would be unstrategic, as it is very hard to imagine a good argument for oversight-in-the-abstract, especially in a round full of high-stakes considerations like endemic disease and the economic development of low-income nations. Still, the arguer could say, for example, that oversight is itself good because it is a transferable institutional skill: to do oversight is to develop proficiency in a valuable capacity for the sender of aid (the ability to effectively execute the delivery of aid-like things). That seems low-impact compared to the effect of the aid itself, but in principle oversight could be transformed into the evaluable of that argument.
Re: question 2, I think in the Venn diagram, evaluables would be a circle totally inside the circle of impacts. So there wouldn't be evaluables that aren't impacts - only impacts that aren't evaluables.