In the digital “Tomorrow’s Professor” e-letter (Vol. 49, Issue 9), Prof. David Smit of Kansas State University writes an excellent article examining the purposes for, and the teaching of, writing in higher education. https://mailman.stanford.edu/mailman/listinfo/tomorrows-professor
The listed purposes and reasons for writing line up nicely with the characteristics and dispositions of critical thinking (CT) identified in a massive CT research project conducted by Dr. Peter Facione. More at: http://insightassessment.com/9dex.html.
Facione’s 40+ year career in CT includes an extensive set of research-quality tools for assessing CT in action, including an instrument tuned for use with Military & Defense thinkers http://insightassessment.com/9test-mdcti.html.
The 16 measurable axes http://insightassessment.com/Scales%20MSRP.html of the Military and Defense Critical Thinking Inventory (MDCTI) make a pretty interesting catalog of qualities we could hope to see in the classroom.
This triggered a discussion of critical thinking within the curriculum, and the following observation about “judgment”.
I notice the word “judgment” is used very liberally in the documents. Have you come across a formal definition in your research? It is a pet peeve of mine. The Army uses “judgment” in all its doctrinal manuals but the only definition I can find is within an appendix of FM 6-22 (and it is a very thin definition). We often talk about improving one’s judgment in class but I cannot find a doctrinal (or academic) baseline that provides a solid definition or explanation of the term, let alone methods for improving it. In [the] A714 elective, …[there is ]… a lesson on developing and improving judgment but it is tied to Warren Bennis’ book, Judgment: How Winning Leaders Make Great Calls. The problem is it is more a decision making model than anything else. Anyway, if you have any insight on “judgment,” I would be interested in your thoughts.
I haven’t seen a convincingly good formal definition; as you note, the term is always used in the context of situational decision-making.
For rational choice models, judgment has a process component and rests on trust in the heuristics: “If you trust the MDMP process and do it properly, good decisions are the natural outcome.”
Or even: “Well, that worked out OK, so in retrospect, he must have ‘good’ judgment.”
When you add complexity and uncertainty, you need a sense of probabilistic thinking to “judge” well. So, if I buy insurance and die, that was obviously good judgment. If I buy insurance and don’t die within the term, we wouldn’t say that was “bad judgment”: I was managing a risk that needed to be managed, and I got one of the more favorable of a number of possible outcomes.
If I don’t buy insurance, and don’t die within the term, was that bad judgment? Good or bad is a reflection not just of the probabilities of the outcomes, but also of their consequences.
If I have only a 1:1,000 chance of dying in the next 5 years, does that mean I should take the risk and go uncovered? What if the chance is 1:10? 1:1,000,000,000?
What constitutes good judgment now has a value component to it which is open to different opinions and risk tolerances.
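The insurance reasoning above can be sketched numerically. This is a minimal illustration with hypothetical numbers (the premium, payout, and probabilities are my own, chosen only to echo the 1:10, 1:1,000, and 1:1,000,000,000 odds in the text):

```python
# Hypothetical 5-year term policy: compare the certain cost of insuring
# against the expected cost of going uncovered, at different death odds.

def expected_costs(p_death, premium, payout):
    """Return (cost of insuring, expected uncovered loss)."""
    insured = premium             # certain, bounded cost
    uncovered = p_death * payout  # expected cost, but catastrophic if realized
    return insured, uncovered

premium, payout = 500.0, 1_000_000.0  # hypothetical figures

for p in (1 / 10, 1 / 1_000, 1 / 1_000_000_000):
    insured, uncovered = expected_costs(p, premium, payout)
    verdict = "insure" if uncovered > insured else "open question"
    print(f"p={p:.0e}: premium={insured:,.0f}, expected loss={uncovered:,.2f} -> {verdict}")
```

At 1:10 and even 1:1,000, expected value alone says insure; at 1:1,000,000,000 the expected loss is a fraction of a cent, yet a risk-averse person weighing the catastrophic consequence might still buy the policy. That gap between the arithmetic and the choice is exactly the value component the paragraph above describes.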
Doctrine, as authoritative guidance not to be dispensed with lightly, is the professional values base by which we could assess judgment, in light of our estimates of possible and probable outcomes when those probabilities and consequences are knowable.
What happens though, when doctrine no longer captures the dynamism of the world it proposes to model?
Well, could we say *at the time* that GEN Petraeus was showing good judgment in rewriting the COIN doctrine and then deciding to use it?
In retrospect it was *obviously* good judgment, right? Was it a 1:1,000,000 shot that it would work out? Or was there a basis for believing it would probably work out well? Was he gambling or taking a measured risk or was it a sure thing? How do you choose an evaluation scheme that lets you make that judgment in the moment? Do you think rational choice was in play, or some other artful, emotional, intuitive judgment?
If you focus on the judgment of which values frame to use when evaluating a decision to support the surge, you can support it for the “right” reasons or the “wrong” reasons. But the judgment to support the surge from either frame could still be a bad one, if the probabilities and consequences showed it to be a bad choice on its face. In that case the merits of the decision, independent of the values frame, could be sufficient to make it a bad decision.
We knew from doctrine and the rules of conventional warfare that we should quickly dominate Iraq in DS/DS and in OIF 1; was that forecast qualitatively different from the one GEN Petraeus made when estimating the consequences of using the new COIN doctrine, with all the extra variables in play and no wargaming processes to lean on?
What makes this even more troublesome are insights like Taleb’s Black Swan and 4th Quadrant problems, which suggest that we aren’t well suited to anticipating the catastrophic events that invalidate the carefully nurtured inductive wisdom of doctrine when the unpredictable 10-sigma event occurs.
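A rough numerical illustration of why “10-sigma” intuition fails under fat tails (my own toy comparison, not Taleb’s): the same extreme event is essentially impossible under a normal distribution but routine under a power law.

```python
import math

def normal_tail(k):
    """P(X > k) for a standard normal, i.e., a 'k-sigma' event."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(x, alpha=2.0, xm=1.0):
    """P(X > x) for a Pareto distribution with shape alpha and scale xm."""
    return (xm / x) ** alpha

# A 10-sigma event under the normal model: on the order of 1e-23,
# i.e., "never happens". Under a fat-tailed Pareto model, an event
# 10x the typical scale has probability 0.01 -- about once per hundred trials.
print(f"Normal  P(X > 10 sigma) = {normal_tail(10):.1e}")
print(f"Pareto  P(X > 10)       = {pareto_tail(10):.1e}")
```

If doctrine’s inductive wisdom was fitted to the thin-tailed world, this twenty-orders-of-magnitude gap is the size of the error when the world turns out to be fat-tailed.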
So, now judgment is involved in deciding which regime of judging should be applied to a given situation: is there anything about the situation which suggests it is a 4th Quadrant problem when “normal” rules of judgment should be set aside? Or is it of the routine variety where “normal” doctrine and heuristics and statistics may be properly applied? This is the area of meta-cognition or meta-judgment.
What about the sensory and creativity components? If my creativity only lets me see 2 options, but I miss a 3rd option that would have worked, am I guilty of bad judgment or of a lack of imagination? And if it was only a lack of time that prevented me from hearing the final briefer who had the magic bullet, am I guilty of bad judgment or of poor time management?
Design thinking of the Army variety is beginning to address these cognitive challenges.
I would start with Herb Simon’s work on bounded rationality, on the limits of rational choice in the real world, and with James March’s primer on decision-making. Even though their work is decades old, they remain among the clearest writers on the subject.
Gerd Gigerenzer and Gary Klein have more recently taken up the mantle, writing about instinctive, expert intuition and the judgment that is inductively built up as tacit knowledge, embedded below the level of consciousness, among experts working in their field.
Dr. Robert Burton, MD, describes how our feelings of confidence, which emerge from that same subconscious, are an evolutionary by-product of how our brains have adapted, and are not connected to “realistic” estimates of the modern world around us.
All the work on cognitive biases illustrates the many ways in which our “judging” capacities, shaped by evolution, are poorly suited to the fast-paced, complex modern world, and why common sense isn’t very common and often isn’t very sensible.
I have already mentioned to you that I really like Michael Roberto’s approach to shaping leaders’ decision-making with these ideas in mind.
The can of worms is now opened, and the worms are crawling around. What’s a leader to do? It must be something, and it must be now; the press conference starts in 15 min.
That’s the basis of my current thinking about judgment under time constraints and the pressure to act, and of how I am thinking about CT in the curriculum.