How Much Information Does an Expert Use? Is it Relevant? by James Shanteau
 
For Shanteau, expert and novice decision makers differ in their choice of diagnostic information. Each must choose what information is relevant before making a decision, and what is relevant in one situation may not be relevant in another. There is an entire research field called judgment and decision making, and it cannot possibly be summarized in a few paragraphs. [From Shanteau's footnote 2: 'experts' are considered to be the best at what they do; 'novices' are intermediate in skill and are trying to become experts; 'naive' decision makers know little or nothing.]
 
It seems reasonable that if we are going to write a computer program that makes judgments and decisions in a game of chess, we should spend at least some time reading about current theory in the research area of judgment and decision making.
 
Shanteau declares that most observers of judgment and decision making accept the following points: to be effective, expert decision makers 1) attempt to use all cues that offer diagnostic information (though a cue may be skipped if it is too costly to acquire), and 2) often use simplifying heuristics, sometimes to replace missing information with what is considered likely or reasonable to predict.

p.2"Most observers of judgment and decision making accept the following two-part argument: First, to make effective decisions, all cues which are diagnostic or predictive of the outcome should be included in a decision. In complex real-world environments, there will be numerous sources of diagnostic information. It follow that experts should base their judgment on many cues. [from footnote 1, Shanteau uses the term 'cue' to represent a 'dimension', 'factor', or 'attribute', which represent a source of information used by experts]

Second, most decision makers use simplifying heuristics when making judgments (Tversky & Kahneman, 1974). This leads to reliance on less than optimal amounts and inappropriate sources of information. That means decision makers generally base their judgments on a small number of cues, often used suboptimally."

Research shows that expert judgments can be described by simple linear models of important cues. Anyone who has read a copy of Consumer Reports or has bought a car has probably used one of the expert-produced product rating charts constructed from linear models.

p.2"Not only do experts make use of little information, the evidence suggests their judgments can be described by simple linear models. Despite extensive efforts to find nonlinear (configural) cue utilization (e.g., Hoffman, Slovic, & Rorer, 1968), almost all the variance in judgments can be accounted for by a linear combination of cues (Goldberg, 1968). 'The judgments of even the most seemingly configural clinicians can often be estimated with good precision by a linear model' (Wiggins & Hoffman, 1968, p. 77). As Dawes and Corrigan (1974, p. 105 conclude, 'the whole trick is to decide what variables to look at and then to know how to add.'

Consequently, expert judgments often lack the complexity expected from superior decision makers, either in the number of significant cues or in the model used to describe their judgments. These findings paint a picture of experts making judgments in much the same way as naive subjects, with little evidence of any special abilities (Kahneman, 1991)."
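
Dawes and Corrigan's advice to "know how to add" maps naturally onto a chess evaluation function. Below is a minimal sketch, in Python, of a judgment expressed as a simple linear model: a weighted sum of cues. The cue names and centipawn weights are illustrative assumptions, not values taken from Shanteau's paper or from any real chess engine.

# A sketch of "decide what variables to look at, then know how to
# add": a chess evaluation as a weighted linear combination of cues.
# Cue names and weights are illustrative assumptions only.

WEIGHTS = {
    "material_balance": 1.0,   # material difference, in centipawns
    "mobility": 4.0,           # per extra legal move
    "king_safety": 12.0,       # per unit of king-shelter score
    "passed_pawns": 20.0,      # per passed-pawn advantage
}

def evaluate(cues: dict) -> float:
    """Score a position as a linear combination of cue values.

    `cues` maps cue names to signed values (positive favors the side
    to move); cues missing from the dict are treated as zero.
    """
    return sum(WEIGHTS[name] * cues.get(name, 0.0) for name in WEIGHTS)

# Example: a one-pawn material edge plus slightly better mobility.
print(evaluate({"material_balance": 100.0, "mobility": 3.0}))  # 112.0

Note that all of the 'expertise' in such a model lives in the choice of cues and weights; the combination rule itself is trivial, which is exactly Shanteau's point.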

Expert and novice decision makers differ in which information they select as important, that is, in what they classify as diagnostic.

p.4"It appears that experts and novices differed in their ability to discriminate between relevant and irrelevant information... Where experts differ from novices is in what information is used, not how much."

Shanteau speaks about the ability of an expert to determine which cues are diagnostic in nature and which are not. Our computer chess program can obtain diagnostic information from the positions being evaluated and from the placement and interactions of the pieces on the board, if we choose to obtain it. Note that the cost (in terms of lost search time) of obtaining the diagnostic information might impact the performance of the program. Clearly we are interested in diagnostic information that can be obtained quickly, but we might also compute more complex diagnostic data if it improves the program's performance in simulated tournaments of a few hundred games.
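
To make the cost trade-off concrete, here is a sketch of cost-aware cue gathering that reuses the evaluate function and WEIGHTS table from the earlier sketch. The cue extractors and the 150-centipawn margin are hypothetical; a real program would tune them against tournament results.

# A sketch of cost-aware cue gathering: cheap cues are always
# computed; expensive cues are computed only when the cheap estimate
# says the position is close enough to justify the extra search time.
# The extractors and the 150-centipawn margin are hypothetical.

def gather_cues(position, cheap_extractors, expensive_extractors,
                margin: float = 150.0) -> dict:
    cues = {name: fn(position) for name, fn in cheap_extractors.items()}
    rough = evaluate(cues)  # linear model from the earlier sketch
    # Pay for expensive diagnostics only when the cheap estimate is
    # within `margin` centipawns of equality; lopsided positions are
    # settled by the cheap cues alone.
    if abs(rough) < margin:
        cues.update({name: fn(position)
                     for name, fn in expensive_extractors.items()})
    return cues

# Example with stand-in extractors; a real program would read these
# values from the board.
cheap = {"material_balance": lambda p: p["material"],
         "mobility": lambda p: p["mobility"]}
expensive = {"king_safety": lambda p: p["shelter"],
             "passed_pawns": lambda p: p["passers"]}
pos = {"material": 30.0, "mobility": 2.0, "shelter": 1.0, "passers": 0.0}
print(gather_cues(pos, cheap, expensive))  # all four cues computed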

p.5-6"What separates the expert from the novice, in my view, is the ability to discriminate what is diagnostic from what is not. Both experts and novices know how to recognize and make use of multiple sources of information. What novices lack is the experience or ability to separate relevant from irrelevant sources. Thus, it is the type of information used – relevant vs. irrelevant – that distinguishes between experts and others.

"The problem for novices is that information diagnosticity is context dependent. What is relevant in one context may be irrelevant in another. Only a highly skilled judge can determine what is relevant in a given situation – precisely the skill that distinguishes experts from non-experts. Thus, it is the ability to evaluate task context that is central to expertise."

"This view has three implications for research on judgment and decision making: First, the assumption that experts should use more information than novices in making a decision is not correct. The number of significant cues does not reflect degree of expertise. As reported in the studies summarized here, mid-level and even entry-level subjects often have as many or more significant cues as experts. Indeed, there is evidence to suggest that novices may rely on too much information and that experts are better because they are more selective (Shanteau, 1991). Thus, the information-use hypothesis is inappropriate.

"Second, the crucial difference between midlevel and advanced expert is the ability to evaluate what information is relevant in a given context. The problem, however, is that analysis of context is difficult, even for experienced professionals (Howell & Dipboye, 1988). Nonetheless, top experts through insights gained from experience know which cues are relevant and which are not. An interesting question for future research is to determine how this experience is translated into the ability to distinguish relevant from irrelevant. One possibility pointed out by Neale and Northcraft (1989) in an organizational setting is that experience leads experts to develop a 'strategic conceptualization' of how to make rational decisions."

"Third, these arguments imply that efforts to analyze experts across domains are fruitless. The Information-Use Hypothesis reflects an effort to evaluate expertise generically without reference to specific decision contexts. As the studies cited here show, the hypothesis does not work. This illustrates that it is difficult, if not impossible, for decision researchers to draw generalizations about experts without reference to specific problem domains. In future discussions of experts, any conclusions should be verified in more than just one domain."

"Instead, researchers should ask, 'How do experts know what kind of information to use?' [Experts have an]  ability to evaluate what is relevant in specific contexts. It is the study of that skill, not the number of cues used, that should guide future research on experts."

The ability to determine what is relevant in a given context is what separates experts from novices. If we could find a way to determine what is relevant in evaluating a chess position, we would have a better evaluation function. The proposed heuristic obtains diagnostic information from the placement of the pieces on the board and their interactions with other pieces, and uses this information to focus the search effort, as sketched below.
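
As one sketch of what focusing the search with diagnostic information might look like, the following orders candidate moves by how much they improve the cues judged relevant in the current context, so that the search examines promising lines first. The relevance rule and the example cue values are illustrative assumptions (the WEIGHTS table is reused from the first sketch); this is not the proposed heuristic itself.

# A sketch of cue-driven move ordering: moves are sorted by how much
# they improve the cues judged relevant for this kind of position.
# The relevance rule and cue values are illustrative assumptions.

def relevant_cues(is_endgame: bool) -> list:
    # Context-dependent relevance, in the spirit of Shanteau: which
    # cues are diagnostic depends on the kind of position. This rule
    # is a placeholder, not a claim about real chess knowledge.
    if is_endgame:
        return ["passed_pawns", "material_balance"]
    return ["king_safety", "mobility", "material_balance"]

def order_moves(moves_with_cues, is_endgame: bool):
    """Sort (move, cue_dict) pairs, most promising first.

    Each candidate move is paired with the cue values of the position
    it leads to.
    """
    names = relevant_cues(is_endgame)
    def gain(item):
        _, cues = item
        return sum(WEIGHTS[n] * cues.get(n, 0.0) for n in names)
    return sorted(moves_with_cues, key=gain, reverse=True)

# Example: in a middlegame the pawn grab still ranks first, but the
# king-weakening pawn push drops to the bottom despite its mobility.
candidates = [("Nf3", {"mobility": 1.0}),
              ("g4", {"king_safety": -2.0, "mobility": 2.0}),
              ("Qxb7", {"material_balance": 100.0, "king_safety": -1.0})]
print([m for m, _ in order_moves(candidates, is_endgame=False)])
# ['Qxb7', 'Nf3', 'g4']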

p.6"In their paper on linear models, Dawes and Corrigan (1974, p.105) concluded that 'the whole trick is to decide what variables to look at and to know how to add.' Although this sounds simple, it can be quite difficult to accomplish... by concentrating on [the] number of significant cues, observers may have overlooked what makes experts special - their ability to evaluate what is relevant in specific contexts. It is the study of that skill, not the number of cues used, that should guide future research on experts."
