Today’s NY Times has an article about a new paper in Psychological Science on whether talent or practice is more important for elite performance. In a meta-analysis, the psychologists “found that deliberate practice explained 26% of the variance in performance for games, 21% for music, 18% for sports, 4% for education, and less than 1% for professions”. I haven’t read the Psych Science paper, but, while the 21% for music is at least somewhat plausible, the 4% and 1% are obviously too low. Something is wrong with this research.
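To put those numbers in more familiar terms (this is my gloss, not anything from the paper): “variance explained” is the square of the correlation between practice and performance, so the reported percentages translate into correlations roughly as follows:

    \[
      r = \sqrt{R^2}:\quad
      \sqrt{0.26} \approx .51 \ (\text{games}),\quad
      \sqrt{0.21} \approx .46 \ (\text{music}),\quad
      \sqrt{0.18} \approx .42 \ (\text{sports}),
    \]
    \[
      \sqrt{0.04} = .20 \ (\text{education}),\quad
      r < \sqrt{0.01} = .10 \ (\text{professions})
    \]

On that scale, a practice-performance correlation around .5 for games is substantial, which is part of why the near-zero figure for professions looks so suspicious.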
The debate is also somewhat poorly framed, although the NY Times article goes some way toward setting it straight. The dichotomy between talent and deliberate practice is a false one. There are at least two other huge factors: determination and quality training (in addition to starting at an optimal age, which I’m going to ignore here).
Every elite performer is a unique case, with its own combination of explanatory factors. But you are not going to understand elite performance if all you focus on is talent vs. hours of deliberate practice.
It doesn’t take a lot of sophisticated science to know that talent and (deliberate) practice are insufficient for elite performance. For one thing, it takes good training. The better the teaching/training, the better the results are likely to be. That’s one reason students obsess about getting into good Ph.D. programs. And even that is insufficient. We all know people who are talented and have worked hard under good teachers, yet didn’t perform at the expected level. What they were missing is what I here call “determination,” which is a whole complicated set of skills and dispositions that would be worth investigating in detail.
[Image credit: Flickr user Steven S.]
From the paper: ‘Why were the effect sizes for education and professions so much smaller? One possibility is that deliberate practice is less well defined in these domains. It could also be that in some of the studies, participants differed in amount of prestudy expertise (e.g., amount of domain knowledge before taking an academic course or accepting a job) and thus in the amount of deliberate practice they needed to achieve a given level of performance.’
Right. What is ‘deliberate practice’ in doing, e.g., philosophy? In teaching? What isn’t? What counts as good performance in these domains? And given the lack of any good answer to these questions, what were the studies that met their inclusion criteria studying w/r/t education and professions?
Hi Josh – you ask: “What is ‘deliberate practice’ in doing, e.g., philosophy? In teaching…”
I’ve taken a look at one of the studies in the meta-analysis, which examined a group of students taking a computer programming course. The meta-analysis used a bizarre definition of “practice”, as follows:
– Attending lectures and doing the assigned coursework did not count as practice;
– Any study before the start of the course did not count as practice;
– Independent study (such as, say, looking up stuff about the course on the internet) did count as practice.
The study actually found that the main predictors of success on the course were 1) attendance at lectures and 2) previous programming experience. No radical surprises there. But the authors of the meta-analysis did not count either of these as practice.
So it’s no surprise that the meta-analysis found that, by this definition, “practice” was unimportant for success on the course. It’s not because study is unimportant. It’s because their definition of “practice” excluded most of the students’ study time, including their time at lectures.
Yes, this seems a disappointing way to set up the meta-analysis. The very “deliberate practice” literature they criticize has identified a variety of further conditions that deliberate practice must satisfy to improve performance, and that literature says repeatedly that mere hours spent hammering away at an activity are not enough. Most importantly, prompt, objective feedback needs to be available, and the domain needs to have the right kind of structure: problem types that can be broken down into components, and a causal structure stable enough that the causes of success or failure can be diagnosed in a way that lets one minimize failure in the future (the authors do, at least, investigate the predictability of the environment as a moderating variable). And the relevant informational structure of the environment can’t be glossed as mere predictability; for example, domains with lots of highly valid but rare cues (“broken-leg” cues) tend to be more conducive to expertise.

Other psychological factors known to moderate the success of deliberate practice, such as memory, motivation, and metacognitive ability, were not investigated. (Where “talent” does matter, it’s probably a combination of those factors, and we can still debate whether they are modifiable with the right kind of practice.) “Professions” may also not be kinds with respect to expertise: most of this literature finds that expertise effects are task-specific rather than general to a whole domain (e.g., an expert surgeon may be a terrible diagnostician). Finally, we need to know whether a field is even conducive to genuine expertise in the first place (as opposed to pseudo-expertise, as with stockbrokers and intelligence analysts).
And the whole thing becomes even more disappointing when practice is posed in a simple dichotomy with “talent”. The study only establishes that the remaining variance must be due to factors other than deliberate practice. Until we know what those factors are and how systematic they might be, it’s hard to know whether to regard this meta-analysis as a major blow against the deliberate practice view. After all, even with the flaws noted above, the study found significant correlations between expertise and practice that were in fact moderated by the sorts of things the deliberate practice view suggests are important. Still, even if this meta-analysis has been overinterpreted, there’s a lot here that’s interesting and worth thinking about.
I’d be interested to hear whether others thought the method for collecting studies was fair. E-mail solicitation is a well-known potential source of bias in meta-analyses, and it seems surprising to me that only 88 studies out of this very large literature met their criteria.
FWIW, it’s also nice to see that their publication-bias analysis did not find evidence of bias.
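For anyone curious what such a check involves: one standard approach (I’m not claiming it’s the exact test the authors ran) is Egger’s regression test for funnel-plot asymmetry. Here’s a minimal sketch in Python using numpy and scipy, with made-up effect sizes; a real analysis would use the 88 studies’ actual effects and standard errors.

    import numpy as np
    from scipy import stats

    # Hypothetical per-study inputs: effect sizes (e.g., correlations
    # transformed to Fisher's z) and their standard errors. These
    # numbers are invented purely for illustration.
    effects = np.array([0.35, 0.42, 0.28, 0.51, 0.33, 0.40, 0.22, 0.45])
    se = np.array([0.10, 0.15, 0.08, 0.20, 0.12, 0.18, 0.07, 0.16])

    # Egger's test: regress each study's standardized effect (effect / SE)
    # on its precision (1 / SE). An intercept far from zero indicates
    # funnel-plot asymmetry, one signature of publication bias.
    res = stats.linregress(1.0 / se, effects / se)
    t = res.intercept / res.intercept_stderr   # t-test on the intercept
    p = 2 * stats.t.sf(abs(t), df=len(effects) - 2)
    print(f"Egger intercept = {res.intercept:.2f}, p = {p:.3f}")

A non-significant intercept, which is what the paper apparently found with its own analysis, means the smaller studies don’t report systematically larger effects than the bigger ones.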