an interesting article on what your belief about the nature of intelligence (a fixed trait, or something that can be developed) seems to do to your performance on IQ tests. The article is based on proper research, and you may be able to have a look at the source yourself. I haven't: I have learned to trust Ars Technica's science coverage enough not to bother with fact-checking alone (not so if I believe something more can be learned from the source that could not make it into an article aimed at a general readership).
Anyway, you'd be well advised to read one, the other, or both before continuing with what will shortly become a critique of the findings.
First off, let me make clear that I do not necessarily disagree with the result of the study. The methodology seems sound enough, and the conclusions drawn from the results seem reasonable enough, too. Still, I would very much like to see more studies, using different (but also some with the same) experimental setups, confirming - or refuting, as the case may be - this particular one. My objection, which I am going to describe here, is fairly subjective, based on the particular experiment run, and may not even be a refutation of the validity of the result, but rather a comment on the artificiality of IQ tests as applied to our, essentially, stone age software.
Let me first quickly recap the study and its findings (but do make sure you also read the Ars Technica article or the study itself). Subjects were given an IQ (or IQ-like) test after their views on the nature of intelligence were assessed. The results show that people who believe intelligence to be a fixed, inborn trait tended to tackle the easy questions first, while those who believed intelligence to be something that can be influenced by training, learning, etc., tended to tackle the more difficult questions first. This also held true if the subjects were primed with one or the other belief. Priming was even able to reverse subjects' behaviour on the test (for subjects primed with a view of intelligence contrary to their own original beliefs).
The researchers interpreted this to say that people who believed intelligence to be malleable employed a test-solving strategy that benefited their own intelligence by exercising it more, and thus potentially increasing it in the process. On the other hand, the fixed-trait group tackled the easy questions first, thus setting themselves on a path of keeping their intelligence as it was (by not challenging it) and thus proving their own point of view. Whether clearly stated or not (and I am not sure on this point), the feeling I got was that the former group was somehow becoming better off by challenging their intelligence and thus improving it for future tasks.
Now, while this last assertion may very well be true - I for one subscribe to the view that intelligence can be improved, or degraded, by training, or lack thereof (although I also believe that the maximum intelligence one can attain is an inborn trait) - I also tend to think that this strategy may well be a self-defeating one when it comes to the outcome of the task at hand. What I am trying to say is that, while tackling the hardest questions first may improve your intelligence further, it also almost certainly means more time is being spent on those questions. In the setting of an IQ test, where the time available is most often limited (as in real life, where pretty much any task at hand becomes irrelevant - and you may become dead, too - if you take too long to solve it), this may mean scoring lower overall due to a lack of time for the easy questions.
The article (I haven't checked the original study - partly because the link in the Ars Technica article seems to be broken), unfortunately, doesn't say which group scored better overall, either in absolute terms or, much more interestingly, when controlled for natural variation. Were this piece of information available, my critique - not of the study, but of the test-solving strategy employed - could easily be proven wrong, or otherwise. Namely, if the group of "intelligence is malleable" believers were to score consistently higher than their matched and statistically controlled "intelligence is fixed" peers, then their implied better strategy (because it trains their intelligence even further during the test) would be vindicated (and quite possibly their view of intelligence, too). This would also be the case were the "intelligence is malleable" group proven statistically more intelligent (possibly due to past winning behaviour) than the "intelligence is fixed" one, as indicated by school records, prior IQ testing, life success, or any other available proxies for higher intelligence.
As it is, my gut feeling - supported by anecdotal evidence - is that, while there may be long-term benefits from training one's intelligence by tackling, and spending more time on, hard tasks, they would likely be offset - if not entirely wiped out - by failure to complete the test to the extent required to score highly on it (and, in real life, by actually failing at the overall task, where it is irrelevant whether the "hard" parts of it have been completed - think of a mechanic managing to do a stellar job of fixing and improving a part while failing to get the actual car usable, because they ran out of time to put the wheels back on before the race finished, or losing so much time that rejoining the race became irrelevant or futile).
As for anecdotal evidence: my own schooling and professional career (although much more the former than the latter - as failure at the former seems to successfully prevent people from properly joining the latter) are full of examples of brilliantly intelligent people who spent (wasted?) their time figuring out the hardest parts of various subjects and related training problems, only to find themselves scoring poorly on official tests or, maybe even worse, actually failing to pass official tests until they had attempted them several times. On the other hand, colleagues who concentrated on the more easily doable exercises tended to do much better, both on tests and in their subsequent careers.
Finally, whatever the case may turn out to be, I am looking forward to some more in-depth research on the subject, hopefully addressing the concerns listed above.