Assessing research
Tags: originality, publications, scientific community. Posted in: Ethics, Getting published, Research and education
Having been a PhD student for almost three years now, a challenge I keep running into is assessing the research projects I have done. By assessing I mean: judging what is new and interesting about the research. During my Master’s I thought science was all about discovering new physics. But new physics is hard to define. In optics, one may say that all the physics is already in Maxwell’s equations. However, sometimes the system at hand is too complicated for Maxwell’s equations to be solved, and in other cases solving them yields only complicated mathematics.
One way I tried to define new physics is by asking: is this result surprising? But I quickly ran into the issue that senior colleagues, experts in the field, are hard to surprise. And if they do see something new, their expertise allows them to analyze the problem with relative ease. A new insight is thus quickly born, and for that reason it may not seem special or newsworthy to them. Such an insight is often called “trivial”.
So, although a “trivial” insight was obtained with relative ease, it may still be valuable and new to a large community. This is what I notice when talking to people from other groups in closely related fields. When giving talks, it is crucial to clearly explain these “trivial” insights. On these occasions I learned that such insights are very valuable for that particular audience. Moreover, it is precisely these trivial insights that we struggle with in discussions with referees. So the things referees learn from our work are often quite different from what we learned ourselves.
The problem of assessing research also appears when I read papers on my PhD topic, or on the theme of my Master’s thesis. For me, these experiments are relatively easy to understand, and often the results do not come as a big surprise. The amount of surprise seems inversely proportional to one’s knowledge of the topic. Once I even noticed that I might understand part of a paper better than the authors themselves. Then I ask myself: is it important that this work is published, is it newsworthy? Or are we just part of a system, dominated by H-indices, that pushes us to publish trivial thoughts in high-impact journals?
My answer, for now, is that you should think mostly about the people in related fields who may read your work. How obvious is the result for them, and would it increase their understanding? Especially as an experimental scientist, I think that experiments are often about demonstrating and testing our understanding. Hence, experiments are often rewarded for their elegance; for me that means an experiment demonstrates the (new) physics in a clear and convincing manner. And this is, in my opinion, the hardest part of experimental physics: designing a very clever experiment that undoubtedly proves or disproves a theory or thought.
And maybe, if your experiment is too convincing, it may almost seem trivial.
28 Jan 2012 20:02, Ad Lagendijk
Frerik,
experimental science is not “just” proving a theory. Experimentalists can discover a phenomenon for which there is no theory yet. An example from physics is superconductivity, discovered in 1911 by Kamerlingh Onnes and only explained in 1957 by Bardeen, Cooper, and Schrieffer.
31 Jan 2012 17:40, Frerik van Beijnum
Ad,
you are completely right. However, I think we do not discover completely new things in every experiment we do. Sometimes we combine existing knowledge in an experiment, yielding an interesting or surprising result. And in some cases an experiment excels in how convincingly it demonstrates particular physics. So you may not be the first to show the effect, but you may be the most convincing or insightful.
Do you have a criterion by which you assess research, or is it mostly loads of experience and a gut feeling?
4 Mar 2012 13:27, Mirjam
There are many types of experimental science, and a lot depends on the personal preferences of the researcher. I have worked with a PI who liked to ignore everything that doesn’t behave as expected and combine the things that do behave as expected into some new working thing. I have also worked with a PI who thought everything that works as expected is boring. He would skip over that and get very excited about the things that don’t behave as expected, because it means there is something new to figure out. In the literature you will find a mix of these and other approaches.

With respect to “a system that pushes us to publish trivial thoughts in high impact journals”: the idea is that you publish your results in a journal that is appropriate. If you have a general insight that is useful for a range of (related) fields (even if it may seem trivial), you can consider a high-impact broad journal. If you have some very specialized results, e.g. measuring some numbers for a very particular system, you publish in a (possibly lower-impact) specialized journal. All kinds of measurements are useful, not just the ones that prove theories, but also the ones that tell you, for instance, the boiling point of the liquids you work with.

Also, please note that researchers themselves are “the system”: we decide where we publish our results and can collectively decide to place less importance on some of the high-impact journals.
5 Mar 2012 19:16, Frerik van Beijnum
Mirjam,
You seem to suggest that the generality of the insight should be a criterion for assessing your work. This is more or less what I thought too, before starting my PhD. But when I read Physical Review Letters (the premier physics journal), many letters are rather specialized, too specialized if you ask me. I get the impression that the actual criterion is “a large breakthrough in your field”, where some fields have a surprisingly large number of large breakthroughs.