When I was in academia, I always thought there should be a journal for publishing things that go wrong or do not work. I can only imagine there are some experiments that were repeated many times in human history because no one published that they did not work.
My understanding – again, just from that book; I’ve never worked in academia – is that some journals now have a procedure for “registering” a study before it happens. That way, the study’s procedure will have been pre-vetted, and the journal commits to – and the researchers promise to – publish the data regardless of whether the results turn out to be conclusive. Not perfect, but it could certainly help.
It’s a good idea; it just needs to become the norm, which it isn’t at the moment.
I can’t tell you how many times I had some exciting idea, dug around in the literature, and found someone 10, 20, even 30 years ago who’d published promising work along exactly the lines I was thinking, only to discover they’d completely abandoned the project after one or two publications. I’ve come to see that pattern as “this didn’t actually work, and the first paper was probably bullshit.”
It’s really hard to write an interesting paper based on “this didn’t work” unless you can follow up to the point where you can make a positive statement about why it didn’t work. And at that point, you’re going to write a paper based on the positive conclusion and demote the negative finding to some kind of control data. You need the luxury of time, resources, and interest to go after that positive statement, and that’s usually incompatible with professional development.
I agree with you. My point is that we should normalize writing a paper where you report that the experiments, and/or the hypothesis itself, did not work. Later, someone (just like you, in your example) may find the paper and realize the original authors didn’t try this or that. It is knowledge that can be built upon.
“We tried this and got nothing” is not really knowledge that can be built on. It might be helpful if you say it to a colleague at a conference, but there’s no way for a reader to know if you’re an inept experimenter, got a bad batch of reagents or specimens, had a fundamentally flawed hypothesis, inadequate statistical design, or neglected to control for some secondary phenomenon. You have to do extra work and spend extra money to prove out those possibilities to give the future researcher grounds for thinking up that thing you didn’t try, and by then you’ve probably already convinced yourself that it’s not going to be a productive line of work.
It might come close if the discussion section of those “this project didn’t really work, but we spent a year on it and have to publish something” papers included the authors’ negative speculation that the original hypothesis won’t work, or the admission that they started on hypothesis H0, got nowhere, and diverted to H1 to salvage the effort, but that takes a level of humility that’s uncommon in faculty. And sometimes you don’t make the decision not to pursue the work until the new grad student can’t repeat any of the results of that first paper. That happens with some regularity and might be worth noting, if only as a footnote or comment attached to the original paper. Or journals could do a 5–10 year follow-up on each paper, just asking whether the authors are still working on the topic and why. “Student graduated and no one else was interested” is a very different reason than “marginal effect size, so we switched models.”
but there’s no way for a reader to know if you’re an inept experimenter, got a bad batch of reagents or specimens, had a fundamentally flawed hypothesis, inadequate statistical design, or neglected to control for some secondary phenomenon.
I agree, to the extent that a single, poor dataset can’t support useful conclusions. But after (painstakingly) controlling for the issues in this dataset and in lots of other similar datasets, there can still be some value extracted through a meta-analysis.
The prospect that someone might one day incorporate your data into a meta-analysis, and thereby at least justify a more controlled follow-up study, should be sufficient to tip the scale toward publishing more studies and their datasets. I’m not saying hot garbage should be sent to journals, but whatever can be prepared for publication ought to be.
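As a rough illustration of that meta-analysis point, here is a minimal sketch (in Python, with entirely made-up numbers) of fixed-effect, inverse-variance pooling: several small estimates that are each individually inconclusive can combine into a pooled estimate precise enough to justify a more controlled follow-up study.

```python
# Fixed-effect, inverse-variance meta-analysis sketch; all numbers are invented.
import math

# (effect estimate, standard error) from several hypothetical small studies;
# none is individually significant at the 5% level (each |z| < 1.96).
studies = [(0.30, 0.20), (0.25, 0.18), (0.35, 0.22), (0.20, 0.19)]

weights = [1 / se ** 2 for _, se in studies]        # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))             # SE of the pooled estimate

for est, se in studies:
    print(f"study:  estimate={est:.2f}  z={est / se:.2f}")
print(f"pooled: estimate={pooled:.2f}  se={pooled_se:.2f}  z={pooled / pooled_se:.2f}")
```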