A case presentation on the failure of translational science: The academic disconnect

Following the last speaker’s presentation, a senior-level MD noticed my displeasure with the performance as he approached the back of the line of suit-wearing men waiting behind the microphone to ask their questions. I smiled at him from my aisle seat, acknowledging that I shared his frustration. As his place in line drew even with my seat, he leaned over and said, “Get up and say something! We need more female representation!” I nodded, both appreciating the encouragement and knowing he was right. I simply said, “I need a minute, I am too pissed off right now.” We were members of an audience at a biomedical conference for medical professionals and researchers, and the panel of experts in front of us was now taking questions in an Investigator’s Workshop on the topic “Are animal models of post-traumatic epilepsy translational?” These symposia often cover hot-button issues, and the topics are chosen (not by chance) to elicit heated discussion around known disagreements and controversies within the field.

The kind stranger could not possibly have known that a month before the conference my younger sister had suffered a traumatic brain injury, a subdural hematoma, and brain surgery. She was doing quite well but was still assembling her full medical team and long-term recovery plan. Although a tremendous amount of research is funded in this area, she was offered only symptomatic treatment; no preventative (i.e., disease-modifying) therapies are clinically available for the progressive, acquired symptoms that follow a brain injury. I was lucky to be sitting in on this discussion of the foremost experts addressing why the models used by basic scientists are not yielding effective treatments for patients. I had an opportunity to learn and an opportunity to contribute in a capacity that many patients and caregivers do not.

Speaker 1: The models are problematic and have only limited utility, but we should explore other endpoints/outcomes where they may be useful.

This talk was data-driven, discussing how the acquired spontaneous seizures in traumatic brain injury models have been difficult to characterize because of the low incidence and frequency of seizure events, which can only be accurately quantified by continuous video-EEG monitoring of animals for months after injury. The lab had better success with (1) longitudinal studies of neuroanatomical and circuit alterations using small-animal MRI and (2) studies of comorbidities associated with brain injury and epilepsy, including anxiety and depression. Models that don’t satisfy criteria for the human condition are often discarded, and scientists move on to another option. In this case, the model was repurposed in a patient-facing manner. The number of seizures that develops in this model cannot reliably predict the clinical efficacy of tested anti-seizure or disease-modifying drugs, which is the end goal. Therefore, the investigator asked whether the model may be useful as a tool to study comorbidities, including sleep disturbances, weight loss, anxiety, and depression. New data were presented for experimental animals relative to controls, and the investigator suggested that a composite score combining multiple endpoints and outcomes could be used to determine whether a potential disease-modifying drug would improve overall health for post-traumatic brain injury patients.

There was plenty to discuss here. The rigorous, temporal analysis of multiple structural and functional endpoints helped the investigator characterize which aspects of this model were useful for her research goals and which were not. Interpretation of a composite score (albeit an interesting quality-of-life metric) will be problematic. One can foresee scenarios in which the individual comorbidity tests, considered separately, show no statistically significant difference between treatment and vehicle-control groups, yet the composite score as a whole does. Likewise, a single test in which the drug was highly effective could skew the composite score toward apparent efficacy. Are these accurate reflections of clinical reality, suggesting this drug-testing paradigm will prove predictive of successful drugs, or will the method favor results that look successful at the bench but later fail in the clinic? This would have been a good question to ask.

Speaker 2: Standard practices need to be used across laboratories to produce reproducible results.

This talk took the standard academic approach: presenting the primary lab’s data along with a second lab’s data that supported the original findings. The investigator did not describe the induction of the model in the talk, but stated that published reports include detailed descriptions of the lab’s methods and welcomed anyone interested in learning them to contact him. The opinion expressed here was that the reproducibility issues encountered with this supposed model of post-traumatic epilepsy, namely the lack of seizure development, were attributable to variation in methods that could be as simple as the use of a rounded versus a beveled tip on the impact device.

This minor experimental variable is only one of many, which may include the geographical region in which animals are housed, the environment of the housing facility, the species and strain the investigator is using, and the sex of the animal. Yes, standard practices are missing in basic research. Funding agencies and peer-review panels attempt to mandate them to the extent they are able. In a functioning system, the sloppy scientist who doesn’t account for these variables in grant submissions is unable to obtain funding. Absent from this talk was any suggestion of solutions or any request for feedback on how the field might fix this problem. Why don’t funding and effort from the scientific community exist for independent labs to conduct research studies in parallel? In this open forum, complete with leading experts, no call was made for collaboration to standardize methods and advance scientific progress. Unfortunately, work of this nature doesn’t pay the struggling scientist in the competitive world of grant funding, high-tier publications, and tenure-track promotions. Again, this would have been a worthy question to wait in line and ask.

Speaker 3: The current models are not translational. We need more innovation; also, check out my cool data that does not address the topic.

The moderator was clearly the speaker’s former mentor, as extra time was spent introducing this investigator’s novel interpretation of the topic. The introductory slide simply said NO in bold letters, and the speaker launched into a TEDx-style talk on how these models are not translational and how it is a waste of time for the Department of Defense or the NIH to fund multi-team consortia to develop new, relevant models. Remember, this was a panel discussion. This speaker left the panel and walked into the crowd, spouting off about how translational research, as it is defined, would not prove useful and how innovation was required to develop new therapies. In addition, replication studies (or the lack of them) were moot because one can’t trust how other scientists conduct their science. As an example of innovation, the speaker shared studies demonstrating the effective integration of neuronal progenitor cells into the brain of a mouse model of epilepsy. These studies were not done in a traumatic brain injury model, but in a different model entirely. Innovative and published in a well-regarded journal, yes; translational, not likely, and only time and additional studies will tell; relevant to the topic, no. Supporters of this young investigator probably called the display brave. There were no answers to be found here, only self-promotion. The presentation was not designed for discussion among peers but was strategically delivered to advance the investigator’s career trajectory. The song-and-dance number did not reflect a dedication to developing new therapies for people after a traumatic brain injury.

A successful Investigator’s Workshop speaker addresses the topic with scientific data but, most importantly, crafts a story for the audience. Ideally, the speaker shares takeaways from learned experience, or points on which they would like feedback, fostering discussion among the moderator, panelists, and audience members. The workshop is an opportunity for the scientist to improve their approach as well as to inform the audience.

I found my way to the microphone that day; it took me a while to recover from the third speaker. I asked why, if there was a history of issues with these models, the field was so devoid of manuscripts describing negative results. I already knew the honest answer, in part. Despite their value, negative papers are difficult to publish or end up in lower-impact journals. This gives the already overextended scientist little motivation to complete studies that are not fruitful. The moderator did not answer my question directly, but responded that this was important and had been discussed by planning committees as a topic to explore. He suggested that I formally propose it as an Investigator’s Workshop for next year’s conference. A young investigator approached me at a reception later that evening and thanked me for my question. She shared some of her struggles and her excitement about the new studies she was working on. It gives me some hope, for science and for my sister, that there were others like her in the audience, full of integrity and motivated to conduct research that will translate from the bench to the bedside.

Heidi Grabenstatter, PhD