AI in the Research Lab: Where We Are, and Where We’re Going

Today’s guest blog is written by Poncho Meisenheimer, Vice President of Research and Development, Promega. Reposted from Promega Connections with permission.


I got into an argument with ChatGPT this morning.


That’s not unusual. I argue with ChatGPT a lot. In fact, that’s usually my goal.


As a scientist and a leader, it’s important to pressure-test my ideas. I need to account for biases, identify limitations, and strengthen weak points. Large language models like ChatGPT have given me a powerful tool in my pocket to become a better version of myself.


The advent of generative artificial intelligence has changed our world. We can’t keep doing things the way we did even a year ago.



Promega is embracing AI. Every department is finding groundbreaking and responsible ways to deepen their impact using our ChatGPT enterprise license. As the Vice President of Research and Development, I have been working with our scientists to think beyond simple queries and imagine new horizons we can explore with these tools.


Make no mistake: I don’t think that ChatGPT and other AI tools can, will or should replace human scientists. Instead, they will empower all scientists to ask more ambitious questions and uncover new answers. They will upend our current paradigm about what science is and how it operates, and will help us build an even deeper understanding of the world around us.

AI and Experiment Planning

I expect that AI tools will soon be helping us plan experiments and research strategies that are more robust and comprehensive.


In our current paradigm of scientific research, expertise is largely centralized. A few people know more about something than anyone else in the world. At Promega, if I have a question about target engagement, I’m going to Matt Robers. Who could possibly know more about target engagement than Matt?


However, as scientists we need to lean on multiple perspectives to help us grow. As I started playing with ChatGPT and other large language models, I realized this system was an expert in everything. It was more than a Google search; it was a transfer of knowledge. ChatGPT can “understand” your question and draw from a massive number of sources to synthesize a direct response. It decentralizes expertise in a way that is accessible to anyone, anywhere, at any time. Now, I still go to Matt with questions, but I can also lean on ChatGPT and several other large language models in my pocket.


In my conversations with R&D scientists, we’ve started using ChatGPT to improve how we check our work. As we plan an experiment, we share our plans with ChatGPT and ask for feedback. Are we using the right controls? How is our data biased by the samples we’ve chosen to include or exclude? Where are our blind spots in how the data will represent biology?


This certainly improves our experiment design, and we’re only looking at the scale of a single experiment. From there, we can think bigger. What if every step we took in our project was informed by the whole of our current scientific knowledge?


As generative AI becomes a bigger part of our research process, my hope is that we see large language models like ChatGPT start to reinvent the classic lab notebook. Right now, when we document our research in a lab notebook, it’s an impersonal letter to no one. Sure, we might look through it when it’s time to write a paper. But it’s not writing back, challenging us to do and be better.


Imagine, instead, if the lab notebook became a conversation. Experiment planning becomes a constructive exchange with a highly knowledgeable colleague. Afterwards, we can upload our data and collaboratively document the conclusions. We can brainstorm ideas for the next steps towards our overall research goal. Before heading home, we can ask it to summarize the day’s work in five bullet points to share with our supervisor or team. Suddenly, a tedious necessity becomes an enhancer, powered by an AI companion.


AI and Data Analysis

As scientists, we love linear correlations between exactly two variables. If you discover a good enough linear correlation, we might even name it after you. Almost all standard assays and experiments are currently designed to be analyzed by a human, and the Promega catalog is filled with products that correlate two variables so our minds can easily spot the pattern. Many of them involve monitoring luminescence that correlates with a single biological variable. You measure that signal and extrapolate whether two proteins are interacting or whether your potential drug molecule is binding its target.


Imagine a system of twenty variables that can be combined and analyzed in more than a million different ways. Where would you begin? How would you know you found a helpful new relationship? How could you capture all of that beautiful complexity?
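As a back-of-the-envelope sketch (my own illustration, not a Promega assay), that “more than a million” falls out of simple combinatorics: if any nonempty subset of twenty variables could hide a relationship worth examining, there are 2²⁰ − 1 of them.

```python
from math import comb

n_vars = 20

# Count every nonempty subset of the 20 variables as a candidate
# relationship to examine: C(20,1) + C(20,2) + ... + C(20,20).
n_subsets = sum(comb(n_vars, k) for k in range(1, n_vars + 1))

print(n_subsets)  # 1048575, i.e. 2**20 - 1 -- just over a million
```

And that count only covers *which* variables to look at together, before considering *how* each combination might relate, which is exactly the explosion no human analyst can exhaust by eye.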


We’re already playing with this question in Promega R&D. At first, it’s as simple as feeding data from a linear assay into various AI programs and observing how the analysis is similar to, and different from, what we did on our own. However, when we ask the program how many distinct patterns it’s recognizing in the data, it’s usually far more than we could begin to tease out. As we add more variables, that level of analysis grows wildly.


Our linear mode of thinking fails to capitalize on the unimaginable processing power of the machines at our fingertips. Through conversations with our customers, we’ve learned that many are already using AI tools to analyze their data. We must rethink how we design assays. We need to recognize that the initial consumers of our assay data are increasingly AI algorithms rather than people.


At Promega, we’re now learning to embrace variance. Instead of one signal per well, we’re looking at four or more. Some could be linear, others stable, still others functional but nonlinear. We’re thinking of new types of controls and how to analyze mixed samples. This may sound bewildering, but AI is capable of deconvoluting the complexities of biology.


We often have a fear of generating too much data. The drive to publish has trained us to focus solely on the experiments that will prove our prediction. But how can you be sure that extra reporter won’t tell you anything useful if you aren’t looking at the data from a thousand or more angles? What could we glean from these “wasted” experiments and abandoned null hypotheses, given pan-dimensional processing power?


Image generated by DALL-E using ChatGPT.

AI and Being a Scientist

Finally, I’ve found that AI helps me become more effective at all the parts of my job that take place away from the lab bench.


I recently proposed a controversial plan for project prioritization to our R&D Management team. I knew it would be met with some skepticism, to put it gently. After I presented my idea, I asked each member of the team to record a one-minute response summarizing their biggest concern with my idea. I fed the transcripts of those recordings into ChatGPT alongside my original idea.


“You are a director in my R&D department,” I told it. “Please argue each of these positions one by one, asking me clarifying questions and challenging my opinions until you are satisfied that I have fully addressed the concern.”


It took a week for me to get through all the concerns and consider my responses. When I went back to the team, I explained how I had improved the plan to address each of their concerns. I presented data and articulated my idea in terms that spoke directly to their priorities. In the end, we were able to evaluate my idea together with a greater level of discernment and rigor. But more importantly, the plan was better than it was before.


As scientists, it’s easy to focus on our effectiveness in the lab. However, our jobs require skills to build positive relationships, effect change and navigate organizations and systems. My favorite way to use AI is to augment these skills.


A colleague told me that when they’re responding to an email that upset them, they’ll share their response with ChatGPT and ask it to help them frame their message in a way that doesn’t sound angry or aggressive. I’ve heard other stories of people uploading project proposals and asking for critiques before they pitch them. Just recently, I needed to disseminate some information throughout R&D, so Sara Mann, VP of Commercial Excellence, suggested I use ChatGPT to develop a communication tailored to our organizational structure and communication channels. In the past, I might have told five people and asked them to spread the word. This route is much more effective.


We have this assumption that AI will make us faster or more efficient. That’s less important to me. Instead, I find that AI helps me be better. As I integrate it further into my life, I’m not comparing myself to someone else. I’m comparing myself with AI to myself without AI. With these tools, I can be more effective as a leader and a scientist. I can strengthen relationships, foster understanding and collaboration, and sharpen my thinking.


Our world is different now because of AI, and science will change with it. But one thing is constant: We will always need scientists with undying curiosity and a constant drive to improve their world.


When people like that partner with artificial intelligence, then science will really take off.


ChatGPT was used in the editing of this article but was not used to generate novel content.