Today’s guest blog is written by Jordan Nutting, a Science Writer with Promega. Reposted from the Promega Connections blog with permission.
Integrating artificial intelligence (AI) into the process of scientific research offers a wealth of efficiency-boosting tools that are transforming the ways scientists can approach their work. Many are already using AI to refine code, automate data processing, and edit papers, presentations, abstracts and more. Personally, I find generative language models like ChatGPT to be invaluable “editorial assistants” in my work as a science writer, helping me work through wonky sentence structures, be more concise and get over writer’s block, to name a few applications.
But a scientist’s work doesn’t only involve writing or analyzing data, making presentations or keeping up with the literature. An essential component of any research scientist’s skillset is their ability to develop entirely new ideas and novel research proposals. Coming up with research questions and plans is a central component of graduate education and research careers, both in academia and industry.
As AI continues to advance and find broader use, a critical question arises: Can AI play a pivotal role in the creative process of developing entirely new ideas, such as crafting novel research proposals?
The Challenge of Crafting Novel Research Proposals
Coming up with a brand-new research idea or question can feel a lot like staring at a blank screen with writer’s block.
With all the possibilities your field of study offers, how do you even get started? How do you identify where you want to go? What will you learn along the way? Even if the idea is new to you, is it truly novel? Or did someone back in the 1960s do it first (and better)? My third-year research proposal was one of the most significant challenges I faced during my graduate studies because of these daunting questions.
Thinking up and developing research ideas is a creative, curious and critical endeavor, but there are steps you can take to strengthen and refine your instinct and “muscle memory” for developing ideas:
Know and understand what’s been done: read lots of reviews and perspective articles in your field to find known or unknown gaps in knowledge.
Explore other, related fields: consider if your expertise could speak into those fields’ challenges.
Talk (and debate) with other experts: get comfortable with being challenged and challenging ideas. New ideas can often be found where there’s tension in knowledge.
Make time to just think.
Can AI make any of these tasks more productive or thought-provoking?
Image generated using Adobe Firefly
How Can AI Help?
We’re not (yet) in a place where you can query an AI tool to provide the specific questions and ideas that make up a compelling, specific, knowledgeable and novel research proposal. But AI tools can make it easier to get your creative juices flowing, notice more curious connections and ask better questions along the way.
Here are two examples of how AI can help scientists develop and propose new research ideas.
AI tools can make literature “deep dives” more productive and informative.
AI tools like Semantic Scholar, Consensus and Elicit employ advanced search algorithms and natural language processing to streamline literature searches for scientists. For example, a researcher exploring climate change effects may input a query about rising sea levels. The AI tool can then understand the context, filter out irrelevant information and provide a refined list of recent, high-impact studies on the topic. It can also suggest related keywords, offer summaries of key findings and highlight influential authors and papers. AI data visualization tools like Litmaps can create visual maps of how various citations are connected, helping researchers see wider trends and stories in their field, an invaluable insight when crafting proposals. These tools save time and help ensure that the research proposal is based on the latest and most pertinent scientific insights.
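To make the literature-search workflow above concrete, here is a minimal sketch of building a query against the public Semantic Scholar Graph API. The endpoint path and field names are assumptions based on the API’s documented paper-search route; check the current documentation before relying on them.

```python
from urllib.parse import urlencode

# Assumed endpoint of the public Semantic Scholar Graph API's paper search.
BASE_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def build_search_url(query, fields=("title", "year", "citationCount"), limit=10):
    """Return a ready-to-fetch URL for a natural-language literature search."""
    params = urlencode({
        "query": query,              # topic phrased in plain language
        "fields": ",".join(fields),  # which metadata to return per paper
        "limit": limit,              # cap the number of results
    })
    return f"{BASE_URL}?{params}"

# The sea-level example from the text, phrased as a search query.
url = build_search_url("effects of rising sea levels on coastal ecosystems")
print(url)
```

Fetching that URL (with any HTTP client) would return a ranked list of matching papers with the requested metadata, which is the raw material for the kind of filtered, refined reading list described above.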
AI tools can act like peer review, helping you refine and challenge ideas.
While pondering new research directions, scientists can engage large language model tools like ChatGPT or Bard in a conversation about their proposed ideas. The AI tool can offer insights, suggest refinements and explore potential biases or ethical considerations. It can also simulate peer review by providing constructive feedback or identifying different angles. By collaborating with AI in this manner, scientists can develop more robust ideas.
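One way to structure that conversation is to wrap your proposal in a reviewer-style prompt before sending it to a chat model. The sketch below is purely illustrative: the persona and the review criteria are assumptions, not a prescribed template, and the resulting string would be pasted into (or sent via the API of) a tool like ChatGPT or Bard.

```python
def build_review_prompt(proposal_summary):
    """Wrap a short proposal summary in a simulated peer-review prompt."""
    # Illustrative review criteria, echoing the article's suggestions.
    criteria = [
        "novelty relative to published work",
        "feasibility of the proposed methods",
        "potential biases or ethical concerns",
        "alternative angles the author may have missed",
    ]
    bullet_list = "\n".join(f"- {c}" for c in criteria)
    return (
        "Act as a constructive peer reviewer for the research proposal below.\n"
        f"Assess it on these points:\n{bullet_list}\n\n"
        f"Proposal: {proposal_summary}"
    )

prompt = build_review_prompt(
    "We will test whether AI-guided literature maps shorten proposal writing time."
)
print(prompt)
```

Keeping the criteria explicit makes the model’s feedback easier to compare across drafts than an open-ended “what do you think?” query.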
Limits and Ethical Questions for AI in Research
Despite their obvious benefits, you should tread carefully when using AI to generate research ideas. As others have noted, some tools may fabricate academic references or make outright false claims. Alternatively, true claims or questions may be presented without proper citation, running the risk of plagiarizing those whose work is included in the tool’s training set.
If errors or misconduct arise when applying AI tools, it’s the user and not the tool that will be held accountable. In fact, it’s this gap in accountability that is driving opposition to listing AI language models as authors in research articles.
The Human Element
Because of these limitations, you’ll need to filter AI tools’ outputs through the lens of your own expertise. At the most basic level, you’re responsible for fact-checking information and references and evaluating if an idea generated with support from AI tools is feasible.
But beyond confirming correctness, you’ll need to decide for yourself (and ideally in consultation with other experts) whether your idea is “good” or not. That’s a value judgment that AI tools are not yet equipped to make. No, curcumin or bat guano extract are probably not promising platforms for an oncology drug discovery campaign (as one AI tool assured me). To make that decision, you’ll need to muster your expertise, instincts and passion. These human qualities are what will make a research proposal sing and capture reviewers’ attention, but AI can certainly help you get there.
Yes, this article was written with editorial support from ChatGPT 3.5.