
Generative AI (GenAI) is rapidly reshaping the research landscape, offering new capabilities for researchers at every stage of their careers. At Ohio State, we recognize the transformative potential of tools like GPT-4, which can now function as autonomous data analysts—cleaning datasets, developing analytical strategies, running statistical tests, generating visualizations, and even drafting sections of academic papers.
These capabilities are already influencing how we approach data analysis, scholarly writing, and research dissemination. At the same time, AI introduces new complexities. It challenges traditional support structures—such as scientific writing assistance, coding help, and library services—and raises critical concerns around accuracy, bias, and the potential for fabricated content. As the technology evolves, our response must remain agile and informed.
Despite these challenges, GenAI is poised to expand research possibilities across disciplines. It will reshape methodologies, redefine training needs, and open new avenues for inquiry. At Ohio State, we are committed to supporting researchers as they explore these opportunities responsibly.
Guiding Principles for GenAI Use in Research
- Responsible Use and Verification
Use AI only where it performs reliably. Always verify outputs for accuracy and attribution, and document your validation methods. This is especially critical when using AI to generate or interpret data or to draft scholarly content. There are several options for verifying your results.
- Transparency and Disclosure
Disclose all AI use in grant proposals, publications, conference submissions, and public presentations. Sponsors, journals, and professional societies are developing their own policies; compliance with Ohio State guidance may not be sufficient on its own.
- Bias Awareness and Mitigation
AI systems can amplify bias. Faculty should assess and mitigate bias, particularly in research involving human participants or sensitive data. Describe how bias was evaluated and addressed in your methodology.
- Privacy and Data Protection
Do not input personal, protected, or regulated data (e.g., HIPAA, FERPA, Common Rule, Export Control) into unsecured AI tools. Use only university-approved platforms and follow Ohio State’s data governance policies.
- Institutional Alignment and Engagement
Faculty are encouraged to engage with AI working groups and committees to help shape Ohio State’s research strategy. Consider how your work aligns with institutional priorities and contributes to broader conversations about AI’s role in higher education.
- Training and Capacity Building
Support for AI training is growing across the university. Faculty can help by identifying discipline-specific needs, mentoring students and colleagues, and contributing to the development of shared resources.
- Reproducibility and Auditability
Maintain detailed records of AI interactions, model configurations, and decision-making processes. This supports reproducibility and strengthens the credibility of AI-assisted research; a minimal record-keeping sketch follows this list.
- Innovation Through Experimentation
Faculty are encouraged to propose pilot projects and proof-of-concept studies that explore AI’s role in research workflows. These efforts will inform future policy and practice at Ohio State.
- Ethical Considerations for Human Subjects Research
When using GenAI in research involving human participants or data, ensure compliance with all ethical requirements, including IRB review and informed consent. Disclose AI-assisted analyses or interventions to participants when appropriate.
- Intellectual Property and Copyright
Faculty are encouraged to be mindful of licensing and intellectual property considerations when using GenAI to generate text, code, images, or data. Attribute GenAI-generated content appropriately and consult the Office of Research or legal counsel with questions about ownership, licensing, or commercialization.
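As a concrete illustration of the record keeping described under Responsible Use and Verification and under Reproducibility and Auditability, the sketch below shows one way a researcher might log each AI interaction, the model configuration used, and how the output was verified. This is a minimal sketch only: the file name, field names, and helper function are assumptions made for this example, not an Ohio State standard or a specific vendor API.

```python
"""Minimal sketch of an AI-interaction audit log (illustrative only).

Assumptions: records are appended to a local JSON Lines file, and fields
such as "model" and "parameters" are placeholders for whatever
configuration details your tool actually exposes.
"""
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_audit_log.jsonl")  # hypothetical location for the log


def log_ai_interaction(prompt: str, output: str, model: str,
                       parameters: dict, verification: str) -> None:
    """Append one AI interaction, its configuration, and how it was verified."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,                 # model name/version as reported by the tool
        "parameters": parameters,       # e.g., temperature, max tokens
        "prompt": prompt,
        "output": output,
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "verification": verification,   # how the researcher checked the output
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: record a drafted analysis suggestion and note how it was validated.
log_ai_interaction(
    prompt="Suggest a statistical test for comparing two independent groups.",
    output="A two-sample t-test is appropriate if its assumptions are met.",
    model="example-model-v1",           # placeholder, not a real product name
    parameters={"temperature": 0.2},
    verification="Re-ran the test independently and checked its assumptions.",
)
```

Keeping one JSON record per interaction in an append-only file makes it straightforward to audit AI use alongside the analysis code and data it informed.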
Checklist for Responsible Use
Purpose and Scope
- Is this tool appropriate for my research purpose?
- Am I using AI to explore ideas, generate drafts, and analyze data, or am I using it in a way that could compromise rigor? Have I checked that any AI-assisted analysis is correct and that any proofs are free of gaps?
- Could this tool unintentionally influence my conclusions?
- Am I relying too heavily on AI rather than human judgment, especially in interpretation?
Transparency and Disclosure
- How transparent do I need to be about using AI?
- Should I disclose AI assistance in my methods, acknowledgments, or elsewhere?
- Would peers or reviewers view this use as appropriate or problematic?
Bias and Accuracy
- Could AI-generated content introduce bias or error?
- Am I critically evaluating outputs for accuracy, fairness, and relevance?
- Is the model trained on an appropriate, rich, and varied set of sources?
- Do I understand the limitations or scope of the data the AI tool was trained on?
Privacy and Ethics
- Am I protecting sensitive or proprietary data?
- Does using this tool comply with data privacy, IRB, or grant requirements?
- Does my use of AI align with my discipline’s ethical standards?
Ownership and Intellectual Property
- Who owns the output?
- Are there authorship, copyright, or licensing issues I need to consider?