Texas A&M Libraries Research Guides

ChatGPT and Higher Education

Dialogues with ChatGPT: Academic Research

Conversations between the author of this guide and ChatGPT 3.5, which shared its insights on five questions selected from the ten raised in the article ChatGPT: Five Priorities for Research by E. A. M. van Dis, J. Bollen, W. Zuidema, and R. van Rooij (Nature, 2023). https://www.nature.com/articles/d41586-023-00288-7

Table of Contents

  1. Which research tasks should or should not be outsourced to large language models (LLMs)?
  2. Which academic skills and characteristics remain essential to researchers?
  3. What steps in an AI-assisted research process require human verification?
  4. How should research integrity and other policies be changed to address LLMs?
  5. What legal implications do LLMs have for scientific practice (for example, laws and regulations related to patents, copyright and ownership)?

 

Which research tasks should or should not be outsourced to large language models (LLMs)?
<I am going to ask you some questions about conversational AI and academic research. Please share your insights on these questions. The first question is: Which research tasks should or should not be outsourced to large language models (LLMs)? prompt. ChatGPT 3.5 version. September 2023. openai.com/chat.>

The decision to outsource research tasks to Large Language Models (LLMs) depends on various factors, including the nature of the task, the goals of the research, and ethical considerations. Here are some insights on which research tasks should or should not be outsourced to LLMs:

Tasks that should be outsourced to LLMs:
Text Generation and Summarization: LLMs excel at generating human-like text and summarizing large volumes of information. Researchers can use them to automate the process of creating summaries, abstracts, or even writing reports.
Language Understanding and Processing: LLMs can be valuable for tasks like sentiment analysis, entity recognition, and language translation. They can quickly process and analyze large datasets in multiple languages.
Data Augmentation: LLMs can help researchers generate additional data for training machine learning models. This is particularly useful when there is limited labeled data available for a specific task.
Information Retrieval: LLMs can assist in retrieving relevant research papers, articles, or data from large corpora, saving researchers time in information gathering.
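As a concrete (if deliberately simplified) illustration of the summarization task above, the sketch below implements a frequency-based extractive summarizer in plain Python. It is a hypothetical stand-in, not how an LLM works internally; in a real workflow the summarization would be an API call to a model, and a human would still review the output before use.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Score sentences by word frequency and keep the top n, in original order.

    A minimal, non-LLM sketch of extractive summarization; real LLM
    summarization is abstractive and requires human review of the result.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    scores = {
        i: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))
        for i, s in enumerate(sentences)
    }
    # Pick the n highest-scoring sentences, then restore document order.
    top = sorted(sorted(scores, key=scores.get, reverse=True)[:n_sentences])
    return " ".join(sentences[i] for i in top)
```

Even for a toy like this, the researcher remains responsible for checking that the selected sentences actually represent the source.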

Tasks that should be approached with caution:
Experimental Design: While LLMs can suggest research topics or generate hypotheses based on existing literature, human expertise is crucial in designing experiments and research methodologies. LLMs lack the ability to understand the nuances of experimental design fully.
Ethical and Sensitive Research: LLMs may not always adhere to ethical guidelines and can inadvertently produce biased or harmful content. Research involving sensitive topics, such as mental health or discrimination, should be handled with care by human researchers.
Critical Review and Peer Feedback: LLMs can assist in drafting research papers, but the critical review and feedback process should involve human experts. Peer review by human researchers is essential for maintaining research quality and integrity.
Highly Specialized Research: In fields that require highly specialized knowledge, such as advanced medical research or cutting-edge scientific discoveries, LLMs may not have the domain-specific expertise needed to contribute meaningfully.
Creative and Innovative Thinking: LLMs are not inherently creative or innovative thinkers. Tasks that require groundbreaking ideas or unconventional problem-solving may be better suited to human researchers.

Tasks that should not be outsourced to LLMs:
Decision-Making in Ethical Dilemmas: LLMs lack moral judgment and empathy. Decisions involving ethical dilemmas or matters of human welfare should always be made by humans with ethical training.
Legal or Regulatory Compliance: Legal and regulatory compliance issues should be handled by legal experts who understand the specific laws and regulations applicable to the research.
Human Interaction and Collaboration: LLMs cannot replace the benefits of human collaboration, brainstorming, and interpersonal communication in research projects.

In summary, LLMs can be valuable tools for various research tasks, especially those related to text generation, language understanding, and data processing. However, researchers should exercise caution and human oversight, especially in tasks requiring ethical judgment, experimental design, critical review, and highly specialized expertise. The appropriateness of outsourcing tasks to LLMs should always be assessed on a case-by-case basis, considering the specific requirements of the research project.

Which academic skills and characteristics remain essential to researchers?
<Which academic skills and characteristics remain essential to researchers? prompt. ChatGPT 3.5 version. September 2023. openai.com/chat.>


Despite advances in technology and the integration of AI and machine learning into research processes, several academic skills and characteristics remain essential to researchers:
Critical Thinking: Critical thinking is the foundation of good research. Researchers must evaluate information, identify biases, assess the validity of claims, and draw reasoned conclusions. This skill helps in designing robust experiments, analyzing data, and interpreting results.
Problem-Solving: Research often involves addressing complex and novel problems. Researchers need strong problem-solving skills to formulate research questions, design experiments, and overcome obstacles that may arise during the research process.
Creativity and Innovation: While technology can assist in data analysis and information retrieval, the generation of innovative ideas and hypotheses remains a distinctly human trait. Researchers must be creative thinkers to push the boundaries of knowledge.
Domain Expertise: In many fields, deep domain expertise is essential. Researchers need a strong understanding of the theories, principles, and methodologies relevant to their area of study. This expertise allows them to ask meaningful questions and contribute to the advancement of knowledge.
Research Design: Crafting well-designed experiments or studies is crucial. Researchers need to know how to choose appropriate research methods, collect data, and control variables effectively. A solid grasp of research design principles ensures the validity and reliability of findings.
Communication Skills: Researchers must communicate their findings effectively through research papers, presentations, and discussions. Clear and concise communication is vital for disseminating knowledge to peers and the broader community.
Ethical Awareness: Ethical considerations are paramount in research. Researchers must adhere to ethical guidelines, protect human and animal subjects, and handle data with integrity. Ethical awareness ensures research credibility and safeguards against harm.
Data Analysis: Proficiency in data analysis is essential for drawing meaningful conclusions from research data. Researchers should be skilled in statistical methods, data visualization, and software tools relevant to their field.
Collaboration and Networking: Collaboration often leads to more robust research outcomes. Researchers should be adept at working with peers, forming interdisciplinary partnerships, and networking to share knowledge and resources.
Adaptability: Research landscapes evolve rapidly, with new technologies, methodologies, and discoveries. Researchers must be adaptable and willing to embrace change, learning new skills and approaches as needed.
Time Management: Efficient time management is crucial to balance research, teaching, administrative duties, and personal life. Researchers often work on multiple projects simultaneously and must allocate their time effectively.
Persistence and Resilience: Research can be challenging and may involve setbacks and failures. Persistence and resilience are essential traits that keep researchers motivated and focused on their goals.
Curiosity: A natural curiosity about the world and a desire to explore the unknown are fundamental characteristics of successful researchers. Curiosity drives the quest for knowledge and fuels the research process.
Mentoring and Teaching: Many researchers also engage in mentoring and teaching activities. These skills are valuable for passing on knowledge to the next generation of researchers and promoting academic growth.
 

What steps in an AI-assisted research process require human verification?
<What steps in an AI-assisted research process require human verification? prompt. ChatGPT 3.5 version. September 2023. openai.com/chat.>


In an AI-assisted research process, there are several critical steps that require human verification to ensure the quality, accuracy, and ethical integrity of the research. These steps include:
Problem Formulation: Humans should define the research problem and research questions. While AI can assist in generating ideas and hypotheses based on existing data, humans must ensure that the research questions are relevant and aligned with the research objectives.
Data Collection: Although AI can automate data collection from various sources, humans need to oversee this process. They should make decisions about which data sources to use, establish data collection protocols, and ensure that data is collected in an ethical and legal manner.
Data Cleaning and Preprocessing: Data often requires cleaning and preprocessing before analysis. Human researchers are responsible for identifying and handling missing data, outliers, and other data quality issues. They also need to make decisions about data transformations and scaling.
Feature Selection and Engineering: Decisions regarding which features or variables to include in the analysis, as well as the creation of new features through feature engineering, require human expertise. AI can suggest features, but humans must validate their relevance and interpretability.
Model Selection and Configuration: While AI can assist in selecting machine learning models and tuning hyperparameters, human researchers should choose the most appropriate models based on their domain knowledge and research goals. Model selection often involves trade-offs that require human judgment.
Ethical Considerations: Human researchers should evaluate the ethical implications of their research. This includes ensuring that the data used is obtained with informed consent, protecting the privacy of individuals, and considering potential biases in the data and models.
Interpretation of Results: AI can generate predictions and insights, but human researchers are responsible for interpreting these results in the context of the research questions and domain knowledge. They must also assess the practical significance of the findings.
Bias Detection and Mitigation: Human verification is crucial for detecting and mitigating biases in data and models. Researchers should assess whether the AI has introduced or amplified biases and take corrective actions.
Validation and Cross-Validation: Human researchers should validate model performance using appropriate validation techniques, such as cross-validation. They need to make decisions about how to split the data, choose evaluation metrics, and interpret validation results.
Peer Review: Before publishing research findings, human researchers should subject their work to peer review by experts in the field. Peer reviewers can provide valuable feedback and ensure the scientific rigor of the research.
Contextualization: Human researchers provide the contextual knowledge necessary to frame research findings within the broader context of existing literature and theory. They should explain how their work advances knowledge in the field.
Ethical Approval: For research involving human subjects, animal testing, or other ethical considerations, human researchers are responsible for obtaining the necessary approvals and ensuring compliance with ethical guidelines.
Communication and Publication: While AI can assist in generating research papers, humans must be involved in the writing and editing process to ensure clarity, coherence, and adherence to academic publishing standards.
Decision-Making in Complex Scenarios: In cases of complex or novel research challenges, human judgment is irreplaceable. Researchers must make informed decisions, especially when faced with ethical dilemmas or unexpected issues.
Reproducibility: Researchers should document their methods and provide code and data for reproducibility. Human verification ensures that others can replicate the research.
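Several of the verification steps above — reproducible methods, validation, and inspectable data splits — can be made concrete in a few lines. The sketch below is an illustrative, dependency-free alternative to a library routine such as scikit-learn's KFold: it builds k-fold cross-validation splits from a seeded shuffle, so a researcher can verify by hand that no sample leaks between training and test folds.

```python
import random

def k_fold_indices(n_samples, k=5, seed=0):
    """Return (train, test) index lists for k-fold cross-validation.

    A minimal sketch for human verification: the seeded shuffle makes the
    split reproducible, and the explicit index lists let a researcher check
    that train and test folds never overlap.
    """
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)       # reproducible shuffle
    folds = [idx[i::k] for i in range(k)]  # round-robin fold assignment
    splits = []
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        splits.append((train, test))
    return splits
```

Writing the split out this way is exactly the kind of documentation that supports the reproducibility step: others can rerun it with the same seed and obtain identical folds.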

In summary, while AI can automate many aspects of the research process and provide valuable insights, human verification remains essential at various critical stages to ensure the research is scientifically sound, ethically conducted, and meaningful in the context of the research objectives. Human expertise, domain knowledge, and ethical considerations are indispensable for high-quality research.

 

How should research integrity and other policies be changed to address LLMs?
<How should research integrity and other policies be changed to address LLMs? prompt. ChatGPT 3.5 version. September 2023. openai.com/chat.>
 

The integration of Large Language Models (LLMs) into academic research raises important considerations for research integrity and policies. To address these challenges, research institutions and organizations may need to adapt their policies and practices. Here are some ways research integrity and other policies can be changed to address LLMs:

Transparency and Disclosure: Data Sources: Researchers should transparently disclose the sources of data, including any use of LLMs for data generation or augmentation. This includes acknowledging if LLMs were used to simulate data.
AI Contributions: When LLMs are used for tasks such as text generation or data analysis, researchers should clearly indicate the extent of AI involvement and human oversight in the research process.
Ethical Guidelines: AI Ethics: Research ethics committees and organizations should develop guidelines specific to AI-assisted research. These guidelines should address issues like informed consent, data privacy, and bias mitigation in the context of LLMs.
Responsible AI Use: Institutions should establish policies that emphasize the responsible and ethical use of LLMs in research. Researchers should be educated about the ethical implications of AI and encouraged to conduct AI audits for bias and fairness.
Data Quality and Preprocessing: Data Validation: Policies should encourage researchers to validate the quality and relevance of data generated or processed by LLMs. Human verification should be integrated into data preprocessing pipelines.
Bias Detection: Procedures for detecting and mitigating bias, particularly in LLM-generated content, should be incorporated into research protocols.
Peer Review: AI Expertise: Peer reviewers should possess expertise in AI and be able to evaluate the appropriateness of AI methods, the validity of results, and the ethical considerations associated with LLM usage.
AI Audits: Peer review processes may include assessments of AI models used in research. This can help ensure the fairness and robustness of AI components.
Transparency and Reproducibility: Code and Models: Researchers should be encouraged or required to share code and AI models used in their research. This facilitates transparency and reproducibility, allowing others to verify and build upon the work.
Data Availability: Policies should promote data sharing when possible, while respecting privacy and ethical considerations.
Collaboration and Interdisciplinary Teams: Interdisciplinary Research: Encourage interdisciplinary research teams that include experts in AI, ethics, and domain-specific knowledge. Collaboration can help ensure responsible and informed use of LLMs.
Training and Education: AI Literacy: Research institutions should offer training and educational programs to researchers to enhance their AI literacy and awareness of AI ethics.
AI Auditing: Training in AI auditing techniques should be provided to researchers to evaluate AI models and data for biases and fairness.
Review of Existing Policies: Regular Evaluation: Research institutions and organizations should periodically review and update their policies to reflect evolving technologies and ethical considerations related to AI and LLMs.
Public Engagement: Ethical Dialogues: Encourage researchers to engage with the public and other stakeholders in dialogues about AI and LLMs in research. This can help build trust and address concerns.
Funding Agencies: Guidelines for Grant Applicants: Funding agencies can provide clear guidelines to grant applicants regarding the use of AI and LLMs in research, emphasizing ethical considerations.
Sanctions for Misconduct: AI Misuse: Establish clear sanctions and consequences for researchers who misuse LLMs or engage in unethical practices involving AI.
International Collaboration: Global Standards: Collaborate with international partners to establish global standards for AI-assisted research ethics and policies.
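The "AI audit" idea that recurs in these recommendations can start very small. The sketch below is a hypothetical, minimal fairness check — not a complete audit — that computes the positive-outcome rate per group; a large gap between groups (a demographic-parity-style signal) is a prompt for human review, not proof of bias on its own.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute the positive-outcome rate per group — a basic fairness audit.

    `records` is a list of (group, outcome) pairs with outcome in {0, 1}.
    Comparing the per-group rates is a crude demographic-parity check;
    a real audit would also examine error rates, sample sizes, and context.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}
```

A policy requiring such a check before publication costs little and creates a concrete, reviewable artifact for the peer-review and AI-audit steps described above.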

Incorporating these changes into research integrity and other policies can help address the ethical, transparency, and quality assurance challenges posed by the use of LLMs in academic research, ensuring that research conducted with AI remains rigorous and responsible.

 

What legal implications do LLMs have for scientific practice (for example, laws and regulations related to patents, copyright and ownership)?

<What legal implications do LLMs have for scientific practice (for example, laws and regulations related to patents, copyright and ownership)? prompt. ChatGPT 3.5 version. September 2023. openai.com/chat.>


The use of Large Language Models (LLMs) in scientific practice can raise several legal implications related to patents, copyright, ownership, and intellectual property. These implications can vary depending on how LLMs are used and the specific legal jurisdiction. Here are some key legal considerations:
Copyright and Ownership: Generated Content: LLMs can generate text, including research papers, articles, and other written materials. The question of copyright ownership can be complex. In many jurisdictions, the original creator of content is considered the copyright holder. However, when an LLM is used to generate text, determining authorship and ownership may become more challenging.
Dataset Ownership: The data used to train LLMs, including text from publicly available sources or proprietary datasets, may have copyright restrictions or terms of use. Researchers must respect these terms and ensure that they have the right to use the data in their research.
Plagiarism and Attribution:
Plagiarism Detection: LLMs can assist in detecting plagiarism by comparing text generated by researchers with existing published content. Researchers must be diligent in providing proper attribution to sources when using LLMs to generate content and avoid plagiarism.
Ethical Use: Researchers should ensure that the use of LLMs to generate content aligns with ethical guidelines and academic standards for proper attribution and citation.
Patents and Inventions: Invention Disclosure: In cases where LLMs are used in scientific research that results in inventions or innovations, researchers should be aware of their obligation to disclose such inventions to their institutions. Patent rights may need to be considered, and institutions may have policies regarding intellectual property derived from research.
AI as Inventor: Some legal systems have debated whether AI, including LLMs, can be considered inventors for patent purposes. The question of whether AI can hold patent rights is still evolving and varies by jurisdiction.
Data Privacy and Security: Data Protection Laws: When LLMs are trained on data containing personal or sensitive information, researchers must comply with data protection laws (e.g., GDPR in Europe). Proper anonymization and consent procedures are critical.
Data Breaches: Researchers must take precautions to protect data used in LLM training to prevent data breaches, which can have legal consequences.
Ethical and Regulatory Compliance: Research Ethics: LLM-generated content must comply with research ethics guidelines. This includes obtaining informed consent for studies involving human subjects and adhering to ethical principles related to AI and data use.
Regulatory Compliance: In certain fields such as healthcare or finance, there may be specific regulations governing the use of AI-generated content. Researchers must ensure compliance with relevant laws and regulations.
Liability: Errors and Bias: LLMs can generate content that contains errors or biases. Researchers should be aware of the potential legal liability if AI-generated content leads to misinformation, harm, or unintended consequences.
Quality Control: Implementing quality control measures and human oversight in the use of LLMs can help mitigate liability risks.
Intellectual Property Agreements: Contractual Agreements: Researchers should review contractual agreements with organizations, institutions, or collaborators, as these agreements may specify rights and responsibilities related to AI-generated content and intellectual property.
Open Access and Licensing: Open Access Policies: Researchers should consider open access publishing and licensing models for research outputs generated using LLMs to ensure broader access and adherence to open science principles.
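On the data-protection point above, a common first technical step is pseudonymization: replacing direct identifiers with keyed hashes before data enters an LLM training or analysis pipeline. The sketch below uses Python's standard hmac module; the function name and key handling are illustrative. Note that under the GDPR, pseudonymized data still counts as personal data, so this reduces risk but does not discharge legal obligations.

```python
import hashlib
import hmac

def pseudonymize(identifier, secret_key):
    """Replace a direct identifier with a keyed HMAC-SHA256 hash.

    The same identifier always maps to the same token (so records can
    still be linked), but the mapping cannot be reversed or rebuilt
    without the secret key. This is pseudonymization, not anonymization.
    """
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()
```

A keyed hash is preferred over a plain hash here because, without the key, an attacker cannot confirm a guessed identifier by hashing it themselves.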

It's important for researchers and organizations to consult with legal experts who specialize in intellectual property, data privacy, and technology law to navigate these legal complexities. Legal requirements may vary by jurisdiction, and staying informed about the evolving legal landscape related to AI and LLMs is essential for responsible and compliant research practices.