Reflections and insights from real-world cases in nursing higher education
- Misty Schwartz 1 *
- Joy Doll 2
- Steven Fernandes 3
- John R. Stone 4
- 1. College of Nursing, Creighton University, Omaha, Nebraska, United States
- 2. Health Informatics, Department of Mathematics, Creighton University, Omaha, Nebraska, United States
- 3. Computer Science, Design and Journalism, Creighton University, Omaha, Nebraska, United States
- 4. Creighton Center for Promoting Health and Health Equity, Creighton University, Omaha, Nebraska, United States
Abstract:
Artificial intelligence (AI) is transforming both higher education and healthcare practice landscapes, driving key stakeholders to explore the opportunities and challenges it offers. This paper presents real-world cases identified by a team of authors, including a graduate nursing faculty member, an occupational therapy faculty member and health informatics professional, a computer science faculty member, and a physician and bioethicist. The authors describe use cases of students who used AI in a graduate nursing course and examine the ethical implications. They share their current interprofessional perspectives and reflections to foster continued dialogue and discernment about the role of generative AI in higher education. As AI continues to grow and evolve, both educators and students will need to develop and understand ground rules. Ethics provides us with a way of exploring AI use cases so that we can share and learn from our own and others' experiences.
- Keywords:
- Artificial intelligence; Nursing education; Use-case ethical analysis
- Received:
- November 12, 2025
- Accepted:
- December 05, 2025
- Published:
- January 03, 2026
- How to cite this article: Misty Schwartz, Joy Doll, Steven Fernandes, et al. Reflections and insights from real-world cases in nursing higher education. Journal of Nursing Education and Practice. 2026;16(1):12-21.
1. Introduction
Artificial intelligence (AI) is a constructively disruptive force in today’s landscape, pushing rethinking about workload, effort, creativity, and content ownership, along with teaching and learning. Generative AI has become particularly pervasive, offering learners the opportunity to pose questions and receive lengthy responses about nearly any topic imaginable.[1] The opportunities and use cases can bring extensive value to many situations. Yet, as with every innovation, exploring ethical issues and unintended adverse consequences is critical. For example, generative AI requires the user to prompt the tool in ways that elicit an appropriate response, and learning how to prompt effectively requires skill-building. Some expertise is also needed to confirm whether the AI response is accurate. Moreover, generative AI may be an unsuitable choice when the tool cannot generate the expected or needed response for an activity. Using generative AI thus presents ongoing opportunities to question, and to grow in, its appropriate use.
Higher education has a significant role to play both in training and preparing learners to use AI across multiple fields and in helping them think critically about its appropriate use.[2] Yet, many questions about AI remain unanswered. In this article, the authors share real-world cases from learners in nursing education, consider their ethical implications, and offer their current thoughts and perspectives, with the hope of fostering ongoing interprofessional dialogue that reflects on and contributes to evolving insights about the ethical use of AI in higher education and practice.
AI primer
Most learners will find significant benefit in using generative AI. Generative AI tools can now be easily integrated into one’s academic routine: they are built into search engines and common learning tools, and specific tools are accessible through library services. In generative AI, the user provides a prompt, and the AI returns a response in the form of narrative, visuals, or even mathematical formulas. Like all systems, generative AI is not perfect. Challenges include bias, hallucinations, a lack of transparency about response sources, a risk of privacy violations when using open-source software, and an inability to generate responses when the underlying data are not available.
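To make the prompt-and-response pattern concrete, the following minimal sketch (not drawn from the cases in this article) shows one way a learner might query a chat-style generative AI programmatically. It assumes the official `openai` Python client with an API key available in the environment; the model name is a placeholder, not a recommendation.

```python
# Minimal sketch of the prompt-and-response pattern, assuming the official
# `openai` Python client; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Summarize the nursing process in three sentences for a first-year student."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model identifier
    messages=[{"role": "user", "content": prompt}],
)

# The generated text still requires human review for accuracy before use.
print(response.choices[0].message.content)
```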
At the time of writing, AI tools are being added and transformed at a rapid pace. Generative AI, as typically used in education, relies on the concept of a human in the loop. Currently, AI tools fall into one of three categories based on the degree of human involvement (see Table 1). Human-in-the-loop (HITL) occurs when AI is integrated into an existing system and humans must review its output for accuracy. Human-on-the-loop (HOTL) is when AI takes over a portion of a workflow and requires human monitoring. Human-out-of-the-loop (HOOTL) occurs when the AI workflow operates without human involvement.[3] In higher education, most learners will use tools that are HITL. Both HITL and HOTL require some level of user expertise to review and verify whether the AI has generated an appropriate response. Without this knowledge, the risk is that the user will receive inaccurate information (including hallucinations) and use it inappropriately. In these scenarios, the user needs to engage in critical thinking and decision-making to determine whether the information is appropriate. Generative AI is also not appropriate for solving every problem and may actually overcomplicate problem-solving. Given these confounding factors, users must have some content knowledge prior to using AI.
Table 1. Categories of AI tools based on the degree of human involvement
| Category | Description | Example |
| --- | --- | --- |
| Human-in-the-loop (HITL) | AI is integrated into an existing system, and the human must review for accuracy | Automated essay scoring where the AI grades assignments, but the instructor reviews the results for accuracy before releasing grades |
| Human-on-the-loop (HOTL) | AI takes over a portion of the workflow and requires human monitoring | AI-based proctoring systems that monitor online exams for suspicious behavior, while instructors review flagged cases to determine actual misconduct |
| Human-out-of-the-loop (HOOTL) | The AI workflow operates without human involvement | AI agents autonomously handle routine student queries, such as course registration deadlines, without requiring human intervention |
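The HITL pattern in Table 1 can be expressed in code. The sketch below is an illustrative example (not drawn from the article): an AI-suggested essay score is treated only as a draft until an instructor reviews and releases it; the data structure and function names are hypothetical.

```python
# Illustrative HITL sketch: an AI-proposed grade is never released without
# an instructor's review and explicit approval.
from dataclasses import dataclass
from typing import Optional


@dataclass
class DraftGrade:
    student_id: str
    ai_score: float                      # score proposed by a hypothetical AI grader
    instructor_approved: bool = False
    final_score: Optional[float] = None  # set only after human review


def release_grade(draft: DraftGrade, instructor_score: float) -> DraftGrade:
    """The human-in-the-loop step: the instructor may keep or override the AI score."""
    draft.instructor_approved = True
    draft.final_score = instructor_score
    return draft


draft = DraftGrade(student_id="S-001", ai_score=88.0)
released = release_grade(draft, instructor_score=85.0)  # instructor adjusts downward
print(released)
```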
AI tools may also be open-source or closed-source. An open-source AI tool refers to software where some combination of the model architecture, training code, inference code, and model weights is publicly released under an open license. The term “open source” is contested in the AI field because, unlike traditional software, AI models involve trained parameters (weights) that are not source code in the conventional sense. A fully open-source AI tool typically includes the model weights, the code for running inference, and ideally the training code and dataset. Models like LLaMA[4] are often called open-source or open-weights models, released under licenses that permit modification and redistribution with varying restrictions. For example, putting a case example into an open-source generative AI tool to get some ideas may be harmless, but if that case includes protected health information (PHI), there is a risk of violating privacy laws.
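One practical consequence of open-weights releases is that a model can be run locally. The following hedged sketch uses the Hugging Face `transformers` pipeline to run a text-generation model; the model identifier is a placeholder for whichever open-weights checkpoint an institution has licensed and vetted, and the privacy caution mirrors the point above.

```python
# Hedged sketch of local inference with an open-weights model via Hugging Face
# `transformers`; the model identifier is a placeholder, not a real repository.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="your-org/approved-open-weights-model",  # placeholder identifier
)

# Never place protected health information (PHI) in a prompt, even with a
# locally hosted model, unless institutional privacy review permits it.
prompt = "List three open questions about AI use in nursing education."
print(generator(prompt, max_new_tokens=120)[0]["generated_text"])
```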
With the constant diffusion of AI tools, learners and educators must develop proficiency in understanding their underlying technical concepts while being guided by ethics to navigate current and future challenges. Ground rules are needed for AI’s ethical use in learning, along with guidance on how AI can enhance learning (see Table 2). In addition, agreement between learners and the educator on the ground rules is critical. Learners need critical thinking skills to engage with AI tools responsibly.[5, 6] But as da Silva and colleagues (2024) and Sun (2025) note, a concern is that using AI may diminish the development of critical thinking; this concern needs strategic responses.[5, 6] Uses and ground rules will vary based on the course content and learning objectives. In some cases, generative AI may provide significant learner benefits and efficiencies. At other times, AI use puts the learner at risk of failing to acquire essential knowledge and skills that could be critical to a future job function.
Table 2. Example ground rules for AI use shared with learners in a course

Artificial intelligence (AI) is becoming a big part of health informatics—and you may already use tools like ChatGPT, Copilot, Grammarly, or other AI-powered applications in your studies and work. In this course, you are welcome to use AI tools to support your learning, provided you use them responsibly and transparently. Here are some basic guidelines:
- Learning First: AI should help you learn, not replace the learning. For example, using AI to brainstorm ideas, check your writing, or summarize complex concepts is fine. Copying and pasting entire answers or assignments from AI without adding your own thinking is not.
- Transparency: If you use AI to help with an assignment, include a brief note (e.g., “I used ChatGPT to brainstorm outline ideas” or “AI helped me with grammar suggestions”). This helps keep our work honest.
- Critical Thinking: AI can make mistakes, especially with technical or healthcare content. Always double-check facts and references. You’re responsible for the accuracy of your work. Sharing the prompts you used can help demonstrate your critical thinking skills when using AI tools.
- Professional Standards: In health informatics, we deal with sensitive data, ethical concerns, and high professional standards. Never use real patient information with AI tools, and always follow HIPAA and institutional privacy rules.
- Fair Use: Think of AI like a calculator in a math class—it’s helpful, but you still need to understand the process. I expect assignments and discussions to reflect your own knowledge and voice.

Remember, AI is a partner, not a substitute. Use it to sharpen your skills, but your insight, analysis, and integrity are what matter most. If the instructor has concerns about overuse or inappropriate use, outreach may occur to discuss appropriate use in the course.
Higher education is obligated to teach critical thinking skills and prepare learners for the workforce. Therefore, integrating generative AI while meeting that obligation and establishing AI use ground rules is an important function for any course. To be clear, AI brings extensive benefits and can improve productivity, along with providing support for brainstorming and problem-solving. The intent here is not to discourage appropriate AI use, and the authors recognize that skill-building in AI is critical for the workforce. Currently, various opinions and perspectives on AI use exist on a continuum. However, higher education should support exploration into the most effective uses with real-world conversations about appropriate and transparent use, potential risks, and ethical discernment of when and where to use generative AI. Accuracy and transparency are fundamental to clinical decision-making, supporting both patient safety and professional trust.[7, 8] As AI becomes embedded in healthcare, clinicians must be able to interpret and communicate AI-informed insights with clarity and accountability. Laying this foundation begins with education, where students should learn to integrate accuracy, transparency, and ethical reasoning into their use of technology and clinical judgment.
Like other evolving technologies, AI is a tool that should be viewed with both curiosity and caution. Its implications and unintended consequences are still being discovered in higher education and beyond. Approaching its use with an attitude of discovery and growth, tempered by a cautious mindset, is critical for academicians as we evolve with these tools and as the tools themselves evolve and integrate into higher education.
To illustrate the evolving impact of these tools, the authors present three real-world cases of students’ use of AI in higher education that prompted our own discernment, dialogue, and decision-making, with the intent of encouraging similar ethical reflection and informed decision-making among others in academic practice. All the cases occurred in the context of a graduate nursing program, yet the lessons can be scaled to other health professions and to undergraduate education as well.
2. Cases: AI and ethical concerns
Case #1
The instructor met with the student to discuss the AI concerns. The student explained that she had used AI to develop a content outline and to suggest an appropriate model. She stated that she then conducted her own search, found related literature, and composed the paper. When completing the assignment, she used AI to ensure her draft aligned with the rubric. She also used a grammar checker (Grammarly) for editing and spelling.
The College of Nursing (CON) faculty determined there was no clear violation of the CON’s academic integrity policy, since the student had neither plagiarized nor fabricated evidence.
Case #2
The faculty grader found that six of the nine listed articles were neither accurate nor retrievable; two faculty members were unable to locate those six referenced studies despite having access to comprehensive library resources through the University. The student was asked to supply the relevant PDFs but never provided them, and multiple requests to meet with the student were declined. Ultimately, the entire paper’s content and evidence were judged to be fabricated. Faculty evaluators determined that the student’s fabricated submission violated the CON academic integrity policy and required course failure.
Case #3
Faculty determined that three elements from this case suggested AI use: rapid completion of multiple assignments at the end of the course, incongruence between submissions and course rubrics, and the faculty’s inability to locate a cited article using its DOI. A search using the same article title yielded a publication with different authors, DOI, date, and journal volume, and with content unrelated to the student’s focus. Two AI detectors, Undetectable AI and GPTZero, showed AI usage of 70% and 100%, respectively. These detectors were used as tools for gathering information and raising instructor awareness.
In the meeting with the student, the student first stated that only a grammar checker was used but then admitted to using AI in the submissions to try to complete the course. As in Case #2, faculty evaluators determined that the student’s fabricated submissions violated the CON academic integrity policy and that the five submissions should receive zero credit, resulting in course failure.
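The DOI check described in Case #3 can be partially automated. The following sketch is a hypothetical illustration (not part of the faculty process described above): it queries the public Crossref REST API for a DOI and prints the registered title so it can be compared against what the student cited. The endpoint pattern follows Crossref’s documented /works/{doi} route; the DOI shown is a placeholder.

```python
# Hypothetical sketch: verify a citation's DOI against the public Crossref API.
# The DOI below is a placeholder; substitute the DOI listed in the reference.
from typing import Optional

import requests


def lookup_doi(doi: str) -> Optional[dict]:
    """Return the Crossref metadata record for a DOI, or None if unregistered."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return None  # unregistered or mistyped DOI: a red flag worth following up
    return resp.json()["message"]


record = lookup_doi("10.1000/placeholder-doi")
if record is None:
    print("DOI not found in Crossref; verify the reference manually.")
else:
    # Compare the registered title and authors with what the student cited.
    print("Registered title:", record.get("title", ["<none>"])[0])
```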
These cases and discussions set the stage for a broader ethical discussion. In what follows, we first summarize issues and questions about AI in graduate education that emerged during a CON faculty discussion about the above cases. After noting preliminary ethical themes from that review, we provide and address questions to help further examine ethical factors.
3. Initial faculty case responses
These cases motivated discussions among graduate faculty about AI in nursing education. The following ethical issues and concerns emerged:
- Institutional and faculty obligations to ensure students’ AI engagement and competency (AI literacy)
- Implications for academic integrity and responsible use of technology
- CON fairness obligations regarding student AI access
- Transparency requirements for AI use, including aspects like disclosure of AI use and provision of prompts used
- CON obligations to ensure a curriculum that sufficiently enhances critical thinking and evidence analysis in light of AI challenges
- Need to examine AI’s potential challenges to professionalism
3.1 Ethical questions and analysis
The above case studies and CON faculty discussion pose the following core question:
- In writing papers and completing assignments, from literature searches through outlines to final drafts, what are ethically permissible ways to use AI?
Given this core question, the cases and other reflections raise the following sub-questions, to which we add responses and practical suggestions.
- How (empirically) can AI assist students at each stage in developing and completing a paper or assignment? (What could AI do?)
- To determine what AI use is ethically permissible, CON faculty must determine what AI can actually do. Faculty also have to determine the intent of the assignment in light of the learning objectives and whether the use of AI as a tool will facilitate learning. Hence, we recommend that the CON maintain an ongoing assessment of the relevant AI literature. Practically, this process could be done in several ways. One response could be a faculty subcommittee charged with submitting a yearly summary and recommendations for new or revised guidelines. Faculty should also work closely with librarians to stay informed about the latest research tools and current literature in their field, ensuring that course content and assignments reflect the most up-to-date evidence. To further support this effort, faculty and librarians should collaborate to stay informed about evolving technological resources, maintaining awareness of emerging tools and innovations that affect teaching, learning, and professional practice. CON leadership may consider providing technical support, such as hiring a specialist to conduct periodic literature reviews and maintain evidence resources for faculty use.
- Typical committee infrastructure for academic integrity and/or plagiarism may work to support faculty and faculty-development concerns around AI. Faculty would benefit from training and support in responding to concerns, along with clear ground rules on appropriate AI use. Faculty engagement sessions are also critical to ensure that faculty are aware of which uses of AI are productive and which scenarios require ethical discernment. Faculty may need to bring these discussions to forums like faculty meetings or course group huddles to brainstorm strategies and support one another amid the evolving growth of AI. Furthermore, faculty should support the use of AI where appropriate as students grow their skills and its use in healthcare proliferates.
- How should AI ethically be employed at each assignment development stage, if at all? (Examples: Idea generation and planning, creating an outline, using grammar and spell checkers.)
- AI can be a powerful engine for generating ideas, outlines, and narratives, but it can impair critical thinking and analytical skills in the process. Ultimately, the danger lies in inferior student capacities that, in turn, can impair these students’ patient care, potentially violating the principles of beneficence and nonmaleficence. Students may become overly reliant on these tools and may not have the same access to them in the clinical environment. In some clinical situations, time will be a factor, and pausing to use AI will not be an option. Nurses and other clinical team members need to be able to think critically and sometimes think quickly. Generative AI can also do this, perhaps even faster than the human brain, but these scenarios still often rely on human input. For example, if a list of symptoms is entered but one or two are omitted, the AI’s outputs might differ significantly. The output may also be affected by whether the tools are open-source or closed-source. A constant objective must be that AI use does not impair the development of critical thinking skills in nurses. In fact, this example alone demonstrates the need for critical thinking when using AI tools. To ensure students engage with content and do not simply use generative AI to produce assignment material, we recommend a scaffolding approach to AI integration. “Scaffolding” involves gradually building learners’ understanding and confidence through structured support, guided practice, and increasing independence in applying AI tools effectively and ethically. For example, in a writing-intensive class, faculty may require students to draft initial outlines and narratives without the aid of AI, then evaluate those submissions and give feedback. Next, students could use AI to refine their outlines and narratives, transparently showing how and where they used AI for revisions.
- We recommend a similar approach for literature and evidence searches and analysis. However, we also acknowledge that avoiding AI has become almost impossible, as it is built into most search engines and even into writing and grammar tools. To enhance skills and critical screening capacities, students should first use recommended search engines and faculty- and/or self-developed search terms. After feedback, they could use AI for further searches and refinements. In light of AI, faculty may need to redesign assignments to request disclosure of AI use and the prompts used, or may ask students not to use AI in certain scenarios. Faculty may need to be thoughtful about how to frame questions and assignments to promote critical thinking, allowing AI as an adjunctive tool rather than the method for completing assignments. Students also benefit from guidance on using AI to create flashcards or study guides; for example, if a student uploads content to a generative AI tool, do they have the right to do so? Faculty need to set forth clear ground rules for assignments on AI use. The authors recognize this requires rethinking with time and intention. However, such reconsideration is part of the process of disruption due to innovation.
- What transparency about AI use should be required? (For example, should the CON develop a specific template or component in rubrics where students show where and how AI was used in all stages?)
- To cultivate intellectual honesty, help avoid plagiarism, ensure other aspects of academic integrity, and honor others’ contributions, we recommend requiring complete transparency about AI use in all assignments and submissions. Transparency means that students clearly disclose when AI tools are used, including the specific tools and prompts, and attest that no AI assistance was used when requested by faculty. Providing students with examples of statements of AI use is critical, as it helps students understand the professional expectations of both the how and why of transparency. One example can be found at the end of this paper (see Acknowledgements); it is concise, shows the tools used, and describes how the tools contributed.
- But routine faculty checks for unreported AI use should continue, and the detection tools should be periodically upgraded. Students should expect faculty to ask and contact them about AI use. These ongoing discussions promote professional formation and support learning for both faculty and students as tool use evolves. Students who are unsure about AI usage should ask for clarification. Faculty may need to include guidance on AI use both in the syllabus and with specific assignments. The faculty’s rationale for allowing or disallowing AI could also be provided to students.
- What does fairness require in ensuring all students have some minimal access to AI resources and preparation in how to use them (if they should)?
- Access to AI involves both the opportunity to use the tools and the skills to do so. Johnson et al. (2025) discuss the principles of AI access and fairness in healthcare.[9] Equitable access to AI in higher education requires intentional effort; all students should have fair access to AI tools, training, and understanding. We believe fairness requires that all students be provided with some ‘to-be-decided’ minimal level of access to, and training for, AI tools sufficient for educational program demands. This means that faculty may need to provide guidance or training on the use of AI, including awareness of the tools available to students. As mentioned previously, faculty themselves may need training and access to tools to become prepared to support students in their use. Therefore, higher education institutions should also provide a certain level of institutional access to high-quality AI platforms and support for teaching responsible use and workforce preparation.
- What guidance do the university, nursing professional ethics, professional organizations, and other entities and commentators (e.g., state licensing agencies, national boards) provide?
- The growing integration of AI in education and healthcare has prompted several accrediting bodies and professional nursing organizations to develop and/or refine statements outlining expectations for the ethical and appropriate use of AI. The American Association of Colleges of Nursing (AACN) developed The Essentials: Core Competencies for Professional Nursing Education in 1986 and then updated those competencies in 2021.[10] These competency expectations apply to graduates of baccalaureate, master’s, and Doctor of Nursing Practice (DNP) programs throughout the United States. Throughout the Essentials, the AACN maintains that nurses will be required not only to understand and incorporate technology but also to support advanced technologies like AI in curriculum and practice. The National Organization of Nurse Practitioner Faculties (NONPF) is responsible for the educational standards and future directions in nurse practitioner education.[11] This organization’s competencies align with the AACN core competencies regarding the expectations of technology and informatics embedded in the nurse practitioner role. Sun (2025) and Riley (2024) both suggest that educators have a responsibility to provide AI literacy in nursing education;[6, 12] the recommendations apply to both undergraduate and graduate faculty. Faculty in higher education need to reimagine classroom engagement, assessment, and simulation to empower future healthcare professionals to think critically, lead with discernment, and act ethically when embracing AI. Examples include providing students with AI-enhanced case discussions (classroom engagement), AI-integrated evidence appraisal assignments (assessment), and ethical decision-making simulations with AI support (simulation).
- The Higher Learning Commission (HLC), an institutional accreditor of U.S. colleges and universities, also recognizes that AI is permeating all aspects of higher education, both as an opportunity and a risk. The Trends 2025 report acknowledges that AI will be a part of teaching, operations, and services. It also notes the need for institutions to respond thoughtfully by providing the infrastructure and faculty support necessary to prepare students to use AI while also addressing the challenges of responsible use.[13]
- Regarding licensing agencies and state boards of nursing, the National Council of State Boards of Nursing (NCSBN), the American Association of Nurse Practitioners (AANP), and the American Nurses Association (ANA) have all issued statements on AI in nursing education and how it should translate into practice.[14,15,16] They all acknowledge the potential benefits of AI to enhance individualized learning and improve clinical skills, and highlight that students should use it to support their learning; however, it should not replace the human skills, critical thinking, and ethical judgment required in healthcare education.
- Collectively, the regulatory, accrediting, and professional governing bodies have all acknowledged the growing role of technology and AI in nursing and healthcare. While each offers its own position or directive regarding effective and ethical use, few offer specific or prescriptive recommendations for educational and/or clinical settings. This ambiguity can lead to varied interpretations and potential misunderstandings.[17] As a result, the responsibility for easing the transition from academia to clinical practice rests with individual institutions, colleges, departments, and faculty, who must interpret these guidelines to align with their curriculum, mission, and professional standards. This responsibility also reflects the expectation of practicing providers that higher education graduates will be ready to meet professional standards and clinical demands.[17, 18] It also reinforces the ongoing need for faculty development and curricular guidance.
4. Further analysis
The AI literature and the evolution of the healthcare field clearly show that students, faculty, and clinicians will need continuous education and upskilling in using AI technology. Carefully identifying, evaluating, and implementing best practices will demand such ongoing progress.
A steering committee of the National Academy of Medicine (NAM) and the Coalition for Health AI have both identified trust as a key component of AI.[19, 20] AI needs to be trustworthy, and this is critical if learners are going to use it. Educational and clinical institutions and systems will need to establish criteria not only for trustworthy AI sources and uses, but also for how best to assess trustworthiness. For that assessment, faculty will need a ‘discovery’ orientation, and case analysis can support how to proceed. In fact, cases are probably the earliest and most effective way to understand and prevent the unintended consequences of AI in higher education. For example, Ball Dunlap and Michalowski (2024) use case studies to illustrate the importance of nurses becoming informed about the ethical implications of AI data use and examining current practices.[21] They then provide recommendations to enable nurses to actively engage with and contribute to the responsible and ethical integration of AI technologies in healthcare. In guiding students, facilitated case analysis can serve as an instructive strategy to deepen understanding, promote ethical reasoning, and prepare future nurses and healthcare providers to navigate the complexities of AI use in clinical practice.[22]
Academic programs, including Colleges of Nursing, should maintain alignment between their internal policies and the broader institutional directives, accrediting requirements, and professional organizational standards. Policies, guidelines, and faculty development programs should be implemented to promote effective and appropriate AI integration. Consistency analysis should also include specific courses. While all courses should presumably adhere to a minimal set of AI use guidelines, special aspects of some courses may require other AI uses under specific circumstances. For example, we envision simulated clinical rounds in which AI is immediately used to generate possible causes of a new symptom set. But then such scenarios should also include applications for outcome checking—the trustworthiness factor.
Across most of the literature reviewed, trust emerged as a central theme in discussions of AI in education and healthcare. Trust and ethics are fundamentally intertwined. Within higher education, fostering trust requires preparing future nurses and providers to engage critically and transparently with AI systems, with learners understanding not only the potential benefits but also the limitations and biases. Building on this interconnectedness, trust and ethics cultivated in higher education must translate seamlessly into clinical practice, influencing both the learning and application ends of professional development.
Limitations
Our cases and literature scanning focus on graduate nursing education, general healthcare education, and assignment submissions. Nursing education about clinical scenarios or in clinical settings may raise issues and concerns not covered here. We did not employ a systematic review, nor did we consider non-English sources. Thus, relevant literature may have been overlooked. Many publications explore and/or recommend sets of ethical guidelines, principles, and/or values. While generally worthy, our paper does not attempt a digest of such accounts. Instead, we considered literature closely aligned with the cases and related issues. Lastly, we are focused on higher education and not the clinical use of AI.
5. Conclusion
AI is rapidly transforming higher education and healthcare practice, creating both opportunities and challenges that influence how students learn and how future professionals will engage in practice. This article presents real-world cases of AI use in a graduate nursing course, identified and examined by an interprofessional team including a nursing faculty member, an occupational therapist and health informatics professional, a computer scientist, and a physician-bioethicist. Through case discussions, the authors explore emerging trends, ethical implications, and practical considerations related to students’ use of generative AI. As AI tools continue to evolve and educators and learners adapt to them, guidelines and expectations will likewise shift. However, ethics remains a consistent foundation for examining new situations, fostering reflection, and advancing shared understanding and best practices in the responsible use of AI in higher education. Ethics provides a way to explore AI use cases with a view to sharing and learning from our experiences and those of others. Through continued reflection, lessons learned and best practices will continue to advance.
Authors’ contributions
All authors made substantial contributions to the conception and writing of this manuscript. Author MS led case development, manuscript organization, revisions, and reference formatting. Authors JD, SF, JS contributed to drafting, literature review, and iterative revision. All authors approved the final version.
Funding
Not applicable.
Conflicts of Interest Disclosure
The authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported in this paper.
Informed consent
Obtained.
Ethics approval
The Publication Ethics Committee of the Association for Health Sciences and Education. The journal’s policies adhere to the Core Practices established by the Committee on Publication Ethics (COPE).
Provenance and peer review
Not commissioned; externally double-blind peer reviewed.
Data availability statement
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
Data sharing statement
No additional data are available.
Acknowledgements
The authors gratefully acknowledge Dr. Susan Connelly, DNP, APRN-NP, CPNP, PC/AC, Associate Professor of Nursing, for her involvement in the course faculty discussions that helped inform this project and for her contributions to the initial development of the case studies presented in this article. AI tools (ChatGPT and Grammarly) were used to assist with grammar, organization, and phrasing. All ideas, analyses, and interpretations were the authors’ own. The AI-generated content was reviewed, edited, and verified for accuracy before submission.
References
- Sengar S, Hasan A, Kumar S. Generative artificial intelligence: A systematic review and applications. Multimedia Tools and Applications. 2025;84:23661-23700. doi:10.1007/s11042-024-20016-1
- Mittal U, Sai S, Chamola V. A comprehensive review on generative AI for education. IEEE Access. 2024;12:142733-142759. doi:10.1109/ACCESS.2024.3468368
- López-Meneses E, López-Catalán L, Pelícano-Piris N. Artificial Intelligence in educational data mining and human-in-the-loop machine learning and machine teaching: Analysis of scientific knowledge. Applied Sciences. 2025;15(2):772. doi:10.3390/app15020772
- Meta AI. The Llama 4 herd: The beginning of a new era of natively multimodal AI innovation. Meta AI Blog. 2025. https://ai.meta.com/blog/llama-4-multimodal-intelligence
- da Silva M, Ferro M, Mourao E. Ethics and AI in higher education: A study on students’ perceptions. In International Conference on Information Technology & Systems. Cham; Springer Nature Switzerland. 2024: 149-158. doi:10.1007/978-3-031-54235-0_14
- Sun G. Integrating artificial intelligence into nurse practitioner education: Strategies for teaching the next generation of nurse practitioners. Journal of the American Association of Nurse Practitioners. 2025;37(9):491-499. doi:10.1097/JXX.0000000000001170
- Demaree-Cotton J, Earp B, Savulescu J. How to use AI ethically for ethical decision-making. The American Journal of Bioethics: AJOB. 2022: 1-3. doi:10.1080/15265161.2022.2075968
- Franco D’Souza R, Mathew M, Mishra V. Twelve tips for addressing ethical concerns in the implementation of artificial intelligence in medical education. Medical Education Online. 2024;29(1):2330250. doi:10.1080/10872981.2024.2330250
- Johnson K, Horn I, Horvitz E. Pursuing equity with artificial intelligence in health care. JAMA Health Forum. 2025;6(1):e245031. doi:10.1001/jamahealthforum.2024.5031
- American Association of Colleges of Nursing (AACN). The essentials: Core competencies for professional nursing education. 2021. https://www.aacnnursing.org/Portals/0/PDFs/Publications/Essentials-2021.pdf
- National Organization of Nurse Practitioner Faculties (NONPF). National Organization of Nurse Practitioner Faculties’ Nurse Practitioner Role Core Competencies. 2022. https://cdn.ymaws.com/www.nonpf.org/resource/resmgr/np_competencies_
- Riley C. Incorporating artificial intelligence into nursing education: Challenges and recommendations. Leader to Leader, National Council of State Boards of Nursing (NCSBN). 2024. https://www.ncsbn.org/public-files/LTL_Spring2024.pdf
- Higher Learning Commission (HLC). Publications: Trends in higher education. 2025. https://download.hlcommission.org/HLCTrends_INF.pdf
- National Council of State Boards of Nursing (NCSBN). NCSBN Model Rules. 2021. https://www.ncsbn.org/public-files/21_Model_Rules.pdf
- American Association of Nurse Practitioners (AANP). Artificial intelligence [Position statement]. 2023. https://www.aanp.org/advocacy/advocacy-resource/position-statements/artificial-intelligence
- American Nurses Association (ANA). The Ethical Use of Artificial Intelligence in Nursing Practice. 2022. https://www.nursingworld.org/globalassets/practiceandpolicy/nursing-excellence/ana-position-statements/the-ethical-use-of-artificial-intelligence-in-nursing-practice_bod-approved-12_20_22.pdf
- Arbelaez Ossa L, Milford S, Rost M. AI through ethical lenses: A discourse analysis of guidelines for AI in healthcare. Science and Engineering Ethics. 2024;30(3):24. doi:10.1007/s11948-024-00486-0
- Zhaksylykova D, Tursynbek A, Nadirbekova G. A qualitative study on the perspectives of doctors, nurses, and residents about artificial intelligence and its application in healthcare: Implications to education. Nurse Education in Practice. 2025;104:104600. doi:10.1016/j.nepr.2025.104600
- Adams L, Fontaine E, Lin S. Artificial intelligence in health, health care, and biomedical science: An AI code of conduct framework principles and commitments discussion draft. NAM Perspectives. Commentary, National Academy of Medicine, Washington, DC; 2024. doi:10.31478/202403a
- Coalition for Health AI. Responsible Health AI for All. 2025. https://www.chai.org/
- Ball Dunlap P, Michalowski M. Advancing AI data ethics in nursing: Future directions for nursing practice, research, and education. JMIR Nursing. 2024;7:e62678. doi:10.2196/62678
- Katznelson G, Gerke S. The need for health AI ethics in medical school education. Advances in Health Sciences Education: Theory and Practice. 2021: 1447-1458. doi:10.1007/s10459-021-10040-3
This work is licensed under a Creative Commons Attribution 4.0 License.

