What the Room Asked Me at ASNA 2026
- Chris Hickman


Yesterday I stood in front of a room of nurses at ASNA 2026 and talked about artificial intelligence in nursing. My session was titled "Artificial Intelligence in Nursing: What Clinicians and Educators Need to Know in 2026." I walked through what AI actually is, where it is already running inside the systems we use every day, and what the honest risks are.
I have been working in data and technology for more than a decade, but this is the first time I have felt real urgency around a technology conversation. The nurses in that room felt it too. What surprised me was not the talk itself. It was what happened when I opened the floor.
Three questions came back, phrased in different ways by different people. I want to work through each of them here, because I think they are the questions every nurse, faculty member, and leader should be sitting with right now.
"How is this actually going to help us?"
This one came first, and it came from people who were not defensive. They were tired. I could hear it in their voices. They wanted to know if any of this was going to give them time back.
The honest answer is yes, in some places, and we already have evidence.
The cleanest example I can point to is ambient AI scribes. A 2025 quality improvement study of 263 clinicians across six health systems found that after thirty days of using an ambient scribe, burnout dropped from 51.9% to 38.8%. Participants reported the equivalent of 10.8 minutes saved per workday, 8.5% less total time in the EHR, and more than a 15% drop in time spent composing notes (Olson et al., JAMA Network Open, October 2025). That is not a pilot hope. That is a measured change in the behavior of working clinicians.
Where else? Decision support that prioritizes and sorts the flood of information coming at a nurse during a shift. Early-warning risk stratification that can surface deterioration before a manual review would catch it (work that can take five to fifteen minutes per patient when a nurse has to do it by hand). Computer vision on wounds, imaging, and skin findings, where consistency across clinicians has been a problem for as long as visual assessment has existed.
The pattern is the same across all of it. AI is good at pattern recognition at scale. It finds signal in noise. Nursing is also pattern recognition, but at the bedside, and nurses are drowning in noise.
The help is conditional, though. It depends entirely on whether the system was built with clinical reality in mind. Which leads straight into the second question.
"How do I make sure I have a voice when these systems get built?"
This one did not come as a technical question. It came as a practical one. A nurse stood up and said, more or less, "These tools are showing up in my unit and nobody asked me. How do I get a seat at the table?"
I want to sit with that for a second, because it is the question that matters most.
Here is what I told her. The American Nurses Association published a revised position statement in May 2025 titled "The Ethical Use of Artificial Intelligence in Nursing Practice." It states plainly that nurses must "ensure the voice of nursing is present when decisions are made in healthcare systems" (ANA, OJIN, 2025). That is not a suggestion. It is a professional obligation, written into a position adopted by the ANA Board of Directors.
The 2025 Code of Ethics for Nurses, Provision 7.5, goes further. Nurses remain accountable for decisions made in the course of their practice, even when a technology assists in that decision (ANA, 2025). Accountability without authority is a trap. If you are going to be held responsible for a decision the system helped make, you need to be in the room when the system is evaluated, selected, and configured.
So how do you get there? In my experience, three moves help.
First, ask to see your organization's AI governance structure. If one exists, find out who sits on it and whether nursing is represented. If one does not exist, that gap is the first problem to solve.
Second, join the ethics committee, the informatics council, or the quality oversight group. These are the bodies where AI policy gets written, and in my experience most of them want clinical voices and struggle to recruit them.
Third, document the clinical realities that AI systems miss. Nurse researchers at Duke have been building a framework called BE FAIR (Bias Elimination for Fair AI in Healthcare), which is explicitly about using frontline nursing expertise to catch bias across the lifecycle of clinical algorithms (Cary et al., Journal of Nursing Scholarship, July 2024). That kind of structured feedback is how you turn what you see on shift into something a governance committee cannot ignore.
One number worth sitting with. Only four of thirteen academic medical centers interviewed said they considered racial bias when developing or vetting machine learning algorithms (ACLU, 2023). Nurses see the downstream effect of that gap every day. Pulse oximeters systematically overestimate oxygen saturation in patients with darker skin pigmentation, with occult hypoxemia up to three times more common in Black patients than white patients (University of Michigan AIDHI). That bias then propagates into deterioration models and sepsis scores that depend on SpO2 as an input. If nurses are not at the table when these models are chosen and monitored, the bias stays invisible.
A seat at the table is not a reward for seniority. It is how bias gets caught before it hurts someone.
"What about hallucinations?"
The third question was usually phrased something like, "I heard AI just makes things up. How worried should I be?"
Very worried, but specifically worried.
Generative AI (the kind that writes text, summaries, and educational content) can produce outputs that are fluent, confident, and wrong. The technical term is hallucination. In a 2025 global survey of seventy clinicians across fifteen specialties, 91.8% reported they had encountered a medical hallucination, and 84.7% considered them capable of causing patient harm (Kim et al., medRxiv, February 2025). That is not a fringe concern.
A study published in Nature Communications Medicine in August 2025 ran 300 physician-designed clinical vignettes through six leading large language models. Each vignette contained a single fake lab value, sign, or disease. The models repeated or elaborated on the planted error in up to 83% of cases. A simple mitigation prompt cut the rate roughly in half, but did not eliminate it (Omar et al., Nature Communications Medicine, August 2025).
What this means in practice: if a nurse asks a general-purpose chatbot to summarize a patient history, the chatbot will sometimes invent a medication, a lab result, or a diagnosis that reads like it belongs. The output looks correct. The style is confident. The clinical content may be fiction.
The professional response is not to ban the tools. It is to verify everything. If the system cannot show you its sources, or if you cannot explain what it is doing, you should not rely on it. That is not a new principle. Nursing has always required that the person holding the license be able to justify the decision. AI does not change that. It raises the stakes.
The line I want to hold
When I closed the talk, I left the room with one line I have been repeating to myself all year. AI surfaces signal. Nurses determine meaning.
The algorithm calculates probability. The nurse interprets context, family, history, and the thing in the room that the chart cannot capture. That authority cannot be handed off, and I do not think it should be.
AI is going to move forward in nursing. That is not a prediction. It is already happening inside the systems you use. The question is whether nursing shapes it or inherits it. The nurses I met at ASNA are ready to shape it. They asked the right questions. Now it is on us, as a profession, to make sure those questions get asked in every governance meeting, every curriculum review, and every vendor demo for the next decade.
Lead. Question. Govern. Educate. Stay human.
Thanks to everyone who came out yesterday. I am still thinking about your questions.