AI’s Role In Health, Unanswered Ethical Questions Take Center Stage At Triangle AI Summit
(The Chronicle, Max Tendler) — Hundreds poured into the Washington Duke Inn Friday for the Triangle Artificial Intelligence Summit, Duke’s second annual forum focused on engagement with AI technology in the region.
Hosted by Provost Alec Gallimore, the symposium took place a week after he announced a new initiative that aims to increase conversation about the technology and make the University a leader in the field.
“We’re not just reacting to the evolving field of artificial intelligence,” Gallimore said in an introductory speech. “We are actively shaping its future.”
The summit — organized by Duke Learning Innovation and Lifetime Education, Duke Libraries, Duke Community Affairs and the School of Nursing — was structured around four “pillars” drawn from the initiative: trustworthy and responsible AI, advancing discovery with AI, life with AI and sustainability in AI.
Attendees began with an overview from New York Times reporter Cade Metz on the history and functions of AI, then heard from leaders in the AI space in a series of panels framed around the four pillars. The event also featured a showcase of AI projects from across the Triangle and the perspectives of undergraduate participants in Duke’s Code+ summer program.
New AI initiatives at Duke
Tracy Futhey, vice president of information technology and chief information officer, discussed a number of the University’s plans to invest in AI in support of its educational mission — including constructing a data center that would consume enough energy to power “the entire town of Carrboro.”
She said her office is working with Facilities Management to assess the feasibility of the project, focusing on “how we build a next-generation data center that will be energy efficient [and] … sustainable for the environment.” She noted that Duke is investigating liquid cooling technology as a cost-effective way to reduce the facility’s climate impact, as data centers are known for their high energy consumption.
The new center is projected to be online in 18 to 24 months.
Futhey listed several AI programs that will be available to community members in the upcoming academic year, including ChatGPT-4o and new software called DukeGPT.
She also promoted AI training partnerships between Duke and other universities in the state, noting that “there is more that needs to be done than any one institution can do.”
“Our goal here is to have North Carolina be the number one place for AI research and education,” Futhey said.
AI’s uncertain role in health
Moderator Nicoleta Economou-Zavlanos, assistant professor of biostatistics and bioinformatics and director of Duke Health AI evaluation and governance, began her panel with a seemingly simple question: “What does trustworthy and responsible AI mean to you?”
“I’ll just be honest with you all; I have no clue,” said Jun Yang, Bishop-MacDermott family professor of computer science. “And I don’t think I’m alone.”
He was not. As the panel turned to the use of AI in health systems, Robert Califf, former commissioner of food and drugs, Trinity ’73 and School of Medicine ’78, expressed concern that AI tools tested in one health care setting often cannot be deployed in another, a problem he said comes down to barriers to sufficiently validating them.
“I don’t know of a single health system in the country that can actually do what needs to be done to validate AI,” he said.
Califf also said that hospitals often integrate AI for financial reasons, making “patient well-being … a minor part of the equation.”
He pointed to the Department of Health and Human Services’ recent Make America Healthy Again report, which has been accused of containing AI-generated citations to studies that do not exist, as an example of a failing national approach to integrating AI into health systems responsibly.
Steve Kearney, medical director at the Cary-based international tech company SAS Institute, said AI’s development has to “move at the speed of trust” but maintained that responsibility for deciding how software products should be integrated into health care lies with practitioners.
“We develop software,” he said. “We are not the experts in how you should apply it to patient care. That should be all of you.”
Yang emphasized that despite the concerns, AI should be embraced.
“The world is using it,” he said. “We have to figure out ways to deal with it.”
Unanswered ethical questions
The “Life with AI” panel continued the discussion of ethical queries that arise with the growing role of AI in society, though speakers settled on few answers.
“There’s going to be bad, and there’s going to be good, and life with AI is going to be figuring out what the balance is,” said Chris Bail, professor of sociology and director of Duke’s Society-Centered AI Initiative.
According to Bail, much of the bad lies in how the technology can be used to create echo chambers and worsen polarization on social media.
Brinnae Bent, executive in residence in the engineering graduate and professional programs, noted the effects on marginalized communities in particular. She referenced teenage girls being “targeted” by waves of deepfake pornography — realistic-looking explicit images generated or manipulated with AI — and facial recognition software being used by police departments across the country, despite studies showing these tools exhibit racial bias.
Moderator Yakut Gazi, vice provost for learning innovation and digital education, pointed out that the developing technology is projected to threaten millions of jobs. She asked the panelists if Americans should “stop pretending that everyone can be re-skilled” and instead “start planning for a society where not everyone needs to … work.”
But Jenny Maxwell, head of Grammarly for Education, rejected the notion that mass layoffs are inevitable. She pointed to financial tech company Klarna, which came to regret replacing hundreds of human customer service workers with AI and launched a wave of rehires after the technology fell short, a misstep she linked to a roughly $40 billion devaluation of the company.
Later, Andrew Pace, executive director of the Association of Research Libraries, said AI likely wouldn’t replace jobs directly. Rather, he claimed that “you’re going to be replaced by somebody who understands AI,” suggesting that fluency with the technology, rather than the technology itself, will determine who keeps their job.
Panelists also discussed the ramifications of widespread generative AI use among students and concerns that certain skills will atrophy as a result. No comprehensive national regulations currently exist for AI’s use or development in educational or professional spaces.
Speakers across the summit’s events underscored the growing gray area within AI development. But many expressed confidence that engaging with that uncertainty in the hopes of answering “hard questions” constitutes a strong path forward.
As Califf put it: “Isn’t that what universities are for?”