Photo of Hooman Rashidi by Rayni Shiring, University of Pittsburgh
At the AI in Healthcare Research Day on March 14, leaders of the University of Pittsburgh’s Computational Pathology and AI Center of Excellence (CPACE) outlined steps they are taking to ensure that Pitt is a leader in AI.
“Our vision and mission are pretty simple,” said Hooman Rashidi, associate dean of artificial intelligence in medicine and professor of pathology, University of Pittsburgh School of Medicine, and executive director of CPACE. “Thanks to the support of our dean, Dr. Anantha Shekhar, we are set to become one of the premier AI centers in pathology and lab medicine for others to emulate.”
Pitt is also starting to help other schools set up their own AI centers; Rashidi said that just days before the event, colleagues from UCSF who want to build something similar paid a visit.
“We do want to share the wealth and share the knowledge, so that this is not contained just within because we want to have a societal impact,” he said. “And the way we want to do this, in terms of the mission, is integrating cutting-edge AI tools but in a responsible, efficient way.”
The event, organized by Ahmad Tafti, interim director of the scientific affairs section at CPACE and director of the HexAI Research Laboratory in Pitt’s School of Health and Rehabilitation Sciences, brought together Pitt scientists as well as AI leaders from other institutions and industry experts.
Liron Pantanowitz, Maud L. Menten Professor and chair of the Department of Pathology, Pitt School of Medicine, explained the need for AI to advance the mission of the department, especially in predicting patient outcomes. He said it was important for Pitt “not to just sit on the sidelines and wait for others to figure out how to make AI, use AI and monitor AI. I believe that we need to participate ourselves and be leaders.”
Rashidi also laid out the School of Medicine’s role in democratizing AI in education without focusing on coding. “You need educational applications and resources under one umbrella and to make sure that it's a very easy environment that people can interact and play with, so that it can handle all sorts of data types,” he said.
“The more AI champions we have, the easier it will be for all of us to interact, the more bridges we will actually be building.”
The star of Pitt’s curriculum, he said, will be a 25-hour, self-paced, interactive elective that will be introduced in late 2025/early 2026.
The course will provide AutoML tools and use synthetic data to avoid patient privacy issues, enabling learners to quickly build both vision models and tabular data models.
“The point of the course is to get you excited in the first 20 minutes and see how easy it is to build them, but then to actually teach you how to build them the right way. Because what’s the proper study design? What are the statistics, the regulatory aspects, the ethical aspects? That is what matters the most.”
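The article does not name the specific AutoML platform or datasets the elective will use, but a minimal sketch of that kind of exercise, with scikit-learn’s built-in synthetic data generator and an automated hyperparameter search standing in for the course’s AutoML tools, might look like this:

```python
# Minimal sketch of a "quick tabular model" exercise of the kind described above.
# The actual course tools are not specified; scikit-learn's synthetic data
# generator and grid search stand in for AutoML on de-identified data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic "patient" records: no real data, so no privacy concerns.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# Automated model selection over a small hyperparameter grid.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    scoring="roc_auc",
    cv=5,
)
search.fit(X_train, y_train)

# Evaluate on held-out data; proper study design and statistics come next.
auc = roc_auc_score(y_test, search.predict_proba(X_test)[:, 1])
print(f"Best params: {search.best_params_}, held-out AUC: {auc:.3f}")
```

As Rashidi describes it, the substance of the course is everything that follows a quick build like this one: study design, statistics, and the regulatory and ethical questions that determine whether such a model should ever be used.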
He also noted that Jason Rosenstock, associate dean for medical education, came up with a simple ethical roadmap that Pitt’s students, staff and faculty can follow when they use generative AI. Known as the DVP framework, it stands for disclose, verify and protect: disclosing when work is created using AI, verifying that it is accurate and not a hallucination, and protecting the material by avoiding copyright or HIPAA violations.