
Improving health, one machine learning system at a time

Captivated as a child by video games and puzzles, Marzyeh Ghassemi was also fascinated by health from an early age. Luckily, she found a path where she could combine the two interests.

“Although I had considered a career in health care, the pull of computer science and engineering was stronger,” says Ghassemi, an associate professor in MIT’s Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science (IMES) and principal investigator at the Laboratory for Information and Decision Systems (LIDS). “When I found that computer science broadly, and AI/ML specifically, could be applied to health care, it was a convergence of interests.”

Today, Ghassemi and her Healthy ML research group at LIDS study in depth how machine learning (ML) can be made more robust and then applied to improve safety and equity in health.

Growing up in Texas and New Mexico in an engineering-oriented Iranian-American family, Ghassemi had role models to follow into a STEM career. While she loved puzzle-based video games — “Solving puzzles to unlock other levels or progress further was a very attractive challenge” — her mother also engaged her in more advanced math early on, enticing her to see math as more than arithmetic.

“Adding or multiplying are basic skills emphasized for good reason, but the focus can obscure the idea that much of higher-level math and science are more about logic and puzzles,” Ghassemi says. “Because of my mom’s encouragement, I knew there were fun things ahead.”

Ghassemi says that in addition to her mother, many others supported her intellectual development. As she earned her undergraduate degree at New Mexico State University, the director of the Honors College and a former Marshall Scholar — Jason Ackelson, now a senior advisor to the U.S. Department of Homeland Security — helped her to apply for a Marshall Scholarship that took her to Oxford University, where she earned a master’s degree in 2011 and first became interested in the new and rapidly evolving field of machine learning. During her PhD work at MIT, Ghassemi says she received support “from professors and peers alike,” adding, “That environment of openness and acceptance is something I try to replicate for my students.”

While working on her PhD, Ghassemi also encountered her first clue that biases in health data can hide in machine learning models.

She had trained models to predict outcomes using health data, “and the mindset at the time was to use all available data. In neural networks for images, we had seen that the right features would be learned for good performance, eliminating the need to hand-engineer specific features.”

During a meeting, Leo Celi, principal research scientist at the MIT Laboratory for Computational Physiology and IMES and a member of Ghassemi’s thesis committee, asked whether Ghassemi had checked how well the models performed on patients of different genders, insurance types, and self-reported races.

Ghassemi did check, and there were gaps. “We now have almost a decade of work showing that these model gaps are hard to address — they stem from existing biases in health data and default technical practices. Unless you think carefully about them, models will naively reproduce and extend biases,” she says.
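The kind of audit Celi suggested can be sketched briefly. The example below is a minimal illustration, not code from Ghassemi’s group: it assumes a trained classifier’s scores on a held-out test set, a binary outcome label, and a demographic label per patient, and it reports a standard metric (AUROC) per subgroup along with the gap between the best- and worst-served groups. The names `subgroup_auroc`, `y_true`, `y_score`, and `groups` are illustrative assumptions.

```python
# Minimal sketch (illustrative only): auditing a trained classifier for
# subgroup performance gaps on a held-out test set.
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_auroc(y_true, y_score, groups):
    """Compute AUROC separately for each demographic subgroup.

    y_true  -- binary outcome labels for the test set
    y_score -- the model's predicted probabilities
    groups  -- one subgroup label per patient (e.g., self-reported race,
               gender, or insurance type)
    """
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        # AUROC is undefined if a subgroup contains only one outcome class.
        if len(np.unique(y_true[mask])) < 2:
            results[g] = float("nan")
            continue
        results[g] = roc_auc_score(y_true[mask], y_score[mask])
    return results

# Toy usage with random data, just to show the shape of the check:
rng = np.random.default_rng(0)
groups = rng.choice(["A", "B"], size=1000)
y_true = rng.integers(0, 2, size=1000)
y_score = rng.random(1000)
per_group = subgroup_auroc(y_true, y_score, groups)
gap = max(per_group.values()) - min(per_group.values())
print(per_group, "gap:", round(gap, 3))
```

In practice the interesting question is not the overall score but the size of that gap, which is exactly the kind of disparity Ghassemi found when she ran the check.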

Ghassemi has been exploring such issues ever since.

Her favorite breakthrough in the work she has done came about in several parts. First, she and her research group showed that machine learning models could recognize a patient’s race from medical images like chest X-rays, something radiologists are unable to do. The group then found that models optimized to perform well “on average” did not perform as well for women and minorities. This past summer, her group combined these findings to show that the more a model learned to predict a patient’s race or gender from a medical image, the worse its performance gap would be for subgroups in those demographics. Ghassemi and her team found that the problem could be mitigated if a model was trained to account for demographic differences, instead of being focused on overall average performance — but this process has to be performed at every site where a model is deployed.

“We are emphasizing that models trained to optimize performance (balancing overall performance with lowest fairness gap) in one hospital setting are not optimal in other settings. This has an important impact on how models are developed for human use,” Ghassemi says. “One hospital might have the resources to train a model, and then be able to demonstrate that it performs well, possibly even with specific fairness constraints. However, our research shows that these performance guarantees do not hold in new settings. A model that is well-balanced in one site may not function effectively in a different environment. This impacts the utility of models in practice, and it’s essential that we work to address this issue for those who develop and deploy models.”

Ghassemi’s work is informed by her identity.

“I am a visibly Muslim woman and a mother — both have helped to shape how I see the world, which informs my research interests,” she says. “I work on the robustness of machine learning models, and how a lack of robustness can combine with existing biases. That interest is not a coincidence.”

Regarding her thought process, Ghassemi says inspiration often strikes when she is outdoors — bike-riding in New Mexico as an undergraduate, rowing at Oxford, running as a PhD student at MIT, and these days walking by the Cambridge Esplanade. She also says she has found it helpful when approaching a complicated problem to think about the parts of the larger problem and try to understand how her assumptions about each part might be incorrect.

“In my experience, the most limiting factor for new solutions is what you think you know,” she says. “Sometimes it’s hard to get past your own (partial) knowledge about something until you dig really deeply into a model, system, etc., and realize that you didn’t understand a subpart correctly or fully.”

As passionate as Ghassemi is about her work, she intentionally keeps track of life’s bigger picture.

“When you love your research, it can be hard to stop that from becoming your identity — it’s something that I think a lot of academics have to be aware of,” she says. “I try to make sure that I have interests (and knowledge) beyond my own technical expertise.

“One of the best ways to help prioritize a balance is with good people. If you have family, friends, or colleagues who encourage you to be a full person, hold on to them!”

Having won many awards and much recognition for the work that encompasses two early passions — computer science and health — Ghassemi professes a faith in seeing life as a journey.

“There’s a quote by the Persian poet Rumi that is translated as, ‘You are what you are looking for,’” she says. “At every stage of your life, you have to reinvest in finding who you are, and nudging that towards who you want to be.”
