2024 Convocation Speaker Brahm Rhodes
Thank you, Provost Bestavros and the Faculty of Computing & Data Sciences for inviting me to speak today.
To the graduating class of 2024, it's an honor to celebrate this important milestone with you. I'm grateful for the opportunity and hope to impart some helpful insights.
Consider this question: what happens when we accurately predict the wrong thing? You are graduating from this program with skills and knowledge that give you access to enormously powerful technologies and methods. These technologies come with equally enormous responsibilities.
Let me tell you a story. Imagine Diane, a 50-year-old woman with poorly controlled diabetes and hypertension. Despite her high blood sugar and uncontrolled blood pressure, an algorithm meant to identify high-risk patients and channel them to additional resources scores Diane as low-risk. Without the extra support and resources she would receive as a high-risk patient, Diane's conditions worsen. She suffers a heart attack and a lengthy hospital stay, leading to missed work, lost wages (possibly her job), inability to care for her family, and even higher health care costs.
Now, imagine this happening to millions of patients with complex health needs who are denied resources by algorithms that fail to predict their health risks accurately. The impacts devastate individual patients, families, and the entire healthcare system. For reference, in the US, costs attributable to diabetes exceed $400B annually. What happened? Diane scored low-risk despite her obvious health challenges because the algorithm used past healthcare costs and utilization to predict future healthcare costs rather than actual health needs. The program's real goal was to control healthcare costs by keeping chronic conditions from becoming catastrophic, but the algorithm was built to predict spending, not sickness.
The developers chose past costs as a variable and as a proxy for how sick people were. In doing so, they declared that people who cost more should be considered high-risk. They essentially said that people with chronic conditions who see the doctor more often should get more care than people who see the doctor less, failing to recognize that utilization of the healthcare system is unequal across demographic groups for many reasons other than how sick you are. This disproportionately impacted poorer patients who were as sick as or sicker than wealthier patients. It won't surprise you that the patients excluded from high-risk care were predominantly, though not exclusively, poor and Black. The developers didn't deliberately design the system to exclude Black people, but that's precisely what happened.
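To make the proxy problem concrete, here is a minimal sketch with made-up numbers and hypothetical variable names (illness, low_access, visits), not the actual system from this story. It assumes two groups of patients who are equally sick, but one group faces barriers to care and therefore spends less, so a model trained to predict cost scores that group's sickest members as low-risk.

```python
# A toy illustration of proxy-label bias (hypothetical data, not the real system):
# two groups are equally sick, but one uses less care and therefore costs less,
# so a model trained to predict COST misses that group's sickest patients.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 20_000

illness = rng.gamma(shape=2.0, scale=1.5, size=n)  # true health need
low_access = rng.random(n) < 0.5                   # faces barriers to care

# At the SAME level of illness, low-access patients visit (and cost) less.
visits = rng.poisson(illness * np.where(low_access, 1.0, 2.0))
cost = 800.0 * visits + rng.normal(0.0, 300.0, n)

# The developers' choice: predict cost from past utilization.
X = visits.reshape(-1, 1)
risk_score = LinearRegression().fit(X, cost).predict(X)

# Channel the top 20% of scores into the high-risk care program.
flagged = risk_score >= np.quantile(risk_score, 0.80)
truly_sick = illness >= np.quantile(illness, 0.80)

# Who did the proxy leave out?
for name, grp in [("good access", ~low_access), ("low access", low_access)]:
    sick_in_group = truly_sick & grp
    missed = (sick_in_group & ~flagged).sum() / sick_in_group.sum()
    print(f"{name}: {missed:.0%} of the sickest patients scored low-risk")
```

In this toy setup, the model ranks patients almost perfectly by spending, and that is exactly the problem: the low-access group's sickest patients spend less, so far more of them fall below the cutoff. Swapping the label from cost to a direct measure of health need, such as the number of active chronic conditions, closes that gap in the same sketch.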
Connect
As graduates of BU's Computing & Data Sciences program, you have been immersed in an environment that prioritizes ethical and responsible innovation and the power of data to drive positive change. Your experiences here have prepared you to address difficult and complex challenges, not just with technical prowess but with a deep sense of ethical responsibility. I know the power of a BU education firsthand. I should, since I spent enough time here. When I was a student here, long ago (as my kids like to say, before color television), back when that fabulous building down the street was a Burger King and a parking lot, and data science was called statistics, I had the opportunity to work with a brilliant professor who became my PhD advisor. He had an old-school approach to scholarship and mentorship. Early on, he gave me two challenging tasks. The first: solve every problem in a math book written in 1910. It worked me hard. The problems were difficult, and I used every math skill I had, and a few I didn't, to get through it.
If you have seen the original Karate Kid movie (or maybe the remake), it was basically "wax on, wax off" math for months. When I completed that, he handed me the next task: he gave me 30 days to read one of his research papers (he picked it), derive every equation, and explain the underlying principles and math. My panicked thought was, "Well, that will never happen, since the only words I recognized were the prepositions, and the math seemed more opaque than what I had just been through." But I collected myself, read the paper as best I could, and dug up the references, and their references, until I found something I understood. Then I worked my way back up the ladder. Since this was before your time, I did this in the library (no Google), with books and journals, and made a lot of copies (no PDFs).
"You can learn almost anything if you are willing to return to first principles or as near as possible to ground your knowledge and build your understanding."
In the end, I figured it out, got it done, and went on to do a lot of exciting work with that professor and his research group. Why tell you this story, which has nothing to do with data science or AI? Because it illustrates a concept that I hope will serve you as well as it has served me. You can learn almost anything if you are willing to return to first principles, or as near as possible, to ground your knowledge and build your understanding. Build on what you know to learn and understand what you don't, and use first principles as a foundation and guide for those efforts.
More recently, I returned to first principles to teach myself data science and AI. As I did, one of the things that stood out was that bias lives in the data, not just in people. Don't forget that the systems we live and work with today, and that provide the data to train AI models, often emerged at a time when it was acceptable, and frequently intentional, to exclude or disadvantage women and minorities. So, unless we deliberately address that built-in bias, we will continue to have solutions and systems that can and will do enormous harm. AI is a powerful technology, and I believe in its potential; it will make things either better or worse, and there will be no middle ground. In your time here, you've worked on projects that had a positive impact and grounded that work in data ethics and responsible AI. While numerous frameworks and guidelines exist for the ethical use of data and responsible AI, and those will continue to evolve, I want to highlight a simple first principle you can use to guide your efforts.
Kindness
If you remember that there are real people on the other side of the data and choose to be kind to those people, you'll be more likely to address the problem in a way that positively impacts their lives rather than causing intentional or unintentional harm.
Exploration: Data Tells Human Stories Without Tears
A doctor (my wife) said to me recently, and I'm sure she was quoting someone else, that "in healthcare, data tells the patient's story without the tears." That's true, and it can be applied universally. Almost every data point you work with can be tied to a person or a human story. A single number can represent a life-changing event (the birth of a child or a marriage), a personal triumph (a new job, completing a marathon, graduating college), or a heartbreaking loss (a health crisis or the loss of a loved one). When we work with data or build AI models, it's easy to get lost in abstractions and forget the real people whose experiences will be impacted by our predictions. As data scientists, or whatever your profession, we must remember the human element and approach our work with kindness and respect for the people we impact.
I could fill the rest of this talk with horrific stories of data science gone wrong, but instead I'll ask you to do a Google search, read some of the stories, and consider whether that could have been you or a friend.
Implications: The Power and Risks of AI
The rapid advancements in artificial intelligence and machine learning have given us tools of unprecedented power. With AI, we can diagnose diseases, unlock new frontiers of scientific discovery, or serve up the next TikTok video. But as the saying goes, with great power comes great responsibility. The grand discussions of AI risk often miss the point and focus on the scary sci-fi outcomes. In the near term, the biggest AI risk is people using it to do bad things: steal your money, disrupt an election, or torment a teenager with deepfakes. In the mid to long term, the risk is AI increasing disparities and making unjust systems even more unjust. Only after that comes artificial general intelligence (AGI) as a bad actor, or Skynet, for the Terminator fans.
If you have a slow leak in your plumbing that will destroy your house in 20 years, you must fix it. But if your home is also on fire, put out the fire first. Since I work in both climate and responsible AI, I'm often asked about AI risk versus climate risk and why I work mainly on climate. I'm focusing on saving the planet so that, at the very least, we're still around and the AGI has something to rule over. Right now, I'm trying to put out the fire. Plus, I can walk and chew gum at the same time. AI systems are only as unbiased as the data they're trained on and the humans who design them. We've already seen examples of AI perpetuating racial and gender biases in healthcare, hiring, lending, policing, and criminal justice. We've seen social media algorithms optimize for engagement at the expense of mental health, civil discourse, and political stability. And we've seen the dangers of AI being used for surveillance, manipulation, and oppression. As the next generation of data and AI leaders, you have a responsibility to consider what AI can do, what it should do, and how it should do it. You have the power to build AI systems that are transparent, accountable, aligned with human values, and built with kindness.
Kindness-Driven Innovation
When we put kindness at the core of our work, we gain a first principle (we all know how to be kind) that unlocks our ability to understand the people on the other side of the data and make a positive impact. Notice that I say kindness, not empathy, because empathy requires shared feelings and emotional attunement. Kindness is a choice; you can simply do it, no matter how you feel or what you know. Taking the time to understand the needs, desires, and challenges of the people we serve enables us to create products, services, and solutions that genuinely improve lives.
We can leverage data and the tools of data science and AI to make the world more equitable and just. We just have to choose to do it. Martin Luther King Jr., perhaps the most famous graduate of BU (after one of you, of course), said, "The arc of the moral universe is long, but it bends toward justice." I think he missed the mark, or was at least incomplete (I know that's bold of me to say). While it is a hopeful statement, it lacks agency. Who is bending the universe? How is it bending? It's us. We should bend the moral arc of the universe toward justice. Defining justice, though, is often a gnarly process, and we sometimes have differing views on what's just. But we can approach that process with kindness; maybe that's a good place to start. Justice is not a first principle; kindness is.
"Your education is not over; this is the beginning of your learning. Go back to the first principles to learn new things. Do this forever."
I'll leave you with this thought and request: your education is not over; this is the beginning of your learning. Go back to first principles to learn new things. Do this forever. Approach everything you do with kindness and gratitude. Be kind to yourself; you can't care for others if you don't care for yourself. I envy you and the opportunities you will have. I'm proud of what you have accomplished and even prouder of what you will achieve. And, to mangle another famous quote, don't put a dent in the universe; bend it. Celebrate today, and as you go out into the world tomorrow, be bold, ambitious, and kind. Thank you.
Brahm Alexander Rhodes (ENG ’85, ENG ’88, GRS ’91), General Partner, Malaika Ventures
Dr. Brahm Rhodes is a multi-disciplinary engineer, venture-backed founder, and General Partner at Malaika Ventures, where he invests in early-stage climate tech startups through a transformative climate justice lens. Brahm is passionate about leveraging technology and data to increase access to and improve knowledge, health, and financial well-being, and about addressing climate change and advancing responsible AI to create a sustainable future for everyone.
Brahm has founded and worked in various startups, was an NIH Research Fellow at Harvard Medical School, and has deep experience across multiple industries and technologies. He is also a mentor with leading startup accelerators, teaches data science and AI/ML, and lectures on responsible AI. Dr. Rhodes holds a BS in Electrical Engineering, an MS in Aerospace Engineering, and a Ph.D. in Mechanical Engineering from Boston University.