Title: Fairness Estimation for Small and Intersecting Subgroups in Clinical Applications
Ph.D. Advisers: Julian Wolfson and Jared Huling
Abstract: Along with the increasing availability of health data has come the rise of data-driven models to inform decision-making and policy. These models have the potential to benefit both patients and health care providers, but they can also exacerbate health inequities. Existing “algorithmic fairness” methods for measuring and correcting model bias fall short of what is needed for health policy in several ways, which we address in this dissertation. First, in clinical applications, risk prediction is typically used to guide treatment, creating distinct statistical issues that invalidate most existing techniques. Second, existing methods typically focus on a single grouping along which discrimination may occur rather than considering multiple, intersecting groups. Third, most existing techniques are only usable for relatively large subgroups. Finally, most existing algorithmic fairness methods require complete data on the grouping variables, such as race or gender, along which fairness is to be assessed; in many clinical settings, however, this information is missing or unreliable. We address each of these challenges and propose methods that expand the possibilities for algorithmic fairness work in clinical settings.