Contact

Assistant Professor

Computer Science and Engineering

University of California, Santa Cruz

Email:

Office: E2-341A

About Me

I’m an Assistant Professor in the Computer Science and Engineering department at UC Santa Cruz. My research interests are crowdsourcing and algorithmic fairness, both in the context of machine learning. The central question in my work is how to learn from dynamic and noisy data.

Previously, I was a postdoctoral fellow at Harvard University. I received my Ph.D. from the University of Michigan, Ann Arbor, and my B.Sc. from Shanghai Jiao Tong University, China.

My research is generously supported by the National Science Foundation, the Office of Naval Research, Amazon (in collaboration with the NSF FAI program), and UC Santa Cruz. My work was also partially supported by the DARPA SCORE program.

News

noisylabels.com is online! We have collected and published re-annotated versions of the CIFAR-10 and CIFAR-100 datasets that contain real-world human annotation errors. We show how these noise patterns deviate from the classically assumed ones and what new challenges they pose. We hope these datasets will facilitate the benchmarking and development of weakly supervised learning solutions.
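As a rough illustration of the kind of analysis these datasets enable, the sketch below tabulates the empirical noise transition matrix between the original CIFAR labels and the human re-annotations. The file names are hypothetical placeholders, not the actual release format on noisylabels.com.

```python
import numpy as np

# Hypothetical paths: the actual file layout of the released datasets may differ.
clean_labels = np.load("cifar10_clean_labels.npy")   # shape (50000,)
noisy_labels = np.load("cifar10_human_labels.npy")   # shape (50000,)

num_classes = 10
# Empirical noise transition matrix T, where T[i, j] = P(human label = j | true label = i).
T = np.zeros((num_classes, num_classes))
for true, noisy in zip(clean_labels, noisy_labels):
    T[true, noisy] += 1
T /= T.sum(axis=1, keepdims=True)

# The off-diagonal mass shows which class pairs humans actually confuse,
# typically far from the symmetric/uniform noise assumed in synthetic benchmarks.
print(np.round(T, 3))
```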


[2021.10] Our group has 4 papers accepted to NeurIPS 2021, with one spotlight selection! These works span the study of fairness in machine learning and learning from weak supervision. We show how careless deployment of models can lead to persistent qualification disparity (spotlight!), study the delayed impacts of actions in bandit learning, and improve fairness guarantees when learning from noisy labels. We have also defined the problem of weakly supervised policy learning.

[2021.09] Our work Are Gender-Neutral Queries Really Gender-Neutral? Mitigating Gender Bias in Image Search has been accepted to EMNLP 2021 for an oral presentation.

[2021.08] Our paper Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels has won the best paper award at IJCAI 2021 workshop on Weakly Supervised Representation Learning. Congrats to Zhaowei and Yiwen (our former summer intern)!

[2021.08] I gave an invited talk at IJCAI 2021 workshop on Weakly Supervised Representation Learning.

[2021.07] Our paper Linear Classifiers that Encourage Constructive Adaptation has won the best paper award at ICML 2021 workshop on Algorithmic Recourse. Congrats to Yatong and Jialu!

[2021.07] Congrats to Yatong, who has received the inaugural BSOE Fellowship for Anti-Racism Research for her work on algorithmic fairness!

[2021.06] I’ll be serving as an Area Chair for NeurIPS 2021 and AAAI 2022.

[2021.05] My new preprint (ICML 2021 long talk forthcoming) highlights the disparate effect of memorizing instance-dependent noisy labels. I also show how several existing learning-with-noisy-labels solutions fare at the instance level.

[2021.05] We provide a new tool to estimate the noise transition matrix (to appear at ICML 2021). Check it out! It is efficient, model-free, scalable, and can be broadly applied in learning-with-noisy-labels tasks. Technical details here.
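For context only, and not the estimator from the ICML 2021 paper itself: once a transition matrix has been estimated, one standard way to put it to work is forward loss correction during training. Below is a minimal PyTorch sketch with a hypothetical estimate T_hat.

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_targets, T, eps=1e-8):
    """Cross entropy computed after pushing the model's clean-class
    probabilities through an estimated transition matrix T,
    where T[i, j] = P(observed label j | true label i)."""
    clean_probs = F.softmax(logits, dim=1)   # model belief over clean classes
    noisy_probs = clean_probs @ T            # implied distribution over observed labels
    return F.nll_loss(torch.log(noisy_probs + eps), noisy_targets)

# Toy usage with a hypothetical estimated T for 10 classes (about 80% mass on the diagonal).
T_hat = torch.full((10, 10), 0.02) + 0.8 * torch.eye(10)
logits = torch.randn(32, 10)
noisy_targets = torch.randint(0, 10, (32,))
print(forward_corrected_loss(logits, noisy_targets, T_hat))
```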

[2021.03] Our fairness project is featured in UCSC News and Santa Cruz Sentinel.

[2021.01] Our proposal Fairness in Machine Learning with Human in the Loop has been awarded by the NSF FAI program! Thank you, NSF & Amazon! As the lead institution, we will receive $1M to conduct a wide range of research on understanding the interaction between machine learning and human agents.