Changho Suh – CS Conference Papers

After Oct. 2018

[1] Y. Roh, K. Lee, S. E. Whang and C. Suh, "Improving fair training under correlation shifts," ICML 2023.
[2] S. Um and C. Suh, "A fair generative model using LeCam divergence," AAAI 2023.
[3] Y. Roh, K. Lee, S. E. Whang and C. Suh, "Sample selection for fair and robust training," NeurIPS, Dec. 2021.
[4] S. Kim, M. Jang and C. Suh, "Group match prediction via neural networks," RecSys (ComplexRec), Sep. 2021.
[5] Y. Roh, K. Lee, S. E. Whang and C. Suh, "FairBatch: Batch selection for model fairness," ICLR, May 2021.
[6] J. Cho, G. Hwang and C. Suh, "A fair classifier using kernel density estimation," NeurIPS, Dec. 2020 (Top 10 KAIST Research Achievements).
[7] A. Elmahdy, J. Ahn, C. Suh and S. Mohajer, "Matrix completion with hierarchical graph side information," NeurIPS, Dec. 2020.
[8] K. Lee, C. Suh and K. Ramchandran, "Reprogramming GANs via input noise design," ECML-PKDD, Sep. 2020.
[9] M. Kang, K. Lee, Y. H. Lee and C. Suh, "Autoencoder-based graph construction for semi-supervised learning," ECCV, Aug. 2020.
[10] Y. Roh, K. Lee, S. E. Whang and C. Suh, "FR-Train: A mutual information-based approach to fair and robust training," ICML, July 2020.
[11] D. Kim, K. Lee and C. Suh, "Improving model robustness by automatically incorporating self-supervision tasks," NeurIPS Workshop, Dec. 2019.
[12] H. Kim, K. Lee, G. Hwang and C. Suh, "Crash to not crash: Learn to identify dangerous vehicles using a simulator," AAAI, Jan. 2019 (oral presentation, website, article, media).
[13] K. Ahn, K. Lee, H. Cha and C. Suh, "Binary rating estimation with graph side information," NeurIPS, Dec. 2018.

Apr. 2016 ~ Sep. 2018

[14] K. Lee, K. Lee, H. Kim, C. Suh and K. Ramchandran, "SGD on random mixtures: Private machine learning under data-breach threats," ICLR Workshop, Apr. 2018.
[15] K. Lee, H. Kim and C. Suh, "Simulated+Unsupervised learning with adaptive generation and bidirectional mappings," ICLR, Apr. 2018.
[16] K. Lee, K. Lee, H. Kim, C. Suh and K. Ramchandran, "SGD on random mixtures: Private machine learning under data-breach threats," MLSys, Feb. 2018.
[17] M. Jang, S. Kim, C. Suh and S. Oh, "Optimal sample complexity of M-wise data for top-K ranking," NeurIPS, Dec. 2017.
[18] K. Lee, H. Kim and C. Suh, "Crash to not crash: Playing video games to predict vehicle collisions," ICML Workshop, Aug. 2017.
[19] K. Lee, J. Chung and C. Suh, "Large-scale and interpretable collaborative filtering for educational data," KDD Workshop, Aug. 2017.
[20] S. Mohajer, C. Suh and A. Elmahdy, "Active learning for top-K rank aggregation from noisy comparisons," ICML, Aug. 2017.
[21] K. Lee, J. Chung, Y. Cha and C. Suh, "Machine learning approaches for learning analytics: Collaborative filtering or regression with experts?" NeurIPS Workshop, Dec. 2016.
[22] Y. Chen, G. Kamath, C. Suh and D. Tse, "Community recovery in graphs with locality," ICML, June 2016.

Before Apr. 2016

[23] Y. Chen and C. Suh, "Spectral MLE: Top-K rank aggregation from pairwise comparisons," ICML, July 2015 (Bell Labs Prize finalist, media).