I am currently a Quantitative Researcher at Two Sigma Investments.
I obtained my Ph.D. in Statistics from Purdue University, advised by
Prof. Jennifer Neville in the
Departments of Computer Science and Statistics.
I was also fortunate to have worked closely with
Profs. Vinayak Rao,
Petros Drineas, and
Qiang Liu.
My general research interests lie at the intersection of machine learning, statistics, data mining, and theoretical computer science.
More specifically, my current interests include statistical network analysis, point processes, kernel and nonparametric methods, Stein's method, approximate Bayesian inference, and randomized sketching methods.
Motivated by the analysis of large-scale datasets exhibiting complex dependencies, my research focuses on developing interpretable models for relational and temporal data, flexible model criticism techniques for intractable distributions, and scalable learning algorithms with provable guarantees.
I received a joint M.S. degree in Statistics and Computer Science from Purdue University in 2015, and a B.S. degree in Statistics from the Special Class for the Gifted Young at the University of Science and Technology of China in 2013.
Changping Meng, Jiasen Yang, Bruno Ribeiro, and Jennifer Neville. HATS: A hierarchical sequence-attention framework for inductive set-of-sets embeddings. To appear in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2019.
Agniva Chowdhury, Jiasen Yang, and Petros Drineas. Randomized iterative algorithms for Fisher discriminant analysis. To appear in Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI), 2019.
Jiasen Yang, Vinayak Rao, and Jennifer Neville. A Stein–Papangelou goodness-of-fit test for point processes. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019.
Agniva Chowdhury*, Jiasen Yang*, and Petros Drineas. Structural conditions for projection-cost preservation via randomized matrix multiplication. Linear Algebra and its Applications, 573, 144–165, 2019. (*Equal contribution)
Jiasen Yang, Qiang Liu*, Vinayak Rao*, and Jennifer Neville. Goodness-of-fit testing for discrete distributions via Stein discrepancy. In Proceedings of the 35th International Conference on Machine Learning (ICML), 2018.
Agniva Chowdhury, Jiasen Yang, and Petros Drineas. An iterative, sketching-based framework for ridge regression. In Proceedings of the 35th International Conference on Machine Learning (ICML), 2018.
Jiasen Yang, Vinayak Rao, and Jennifer Neville. Decoupling homophily and reciprocity with latent space network models. In Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI), 2017.
Jiasen Yang, Bruno Ribeiro, and Jennifer Neville. Stochastic gradient descent for relational logistic regression via partial network crawls. In Proceedings of the 7th International Workshop on Statistical Relational AI (StarAI), 2017.
Jiasen Yang, Bruno Ribeiro, and Jennifer Neville. Should we be confident in peer effects estimated from partial crawls of social networks? In Proceedings of the 11th International AAAI Conference on Web and Social Media (ICWSM), 2017.