2023 CAIT Research Showcase & Large Language Models Event
Mark your calendars for 2023's CAIT Research Showcase & Large Language Models Event!
Join us on May 3rd for this year's CAIT Research Showcase, featuring lightning talks from faculty recipients of CAIT Research Awards and PhD fellowships.
This year, we're excited to host a panel on Large Language Models: Opportunities and Challenges, moderated by Dean Shih-Fu Chang. Large Language Models (LLMs), like ChatGPT, are already revolutionizing how we think about the role of AI in our lives. Dean Chang and a panel of AI experts from academia and industry will discuss what LLMs mean for AI and beyond.
9:00am | Breakfast, Registration Opens
9:30am | Lightning Talks I
10:30am | Panel: Large Language Models: Opportunities and Challenges
12:15pm | Lightning Talks II
Baishakhi Ray, Associate Professor in the Department of Computer Science, Panelist
Baishakhi Ray is an Associate Professor in the Department of Computer Science at Columbia University, NY, USA. She is also a visiting academic at the AWS AI Lab. She has received the prestigious IEEE TCSE Rising Star Award and the NSF CAREER Award. Baishakhi's research lies at the intersection of software engineering and machine learning. Her work has received multiple Distinguished Paper awards, has been published in CACM Research Highlights, and has been widely covered in trade media.
Hongseok Namkoong, Assistant Professor at Columbia Business School, Panelist
Hongseok Namkoong is an Assistant Professor in the Decision, Risk, and Operations division at Columbia Business School and a member of the Columbia Data Science Institute. His research interests lie at the interface of machine learning, operations research, and causal inference, with a particular emphasis on developing reliable learning methods for decision-making problems. Hong's research has been recognized by several awards, including paper awards at Neural Information Processing Systems, International Conference on Machine Learning, INFORMS Applied Probability Society, and Conference on Computer Vision and Pattern Recognition, and the Amazon Research Award. He received his Ph.D. from Stanford University where he was jointly advised by John Duchi and Peter Glynn, and worked as a research scientist at Facebook Core Data Science before joining Columbia. Outside of academia, he serves as a LinkedIn Scholar at LinkedIn's Responsible AI team.
Gil Eyal, Professor of Sociology, Panelist
Gil Eyal is Professor of Sociology at Columbia University and Director of The Trust Collaboratory at INCITE. Previously, he was co-Director of the Precision Medicine and Society Program at Columbia. He is the author, most recently, of The Crisis of Expertise (Polity 2019), and before that The Autism Matrix (Polity 2010). He is co-Editor of the forthcoming Oxford Handbook of Expertise and Democratic Politics (OUP 2023), to be launched at a conference on Democratic Politics and the Problem of Mistrust in Experts at Columbia on April 20.
Ren Zhang, Data Science Director at Amazon, Panelist
Ren is the Data Science Director for Personalization at Amazon. Her team provides relevant and timely product recommendations to make customers' shopping journeys easier. Prior to Amazon, Ren was the Chief Data Scientist at BMO Financial, where she headed the AI Center of Excellence. Before that, she held Data Science and Innovation executive roles at Prudential Financial and the Commonwealth Bank of Australia. At American Express, she held progressively senior leadership roles in credit risk, fraud risk, analytics, and risk capabilities, most recently as Vice President, Risk and Information Management of Enterprise Growth. Ren holds a PhD in Statistics from The Wharton School at the University of Pennsylvania.
Shih-Fu Chang, Moderator
Shih-Fu Chang is the Dean of Columbia Engineering and the Morris A. and Alma Schapiro Professor, with appointments in the Departments of Electrical Engineering and Computer Science. One of the most influential experts in multimedia, computer vision, and artificial intelligence, he has led research that has produced spinoff companies and licensed technology in multimedia search.
Joint Selection and Inventory Planning under Dynamic Substitution
Jingwei Zhang is a postdoctoral research scholar in the Decision, Risk, and Operations division at Columbia Business School. His research interests lie broadly in stochastic modeling and approximate dynamic programming, with applications in resource allocation, revenue management, and assortment optimization. He obtained his PhD in Decision Sciences at the Fuqua School of Business, Duke University. Prior to that, he earned bachelor's degrees in science and in social science from the National University of Singapore.
Conveying Empathy in Spoken Language
Julia Hirschberg is the Percy K. and Vida L.W. Hudson Professor of Computer Science at Columbia and was previously at Bell Laboratories/AT&T Labs. She works on speech and NLP: TTS and detecting emotion, charisma, humor, empathy, entrainment, radicalization, deception, and trust in speech and language. She has served on the Executive Boards of the ACL, CRA, IEEE SLTC, NAACL, and ISCA (as president, 2005-7), and on the AAAI Council. She was editor of Computational Linguistics and Speech Communication; is a fellow of AAAI, ISCA, ACL, ACM, and IEEE; and is a member of the NAE, the American Academy, and the APS. She received the IEEE Flanagan Award and the ISCA Medal for Scientific Achievement and is an Amazon Scholar.
Confidence-Aware Reinforcement Learning for Human-in-the-Loop Decision Making
Matei Ciocarlie is an Associate Professor in the Mechanical Engineering Department at Columbia University, with affiliated appointments in Computer Science and the Data Science Institute. His main interest is in robotics, looking to discover how artificial mechanisms can interact with the world as skillfully as biological organisms. Matei’s current work focuses on robot motor control, mechanism and sensor design, planning and learning, all aiming to demonstrate complex motor skills such as dexterous manipulation.
Exponentially Faster Parallel Algorithms for Machine Learning
Eric is an Assistant Professor in the Department of Industrial Engineering and Operations Research at Columbia University, affiliated with the Data Science Institute. His research interests include optimization, algorithms, machine learning, and mechanism design. He received a PhD in Computer Science from Harvard University where he was advised by Yaron Singer. His thesis was awarded an ACM SIGecom Doctoral Dissertation Honorable Mention.
Scalable Black-Box Optimization via a Pseudo-Bayesian Framework
Haoxian has been a Ph.D. student in the IEOR Department at Columbia since Fall 2021. Before that, he completed his master's degree in Operations Research at Columbia and his bachelor's degree in Applied Mathematics at UCLA. His research interest is in developing scalable machine learning methods using tools from applied probability, uncertainty quantification, and statistics. Specifically, his current research focuses on scalable Bayesian optimization: a principled, theoretically justified approach to building uncertainty quantifiers that yield computationally efficient exploration-exploitation strategies.
I Spy a Metaphor: Large Language Models and Diffusion Models Co-Create Visual Metaphors
Melanie Subbiah is a third-year Computer Science PhD student at Columbia University working with Professor Kathleen McKeown on natural language processing. Her research focuses on narrative summarization and online text safety. Prior to starting graduate school, she worked at OpenAI on ChatGPT's predecessor, GPT-3. Melanie completed her Bachelor's degree in Computer Science at Williams College in 2017, graduating with Highest Honors for her thesis on domain-independent narrative generation, advised by Professor Andrea Danyluk. After Williams, she first worked at Apple in AI research before transitioning to OpenAI. She received her Master's from Columbia in 2022 as part of her PhD.
Nazar: Monitoring and Adapting to Domain Shifts Across Millions of Edge Devices
Junfeng Yang, Asaf Cidon, & Wei Hao
Junfeng Yang is a Professor of Computer Science, a member of the Data Science Institute, and co-Director of the Software Systems Lab at Columbia University. Yang's research centers on building reliable, secure, and fast software systems. His work has won the NSF CAREER Award, a Sloan Research Fellowship, the AFOSR YIP Award, and best paper awards at top publication venues; has been covered by Scientific American, The Atlantic, The Register, Communications of the ACM, and other outlets; and has improved real-world systems with billions of users.
Asaf Cidon is an assistant professor of Electrical Engineering and Computer Science (jointly affiliated) at Columbia University and a member of Columbia's Data Science Institute. He has broad research interests in software systems, storage, ML for systems, and ML for security. Previously, he was the Senior Vice President of Email Protection at Barracuda Networks, where he was the co-GM of the $200M email security and archiving business, with a team of 100 engineers and product managers. He joined Barracuda via the acquisition of Sookasa, the startup he co-founded and led as CEO. He completed his PhD at Stanford University under Mendel Rosenblum and Sachin Katti, focusing on distributed storage systems. His research has been adopted in the commercial storage systems of several companies, including Facebook, Tibco, Hortonworks, and Rubrik, and has been recognized by best paper awards at USENIX OSDI, USENIX Security, and USENIX ATC, and by the NSF CAREER and Army Research Office Young Investigator awards. He holds a PhD and an MS in Electrical Engineering from Stanford and a BS in Computer and Software Engineering from the Technion.
Wei Hao is a third-year Ph.D. student in the Computer Science department, co-advised by Prof. Asaf Cidon and Prof. Junfeng Yang. His interests lie at the intersection of systems and machine learning, and his goal is to build smart, scalable, and safe ML systems for the next generation.
Neural Causal Models with Abstractions
Kevin is a CS PhD student in the Columbia CausalAI Lab, advised by Prof. Elias Bareinboim. His research interests lie at the intersection of causal inference and deep learning, where he aims to find out (1) how deep learning models can be used to perform causal inference and, conversely, (2) how causal information can guide deep learning. While many topics in causal inference are understood only theoretically, Kevin hopes to develop principled approaches to causal deep learning and causal representation learning that can be applied to real-world data such as high-dimensional medical, image, or language data.
Scalable Computation of Causal Bounds
Madhu is a Ph.D. student in the IEOR Department working with Garud Iyengar. Previously, she received her Bachelor's degree from Princeton in 2020, majoring in Operations Research and Financial Engineering with a certificate in Statistics and Machine Learning. She is interested in causality, optimization, and probabilistic modeling.
Single-Leg Revenue Management with Advice
Rachitesh is a fourth-year PhD student in the IEOR Department, where he is advised by Christian Kroer and Santiago Balseiro. Prior to joining the department, he received his Bachelor's degree in Mathematics from the Indian Institute of Science. His primary research interests are game theory and data-driven optimization, with a focus on applications in online advertising and revenue management.