Funded Projects & Fellows
2023 Funded Projects & Fellows
Faculty Research Projects
Algorithmic Fairness through a Causal Lens
PI: Elias Bareinboim
Artificial intelligence (AI) plays an increasingly prominent role in society, as decisions that were once made by humans are now delegated to automated systems. These systems currently decide bank loans, criminal incarceration, and the hiring of new employees, and it is not hard to envision that they will underpin most societal decisions in the future. Despite growing concern about issues of transparency and fairness, and the high complexity this task entails, basic properties of such systems remain poorly understood. For instance, we currently cannot detect whether an AI system is operating fairly (i.e., abiding by the decision constraints agreed upon by society) or whether it is reinforcing biases and perpetuating a preceding prejudicial practice. Additionally, there is no clear understanding of the various metrics used to evaluate the different types of influence that a protected attribute (such as race, gender, or religion) exerts on outcomes and predictions. In practice, this translates into the current state of affairs, where a decision is almost invariably made without much discussion or justification. To assist AI designers in developing systems that are ethical and fair, we will build on recent advances in causal inference to develop a principled and general causal framework for capturing and disentangling the different causal mechanisms that may be simultaneously at play. Besides providing a causal formalization for fairness analysis, we will investigate the admissibility conditions, decomposability, and power of the proposed fine-grained causal measures of fairness. This will allow us to quantitatively explain the total observed disparity in decisions through the different underlying causal mechanisms commonly found in real-world decision-making settings.
Facially Expressive Robotics
PI: Hod Lipson
About 55% of human face-to-face communication is nonverbal. Whereas ongoing progress in language models such as ChatGPT and BingChat is radically advancing the verbal component of conversations, the nonverbal portion of human-machine interaction is not keeping pace. Animatronic robotic faces are stiff and slow. Robots cannot express a natural smile, let alone more sophisticated expressions. Robots cannot sync their lip motion to properly match enunciated phonemes. They cannot match their facial expression to their speech tone or content. This growing chasm between advancing verbal content and poor nonverbal ability will prevent AI from reaching its potential for full human engagement.
The goal of this research pilot is to explore architectures that will allow robots to begin to learn the subtle but critical art of physical facial expressions. Our lab has developed a soft animatronic face platform containing 26 soft actuators, most of them around critical expression zones such as lips and eyes. We aim to study two key communication pathways: The first is learning what facial expression to make (and when) based on conversational context, and the second is learning how to physically articulate these expressions on a given soft face.
Neural Methods for Describing and Interpreting Works of Art
The ubiquity of art on the internet demands better ways of organizing and making sense of visual art. We propose an investigation into unified methods for representing and describing these artistic images. Building on our preliminary work in this area, we will begin by investigating the representations produced by large pre-trained vision and language models to understand the kinds of aesthetic information they encode (e.g., color, form, style, emotion, subject matter). Using that knowledge, we propose a follow-up study of greater difficulty: generating descriptive captions (i.e., describing only what a sighted person sees) and interpretive captions (i.e., incorporating contextual information about the art, artist, time period, and movement). We believe this line of inquiry has the potential to drive social good and commercial value, expanding access for the visually impaired while simultaneously enabling better tools for a range of commercial scenarios.
DataEx: A Data Market System for Modern Data Users
We propose to develop a scalable and differentially private data market system, and to deploy a version for the Columbia campus. The data market system allows anyone with a machine learning task to upload their training dataset (in a differentially private form) and search for other datasets that could augment their training data to produce a higher-accuracy model. At the same time, data providers can upload differentially private summaries of their datasets to be indexed by the platform. Differential privacy is the gold standard in data privacy: it guarantees anonymity for individuals, allowing sensitive data such as medical records or student grade information to be shared for research without leaking individually identifiable information. In a Columbia deployment, researchers and teams throughout the university would be able to register the data they have available and benefit from the collective capacity of the whole university.
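As a minimal sketch of the differential-privacy guarantee described above (the function names, predicate, and epsilon value are illustrative, not part of DataEx): a counting query has sensitivity 1, so adding Laplace noise with scale 1/epsilon yields an epsilon-differentially-private release.

```python
import numpy as np

def dp_count(records, predicate, epsilon, rng):
    """Release a count under epsilon-differential privacy via the Laplace
    mechanism. Adding or removing one record changes the count by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Illustrative use: count students scoring >= 90 without exposing any individual.
grades = [72, 88, 95, 91, 60, 99, 85]
rng = np.random.default_rng(seed=0)
noisy = dp_count(grades, lambda g: g >= 90, epsilon=1.0, rng=rng)
```

The released value is the true count (here, 3) plus zero-mean noise, so aggregate statistics stay useful while any single record's presence is masked.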
Haoxian Chen – IEOR
Chen, advised by Henry Lam, an associate professor of industrial engineering and operations research, is “developing scalable machine learning (ML) methods with provable performance guarantees by leveraging tools of applied probability, uncertainty quantification, and stochastic analysis.” His goal is “to create a new methodology named Pseudo-Bayesian Optimization (PBO) that provides a principled and theoretically justified approach to building an uncertainty quantifier that leads to a computationally efficient exploration-exploitation strategy.”
Rachitesh Kumar – IEOR
Kumar, advised by Christian Kroer, an assistant professor of industrial engineering and operations research, seeks “to develop algorithms for data-driven revenue management that are robust to demand uncertainty and perform well in both these settings. Our robust algorithms can be modiﬁed based on the conﬁdence one has in the demand forecast and guarantee good performance when the forecast is accurate while maintaining a minimum level of worst-case performance.”
Melanie Subbiah – Computer Science
Melanie Subbiah, advised by Kathleen McKeown, the Henry and Gertrude Rothschild Professor of Computer Science and an Amazon Scholar, is investigating “automatic summarization of narrative, a problem at the intersection of artificial intelligence and the humanities and social sciences. Much of the previous work in summarization has focused on single-document news. Our lab has shown, however, that while this domain is important, the task is limited due to the nature of how factual news articles are written — the critical information is almost always contained in the first couple sentences of the article. Focusing on narrative therefore poses a much more interesting challenge for summarization.”
Kevin Xia – Computer Science
Kevin Xia, advised by Elias Bareinboim, associate professor of computer science, is conducting research “focused on machine learning, speciﬁcally causal inference. In particular, I hope to answer two research questions: (1) How can deep learning models be used to perform causal inference and, conversely, (2) how can causal information guide deep learning?”
Gift-Funded Research Projects
Exponentially Faster Parallel Algorithms for Machine Learning
This proposal aims to develop fast optimization techniques for fundamental problems in machine learning. In a wide variety of domains, such as computer vision, recommender systems, and immunology, objectives we care to optimize exhibit a natural diminishing returns property called submodularity. Off-the-shelf tools have been developed to exploit the common structure of these problems and have been used to optimize complex objectives. However, the main obstacle to the widespread use of these optimization techniques is that they are inherently sequential and too slow for problems on large data sets. Consequently, the existing toolbox for submodular optimization is not adequate to solve large scale optimization problems in ML.
This proposal considers developing novel parallel optimization techniques for problems whose current state-of-the-art algorithms are inherently sequential and hence cannot be parallelized. In a recent line of work, we developed algorithms that achieve an exponential speedup in parallel running time for problems that satisfy a diminishing returns property. These algorithms use new techniques that have shown promising results for problems such as movie recommendation and maximizing influence in social networks. They also open exciting possibilities for further speedups as well as for applications in computer vision and public health, where important challenges remain.
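For concreteness, the sequential baseline that such parallel methods accelerate is the classic greedy algorithm for monotone submodular maximization, shown here on a toy coverage objective (the instance is illustrative; the (1 - 1/e) guarantee is the standard textbook result, not this project's contribution):

```python
def greedy_max_coverage(subsets, k):
    """Greedily pick k subsets maximizing the number of covered elements.

    Coverage is monotone submodular, so greedy achieves a (1 - 1/e)
    approximation -- but it is inherently sequential: each pick depends
    on all previous ones. That chain of k adaptive rounds is exactly the
    bottleneck that exponentially faster parallel algorithms remove.
    """
    covered, chosen = set(), []
    for _ in range(k):
        # Marginal gain of each unchosen set given what is already covered.
        gains = {i: len(s - covered) for i, s in enumerate(subsets) if i not in chosen}
        best = max(gains, key=gains.get)
        if gains[best] == 0:  # no remaining set adds anything new
            break
        chosen.append(best)
        covered |= subsets[best]
    return chosen, covered

# Toy instance: pick 2 of 4 sets to cover the most elements.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
picked, covered = greedy_max_coverage(sets, k=2)
```

Note how the second pick skips {3, 4} despite its size: its marginal gain shrinks once {1, 2, 3} is chosen, which is the diminishing-returns property at work.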
Conveying Empathy in Spoken Language
Much research has been done in the past 15 years on creating empathetic responses in text, facial expression, and gesture in conversational systems. However, almost none has been done to identify the speech features that can create an empathetic-sounding voice. Empathy is the ability to understand another’s feelings as if we were having those feelings ourselves, and compassionate empathy includes the ability to take action to mitigate any problems. This capability has been found to be especially useful in dialogue systems, avatars, and robots, since empathetic behavior can encourage users to like a speaker more, to believe the speaker is more intelligent, to actually take the speaker’s advice, to trust the speaker more, and to want to converse with the speaker longer and more often. We propose to identify the acoustic/prosodic and lexical features that produce empathetic speech by collecting the first corpus of empathetic podcasts and videos, crowdsourcing labels for empathy, and building machine learning models to identify empathetic speech and the speech, language, and visual features that can be used to generate it.
Sponsored Research Projects
Joint Selection and Inventory Optimization under Limited Capacity
Fueled by the insatiable customer desire for faster delivery, e-tailers have begun deploying "forward" distribution centers close to city centers, which have very limited space. Our proposal is to develop scalable optimization algorithms that allow e-tailers to systematically determine the SKU variety and inventory that should be placed in these precious spaces. Our model accounts for demand that depends endogenously on our SKU selection, inventory pooling effects, and the interplay between different categories of SKUs. Our model is designed to yield insights about the relationship between demand variability and SKU fragmentation; sorting rules for selecting a few SKUs within a given category; and the marginal value of capacity to different categories.
Confidence-Aware Reinforcement Learning for Human-in-the-loop Decision Making
We propose novel methods for leveraging human assistance in Reinforcement Learning (RL). The sparse reward problem has been one of the biggest challenges in RL, often leading to inefficient exploration and learning. While real-time immediate feedback from a human could resolve this issue, it is often impractical for complex tasks that require a large number of training steps. To address this problem, we aim to develop new confidence measures, which the agent computes during both training and deployment. In this paradigm, a Deep RL policy will train autonomously, but stop and request assistance when the confidence in the ultimate success of the task is too low to continue. We aim to show that expert assistance can speed up learning and/or increase performance, while minimizing the number of calls for assistance made to the expert.
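One simple way to realize the "stop and request assistance" behavior described above is to gate actions on the policy's softmax confidence over its action values. This is a hypothetical sketch: the threshold, expert interface, and max-probability confidence are stand-ins for the richer success-probability measures the project proposes.

```python
import math

def act_or_request_help(q_values, threshold, ask_expert):
    """Take the greedy action if the policy is confident enough;
    otherwise defer to a human expert.

    Confidence here is the maximum softmax probability over Q-values,
    a simple proxy for the agent's belief that it can finish the task."""
    # Softmax with max-subtraction for numerical stability.
    exps = [math.exp(q - max(q_values)) for q in q_values]
    total = sum(exps)
    probs = [e / total for e in exps]
    confidence = max(probs)
    if confidence < threshold:
        return ask_expert(), True   # deferred to the human expert
    return probs.index(confidence), False

# A confident state acts alone; an ambiguous one asks the expert.
action, deferred = act_or_request_help([5.0, 0.1, 0.2], 0.9, lambda: 1)
```

During training, each deferral both unblocks the current episode and supplies an expert label, which is how assistance can speed up learning while the threshold caps how often the expert is interrupted.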
A Tale of Two Models
Full-precision deep learning models are often too large or costly to deploy on edge devices such as Amazon Echo, Ring, and Fire devices. To accommodate the limited hardware resources, models are often quantized, compressed, or pruned. While such techniques often have a negligible impact on top-line accuracy, the adapted models exhibit subtle differences in output compared to the full-precision model from which they are derived.
We propose a new attack termed Adversarial Deviation Attack, or ADA, that exploits the differences in model quantization, compression and pruning, by adding adversarial noise to input data that maximizes the output difference between the original and the edge model. It will construct malicious inputs that will trick the edge model but will be virtually undetectable by the original model. Such an attack is particularly dangerous: even after extensive robust training on the original model, quantization, compression or pruning will always introduce subtle differences, providing ample vulnerabilities for the attackers. Moreover, data scientists may not even be able to notice such attacks because the original model typically serves as the authoritative model version, used for validation, debugging and retraining. We will also investigate how new or existing defenses can fend off ADA attacks, greatly improving the security of edge devices.
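To illustrate the attack surface (not the proposed ADA method itself), the sketch below perturbs an input to widen the output gap between a full-precision linear model and its weight-quantized copy. All names, the coarse rounding quantizer, and the single gradient-sign step are hypothetical simplifications of what a real attack on a deep model would do.

```python
import numpy as np

def quantize(w, step=0.25):
    """Round weights to a coarse grid, mimicking edge-device quantization."""
    return np.round(w / step) * step

def deviation_step(w_full, w_edge, x, eps=0.5):
    """One gradient-sign step increasing ||f(x) - g(x)||^2 for linear
    models f(x) = W x and g(x) = Wq x. With D = W - Wq, the gradient of
    the squared deviation w.r.t. x is 2 D^T D x."""
    d = w_full - w_edge
    grad = 2.0 * d.T @ (d @ x)
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
w_full = rng.normal(size=(3, 4))   # "original" full-precision model
w_edge = quantize(w_full)          # its quantized edge copy
x = rng.normal(size=4)
x_adv = deviation_step(w_full, w_edge, x)

gap_before = np.linalg.norm(w_full @ x - w_edge @ x)
gap_after = np.linalg.norm(w_full @ x_adv - w_edge @ x_adv)
```

Because the deviation objective is a convex quadratic in the input, the sign step can only grow the gap, which is why quantization error, however small in aggregate accuracy, leaves an exploitable direction for an attacker.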
2021 Funded Projects & Fellows
Gift-Funded Research Projects
Fairness and Incentives in Machine Learning
Christos Papadimitriou and Tim Roughgarden, in collaboration with their Amazon Research contacts Michael Kearns and Aaron Roth, will use machine learning, algorithms, and social science techniques to explore through analysis and experiment ways in which the tremendous power of machine learning can be applied to render machine learning more fair. Can deep nets be trained through synthetic fairness criticism to treat their data more equitably, and can the unfair treatment of subpopulations be discovered automatically? How can one predict and mitigate the detrimental effect a classifier can have on people by incentivizing them to modify their behavior in order to "game" the classifier? And what is the precise nature of the incentives and learning behavior involved in the interaction of users with online software platforms?
Inventory Control for Multi-Location and Multi-Product Systems
Inventory management is as old as retail - keeping too much inventory on hand locks up capital and incurs high storage costs; keeping too little risks selling out, losing revenue, and dissatisfying customers. Retail has changed in significant and dramatic ways over the last two decades - demands are now fulfilled from complex fulfillment networks, facilities are often located in increasingly urban areas with very limited storage capacity, and an enormous variety of products compete for space in these facilities. In this project, we build upon a long line of research on this problem and extend it to cope with the myriad new faces of retail and fulfillment in the 21st century.
Sponsored Research Projects
Using Speech and Language to Identify Patients at Risk for Hospitalizations and Emergency Department Visits in Homecare
This study is the first step in exploring an emerging and previously understudied data stream - verbal communication between healthcare providers and patients. In a partnership between Columbia Engineering, the School of Nursing, Amazon, and the largest home healthcare agency in the US, the study will investigate how audio-recorded routine communications between patients and nurses can be used to help identify patients at risk of hospitalization or emergency department visits. The study will combine speech recognition, machine learning, and natural language processing to achieve its goals.
Counterfactual Reinforcement Learning for Personalized Decision-Making
One pervasive task found across data-driven fields (including medical research, education, and business analytics) is personalized decision-making, i.e., determining whether a certain intervention will lead to a desirable outcome based on an individual's characteristics and experiences. We note that the current generation of off-policy/online learning methods that try to solve this problem either (1) ignore (offline) observational data, except in some idealized scenarios, or (2) are oblivious to the invariances present in the underlying causal structure. This leads to poor decision-making performance and a lack of explainability. This project will develop new machinery for advancing the state of the art in personalized policy learning through causal lenses.
Extremely Abstractive Summarization
Most research in text summarization today focuses on summarization of news articles, and for this genre, much of the wording in the summary is directly copied from the summarized article. In contrast, in many other genres, the summary uses language that is very different from the input: it may use extensive paraphrasing, large amounts of compression, syntactic rewriting at the sentence level, and fusion of phrases from different parts of the input document. This kind of summarization is quite difficult for today's deep learning systems. In this proposal, we plan to develop methods to enable generation of three forms of abstraction: paraphrasing, compression, and fusion; we aim to develop separate models for each and compare them with a joint learning approach. This work will be done within a controllable generation paradigm, where the system can determine the abstractive technique that is most appropriate depending on context.
Noemie Perivier, IEOR (advisor: Vineet Goyal)
Interest: Sequential decision making under uncertainty, design of online algorithms in data-rich environments, with applications in revenue management problems.
Mia Chiquier, CS (advisor: Carl Vondrick)
Interest: A computational framework that integrates sound and vision, improving current machine perception systems by adopting a more integrated understanding of agents in environments.