Funded Projects & Fellows

2022 Funded Projects & Fellows

Gift-Funded Research Projects

Exponentially Faster Parallel Algorithms for Machine Learning

Eric Balkanski (Columbia Engineering)

This proposal aims to develop fast optimization techniques for fundamental problems in machine learning. In a wide variety of domains, such as computer vision, recommender systems, and immunology, the objectives we care to optimize exhibit a natural diminishing-returns property called submodularity. Off-the-shelf tools have been developed to exploit the common structure of these problems and have been used to optimize complex objectives. However, the main obstacle to the widespread use of these optimization techniques is that they are inherently sequential and too slow for problems on large data sets. Consequently, the existing toolbox for submodular optimization is not adequate for solving large-scale optimization problems in ML.

This proposal considers developing novel parallel optimization techniques for problems whose current state-of-the-art algorithms are inherently sequential and hence cannot be parallelized. In a recent line of work, we developed algorithms that achieve an exponential speedup in parallel running time for problems that satisfy a diminishing returns property. These algorithms use new techniques that have shown promising results for problems such as movie recommendation and maximizing influence in social networks. They also open exciting possibilities for further speedups as well as for applications in computer vision and public health, where important challenges remain.
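To make the sequential bottleneck concrete, here is a minimal sketch of the classic greedy algorithm for monotone submodular maximization under a cardinality constraint, the kind of inherently sequential baseline that the proposed parallel methods aim to speed up. This is illustrative only: the coverage objective and data below are made up, and it is not the proposal's algorithm.

```python
# A minimal sketch (not the proposal's algorithm): sequential greedy maximization
# of a monotone submodular function under a cardinality constraint k.
def greedy_submodular(ground_set, f, k):
    selected = set()
    for _ in range(k):
        # Marginal gain of each remaining element given the current selection;
        # this dependence on `selected` is what makes the loop sequential.
        gains = {e: f(selected | {e}) - f(selected) for e in ground_set - selected}
        if not gains:
            break
        best, best_gain = max(gains.items(), key=lambda kv: kv[1])
        if best_gain <= 0:
            break
        selected.add(best)
    return selected

# Toy coverage objective: each item covers a set of users (made-up data).
coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 5, 6}}
f = lambda S: len(set().union(*(coverage[e] for e in S))) if S else 0
print(greedy_submodular(set(coverage), f, k=2))
```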

Conveying Empathy in Spoken Language

Julia Hirschberg (Columbia Engineering)

Much research has been done over the past 15 years on creating empathetic responses in text, facial expression, and gesture in conversational systems. However, almost none has been done to identify the speech features that can create an empathetic-sounding voice. Empathy is the ability to understand another's feelings as if we were having those feelings ourselves, and Compassionate Empathy includes the ability to take action to mitigate any problems. This capability has been found to be especially useful in dialogue systems, avatars, and robots, since empathetic behavior can encourage users to like a speaker more, to believe the speaker is more intelligent, to actually take the speaker's advice, to trust the speaker more, and to want to speak with the speaker longer and more often. We propose to identify the acoustic/prosodic and lexical features that produce empathetic speech by collecting the first corpus of empathetic podcasts and videos, crowdsourcing empathy labels for this corpus, and building machine learning models to identify empathetic speech and the speech, language, and visual features that can be used to generate it.
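As a rough illustration of the feature-extraction step, the following sketch computes simple pitch and energy statistics with librosa and fits a baseline classifier on crowdsourced empathy labels. The file names, labels, and choice of classifier are placeholders, not the project's actual pipeline.

```python
# A hedged sketch: simple acoustic/prosodic features plus a baseline classifier.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def prosodic_features(path):
    y, sr = librosa.load(path, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)  # pitch track
    rms = librosa.feature.rms(y=y)[0]                               # frame energy
    f0 = f0[~np.isnan(f0)]                                          # keep voiced frames only
    return np.array([f0.mean(), f0.std(), rms.mean(), rms.std()]) if f0.size else np.zeros(4)

# Hypothetical labeled corpus: (clip path, 1 = rated empathetic, 0 = not).
corpus = [("clip_001.wav", 1), ("clip_002.wav", 0)]
X = np.stack([prosodic_features(p) for p, _ in corpus])
y = np.array([label for _, label in corpus])
clf = LogisticRegression().fit(X, y)
```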

Sponsored Research Projects

Joint Selection and Inventory Optimization under Limited Capacity

Will Ma (Columbia Graduate School of Business)

Fueled by the insatiable customer desire for faster delivery, e-tailers have begun deploying "forward" distribution centers close to city centers, which have very limited space. Our proposal is to develop scalable optimization algorithms that allow e-tailers to systematically determine the SKU variety and inventory that should be placed in these precious spaces. Our model accounts for demand that depends endogenously on the SKU selection, inventory pooling effects, and the interplay between different categories of SKUs. It is designed to yield insights about the relationship between demand variability and SKU fragmentation, sorting rules for selecting a few SKUs within a given category, and the marginal value of capacity to different categories.
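For intuition, the toy sketch below fills a capacity-limited facility by greedily ranking SKUs on expected profit per unit of space under a simple newsvendor stocking rule. It deliberately omits the proposal's key ingredients (endogenous demand, inventory pooling, cross-category interplay), and all numbers are invented.

```python
# A toy sketch, not the proposed model: greedy SKU selection for a small facility.
import numpy as np
from scipy.stats import norm

def newsvendor_qty(mu, sigma, price, cost):
    # Critical-fractile solution under normally distributed demand, no salvage value.
    return max(0.0, mu + sigma * norm.ppf((price - cost) / price))

def expected_profit(q, mu, sigma, price, cost, n=10_000, seed=0):
    d = np.maximum(np.random.default_rng(seed).normal(mu, sigma, n), 0)
    return float(np.mean(price * np.minimum(d, q) - cost * q))

skus = [  # (name, mean demand, std, price, cost, space per unit) -- made-up values
    ("A", 50, 15, 10, 6, 1.0), ("B", 20, 10, 25, 15, 2.0), ("C", 5, 3, 80, 50, 4.0),
]
capacity, plan, scored = 100.0, {}, []
for name, mu, sigma, p, c, vol in skus:
    q = newsvendor_qty(mu, sigma, p, c)
    scored.append((expected_profit(q, mu, sigma, p, c) / (q * vol + 1e-9), name, q, vol))
for _, name, q, vol in sorted(scored, reverse=True):   # best profit per unit of space first
    take = min(q, capacity / vol)
    if take > 0:
        plan[name] = take
        capacity -= take * vol
print(plan)
```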

Confidence-Aware Reinforcement Learning for Human-in-the-loop Decision Making

Matei Ciocarlie (Columbia Engineering)

Shuran Song (Columbia Engineering)

We propose novel methods for leveraging human assistance in Reinforcement Learning (RL). The sparse reward problem has been one of the biggest challenges in RL, often leading to inefficient exploration and learning. While real-time immediate feedback from a human could resolve this issue, it is often impractical for complex tasks that require a large number of training steps. To address this problem, we aim to develop new confidence measures, which the agent computes during both training and deployment. In this paradigm, a Deep RL policy will train autonomously, but stop and request assistance when the confidence in the ultimate success of the task is too low to continue. We aim to show that expert assistance can speed up learning and/or increase performance, while minimizing the number of calls for assistance made to the expert.
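The minimal sketch below illustrates one plausible way to realize such a confidence measure; it is an assumption for illustration, not the authors' method. A policy wrapper scores confidence by agreement across an ensemble of Q-functions and hands control to a human when the score falls below a threshold.

```python
# A minimal sketch, assuming an ensemble-agreement confidence signal.
import numpy as np

class ConfidenceAwarePolicy:
    """Wraps an ensemble of Q-functions; asks for help when they disagree."""

    def __init__(self, q_ensemble, actions, threshold=0.8):
        self.q_ensemble = q_ensemble   # callables: q(state) -> value per action
        self.actions = actions
        self.threshold = threshold

    def act(self, state, ask_human):
        qs = np.stack([q(state) for q in self.q_ensemble])        # (members, actions)
        greedy = int(qs.mean(axis=0).argmax())
        confidence = float(np.mean(qs.argmax(axis=1) == greedy))  # ensemble agreement
        if confidence < self.threshold:
            return ask_human(state)                               # request expert assistance
        return self.actions[greedy]

# Toy stand-ins: five random linear Q-functions over 3 actions and a 4-d state.
rng = np.random.default_rng(0)
ensemble = [lambda s, w=rng.normal(size=(3, 4)): w @ s for _ in range(5)]
policy = ConfidenceAwarePolicy(ensemble, actions=["left", "stay", "right"])
print(policy.act(np.ones(4), ask_human=lambda s: "expert takes over"))
```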

A Tale of Two Models

Asaf Cidon (Columbia Engineering)

Junfeng Yang (Columbia Engineering)

Full-precision deep learning models are often too large or costly to deploy on edge devices such as Amazon Echo, Ring, and Fire devices. To accommodate the limited hardware resources, models are often quantized, compressed, or pruned. While such techniques often have a negligible impact on top-line accuracy, the adapted models exhibit subtle differences in output compared to the full-precision model from which they are derived.

We propose a new attack, termed the Adversarial Deviation Attack (ADA), that exploits the differences introduced by model quantization, compression, and pruning by adding adversarial noise to input data that maximizes the output difference between the original model and the edge model. ADA constructs malicious inputs that trick the edge model but are virtually undetectable by the original model. Such an attack is particularly dangerous: even after extensive robust training on the original model, quantization, compression, or pruning will always introduce subtle differences, providing ample vulnerabilities for attackers. Moreover, data scientists may not even notice such attacks, because the original model typically serves as the authoritative model version used for validation, debugging, and retraining. We will also investigate how new or existing defenses can fend off ADA attacks, greatly improving the security of edge devices.
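The sketch below captures the spirit of such an attack, not the authors' implementation: a PGD-style loop perturbs the input to maximize the divergence between a full-precision model and a derived edge model. Here the edge model is a pruned float copy so gradients flow; attacking a genuinely quantized model may require gradient estimation instead. Models and data are toy placeholders.

```python
# An illustrative sketch of a deviation-style attack under the assumptions above.
import torch
import torch.nn.functional as F

def deviation_attack(model_full, model_edge, x, eps=0.03, steps=20, lr=0.005):
    """PGD-style noise that pushes the two models' predictions apart on x."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        log_p_full = F.log_softmax(model_full(x + delta), dim=-1)
        p_edge = F.softmax(model_edge(x + delta), dim=-1)
        # Ascend on KL(edge || full): maximize the output deviation.
        loss = F.kl_div(log_p_full, p_edge, reduction="batchmean")
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()
            delta.clamp_(-eps, eps)          # keep the perturbation small
            delta.grad.zero_()
    return (x + delta).detach()

# Toy stand-ins: a small classifier and a crudely magnitude-pruned copy of it.
model_full = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 3))
model_edge = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 3))
model_edge.load_state_dict(model_full.state_dict())
with torch.no_grad():
    model_edge[0].weight[model_edge[0].weight.abs() < 0.1] = 0.0
x_adv = deviation_attack(model_full, model_edge, torch.randn(4, 8))
```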
 

Amazon Fellows

(Graduate)

Tuhin Chakrabarty, CS (advisor: Smaranda Muresan)

Interests: Knowledge-aware models for natural language understanding and generation

Madhumitha Shridharan, IEOR (advisor: Garud Iyengar)

Interests: Optimization methods for computing causal bounds

2021 Funded Projects & Fellows

Gift-Funded Research Projects

Fairness and Incentives in Machine Learning

Christos Papadimitriou (Columbia Engineering)

Tim Roughgarden (Columbia Engineering)

Christos Papadimitriou and Tim Roughgarden, in collaboration with their Amazon Research contacts Michael Kearns and Aaron Roth, will use machine learning, algorithms, and social science techniques to explore, through analysis and experiment, ways in which the tremendous power of machine learning can be applied to make machine learning itself more fair. Can deep nets be trained through synthetic fairness criticism to treat their data more equitably, and can the unfair treatment of subpopulations be discovered automatically? How can one predict and mitigate the detrimental effect a classifier can have on people by incentivizing them to modify their behavior in order to "game" the classifier? And what is the precise nature of the incentives and learning behavior involved in the interaction of users with online software platforms?

Inventory Control for Multi-Location and Multi-Product Systems

Awi Federgruen (Business)

Daniel C. Guetta (Business)

Garud Iyengar (Columbia Engineering)

Inventory management is as old as retail: keeping too much inventory on hand locks up capital and incurs high storage costs, while keeping too little risks stockouts, lost revenue, and customer dissatisfaction. Retail has changed in significant and dramatic ways over the last two decades: demand is now fulfilled from complex fulfillment networks, facilities are often located in increasingly urban areas with very limited storage capacity, and an enormous variety of products compete for space in these facilities. In this project, we build upon a long line of research on this problem and extend it to cope with the myriad new faces of retail and fulfillment in the 21st century.

Sponsored Research Projects

Using Speech and Language to Identify Patients at Risk for Hospitalizations and Emergency Department Visits in Homecare

Zoran Kostic (Columbia Engineering)

Maxim Topaz (Nursing)

Maryam Zolnoori (Nursing)

This study is the first step in exploring an emerging and previously understudied data stream: verbal communication between healthcare providers and patients. In a partnership between Columbia Engineering, the School of Nursing, Amazon, and the largest home healthcare agency in the US, the study will investigate how audio-recorded routine communications between patients and nurses can be used to help identify patients at risk of hospitalization or emergency department visits. The study will combine speech recognition, machine learning, and natural language processing to achieve its goals.
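As a simplified illustration of the later stages of such a pipeline (not the study's actual system), the sketch below trains a bag-of-words risk model on transcripts assumed to come from an upstream speech-recognition step; the transcripts and labels are hypothetical.

```python
# A simplified sketch: risk scoring on (hypothetical) transcripts of home visits.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

transcripts = [
    "patient reports shortness of breath and missed two doses of medication",
    "wound is healing well and patient is walking daily without pain",
]
labels = [1, 0]   # hypothetical: 1 = followed by hospitalization / ED visit

risk_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
risk_model.fit(transcripts, labels)
print(risk_model.predict_proba(["patient feels dizzy and short of breath"])[0, 1])
```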

Counterfactual Reinforcement Learning for Personalized Decision-Making

Elias Bareinboim (Columbia Engineering)

One pervasive task found throughout data-driven fields (including medical research, education, and business analytics) is personalized decision-making, i.e., determining whether a certain intervention will lead to a desirable outcome based on an individual's characteristics and experiences. We note that the current generation of off-policy/online learning methods that try to solve this problem (1) ignore (off-line) observational data, except in some idealized scenarios, or (2) are oblivious to the invariances present in the underlying causal structure. This leads to poor decision-making performance and a lack of explainability. This project will develop new machinery for advancing the state of the art in personalized policy learning through causal lenses.
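As a toy illustration of one ingredient in this line of work (not the project's machinery), the sketch below shows how observational data alone, although unable to identify interventional effects under confounding, still yields Manski-style bounds on E[Y | do(X=x)] for a bounded outcome, which can rule out dominated actions before any online experimentation.

```python
# A toy sketch: Manski-style bounds from observational data for a binary outcome,
#   P(Y=1, X=x) <= E[Y | do(X=x)] <= P(Y=1, X=x) + P(X != x).
import numpy as np

def causal_bounds(x_obs, y_obs, arm):
    joint = np.mean((x_obs == arm) & (y_obs == 1))   # P(Y=1, X=arm)
    lower = joint                                     # unobserved-treatment rows worst-case 0
    upper = joint + np.mean(x_obs != arm)             # unobserved-treatment rows worst-case 1
    return lower, upper

# Hypothetical observational log over two treatments (arms 0 and 1), binary outcome.
rng = np.random.default_rng(0)
x_obs = rng.integers(0, 2, 5000)
y_obs = (rng.random(5000) < np.where(x_obs == 1, 0.7, 0.3)).astype(int)

bounds = {arm: causal_bounds(x_obs, y_obs, arm) for arm in (0, 1)}
# An arm is dominated (and can be skipped online) if its upper bound falls below
# another arm's lower bound.
print(bounds)
```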

Extremely Abstractive Summarization

Kathleen McKeown (Columbia Engineering)

Most research in text summarization today focuses on news articles, and for this genre much of the wording in the summary is copied directly from the article being summarized. In contrast, in many other genres the summary uses language that is very different from the input: it may use extensive paraphrasing, large amounts of compression, syntactic rewriting at the sentence level, and fusion of phrases from different parts of the input document. This kind of summarization is quite difficult for today's deep learning systems. In this proposal, we plan to develop methods to enable generation of three forms of abstraction: paraphrasing, compression, and fusion; we aim to develop separate models for each and compare them with a joint learning approach. This work will be done within a controllable generation paradigm, in which the system can determine the abstractive technique that is most appropriate for the context.
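One common way to set up such controllable generation, assumed here for illustration rather than taken from the authors' system, is to prepend a control token indicating the desired abstraction operation to the source text. The sketch below shows this pattern with a stand-in t5-small model; the control tokens would only acquire meaning after fine-tuning on data labeled with them.

```python
# A hedged sketch of control-token conditioning; "t5-small" is just a stand-in base model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def summarize(text, control="<compress>", max_new_tokens=60):
    # e.g. control in {"<paraphrase>", "<compress>", "<fuse>"}, chosen per context.
    inputs = tokenizer(f"summarize: {control} {text}", return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(summarize("Long source document goes here ...", control="<compress>"))
```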

Amazon Fellows

(Graduate)

Noemie Perivier, IEOR (advisor: Vineet Goyal)

Interest: Sequential decision making under uncertainty, design of online algorithms in data-rich environments, with applications in revenue management problems.

Mia Chiquier, CS (advisor: Carl Vondrick)

Interest: Computational frameworks that integrate sound and vision, improving current machine perception systems through a more integrated understanding of agents in their environments.