Democratizing AI

Amazon Alexa AI VP kicks off new lecture series from the Center of Artificial Intelligence Technology

July 23, 2021

How is the accelerating revolution in artificial intelligence transforming our world? And how can increasingly accessible AI tools bring the promise of these advances to all?

Addressing these questions and more was the focus of the inaugural lecture in the Distinguished Lecture Series from the Center of Artificial Intelligence Technology (CAIT), founded last year as a collaboration between Columbia and Amazon to advance research, support scholarship, and convene cross-disciplinary thought leaders.

Prem Natarajan, vice president of natural understanding for Amazon’s Alexa AI organization, explained that AI is on the verge of entering an “age of self,” an age in which AI systems such as Alexa become more self-aware and more self-learning, and in which they lend themselves to self-service by experienced and novice developers and even end users.

Self-awareness, Natarajan said, is the ability to maintain an awareness of ambient state (e.g., through sensor data) and to employ commonsense reasoning to make inferences that reflect that awareness and prior/world knowledge. Self-awareness can make AIs more proactive and personalized.

Self-learning is the ability of AI to improve and expand its abilities without explicit human intervention—for example, by leveraging implicit feedback from human users, rather than requiring annotated training data. “Every user request is a learning opportunity,” Natarajan said.
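
Natarajan did not detail Alexa’s self-learning mechanisms, but the toy Python sketch below illustrates the general idea: implicit behavioral signals, such as an immediate rephrase or an interruption, are treated as weak labels, so every interaction can become a training example without human annotation. The class names, fields, and labeling rule are illustrative assumptions, not Alexa’s actual pipeline.

```python
# Illustrative sketch (assumed names and rules): turning implicit user
# feedback into weak labels for retraining an intent classifier.
from dataclasses import dataclass


@dataclass
class Interaction:
    utterance: str          # what the user said
    predicted_intent: str   # what the system chose to do
    rephrased: bool         # user immediately rephrased the request
    interrupted: bool       # user barged in or cancelled the response


def implicit_label(event: Interaction) -> int:
    """Map implicit behavior to a weak label: 1 = likely correct, 0 = likely wrong."""
    return 0 if (event.rephrased or event.interrupted) else 1


def to_training_example(event: Interaction) -> tuple[dict, int]:
    """Each logged interaction becomes a weakly labeled training example,
    so the model can improve without explicit human annotation."""
    features = {"utterance": event.utterance, "intent": event.predicted_intent}
    return features, implicit_label(event)


log = [
    Interaction("play jazz", "PlayMusicIntent", rephrased=False, interrupted=False),
    Interaction("turn on the lamp", "PlayMusicIntent", rephrased=True, interrupted=False),
]
training_data = [to_training_example(e) for e in log]
```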

Self-service is the use of autonomous AI to democratize the development of sophisticated machine learning models, enabling users to, say, create their own models simply by specifying sample uses.
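
As a hypothetical illustration of what “specifying sample uses” could look like, the sketch below has a developer list example utterances per intent, and a toy builder turns them into a deliberately crude classifier. The intent names, sample phrases, and keyword-matching stand-in are assumptions for illustration only; the point is that the developer supplies examples rather than a model.

```python
# A developer specifies only sample uses for each intent; tooling would
# generate the model from them. The keyword matcher below is a toy stand-in.
sample_uses = {
    "OrderCoffeeIntent": [
        "get me a large latte",
        "order an espresso",
        "I'd like a cappuccino",
    ],
    "CancelOrderIntent": [
        "cancel my order",
        "never mind the coffee",
    ],
}


def build_keyword_classifier(samples: dict[str, list[str]]):
    """Return a classifier that picks the intent whose sample uses
    share the most words with the incoming utterance."""
    vocab = {intent: set(" ".join(utts).lower().split()) for intent, utts in samples.items()}

    def classify(utterance: str) -> str:
        words = set(utterance.lower().split())
        return max(vocab, key=lambda intent: len(words & vocab[intent]))

    return classify


classify = build_keyword_classifier(sample_uses)
print(classify("please order me a latte"))  # -> OrderCoffeeIntent
```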

All of these technologies, Natarajan said, have the goal of making interaction with AI more natural and intuitive.

“All of the leaps over the past few decades have really brought us back to our roots, because as humans our brains are wired for speech,” Natarajan said. “We’ve been interacting with computers for less than a hundred years at best, and for the most part, that interaction has been anything but natural. With voice interfaces, the aspiration is to bring naturalness to AI.”

But as much progress as has been made over the past several years, the common sense and contextual awareness required to interpret subtle social cues have remained elusive. “Figuring out when it’s our turn to talk is a cognitively demanding task,” Natarajan said. “Knowing what to say in an open-ended conversation is even more challenging.”

Natarajan also talked about the importance of academic research into AI fairness and about the NSF’s Fairness in AI program, a collaboration with Amazon that he helped launch in 2019.

Self-service, Natarajan said, democratizes AI by making it easier to develop useful AI systems.

“In building all of this AI in the last five or six years, we’ve built frameworks like the Alexa Skills Kit and the Alexa Voice Service,” Natarajan said. “Other providers have built similar frameworks that allow developers to create conversational experiences without having to be AI experts, where you can build upon these frameworks to launch rich, exciting experiences for users. That makes development easier, allowing more people to participate in this environment and this opportunity.”
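
To give a sense of the kind of development the Alexa Skills Kit enables, the minimal sketch below uses the publicly documented ASK SDK for Python to handle a single intent; the intent name and response text are assumptions, and a real skill would also define an interaction model and additional handlers.

```python
# Minimal sketch of an Alexa skill handler using the ASK SDK for Python.
# "CoffeeFactIntent" is a hypothetical intent name for illustration.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response


class CoffeeFactHandler(AbstractRequestHandler):
    """Responds when the assumed CoffeeFactIntent is matched."""

    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("CoffeeFactIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        speech = "Espresso has less caffeine per serving than drip coffee."
        return handler_input.response_builder.speak(speech).response


sb = SkillBuilder()
sb.add_request_handler(CoffeeFactHandler())

# Entry point when the skill is hosted on AWS Lambda.
lambda_handler = sb.lambda_handler()
```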

Following his talk, Natarajan took questions from CAIT director Shih-Fu Chang, current interim dean of Columbia Engineering. Former Dean and current University Provost Mary C. Boyce introduced the proceedings. In the next lecture in the series, on May 17, incoming Professor of Computer Science Richard Zemel discussed building agile machine learning models.