A.I. needs to get real—and other takeaways from this year’s NeurIPS

This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

Hello and welcome to the last “Eye on A.I.” of 2020! I spent last week immersed in the Neural Information Processing Systems (NeurIPS) conference, the annual gathering of top academic A.I. researchers. It’s always a good spot for taking the pulse of the field. Held completely virtually this year thanks to COVID-19, it attracted more than 20,000 participants. Here are a few of the highlights.


Charles Isbell’s opening keynote was a tour de force that made great use of the pre-recorded video format, including some basic special-effects edits and cameos by many other leading A.I. researchers. The Georgia Tech professor’s message: it’s past time for A.I. research to grow up and become more concerned about the real-world consequences of its work. Machine learning researchers should stop ducking responsibility by claiming such considerations belong to other fields—data science or anthropology or political science.

Isbell urged the field to adopt a systems approach: how a piece of technology will operate in the world, who will use it, on whom it will be used or misused, and what could possibly go wrong are all questions that should be front and center when A.I. researchers sit down to create an algorithm. And to get answers, machine learning scientists need to collaborate far more with other stakeholders.

Many of the invited speakers picked up on this theme: how to ensure A.I. does good, or at least does no harm, in the real world.


Saiph Savage, director of the human-computer interaction lab at West Virginia University, talked about her efforts to lift the prospects of A.I.’s “invisible workers”—the low-paid contractors who are often used to label the data on which A.I. software is trained—by helping them train one another. In this way, the workers gained some new skills and, by becoming more productive, could potentially earn more from their work. She also talked about efforts to use A.I. to find the best strategies for helping these workers unionize or engage in other collective action that might better their economic prospects.


Marloes Maathuis, a professor of theoretical and applied statistics at ETH Zurich, looked at how directed acyclic graphs (DAGs) could be used to derive causal relationships in data. Understanding causality is essential for many real-world uses of A.I., particularly in contexts like medicine and finance. Yet one of the biggest problems with neural network-based deep learning is that such systems are very good at discovering correlations, but often useless for figuring out causation. One of Maathuis’s main points was that in order to suss out causation it is important to make causal assumptions and then test them. And that means talking to domain experts who can at least hazard some educated guesses about the underlying dynamics. Too often machine learning engineers don’t bother, falling back on deep learning to work out correlations. That’s dangerous, Maathuis implied.
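
To make the correlation-versus-causation point concrete, here is a toy sketch in Python. It is my own illustration, not an example from Maathuis’s talk, and every name and number in it is made up. A hidden factor Z drives both X and Y, so the two are strongly correlated even though X has no effect on Y; only a causal assumption about the graph, tested by an intervention, reveals that.

```python
# Toy illustration (not from Maathuis's talk): a hidden confounder Z drives both
# X and Y, so X and Y are strongly correlated even though X has no effect on Y.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed causal graph (a DAG): Z -> X and Z -> Y, with no arrow from X to Y.
z = rng.normal(size=n)                 # hidden confounder
x = 2.0 * z + rng.normal(size=n)       # X is driven by Z
y = 3.0 * z + rng.normal(size=n)       # Y is driven by Z, not by X

print("Observed correlation(X, Y):", round(np.corrcoef(x, y)[0, 1], 2))  # roughly 0.85

# "Intervening" on X (setting it independently of Z) breaks the correlation,
# revealing that X does not cause Y. A model trained only on the observational
# data would happily keep predicting Y from X.
x_do = rng.normal(size=n)              # X set by intervention, ignoring Z
print("Correlation under intervention:", round(np.corrcoef(x_do, y)[0, 1], 2))  # roughly 0.0
```

In a real setting the “intervention” might be an experiment, or a statistical adjustment justified by an assumed DAG, which is exactly the kind of assumption Maathuis argued has to be made explicit and checked with domain experts.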


It was hard to ignore that this year’s conference took place against the backdrop of the continuing controversy over Google’s treatment of Timnit Gebru, the well-respected A.I. ethics researcher and one of the very few Black women in the company’s research division, who left the company two weeks earlier (she says she was fired; the company continues to insist she resigned). Some attending NeurIPS voiced support for Gebru in their talks. (Many more did so on Twitter. Gebru herself also appeared on a few panels that were part of a conference workshop on creating “Resistance A.I.”) The academics were particularly disturbed that Google had forced Gebru to withdraw a research paper it didn’t like, saying the episode raised troubling questions about corporate influence over A.I. research in general, and A.I. ethics research in particular. A paper presented at the “Resistance A.I.” workshop explicitly compared Big Tech’s involvement in A.I. ethics to Big Tobacco’s funding of bogus science around the health effects of smoking. Some researchers said they would stop reviewing conference papers from Google-affiliated researchers because they could no longer be sure the authors weren’t hopelessly conflicted.


Here are a few other research strands to keep an eye on:

• A team from semiconductor giant Nvidia showcased a new technique for dramatically reducing the amount of data needed to train a generative adversarial network (or GAN, the type of A.I. used to create deepfakes). Using the technique, which Nvidia calls adaptive discriminator augmentation (or ADA), the team was able to train a GAN to generate images in the style of artwork found in the Metropolitan Museum of Art using fewer than 1,500 training examples, which the company says is at least 10 to 20 times less data than would normally be required. (A rough sketch of the general idea appears after this list.)

• OpenAI, the San Francisco A.I. research shop, won a best research paper award for its work on GPT-3, the ultra-large language model that can generate long passages of novel and coherent text from just a small human-written prompt. The paper focused on GPT-3’s ability to perform many other language tasks—such as answering questions about a text or translating between languages—with either no additional training or just a few examples to learn from. GPT-3 is massive, with some 175 billion parameters, and it was trained on many terabytes of textual data. It’s interesting to see the OpenAI team concede in the paper that “we are probably approaching the limits of scaling,” and that to make further progress new methods will be necessary. It is also notable that OpenAI mentions many of the same ethical issues with large language models like GPT-3—the way they absorb racist and sexist biases from the training data, their huge carbon footprint—that Gebru was trying to highlight in the paper that Google tried to force her to retract.

• The other two “best paper” award winners are worth noting too: Researchers from Politecnico di Milano, in Italy, and Carnegie Mellon University used concepts from game theory to create an algorithm that acts as an automated mediator in an economic system with multiple self-interested agents, suggesting actions for each to take that will bring the entire system into the best equilibrium. The researchers said such a system could be useful for managing “gig economy” workers.

• A team from the University of California, Berkeley scooped up an award for research showing that it is possible, through careful selection of representative samples, to summarize most real-world data sets. The finding cuts against prior work that had essentially argued that, because a few datasets could be shown to have no good representative sample, summarization itself was a dead end. Automated summarization, of both text and other data, is becoming a hot topic in business analytics, so the research may wind up having commercial impact.
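
For readers curious about the mechanics behind Nvidia’s data-saving trick mentioned above, here is a rough, simplified sketch. It is my own illustration with made-up names, thresholds, and toy data, not Nvidia’s code. The core idea of adaptive discriminator augmentation is to distort the images shown to the GAN’s discriminator with a probability that is raised when the discriminator starts to overfit the small training set and lowered when it does not.

```python
# Rough sketch of the adaptive-augmentation idea (illustrative only; Nvidia's
# actual ADA implementation sits inside a full GAN training loop).
import numpy as np

rng = np.random.default_rng(0)

def update_aug_probability(p, overfit_signal, target=0.6, step=0.01):
    """Raise p when the discriminator looks overfit, lower it otherwise."""
    p += step if overfit_signal > target else -step
    return float(np.clip(p, 0.0, 1.0))

def maybe_augment(images, p):
    """Apply a cheap augmentation (here: a horizontal flip) to each image with probability p."""
    flip = rng.random(len(images)) < p
    images = images.copy()
    images[flip] = images[flip][:, :, ::-1]   # flip the width axis of (N, H, W) images
    return images

# Toy loop: pretend the overfitting signal drifts upward as training goes on.
p = 0.0
for step_i in range(1, 201):
    overfit_signal = min(1.0, step_i / 150)   # stand-in for a real overfitting heuristic
    p = update_aug_probability(p, overfit_signal)

real_batch = rng.normal(size=(8, 64, 64))     # fake "images" just to exercise the code
augmented = maybe_augment(real_batch, p)
print("augmentation probability after 200 steps:", round(p, 2))
```

In Nvidia’s published description, the augmentations are far richer (color shifts, geometric transforms, and more) and the overfitting signal comes from the discriminator’s own outputs, but a raise-or-lower feedback loop like this is what lets the GAN learn from so few examples without the augmentations leaking into the generated images.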

I will highlight a few other things I found interesting in the Research and Brain Food sections below. And for those who responded to Jeff’s post last week about A.I. in the movies, thank you. We’ll share some of your thoughts below too. Since “Eye on A.I.” will be on hiatus for the upcoming few weeks, I want to wish you happy holidays and best wishes for a happy, healthy new year! We’ll be back in 2021. Now, here’s the rest of this week’s A.I. news.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
