
Issue 9: CLIP-based Generative Art, “Horrorshow” of YouTube’s Recommender System

A new Mozilla study on the “horrorshow” of YouTube’s recommender system, notes from a Turing Lecture given by three deep learning pioneers, building data teams at a mid-stage startup, and more

Welcome to issue #9 of The Comet Newsletter! 

This week, we share a new Mozilla study that suggests YouTube’s recommender system remains a “horrorshow”, as well as notes from a Turing Lecture from three deep learning pioneers.

Additionally, we highlight an emerging generative art scene built around OpenAI’s CLIP model, and an incisive look at what it’s like to build a data science team at a mid-stage startup.

Like what you’re reading? Subscribe here.

And be sure to follow us on Twitter and LinkedIn — drop us a note if you have something we should cover in an upcoming issue!

Happy Reading,

Austin

Head of Community, Comet

——————————–


YouTube’s Recommender AI Still a “Horrorshow”, Finds Crowdsourced Study by Mozilla

Big Tech continues to draw scrutiny over the impact of its recommendation algorithms. In this article, researchers from Mozilla discuss their crowdsourced research project, RegretsReporter, a browser extension that lets users self-report YouTube videos they “regret” watching. The reports in the study were collected between July 2020 and May 2021.

Natasha Lomas, author of the TechCrunch article, notes, “The crowdsourced volunteers whose data fed Mozilla’s research reported a wide variety of ‘regrets,’ including videos spreading COVID-19 fear-mongering, political misinformation, and ‘wildly inappropriate’ children’s cartoons, per the report — with the most frequently reported content categories being misinformation, violent/graphic content, hate speech and spam/scams.”

71% of the videos flagged in the regret reports came from YouTube’s recommendation algorithm, a figure that highlights the algorithm’s role in pushing poor or malicious content to users. The researchers also noted that the rate of regrets was 60% higher in countries where English is not a primary language.

“Things like the algorithm recommending content essentially accidentally, that it later is like ‘oops, this actually violates our policies; we shouldn’t have actively suggested that to people’ … And things like the non-English-speaking user base having worse experiences — these are things you hear discussed a lot anecdotally and activists have raised these issues,” said Brandi Geurkink, Senior Manager of Advocacy at Mozilla. “But I was just like — oh wow, it’s actually coming out really clearly in our data.”

Many of the videos flagged in these reports would likely fall into what YouTube calls “borderline content”: content that toes the line of acceptability and is much harder for algorithmic moderation systems to flag. YouTube currently does not provide a clear definition of what it considers “borderline content”, making it impossible for the Mozilla researchers to verify whether the content flagged in their study falls into this category.

The Mozilla report is based “on data from more than 37,000 YouTube users who installed the extension, although it was a subset of 1,162 volunteers — from 91 countries — who submitted reports that flagged 3,362 regrettable videos that the report draws on directly.”

The research team did note that a “regret” is a self-reported measure of a bad user experience, and therefore a subjective metric. But they also found that flagged videos acquired 70% more views than other videos watched by volunteers on the platform, which suggests the recommendation system optimizes for what drives views rather than for the quality of the content itself.

Read the full TechCrunch article here.

——————————–


Building a data team at a mid-stage startup: a short story

In a short work of fiction that hits close to home, Erik Bernhardsson writes about the experience of introducing data-driven practices at a mid-stage startup.

Successful Data Science is as much about processes as it is about tools. In this story, Bernhardsson covers some of the barriers and pushback aspiring data scientists can anticipate when trying to make their organizations more data-driven: from convincing product teams of the importance of good experimentation, to acquiring the engineering resources to build out solid data infrastructure, to setting clear expectations in the organization about the function of a Data Science team.

This story is an incisive glimpse into the day-to-day life of a Data Scientist, especially as it relates to dealing with people and processes rather than tools and datasets. 

Read Erik’s full post here.

——————————–


Alien Dreams: An Emerging Art Scene

In this blog post, Charlie Snell (@sea_snell) writes about the impact of OpenAI’s CLIP model on the generative art community, while also showcasing some really impressive examples of the model in action. 

CLIP is a large multi-modal model consisting of an image encoder and a text encoder that map their inputs into a single shared representational space, so that an image and a text description of it end up with nearby embeddings.
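
For a rough feel of how this works in practice, here is a minimal sketch, assuming the openly released "openai/clip-vit-base-patch32" checkpoint loaded through Hugging Face’s transformers library (the image path and prompts are placeholders), that scores a single image against a few text prompts in that shared space:

    from PIL import Image
    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("photo.jpg")  # placeholder image path
    prompts = ["a photo of a cat", "a painting of a city at night"]

    # Encode both modalities; the model compares them in its shared space.
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # logits_per_image holds the image's similarity to each text prompt.
    probs = outputs.logits_per_image.softmax(dim=-1)
    print(dict(zip(prompts, probs[0].tolist())))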

Snell writes, in summary, “[A]rtists, researchers, and deep learning enthusiasts have figured out how to utilize CLIP as an effective ‘natural language steering wheel’ for various generative models, allowing artists to create all sorts of interesting visual art merely by inputting some text.”
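
The steering loop behind much of this art is conceptually simple: render an image from a generator’s latent code, embed the image with CLIP’s image encoder, and nudge the latent so that the image embedding moves toward the embedding of the text prompt. Below is a schematic, runnable sketch of that loop with tiny random stand-in networks in place of a real pretrained generator and CLIP’s encoders (real pipelines typically pair CLIP with a pretrained generator such as VQGAN):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Stand-ins so the loop runs end-to-end; in practice these would be a
    # pretrained generator and CLIP's image/text towers. (Hypothetical.)
    generator = nn.Sequential(nn.Linear(64, 3 * 32 * 32), nn.Tanh())
    image_encoder = nn.Linear(3 * 32 * 32, 128)   # stands in for CLIP's image encoder
    text_embedding = torch.randn(1, 128)          # stands in for CLIP(text prompt)

    z = torch.randn(1, 64, requires_grad=True)    # the latent code we optimize
    opt = torch.optim.Adam([z], lr=0.05)

    for step in range(200):
        image = generator(z)                      # render an image from the latent
        image_embedding = image_encoder(image)    # embed it into CLIP space
        # Steer: maximize cosine similarity between image and text embeddings.
        loss = -F.cosine_similarity(image_embedding, text_embedding).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

Swap the stand-ins for pretrained models and this loop becomes the “natural language steering wheel” Snell describes.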

Check out Charlie’s full write-up here.

——————————–


Turing Lecture: Deep Learning for AI (Recap)

In this lecture recap for the ACM, Turing Award winners Yoshua Bengio, Yann LeCun, and Geoffrey Hinton review the current state of deep learning research and its role in building AI.

The key takeaways center on directions for improving AI:

  • Supervised learning requires too much labelled data, and reinforcement learning requires too many trials and can be brittle across tasks.
  • Current systems are not as robust to changes in distribution as humans, who can quickly adapt to such changes with very few examples.
  • Current deep learning is most successful at perception tasks, generally what are called System 1 tasks. Using deep learning for System 2 tasks that require a deliberate sequence of steps is an exciting area that’s still in its infancy.

Below, we’ve highlighted some of the more in-depth bits of the conversation, with quotes from the paper.

Robustness to Change in Data Distributions: 

The I.I.D. (independent and identically distributed) assumption sets the expectation that the test cases for an algorithm come from the same distribution as its training set. This rarely holds in the real world: training data cannot capture all the variability of real environments, and quick adaptation to out-of-distribution inputs is critical to building a truly intelligent system. Humans generalize in a more sophisticated way than these I.I.D. systems do, and need significantly fewer examples to do so; reducing this sample complexity is another key component of building robust AI.
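
As a minimal, synthetic illustration of this gap, the sketch below fits a classifier on one input distribution and evaluates it both on an I.I.D. test set and on a shifted one (the data, shift size, and model are all toy assumptions):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def sample(n, shift=0.0):
        # Two Gaussian classes; `shift` moves the whole distribution.
        X0 = rng.normal(loc=-1 + shift, scale=1.0, size=(n, 2))
        X1 = rng.normal(loc=+1 + shift, scale=1.0, size=(n, 2))
        return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

    X_train, y_train = sample(500)            # training distribution
    clf = LogisticRegression().fit(X_train, y_train)

    X_iid, y_iid = sample(500)                # same distribution as training
    X_ood, y_ood = sample(500, shift=2.0)     # shifted distribution

    print("i.i.d. accuracy:", clf.score(X_iid, y_iid))    # stays high
    print("shifted accuracy:", clf.score(X_ood, y_ood))   # drops sharply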

Moving from Homogeneous Layers to Groups of Neurons That Represent Entities:

“Neuroscience suggests that in humans, groups of nearby neurons are tightly connected and might represent a kind of higher-level vector-valued unit able to send not just a scalar quantity but rather a set of coordinated values. 

“This idea is inherent in the use of soft-attention mechanisms, where each element in the set is associated with a vector, from which one can read a key vector and a value vector (and sometimes also a query vector).”
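
For a concrete reference point, the following minimal sketch implements the soft-attention pattern the quote describes, where each element in a set carries a vector from which key, value, and query vectors are read (all sizes are arbitrary toy values):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    d_model, d_head, n = 64, 32, 10        # toy dimensions
    x = torch.randn(n, d_model)            # a set of n vector-valued elements

    to_q = nn.Linear(d_model, d_head)      # read a query vector from each element
    to_k = nn.Linear(d_model, d_head)      # read a key vector from each element
    to_v = nn.Linear(d_model, d_head)      # read a value vector from each element

    q, k, v = to_q(x), to_k(x), to_v(x)
    scores = q @ k.T / d_head ** 0.5       # compare every query with every key
    weights = F.softmax(scores, dim=-1)    # soft, differentiable selection
    out = weights @ v                      # each element gathers a weighted sum of values

Each element ends up sending and receiving a coordinated set of values rather than a single scalar, which is exactly the higher-level, vector-valued unit the authors describe.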

Higher-level Cognition:

Current deep learning systems excel at perception, which generally falls under System 1 tasks. Using the outputs of these perception systems for higher-level planning and execution (System 2 tasks) is still an open area of research; one suggested direction builds on the spirit of the value functions that guide Monte Carlo tree search in AlphaGo.

Understanding causality is another key problem to address when building higher-level cognition. 

“The ability of young children to perform causal discovery suggests this may be a basic property of the human brain, and recent work suggests that optimizing out-of-distribution generalization under interventional changes can be used to train neural networks to discover causal dependencies or causal variables. How should we structure and train neural nets so they can capture these underlying causal properties of the world?”

Early AI systems in the 20th Century started out as hand-coded, rule-based symbolic systems. The symbols themselves had no meaning other than their relation to other symbols. They did, however, have the ability to perform reasoning by being able to factorize knowledge into pieces that could easily be recombined in a sequence of computational steps, and were able to manipulate abstract variables, types, and instances. 

“We would like to design neural networks which can do all these things while working with real-valued vectors so as to preserve the strengths of deep learning which include efficient large-scale learning using differentiable computation and gradient-based adaptation, grounding of high-level concepts in low-level perception and action, handling uncertain data, and using distributed representations.” 

Read the full report here.

Austin Kodra

Austin Kodra is the Head of Community at Comet, where he works with Comet's talented community of Data Scientists and Machine Learners.