Considering End Users in the Design of News Credibility Annotations

Last week, I attended a working group meeting at the Brown Institute at Columbia to discuss a schema for annotating the credibility of news content and other information online. The working group, hosted by Meedan and Hacks/Hackers, grew out of discussions started at MisinfoCon and incorporates perspectives from industry, academia, journalism, nonprofits, and design, among others.

As part of the day’s schedule, I gave a ~5 minute talk on end user applications for credibility annotations. It was slotted into a segment on use cases, or how credibility annotations could potentially be used by different stakeholders. I’ve now cleaned up my notes from the talk and present them below:


 

I am an HCI researcher designing and building end user tools for collaboration, and in my group, the systems we build tend to focus on giving end users direct control over what they see online, instead of ceding that control to opaque machine learning systems. Thus, today I am speaking on direct end user applications of the annotations, as opposed to using them as inputs to machine learning models built by news or social media organizations. Here, I am using the phrase “end user” to describe a non-expert member of the general population for whom a tool would be designed.

First I want to make the point that, before we jump to thinking about training data and building machine learning models, credibility annotations made by people can be immediately useful to other people just as they are. In fact, there are cases where it may be beneficial not to have a machine learning intermediary or a top-down design enforced by a system.

Who Gets to Choose What You See?

So what might these cases be? One case we need to consider is the importance of visibility in an interface when it comes to attracting attention, and how attention can distort incentives and lead to problems such as fake and misleading news spreading widely on social media. Here it is helpful to consider who gets to determine what is shown and whether their incentives are aligned with those of end users. For instance, on social media, system designers want to show end users engaging content to keep them active on the site, and thus site affordances and algorithms are shaped by engagement. News organizations likewise want to show end users engaging content to get them to click through to their site and generate ad revenue. So what happens? In the end, we get things like clickbait and fake headlines.

Instead, let’s consider what it would take to center the news sharing experience around end user needs. To explore this idea, we built a proof-of-concept tool called Baitless. The idea is really simple: it’s an RSS reader where anyone can load an existing RSS feed, rewrite the headline for any article in the feed, and vote on the best headlines that others have contributed.

We then provide a new RSS feed in which each title is replaced by the best user-written headline. And if a user clicks a link in their RSS feed reader, they are directed to a page where they can read the article and afterwards suggest new headlines directly on the page. In this way, end users can circumvent existing feedback loops and take control of their news reading experience.
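
To make the mechanics concrete, here is a minimal sketch of the rewriting step, assuming a store of user-suggested headlines keyed by article URL. The names (headline_store, best_headline, rewrite_feed) and the sample data are hypothetical, for illustration only, not Baitless’s actual implementation:

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen

# Hypothetical in-memory store of user-suggested headlines, keyed by
# article URL; a real system would back this with its vote database.
headline_store = {
    "https://example.com/story": [
        {"text": "Study finds modest effect; authors urge caution", "votes": 12},
        {"text": "You won't believe this one weird result", "votes": 1},
    ],
}

def best_headline(url):
    """Return the top-voted user-written headline for an article, if any."""
    suggestions = headline_store.get(url, [])
    if not suggestions:
        return None
    return max(suggestions, key=lambda s: s["votes"])["text"]

def rewrite_feed(feed_url):
    """Fetch an RSS feed and swap each item's title for the top-voted
    headline, returning the XML of the rewritten feed."""
    tree = ET.parse(urlopen(feed_url))
    for item in tree.iter("item"):
        replacement = best_headline(item.findtext("link"))
        if replacement:
            item.find("title").text = replacement
    return ET.tostring(tree.getroot(), encoding="unicode")
```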

At a higher level, end users currently cede control over everything they see in their social media feeds to systems that, for the most part, prioritize engagement over other qualities such as veracity. Given that, how could end user-provided annotations give end users control over their news feeds beyond headlines? Imagine if other people could perform actions on my feed, such as removing false articles entirely or annotating news articles with links to refutations or verifications.
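
As a rough sketch of what such delegated actions could look like (every name here is hypothetical, for illustration only): an annotation carries a verdict and an optional evidence link, and a filter applies the annotations a reader has opted into before the feed is rendered.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    article_url: str
    verdict: str            # e.g. "false" or "verified"
    evidence_url: str = ""  # link to a refutation or verification
    author: str = ""

def apply_annotations(feed_items, annotations):
    """Drop items annotated as false and attach evidence links to the rest.
    feed_items is a list of dicts with at least a 'url' key."""
    by_url = {}
    for a in annotations:
        by_url.setdefault(a.article_url, []).append(a)

    kept = []
    for item in feed_items:
        notes = by_url.get(item["url"], [])
        if any(a.verdict == "false" for a in notes):
            continue  # remove articles my chosen annotators marked false
        item["evidence"] = [a.evidence_url for a in notes if a.evidence_url]
        kept.append(item)
    return kept
```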

Who Annotates?

One aspect that is crucial when giving other people or entities power over one’s experience is the concept of trust. That is, who produces credibility annotations could also be an important signal for end users. After all, who I trust could be very different from who you trust. And this notion of trust in the ability to verify or refute information can be very different from the friend and follower networks that social media systems currently have. So if we have this network of the people and organizations that a person trusts, we can then do things like build news feeds that surface verified content as opposed to engaging content, and build reputation systems where actions have consequences for annotators. If you’re interested in this topic, please let me know, as we’ve just begun a project that delves into collecting trust network information and building applications on top of it.
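
To sketch how such a trust network might reorder a feed (the weighting scheme and every name below are assumptions for illustration, not our project’s actual design): give each trusted annotator a weight, let their verifications raise an article’s score and their refutations lower it, and sort by the result instead of by engagement.

```python
def trust_score(article_url, annotations_by_url, trusted):
    """Score an article using annotations from the reader's trust network.

    trusted maps annotator -> weight in [0, 1]; annotations from people
    outside the network carry no weight."""
    score = 0.0
    for a in annotations_by_url.get(article_url, []):
        weight = trusted.get(a["author"], 0.0)
        score += weight if a["verdict"] == "verified" else -weight
    return score

def rank_feed(article_urls, annotations_by_url, trusted):
    """Order a feed so content verified by trusted people surfaces first,
    replacing an engagement-driven ordering."""
    return sorted(
        article_urls,
        key=lambda url: trust_score(url, annotations_by_url, trusted),
        reverse=True,
    )
```

Note one design choice in this sketch: ignoring annotations from outside the network is what would make a reputation system meaningful, since an annotator’s influence has to be earned from each reader.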

An open question, to which we don’t yet know the answer, is whether it is good to put people in control of their experiences in this way, or whether we actually need something like machine learning to direct us to what is credible. Will this make filter bubbles worse, in that people will see less opposing content, or better? More importantly, given the recent research on the backfire effect, how might it affect how people react when they encounter opposing information? Might it make people more receptive if it comes from a trusted source?

Process over Means to an End

I also want to make the point that annotation, rather than being just necessary but tedious work that goes into training models, is a process that can itself benefit end users in certain cases. For instance, news annotation can be a way to educate end users about media literacy. It’s also a way for readers to have more structured engagement with news content and a deeper relationship with news organizations beyond just firing off a comment into the abyss. After all, reading the news is a form of education, and journalists often play the role of educators when they write on a topic.

One project we’ve done in this area is a tool that aims to teach readers to recognize moral framing while reading the news. Using Bloom’s Taxonomy of Educational Objectives as a guide, we can imagine activities that readers could perform to learn and apply skills related to moral framing. To explore the various ways users could annotate while reading, we built a browser extension called Pano (built on top of Eyebrowse, another system of ours exploring social browsing). It allows users to highlight passages in an article and annotate them with their moral framing, comment and vote on a particular annotation, chat on the page, and contribute to a wiki-like summary describing the article’s point of view.
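
For a sense of the data such a tool collects, here is a sketch of what a Pano-style highlight record might look like. The field names and the label set are assumptions (the labels follow common moral foundations terminology), not Pano’s actual schema:

```python
from dataclasses import dataclass, field

# Assumed label set for moral framing, borrowed from moral foundations
# terminology; Pano's actual categories may differ.
MORAL_FRAMES = {"care", "fairness", "loyalty", "authority", "sanctity", "liberty"}

@dataclass
class Highlight:
    """One highlighted passage, annotated with its moral frame; comments
    and votes attach to the annotation itself."""
    page_url: str
    passage: str
    frame: str
    author: str
    votes: int = 0
    comments: list = field(default_factory=list)

    def __post_init__(self):
        if self.frame not in MORAL_FRAMES:
            raise ValueError(f"unknown moral frame: {self.frame!r}")
```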

We conducted a field study comparing the use of our tool to simply completing a written tutorial on moral framing, and found that users who used our tool over a period of 10 days actually got better at writing arguments framed in the other side’s moral values. We also saw heavy use of the highlighting and annotation features, compared to low usage of the other features, such as wiki-editing a summary or commenting.

I wanted to leave you with some parting questions that I hope you’ll consider during this process:

  • When and why might we want to give end users the ability to annotate?
  • How do we design interfaces and interactions for consuming annotations that benefit end users?

Thanks to my collaborators at MIT who helped me create this talk: my advisor David Karger, along with Sandro Hawke, as well as Jessica Wang, a master’s student who built Pano. And thank you to An Xiao Mina and Jenny 8 Lee for inviting me to the working group.
