Larry Hosken. Technical writer. Puzzlehunt enthusiast.

Warnings That Work: Combating Misinformation Without Deplatforming

2 Comments and 4 Shares

Ben Kaiser, Jonathan Mayer, and J. Nathan Matias

This post originally appeared on Lawfare.

“They’re killing people.” President Biden lambasted Facebook last week for allowing vaccine misinformation to proliferate on its platform. Facebook issued a sharp rejoinder, highlighting the many steps it has taken to promote accurate public health information and expressing angst about government censorship.

Here’s the problem: Both are right. Five years after Russia’s election meddling, and more than a year into the COVID-19 pandemic, misinformation remains far too rampant on social media. But content removal and account deplatforming are blunt instruments fraught with free speech implications. Both President Biden and Facebook have taken steps to dial down the temperature since last week’s dustup, but the fundamental problem remains: How can platforms effectively combat misinformation with steps short of takedowns? As our forthcoming research demonstrates, providing warnings to users can make a big difference, but not all warnings are created equal.

The theory behind misinformation warnings is that if a social media platform provides an informative notice to a user, that user will then make more informed decisions about what information to read and believe. In the terminology of free speech law and policy, warnings could act as a form of counterspeech for misinformation. Facebook recognized as early as 2017 that warnings could alert users to untrustworthy content, provide relevant facts, and give context that helps users avoid being misinformed. Since then, Twitter, YouTube, and other platforms have adopted warnings as a primary tool for responding to misinformation about COVID-19, elections, and other contested topics.

But as academic researchers who study online misinformation, we unfortunately see little evidence that these types of misinformation warnings are working. Study after study has shown minimal effects for common warning designs. In our own laboratory research, appearing at next month’s USENIX Security Symposium, we found that many study participants didn’t even notice typical warnings—and when they did, they ignored the notices. Platforms sometimes claim the warnings work, but the drips of data they’ve released are unconvincing.

The fundamental problem is that social media platforms rely predominantly on “contextual” warnings, which appear alongside content and provide additional information as context. This is the exact same approach that software vendors initially took 20 years ago with security warnings, and those early warning designs consistently failed to protect users from vulnerabilities, scams, and malware. Researchers eventually realized that not only did contextual warnings fail to keep users safe, but they also formed a barrage of confusing indicators and popups that users learned to ignore or dismiss. Software vendors responded by collaborating closely with academic researchers to refine warnings and converge on measures of success; a decade of effort culminated in modern warnings that are highly effective and protect millions of users from security threats every day.

Social media platforms could have taken a similar approach, with transparent and fast-paced research. If they had, perhaps we would now have effective warnings to curtail the spread of vaccine misinformation. Instead, with few exceptions, platforms have chosen incrementalism over innovation. The latest warnings from Facebook and Twitter, and previews of forthcoming warnings, are remarkably similar in design to warnings Facebook deployed and then discarded four years ago. Like most platform warnings, these designs feature small icons, congenial styling, and discreet placement below offending content.

When contextual security warnings flopped, especially in web browsers, designers looked for alternatives. The most important development has been a new format of warning that interrupts users’ actions and forces them to make a choice about whether to continue. These “interstitial” warnings are now the norm in web browsers and operating systems.

In our forthcoming publication—a collaboration with Jerry Wei, Eli Lucherini, and Kevin Lee—we aimed to understand how contextual and interstitial disinformation warnings affect user beliefs and information-seeking behavior. We adapted methods from security warnings research, designing two studies where participants completed fact-finding tasks and periodically encountered disinformation warnings. We placed warnings on search results, as opposed to social media posts, to provide participants with a concrete goal (finding information) and multiple pathways to achieve that goal (different search results). This let us measure behavioral effects with two metrics: clickthrough, the rate at which participants bypassed the warnings, and the number of alternative visits, where after seeing a warning, a participant checked at least one more source before submitting an answer.
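For concreteness, the two behavioral metrics could be computed from per-encounter logs along these lines. This is a minimal sketch with entirely made-up data, not the study's actual instrumentation:

```python
# Hypothetical warning-encounter log: each entry records whether the participant
# bypassed the warning and how many alternative sources they visited afterward.
encounters = [
    {"clicked_through": True,  "alt_sources": 0},
    {"clicked_through": True,  "alt_sources": 2},
    {"clicked_through": False, "alt_sources": 1},
    {"clicked_through": True,  "alt_sources": 0},
]

# Clickthrough: the rate at which participants bypassed the warnings.
clickthrough_rate = sum(e["clicked_through"] for e in encounters) / len(encounters)

# Alternative visits: encounters after which the participant checked
# at least one more source before submitting an answer.
alternative_visits = sum(1 for e in encounters if e["alt_sources"] >= 1)

print(clickthrough_rate)   # 0.75
print(alternative_visits)  # 2
```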

In the first study, we found that laboratory participants rarely noticed contextual disinformation warnings in Google Search results, and even more rarely took the warnings into consideration. When searching for information, participants overwhelmingly clicked on sources despite contextual warnings, and they infrequently visited alternative sources. In post-task interviews, more than two-thirds of participants told us they didn’t even realize they had encountered a warning.

For our second study, we hypothesized that interstitial warnings could be more effective. We recruited hundreds of participants on Mechanical Turk for another round of fact-finding tasks, this time using a simulated search engine to control the search queries and results. Participants could find the facts by clicking on relevant-looking search results, but they would first be interrupted by an interstitial warning, forcing them to choose whether to continue or go back to the search results. 

The results were stunning: Interstitial warnings dramatically changed what users chose to read. Users overwhelmingly noticed the warnings, considered the warnings, and then either declined to read the flagged content or sought out alternative information to verify it. Importantly, users also understood the interstitial warnings. When presented with an explanation in plain language, participants correctly described both why the warning appeared and what risk the warning was highlighting.

Platforms do seem to be—slowly—recognizing the promise of interstitial misinformation warnings. Facebook, Twitter, and Reddit have tested full-page interstitial warnings similar to the security warnings that inspired our work, and the platforms have also deployed other formats of interstitials. The “windowshade” warnings that Instagram pioneered are a particularly thoughtful design. Platforms are plainly searching for misinformation responses that are more effective than contextual warnings but also less problematic than permanent deplatforming. Marjorie Taylor Greene’s vaccine misinformation, for example, recently earned her a brief, 12-hour suspension from Twitter, restrictions on engagement with her tweets, and contextual warnings—an ensemble approach to content moderation.

But platforms remain extremely tentative with interstitial warnings. For the vast majority of mis- and disinformation that platforms identify, they still either apply tepid contextual warnings or resort to harsher moderation tools like deleting content or banning accounts.

Platforms may be concerned that interstitial warnings are too forceful, and that they go beyond counterspeech by nudging users to avoid misinformation. But the point is to have a spectrum of content moderation tools to respond to the spectrum of harmful content. Contextual warnings may be appropriate for lower-risk misinformation, and deplatforming may be the right move for serial disinformers. Interstitial warnings are a middle-ground option that deserves a place in the content moderation toolbox. Remember last year, when Twitter blocked a New York Post story from being shared because it appeared to be sourced from hacked materials? Amid cries of censorship, Twitter relented and simply labeled the content. An interstitial warning would have straddled that gulf, allowing the content on the platform while still making sure users knew the article was questionable.

What platforms should pursue—and the Biden-Harris administration could constructively encourage—is an agenda of aggressive experimentalism to combat misinformation. Much like software vendors a decade ago, platforms should be rapidly trying out new approaches, publishing lessons learned, and collaborating closely with external researchers. Experimentation can also shed light on why certain warning designs work, informing free speech considerations. Misinformation is a public crisis that demands bold action and platform cooperation. In advancing the science of misinformation warnings, the government and platforms should see an opportunity for common ground.

We thank Alan Rozenshtein, Ross Teixeira and Rushi Shah for valuable suggestions on this piece. All views are our own.

lahosken
4 days ago
People ignore warnings that appear alongside misinformation. What works? "Interstitial warnings dramatically changed what users chose to read."
San Francisco, USA
1 public comment
MotherHydra
3 days ago
Now that social platforms are all-in with editorial control it’s time to revisit their status as publishing entities. Clearly they are. And clearly this is a tool that in the wrong hands can be used to wield an agenda and disseminate propaganda. History will look upon all of this unfavorably and make the parallels to Orwell and other prescient media such as V for Vendetta.
Space City, USA

Nuclear power’s reliability is dropping as extreme weather increases

1 Comment
[Image: two cooling towers above a body of water. Cooling water is only one factor that limits the productivity of nuclear power plants. (credit: Getty Images)]

With extreme weather causing power failures in California and Texas, it’s increasingly clear that the existing power infrastructure isn’t designed for these new conditions. Past research has shown that nuclear power plants are no exception, with rising temperatures creating cooling problems for them. Now, a comprehensive analysis looking at a broader range of climate events shows that it’s not just hot weather that puts these plants at risk—it's the full range of climate disturbances.

Heat has been one of the most direct threats, as higher temperatures mean that the natural cooling sources (rivers, oceans, lakes) are becoming less efficient heat sinks. However, this new analysis shows that hurricanes and typhoons have become the leading causes of nuclear outages, at least in North America and South and East Asia. Precautionary shutdowns for storms are routine, and so this finding is perhaps not so surprising. But other factors—like the clogging of cooling intake pipes by unusually abundant jellyfish populations—are a bit less obvious.

Overall, this latest analysis calculates that the frequency of climate-related nuclear plant outages is almost eight times higher than it was in the 1990s. The analysis also estimates that the global nuclear fleet will lose up to 1.4 percent—about 36 TWh—of its energy production in the next 40 years, and up to 2.4 percent, or 61 TWh, by 2081-2100.
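Taken at face value, those two loss estimates are internally consistent: the percentage and TWh figures both imply a global fleet producing roughly 2,500–2,600 TWh per year. A quick check:

```python
# Each loss estimate pairs a percentage with an absolute TWh figure;
# dividing one by the other recovers the implied baseline fleet output.
implied_baseline_near = 36 / 0.014  # TWh, from "up to 1.4 percent, about 36 TWh"
implied_baseline_far  = 61 / 0.024  # TWh, from "up to 2.4 percent, or 61 TWh"

print(round(implied_baseline_near))  # 2571
print(round(implied_baseline_far))   # 2542
```

The two implied baselines agree to within about one percent, which is what we'd expect if both figures were derived from the same production total.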

Heat, storms, drought

The author analyzed publicly available databases from the International Atomic Energy Agency to identify all climate-linked shutdowns (partial and complete) of the world’s 408 operational reactors. Unplanned outages are generally very well documented, and available data made it possible to calculate trends in the frequency of outages that were linked to environmental causes over the past 30 years. The author also used more detailed data from the last decade (2010–2019) to provide one of the first analyses of which types of climate events have had the most impact on nuclear power.

While the paper doesn't directly link the reported events to climate change, the findings do show an overall increase in the number of outages due to a range of climate events.

The two main categories of climate disruptions broke down into thermal disruptions (heat, drought, and wildfire) and storms (including hurricanes, typhoons, lightning, and flooding). In the case of heat and drought, the main problem is the lack of cool enough water—or in the case of drought, enough water at all—to cool the reactor. However, there were also a number of outages due to ecological responses to warmer weather; for example, larger than usual jellyfish populations have blocked the intake pipes on some reactors.

Storms and wildfires, on the other hand, caused a range of problems, including structural damage, precautionary shutdowns, reduced operations, and employee evacuations. In the timeframe of 2010 to 2019, the leading causes of outages were hurricanes and typhoons in most parts of the world, although heat was still the leading factor in Western Europe (France in particular). While these represented the most frequent causes, the analysis also showed that droughts were the source of the longest disruptions, and thus the largest power losses.

The author calculated that the average frequency of climate-linked outages went from 0.2 outages per year in the 1990s to 1.5 outages in the timeframe of 2010 to 2019. A retrospective analysis further showed that for every 1°C rise in temperature (above the average temperature between 1951 and 1980), the energy output of the global fleet fell about 0.5 percent.
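Those frequency figures square with the "almost eight times" increase cited earlier:

```python
outages_per_year_1990s = 0.2   # climate-linked outages per year, 1990s
outages_per_year_2010s = 1.5   # climate-linked outages per year, 2010-2019

increase = outages_per_year_2010s / outages_per_year_1990s
print(round(increase, 1))  # 7.5, i.e. "almost eight times"
```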

Retrofitting for extreme weather

This analysis also shows that climate-associated outages have become the leading cause of disruptions to nuclear power production—other causes of outages have only increased 50 percent in the same timeframe. Projecting into the future, the author calculates that, if no mitigation measures are put into place, the disruptions will continue to increase through the rest of this century.

“All energy technologies, including renewables, will be significantly affected by climate change,” writes Professor Jacopo Buongiorno, who was not involved in the study, in an email to Ars. Buongiorno is the Tepco Professor of Nuclear Science and Engineering at the Massachusetts Institute of Technology (MIT), and he co-chaired the MIT study on The Future of Nuclear Energy in a Carbon-Constrained World. “The results are not surprising—nuclear plants can experience unplanned outages due to severe events (e.g., hurricanes, tornadoes) or heat waves, the frequency of which is increasing.”

Although there is relatively little research on the topic of climate effects on nuclear power specifically, some projects are already underway to adapt nuclear plants to the changing climate. For example, the US Department of Energy recently invested in a project researching methods to reduce the amount of water needed by nuclear facilities (e.g. advanced dry cooling).

“Existing nuclear plants are already among the most resilient assets of our energy infrastructure,” writes Buongiorno. “The current fleet is adapting to rising sea levels (for those plants located in areas at potential risk of flood) and the increasing intensity of storms. New nuclear reactor technologies will be even more resilient, as in many instances they are being designed to be dry cooled (i.e. not using river/ocean water for rejecting heat to the ambient) as well as capable of operating in 'island mode,' i.e. disconnected from the grid and ready to restart before other large power plants in the event of a blackout.”

Other nuclear technologies, such as pebble-bed, molten salt, and advanced small modular reactors, may also provide more climate-resistant solutions, but these are all still under development. In general, the strict regulations in place for nuclear reactors make it particularly difficult to incorporate newer technologies. Even as these technologies become available, it will likely require further reactor downtime to install new components. So, at least in the short term, even nuclear power will likely contribute to the increasing frequency of climate-related power shortages.

Nature Energy, 2021.  DOI: 10.1038/s41560-021-00849-y


lahosken
7 days ago
"The two main categories of climate disruptions broke down into thermal disruptions (heat, drought, and wildfire) and storms (including hurricanes, typhoons, lightning, and flooding). In the case of heat and drought, the main problem is the lack of cool enough water—or in the case of drought, enough water at all—to cool the reactor. However, there were also a number of outages due to ecological responses to warmer weather; for example, larger than usual jellyfish populations have blocked the intake pipes on some reactors."
San Francisco, USA

Why Facebook really, really doesn’t want to discourage extremism

1 Comment
Our research finds that outrage is what goes viral – and makes money.
lahosken
16 days ago
"The most viral posts tended to be about the opposing political party. Facebook posts and tweets about one’s political out-group (that is, the party opposed to one’s own) were shared about twice as often as those about one’s own political group."
San Francisco, USA

When a new district attorney takes office, police kill fewer people, our research finds

1 Comment
We found no accompanying changes in arrest rates or assaults on police, data that suggests that officers can deescalate while remaining just as effective.
lahosken
16 days ago
This makes me want term limits for district attorneys
San Francisco, USA

AI Doesn’t Understand Scale

2 Comments

AI Doesn’t Understand Scale

One of my favorite ways to mess around with text-generating AI is to see what it does with new games that didn’t exist on the internet back when its training data was collected.

OpenAI’s GPT-3 models were trained with data that cut off in late 2019, which means that they’re unaware of certain major world events, as well as of the latest internet memes.

Recently someone posted this hilarious bit of startup advice that had people posting terrible advice of their own:

You don’t understand scale. 
Use $50 to buy 10 tomato plants. 
In 6 months you have 250 tomatoes.
Plant those into 250 plants.
...6 mo you have 3.9MM tomatoes.
Sell them for $1 each.

I guessed that even if a neural net like GPT-3 can’t possibly have seen this meme before, it would still be capable of giving terrible startup advice. So, I entered the entire above sequence of brilliant entrepreneurial moves as prompt text, preceded by the word [tomatoes] in brackets to help indicate this was supposed to be a short thing about tomatoes. Then I wrote “[” and let it write a short thing about something else.
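Incidentally, the meme's arithmetic does compound as claimed, assuming each plant yields 25 tomatoes (implied by 10 plants producing 250) and that the "..." stands for replanting every tomato each cycle:

```python
plants = 10
yield_per_plant = 25   # implied by "10 plants -> 250 tomatoes"

count = plants
for _ in range(4):     # four six-month growing cycles; the "..." elides two of them
    count *= yield_per_plant  # every tomato gets replanted each cycle

print(count)  # 3906250 tomatoes, i.e. the meme's "3.9MM"
```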

DaVinci

Unsurprisingly, the largest GPT-3 variant was the best at catching on to the format. It made the leap that you could amplify eggs in a similar way. And you end up making $1 million on each egg!

[eggs]
Same as the tomatoes.
Eggs sell for $1 each.
He uses 39 chickens to make a dozen eggs in 9 minutes.
One hen can lay 355 eggs in 355 days. That’s 11.844 muffins a day.
9:11AM go to the best grocery store in town and buy ALL THE CHICKENS. Keep your receipt.
9:19AM - you take $9.8MM and buy back all your chickens.
9:30AM bring the chickens home.
9:39AM - sell the chickens back to the store for $9.8MM
Speculation in eggs prices rises dramatically. If you buy all the eggs, you can sell them back and make $1MM on each egg.

Object permanence is not a strength of DaVinci’s - keeping track of the status of physical objects isn’t really in its job description. Its world is text.

[wigs]
You don’t understand scale.
Buy one $50 wig.
Use it for 6 months until it breaks.
Throw it away.
Buy another $50 wig.
Use it for another 6 months until it breaks.
Throw it away.
Repeat until you have 182 wigs.
Sell them for $50 each.

GPT-3 DaVinci does at least realize that you should end with profitable things happening.

[hammer & nails]
You don’t understand scale.
Use $100 to buy a hammer.
Use the hammer to build with 4 nails.
Burn the nail building for warmth.
Repeat indefinitely.

[chocolate milk]
1 cow gives 160 glasses of chocolate milk per day.
A single glass sell for 10k.
Lots of profit.

[fork]
You don’t understand scale.
Buy 3 forks.
Solve any problem that takes less than 3 forks.

Ada

I also turned to the smallest GPT-3 variant, Ada. It caught on to the format only partially. It got that it was supposed to be about tomatoes. And possibly about money?

[tomatoes]
Use $1 to buy 1.5MM tomatoes.

[Tomatoes]
Do you intend to build a city? ?“
Build $500,000 lots and then sell $1,000,000 lots.
You want a house outfitted with 50 toilets, 50 outdoor shower systems along with 132 rooms.

[tomatoes]
They are botanical machines.
Of all the tomato varieties, they only grow once!
Traversing is their way of existence.
They do this every day.
They can do “rainbow fruit trees”, making “green smoke and crumble flower.”

Had I not stopped the generation process, I’m not sure how long Ada would have continued this next one. When people talk about researchers shutting down their AIs because of an unwanted outcome, this is often the kind of thing they mean.

[tomatoes]
You don’t understand that it’s basically the same thing as a bowl of cereal.
Potatoes in a can.
Pour the potatoes into 35 quart jugs and fill jugs to the brim.
Whisk soapmaker.
In little tub layers stout ash alum pot paper.
Divide into four equal parts of qt and rl into segments measuring 3.2 cm by 0.8 cm.
Wash wire wool in 80-95% dewashed water and rinse under tepid water.
Prepare three inches of hot coals, gently heat in center of stove 4 inches deep.
Roll up soap in crinkly napkins two times to fit into form according to volume-diameter. To remove bubble wrap insert aprons.
Heat a small hole as necessary per each pot.
Dip each slip to the brush onto each pot, then collect the whole wrap under the string from the soap’s top to bottom.
Crackfish, partially hulled and cut to about 3 cm across, on an upright flake computer of sufficient volume.
Measure and cut parts of soap into even thickness, add by 1 cm to diameter of each bowl.
Take soap left out to apartments with sheets and wrap it up in a towel.
Fill Sanitary Pail with clay pot suders, layers clay that been finished of three times, each surface 3 cm thick, to the top of the pails.
Split soap in half 3 cm ¼ x 0.7 cm just below bottom of tub.
Fill bathtub lengthways and level with edge.
Lay soap at 4 cups depth, with top down.
In the dryer attach film and cling wrap, then lay sheet-hooks 1.6 cm apart on the film.
Place plastic apple cores on top of film.
Lay film and cardboard pails on top of each other.
Cap and secure, then dry for 7-10 days.

If you don’t do the entire list, are you really cut out to be an entrepreneur?

I also decided to see what the two midsized GPT-3 models did with this startup advice. Become an AI Weirdness supporter to read the results as bonus material! Or become a free subscriber to get new AI Weirdness posts in your inbox.

lahosken
34 days ago
"Repeat until you have 182 wigs." Oh yeah, this is one of those Big Hairy Audacious Goals they teach about in business school.
San Francisco, USA
1 public comment
fancycwabs
36 days ago
The last [tomatoes] variant is a Beck song.
Nashville, Tennessee

The Terrestrial Status of Boston

2 Shares

The terrestrial status of Boston is an unexpectedly fascinating topic. A city built on land rescued from the sea, it is not only unusually at risk from sea-level rise; it also hides parts of its marshy past beneath its streets and buildings.

As a recent project by the Norman B. Leventhal Map & Education Center puts it, “No city in the U.S. has a more striking history of landmaking than Boston, with about a sixth of its present land area sitting on estuaries, mudflats, coves, and tidal basins that would have been submerged at high tide prior to the seventeenth century. Mapping the growth of the city into the surrounding ocean has been an interest of Boston’s geographers for centuries, and our modern maps of shoreline change are some of the most popular objects in our digital collections.”

[Image: Boston, courtesy of the Norman B. Leventhal Map & Education Center.]

Indeed, the Wall Street Journal explained last year, some of Boston’s most expensive houses are more like docks or wharves, sitting atop wooden pilings driven deep into flooded ground. In one specific case, “the underground wooden pilings supporting the foundation had been rotting for years, to the point where the building’s walls were ‘almost floating,’ [the home’s owner] recalled.”

Recall the incredible story of William Walker, a diver who “saved” Winchester Cathedral in England by diving beneath it for a period of six years, repairing its aquatic foundations from below. “When huge cracks started to appear in the early 1900s,” we read, “the Cathedral seemed in danger of complete collapse. Early efforts to underpin its waterlogged foundations failed until William Walker, a deep-sea diver, worked under water every day for six years placing bags of concrete.”

Ben Affleck’s next movie, perhaps—scuba diving beneath the streets of Boston and saving the city from below…

While the bulk of the Leventhal Center’s project focuses on the economic value of reclaimed land in the Boston area—what they call “the ultimate financial asset: brand-new urban land, ready for development”—there is at least one amazing detail I wanted to post here.

Like buried ships in New York City and San Francisco, Boston has its own maritime archaeology: “Sophisticated networks of fish weirs can still be found buried beneath the streets of the [Back Bay] neighborhood, which were laid out in a tidily gridded pattern in the nineteenth century to facilitate the engrossment and sale of property.” Indigenous hydrological infrastructure, hiding in plain sight.

Writing just today, meanwhile, in an op-ed for WBUR, Courtney Humphries suggests that, ironically, Boston’s future survival might depend on doing more of what got it into trouble with the sea in the first place: building more land and further modifying the shoreline.

What future weirs and dams and levees and pilings, architectural anchorages all, might we see beneath the streets of Boston, a city halfway between terrestrial and maritime, ground and ocean, bedrock and marsh, in the years to come?

lahosken
41 days ago
reply
San Francisco, USA