People are angry at Facebook. People are angry at YouTube. People are angry at Twitter.
With Facebook CEO Mark Zuckerberg set to testify before Congress less than a week after an enraged YouTube user wounded three people before killing herself at the company’s headquarters, the growing debate over what Silicon Valley’s social media giants do with the content and information we give them has reached a fever pitch.
“We’re reaching a boiling point,” said Frances Zelazny, chief of strategy at cybersecurity firm BioCatch. “Nobody even knows where their information is and how it’s going to be used.”
Facebook, YouTube and Twitter have grown into massive global platforms, attracting everyone from travelers who share vacation photos and political junkies to agitators, trolls and nation-state operatives seeking to manipulate the platforms and public opinion. Google-owned YouTube estimates 300 hours of video are uploaded to its site every minute. Facebook claims 2 billion users around the world. And Twitter says it has 330 million monthly active users.
“We have to acknowledge the tremendous reach that social media companies have in our lives — they have emerged as a central focus point for many of the things we do every day,” said Santa Clara University law professor Eric Goldman, who studies the technology industry. “It’s not surprising there’s going to be friction given the reach and power they have in our lives.”
When Facebook first launched back in 2004, followed soon after by YouTube and Twitter, the social media companies were focused on building spaces where people could connect and share information, photos and videos. The firms did not build their platforms with safety and trust at front of mind, said Zelazny of BioCatch.
“Social media networks have grown and proliferated by trying to make it as easy as possible to bring as many people as possible into their network,” Zelazny said.
As they grew, gathering unprecedented amounts of data and content from users, advertisers flocked to the platforms to reach groups they wanted to target. So did everyone else, including some who posted content that others found offensive, hateful or even dangerous. Under pressure from the public, politicians and advertisers to do more to control their content, the companies started to respond. In doing so, they made themselves targets for anyone who disagreed with their decisions.
Facebook’s 2017 removal of a photo showing a statue of the Roman god Neptune naked drew widespread criticism and mockery.
YouTube, facing a revolt by advertisers last year over the pairing of ads with extremist and offensive content, began more aggressive policing of content and of which channels could make money from advertising. That upset conservatives who said their views were being censored, as well as users such as YouTube shooter Nasim Aghdam, who was angry about new policies that stripped ads from some channels and filtered content.
Twitter, also frequently accused of stamping out conservative expression, responded last fall to abuse of its platform by Russian election meddlers with new transparency policies for political and “issue-based” ads. It also appears to have booted many thousands of bots and fake accounts from its platform.
The charges of election meddling on social media, coupled with fresh waves of violent, extremist content, prompted new calls to regulate the big companies. Facebook, Twitter and YouTube announced they would hire more humans to review content. But revelations that Facebook gave Trump-linked data-mining firm Cambridge Analytica access to 87 million users’ data turned an already hot issue into a four-alarm fire.
Facebook has said it now will limit the information it shares with data brokers and announced new restrictions on political advertising and new privacy settings for users. CEO Mark Zuckerberg and COO Sheryl Sandberg have apologized in media interviews, with Zuckerberg admitting he didn’t “take a broad enough view of (Facebook’s) responsibility” to ensure that the company’s tools were being used to connect people instead of being abused.
But as users learn their personal information can be “co-opted” and their attention “manipulated,” consumer outrage against social media firms has surged, said Electronic Frontier Foundation researcher Gennie Gebhart.
And the level of discontent, Gebhart continued, “feels unprecedented.”
While users should apply appropriate privacy controls to their own accounts, Gebhart said, companies must take responsibility for protecting their users’ information. Because the firms share similar practices, YouTube and Twitter are not immune to the kind of furor Facebook is facing, she said.
“Given different timing and different whistleblowers going to different news outlets, it could’ve been any of them,” she said.
The outcry has led to increasing calls for regulation, but few expect new rules to be a quick fix.
“What’s happened so far is these companies have only responded when the pressure’s gotten too high,” said Sen. Mark Warner, the top Democrat on the Senate Intelligence Committee.
Warner, in an interview with this news organization, outlined possible regulatory actions. Forcing Facebook, YouTube and Twitter to let users dissatisfied with one platform transfer their data to another is possible but would require the companies to make technical changes, he said. Or users’ data could be deemed their property, with companies required to pay for it unless users are given full control over use of their information, he said.
Such regulations would open the door to competition from smaller companies, which could become alternatives to the major firms or could work on users’ behalf, brokering the use or sale of their data, Warner suggested.
Other regulatory action could focus on anti-trust issues and the social media giants’ content-moderation practices, said UC Berkeley law professor Chris Hoofnagle, who teaches internet law. “You could see the left and right begin to unify around a competition agenda to deal with the censorship issues,” Hoofnagle said.
Santa Clara University’s Goldman noted that new regulation is already underway, including the Honest Ads Act to regulate online political advertising, which Facebook endorsed Friday, and the Stop Enabling Sex Traffickers Act passed by the U.S. Senate in March, which would hold internet companies more accountable for certain types of third-party content on their websites.
“We are manufacturing regulations by the truckload,” Goldman said. “The machinery of government has ramped up its investigations into social media companies. There’s only one end to that process: more regulation.”
While that might initially appear damaging to the companies, Goldman believes Facebook’s support for the anti-trafficking bill highlights the fact that regulation might benefit the dominant social media companies. That’s because in any industry, new regulations tend to stifle smaller competitors, allowing large established firms to safely stagnate, he said.
“The public loses in that dynamic because we don’t get robust competitive markets,” he said. “The existing incumbents iterate less because they’re not having the small companies nipping at their heels. They’ll keep looking like they look instead of constantly evolving.”