Thursday, April 12, 2018

Don’t ask whether Facebook can be regulated. Ask which Facebook to regulate.




What’s the right regulation for Facebook? That depends on which Facebook you’re talking about.

The Washington leg of Mark Zuckerberg’s “apology tour,” precipitated by Facebook’s Cambridge Analytica scandal, has seen a tense two days of lawmakers grilling the Facebook founder about the extent of Facebook’s mounting privacy issues, its role in shaping the global democratic process, and what he is doing to solve its many woes.

During Tuesday’s joint congressional hearing of the Senate Judiciary and Senate Commerce, Science, and Transportation committees, lawmakers posed a wide range of questions. But none was more integral to understanding the issues and challenges facing Facebook — and those in favor of regulating it — than a single exchange between Sen. Lindsey Graham (R-SC) and Zuckerberg.


Graham attempted to corner Zuckerberg about whether Facebook is a monopoly. In the middle of that line of questioning, Graham asked, “Do you embrace regulation?”

“I think the real question, as the internet becomes more important in people’s lives,” Zuckerberg replied, “is what’s the right regulation?”

The question of what the “right” regulation for Facebook might be is a very complicated one. To understand why, a quick look at the second day of hearings might be helpful. Zuckerberg stood Wednesday before the House Energy and Commerce Committee, where Chair Greg Walden (R-OR) asked him, “Is Facebook a media company?”

“I consider us to be a technology company,” Zuckerberg replied.

Walden continued, asking Zuckerberg if Facebook was a financial institution.

“I do not consider us to be a financial institution,” Zuckerberg replied, before adding, “although you’re right, we do provide tools for people to send money.”

Walden’s series of questions underscored the main problem with determining the “right” regulation for a company as massive as Facebook. Deciding how Facebook should be regulated requires knowing whether you’re treating it as a media company, a tech company, or something else altogether.

Facebook could face regulation under laws already applied to traditional media companies — if it were treated like a media company
If we think about regulating Facebook as a media company, or even as a publishing company, then most of the options fall under regulations and guidelines that have existed for years.

For example, traditional media companies have to abide by a litany of rules set forth by the Federal Election Commission when it comes to disclosing what entity is sponsoring paid campaign advertisements. The proposed Honest Ads Act would extend these rules to Facebook and other social media platforms, forcing Facebook to disclose who’s paying for any and all political ads on its platform and requiring campaigns to declare their approval of every ad posted on their behalf.

Facebook recently promised to abide by the Honest Ads Act even before it comes to a vote — a good thing, since Congress likely won’t have sorted out the mess of applying its regulations to Facebook before midterm elections.

There’s another good reason to sidestep enshrining the Honest Ads Act into law, which brings us to the primary problem with regulating Facebook as a media company: it’s also a tech company.

Regulating Facebook as a media company could infringe upon core internet principles — but regulating it as a tech company could be even harder
There’s one major problem with the Honest Ads Act. If Facebook were officially held liable for how third-party ads are disclosed on its platform, the regulation would directly contravene what is basically the internet’s version of Biblical law: Section 230 of the 1996 Communications Decency Act.

This rule says that website operators and internet service providers aren’t liable for the content their users post. While controversial recent legal challenges have threatened to undermine its protections, it’s still the internet’s primary governing principle.

Courts have traditionally upheld Section 230 against attempts to hold companies like Google and Facebook responsible for the content of ads on their platforms. So to get around it, the Honest Ads Act would have to create an exemption to Section 230.

Exceptions to Section 230 do exist, and it is possible for a website to lose its protection under the law. But for the most part, it has remained dominant for two decades, and many internet freedom advocates have argued that allowing any exemptions to Section 230, no matter how well-intentioned, is a dangerous precedent to set.

So what if we were to treat Facebook like a tech company, as Zuckerberg suggests, instead of as a media company? Well, in that case, many of the options for regulating Facebook involve entering new territory — mainly because lawmakers don’t seem to understand the technical ins and outs of Facebook well enough to regulate it.

While it’s arguable that this issue shouldn’t matter when it comes to holding Zuckerberg responsible for the way he’s historically run his company, it is important for lawmakers to develop a deep understanding of the issues tech culture faces before passing reform that could be detrimental to online freedom. As Congress recently demonstrated with the passage of the highly controversial FOSTA/SESTA bill package, it has a tendency to opt for sweeping, high-level reform with the potential to damage internet freedom and infrastructure.

In the case of the problems plaguing Facebook, that’s the last kind of regulatory influence we need. What we do need, as writers like Zeynep Tufekci and Roger McNamee have eloquently argued, is a system of regulatory processes that create accountability and empower tech companies to create tools responsibly.

Ideally, such processes would deal not only with whether data is being mined ethically, with full user consent, but also with whether that data is being aggregated ethically and applied at scale in ways that don’t cause unforeseen, widespread harm. (For an example of the unforeseen dangers of data being harvested and analyzed at scale, see the recent fiasco in which the Strava fitness app inadvertently revealed the locations of secret government military bases.)

Another major issue that Congress has barely touched on during its two-day hearings with Zuckerberg is an ethical question that has been haunting Silicon Valley lately: Are we building predictive algorithms using models that allow for transparency, accountability, and swift correction?

These are all issues increasingly faced by platforms like Facebook and YouTube, and ones that tech ethicists like Tristan Harris have worked to make consumers aware of. But they aren’t really finding their way into conversations on Capitol Hill, which have instead focused on outcomes rather than processes.

Any legislation that focuses on outcomes, rather than ensuring accountability at a granular level for the way we’re building the internet, could wind up penalizing the good parts of the internet while failing to prevent the bad parts from flourishing.

There are other options for regulating Facebook, but none are all-encompassing
There are two other major avenues for regulating Facebook. The first involves regulating internet privacy generally, rather than attempting to regulate most of what Facebook does as a company. This is the route Europe has taken with its sweeping new data privacy law, the General Data Protection Regulation (GDPR). Facebook has made numerous changes to comply with the European rules and has pledged to make those changes available to all of its users.

Another avenue involves treating Facebook as a monopoly and regulating it using existing antitrust law. Proponents of treating Facebook and other tech giants like Google, Amazon, Apple, and Microsoft as monopolies argue that these companies have not only stifled competition in the tech sector but have also favored their own goods and services when presenting information to users. They argue that these companies have monopolized political and cultural discourse as well, by exerting tremendous influence over the way information is accessed online.

Antitrust law, then, might block acquisitions or force such giant tech companies to split apart into different companies with different roles and functions — but only if a strict guideline known as the consumer welfare standard, which determines whether consumers are actively being hurt by the presence of these monopolies, can be met.

Europe has been swift to hold these companies to tougher standards. In addition to enacting its strict new data privacy rules, it slapped a record $2.7 billion antitrust fine on Google last year. But US lawmakers have historically been reluctant to step into the fray where Facebook is concerned — perhaps because of the difficulty of pinning down how to regulate tech companies this large.

Usually, American lawmakers will use consent decrees — essentially settlements that have to be approved by court order — to wrangle accountability from companies like Facebook. But it doesn’t always work. Currently, the Federal Trade Commission is investigating whether Facebook violated a 2011 consent decree it entered into with the FTC after a previous row over user privacy.

The lack of consistency in US law concerning what is and isn’t a monopolistic action is also at issue, especially in the internet realm, where courts often display an apparent lack of comprehension of how the internet works. For example, in a major ongoing case between Google and Oracle, a federal appeals court recently issued a ruling in Oracle’s favor that misinterprets how information is freely exchanged on the internet and threatens innovation and online entrepreneurship at the level of software development.

This is precisely the kind of legal interference the internet doesn’t need, and it doesn’t exactly engender confidence in the government to make high-level decisions about reforming Silicon Valley.

Still, where Facebook is concerned, the government seems eager to do what it can to rein in the tech giant — even though it’s arguably already too late. The feeling seems to be that anything is better than letting Facebook continue to inadequately police itself. That might ultimately mean we have to live with worse internet policies before we get better ones — but hopefully it will also mean that eventually, all of Silicon Valley’s internet behemoths will have to stand accountable to their users.
