Our Internet Challenge: Squaring Digital Platforms and Society
The Internet’s promise was that it could be a digital town square. And more often than not it has been. But that town square can be in a beautiful and safe town – or in a war-torn and dangerous village. We get to choose that.
As the Internet has broken down barriers to communication and commerce and enabled people from across the globe to connect in positive ways, it has also removed barriers to the dark side of human nature, connecting those engaged in illicit and harmful behavior. Fraud. Malware. Identity theft. Terrorist propaganda. Cyber espionage. Unlawful sale of opioids and other drugs. Human trafficking.
As a society, we are now trying to understand how we got here and what we can do about it.
Part of the problem is that no one is minding the virtual store. For twenty years, we have placed our faith in the companies that create the online platforms that we rely upon. But the legal protections that were vital to getting them off the ground, such as safe harbor from liability, have also encouraged them to turn a blind eye to harmful activities occurring over their services, and to build their businesses without regard for the cost.
The platforms promised that in exchange for the safe harbors they would proactively curb abuse of their services. Clearly, that is not working.
Online communication and e-commerce now rival their offline counterparts in scope, influence, and importance. These platforms must be just as accountable. It has taken far too long and far too many crises, but some platforms are starting to confront the problem: that they are facilitating illegal activity and other troubling conduct.
In the wake of the Cambridge Analytica affair, for example, Facebook CEO Mark Zuckerberg finally acknowledged:
“It’s clear now that we didn’t do enough. We didn’t focus enough on preventing abuse and thinking through how people could use these tools to do harm as well. That goes for fake news, foreign interference in elections, hate speech, in addition to developers and data privacy. We didn’t take a broad enough view of what our responsibility is, and that was a huge mistake….I think we need to take a broader view of our responsibility. We’re not just building tools, but we need to take full responsibility for the outcome and how people use those tools as well.”
Acknowledging the weaknesses in the system is a good start. Next, we have to understand how their business models create those weaknesses. For example, the algorithms that online platforms have designed and the business models they have chosen influence what millions of Americans are or are not exposed to. The platforms of course have a First Amendment right to decide what content does or does not appear on their services. And while it is important for platforms that disclaim bias to avoid viewpoint discrimination, that in no way diminishes the increasingly accepted truth that platforms must be accountable for curbing clearly illegal activity on their services. Like all businesses, they should be accountable for the ways their platforms are used and for the reasonably foreseeable harms that result.
In other words, the Internet is grown up. It’s time for us to treat it as a grown-up. That means the platforms we rely upon must not hide behind legal protections or allow bad actors to use their platforms with near impunity. That leads to disinformation and the skewing of public discourse. We should do what, as a society, we always do in these instances: re-examine our laws and policies. If they aren’t effective, we need to change them.