Here is a question for you: Is Facebook evil?
In the wake of the purported Russian election meddling and the Cambridge Analytica scandal, that question is swirling around quite a bit.
Chances are, Congress will ask some version of that question, over and over, of CEO Mark Zuckerberg when he testifies before the esteemed body this week. It is a fair and reasonable question, and possibly even an important one. Facebook certainly knows an awful lot about us. But does that make it evil?
There is an old adage amongst internet companies that goes something like this: If you are not paying for the product, you are the product. That is certainly true of Facebook. It has convinced us to provide it with data on our demographics, our political preferences, our whereabouts, our relationship status, our favorite foods, books, music, television shows and so on. We give it that information for free, and Facebook sells it for a substantial profit.
The line between Facebook being an “evil” company and simply an effective company is pretty thin. Have no fear, though, because the federal government is looking into it. The problem is that the only things the government can really do in this situation are ask questions and write regulations. The questions might be helpful, the regulations probably not so much.
Let’s accept for a moment that we are Facebook’s “product.” Well then, Facebook is already incentivized to protect us, right?
Sort of. Facebook is a growth-based company that touts its ability to “connect” people. In a 2016 internal memo, Facebook VP Andrew Bosworth warned that a blind ambition to connect people could lead to both good and bad actions. He continued by saying “the ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is ‘de facto’ good.”
That was a warning shot about the potential evil that could come from a platform with blinders on.
So where does that leave us in answering our question? Does Facebook need to be more transparent? Absolutely. Does it need to work unbelievably hard to secure data from hackers? Of course.
In order to not “be evil,” does Facebook need to protect us from so-called “bad actors” who are using data from its platform to manipulate us? Boy, that is one incredibly slippery slope.
Do we trust Facebook to decide who the bad actors are? Do we trust the government to tell us? Our preference is to trust ourselves and our own discernment whenever possible. Up to a point, that is.
Transparency is the key issue here. As the tools that deliver information to us grow more complex, so does the task of understanding who is delivering that information. In the newspaper, radio and television industries, for example, a political advertisement must be accompanied by information within the ad regarding who paid for it. The internet does not work that way, but it could.
There is legislation in Congress right now called the Honest Ads Act that would force internet companies to disclose who paid for an advertisement. That makes sense. The Honest Ads Act, however, would also require public disclosure of how the ad was targeted and how much it cost. There’s that slippery slope again. Such legislation would not just be ineffective, it would actually give the “bad actors” a playbook on how to better – and more cost-effectively – manipulate us.
As the government begins to look more closely at Facebook, we encourage lawmakers to give a thumbs up only to the minimal amount of regulation needed to foster transparency.