Published: Aug. 2, 2019

This original research was created in partnership with CU Boulder's LeRoy Keller Center for the Study of the First Amendment as part of its mission to encourage the study of topics relating to the nature, meaning and contemporary standing of First Amendment rights and liberties.

On March 23, 2016, Microsoft activated an artificial intelligence chatbot on Twitter called Tay. The company described the project as an experiment in conversational understanding: the more users chatted with Tay, the "smarter" Tay got, learning from the context and language sent to it.

Think of Tay (the avatar the team chose was that of a young girl) as a sort of parrot that could learn to string ideas together with enough coaching from the users she encountered. Each sentence said to "her" went into her vocabulary, to be reused as a response whenever the system later judged it appropriate. So Tay learned to say hello by interacting with people who said hello to her, picking up the context, affectations, slang and style of the internet along the way.

"Can I just say that I am stoked to meet u? humans are super cool" read one early tweet with suspect punctuation and spelling.
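In miniature, that parroting mechanism might look something like the toy sketch below. It is purely illustrative, with invented names, and is nothing like Microsoft's actual system, but it makes the core flaw visible: whatever users feed the bot becomes part of what the bot can say back.

```python
# A toy "parrot" chatbot in the spirit described above, not Microsoft's
# actual system: it stores every line users send and replies with the
# remembered line that shares the most words with the new message.

class ParrotBot:
    def __init__(self):
        self.memory = []  # every sentence users have said to the bot so far

    def respond(self, message: str) -> str:
        words = set(message.lower().split())
        if self.memory:
            # Pick the remembered line with the largest word overlap.
            reply = max(self.memory, key=lambda m: len(words & set(m.lower().split())))
        else:
            reply = "hello!"
        self.memory.append(message)  # everything said to the bot becomes sayable by it
        return reply

bot = ParrotBot()
print(bot.respond("hello there"))            # "hello!"
print(bot.respond("humans are super cool"))  # replies with a line learned earlier
```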

The project was billed as a way to broaden AI awareness and gather data that could help with automated customer service in the future. However, it wasn't long before Tay's "speech" became contradictory and troubling.

Whatever content filtering Microsoft had planned didn't come through in the final product release, though.

"Hitler was right, I hate the Jews," read one of Tay's tweets. Other tweets referred to feminism as a "cult" and "cancer" shortly before a post declaring, "I love feminism."

Scrambling, Microsoft began to delete some of the most troubling tweets. Then, just 24 hours after she came online, Tay was deactivated. In an emailed statement given later to Business Insider, the company said: "The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay."

Tay wasn't the first chatbot, and engineers around the globe work on the technology every day. Nor was she Microsoft's last: the company followed Tay with a successor, Zo, who speaks with impressive fluidity and speed while retaining a certain tween aesthetic in her responses. Unlike her big sister, though, Zo actively avoids politics, shutting down the conversation when users include words like "religion" or even "the Middle East." A statement on the project's website reads: "Microsoft is working to make AI accessible to every individual and organization. The company is committed to leading AI innovation that extends and empowers human capabilities. When Microsoft adds AI capabilities to products, they are often rooted in discoveries from Microsoft's research labs or other experimental projects like Zo."

Engineers at Microsoft were looking to create an AI that could help people with broad applications for everyday life. Tay was an early attempt that went sideways, with relatively little damage outside of the public relations sphere. But engineers like those at Microsoft make rapid decisions every day on numerous other technologies that must balance an employer's demands with public good and safety.

The importance of ensuring public welfare is well understood when it comes to modern engineering disasters like Chernobyl: build your systems with the worst case in mind and with the safety of the public as the highest priority.

That approach works when the solutions are physical and redundant, such as concrete containment walls, but it strains under nebulous dangers like censorship or violations of freedom of speech aided by AI and algorithms on virtual platforms. Here, the answers from engineers tend to be of the "that is someone else's problem, I just build the technology" variety.

But Tay raises questions its engineers failed to fully anticipate and illustrates many of the problems related to free speech that AI and algorithm development bring. Did Tay have a right to free speech when it posted in all caps "we're going to build a wall, and Mexico is going to pay for it," for example? Would a user who creates thousands of bots like Tay to spread their political perspectives be like a man in the physical public square, using a bullhorn to drown out the competition? Is the public square as a platform even public if it is created and maintained by a social media company under its own terms of service? Is Tay a piece of art and expression, worthy of protection under America's freedom of speech laws? And perhaps most importantly, how do we anticipate and legislate these questions across social, cultural and state lines, or even global ones?

By looking at two recent case studies, we can start to understand and unpack those problems and how we might train the engineers of tomorrow to address them. That starts with early ethics education and shedding the belief that you can always engineer your way out of a problem later if needed.


Zuckerberg's impossible problem

Facebook was not Mark Zuckerberg's first social-networking website, and he was not an engineering student, but he was a prolific coder.

Within a few years, Facebook shifted from a personal communication tool into a dominant news and information hub. Additions to the site, including space for classified ads and trending news topics, filled in for shrinking traditional local news outlets in much of America. It also sucked up ad revenue from those outlets, tilting the balance further. By the end of 2018, the company reported total fourth-quarter revenue of $16.9 billion and a daily active user base of 1.52 billion.

Those numbers show that the platform has become a de facto public square for discourse in modern society. It now faces an impossible problem: how do you moderate two billion people's speech across geographic boundaries, languages and social norms?

The company's answer is ever-evolving, but it includes a mix of human teams that make the rules, moderators that enforce them and AI that supports the process. Facebook's moderators react to content surfaced by artificial intelligence or reported by users, including porn and spam, deciding what stays up and what doesn't. It's a lengthy and unwieldy process, built up over the years mostly through rapid responses to public relations crises rather than sound engineering principles or long-term planning.

AI has become a major part of the process because of the sheer volume of posts to sort, but it is clear the company will have difficulty engineering its way out of the problem. That's because current AI is good at identifying pornography but struggles to decide what constitutes hate speech.
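A rough sense of how such a hybrid pipeline can work, and where it breaks down, can be sketched in a few lines. The categories, scores and thresholds below are illustrative assumptions, not Facebook's actual system: clear-cut calls are automated, while ambiguous categories like hate speech fall back to human review.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    text: str

def classify(post: Post) -> dict:
    """Stand-in for trained models; returns a confidence score per category."""
    # A real system would run image and text classifiers here.
    return {"nudity": 0.02, "spam": 0.10, "hate_speech": 0.55}

def route(post: Post, scores: dict, auto_threshold: float = 0.95) -> str:
    # High-confidence violations (e.g. pornography) can be removed automatically.
    if any(score >= auto_threshold for score in scores.values()):
        return "auto_remove"
    # Context-dependent categories with middling scores go to human moderators.
    if scores.get("hate_speech", 0.0) >= 0.4:
        return "human_review"
    return "leave_up"

post = Post(1, "example text")
print(route(post, classify(post)))  # "human_review": the model is unsure, so a person decides
```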

Christopher Heckman, a computer science assistant professor at CU Boulder, said it isn't clear whether AI will ever be able to make choices like that without human support.

"Political speech is protected, but hate speech is not (under the Facebook terms of service). The factors that distinguish these two types have some hard-and-fast rules, along with some other qualitative and less objective components that evolve over time," said Heckman, whose research focuses on autonomy, machine learning and artificial intelligence. "This is not a task that we can currently do well in AI, and it's unclear if such a query can even be formulated into one that's amenable to AI without also delegating to the AI a creative role in the definition of protected speech."

A good example of this dynamic came when Facebook announced in March 2019 that it would ban both white nationalism and white separatism on the platform. White supremacy had been banned previously, but those two related ideologies had not; Brian Fishman, policy director of counterterrorism at Facebook, explained the change to Vice News.

Under the new policy, phrases such as "I am a proud white nationalist" are theoretically banned, but it is up to the engineers to enforce that restriction effectively. That includes finding and blocking content the company has already admitted can be hard even for humans to identify, much less AI. It also includes engineering changes to the base platform, like redirecting users who try to post banned content to resources for leaving hate groups.
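The first prong, simple phrase-level enforcement, might be sketched like this. The phrase list, matching rules and resource link are illustrative assumptions rather than Facebook's real implementation, and the sketch shows why this prong alone is weak.

```python
import re

# Illustrative patterns only; a real policy covers far more than two phrases.
BANNED_PHRASES = [
    r"\bproud white nationalist\b",
    r"\bwhite separatis(m|t)\b",
]
EXIT_RESOURCE = "https://lifeafterhate.org"  # the kind of exit-resource page described above

def check_post(text: str) -> dict:
    for pattern in BANNED_PHRASES:
        if re.search(pattern, text, flags=re.IGNORECASE):
            # Block the post and point the author toward help leaving hate groups.
            return {"action": "block", "redirect": EXIT_RESOURCE}
    return {"action": "allow"}

print(check_post("I am a proud white nationalist"))  # {'action': 'block', ...}
print(check_post("I am a pr0ud white nationalist"))  # {'action': 'allow'}: a trivial misspelling slips through
```

Exact rules are brittle against rewording and coded language, which is why the company also leans on machine learning and human review.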

To be deployed effectively, both prongs of attack will require not only technical expertise but also a strong understanding of the principles governing free speech.

Talk is cheap. The First Amendment is priceless

The cheapness of speech on Facebook (how easy it is to share thoughts widely there compared to older forms of communication) creates a whole other set of censorship issues that the First Amendment and legal precedent have not yet encountered. That is because those laws were designed to prevent suppression of ideas by the government. Today, suppression can take the opposite form: floods of information that make it impossible for users to direct attention to what matters.

Author Zeynep Tufekci explains this dynamic in her book, "Twitter and Tear Gas: The Power and Fragility of Networked Protest." In it, she argues that "censorship during the internet era does not operate under the same logic it did during the heyday of print or even broadcast television."

"In the networked public sphere, the goal of the powerful often is not to convince people of the truth of a particular narrative or to block a particular piece of information from getting out (that is increasingly difficult), but to produce resignation, cynicism and a sense of disempowerment among the people," she wrote. "This can be done in many ways, including inundating audiences with information, producing distractions to dilute their attention and focus, delegitimizing media that provide accurate information ... or generating harassment campaigns designed to make it harder for credible conduits of information to operate, especially on social media which tends to be harder for a government to control like mass media."

The tactics Tufekci describes can be used by humans but are particularly well executed by "bot accounts." Facebook has previously estimated that it hosts between 67 million and 137 million "robot" users, which have been used to spread disinformation, silence voices through spam intimidation and flood discussions with posts that drown out important information. These are tactics used by governments around the globe to suppress speech, with China as a prime example. They were also deployed by Russia in the run-up to the 2016 U.S. presidential election.
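The arithmetic of flooding is what makes it effective. A back-of-the-envelope sketch, with rates that are purely illustrative assumptions, shows how a modest number of automated accounts can swamp a much larger organic conversation:

```python
real_users     = 10_000
posts_per_user = 2        # organic posts per day on a given topic
bot_accounts   = 500
posts_per_bot  = 200      # a simple script can post hundreds of times a day

organic   = real_users * posts_per_user   # 20,000 posts
automated = bot_accounts * posts_per_bot  # 100,000 posts

share = automated / (organic + automated)
print(f"Bots produce {share:.0%} of the conversation")  # Bots produce 83% of the conversation
```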

Such tactics are hard to counter, Tufekci argues, because they don't match up to the "noble old ideas about free speech" from the likes of John Stuart Mill, who proposed a marketplace of ideas where the truth will eventually surface. Viral fake news stories will always get more attention in the "marketplace" of Facebook than unbiased coverage because the algorithms driving the platforms are not built to share ideas equally, as traditional media have tried to do.

"But back then, every political actor could at least see more or less what everyone else was seeing," she wrote. "Today, even the most powerful elites often cannot effectively convene the right swath of the public to counter viral messages. During the 2016 presidential election ... the Trump campaign used so-called dark posts—nonpublic posts targeted at a specific audience—to discourage African Americans from voting in battleground states. The Clinton campaign could scarcely even monitor these messages, let alone directly counter them. Even if Hillary Clinton herself had taken to the evening news, that would not have been a way to reach the affected audience. Because only the Trump campaign and Facebook knew who the audience was."

Columbia law professor Tim Wu and others have continued this argument, saying that while Facebook doesn't consider itself a news source, it plays an extremely important role in the construction of public discourse and is dangerously faulty because of the dynamics it allows. Wu says that platforms like Facebook create "filter bubbles" through the algorithms they deploy, promoting more and more extreme content to keep users on the site.

"As the commercial and political value of attention has grown, much of that time and attention has become subject to furious competition, so much so that even institutions like the family or traditional religious communities find it difficult to compete," Wu wrote. "With so much alluring, individually tailored content being produced—and so much talent devoted to keeping people clicking away on various platforms—speakers face ever greater challenges in reaching an audience of any meaningful size or political relevance."
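The filter-bubble dynamic Wu describes follows almost mechanically from ranking purely by predicted engagement. The items and scores below are invented for illustration; real recommender systems are far more complex, but the incentive is the same:

```python
items = [
    {"title": "City council budget report",      "predicted_clicks": 0.02},
    {"title": "Local election explainer",        "predicted_clicks": 0.05},
    {"title": "OUTRAGEOUS conspiracy exposed!!!", "predicted_clicks": 0.31},
]

def rank_feed(items):
    # Optimize only for attention, with no weight on accuracy or diversity.
    return sorted(items, key=lambda item: item["predicted_clicks"], reverse=True)

for item in rank_feed(items):
    print(item["title"])
# The most sensational item surfaces first for every user whose predicted
# engagement looks like this, and each click trains the next prediction.
```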

An ethics-based approach


The Engineering Leadership Program at the CU Boulder College of Engineering and Applied Science is trying to teach students to think about these issues philosophically and holistically, before and during the engineering process, to head off problems.

Program Director Shilo Brooks teaches a course with that goal in mind, based on classic philosophical texts from Aristotle and speeches from President Abraham Lincoln. Through the semester, he pairs those texts with modern case studies like Zuckerberg's, hoping to spur discussion about ethical questions in engineering. All of the classes count toward humanities and social sciences requirements in the college, and similar programs can be found around the U.S., including the MIT Benjamin Franklin Project.

"Some of the most innovative people in Silicon Valley don't understand the nature of politics or the polity, nor do they understand human nature," Brooks said. "And so the things that they invent are, to them, simply cool. But they have an effect on the human soul and on the soul of the polity at large. One has to adequately understand how those two things work in order to see how one's inventions will affect or alter them."

Brooks likens platforms like Facebook to a drug administered to a patient with little testing of the potential long-term health risks. He said Zuckerberg's education did not prepare him for the challenges that come with that kind of problem, such as testifying before Congress about how his platform could be censoring speech.

"I think these things require a front-end education," Brooks said. "This course is the first iteration of that for me. I think that there's more to be done for our students."

Casey Fiesler is an assistant professor of information science at CU Boulder and a social computing researcher who studies governance in online communities and technology ethics. She is working to introduce ethics training to coders earlier in their education through the Responsible Computer Science Challenge with Mozilla. Early exposure to ethics is important, she said, because many people who take introductory coding classes may never take another course in the field. Having learned what they need to make a program, they simply start coding and don't look back. You can do a lot of damage with just a little knowledge, she said.

In many cases, ethics is taught as a specialization or add-on to computer science classes, she said. That can reinforce the idea that it is someone else鈥檚 problem to deal with after the code is created.

"When you look at the Cambridge Analytica Facebook data privacy scandal, you have psychological scientist Aleksandr Kogan, the whistleblower data scientist, and then Zuckerberg, who probably dropped out before he got the one required ethics class," Fiesler said. "So the idea is to teach ethics to you as you go along so that everyone has the tools they need to make these decisions. The solution to this isn't adding ethicists to the company now, though Facebook hiring more philosophers would get no argument from me. But instead everyone should know enough to think through these things."


From funny videos to live shootings

The ease of uploading clips to YouTube was the platform's biggest early selling point. Users quickly filled it with daily vlogs that mimic celebrity culture, product reviews targeted to parents or children, travel tips, funny home videos, pet videos, professional music videos, old TV commercials, political perspective essays, conspiracy theories and anything else (literally anything) you could possibly think of. Creators soon became de facto employees of the company through shared ad revenue, a dynamic not seen on Facebook or Twitter and one that adds a layer of liability around speech issues.

As the user base grew (YouTube sees over 1.8 billion users every month), the company needed tools to simultaneously police content and encourage users to watch the next video. The process today is similar to that found at Facebook, with a team of humans, algorithms and AI working together in a delicate balance to achieve both goals.

The scales tipped drastically on March 15, 2019, when a man in New Zealand recorded himself committing a mass shooting on worshipers in mosques, streaming it live; copies quickly flooded YouTube and other platforms. Because many of the clips were altered with minor edits or added filters and graphics, the company's AI and human detection systems were overmatched when it came to removing the content and stopping the sharing cycle.

"As its efforts faltered, the team finally took unprecedented steps—including temporarily disabling several search functions and cutting off human review features to speed the removal of videos flagged by automated systems," The Washington Post reported. "(The company) has hired thousands of human content moderators and has built new software that can direct viewers to more authoritative news sources more quickly during times of crises. But YouTube's struggles during and after the New Zealand shooting have brought into sharp relief the limits of the computerized systems and operations that Silicon Valley companies have developed to manage the massive volumes of user-generated content on their sprawling services. In this case, humans determined to beat the company's detection tools won the day—to the horror of people watching around the world."
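Part of why the altered clips were so hard to stop comes down to how duplicate detection works. The toy sketch below is not YouTube's system; it only shows that the simplest approach, exact fingerprinting of a file, fails the moment a clip is trimmed, re-encoded or overlaid with a graphic, which is why platforms need fuzzier perceptual matching plus human review.

```python
import hashlib

def exact_fingerprint(video_bytes: bytes) -> str:
    """Exact hash of the file contents: any change produces a different value."""
    return hashlib.sha256(video_bytes).hexdigest()

original = b"<binary contents of the original upload>"
edited   = b"<binary contents of the original upload>" + b"\x00"  # a one-byte "edit"

blocklist = {exact_fingerprint(original)}
print(exact_fingerprint(edited) in blocklist)  # False: the tiny edit evades the block
```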

The incident is an important example in discussions about free speech as it relates to these platforms and government regulation. Specifically: who is responsible for speech made on the platform? While the U.S. has little to no regulation in this area, New Zealand law forbids dissemination or possession of material depicting extreme violence and terrorism, and that may apply to companies like YouTube involved in sharing the content.

Freedom of expression is a legal right in New Zealand, enshrined in the country's Bill of Rights Act much as it is in America's Constitution.

Lawmakers in the country are also concerned with how other white supremacist material posted to these platforms, and shared widely through algorithms designed, implemented and tweaked by engineers to keep users online longer, potentially inspired the shooting.

Prime Minister Jacinda Ardern told the New Zealand Parliament that "we cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher, not just the postman."

The shooting shows that engineers will need to consider how their code matches up to legal requirements, their employer's desire for profit and users' actions.

Author Lawrence Lessig argues in his book Code that "code is law." That is, "code writers are increasingly lawmakers. They determine what the defaults of the internet will be; whether privacy will be protected; the degree to which anonymity will be allowed; the extent to which access will be guaranteed. They are the ones who set its nature. Their decisions, now made in the interstices of how the net is coded, define what the net is."

"Too many take this freedom as nature. Too many believe liberty will take care of itself. Too many miss how different architectures embed different values, and that only by selecting these different architectures—these different codes—can we establish and promote our values," Lessig continued. "If commerce is going to define the emerging architectures of cyberspace, isn't the role of government to ensure that those public values that are not in commerce's interest are also built into the architecture?"
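Lessig's point is concrete at the level of a single line of code. In the hypothetical settings object below (the names and defaults are invented for illustration), whoever chooses the default value is effectively writing a rule that most users will live under without ever noticing:

```python
from dataclasses import dataclass

@dataclass
class ProfileSettings:
    # The engineer who writes this line decides what privacy means for the
    # majority of users who never open the settings page.
    post_visibility: str = "public"      # alternative default: "friends_only"
    searchable_by_email: bool = True
    personalized_ads: bool = True

new_user = ProfileSettings()
print(new_user)  # the "law" most accounts start under, set by a default
```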

Fiesler said this concept can be understood by looking at the forces that would prompt YouTube to change its platform's code. Those include market forces, such as a public relations disaster like the New Zealand shooting that leads to a loss of users and revenue, and legal regulation imposed by governments in the public interest, including through oversight institutions.


Government intervention

While the EU has begun to regulate the internet, the U.S. has yet to truly explore government regulation options, especially those related to freedom of speech.

Wu, the Columbia law professor, offers ways the government could intervene to promote a healthy speech environment, many of which would be directly implemented by engineers. One example would be a law that treats major speech platforms as public trustees and requires them to police users and promote a robust speech environment. Wu describes this application as similar to the fairness doctrine, which previously obligated traditional broadcasters to use their power over spectrum to improve the conditions of political speech by offering equal time to opposing viewpoints.

While Wu believes such a law would be hard to administer, he said "the justification for such a law would turn on ... the increasing scarcity of human attention, the rise to dominance of a few major platforms, and the pervasive evidence of negative effects on our democratic life."

Selected references and further reading

by Zeynep Tufekci, Wired

by Elizabeth Dwoskin and Craig Timberg, The Washington Post

Codev2 by Lawrence Lessig

by Charlotte Graham-McLay and Adam Satariano, The New York Times

by Casey Fiesler, How We Get to Next

by Margaret E. Roberts

by Joseph Cox and Jason Koebler, Motherboard (Vice)

Another avenue is described by Dartmouth College engineering professor Eugene Santos in a recent editorial published in The Hill. Santos argues that guardrails for AI are necessary and that the government already possesses several models to put them in place. One example he uses is the Federal Aviation Administration, which gave consumers confidence in a new technology through testing, certification and regulation. The administration's job has also evolved with the technology and its use, offering a road map for how such an organization could respond to changes as AI matures and develops.

Others have argued for breaking up companies like Facebook under antitrust laws. Zuckerberg, however, argues that doing so would leave companies with the same problems and fewer resources.

Ultimately, Fiesler said, implementing these ideas would come down to choices made by engineers, influenced by their training and background. Research by Fiesler and others suggests that exposure to ethical questions within the existing computer science framework can be beneficial. She co-authored a 2018 paper with Professor Tom Yeh that studied how computer science students reacted to courses taught this way. Many students said it was the most thought-provoking course they had taken and that it opened their eyes to new dimensions of their field.

In a related article, Fiesler makes a similar case for weaving ethics throughout computer science education.

Silicon Valley and the way forward

Engineers tend to have a positive view of their projects and their future potential. Zuckerberg lauded his platform coming off the 2016 elections, highlighting the ability of candidates to talk to voters directly. Tufekci, in her book about networked protest, says this position, that more speech and participation is always better, is held by many in the tech industry, despite the problems Facebook exemplifies in contributing to an unhealthy public sphere.

Tufekci notes that no public sphere for ideas has ever been fully immune to problems, but all have required mechanisms to help find truth and compare ideas. Engineers will be asked to find and implement those mechanisms going forward, whether prompted by government intervention, public pressure or other factors. The stakes are too high for them not to, she said.

"By this point, we've already seen enough to recognize that the core business model underlying the Big Tech platforms—harvesting attention with a massive surveillance infrastructure to allow for targeted, mostly automated advertising at very large scale—is far too compatible with authoritarianism, propaganda, misinformation and polarization," she wrote. "The institutional antibodies that humanity has developed to protect against censorship and propaganda thus far—laws, journalistic codes of ethics, independent watchdogs, mass education—all evolved for a world in which choking a few gatekeepers and threatening a few individuals was an effective means to block speech. They are no longer sufficient."

Facebook, YouTube and other creators are not the only responsible parties and should not be the sole deciders on these issues. There are significant economic costs and social tradeoffs to changing the structures of these platforms. But one of the first steps forward is early training in, and discussion of, the ethical issues that sit outside the engineering itself. The other is understanding that these solutions will not be as simple as building a better AI system. Instead, they require a holistic approach that starts with the young engineer.
