All three executives have declined to take responsibility for spreading misinformation that resulted in the Capitol riots earlier this year
The CEOs of tech giants Facebook, Google and Twitter faced some tough questions in the US Congress on Thursday as they appeared at a joint virtual hearing, held by two subcommittees of the House Energy and Commerce Committee, over the proliferation of disinformation on their social media platforms.
The hearing was announced in February, over a month after the riot in which Trump supporters stormed the Capitol Building while lawmakers were certifying the results of the US presidential election.
Thursday's virtual hearing ran for more than five hours.
The session began with subcommittee chair Mike Doyle asking all three executives whether they felt they bore responsibility for the spread of misinformation that resulted in the Capitol riots earlier this year.
All of them declined to answer the question with a simple "yes" or "no".
Facebook CEO Mark Zuckerberg pinned the blame squarely on former US President Donald Trump and a "political and media environment" in the country that has been driving "Americans apart".
Zuckerberg said Facebook had taken all required steps "to secure the integrity of the election", but that Trump had then given a speech on 6th January "calling on people to fight".
He also said that "Congress should consider making platforms' intermediary liability protection for certain types of unlawful content conditional on companies' ability to meet best practice to combat the spread of this content".
Google's CEO Sundar Pichai claimed that YouTube had removed thousands of videos that violated its rules in the lead-up to the violence.
"We had clear policies and we were vigorously enforcing this area," he said.
Twitter CEO Jack Dorsey told lawmakers that his company worked hard to take down posts and also took various steps to ensure that misinformation was not amplified.
"We didn't have any upfront indication this would happen, so we had to react to it quite quickly," Dorsey said.
Rep. Mike Doyle, chair of the House subcommittee on Communications and Technology, told the CEOs that his staff could easily spot lots of anti-vaccine content on Twitter, Facebook, Instagram and YouTube.
"You can take this content down. You can reduce the vision. You can fix this. But you choose not to," Doyle said.
"You have the means. But time after time you are picking engagement and profit over the health and safety of users."
The lawmakers also voiced concerns about the negative impact that social media platforms could have on children.
Republican Florida Congressman Gus Bilirakis asked Zuckerberg whether Facebook was developing a new Instagram app for children younger than 13.
"I find that very concerning, targeting this particular age bracket, 13 and under, given the free services, how exactly will you be making money?" Bilirakis asked.
"Are you trying to monetise our children, too, and get them addicted early?"
Zuckerberg confirmed that the company was "early in our thinking" on how an app for kids would work.
"There is clearly a large number of people under the age of 13 who would want to use a service like Instagram," Zuckerberg said.
Pichai was also asked whether Google had researched the effects of YouTube and other Google products on children's mental health.
Pichai said that Google consults many experts, including mental health organisations, on the issue, and works with partners to create good content for kids, such as cartoons, Sesame Street and science videos.
Pichai said that this was an important issue, and that he too worries about the screen time of his children.
Commenting on the virtual hearing, Emma Ruby-Sachs, executive director of advocacy organisation SumOfUs, said: "The January 6th insurrectionists are being charged and arrested while these three men, who fed them the money, followers, and tools to make the whole thing happen, get away with a softball grilling from Congress. Break up Big Tech or just sit back and wait for the next armed rebellion.
"Lawmakers and the media tend to focus on Facebook and Twitter, while Google gets away with being a massive contributor to the disinformation machine. These websites have huge reach on Facebook, but are able to sustain themselves thanks to Google ads. Until Google changes its policies on the monetisation of disinformation, the company is equally responsible for the violence on January 6."
Cris Pikes, CEO and founder of Image Analyzer, which is a member of the Online Safety Tech Industry Association (OSTIA), said, "It is perfectly feasible to apply AI technology today which can automatically check billions of posts before they are permitted onto a digital platform where they could harm other users."
"When Mark Zuckerberg says it is not 'feasible' to remove harmful posts, is he saying it is not technically feasible to moderate content posted by Facebook's users? Or is he saying that it is not commercially viable for Facebook to put those technical measures in place to protect users?"
"With regard to Mr Zuckerberg's second observation that large platform operators should not be held liable if a particular piece of content evades its detection, where does that leave the users? That's like a water treatment firm saying that it shouldn't be held responsible if a contaminant evades its filtration and thousands of people get sick. The response in both cases is to improve your filters. The technology is there, there are no excuses. This is precisely why legislators in the UK, Europe and US are stepping in to make it compulsory for platform operators to exercise more control over user-generated content and combat online harms."