Transparency in AI is Good, but Transparency in Online Content Moderation is Better 


President Joe Biden’s October 30 executive order on artificial intelligence is broad, focusing heavily on safety but hardly at all on freedom. 

It includes clauses on protecting personal privacy and the copyrights of training data, watermarking AI-generated content and deepfakes, safeguarding civil rights, addressing the replacement of workers by AI, and ensuring the safety of use cases that may affect the military or critical infrastructure.  

It also makes a vague gesture toward minimal transparency by requiring AI companies to share the results of red-team safety tests of their platforms, which will eventually follow criteria from not-yet-developed standards set by the National Institute of Standards and Technology.  


Many other government agencies, including the Departments of Commerce, Treasury, Energy, Defense and Homeland Security, are given roles in AI regulation and safety standards. Indeed, there are only a few lonely agencies not directly invoked to participate in regulating AI.  

Government regulations on AI transparency have to include content guidelines as well. (Getty Images)

The goals of ensuring safe and non-military uses of AI are laudable. What is missing is real transparency on the training data and content restrictions that govern how these black-box AI systems generate content and whether “safety” standards will include viewpoint neutrality.  

If one or a few AI platforms become a monopoly, and potentially replace the current monopoly of Google search platforms, then such transparency and viewpoint-neutrality are crucial. This will ensure free and fair distribution of news, opinions and academic debates without censorship or bias from so-called authoritative sources that promote the then-current government narrative.  

Today, a few well-known online social media, search and video sharing platforms are already monopolies that completely dominate 80-90% of the visibility and sharing of content. 

Americans now report they get most of their news from these online sources, so these monopoly platforms have become the new town square of news, political opinion and academic debate. If a content creator or news site is not visible on the front page of Google search results, that site will disappear from the planet in terms of reader access.  

If ensuring both online safety and viewpoint neutrality is a bridge too far for our current divided Congress to tackle, a first step that can gather bipartisan support and have a strong positive impact is to mandate public transparency for the online content moderation rules of these monopoly platforms. Such transparency would not need to ask companies to expose their trade secrets and core intellectual property, but would focus on the following:  

1. Publish online content moderation standards with examples so that users can easily understand which content is allowed and which content may be moderated in some way.  

2. For any enforcement action, publish an explanation of which specific content rules were broken and which specific content broke them.  


3. If any third-party fact-checkers are involved in the enforcement action, publish the names, qualifications, funders and history of each fact-checker or fact-checking organization.  


4. Rapid public reporting of any communications to or from government entities, including entities funded by the government and all employees and contractors of such entities, with an exception for official police or national security requirements.   

5. If an AI platform approaches monopoly status with more than 50% share of AI content generation measured by active users or revenues, that platform would need to publish all relevant training data sources and content rules related to the content generated by such AI platform.  

Enforcement could be handled like enforcement against false advertising, with financial penalties for companies whose platforms fail to publish and follow their content moderation rules. However, the glare of publicity is likely the more powerful tool for pushing these companies to publish reasonable content rules and to follow them consistently and fairly.  


These requirements, or similar transparency rules, have appeared separately in various proposals from both the Democratic and Republican sides of Congress.  

Transparency alone does not solve all the challenges of online safety and viewpoint neutrality and related issues with existing Section 230 law, but it is a strong start and has a far higher chance of garnering bipartisan support for passage.  


