Panic about overhyped AI risk could lead to the wrong kind of regulation

Divyansh Kaushik and Matt Korda – Vox

This article examines how sensationalist narratives and misinformation about artificial intelligence (AI) can distort responsible AI governance. It critiques exaggerated threat framings, such as comparisons between AI and nuclear weapons, and argues instead for nuanced regulation, transparent operation, and accountability in AI applications. The authors emphasize addressing genuine catastrophic risks, promoting responsible AI research, implementing data privacy reforms, and fostering collaboration between academia and industry to develop AI policies that benefit humanity while minimizing risk.

A college student created an app that can tell whether AI wrote an essay

Emma Bowman – National Public Radio

Princeton University senior Edward Tian has developed GPTZero, an app that detects whether a piece of text was written by the AI chatbot ChatGPT, addressing concerns about AI plagiarism in academia. The app relies on two indicators: "perplexity," which measures how predictable a text is to a language model, and "burstiness," which measures how much that predictability varies from sentence to sentence; AI-generated text tends to score low on both. While not foolproof, the app aims to bring transparency to AI and promote responsible adoption of the technology. Other efforts to curb AI plagiarism include OpenAI's plan to watermark GPT-generated text and Hugging Face's tool for detecting AI-written content. The New York City education department has blocked access to ChatGPT in schools over concerns about its impact on learning.
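
GPTZero's exact scoring method is not public, but both signals are easy to approximate. The sketch below estimates perplexity with an off-the-shelf causal language model and burstiness as the spread of per-sentence perplexities; the choice of GPT-2, the naive sentence splitting, and the interpretation of "low" scores are illustrative assumptions, not GPTZero's actual implementation.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Any causal language model can serve as the scorer; GPT-2 is a small, common choice.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """exp of the mean per-token cross-entropy: lower means more predictable text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Standard deviation of per-sentence perplexity; human prose tends to vary more."""
    sentences = [s.strip() for s in text.split(".") if len(s.strip()) > 3]
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5

sample = "The quick brown fox jumps over the lazy dog. It barked. Nothing happened after that."
print(perplexity(sample), burstiness(sample))  # low values on both hint at AI-generated text
```

A real detector would calibrate thresholds on known human-written and AI-written corpora, and would use a proper sentence tokenizer rather than splitting on periods, which mishandles abbreviations and decimals.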

Self-Taught AI Shows Similarities to How the Brain Works

Anil Ananthaswamy – Quanta Magazine (August 11, 2022)

Neural networks trained with self-supervised learning, which generate their own training signal from unlabeled data rather than relying on human-annotated examples, develop internal activity patterns that resemble those recorded in biological brains. Computational models of vision and hearing built this way track neural responses closely, suggesting that the brain itself may learn in a largely self-supervised fashion.
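
As a toy illustration of the idea, self-supervised training needs no human labels because the objective is derived from the data itself, for example by hiding part of each input and asking the network to reconstruct it. The sketch below is a generic masked-reconstruction setup in PyTorch, not one of the speech or vision models studied in the article.

```python
import torch
import torch.nn as nn

# Generic self-supervised objective: reconstruct randomly masked features.
# The training target is the input itself, so no human annotation is required.
net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    x = torch.randn(32, 16)                     # stand-in for unlabeled sensory data
    mask = (torch.rand_like(x) > 0.25).float()  # hide roughly 25% of the features
    recon = net(x * mask)
    loss = ((recon - x) ** 2)[mask == 0].mean() # penalize error only on hidden features
    opt.zero_grad()
    loss.backward()
    opt.step()
```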

Principled Artificial Intelligence

Jessica Fjeld & Adam Nagy – Berkman Klein Center for Internet & Society (January 15, 2020)

The report, subtitled "Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI," surveys AI principles documents published by a range of organizations and identifies eight key themes recurring across them: Privacy, Accountability, Safety and Security, Transparency and Explainability, Fairness and Non-discrimination, Human Control of Technology, Professional Responsibility, and Promotion of Human Values.

Hold Artificial Intelligence Accountable

Chamith Fonseka – Harvard University, The Graduate School of Arts and Sciences

The article examines "invisible AI": algorithms that operate behind the scenes to predict outcomes and make decisions in areas ranging from online price steering to hiring and even criminal sentencing. Because these systems are opaque, biased and unfair outcomes can go undetected. The article calls for regulation and transparency in AI to ensure the technology is used ethically and equitably.
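
One concrete way auditors probe such invisible systems is to compare a model's decision rates across demographic groups. The sketch below computes a disparate-impact ratio on made-up hiring decisions; the data, group labels, and the four-fifths threshold are illustrative assumptions, not drawn from the article.

```python
# Hypothetical audit: does an opaque hiring model select one group far less often?
decisions = [  # (group, model_said_hire)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", True), ("B", False), ("B", False),
]

def selection_rate(group: str) -> float:
    """Fraction of applicants in `group` that the model selected."""
    outcomes = [hired for g, hired in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"A: {rate_a:.0%}, B: {rate_b:.0%}, disparate-impact ratio: {ratio:.2f}")
# Under the common "four-fifths" rule of thumb, a ratio below 0.8 flags
# potential adverse impact and warrants closer scrutiny of the model.
```

Audits like this only surface disparities; explaining and remedying them still requires access to the model and its training data, which is why the article's call for transparency matters.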