How to talk about AI (even if you don’t know much about AI)

Deeper learning

Catching crappy content in the age of artificial intelligence

Over the past 10 years, Big Tech has gotten really good at a few things: language, prediction, personalization, storage, text analytics, and data processing. But it is still surprisingly bad at catching, tagging, and removing harmful content. One need only recall the spread of election and vaccine conspiracy theories in the United States over the past two years to understand the real damage this causes. And the ease of use of generative AI could supercharge the creation of even more harmful online content: people are already using AI language models to create fake news sites.

But could AI also help with content moderation? The latest large language models are much better at interpreting text than previous AI systems, and in theory they could be used to boost automated content moderation. Read more from Tate Ryan-Mosley in her weekly newsletter, The Technocrat.
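To make the idea concrete, here is a minimal sketch of what prompt-based moderation with a language model might look like. Everything in it is illustrative: `call_llm` is a hypothetical stand-in for whichever LLM client you use, and the policy categories are invented for the example, not any platform's actual taxonomy.

```python
# Minimal sketch of prompt-based content moderation with a large language model.
# `call_llm` is a hypothetical placeholder; swap in a real LLM client in practice.

MODERATION_PROMPT = """You are a content moderator.
Classify the post below into exactly one category:
ALLOW, HATE, HARASSMENT, or MISINFORMATION.
Reply with the category name only.

Post:
{post}
"""

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call. Returns a canned reply here
    so the sketch runs end to end; replace with a real provider call."""
    return "ALLOW"

def moderate(post: str) -> str:
    """Ask the model for a single policy label; escalate to a human reviewer
    if the reply is not one of the expected categories."""
    allowed = {"ALLOW", "HATE", "HARASSMENT", "MISINFORMATION"}
    reply = call_llm(MODERATION_PROMPT.format(post=post)).strip().upper()
    return reply if reply in allowed else "NEEDS_HUMAN_REVIEW"

if __name__ == "__main__":
    print(moderate("Example post text goes here."))
```

The appeal, in theory, is that a model like this can follow a written policy directly, whereas older automated moderation systems typically needed labeled training data for every category they were asked to catch.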

Bits and bytes

Scientists have used artificial intelligence to find a drug that can fight drug-resistant infections
Researchers at MIT and McMaster University have developed an AI algorithm that allowed them to identify a new antibiotic that kills a type of bacteria responsible for many drug-resistant infections common in hospitals. It's an exciting development that shows how AI can accelerate and support scientific discovery. (MIT News)

Sam Altman warns that OpenAI may leave Europe over AI rules
At an event in London last week, the CEO said OpenAI could "cease to operate" in the EU if it cannot comply with the upcoming AI Act. Altman said his company found much to criticize in how the AI Act was worded and that there were "technical limits to what is possible." This is probably an empty threat: I've heard Big Tech say this many times before about one rule or another. More often than not, the risk of losing revenue from the world's second-largest trading bloc is too great, and they figure something out. The obvious caveat is that many companies have chosen not to operate in China, or to keep only a limited presence there. But that is also a very different situation. (Time)

Predators are already exploiting AI tools to generate child sexual abuse material
The National Center for Missing and Exploited Children has warned that predators are using generative AI systems to create and share fake child sexual abuse material. With powerful generative models being deployed with inadequate, easy-to-circumvent safeguards, it was only a matter of time before cases like this emerged. (Bloomberg)

Tech layoffs have devastated AI ethics teams
This is a good overview of the drastic cuts that Meta, Amazon, Alphabet, and Twitter have made to their teams focused on internet trust and safety, as well as AI ethics. Meta, for example, ended a fact-checking project that had taken six months to build. As companies race to build powerful AI models into their products, executives like to boast that their technology development is safe and ethical. But it's clear that Big Tech sees the teams dedicated to these issues as expensive and expendable. (CNBC)
