Today’s random pick of 3 of my 12 favorite problems:
- How do we save the world? How do we figure out what that even means?
- How can we create new models of education that are fun and cultivate a better world while working within systems that make this work challenging?
- How can we create cultures where giving and helping are the norms when, so often, the prevailing norm is survival?
Today I read an article from Erik Hoel’s The Intrinsic Perspective. I believe I learned about this publication from Elle Griffin’s recommendations. I was drawn to it because of its focus on interdisciplinarity:
> The Intrinsic Perspective is about consilience: breaking down the disciplinary barriers between science, history, literature, and cultural commentary.
The article is titled “Here lies the internet, murdered by generative AI.” I think it would be a great one to have my students read.
Erik gives several examples of poor-quality AI-generated content that exists only for profit:
- Soon after Erik published his book The World Behind the World in 2023, AI-generated “workbooks” for it appeared on Amazon.
- Deepfake porn has become widely accessible.
- A company executive bragged on Twitter about using AI to effectively steal another company’s content and users.
- Sports Illustrated ran articles written under AI-generated author names with AI-generated bios.
Erik then discusses a particularly horrifying problem: AI-generated YouTube videos for young children. Some of these channels have millions of subscribers.
> They don’t use proper English, and after quickly going through some shapes like the initial video title promises (albeit doing it in a way that makes you feel like you’re going insane) the rest of the video devolves into randomly-generated rote tasks, eerie interactions, more incorrect grammar, and uncanny musical interludes of songs that serve no purpose but to pad the time. It is the creation of an alien mind.
> All around the nation there are toddlers plunked down in front of iPads being subjected to synthetic runoff, deprived of human contact even in the media they consume. There’s no other word but dystopian. Might not actual human-generated cultural content normally contain cognitive micro-nutrients (like cohesive plots and sentences, detailed complexity, reasons for transitions, an overall gestalt, etc) that the human mind actually needs? We’re conducting this experiment live. For the first time in history developing brains are being fed choppy low-grade and cheaply-produced synthetic data created en masse by generative AI, instead of being fed with real human culture. No one knows the effects, and no one appears to care.
Apparently, in one of its blog posts from 2019, OpenAI was mainly worried about:
- Fake news
- Impersonating people online
- Automating the production of faked and abusive social media posts, spam, and phishing content
Erik believes, and I agree, that the cultural pollution will continue, because OpenAI was so bad at foreseeing ChatGPT’s negative consequences:
> That is, the OpenAI team didn’t stop to think that regular users just generating mounds of AI-generated content on the internet would have very similar negative effects to as if there were a lot of malicious use by intentional bad actors. Because there’s no clear distinction! The fact that OpenAI was both honestly worried about negative effects, and at the same time didn’t predict the enshittification of the internet they spearheaded, should make us extremely worried they will continue to miss the negative downstream effects of their increasingly intelligent models. They failed to foresee the floating mounds of clickbait garbage, the synthetic info-trash cities, all to collect clicks and eyeballs—even from innocent children who don’t know any better.
The above quote is a great one to share with my students. Imagining unintended consequences well is really hard, in part because it’s so hard to infer intent from a given piece of content: whether a human used AI with good or bad intent, the quality of the result is the same, as long as close to 100% of it was written by AI. (Content where AI merely assists a human creative is likely higher quality; see the next paragraph.)
This is not to say that AI is all pollution. Creatives use AI to augment their process: David Perell uses it in his writing workflow, and Jeremy Nguyen posts AI prompts for sparking creativity.
Erik goes on to frame AI pollution as a tragedy of the commons. When I read this header I was immediately on guard: one of my Causal Inference students in Spring 2023, an economics student, was frustrated by prevailing economic notions like the tragedy of the commons, which assume that humans are bad at heart. She told me there are several examples of communities in other cultures using and sustaining a shared resource. (After Erik’s quote below, I sketch the payoff logic in a few lines of code.)
> We are currently fouling our own nests. Since the internet economy runs on eyeballs and clicks the new ability of anyone, anywhere, to easily generate infinite low-quality content via AI is now remorselessly generating tragedy.
>
> The solution, as Hardin noted, isn’t technical. You can’t detect AI outputs reliably anyway (another initial promise that OpenAI abandoned). The companies won’t self regulate, given their massive financial incentives. We need the equivalent of a Clean Air Act: a Clean Internet Act. We can’t just sit by and let human culture end up buried.
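To make the incentive structure concrete (for myself, and maybe for my students), here is a toy payoff sketch of the commons dynamic Erik describes. Every number in it is invented for illustration; the point is only the shape of the incentives, not any real measurement.

```python
# Toy model of the content commons (all numbers are invented):
# each publisher earns a small private payoff per AI-generated post,
# while every post slightly degrades the shared attention pool
# that all publishers depend on.

N_PUBLISHERS = 100
PRIVATE_GAIN = 1.0   # assumed clicks a publisher gains per AI post
SHARED_COST = 0.05   # assumed quality lost by EVERY publisher per AI post

def payoffs(num_polluters: int) -> tuple[float, float]:
    """Payoff to each polluter and to each honest publisher,
    given that `num_polluters` of the publishers pollute."""
    shared_damage = num_polluters * SHARED_COST
    return PRIVATE_GAIN - shared_damage, 0.0 - shared_damage

for k in (1, 10, 50, N_PUBLISHERS):
    p, h = payoffs(k)
    print(f"{k:3d} polluters -> polluter payoff: {p:+.2f}, honest payoff: {h:+.2f}")
```

Whatever everyone else does, polluting always pays PRIVATE_GAIN more than abstaining, so each publisher defects; but once everyone defects, every payoff is worse than if no one had polluted. That dominant-strategy structure is exactly why Erik argues the fix has to be regulatory rather than technical, and, per my former student, why sustaining a commons takes deliberately built norms rather than good intentions alone.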
While I agree with my former student that humans are fundamentally good, the key point here is that AI-generated pollution has already done a great deal of harm. But with work, we can fix it.