Late one Sunday evening in March, Connecticut's junior U.S. Senator, Chris Murphy, took to Twitter to raise the alarm over one of the Internet's most buzzed-about artificial intelligence programs, ChatGPT.
In a tweet that quickly became the subject of ridicule from the tech world, Murphy claimed that ChatGPT had "taught itself to do advanced chemistry" without prompting from its developers, and subsequently made that knowledge public.
"Something is coming," Murphy warned. "We aren't ready,"
Critics of Murphy’s message argued that it fundamentally misrepresented the technology behind chatbot programs like ChatGPT, which rely on algorithms to identify language patterns in pre-fed texts in order to respond to questions and other prompts, such as a request to write an essay or short story.
"We believe that a practical approach to solving AI safety concerns is to dedicate more time and resources to researching effective mitigations and alignment techniques and testing them against real-world abuse," ChatGPT’s developer, OpenAI, said in a lengthy safety statement posted to its website on Wednesday.
Still, Murphy has responded to the backlash by accusing critics of shaming him for an apparent lack of tech proficiency, while also doubling down on some of his claims. "I’m not an engineer," Murphy said in a subsequent tweet. "But I’m also not an idiot. I know something dangerous when I see it."
The senator spoke with CT Insider by phone this week to articulate his concerns about AI, social media and why he isn’t fearful that sentient computers will destroy humanity. The following conversation has been edited for length and clarity.
It took Facebook four years to acquire 100 million users. It took ChatGPT two months, and the disruptive potential of ChatGPT-style AI and other types of AI is fundamentally more significant than that of conventional social media.
We're talking about a technology that has the potential to outsource basic human functions, like creativity, composition and conversation. It used to be that machines were replacing humans' physical labor. Now machines are replacing humans' mental labor. That is civilization-changing, and I just don't believe we should let it happen without some conversation about how to get the good from AI without all of the potential downside.
Yeah, I haven't seen it.
But I just don't think we understand what's coming. Already, you can have an interesting conversation with ChatGPT that'll keep you intrigued for a couple of hours, and this is just the start. So to your question, I'm not sitting around worrying about AI becoming sentient and destroying humanity. I'm interested in the moral question of what it means when the things we produce are no longer necessary.
There's plenty of evidence to show how automation has already dramatically affected wage growth. The increasing separation between rich and poor in this country is driven in large part by automation, but we still live in a world in which everybody who wants a job can get one. AI is going to further consolidate economic gains in the hands of a small group of winners who know how to manipulate AI, but it also risks eliminating certain job classes altogether. I think there is a question as to whether AI is going to reduce the number of jobs necessary in the economy and start to really dry up sources of employment.
I said this online: I think the shaming campaign against that tweet from the so-called technology class was really instructive. I'm not a computer scientist and I'm not an engineer; I talk about AI like regular people do. I think some of the criticism assumes that I thought AI had sentient capabilities. I don't. I understand that AI is not teaching itself in the way that a human being would teach itself. But I think the words I used in that tweet are the words that regular people use when they talk about AI, and I think the technology class wants to shame people like me into shutting up so that they set the rules, and they get to capture all the profits from AI.
I think we need to have a public discussion about the future of artificial intelligence, and I think that we should get to set the rules for what the terminology is, not the class of technologists that has already made a bunch of pretty catastrophic mistakes in the rollout of recent technologies.
What doomsday prediction did I make?
There are lots of people who claimed I said things I didn't say. I didn't make a doomsday prediction. But I think the effect of her comments is very clear, to try to stop people like me from engaging in conversation, because she's smarter and people like her are smarter than the rest of us.
I don't think we're there yet. I think right now we're just learning about the capabilities of the current products like ChatGPT and what they will be capable of six months and six years from now. So I don't have a policy recommendation yet.
But it does feel like we need to decide what we want from AI and what we don't want from AI. For instance, deep fakes are a real problem and are going to increasingly be a very big problem both in our economy and in our social lives. Some of that technology is AI, some of it isn't. But you know, it is a little strange that there's no statutory prohibition against somebody pretending to be me, pretending to use my voice, either in social settings or in commercial settings. So that's something we should look at pretty quickly.
I will be doing meetings and briefings with industry leaders. This firestorm that was created by my Sunday-night tweet has caused a lot of my colleagues to approach me wanting to work on this issue together. So I'll be working with colleagues, I'll be doing some meetings with experts. And then I'll just be having a conversation with regular people here in Connecticut, about what they want and what they don't want.
I do. I think what we're learning is that virtual connection cannot replace in-person connection. We thought social media was going to connect us more easily to each other, and it has ended up making many of us miserable.
Watching others on TikTok or connecting only via text is just not emotionally satisfying in the way that in-person connection is. AI is even more dangerous because, you know, these AIs give you the sensation that you're having a conversation with a human being without ever being in contact with a human being, and they're giving you advice on your social life. I just think that AI has the potential to drag many people deeper into lives of isolation, because your smartphone and your computer are going to be able to deliver more advanced and more personally tailored content than ever before.
A lot of it. Some of the technology people may criticize my terminology, but I'm pretty technologically adept. You know, I'm one of the only senators who personally posts on social media. I've got teenage kids who are all over social media. Both my kids are pretty healthy, but I've watched some of their friends become very unhealthy in part through their social media usage. Through my kids, I see the dark corners of social media that can breed envy and resentment and self-hate. I'm pretty convinced that [social media] is doing, if not more harm than good, then no more good than harm.
I mean, I hear their concerns about TikTok; I don't think we should be in the business of handing a potential espionage software package to the Chinese. But I think this myopic focus on TikTok is dangerous.
TikTok is the product that most of our young kids are using. If we ban TikTok, some other dangerous product is going to come along that has the same capabilities. So my colleagues act as if the only problem is Chinese espionage. I'm not saying that's not a problem. But the bigger problem is the damaging impact that all social media has on our culture, and on our economy. So I am much more focused on Congress stepping up and beginning to control social media, regardless of who the owner is.
I'm still learning. We passed that bill at the federal level. I didn't object to it, so I'm on record as supporting banning TikTok on government devices at the federal level. But I still am worried that it's symptomatic of the government's hyper-focus on TikTok at the exclusion of a broader debate over the impact of social media.
I wouldn't have written the Utah bill the same way if it was my legislation. But I think parents right now feel very impotent and powerless when it comes to their kids' interaction with social media. So I'm very open to proposals to give parents more power when it comes to kids' social media accounts. I think that there's going to be some bipartisan agreement around these questions of parental empowerment when it comes to kids’ access to social media.
I mean, I have a pretty good handle on what my kids are doing on social media, but given that the algorithms are constantly changing the content that they see, I can never be completely aware of what my kids are seeing online. It used to be kids were watching TV, you were looking at the same TV. They knew what channels they were watching. Now, they're looking at a tiny phone. The content they're seeing is changing all the time based on the algorithm and parents have very little visibility into that. That's frightening for a lot of parents, deservedly so.
Well, you're sort of asking me to draft a piece of legislation, which I'm not ready to do yet. But I think there's a legitimate question as to how much visibility parents should have. You know, the Utah bill, I think, gives the right to parents to essentially surveil their kids online at all times, and I have mixed feelings about that level of intervention.
Let's back up for a minute and talk about all the positive things that happen on social media. There are a lot of communities that you would never be able to find in person that you can find online. There's just a lot of fun that happens online. My kids are constantly showing me the stupidest TikTok and YouTube videos, and I totally get a kick out of that. I'm not trying to be a Luddite Scrooge, I understand there's a lot of value when it comes to social media, but there's tremendous downside.
Social media polarizes our political debate in this country. It elevates the most extreme voices like Alex Jones. While it allows positive communities to be fostered it also allows really dangerous communities, like the people who organized January 6, to find each other. So there's a lot of bad stuff happening on social media today, side by side with a lot of positive things.
I think there's a growing bipartisan agreement around the need to more tightly control social media. So I think you will see some pretty new and exciting bipartisan partnerships this year around the regulation of social media. I think a lot of that will be connected to traditional social media like Facebook, Twitter and TikTok.
When it comes to AI, the technology is moving so fast that it's going to take a little bit longer for Congress to learn and catch up. So I would hope that by the end of this year we're discussing legislation to regulate AI, but my sense is we're just not ready yet.