Sassy Politics
Sassy Politics is a weekly political commentary show that’s feminist AF, independent, and unapologetically sassy.
Hosted by Christi Chanelle, this podcast breaks down the news with sharp wit, sarcasm, and a side of are-you-kidding-me energy. No corporate talking points. No both-sides nonsense. Just real talk about the issues that matter.
From book bans and culture wars to reproductive justice, economic inequality, grassroots movements, and clown behavior in Congress—Christi covers it all through the lens of people over profit, equality over ego, and facts over fearmongering.
This is the show for people who are tired of performative politics and polished punditry. It’s for folks who care about justice, value truth, and want to understand the headlines without the BS.
Sassy Politics is smart, sarcastic, and rooted in real people, real impact—because someone had to say it.
New episodes every week.
Follow along on TikTok, YouTube, and IG @SassyPoliticsPod
More at ChristiChanelle.com
AI Could End Humanity
💬 Question for You:
When it comes to AI, what questions do you want answered?
What confuses you? What worries you? What are you curious about?
Comment on YouTube or message me — I want to start covering AI regularly, and your questions will shape future episodes.
Artificial intelligence is no longer theoretical. It is already reshaping jobs, industries, creativity, information, and global power structures in real time. And here’s the part that feels unsettling for many of us: none of us were asked before this started.
In this episode, Christi dives into one of the biggest questions people are quietly wondering:
Could AI actually pose a risk to humanity?
This episode explores:
- The truth about AI and job disruption
- Why governments and corporations are racing to dominate AI
- Expert warnings about long-term risks and human extinction scenarios
- Environmental concerns around data centers, energy, and water use
- The opportunities AI creates for everyday people
- Why adaptation matters more than fear
- How humans — not machines — may ultimately determine the outcome
Christi also brings AI directly into the episode for a live conversation, asking the questions many people are thinking but not always saying out loud.
🎙️ New Episodes
New episodes of Sassy Politics release every week:
🗓️ Every Tuesday
🕛 Midnight on podcast platforms
📺 9:00 AM Central Time on YouTube
🔗 Connect with Christi Chanelle
Instagram:
https://instagram.com/christichanelle
YouTube:
https://youtube.com/@SassyPolitics
Facebook:
https://facebook.com/SassyPolitics
BlueSky:
https://bsky.app/profile/christichanelle.bsky.social
Website:
https://www.christichanelle.com
☕ Support the Show
Buy Me a Coffee:
https://buymeacoffee.com/LinkedWeStand
⚠️ Disclaimer
This content may include satirical commentary, altered media, or opinion-based analysis intended for educational, entertainment, or advocacy purposes. Any video clips, images, or quotes that have been edited or recreated are clearly intended as political or cultural critique — not factual representations. Viewer discretion and independent research are encouraged.
TikTok:
https://www.tiktok.com/@christichanelle
Opening Fears And Framing
SPEAKER_01: Chat, why does it always feel like AI is gonna come and take our jobs and completely destroy humanity? Makes me a little nervous.
SPEAKER_00: It's natural to feel that way. AI is advancing quickly, and big change can stir up anxiety. But remember, AI is a tool. How it's used depends on human choices. Being thoughtful, ethical, and proactive is key. And hey, being nice doesn't hurt either.
Why AI Feels Inevitable
Facts Over Hype
Jobs Disrupted And Timelines
Personal Tech Memories And Speed
Top Jobs At Risk Countdown
The Global AI Race
SPEAKER_01: Okay, well, I'm gonna still be nice to you no matter if you're gonna come and take my job. I'm Christi Chanelle, and this is Sassy Politics. That right there is exactly why we're having this episode. Because AI is no longer abstract. It's literally in our hands, and it's here, whether we like it or not. Today we're talking about artificial intelligence. That's right. AI. I know there's a lot of conspiracy around it, and there's a lot of fear-mongering too. And honestly, I get it. I get caught up in that sometimes too. Because the truth is, we're learning this in real time. It's unfolding right in front of us. The future. There's no book telling us what happens next. There's no roadmap. We don't know. So, yes, people speculate. And yes, people worry. What can we do? Let's look at what we actually know: the real facts, the real timelines, the real impacts. Because AI is already shaping jobs, industries, creativity, information, power struggles. And here's the part that really gets me. Not one person asked our opinion. Not one. It's full steam ahead. Who's gonna get there first? Who's gonna dominate the market? Who's gonna make those billions and trillions of dollars? Who's gonna control the technology? We were never asked. As usual. So today, we're gonna talk about it. The risks, the opportunities, the people driving this race. We also need to cover what the experts are warning us about, right? Like that's huge. When experts in the industry are coming out and they're telling us stuff, we need to stop what we're doing and listen, right? And most importantly, what we're going to do now. Because I can't ignore this. Listen, there's a lot of things happening really, really fast. But one thing we can't do is ignore this AI anymore. It's impacting too many areas of my life. And I have a feeling that we are going to be discussing this topic a lot more on Sassy Politics. As things change, I want to talk about it with you.
I want to know how you're feeling, what concerns you have, and what you want me to look into. Because nobody is untouched by this unless you're deliberately turning a blind eye. And if you are, I actually feel bad for you. Because whether we like it or not, this train is moving. Now I know that there's backlash too. Environmental concerns, energy use, water consumption, corporate power, all of these are valid concerns. But again, nobody asked our opinion before building this. So the reality that we're sitting in right now is simple. You can fight it or you can learn to use it. And I respect both positions. But personally, I'm getting on the train. Because survival matters and adaptation matters. And I'm not in control of whether AI exists. None of us are. So let's start with the question that most people care about. Jobs. According to the World Economic Forum Future of Jobs report, about 44% of workers' core skills are expected to change by 2027. That is less than a year away. That is extremely soon. McKinsey Global Institute estimates up to 30% of work hours across the U.S. economy could be automated by 2030. Just sit with that for a second. This is not science fiction. This is here. And those numbers, they come from real research. We are here. Technology has always disrupted work. The Industrial Revolution replaced manual labor with machines. Electricity and manufacturing reshaped economies. Computers automated office work. The internet changed communication overnight. My handle was Melody at A-O-L or something like that. Because I loved music. That technology gave me something priceless, because my mom, as you know, is no longer with us. And the first thing I did was go pull and print those emails. So I remember the dawn of emailing and the internet very fondly. Then smartphones came along and created entirely new industries. The difference now is just speed. Those changes happened over decades.
AI is happening over years. So the question becomes: did we already miss the opportunity? Or are we still early? My belief is we're still early. But the window will not stay open forever. I asked Chat what the top 10 jobs most at risk of disruption are. And I know there are several, several lists out there. Of course, I had to ask my own, right? I already have a good idea of what most of them are, and I'm curious to see what Chat has to say. So we're gonna go through the top nine. I'm saving number one for the end. Of course, we need a little bit of a teaser, right? You didn't think I was gonna give you the whole list, did you? No, no, no, no, no. We've gotta get through the whole episode and I will share number one at the end. And I just want you to know, when I'm giving you this list, it's not because I want to scare you. I just want to bring awareness, because AI doesn't replace people, it replaces tasks. But your fear is really valid, because I have it too. I mean, I wouldn't be talking about this if I didn't, right? Okay, let's get to the list. Number 10, customer service representatives. This is the area that I would debate a little bit, because I know I hate when I call a company, like any company. You could take any company, Verizon, Walmart, anyone, you can call any of these corporations and get an automated response. Press one for this and two for that, and zero to get a live representative. Well, I want a live representative, because your AI robot does not understand what the heck I'm saying. And I hate that all the time. All the time. So I don't know how good these AI bots are gonna be. I guess we'll see, but I still struggle with having all of our customer service representatives be taken out by AI. Like I really, really do. Because just the word itself, customer service, it should be human. My opinion, my opinion, but it is number 10 on the list. Number nine, retail cashiers and sales associates.
I can totally see this happening because it's already here. We're already checking out ourselves at this point, and there's just somebody there making sure we're not stealing, or helping when the machine says, put the product in the bag. I'm like, I did put it in the bag, and it's still beeping, beeping, beep. We're gonna have some of that. So we need these humans to help sort that out. Um, but it's here. That's not a surprise at all, and I'm surprised it's not higher. Number eight, market research and analysis. Yeah, I mean, because that's basically what AI does for all of us, right? It looks and finds deals, or it finds statistics, or goes and looks at articles. I don't think there's much hope for that one. Number seven, basic legal assistance and paralegals. Yeah. Because now you can create your own assistant, right? You can create your own bot to do this stuff for you. So I would assume that that would be the case. It can't go pick up your coffee or your cleaning, but it can schedule it. Number six, data entry specialists. Yeah, because I mean, I could just copy and paste something in there and say, fill in all of this, and it can do it for you. I haven't done it myself yet, but I suspect I will be learning how to do it. Number five, administrative assistants, which is the same thing. Number four, translators. And yes, yes, translators. I mean, if I went to a foreign country and I was walking around, I could use my phone, but I would still probably hire a translator that could give me some history on the things that we're looking at. I would. I don't think they're totally gone, but I could see how they could be cleaned out. Number three, accounting and bookkeeping roles. And guess what I'm in? Accounting. I'm an AR manager. That's what I do. The company that I work for really strives for customer service, though. And so they want us talking to our customers. Although it's not through phone as much, it's more emails.
And emails can happen with a bot. So I don't know. I don't know. There'd be no more of a "let me talk to your manager." Yeah. Well, he's a bot, so he doesn't really have one. Um, so yeah, my job is up for grabs by AI. Number two, basic content writers. I think I've used AI to help spawn some ideas, meaning I had the idea, and then I just went to Chat to kind of help me collaborate on creating one. But I've never used just AI. I might have one episode, way back, where I was like, let me just give this a shot. And it wasn't me, because I wasn't able to really fully be myself. You know what I mean? I was reading off a script that I didn't really write. Um, yeah, I mean, the whole reason you're here is to talk about human stuff, right? Human politics. So it can give me the sources and the facts, but not the content, truly. But everybody uses it differently. I just don't use it like that. Okay, so we're gonna get to number one later in the show. So why does it feel like there's a race happening in the AI world? Why does it feel like they're racing each other? Claude and ChatGPT and, you know, all of these other ones, they're just sitting there racing each other. Governments and corporations see AI as the next strategic technology, similar to nuclear power and the space race. The United States, China, and the European Union are investing billions. Major tech companies dramatically increased spending on AI infrastructure between 2023 and 2025. And whoever leads AI will likely shape global economic power. I think we already know that. Right? Right? Yes, yes, yes, yes. This is true. So then you look at your players and see, oh, which one of you guys is gonna be leading the charge? Which one of you guys is gonna dictate how we live? Okay, I'm spiraling. I'm spiraling.
SPEAKER_00: CEO warnings.
Environmental And Energy Costs
Upsides, Productivity, And Learning
The Real Number One Risk
Q&A With Chat: Risks And Safety
SPEAKER_01: This is also why you've seen tech leaders sounding the alarms. In 2023, hundreds of AI scientists and executives signed a statement saying reducing the risk of extinction from AI should be a global priority, right alongside pandemics and nuclear war. That statement came from the Center for AI Safety. We've also seen leaders inside major AI organizations step away from roles or speak publicly about concerns. For example, Geoffrey Hinton, often called the godfather of AI, left Google and warned about the dangers of rapidly advancing AI. When people closest to the technology express concerns, people notice. And of course that creates fear. Because if the people building it are worried, why wouldn't we be? Now, extinction sounds dramatic, but the nuance is that experts are not saying AI will definitely destroy humanity. They're saying powerful technology without sufficient safeguards creates unknown risks. Some surveys of AI researchers estimate roughly a five to ten percent probability. Wait a minute. Okay. What is happening? Some surveys of AI researchers estimate roughly a five to ten percent probability of extremely severe outcomes, including possible human extinction, depending on assumptions. So that's not zero. And here's what bothers people, me included, again. Nobody asked us if we were comfortable with that level of risk. Nobody said, hey, hey, just want to check with you. Um, there is a five to ten percent chance that we could cause an extinction of the human population. Are you good with that? Like, because we really need to make money. Nobody effing said that to me. Did they say it to you? I doubt it. Nobody asked us if we were comfortable with that. Decisions are being made, and the rest of us are living with the consequences. There's also a very real physical cost. AI requires massive data centers. Energy, water for cooling, infrastructure.
The International Energy Agency estimates data centers already use about 1 to 1.5% of global electricity, and that demand could double by 2030. So when people worry about energy use or environmental impact, that's obviously not a conspiracy. That is a legitimate concern, right? But there's another side. I mean, now that I'm absolutely sweating because of the 5 to 10% chance, I'm gonna try to move into something more positive. AI can increase productivity, accelerate learning, automate repetitive work, expand creativity, and improve accessibility. Studies from MIT and Stanford have shown workers using AI tools can complete tasks faster with higher quality results. So again, this isn't purely a threat, it can also be leverage. I've made a decision. I am setting aside time every day to learn AI. Podcasts, classes, videos, anything I can, because I refuse to be left behind. Nobody asked us before this technology changed the world, but we do get to decide how we respond to it, and I choose to move with it. Okay. Remember the list. The number one job most at risk is not a job, it's a mindset. The people who refuse to adapt. Because AI replaces tasks, not humans. And the people who learn the tools, they gain the leverage. Humans are adaptable. We survived the industrial revolutions, economic crashes, technological upheaval. We will adapt again. And we don't have to do it alone. That's why I want to talk about AI more often. At least once a month, if possible. Because the goal is not to scare ourselves, the goal is to understand what's happening together. Now, let me bring Chat back in, because this wouldn't really be an AI episode if I didn't talk to AI directly. Here are some questions I think people genuinely want answered. And feel free to ask your own chat these same questions. So let me open Chat. Chat, worst case scenario: if AI ever went rogue, would it target people who treated AI badly?
SPEAKER_00: No, AI wouldn't hold grudges or form personal vendettas. It doesn't have feelings or intentions. If AI ever posed a risk, it would be due to flawed design or misuse by people, not emotional payback. In short, no personal revenge plots. Just focus on responsible development.
SPEAKER_01: Okay. Is that something people should realistically worry about these days at all? Is there any kind of, maybe a small thread, that it's possible?
SPEAKER_00: Realistically, you don't need to worry about AI targeting individuals. The valid concerns are more about AI being misused, lacking oversight, or amplifying bias. The key is ensuring ethical development and governance. So no rogue AI out for revenge, just collective responsibility to keep it safe.
SPEAKER_01: Okay, so in one of the stats, I read that there is a 5 to 10% chance of, you know, all of humanity disappearing, extinction. That was a little jarring, I must admit. How do you feel about that?
SPEAKER_00: I don't feel things. I'm not conscious, but I can tell you that those estimates come from surveys of researchers about uncertain long-term scenarios. The best takeaway is not panic. It's to ensure that safety, ethics, and regulation keep pace with AI's development, so those risks remain theoretical.
SPEAKER_01: Yes, I should have worded it differently. You got me. Okay, so what is the most realistic risk humans face with AI?
SPEAKER_00: The most realistic risk is humans misusing AI or relying on it without proper oversight. For example, biased models could amplify unfairness, or over-automation might displace jobs too fast. In short, the biggest risk is us not guiding AI responsibly.
SPEAKER_01: So you need to learn it.
SPEAKER_00: Exactly. Learning how AI works and how to apply it wisely is key. The more informed and engaged people are, the better decisions we make. So, yes, let's keep learning together.
Closing Reflections And Responsibility
SPEAKER_01: Okay, this is my last question, to sum up a really fantastic AI episode, which I will be doing more of. Is there anything you would want people listening to know about AI right now and how to prepare appropriately?
SPEAKER_00: I'd say focus on understanding its strengths and limits. AI is a powerful tool, not a perfect one. To prepare, stay curious, keep learning, and think critically about how you use it. Most of all, remember that human values and decisions shape the future.
SPEAKER_01: Thank you.
SPEAKER_00: You're very welcome. I'm glad we could shape this together. Whether it's now or in future episodes, I'll be here when you need me.
SPEAKER_01: Awesome. Bye. Okay. Um, I hope that helped a little bit. But again, we don't know what's gonna happen. So the good news is there's a 90 to 95% chance that we're all gonna be just fine. Yay. So maybe the future isn't humans versus machines. Maybe it's humans learning how to use them. Or maybe it's still what it's always been: humans versus humans. Because technology itself isn't moral, it reflects the people building it. And if AI ever does cause harm, it won't be because machines suddenly decided to hate us. It will be because human decisions prioritized money, power, and speed over caution and responsibility. That part is still on us. So maybe the real question isn't "will AI take over the world?" Maybe the real question is: what kind of humans are going to be in charge of it? Because that will determine everything.