How AI Is Transforming a Lawmaker’s Life After a Terrible Diagnosis
Rep. Jennifer Wexton stepped to the lectern on the House floor in late July and addressed her colleagues as she had done countless times since being elected to Congress five years ago. Except this time, for the first time, her voice was generated entirely by artificial intelligence.
The Virginia Democrat was diagnosed last year with progressive supranuclear palsy (PSP), a rare, incurable and ultimately fatal brain disease. Following the diagnosis, she announced that she would not seek reelection after her current term ends this year. The disease has affected her ability to walk and made her natural speaking voice weaker and less clear. But with the help of a company called ElevenLabs, Wexton used old recordings to recreate it.
That moment on the House floor thrust assistive technologies into the national spotlight and served as a hopeful counterpoint to all of the doom often associated with AI — after all, the same technology generated a deepfake of President Joe Biden’s voice back in January. Wexton and her colleagues are now navigating the tension between AI’s harms and benefits as Congress weighs whether to regulate the technology.
In our interview for the POLITICO Tech podcast, Wexton used her AI-generated voice to discuss how that debate now holds much more personal meaning and why she’s become such an advocate for assistive technology in her final term in office. As she put it, “This disease has to be good for something.”
The following has been edited for length and clarity. You can listen to the full interview with Wexton here:
Back in July, you used AI to speak on the House floor for the first time. Perhaps for the first time in history. Tell me how you got to that moment.
Because of PSP’s impact on the volume and clarity of my voice, what would normally be everyday aspects of serving in Congress, like speaking on the floor, questioning witnesses in committee and giving interviews like this, were becoming impossible. I even had to turn down opportunities to speak publicly for a while, and that was really frustrating.
After some time working with a robotic text-to-speech app and struggling to get the pronunciation, cadence and tone to sound more like me, I took ElevenLabs up on an offer to create an AI model of my voice. My team sent them over an hour of old audio clips of me, mostly delivering floor speeches or other public remarks, and the AI model was ready in just a couple of days. Having a new “old” AI voice of myself has been remarkable.
My team and I developed the AI voice model at the beginning of July, and I received it the day before I was scheduled to be at the White House for President Biden’s signing of the National Plan to End Parkinson’s Act into law in an intimate gathering in the Oval Office.
This new law is something I championed in Congress after my diagnosis. Working with leaders on both sides of the aisle in the House and Senate, I shared my personal story: the struggle to get my diagnosis, the search for treatments that helped manage my symptoms, and what it would mean to have the greater resources this bill could deliver to step up our fight against Parkinson’s and related diseases like my PSP. I’m proud that after much behind-the-scenes work, it passed both chambers with overwhelming bipartisan support.
Being at the White House to see this monumental legislation signed into law was a truly special moment for me and my family, and I wanted to make it even more special by debuting my new AI voice for the very first time in front of the president. My friend and colleague, and one of the co-leads of the bill, Congressman Gus Bilirakis, had never heard my pre-PSP voice. I was worried that my mom would start crying, but she kept it together. I was able to share with President Biden and my family just how much it meant to see my advocacy make a difference. Not in some robotic voice, but my own.
Hearing your voice for the first time generated by AI, what was that feeling like?
My husband was with me when I first heard a sample of my AI voice reciting Hamlet’s soliloquy. “To be or not to be, that is the question.” So we both heard it for the first time at the same time. I cried happy tears. It wasn’t just because it sounded like me, which it does, but it was also because it sounded so much more natural than the text-to-speech app I had been using. My AI voice stopped and took a breath in between sentences. It was pretty awesome. My husband got a big smile on his face. I hadn’t seen him so broadly and genuinely smiling in too long. And I received many, many texts from colleagues and friends telling me how much they had missed hearing my voice.
My AI voice will never be me, but it’s more me than I ever believed I would hear again. And it’s empowered me to keep doing this job I love to the fullest, and it has even helped in my personal life as well. It’s been important to me, especially because I found myself with a unique platform that I want to use to advocate for people facing health and ability challenges similar to mine, and my AI voice has made that possible. I’ve been able to share my story, my challenges and how I’m fighting in Congress.
The model we’ve created with ElevenLabs is good to use in official speeches and events like that. I’m able to adjust the qualities of the model. My team and I can work together on speeches in the same manner we always have, and then use the AI model through a normal internet browser interface to create the audio, which takes only a matter of seconds.

The next step I hope to take with this AI voice model is to build different options for different speaking styles. So, for example, this current model can sometimes feel a bit too formal and not as conversational. Everything can sound like some big proclamation. So I don’t use it, for example, to ask my husband to please pass me the ketchup. But because I’ve been in public service for over two decades, there are many, many old audio clips of me in different settings, including TV interviews or campaign rallies. I’d like to try building a model from some of those clips, so that I can be more dynamic and can adapt how I employ my AI voice for different occasions.
I know with any assistive technology there are flaws and there are challenges, no matter how great the technology is. I was wondering what that experience has been like for you, navigating some of the imperfections of the technology.
The biggest challenge is that I don’t type as fast as I used to, as you’ve observed. I also need a strong signal in order for it to work. Finally, my version of it keeps changing my default voice to some man named Adam, who sounds great but not like me.
I’m curious how using AI has changed your perspective on the technology itself. This is something Congress is talking about regulating. You obviously have a very personal experience now working with it.
The remarkable opportunity to hear my voice again, even an AI recreation of it, and the ways it has empowered me and my working life has given me new perspectives on AI. What this kind of technology can do for people facing health challenges and other disabilities is nothing short of life changing. That sentiment has been reflected in many of the messages I’ve received since debuting my AI voice. A common theme from those messages was an admission that, yes, AI does at least have some positive applications, and I agree. I think one of the challenges facing us is finding out how to make the most of those advantages, while protecting against the dangers.
We’ve seen the potential for abuse, particularly with deepfakes and cloning voices. And it’s only becoming more dangerous as the technology improves. A few years ago, I questioned Facebook CEO Mark Zuckerberg about his platform’s deepfake policy after a manipulated video of Speaker Pelosi went viral that made her appear to slur her words and seem incoherent. He wasn’t able to give me a clear answer then, and to this day, I believe many social media platforms don’t have adequate guardrails in place to sort fact from falsehood.
We in Congress, not exactly known for our fast pace, are certainly no better at keeping up with the quickly developing AI frontier. There’s for sure more work to be done to ensure this tool is used responsibly, and that we invest in the benefits it can provide. For our part, since my team and I developed my voice, we’ve limited who can access the voice model. Only my chief of staff, my communications director and I can use it. We recognize the power of a tool like this, and the reality that abusing it, using my voice to say something without my consent, could cause real problems. Overall, my feelings can be summed up by how I jokingly replied to some friends who texted me about my AI voice: AI isn’t entirely evil, just mostly.
I don’t think many people realize that a lot of insurance plans don’t actually cover some of these medical technologies. If you weren’t a member of Congress, do you think you’d have access to this technology? And what can be done to get it into the hands of more people?
I didn’t really think much about assistive tech until I was the one who needed it. But I recognize that using technology like this in such a significant setting, and with such a spotlight on me, means a lot to the many Americans of differing abilities whose words are often not given the respect they deserve because they may not be expressed in the same way.
I hope that using this assistive tech in Congress can help normalize it. If someone can use assistive tech on the floor of Congress of all places, why not in their own daily lives, like at school or a coffee shop? I also hope that it helps make the technology more accessible. I recognize that I am in a unique position of privilege by having access to years of audio clips with which to build an AI model of my voice. Most people facing a health challenge like mine might not know that building an AI voice is even possible. So shining a light on that possibility could help others prepare, for example by making recordings of their voice now. It’s something I’ve started encouraging other PSP and Parkinson’s patients to do.
Just recently, the company that helped create my AI voice model, ElevenLabs, announced that they were partnering with a couple of organizations that support Americans with ALS to help people battling that disease make use of their AI voice platform free of charge. I think that’s a great thing, and I hope I can help even more people find accessible ways to take advantage of these remarkable advances in technology to overcome the challenges of their health struggles.
I don’t think people often talk about the experience of developing any kind of disability as an adult. It’s much more common than people recognize. I was curious how this experience for you changes the way you want to govern and the impact that you hope to have here on the Hill.
Having a progressive disease sucks. There’s nothing easy about it. Everything that I used to do so easily is now hard. Even making myself breakfast and eating it this morning was a challenge. I broke a bone in my foot in 2001. I spent three days on crutches, and I used to joke that they were the worst three days of my life. A sidewalk curb presented an insurmountable challenge. Now that’s my future. I’m pretty sure I’ll be in a wheelchair before my term ends. As a point of reference, in 2016, I ran the Marine Corps Marathon.
If it happened to me, it can happen to anyone. But going through everything that I am now makes me even more determined to use my platform to help those who come after me — not those who will follow me in Congress, but those who are diagnosed with PSP, MSA, ALS or any of the other neurodegenerative diseases that hundreds of Americans are diagnosed with every day. It’s why I fought so hard for the passage of the National Plan to End Parkinson’s. I’m not afraid to play that “I’m dying and this is a priority” card, because this disease has to be good for something.