You Can Now Listen to This Blog — In My Voice. Here's Why That Matters.
- jkvedar
- 2 days ago
- 4 min read
Updated: 2 days ago
This post was inspired by my friend and colleague Dr. Yosra Mekki, who did much of the work to refine my voice clone and integrate it into my website.

When you click through to my blog these days, you’ll notice a recent addition: an audio player at the top of recent posts. Click it, and you'll hear me (or rather, an AI voice clone of me) read the post aloud. Same words, same ideas, delivered in my voice as naturally as if I were sitting across from you.
I want to take a moment to explain why I think this is worth your attention and, more importantly, why it matters beyond mere convenience.
We know audio is no longer a niche medium. In 2024, about 47% of people aged 12 and older in the U.S. were monthly podcast listeners, 34% tuned in weekly, and podcasts accounted for 11% of daily audio time, figures that are up sharply from a decade ago. That growth isn't slowing. It's proof that listening isn't a niche hobby; it's mainstream media consumption.
But data aside, we know intuitively that people consume content in different ways, so adding another channel for consumption made sense.
And then there’s the accessibility argument.
According to the CDC, approximately 26% of U.S. adults have some form of disability, and a significant share have vision impairment or cognitive conditions that make sustained reading difficult or impossible. Globally, the World Health Organization estimates that 2.2 billion people live with some degree of vision impairment. Screen readers help, but they deliver content in a robotic, context-free monotone. That's not the same as hearing someone explain their thinking, adding emphasis where it's needed and shifting cadence or tone to highlight a thought.
Given these two intersecting trends, I have been seeking a way to add an auditory component.
Why a Voice Clone, and Not Just Text-to-Speech?
There's a meaningful difference between generic text-to-speech and a voice clone trained on a real person's speech patterns, cadence, and intonation. The former is functional yet can feel alienating; the latter conveys authentic human presence.
I've been thinking about this through the lens of what makes healthcare communication effective. We know from decades of patient communication research that tone, warmth, and vocal cues shape not only whether information is understood but also whether it's trusted. A voice that sounds like a real person reading with genuine engagement changes the listener's experience in ways a synthesized voice simply can't.
ElevenLabs, the AI voice platform powering this feature on my site, has shown that voice cloning can preserve those subtleties with remarkable fidelity. When the audio player says, "Listen to this blog post in Dr. Kvedar's voice," it's not just a parlor trick. It's an effort to maintain the human connection that makes this kind of writing worthwhile. It's early days, and I admit the clone isn't exactly my voice. Those who know me well can tell the difference, but the casual reader or listener may not. But we’ve been refining it for a year, and the progress in the core technology has been remarkable.
The Mini-Podcast Model
Think of it as a library of on-demand mini-podcasts. The content remains authoritative and personal; distribution simply expands. One thing to note: you must go to the website and find the post to listen to the audio. More on that below.
What I'm Genuinely Curious About
I would appreciate hearing from you about a few things because your experience with this feature will shape how I use it going forward.
Does listening to a post in the author's voice change how you engage with the content? Does it feel more personal, more authoritative, or neither? Do you envision using the audio version in specific contexts (commuting, exercising) where you wouldn't read? If you have a visual impairment or a reading-related disability, does this feature meaningfully improve your access to the content here?
Would you find value in these auditory readings (5-12 minutes each) being published as standalone podcast episodes? That would spare you the trip to the website, which readers must make, to access the auditory content.
Accessibility in digital health can't just be a value we advocate for in our clinical tools while neglecting it in our professional communication. This feels like a small but honest effort to close that gap.
I'm still learning what works here, and I suspect the technology will improve rapidly over time (it already has). That said, I'm prone to "shiny new object" syndrome when it comes to new tech, and I confess I found voice cloning particularly interesting. You can help me decide whether this is of real value or just a vanity project.
So, please offer some feedback on how useful this is or might be. Comment here or email me at jkvedar@mgb.org.
Hit play and let me know what you think!