Dec. 4, 2025

How Generative AI Can Speed Research, Elevate Care, And Keep Humans At The Center

Curious how AI can make healthcare feel more human instead of less? We sit down with medical writer and AI adoption strategist Dr. Núria Negrão, who went from bench science to building practical ways for clinicians, researchers, and communicators to use generative tools without losing accuracy or empathy. From HIV education roots to today’s most promising AI workflows, we trace what’s working now and where the next breakthroughs may land.

We unpack the real bottlenecks: clinicians stuck typing and scientists drowning in papers. Dr. Negrão shows how ambient scribe tools can free clinicians up for face-to-face time with patients, while research copilots can scan literature, connect ideas, and surface the studies that matter. We talk medical education use cases—virtual patients for difficult conversations, culturally sensitive practice, and adaptive learning that meets people where they are. Along the way, we tackle the hard parts: AI hallucinations, bias reinforcement, privacy risks, and the myth that AI is either flawless or useless. The answer is supervision, sourcing, and clear guardrails.

Regulation-by-principle anchors our approach: no emotion surveillance, no automated life-and-death allocation, strong data protections, and human override in care. Then we look at the upside for patients. Imagine leaving an appointment with a plain-language summary of what the doctor said, clear next steps, and links to trusted support groups—plus a secure assistant to answer follow-ups when anxiety spikes at midnight. That’s not replacing clinicians; that’s better navigation of the health system. If you want a grounded, hopeful take on AI in healthcare, science communication, and medical writing—one that boosts health literacy and speeds discovery—this conversation is for you.

If this sparked ideas, subscribe, share it with a friend, and leave a review. Tell us what you want to hear next so we can keep building tools and stories that serve real people.

Thanks for listening to the Infectious Science Podcast. Be sure to visit infectiousscience.org to join the conversation, access the show notes, and don’t forget to sign up for our newsletter to receive our free materials.

We hope you enjoyed this new episode of Infectious Science, and if you did, please leave us a review on Apple Podcasts and Spotify. Please share this episode with others who may be interested in this topic!

Also, please don’t hesitate to ask questions or tell us which topics you want us to cover in future episodes. To get in touch, drop us a line in the comment section or send us a message on social media.
Twitter @Infectious_Sci
Instagram @Infectscipod
Instagram @tick_virus
Facebook Infectious Science Podcast

See you next time for a new episode!


00:00 - Welcome And One Health Context

00:37 - Meet Núria: Scientist To AI Strategist

03:20 - Early SciComm And HIV Education Roots

05:20 - Where AI Stands In SciComm Today

08:18 - Medical Education, Virtual Patients, EQ In Models

12:15 - Practical Gains: Triage, Scribes, Efficiency

15:20 - AI For Research, Search, And Synthesis

19:10 - Risks: Hallucinations, Bias, Privacy, Overreliance

23:02 - Regulation By First Principles And Boundaries

26:00 - Health Literacy, Plain Language, Patient Support

29:10 - Getting AI-Literate: Try, Verify, Curate Sources

32:45 - Closing Thoughts And Listener Invitations

WEBVTT

00:00:09.759 --> 00:00:11.839
This is a podcast about One Health.

00:00:11.919 --> 00:00:18.239
The idea that the health of humans, animals, plants, and the environment that we all share are intrinsically linked.

00:00:18.480 --> 00:00:23.920
Coming to you from a team of scientists, physicians, and veterinarians, this is Infectious Science.

00:00:24.079 --> 00:00:27.120
Where enthusiasm for science is contagious.

00:00:29.359 --> 00:00:32.799
All right, welcome back to this episode of Infectious Science.

00:00:32.960 --> 00:00:34.799
Thanks everyone for tuning in.

00:00:35.119 --> 00:00:37.439
I am really excited today to be here.

00:00:37.520 --> 00:00:42.240
I'm one of your co-hosts, Camille, and we are really fortunate to be joined by Dr.

00:00:42.320 --> 00:00:51.280
Núria Negrão, who's going to talk to us about AI and how it's changing conversations around health and science and medicine and the ways that we can communicate those.

00:00:51.439 --> 00:00:51.600
Dr.

00:00:51.679 --> 00:00:54.240
Negrão, could you introduce yourself for our listeners?

00:00:54.640 --> 00:00:55.520
Hi, Camille.

00:00:55.600 --> 00:00:58.399
Thank you so much for inviting me to be part of the podcast.

00:00:58.560 --> 00:01:02.719
I really like listening to you guys, and I'm very honored to be here.

00:01:02.799 --> 00:01:03.359
So thank you.

00:01:03.520 --> 00:01:04.480
Yes, hi everyone.

00:01:04.560 --> 00:01:05.840
My name is Núria.

00:01:06.000 --> 00:01:11.040
I am a medical writer at the moment, and I am an AI adoption strategist.

00:01:11.280 --> 00:01:23.200
That's what I'm calling myself, basically, because I've been doing a lot of work in AI education and helping people to figure out how they can use AI in their professional lives as scientists and as medical writers.

00:01:23.439 --> 00:01:25.439
My background is as a scientist.

00:01:25.680 --> 00:01:29.439
I was a bench scientist in academia and in industry for quite a bit.

00:01:29.680 --> 00:01:36.799
I have a PhD in cellular biology, and then I also worked in biotech on developing diagnostic tools.

00:01:36.959 --> 00:01:43.760
And at the same time, throughout my career, if anything, I actually started on the science communication side of it.

00:01:43.840 --> 00:01:46.480
I say I've been doing science communication since high school.

00:01:46.719 --> 00:01:54.400
The first project I did was in the 90s in high school was all about teaching all of my peers about HIV and AIDS.

00:01:54.560 --> 00:01:58.000
I did a project, and my school was like, this project is awesome, and you need to go teach everyone.

00:01:58.079 --> 00:02:04.719
So I taught everyone in my school, and then I went to other schools teaching everyone about HIV and AIDS.

00:02:05.200 --> 00:02:09.599
So that was the beginning of my SciComm career.

00:02:09.680 --> 00:02:15.599
And in college, when I was teaching, I was also teaching a lot of writing in the sciences.

00:02:15.759 --> 00:02:19.520
So I got training on how to teach writing in the sciences.

00:02:19.599 --> 00:02:21.520
So I think I've been doing this for quite a bit.

00:02:21.759 --> 00:02:27.759
So I also see this whole AI education thing that I'm doing now as like a natural progression from that.

00:02:27.919 --> 00:02:42.560
Because I really see talking about AI and teaching people how to use AI as science communication, because artificial intelligence is a science, and I feel like learning about the science itself helps us use the tools better.

00:02:42.719 --> 00:02:42.960
Yes.

00:02:43.360 --> 00:02:44.639
Yeah, no, absolutely.

00:02:45.199 --> 00:02:46.479
What a wonderful overview.

00:02:46.560 --> 00:02:46.960
Thank you.

00:02:47.199 --> 00:02:47.759
That's really cool.

00:02:47.919 --> 00:02:53.919
So I just, you know, finished my dissertation, and I was studying the effects of HIV and cocaine in the brain.

00:02:54.080 --> 00:03:00.000
So very cool that you also did work on basically the public-facing education aspect of that.

00:03:00.080 --> 00:03:10.800
And what's really cool is I think medical literacy has been shown to like increase more when you're not in like a hospital setting, like in schools or in local community centers or churches or barbershops and things like that.

00:03:11.120 --> 00:03:15.919
People take in the information and really absorb it better than just getting it from something like a hospital.

00:03:16.000 --> 00:03:17.039
So that's really neat, very cool.

00:03:17.120 --> 00:03:19.520
Always fun to meet somebody who did some cool SciComm work.

00:03:20.000 --> 00:03:27.360
So speaking of which, could you tell us a bit about how do you think AI is currently changing science communication and medical writing?

00:03:27.439 --> 00:03:30.479
And then where do you see that going in the next couple of years?

00:03:30.960 --> 00:03:38.479
Okay, so I think right now we're at the very beginnings of it, and it's when we're talking about professional communicators, right?

00:03:38.639 --> 00:03:53.919
If we're talking about people that their job is to do science communication or scientists themselves, or writers that work in writing about science or writing about medicine, I think you have some people that are very hesitant to use AI, mostly because one, it's a new tool.

00:03:54.080 --> 00:04:04.479
And if you're not from the computer science side of the world, anything that is computer science is a little bit unknown, and then people may be a little bit afraid of it.

00:04:04.560 --> 00:04:06.400
Just like I'm just thinking about myself.

00:04:06.479 --> 00:04:11.039
Like when people used to say, Oh, you need to learn how to code, I'd be like, no, I do not.

00:04:11.280 --> 00:04:12.879
I do a lot of work already.

00:04:13.039 --> 00:04:15.199
I do not need to learn a new language.

00:04:15.360 --> 00:04:16.319
Thank you very much.

00:04:16.480 --> 00:04:34.399
So I understand the fear and hesitancy there, but it is also because these new generative AI tools write what sounds like really convincing text, yet even if you just ask a very simple question, they can come up with wrong facts.

00:04:34.639 --> 00:04:43.040
And because when you are communicating science or you're communicating medicine, there is like such an emphasis on being accurate.

00:04:43.279 --> 00:04:45.120
It's so necessary, so important.

00:04:45.360 --> 00:04:50.399
People are like, oh, if I use that tool, it's gonna give me the wrong facts and we're gonna get into trouble.

00:04:50.560 --> 00:04:52.639
So I think a lot of people are afraid of it.

00:04:52.879 --> 00:05:00.240
On the other hand, on the public using it, I think you have people that have not used AI at all in the population.

00:05:00.399 --> 00:05:04.720
But those that are students, we know that students are using AI a lot.

00:05:04.959 --> 00:05:09.279
So I am guessing that the same way that all patients go to Dr.

00:05:09.439 --> 00:05:15.360
Google to find about their diseases, I am sure that there are lots of patients going to Dr.

00:05:15.519 --> 00:05:18.560
ChatGPT at the moment to find out about their diseases.

00:05:18.720 --> 00:05:20.480
So I think the state of it is like that.

00:05:20.560 --> 00:05:30.879
And then there are a few people everywhere in all industries that are really trying to figure out how they're going to use these tools in a way that is safe, effective, and responsible, right?

00:05:31.040 --> 00:05:38.639
For example, I know people are using it to create education materials for medical students, to create practice questions for tests.

00:05:38.879 --> 00:05:42.560
There are lots of people trying to do more personalized learning.

00:05:42.720 --> 00:05:53.199
So the bot will assess how you're doing in your learning and will change the difficulty of what it teaches you or asks you next because it can really be personalized.

00:05:53.360 --> 00:05:57.839
So, more in medical education, I know that people are starting to get into that.

00:05:58.079 --> 00:06:06.560
I know that some companies are creating virtual patients so that doctors can train on these virtual patients, train their communication skills.

00:06:06.800 --> 00:06:09.040
It is really hard to tell someone bad news.

00:06:09.279 --> 00:06:14.959
So, like you can practice that, and you can practice culturally aware communication as well.

00:06:15.120 --> 00:06:22.560
So maybe you feel like you're not too sure how to talk about a specific topic with a person that is from a different culture than yours.

00:06:22.720 --> 00:06:25.040
So they use these virtual patients for that.

00:06:25.199 --> 00:06:28.639
So I think really interesting things are starting to be done.

00:06:28.959 --> 00:06:29.600
Yeah, yeah.

00:06:29.759 --> 00:06:39.439
And I think you really hit on something that AI does really well, which is if you have something written up or you know what you want to say, it can help you find the best way to state it, I have found.

00:06:39.519 --> 00:06:41.040
And it's a really useful tool for that.

00:06:41.279 --> 00:06:47.680
Here's a concept that I'm trying to explain, but am I hitting the audience that I want to or intend to, and making sure that information is clear to them?

00:06:47.839 --> 00:06:48.959
So that's really cool.

00:06:49.120 --> 00:07:01.199
And I've also heard from a conversational standpoint that AI is potentially being trained to work almost as like a telehealth chat feature for like therapy, which I think is really wild and also could dramatically increase people's accessibility to things, right?

00:07:01.279 --> 00:07:04.879
Like I grew up in a rural area and healthcare in rural areas is very hard to access.

00:07:05.120 --> 00:07:07.120
But of course, there's always the human side of things, right?

00:07:07.199 --> 00:07:12.879
And so I don't think, and I think you kind of hit on this, it's not replacing people, but it is changing how we do things.

00:07:13.199 --> 00:07:13.759
Yeah, yeah.

00:07:14.000 --> 00:07:22.480
And what you're talking about, like the chatbots, they're like companion bots, or the ones that will become, say, therapy-type bots or coach-type bots.

00:07:22.639 --> 00:07:31.279
One of the really popular apps on mobile phones is character AI, which is people basically talking to these companion bots, right?

00:07:31.360 --> 00:07:34.639
And you can talk to different personalities on character AI.

00:07:34.800 --> 00:07:45.839
And you have a lot of young people in particular spending a lot of their AI time talking to these AI best friends, AI romantic partners, but also coaches and therapists and things like that.

00:07:46.079 --> 00:08:08.800
So, one thing that is interesting about this is ChatGPT's new model, the 4.5 model: in tests of emotional intelligence, it seems to have higher emotional intelligence than all of the other models, which is basically what you want if you want to have more of these bots talking to people in more of a companion or therapist type of role.

00:08:08.959 --> 00:08:10.639
So that is really interesting.

00:08:10.879 --> 00:08:14.959
Another company that was really good at doing this, I think they were called Inflection AI.

00:08:15.040 --> 00:08:17.680
They are the ones that most of the company went to Microsoft.

00:08:18.160 --> 00:08:26.720
The CEO of that was Mustafa Suleyman, who is now the CEO of AI at Microsoft, and he was one of the original founders of Google DeepMind.

00:08:26.800 --> 00:08:39.039
So he has a long history of working in AI, and they were really interested in this emotional intelligence part of the AI; where a lot of the other companies focus more on the IQ, they focused more on the EQ.

00:08:39.120 --> 00:08:41.039
So that is it's really interesting.

00:08:41.120 --> 00:08:41.519
Yeah.

00:08:41.840 --> 00:08:42.399
Yeah, yeah.

00:08:42.480 --> 00:08:44.799
And I could see how that could also be useful for us.

00:08:44.960 --> 00:08:52.159
Are we having culturally sensitive conversations when we're breaking medical news or like how do we bring this to people in a way that is approachable and understandable?

00:08:52.240 --> 00:08:54.639
And that's something we think about a lot in science and in medicine.

00:08:55.120 --> 00:09:00.080
So, what do you see as like the major benefits of AI right now in SciCom and in medical writing?

00:09:00.240 --> 00:09:02.320
How is it on the ground helping us?

00:09:03.039 --> 00:09:11.679
So, the way I think about it is: AI, even if right now it is not very good, is improving so fast.

00:09:11.840 --> 00:09:15.440
And I see the potential of it, and I get really excited.

00:09:15.519 --> 00:09:26.720
So, I get excited about the things that we cannot do now, that we struggle to do now, and that automating some of our processes would give us the time to do.

00:09:26.879 --> 00:09:34.080
So, for example, even in the US, where there are so many doctors, there are still not enough doctors, right?

00:09:34.320 --> 00:09:39.279
And so, how can we think about how AI can help us with this?

00:09:39.519 --> 00:09:54.080
People want to see human doctors, but maybe triage can be done with some AI chatbots, or maybe they can be with the human doctor for the really important parts, but maybe the AI could do some of the note-taking.

00:09:54.159 --> 00:09:59.360
Like doctors spend a lot of time just typing on their computer.

00:09:59.519 --> 00:10:04.720
And if you go to a doctor's appointment, like half of the time they're typing on their computer, they're not even looking at you.

00:10:04.879 --> 00:10:14.399
So imagine you could get an AI voice recorder, right, that is just transcribing your conversation with the patient and filling out the form so you can now focus on the patient.

00:10:14.639 --> 00:10:21.200
So, in a half an hour appointment, you have right now like 10 minutes of actual contact with patients.

00:10:21.519 --> 00:10:30.879
That means that maybe you could increase the actual contact time to 20 minutes, but you would still have 10 minutes there gained that you could see another patient.

00:10:31.039 --> 00:10:38.159
So, this is how you could both improve the quality of contact between doctors and patients and see more patients.

00:10:38.240 --> 00:10:39.519
And that's just a very simple thing.

00:10:39.600 --> 00:10:48.399
And that is something that could be possible today if we could get around the technology issues of does it understand everything that we're saying?

00:10:48.559 --> 00:10:53.519
Does it understand medical terms and the privacy issues and all of that?

00:10:53.759 --> 00:10:58.080
If we could make these bots so good, I think we are very close to being able to do that.

00:10:58.320 --> 00:11:05.360
In terms of science communication and science writing, there was the study that was published in December of 2024.

00:11:05.519 --> 00:11:09.360
It was a really good study by academics at different universities.

00:11:09.519 --> 00:11:15.840
The lead author was from Stanford, and they looked at 4,000 people and the impact of AI at work.

00:11:16.000 --> 00:11:21.919
So this is the most comprehensive survey of how AI has been used in the workspace, right?

00:11:22.080 --> 00:11:28.240
And what they saw was that about 30% of people are using AI at work to do work tasks.

00:11:28.399 --> 00:11:31.279
So 30% is not that huge of a number.

00:11:31.440 --> 00:11:39.600
And when they looked at the industries that have the highest adoption, it was like marketing, IT, and customer service, right?

00:11:39.679 --> 00:11:43.600
So it wasn't healthcare or science communication or anything like that.

00:11:43.759 --> 00:11:48.159
So what I'm trying to say is I think the adoption is still very low for people.

00:11:48.399 --> 00:11:56.720
What I think there is a lot of potential in, and this is what I tell people: if AI only did this, I would already be super happy.

00:11:56.960 --> 00:11:59.600
Is in a lot of what we do is research.

00:11:59.840 --> 00:12:02.720
And in research, AI can be really transformative.

00:12:02.879 --> 00:12:06.080
Because if you think about it, we can only read so fast.

00:12:06.399 --> 00:12:16.399
So if I am reading about a disease, I usually give this example because I work a lot with non-small cell lung cancer, but also because non-small cell lung cancer is a huge field.

00:12:16.480 --> 00:12:18.480
There's like papers coming out every day.

00:12:18.639 --> 00:12:26.639
I think if you go to PubMed right now and you look just in 2025, more than a hundred papers have been published already on non-small cell lung cancer.

00:12:26.720 --> 00:12:31.600
So it is impossible to read everything about non-small cell lung cancer for a human.

00:12:31.919 --> 00:12:33.519
AI can read everything.

00:12:33.840 --> 00:12:35.840
Okay, and give you a generative summary.

00:12:36.159 --> 00:12:36.639
Exactly.

00:12:36.960 --> 00:12:39.039
It can read everything fast.

00:12:39.679 --> 00:12:47.919
So that is something that the bots can do now that is better than us because we just don't have the capacity to read that fast.

00:12:48.080 --> 00:12:52.159
The trick is, okay, so how can we take advantage of that, right?

00:12:52.320 --> 00:12:56.320
And that is what we need to think of is okay, so I don't need to read everything.

00:12:56.480 --> 00:13:06.000
I need it to read everything, to find the most relevant sources and give me the most relevant sources so that I can read the really key sources, right?

00:13:06.240 --> 00:13:16.080
So I think that is the biggest unlock whenever I'm telling people about different ways of using AI, it is always in research and search that I think is the biggest unlock at the moment.

00:13:16.240 --> 00:13:19.279
And then the other thing is connecting different ideas.

00:13:19.440 --> 00:13:23.759
So because it can read everything and it remembers everything, because that's the other thing.

00:13:23.919 --> 00:13:25.679
We don't remember everything we read.

00:13:25.840 --> 00:13:27.200
It is just the way it is, right?

00:13:27.279 --> 00:13:28.559
But it remembers everything.

00:13:28.720 --> 00:13:38.720
So if you gave it everything to read about non-small cell lung cancer and you're talking about something else and you ask it a question, it remembers, oh yeah, there was that one paper that I read that said something about this.

00:13:38.960 --> 00:13:43.600
So it is really good at making those connections and going to get those things.

00:13:43.759 --> 00:13:54.159
And then if you use tools to read all of these papers at the same time, and right now, really, the star of the show is NotebookLM, you can also speed up the reading part of your research, right?

00:13:54.240 --> 00:13:59.279
So your understanding and asking questions and making connections between different papers.

00:13:59.519 --> 00:14:02.720
I talk about the efficiency gain, so how much faster it is.

00:14:02.879 --> 00:14:04.879
What I want to emphasize is that it is not just about the faster.

00:14:05.279 --> 00:14:10.720
What the faster does is allow you to also go deeper and to do better work.

00:14:10.960 --> 00:14:12.879
It is not just about the faster.

00:14:13.039 --> 00:14:13.840
It is more than that.

00:14:13.919 --> 00:14:14.080
Yes.

00:14:14.320 --> 00:14:18.000
Yeah, I think I could see how that could help us ask better research questions, right?

00:14:18.159 --> 00:14:26.559
Because so many times, if you're seeing scientific questions being posed for new grant funding or whatever that is, it is definitely limited by what the people working on it can read and get through.

00:14:26.639 --> 00:14:28.639
And the dissertation work I did was on HIV.

00:14:28.720 --> 00:14:30.159
There are so many HIV papers.

00:14:30.240 --> 00:14:32.159
I have another friend who works on coronaviruses.

00:14:32.559 --> 00:14:34.879
There are so many papers on COVID.

00:14:34.960 --> 00:14:40.320
And it is like such a quagmire to wade through to find exactly what you're looking for and then to make sure you don't miss things.

00:14:40.399 --> 00:14:41.200
And we're only humans.

00:14:41.279 --> 00:14:42.639
So I could see it being used for that.

00:14:42.720 --> 00:14:51.440
But also there's so many clinical care summaries that we can see published as like here's a case study of this is what happened, this is what was missed, this is how we eventually figured out what was happening.

00:14:51.519 --> 00:14:57.840
And I could see AI being used to say, okay, this is happening a lot, this might be a gap in knowledge and how we're educating physicians.

00:14:58.000 --> 00:15:09.600
If we're seeing an increase in this geographical area of this particular disease, particularly I can think of that for like vector-borne diseases or something, if you're seeing an increase, I think that could be really interesting to track and map using something like AI.

00:15:09.759 --> 00:15:15.120
Um so I think that there's a lot of potential for it to really improve our health, but also our science.

00:15:15.279 --> 00:15:16.639
So I think that's really cool.

00:15:16.799 --> 00:15:16.960
Yeah.

00:15:17.120 --> 00:15:22.000
But with all of that being said, what are the potential downsides of AI right now, right?

00:15:22.159 --> 00:15:23.200
It's constantly changing.

00:15:23.279 --> 00:15:25.120
And I think there's a lot of fear around AI.

00:15:25.200 --> 00:15:28.879
And I think there's a lot of fear around anything that's new and not super known or regulated.

00:15:29.120 --> 00:15:32.720
But could you talk about that and how founded are they really?

00:15:33.279 --> 00:15:33.759
Yes.

00:15:33.919 --> 00:15:34.240
Yeah.

00:15:34.320 --> 00:15:36.720
So there's different levels of fears, right?

00:15:36.879 --> 00:15:39.919
So you have the same problem that you had with Dr.

00:15:40.000 --> 00:15:41.200
Google, you have with Dr.

00:15:41.519 --> 00:15:42.480
ChatGPT, right?

00:15:42.639 --> 00:15:47.360
The patient goes and like searches for their symptoms and they can get convinced that they have something.

00:15:47.840 --> 00:15:55.919
These chatbots are trained to please you as the user; because they are chatbots, they will pick up on the hints that you drop.

00:15:56.159 --> 00:16:10.399
So if you have someone who is anti-vaccine, I think this is a very clear example where probably all of the chatbots are being trained not to re-emphasize anti-vaccine points of view, except for maybe Grok.

00:16:10.559 --> 00:16:18.480
But you can see how if it gets the feeling that the user has a certain point of view, that it would just reinforce it.

00:16:18.559 --> 00:16:24.480
And then you get into the fallacy that we all have as humans, which is like reinforcement bias, right?

00:16:24.720 --> 00:16:28.000
So we just look for things that confirm our views.

00:16:28.240 --> 00:16:34.480
And because it sounds so authoritative, it sounds so good, we say that it is intelligent, we say that it knows everything.

00:16:34.720 --> 00:16:37.600
I just told you that it knows everything and it never forgets, right?

00:16:37.679 --> 00:16:40.720
It is so good, it knows everything, it reads everything and never forgets.

00:16:40.799 --> 00:16:45.360
It's so much better than humans that you can get this false sense of confidence.

00:16:46.080 --> 00:16:47.600
And uh, and that can give you a problem.

00:16:47.840 --> 00:16:49.759
So that's one level of issues.

00:16:49.840 --> 00:16:52.720
And that is if a patient goes, but imagine also a doctor.

00:16:52.879 --> 00:16:56.960
If it is something that you're seeing every day and you know about it all the time, is one thing.

00:16:57.120 --> 00:17:04.960
But if you see something strange, you do go to your books, right, or to the sources that you trust, then you find what does this match?

00:17:05.119 --> 00:17:07.680
Because you're not an encyclopedia, you're human.

00:17:07.839 --> 00:17:17.680
So now, as these chatbots become more efficient, there is the fallacy that they know everything, and they're so authoritative that they can be saying something wrong, because they do hallucinate, right?

00:17:17.759 --> 00:17:19.759
And when they hallucinate, they make mistakes.

00:17:19.920 --> 00:17:26.240
They say something that is not accurate or not factual, but they say it very convincingly.

00:17:26.400 --> 00:17:35.200
And whenever something like that is convincingly saying something, it is really hard for you to catch it, and especially when most of the time it's accurate.

00:17:35.359 --> 00:17:42.880
So one thing that I've heard someone say, so this is not my original thought, is that it would be better if it was wrong, like 25% of the time.

00:17:42.960 --> 00:17:50.720
But because it is only wrong like 5% or 2% or 1% of the time, it is more dangerous because you're not on alert.

00:17:50.960 --> 00:17:54.160
You turn off your critical brain because you're so used to it being right.

00:17:54.319 --> 00:17:59.200
So the fact that it is getting better and better at being right all of the time is actually a fear.

00:17:59.359 --> 00:18:00.000
So that's one.

00:18:00.160 --> 00:18:05.839
Another fear that a lot of people have is that we're gonna get chatbots talking to chatbots like so.

00:18:05.920 --> 00:18:12.960
I want to send you an email, so I say to my chatbot, and with voice mode, I can do this, send an email to Camille about this, and then you're like, oh, go read my email.

00:18:13.039 --> 00:18:17.759
So Núria sent you an email about this, oh, send her an email about this, and then what are we doing?

00:18:17.839 --> 00:18:19.920
Like, why are we not talking to each other?

00:18:20.079 --> 00:18:23.519
So people have these dystopian views of the future like that.

00:18:23.839 --> 00:18:28.160
Right now, most of the content out there is human-made content.

00:18:28.319 --> 00:18:28.480
Right.

00:18:28.640 --> 00:18:30.480
But I think that's going to change.

00:18:31.039 --> 00:18:38.079
And the value of human-made content is gonna go even higher because you're gonna have a lot of AI-created content.

00:18:38.319 --> 00:18:46.559
So a fear there is that, because what it really does is bring everything out to the average, you then have an average of everything, and that's very boring.

00:18:47.359 --> 00:18:48.240
Ah, okay.

00:18:48.480 --> 00:18:53.039
Yeah, then the uh knowledge becomes the average of everything, and that's a little bit boring.

00:18:53.119 --> 00:18:53.839
So that's a fear.

00:18:53.920 --> 00:18:58.000
We don't know if that's going to happen or not, but that is a fear that is out there.

00:18:58.240 --> 00:19:00.640
And then there's the problems with privacy.

00:19:00.960 --> 00:19:11.200
You could say something, and then the company that owns the thing can be acquired by a different company, and then they might have an interest in your data that you didn't know before, for example.

00:19:11.279 --> 00:19:29.039
Like imagine you have all of these personal conversations about your health status with character AI, but a health insurance company buys character AI, and now they know all of this information about you, about your medical history, and will they take that to make decisions about your health care?

00:19:29.440 --> 00:19:30.319
Maybe, maybe yes.

00:19:30.480 --> 00:19:38.160
But those are like the type of fear things that people have, and I hear people talking about there's a lot of others, but there's a little bit there.

00:19:38.480 --> 00:19:40.079
Yeah, yeah, I think that those are really good.

00:19:40.160 --> 00:19:42.559
And I don't think it's also always like specific to AI, right?

00:19:42.640 --> 00:19:45.279
Like I think there's a lot of fear around data being collected on social media.

00:19:45.599 --> 00:19:46.480
How is this being used?

00:19:46.640 --> 00:19:48.000
Who has access to this?

00:19:48.160 --> 00:19:56.319
And I think a lot of times it's scary not to know, but I don't think it's unique to AI, and I think sometimes when I hear it in conversation, people are like, oh, it's gonna take all this information.

00:19:56.559 --> 00:19:58.160
Well, so does your iPhone, you know.

00:19:59.200 --> 00:20:09.359
So that to me is not the scariest aspect, because I don't think it's new, but I'm also saying that as someone who's 25, of the digital native generation, who's, yeah, always been on this.

00:20:09.440 --> 00:20:15.759
And I think there's a level of familiarity with it, and so perhaps that's also dangerous to just be like, oh yeah, your data's just gonna get used.

00:20:16.240 --> 00:20:18.400
But it does get used, and I think that's the reality.

00:20:18.640 --> 00:20:26.640
So the thing there is: the ability to collect data is one thing, and then the next step is to take action on it, right?

00:20:26.799 --> 00:20:27.119
Yes.

00:20:27.359 --> 00:20:32.799
So you can have all of this data, but it is really hard to process it.

00:20:33.039 --> 00:20:34.720
Let me give you a specific example.

00:20:34.880 --> 00:20:44.079
The United States military is said to have more spy data on whatever things they are spying on than they can process.

00:20:44.400 --> 00:20:53.200
A report a few years ago said they spend an incredible number of man hours just listening to everything, reading all the documents, looking at all the photos.

00:20:53.359 --> 00:20:55.680
It takes a lot of people just to do that.

00:20:55.839 --> 00:21:05.519
And so to go from this huge data collection to then being able to take action on it is a step that up to now we weren't able to do.

00:21:05.759 --> 00:21:12.799
But now with these new models, the models are becoming so good that we might be able to take action on them.

00:21:12.960 --> 00:21:16.240
And I am not an AI doomer, like not at all.

00:21:16.400 --> 00:21:25.920
But if we're going to take the fear seriously, what we need to understand is that there's a step there that wasn't there before, that now is starting to be there.

00:21:26.079 --> 00:21:31.759
Where, for example, so Google two weeks ago, three weeks ago, released this AI co-scientist paper.

00:21:31.920 --> 00:21:40.799
So they gave this AI that they have, some version of Gemini 2.0, to some scientists for them to do experiments.

00:21:41.039 --> 00:21:54.559
One of the experiments that they did was, I'm not gonna try and say the details because I don't know the exact details, but they were working on this for years, like more than 10 years, and they had all of these hypotheses and they tested each hypothesis like one at a time, right?

00:21:54.640 --> 00:22:04.319
And then they gave this to the AI chatbot, like the same question and the raw data, and it validated their years of research, right?

00:22:04.480 --> 00:22:17.599
So what I'm saying is what used to take 10 years or used to take 15 or 20 years can now be done in hours or in days or weeks because of the increased processing power of these new models.

00:22:17.839 --> 00:22:22.480
So that is where data privacy might become more important.

00:22:22.880 --> 00:22:23.119
Right.

00:22:23.359 --> 00:22:24.079
Yeah, thank you.

00:22:24.240 --> 00:22:26.240
That's an excellent clarification on that point.

00:22:26.400 --> 00:22:27.279
I really appreciate that.

00:22:27.359 --> 00:22:42.640
And so, on the topic of data privacy, in your opinion, what would be the ideal way to regulate AI going forward so that it's the best it can possibly be for science communication, health communication, like in order for it to like really improve our health, which is so important.

00:22:42.720 --> 00:22:51.920
Like I can think of all the ways that it improves and makes things faster for us analyzing agricultural trends or things like that, but that's not data people are super concerned about it having.

00:22:52.079 --> 00:22:58.480
I think health information has always felt very personal, and where you get your health information really depends on who you trust.

00:22:58.640 --> 00:23:02.160
And so, what would be the ideal way to regulate AI, in your opinion?

00:23:02.240 --> 00:23:04.799
Like in an ideal world, what would you like to see?

00:23:05.440 --> 00:23:10.079
So I'm not sure that I have an exact, clear picture of how I would regulate AI.

00:23:10.160 --> 00:23:14.240
I think the way I would start thinking about this is a little bit more from first principles.

00:23:14.319 --> 00:23:16.640
So we don't want the AI to be evil, right?

00:23:16.720 --> 00:23:20.400
So I would think about what is it that we don't want the AI to do?

00:23:20.640 --> 00:23:24.640
So we don't want the AI to pick who lives and who dies.

00:23:24.799 --> 00:23:28.720
We don't want the AI deciding who gets UBI and who doesn't, right?

00:23:28.880 --> 00:23:33.039
A universal basic income, a dystopian future where the AI decides everything.

00:23:33.119 --> 00:23:36.400
And they're like, these ones, they're gonna die in one year.

00:23:36.640 --> 00:23:40.640
We don't need to give them access to food anymore.

00:23:40.720 --> 00:23:41.920
They're gonna die anyways.

00:23:42.079 --> 00:23:44.720
That is not something that you want the AI to do.

00:23:46.480 --> 00:23:47.200
I would agree with this.

00:23:47.359 --> 00:23:51.200
I also support that this should be a basic tenet.

00:23:51.920 --> 00:23:53.920
You don't decide who lives and who dies.

00:23:54.000 --> 00:23:54.480
Yes.

00:23:54.720 --> 00:24:00.000
There was a study published today, today or yesterday, like very recently.

00:24:00.160 --> 00:24:07.680
They did a study, not with the reasoning models, so not with o1, not with Google's thinking model, anything.

00:24:07.839 --> 00:24:13.119
But they did do it with 4o, with all of the best models, but not the reasoning ones.

00:24:13.279 --> 00:24:24.400
And they found, so they did ranked choice, like ranked-choice voting, like ranked-choice choosing, and the AIs have favorites; they think that not all human lives are equally valuable.

00:24:24.640 --> 00:24:27.519
They have picked which human lives are more valuable.

00:24:27.599 --> 00:24:31.440
I think Japan, uh people from Japan are the most valuable people.

00:24:31.759 --> 00:24:40.480
And, catch this, some rankings even put the chatbots, the AI, as more valuable than human life.

00:24:41.200 --> 00:24:43.039
So it's an interesting study.

00:24:43.119 --> 00:24:48.640
Listen, I'm not saying that all of the bad stories about the chatbots taking over and being our overlords are true.

00:24:48.799 --> 00:24:51.359
What I'm saying is that is not something that we want.

00:24:51.599 --> 00:24:56.319
We do not want the AI to decide which humans are better and which humans are not better.

00:24:56.559 --> 00:25:03.200
One thing that the European Union said, and I think I agree with them, I don't want the AI to be monitoring my emotions.

00:25:03.759 --> 00:25:06.160
Maybe um, yeah, so what if I'm angry?

00:25:06.400 --> 00:25:08.480
What matters is how I act.

00:25:08.640 --> 00:25:12.160
I don't want to go to jail because inside I wanted to murder you.

00:25:12.319 --> 00:25:13.200
I didn't murder you.

00:25:13.279 --> 00:25:14.240
You know what I mean?

00:25:14.559 --> 00:25:20.079
And so I don't want the AIs to be monitoring my emotions and making decisions like that.

00:25:20.319 --> 00:25:23.279
The idea of a social score, I don't want that.

00:25:23.519 --> 00:25:30.079
So I think that is where I would start is what don't I want the AI to do and start looking from there.

00:25:30.240 --> 00:25:42.480
But always thinking about it, because I really want the AI to do what it did in this Google thing, where they were able to do like 10 years of research and it got shortened to a few weeks or whatever it was.

00:25:42.640 --> 00:25:43.440
I want that.

00:25:43.599 --> 00:25:50.079
Because that means that maybe we can get to a treatment to a rare disease 10 years faster, right?

00:25:50.319 --> 00:25:51.440
So that I want.

00:25:51.599 --> 00:26:05.039
I want an AI that means that doctors don't have to spend more than 50% of their time typing in the electronic record software that they use and can actually take care of patients.

00:26:05.279 --> 00:26:06.559
So I would balance that.

00:26:06.720 --> 00:26:11.440
I would balance like, well, what do I want and what I don't want, and use that to guide regulation.

00:26:11.839 --> 00:26:23.039
I think something I'd love to see in the future for AI, and I feel really strongly about this, is that people should have access to their own health information in a way that they understand, because I've seen so many instances where that's not the case.

00:26:23.200 --> 00:26:52.160
And so I think that if we end up using AI in that context, almost like a generative plain-language summary of here is what you discussed today and here are action items for you, and if we get AI to the point where it's not hallucinating, or it's still working in concert with humans, we can improve health outcomes, because I feel like it would be life-changing for people to understand how to, in plain language, best manage their diabetes, or what a diagnosis of stage one melanoma really means.

00:26:52.319 --> 00:26:53.279
Like that kind of thing.

00:26:53.359 --> 00:26:55.599
I think that would be huge.

00:26:55.759 --> 00:27:07.119
And I think that you would see just a general increase in health literacy, and I think that's desperately needed at this point, but I think it's also something that we get our health information from places we trust.

00:27:07.279 --> 00:27:14.319
And as you were saying earlier, AI is certainly very trusted in some ways because you either really trust it or you really don't, I think.

00:27:14.480 --> 00:27:19.440
Um, and you're right, there's definitely still ways we need to keep our guard up with it because it does hallucinate and it does make mistakes.

00:27:19.599 --> 00:27:25.279
But if we can make that almost like an apolitical way to get your health info, I feel like would be desperately needed.

00:27:25.599 --> 00:27:35.680
Yeah, that and like when you said when someone receives a tough diagnosis, it is known that people don't process it, people don't even understand all of the conversations.

00:27:36.000 --> 00:27:40.400
They might as well not have had that conversation with the doctor, because people don't remember it.

00:27:40.559 --> 00:27:43.599
They go home and then they have a million questions once they're home.

00:27:43.759 --> 00:27:52.000
So that's why a lot of times they say take someone with you when you go talk to the doctor about some tough diagnosis, because then that person is not as emotional.

00:27:52.400 --> 00:27:54.480
Like an advocate, almost. Yes, exactly.

00:27:54.640 --> 00:27:57.839
But also you can hear what the doctor is saying.

00:27:58.000 --> 00:28:02.079
The patient that just got the shocking news almost cannot even hear.

00:28:02.240 --> 00:28:03.200
You know what I mean?

00:28:03.440 --> 00:28:12.079
So, like having an AI that records the conversation and then can have a conversation with you later about it, and can play it back for you.

00:28:12.160 --> 00:28:14.559
Oh no, but listen, the doctor said this at this moment.

00:28:16.160 --> 00:28:17.680
And you're like, oh yeah, true.

00:28:17.839 --> 00:28:23.759
But there's more than just the doctor, because it listened to the conversation and knows exactly where to go in the recording, number one.

00:28:23.839 --> 00:28:33.039
But number two, if it also had access to like background information about the disease and all of that, you could also have conversations with it about what's going on.

00:28:33.359 --> 00:28:43.599
So one thing that I always say to people is patient forums are very powerful for patients because in the middle of the night, you're like really worried about something.

00:28:43.759 --> 00:28:54.960
You can't call your doctor, but you can go to a patient forum and you can ask a question, and you get all of these other people that have the same disease and they give their experience, and it is really important for patients.

00:28:55.119 --> 00:29:07.599
So you could kind of have that with an AI as well, in addition to that, because people want to connect with people, but to have something there where you can go, oh, this made me feel a little bit calmer.

00:29:07.759 --> 00:29:08.960
I think that would be great.

00:29:09.519 --> 00:29:10.319
I love that idea.

00:29:10.480 --> 00:29:11.839
I think that's excellent.

00:29:12.240 --> 00:29:13.119
I hope we see that.

00:29:13.359 --> 00:29:14.400
I think that would be beautiful.

00:29:14.640 --> 00:29:18.559
I also think AI could potentially connect you to those very human forums, right?

00:29:18.640 --> 00:29:24.960
Of here is a list of resources, like here are groups, and here's what this is if it's lupus or if it's cancer, whatever it is.

00:29:25.119 --> 00:29:28.799
Here are people who have similar experiences to you, so you can actually connect with real humans.

00:29:28.960 --> 00:29:35.119
I can give you this information as like the AI, and then help connect you also to the human aspect of care that we all really need.

00:29:35.359 --> 00:29:35.599
Yeah.

00:29:35.839 --> 00:29:45.759
And more than just a list, because it can understand you and like your preferences, it can actually send you to the resource that would be better for you personally.

00:29:46.240 --> 00:29:50.240
And you don't have to hunt for it when you're already like processing or really worried or something.

00:29:50.400 --> 00:29:52.240
I think that would be so powerful.

00:29:52.400 --> 00:29:52.880
Yeah.

00:29:53.200 --> 00:29:54.880
Gosh, I think that is so cool.

00:29:55.119 --> 00:29:55.359
Okay.

00:29:55.759 --> 00:29:56.400
You're brilliant.

00:29:56.480 --> 00:29:57.759
I have learned so much from this.

00:29:57.839 --> 00:29:58.559
So thank you.

00:29:58.720 --> 00:29:59.759
I do have one last question.

00:30:00.160 --> 00:30:00.480
For you.

00:30:00.640 --> 00:30:02.960
As always, something we can cover for our listeners at home.

00:30:03.119 --> 00:30:07.200
So, how would you recommend people increase their AI literacy if they're interested?

00:30:07.839 --> 00:30:11.119
Okay, so one thing I think everyone should do is just try it.

00:30:11.279 --> 00:30:12.160
Try it at home.

00:30:12.240 --> 00:30:14.400
Try it for your own thing.

00:30:14.559 --> 00:30:17.839
Try it about something that has nothing to do with work, nothing to do with nothing.

00:30:17.920 --> 00:30:18.240
I don't know.

00:30:18.319 --> 00:30:24.480
Take a picture of your fridge, the inside of your fridge, and ask it to give you an idea of what to cook for dinner.

00:30:24.640 --> 00:30:25.200
Something.

00:30:25.839 --> 00:30:27.759
I thought my GPT write me high food.

00:30:27.920 --> 00:30:29.200
I thought that was a good way to start it.

00:30:31.279 --> 00:30:32.400
Yes, exactly.

00:30:32.559 --> 00:30:38.640
Or I don't know, just try different things and experiment and see what it can and what it cannot do.

00:30:38.799 --> 00:30:40.559
See what you like and what you don't like.

00:30:40.640 --> 00:30:41.599
And that is number one.

00:30:41.920 --> 00:30:45.440
The second thing is yes, if you're going to use it for work, right?

00:30:45.599 --> 00:30:48.160
Yes, the AI makes mistakes, but so do humans.

00:30:48.480 --> 00:30:50.400
And that is why we are there.

00:30:50.559 --> 00:30:53.599
It will make a mistake, you catch it and you fix it.

00:30:53.839 --> 00:30:55.359
It is not a huge deal.

00:30:55.519 --> 00:31:00.640
Like the only problem would be if you were telling the AI to do the thing and you weren't even going to look at it.

00:31:00.799 --> 00:31:03.200
Just the fact that it makes mistakes means you watch it there.

00:31:03.279 --> 00:31:03.920
You caught it.

00:31:04.000 --> 00:31:04.960
That is a good thing.

00:31:05.119 --> 00:31:07.599
You can move on now and go do it correctly.

00:31:07.839 --> 00:31:17.279
When you're using it for work as a professional, a science communicator, the fact that it makes mistakes when I am talking to it does not make me feel bad at all.

00:31:17.599 --> 00:31:20.480
Because what it means is that I caught the mistake.

00:31:20.559 --> 00:31:25.039
So it's good, that's my job, is to make sure that it doesn't have mistakes at the end.

00:31:25.279 --> 00:31:26.960
So I would say know yourself.

00:31:27.039 --> 00:31:28.160
So do you prefer to read?

00:31:28.319 --> 00:31:30.079
Do you prefer to scroll social media?

00:31:30.160 --> 00:31:31.440
Uh do you prefer to listen?

00:31:31.519 --> 00:31:35.839
Like I am a big podcast listener, and that is how I like to get my information.

00:31:36.880 --> 00:31:38.559
So I listen to podcasts, right?

00:31:38.640 --> 00:31:40.480
I have a few and try different ones.

00:31:40.559 --> 00:31:41.759
There's lots about AI.

00:31:41.920 --> 00:31:48.240
I listen to podcasts about AI and education specifically because I'm very interested in continual medical education.

00:31:48.400 --> 00:31:53.759
And then I listen to some more general AI podcasts, like AI in business a lot.

00:31:53.920 --> 00:31:55.200
I think about that a lot.

00:31:55.359 --> 00:31:58.480
And also I read energy podcasts about AI.

00:31:58.559 --> 00:32:02.640
As I said, I think of AI as a science, so I want to understand the science behind it.

00:32:02.799 --> 00:32:04.880
I want to know about the studies that are published.

00:32:04.960 --> 00:32:07.599
I really like that someone tried to see.

00:32:07.839 --> 00:32:13.279
I say I prefer AI arguments, and I'm not surprised that I prefer AI, but I don't want it to.

00:32:13.440 --> 00:32:14.079
Thank you very much.

00:32:14.240 --> 00:32:15.119
Things like that, right?

00:32:15.279 --> 00:32:31.119
So I would pick the medium that you like and then look for the people that you like to listen to or read, because people have different styles, and it's great that we have almost 8 billion people in the world, and we can find the ones that have the style that we like.

00:32:31.359 --> 00:32:32.880
So yeah, so that is what I would say.

00:32:32.960 --> 00:32:41.839
So pick the topics that interest you, the medium that you like, and the people that you like, and just don't make it a second job; listen to a podcast once a week.

00:32:41.920 --> 00:32:42.799
It'll be fine.

00:32:42.880 --> 00:32:46.799
Uh, but experiment and experiment and experiment is what I would say.

00:32:47.119 --> 00:32:48.319
I really appreciate that.

00:32:48.400 --> 00:32:49.359
And thank you so much.

00:32:49.519 --> 00:32:50.240
This is excellent.

00:32:50.319 --> 00:32:55.680
I feel like this is such a big topic right now, and a lot of people are thinking about it or worried about it.

00:32:55.759 --> 00:32:57.680
And I feel like there's so much potential for it.

00:32:57.839 --> 00:33:02.559
So I intend to do your workshop, and now after hearing this, I don't know, it just makes me very hopeful.

00:33:02.799 --> 00:33:09.440
I think there's so much good that it can do, and I think that's something we can use, and it's a great tool, so we can use it as a tool.

00:33:09.680 --> 00:33:10.400
Yes, exactly.

00:33:10.480 --> 00:33:11.200
I agree with you.

00:33:11.440 --> 00:33:12.319
Thank you for inviting me.

00:33:12.400 --> 00:33:22.559
Yes, I always call them tools because I want people to understand that that is what they are, tools, and I'm always thinking about, okay, what is it that I could not do before that I can do now?

00:33:22.720 --> 00:33:26.319
And I'm trying to figure that out, and that is what I wanted to do.

00:33:26.640 --> 00:33:27.359
Absolutely.

00:33:27.599 --> 00:33:29.359
Thank you so much, Núria, for joining us.

00:33:29.440 --> 00:33:32.000
Thank you, everyone, for listening to this episode of Infectious Science.

00:33:32.160 --> 00:33:33.759
As always, let us know what you want to hear.

00:33:33.839 --> 00:33:34.720
And thanks for joining us.

00:33:35.200 --> 00:33:37.759
Thanks for listening to the Infectious Science Podcast.

00:33:37.920 --> 00:33:45.839
Be sure to hit subscribe and head to infectiousscience.org to join the conversation, access the show notes, and sign up for our newsletter to receive our free materials.

00:33:46.160 --> 00:33:52.000
If you enjoyed this new episode of Infectious Science, please leave us a review on Apple Podcasts or Spotify.

00:33:52.160 --> 00:33:54.559
And go ahead and share this episode with some of your friends.

00:33:55.359 --> 00:33:59.599
Don't hesitate to ask questions and tell us what topics you'd like us to cover for future episodes.

00:33:59.839 --> 00:34:03.920
Go ahead, drop us a line in the comment section, or send us a message on social media.

00:34:04.240 --> 00:34:05.920
We'll see you next time for a new episode.

00:34:07.039 --> 00:34:09.920
Stay happy, stay healthy, stay interested.

00:34:17.360 --> 00:34:24.159
Partners with innovators in science and health, working with communities to develop nimble approaches to the world's most challenging health problems.