WEBVTT
00:00:09.759 --> 00:00:11.839
This is a podcast about One Health.
00:00:11.919 --> 00:00:18.239
The idea that the health of humans, animals, plants, and the environment that we all share are intrinsically linked.
00:00:18.480 --> 00:00:23.920
Coming to you from a team of scientists, physicians, and veterinarians, this is Infectious Science.
00:00:24.079 --> 00:00:27.120
Where enthusiasm for science is contagious.
00:00:29.359 --> 00:00:32.799
All right, welcome back to this episode of Infectious Science.
00:00:32.960 --> 00:00:34.799
Thanks everyone for tuning in.
00:00:35.119 --> 00:00:37.439
I am really excited today to be here.
00:00:37.520 --> 00:00:42.240
I'm one of your co-hosts, Camille, and we are really fortunate to be joined by Dr.
00:00:42.320 --> 00:00:51.280
Nuria Negrão, who's going to talk to us about AI and how it's changing conversations around health and science and medicine and the ways that we can communicate those.
00:00:51.439 --> 00:00:51.600
Dr.
00:00:51.679 --> 00:00:54.240
Negrão, could you introduce yourself for our listeners?
00:00:54.640 --> 00:00:55.520
Hi, Camille.
00:00:55.600 --> 00:00:58.399
Thank you so much for inviting me to be part of the podcast.
00:00:58.560 --> 00:01:02.719
I really like listening to you guys, and I'm very honored to be here.
00:01:02.799 --> 00:01:03.359
So thank you.
00:01:03.520 --> 00:01:04.480
Yes, hi everyone.
00:01:04.560 --> 00:01:05.840
My name is Nuria.
00:01:06.000 --> 00:01:11.040
I am a medical writer at the moment, and I am an AI adoption strategist.
00:01:11.280 --> 00:01:23.200
That's what I'm calling myself, basically, because I've been doing a lot of work in AI education and helping people to figure out how they can use AI in their professional lives as scientists and as medical writers.
00:01:23.439 --> 00:01:25.439
My background is as a scientist.
00:01:25.680 --> 00:01:29.439
I was a bench scientist in academia and in industry for quite a bit.
00:01:29.680 --> 00:01:36.799
I have a PhD in cellular biology, and then I also worked in biotech on developing diagnostic tools.
00:01:36.959 --> 00:01:43.760
And throughout my career, if anything, I actually started on the science communication side of it.
00:01:43.840 --> 00:01:46.480
I say I've been doing science communication since high school.
00:01:46.719 --> 00:01:54.400
The first project I did, in the 90s in high school, was all about teaching all of my peers about HIV and AIDS.
00:01:54.560 --> 00:01:58.000
I did a project, and my school was like, this project is awesome, and you need to go teach everyone.
00:01:58.079 --> 00:02:04.719
So I taught everyone in my school, and then I went to other schools teaching everyone about HIV and AIDS.
00:02:05.200 --> 00:02:09.599
So that was the beginning of my sci-comm career.
00:02:09.680 --> 00:02:15.599
And in college, when I was teaching, I was also teaching a lot of writing in the sciences.
00:02:15.759 --> 00:02:19.520
So I got training on how to teach writing in the sciences.
00:02:19.599 --> 00:02:21.520
So I think I've been doing this for quite a bit.
00:02:21.759 --> 00:02:27.759
So I also see this whole AI education thing that I'm doing now as like a natural progression from that.
00:02:27.919 --> 00:02:42.560
Because I really see talking about AI and teaching people how to use AI as science communication, because artificial intelligence is a science, and I feel like learning about the science itself helps us use the tools better.
00:02:42.719 --> 00:02:42.960
Yes.
00:02:43.360 --> 00:02:44.639
Yeah, no, absolutely.
00:02:45.199 --> 00:02:46.479
What a wonderful overview.
00:02:46.560 --> 00:02:46.960
Thank you.
00:02:47.199 --> 00:02:47.759
That's really cool.
00:02:47.919 --> 00:02:53.919
So I just finished my dissertation, and I was studying the effects of HIV and cocaine in the brain.
00:02:54.080 --> 00:03:00.000
So very cool that you also did work on basically the public-facing education aspect of that.
00:03:00.080 --> 00:03:10.800
And what's really cool is I think medical literacy has been shown to increase more when you're not in a hospital setting, like in schools or in local community centers or churches or barbershops and things like that.
00:03:11.120 --> 00:03:15.919
People take in the information and really absorb it better than just getting it from something like a hospital.
00:03:16.000 --> 00:03:17.039
So that's really neat, very cool.
00:03:17.120 --> 00:03:19.520
Always fun to learn about somebody who did some cool sci-comm work.
00:03:20.000 --> 00:03:27.360
So speaking of which, could you tell us a bit about how do you think AI is currently changing science communication and medical writing?
00:03:27.439 --> 00:03:30.479
And then where do you see that going in the next couple of years?
00:03:30.960 --> 00:03:38.479
Okay, so I think right now we're at the very beginning of it, at least when we're talking about professional communicators, right?
00:03:38.639 --> 00:03:53.919
If we're talking about people whose job is to do science communication, or scientists themselves, or writers that work in writing about science or writing about medicine, I think you have some people that are very hesitant to use AI, mostly because, one, it's a new tool.
00:03:54.080 --> 00:04:04.479
And if you're not from the computer science side of the world, anything that is computer science is a little bit unknown, and then people may be a little bit afraid of it.
00:04:04.560 --> 00:04:06.400
Just like I'm just thinking about myself.
00:04:06.479 --> 00:04:11.039
Like when people used to say, Oh, you need to learn how to code, I'd be like, no, I do not.
00:04:11.280 --> 00:04:12.879
I do a lot of work already.
00:04:13.039 --> 00:04:15.199
I do not need to learn a new language.
00:04:15.360 --> 00:04:16.319
Thank you very much.
00:04:16.480 --> 00:04:34.399
So I understand the fear and hesitancy there, but it is also because these new generative AI tools write what sounds like really convincing text, but even if you just ask a very simple question, they can come up with wrong facts.
00:04:34.639 --> 00:04:43.040
And when you are communicating science or communicating medicine, there is such an emphasis on being accurate.
00:04:43.279 --> 00:04:45.120
It's so necessary, so important.
00:04:45.360 --> 00:04:50.399
People are like, oh, if I use that tool, it's gonna give me the wrong facts and we're gonna get into trouble.
00:04:50.560 --> 00:04:52.639
So I think a lot of people are afraid of it.
00:04:52.879 --> 00:05:00.240
On the other hand, on the public side, I think you have people in the population that have not used AI at all.
00:05:00.399 --> 00:05:04.720
But those that are students, we know that students are using AI a lot.
00:05:04.959 --> 00:05:09.279
So I am guessing that the same way that all patients go to Dr.
00:05:09.439 --> 00:05:15.360
Google to find about their diseases, I am sure that there are lots of patients going to Dr.
00:05:15.519 --> 00:05:18.560
ChatGPT at the moment to find out about their diseases.
00:05:18.720 --> 00:05:20.480
So I think the state of it is like that.
00:05:20.560 --> 00:05:30.879
And then there are a few people everywhere in all industries that are really trying to figure out how they're going to use these tools in a way that is safe, effective, and responsible, right?
00:05:31.040 --> 00:05:38.639
For example, I know people are using it to create education materials for medical students, to create practice questions for tests.
00:05:38.879 --> 00:05:42.560
There are lots of people trying to do more personalized learning.
00:05:42.720 --> 00:05:53.199
So the bot will assess how you're doing in your learning and will change the difficulty of what it teaches you or asks you next because it can really be personalized.
00:05:53.360 --> 00:05:57.839
So, more in medical education, I know that people are starting to get into that.
00:05:58.079 --> 00:06:06.560
I know that some companies are creating virtual patients so that doctors can train on these virtual patients, train their communication skills.
00:06:06.800 --> 00:06:09.040
It is really hard to tell someone bad news.
00:06:09.279 --> 00:06:14.959
So, like you can practice that, and you can practice culturally aware communication as well.
00:06:15.120 --> 00:06:22.560
So maybe you feel like you're not too sure how to talk about a specific topic with a person that is from a different culture than yours.
00:06:22.720 --> 00:06:25.040
So they use these virtual patients for that.
00:06:25.199 --> 00:06:28.639
So I think really interesting things are starting to be done.
00:06:28.959 --> 00:06:29.600
Yeah, yeah.
00:06:29.759 --> 00:06:39.439
And I think you really hit on something that AI does really well, which is, if you have something written up or you know what you want to say, it can help you find the best way to state it, I have found.
00:06:39.519 --> 00:06:41.040
And it's a really useful tool for that.
00:06:41.279 --> 00:06:47.680
Here's a concept that I'm trying to explain, but am I hitting the audience that I want to or intend to, and making sure that information is clear to them?
00:06:47.839 --> 00:06:48.959
So that's really cool.
00:06:49.120 --> 00:07:01.199
And I've also heard from a conversational standpoint that AI is potentially being trained to work almost as like a telehealth chat feature for like therapy, which I think is really wild and also could dramatically increase people's accessibility to things, right?
00:07:01.279 --> 00:07:04.879
Like I grew up in a rural area and healthcare in rural areas is very hard to access.
00:07:05.120 --> 00:07:07.120
But of course, there's always the human side of things, right?
00:07:07.199 --> 00:07:12.879
And so, and I think you kind of hit on this, it's not replacing people, but it is changing how we do things.
00:07:13.199 --> 00:07:13.759
Yeah, yeah.
00:07:14.000 --> 00:07:22.480
And what you're talking about, like the chatbots, they're like companion bots, or the ones that will become, say, therapy-type bots or coach-type bots.
00:07:22.639 --> 00:07:31.279
One of the really popular apps on mobile phones is Character.AI, which is people basically talking to these companion bots, right?
00:07:31.360 --> 00:07:34.639
And you can talk to different personalities on Character.AI.
00:07:34.800 --> 00:07:45.839
And you have a lot of young people in particular spending a lot of their AI time talking to these AI best friends, AI romantic partners, but also coaches and therapists and things like that.
00:07:46.079 --> 00:08:08.800
So, one thing that is interesting about this is that ChatGPT's new model, the GPT-4.5 model, in tests of emotional intelligence seems to have higher emotional intelligence than all of the other models, which is basically what you want if you want to have more of these bots talking to people in more of a companion or therapist type of role.
00:08:08.959 --> 00:08:10.639
So that is really interesting.
00:08:10.879 --> 00:08:14.959
Another company that was really good at doing this, I think they were called Inflection AI.
00:08:15.040 --> 00:08:17.680
They are the ones where most of the company went to Microsoft.
00:08:18.160 --> 00:08:26.720
The CEO of that was Mustafa Suleyman, who now is the CEO of AI at Microsoft, and he was one of the original co-founders of DeepMind.
00:08:26.800 --> 00:08:39.039
So he has a long history of working in AI, and they were really interested in this emotional intelligence part of AI: where a lot of the other companies are more on the IQ side, they are more on the EQ side.
00:08:39.120 --> 00:08:41.039
So that is really interesting.
00:08:41.120 --> 00:08:41.519
Yeah.
00:08:41.840 --> 00:08:42.399
Yeah, yeah.
00:08:42.480 --> 00:08:44.799
And I could see how that could also be useful for us.
00:08:44.960 --> 00:08:52.159
Are we having culturally sensitive conversations when we're breaking medical news or like how do we bring this to people in a way that is approachable and understandable?
00:08:52.240 --> 00:08:54.639
And that's something we think about a lot in science and in medicine.
00:08:55.120 --> 00:09:00.080
So, what do you see as the major benefits of AI right now in sci-comm and in medical writing?
00:09:00.240 --> 00:09:02.320
How is it on the ground helping us?
00:09:03.039 --> 00:09:11.679
So, the way I think about it is, even if AI right now is not very good, it is improving so fast.
00:09:11.840 --> 00:09:15.440
And I see the potential of it, and I get really excited.
00:09:15.519 --> 00:09:26.720
So, I get excited about the things that we cannot do now, that we struggle to do now, that we would have time to do if we could automate some of our processes.
00:09:26.879 --> 00:09:34.080
So, for example, even in the US, where there are so many doctors, you still don't have enough doctors, right?
00:09:34.320 --> 00:09:39.279
And so, how can we think about how AI can help us with this?
00:09:39.519 --> 00:09:54.080
People want to see human doctors, but maybe triage can be done with some AI chatbots, or maybe they can be with the human doctor for the really important parts, but maybe the AI could do some of the note-taking.
00:09:54.159 --> 00:09:59.360
Like doctors spend a lot of time just typing on their computer.
00:09:59.519 --> 00:10:04.720
And if you go to a doctor's appointment, like half of the time they're typing on their computer, they're not even looking at you.
00:10:04.879 --> 00:10:14.399
So imagine you could get an AI voice recorder, right, that is just transcribing your conversation with the patient and filling out the form so you can now focus on this patient.
00:10:14.639 --> 00:10:21.200
So, in a half-hour appointment, you have right now like 10 minutes of actual contact with the patient.
00:10:21.519 --> 00:10:30.879
That means that maybe you could increase the actual contact time to 20 minutes and still have 10 minutes gained in which you could see another patient.
00:10:31.039 --> 00:10:38.159
So, this is how you could both improve the quality of contact between doctors and patients and see more patients.
00:10:38.240 --> 00:10:39.519
And that's just a very simple thing.
00:10:39.600 --> 00:10:48.399
And that is something that could be possible today if we could get around the technology issues of does it understand everything that we're saying?
00:10:48.559 --> 00:10:53.519
Does it understand medical terms? And then the privacy issues and all of that.
00:10:53.759 --> 00:10:58.080
We would just need to make these bots good enough, and I think we are very close to being able to do that.
00:10:58.320 --> 00:11:05.360
In terms of science communication and science writing, there was the study that was published in December of 2024.
00:11:05.519 --> 00:11:09.360
It was a really good study by academics at different universities.
00:11:09.519 --> 00:11:15.840
The lead author was from Stanford, and they looked at 4,000 people and the impact of AI at work.
00:11:16.000 --> 00:11:21.919
So this is the most comprehensive survey of how AI has been used in the workspace, right?
00:11:22.080 --> 00:11:28.240
And what they saw was that about 30% of people are using AI at work to do work tasks.
00:11:28.399 --> 00:11:31.279
So 30% is not that huge of a number.
00:11:31.440 --> 00:11:39.600
And when they looked at the industries that have the highest adoption, it was like marketing, IT, and customer service, right?
00:11:39.679 --> 00:11:43.600
So it wasn't healthcare or science communication or anything like that.
00:11:43.759 --> 00:11:48.159
So what I'm trying to say is I think the adoption is still very low for people.
00:11:48.399 --> 00:11:56.720
Where I think there is a lot of potential, and this is what I tell people, is that if AI only did this, I would already be super happy.
00:11:56.960 --> 00:11:59.600
A lot of what we do is research.
00:11:59.840 --> 00:12:02.720
And in research, AI can be really transformative.
00:12:02.879 --> 00:12:06.080
Because if you think about it, we can only read so fast.
00:12:06.399 --> 00:12:16.399
So if I am reading about a disease, I usually give this example because I work a lot with non-small cell lung cancer, but also because non-small cell lung cancer is a huge field.
00:12:16.480 --> 00:12:18.480
There's like papers coming out every day.
00:12:18.639 --> 00:12:26.639
I think if you go to PubMed right now and you look just in 2025, more than a hundred papers have been published already on non-small cell lung cancer.
00:12:26.720 --> 00:12:31.600
So it is impossible for a human to read everything about non-small cell lung cancer.
00:12:31.919 --> 00:12:33.519
AI can read everything.
00:12:33.840 --> 00:12:35.840
Okay, and give you a generated summary.
00:12:36.159 --> 00:12:36.639
Exactly.
00:12:36.960 --> 00:12:39.039
It can read everything fast.
00:12:39.679 --> 00:12:47.919
So that is something that the bots can do now that is better than us because we just don't have the capacity to read that fast.
00:12:48.080 --> 00:12:52.159
The trick is, okay, so how can we take advantage of that, right?
00:12:52.320 --> 00:12:56.320
And that is what we need to think of is okay, so I don't need to read everything.
00:12:56.480 --> 00:13:06.000
I need it to read everything, to find the most relevant sources and give me the most relevant sources so that I can read the really key sources, right?
00:13:06.240 --> 00:13:16.080
So whenever I'm telling people about different ways of using AI, it is always research and search that I think is the biggest unlock at the moment.
00:13:16.240 --> 00:13:19.279
And then the other thing is connecting different ideas.
00:13:19.440 --> 00:13:23.759
So because it can read everything and it remembers everything, because that's the other thing.
00:13:23.919 --> 00:13:25.679
We don't remember everything we read.
00:13:25.840 --> 00:13:27.200
It is just the way it is, right?
00:13:27.279 --> 00:13:28.559
But it remembers everything.
00:13:28.720 --> 00:13:38.720
So if you gave it everything to read about non-small cell lung cancer and you're talking about something else and you ask it a question, it remembers, oh yeah, there was that one paper that I read that said something about this.
00:13:38.960 --> 00:13:43.600
So it is really good at making those connections and going to get those things.
00:13:43.759 --> 00:13:54.159
And then if you use tools to read all of these papers at the same time, and right now, really, the star of the show is NotebookLM, you can also speed up the reading part of your research, right?
00:13:54.240 --> 00:13:59.279
So your understanding and asking questions and making connections between different papers.
00:13:59.519 --> 00:14:02.720
I talk about the efficiency gain, so how much faster it is.
00:14:02.879 --> 00:14:04.879
What I want to emphasize is that it's not just about being faster.
00:14:05.279 --> 00:14:10.720
What the speed does is allow you to also go deeper and to do better work.
00:14:10.960 --> 00:14:12.879
It is not just about being faster.
00:14:13.039 --> 00:14:13.840
It is more than that.
00:14:13.919 --> 00:14:14.080
Yes.
00:14:14.320 --> 00:14:18.000
Yeah, I think I could see how that could help us ask better research questions, right?
00:14:18.159 --> 00:14:26.559
Because so many times, if you're seeing scientific questions being posed for new grant funding or whatever that is, they are definitely limited by what the people working on them can read and get through.
00:14:26.639 --> 00:14:28.639
And the dissertation work I did was on HIV.
00:14:28.720 --> 00:14:30.159
There are so many HIV papers.
00:14:30.240 --> 00:14:32.159
I have another friend who works on coronaviruses.
00:14:32.559 --> 00:14:34.879
There are so many papers on COVID.
00:14:34.960 --> 00:14:40.320
And it is like such a quagmire to wade through to find exactly what you're looking for and then to make sure you don't miss things.
00:14:40.399 --> 00:14:41.200
And we're only human.
00:14:41.279 --> 00:14:42.639
So I could see it being used for that.
00:14:42.720 --> 00:14:51.440
But also there's so many clinical care summaries that we can see published as like here's a case study of this is what happened, this is what was missed, this is how we eventually figured out what was happening.
00:14:51.519 --> 00:14:57.840
And I could see AI being used to say, okay, this is happening a lot, this might be a gap in knowledge and how we're educating physicians.
00:14:58.000 --> 00:15:09.600
If we're seeing an increase in this geographical area of this particular disease, particularly I can think of that for like vector-borne diseases or something, if you're seeing an increase, I think that could be really interesting to track and map using something like AI.
00:15:09.759 --> 00:15:15.120
So I think that there's a lot of potential for it to really improve our health, but also our science.
00:15:15.279 --> 00:15:16.639
So I think that's really cool.
00:15:16.799 --> 00:15:16.960
Yeah.
00:15:17.120 --> 00:15:22.000
But with all of that being said, what are the potential downsides of AI right now, right?
00:15:22.159 --> 00:15:23.200
It's constantly changing.
00:15:23.279 --> 00:15:25.120
And I think there's a lot of fear around AI.
00:15:25.200 --> 00:15:28.879
And I think there's a lot of fear around anything that's new and not super known or regulated.
00:15:29.120 --> 00:15:32.720
But could you talk about that, and how founded are those fears really?
00:15:33.279 --> 00:15:33.759
Yes.
00:15:33.919 --> 00:15:34.240
Yeah.
00:15:34.320 --> 00:15:36.720
So there's different levels of fears, right?
00:15:36.879 --> 00:15:39.919
So you have the same problem that you had with Dr.
00:15:40.000 --> 00:15:41.200
Google, you have with Dr.
00:15:41.519 --> 00:15:42.480
ChatGPT, right?
00:15:42.639 --> 00:15:47.360
The patient goes and like searches for their symptoms and they can get convinced that they have something.
00:15:47.840 --> 00:15:55.919
These chatbots are trained to please you as the user; because they are chatbots, they will pick up what you drop, the hints that you drop.
00:15:56.159 --> 00:16:10.399
So if you have someone who is anti-vaccine, I think this is a very clear example where probably all of the chatbots are being trained not to re-emphasize anti-vaccine points of view, except for maybe Grok.
00:16:10.559 --> 00:16:18.480
But you can see how if it gets the feeling that the user has a certain point of view, that it would just reinforce it.
00:16:18.559 --> 00:16:24.480
And then you get into the fallacy that we all have as humans, which is confirmation bias, right?
00:16:24.720 --> 00:16:28.000
So we just look for things that confirm our views.
00:16:28.240 --> 00:16:34.480
And because it sounds so authoritative, it sounds so good, we say that it is intelligent, we say that it knows everything.
00:16:34.720 --> 00:16:37.600
I just told you that it knows everything and it never forgets, right?
00:16:37.679 --> 00:16:40.720
It is so good, it knows everything, it reads everything and never forgets.
00:16:40.799 --> 00:16:45.360
It's so much better than humans that you can get this false sense of confidence.
00:16:46.080 --> 00:16:47.600
And that can give you a problem.
00:16:47.840 --> 00:16:49.759
So that's one level of issues.
00:16:49.840 --> 00:16:52.720
And that is if a patient goes there, but imagine also a doctor.
00:16:52.879 --> 00:16:56.960
If it is something that you're seeing every day and you know about it all the time, that is one thing.
00:16:57.120 --> 00:17:04.960
But if you see something strange, you do go to your books, right, or to the sources that you trust, and then you find: what does this match?
00:17:05.119 --> 00:17:07.680
Because you're not an encyclopedia, you're human.
00:17:07.839 --> 00:17:17.680
So now, as these chatbots become more efficient, there's the fallacy that they know everything, and they're so authoritative that they can be saying something wrong, because they do hallucinate, right?
00:17:17.759 --> 00:17:19.759
And to hallucinate means they make mistakes.
00:17:19.920 --> 00:17:26.240
They say something that is not accurate or not factual, but they say it very convincingly.
00:17:26.400 --> 00:17:35.200
And whenever something like that is saying something so convincingly, it is really hard for you to catch it, especially when most of the time it's accurate.
00:17:35.359 --> 00:17:42.880
So one thing that I've heard someone say, so this is not my original thought, is that it would be better if it was wrong, like 25% of the time.
00:17:42.960 --> 00:17:50.720
But because it is only wrong like 5% or 2% or 1% of the time, it is more dangerous because you're not on alert.
00:17:50.960 --> 00:17:54.160
You turn off your critical brain because you're so used to it being right.
00:17:54.319 --> 00:17:59.200
So the fact that it is getting better and better at being right all of the time is actually a fear.
00:17:59.359 --> 00:18:00.000
So that's one.
00:18:00.160 --> 00:18:05.839
Another fear that a lot of people have is that we're gonna get chatbots talking to chatbots.
00:18:05.920 --> 00:18:12.960
Say I want to send you an email, so I say to my chatbot, and with voice mode I can do this, send an email to Camille about this, and then you're like, oh, go read my email.
00:18:13.039 --> 00:18:17.759
So Nuria sent you an email about this, oh, send her an email about this, and then what are we doing?
00:18:17.839 --> 00:18:19.920
Like, why are we not talking to each other?
00:18:20.079 --> 00:18:23.519
So people have these dystopian views of the future like that.
00:18:23.839 --> 00:18:28.160
Right now, most of the content out there is human-made content.
00:18:28.319 --> 00:18:28.480
Right.
00:18:28.640 --> 00:18:30.480
But I think that's going to change.
00:18:31.039 --> 00:18:38.079
And the value of human-made content is gonna go even higher because you're gonna have a lot of AI-created content.
00:18:38.319 --> 00:18:46.559
So a fear there is that, because what it really does is pull everything toward the average, you then have an average of everything, and that's very boring.
00:18:47.359 --> 00:18:48.240
Ah, okay.
00:18:48.480 --> 00:18:53.039
Yeah, then the knowledge becomes the average of everything, and that's a little bit boring.
00:18:53.119 --> 00:18:53.839
So that's a fear.
00:18:53.920 --> 00:18:58.000
We don't know if that's going to happen or not, but that is a fear that is out there.
00:18:58.240 --> 00:19:00.640
And then there's the problems with privacy.
00:19:00.960 --> 00:19:11.200
You could say something, and then the company that owns the thing can be acquired by a different company, and then they might have an interest in your data that you didn't know about before, for example.
00:19:11.279 --> 00:19:29.039
Like imagine you have all of these personal conversations about your health status with Character.AI, but a health insurance company buys Character.AI, and now they know all of this information about you, about your medical history, and will they take that to make decisions about your health care?
00:19:29.440 --> 00:19:30.319
Maybe, maybe yes.
00:19:30.480 --> 00:19:38.160
But those are the types of fears that people have and that I hear people talking about; there are a lot of others, but that's a little bit of it.
00:19:38.480 --> 00:19:40.079
Yeah, yeah, I think that those are really good.
00:19:40.160 --> 00:19:42.559
And I don't think it's always specific to AI, right?
00:19:42.640 --> 00:19:45.279
Like I think there's a lot of fear around data being collected on social media.
00:19:45.599 --> 00:19:46.480
How is this being used?
00:19:46.640 --> 00:19:48.000
Who has access to this?
00:19:48.160 --> 00:19:56.319
And I think a lot of times it's scary not to know, but I don't think it's unique to AI, and I think sometimes when I hear it in conversation, people are like, oh, it's gonna take all this information.
00:19:56.559 --> 00:19:58.160
Well, so does your iPhone, you know.
00:19:59.200 --> 00:20:09.359
So that to me is not the scariest aspect, because I don't think it's new, but also I'm saying that as someone who's 25 and of the digital native generation who's, yeah, always been on this.