Subscribe: Apple Podcasts | Spotify | YouTube

In this episode, Messellech “Selley” Abebe chats with Precious Tate about how parents can raise confident, critical thinkers in a world shaped by artificial intelligence. They talk about why it’s important to guide kids toward curiosity instead of fear, how families can set simple guardrails around new technologies, and what it takes to help children become creators rather than just consumers of tech. Tate is a parenting advocate and digital literacy educator who works with families to build healthy relationships with technology. Together, Abebe and Tate explore the opportunities and challenges AI brings to childhood, offering practical tips for protecting kids’ privacy, sparking creativity, and keeping human connection at the heart of parenting in the digital age.

To learn more about Precious Tate and her work, you can follow her on Instagram

Want to keep digging into the real-life impact policy decisions have on children? Here’s some of what First Focus on Children has published recently:

To join the conversation, follow First Focus on Children on Instagram, LinkedIn, and Twitter

Send us comments on thoughts via email: SpeakingOfKids@firstfocus.org

Find us on Twitter/X: @SpeakingOfKids and @First_Focus

Want to be a voice for kids? Become an Ambassador for Children here. To support our work and this podcast, please consider donating to First Focus on Children here.

Transcript

Selley Abebe  0:00  

Robots are coming. Are you ready to parent your kids through this brave new world of AI? What does AI mean, artificial intelligence? And do you believe the future there will be robot friends?

Precious Tate 0:16  

I don’t think so. I mean, if people are lonely enough, I think people could create that, but I don’t think it would really be useful, so I would hope not.

Selley Abebe  0:30  

Hey, ambassadors, welcome back to Speaking of Kids. I’m Selley. Okay, so artificial intelligence is here, and it’s going to impact everything your kids will know, see, do and learn. So let’s be honest. How do you personally feel about that? Is it excitement? Is it a little bit of panic? We’re talking about what AI really means for kids with someone who brings both technical brilliance and Big Mama Bear energy to the table. Precious is a tech expert, a founder, a foster care alum, and a mother of two, who’s really working on what it means to raise children in a world shaped by algorithms, apps and automations and the now ever present artificial intelligence we cover so much in this episode, from AI in schools to raising critical thinkers to why some kids are turning to chat bots for therapy. It’s a wide ranging and deeply human conversation about power, privacy, opportunity and the real work of parenting in the digital age. This is a great episode filled with lots of really helpful information. So share this with a parent. You know, they’ll thank you for it. So my two year old has developed a very special relationship with Alexa. I don’t know if you can relate to that. He gives her full commands, like Alexa play wheels on the bus, or Alexa play ocean sounds. And the funny thing is, when she doesn’t understand him, he gets a little attitude and assumes that Alexa must be sleepy and just straight up unplugs her, kind of like he’s the boss and Alexa needs to take a break. And then there’s the Roomba, which he has apparently named Roomba Papa. I have no idea why, but it really stuck. And so, you know, when we talk about AI, this is more than book reports and more than things that bigger kids need to know. This is what he’s growing up with. This is what kids nowadays are growing up with in their everyday lives, starting at two. So let’s kick start this conversation with precious, precious. Thank you so much for being here today. You know I wanted to start because you have such a unique perspective on the issue of AI, artificial intelligence. You know you work at Amazon. You recently founded Gems AI, what inspired your journey into tech and into AI?

Precious Tate  3:06  

Oh, wow. So, actually, my journey into tech started really young. I was a foster kid, and right around, I think I want to say I was in ninth or tenth grade, there was a shift to starting to prepare, you know, certain age of foster kids for when they age out of the system. And so I was put into this kind of life skills program. And in addition to that, there was a an opportunity to learn some tech skills through some company that had a contract with the county, and I finished the program, and I received a used desktop computer, a copy of Microsoft Office Suite. From there, I just really got interested in learning technology. You know, being a foster kid, I don’t really think I was one of those kids that was like, I want to be a doctor when I would grow up. We had this. I was a kid that just wanted to get out of my situation. So my whole goal in life was, how do I prepare myself so that I can take care of myself? I had been working since I was 14. I got accepted into college. Actually got a couple of scholarships, and I did not go, simply because I didn’t have the safety net of family back home and I didn’t I couldn’t, in my young mind, figure out what my plan would be when I came back, where would I live while I got my career started. So the logical thing for me at that point was, whenever you leave high school, you’re going to have to go to college and work at the same time to maintain yourself. So that computer skills program presented a world for me that I had not explored before. I was I was always a chatterbox. I was a book. So people said, Oh, you’d be a great attorney. You might be a great teacher. And I would have loved to be an attorney, but it just didn’t, you know, go according to plan, but technology allowed me to picture myself working and going to school at the same time, paying bills, taking care of myself, and that’s what I did. So once I got that computer, I started learning HTML and how to build websites, and then later, I did enroll at UDC and started traditional computer science, programming in COBOL and Visual Basic and all the old school languages. Later, I learned JavaScript, Java CSS, and by 21 I was working my first tech job,

Selley Abebe  5:41  

Wow, wow. That is crazy. So that’s how I got started. That is amazing, yeah, it’s amazing what opportunities that are just presented to you, and where your mind goes with that, you know. And fast forward, you’ve recently created Gems AI.

Precious Tate  6:01  

Yeah, I, you know, being a foster kid and the work that you know your organization does anything related to children, advocating for children, on behalf of children, is is and has always been in my heart. So actually, later, late last year, I was invited into a mentoring program for middle school aged girls. And in the middle of developing this program, I ended up having to be a speaker at one of the events, which was not the plan, and it was on Gen AI, which obviously it’s in my work, but me delivering that talk to teenage girls really made me start to think, wow, we have this technology that I know for a fact is moving at the speed of lightning. Kids are using it, everyone’s using it. But people really understand it. Do people really know how we got here. And more importantly, how do we position children and other vulnerable populations, especially black people, in my from my worldview, how do we position certain populations to not just consume new technology? You know, we have, we got social media people scroll and spend all this time on it, and you have a small amount of the population that may find a way to leverage it to make money or to do some good, but that’s not typically the norm, and I thought that understanding the power of artificial intelligence, we will be remiss to not step back and have a conversation about Okay, guys, let’s not Just play with chat GPT and have a good old time with it. It’s nothing wrong with that. But there’s real potential here. There’s real potential for you to be a builder in this space. There’s potential for a little black girl, a little brown girl, a brown boy, to be the next inventor of the next chat GPT, you know, so I started really getting excited about the possibility of positioning us to be more than just consumers of this technology. And so gems AI was my way of having a conversation where we first all got on the same page in terms of understanding the basics about AI, building some real core literacy around it, understanding the data and privacy implications of using some of these systems, and then from there, starting to see ourselves becoming users and adopters of the technology to leverage building either things that we want to build, or building more of the same type of technology that we see that exists.

Selley Abebe  8:46  

And I think you just you answered it in terms of, you know, what are the gaps that gems AI is trying to fill? Because it does feel like things are moving so quickly, and I don’t even think we’ve started to see to even scratch the surface of what the full potential is within AI.

Precious Tate  9:05  

I totally agree. I mean, I think about people like my late mother in law, who you know she passed, but in the years to come, she would have seen herself probably engaging less with human doctors and more with AI. How would she have been able to navigate that? Would she understand it? Would she be able to still advocate for herself in the same way? So the potential is great, there’s potential for great and there’s a potential for harm. And I think we need to understand both sides of the spectrum, not to necessarily take a stance on one end of the other, but to really just be informed. I think when you say it’s moving fast, one of the things that we should understand is it’s not new, correct. We have been using AI for it probably the last decade or more. It’s just that the generative aspect of it that was introduced to the. General population with chat GPT specifically, that’s the new that’s the thing that’s been the most groundbreaking in terms of advancement in AI in most recent years. But what’s to come robotics, physical AI is something that’s very near on the horizon. Some of us already have physical AI in our homes. We have Alexis. We have some of these other things that do things on our behalf. Those are all types of AI, and I think we should understand that, you know, we are comfortable with it, but there’s some stuff that’s coming that we want to make sure that we’re informed and aware of what it is and what it can

Selley Abebe  10:41  

do, I mean. And to that point, you hear the potential of AI disrupting so many different industries. But when you think specifically for kids, sometimes, oftentimes, what comes to mind for a lot of our listeners and a lot of people in general, is education. And so when you think of AI shaping even like K through 12 education or higher education. How do you think things are going to shift even in the next five to 10 years?

Precious Tate  11:07  

Well, right now, about 60% of schools are using some form of artificial intelligence in the classrooms, right? So I see that increasing. I see the potential for tailored and customized learning approaches being more available because of AI. What I like to think about AI when it comes to education is this, it is a tool. It’s like fire, right? It could cook our food, or it can burn down the house, and by the cook the food. Part of it is all of the ways that it could even the playing field. Right there are certain students that need specialized attention in certain areas. They need focused study models or ways of learning that AI can come and assist that would even the playing field for them. I recently saw a study, I want to say, was MIT, and they were saying how they put a group of people in a room and asked them to write a story, and certain people that use chatgpt All the time really, really struggled to write that story, because I think the point of this was trying to prove that overuse of technology, or specifically AI, is starting to reduce people’s critical thinking capacity. Correct? And while I see that, I’m always what’s under the hood, right? Because what’s not being said is, what was that person’s capacity to write the story before they got chatgpt? Were there someone like my daughter who is a gifted storyteller and can write a story lickety split? Or was it were they already a person that struggled to do something like that, and now chat GPT is letting them take an idea from their brain and help them form a story? And if that’s the case, I would want to look a little deeper into that and seeing what good we can extract from that part of it, right? That is an opportunity where certain people with different learning abilities can use this technology to to learn and understand in ways that their peers don’t need. I love that aspect of having AI in schools in education, there is the aspect also of children increasingly depending on AI to write papers to essentially, and I’m putting this in air quotes, cheat. And again, when I look at that, I say, Well, what’s cheating? Right? Because if people and children were already using Google and not having to go to the library like we did, they already have a tool at their disposal that some people, I can imagine at that time felt was cheating, right? So what I see happening in the next five to 10 years Sally is more and more and more the technology that we have is easily accessible. People are going to use it. What I do think that we will need to do better of is finding different ways of testing knowledge, right? And I think that’s something that starts at home. So if I know my child has a paper due, my kids will know they can use artificial intelligence. I’m pro AI with guardrails in place. They will use it. But I will want to know, can you tell me about that story you wrote? Can I ask you questions and you can stand up there and articulate or respond to me without using assistance. That’s how I know that you actually understand what you delivered, versus you were just able to use AI to spit out some things and hand it in. I think some of it will look a lot like going back to the basics, will look a lot like handwritten book reports where it’s appropriate or. Debates in the classrooms or speeches, more and more of those things, and finding new ways to make sure that our children are actually absorbing the content and the education that we want them to have, without villainizing the technology, but without overly empowering them to use it to the point where they don’t think for themselves.

Selley Abebe  15:20  

You know that is such interesting perspective. I know the study that you’re talking about, and as someone that doesn’t come from that world, we’re both moms of young children, and so immediately My mind went to, wow, there’s really a gap in critical thinking, some deductive reasoning in terms of the analysis. And I haven’t done a deep dive, but it’s fascinating to hear your perspective, because I can, I can see that in my mom mind. You know, we’re already thinking of again, how is all of this gonna could potentially benefit our children, right? Like, how can we do this in a responsible way that does make sure that they’re getting a balanced education, but just learning, right? So I think it’s interesting the insights that you made around public education, because that’s, again, where my mind goes through even as a parent, right? It’s like, can you still articulate this without the assistance, correct? Can you do fundamental math while obviously being able to have access to a calculator. And it’s

Precious Tate  16:23  

the same thing. Think about it. Yeah, when the calculator was invented, that that created opportunity that wasn’t there opportunity for, again, air quotes, cheating. I love the critical thinking that you brought up, because, again, I’m a little bit of a antagonist. So the first thing I ask myself is, well, what’s the definition of good when it comes to critical thinking? Who decided what I see really happening here and why we are having these, what I consider polarizing debates around AI? Is it because AI is actually exposing our human weaknesses in many ways, and what it’s exposing is that the things that I still believe are fundamentally and uniquely human, like critical thinking, like empathy, like resilience, creativity. How well were we doing at those things before AI came along as a society, we’re not doing great with empathy. We’re not, you know, critical thinking is always and will be on a spectrum. I watched my young son the other day. There was a fly in the house, and my my husband was trying to teach him how to get the fly to slow down so that we he could essentially kill it, right? And he was saying, flies get quicker with more heat in the air. So when it’s hot they it energizes them. They get faster, but when it’s cold air, it slows them down. So I watched my six year old turn to the thermostat and say, Oh, well, let’s turn that down. That’s critical thinking, because what he’s doing is taking pieces of knowledge that he’s picked up over time, with this new information that he just learned about flies and their behavior depending on the climate, and he made a choice, he made a decision, and he also was able to take an action that is really the fundamentals of critical thinking. We’re not going to lose that overnight, right? But I also think that we have gaps in critical thinking. Because what does that even mean? First of all, and who defined it, and is there bias in that definition? Right? Because I described to you what my childhood was like earlier in this conversation, my resilience as a human being is going to be far greater than the resilience my children will have to have. And guess what? I’m happy about that, because there’s certain experiences that develop critical thinking, empathy and resilience and even creativity at another level, just because you have to live life differently. So I’m always going to question all of these things, because I’m like, This sounds like a bit of a convenient argument to say, let’s not, you know, engage with this technology. And what frightens me about the idea of certain people reading a study like that and saying, oh, hands off, no, AI, for me, is the technology isn’t going

Selley Abebe 19:19  

anywhere. Correct? Yeah, whether we like it or not, it’s coming, yeah, coming

Precious Tate  19:23  

fast and furious, if we you know, I love to say, follow the money. There’s been so many investments and before you see Claude by anthropic or chat GPT by open AI, there’s been hundreds of millions of dollars thrown into data centers and train data training and, you know, GPUs and all of this stuff that goes into even creating this technology that the course has left the barn. What we need to do, in my opinion, is to find a balance. There will always be reading in my. Household, we will always have books open reading them. We’re always going to be talking when I see something. We question everything, and I think that is where we fill the gap, especially on critical thinking.

Selley Abebe  20:16  

Let’s take a quick breather, and when we come back, precious pulls back the curtains on how all of these tools actually work and why it matters so much for parents to understand what our kids are really engaging with. We’re back, and we’re with precious a tech founder and mom who’s helping us navigate this new frontier of parenting in the age of AI. Before we go any further, I want to remind you to share this episode. I know I’m sharing it with my entire mom squad. Before the break, we were talking about hope and possibility. Now we’re really diving into what these tools actually do and how to stay aware of the risks, especially for our kids on the flip end for parents listening that are hearing everything that you’ve outlined so far around AI and they want to raise tech savvy kids. What are things that they can do now to really make sure that their children aren’t essentially consumers and users have a better understanding of how they can use it, how they can apply this to enhance their curiosity or their skills or their interests?

Precious Tate  21:23  

Sure, that’s a great question. Not to oversimplify, but one of the fundamental ways parents can educate themselves and then engage your children is to use them themselves, know and understand the systems that your children are going to be exposed to. That way you can also know the risk. I mean, we, you know, I said I’m pro AI, but I’m also a mother, so I understand that there are risks, and I am very, very open to discussing those, because I think that’s just as critical. And I know we’ll talk a little bit about that, but using it yourself, opening a chat bot and just typing in something, you will be surprised at what you will find. And the reason why I believe parents really be using these systems is because you get to stress test the systems for yourself a lot of times when it comes to, you know, discussions around policy, we treat children like little adults. And I think that that’s a fallacy. We really have to look at kids as kids and understand that their experience will be different. This is why I say that I don’t know all the guardrails that are in or have been built in to something like chat GPT, because there are certain things that I will never prompt check GPT about, right? But that does not mean that my child won’t one day, right? So I want to know that, wow, chat, GPT is capable of giving this type of information. That means anything that my child could think about one day they could go into the system, type it in and start consuming information about that without my knowledge. And it’s very simple. And the reason why I bring that up is this, you know, another funny, not funny, but interesting, study that I read recently, I think it was by Harvard, showed that between 2024 and early 2025 the number one use case for AI specifically chat GPT is therapeutic conversations, right? Okay, so some people sit hear that and they’re immediately like, oh, that’s pathetic. Oh, my God, people need to get a life. But again, I’m always going to ask the why? Because, why are people using a system like this in this way, right? Remember, we talked about empathy. What have they not been able to find in human connection that they’re now finding in a system? Right? And it could be something as simple as capacity. Chat, TPT has much higher capacity than humans, than even your therapist. You get them for an hour, you pay a lot of money. You’re not going to unpack the whole house during that time, but you can unpack for hours if you want. And how do I know? I’ve used it, I’ve done it, you know? But what it made me realize is my child could one day do it right, and they won’t have the wherewithal to understand that they’re talking to a system. Unfortunately, there has been an instance of a child using AI asking certain questions that led that child to commit suicide. And before we have more of that. I think there’s some policy conversations that need to happen. But really what it is is, at the time I read about that story, I had not engaged with even, you know, some of the system the way I do now, I didn’t realize it. I just want to set the context by saying because I don’t want to ramble on this topic, but I do want to set the context. Chad. GPT has been trained on all the publicly available data on the internet, I think, up until the last data cut off that I read was September 2024 let that sit for a minute. So when we talk about therapeutic use, you know what that tells me? Every paper, every article, every nuance conversation around mental health, personality types, mental struggles, relationship struggles, that has ever been published in the public sphere, chat, gpts data has been trained on that. So what that does is it gives it an incredibly powerful capacity to answer prompts the way a human would. It can take your mental rambling and literally restructure it and feed it back to you and said, Yep, this is exactly what you meant. There are not therapists that can do that. And so again, with that context, it makes it incredibly powerful, but it also gives us an incredibly need to pause and understand what it can do and what those implications could be

Selley Abebe  26:19  

for our children. Yeah, no. I mean, these are all key points in my mind. Goes in so many different directions. I’m not trying to scare you, no, like, I remember that that article. Think he was a young boy, 1415, and you’re right. I think it was like, late 2023 or maybe it was even 2024 but I remember I was like, Wow. You know, I engage with chat GPT a little, but since then, it’s now become part of my daily use, right for professional reasons. You know, person, whatever it is, it’s literally like a live consultant on anything under the sun, which, again, is very powerful. And so I think, kind of coming full circle. It is important for parents, grandparents to understand what this is, because you know, whether you read studies that may give people pause or anxiety or reservations, the technology’s coming and our kids are going to be they will consumers, no matter

Precious Tate  27:16  

what they will. And again, I love the scary stuff. I think one of the things that I enjoy when I talk about AI is, let’s talk about all of it. Even though I may work in tech and I am an advocate, I love that Amazon has it’s not just for kids, but it’s for anybody. It’s called Party Rock, and it’s an open platform that, you know, people kids, can go and tinker and build apps. You don’t have to log in with any specific username and password, so it protects kids in that way. I really love advocating and sharing that those resources with parents. I love sharing the resources of you know, whether it’s chat, GPT or Claude or any pre trained model for specific things, when you talk about education, most of the AI in the in the classrooms are education specifically pre trained models. So with the way AI works, you have your large language models like a chat GPT, right? Those are the ones that are like the big Kahunas. But then you have a lot of individual, small models that are pre trained or trained to do a specific thing. So they it might be a tutoring type of app, or it might be something for reading their specific models for law and, you know, pharmaceuticals and biotech, all of that, all of that is true, but I think most people right now, one in three teenagers, use chat GPT for homework. So that is probably the one that most people are using, because it’s the thing that came on the scene and just disrupted everything, right? But there are so many out there. And actually, that story about the young man that committed suicide, it wasn’t one of the larger models. It was a small model that he was using for something that he ended up chatting, you know, in a hurtful way, in a harmful way, and it led to his suicide. So when I say parents need to use this, what I mean is we have to build awareness about what’s out there and what their capabilities are.

Selley Abebe  29:16  

And, you know, to that point, precious, you mentioned this earlier, and I think this is an important point. Is AI has been around for a while, right? Like we’re just now the general public consumers are now being aware of what its potential is, and oftentimes, we’ve seen technology moves way faster than policy, yeah, you know, and when we’re thinking about this, and then oftentimes, even in policy conversations, in budget decisions, in significant policy legislation, kids are typically always an afterthought, you know, but in this situation, when so much is going to impact the future, what should policymakers be doing right now to really prepare kids in this next. Generation for really an AI driven future.

Precious Tate  30:03  

Unfortunately, the United States is woefully behind on this. European Commission has already created standards around AI use in 2024 they created specific standards around AI use and actually banned certain models that didn’t fit in. And I’m I would assume they’re actively evaluating models to make sure that they are falling under the regulation we have the, I think it’s called the Children’s Online Privacy Protection Act COPPA, that was created more than 25 years ago. That regulation online Use and Privacy for children predates Tiktok, it predates Alexa, it predates a lot of voice assistance. It definitely predates chat GPT. So we need to update, if nothing else, we need to go back and look at what is in the regulation that needs an update based on what we have learned, and what have we learned if we take a step back, AI, is an inflection point, but social media has really been the thing that is getting us to this point where we’re so concerned. Why? Because we let the flood gets open on social media, and then later we’re like, oh, wow, this is affecting kids mental health. Shocker, we can’t afford to do that this time, but we’re doing it because there is no conversation that I’m aware of. Maybe in your work, you’re aware of some conversation that’s happening on the Hill around regulations for AI. And I don’t know if I can say this on here, but I’m gonna say it we’re not exactly under a regulation friendly administration. So I’m afraid that by the time we do get some different people in the room that are like, Whoa, it might be too late. So right now, short of national standards, national laws, it’s us, parents, it’s us. There is statistics out there that the average 12 year old already has millions of personal data are out on the internet. How did that happen? Every app that your child goes to, there’s a privacy declaration that they have to hit, I agree, and it collects data about them, right? When it comes to data, listen, I’m all in. I consider myself screwed. It’s I’m not going backwards. It knows everything about me. Hey, it is what it is. But I’m very strict when it comes to my children. I mentioned Party Rock because it allows you to use it without putting in a username and password. I like that. But anything that my children engage with, it’s my email address, it’s my name, it’s my you know, because at this point, there’s no one else protecting them, we have to also understand that when it comes to the training data, anytime you’re engaging with chatgpt, some of what we say, not specifically you and I, but of an aggregate of the data that we put in is going back into training that data, so we have to be very careful on that. Again, back to basics, whatever. We don’t allow our children to share about themselves with a stranger. They shouldn’t share about themselves online, whether it’s an app or a website, whatever that is, where we are, what should be done, though, is really taking a step back on guardrails. I believe that a child or any person, to be honest, should not be able to go and chat, GPT and prompt any and everything they think about that, to me, could, could be a national security risk. You know, I don’t know if there’s monitoring for certain searches in AI apps like there are for online searches, right? Think about it. I meant again, all the data is there, and when you when you click it, the difference between an AI model and Google, you don’t have to click the link to read further. It brings the data to you, right? So policy really has to be around what people can put in the systems. There should be some guardrails around that. In my professional work, when we build solutions for customers, they predetermine we don’t want someone coming in and putting in, you know, sexually inappropriate content. We don’t want them to type in this type of thing. Those are called guardrails. And I know there will be people that will fight me on this, but at the end of the day, there should be age limits on children being able to access some of these systems, and if a parent wants to override that age limit, that would be on them. But just like with social media, there are age limits that people don’t always honor them, but there has to be at least some material. Attempt to set some some guardrails around these systems. Age limits are a big thing to me. Also what type of data you can put in without it being flagged for inappropriateness or harm. That would be something that I would love to see in policy. And also not requiring kids to put in usernames and passwords and email addresses when I’ve identified myself as a minor or under a certain threshold, like I’m at least 13, but I’m below 21 I should not have to provide certain data that the 21 and over crowd would would be required, right? Those are the types of things that I would like to see in policy when it comes to kids. I think oftentimes, like I said earlier, we treat kids like many adults, and they’re not the policy can be and should be, very, very granular and layered based on the age and the type of data that we’re looking to the systems want to collect and provide.

Selley Abebe  36:00  

Yeah, I think that’s a really important piece for kids and also for adults too. A long time ago, I used to work in a technology company that worked with a lot of data, and the biggest takeaway was, oftentimes, this is self reported data, or publicly available data that you don’t think you’re just giving data away willy nilly, but the data that’s already available on you and on our kids is is already pretty comprehensive,

Precious Tate  36:27  

and you have to understand and even in education, they’re collecting biometric data on kids we and that stuff may be happening in the classroom that doesn’t come home on a permission slip before it happens the Safety Net, there is most of those systems in the education field or in classrooms. Are specific models, but data, it’s out. There is so much of it. And while we parents can teach our kids how to be responsible online, you know, we don’t share our addresses, we don’t share our full names, we don’t share our ages, et cetera. Sometimes they will try to do things without your knowledge, and so that’s where I would like to see policy that now has my back as a parent, to reinforce what it is I’m trying to do.

Selley Abebe  37:14  

Precious, it has been amazing talking to you. This was so informative, but also it’s layered. And we asked this question of all our guests, and I feel like you must have a song, or you must have an album. Sometimes, when you think about this stuff and the implications, like, is there something that you just turn to? Because even though I feel like I need a WUSA break or a coffee break to process this because, you know, it can be a lot.

Precious Tate  37:43  

Yeah. So there is a group that hails from Virginia Beach, Virginia. They are blood brothers, lyrical geniuses, national treasures, and they have a new album dropping next week. They go by the name of the Clipse. They are my all time favorite rappers of all time in their duo. And Pusha T actually had his own solo effort a couple of years ago, and that’s also one of my favorite albums. If I had to choose a song from the Clipse body of work. Wow, that’s tough, but I gotta say, because of the nature of the work that I have to do as a professional, as a business woman, as a mama, grinding has got to be the jam. And, you know, it’s funny, because when I thought about it, I was like, of course, everybody who knows me knows I love the Clipse, but the Clipse, actually, if you follow their journey, they are a representation of hope, right? They come from, you know, not the worst, but not the best. And they both, you know, taken paths that have allowed them to evolve into the human that we see now, family men, businessmen, you know, doing really well in fashion, and even one of the brothers has been in ministry for years. So the word that I hear in my heart, when I hear or think about the clips, is actually hope, you know, and even as I have delivered some food for thought today that maybe a little, you know, polarizing or anxiety inducing, we have a lot of hope, I think, for children, for adults, for people who are trying to get ahead. AI definitely has a lot of potential to support you, to help you, to help you, bring create frameworks for new businesses, get your ideas and your creativity out. I think it’s a great tool for fostering creativity and idea generation and kids. But I also want to make sure that as I continue to talk about it, we do it all with an underlying of caution. Freedom of reserve and allowing history and the past to inform us to do better going best

Selley Abebe  40:08  

answer, hands down. Thank you so much, Precious. Here’s what I’m holding on to after this conversation with Precious, technology is not neutral. It reflects the people who build it, and it impacts the people that we really love the most. But it doesn’t mean we can afford to throw our hands up. It means we need to lean in. We need to get curious, and we need to stay involved, because at the end of the day, AI is not going anywhere. It’s here to stay. You don’t have to be a coder to raise a tech-savvy kid. You just have to keep asking questions, stay in the conversation, and model the kind of digital awareness you want them to carry into the future. I said it before, and I’m going to say it again. If this episode got you thinking, please share it. You’ll really be helping someone you care about make smart choices when it comes to AI. Speaking of Kids, is a podcast by First Focus on Children. It’s produced by Windhaven Productions and Bluejay Atlantic. Elizabeth Windom is the supervising producer. Julia Windom is the editor and Jay Woodward is the Senior Producer. For more about this episode, visit firstfocus.org.