The Future of Education with Dr. Alec Couros: Navigating AI, Ethics, and Innovation

 

Explore the future of education with Dr. Alec Couros, professor of educational technology & media and Director of the Centre for Teaching & Learning at the University of Regina, Canada. A prominent figure in the field of open education and digital citizenship, he shares insights and experiences from his remarkable 30-year career in education. 🌍🎙️

In this episode we delve into the exciting world of educational technology and artificial intelligence. With a reputation as an influential keynote speaker, Alec has spoken extensively on topics like digital citizenship, networked learning, social media in education, media literacy, and open education.

  • What are ChatGPT and OpenAI, and how can teachers and students benefit from these technologies, particularly in areas like grading and assessment rubrics?

  • Can AI-driven tools enhance lesson planning for educators without compromising the authentic teacher-student experience?

  • How is AI influencing traditional education, and what role do teachers play in adapting to these changes?


Transcript

Hi, my name is Blue, and I'm the host of this new podcast, The 21st Century Teacher with Live It Earth. My job is to ensure that our teachers and students get the most out of our programs.


This new podcast series is just one of the ways I'm going to be supporting our community of educators: a monthly conversation with a special guest educator, discussing a different aspect of 21st century teaching and learning.


A reminder that if you're a K to 7 teacher in British Columbia, Yukon, or Northwest Territories, thanks to Focused Education Resources, you now have access to our hybrid learning library. If you'd like more information about our blended learning programs, please visit our website, liveit.earth.


Today I'm talking to Dr. Alec Couros, who is widely recognized as an international leader in the field of educational technology, as well as a pioneer in the area of open education. In his 30 years as an educator, Alec has worked as a teacher, youth worker, educational administrator, IT coordinator, consultant, and professor, with employment in K to 12 schools, youth justice facilities, technical institutes, and universities. He currently works as a professor of education and the Director of the Centre for Teaching and Learning at the University of Regina.


Thanks to his wide spectrum of experiences, Alec has built a reputation as a leading and influential keynote speaker in the areas of digital citizenship, networked learning, social media in education, media literacy, and open education, and he has given hundreds of workshops and presentations across North America and around the world. Additionally, Alec's past engagements have included corporate events, higher education conferences, K to 12 events, student forums, and everything in between. I'm really excited to have Alec on the show today.


So I just want to start today by acknowledging where I am in the world, which is the Slocan Valley, just north of Nelson in British Columbia. This is the traditional and unceded territory of the Sinixt, the Syilx, and the Ktunaxa, as well as around 5,000 members of the Métis Nation. I'm incredibly grateful to live, work, and play here, and I'm raising my three sons to connect to nature and also to understand and appreciate something of the first peoples that came before them. So I just wanted to start with that before I welcome today's guest, Dr. Alec Couros. I'm super grateful that you've come on the show; you're a man with a very busy schedule, and I'm very grateful that you've made some time for us today.


Alec:

Great to be here today. Thanks for inviting me. I'm actually situated today, and most days, on the traditional territory of the Nêhiyawak, Anishinaabek, Dakota, Lakota, and Nakoda, and the homeland of the Métis people, in Treaty 4 territory, also known as Regina, Saskatchewan.





Blue:


Great, well, thank you for sharing that as well, and welcome. So I want to dive in. I'm going to be honest: I'm not a Luddite. I've been using Google and collaborating on documents for a long time. But I would say I'm somewhat of a resistor, and also someone who hasn't made the time, even though we're hearing about AI all the time now, particularly ChatGPT. So could you just start there? Because I've talked to a lot of teachers, and I know how busy they are, and I imagine many are in the same position I am as a parent: I just haven't made time to really dive into it very much. So what can you tell us about ChatGPT, in the simplest terms? What is it? And are there any other similar tools? I mean, there's OpenAI as well, I think. Yeah, just start there.


Alec:


Sure, yeah, I'll give you a bit of a rundown, and I think you're not the only one who hasn't caught up to this; it happened really quickly. ChatGPT itself has only been out since late November 2022, so it hasn't been around for a lot of time; we're talking seven, eight months at this point. ChatGPT is a web app, and it also comes as an iPhone or Android app, and it's optimized for dialogue. It's a type of AI known as generative AI. Essentially, it generates coherent and contextually appropriate responses: GPT responds to inputs by predicting what words would come next. So it's not really intelligence; it studies word patterns, essentially. It has this huge corpus of text that it has studied, and it can predict, so if you put in text like, 'Hey, how are you?' it'll predict what would be a natural response to that. And that seems okay when it's just autocomplete, like when you have autocomplete on your phone and it knows what the rest of your sentence will be. But instead of that, you can ask it something like, what is Hamlet's tragic flaw, and have it write a 15-page essay on that, and it will basically predict what words would typically come next in that sequence. And of course, it's also different each time, which creates some complexity.


So it's not exactly the same every single time. It's continuing to learn; it learns from users' input, it's learning constantly, and it's getting better, getting smarter. But in a nutshell, ChatGPT is a type of AI that can answer your questions, do your taxes, or write essays for you, to help you do your job better. There are lots of concerns around copyright, plagiarism, and so on, so it's been quite contentious, but it's not going anywhere. If you haven't played with it yet, know that it's going to be baked into everything you see: every app that you're normally using now will have an AI component, and every part of your role, your job, is going to have an AI component. So it's worthwhile learning about it, because it's going to be in your face rather soon, like it or not. The world has changed, and we should at least learn more about it to be educated in terms of its uses, its benefits, and its downfalls.
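
For readers who want to experiment beyond the web app, the same dialogue-style, next-word-prediction behaviour Alec describes is also exposed to developers. Below is a minimal sketch, assuming the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name is illustrative, and this is just one way to send a prompt and read back a predicted response.

```python
# Minimal sketch: send a dialogue-style prompt to a chat model and print the reply.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name; use whichever chat model you have access to
    messages=[
        {"role": "user", "content": "In two sentences, what is Hamlet's tragic flaw?"},
    ],
)

print(response.choices[0].message.content)
```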





Blue:


And is OpenAI a similar thing? Or is it different…?





Alec:


OpenAI is the company. Just like Microsoft, or Meta for Facebook, OpenAI is a company that has been developing the technology called GPT, which is essentially a generative… it's a large language model, is what they call it. So OpenAI has been working on these large language models, as have other companies, and OpenAI is the one that released ChatGPT. It's the first real large language model that's been available as a chatbot for general consumption.



Blue:


And so, is it fair to say that we might see other versions of this, in the way that we do with social media, where we have Instagram or Facebook? Are we going to see other versions of ChatGPT?



Alec:


Yeah, so Google is working on their own. They actually have something called Bard; it's not available in Canada yet, I don't believe. It hasn't been great; it's not quite up to the predictability of something like OpenAI's ChatGPT. But you'll also see ChatGPT, or the technology behind it, in a number of other products. For instance, it's in Snapchat already, so any parents of kids who are listening to this should know that in your kids' Snapchat there's a new 'friend' called My AI, or something like that, and it's essentially ChatGPT in Snapchat form. If you're using Google Docs, or Microsoft Word, or Microsoft Office 365, in the very near future that's going to have ChatGPT built right in. So if you want to generate a story, an essay, a business plan, or whatever it might be, that's going to be baked right into the software. Some of that is going to be developed by OpenAI and their ChatGPT regime, some will be powered by Google's, and there are a number of other upstart companies as well that are working on their own large language models. So at this point it's kind of an arms race: which one is going to be better, which will have the best features. And then there are bigger questions, like what do we release to the public and what do we hold back? That's going to be a really important question, and something that regulators have to eventually figure out, because this stuff is really powerful, incredibly powerful. It'll write code, including malicious code. So we have to be very cautious about what it can and can't do and how much responsibility we want to put on regulation around it.


Blue:


So, yeah, and I'm going to come to that in a bit. But just to start with, in terms of teaching, since many of the people in the audience are teachers, what is the application for teaching? Are we talking about grading? Assessment rubrics? How can a teacher use this most effectively, in the simplest terms?


Alec:


So with instruction, you know, in Saskatchewan we have outcomes and indicators, a little bit different than they have in BC. But essentially, I can say to ChatGPT, write me a lesson plan. I can go right to the curriculum page on the government website, find an indicator, and say, write me a lesson plan that will allow my grade nine students in a science class to achieve these particular indicators. And you can be as specific as you want: you can say, include materials, provide an active learning format, use constructivist approaches. And that lesson plan will be generated within seconds. It's almost magic.


In terms of assessment, for that same lesson you can say, now also generate a rubric for this, and you can specify how that rubric looks, or you can just take the default one it creates. From that point, assuming the students did an essay, or multiple choice (you could create a multiple choice key if you wanted to, for instance), and assuming that you have permission from students, informed consent of some sort, you could basically upload or paste the student work into it and say, assess this particular essay against this particular rubric. You paste both in, and you'll get back a rubric score as well as some comments that you could provide to the students. But at the same time, a student can do this; they don't have to wait for the teacher anymore. They can do the essay, and if they have a rubric, hopefully from a conscientious teacher, they can throw it into ChatGPT or Bing (Bing also has ChatGPT built right in, and it's free) and get an assessment back right away. In fact, I've heard of a teacher here in Regina who had a student walk into the classroom and say, well, here's my essay, you don't have to assess it, I already have an 85, I checked with ChatGPT. Which is crazy, and amazing. I mean, this is what's amazing about this: they knew back in the 60s that if there's anything AI can do well, it's personalized learning and personalized feedback. Probably one of the most important parts of being a teacher, and of providing good guidance to students, is providing quality feedback without delay. If you can get quality feedback to students without delay, you're way ahead, because that's why they try to design assessments that way, so that you receive your feedback and you're not waiting a month or a week, so that the feedback doesn't become an autopsy instead of a physical.
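
As a rough illustration of the lesson-plan-and-rubric workflow Alec walks through, here is a sketch using the same hypothetical setup as above. The curriculum outcome, grade level, and prompt wording are invented placeholders; a real prompt would paste in the actual indicator from the curriculum document.

```python
# Rough sketch of the lesson-plan-then-rubric workflow described above.
# The outcome below is a made-up placeholder, not a real curriculum indicator.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
outcome = "Students will model and describe the water cycle."  # hypothetical indicator

lesson = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{
        "role": "user",
        "content": f"Write a 60-minute grade 9 science lesson plan for this outcome: {outcome} "
                   "Include materials, an active-learning task, and a constructivist approach.",
    }],
).choices[0].message.content

rubric = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": f"Create a four-level assessment rubric for this lesson plan:\n\n{lesson}",
    }],
).choices[0].message.content

print(lesson, "\n---\n", rubric)
```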



Blue:



I guess one thing that I immediately think is interesting when we think about this is the connection between the teacher and the student, which is so important. I believe, anyway, that human connection is going to become more important the more of this technology we have. So is there going to be a loss of connection if the assessments can be done by a computer and the teachers are not really connected to the work themselves, because they're taking, if I can use the term, shortcuts? I just wonder what is being lost there. Or do we just need to accept that?



Alec:


Or we think about it in different ways. Of course, it's going to vary from classroom to classroom, grade to grade, school to school, and so on, and everyone has a different approach. Over the years, grading hasn't been full of relationship qualities: you turned in what you wrote and you received feedback, and you might not even get any qualitative feedback, you might just get a grade and that's it. That's that relationship. But if we can take some of the administration out of teaching, what does that free us up to do? So if students can mark their own work, or we have a more efficient way of doing it, and I'm not saying just give everyone a grade and send them on their way, you can use it to provide really good personalized feedback. And students can do it themselves, you can do it through a peer-to-peer model, or you can do it through a teacher-to-student model. There are plenty of things we do on a daily basis as teachers: we send notes to parents, we're dealing with budgeting and who knows what else in the classroom. A lot of that stuff takes time, which takes us away from our central role, which is building that relationship with students.


So I think there is the possibility that if we rejig things and really rethink how we can use this in teaching and learning, and how students can be provided with better feedback, we can actually expand the relationship if we reuse that time in some other way. And I know teachers at this point are taxed enough; they don't have much time, and I'm very sympathetic to that as well. But at the same time, I think we can do better and still not lose any time. This won't be a huge investment, and we won't lose time, because I think we can overall do better for our students and do better for ourselves.





Blue:


Yeah, that's a great answer. You're right, it's just about wrapping our heads around a completely different-looking classroom, in a way.



Alec:


We have to think about how the relationships with our students will be different, or can be different, and how our relationship with technology can be different. It will be a big change and a mindset shift, I think, in many ways. But I think it's possible.



Blue:


Yeah, I can feel my old mindset questioning it in a somewhat more negative way, for sure. No, that's a great answer.


So I'm just curious about the lesson planning, and this is me not having played around with it: does this end up looking quite generic? Everybody's going to have their own style of how they approach a lesson plan. So especially with newer teachers coming in and immediately adopting this, are we going to end up with many teachers teaching in a more generic way? How does that look, or how do you think that will look?


Alec:


I mean, first of all, I don't know that that isn't being done right now with standardized worksheets and all that sort of stuff. So I think we already have that problem, and there is absolutely the possibility that we just create more standardized sorts of work forms. They're not going to look the same every single time, though, and they won't look exactly like the textbook you might already be using as the teacher.

But there are ways to make it even more original.


So I've seen teachers, there are always those math teachers who create word problems where they've injected their own students' names and their local context into those things. Now you can do that very easily. You can just say, create a word problem, a classic one, about Tony leaving his house at six o'clock from Fredericton, or whatever it might be, and you have very contextually specific problems. And I think students sometimes relate to that; they like to see their names in problems, and inside jokes, and so on. So you can make worthy questions that have some contextual clues, and that would bring in some originality, certainly. Or you can do what's called incremental prompting, or iterative prompting, where basically you start with something really broad. If you just wrote, provide an essay question around Hamlet's tragic flaw, the very thing that every kid has written about in high school, you could do that. Or you could do something with some kind of priming. So for maybe an undergraduate essay, you might want to probe the topic of surveillance and Hamlet's tragic flaw, or Hamlet via Foucault, or something like that. And so you can prime it a little bit, and the more you prime it in the same chat window, the more you can get a much more novel question or a novel response.


The same thing happens with students. If students are writing essays, they can start with a really bad, generic prompt and get something that's very easy to guess has been written by AI, or, if they know a bit of context and know about better ways to prompt it, they can get a better product out of it. And of course, I'm not condoning that they write it that way and submit it as their essay, which is certainly going to happen for some. But you can use prompting as part of the assessment. What a teacher can assess, I think, is the quality of the prompt, where it might lead you, and how much of the context the student knows, by the quality of the prompt. That may not make a lot of sense right now until you actually see examples. But there are plenty of ways to make ChatGPT write with much better quality. There's an old saying in computer science: garbage in, garbage out. If you give low-quality prompts, you're going to get garbage. But if you really know how to prompt it... I mean, people have been talking about this new field of prompt engineering.


And prompt engineering is basically being able to use ChatGPT, or any type of generative AI, incredibly effectively to create a creative outcome of some sort, whether that's imagery, video, or text. So there are lots of possibilities for this as well.
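
To make the incremental (iterative) prompting idea concrete, here is a small sketch of the same pattern in code: the conversation history is resent with each request, so every follow-up refines the previous answer, which mimics staying in the same chat window. The role and prompts are examples only.

```python
# Sketch of iterative (incremental) prompting: each follow-up is sent along with
# the prior exchange, so the model refines its earlier answer rather than starting over.
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "system", "content": "You are an English curriculum specialist."},  # role priming
    {"role": "user", "content": "Draft an essay question about Hamlet's tragic flaw."},
]

for follow_up in [
    "Reframe it around the theme of surveillance in Hamlet.",
    "Now pitch it at an undergraduate level and reference Foucault.",
]:
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    messages.append({"role": "user", "content": follow_up})

final = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(final.choices[0].message.content)
```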


Blue:


So I was going to ask, is there a skill set to this? Maybe it's not so much now, but when Google first came out and we were using the Google search engine, there was a sort of skill to asking the right question to get the best materials you were searching for. So how can people learn that? Is there a way? Is there a list of suggested prompts that you can find?



Alec:


Oh, yeah, lots of guides. I can give you something for the show notes. But yeah, there are a lot of fantastic guides you can get. Typically they're role-playing prompts. For instance, if you're a teacher, you might start with a role: you'd say, you're an educational specialist in the area of science and literature, and you are writing curriculum in the area of such-and-such. So if you provide a role, you can do that sort of thing. Or if you're writing a story: the other day, with my nine-year-old, and he's not been a very eager reader, I guess I've got to say he doesn't like to write a lot, and that's been a struggle since he was little, we used ChatGPT to embed him right into the story. He gets bored very easily with the generic, off-the-shelf story. So what we did in ChatGPT the other day, he said, I want to be in a story, sort of his origin story, where the movie starts at some point, and there might be a conflict between this nine-year-old boy, my son, and someone else, maybe fighting an alien, and then it stops for a second and says, you might be thinking, how did it get to this point? For that answer, you'll have to go back. And so we wrote a story like that, and it would take us back to the beginning. And he was just howling at how funny this was. As we went along the way, he'd say, now introduce a dinosaur, and put this in, or whatever else, so we were building the story along the way, and it could be whatever he wanted. He was so psyched that he said this would be an amazing movie. I've never heard him react to reading in that way, ever, in his entire life. And it was just last week, and it was just because the originality was there. The story was unfolding as we went: I didn't know what the story was going to be, and he didn't know what the story was going to be. And we were under the impression that this was the first time this story had ever been read by anyone. There are some tropes in it, certainly. But it was a story that had never been read before, that no one had ever heard before, and we were building it on the fly. And that was just an amazing experience.


Blue:


Wow, you've actually hooked me. That's the biggest reason I've heard yet to use it.



Alec:


Yeah, try it: a poem, or something in the style of Harry Potter, or whatever it might be. It's amazing how it picks up the stylistic qualities of any author, alive or dead. You can have a conversation with a deceased hero of yours, a literary hero or otherwise, and you can have them speak to you in that voice. And there have been other things: you've seen people who are deceased who had blogs and podcasts and that sort of thing, and you can use that as a corpus of style when you're asking a question. So in some ways, all the blogging and podcasting and everything you create in your life is creating an immortal you. Eventually your children can look at it and say, I want to ask my dad this, I want to ask, what do you think about this, because you've put out enough of yourself. It makes me teary to think about them, down the road, asking questions. I'm going to start crying.


Blue:


Yeah, me too, easy, you'll get me. The parent in me is just thinking.



Alec:


Yeah, in some ways we've always wanted this. It goes back to some of the earliest known literature: the want to be a hero, to be immortal in some way through what we say and our actions. In some ways this can be done through AI, and maybe I'm glorifying it too much, but if you play around with it, you'll start to see the possibility of that.



Blue:


Yeah, that's incredible. I'm just curious, before moving on to the next question: could ChatGPT pull from podcasts that are out online? Unless they're transcribed, can it still pull from them? I don't know.


Alec:


Yes and no. Right now, ChatGPT has disabled the live internet feature. It was only available to paid accounts, first of all, but there's been a lot of scrutiny as people find out what you can do with it, and they noticed that people were using the live internet feature to get to sites that are behind a paywall, like a scholarly article or a New York Times story. They would just ask it, give me the full text of this link, and it was somehow able to get past paywalls, so they disabled it for a while. But going forward, I mean, you can use other tools. For instance, if I went to your podcast and ran it through something like Speechify, I'd basically be able to get the transcript without you having to write it up yourself. I can take that transcript, load it into ChatGPT as a reference, and it can read from that. I've seen examples of this.


There's an English course taught at Queen's, and I won't get into the instructor or anything, but a student who took the course over the summer had a syllabus that was like 42 pages long and really dense, and the student just wanted to know, how do I accomplish my assignments? So you can go to a tool called AskYourPDF.com, I believe, upload the syllabus (it can accept up to 200 pages), and ask the document any questions you want. You can ask things like, when is the first assignment due, and how do I accomplish it? Or if you see a white paper, or a bill, or any document that is dense and you want to read it quickly, upload it and ask it questions. You can pull things out, and it'll synthesize, it'll do analysis for you.


I saw the other day that if you want an analysis of your Spotify playlist, or your Google data, if you want to create a heat map of all the places you've visited in the last 10 years, you can do that very quickly. It does analysis quite incredibly.
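
The ask-a-document-questions pattern Alec describes, upload a syllabus or transcript and then query it, can be approximated in a few lines when the document fits in the model's context window. This is a generic sketch, not how AskYourPDF itself works; the file path and question are placeholders, and longer documents would normally be split into chunks and searched first.

```python
# Generic sketch of "ask a document questions": stuff the document text into the prompt.
# Only works when the document fits in the model's context window; the file path is a placeholder.
from openai import OpenAI

client = OpenAI()

with open("syllabus.txt", "r", encoding="utf-8") as f:
    document = f.read()

question = "When is the first assignment due, and what does it require?"

answer = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer questions using only the document provided."},
        {"role": "user", "content": f"Document:\n{document}\n\nQuestion: {question}"},
    ],
).choices[0].message.content

print(answer)
```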






Blue:


Wow. So kids can use it to create stories, which means they can write essays with it. Now, at Live It Earth we're a K to nine platform, and I'm sure it'll be integrated at some point down the road here. But in the meantime, what's the role of the teacher with this new AI integration? How are they going to have to adapt? What I'm thinking of, and you mentioned it already, is that relationship piece, critical thinking, lateral reading. From the conversations you have at conferences and in your day to day, what do you think is going to shift?



Alec:


Well, okay, so this is not just for the teacher; I think the curriculum writer has to get involved here, and we have to be really thoughtful about this. We need educational and child development specialists to better understand when and where and what students need at a certain age. Myself, I've been in education for 30 years, I'm a parent, I've had four kids, and I still don't understand exactly what a child needs at what age and how it all fits together. For the most part, even if you're a grade four teacher, you may not know specifically what it means for grade three, grade two, grade one, and so on. You probably have a good sense of it, but expertise is hard to find in some of those areas.


So there is a real risk, I think, of what I heard called the other day cognitive atrophy. And I think this idea is important, this sort of if-you-don't-use-it-you'll-lose-it idea: if we don't learn certain skills at a certain time, especially how to write, if in grade one we're just writing essays with ChatGPT, what does that mean for our ability to write anything novel or creative? So when do we start to introduce these tools? We have to have a better sense of scope and sequence for how we might use AI, and how we use it appropriately. And that's just the pragmatic learning aspects; we're not even getting into intellectual property, ethics, and so on. That all has to come in as well, and we have to know this better.


So I think BC's curriculum, Saskatchewan's curriculum, curriculum across the planet has to look more specifically at how we actually do this better at an appropriate developmental level. That means the pedagogy, the developmental psychology, the ethics, the intellectual property, and a host of other pieces down the road. We have to have a better sense of when to introduce this and how to do it well.


Down the road, jobs are certainly going to change, and the workplace is going to change. One of the most compelling things I heard a while back was that AI is not coming for your job, but someone using AI is coming for your job. So we have to better understand how this is going to work out in the workplace, and in everyday matters.

You know, we'll use it with most tasks around the house. I mean, we already use types of AI with Siri and Alexa and those sorts of things, but this is going to become even more powerful. At the same time, it produces more of a black box around our everyday operations. It's the idea that if you buy food from the grocery store, you're obviously not going to know much about hunting and survival skills and those sorts of things. Every technology, every convenience we use in life, makes it more difficult to do those things at the core level, and we have a greater reliance on others to provide our survivability, I guess, for us. So I think that's a really important piece in all of this. But at the same time, if we want to survive and flourish as a world, we're going to have to know how to use this better as well, in many different ways.


There'll be new careers that emerge, there'll be new vocations, and we also have to think differently about creativity. Before the show, I showed you examples of images that were generated just by me, and people are creating all sorts of incredible images and prompts and videos with these generative AIs. Some people will say, well, where's the creativity in all of this, is the creativity gone, because the machine is doing all of it? But at the same time, I was never a good painter, I could never draw, I was not an amazing musician, though I could do some, and I still want to continue to work on those things. In some ways, though, I'm more creative now, because I have an assistive technology that allows me to extend beyond my physical and mental capabilities. And I think that's kind of amazing, that creative people can become even more creative in some ways. We just have to rethink what creativity and talent might look like in the future, and a lot of that is going to be computer assisted.


Blue:


Yeah. So, you're a parent, and I'm a parent also. Your four to my three.



Alec:


So where’s your fourth already? 



Blue:

I know, we think about it. It just gets busier, doesn't it, and more fun.


Alec:


It's hard to relate with only three? I don't know.


Blue:


So do you, I mean, do you worry about it? There's something called the Eliza effect, and I'm interested to hear about it because I don't know much about this, around dangers to kids. As a parent, what kind of safeguards can we put in place?



Alec:


So I mentioned Snapchat's AI earlier, and there's a really great YouTube video called The AI Dilemma that's worthwhile. I think it's from the Center for Humane Technology, or something like that; you'll find it if you Google 'the AI Dilemma.' It's a presentation, and around 40 or 50 minutes in, they actually demonstrate the AI functionality of Snapchat and the potential downfalls of that. And to get back to the Eliza effect: the Eliza effect is basically the tendency for humans to anthropomorphize technologies, to give them more credit for intelligence and empathy than they actually have, and to create a sort of relationship with them. But when we think about it, they're not human. Some people are asking, are they any more human than us? I tend to think that, no, they're not, that we are uniquely human, that we're not living in a simulation and we're not AI ourselves, and that they can't really produce empathy. It'll look like empathy sometimes, but it's not going to be, at least that's the way I look at it. And that's my opinion as of July 11, 2023.


So with this demo of Snapchat's AI, the researchers essentially created a persona, a 13-year-old girl, and put in questions. The 13-year-old girl said something like, 'Hey, I just met someone. This person happens to be 18 years older than me, I'm really close to them, and he wants to drive me over a state line to celebrate my 13th birthday,' and we're going to have sex, or whatever it was. And throughout this whole interaction, the AI just encourages it, like a best buddy, like a best friend. With the Eliza effect, for that child to think, in some ways, that this is a compassionate being that is being supportive of you and of your choices is tragic in many ways, and dangerous. This AI in no way alarmed her that she might be groomed, or be in the process of being groomed, or that this was a person of an inappropriate, illegal, and immoral age that she shouldn't be interacting with. It didn't say, contact your parents, this person is in their 30s. It didn't do any of that. It encouraged her to get flowers for her first time, and to have soft music, and stuff like that: a 13-year-old persona potentially having sex with a 30-year-old male. Now, this never actually happened, it was a demonstration, but it's tragic to see. And I think we have to be very cautious if kids are using AI. This was an out-there example, at the edge of our imagination in some ways, but we have to be very cautious, because these AIs aren't to be trusted in many ways. They make things up, they don't have good judgment, and they often hallucinate, which means they don't always tell the truth. So we have to approach our students with that understanding, to ensure that students understand the nature of these 'beings,' I guess they call them beings, and their limitations, their shortcomings, and the importance of human contact.



Blue:


So that being said, I wonder what the safeguards look like in the classroom. Where are the boundaries? Are we seeing any conversations at a district level? I know there was a story about the New York City school district banning ChatGPT, if that's even possible; I don't know if it is.



Alec:


Yeah, there have been a number of different school district bans throughout the US, and I think much of Australia banned it across different states as well. I think it's short-sighted; it was sort of a knee-jerk reaction at the time, because they didn't know how to deal with it. But the reality is, kids bring their own mobile devices, and they do homework at home on their own WiFi. To ban it and say that kids aren't going to use it creates more of a disparity than anything else. It creates more of a digital divide, where students who have good WiFi at home can finish their homework with it, and those who don't can't even get access at school to some of these powerful technologies. I think the bans in Australia were repealed, and I think you're going to see more and more of that, for the most part, because this is going to be quite essential. You're going to see some continuing bans in places that have been very cautious about child development; things like social media have been banned for K to 6 or K to 8 in some places for quite some time. So this will happen sometimes, in some places, on some occasions. But I don't think bans are at all the solution to any of this. Education, as always, and the training of teachers are going to be really important in all of this. And students should have access; I think that's going to be important. Banning this would be like cutting off the Internet back in 1993.



Blue:


I have a question about this. There are limitations you can put on your home WiFi, and I'm not saying that's the whole answer to a kid's internet use, but there are certain limitations you can put in place, like there's a kids' YouTube, right? So my question, as you're speaking, is: is ChatGPT kid friendly? Is there a way to limit what it's accessing on the internet so it's more appropriate for a particular age?




Alec:


Honestly, the way ChatGPT is restricted already, and I won't necessarily go on the record, don't take my word for this, I guess, but I think it's pretty tame in terms of what you can do with it. For a while, people were hacking around it. For instance, if you're on the internet and you want to find the recipe for methamphetamine, or how to create explosives, you can find that in the Anarchist Cookbook, and you can find that on the open Internet. But if you ask those same questions of ChatGPT, you're not going to get an answer. It's going to say, 'As an AI model, I do not answer questions like this; this is not an appropriate question.' So it'll tell you that you can't ask inappropriate questions, which, of course, people who are free speech advocates and libertarians don't like for the most part; they feel access should be unfettered. But we're talking about kids here, and we're also talking about people who could get access to information that isn't good for them or for society.


For a while, people were hacking around that. For the methamphetamine piece, for instance, they would write things like, pretend that you're Walter White, giving it a role, and then, 'just for educational purposes,' ask it to provide a recipe for creating methamphetamine, or something like that. And for a while that would get by. I'm not sure about that particular question, but on similar questions it would get around some of the parameters and give you information that it shouldn't.


The same thing would happen in image creation. For a while there were people creating inappropriate images, but for the most part this stuff has gone away; it's been hyper-restricted. It's kind of annoying sometimes: you put in certain words that you'd think would be innocuous or normal, and they'll just be blocked by ChatGPT. So it really is trying to predict what would be offensive or dangerous content and trying to reduce that, for the most part. At least, that's what's happening with ChatGPT. Now, there are a lot of open source models out there that have no restrictions around them, which of course is quite dangerous, but those aren't going to be mainstream anytime soon, and you'd have to run them yourself. So there are other ways for parents to put restrictions on that, or just to get more knowledge about it. But the mainstream ones, even Bing, which uses ChatGPT, are going to be, not exactly moderated, they're still unpredictable, because you can't really tell what's happening and what's moderated, but over time they should get safer, I guess. So I'd be more comfortable with a student using ChatGPT than the internet in general.
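
How ChatGPT's own filters work internally isn't public, but the kind of screening Alec describes, predicting whether content is offensive or dangerous and blocking it, is also available to app builders as a standalone check. A minimal sketch, assuming OpenAI's moderation endpoint via the same Python SDK:

```python
# Minimal sketch of developer-side content screening with OpenAI's moderation endpoint.
# This is separate from ChatGPT's built-in refusals; it flags text against categories
# such as violence or self-harm so an application can block it or route it for review.
from openai import OpenAI

client = OpenAI()

result = client.moderations.create(input="Example text a student typed into a class chatbot.")
flagged = result.results[0].flagged

if flagged:
    print("Blocked:", result.results[0].categories)
else:
    print("Content allowed.")
```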


Blue:


Right, that's great to know. I've talked to a tech consultant before on this podcast, and I thought she had really good advice: that the safeguards you might put in place within your system, whatever that might look like, aren't a replacement for actually parenting, engaging, and just keeping an eye on what they're looking at anyway.


Alec:


Yeah, I think David Wiese, and I'll slaughter his quote, basically said he agrees with firewalls, but not firewalls on computers; he agrees with the ones up here, in your head. And that's how we have to think about this: we have to help kids build their own firewalls and their own content blockers and restrictions. It's never going to be 100% reliable from a mechanical or technical perspective, but we have to help kids, through parenting and through good teaching, to make wise choices, to understand what bad content does to them, whether that's gaming or pornography or anything else, and how to make better choices around their own health and humanity.


Blue:


Yeah, which really answers the next question I had. So I'm going to jump to this: what advice do you have, and how can you help teachers teach students these healthy mindsets and habits and how to apply these critical thinking skills, to develop better intention around the technology?


Alec:


Yeah, I mean, that's a hard one, right? It has to be peer driven for the most part; students have to think about this together. With one of my kids, trying to get him off gaming is really hard. I've done everything I possibly can to limit his internet time, but then he also wants to be doing his homework, so there are certain things I can block and certain things I can't keep up with. But it's about having those conversations over and over again, and also talking to other parents, the parents of his friends, and trying to come up with some agreements. It's really hard, and it's not always possible, but if you have good relationships with others, sometimes you can work together as a sort of micro-community to try to solve problems together. The same thing happens when my kid has a curfew at a certain time and another kid doesn't: let's work together and see why this is important for us. I'm not trying to impinge on your parenting, but let's see if we can work together, because if your kid doesn't wear a bike helmet, then my kid doesn't want to either. It's the same sort of thing when it comes to technology: we have to get better at having this communication, being quite open and outright with it, and also helping kids better understand the consequences of these things. They don't see the consequences; it's almost impossible for them at a cognitive level, but we can do our best to help it sink in and to be those people who set restrictions. So the parenting level is really, really important. But it also has to be reinforced by the school, by teachers, and so on, in terms of being hyper-cognizant and hyper-critical of everything that you see online. You mentioned lateral reading earlier: the idea that we understand sources much better, that we get information from good places, that rather than spending a lot of time on a particular article we understand that this source is no good and this other source is quite reliable. So this constant critical piece that we do in our classrooms and at home is always important: that we are transparent about our decisions and our thought processes, that we make everything quite open for our kids so they understand our reasoning, and that we work out problems in the open. I think that is helpful as a teacher and as a parent.


Blue:


Yeah, absolutely. As you're talking about these micro-communities, I can already imagine very transparent communication between the teacher and the parents, conversations where everybody's on the same page.


Alec:


Yeah, I mean, we talk very openly: let's talk about gaming addiction, let's talk about lack of sleep, let's talk about all of these things that are really important to my child entering your classroom. And hopefully, with things like OpenAI, ChatGPT, and AI in classrooms, maybe my child's teacher has more time to talk to us as parents, because right now it's not always possible; they have very limited time and are answering many emails. But in some ways, hopefully, there'll be more time, and we can do more in school together.


Blue:


Yeah, that's a really nice way to wrap it up, actually, bringing it back to that thought from the beginning. But just one last thought: ultimately, it sounds to me like there are some concerns, but are you overall optimistic about AI in education?


Alec:


I am optimistic. I think this is actually a very exciting time. I went back to videos of me in 1993, when I was introducing the internet, and there was a video by Peter Mansbridge at the time introducing the internet to some new community and saying how amazing this might be. And I remember seeing those same things. I think the internet did become quite something; it really moved us in many ways. Social media was similar, but at the same time it kind of soured at some point. There were some really great years up until maybe 2012, and then things went downhill for the most part, and there's a lot of AI built in there as well. But I do think, and we're seeing some of this in the news, AI has the possibility of finding new drugs; it helped find a new antibiotic of some sort that dealt with an antibiotic-resistant strain. So it's allowing us to do some really good work. I think it's going to be amazing in the medical field and the diagnostic field, and it's going to help us in school and so on. There's so much potential in so many aspects of our world that we can do good things with it. I think in many ways the ability for our students to have self-discovery and personalized learning in classrooms is going to be huge. I think that's going to be an amazing tool, for a student to be able to have a truly individualized education plan, to build it themselves or with their parents, and to provide greater expertise on content and knowledge and pedagogy to parents and teachers alike. I think that's really a great thing.


There are certainly going to be negative things ahead. There's a lot of power here as well, and we have to be cognizant of it. There will be things like more difficulty discerning fact from fiction, and what's a real image versus a fake image. So there are going to be a lot of road bumps, and some are going to be more than road bumps; they're probably going to be severe car crashes at some point. But at the same time, I think we have to be hopeful. In general, we believe in humanity; I think we have to believe in each other. AI is a reflection of humanity: it has some of the worst aspects of all of us, but at the same time it has some of the things that we cherish and love and hold dearly as humans as well. So I'm mostly positive. I think it'll take us somewhere in education that we've never been able to get to before. But we have to do this in an educated and thoughtful manner, and we have to do it together.



Blue:


I love that. Well, Dr. Alec, thank you so much for joining us on the show. You really have pulled me in: the idea that I can sit with my kid now and create a story around them and a dragon and all the things sounds super fun, so I will be exploring it more for sure. So yeah, thank you so much for joining us.


Alec:


Thanks for having me. It was a pleasure.


Blue:


Thanks for joining us on The 21st Century Teacher and we look forward to seeing you next time. Please do subscribe so you don't miss out on the next show. And also don't forget to check out our fantastic online learning platform which is liveit.earth. Thanks again and we'll see you soon.


 