This is The HigherEdTech Podcast, Season 5, Episode 15: Taking a Breath, Recognizing and Responding to Problems and Pitfalls with AI.
Tim Van Norman 0:22
This is The HigherEdTech Podcast. I'm Tim Van Norman, the Instructional Technologist at Irvine Valley College.
Brent Warner 0:27
And I’m Brent Warner, Professor of ESL here at IVC. We both enjoy integrating technology into the classroom, which is what this show is all about. Welcome.
Tim Van Norman 0:36
We’re glad you’re here with us. So how are you doing?
Brent Warner 0:41
Yeah, holding up. Lots of conversations; these AI conversations are not going away anytime soon, you know.
Tim Van Norman 0:48
And I'm hearing from more and more people who are telling me about how they've listened to our podcast, and questions and stuff like that. I'm loving it. I think this is great. It's very pertinent. And like the topic today: in fact, we chose it because of the stuff going on in the news right now. Yeah, yeah, for sure.
Brent Warner 1:08
So I think we'll start with this big picture, at least for me, Tim, of how I want to approach this conversation, which is: there are problems with AI, right? I don't think anybody has ever said there's not; in fact, I think everybody has said there are problems. And we've been pretty pro-AI, right? Hey, let's check these things out, let's see how it works, all those things. But we also want to recognize, hey, there are things going on here that we need to understand. But we also, and here's my point, which I'll probably reiterate a few times throughout here: we all need to take a breath, right? This stuff is moving so fast that if we buy into the kind of outrage culture that's so exacerbated these days, every time we see a problem with AI we are going to, you know, work ourselves into a frenzy, and I don't think anything productive is going to come out of it. So I think we're going to see a lot of problems, and I think we're going to have to deal with those problems, but we need to be willing to take a step back, breathe, and say, okay, this is a problem. Now let's work on fixing it. Let's not just ignore it; let's work on fixing that problem. But also, we can't just get so angry about everything every time we see it, or treat it as though it's the end of the world when we see a problem, even if it's an offensive problem or whatever. We need to look at it, you know, with a level head, and work our way through this. Does that sound fair? Like a fair approach?
Tim Van Norman 2:49
Absolutely. And when we deal with all of these things, when we talk about AI (we could be talking about almost any subject), if something doesn't go the way I expect it to, it's not necessarily appropriate to blow up. And with AI, the moment you hear about a problem, somebody's probably already started fixing it. Yeah. So it's not like this is permanent and it's going to be like this forever. Some of the stuff that we'll talk about, literally, by the time this episode comes out, will be fixed, or will be by the next episode. That's how fast things are moving, and they need to be moving that fast. Yeah.
Brent Warner 3:29
It's funny, because, you know, you showed me, hey, I'm doing this workshop, and here are the things that I'm talking about. And I'm like, oh, I had to cover the same things in a workshop that I did, right? And people have been talking to me on social media about this problem or that problem, or, you know, whatever else it is. So I think there's just a conversation coming up, and we'll get into that a little bit when we talk about Sora a little later. But I think there just happens to be kind of a new wave of concern, or a new wave of, I'll say, fear, for a lot of people. And right now it just happens to hit big, with a lot of things going on in the news, etc. So, Tim, you came up with a list of several things that are problems we need to start addressing. So let's look at them.
Tim Van Norman 4:16
So, first one. And by the way, for those of you who've been listening for a little bit, you've heard us bring up a lot of these, okay? So we're just kind of putting them all together into one, plus bringing this up right now because of things that are going on. Number one: accuracy. It's getting better and better, but now we're starting to realize the historical accuracy isn't necessarily what we expect. Right? Right. And that can be anything from images to text. And all along we've said: check your accuracy when you use it, always make sure that it's right. When we've talked about it in the classroom, we've talked about students verifying that what they're producing is accurate. We're seeing more and more of that, and people are having more and more confidence in inaccurate information. That is one of my biggest fears: the confidence people have in "well, the computer said it, so it must be true."
Brent Warner 5:26
That's right. Yeah. And I'll actually double down on this, because here's a small example. In my project that I'm working on, I was doing a bibliography, and I was doing a citation. And I actually tested it; I'm like, can you make a citation for this book, right? And it's like, yeah, here we go. And it said, whatever it was, the publisher is Routledge. And I'm like, hold on a second, the publisher is not Routledge. I'm looking at the book; it clearly says it's from Longman, right? And I told it, I'm like, hey, it's not from Routledge, it's from Longman. And it said, oh, I'm sorry, let me change that. And it rewrote it, and it still said Routledge. And then I'm like, okay, wait a second, what's going on here? So then I went and looked into it, and the history of this book is that the first edition was published by Routledge, and then I guess it got sold or something; I don't know, I didn't go that deep into it. But the current edition, the newest edition that I had, was from Longman, right? So when I was looking into that information, it was so locked into the idea that it was from Routledge that even telling it which edition I had didn't necessarily get the AI to correct it, right? So that accuracy is something that is going to continue to be a problem for a while, because how do you pick things apart one at a time and know? It's like no one's going in there to look at those things; it's AI being able to process that and recognize what's the newest or most correct information. And is that newest information actually right, or is it just someone out there on a blog saying it, with the wrong information being pulled in? You know, so there are a lot of tricky parts around that.
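(A quick aside for the show notes: one way to take a breath here is to verify citation details against a bibliographic database instead of trusting the model. Below is a minimal sketch of that idea in Python, assuming the requests library and Crossref's public REST API. The title and claimed publisher in the example are hypothetical stand-ins, and since Crossref mostly covers DOI-registered works, a miss only means "check the physical book," not "the citation is wrong.")

```python
import requests

def publisher_on_record(title: str) -> str | None:
    """Ask Crossref's public API for the closest bibliographic match
    and return the publisher it has on record, or None if no match."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0].get("publisher") if items else None

# Hypothetical example: compare what the chatbot claimed to the record.
claimed = "Routledge"                                  # what the model said
found = publisher_on_record("Teaching by Principles")  # stand-in title
if found is None:
    print("No match found; check the book itself.")
elif claimed.lower() not in found.lower():
    print(f"Mismatch: model said {claimed!r}, Crossref says {found!r}")
else:
    print(f"Publisher checks out: {found!r}")
```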
Tim Van Norman 7:05
Well, as well as the fact that you might have a book that was published after the AI was trained. So, you know, it might have been accurate two years ago, or whenever that was. And it's things like that that come in with accuracy. We expect immediacy, you know, that if it was written today and there was an earthquake today, it's going to have information on that earthquake. But that's not necessarily the case. Yeah.
Brent Warner 7:38
And I want to be careful with this too, because I am seeing a lot of teachers talking about how they can recognize if things are coming from AI, and they're saying, oh, well, you get these broad, beautifully written things that don't really say anything specific or quite clear. And it's like, yeah, so that's also a missing accuracy point. But I'm worried about too many teachers going, well, this is what's happened, so now I'll know. And as soon as that changes in the future, that accuracy metric is no longer valid, but we're still building our assessments on tools that have already advanced. I guess, does that make sense?
Tim Van Norman 8:17
Absolutely. Especially because just last week I heard from some professors that, oh, this new AI that I found is incredibly accurate and incredibly detailed. It doesn't have those exact things that you were talking about, where it sounds good but doesn't say anything. They're like, this is dead-on for my subject, you know? So yes, you have to check that. You have to check accuracy and stay on top of it. Exactly. Exactly. Okay, what else do we have? The next one is copyright questions. And that can be anything from citations (you just mentioned the citation that might not be exactly correct), but also just who owned the original material? Yes, the training data. The training data, you know; we've got some software like Firefly from Adobe, where they claim they've used non-copyrighted material to train it, so there's nothing in there. At the end, who then owns the copyright? Is it the AI system? You know, the people who wrote the AI? Is it you, as the person who typed in the prompt? All of those questions come into play. And it really is going to depend. This is going to be an interesting world of what happens with copyright. Yeah. Yeah. Well,
Brent Warner 9:49
there are lawsuits going on all over the place. And actually, there's also software coming out, I can't remember, I think it's called Nightshade or something, that basically poisons the image. So if it gets read by AI, the poison is built into the image; they mix up the pixels somehow, so to the model it becomes a super glitchy-looking thing, which is actually pretty amazing tech, right, to be able to do that. But, you know, there are people out there trying to mess with it, which I think is cool, right? Like, everybody should be playing their part of the game, right? But they're doing that because they want to be able to say, hey, hold on a second, you can't just train for free on my creative work or my style, or whatever else it is. And so there are a lot of people who have issues with anything that gets produced by AI, because they're saying, well, these are all trained on other people's ideas, right, or other people's words, or whatever else it is. Which is partly valid, but also, partly, every word that we say is trained on other people's words, right? That's how we learn languages, and that's how we create our own language. And so that's one of those things where we go, okay, I get it, and, you know, that's what being a human is too. For me, I step back and I say, you know, I wouldn't be able to speak English if I wasn't raised in an English-speaking environment where everybody spoke English and I read English, and all of those ideas come from everything around me, mixed all together. So it is kind of the same. I mean, there are philosophical conversations to be had there.
Tim Van Norman 11:33
Oh, absolutely. And along those lines, there's the copyright, there's the ownership and stuff like that, but there's also what's training it. Are we making sure that whatever it is, it's being trained correctly? And as I've pointed out a number of times to different people, AI doesn't necessarily know the difference between fact and fiction. Yeah. So if it has read Shakespeare, it may think that Elizabethan England was exactly the way Shakespeare wrote it. That's right. Yeah. And, by the way, it's not uncommon for people to think the same thing, so it's making mistakes the way people do. Does that make sense? Yeah, absolutely. And along those lines, also: plagiarism. So is it plagiarism to use AI? What is plagiarism? And I think plagiarism is being redefined right now quite a bit. For sure. For sure. Or I should say, not necessarily redefined, but narrowed in on what it really is.
Brent Warner 12:47
I think that's the conversation, yeah. So everybody's using "plagiarism" to mean academic integrity, right? That's where we're seeing real problems with the conversation: everybody says, oh, you're plagiarizing. And it's like, well, you're not plagiarizing, because these words didn't exist before, right? It is freshly, newly made, but it's ghostwriting, an academic integrity issue. Absolutely. And so I align it more with ghostwriting: if I'm talking about text generation, I would align it more with ghostwriting, which is an academic integrity problem, as compared to plagiarism, copying someone else's ideas directly. Plagiarism, I think, is actually going to pretty much disappear, right? Like, there's no reason to plagiarize when you can do it faster and better by just having this thing make it for you. But we will be having a lot more conversations around academic integrity. That being said, Tim, research came out from Stanford a little while ago (we can put the links in the show notes) saying that the rates of cheating are no higher after ChatGPT than they were before, right? Like, straight-up research on it, saying, one, this is interesting: cheating rates are way higher than you probably ever thought they were. And two, they're not any higher because of ChatGPT, or whatever else it is. And so the cheating conversation still needs to go on, but you don't necessarily need to say that students are more likely to cheat because of this than anything else, right?
Tim Van Norman 14:23
Absolutely. Absolutely. So the next part, and this I think is going to be a little bit longer topic, and that is bias. Yeah. So AI has a bias toward whatever it was trained on. And recently, programmers, we'll call them, have gone in to try to tweak it, to change that bias. That's right. And unfortunately, I think this is more like a pendulum, and it's swung the other way. And so it's become biased the other way.
Brent Warner 15:02
That's right. Yeah, I can give a couple of examples of this. So there was a fairly popular complaint meme going around the internet, this was several months ago. It's like, "draw an American," right? And it said, okay, well, here's an American, and it was a young, handsome white guy in a cowboy hat in front of a giant American flag. And is that representative of what an American is, right? That's the question, and a lot of people got kind of upset about it, and not unfairly, right? Because it was regularly putting out a variation of that same picture. And it's like, oh gosh, how do I get something else? And why is it a man? Why is he white? Why is he, you know, all the questions that came up, which are fair questions. So then programmers went in and said, hold on a second, we don't want to be accused of being biased here. And the algorithm is training on whatever it's training on, right? So again, that data set is a problem, because that data set is what's setting it up for all of this stuff. So then they went in, and then I went and made one, maybe two months after that or something. And it was like, "draw an American," and this time it was a man, but a Black man, handsome-looking, healthy, young, right? Every time. And then I made it four times, and of the four times, three of them were a Black man and one was a white man. And I was like, okay, and where's all the other representation inside of that, right? So then you get into this interesting question. You say, okay, it's being forced to try for better representation, but now what does that representation mean? Like, 75% of Americans are young Black men and 25% are young white men? And where's the representation of women? So all these questions come in, and it's a question of, like, are they trying to put their thumb on the scale? And if so, how do you balance that out? Because there are so many different directions to move in. This is a real one; you have to be careful when you're talking about it. I hope I'm being clear in what I'm trying to communicate here, which is: who's making the decision on what representation, what bias, is the right balance of appropriate bias, right? And who's in the room when they're having those conversations at the company, for example? I know for a fact there are not a lot of women in the room, or among the ones coding these things, and that's its own set of problems, right? So who is actually making these algorithms? That's part of the conversation as well. There are so many things to this, and it's really easy to get upset about. And this is why I wanted to come back to my first point: it's very easy to get upset about it, but we also need to recognize there are good people out there trying to change some of these things. There are also, you know, not-good people out there trying to make money off of these things, right? So it goes in all sorts of different directions. But we need to be able to say, hold on a second: is this intentional? Or is it something that's going to be worked on and fixed and hopefully taken care of over time? Yes, we should bring it up, and yes, we should show people that there are these problems.
But also, you know, it's not necessarily something we should get totally worked up about every time we see a problem with it. Maybe we should just point it out and go, hey, hold on, let's recognize this, right?
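(Another aside for the show notes: one take-a-breath approach to claims like this is to measure before reacting. Four generations is a tiny sample, so a tally over many runs says far more than one viral screenshot. Here's a rough Python sketch of that kind of informal audit; generate_and_label is a hypothetical stand-in, since in practice you'd call whatever image API you use and have a human rater or a classifier label each output. The random choice below is a placeholder, not real data.)

```python
import random
from collections import Counter

def generate_and_label(prompt: str) -> str:
    """Hypothetical stand-in: in a real audit, generate an image from
    `prompt` with your text-to-image API, then have a human rater or a
    classifier assign a coarse label. Random choice is a placeholder."""
    return random.choice(["white man", "Black man", "woman", "other"])

def estimate_representation(prompt: str, n: int = 100) -> Counter:
    """Tally labels over n samples instead of judging from one run."""
    return Counter(generate_and_label(prompt) for _ in range(n))

counts = estimate_representation("a portrait of an American", n=100)
total = sum(counts.values())
for label, k in counts.most_common():
    print(f"{label}: {k}/{total} ({k / total:.0%})")
```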
Tim Van Norman 18:37
Well, and we also have to be cognizant of the fact that, as humans, we're wrong as well. Yeah. Okay, so ignore the fact that this is AI for a second. If somebody said, describe to me an American, I might start with a white male myself, just because that's the world I live in.
Brent Warner 19:01
Yeah, right. And it's also, what's your background? Where did you grow up, right? What's your own lens and your bias, right? And not necessarily seeing "bias" as a negative word, but just to say, hey, this is the perspective that I default to. And then I want to make changes, I want to be representative, all of those things, but maybe my first impression is this, right? I understand that.
Tim Van Norman 19:25
And if I said, you know, describe 10 Americans, I would probably get a better representation than from one, you know, that type of thing. But the other part to be aware of is, as this is being programmed, we have to be careful of historical versus, I'll call it, aspirational or future. Okay? And the reason I say that is because, in history, if you take a people group and you say, describe this people group for me, that's defined, right? Okay. But if you say, in the future, tell me about Americans, what would Americans look like, you may have a very different view. And I've heard of issues like Nazis. Nazis were very clearly defined, and yet AI right now, as we're talking at this moment (and by the way, Google is saying that they're going to fix this very quickly, because this is a very specific one that came up) is giving all kinds of different races, all kinds of different things, in an image of a Nazi.
Brent Warner 20:46
Yeah. So, like, an Asian couple in Nazi uniforms was coming up, you know. So it's trying to balance these out, balance out the racial perspective (and race is just one aspect, or the gender perspective, or whatever it is, right?), but then totally missing the mark. And, I mean, obviously that's highly offensive and creates problems and all sorts of things. But yes. So what's Google doing with that?
Tim Van Norman 21:14
They actually shut down the image generator in their new Gemini (they went from Bard to Gemini, I'm sorry). The image generator, they shut it down, and they're saying for like three weeks or so, because they realized, wait, you know, we didn't define this well enough. And you know what? That's going to happen. That's exactly what we're going to see in every one of these things that we're talking about. Things are going to get better. By the way, as humans, we have exactly the same thing going on. That's right. Yeah. We're constantly needing to get things tweaked: a little better of this, a little less of that, and stuff like that. So if we're going to humanize AI, we need to realize that, just like a human, we've got to tweak that too.
Brent Warner 22:13
Yeah, what's the quote? Like, you know, you do the best you can, and then when you know better, you do better, right? That's the idea. But there are so many people unwilling to give that kind of grace to a baby technology, right? And so we kind of need to step back and say, okay, hold on a second, let's think about what that means. So yeah, this is a really interesting one; I think it's a good live example. And this all ties in with the next part, which is the fake video generation in Sora. And, Tim, we were looking at this a little bit pre-show. What was your impression of it?
Tim Van Norman 22:55
I was highly impressed. Yeah. Now, I've been impressed for months with the way it's been able to generate realistic-looking, basically, imagery on a background. But the stuff that we were looking at, the background was moving, and much of it looked like it could be real.
Brent Warner 23:17
Yeah. So Sora is not available yet, but this is OpenAI's (the same company as ChatGPT) text-to-video program that they're going to release at some point. And it is so realistic-looking. They've got, like, a drone camera flying over an old Western town, and as people are walking through the town, the camera moves, the people kind of stay in the same place, but it looks like a real, absolutely real thing. And there's the physics involved, where everything stays the right distance, and as the drone camera drops down, the people, you know, there's the parallax movement, right? The people start moving in relationship to the buildings behind them, or whatever else it is. It is unbelievable. And so we are stepping into this world of, hey, hold on, are we going to be able to trust anything that we see on the internet, anything we don't see with our own eyes? What can we trust from here on out? And that is a real concern. We knew this was coming, but it's kind of here now, or it's right on the cusp of being out there in the world, where anybody can go and do this with just the stroke of a few keys. So how do we deal with that?
Tim Van Norman 24:34
That's a good question. I mean, we've had the same thing, as you were pointing out pre-show, with cameras taking pictures. And I'm not even talking about the current ones, where you take a picture on your cell phone and you can get rid of all kinds of stuff, you can change all kinds of things. Ignoring that, going back even, you know, six months, I mean, going back a little further than that, we've always had the ability to do this. Basically, we call it Photoshop now. But when I was taking pictures as a kid, you know, with a camera, you'd crop things out, you'd cut the picture, you'd do whatever; you just had a lot of different things that you could do, right? And that was before Photoshop.
Brent Warner 25:23
Yeah, well, it's the way that you manipulate the framing, or the way that you put something in front of something else. Or, you know, even in the original days of photography, they would have such a long exposure that they could actually move stuff in and out of the background, so it disappeared from the picture even though it was there during the exposure, for example, right? And there's actually a really interesting article, and I'll put it in the show notes, where they talk about this whole concept of, we only thought of photography as being trustable, you know, in the 20th century at some point, right? So, like, oh, this is what we believe, and we understand this to be mostly true. But we still kind of question it. So, for example, we have forever looked at pictures of people on covers of magazines and said, oh, well, they're Photoshopped, right? That's not really what they look like, or whatever else it is. And so we may be stepping into that zone again, where we go, okay, hold on, everything that we look at is going to be questioned first. We're not going to take it as truth, or we shouldn't take it as truth, and only when it's verified do we trust that we are seeing what we're seeing. But I don't know that our brains are actually properly developed for doing that on the fly; we have to apply critical thinking every time.
Tim Van Norman 26:46
Well, and we've just been talking about video and images, but it's the same thing with fake news. Again, we've had fake news forever, so it's not new; it's just sometimes harder to identify now. Yeah, especially because, I mean, what's the comment that we've heard for a year or so? That ChatGPT is absolutely positive, but it's wrong. You know, it sounds guaranteed to be right, but it's wrong. And you can't tell, just because something's confident. Whereas that was one of the indicators, you know: when somebody's writing something and they're a little iffy, it raises red flags. This is always confident.
Brent Warner 27:46
Yeah, and this is, I think, where we kind of wrap things up here, Tim. But this is the part where we're saying, well, hold on a second, how are we having these conversations in school, right? Even if we're not necessarily worried about our students using it, we do need to open it up. It is our responsibility as educators to have these conversations with students, with colleagues, to understand how this is impacting the future of learning, right? How it impacts what we're trying to get done in our classes, what our institutions are actually trying to achieve with students coming through our programs, etc. And so there are just so many parts that it is so hard to stay on top of. So again, I'm going to step back one more time and encourage people to recognize that we're in a moment, and it is okay to be in that moment.
Tim Van Norman 28:35
Well, and I also want to point people back to Episode 12 of Season 5 here, just a few episodes ago, where we have an article with a little graphic on whether or not it's a good idea to use ChatGPT. For instance: does it matter if the output is true?
Brent Warner 28:59
Yeah, no, it’s still relevant.
Tim Van Norman 29:00
It's very relevant. And it really comes down to the whole hierarchy there. There's only one case where it says it's safe, and that's if you don't care about the outcome. That's right. And the rest of it is maybe possible, but for the most part, it's unsafe. And that's really the way things are going right now. I guess I look at it as: if I'm producing the work, I am responsible for the accuracy and completeness of that work. And that's really what it boils down to. We each have to take our own responsibility for what we're producing.
Brent Warner 29:48
Yeah, yeah, for sure. And ideally, you're going to want to know that the people you're working with, or whose work you're reading or interacting with, are responsible people who are trying to communicate their ideas clearly to you, right? So whether they're using AI to do that or not, they're the ones who are checking it and putting out that final product. And so that's where we start getting into how the way we interact with people is important. The connection between us as teachers and students is important, right? All of these things matter as we're stepping into a different way of interacting with content in the world.
Tim Van Norman 30:26
Absolutely. Thank you for listening today. In this episode, we took a look at recognizing and responding to problems and pitfalls with AI. For more information about this show, please visit our website at TheHigherEdTechPodcast.com. There you'll find our podcasts and links to the information we've covered.
Brent Warner 30:47
As always, we do want your feedback, so please go to TheHigherEdTechPodcast.com and let us know your thoughts. And if you have ideas for future shows, there's a link over there where you can give us your topic ideas.
Tim Van Norman 30:58
For everyone at IVC that's listening: if you need help with technology questions, please contact IVC Technology Support. If you have questions about technology in your classroom, please stop by the IVC Training Center in A322, or contact me, Tim Van Norman, at tvannorman@ivc.edu.
Brent Warner 31:17
And if you want to reach out to me about the show, you can find me on LinkedIn at Brent G Warner.
Tim Van Norman 31:22
I’m Tim Van Norman,
Brent Warner 31:24
And I'm Brent Warner, and we hope this episode has helped you on the road from possibility to actuality. Take care, everybody.
Are you navigating the complexities of AI in your classroom or institution? In this thought-provoking episode of “The HigherEdTech Podcast,” Tim & Brent discuss the rapid evolution of AI and its implications. Join in as we unpack the necessity of a balanced approach to AI’s opportunities and challenges, emphasizing the importance of critical engagement over reactive responses.
- Navigating AI’s Challenges: Strategies for educators to identify and address AI-related issues in academic settings.
- Promoting Responsible Use: Insights into fostering a culture of accountability and ethical AI use among educators and students.
- Adapting to Change: Practical advice for staying informed and agile in the fast-evolving landscape of educational technology.
- Future-Proofing Education: Exploring the potential of AI to enhance learning experiences while preparing for its pitfalls.
Tune in to gain valuable perspectives on harnessing the power of AI responsibly in higher education.