What does it mean for OpenAI to embed their AI right into Canvas? We’re discussing the implications and the questions that came up when this announcement dropped – right during most teachers’ summer break.
What does it mean for OpenAI to embed AI learning experiences within Canvas? We’ll discuss it right now. Welcome back to the HigherEdTech Podcast, Season Seven, Episode One.
Tim Van Norman 0:20
Welcome to today’s HigherEdTech Podcast. I’m Tim Van Norman, formerly the Instructional Technologist at Irvine Valley College. I’m now the Interim Assistant Director of Technology Services at Irvine Valley College and Adjunct Professor of Business at Cypress College. And
Brent Warner 0:40
I’m Brent Warner, still Professor of ESL here at Irvine Valley College. We both enjoy integrating technology into the classroom, which is what this show is all about. So Tim, before we get into anything, we have to talk about what you just said. So briefly: you got a job change? A promotion?
Tim Van Norman 0:58
Yes, a promotion. I wound up going on vacation for about a month. I took most of the month of June off, and literally the day before I left, I had an interview. We’ve got a new position at Irvine Valley College that is above Technology Support and EdTech in the Technology Services area, and so I’m the Assistant Director of Technology Services in charge of that. Yeah, basically I report to myself.
Brent Warner 1:33
Yay! Sorry my single voice doesn’t sound like the roaring crowds that it should be. But, but very cool. Congratulations. I know that people are happy, and this is something you and I have talked about for years. Really, it’s like, we need to kind of build it this way a little bit more. And so I’m very happy for you and for the school, because it’ll be a great level of improvement for everybody.
Tim Van Norman 1:57
Yeah, it’s been an interesting couple of months as I’ve learned my new job and created the new job and all of those things, and it meant I wasn’t as up to speed as I wanted to be on a couple of items. But I’ve been having fun. It’s been a great summer. So Brent, did you do anything this summer?
Brent Warner 2:18
I did take some time off. I think we mentioned it before: my book came out. I wrote a book called EdTech for Multilingual Learners, which came out from ISTE. I also have a companion online piece for ESL teachers called the ELT Toolkit, which is through TESOL. Maybe there are some ESL people listening, but that’s not the main focus of the show; of course, I’ve got another show that talks a lot about that type of stuff. That was kind of my big thing, and there was so much work getting that all going that I kind of turned my brain off for most of July. So you got your vacation time, I got mine. And then it’s like, oh my gosh, all this stuff is still happening, and I’ve got to get my brain back into it. So Tim, I think today we’re trying to talk as much as possible about one of these big shifts that’s going to affect everybody on our campus and up and down California, at least, or anybody working with Canvas.
Tim Van Norman 3:23
So shall we just dig into it?
Brent Warner 3:26
Yeah, let’s jump over and start the conversation.
Tim Van Norman 3:32
So at InstructureCon, Canvas announced, and this has been coming, there’s been stuff like this going on for a year, but they announced a really big integration: Instructure and OpenAI announced a global partnership about embedding AI learning experiences. That, by the way, is the title of an article that we’re going to have in the show notes. Please hit our show notes; there are going to be a lot of links in there about today. Basically, OpenAI and Instructure are working together to, in their words, embed AI learning experiences in Canvas. And that has a lot of people excited, and probably more people concerned.
Brent Warner 4:21
Yeah, that’s a nice way to put it. So here’s the thing with this whole announcement: we’ll try to dig into some parts of it, but we just want people to be aware of the conversation, at least. It was a bit of a surprise announcement to the CVC, to the California Community Colleges up and down the state. Everybody’s like, wait a second, what? They made this big, huge announcement but didn’t prep anybody who are major clients for the conversation. So part of that is like, wait, well, what does this mean? Now, Tim, the way you and I have been talking about AI stuff, it’s like, yeah, at some point this type of thing is inevitable, whether it’s going to be OpenAI doing it or some other company being the front face for this conversation. And there’s lots and lots to dig into in terms of, well, what’s the motivation here, all these types of things. I think everybody has very legitimate concerns, and there are some people who are legitimately excited on certain levels. But it did take people by surprise, in the sense that they did not really prepare people for this announcement. I think if you’ve been watching the AI conversation, you recognized there was going to be more and more AI integration, but I didn’t expect it to be here when I came back this semester. I think that’s going to be a trick for a lot of people.
Tim Van Norman 5:50
Well, and part of the other problem is that nobody really knows yet what it means. The articles that we read are full of a lot of marketing fluff, and to me that’s one of the worst things they can do. The only thing really worse than that is just to push it and make everybody do it and not tell us about it, which they did last summer with some things. So I’m grateful they told us something, but we don’t know what this really means, other than the couple of things that they’ve already turned on.
Brent Warner 6:32
Well, hold on, we’ll get to those in just a second. But just to follow up on your term “marketing fluff,” let me read some of this fluff here, because I’m like, what does that even mean? This is on Instructure’s actual page, the one that says “Instructure and OpenAI Announce Global Partnership.” It says: “Instructure’s strategic approach to AI emphasizes the enhancement of connections within an educational ecosystem comprising over 1,100 edtech partners and leading LLM providers.” Well, I’m like, do you know what that means, Tim?
Tim Van Norman 7:10
Fluffy. There are a lot of different things that could mean, and that’s exactly my point. They didn’t give us any examples. They didn’t give us any key takeaways or anything like that. You read that, and the nicest thing you can do is say, so what? What does that mean? But you can also read it and go, I don’t want that; that doesn’t work for me. So I don’t read that and go, oh, here’s a huge positive. I only go from questioning to negative.
Brent Warner 7:48
Well, and I think that’s fair. I think we should be skeptical. I will say, I’ve enjoyed a lot of AI. I’ve really wanted to try using things, and some parts have really scared me in the past, but I think it’s important to maintain that pragmatism on both sides, where you’re saying, hey, what’s going to work and what’s not, right? And that’s hard to do. A lot of times we get into feelings, and me too, I get into feelings on one side or the other. People always think I’m all pro-AI, but that’s not always the case, as you know from me talking about these things, Tim. It’s like, okay, let’s take a look at what we can be optimistic about. The thing that concerns me is, I lean personally towards the feeling that OpenAI might not be the most ethical company. And I want to be very careful about how we talk about this, because who knows, some parts are up in the air. But what happens to our information when it goes inside of here? Are these things going to continue to be FERPA compliant or HIPAA compliant? When they’re tying these things in, they can kind of creep and seep into all sorts of corners of our information, especially with these agents that can just log in and do the work for you. By the way, we haven’t even talked about that, and I don’t think we’ll be able to on this episode. But OpenAI has now made these study pals, and they’ve made these agents, and you can give it your information and it will go in and log in and do different things on your behalf inside of websites. And that includes Canvas. So how does that integrate well? Or how doesn’t it?
Or, how are they protecting the process of learning, which is a process of struggle with information? When we don’t struggle with information, like with AI stuff, then we have to set up what are now considered artificial boundaries around it, because you could just get the information. So now we need to start talking about, well, where do we need to set up speed bumps or roadblocks to allow for the cognitive, critical thinking that needs to be there for actual learning to exist? It’s a lot to talk about, but I just want it to be out in the air. I don’t think we’ll get to all those points right now, but I want people to start thinking about this, because it’s on all of us to start figuring out how it’s going to work for us.
Tim Van Norman 10:25
Absolutely. And we don’t want to come across as anti-AI. If you’ve listened to the last however many years of our podcast, you know that we’re not negative on AI. But we also don’t want to come across, and I hope that theme has come through too, as gung-ho on everything AI, like AI can do no wrong. No, no. We want to take AI and understand what we’re doing, not have AI just do everything for us. And that is the question that comes up: what is going to happen with this? So the state of California, the CVC, the California Virtual College, is an institutional piece that all of the California Community Colleges work together in. They’ve come up with a lot of questions as well, and some answers, but not clear answers. And it’s not because they’re trying to be obtuse or anything; it’s because we don’t know the answers. So they’re asking good questions, but nobody has the answer, right? We are
Brent Warner 11:49
all like, hey, you’re asking these questions, and we don’t know the answers either. And this is one of my hold-back points around ChatGPT: it’s like, well, what’s going to happen? And they’re like, I don’t know, we’re just doing it and seeing what happens. It’s like, well, that’s not a responsible approach, right? And so that is where I’ve seen a lot of educators very concerned, saying you can’t just, well, you can, and that’s what’s happening, the whole move fast and break things. But this can break a lot, and we still need responsible and considerate people making thoughtful decisions in every single place up and down the line. So that’s where it gets a little bit tricky. But Tim, we could talk about that part all day long. We have seen a couple of AI integrations inside of Canvas in the past, so what have we seen so far?
Tim Van Norman 12:44
So, about a year ago, Canvas, Instructure, released two tools. One was called Smart Search. Initially they released it just for faculty, just for the teacher role, and then they released it for teachers and students. This is something that you can enable in your class. It’s in the course navigation, assuming that your institution has turned it on. Basically, it allows you as the teacher to go through and search your class for a word or phrase and find every place that it’s located,
Brent Warner 13:23
which is cool. That is incredibly useful. I love that tool, but it’s also super obvious. Like, why wasn’t this basic search there already? That’s the fundamentals of a basic search in any database ever, right? So,
Tim Van Norman 13:37
right. There were other tools that did something similar, but what they did in this case is they also tried to add AI into it. So if you had a phrase that was similar, it would include that too, and I do appreciate that. It has been really, really good. I love it. I think it’s a really nice tool. For teachers, it will show you the whole class. For students, it only shows what they have access to, so
it doesn’t go in and give all of your
Brent Warner 14:07
information. Your teacher uploaded the answer key as a PDF, and here you go, right. Yeah,
Tim Van Norman 14:12
exactly. And it doesn’t get into certain parts of it. But it’s been around, like I said, about a year, and it’s been really useful. I think it has been a really nice tool that Canvas has done. Another one that they’ve done, and again it’s been out for a little while, is the discussion summary. And where I just talked about how nice Smart Search is, let’s flip to the opposite. This is my personal opinion. Brent, you may not share my opinion, and other people do not share my opinion; I’m sure if everyone did, Canvas wouldn’t have done this. First of all, it’s available to teachers, not students, which I like, but that makes it even less useful. What it does is it takes your discussion and allows you to hit a button, and you get a summary of the discussion, what everybody has said about that topic. Okay, that’s great. For students, I could see that being useful, in that they could see what everybody said about the topic. For teachers, yeah, that’s great, but often when I’m looking at discussions, I’m grading something. I’m looking at an individual student’s response; I’m not really looking for the overall picture of the whole thing. I think I clicked on it one time this last year and went, oh, next. It wasn’t even a cool thing; it was just a, okay, this is kind of useless to me. So it’s not available to students, and you can click on it and see an overview, a summary, of the whole discussion. Like I said, I don’t find it very useful, at least in how I’ve taught. Oh, I
Brent Warner 15:57
mean, I’m with you, Tim. I’ll just say it directly: it’s an absolute waste of time, and it shows so little vision for what AI could possibly do to help people in their learning journeys. It’s embarrassing; what a ridiculous feature. I’m speaking more directly,
Tim Van Norman 16:19
trying to be nice, but, yeah,
Brent Warner 16:21
forget it, right? Be nice about the nice things. But we’re also starting to see the problem: being nice about everything means they think everything’s a good idea, just like we get from our AIs. It’s like, hey, I’m thinking about making a new pudding recipe with hair, and the AI is going to say, that sounds like a delicious idea, put extra hair in all your pudding, it’ll be great. And it’s like, no, we don’t need to reinforce bad ideas all the time. So part of this is just so tricky, because there’s so much, and we don’t know what all the future integrations are going to be. We had the Khan Academy thing, Tim, temporarily, or maybe it’s been active for a while. It’s in there, right? So it’s like, click a button and you can make a lesson plan, or click a button and you can do whatever. So I’m assuming that basically OpenAI and ChatGPT are going to kind of consume that, or in certain parts take over those types of tools, and they’re going to start integrating more and more and more, and that may or may not be good. What I’m really hoping we find out is what level of individual control you, Tim, have as the administrator, which is great if we trust you. I do at our school; I’m not sure that everyone at every school is going to trust that the decisions being made match up with theirs. Then at the granular level, can the teachers go in and turn things on and off? And even more granular: we’ve had a lot of our very smart students coming in, and they have their own problems with AI, and I get what they’re saying. Do they have the right to turn things on and off? To me, that’s one thing I am not seeing any real answer on.
I mean, I’m sure there are bits of information out there. But how much can we control, and how many lines down in granularity are we able to choose how much control we have?
Tim Van Norman 18:25
Absolutely. And so, Khan Academy, the thing about that is it didn’t actually integrate into your class, into your Canvas shell. It didn’t read your Canvas shell to do anything. What we’re seeing from this new Ignite AI, and by the way, some of the stuff we’re looking at is a chatbot that will be included in the class and things like that, which is great, but where is it going to really get its information? It’s going to come out of your course. So it is reading your course, if it’s turned on. And so, yes, we want to make sure that the privacy, by the way, is also there. Does that information from my course bleed into your course?
Brent Warner 19:14
Well, not only that. So yes, 100%, we want to make sure all those things are there. But we also know that currently, and we can mention this briefly, we don’t get very political very often here, Tim, but the current administration is saying to ChatGPT, and ChatGPT is agreeing, that every single chat needs to be saved and needs to be accessible for subpoena later on. So there goes privacy, at least in the public ChatGPT. And when the government is saying, hey, we’re holding your funding based on whether or not you agree or make these things possible, then we’ve got a whole other set of problems. Because even if it’s not, you know, political dissidents or whatever else, maybe it’s just a, hey, how well is this student doing? Are we going to be offering them jobs in the future? And it’s like, well, what if you had a struggling semester? All the individual parts of things are then possibly going to be stripped down to numbers. Or it’s just a little AI judge, without a human ever looking at it, that gives the green thumbs up or red thumbs down, and that’s all we ever see or know about the process. Those are things we don’t want either. We want to tie in more humanity. We want people looking at each other and understanding where we’re coming from. And I get worried, one, that privacy, tons and tons of stuff around privacy, could possibly be violated here, or be unclear. And then also things like, what’s the bias going into this actual AI as it integrates? If you’re teaching an earth science class, and we have an administration that says things like, hey, don’t teach about climate change, that’s not a thing that should be put into the AI bits.
It’s like, well, okay, so where are we standing here? And Tim, pre-show you brought up a good point about the other side of that too.
Tim Van Norman 21:18
Right, right. Yeah, even if you agree with what they’re saying now, understand that we don’t just have one political party and one way of thinking. One of the things that makes our country so strong is the diversity of thought. So if you agree with what’s going on right now, you will disagree later. Something is going to happen, and you will disagree. And seldom do we get to a point where everybody becomes happy. I’m saying seldom because, even though I can’t think of a time, I’m hoping there’s one in the past; but looking at history, we never seem to get to the point where everybody’s happy about something. So from that perspective alone, I find it very dangerous. And by the way, I know now we’re sounding more like we’re anti-AI. I love AI. I think AI can be a really useful tool. But it goes back to what we’ve talked about all along: you need to critically think about and evaluate the information coming out of AI. You need to take ownership of whatever you’re producing using AI as a tool. We’ve talked about that for two years. One of the things that we’d like to see in this, and I’m speaking specifically for myself, is an ability for faculty to see what the AI is saying and say, oh no, that was wrong, or oh yes, that was right. Now, I guarantee you most people won’t do it, but it would be nice to have that option. We’ll see what we wind up getting.
Brent Warner 23:04
Yeah. And so I think there are a number of questions that we should all be pushing all the time, and not just when we’re unhappy about it, but when we’re happy about it too. Just like you’re saying, Tim, eventually things are going to change, and what’s going to continue to be a solid approach to this, regardless of what agenda is being pushed on one side or another? So I think bias is hugely important. We have to do our best to say, hey, where do we recognize the biases? Are they intentional, or are they being responded to, all those kinds of things. Privacy, huge. FERPA specifically, Tim, I still struggle with this, because it’s like, how do we know what parts of our Canvas shell this thing is getting and scooping up and using and keeping, and then using in the process to give us feedback? That is a concern to me. And this is actually where I still lean more towards the small language models you and I have talked about a little bit: hey, what if we implemented something where we control the AI, and it’s on one of our servers, and we are the ones who choose what it’s trained on? Yeah,
Tim Van Norman 24:24
agreed. And that’s what we’ve got to see. We do need to see what happens with this: where they integrate it, how they integrate it, what guidelines and guardrails they put in place. But it’s going to be interesting, at least.
Brent Warner 24:39
Yeah, yeah, for sure. And so, Tim, hopefully we’re leaving people this time around with more questions than answers. Normally we ideally go the other way around, but it’s just unclear. My hope is that people out there are going, okay, hold on a second, I wasn’t aware of this because they sprung it all on us over the summer, and I was in vacation mode and checked out, so I don’t know what’s going on. And then all of a sudden, hey, this and this and this has happened. Okay, well, what do we do about that? And again, it can’t be any one person’s choice on campus. You and I are on the AI Task Force, and it can’t be the task force’s decision alone. We’re trying to open the conversations, we’re trying to make sure people are talking about these things, but it can’t just be, well, I’m going to wait until someone figures it out on my behalf. No, we are all responsible for broaching this conversation and trying to figure out what we’re going to do next.
Tim Van Norman 25:42
Absolutely.
Brent Warner 25:45
Alright. So, an unsatisfactory ending,
Tim Van Norman 25:49
but a realistic ending. I think we’re leaving questions open on purpose. I’m hoping that people will go and do some research on your own. Think about it. Think about what this means to you. And one of the keys, we were talking about it beforehand, one of the things that I really want to get across is that we can’t be the only source of information for anybody, and we don’t try to be. We want to give you a place to start the conversation and start the investigation, not to finish it,
Brent Warner 26:26
which is all to say, we will definitely be talking about this more as we discover what’s going on. I mean, I think our whole next season is kind of taken care of,
Tim Van Norman 26:36
but we’ll have some other stuff too. But
Brent Warner 26:39
yes, yeah. So please send us your questions. If you say, hey, we want you to talk about this, we’re trying to figure these things out. If you’ve figured out solutions on your campuses and you want to collaborate with us, please reach out and talk to us. We’re always open to those types of things. Tim, you and I have done that quite a few times in the past, so let’s keep working together. We are the ones shaping the futures of our institutions, so let’s make the right decisions as we do it.
Tim Van Norman 27:07
Absolutely. Thank you for listening today. For more information about this show, please visit our website at TheHigherEdTechPodcast.com.
Brent Warner 27:19
As always, we do want your feedback, so please go to TheHigherEdTechPodcast.com and let us know your thoughts. For
Tim Van Norman 27:25
everyone at IVC that’s listening: if you need help with technology questions, please contact IVC Technical Support. And if anybody has questions about technology in your classroom, I’d love to be involved in that conversation. Contact me, Tim Van Norman, at tvannorman@ivc.edu.
Brent Warner 27:43
And if you want to reach out to me about the show, you can find me on LinkedIn at @BrentGWarner.
Tim Van Norman 27:49
I’m Tim Van Norman
Brent Warner 27:51
and I’m Brent Warner, and we hope this episode has helped you on the road from possibility to actuality. Take care, everybody.
Resources
- Instructure Press Release: Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences within Canvas
- Leon Furze: OpenAI has come for Education
- Instructure: Ignite AI