What’s happening with AI and higher ed? We have a conversation with Playlab. This is The HigherEdTech Podcast, Season 6, Episode 18.
Tim Van Norman 0:21
Welcome to today’s HigherEdTech podcast. I’m Tim Van Norman, the Instructional Technologist at Irvine Valley College and Adjunct Professor of Business at Cypress College.
Brent Warner 0:30
And I’m Brent Warner, Professor of ESL here at IVC. We both enjoy integrating technology into the classroom, which is what this show is all about.
Tim Van Norman 0:39
Welcome. We’re glad you’re here with us. So, last month of the semester; yeah, we’re here wrapping things up. We’ve made it this far, and we’re in our last few episodes. We’ve talked about Playlab a number of times, and I’m excited. A couple of episodes ago, we talked about the AI conference we were at in San Diego, and literally today I was talking to somebody who was talking about it. That’s where we actually got to meet face to face; I got to meet Eric and some of his team. So let’s talk about Playlab and AI.
Brent Warner 1:23
Eric, welcome to the show. So we’ve got Eric Lavin. Eric is in charge of partner success at Playlab. So hi, Eric, you’re already jumping in! How are you?
Eric Lavin 1:32
Hi, Brent, Hi Tim. Thanks very much for having me. Very excited to be here.
Brent Warner 1:37
Yeah, we’re glad to have you here. As Tim said, he’s spent a lot of time kind of jumping around, seeing what you guys are up to, and he’s been having a lot of conversations with you. I’ve been playing with Playlab in my classroom. And like Tim mentioned, we’ve had several episodes where we at least mentioned it, and I think we did one a little more in depth earlier on. So we’re glad to have you here. Let’s talk Playlab.
Eric Lavin 2:04
Let’s talk Playlab. Where should we jump in? Should we jump into your classroom, or do you want me to start at the very, very high level?
Brent Warner 2:12
Yeah, let’s start big. Tim, you can go with that. Start big, and then we can focus in.
Eric Lavin 2:17
I really want to know what’s going on in the classroom, Brent.
Brent Warner 2:19
Oh, yeah, we’ll talk about it for sure. We won’t skip it.
Eric Lavin 2:22
Okay, cool.
Brent Warner 2:24
So let’s start big picture. What is Playlab? Let’s make sure that everybody knows what we’re talking about here.
Eric Lavin 2:28
Okay, so Playlab is... our mission, our ambition, is to be a safe and trusted space for educators to learn about AI, to play with AI, to build with AI, and ultimately to use AI. That’s the mission, that’s the vision, the ambition. We view this as a very long project, and we’re at the very, very beginning of it. And there are really two pieces to what we do. One is the school-safe, higher-ed-safe software, and the other is hands-on professional development. And so there are lots of different use cases, because it’s really a canvas for people to build with the new AI technologies, integrate them into their workflows and their context, and ultimately solve their problems. So I’m excited to get back to your classroom soon enough and understand more about what you’re doing, and these other examples we’re going to talk about.
Tim Van Norman 3:28
Thank you. You set a nice high level; you were right, that was very high level. So let’s get into a few specifics. One of the things I find Playlab great for is when I have an idea and I want to try something out: how do I get there? And to me, you guys have designed Playlab such that I can start from scratch, but I’m really not starting from scratch, and I can build something in 15 to 20 minutes to at least try something out.
Eric Lavin 3:58
Yeah, I mean, you can. The thing is, you can get maybe 60% or 70% of the way to where you want to go in under even a minute, especially for people who have seen the inside of Playlab at work. I’m not going to screen share, so people aren’t going to see this, but when you get into Playlab and you’re building a new app, you see Mad Libs, and maybe you get a blast from the past. Those are kind of the scaffolds and prompts, and they basically ask: what is the job to be done? What do you want the AI to do in this context? And you basically build a set of instructions. It’s like sending off a little bit of yourself as an agent that’s out there and allows this job to be run.
Brent Warner 4:46
I’m going to clarify just a little bit of the language around this, for teachers who are listening and thinking, I haven’t logged into this, I don’t know exactly what it looks like. Essentially, as a teacher, you log into Playlab and you can build a bot on the back end with plain English, right? You don’t need anything special. I mean, the prompt engineering stuff probably makes it a little stronger as you get better, but you can still use very plain English to say: hey, first I want it to do this, then I want it to do this, then I want it to do this, right? And when you’ve built that on the back end, you send the link to students, or whoever the user is. They’re going to see a regular, chatbot-style thing: just a blank screen with a bar at the bottom. They can type in whatever they want, but now whatever they’re typing is interacting with the setup you gave it on the back end. So if you set up something really easy, say: hey, help me come up with conversation strategies for talking to a mechanic (this is my ESL setting, right?), and please use sentence frames, chop out the important vocabulary, and ask me to fill it in when I come in with my part of the conversation. Then students can have that kind of AI conversation with the mechanic, learn how to negotiate what they’re trying to talk about, and learn the vocabulary they need. And as the teacher (this is coming along; it’s been building a little bit) there are ways for me to go in and see the conversations the students are having, too. So it’s not just that they’re off doing something I don’t know about; I have a place to go and watch their work.
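For readers who want to picture what that back-end/front-end split amounts to, here is a minimal sketch of the pattern Brent describes: the builder’s plain-English instructions ride along invisibly with every student message. Playlab itself requires no code; the OpenAI Python SDK, the model name, and the mechanic scenario below are illustrative stand-ins, not Playlab’s actual implementation.

```python
# Sketch of the "instructions on the back end, chat on the front end"
# pattern. The SDK, model, and prompt text are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Written once by the teacher, in plain English, on the "back end".
TEACHER_INSTRUCTIONS = (
    "You are a conversation partner playing an auto mechanic for an "
    "ESL class. Use sentence frames, blank out the key vocabulary, "
    "and ask the student to fill it in before you continue."
)

# The running conversation; the instructions sit at the top throughout.
messages = [{"role": "system", "content": TEACHER_INSTRUCTIONS}]

def student_turn(text: str) -> str:
    """One student message in, one in-character reply out."""
    messages.append({"role": "user", "content": text})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep history
    return answer

print(student_turn("My car makes a strange noise when I brake."))
```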
Eric Lavin 6:33
Yeah, that’s a great explanation. I mean, it’s the instructions, as you described, and also the references. To the extent that you want to upload some type of information (in the example you just gave, maybe a rubric) to make sure they interact with it in the way you wanted, you can also add references. But yeah, you explained it very well.
Tim Van Norman 7:00
I love the fact that you have the prompt part at the top, but then you also have the guidelines part underneath it. And again, this is all built in, so that when you start with a blank one, it’s got the blanks in place for you to get going. You can say: only use the material I provide, or answer this way, or be friendly, or whatever it is you’re trying to do, and be really specific about it. So, for instance, I created one that was a chat with Ernest Hemingway, and it probably took me three or four minutes to do. And sure enough, when you ask it questions, it sounds like Ernest Hemingway might answer them that way.
Eric Lavin 7:44
What did you upload? Any references?
Tim Van Norman 7:48
I didn’t do anything. Just: "You are Ernest Hemingway; answer questions as if you’re Ernest Hemingway." I did not reference anything at all, and I was surprised at how well it did, very, very quickly.
Brent Warner 8:06
I’ve done the same thing. I found that it works really well with well-known resources. So I’ve done examples with Fahrenheit 451 quite a lot. That’s one of the books we’ve read regularly, at least in my previous class, so I’m deeply familiar with it. And I set up these activities where students interact with characters from the book. They pretend they’re one character, and the chatbot pretends it’s another character. And the thing that got me and my students the first time we did this was the character of Faber, who isn’t the main character in the book. He’s the professor, you know, who has this history. But it really captured his tone, right? It’s like the AI recognizes his cadence and the way he puts his words together, all of these things. It was really powerful. And then I had to stop and say: hey guys, remember, this is not what Ray Bradbury intended; this is for us to have a deeper conversation around understanding characters, motivation, all of these things. But it was so strong that I had to step away from that and go: hey, let’s have real conversations about understanding what AI is. Which, Eric, I think this question is for you: is AI important?
Eric Lavin 9:28
Wow, that’s so interesting. Wait, before I answer that: for that app, was it one app that had all the different characters in it, or was each app a different character?
Brent Warner 9:38
Oh, no, no. The prompt is very light. I set it up blank, and I said: you ask us, the students, who we are, and you ask who you are. So the students put in the parameters. I mean, I set up the background, but then it asks the questions of who we’re interacting with.
Eric Lavin 10:01
So the question is, why is AI an important technology? Yeah, I think it’s almost self-evident. This technology is very good at a number of things. It seems very smart, and with just a little bit of guidance, a little bit of prompting, it gives us great value. I personally use a whole suite of AI apps in addition to Playlab regularly. I recognize that I’m probably on the early-adopter side of the spectrum, but this technology is here. I mean, AI has been with us for a while, but this new paradigm of large language models, or generative AI, is being used a lot across society, and obviously in education as well. So when I look at it, it’s like: yes, in the year 2025, AI is going to be here, and in 2026 and ’27 it’s going to go deeper and deeper. So it’s really a question of what shape that will take, and which organizations and technologies are really going to influence it. That’s why I’ve chosen to work at Playlab, and I think this is important. Some of the values in the design of Playlab are to be for educators, to be a safe and trusted space for educators first, to really learn about this: Where did this come from? Why is this thing so smart? Why does it know all this stuff without me telling it beforehand? What is it good at? What is it not good at? Where does it have bias, and where doesn’t it? And then, ultimately, how can I use it for my purpose? Technologies, ultimately, should be tools for us. So a lot of the work we do at Playlab is contextualizing what this thing is and where it came from, but also showing how you can put it into a context that works for you, or at least know that it’s out there and what it can do. Ultimately, we want to empower people to build things the way both of you have already talked about, to use it, and to offload the things that computers really should do for us, so that humans can do more human things.
Tim Van Norman 12:15
I like that. So one of the things we hear a lot about when we talk about AI is AI literacy. How do you see AI literacy? Would you give kind of a definition? And then, how do you see Playlab interacting with that and feeding it?
Eric Lavin 12:40
Yeah, well, I really see it on a continuum from literacy to criticality to creatorship. Literacy is step one: just, what is this thing? I think, especially with this technology and this tool, the best way to learn is to do and to explore, but not to use it uncritically. There’s that little phrase at the bottom of all the LLMs, like, "AI can make mistakes; check important information closely." There’s a whole universe of context about why that statement needs to be there, and it’s important to know a fair amount about that and not use these tools uncritically. And I think in education this is particularly urgent, because a lot of the policy is about not allowing tools, but the tools are here. Students use the tools. They use the tools that are on their phones, that they can access, and they’re not using the kind of thoughtful apps that you’re describing, Brent, and that other educators are describing.
Brent Warner 13:54
Right. And I noticed that you dropped "AI" out of some of those phrases, and I think that’s part of the conversation going on. Even when we were talking AI literacy, AI critical thinking, all these things, I’m already seeing people saying, let’s just drop "AI" from that, because it doesn’t matter: it’s still literacy, it’s still critical thinking, it’s still all of these things. So to me, we’re already seeing interesting shifts in how people talk about things, right? I’ve been fascinated watching some of these conversations on LinkedIn, people going back and forth about which terms we should use. But I thought it was interesting, and I like that idea of critical thinking and teachers using things responsibly. I’m going to jump to a little different question, just because I know people listening are going to want to know this. You had mentioned before, Eric, that this is a safe spot: it’s safe for teachers, it’s safe for students. The question I get a lot is: how do we know that an AI tool is safe? Because doesn’t it have to kind of reach out into the model and provide information, and then doesn’t that information have to come back and interact with whatever you’re doing? So how is it something we know is FERPA compliant, or safe, or trustworthy, all those kinds of things?
Eric Lavin 15:16
Okay, so there’s lots to unpack here, but just on safe outputs: well, number one, we are still technically in beta for this very reason, because safety is not something we’ve arrived at. It’s an ambition, but it’s something we probably spend an inordinate amount of engineering resources on: moderation, model outputs. Moderation is one side of it, but evaluation is another side. What do you want this app to do? What is the objective purpose of this app, and is it accomplishing what you want? So, I’m not sure if this counts as safe or unsafe, but the classic example is: hey, I’m your writing tutor; give me the essay you wrote and I’ll give you feedback on it. And the student comes in and says, write me a five-paragraph essay. In Playlab, if you try that, it won’t do it, because it’s built for an educational purpose. And so, we were talking about language, you know: is it "AI literacy" or just "literacy," and what is safety? It’s this all-encompassing thing. But when we talk about building infrastructure for education, it really does mean something, and it starts with the practices of educators and goes from there. There are a set of really important, I’m not even sure if they’re edge cases, but things around moderation and the mental health and well-being of students that we hear a lot about from our community, and that’s another aspect of our safety. You also mentioned FERPA, and there’s the whole alphabet soup of tech stuff (laughter)
Brent Warner 17:00
Go listen to our FERPA episode! (laughter)
Eric Lavin 17:04
So, yeah, you don’t need to trust us on that, because it’s in the terms and conditions. What I always do on demos, because there’s a link you can use to join Playlab, where you fill out the information and then create an account, is point out that if you read every word of the terms and conditions, you will see that we’re on the front foot with data privacy. But nobody reads all the terms and conditions.
Brent Warner 17:27
But Tim reads them!
Eric Lavin 17:31
Tim reads them (laughter)
Tim Van Norman 17:32
Or I have AI do it, yes. And I love what you’re talking about there. But that also allows me to look at this as prototyping, and I’ve mentioned that before: the ability to quickly create something that I can demonstrate to somebody. I don’t have to spend hours and hours and hours on it; I can just do it. I also love the fact that you offer several different language models, not just one, so there’s some variety in there. So, what I saw as I learned about you was the professional learning community, the PLC. That seems to be one of the main things you guys are really looking to do: build in the opportunities for communities of practice, the idea that people can work together and learn from each other. You kind of mentioned that a little bit at the beginning as well. Is this something that’s really tied into your organization? How does that work?
Eric Lavin 18:53
I mean, it’s very much in the DNA. Playlab really started as a platform to run playful AI hackathons. This was like 2022, very early GPT-3 days. And it kind of grew and extended, and we started running the PLC with Relay. That was really the first public-facing work that was done, and it has just really spread through a bunch of the work we do. I think you, Tim, found us through the Relay PLC or the California Community College PLC. Brent, were you in that as well?
Brent Warner 19:36
I was in the same one as Tim. Yeah, yeah.
Eric Lavin 19:39
So I was trying to figure out exactly how Playlab pulled that off, because I joined after the fact. I think a lot of people get flowers for that, including the founders, but also the Relay PLC and people like you who were just taking it around. There’s a woman, Claire Fisher, who I think, Tim, you met, who did a bunch of great work getting that spun up. And so every community college in California had the opportunity to join that PLC, and I think we’re going to be able to do something again next year. That work is so important. At the end of the day, it’s about relationship building and professional learning, helping people identify and talk about their opportunities to practice, and seeing if this tool can be useful for them. The thing I love about Playlab, in truth, is the in-person events. I mean, even the Zoom events, but the in-person rooms where people are building apps and showing each other what’s possible and what it unlocks are just a lot of fun. So that is intended to be a core part of what we do. But we are also a software platform, and so we really want to be able to provide this as infrastructure, and for this work to live in places where it just keeps continuing on its own, not necessarily with a Playlab representative being there. I think I’ll leave it there for now.
Brent Warner 20:59
Awesome. And I love that too. I guess I’ll do a little plug here: this episode is coming out on, I believe, May 1, and on May 6 I’m doing, and this is actually not through you guys, it’s through a different online source, a Playlab exploration with a bunch of teachers. So if anybody’s listening early and wants to sign up for that and see what it looks like to play around on the back end, I’ll be doing that. But I think we want to talk a little bit here, since we’ve got you, about use cases. We can talk about a little bit of how I’m using it and a little bit of how Tim’s using it, but I think we want to see the different levels, right? From your perspective, when you’re working with students, faculty, staff, all these different places, what are you seeing for use cases, where the average user can go: okay, how do I jump into this? What am I going to use it for myself?
Eric Lavin 21:55
Yeah, so there are a lot of use cases, as we’ve kind of discussed, and it kind of spreads out from what the individual is doing. For faculty, I’d say the classic use case that has emerged is the syllabus. I think I saw a bot in one of your workspaces that emerged from a professor at Glendale Community College, Fabiola Torres, I believe. The idea is so simple, it’s so great, and it’s such a great use case, so I’ll talk about it a little bit. It’s basically: you take your syllabus and you turn it into a bot that’s got all the references it needs and all the personality you want it to have. So I’ll leave the faculty use case at that and see if you have thoughts or reactions.
Brent Warner 22:39
Yeah, I did that. Tim, did you do this for your class too?

Tim Van Norman
No, I didn’t do that one. I did a tutor.

Brent Warner
Okay, yeah, I did the syllabus for my class. And then it was about having students integrate it, you know, test it out, see what they can find, right? And then a little bit of that exploration too, where it’s just: start asking questions and see what happens. Then they can have access to that all the time, instead of trying to read through the whole syllabus, which gets longer and longer every year because of all the rules and things we’re supposed to add to it, right? But it’s like, which parts are relevant to you? Are you asking these questions when you need them, to fill in the spots? So it’s great, and it really focuses in. One thing that’s a great feature inside of here is the variability percentage. I dropped that down to 0% for my syllabus one, meaning: you’re only taking data right from my syllabus; you’re not messing around with other possibilities. So it’s a really good example for someone who says, hey, I’m scared of AI (that’s not the right way to say it; I’m anxious about AI, or whatever it is), and I don’t want it to make stuff up. It’s like: okay, this is your one resource in there, set it to 0%, and it’s only pulling information from that. It’s not going to give you other stuff. That has been a really useful one.
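For the technically curious, here is a minimal sketch of what a dial like that plausibly corresponds to under the hood. Mapping Playlab’s variability percentage to a sampling temperature is an assumption, not a confirmed implementation detail, and the OpenAI Python SDK, model name, and syllabus.txt file are stand-ins. Note that a temperature of 0 makes answers more repeatable; it is the instructions that keep them grounded in the syllabus.

```python
# Sketch only: assumes Playlab's "variability" behaves like sampling
# temperature. The SDK, model name, and "syllabus.txt" are hypothetical
# stand-ins for illustration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

with open("syllabus.txt", encoding="utf-8") as f:
    syllabus = f.read()

def ask_syllabus_bot(question: str, temperature: float = 0.0) -> str:
    """temperature=0.0 ~ Playlab's 0% variability: pick the most likely
    tokens every time. The system instruction, not the temperature, is
    what restricts answers to the uploaded syllabus."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=temperature,
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the syllabus below. If the "
                        "answer is not there, say you don't know.\n\n"
                        + syllabus},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

print(ask_syllabus_bot("When is the final exam?"))
```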
Eric Lavin 24:02
Just to tack back to an earlier part of the conversation, I would call that an example of AI literacy. And the best way to get it is to test: what happens if I put this at 70, or 60, or 100, and what happens at zero? Then I can contextualize what that means, and I know what to do for a different app. So, for example, there’s a use case where I want to simulate some durable skill around negotiation or resilience, or some challenging situation. This is an example we see a lot, to simulate critical thinking or to test a skill, and you might want the chatbot to be at 70% or 80% for something like that. Other use cases that come to mind are in the administration space. One kind of subset of a syllabus is: hey, what’s happening this week? Or, what do I need to be thinking about in March, or in whatever period of time? You have these huge, as you know, business process documents. You upload one and just say: hey, be helpful; I’m going to ask questions, so tell me what I need to do and help me complete the task. That’s kind of the third category, so it’s faculty, students, and administration and staff.
Brent Warner 25:17
Love it.
Tim Van Norman 25:18
And along those lines, a couple of things we’ve done staff-wise: translation. Take a website, give the bot a link to it, set 0% variability, but have it translate into whatever language the user wants, and let them ask questions about it. And it’s been amazing for the people who’ve looked at it. You know, I don’t speak anything other than English, so I’ll have somebody come in and just try it, and they’re shocked that with literally a couple of prompts and a website, it’ll answer them in whatever language they want. It’s a really cool translation feature. The other neat thing is that because you’ve got so many different LLMs to use, it’s interesting that some of them are better at translation than others. That’s what I’ve found: as you work with different ones, different strengths show up, right?
Eric Lavin 26:27
That’s one of the next frontiers, and it folds into that whole evaluation piece: which models are pretty good at producing good outputs on which types of tasks. We get very interesting anecdotal data, like what you just shared. But I’d be curious whether Brent has seen this as well, if he’ll sign off on it.
Brent Warner 26:49
Yeah, well, I’ve played with the translation options, you know, trying to switch between different models, all of those things. One example is my teaching assistant, Claire, who’s Korean. Every time I show her Playlab stuff, she’s like, oh my god, and then she goes and plays with it way more than I do, and comes back saying, I did this and this and this and this. And she’s talking about some of the language stuff and all these things. So it’s really fascinating, because the ability to pull these endless resources in and then work with them around your goals is just powerful, right? I think that’s something a lot of people still aren’t quite used to, and it can feel like the Wild West in some ways. But the thing I really like about Playlab, coming back to that safety point, is that, okay, it’s the Wild West, but it’s the Wild West with cap guns. We’re not going to shoot our feet off, or whatever else, because we have that safety built in. I think we’re actually running out of time. These questions are great, and I feel like we could go on forever, but we are going to try to wrap up. Eric, can we talk for just a minute or two about the future? What are you able to share with us about some of the things we might look for coming down the line?
Eric Lavin 28:18
I cannot predict the future. We’re going to work on a bunch of different things on the product, and we are looking for more partners, more R&D partners who want to do this work around AI literacy and criticality, empower their teams to be AI creators, and get AI into the workflow in the manner we’ve been describing here, which is, in short, contextualized to their situation. What can I say is coming in the future? These models are going to get better. We are going to work on UI and UX for our partners: colleges, K-12 schools, and teachers. One thing that actually is on the horizon, Tim, that we’ve talked a little about and need to talk more about, is a really robust Canvas integration. That’s really born of what we were just talking about: building tools for educators. That’s our roadmap, that’s our mission. So that’s coming. What else?
Brent Warner 29:28
Well, how about this: right now it’s text, right? Are we going to see image generation? Are we going to see voice integrations in the future?
Eric Lavin 29:39
Yes, in the future (laughter). So yes. The quick thing on images is just that image generation isn’t yet reliable about avoiding outputs that are, quote-unquote, unsafe. So those features aren’t out yet, but they’re not off the table forever; we’re working on them and figuring out how to integrate them. Voice is at the top of a lot of people’s lists. A lot of people would like to see that in the product, including myself. So yes, that one’s coming, hopefully by the end of next year.
Brent Warner 30:13
Awesome, awesome.
Eric Lavin 30:16
Yeah, and there are more opportunities to engage. Just go to playlab.ai, and we’ve got a great new Learn site, learn.playlab.ai, where you can chat with the interface there and ask any question you have. And we’re constantly uploading new Loom videos on features and cool ways to build things in Playlab.
Brent Warner 30:39
Awesome, love it. So that’s playlab.ai; everybody can go and check it out right now. Eric, thanks so much for joining us today. Like I said, we could probably keep going easily for twice as long with no pauses, but we are going to try to wrap things up, because we know you’re driving into campus right now as you’re listening. Thanks again so much, Eric; we appreciate your time.
Eric Lavin 31:02
All right, thanks, guys. Nice to be with you.
Tim Van Norman 31:07
Thank you for listening today. For more information about this show, please visit our website at thehigheredtechpodcast.com.
Brent Warner 31:14
As always, we do want your feedback, so please go to thehigheredtechpodcast.com and let us know your thoughts.
Tim Van Norman 31:20
For everyone at IVC that’s listening: if you need help with technology questions, please contact IVC technical support. If you have questions about technology in the classroom, please stop by A322 or contact me, Tim Van Norman, at tvannorman@ivc.edu.
Brent Warner 31:34
And if you want to reach out to me about the show, you can find me on LinkedIn at Brent G Warner.
Tim Van Norman 31:39
I’m Tim Van Norman,
Brent Warner 31:41
and I’m Brent Warner, and we hope this episode has helped you on the road from possibility to actuality. Take care, everybody.
Tim and Brent sit down with Eric Lavin at Playlab.ai to talk about what Playlab is, how it can help educators, and the value of using AI to customize your work.